Neural Network Libraries

Overview

Neural Network Libraries is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.

Installation

Installing Neural Network Libraries is easy:

pip install nnabla

This installs the CPU version of Neural Network Libraries. GPU acceleration can be added by installing the CUDA extension with the following command.

pip install nnabla-ext-cuda101

The above command is for CUDA Toolkit version 10.1.

For other versions:
pip install nnabla-ext-cuda100 for CUDA 10.0.
pip install nnabla-ext-cuda90 for CUDA 9.0.
pip install nnabla-ext-cuda80 for CUDA 8.0.

CUDA versions 9.1 and 9.2 are not currently supported.
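
To verify the installation, import the package; a successful import logs an initialization message (a minimal check assuming a standard pip setup, as seen in the logs later on this page):

python -c "import nnabla"
# Prints something like: [nnabla][INFO]: Initializing CPU extension...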

For more details, see the installation section of the documentation.

Building from Source

See Build Manuals.

Running on Docker

For details on running on Docker, see the installation section of the documentation.

Features

Easy, flexible and expressive

The Python API, built on the Neural Network Libraries C++11 core, gives you flexibility and productivity. For example, a two-layer neural network with a classification loss can be defined in the following five lines of code (hyperparameters are enclosed in <>).

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))

Training can be done by:

import nnabla.solvers as S

# Create a solver (parameter updater)
solver = S.Adam(<solver_params>)
solver.set_parameters(nn.get_parameters())

# Training iteration
for n in range(<num_training_iterations>):
    # Setting data from any data source
    x.d = <set data>
    t.d = <set label>
    # Initialize gradients
    solver.zero_grad()
    # Forward and backward execution
    loss.forward()
    loss.backward()
    # Update parameters by computed gradients
    solver.update()
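
For reference, here is the same template with the placeholders filled in with arbitrary example values; the shapes, hidden size, learning rate, iteration count and random dummy data below are illustrative assumptions, not part of the original example:

import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch_size = 64
x = nn.Variable((batch_size, 784))   # e.g. flattened 28x28 images
t = nn.Variable((batch_size, 1))     # integer class labels
h = F.tanh(PF.affine(x, 100, name='affine1'))
y = PF.affine(h, 10, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

for n in range(100):
    # Dummy random data; replace with a real data source.
    x.d = np.random.randn(batch_size, 784)
    t.d = np.random.randint(0, 10, size=(batch_size, 1))
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()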

The dynamic computation graph enables flexible runtime network construction. Neural Network Libraries supports both the static and the dynamic graph paradigm through the same API.

import numpy as np

x.d = <set data>
t.d = <set label>
drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
    h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
    for i in range(<num_stochastic_layers>):
        if drop_depth[i]:
            continue  # Stochastically drop a layer
        h2 = F.relu(PF.convolution(h, <hidden_size>, (3, 3), pad=(1, 1),
                                   name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, <target_size>, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))
# Backward computation (can also be done in dynamically executed graph)
loss.backward()
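
To make the two execution modes concrete, here is a minimal, self-contained sketch contrasting them; F.sin is just an arbitrary function chosen for illustration:

import numpy as np
import nnabla as nn
import nnabla.functions as F

a = nn.Variable.from_numpy_array(np.ones((2, 2)))

# Static: build the graph first, then execute it explicitly.
b = F.sin(a)
b.forward()

# Dynamic: execution happens while the graph is being built.
with nn.auto_forward():
    c = F.sin(a)  # c.d is already computed at this point

print(b.d)
print(c.d)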

Command line utility

Neural Network Libraries provides a command line utility, nnabla_cli, for easier use of the library.

nnabla_cli provides the following functionality:

  • Training, evaluation or inference with NNP files.
  • Dataset and parameter manipulation.
  • File format conversion (see the example below):
    • From ONNX to NNP and from NNP to ONNX.
    • From ONNX or NNP to NNB or C source code.
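
As a rough illustration of the converter, both directions use the same subcommand; the file names here are hypothetical, and nnabla_cli -h gives the authoritative list of subcommands and options:

nnabla_cli convert model.onnx model.nnp
nnabla_cli convert model.nnp model.onnx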

For more details, see the documentation.

Portable and multi-platform

  • The Python API can be used on Linux and Windows
  • Most of the library code is written in C++11, making it deployable to embedded devices

Extensible

  • Easy to add new modules such as neural network operators and optimizers
  • The library allows developers to add specialized implementations (e.g., for FPGA, ...). For example, we provide a CUDA backend as an extension, which speeds up computation on GPUs.

Efficient

  • High speed on a single CUDA GPU
  • Memory optimization engine
  • Multiple GPU support

Documentation

https://nnabla.readthedocs.org

Getting started

  • A number of Jupyter notebook tutorials can be found in the tutorial folder. We recommend starting with by_examples.ipynb for a first working example of Neural Network Libraries and python_api.ipynb for an introduction to the Neural Network Libraries API.

  • We also provide more sophisticated examples in the nnabla-examples repository.

  • C++ API examples are available in examples/cpp.

Contribution guide

Deep learning technology is progressing rapidly, and researchers and developers often want to add their own custom features to a framework. NNabla shines in this respect: the architecture of Neural Network Libraries is clean and quite simple, and new features can be added very easily with the help of our code template generating system. See the following link for details.

License & Notice

Neural Network Libraries is provided under the Apache License, Version 2.0.

It also depends on some open source software packages. For more information, see LICENSES.

Comments
  • import nnabla_ext.cuda - ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory

    Forum,

    I am getting the following error when checking my nnabla setup:

    # check nnabla
    import nnabla
    import nnabla_ext.cuda
    import nnabla.ext_utils as nneu
    import nnabla_ext.cudnn

    Error: ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory

    I used pip install to set up the framework. I also have Anaconda installed.

    Best, Philip

    opened by philtsmith570 35
  • Implement assign function

    Hi, @TE-TakuyaNarihira san.

    I added an F.assign function that behaves like tf.assign.

    This is used for manual assignment or manual variable update.

    Use case 1: manual assignment

    x = nn.Variable((3, 3))
    y = nn.Variable.from_numpy_array(np.random.random((3, 3)))
    assign_op = F.assign(y, x)
    
    x.d = np.random.random((3, 3))
    assign_op.forward()
    print((x.d == y.d).all()) # True
    print((assign_op.d == x.d).all()) # True
    

    Use case 2: manual update

    lr = 1.0
    original = np.random.random((3, 3))
    
    grad = F.constant(1.0, (3, 3))
    y = nn.Variable.from_numpy_array(original)
    train_op = F.assign(y, y + lr * grad)
    
    train_op.forward()
    print((y.d == (original + 1.0)).all()) # True (may be False due to floating-point rounding)
    

    F.assign will be useful for syncing multiple parameters or for implementing manual SGD updates in a static-graph style. During backward, F.assign propagates gradients to the destination Variable.

    TODO

    • [x] add unit testing
    • [x] backward implementation
    release-note-op-layer 
    opened by takuseno 11
  • Implement forward_all function that performs forwarding on multiple variables at once

    Hi, @TE-TakuyaNarihira san.

    I added a new function, forward_all.

    Previously, forwarding multiple outputs that share hidden layers (such as the policy and value heads of an actor-critic model) required multiple forward passes in a static graph. See the example below.

    x = nn.Variable((3, 3))
    h = PF.affine(x, 3, name='hidden')
    y1 = PF.affine(h, 3, name='y1')
    y2 = PF.affine(h, 3, name='y2')
    
    y1.forward()
    y2.forward() # hidden layer was computed twice!!
    

    This is costly for huge architectures.

    Thus, I added the forward_all function.

    x = nn.Variable((3, 3))
    h = PF.affine(x, 3, name='hidden')
    y1 = PF.affine(h, 3, name='y1')
    y2 = PF.affine(h, 3, name='y2')
    
    # compute h only once!
    nn.forward_all([y1, y2])
    

    This function performs forwarding with a shared fclosed state, which prevents shared layers from being visited more than once.

    What do you think of this?

    opened by takuseno 10
  • Implement batch_det function

    Hi, @TE-AkioHayakawa san, @TE-TakuyaNarihira san.

    I've implemented the batch_det function, which computes the determinant of the input array.

    a = nn.Variable((2, 13, 13))
    det = F.batch_det(a) # det.shape == (2,)
    
    nd = np.random.random((2, 13, 13))
    a.d = nd
    det.forward()
    
    assert np.allclose(det.d, np.array(list(map(np.linalg.det, nd))))
    

    batch_det now works correctly with both forward and backward. The CUDA extension is implemented.

    release-note-op-layer 
    opened by takuseno 8
  • Implement OrthogonalInitializer

    Hi, @TE-TakuyaNarihira san, @TE-AkioHayakawa san! #243 is taking a bit of time because my school term started 😢

    Anyway, I implemented OrthogonalInitializer, which is widely available in other DNN libraries and is still an effective method in many domains.

    If you think it would be a good addition to nnabla, I will test it further and remove the WIP sign.

    Thank you!

    release-note-utility 
    opened by takuseno 8
  • Error in running: mpirun -n 4 python multi_device_multi_process_classification.py

    When running multi-GPU training as suggested, the following error occurs:

    comm = C.MultiProcessDataParalellCommunicator(ctx)
      File "communicator.pyx", line 653, in nnabla.communicator.MultiProcessDataParallelCommunicator
    RuntimeError: value error in query
    Failed it != items_.end(): Any of [cudnn:float, cuda:float, cpu:float] could not be found in []

    How to fix it?

    opened by MiZhangWhuer 5
  • Reinforcement learning examples

    Could anybody please post reinforcement learning models such as DQN and A3C? I need some hints and guidance on how to build reinforcement learning RNN models.

    opened by gusdoe 5
  • Implement inverse function

    Hi, @TE-TakuyaNarihira san, @TE-AkioHayakawa san!

    I've implemented the inverse matrix function. It uses Eigen's inverse function on CPU. For the GPU context, using cublas<t>getrfBatched() is under consideration.

    x = nn.Variable((1, 10, 10))
    inverse = F.batch_inv(x)
    
    x.d = np.random.random((1, 10, 10))
    inverse.forward()
    
    assert np.allclose(inverse.d, np.linalg.inv(x.d)) # True
    

    CUDA implementation https://github.com/sony/nnabla-ext-cuda/pull/130

    release-note-op-layer 
    opened by takuseno 4
  • Example DeepLabV3 does not work

    Hi,

    The example at https://nnabla.readthedocs.io/en/latest/python/api/models/semantic_segmentation.html does not work.

    I always get the error: AttributeError: 'NoneType' object has no attribute 'shape'

    2022-04-02 19:54:02,584 [nnabla][INFO]: Initializing CPU extension...
    2022-04-02 19:54:02,956 [nnabla][INFO]: Initializing CUDA extension...
    2022-04-02 19:54:02,987 [nnabla][INFO]: Initializing cuDNN extension...
    Loading C:\Users\Armin/nnabla_data\nnp_models\semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp.
    2022-04-02 19:54:03,093 [nnabla][INFO]: Downloading DeepLabV3-voc-coco-os-8.nnp from https://nnabla.org/pretrained-models/nnp_models/semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp
    2022-04-02 19:54:03,093 [nnabla][INFO]: > C:\Users\Armin/nnabla_data\nnp_models\semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp already exists.
    2022-04-02 19:54:03,093 [nnabla][INFO]: > If you have any issue when using this file,
    2022-04-02 19:54:03,093 [nnabla][INFO]: > manually remove the file and try download again.
    Traceback (most recent call last):
      File "C:/Users/Armin/Desktop/Python/DeepLabv3/main.py", line 21, in <module>
        y = deeplabv3(x)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\models\semantic_segmentation\deeplabv3plus.py", line 138, in __call__
        net = self.nnp.get_network(
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 133, in get_network
        return NnpNetwork(self.network_dict[name], batch_size, callback=callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 45, in __init__
        self.proto_network = proto_network.promote(callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1184, in promote
        return self._patch_by_network_pass(callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1081, in _patch_by_network_pass
        functions = filter_function_by_callback(functions, callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1068, in filter_function_by_callback
        pf = callback._apply_generate_function_by_name(pf)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 295, in _apply_generate_function_by_name
        return self._function_callbacks_by_name[f.name](f)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 187, in _callback
        return callback(v)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\models\semantic_segmentation\deeplabv3plus.py", line 95, in average_pooling_shape
        s = f.inputs[0].variable.shape
    AttributeError: 'NoneType' object has no attribute 'shape'

    Process finished with exit code 1

    opened by Armin234 3
  • Implement clip grad by norm at solver

    Hi, @TE-TakuyaNarihira @TE-AkioHayakawa san.

    I've implemented clip_grad_by_norm in the Solver class, just like weight_decay.

    loss.forward()
    solver.zero_grad()
    loss.backward()
    solver.clip_grad_by_norm(10.0)
    solver.update()
    

    The mathematical formulation is as follows:

    grad = clip_norm * grad / max(clip_norm, l2norm(grad))
    

    If l2norm(grad) is less than the given norm, this function does nothing, since the gradient is then scaled by clip_norm / clip_norm = 1. For example, with clip_norm = 10 and l2norm(grad) = 20, the gradient is scaled by 10 / 20 = 0.5.

    Gradient clipping appears frequently in deep reinforcement learning implementations, so this will be very useful.

    Before working on the CUDA implementation, please give me feedback on the implementation design and its necessity.

    Thank you.

    opened by takuseno 3
  • How to get or use network from nnp

    Hi.

    The Windows version of Neural Network Console provides an h5 file and Python code, while the Cloud version provides just an nnp file. I know the nnp file contains the h5 parameters and the network as protocol buffers, but I don't know how to use the network in the nnp file.

    Usually we can get Python code like the following:

    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF
    
    def network(x, y, test=False):
        # Input:x -> 1,28,28
        # MaxPooling -> 1,14,14
        h = F.max_pooling(x, (2,2), (2,2))
        # Affine -> 100
        h = PF.affine(h, (100,), name='Affine')
        # ReLU
        h = F.relu(h, True)
        # Affine_2 -> 1
        h = PF.affine(h, (1,), name='Affine_2')
        # Sigmoid
        h = F.sigmoid(h)
        # BinaryCrossEntropy
        h = F.binary_cross_entropy(h, y)
        return h
    

    We don't need BinaryCrossEntropy, so we delete it; the network inside the nnp file, however, still has it.

    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF
    from nnabla.utils import nnp_graph
    
    nnp = nnp_graph.NnpLoader('./result.nnp')
    graph = nnp.get_network('MainValidation')
    y = graph.outputs
    print(y)
    

    This code outputs the following:

    {'BinaryCrossEntropy': <Variable((64, 1), need_grad=True) at 0x114f926d8>}
    

    I would like to use the nnp file in Python like this:

    import nnabla as nn
    
    nn.load_parameters('./result.nnp')
    graph = nn.get_network('MainValidation')
    x = graph.inputs['x']
    y = graph.outputs['y']
    y.forward(x, test=True)
    y.d
    

    How can I use the network in the nnp file?

    I know nnabla_cli can do it, but I don't know how to do it.

    Thanks

    opened by goofmint 3
  • Bump pillow from 5.4.1 to 9.3.0 in /doc

    Bumps pillow from 5.4.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
  • cannot import nnabla in `numpy==1.21.4` and `nnabla>=1.29.0`

    • python: 3.8.10
      • installed by pyenv: 2.3.4-14-g093d0b3a
    • I found that I cannot import nnabla in an environment with numpy==1.21.4 and nnabla==1.29.0 or 1.30.0.
    Experiment log
    isara@SPICA:~$ pip list
    Package    Version
    ---------- -------
    pip        22.2.2
    setuptools 65.3.0
    isara@SPICA:~$ pip install numpy==1.21.4
    ...
    ...
    isara@SPICA:~$ pip install nnabla==1.29.0
    ...
    ...
    isara@SPICA:~$ pip list
    Package         Version
    --------------- -------
    boto3           1.24.76
    botocore        1.27.76
    configparser    5.3.0
    contextlib2     21.6.0
    Cython          0.29.32
    h5py            3.7.0
    imageio         2.22.0
    jmespath        1.0.1
    nnabla          1.29.0
    numpy           1.21.4
    Pillow          9.2.0
    pip             22.2.2
    protobuf        3.19.4
    python-dateutil 2.8.2
    PyYAML          6.0
    s3transfer      0.6.0
    scipy           1.9.1
    setuptools      65.3.0
    six             1.16.0
    tqdm            4.64.1
    urllib3         1.26.12
    isara@SPICA:~$ python
    Python 3.8.10 (default, Sep 20 2022, 10:17:21)
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import nnabla
    2022-09-20 10:53:14,554 [nnabla][INFO]: Initializing CPU extension...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/__init__.py", line 32, in <module>
        from .variable import Variable, Context
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/variable.py", line 17, in <module>
        from ._variable import Context
      File "_variable.pyx", line 1, in init nnabla._variable
      File "_nd_array.pyx", line 1, in init nnabla._nd_array
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
    >>> exit()
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "<frozen importlib._bootstrap>", line 991, in _find_and_load
      File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/__init__.py", line 32, in <module>
        from .variable import Variable, Context
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/variable.py", line 17, in <module>
        from ._variable import Context
      File "_variable.pyx", line 1, in init nnabla._variable
      File "_nd_array.pyx", line 1, in init nnabla._nd_array
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
    isara@SPICA:~$
    
    • I got the same error in nnabla==1.30.0.

    • After that, if numpy is upgraded to 1.22.4, I can import nnabla.
    Experiment log
    isara@SPICA:~$ pip list
    Package    Version
    ---------- -------
    pip        22.2.2
    setuptools 65.3.0
    isara@SPICA:~$ pip install numpy==1.22.4
    ...
    ...
    isara@SPICA:~$ pip install nnabla==1.29.0
    ...
    ...
    isara@SPICA:~$ pip list
    Package         Version
    --------------- -------
    boto3           1.24.76
    botocore        1.27.76
    configparser    5.3.0
    contextlib2     21.6.0
    Cython          0.29.32
    h5py            3.7.0
    imageio         2.22.0
    jmespath        1.0.1
    nnabla          1.29.0
    numpy           1.22.4
    Pillow          9.2.0
    pip             22.2.2
    protobuf        3.19.4
    python-dateutil 2.8.2
    PyYAML          6.0
    s3transfer      0.6.0
    scipy           1.9.1
    setuptools      65.3.0
    six             1.16.0
    tqdm            4.64.1
    urllib3         1.26.12
    isara@SPICA:~$ python
    Python 3.8.10 (default, Sep 20 2022, 10:17:21)
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import nnabla
    2022-09-20 10:59:24,297 [nnabla][INFO]: Initializing CPU extension...
    >>> exit()
    isara@SPICA:~$
    
    • I got the same result in nnabla==1.30.0.
    • However, I have not yet tried to find the smallest numpy version that eliminates the error.

    • I would like you to try to reproduce this problem.
    • Then, if there is a problem, please update the required version of numpy.
      • - numpy [required: >=1.20.0, installed: 1.22.4] according to pipdeptree.
    opened by IsaraAomi 3
  • Partially initialized module

    I'm not sure how to approach a fix.

    I'm using a pip install for a Django backend, and I get this message:

    File "C:.venv\lib\site-packages\nnabla_init_.py", line 18, in from . import _init # Must be imported first ImportError: cannot import name 'init' from partially initialized module 'nnabla' (most likely due to a circular import) (C:.venv\lib\site-packages\nnabla_init.py)

    opened by ergodaveh 2
  • Segmentation fault occurs in convolution (group=in_channel) during mixed precision training

    When performing mixed precision training, I get a segmentation fault if I specify the convolution group=in_channel.

    import numpy as np
    import nnabla as nn
    import nnabla.functions as F  
    import nnabla.parametric_functions as PF
    from nnabla.ext_utils import get_extension_context
    
    
    ctx = get_extension_context(
            "cudnn",
            device_id='0',
            type_config='half'
        )
    
    nn.set_default_context(ctx)
    x = nn.Variable((8, 32, 32, 32), need_grad=True)
    x.d = np.random.random(x.shape)
    
    h = PF.convolution(x, 32, (3,3), group=32)
    loss = F.sum(h)
    loss.forward(function_post_hook=lambda f: print(f'forward {f}'))
    loss.backward(function_post_hook=lambda f: print(f'backward {f}'))
    

    The above sample code outputs the following:

    2022-04-20 16:49:26,777 [nnabla][INFO]: Initializing CPU extension...
    2022-04-20 16:49:26,945 [nnabla][INFO]: Initializing CUDA extension...
    2022-04-20 16:49:26,971 [nnabla][INFO]: Initializing cuDNN extension...
    forward ConvolutionCudaCudnn
    forward SumCuda
    backward SumCuda
    Segmentation fault (core dumped)
    

    One additional insight: if I set the channel_last option, as in h = PF.convolution(x, 32, (3,3), group=32, channel_last=True), the segmentation fault does not occur.

    opened by HiromichiKamata 0
Releases: v1.32.0

Owner: Sony Corporation

Related projects
This is a model made out of Neural Network specifically a Convolutional Neural Network model

This is a model made out of Neural Network specifically a Convolutional Neural Network model. This was done with a pre-built dataset from the tensorflow and keras packages. There are other alternative libraries that can be used for this purpose, one of which is the PyTorch library.

null 9 Oct 18, 2022
SparseML is a library for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art sparsification algorithms such as pruning and quantization to any neural network. General, recipe-driven approaches built around these algorithms enable the simplification of creating faster and smaller models for the ML performance community at large.

Neural Magic 1.5k Dec 30, 2022
Collection of in-progress libraries for entity neural networks.

ENN Incubator Collection of in-progress libraries for entity neural networks: Neural Network Architectures for Structured State Entity Gym: Abstractio

null 25 Dec 1, 2022
This repository contains notebook implementations of the following Neural Process variants: Conditional Neural Processes (CNPs), Neural Processes (NPs), Attentive Neural Processes (ANPs).

The Neural Process Family This repository contains notebook implementations of the following Neural Process variants: Conditional Neural Processes (CN

DeepMind 892 Dec 28, 2022
Fast image augmentation library and easy to use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about library: https://www.mdpi.com/2078-2489/11/2/125

Albumentations Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to inc

null 11.4k Jan 9, 2023
A library of extension and helper modules for Python's data analysis and machine learning libraries.

Mlxtend (machine learning extensions) is a Python library of useful tools for the day-to-day data science tasks. Sebastian Raschka 2014-2020 Links Doc

Sebastian Raschka 4.2k Jan 2, 2023
🔮 A refreshing functional take on deep learning, compatible with your favorite libraries

Thinc: A refreshing functional take on deep learning, compatible with your favorite libraries From the makers of spaCy, Prodigy and FastAPI Thinc is a

Explosion 2.6k Dec 30, 2022
Framework for abstracting Amiga debuggers and access to AmigaOS libraries and devices.

Framework for abstracting Amiga debuggers. This project provides abstration to control an Amiga remotely using a debugger. The APIs are not yet stable

Roc Vallès 39 Nov 22, 2022
Libraries, tools and tasks created and used at DeepMind Robotics.

Libraries, tools and tasks created and used at DeepMind Robotics.

DeepMind 270 Nov 30, 2022
New AidForBlind - Various Libraries used like OpenCV and other mentioned in Requirements.txt

AidForBlind Recommended PyCharm IDE Various Libraries used like OpenCV and other

Aalhad Chandewar 1 Jan 13, 2022
In this project, we'll be making our own screen recorder in Python using some libraries.

Screen Recorder in Python Project Description: In this project, we'll be making our own screen recorder in Python using some libraries. Requirements:

Hassan Shahzad 4 Jan 24, 2022
Bayesian-Torch is a library of neural network layers and utilities extending the core of PyTorch to enable the user to perform stochastic variational inference in Bayesian deep neural networks

Bayesian-Torch is a library of neural network layers and utilities extending the core of PyTorch to enable the user to perform stochastic variational inference in Bayesian deep neural networks. Bayesian-Torch is designed to be flexible and seamless in extending a deterministic deep neural network architecture to corresponding Bayesian form by simply replacing the deterministic layers with Bayesian layers.

Intel Labs 210 Jan 4, 2023
Neural-net-from-scratch - A simple Neural Network from scratch in Python using the Pymathrix library

A Simple Neural Network from scratch A Simple Neural Network from scratch in Pyt

Youssef Chafiqui 2 Jan 7, 2022
A lightweight Python-based 3D network multi-agent simulator. Uses a cell-based congestion model. Calculates risk, loudness and battery capacities of the agents. Suitable for 3D network optimization tasks.

AMAZ3DSim AMAZ3DSim is a lightweight python-based 3D network multi-agent simulator. It uses a cell-based congestion model. It calculates risk, battery

Daniel Hirsch 13 Nov 4, 2022
Pytorch Implementation of Adversarial Deep Network Embedding for Cross-Network Node Classification

Pytorch Implementation of Adversarial Deep Network Embedding for Cross-Network Node Classification (ACDNE) This is a pytorch implementation of the Adv

陈志豪 8 Oct 13, 2022
Neurolab is a simple and powerful Neural Network Library for Python

Neurolab Neurolab is a simple and powerful Neural Network Library for Python. Contains based neural networks, train algorithms and flexible framework

null 152 Dec 6, 2022
A scikit-learn compatible neural network library that wraps PyTorch

A scikit-learn compatible neural network library that wraps PyTorch. Resources Documentation Source Code Examples To see more elaborate examples, look

null 4.9k Dec 31, 2022
Visualizer for neural network, deep learning, and machine learning models

Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), Tens

Lutz Roeder 21k Jan 6, 2023
Graph neural network message passing reframed as a Transformer with local attention

Adjacent Attention Network An implementation of a simple transformer that is equivalent to graph neural network where the message passing is done with

Phil Wang 49 Dec 28, 2022