Neurolab is a simple and powerful Neural Network Library for Python

Overview

Neurolab

Neurolab is a simple and powerful Neural Network Library for Python. It contains basic neural networks and training algorithms, plus a flexible framework for creating and exploring other neural network types.

Features

  • Pure Python + NumPy
  • API like Neural Network Toolbox (NNT) from MATLAB
  • Interface to use training algorithms from scipy.optimize
  • Flexible network configurations and learning algorithms. You may change: training, error, initialization and activation functions
  • Unlimited number of neural layers and number of neurons in layers
  • Variety of supported types of Artificial Neural Network and learning algorithms

Example

    >>> import numpy as np
    >>> import neurolab as nl
    >>> # Create train samples
    >>> input = np.random.uniform(-0.5, 0.5, (10, 2))
    >>> target = (input[:, 0] + input[:, 1]).reshape(10, 1)
    >>> # Create a network with 2 inputs, 5 neurons in the hidden layer
    >>> # and 1 in the output layer
    >>> net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
    >>> # Train process
    >>> err = net.train(input, target, show=15)
    Epoch: 15; Error: 0.150308402918;
    Epoch: 30; Error: 0.072265865089;
    Epoch: 45; Error: 0.016931355131;
    The goal of learning is reached
    >>> # Test
    >>> net.sim([[0.2, 0.1]]) # 0.2 + 0.1
    array([[ 0.28757596]])
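
As the feature list notes, the training function can be swapped out, including for the wrappers around scipy.optimize. A minimal sketch continuing the session above (train_bfgs is one of the trainers in nl.train; exact convergence will vary):

    >>> # Switch the trainer to the BFGS wrapper around scipy.optimize
    >>> net.trainf = nl.train.train_bfgs
    >>> err = net.train(input, target, epochs=500, show=100, goal=0.01)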

Links

  • Documentation: https://pythonhosted.org/neurolab/
  • Original project on Google Code: http://code.google.com/p/neurolab/

Install

Install neurolab using pip:

    $> pip install neurolab

Or download the source package, un-gzip and untar it, cd into the new directory, and run:

    $> python setup.py install

Supported neural network types

  • Single layer perceptron (newp)
  • Multilayer feed forward perceptron (newff)
  • Competitive layer (Kohonen layer) (newc)
  • Learning Vector Quantization (newlvq)
  • Elman recurrent network (newelm)
  • Hopfield recurrent network (newhop)
  • Hemming recurrent network (newhem)

Comments
  • Problem in training Neural network

    I am pretty new to using Python and neurolab, and I have a problem with the training of my feed-forward neural network. I built the net as follows:

        net = nl.net.newff([[-1, 1]]*64, [60, 1])
        net.init()
        testerr = net.train(InputT, TargetT, epochs=100, show=1)

    My target output is a vector between 0 and 4. When I use nl.train.train_bfgs I get in the console:

        testerr = net.train(InputT, TargetT, epochs=10, show=1)
        Epoch: 1; Error: 55670.4462766;
        Epoch: 2; Error: 55649.5;

    As you can see, I set the number of epochs to 100, but it stops at the second epoch, and after testing the net with Netresults=net.sim(InputCross) the test output is an array full of 1s (totally wrong). If I use the other training functions I get the same test output vector full of 1s, but then the training does reach the number of epochs I set while the displayed error does not change between epochs. The same happens if the target output vector is between -1 and 1. Any suggestions? Thank you very much!

    opened by pattysan 15
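
    A likely culprit here (my reading, not a reply from the thread): newff uses the TanSig activation by default in every layer, so the network output is confined to [-1, 1] and can never match targets in [0, 4]; the huge starting error and the stalled training are consistent with that. A minimal sketch of rescaling the targets before training, with random stand-ins shaped like the reporter's data:

        import numpy as np
        import neurolab as nl

        # Stand-ins shaped like the reporter's data: 64 inputs, targets in [0, 4]
        InputT = np.random.uniform(-1, 1, (50, 64))
        TargetT = np.random.uniform(0, 4, (50, 1))

        # Scale targets from [0, 4] into TanSig's output range [-1, 1]
        t_min, t_max = TargetT.min(), TargetT.max()
        target_scaled = 2 * (TargetT - t_min) / (t_max - t_min) - 1

        net = nl.net.newff([[-1, 1]] * 64, [60, 1])
        testerr = net.train(InputT, target_scaled, epochs=100, show=25)

        # Map simulated outputs back to the original scale
        out = net.sim(InputT)
        results = (out + 1) / 2 * (t_max - t_min) + t_min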
  • feedforward network not learning

    What steps will reproduce the problem?
    1. Training a newff network
    2. Large input: 42 input neurons, floating-point values as inputs
    3. Output: (0,0,0,1), (0,0,1,0), (0,1,0,0), (1,0,0,0)
    
    What is the expected output? What do you see instead?
       A decrease in error; a constant error is seen instead.
    
    What version of the product are you using? On what operating system?
       The most recent version, on my Fedora 17 (Linux 3.7).
    
    Please provide any additional information below.
    
    This was to recognize hand gestures, where the inputs are x, y, z
    acceleration values and the 4 patterns are the possible outputs.
    

    Original issue reported on code.google.com by sarath.sp06 on 10 Mar 2013 at 5:56

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 12
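
    One thing worth checking (an assumption on my part, not a confirmed diagnosis): raw, unscaled inputs saturate the default TanSig layers, which produces exactly this kind of flat, non-decreasing error. A minimal sketch with standardized inputs and LogSig output units, whose (0, 1) range matches the one-hot targets; the data here are random stand-ins for the reporter's gesture features:

        import numpy as np
        import neurolab as nl

        # Hypothetical stand-ins: 42 float inputs per sample, 4-class one-hot targets
        rng = np.random.RandomState(0)
        X = rng.uniform(-10, 10, (200, 42))
        Y = np.eye(4)[rng.randint(0, 4, 200)]

        X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each input column

        net = nl.net.newff(nl.tool.minmax(X), [20, 4],
                           [nl.trans.TanSig(), nl.trans.LogSig()])
        err = net.train(X, Y, epochs=500, show=50, goal=0.01)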
  • competitive transfer function produces incorrect output

    What steps will reproduce the problem?
    1. Import neurolab as nl and create a simple vector n with 3 values, two
    values < 0 and one value > 0, e.g. n = (-0.5763, 0.8345, -0.1234)
    2. Let f = nl.trans.Competitive()
    3. a = f(n)
    
    What is the expected output? What do you see instead?
    I expect [0, 1, 0]; instead I see [1, 0, 0].
    
    What version of the product are you using? On what operating system?
    Version 0.1.0 on Ubuntu 11.04
    
    Please provide any additional information below.
    
    
    

    Original issue reported on code.google.com by [email protected] on 5 Aug 2011 at 7:21

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 12
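
    Worth noting (my reading of the behavior, not an official statement): a competitive layer in neurolab computes distances, so Competitive activates the neuron with the minimum net input rather than the maximum, which would make [1, 0, 0] the intended answer for this vector. A winner-take-all transfer over distances might look like this sketch:

        import numpy as np

        def competitive(n):
            # Winner-take-all: 1 for the smallest input (shortest
            # distance), 0 elsewhere, as a float array like the input.
            out = np.zeros_like(np.asarray(n, dtype=float))
            out[np.argmin(n)] = 1.0
            return out

        print(competitive([-0.5763, 0.8345, -0.1234]))  # -> [1. 0. 0.]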
  • Multiprocessing can not pickle unbound function

    Using the neurolab Net objects, it is not possible to parallelize execution, because the unbound functions they hold cannot be pickled through queues.
    
    It could be solved by not storing unbound functions for trainf or errorf during instantiation of the Net structure. A quick solution is defining errorf and trainf as strings and resolving them at the right moment, as in this error-function call:
    
        return getattr(error, net.errorf)(target - output)
    

    Original issue reported on code.google.com by [email protected] on 6 Mar 2014 at 1:13

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 6
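
    A small sketch of the workaround the reporter describes, with hypothetical names (this is not the shipped neurolab API): store only the name of the error function, so the object pickles cleanly, and resolve it with getattr at call time:

        import pickle

        import numpy as np
        from neurolab import error

        class PicklableNet:
            # Hypothetical wrapper: keep only the *name* of the error
            # function; strings pickle fine, function objects may not.
            def __init__(self, errorf_name='MSE'):
                self.errorf_name = errorf_name

            def compute_error(self, target, output):
                # Resolve the error class by name, as the reporter suggests.
                errorf = getattr(error, self.errorf_name)()
                return errorf(np.asarray(target) - np.asarray(output))

        net = PicklableNet()
        blob = pickle.dumps(net)          # works: no function objects inside
        print(pickle.loads(blob).compute_error([1.0, 0.0], [0.8, 0.1]))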
  • Added regularization and cross-entropy error to Neurolab

    I have added L2 and L1 regularization capabilities for training feed-forward networks to the Neurolab library. I have also added cross-entropy error (used in logistic regression) to error.py.
    
    I would like to see these modifications incorporated into the standard distribution of Neurolab, but I'm not sure how to contact the developers of Neurolab. If anyone has advice, please let me know.
    
    You can find my modifications, an explanation, and a demonstration here:
    1. https://github.com/kwecht/NeuroLab
    2. http://nbviewer.ipython.org/github/kwecht/NeuroLab/blob/master/Adding%20Regularization%20to%20Neurolab.ipynb
    

    Original issue reported on code.google.com by [email protected] on 22 Dec 2014 at 9:18

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 5
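
    For readers curious what the cross-entropy addition amounts to, here is a generic NumPy sketch of binary cross-entropy (my illustration, not kwecht's actual code):

        import numpy as np

        def cross_entropy(target, output, eps=1e-12):
            # Binary cross-entropy, averaged over samples; eps guards
            # against log(0) when outputs saturate at 0 or 1.
            output = np.clip(output, eps, 1 - eps)
            return -np.mean(target * np.log(output)
                            + (1 - target) * np.log(1 - output))

        print(cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.2])))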
  • Cannot install successfully for Python 3.2

    The package installs automatically and successfully into the Python 2.7 
    directories. I am using Python 3.2 and cannot import it into a Python 3.2 
    program; I get a "no module found" error. When I set the path to the 2.7 
    directory and try to 'import neurolab' it finds __init__.py and then raises an 
    error saying no 'net' module is found. I admit I am new to Python and Linux, so 
    any help that can be given is appreciated. I have installed this package using 
    easy_install, pip and the setup.py methods and get the same result.
    
    
    

    Original issue reported on code.google.com by [email protected] on 2 Jan 2014 at 3:29

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 5
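
    A general note on this (not from the thread itself): a package lands in the site-packages of whichever interpreter runs the installer, so installing for Python 3.2 means invoking that interpreter's own tooling, for example:

        $> python3.2 setup.py install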
  • Failing to add Levenberg-Marquardt-training

    Hello everyone!
    
    I've been trying to add the Levenberg-Marquardt algorithm as a training method, 
    but I am failing to do so: I get a strange error message I don't understand.
    
    Here's the small method I've added to spo.py, using scipy.optimize.leastsq:
    
        class TrainLM(TrainSO):
            def __call__(self, net, input, target):
                from scipy.optimize import leastsq
                x = leastsq(self.fcn, self.x.copy())
                self.x[:] = x
    
    And in train/__init__.py I added the line train_lm = trainer(spo.TrainLM).
    
    A simple test run using the XOR function looks like this:
    
        >>> import neurolab
        >>> target = [[0], [1], [1], [0]]
        >>> input = [[0,0], [0,1], [1,0], [1,1]]
        >>> net = neurolab.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
        >>> net.trainf = neurolab.train.train_lm
        >>> err = net.train(input, target, show=15)
    
    I get this error message:
    
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python2.7/dist-packages/neurolab/core.py", line 163, in train
        return self.trainf(self, *args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/neurolab/core.py", line 345, in __call__
        train(net, *args)
      File "/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.py", line 44, in __call__
        x = leastsq(self.fcn, self.x.copy())
      File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 278, in leastsq
        raise TypeError('Improper input: N=%s must not exceed M=%s' % (n,m))
    TypeError: Improper input: N=21 must not exceed M=1
    
    I've been playing around with this, and I do not understand what M is in this 
    case. What am I doing wrong?
    
    Thanks in advance!
    

    Original issue reported on code.google.com by [email protected] on 6 Nov 2011 at 5:52

    Priority-Medium auto-migrated Type-Other 
    opened by GoogleCodeExporter 5
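
    For context on the error (my analysis, not a reply from the thread): leastsq requires the objective to return a residual vector at least as long (M) as the parameter vector (N), while TrainSO's fcn evidently returns a single scalar error, hence M=1 against N=21. Note that even per-sample residuals on the 4-row XOR set would give M=4 < 21, so more samples or a smaller net are also needed. A sketch, assuming nl.tool.np_get_ref returns a writable flat view of the weights as it does in spo.py:

        import numpy as np
        import neurolab as nl
        from scipy.optimize import leastsq

        def residuals(w, x_ref, net, input, target):
            # Write the flat parameter vector back into the net (x_ref is
            # a writable view of the weights), then return one residual
            # per sample so leastsq sees M >= N.
            x_ref[:] = w
            return (target - net.sim(input)).ravel()

        # 30 samples -> M = 30 residuals; a [5, 1] net on 2 inputs has
        # N = 21 parameters, so M >= N and leastsq accepts the problem.
        input = np.random.uniform(-0.5, 0.5, (30, 2))
        target = (input[:, 0] + input[:, 1]).reshape(30, 1)
        net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
        x_ref = nl.tool.np_get_ref(net)   # flat view of all weights
        w_fit, _ = leastsq(residuals, x_ref.copy(),
                           args=(x_ref, net, input, target))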
  • Elman network

    Could you show an example of an Elman network predicting numeric 
    sequences? Having trained the network on the sequence [1, 3, 5, 7, 9] 
    and then feeding it input, I want it to predict the next element (11). 
    Skimming diagonally through the sources I could not find an answer: 
    step() takes an element and returns roughly the same element back.
    

    Original issue reported on code.google.com by [email protected] on 17 Oct 2011 at 11:02

    Priority-Medium auto-migrated Type-Other 
    opened by GoogleCodeExporter 4
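
    A rough sketch of how such a prediction could be set up with newelm (my own illustration, not an official example; layer sizes and epoch counts are guesses): train the network to map each element to its successor on a scaled copy of the sequence, then feed the last known element:

        import numpy as np
        import neurolab as nl

        seq = np.array([1, 3, 5, 7, 9], dtype=float)
        scale = seq.max()                        # bring values into [-1, 1]
        x = (seq[:-1] / scale).reshape(-1, 1)    # inputs:  1, 3, 5, 7
        y = (seq[1:] / scale).reshape(-1, 1)     # targets: 3, 5, 7, 9

        net = nl.net.newelm([[-1, 1]], [10, 1],
                            [nl.trans.TanSig(), nl.trans.PureLin()])
        net.train(x, y, epochs=500, show=100, goal=0.001)

        # Predict the successor of 9 (extrapolation, so expect some error)
        print(net.sim([[9 / scale]]) * scale)    # ideally close to 11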
  • Missing newelm example in doc

    Hi. Your doc pages:
    
    https://pythonhosted.org/neurolab/ex_newff.html
    https://pythonhosted.org/neurolab/ex_newelm.html
    
    have the same example.
    
    The newelm example is missing.
    
    
    

    Original issue reported on code.google.com by [email protected] on 20 Aug 2014 at 3:35

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 3
  • Citing neurolab

    First, thanks for this great project.
    I would like to cite neurolab in my papers. Is there any paper about neurolab 
    itself?
    
    Thank you.
    

    Original issue reported on code.google.com by [email protected] on 1 May 2014 at 8:19

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 3
  • Setup issue

    What steps will reproduce the problem?
    1. I installed with: pip install neurolab
    2. Run the single layer perceptron example (newp)
    
    What is the expected output? What do you see instead?
    The output is:
    
        Traceback (most recent call last):
          File "neurolab.py", line 10, in <module>
            import neurolab
          File "/home/lilei/neurolab.py", line 17, in <module>
            net = neurolab.net.newp([[0, 1],[0, 1]], 1)
        AttributeError: 'module' object has no attribute 'net'
    
    And if I copy the neurolab package files into the path of the example file, 
    then the error is fixed.
    What version of the product are you using? On what operating system?
    0.2.3
    
    Please provide any additional information below.
    
    
    

    Original issue reported on code.google.com by [email protected] on 26 Jun 2013 at 4:32

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 3
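
    The traceback above suggests a cause the thread never states explicitly: the user's own script is /home/lilei/neurolab.py, so import neurolab imports that script instead of the installed package, and the script has no net attribute. Renaming the script (to anything but neurolab.py; the name below is just an example) avoids the shadowing:

        $> mv neurolab.py perceptron_example.py
        $> rm -f neurolab.pyc    # stale bytecode would keep shadowing
        $> python perceptron_example.py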
  • Error while executing net

    While using the function newlvq in net, I encountered the following error:

        line 184, in newlvq
          layer_out.np['w'][n][st:i].fill(1.0)
        TypeError: slice indices must be integers or None or have an __index__ method

    On further investigation, I found that this relates to the solution #481.

    The array produced by line 181, inx = np.floor(cn0 * pc.cumsum()), needs to be changed from float64 to integers before its values are used as the slice indices in line 184.

    While I would not call this an elegant solution, it does resolve the type issue:

        def newlvq(minmax, cn0, pc):
            pc = np.asfarray(pc)
            assert sum(pc) == 1
            ci = len(minmax)
            cn1 = len(pc)
            assert cn0 > cn1

            layer_inp = layer.Competitive(ci, cn0)
            layer_out = layer.Perceptron(cn0, cn1, trans.PureLin())

            layer_out.initf = None
            layer_out.np['b'].fill(0.0)
            layer_out.np['w'].fill(0.0)
            inx = np.floor(cn0 * pc.cumsum())

            # --- Modification begins: cast the float indices to int ---
            new = [int(i) for i in inx]
            for n, i in enumerate(new):
                st = 0 if n == 0 else new[n - 1]
                layer_out.np['w'][n][st:i].fill(1.0)
            # --- End of modification ---

            net = Net(minmax, cn1, [layer_inp, layer_out],
                      [[-1], [0], [1]], train.train_lvq, error.MSE())
            return net
    

    Thanks, Jonathan

    opened by WinterHolidays 0
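
    A more compact variant of the same fix (equivalent in effect, my suggestion): cast the cumulative-sum array to integers once with NumPy, so the loop itself stays untouched:

        inx = np.floor(cn0 * pc.cumsum()).astype(int)
        for n, i in enumerate(inx):
            st = 0 if n == 0 else inx[n - 1]
            layer_out.np['w'][n][st:i].fill(1.0)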
  • Error while executing the example code of Learning Vector Quantization

    From this page: https://pythonhosted.org/neurolab/ex_newlvq.html
    Using numpy 1.13.3 and neurolab 0.3.5, the error message I got was:

    Traceback (most recent call last):
      File "learning_vector.py", line 20, in <module>
        net = nl.net.newlvq(nl.tool.minmax(input), 4, [.6, .4])
      File "/usr/local/lib/python2.7/dist-packages/neurolab/net.py", line 179, in newlvq
        layer_out.np['w'][n][st:i].fill(1.0)
    TypeError: slice indices must be integers or None or have an __index__ method
    
    

    I fixed the issue by editing neurolab/net.py and changing the line 127 from

        layer_out.np['w'][n][st:i].fill(1.0)

    to

        layer_out.np['w'][n][int(st):int(i)].fill(1.0)

    Thanks!

    Martin Rioux

    opened by martinrioux 2
  • 'epochf' before 'learn' in training algorithm

    Why is 'epochf' called before 'learn' in the train/gd.py algorithms?

    Imagine you need only one epoch (for example, in a reinforcement learning step). When you specify 'epochs=1' the algorithm stops before any learning. When you specify 'epochs=2' you get an additional unnecessary gradient calculation (so you double the count of expensive calculations).

    opened by supersasha 0
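
    A minimal, self-contained illustration of the ordering complaint (stand-in functions, not the literal train/gd.py code): with the termination check first, a budget of one epoch pays for a gradient but never updates the weights:

        def compute_gradient():
            print('expensive gradient calculation')

        def epochf(epoch, epochs):
            if epoch >= epochs:
                raise StopIteration    # stand-in for neurolab's stop signal

        def learn():
            print('weight update')

        epochs = 1
        try:
            for epoch in range(1, epochs + 1):
                compute_gradient()
                epochf(epoch, epochs)  # reported order: check comes first,
                learn()                # so this line is never reached
        except StopIteration:
            pass                       # one gradient paid, zero updates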