A machine learning library for spiking neural networks. Supports training with both Torch and JAX pipelines, and deployment to neuromorphic hardware.

Overview

Rockpool



Rockpool is a Python package for developing signal-processing applications with spiking neural networks. Rockpool lets you build networks; simulate, train and test them; and deploy them either in simulation or on event-driven neuromorphic compute hardware. Rockpool provides layers with a number of simulation backends, including Brian2, NEST, Torch, JAX, Numba and raw NumPy. Rockpool is designed to make machine learning based on SNNs easier. It is not designed for detailed simulation of biological networks.
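As a quick taste of the high-level API, here is a minimal sketch of building and evolving a small network. The module names follow the Rockpool v2 documentation, but the exact shapes and return values are assumptions to verify against the tutorials:

import numpy as np
from rockpool.nn.modules import LIF, Linear
from rockpool.nn.combinators import Sequential

# - Linear weights project 2 input channels onto 4 LIF neurons
net = Sequential(
    Linear((2, 4)),
    LIF((4,)),
)

# - Evolve the network over 100 time steps of random 2-channel input
input_data = np.random.rand(100, 2)
output, new_state, recordings = net(input_data)
print(output.shape)  # expected (batch, time, neurons)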

Documentation and getting started

The best place to start with Rockpool is the documentation, which contains several tutorials and getting started guides.

The documentation is hosted online: https://rockpool.ai/

Installation instructions

Use pip to install Rockpool and required dependencies

$ pip install rockpool --user

The --user option installs the package only for the current user.
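To check that the installation succeeded, import the package and print its version (assuming the release exposes the usual __version__ attribute):

$ python -c "import rockpool; print(rockpool.__version__)"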

If you want to install all the extra dependencies required for Brian, PyTorch and Jax layers, use the command

$ pip install rockpool[all] --user

NEST-backed modules

The NEST simulator cannot be installed using pip. Please see the NEST documentation at https://nest-simulator.readthedocs.io/en/latest/ for instructions on how to get NEST running on your system.
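On many platforms NEST can also be installed from conda-forge; this one-liner is a suggestion to verify against the NEST documentation rather than an officially supported path:

$ conda install -c conda-forge nest-simulator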

License

Rockpool is released under an AGPL license. Commercial licenses are available on request.

Contributing

Fork the public repository at https://github.com/SynSense/rockpool, then clone your fork.

$ git clone https://github.com/your-fork-location/rockpool.git rockpool

Install the package in development mode using pip

$ cd rockpool
$ pip install -e . --user

or

$ pip install -e .[all] --user

The main branch is `develop`. You should commit your modifications to a new feature branch.

$ git checkout -b feature/my-feature develop
...
$ git commit -m 'This is a verbose commit message.'

Then push your new branch to your repository

$ git push -u origin feature/my-feature

When you're finished with your modifications, open a pull request on github.com, from your feature branch in your fork to https://github.com/SynSense/rockpool.


Comments
  • Training with JAX tutorial won't work because `optimizers` moved to `optax`

    Hello,

    JAX has moved `jax.experimental.optimizers` to `optax`.

    A vanilla installation of JAX no longer provides the experimental module, so this tutorial breaks.

    Solving it is not as simple as changing `from jax.experimental.optimizers import adam` to `from optax import adam`, because optax uses different functions to update the parameters. I'll try to solve it with my limited understanding of JAX and optax; if I come up with a fix I'll post it here.
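
    For reference, a minimal sketch of the optax update pattern that replaces the old jax.experimental.optimizers API; the quadratic loss below is a placeholder rather than Rockpool's training loss, and the hyperparameters are arbitrary:

    import jax
    import jax.numpy as jnp
    import optax

    def loss_fn(params):
        # Placeholder loss: squared distance from a fixed target vector
        return jnp.sum((params - jnp.arange(3.0)) ** 2)

    params = jnp.zeros(3)
    optimizer = optax.adam(learning_rate=1e-1)
    opt_state = optimizer.init(params)

    @jax.jit
    def train_step(params, opt_state):
        # optax splits the old optimizer API into update() and apply_updates()
        loss, grads = jax.value_and_grad(loss_fn)(params)
        updates, opt_state = optimizer.update(grads, opt_state)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss

    for _ in range(200):
        params, opt_state, loss = train_step(params, opt_state)

    print(params)  # converges towards [0., 1., 2.]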

    Environment: macOS Ventura 13.0, Python 3.10.8, JAX 0.3.25, optax 0.1.4, rockpool 2.4.2.

    opened by aquaresima 2
  • Getting error when trying to start training of LIF Model with TSEvents as input: ValueError: Input has wrong neuron dimension

    I am trying to build a simple network with two layers of LIF neurons, of sizes 50 and 8 respectively, with spike-train input:

    # - Define the network size
    input_size = 50
    hidden_size = 50
    output_size = 8
    
    # - Build a sequential stack of modules
    sequential_model = Sequential(
        LIF((input_size, hidden_size)),
        LIF((hidden_size, output_size))
    )
    

    then defining a TSEvent:

    input_spike_events = TSEvent(
        times=spikes[1],
        channels=spikes[0],
        t_start=0,
        t_stop=1000,
        num_channels=50
    )
    

    where `spikes` is an ndarray of shape 2×N, with N the number of spikes, `spikes[1]` the spike times, and `spikes[0]` the corresponding spike channels.

    print(input_spike_events)
    

    produces:

    non-periodic `TSEvent` object `unnamed` from t=0.0 to 1000.0. Channels: 50. 
    Events: 244
    

    But when I call:

    output, new_state, _ = sequential_model(input_spike_events)
    

    I get an error:

    ValueError: Input has wrong neuron dimension. It is 2, must be 50
    

    From what I see, the issue is in rockpool/nn/modules/module.py, inside the function `_auto_batch` (line 763), which incorrectly derives `n_connections` from `data.shape` instead of using the `num_channels` parameter of the TSEvent object.

    rockpool v2.4.2
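
    A possible workaround sketch, not from the original thread: rasterise the TSEvent to a dense (timesteps, channels) array before evolving the network, so the input dimension is unambiguous. The raster() call, its dt argument and the expected shape are assumptions to check against the Rockpool timeseries documentation:

    # Hypothetical workaround: convert the events to a dense raster first
    dt = 1e-3  # simulation time-step in seconds; chosen arbitrarily here
    input_raster = input_spike_events.raster(dt=dt)
    print(input_raster.shape)  # expected (num_timesteps, 50)

    # Evolve the network on the dense raster instead of the TSEvent
    output, new_state, _ = sequential_model(input_raster)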

    opened by fire-papaya 1