A Software Framework for Neuromorphic Computing

Introduction

Lava is an open-source software framework for developing neuro-inspired applications and mapping them to neuromorphic hardware. Lava provides developers with the tools and abstractions to develop applications that fully exploit the principles of neural computation. Constrained in this way, like the brain, Lava applications allow neuromorphic platforms to intelligently process, learn from, and respond to real-world data with great gains in energy efficiency and speed compared to conventional computer architectures.

The vision behind Lava is an open, community-developed code base that unites the full range of approaches pursued by the neuromorphic computing community. It provides a modular, composable, and extensible structure for researchers to integrate their best ideas into a growing algorithms library, while introducing new abstractions that allow others to build on those ideas without having to reinvent them.

For this purpose, Lava allows developers to define versatile processes such as individual neurons, neural networks, conventionally coded programs, interfaces to peripheral devices, and bridges to other software frameworks. Lava allows collections of these processes to be encapsulated into modules and aggregated to form complex neuromorphic applications. Communication between Lava processes uses event-based message passing, where messages can range from binary spikes to kilobyte-sized packets.

The behavior of Lava processes is defined by one or more implementation models, where different models may be specified for different execution platforms ("backends"), different degrees of precision, and for high-level algorithmic modeling purposes. For example, an excitatory/inhibitory neural network process may have different implementation models for an analog neuromorphic chip compared to a digital neuromorphic chip, but the two models could share a common "E/I" process definition with each model's implementations determined by common input parameters.
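
For illustration, the decorator-based pattern shown later in this README could express such backend-specific models roughly as follows. This is only a sketch: the EINetwork process and both model classes are hypothetical names, not part of the released process library.

from lava import magma as mg
from lava.magma import AbstractProcess
from lava.magma.resources import CPU, Loihi1NeuroCore
from lava.magma.pymodel import AbstractPyProcessModel
from lava.magma.ncmodel import AbstractNcProcessModel

class EINetwork(AbstractProcess):
    """Hypothetical excitatory/inhibitory network process definition,
    shared by all of its implementation models."""

@mg.implements(proc=EINetwork)
@mg.requires(CPU)
class PyEINetworkModel(AbstractPyProcessModel):
    """Behavioral model for CPU simulation (e.g., floating point)."""

@mg.implements(proc=EINetwork)
@mg.requires(Loihi1NeuroCore)
class NcEINetworkModel(AbstractNcProcessModel):
    """Structural model that maps the same process onto a neuro core."""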

Lava is platform-agnostic so that applications can be prototyped on conventional CPUs/GPUs and deployed to heterogeneous system architectures spanning both conventional processors as well as a range of neuromorphic chips such as Intel's Loihi. To compile and execute processes for different backends, Lava builds on a low-level interface called Magma with a powerful compiler and runtime library. Over time, the Lava developer community may enhance Magma to target additional neuromorphic platforms beyond its initial support for Intel's Loihi chips.

The Lava framework supports the following (with some features to be released soon):

  • Channel-based message passing between asynchronous processes (the Communicating Sequential Processes paradigm)
  • Hyper-granular parallelism where computation emerges as the collective result of inter-process interactions
  • Heterogeneous execution platforms with both conventional and neuromorphic components
  • Measurement (and cross-platform modeling) of performance and energy consumption
  • Offline backprop-based training of a wide range of neuron models and network topologies
  • Online real-time learning using plasticity rules constrained to access only locally available process information
  • Tools for generating complex spiking neural networks such as dynamic neural fields and networks that solve well-defined optimization problems
  • Integration with third-party frameworks

Future planned enhancements include support for emerging computational paradigms such as Vector Symbolic Architectures (aka Hyperdimensional Computing) and nonlinear oscillatory networks.

For maximum developer productivity, Lava blends a simple Python interface with accelerated performance through underlying C/C++/CUDA/OpenCL code.

For more information, visit the Lava Documentation: http://lava-nc.org/

Release plan

Intel's Neuromorphic Computing Lab (NCL) developed the initial Lava architecture through an iterative (re-)design process starting from its initial Loihi Nx SDK software. As of October 2021, this serves as the seed of the Lava open-source project, which is being released in stages as the final refactoring for the new Lava software architecture is completed. During the first two months after the initial September 30, 2021 launch, NCL will release the core Lava components and the first algorithm libraries in regular bi-weekly releases.

After this first wave of releases, NCL releases are expected to relax to quarterly intervals, allowing more time for significant new features and enhancements to be implemented and for engagement with contributors from the wider community to grow.

Initial release schedule:

| Component | HW support | Features |
| --- | --- | --- |
| Magma | CPU, GPU | The generic, HW-agnostic high-level API supports the creation of processes that execute asynchronously and in parallel and communicate via messages over channels, enabling algorithm and application development. The Compiler and Runtime initially support execution or simulation only on CPU and GPU platforms. A series of basic examples and tutorials explains Lava's key architectural and usage concepts. |
| Process library | CPU, GPU | Initially supports basic processes to create spiking neural networks with different neuron models, connection topologies, and input/output processes. |
| Deep Learning library | CPU, GPU | The Lava Deep Learning (DL) library allows direct training of stateful, event-based spiking neural networks with backpropagation via SLAYER 2.0, as well as inference through Lava. Training and inference will initially be supported only on CPU/GPU HW. |
| Optimization library | CPU, GPU | The Lava optimization library offers a variety of constrained optimization solvers, such as constraint satisfaction (CSP) and quadratic unconstrained binary optimization (QUBO). |
| Dynamic Neural Field library | CPU, GPU | The Dynamic Neural Field (DNF) library allows users to build neural attractor networks for working memory, decision making, basic neuronal representations, and learning. |
| Magma and Process library | Loihi 1, 2 | The Compiler, Runtime, and process library will be upgraded to support the Loihi 1 and 2 architectures. |
| Profiler | CPU, GPU | The Lava Profiler enables power and performance measurements on neuromorphic HW, as well as simulation of the power and performance of neuromorphic HW on CPU/GPU platforms. Initially, only CPU/GPU support will be available. |
| DL, DNF, and Optimization libraries | Loihi 1, 2 | All algorithm libraries will be upgraded to support, and be properly tested on, neuromorphic HW. |

Lava organization

Processes are the fundamental building block in the Lava architecture, from which all algorithms and applications are built. Processes are stateful objects with internal variables, input and output ports for message-based communication via channels, and multiple behavioral models. This architecture is inspired by the Communicating Sequential Processes (CSP) paradigm for asynchronous, parallel systems that interact via message passing. Lava processes implementing the CSP API can be compiled and executed via a cross-platform compiler and runtime that support execution on neuromorphic and conventional von Neumann HW. Together, these components form the low-level Magma layer of Lava.
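
To make the CSP paradigm itself concrete, here is a minimal plain-Python sketch of two asynchronous processes exchanging event-based messages over a channel. It uses only the standard library, not the Lava API:

from multiprocessing import Process, Queue

def sender(chan):
    # Emit a few event-based messages, then a sentinel to close the channel.
    for t in range(3):
        chan.put(("spike", t))
    chan.put(None)

def receiver(chan):
    # Block on the channel until each message arrives.
    while True:
        msg = chan.get()
        if msg is None:
            break
        print("received", msg)

if __name__ == "__main__":
    chan = Queue()
    procs = [Process(target=sender, args=(chan,)),
             Process(target=receiver, args=(chan,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()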

At a higher level, the process library contains a growing set of generic processes that implement various kinds of neuron models, neural network connection topologies, IO processes, etc. These execute on either CPU, GPU or neuromorphic HW such as Intel's Loihi architecture.

Various algorithm and application libraries build on these generic processes to create specialized processes and provide tools to train or configure processes for more advanced applications. A deep learning library, a constrained optimization library, and a dynamic neural field library are among the first to be released in Lava, with more libraries to come in future releases.

Lava is open to modification and extension with third-party frameworks like Nengo, ROS, YARP, and others. Additional utilities also allow users to profile the power and performance of workloads, visualize complex networks, or help with the float-to-fixed-point conversions required for many low-precision devices such as neuromorphic HW.
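
As a rough illustration of what such a float-to-fixed-point conversion involves, consider the following hypothetical helper (a sketch, not Lava's actual float2fixed utility, which additionally handles scaling, rounding modes, and per-variable precision):

import numpy as np

def to_fixed(x, precision=8):
    """Quantize floats in [-1, 1) to signed fixed-point integers (hypothetical sketch)."""
    scale = 2 ** (precision - 1)
    q = np.round(x * scale).astype(np.int32)
    return np.clip(q, -scale, scale - 1)

weights = np.random.uniform(-1, 1, size=(10, 784))
fixed_weights = to_fixed(weights, precision=8)  # e.g., 8-bit synaptic weights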


All of Lava's core APIs and higher-level components are released, by default, with permissive BSD 3 licenses in order to encourage the broadest possible community contribution. Lower-level Magma components needed for mapping processes to neuromorphic backends are generally released with more restrictive LGPL-2.1 licensing to discourage commercial proprietary forks of these technologies. The specific components of Magma needed to compile processes for Intel Loihi chips remain proprietary to Intel and are not provided through this GitHub site (see below). Similar Magma-layer code for other future commercial neuromorphic platforms will likely also remain proprietary.

Getting started

Install instructions

Installing or cloning Lava

New Lava releases will be published via GitHub releases and can be installed after downloading.

   pip install lava-0.0.1.tar.gz
   pip install lava-lib-0.0.1.tar.gz

If you would like to contribute to the source code or work with the source directly, you can also clone the repository.

   git clone git@github.com:lava-nc/lava.git
   pip install -e lava/lava
   
   git clone git@github.com:lava-nc/lava-lib.git
   # [Optional]
   pip install -e lava-lib/dnf
   pip install -e lava-lib/dl
   pip install -e lava-lib/optimization

This will allow you to run Lava on your own local CPU or GPU.
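
As a quick smoke test of a local CPU installation, something along these lines should work. This is a sketch: the module paths and the bias parameter follow the issue reports further below and may differ between releases.

import numpy as np
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.proc.lif.process import LIF

# A single LIF neuron driven by a constant bias, simulated on a CPU.
lif = LIF(shape=(1,), bias=np.array([100.]))
lif.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
print(lif.v.get())  # read out the membrane voltage after 10 steps
lif.stop()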

Running Lava on Intel Loihi

Intel's Loihi 1 and 2 neuromorphic research systems are currently not available commercially. Developers interested in using Lava with Loihi systems need to join the Intel Neuromorphic Research Community (INRC). Once members of the INRC, developers gain access to cloud-hosted Loihi systems or can obtain physical Loihi systems on a loan basis. In addition, Intel provides further proprietary components of the Magma library that enable compiling processes for Loihi systems; these need to be installed into the same Lava namespace, as in this example:

   pip install /nfs/ncl/releases/lava/0.0.1/lava-nc-0.0.1.tar.gz
   pip install /nfs/ncl/releases/lava/0.0.1/lava-nc-lib-0.0.1.tar.gz

Please email [email protected] to request a research proposal template to apply for INRC membership.

Coding example

Building a simple feed-forward network

# Instantiate Lava processes to build network
import numpy as np
from lava.proc.io import SpikeInput, SpikeOutput
from lava.proc import Dense, LIF

si = SpikeInput(path='source_data_path', shape=(28, 28))
dense = Dense(shape=(10, 784),
              weights=np.random.random((10, 784)))
lif = LIF(shape=(10,), vth=10)
so = SpikeOutput(path='result_data_path', shape=(10,))

# Connect processes via their directional input and output ports
si.out_ports.s_out.reshape(784, 1).connect(dense.in_ports.s_in)
dense.out_ports.a_out.connect(lif.in_ports.a_in)
lif.out_ports.s_out.connect(so.in_ports.s_in)

# Execute processes for fixed number of steps on Loihi 2 (by running any of them)
from lava.magma import run_configs as rcfg
from lava.magma import run_conditions as rcnd
lif.run(run_cfg=rcfg.Loihi2HwCfg(),
        condition=rcnd.RunSteps(1000, blocking=True))

Creating a custom Lava process

A process has input and output ports for interacting with other processes and internal variables that hold its state. A process may have different behavioral implementations in different programming languages or for different HW platforms.

from lava.magma import AbstractProcess, InPort, Var, OutPort

class LIF(AbstractProcess):
    """Leaky-Integrate-and-Fire neural process with activation input and spike
    output ports a_in and s_out.

    Realizes the following abstract behavior:
    u[t] = u[t-1] * (1-du) + a_in
    v[t] = v[t-1] * (1-dv) + u[t] + b
    s_out = v[t] > vth
    v[t] = v[t] - s_out*vth
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        shape = kwargs.pop("shape", (1,))
        # Declare input and output ports
        self.a_in = InPort(shape=shape)
        self.s_out = OutPort(shape=shape)
        # Declare internal variables
        self.u = Var(shape=shape, init=0)
        self.v = Var(shape=shape, init=0)
        self.decay_u = Var(shape=(1,), init=kwargs.pop('du', 1))
        self.decay_v = Var(shape=(1,), init=kwargs.pop('dv', 0))
        self.b = Var(shape=shape, init=kwargs.pop('b', 0))
        self.vth = Var(shape=(1,), init=kwargs.pop('vth', 1))
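
A process defined this way can then be instantiated and wired up like any other, for example (a hypothetical usage sketch, following the connection style used elsewhere in this README):

# Two LIF populations connected one-to-one.
lif1 = LIF(shape=(10,), du=0.1, dv=0.2, b=1, vth=10)
lif2 = LIF(shape=(10,))
lif1.s_out.connect(lif2.a_in)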

Creating process models

Process models provide different behavioral implementations of a process. The following Python model implements the LIF process with the Loihi synchronization protocol and requires a CPU compute resource to run.

import numpy as np
from lava import magma as mg
from lava.magma.resources import CPU
from lava.magma.sync_protocol import LoihiProtocol, DONE
from lava.proc import LIF
from lava.magma.pymodel import AbstractPyProcessModel, LavaType
from lava.magma.pymodel import InPortVecDense as InPort
from lava.magma.pymodel import OutPortVecDense as OutPort

@mg.implements(proc=LIF, protocol=LoihiProtocol)
@mg.requires(CPU)
class PyLifModel(AbstractPyProcessModel):
    # Declare port implementation
    a_in: InPort =     LavaType(InPort, np.int16, precision=16)
    s_out: OutPort =   LavaType(OutPort, bool, precision=1)
    # Declare variable implementation
    u: np.ndarray =    LavaType(np.ndarray, np.int32, precision=24)
    v: np.ndarray =    LavaType(np.ndarray, np.int32, precision=24)
    b: np.ndarray =    LavaType(np.ndarray, np.int16, precision=12)
    du: int =          LavaType(int, np.uint16, precision=12)
    dv: int =          LavaType(int, np.uint16, precision=12)
    vth: int =         LavaType(int, int, precision=8)

    def run_spk(self):
        """Executed during spiking phase of synchronization protocol."""
        # Decay current
        self.u[:] = self.u * (1 - self.du)
        # Receive input activation via channel and accumulate
        activation = self.a_in.recv()
        self.u[:] += activation
        self.v[:] = self.v * (1 - self.dv) + self.u + self.b
        # Generate output spikes and send to receiver
        spikes = self.v > self.vth
        self.v[spikes] -= self.vth
        if np.any(spikes):
            self.s_out.send(spikes)
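
Note that the declared precisions (for example, 24-bit u and v, 12-bit du and dv) appear to mirror the fixed-point bit widths of the Loihi architecture; matching them is what allows a CPU model like this to approximate chip behavior bit-accurately.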

In contrast, the following process model also implements the LIF process, but does so by structurally allocating neural network resources on a virtual Loihi 1 neuro core.

from lava import magma as mg
from lava.magma.resources import Loihi1NeuroCore
from lava.proc import LIF
from lava.magma.ncmodel import AbstractNcProcessModel, LavaType, InPort, OutPort, Var

@mg.implements(proc=LIF)
@mg.requires(Loihi1NeuroCore)
class NcProcessModel(AbstractNcProcessModel):
    # Declare port implementation
    a_in: InPort =   LavaType(InPort, precision=16)
    s_out: OutPort = LavaType(OutPort, precision=1)
    # Declare variable implementation
    u: Var =         LavaType(Var, precision=24)
    v: Var =         LavaType(Var, precision=24)
    b: Var =         LavaType(Var, precision=12)
    du: Var =        LavaType(Var, precision=12)
    dv: Var =        LavaType(Var, precision=12)
    vth: Var =       LavaType(Var, precision=8)

    def allocate(self, net: mg.Net):
        """Allocates neural resources in 'virtual' neuro core."""
        num_neurons = self.in_args['shape'][0]
        # Allocate output axons
        out_ax = net.out_ax.alloc(size=num_neurons)
        net.connect(self.s_out, out_ax)
        # Allocate compartments
        cx_cfg = net.cx_cfg.alloc(size=1,
                                  du=self.du,
                                  dv=self.dv,
                                  vth=self.vth)
        cx = net.cx.alloc(size=num_neurons,
                          u=self.u,
                          v=self.v,
                          b_mant=self.b,
                          cfg=cx_cfg)
        cx.connect(out_ax)
        # Allocate dendritic accumulators
        da = net.da.alloc(size=num_neurons)
        da.connect(cx)
        net.connect(self.a_in, da)

Stay in touch

To receive regular updates on the latest developments and releases of the Lava Software Framework please subscribe to our newsletter.

Comments
  • [FYI] Lava on conda-forge

    Dear Lava developers,

    You might have heard of conda, the package manager and virtual environment tool. It is becoming increasingly popular for installing a variety of software. conda-forge is the community-driven collection of recipes (which result in packages).

    This is not an issue, but instead to let you know that I created recipes for lava (see e.g. https://github.com/conda-forge/lava-feedstock/blob/master/recipe/meta.yaml for an example of such a recipe), lava-optimization and lava-dl. These can now be easily installed via conda install lava lava-optimization lava-dl -c conda-forge.

    If you want, I can create a PR to add conda-forge as additional means of installation - many people love it, as it's very easy to install many packages without having to worry about version conflicts (which often happen with pip).

    If one of the lava developers want to become a maintainer of these recipes, please let me know, I'd be more than happy to add you.

    Feel free to close, as I said it's not really an issue.

    documentation integration 
    opened by Tobias-Fischer 12
  • Flatter module hierarchy

    I would like to suggest a new module hierarchy to

    • expose all user-facing APIs on a high level,
    • make import paths shorter, and
    • group related content closer together.

    We currently have this:

    lava
    - magma
      - compiler
        - builders
        - channels
        - subcompilers
      - core
        - learning
        - model
        - process
        - sync
      - runtime
    - proc
      - conv
      - dense
      - lif
      - ...
    - utils
      - dataloader
      - float2fixed.py
      - profiler.py
      - system.py
      - validator.py
      - visualizer.py
      - weightutils.py
    

    This is not fully thought through but maybe something like this:

    lava
    - core
      - compiler
        - builders
        - channels
        - subcompilers
      - learning
      - process
      - model
      - sync
    - neurons
      - lif
      - scif
      - sdn
      - rf
      - ...
    - connections
      - conv
      - dense
    - sources
      - dataloader
    - sinks
      - monitor
    - learning
      - learning_dense
      - rules
        - stdp
    - datasets (utils/dataloader)
      - mnist
    - profiler
    - fixed_point (utils/float2fixed plus future developments)
    
    0-needs-review 
    opened by mathisrichter 8
  • Add rf and rf_iz neurons to lava

    Issue Number:

    Objective of pull request: Add rf and rf_iz neurons to lava

    Pull request checklist

    Your PR fulfills the following requirements:

    • [x] Issue created that explains the change and why it's needed
    • [x] Tests are part of the PR (for bug fixes / features)
    • [ ] Docs reviewed and added / updated if needed (for bug fixes / features)
    • [ ] PR conforms to Coding Conventions
    • [x] PR applies BSD 3-clause or LGPL2.1+ Licenses to all code files
    • [x] Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
    • [x] Build tests (pytest) pass locally

    Pull request type

    Please check your PR type:

    • [ ] Bugfix
    • [x] Feature
    • [ ] Code style update (formatting, renaming)
    • [ ] Refactoring (no functional changes, no api changes)
    • [ ] Build related changes
    • [ ] Documentation changes
    • [ ] Other (please describe):

    What is the current behavior?

    No support for rf neurons in lava.

    What is the new behavior?

    There should be floating-point and Loihi bit-accurate implementations of rf neurons in lava. These implementations should match the lava-dl rf neuron implementations.

    Does this introduce a breaking change?

    • [ ] Yes
    • [x] No

    Supplemental information

    This issue is a result of an ongoing discussion in which I ask whether or not rf neurons will be added to lava.

    Please see proof of concept example here

    1-feature 
    opened by Michaeljurado42 8
  • Three Factor Learning

    Issue Number:

    Objective of pull request: Implements the necessary changes to Lava to support three-factor learning rules. Contains a tutorial that implements a reward-modulated STDP learning rule. This PR also proposes structural changes to the Dense and LIF processes and process models to include base classes specific to learning.

    Pull request checklist

    Your PR fulfills the following requirements:

    • [ ] Issue created that explains the change and why it's needed
    • [ ] Tests are part of the PR (for bug fixes / features)
    • [ ] Docs reviewed and added / updated if needed (for bug fixes / features)
    • [ ] PR conforms to Coding Conventions
    • [ ] PR applies BSD 3-clause or LGPL2.1+ Licenses to all code files
    • [ ] Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
    • [ ] Build tests (pytest) pass locally

    Pull request type

    Please check your PR type:

    • [ ] Bugfix
    • [x] Feature
    • [ ] Code style update (formatting, renaming)
    • [ ] Refactoring (no functional changes, no api changes)
    • [ ] Build related changes
    • [ ] Documentation changes
    • [ ] Other (please describe):

    What is the current behavior?

    What is the new behavior?

    Does this introduce a breaking change?

    • [ ] Yes
    • [ ] No

    Supplemental information

    1-feature 
    opened by bala-git9 7
  • "pyb -E unit" doesn't work

    Objective of issue: Successfully run the "pyb -E unit" line.

    When I run this line, I get several errors. How do I fix this?

    Related code: Here is the output I get when I run this line:

    PyBuilder version 0.13.3
    Build started at 2021-12-29 13:06:52
    [INFO] Installing or updating plugin "pypi:pybuilder_bandit, module name 'pybuilder_bandit'"
    [INFO] Activated environments: unit
    [INFO] Building lava-nc version 0.2.0
    [INFO] Executing build in c:\users\sbryan\lava-main
    [INFO] Going to execute tasks: analyze, publish
    ... (plugin and dependency installation messages omitted) ...
    [INFO] Running unit tests
    [INFO] Executing unit tests from Python modules in c:\users\sbryan\lava-main\tests\lava
    Traceback (most recent call last):
      File "c:\users\sbryan\lava-main\src\lava\magma\runtime\message_infrastructure\multiprocessing.py", line 30, in run
        mp.Process.run(self)
      File "C:\Users\sbryan.conda\envs\lavaa\lib\multiprocessing\process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "c:\users\sbryan\lava-main\src\lava\magma\runtime\runtime.py", line 38, in target_fn
        actor.start(*args, **kwargs)
      File "c:\users\sbryan\lava-main\src\lava\magma\core\model\py\model.py", line 62, in start
        self.run()
      File "c:\users\sbryan\lava-main\src\lava\magma\core\model\py\model.py", line 161, in run
        raise inst
      File "c:\users\sbryan\lava-main\src\lava\magma\core\model\py\model.py", line 132, in run
        self.run_spk()
      File "c:\users\sbryan\lava-main\tests\lava\magma\runtime\test_exception_handling.py", line 50, in run_spk
        raise AssertionError("All the error info")
    AssertionError: All the error info
    ... (the same AssertionError and a matching TypeError traceback from test_exception_handling.py line 63 repeat several times) ...
    Runtime not started yet.
    c:\users\sbryan\lava-main\src\lava\proc\lif\models.py:129: RuntimeWarning: divide by zero encountered in remainder
      wrapped_curr = np.mod(decayed_curr,
    [Loihi1SimCfg]: Using the first PyProcessModel PyLifModelFloat available for Process Process_128::LIF.
    [INFO] Executed 150 unit tests
    [INFO] All unit tests passed.
    [INFO] Executing flake8 on project sources.
    [INFO] Building distribution in c:\users\sbryan\lava-main\target\dist\lava-nc-0.2.0
    [WARN] Test coverage below 70% for lava.magma.core.run_configs: 64%
    ... (similar coverage warnings for 15 more modules) ...
    [INFO] Overall lava-nc coverage is 75%
    [INFO] Overall lava-nc branch coverage is 59%
    [INFO] Overall lava-nc partial branch coverage is 88%
    [INFO] Building binary distribution in c:\users\sbryan\lava-main\target\dist\lava-nc-0.2.0

    BUILD FAILED - Error while executing setup command ['bdist_dumb']. See c:\users\sbryan\lava-main\target\reports\distutils\bdist_dumb for full details:
    ... (byte-compiling messages omitted) ...
    error: [Errno 2] No such file or directory: 'build\bdist.win-amd64\dumb\users\sbryan\lava-main\.pybuilder\plugins\cpython-3.9.7.final.0\Lib\site-packages\lava\magma\runtime\message_infrastructure\pycache\message_infrastructure_interface.cpython-39.pyc.1910180129760' (site-packages\pybuilder\plugins\python\distutils_plugin.py:394)

    Build finished at 2021-12-29 13:10:38
    Build took 225 seconds (225803 ms)

    Other information: I am using the most current version of Python, running on Windows.

    opened by remotepilotsam 6
  • Can't run tutorial in notebook

    Objective of issue: Tutorial crashes

    Lava version:

    • [x] current main (dec 6)
    • [ ] 0.3.0 (feature release)
    • [ ] 0.2.1 (bug fixes)
    • [ ] 0.2.0 (current version)
    • [ ] 0.1.2

    I'm submitting a ...

    • [x] bug report
    • [ ] feature request
    • [ ] documentation request

    I'm simply running through the cells of the tutorial02_processes notebook one after the other from the GUI. It freezes, and the logs show the following error:

    $ jupyter notebook
    [I 14:59:09.234 NotebookApp] Serving notebooks from local directory: G:\Python\lava\lava
    [I 14:59:09.234 NotebookApp] Jupyter Notebook 6.4.6 is running at:
    [I 14:59:09.234 NotebookApp] http://localhost:8888/
    [I 14:59:09.234 NotebookApp]  or http://127.0.0.1:8888/?token
    [I 14:59:09.234 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
    [C 14:59:09.303 NotebookApp]
    
        To access the notebook, open this file in a browser:
            file:///C:/Users/Matthew%20Einhorn/AppData/Roaming/jupyter/runtime/nbserver-17496-open.html
        Or copy and paste one of these URLs:
            http://localhost:8888/?token
         or http://127.0.0.1:8888/?token
    [W 14:59:34.725 NotebookApp] Notebook tutorials/in_depth/tutorial02_processes.ipynb is not trusted
    [I 14:59:36.725 NotebookApp] Kernel started: dc72d164-3797-45f8-b4b4-ba468a763ba4, name: python3
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\Matthew Einhorn\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
        exitcode = _main(fd, parent_sentinel)
      File "C:\Users\Matthew Einhorn\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 126, in _main
        self = reduction.pickle.load(from_parent)
    AttributeError: Can't get attribute 'PyLifModel' on <module '__main__' (built-in)>
    [I 15:01:36.642 NotebookApp] Saving file at /tutorials/in_depth/tutorial02_processes.ipynb
    
    1-bug area: tutorials os: windows 
    opened by matham 6
  • Support editable install

    Currently, to use lava from a cloned repo, you have to mess with the PYTHONPATH, which is not ideal.

    Typically, this is avoided by supporting an editable install. Then, you just have to do pip install -e . from the root, and things will work as any normally installed package. The problem is that you need to specify setuptools as a dependency in pyproject.toml.
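
    For reference, the relevant part of a pyproject.toml might look roughly like this (a sketch, not the exact file in this PR):

    [build-system]
    requires = ["setuptools>=38.6.0", "wheel"]
    build-backend = "setuptools.build_meta"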

    • Additionally, I updated the docs to show how to use the editable install, rather than messing with the PYTHONPATH.
    • I also updated the instructions for Windows, where python is python, not python3.
    • Also, pip install -U pip didn't work, failing with some kind of permission error, if I remember correctly. So python -m pip install --upgrade pip seemed like the more standard and less failure-prone command. I didn't update the Linux instructions for this, although you probably should, since I didn't test them.
    • On Windows, activate.bat is actually under Scripts, not bin. So I updated that.
    • Unfortunately, the command is still wrong, because if you use a bash shell on Windows, like me, the command is source python3_venv/Scripts/activate without the .bat suffix. If you use the normal cmd/powershell terminal, then source doesn't work, so you just run python3_venv/Scripts/activate.bat. But I couldn't test this on cmd right now, so I didn't change this part.

    On my system, with the changes it worked:

    $ pyb -E unit
    PyBuilder version 0.13.3
    Build started at 2021-11-22 17:53:27
    ------------------------------------------------------------
    [INFO]  Installing or updating plugin "pypi:pybuilder_bandit, module name 'pybuilder_bandit'"
    [INFO]  Processing plugin packages 'pybuilder_bandit' to be installed with {}
    [INFO]  Activated environments: unit
    [INFO]  Building lava-nc version 0.1.0
    [INFO]  Executing build in g:\python\lava
    [INFO]  Going to execute tasks: analyze, publish
    [INFO]  Processing plugin packages 'coverage~=5.2' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'flake8~=3.7' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'pypandoc~=1.4' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'setuptools>=38.6.0' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'sphinx_rtd_theme' to be installed with {}
    [INFO]  Processing plugin packages 'sphinx_tabs' to be installed with {}
    [INFO]  Processing plugin packages 'twine>=1.15.0' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'unittest-xml-reporting~=3.0.4' to be installed with {'upgrade': True}
    [INFO]  Processing plugin packages 'wheel>=0.34.0' to be installed with {'upgrade': True}
    [INFO]  Creating target 'build' VEnv in 'g:\python\lava\target\venv\build\cpython-3.10.0.final.0'
    [INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
    [INFO]  Creating target 'test' VEnv in 'g:\python\lava\target\venv\test\cpython-3.10.0.final.0'
    [INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
    [INFO]  Requested coverage for tasks: pybuilder.plugins.python.unittest_plugin:run_unit_tests
    [INFO]  Running unit tests
    [INFO]  Executing unit tests from Python modules in g:\python\lava\tests
    Runtime not started yet.
    g:\python\lava\src\lava\proc\lif\models.py:129: RuntimeWarning: divide by zero encountered in remainder
      wrapped_curr = np.mod(decayed_curr,
    [INFO]  Executed 108 unit tests
    [INFO]  All unit tests passed.
    [INFO]  Executing flake8 on project sources.
    [INFO]  Building distribution in g:\python\lava\target\dist\lava-nc-0.1.0
    [INFO]  Copying scripts to g:\python\lava\target\dist\lava-nc-0.1.0\scripts
    [INFO]  Writing setup.py as g:\python\lava\target\dist\lava-nc-0.1.0\setup.py
    [INFO]  Collecting coverage information for 'pybuilder.plugins.python.unittest_plugin:run_unit_tests'
    [WARN]  ut_coverage_branch_threshold_warn is 0 and branch coverage will not be checked
    [WARN]  ut_coverage_branch_partial_threshold_warn is 0 and partial branch coverage will not be checked
    [INFO]  Running unit tests
    [INFO]  Executing unit tests from Python modules in g:\python\lava\tests
    Runtime not started yet.
    g:\python\lava\src\lava\proc\lif\models.py:129: RuntimeWarning: divide by zero encountered in remainder
      wrapped_curr = np.mod(decayed_curr,
    [INFO]  Executed 108 unit tests
    [INFO]  All unit tests passed.
    [WARN]  Test coverage below 70% for lava:  0%
    [WARN]  Test coverage below 70% for lava.magma:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.c:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.channels:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.nc:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.py:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.run_configs: 50%
    [WARN]  Test coverage below 70% for lava.magma.core:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.model.model: 65%
    [WARN]  Test coverage below 70% for lava.magma.core.model.c.type:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.model.nc.model: 43%
    [WARN]  Test coverage below 70% for lava.magma.core.model.py.model: 39%
    [WARN]  Test coverage below 70% for lava.magma.core.model.py.ports: 68%
    [WARN]  Test coverage below 70% for lava.magma.runtime.runtime_service: 31%
    [WARN]  Test coverage below 70% for lava.magma.runtime:  0%
    [WARN]  Test coverage below 70% for lava.magma.runtime.channels:  0%
    [WARN]  Test coverage below 70% for lava.magma.runtime.node:  0%
    [WARN]  Test coverage below 70% for lava.proc.io:  0%
    [WARN]  Test coverage below 70% for lava.proc.lif.models: 45%
    [WARN]  Test coverage below 70% for lava.proc.sparse:  0%
    [WARN]  Test coverage below 70% for lava.tutorials:  0%
    [WARN]  Test coverage below 70% for lava.utils.float2fixed:  0%
    [WARN]  Test coverage below 70% for lava.utils.profiler:  0%
    [WARN]  Test coverage below 70% for lava.utils.validator:  0%
    [WARN]  Test coverage below 70% for lava.utils.visualizer:  0%
    [WARN]  Test coverage below 70% for lava.utils:  0%
    [WARN]  Test coverage below 70% for lava.utils.dataloader.mnist:  0%
    [WARN]  Test coverage below 70% for lava.utils.dataloader:  0%
    [INFO]  Overall pybuilder.plugins.python.unittest_plugin.run_unit_tests coverage is 75%
    [INFO]  Overall pybuilder.plugins.python.unittest_plugin.run_unit_tests branch coverage is 59%
    [INFO]  Overall pybuilder.plugins.python.unittest_plugin.run_unit_tests partial branch coverage is 89%
    [WARN]  Test coverage below 70% for lava:  0%
    [WARN]  Test coverage below 70% for lava.magma:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.c:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.channels:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.nc:  0%
    [WARN]  Test coverage below 70% for lava.magma.compiler.py:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.run_configs: 50%
    [WARN]  Test coverage below 70% for lava.magma.core:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.model.model: 65%
    [WARN]  Test coverage below 70% for lava.magma.core.model.c.type:  0%
    [WARN]  Test coverage below 70% for lava.magma.core.model.nc.model: 43%
    [WARN]  Test coverage below 70% for lava.magma.core.model.py.model: 39%
    [WARN]  Test coverage below 70% for lava.magma.core.model.py.ports: 68%
    [WARN]  Test coverage below 70% for lava.magma.runtime.runtime_service: 31%
    [WARN]  Test coverage below 70% for lava.magma.runtime:  0%
    [WARN]  Test coverage below 70% for lava.magma.runtime.channels:  0%
    [WARN]  Test coverage below 70% for lava.magma.runtime.node:  0%
    [WARN]  Test coverage below 70% for lava.proc.io:  0%
    [WARN]  Test coverage below 70% for lava.proc.lif.models: 45%
    [WARN]  Test coverage below 70% for lava.proc.sparse:  0%
    [WARN]  Test coverage below 70% for lava.tutorials:  0%
    [WARN]  Test coverage below 70% for lava.utils.float2fixed:  0%
    [WARN]  Test coverage below 70% for lava.utils.profiler:  0%
    [WARN]  Test coverage below 70% for lava.utils.validator:  0%
    [WARN]  Test coverage below 70% for lava.utils.visualizer:  0%
    [WARN]  Test coverage below 70% for lava.utils:  0%
    [WARN]  Test coverage below 70% for lava.utils.dataloader.mnist:  0%
    [WARN]  Test coverage below 70% for lava.utils.dataloader:  0%
    [INFO]  Overall lava-nc coverage is 75%
    [INFO]  Overall lava-nc branch coverage is 59%
    [INFO]  Overall lava-nc partial branch coverage is 89%
    [INFO]  Building binary distribution in g:\python\lava\target\dist\lava-nc-0.1.0
    [INFO]  Running Twine check for generated artifacts
    ------------------------------------------------------------
    BUILD SUCCESSFUL
    ------------------------------------------------------------
    Build Summary
                 Project: lava-nc
                 Version: 0.1.0
          Base directory: g:\python\lava
            Environments: unit
                   Tasks: prepare [17277 ms] compile_sources [0 ms] run_unit_tests [19036 ms] analyze [2712 ms] package [88 ms] run_integration_tests [0 ms] verify [0 ms] coverage [24489 ms] publish [5942 ms]
    Build finished at 2021-11-22 17:54:42
    Build took 75 seconds (75565 ms)
    
    opened by matham 6
  • Fix intermittent failure of test_explicit_Ref_Var_port_write

    Objective of issue: Fix intermittent failure of test_explicit_Ref_Var_port_write

    Lava version:

    • [x] 0.2.1 (bug fixes)

    I'm submitting a ...

    • [x] bug report

    Current behavior:

    • Intermittent failure of unit test runs:
    ======================================================================
    FAIL: test_explicit_Ref_Var_port_write (tests.lava.magma.runtime.test_ref_var_ports.TestRefVarPorts)
    Tests the connection of a RefPort to an explicitly created VarPort.
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/Users/runner/work/lava/lava/tests/lava/magma/runtime/test_ref_var_ports.py", line 120, in test_explicit_Ref_Var_port_write
        self.assertTrue(np.all(recv.var2.get() == np.array([7., 7., 7.])))
    AssertionError: False is not true
    

    Expected behavior:

    • There is no intermittent failure of unit test runs.

    Steps to reproduce:

    • Run unit tests in CI a few times. See: https://github.com/lava-nc/lava/runs/5238112313?check_suite_focus=true#step:5:114
    1-bug 
    opened by mgkwill 5
  • Fixing MNIST dataloader

    tutorials/end_to_end/tutorial01_mnist_digit_classification.ipynb doesn't work if you install lava with a pip install, as mnist.npy is not packaged. I fixed it by completing these to-dos.

    opened by tihbe 5
  • Recurrent connectivity hangs on execution

    When creating a simple recurrent network, for instance a LIF that connects to itself via Dense, execution hangs indefinitely.

    import unittest
    import numpy as np
    
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense
    
    
    class TestRecurrentNetwork(unittest.TestCase):
        def test_running_recurrent_network(self):
            """Tests executing an architecture with a recurrent
            connection."""
            num_steps = 10
            shape = (1,)
        
            bias = np.zeros(shape)
            bias[:] = 5000
            lif = LIF(shape=shape, bias=bias, bias_exp=np.ones(shape))
        
            dense = Dense(weights=np.ones((1, 1)))
        
            lif.s_out.connect(dense.s_in)
            dense.a_out.connect(lif.a_in)
        
            lif.run(condition=RunSteps(num_steps=num_steps),
                    run_cfg=Loihi1SimCfg())
            lif.stop()
        
            self.assertEqual(lif.runtime.current_ts, num_steps)
    
    
    1-bug 
    opened by mathisrichter 5
  • Document minimum Python version

    I was trying to install lava on Python 3.7 and ran into a vague syntax error. I understood that it's due to 3.7 not being supported, but it would be nice if the minimum Python version were documented somewhere.

    Typically it can be done by adding e.g. python_requires = >=3.8 to setup.cfg and it's maybe listed in the setup metadata!?

    1-bug integration 
    opened by matham 5
  • Add var port polling in host phase to flush out any pending reads

    Issue Number:

    Objective of pull request:

    Pull request checklist

    Your PR fulfills the following requirements:

    • [ ] Issue created that explains the change and why it's needed
    • [ ] Tests are part of the PR (for bug fixes / features)
    • [ ] Docs reviewed and added / updated if needed (for bug fixes / features)
    • [ ] PR conforms to Coding Conventions
    • [ ] PR applies BSD 3-clause or LGPL2.1+ Licenses to all code files
    • [ ] Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
    • [ ] Build tests (pytest) pass locally

    Pull request type

    Please check your PR type:

    • [ ] Bugfix
    • [ ] Feature
    • [ ] Code style update (formatting, renaming)
    • [ ] Refactoring (no functional changes, no api changes)
    • [ ] Build related changes
    • [ ] Documentation changes
    • [ ] Other (please describe):

    What is the current behavior?

    What is the new behavior?

    Does this introduce a breaking change?

    • [ ] Yes
    • [ ] No

    Supplemental information

    opened by joyeshmishra 0
  • Visualization of Process graph

    Visualization of Process graph

    • Nice to have
    • For debug, understand network

    User story

    to be determined

    Conditions of satisfaction

    to be determined

    Acceptance tests

    to be determined

    1-feature 0-needs-review epic 
    opened by mathisrichter 0
  • Serializable executable

    Serializable executable

    • Avoids recompilation of networks and thus accelerates time to execution

    User story

    to be determined

    Conditions of satisfaction

    to be determined

    Acceptance tests

    to be determined

    1-feature 0-needs-review 2-important/not-urgent epic 
    opened by mathisrichter 0
  • More intuitive higher-level API, comprehensive basic tutorials and documentation

    More intuitive higher-level API, comprehensive basic tutorials and documentation

    • Steep Lava learning curve. Need to make Lava more accessible.
    • Users cannot effectively solve their problems or contribute if they don’t understand basics. We’ll be stuck in doing everything for them if we don’t improve documentation.

    User story

    to be determined

    Conditions of satisfaction

    to be determined

    Acceptance tests

    to be determined

    documentation 1-feature 0-needs-review 2-important/not-urgent epic 
    opened by mathisrichter 0
  • Explore designs for composing Processes

    Explore designs for composing Processes

    User story

    As an NCL developer, I want to understand the different designs for composing Lava Processes to facilitate a decision on how to implement the feature.

    Conditions of satisfaction

    • Design choices have been explored
    • All viable design choices are documented in a slide deck
    • Designs have been presented to a wider forum
    • A decision has been made for a design

    Tasks

    • [ ] Come up with an idea (4h, MR)
    1-spike 2-important/urgent 0-needs-estimate 
    opened by mathisrichter 0
Releases(v0.6.0)
  • v0.6.0(Dec 14, 2022)

    Lava v0.6.0 Release Notes

    December 14, 2022

    New Features and Improvements

    • Enabled 2 factor learning on Loihi 2 and in Lava simulation with the LearningLIF and LearningLIFFloat processes. (PR #528 & PR #535)
    • Resonate-and-Fire and Resonate-and-Fire Izhikevich neurons are now available in Lava simulation. (PR #378)
    • New tutorial on sigma-delta networks in Lava. (PR #470)
    • Enabled state probes for Loihi 2 and added an in-depth tutorial (lava-loihi extension).

    Bug Fixes and Other Changes

    • RF neurons with variable periods now work. (PR #487)
    • Older CI runs of a PR are now automatically canceled when a newer run is started by a push. (PR #488)
    • Improved the learning API and related tutorials and tests, and fixed a bug in the Loihi STDP implementation. (PR #500)
    • Generalized the pre- and post-run hooks into the runtime service. (PR #521)
    • Improved RSTDP learning tutorial. (PR #536)

    Breaking Changes

    • No breaking changes in this release.

    Known Issues

    • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
    • Channel communication between PyProcessModels is slow.
    • Lava networks throw errors if run is invoked too many times, due to a leak of shared-memory descriptors in the CPython implementation.
    • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
    • Joining and forking of virtual ports is not supported.
    • The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. Probing states on Loihi 2 is currently available using StateProbes (tutorial available in lava-loihi extension).
    • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.

    Thanks to our Contributors

    • Intel Labs Lava Developers
    • @Michaeljurado42 made their first contribution in https://github.com/lava-nc/lava/pull/378

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.5.1...v0.6.0

    Source code(tar.gz)
    Source code(zip)
    lava_nc-0.6.0-py3-none-any.whl(7.25 MB)
    lava_nc-0.6.0.tar.gz(8.51 MB)
  • v0.5.1(Oct 31, 2022)

    Lava v0.5.1 Release Notes

    October 31, 2022

    New Features and Improvements

    • Lava now supports LIF reset models with the CPU backend. (PR #415)
    • Lava now supports three-factor learning rules. This release introduces a base class for plastic neurons as well as differentiation between Loihi2FLearningRule and Loihi3FLearningRule. (PR #400)
    • A new tutorial shows how to implement and use a three-factor learning rule in Lava, with an example of reward-modulated STDP. (PR #400)

    Bug Fixes and Other Changes

    • Fixes a bug in network compilation for branching/forking of C and Nc ProcessModels. (PR #391)
    • Fixes a bug to support connecting multiple CPorts to PyPorts in a single process model. (PR #391)
    • Fixed issues with the uk conditional in the learning engine. (PR #400)
    • Fixed the explicit ordering of subcompilers in the compilation stack (C-first, Nc-second heuristic). (PR #408)
    • Fixed the incorrect use of np.logical_and and np.logical_or discovered in learning-related code in Connection ProcessModels. (PR #412)
    • Fixed a warning in Compiler process model discovery and selection due to importing sub process model classes. (PR #418)
    • Fixed a bug in Compiler to select correct CProcessModel based on tag specified in run config. (PR #421)
    • Disabled overwriting of user set environment variables in systems.Loihi2. (PR #428)
    • Process Model selection now works in the Jupyter Colab environment. (PR #435)
    • Added instructions to download dataset for MNIST tutorial (PR #439)
    • Fixed a bug in run config with respect to initializing pre- and post-execution hooks during multiple runs (PR #440)
    • Added an interface for Lava profiler to enable future implementations on different hardware or chip generations. (PR #444)
    • Updated PyTest and NBConvert dependencies to newer versions in poetry for installation. (PR #447)

    Breaking Changes

    • QUBO related processes and process models have now moved to lava-optimization (PR #449)

    Known Issues

    • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
    • Channel communication between PyProcessModels is slow.
    • Lava networks throw errors if run is invoked too many times, due to a leak of shared-memory descriptors in the CPython implementation.
    • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
    • Joining and forking of virtual ports is not supported.
    • The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. The Monitor Process does not support probing Vars on Loihi NeuroCores.
    • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.

    Thanks to our Contributors

    • Intel Labs Lava Developers
    • @AlessandroPierro made their first contribution in https://github.com/lava-nc/lava/pull/439
    • @michaelbeale-IL made their first contribution in https://github.com/lava-nc/lava/pull/447
    • @bala-git9 made their first contribution in https://github.com/lava-nc/lava/pull/400
    • @a-t-0 made their first contribution in https://github.com/lava-nc/lava/pull/453
    Source code(tar.gz)
    Source code(zip)
    lava_nc-0.5.1-py3-none-any.whl(7.24 MB)
    lava_nc-0.5.1.tar.gz(8.34 MB)
  • v0.5.0(Sep 29, 2022)

    The release of Lava v0.5.0 includes major updates to the Lava Deep Learning (Lava-DL) and Lava Optimization (Lava-Optim) libraries and offers the first update to the core Lava framework following the first release of the Lava extension for Loihi in July 2022.

    • Lava offers a new learning API on CPU based on the Loihi on-chip learning engine. In addition, various functional and performance issues have been fixed since the last release.
    • Several high-level application tutorials on QUBO (maximum independent set), deep learning (PilotNet, Oxford Radcliffe spike training), 2-factor STDP-based learning, and the design of an E/I network model, as well as comprehensive API reference documentation, make this version more accessible to new and experienced users.

    New Features and Improvements

    • Added support for convolutional neural networks (lava-nc PR #344, lava-loihi PR #343).
      • Added NcL2ModelConv ProcessModel supporting Loihi 2 convolutional connection sharing (lava-loihi PR #343).
      • Added NcL1ModelConvAsSparse ProcessModel supporting convolutional connections implemented as sparse connections (compatible with both Loihi 1 and Loihi 2).
      • Added the ability to represent convolution-inferred connections, i.e., shared connections to and from the Loihi 2 convolution synapse (lava-loihi PR #343).
      • Added a Convolution Manager that handles resource allocation for the Loihi 2 convolution feature (lava-loihi PR #343).
      • Added a convolution connection strategy to partition convolution layers across Loihi 2 neurocores (lava-loihi PR #343).
      • Added support for convolution spike generation (lava-loihi PR #343).
      • Added convolution-specific VarModels (ConvNeuronVarModel and ConvInVarModel) for interacting with Loihi 2 convolution-configured neurons as well as Loihi 2 convolution input from a C process.
      • Added embedded IO processes and C-models to bridge the interaction between Python and Loihi 2 processes in the form of spikes as well as state reads/writes, including convolution-specific support. (lava-nc PR #344, lava-loihi PR #343)
      • Added support for compressed message passing from Python to Loihi 2 using Loihi 2’s embedded processors (lava-nc PR #344, lava-loihi PR #343).
    • Added support for resource cost sharing on Loihi 2 to allow for flexible memory allocation in neurocores (lava-loihi PR #343).
    • Added support for sharing axon instructions for output spike generation from a Loihi 2 neurocore (lava-loihi PR #287).
    • Added support for learning in simulation (CPU) according to Loihi’s learning engine (PR #332); see the sketch after this list:
      • The STDPLoihi class is a 2-factor STDP learning algorithm added to the Lava Process Library, based on the Loihi learning engine.
      • The LoihiLearningRule class provides the ability to create custom learning rules based on the Loihi learning engine.
      • Implemented a LearningDense Process which takes the same arguments as Dense, plus an optional LearningRule argument to enable learning in its ProcessModels.
      • Implemented floating-point and bit-approximate PyLoihi ProcessModels, named PyLearningDenseModelFloat and PyLearningDenseModelBitApproximate, respectively.
      • Also implemented a bit-accurate PyLoihi ProcessModel named PyLearningDenseModelBitAcc.
      • Added a tutorial showing the usage of STDPLoihi and how to create custom learning rules.
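
    As a rough illustration of this API, the following sketch wires a LearningDense connection with an STDPLoihi rule. The import paths and parameter values follow this release's STDP tutorial but should be treated as assumptions to verify against the installed version:

    # Hedged sketch: 2-factor STDP via LearningDense (paths and parameters
    # assumed from the Lava STDP tutorial; verify against current docs).
    import numpy as np
    from lava.proc.learning_rules.stdp_learning_rule import STDPLoihi
    from lava.proc.dense.process import LearningDense

    stdp = STDPLoihi(learning_rate=1,
                     A_plus=1, A_minus=-1,
                     tau_plus=10, tau_minus=10,
                     t_epoch=4)

    # LearningDense takes the same arguments as Dense, plus a learning rule.
    plastic_conn = LearningDense(weights=np.zeros((2, 2)),
                                 learning_rule=stdp)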

    Bug Fixes and Other Changes

    • The fixed-point PyProcessModel of the Dense Process now has the same behavior as the NcProcessModel for Loihi 2 (PR #328)
    • The Dense NcProcModel now correctly represents purely inhibitory weight matrices on Loihi 2 (PR #376).
    • The neuron current overflow behavior of the fixed point LIF model was fixed so that neuron current wraps to opposite side of integer range rather than to 0. (PR #364)

    Breaking Changes

    • Function signatures of node allocate() methods in Net-API have been updated to use explicit arguments. In addition, some function argument names have been changed to abstract away Loihi register details.
    • Removed bit-level parameters and Vars from Dense Process API.

    Known Issues

    • Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
    • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
    • Channel communication between PyProcessModels is slow.
    • The Lava Compiler is still inefficient and needs improvements to performance and memory utilization.
    • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
    • Joining and forking of virtual ports is not supported.
    • The Monitor Process currently supports probing only a single Var per Process implemented via a PyProcessModel, and does not yet support probing Vars mapped to NeuroCores.
    • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.
    • Learning API does not support 3-Factor learning rules yet.

    Thanks to our Contributors

    • Alejandro Garcia Gener (@alexggener)
    • @fangwei123456
    • Julia A (@JuliaA369)
    • Maryam Parsa
    • @Michaeljurado24
    • Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab

    What's Changed

    • Update RELEASE.md by @mgkwill in https://github.com/lava-nc/lava/pull/270
    • Add C Builder Index to the channel name to make it unique in case of … by @joyeshmishra in https://github.com/lava-nc/lava/pull/271
    • Changes in DiGraphBase class to enable recurrence by @srrisbud in https://github.com/lava-nc/lava/pull/273
    • Fix for dangling ports by @bamsumit in https://github.com/lava-nc/lava/pull/274
    • Unique process models in process models discovery by @bamsumit in https://github.com/lava-nc/lava/pull/277
    • Added a compilation order heuristic for compiling C before Nc processes by @srrisbud in https://github.com/lava-nc/lava/pull/275
    • Process module search fix by @bamsumit in https://github.com/lava-nc/lava/pull/285
    • Fix default sync domain not splitting processes according to the Node by @ysingh7 in https://github.com/lava-nc/lava/pull/286
    • Add type to isinstance call by @mgkwill in https://github.com/lava-nc/lava/pull/287
    • Add intel numpy to conda install instructions by @mgkwill in https://github.com/lava-nc/lava/pull/298
    • Modified mapper to handle disconnected lif components connected to sa… by @ysingh7 in https://github.com/lava-nc/lava/pull/294
    • Bump nbconvert from 6.5.0 to 6.5.1 by @dependabot in https://github.com/lava-nc/lava/pull/317
    • Make Pre Post Functions execution on board by @joyeshmishra in https://github.com/lava-nc/lava/pull/323
    • Doc/auto api by @weidel-p in https://github.com/lava-nc/lava/pull/318
    • changed heading to improve rendering website by @weidel-p in https://github.com/lava-nc/lava/pull/338
    • Input compression features for large dimension inputs and infrastructure for convolution feature by @bamsumit in https://github.com/lava-nc/lava/pull/344
    • Stochastic Constraint Integrate and Fire (SCIF) neuron model for constraint satisfaction problems by @srrisbud in https://github.com/lava-nc/lava/pull/335
    • Update tutorials to newest version by @weidel-p in https://github.com/lava-nc/lava/pull/340
    • Transfer dev deps to dev section: Update pyproject.toml by @mgkwill in https://github.com/lava-nc/lava/pull/355
    • Ability to get/set synaptic weights by @srrisbud in https://github.com/lava-nc/lava/pull/359
    • Fixed pt lif precision by @ackurth-nc in https://github.com/lava-nc/lava/pull/330
    • Use poetry 1.1.15 explicitly by @srrisbud in https://github.com/lava-nc/lava/pull/365
    • Dev/learning rc 0.5 by @weidel-p in https://github.com/lava-nc/lava/pull/332
    • SCIF neuron model: Minor fixes by @srrisbud in https://github.com/lava-nc/lava/pull/367
    • Ei network tutorial by @ackurth-nc in https://github.com/lava-nc/lava/pull/309
    • Fix issue 334 by @PhilippPlank in https://github.com/lava-nc/lava/pull/364
    • Add Missing Variables in Conv Model by @SveaMeyer13 in https://github.com/lava-nc/lava/pull/354
    • Weight bit-accuracy of Dense (Python vs. Loihi 2) by @mathisrichter in https://github.com/lava-nc/lava/pull/328
    • Public processes for optimization solver by @phstratmann in https://github.com/lava-nc/lava/pull/374
    • Fixing Dense inhibitory sign_mode by @mathisrichter in https://github.com/lava-nc/lava/pull/376
    • Eliminate design issues in learning-related code by @mathisrichter in https://github.com/lava-nc/lava/pull/371
    • Enable Requesting Pause from Host. by @GaboFGuerra in https://github.com/lava-nc/lava/pull/373
    • Update poetry version in CI by @mgkwill in https://github.com/lava-nc/lava/pull/380
    • Enable exception proc_map, working with dataclasses, etc. by @GaboFGuerra in https://github.com/lava-nc/lava/pull/372
    • Expose noise amplitude for SCIF by @GaboFGuerra in https://github.com/lava-nc/lava/pull/383
    • Update ReadGate API according to NC model. by @GaboFGuerra in https://github.com/lava-nc/lava/pull/384
    • Add output messages for ReadGate's send_req_pause port. by @GaboFGuerra in https://github.com/lava-nc/lava/pull/385
    • Version 0.5.0 by @mgkwill in https://github.com/lava-nc/lava/pull/375

    New Contributors

    • @dependabot made their first contribution in https://github.com/lava-nc/lava/pull/317
    • @weidel-p made their first contribution in https://github.com/lava-nc/lava/pull/318
    • @ackurth-nc made their first contribution in https://github.com/lava-nc/lava/pull/330
    • @SveaMeyer13 made their first contribution in https://github.com/lava-nc/lava/pull/354
    • @phstratmann made their first contribution in https://github.com/lava-nc/lava/pull/374
    • @GaboFGuerra made their first contribution in https://github.com/lava-nc/lava/pull/373

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.4.0...v0.5.0

    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.5.0.tar.gz(7.17 MB)
    lava_nc-0.5.0-py3-none-any.whl(7.24 MB)
  • v0.4.0(Jul 13, 2022)

    The release of Lava v0.4.0 brings initial support to compile and run models on Loihi 2 via Intel’s cloud hosted Oheo Gulch and Kapoho Point systems. In addition, new tutorials and documentation explain how to build Lava Processes written in Python or C for CPU and Loihi backends.

    While this release offers few high-level application examples, Lava v0.4.0 provides major enhancements to the overall Lava architecture. It forms the basis for the open-source community to enable the full Loihi feature set, such as on-chip learning, convolutional connectivity, or accelerated spike IO. The Lava Compiler and Runtime architecture has also been generalized allowing extension to other backends or neuromorphic processors. Subsequent releases will improve compiler performance and provide more in-depth documentation as well as several high-level coding examples for Loihi, such as real-world applications spanning multiple chips.

    The public Lava GitHub repository (https://github.com/lava-nc/lava) continues to provide all the features necessary to run Lava applications on a CPU backend. In addition, it now also includes enhancements to enable Intel Loihi support. To run Lava applications on Loihi, users need to install the proprietary Lava extension for Loihi. This extension contains the Loihi-compatible Compiler and Runtime features as well as additional tutorials. While this extension is currently released as a tar file, it will be made available as a private GitHub repo in the future. Please help us fix any problems you encounter with the release by filing an issue on Github for the public code or sending a ticket to the team for the Lava extension for Loihi.

    New Features and Improvements

    Features marked with * are available as part of the Loihi 2 extension available to INRC members.

    • *Extended Process library including new ProcessModels and additional improvements:
      • LIF, Sigma-Delta, and Dense Processes execute on Loihi NeuroCores.
      • Prototype Convolutional Process added.
      • Spikes can be sent to and received from NeuroCores via embedded processes that can be programmed in C, with examples included.
      • All Lava Processes now list all constructor arguments explicitly with type annotations.
    • *Added high-level API to develop custom ProcessModels that use Loihi 2 features:
      • Loihi NeuroCores can be programmed in Python by allocating neural network resources like Axons, Synapses or Neurons. In particular, Loihi 2 NeuroCore Neurons can be configured by writing highly flexible assembly programs.
      • Loihi embedded processors can be programmed in C. Unlike the prior NxSDK, no knowledge of low-level register details is required anymore; instead, the C API mirrors the high-level Python API to interact with other processes via channels.
    • Compiler and Runtime support for Loihi 2:
      • General redesign of the Compiler and Runtime architecture to support compilation of Processes that execute across a heterogeneous backend of different compute resources. CPU and Loihi are supported via separate sub compilers.
      • *The Loihi NeuroCore sub compiler automatically distributes neural network resources across multiple cores.
      • *The Runtime supports direct channel-based communication between Processes running on Loihi NeuroCores, embedded CPUs or host CPUs written in Python or C. Of all combinations, only Python<->C and C<->NeuroCore are currently supported.
      • *Added support to access Process Variables on Loihi NeuroCores at runtime via Var.set() and Var.get(); see the sketch after this list.
    • New tutorials and improved class and method docstrings explain how new Lava features, such as *NeuroCore and *embedded processor programming, can be used.
    • An extended suite of unit tests and new *integration tests validate the correctness of the Lava framework.
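
    As a rough sketch of the Var.get()/Var.set() pattern (shown here on the CPU backend; the bias_mant parameter name and values are illustrative):

    # Hedged sketch: reading and overwriting a Process Var between runs.
    import numpy as np
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg
    from lava.proc.lif.process import LIF

    lif = LIF(shape=(3,), bias_mant=np.full((3,), 100))
    lif.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())

    voltages = lif.v.get()   # read the membrane voltages after 10 steps
    lif.v.set(np.zeros(3))   # overwrite them in place before continuing

    lif.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
    lif.stop()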

    Bug Fixes and Other Changes

    • Support for virtual ports on multiple incoming connections (Python Processes only) (Issue #223, PR #224)
    • Added conda install instructions (PR #225)
    • Var.set/get() works when RunContinuous RunMode is used (Issue #255, PR #256)
    • Successful execution of tutorials now covered by unit tests (Issue #243, PR #244)
    • Fixed PYTHONPATH in tutorial_01 (Issue #45, PR #239)
    • Fixed output of tutorial_07 (Issue #249, PR #253)

    Breaking Changes

    • Process constructors for standard library processes now require explicit keyword/value pairs and no longer accept arbitrary input arguments via **kwargs. This might break some workloads (see the sketch after this list).
    • use_graded_spike kwarg has been changed to num_message_bits for all the built-in processes.
    • shape kwarg has been removed from Dense process. It is automatically inferred from the weight parameter’s shape.
    • Conv Process has additional arguments weight_exp and num_weight_bits that are relevant for fixed-point implementations.
    • The sign_mode argument in the Dense Process is now an enum rather than an integer.
    • New parameters u and v in the LIF Process enable setting initial values for current and voltage.
    • The bias parameter in the LIF Process has been renamed to bias_mant.
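
    A minimal sketch of the renamed constructor arguments described above (values are illustrative only):

    # Hedged sketch of the v0.4.0-style constructor calls.
    import numpy as np
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense

    # `shape` is no longer passed to Dense; it is inferred from `weights`.
    dense = Dense(weights=np.eye(4))

    # `bias` was renamed to `bias_mant`; initial current u and voltage v
    # can now be set explicitly.
    lif = LIF(shape=(4,), u=0, v=0, bias_mant=np.zeros(4))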

    Known Issues

    • Lava does not currently support on-chip learning, Loihi 1, or a variety of connectivity compression features such as convolutional encoding.
    • All Processes in a network must currently be connected via channels. Running unconnected Processes using NcProcessModels in parallel currently gives incorrect results.
    • Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
    • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
    • If InputAxons are duplicated across multiple cores and users inject spikes based on the declared port size, the current implementation leads to buffer overflows and memory corruption.
    • Channel communication between PyProcessModels is slow.
    • The Lava Compiler is still inefficient and needs improvements to performance and memory utilization.
    • Virtual ports are only supported between Processes using PyProcModels, not when CProcModels or NcProcModels are involved. In addition, VirtualPorts do not support concatenation yet.
    • Joining and forking of virtual ports is not supported.
    • The Monitor Process currently supports probing only a single Var per Process implemented via a PyProcessModel, and does not yet support probing Vars mapped to NeuroCores.
    • Despite new docstrings, type annotations, and parameter descriptions for most of the public user-facing API, some parts of the code still have limited documentation and missing type annotations.

    What's Changed

    • Virtual ports on multiple incoming connections by @mathisrichter in https://github.com/lava-nc/lava/pull/224
    • Add conda install to README by @Tobias-Fischer in https://github.com/lava-nc/lava/pull/225
    • PYTHONPATH fix in tutorial by @jlubo in https://github.com/lava-nc/lava/pull/239
    • Fix tutorial04_execution.ipynb by @mgkwill in https://github.com/lava-nc/lava/pull/241
    • Tutorial tests by @mgkwill in https://github.com/lava-nc/lava/pull/244
    • Update README.md remove vlab instructions by @mgkwill in https://github.com/lava-nc/lava/pull/248
    • Tutorial bug fix by @PhilippPlank in https://github.com/lava-nc/lava/pull/253
    • Fix get set var by @PhilippPlank in https://github.com/lava-nc/lava/pull/256
    • Update runtime_service.py by @PhilippPlank in https://github.com/lava-nc/lava/pull/258
    • Release/v0.4.0 by @mgkwill in https://github.com/lava-nc/lava/pull/265

    Thanks to our Contributors

    • Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab

    Open-source community:

    New Contributors

    • @jlubo made their first contribution in https://github.com/lava-nc/lava/pull/239

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.3.0...v0.4.0

    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.4.0.tar.gz(130.50 KB)
    lava_nc-0.4.0-py3-none-any.whl(166.75 KB)
  • v0.3.0(Mar 9, 2022)

    Lava 0.3.0 includes bug fixes, updated documentation, improved error handling, refactoring of the Lava Runtime, and support for sigma-delta neuron encoding and decoding.

    New Features and Improvements

    • Added sigma delta neuron encoding and decoding support (PR #180, Issue #179)
    • Implementation of ReadVar and ResetVar IO process (PR #156, Issue #155)
    • Added Runtime handling of exceptions occurring in ProcessModels; the Runtime now returns exception stack traces (PR #135, Issue #83)
    • Virtual ports for reshaping and transposing (permuting) are now supported (PR #187, Issue #185, PR #195, Issue #194); see the sketch after this list
    • A Ternary-LIF neuron model was added to the process library. This new variant supports both positive and negative thresholds for processing signed signals (PR #151, Issue #150)
    • Refactored the Runtime to reduce the number of channels used for communication (PR #157, Issue #86)
    • Refactored the Runtime to follow a state-machine model, refactored ProcessModels to use the command design pattern, and implemented PAUSE and RUN CONTINUOUS (PR #180, Issue #86, Issue #52)
    • Refactored builder to its own package (PR #170, Issue #169)
    • Refactored PyPorts implementation to fix incomplete PyPort hierarchy (PR #131, Issue #84)
    • Added improvements to the MNIST tutorial (PR #147, Issue #146)
    • A standardized template is now in use on new Pull Requests and Issues (PR #140)
    • Support added for editable install (PR #93, Issue #19)
    • Improved runtime documentation (PR #167)
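
    As a rough sketch of the virtual-port feature above, the snippet below flattens a 2x3 spike output so it can feed a connection that expects a flat vector. Method and argument names (reshape, new_shape) are assumptions based on this release and should be verified:

    # Hedged sketch: connecting ports of mismatched shape via a virtual
    # reshape port (names assumed; verify against the installed version).
    import numpy as np
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense

    lif = LIF(shape=(2, 3))           # a 2x3 grid of neurons
    dense = Dense(weights=np.eye(6))  # expects a flat vector of length 6

    # Flatten the 2x3 spike output into a 6-element vector on the fly.
    lif.s_out.reshape(new_shape=(6,)).connect(dense.s_in)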

    Bug Fixes and Other Changes

    Breaking Changes

    • No breaking changes in this release

    Known Issues

    • No support for Intel Loihi
    • Process communication via CSP channels, implemented with Python multiprocessing, needs improvement to reduce the inter-process communication overhead and approach the native execution speed of similar implementations without CSP channel overhead
    • Virtual ports for concatenation are not supported
    • Joining and forking of virtual ports is not supported
    • A Monitor process cannot monitor more than one Var/InPort of a process; as a result, multi-var probing with a single Monitor process is not supported
    • Limited API documentation

    What's Changed

    • Fixing multiple small issues of the Monitor proc by @elvinhajizada in https://github.com/lava-nc/lava/pull/128
    • GitHub Issue/Pull request template by @mgkwill in https://github.com/lava-nc/lava/pull/140
    • Fixing MNIST dataloader by @tihbe in https://github.com/lava-nc/lava/pull/133
    • Runtime error handling by @PhilippPlank in https://github.com/lava-nc/lava/pull/135
    • Reduced the number of channels between service and process (#1) by @ysingh7 in https://github.com/lava-nc/lava/pull/157
    • TernaryLIF and refactoring of LIF to inherit from AbstractLIF by @srrisbud in https://github.com/lava-nc/lava/pull/151
    • Proc_params for communicating arbitrary object between process and process model by @bamsumit in https://github.com/lava-nc/lava/pull/162
    • Support editable install by @matham in https://github.com/lava-nc/lava/pull/93
    • Implementation of ReadVar and ResetVar IO process and bugfixes for LIF, Dense and Conv processes by @bamsumit in https://github.com/lava-nc/lava/pull/156
    • Refactor builder to module by @mgkwill in https://github.com/lava-nc/lava/pull/170
    • Use unittest ci by @mgkwill in https://github.com/lava-nc/lava/pull/173
    • Improve mnist tutorial by @srrisbud in https://github.com/lava-nc/lava/pull/147
    • Multiproc bug by @mgkwill in https://github.com/lava-nc/lava/pull/177
    • Refactoring py/ports by @PhilippPlank in https://github.com/lava-nc/lava/pull/131
    • Adds runtime documentation by @joyeshmishra in https://github.com/lava-nc/lava/pull/167
    • Implementation of Pause and Run Continuous with refactoring of Runtime by @ysingh7 in https://github.com/lava-nc/lava/pull/171
    • Ref port debug by @PhilippPlank in https://github.com/lava-nc/lava/pull/183
    • Sigma delta neuron, encoding and decoding support by @bamsumit in https://github.com/lava-nc/lava/pull/180
    • Add NxSDKRuntimeService by @mgkwill in https://github.com/lava-nc/lava/pull/182
    • Partial implementation of virtual ports for PyProcModels by @mathisrichter in https://github.com/lava-nc/lava/pull/187
    • Remove old runtime_service.py by @mgkwill in https://github.com/lava-nc/lava/pull/192
    • Fixing priority of channel commands in model by @PhilippPlank in https://github.com/lava-nc/lava/pull/190
    • Virtual ports between RefPorts and VarPorts by @mathisrichter in https://github.com/lava-nc/lava/pull/195
    • RefPort's sometimes handled a time step late by @PhilippPlank in https://github.com/lava-nc/lava/pull/205
    • Fixed reset timing offset by @bamsumit in https://github.com/lava-nc/lava/pull/207
    • Update README.md by @mgkwill in https://github.com/lava-nc/lava/pull/202
    • Virtual ports no longer block Process discovery in compiler by @mathisrichter in https://github.com/lava-nc/lava/pull/211
    • Remove pybuilder, Add poetry by @mgkwill in https://github.com/lava-nc/lava/pull/215
    • Added wait() to refvar unittests by @bamsumit in https://github.com/lava-nc/lava/pull/220
    • Update Install Instructions by @mgkwill in https://github.com/lava-nc/lava/pull/218

    Thanks to our Contributors

    Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab

    Open-source community: (Ismael Balafrej, Matt Einhorn)

    New Contributors

    • @tihbe made their first contribution in https://github.com/lava-nc/lava/pull/133
    • @ysingh7 made their first contribution in https://github.com/lava-nc/lava/pull/157
    • @matham made their first contribution in https://github.com/lava-nc/lava/pull/93

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.2.0...v0.3.0

    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.3.0.tar.gz(107.96 KB)
    lava_nc-0.3.0-py3-none-any.whl(135.61 KB)
  • v0.2.0(Nov 29, 2021)

    Lava 0.2.0 includes several improvements to the Lava Runtime. One of them improves the performance of the underlying message passing framework by over 10x on CPU. We also added new floating-point and Loihi fixed-point PyProcessModels for LIF and DENSE Processes as well as a new CONV Process. In addition, Lava now supports remote memory access between Processes via RefPorts which allows Processes to reconfigure other Processes. Finally, we added/updated several new tutorials to address all these new features.

    Features and Improvements

    • Refactored the Runtime and RuntimeService to separate the MessagePassingBackend into its own standalone module. This will allow implementing and comparing the performance of other implementations of channel-based communication, and will also enable true multi-node scaling beyond the capabilities of the Python multiprocessing module (PR #29)
    • Enhanced execution performance by removing busy waits in the Runtime and RuntimeService (Issue #36 & PR #87)
    • Enabled compiler and runtime support for RefPorts, which allow remote memory access between Lava processes so that one process can reconfigure another at runtime. Remote memory access is based on channel-based message passing but can lead to side effects and should therefore be used with caution. See the Remote Memory Access tutorial for how RefPorts can be used (Issue #43 & PR #46).
    • Implemented a first prototype of a Monitor Process. A Monitor provides a user interface to probe Vars and OutPorts of other Processes and records their evolution over time as a time series for post-processing; see the sketch after this list. The current Monitor prototype is limited in that it can only probe a single Var or OutPort per Process (Issue #74 & PR #80). This limitation will be addressed in the next release.
    • Added floating point and Loihi-fixed point PyProcessModels for LIF and connection processes like DENSE and CONV. See issue #40 for more details.
    • Added an in-depth tutorial on connecting processes (PR #105)
    • Added an in-depth tutorial on remote memory access (PR #99)
    • Added an in-depth tutorial on hierarchical Processes and SubProcessModels
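
    As a rough sketch of the Monitor prototype mentioned above (parameter names follow later releases and are assumptions to verify):

    # Hedged sketch: probing a LIF voltage Var with the Monitor prototype.
    import numpy as np
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg
    from lava.proc.lif.process import LIF
    from lava.proc.monitor.process import Monitor

    num_steps = 20
    lif = LIF(shape=(1,), bias_mant=np.array([100]))

    monitor = Monitor()
    monitor.probe(lif.v, num_steps)   # record v for num_steps time steps

    lif.run(condition=RunSteps(num_steps=num_steps),
            run_cfg=Loihi1SimCfg())
    data = monitor.get_data()         # nested dict of recorded time series
    lif.stop()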

    Bug Fixes and Other Changes

    • Fixed a bug in get/set Var to enable get/set of floating-point values (Issue #44)
    • Fixed install instructions (setting PYTHONPATH) (Issue #45)
    • Fixed code example in documentation (Issue #62)
    • Fixed and added missing license information (Issue #41 & Issue #63)
    • Added unit tests for merging and branching In-/OutPorts (PR #106)

    Known Issues

    • No support for Intel Loihi yet.
    • Channel-based Process communication via CSP channels implemented with Python multiprocessing has improved significantly (>30x). However, further improvement is still needed to reduce the overhead of implementing CSP channels in software and to get closer to the native execution speed of similar implementations without CSP channel overhead.
    • Errors from remote system processes like PyProcessModels or the PyRuntimeService are currently not thrown to the user system process. This makes debugging of parallel processes hard. We are working on propagating exceptions thrown in remote processes to the user.
    • Virtual ports for reshaping and concatenation are not supported yet.
    • A single Monitor process cannot monitor more than one Var/InPort of a single process; i.e., multi-var probing with a single Monitor process is not yet supported.
    • Still limited API documentation.
    • Non-blocking execution mode not yet supported. Thus Runtime.pause() and Runtime.wait() do not work yet.

    What's Changed

    • Remove unused channel_utils by @mgkwill in https://github.com/lava-nc/lava/pull/37
    • Refactor Message Infrastructure by @joyeshmishra in https://github.com/lava-nc/lava/pull/29
    • Fixed copyright in BSD-3 LICENSE files by @mathisrichter in https://github.com/lava-nc/lava/pull/42
    • Fixed PYTHONPATH installation instructions after directory restructure of core lava repo by @drager-intel in https://github.com/lava-nc/lava/pull/48
    • Add missing license in utils folder by @Tobias-Fischer in https://github.com/lava-nc/lava/pull/58
    • Add auto Runtime.stop() by @mgkwill in https://github.com/lava-nc/lava/pull/38
    • Enablement of RefPort to Var/VarPort connections by @PhilippPlank in https://github.com/lava-nc/lava/pull/46
    • Support float data type for get/set value of Var by @PhilippPlank in https://github.com/lava-nc/lava/pull/69
    • Disable non-blocking execution by @PhilippPlank in https://github.com/lava-nc/lava/pull/67
    • LIF ProcessModels: Floating and fixed point: PR attempt #2 by @srrisbud in https://github.com/lava-nc/lava/pull/70
    • Fixed bug in README.md example code by @mathisrichter in https://github.com/lava-nc/lava/pull/61
    • PyInPort: probe() implementation by @gkarray in https://github.com/lava-nc/lava/pull/77
    • Performance improvements by @harryliu-intel in https://github.com/lava-nc/lava/pull/87
    • Clean up of explicit namespace declaration by @bamsumit in https://github.com/lava-nc/lava/pull/98
    • Enabling monitoring/probing of Vars and OutPorts of processes with Monitor Process by @elvinhajizada in https://github.com/lava-nc/lava/pull/80
    • Conv Process Implementation by @bamsumit in https://github.com/lava-nc/lava/pull/73
    • Move tutorials to root directory of the repo by @bamsumit in https://github.com/lava-nc/lava/pull/102
    • Tutorial for shared memory access (RefPorts) by @PhilippPlank in https://github.com/lava-nc/lava/pull/99
    • Move tutorial07 by @PhilippPlank in https://github.com/lava-nc/lava/pull/107
    • Added Unit tests for branching/merging of IO ports by @PhilippPlank in https://github.com/lava-nc/lava/pull/106
    • Connection tutorial finished by @PhilippPlank in https://github.com/lava-nc/lava/pull/105
    • Fix for issue #109, Monitor unit test failing non-deterministically by @mathisrichter in https://github.com/lava-nc/lava/pull/110
    • Created floating pt and bit accurate Dense ProcModels + unit tests. Fixes issues #100 and #111. by @drager-intel in https://github.com/lava-nc/lava/pull/112
    • Update test_io_ports.py by @PhilippPlank in https://github.com/lava-nc/lava/pull/113
    • Fix README.md Example Code by @mgkwill in https://github.com/lava-nc/lava/pull/94
    • Added empty list attribute tags to AbstractProcessModel by @srrisbud in https://github.com/lava-nc/lava/pull/96
    • Lava 0.2.0 by @mgkwill in https://github.com/lava-nc/lava/pull/117

    New Contributors

    • @joyeshmishra made their first contribution in https://github.com/lava-nc/lava/pull/29
    • @drager-intel made their first contribution in https://github.com/lava-nc/lava/pull/48
    • @Tobias-Fischer made their first contribution in https://github.com/lava-nc/lava/pull/58
    • @PhilippPlank made their first contribution in https://github.com/lava-nc/lava/pull/46
    • @gkarray made their first contribution in https://github.com/lava-nc/lava/pull/77
    • @harryliu-intel made their first contribution in https://github.com/lava-nc/lava/pull/87
    • @bamsumit made their first contribution in https://github.com/lava-nc/lava/pull/98
    • @elvinhajizada made their first contribution in https://github.com/lava-nc/lava/pull/80

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.1.1...v0.2.0

    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.2.0.tar.gz(70.19 KB)
  • v0.1.1(Nov 12, 2021)

    Minor release, mostly typo fixes and license updates.

    Notes

    • Source directory has moved from lava to src/lava

    What's Changed

    • Fix mnist.ipynb by @mgkwill in https://github.com/lava-nc/lava/pull/6
    • Add missing coverage to build-reqs by @mgkwill in https://github.com/lava-nc/lava/pull/7
    • Removed Intel Confidential header by @mathisrichter in https://github.com/lava-nc/lava/pull/8
    • Fixed typos in the tutorials by @ashishrao7 in https://github.com/lava-nc/lava/pull/14
    • Added @ tags decorator to tag ProcessModels and distinguish them by @srrisbud in https://github.com/lava-nc/lava/pull/22
    • Added basic forking/joining ports by @jlakness-intel in https://github.com/lava-nc/lava/pull/27
    • Fix Packaging Bug by @mgkwill in https://github.com/lava-nc/lava/pull/30

    New Contributors

    • @ashishrao7 made their first contribution in https://github.com/lava-nc/lava/pull/14
    • @srrisbud made their first contribution in https://github.com/lava-nc/lava/pull/22
    • @jlakness-intel made their first contribution in https://github.com/lava-nc/lava/pull/27

    Full Changelog: https://github.com/lava-nc/lava/compare/v0.1.0...v0.1.1

    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.1.1.tar.gz(55.51 KB)
  • v0.1.0(Oct 27, 2021)

    Release 0.1.0

    This first release of Lava introduces its high-level, hardware-agnostic API for developing algorithms of distributed, parallel, and asynchronous processes that communicate with each other via message passing over channels. The API is released together with the Lava compiler and runtime, which together form the Magma layer of the Lava software framework.

    Our initial version of Magma allows you to familiarize yourself with the Lava user interface and to build your first algorithms in Python that can be executed on a CPU without requiring access to physical or cloud-based Loihi resources.

    New Features and Improvements

    • New Lava API to build networks of interacting Lava processes
    • New Lava Compiler to map Lava processes to executable Python code for CPU execution (support for Intel Loihi will follow)
    • New Lava Runtime to execute Lava processes
    • A range of fundamental tutorials illustrating the basic concepts of Lava

    Bug Fixes and Other Changes

    • This is the first release of Lava. No bug fixes or other changes.

    Thanks to our Contributors

    @GaboFGuerra, @joyeshmishra, @PhilippPlank, @drager-intel, @mathisrichter, @srrisbud, @ysingh7, @phstratmann, @mgkwill, @awintel

    Breaking Changes

    • This is the first release of Lava. No breaking or other changes.

    Known Issues

    • No support for Intel Loihi yet
    • Multiprocessing and channel-based communication not very performant yet
    • Virtual ports for reshaping and concatenation are not supported yet
    • No support for direct memory access via RefPorts yet
    • Connectivity from one to many or from many to one port not supported yet
    • No support for live state monitoring yet
    • Still limited API documentation
    Source code(tar.gz)
    Source code(zip)
    lava-nc-0.1.0.tar.gz(54.59 KB)