Understanding Hyperdimensional Computing for Parallel Single-Pass Learning

Authors:

*: Equal Contribution

Introduction

This repo contains an implementation of the group VSA and binary HDC models with random Fourier feature (RFF) encoding described in the paper Understanding Hyperdimensional Computing for Parallel Single-Pass Learning.

Our RFF method and group VSA can outperform state-of-the-art HDC models while maintaining hardware efficiency. For example, on MNIST:

Model             1-Epoch Accuracy    10-Epoch Accuracy    Circuit-Depth Complexity
Percep.           94.3%               94.3%                1299
SOTA HDC          NA                  89.0%                295
RFF HDC           95.4%               95.4%                295
RFF G(2^3)-VSA    96.3%               95.7%                405
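
For intuition, here is a minimal NumPy sketch of the kind of RFF encoding referred to above: project the input with a random Gaussian matrix (its scale playing the role of the -gamma kernel parameter), add a random phase, take the cosine, and binarize. The function and variable names are hypothetical and not taken from this repo.

import numpy as np

def rff_encode_binary(X, dim=10000, gamma=0.3, seed=0):
    # Encode real-valued features X of shape (n, d) into {-1, +1} hypervectors of shape (n, dim).
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, gamma, size=(d, dim))   # random Gaussian projection
    b = rng.uniform(0.0, 2 * np.pi, size=dim)   # random phase offsets
    feats = np.cos(X @ W + b)                   # classic random Fourier features
    return np.sign(feats).astype(np.int8)       # quantize to binary hypervectors

# e.g. H = rff_encode_binary(np.random.rand(5, 784))  ->  H.shape == (5, 10000)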

Dependencies and Data

NumPy and PyTorch >= 1.0.0 are required to run the implementation. Supported datasets include MNIST, Fashion-MNIST, CIFAR-10, ISOLET, and UCI-HAR. We provide the ISOLET and UCI-HAR data in the dataset folder.

Usage

Please create the ./encoded_data folder (e.g., mkdir -p ./encoded_data) before running the commands below.

$ python main.py [-h] [-lr LR] [-gamma GAMMA] [-epoch EPOCH] [-gorder GORDER] [-dim DIM]
                 [-resume] [-data_dir DATA_DIR] [-dataset {mnist,fmnist,cifar,isolet,ucihar}]
                 [-raw_data_dir RAW_DATA_DIR] [-model {rff-hdc,linear-hdc,rff-gvsa}]
optional arguments:
  -h, --help            show this help message and exit
  -lr LR                learning rate for optimizing class representative
  -gamma GAMMA          kernel parameter for computing covariance
  -epoch EPOCH          epochs of training
  -gorder GORDER        order of the cyclic group required for G-VSA
  -dim DIM              dimension of hypervectors
  -resume               resume from existing encoded hypervectors
  -data_dir DATA_DIR    Directory used to save encoded data (hypervectors)
  -dataset {mnist,fmnist,cifar,isolet,ucihar}
                        dataset (mnist | fmnist | cifar | isolet | ucihar)
  -raw_data_dir RAW_DATA_DIR
                        directory containing the raw dataset
  -model {rff-hdc,linear-hdc,rff-gvsa}
                        feature and model to use: (rff-hdc | linear-hdc | rff-gvsa)
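
To relate these flags to the single-pass learning flow: a typical HDC classifier bundles (sums) the encoded hypervectors of each class into a class representative and classifies new samples by similarity to those representatives; -epoch and -lr then control optional refinement passes over the representatives. The sketch below illustrates that general recipe only and is hypothetical, not the repo's actual main.py.

import numpy as np

def train_single_pass(H, y, num_classes):
    # One pass over encoded hypervectors H (n, dim): bundle per class by summation.
    class_reps = np.zeros((num_classes, H.shape[1]))
    for hv, label in zip(H, y):
        class_reps[label] += hv
    return class_reps

def predict(class_reps, H):
    # Classify by cosine similarity to each class representative.
    sims = (H @ class_reps.T) / (
        np.linalg.norm(H, axis=1, keepdims=True) * np.linalg.norm(class_reps, axis=1) + 1e-12)
    return sims.argmax(axis=1)

def refine(class_reps, H, y, lr=0.01, epochs=10):
    # Optional refinement: perceptron-style updates on misclassified samples.
    for _ in range(epochs):
        for hv, label in zip(H, y):
            pred = predict(class_reps, hv[None, :])[0]
            if pred != label:
                class_reps[label] += lr * hv
                class_reps[pred] -= lr * hv
    return class_reps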

For example,

$ python main.py -gamma 0.3 -epoch 10 -gorder 8 -dim 10000 -dataset mnist -model rff-gvsa
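
In the rff-gvsa run above, -gorder 8 selects the cyclic group of order 2^3, so each hypervector coordinate is an element of Z_8 rather than a single bit. The sketch below shows one common way binding and bundling can be defined over such a group (modular addition for binding; bundling by averaging coordinates as phases on the unit circle and snapping back to the nearest group element). It illustrates the general idea only and is not this repo's exact implementation.

import numpy as np

GORDER = 8   # order of the cyclic group, cf. -gorder 8

def bind(a, b):
    # Binding in the cyclic group: coordinatewise addition modulo GORDER.
    return (a + b) % GORDER

def unbind(c, a):
    # Unbinding is binding with the group inverse.
    return (c - a) % GORDER

def bundle(hvs):
    # Bundle hypervectors: average their coordinates as phases on the unit
    # circle, then snap each coordinate back to the nearest group element.
    phases = np.exp(2j * np.pi * np.asarray(hvs) / GORDER)
    angles = np.angle(phases.sum(axis=0)) % (2 * np.pi)
    return np.rint(angles * GORDER / (2 * np.pi)).astype(int) % GORDER

# e.g., with rng = np.random.default_rng():
#   x, y = rng.integers(0, GORDER, 10000), rng.integers(0, GORDER, 10000)
#   assert np.array_equal(unbind(bind(x, y), x), y)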

Citation

If you find this repo useful, please cite:


Owner

Cornell RelaxML (Chris De Sa's Research Group)