neon_course

This repository contains several jupyter notebooks to help users learn to use neon, our deep learning framework. For more information, see our documentation and our API.

Note: this version of the neon course is synchronized to work with neon v1.8.1, and some notebooks require installation of the aeon dataloader. For install instructions, see the neon and aeon documentation. See neon_course v1.2 for a version of this repository that works with neon version 1.2.

The jupyter notebooks in this repository include:

01 MNIST example

Comprehensive walk-through of how to use neon to build a simple model to recognize handwritten digits. Recommended as an introduction to the neon framework.
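
For orientation, the core pattern from this notebook looks roughly like the sketch below; the layer sizes and hyperparameters here are illustrative rather than the notebook's exact values.

from neon.backends import gen_backend
from neon.data import MNIST
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti, Misclassification
from neon.callbacks.callbacks import Callbacks

be = gen_backend(backend='cpu', batch_size=128)      # use 'gpu' if one is available
mnist = MNIST(path='data/')                          # downloads the dataset if needed
train_set, valid_set = mnist.train_iter, mnist.valid_iter

init = Gaussian(loc=0.0, scale=0.01)
layers = [Affine(nout=100, init=init, activation=Rectlin()),
          Affine(nout=10, init=init, activation=Softmax())]
model = Model(layers=layers)

cost = GeneralizedCost(costfunc=CrossEntropyMulti())
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
callbacks = Callbacks(model, eval_set=valid_set)

model.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
print("Misclassification error = %.1f%%"
      % (model.eval(valid_set, metric=Misclassification()) * 100))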

02 Fine-tuning

A popular application of deep learning is to load a pre-trained model and fine-tune on a new dataset that may have a different number of categories. This example walks through how to load a VGG model that has been pre-trained on ImageNet, a large corpus of natural images belonging to 1000 categories, and re-train the final few layers on the CIFAR-10 dataset, which has only 10 categories.
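
The core idea is sketched below, assuming the pre-trained VGG-D weights have already been downloaded; the file name VGG_D.p and the abbreviated layer stack are illustrative, and the notebook spells out the full configuration.

from neon.initializers import Gaussian, Xavier
from neon.layers import Affine, Conv, Pooling
from neon.models import Model
from neon.transforms import Rectlin, Softmax
from neon.util.persist import load_obj

relu = Rectlin()
init = Xavier(local=True)

# (abbreviated) VGG-style convolutional stack -- the notebook builds the full VGG-D configuration
layers = [Conv((3, 3, 64), init=init, activation=relu, padding=1),
          Conv((3, 3, 64), init=init, activation=relu, padding=1),
          Pooling(2, strides=2)]
# ... remaining convolutional blocks ...

# new classification head sized for CIFAR-10, trained from scratch
layers += [Affine(nout=10, init=Gaussian(scale=0.005), activation=Softmax(), name='class_layer')]
model = Model(layers=layers)

# copy the pre-trained ImageNet weights into the matching layers, stopping at the new head
trained_vgg = load_obj('VGG_D.p')                    # assumed path to the downloaded weights
pretrained = trained_vgg['model']['config']['layers']
for layer, params in zip(model.layers.layers, pretrained):
    if layer.name == 'class_layer':
        break
    layer.load_weights(params, load_states=True)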

03 Writing a custom dataset object

neon provides many built-in methods for loading data from images, videos, audio, text, and more. In the rare case where you need to implement a custom dataset object, this notebook guides users through building one for a modified version of the Street View House Numbers (SVHN) dataset. Users will not only write a custom dataset object, but also design a network that, given an image, draws a bounding box around the digit sequence.
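
As a rough outline of what the notebook builds (the class and variable names here are illustrative), a custom dataset subclasses NervanaDataIterator and yields device-resident minibatches:

import numpy as np
from neon.data import NervanaDataIterator

class CustomDataset(NervanaDataIterator):            # illustrative name
    def __init__(self, X, Y, lshape):
        # X: (num_examples, num_features) numpy array, Y: (num_examples, num_targets)
        super(CustomDataset, self).__init__()
        self.X, self.Y = X, Y
        self.shape = lshape                          # e.g. (3, 64, 64) for C x H x W images
        self.ndata = X.shape[0]
        self.nbatches = self.ndata // self.be.bsz
        # device buffers holding one minibatch, laid out as (features, batch_size)
        self.dev_X = self.be.iobuf(X.shape[1])
        self.dev_Y = self.be.iobuf(Y.shape[1])

    def reset(self):
        pass

    def __iter__(self):
        for i in range(self.nbatches):
            sl = slice(i * self.be.bsz, (i + 1) * self.be.bsz)
            # copy the host minibatch to the device, transposing to (features, batch)
            self.dev_X[:] = np.ascontiguousarray(self.X[sl].T)
            self.dev_Y[:] = np.ascontiguousarray(self.Y[sl].T)
            yield self.dev_X, self.dev_Y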

04 Writing a custom activation function and a custom layer

This notebook walks developers through implementing custom activation functions and layers within neon. We implement the Affine layer and demonstrate the speed difference between a Python-based computation and our heavily optimized kernels.
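
For example, an activation function in neon is a Transform subclass that defines the forward computation and its derivative using backend operations; a minimal, illustrative leaky-ReLU-style transform might look like:

from neon.transforms.transform import Transform

class MyRectlin(Transform):                          # illustrative custom activation
    """Rectified linear unit with a configurable slope for negative inputs."""
    def __init__(self, slope=0.01, name=None):
        super(MyRectlin, self).__init__(name)
        self.slope = slope

    def __call__(self, x):
        # forward pass: max(x, 0) plus a small slope on the negative side
        return self.be.maximum(x, 0) + self.slope * self.be.minimum(x, 0)

    def bprop(self, x):
        # derivative: 1 for positive inputs, slope for negative inputs
        return self.be.greater(x, 0) + self.slope * self.be.less(x, 0)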

05 Defining complex branching models

When a simple sequential list of layers does not suffice for your model, this notebook shows how to build complex branching models within neon.
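
Branching in neon is expressed with a BranchNode placed in the main layer path and a Tree container that groups the branches; a minimal two-output sketch (layer sizes illustrative) looks like:

from neon.initializers import Gaussian
from neon.layers import Affine, BranchNode, Tree
from neon.models import Model
from neon.transforms import Rectlin, Softmax

init = Gaussian(scale=0.01)
branch = BranchNode(name='branch')

# main path: shared trunk up to the branch node, then a 10-way classifier
path1 = [Affine(nout=100, init=init, activation=Rectlin()),
         branch,
         Affine(nout=10, init=init, activation=Softmax())]

# side path: splits off at the branch node into a separate 4-way classifier
path2 = [branch,
         Affine(nout=4, init=init, activation=Softmax())]

# alphas weight each branch's contribution to the total cost
model = Model(layers=Tree([path1, path2], alphas=[1.0, 1.0]))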

06 Deep Residual network on the CIFAR-10 dataset

In neon, models are constructed as Python lists, which makes it easy to use for-loops to define complex models with repeated patterns, such as deep residual networks. This notebook is an end-to-end walkthrough of building a deep residual network, training it on the CIFAR-10 dataset, and applying the trained model to predict categories for novel images.
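
Because a model is just a list, a residual stage can be produced by a small helper function and repeated in a loop; the sketch below is simplified relative to the notebook (feature-map counts and depths are illustrative) and uses MergeSum for the skip connection:

from neon.initializers import Kaiming
from neon.layers import Activation, Conv, MergeSum, SkipNode
from neon.transforms import Rectlin

init = Kaiming()

def residual_module(nfm):
    # main path: two 3x3 convolutions; side path: identity skip connection
    mainpath = [Conv((3, 3, nfm), init=init, batch_norm=True, activation=Rectlin(), padding=1),
                Conv((3, 3, nfm), init=init, batch_norm=True, padding=1)]
    sidepath = [SkipNode()]
    return [MergeSum([mainpath, sidepath]), Activation(Rectlin())]

# an initial convolution followed by a stack of repeated residual modules
layers = [Conv((3, 3, 16), init=init, batch_norm=True, activation=Rectlin(), padding=1)]
for _ in range(3):
    layers += residual_module(16)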

07 Writing a custom callback

Callbacks allow a model to report its progress back to the user during training. In this notebook, we present a callback that plots the training cost in real time within the jupyter notebook.
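
A custom callback subclasses Callback and overrides the hooks it needs; the notebook plots the cost live, but the simpler, illustrative sketch below just prints the accumulated training cost at the end of each epoch:

from neon.callbacks.callbacks import Callback, Callbacks

class PrintCostCallback(Callback):                   # illustrative custom callback
    def __init__(self, epoch_freq=1):
        super(PrintCostCallback, self).__init__(epoch_freq=epoch_freq)

    def on_epoch_end(self, callback_data, model, epoch):
        # model.total_cost accumulates the training cost over the epoch
        cost = float(model.total_cost.get())
        print("epoch %d: training cost %.4f" % (epoch, cost))

# registered alongside the default callbacks before calling model.fit(...):
# callbacks = Callbacks(model, eval_set=valid_set)
# callbacks.add_callback(PrintCostCallback())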

08 Detecting overfitting

Overfitting is often encountered when training deep learning models. This tutorial demonstrates how to use our visualization tools to detect when a model has overfit on the training data, and how to apply Dropout layers to correct the problem.
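
For reference, adding this regularization is a matter of inserting Dropout layers into the layer list; the sizes and keep probability below are illustrative:

from neon.initializers import Gaussian
from neon.layers import Affine, Dropout
from neon.transforms import Rectlin, Softmax

init = Gaussian(scale=0.01)
layers = [Affine(nout=200, init=init, activation=Rectlin()),
          Dropout(keep=0.5),      # randomly keep 50% of the activations during training
          Affine(nout=10, init=init, activation=Softmax())]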

For several of the guided exercises, answer keys are provided in the answers/ folder.

09 Sentiment Analysis with LSTM

These two notebooks guide the user through training a recurrent neural network to classify paragraphs of movie reviews as expressing either positive or negative sentiment. The second notebook demonstrates inference with a trained model, including a section where users can write their own reviews and submit them to the model for classification.
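
The network follows the usual embedding + LSTM + pooling pattern; a sketch of the layer stack follows, where the vocabulary size, embedding width, and hidden size are illustrative rather than the notebook's exact values:

from neon.initializers import GlorotUniform, Uniform
from neon.layers import Affine, Dropout, LookupTable, LSTM, RecurrentSum
from neon.transforms import Logistic, Softmax, Tanh

vocab_size, embedding_dim, hidden_size = 20000, 128, 64   # illustrative hyperparameters
init_glorot = GlorotUniform()

layers = [
    LookupTable(vocab_size=vocab_size, embedding_dim=embedding_dim, init=Uniform(-0.1, 0.1)),
    LSTM(hidden_size, init_glorot, activation=Tanh(),
         gate_activation=Logistic(), reset_cells=True),
    RecurrentSum(),                  # sum the hidden states over time
    Dropout(keep=0.5),
    Affine(2, init_glorot, bias=init_glorot, activation=Softmax())  # positive vs. negative
]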

Setting up notebooks on remote machines

Some of these notebooks require access to a Titan X GPU. For full instructions on launching a notebook server that can be reached from a different machine, see http://jupyter-notebook.readthedocs.io/en/latest/public_server.html. For a simple setup, first generate a configuration file:

$ jupyter notebook --generate-config

In your ~/.jupyter directory, open the notebook configuration file, jupyter_notebook_config.py, and edit the following lines:

c.NotebookApp.ip = '*'

c.NotebookApp.port = 8888

Save your changes and launch the jupyter notebook:

$ jupyter notebook

From a separate machine, open your browser and point it to http://[server address]:8888 to connect to the jupyter notebook.

Nervana Cloud

The Nervana Cloud includes an interactive mode to launch jupyter notebooks on our Titan X GPU servers. If you have cloud credentials, launch an interactive session with the ncloud interact command.

For more information, see: http://doc.cloud.nervanasys.com/docs/latest/interact.html

Comments
  • Config variable missing etl property Error

    Hello,

    I'm working through this tutorial and am running into an error asking for the etl property in the config variable. I was getting a batch_size missing error, but I changed minibatch to batch to get rid of that:

    config = {
        'manifest_filename': 'data/cifar10/train-index.csv',  # CSV manifest of data
        'manifest_root': 'data/cifar10',                      # root data directory
        'image': {'height': 224, 'width': 224,                # output image size
                  'scale': [0.875, 0.875],                    # random scaling of image before cropping
                  'flip_enable': True},                       # randomly flip image
        'type': 'image,label',                                # type of data
        'minibatch_size': be.bsz                              # batch size
    }

    When I run this code the config object is missing an etl property. I looked in the source code of aeon, but the example I used is still throwing an error.

    train_set = AeonDataLoader(config, be)

    Any clarification about how to structure the etl property?

    opened by pantherso48 0
  • get_outputs() with model architecture: with branch

    Hi,

    I was trying to implement and test a model based on the branch tutorial (05 Defining complex branching models).

    After fitting the model, how can we get the outputs of the final layer? I implemented a model with two branches (a single branch node). I get an error in model.get_outputs(). I looked at the source code: the variable x in my code is a list, and it fails at the line (dim0, dim1) = x.shape with the error 'list' object has no attribute 'shape'.

    Is there a way to get_outputs() for a branch model?

    P.S. I also saw this assert statement: assert not isinstance(x, list), "Can not get_outputs with Branch terminal". Does this mean that we cannot use get_outputs() for a branch model?

    opened by shresthamalik 0