# Academic-DeepNeuralNetsFromScratch

A framework that constructs deep neural networks, autoencoders, logistic regressors, and linear networks without the use of any outside machine learning libraries - all from scratch.

## Overview

This project was constructed for the Introduction to Machine Learning course, class 605.649 section 84, at Johns Hopkins University. FranceLab4 is a machine learning toolkit that implements several algorithms for classification and regression tasks. Specifically, the toolkit coordinates a linear network, a logistic regressor, an autoencoder, and a neural network that implements backpropagation; it also leverages data structures built in the preceding labs. FranceLab4 is a software module written in Python 3.7 that facilitates these algorithms.

## Notes for Graders

All files of concern for this project (with the exception of main.py) may be found in the Linear_Network, Logistic_Regression, and Neural_Network folders. I kept most of my files from Projects 1, 2, and 3 because I ended up reusing their cross-validation, encoding, and other helper methods. However, these three folders contain the neural network algorithms of interest.

I have created blocks of code for you to test and run each algorithm if you choose to do so. In __main__.py, scroll to the bottom and find the main function. Simply comment or uncomment blocks of code there to test whichever algorithms you like.

Each neural network and autoencoder is subclassed from the NeuralNet class in neural_net.py. I simply initialize the class differently in order to construct an autoencoder, a feed-forward neural network, or a combination of both.
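As a loose illustration of that design (a minimal sketch; the function and argument names below are my own assumptions, not the actual NeuralNet signature in neural_net.py), the choice of architecture reduces to the layer specification and the training target:

```python
# Minimal sketch, assuming a generic layer-size specification; the real
# constructor is defined in neural_net.py and may differ.

def make_layer_sizes(n_features, n_hidden, n_classes=None):
    """Return layer sizes for either architecture (hypothetical helper)."""
    if n_classes is None:
        # Autoencoder: reconstruct the input through a bottleneck,
        # so the output width mirrors the input width.
        return [n_features, n_hidden, n_features]
    # Feed-forward classifier: map features to class scores.
    return [n_features, n_hidden, n_classes]

print(make_layer_sizes(13, 6))     # [13, 6, 13] -> autoencoder
print(make_layer_sizes(13, 8, 3))  # [13, 8, 3]  -> feed-forward network
```

An autoencoder trains against its own inputs, while the classifier trains against the labels; the class machinery is otherwise shared.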

The data reported in my paper were produced with k-fold cross-validation (KFCV). However, within the main program, you may notice that the number of folds k has been reduced to 2 to make the analysis quicker and the console output easier to follow.
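For context, the generic shape of that procedure looks like the following sketch (a minimal k-fold cross-validation loop in plain NumPy; the helper names are mine, not the project's):

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Split the sample indices into k roughly equal, shuffled folds."""
    indices = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(indices, k)

def cross_validate(train_and_score, data, labels, k=2):
    """Hold each fold out once, train on the rest, and average the scores."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(train_and_score(data[train_idx], labels[train_idx],
                                      data[test_idx], labels[test_idx]))
    return float(np.mean(scores))
```

Restoring a larger number of folds changes only the k argument; the training code itself is untouched.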

  • The construction of a linear network begins on line 84 in __main__.py.
  • The construction of a logistic regressor begins on line 102 in __main__.py.
  • The construction of a standalone autoencoder begins on line 128 in __main__.py.
  • The construction of a standalone feed-forward neural network begins on line 141 in __main__.py.
  • The construction of a combined network - an autoencoder that is trained, its decoder removed, and its encoder attached to a new hidden layer and a prediction layer to form a new neural network - begins on line 221 in __main__.py (sketched below).
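Because that last construction is the most involved, here is a compact, self-contained sketch of the idea in plain NumPy (the toy data and all names are mine; the project's actual implementation lives in the Neural_Network folder):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 20 samples, 8 features, binary labels (illustrative only).
X = rng.normal(size=(20, 8))
y = (X[:, 0] > 0).astype(float)

# Step 1: train a one-hidden-layer autoencoder (8 -> 4 -> 8).
w_enc = rng.normal(0, 0.1, (8, 4))
w_dec = rng.normal(0, 0.1, (4, 8))
for _ in range(500):
    h = sigmoid(X @ w_enc)          # encode
    x_hat = h @ w_dec               # decode (linear output)
    err = x_hat - X                 # reconstruction error
    w_dec -= 0.01 * h.T @ err
    w_enc -= 0.01 * X.T @ ((err @ w_dec.T) * h * (1 - h))

# Step 2: drop the decoder; keep w_enc as the learned feature extractor.
# Step 3: attach a new hidden layer and a prediction layer, then train
# them (here the encoder stays frozen; fine-tuning it is an alternative).
w_hid = rng.normal(0, 0.1, (4, 3))
w_out = rng.normal(0, 0.1, (3, 1))
for _ in range(500):
    h1 = sigmoid(X @ w_enc)         # reused encoder
    h2 = sigmoid(h1 @ w_hid)        # new hidden layer
    p = sigmoid(h2 @ w_out)[:, 0]   # new prediction layer
    g = ((p - y) * p * (1 - p))[:, None]
    w_out -= 0.05 * h2.T @ g
    w_hid -= 0.05 * h1.T @ ((g @ w_out.T) * h2 * (1 - h2))
```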

The code for the weight updates and the forward and backward propagation may be found in the following files within the Neural_Network folder (a minimal sketch of these mechanics follows the list):

  • layer.py
  • optimizer_function.py
  • neural_net.py
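For readers who want the mechanics before opening those files, this is a minimal, self-contained sketch of a dense layer's forward pass, backward pass, and gradient-descent weight update (the class and variable names are mine, not the project's):

```python
import numpy as np

class DenseLayer:
    """One fully connected layer with a sigmoid activation (per-sample)."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                      # cache the input for backprop
        self.out = 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))
        return self.out

    def backward(self, grad_out, lr):
        # Chain rule through the sigmoid: s'(z) = s(z) * (1 - s(z)).
        grad_z = grad_out * self.out * (1.0 - self.out)
        grad_x = grad_z @ self.w.T      # gradient passed to the previous layer
        # Gradient-descent update of the weights and biases.
        self.w -= lr * np.outer(self.x, grad_z)
        self.b -= lr * grad_z
        return grad_x

# One training step on a single sample with squared-error loss.
layer = DenseLayer(4, 2)
x, y = np.ones(4), np.array([1.0, 0.0])
pred = layer.forward(x)
layer.backward(pred - y, lr=0.1)        # pred - y is dL/dpred for 0.5*||pred - y||^2
```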

__main__.py is the driver: it imports the dataset, cleans the data, coordinates KFCV, and initializes each of the neural network algorithms.

## Running FranceLab4

  1. Ensure Python 3.7 is installed on your computer.
  2. Navigate to the Lab4 directory. For example, cd User\Documents\PythonProjects\FranceLab4. Do NOT cd into the Lab4 module.
  3. Run the program as a module: python3 -m Lab4.
  4. Input and output files are located in the io_files subdirectory.

## FranceLab4 Usage

usage: python3 -m Lab4