
Project 3 - FYS-STK4155

This project investigates the use of an autoencoder for semi-supervised learning and compares its data requirements to those of supervised learning.


The folder p3 contains the source code for our Python package, where the implementations required for the project are defined.
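
The general semi-supervised workflow the project studies can be sketched as follows: pretrain an autoencoder on plentiful unlabeled data, then train a classifier on the encoder's latent features using only a small labeled subset. The snippet below is purely an illustration of that idea, using PyTorch and made-up tensor/class names as stand-ins; it is not the p3 API.

import torch
import torch.nn as nn

# Dummy stand-ins for the real data (e.g. flattened 28x28 images)
unlabeled_x = torch.rand(256, 784)          # many unlabeled samples
labeled_x = torch.rand(32, 784)             # few labeled samples
labeled_y = torch.randint(0, 10, (32,))     # their class labels

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AutoEncoder()
ae_opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
mse = nn.MSELoss()

# 1) Unsupervised pretraining: learn to reconstruct the unlabeled data
for _ in range(5):
    ae_opt.zero_grad()
    loss = mse(ae(unlabeled_x), unlabeled_x)
    loss.backward()
    ae_opt.step()

# 2) Supervised fine-tuning: train a small classifier head on the frozen
#    encoder's latent representation using only the few labeled samples
clf = nn.Linear(32, 10)
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(5):
    clf_opt.zero_grad()
    feats = ae.encoder(labeled_x).detach()  # encoder kept frozen
    loss = ce(clf(feats), labeled_y)
    loss.backward()
    clf_opt.step()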

To keep formatting consistent, a pre-commit configuration has been added so the code follows a common standard across the members of the group. After pre-commit is installed (in a virtual environment or globally), activate it for this repository by running pre-commit install; see the example below. It will then run the configured linters and formatters each time you make a commit.
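
A typical sequence (assuming pre-commit is not already available in your environment) looks like this:

# install pre-commit in the active environment
pip install pre-commit
# activate the hooks defined in .pre-commit-config.yaml for this repository
pre-commit install
# optionally, run all hooks once against the whole codebase
pre-commit run --all-files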

Setup using a virtual environment

cd <path-to-repository>
# Create a virtual environment
python -m venv venv
# Activate it
venv\Scripts\activate.bat  # or on Linux/macOS: . venv/bin/activate
# Install the package and dependencies as an editable package in the venv
pip install -e .[dev,testing]

If you are using conda, something like the following should work:

cd <path-to-repository>
conda create --prefix ./env
conda activate ./env
pip install -e .[dev,testing]
# or maybe
conda install conda-build
conda develop . -n <environment-name>

Running tests and checking coverage

To run the tests in the tests folder we use pytest and coverage, which are installed if the setup is done as described above.

# to run tests:
(venv)$ pytest
# to check coverage:
(venv)$ coverage run -m pytest && coverage report -m

Training

❯ python -m p3 --help
usage: __main__.py [-h] [--dataset DATASET] [--epochs EPOCHS]
                   [--batch-size BATCH_SIZE] [--lr LR]

optional arguments:
  -h, --help            show this help message and exit
  --dataset DATASET     path to dataset root. If dataset is not found at
                        location it will be downloaded to this location.
                        Default: './dataset'
  --epochs EPOCHS       train for number of epochs
  --batch-size BATCH_SIZE
                        number of samples in a batch
  --lr LR               step size during optimization
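
For example, a training run could be started like this (the argument values below are only illustrative, not tuned settings from the project):

(venv)$ python -m p3 --dataset ./dataset --epochs 10 --batch-size 64 --lr 0.001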