Transformer_Huffman_coding - Complete Huffman coding through a transformer

Overview

Transformer_Huffman_coding: complete Huffman coding through a transformer.

2022/2/19 Release Notes

1. Created a new branch.

2. Split the previous main.py file into three:
   main.py  -----> responsible for sequence training
   model.py -----> construction of the transformer model
   utils.py -----> training-data generation functions

3. Removed the function in the original file that automatically generated the Huffman encoding for a given sequence.

4. Imported the huffman package to generate a constant codebook.

5. Formatted the code with Prettier.
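The constant codebook mentioned in item 4 comes from a standard Huffman construction. As a rough, self-contained sketch of what such a package computes (the symbol frequencies below are made-up examples, not the project's actual data):

```python
import heapq

def huffman_codebook(freqs):
    """Build a Huffman codebook {symbol: bit string} from {symbol: frequency}."""
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prefix the lighter subtree's codes with 0, the heavier with 1
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codebook = huffman_codebook({"A": 5, "B": 2, "C": 1, "D": 1})
# "A" (the most frequent symbol) receives the shortest code
print(codebook)
```

Because the codebook is built from fixed frequencies, it stays constant across training runs, which matches the release note's intent.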


Issues awaiting resolution:

1. Add code to generate an attention map.

2. Remove the prints that check intermediate results from the code.

3. Translate the Chinese comments in the code into English.
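For item 1, a minimal sketch of generating an attention map from query/key matrices (the shapes and values here are illustrative assumptions, and the plotting step is kept optional):

```python
import math

def attention_map(Q, K):
    """Row-softmaxed scaled dot-product scores: softmax(Q K^T / sqrt(d_k))."""
    d_k = len(Q[0])
    weights = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        m = max(scores)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights

# Toy query/key matrices, just to illustrate the shapes
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
A = attention_map(Q, K)

try:  # plotting is optional and needs matplotlib
    import matplotlib.pyplot as plt
    plt.imshow(A, cmap="viridis")
    plt.xlabel("key position")
    plt.ylabel("query position")
    plt.colorbar()
    plt.savefig("attention_map.png")
except ImportError:
    pass
```

Each row of the returned matrix sums to 1, so `imshow` directly visualizes how strongly each query position attends to each key position.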


Comments
  • Draw attention map and use one-hot encoding


    1. In the previous encoding, the values 0 and 1 were used directly, but since the gradient of 0 is 0, the model cannot learn. One-hot encoding is therefore used to solve this problem: 0 --> 0 0 1, 1 --> 0 1 0, 2 --> 1 0 0.

    2. Added an attention map function.

    opened by Y4NG333 3
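The one-hot scheme described in that comment (0 --> 0 0 1, 1 --> 0 1 0, 2 --> 1 0 0) can be sketched as follows; the function name and reversed indexing are illustrative, and the actual project code may index differently:

```python
def one_hot(symbol, num_classes=3):
    """Map a code symbol to a one-hot vector: 0 -> [0,0,1], 1 -> [0,1,0], 2 -> [1,0,0]."""
    vec = [0] * num_classes
    # Reversed index, matching the mapping quoted in the comment above
    vec[num_classes - 1 - symbol] = 1
    return vec

print(one_hot(0))  # [0, 0, 1]
```

With this representation every symbol contributes a nonzero input, which avoids the zero-gradient problem the comment describes.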
  • Refactor the code


    • Do not hard-code numbers directly (e.g., instead of self.d_k = 64, take d_k as an input; fix the related parts as well).
    • Practice avoiding for loops (check the code).
    • Use black instead of prettier.
    • Finish the plotting code in utils.py.
    • For rotate, check whether an existing function already performs this instead of writing it by hand.
    opened by albert-no 3
  • Modify encoding and padding


    Mainly modified two parts:

    1. Padding added at the end of each 4-bit Huffman codeword ---> padding added at the end of the entire sequence.

    2. Use 1 and 2 as Huffman code symbols, with 0 as padding ------> use 0 and 1 as Huffman code symbols.

    Final results: for a random sequence of length 10, the effect is very good; the sequence generated by the model is consistent with the correct sequence, but its attention map is very different from what was expected. It could even be said that the attention map results are confusing :(

    opened by Y4NG333 3
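The padding change described in that PR can be illustrated with a toy example; the codebook and the PAD value below are hypothetical, not the project's actual ones:

```python
# Hypothetical binary Huffman codebook using 0 and 1 as code symbols
CODEBOOK = {"A": "0", "B": "10", "C": "11"}
PAD = 2  # hypothetical padding token, kept distinct from the 0/1 code symbols

def encode_and_pad(seq, max_len):
    """New scheme: encode the whole sequence, then pad once at the end
    (the old scheme padded each codeword to 4 bits individually)."""
    bits = [int(b) for sym in seq for b in CODEBOOK[sym]]
    return bits + [PAD] * (max_len - len(bits))

print(encode_and_pad("ABC", 8))  # [0, 1, 0, 1, 1, 2, 2, 2]
```

Padding once at the end keeps the Huffman bitstream contiguous, instead of interleaving pad symbols after every codeword.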
  • Using argparse, adding new maps and modifying model parameters


    In this PR, three main changes are made:

    1. Use argparse to manage model variables.
    2. Add a new output map.
    3. Try modifying the sizes of d_ff, d_k, and d_v.
    opened by Y4NG333 2
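Managing model variables with argparse (item 1 above) might look like this minimal sketch; the flag names and default values are illustrative assumptions, not the project's actual ones:

```python
import argparse

def get_args(argv=None):
    parser = argparse.ArgumentParser(description="Transformer Huffman coding")
    # Illustrative hyperparameter flags; names and defaults are assumptions
    parser.add_argument("--d-model", type=int, default=512)
    parser.add_argument("--d-ff", type=int, default=2048)
    parser.add_argument("--d-k", type=int, default=64)
    parser.add_argument("--d-v", type=int, default=64)
    parser.add_argument("--epochs", type=int, default=10)
    return parser.parse_args(argv)

args = get_args([])  # parse defaults; pass None to read sys.argv instead
print(args.d_model)  # 512
```

This lets experiments like item 3 (varying d_ff, d_k, and d_v) be run from the command line, e.g. `python main.py --d-ff 1024 --d-k 32`, without editing the source.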