A PyTorch implementation of Sketch-RNN

Overview
Comments
  • Make code more compatible for Pytorch version 0.4.0 and above

    The following changes were made from the base branch (a short sketch follows the list):

    • From PyTorch version 0.4 onwards, the Variable class is deprecated, so all calls to Variable were removed.

    • Increased readability on line 46 by replacing len(seq[:,0]) with seq.shape[0]; both mean the same thing.

    • For lines 186 -- 191, transposing a 3D tensor with t() throws an error, so the squeeze operation is now done before the transpose rather than after.

    • For lines 219 -- 226, the use of detach() is unnecessary, as these tensors are not part of the main computation graph and the variable batch has requires_grad set to False.
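
    A minimal before/after sketch of these changes (illustrative shapes, assuming PyTorch >= 0.4; seq is a made-up stand-in for a stroke tensor):

    import torch

    seq = torch.randn(10, 5)  # stand-in for a (seq_len, 5) stroke sequence

    # Variable is deprecated: plain tensors carry autograd state themselves.
    # old: n = len(Variable(seq)[:, 0])
    n = seq.shape[0]  # clearer than len(seq[:, 0])

    # t() only accepts 2D tensors, so squeeze a 3D tensor before transposing:
    pi = torch.randn(1, 10, 20)
    pi = pi.squeeze(0).t()  # squeeze first, then transpose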

    opened by varshaneya 1
  • Cannot train for conditional_generation

    Hi, I was just trying to run your code exactly as given, but I bumped into an error at epoch=100:

    <ipython-input-48-6ce735b7273a> in <module>
          2     model = Model()
          3     for epoch in range(500):
    ----> 4         model.train(epoch)
    
    <ipython-input-44-a793444e3f37> in train(self, epoch)
         78         if epoch%100==0:
         79             #self.save(epoch)
    ---> 80             self.conditional_generation(epoch)
         81 
         82     def bivariate_normal_pdf(self, dx, dy):
    
    <ipython-input-44-a793444e3f37> in conditional_generation(self, epoch)
        142             hidden_cell = (hidden, cell)
        143             # sample from parameters:
    --> 144             s, dx, dy, pen_down, eos = self.sample_next_state()
        145             #------
        146             seq_x.append(dx)
    
    <ipython-input-44-a793444e3f37> in sample_next_state(self)
        180         sigma_y = self.sigma_y.data[0,0,pi_idx]
        181         rho_xy = self.rho_xy.data[0,0,pi_idx]
    --> 182         x,y = sample_bivariate_normal(mu_x,mu_y,sigma_x,sigma_y,rho_xy,greedy=False)
        183         next_state = torch.zeros(5)
        184         next_state[0] = x
    
    <ipython-input-47-60080b137134> in sample_bivariate_normal(mu_x, mu_y, sigma_x, sigma_y, rho_xy, greedy)
          8     cov = [[sigma_x * sigma_x, rho_xy * sigma_x * sigma_y],\
          9         [rho_xy * sigma_x * sigma_y, sigma_y * sigma_y]]
    ---> 10     x = np.random.multivariate_normal(mean, cov, 1)
         11     return x[0][0], x[0][1]
         12 
    
    mtrand.pyx in numpy.random.mtrand.RandomState.multivariate_normal()
    
    TypeError: ufunc 'add' output (typecode 'O') could not be coerced to provided output parameter (typecode 'd') according to the casting rule ''same_kind''
    
    I've looked into this error, but I can't quite grasp what is going on here.
    
    Would you mind helping me out with this, please?
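
    A guess at the cause (not confirmed in this thread): the mixture parameters handed to np.random.multivariate_normal are still torch tensors, so NumPy builds an object-dtype array and the cast fails. A minimal sketch of a possible fix, with made-up parameter values, converting everything to Python floats first:

    import numpy as np
    import torch

    # made-up stand-ins for the model's mixture parameters
    mu_x, mu_y = torch.tensor(0.1), torch.tensor(-0.2)
    sigma_x, sigma_y, rho_xy = torch.tensor(0.3), torch.tensor(0.4), torch.tensor(0.5)

    # plain Python floats keep NumPy in a numeric dtype
    mean = [mu_x.item(), mu_y.item()]
    cov = [[sigma_x.item() ** 2, rho_xy.item() * sigma_x.item() * sigma_y.item()],
           [rho_xy.item() * sigma_x.item() * sigma_y.item(), sigma_y.item() ** 2]]
    x = np.random.multivariate_normal(mean, cov, 1)
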
    opened by DustinBaek 0
  • The result was not as perfect as the images

    After running the code, the results were not as good as the images you provided. I have tried both CPU and GPU, and the output still looks like a mess after training for 50001 epochs. I was wondering whether there is a problem in the code or anything else that affects the result. I'm using the latest version you provided; the PyTorch version is 1.8.1 and the Python version is 3.7.1. Hope anyone can help.

    opened by LTHsuan 1
  • Wrong Decoder RNN Architecture

    In the decoder, an LSTM is used:

    https://github.com/alexis-jacq/Pytorch-Sketch-RNN/blob/5c3e21375dfe7695c1c37a0acccf6da17c049f77/sketch_rnn.py#L151-L157

    Meanwhile, the description of the architecture on page 6 of the original paper states:

    For the decoder RNN, we use HyperLSTM, as this type of RNN cell excels at sequence generation tasks

    This refers to a very different LSTM variant that can generate different weights for itself for every element in a sequence. The model is defined in this paper, and implementation details are given in Appendix Sections 2.2 and 2.3.
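
    For reference, a toy sketch of the hypernetwork idea (illustrative only, not the paper's exact HyperLSTM; all names are made up): a small auxiliary LSTM emits a per-timestep scale vector that modulates the main cell's gate pre-activations, so the effective weights change at every step.

    import torch
    import torch.nn as nn

    class TinyHyperLSTMCell(nn.Module):
        def __init__(self, input_size, hidden_size, hyper_size=32):
            super().__init__()
            self.hyper = nn.LSTMCell(input_size, hyper_size)     # auxiliary LSTM
            self.w_x = nn.Linear(input_size, 4 * hidden_size)   # main input weights
            self.w_h = nn.Linear(hidden_size, 4 * hidden_size)  # main recurrent weights
            self.scale = nn.Linear(hyper_size, 4 * hidden_size) # per-step gate scales

        def forward(self, x, state, hyper_state):
            h, c = state
            hyper_h, hyper_c = self.hyper(x, hyper_state)
            s = self.scale(hyper_h)                   # depends on the timestep
            gates = s * (self.w_x(x) + self.w_h(h))   # modulated pre-activations
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return (h, c), (hyper_h, hyper_c)

    # usage with made-up sizes:
    cell = TinyHyperLSTMCell(5, 8)
    x = torch.randn(2, 5)
    state = (torch.zeros(2, 8), torch.zeros(2, 8))
    hyper_state = (torch.zeros(2, 32), torch.zeros(2, 32))
    state, hyper_state = cell(x, state, hyper_state)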

    opened by Ar-Kareem 0
  • Version related, syntax and hyperparameter changes

    1. Specifying the dropout parameter for an nn.LSTM with one layer will not apply dropout. Dropout has to be specified separately with nn.Dropout and applied to the hidden state, since the paper talks only about recurrent dropout, not input or output dropout (see the sketch after this list).
    2. The original implementation calls for a 0.9 keep probability, but this code uses a 0.9 dropout probability. That needs to be changed in the Hyperparameters class.
    3. F.softmax requires the dimension parameter dim to be specified.
    4. Removed unnecessary t() and squeeze() operations and replaced them with view() directly.
    5. Added a missing closing parenthesis.
    6. For tensors with a single element, indexing with [0] will be an error from PyTorch 0.5 onwards, so those calls were replaced with item().
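
    A minimal sketch of items 1, 3 and 6, assuming a dropout probability of 0.1 (i.e., the paper's 0.9 keep probability):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    lstm = nn.LSTM(input_size=5, hidden_size=8)  # dropout= would be ignored: 1 layer
    drop = nn.Dropout(p=0.1)                     # 0.1 dropout = 0.9 keep probability

    x = torch.randn(3, 2, 5)                     # (seq_len, batch, input_size)
    out, (h, c) = lstm(x)
    h = drop(h)                                  # apply dropout to the hidden state

    probs = F.softmax(torch.randn(2, 20), dim=-1)  # dim must be given explicitly

    val = torch.tensor([1.5]).item()             # item() instead of tensor[0]
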
    opened by varshaneya 0
  • In the reconstruction_loss function, the divisor should be Nmax+1

    This is in the Model class of sketch_rnn.py:

    def reconstruction_loss(self, mask, dx, dy, p, epoch):
        pdf = self.bivariate_normal_pdf(dx, dy)
        LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
            /float(Nmax*hp.batch_size)
        LP = -torch.sum(p*torch.log(self.q))/float(Nmax*hp.batch_size)
        return LS+LP
    

    Each Nmax in both the LS and LP lines should be (Nmax+1) instead, because in the train function of the Model class, each sequence has an sos concatenated at the beginning (a corrected version is sketched after the snippet):

    # create start of sequence:
    if use_cuda:
        sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
            *hp.batch_size).cuda()).unsqueeze(0)
    else:
        sos = Variable(torch.stack([torch.Tensor([0,0,1,0,0])]\
            *hp.batch_size)).unsqueeze(0)
    # add sos at the beginning of the batch:
    batch_init = torch.cat([sos, batch],0)
    # expand z to be ready to concatenate with inputs:
    z_stack = torch.stack([z]*(Nmax+1))
    # inputs is concatenation of z and batch_inputs
    inputs = torch.cat([batch_init, z_stack],2)
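
    Applying the suggested change, the corrected function would read:

    def reconstruction_loss(self, mask, dx, dy, p, epoch):
        pdf = self.bivariate_normal_pdf(dx, dy)
        LS = -torch.sum(mask*torch.log(1e-5+torch.sum(self.pi * pdf, 2)))\
            /float((Nmax+1)*hp.batch_size)
        LP = -torch.sum(p*torch.log(self.q))/float((Nmax+1)*hp.batch_size)
        return LS+LP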
    
    opened by rardz 0
  • RuntimeError: t() expects a 2D Variable, but self is 3D

    I suspect that there have been some updates to PyTorch after this code was published, since when trying to run it, I get:

    File "sketch_rnn.py", line 420, in <module>
        model.train(epoch)
      File "sketch_rnn.py", line 251, in train
        self.rho_xy, self.q, _, _ = self.decoder(inputs, z)
      File "/home/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "sketch_rnn.py", line 186, in forward
        pi = F.softmax(pi.t().squeeze()).view(len_out,-1,hp.M)
      File "/home/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 729, in t
        raise RuntimeError("t() expects a 2D Variable, but self is {}D".format(self.dim()))
    RuntimeError: t() expects a 2D Variable, but self is 3D
    

    I could fix this by replacing the .t() calls with .transpose(0,1). I am not sure whether this is actually correct, but it seems to work. The changed code looks like this:

    pi = F.softmax(pi.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
    sigma_x = torch.exp(sigma_x.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
    sigma_y = torch.exp(sigma_y.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
    rho_xy = torch.tanh(rho_xy.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
    mu_x = mu_x.transpose(0,1).squeeze().contiguous().view(len_out,-1,hp.M)
    mu_y = mu_y.transpose(0,1).squeeze().contiguous().view(len_out,-1,hp.M)
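
    For reference, a quick check of the difference (illustrative shapes):

    import torch
    x = torch.randn(4, 2, 6)
    y = x.transpose(0, 1)  # shape (2, 4, 6): works on tensors of rank >= 2
    # x.t()                # raises: t() expects a 2D tensor, but self is 3D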
    
    opened by Quasimondo 1
Owner
Alexis David Jacq
Pytorch implementation of the popular Improv RNN model originally proposed by the Magenta team.

Pytorch Implementation of Improv RNN Overview This code is a pytorch implementation of the popular Improv RNN model originally implemented by the Mage

Sebastian Murgul 3 Nov 11, 2022
GANSketchingJittor - Implementation of Sketch Your Own GAN in Jittor

GANSketching in Jittor Implementation of (Sketch Your Own GAN) in Jittor(่ฎกๅ›พ). Or

Bernard Tan 10 Jul 2, 2022
A code repository associated with the paper A Benchmark for Rough Sketch Cleanup by Chuan Yan, David Vanderhaeghe, and Yotam Gingold from SIGGRAPH Asia 2020.

A Benchmark for Rough Sketch Cleanup This is the code repository associated with the paper A Benchmark for Rough Sketch Cleanup by Chuan Yan, David Va

null 33 Dec 18, 2022
A sketch extractor for anime/illustration.

Anime2Sketch Anime2Sketch: A sketch extractor for illustration, anime art, manga By Xiaoyu Xiang Updates 2021.5.2: Upload more example results of anim

Xiaoyu Xiang 1.6k Jan 1, 2023
[CVPR 21] Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.

Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting, CVPR 2021. Ayan Kumar Bhunia, Pinaki nath Chowdhury, Yongxin Yan

Ayan Kumar Bhunia 44 Dec 12, 2022
Compositional Sketch Search

Compositional Sketch Search Official repository for ICIP 2021 Paper: Compositional Sketch Search Requirements Install and activate conda environment c

Alexander Black 8 Sep 6, 2021
Code for the paper: Sketch Your Own GAN

Sketch Your Own GAN Project | Paper | Youtube Our method takes in one or a few hand-drawn sketches and customizes an off-the-shelf GAN to match the in

null 677 Dec 28, 2022
Open CV - Convert a picture to look like a cartoon sketch in python

Use the video https://www.youtube.com/watch?v=k7cVPGpnels for initial learning.

Sammith S Bharadwaj 3 Jan 29, 2022
๐Ÿ“ Wrapper library for text generation / language models at char and word level with RNN in TensorFlow

tensorlm Generate Shakespeare poems with 4 lines of code. Installation tensorlm is written in / for Python 3.4+ and TensorFlow 1.1+ pip3 install tenso

Kilian Batzner 63 May 22, 2021
Algorithmic Trading using RNN

Deep-Trading This is an implementation adapted from Rachnog Neural networks for algorithmic trading. Part One - Simple time series forecasting and this c

Hazem Nomer 29 Sep 4, 2022
LSTMs (Long Short Term Memory) RNN for prediction of price trends

Price Prediction with Recurrent Neural Networks LSTMs BTC-USD price prediction with deep learning algorithm. Artificial Neural Networks specifically L

null 5 Nov 12, 2021
Implements Stacked-RNN in numpy and torch with manual forward and backward functions

Recurrent Neural Networks Implements simple recurrent network and a stacked recurrent network in numpy and torch respectively. Both flavours implement

Vishal R 1 Nov 16, 2021
keyframes-CNN-RNN(action recognition)

keyframes-CNN-RNN(action recognition) Environment: python>=3.7 pytorch>=1.2 Datasets: Following the format of UCF101 action recognition. Run steps: Mo

null 4 Feb 9, 2022
Using a Seq2Seq RNN architecture via TensorFlow to predict future Bitcoin prices

Recurrent Bitcoin Network A Data Science Thesis Project About This repository contains the source code for implementing Bitcoin price prediciton using

Frizu 6 Sep 8, 2022
RNN Predict Street Commercial Vitality

RNN-for-Predicting-Street-Vitality Code and dataset for Predicting the Vitality of Stores along the Street based on Business Type Sequence via Recurre

Zidong LIU 1 Dec 15, 2021
Emotion classification of online comments based on RNN

emotion_classification Emotion classification of online comments based on RNN, the accuracy of the model in the test set reaches 99% data: Large Movie

null 1 Nov 23, 2021
Static Features Classifier - A static features classifier for Point-Could clusters using an Attention-RNN model

Static Features Classifier This is a static features classifier for Point-Could

ABDALKARIM MOHTASIB 1 Jan 25, 2022
ALBERT-pytorch-implementation - ALBERT pytorch implementation

ALBERT-pytorch-implementation developing... An implementation meant to aid conceptual understanding of the model; the variable names are currently written out in detail, and

BG Kim 3 Oct 6, 2022
An essential implementation of BYOL in PyTorch + PyTorch Lightning

Essential BYOL A simple and complete implementation of Bootstrap your own latent: A new approach to self-supervised Learning in PyTorch + PyTorch Ligh

Enrico Fini 48 Sep 27, 2022