I suspect PyTorch has been updated since this code was published, because when I try to run it, I get:
File "sketch_rnn.py", line 420, in <module>
model.train(epoch)
File "sketch_rnn.py", line 251, in train
self.rho_xy, self.q, _, _ = self.decoder(inputs, z)
File "/home/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "sketch_rnn.py", line 186, in forward
pi = F.softmax(pi.t().squeeze()).view(len_out,-1,hp.M)
File "/home/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 729, in t
raise RuntimeError("t() expects a 2D Variable, but self is {}D".format(self.dim()))
RuntimeError: t() expects a 2D Variable, but self is 3D
I could work around this by replacing the .t() calls with .transpose(0,1). I'm not sure this is actually correct, but it seems to work. The changed code looks like this:
pi = F.softmax(pi.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
sigma_x = torch.exp(sigma_x.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
sigma_y = torch.exp(sigma_y.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
rho_xy = torch.tanh(rho_xy.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
mu_x = mu_x.transpose(0,1).squeeze().contiguous().view(len_out,-1,hp.M)
mu_y = mu_y.transpose(0,1).squeeze().contiguous().view(len_out,-1,hp.M)
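As a quick sanity check on the substitution (assuming a reasonably recent PyTorch), .t() and .transpose(0,1) are interchangeable on 2D tensors, while only .transpose(0,1) is defined for the 3D case that triggers the error above:

```python
import torch

# On a 2-D tensor, .t() and .transpose(0, 1) give identical results,
# so the replacement does not change behavior for the 2-D case.
x2 = torch.randn(3, 5)
assert torch.equal(x2.t(), x2.transpose(0, 1))

# On a 3-D tensor (the situation in the traceback), .t() raises a
# RuntimeError, while .transpose(0, 1) simply swaps the first two
# dimensions, which is what the original .t() presumably intended.
x3 = torch.randn(2, 3, 4)
print(x3.transpose(0, 1).shape)  # swaps dims 0 and 1
```

So the workaround should be behavior-preserving where the old code ran, and it just extends the same dim-0/dim-1 swap to the 3D tensors that newer PyTorch versions produce here.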