I downloaded the linked pretrained models "The Neural Rasterizer (image size 128x128)", "The Image Super-resolution model (image size 128x128 -> 256x256)", and "The Main model (image size 128x128)", and put each of them in its corresponding folder. To train the main model I run python main.py --mode train --experiment_name dvf --model_name main_model, and the following error appears:
(dvf) miao@miao:~/data/file/bwt/deepvecfont-master$ python main.py --mode train --experiment_name dvf --model_name main_model
Training on experiment dvf_main_model...
Loading data/vecfont_dataset_pkls/train/train_all.pkl pickle file ...
Finished loading pkls
Loading data/vecfont_dataset_pkls/test/test_all.pkl pickle file ...
Finished loading pkls
Traceback (most recent call last):
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 351, in
main()
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 345, in main
train(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 320, in train
train_main_model(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 99, in train_main_model
for idx, data in enumerate(train_loader):
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miao/data/file/bwt/deepvecfont-master/dataloader.py", line 52, in getitem
item['rendered'] = torch.FloatTensor(cur_glyph['rendered']).view(self.char_num, self.img_size, self.img_size) / 255.
RuntimeError: shape '[52, 128, 128]' is invalid for input of size 212992
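For what it's worth, the element count in that error does not match 128x128 images. A quick arithmetic check (assuming the 52 glyphs per font implied by the error message):

```python
# The failing reshape expects 52 * 128 * 128 elements, but the tensor only has 212992.
print(52 * 128 * 128)   # 851968 -- what a 128x128 dataset would provide
print(212992 // 52)     # 4096 = 64 * 64, so the rendered glyphs in my pkls seem to be 64x64
```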
Since that suggests the rendered glyphs in my pkls are 64x64, I changed the image_size default on line 13 of options.py from 128 to 64 (the exact edit is shown after the traceback below). Then the following error occurs instead:
(dvf) miao@miao:~/data/file/bwt/deepvecfont-master$ python main.py --mode train --experiment_name dvf --model_name main_model
Training on experiment dvf_main_model...
Loading data/vecfont_dataset_pkls/train/train_all.pkl pickle file ...
Finished loading pkls
Loading data/vecfont_dataset_pkls/test/test_all.pkl pickle file ...
Finished loading pkls
Traceback (most recent call last):
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 351, in
main()
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 345, in main
train(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 320, in train
train_main_model(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 64, in train_main_model
neural_rasterizer.load_state_dict(torch.load(neural_rasterizer_fpath))
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for NeuralRasterizer:
Unexpected key(s) in state_dict: "decode.21.weight", "decode.21.bias", "decode.19.weight", "decode.19.bias".
size mismatch for decode.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 512, 3, 3]).
size mismatch for decode.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decode.1.weight: copying a param with shape torch.Size([1024, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 2, 2]).
size mismatch for decode.1.bias: copying a param with shape torch.Size([1024, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 2, 2]).
size mismatch for decode.3.weight: copying a param with shape torch.Size([1024, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
size mismatch for decode.3.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode.4.weight: copying a param with shape torch.Size([512, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 4, 4]).
size mismatch for decode.4.bias: copying a param with shape torch.Size([512, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 4, 4]).
size mismatch for decode.6.weight: copying a param with shape torch.Size([512, 256, 5, 5]) from checkpoint, the shape in current model is torch.Size([256, 128, 5, 5]).
size mismatch for decode.6.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decode.7.weight: copying a param with shape torch.Size([256, 8, 8]) from checkpoint, the shape in current model is torch.Size([128, 8, 8]).
size mismatch for decode.7.bias: copying a param with shape torch.Size([256, 8, 8]) from checkpoint, the shape in current model is torch.Size([128, 8, 8]).
size mismatch for decode.9.weight: copying a param with shape torch.Size([256, 128, 5, 5]) from checkpoint, the shape in current model is torch.Size([128, 64, 5, 5]).
size mismatch for decode.9.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for decode.10.weight: copying a param with shape torch.Size([128, 16, 16]) from checkpoint, the shape in current model is torch.Size([64, 16, 16]).
size mismatch for decode.10.bias: copying a param with shape torch.Size([128, 16, 16]) from checkpoint, the shape in current model is torch.Size([64, 16, 16]).
size mismatch for decode.12.weight: copying a param with shape torch.Size([128, 64, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 32, 5, 5]).
size mismatch for decode.12.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for decode.13.weight: copying a param with shape torch.Size([64, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 32, 32]).
size mismatch for decode.13.bias: copying a param with shape torch.Size([64, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 32, 32]).
size mismatch for decode.15.weight: copying a param with shape torch.Size([64, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 16, 5, 5]).
size mismatch for decode.15.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for decode.16.weight: copying a param with shape torch.Size([32, 64, 64]) from checkpoint, the shape in current model is torch.Size([16, 64, 64]).
size mismatch for decode.16.bias: copying a param with shape torch.Size([32, 64, 64]) from checkpoint, the shape in current model is torch.Size([16, 64, 64]).
size mismatch for decode.18.weight: copying a param with shape torch.Size([32, 16, 5, 5]) from checkpoint, the shape in current model is torch.Size([1, 16, 7, 7]).
size mismatch for decode.18.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([1]).
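For reference, this is the only change I made; the argparse text below is typed from memory and may not match options.py word for word:

```python
import argparse

parser = argparse.ArgumentParser()
# options.py, line 13 -- the only edit: default image size 128 -> 64
parser.add_argument('--image_size', type=int, default=64)   # was: default=128
```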
What is the problem and how can I fix it? Thank you!