Hi, I'm trying to train a model on my data. The dataset is pretty small, fewer than 6,000 sentences. I used the first tutorial for preprocessing, and everything worked just fine.
Now, when I try to train the model, I get an error:
[2022-11-19 10:42:07,824 WARNING] Corpus corpus_1's weight should be given. We default it to 1 for you.
[2022-11-19 10:42:07,825 INFO] Parsed 2 corpora from -data.
[2022-11-19 10:42:07,826 INFO] Get special vocabs from Transforms: {'src': set(), 'tgt': set()}.
[2022-11-19 10:42:07,887 INFO] Building model...
Traceback (most recent call last):
  File "/usr/local/bin/onmt_train", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/onmt/bin/train.py", line 65, in main
    train(opt)
  File "/usr/local/lib/python3.7/dist-packages/onmt/bin/train.py", line 50, in train
    train_process(opt, device_id=0)
  File "/usr/local/lib/python3.7/dist-packages/onmt/train_single.py", line 136, in main
    model = build_model(model_opt, opt, vocabs, checkpoint)
  File "/usr/local/lib/python3.7/dist-packages/onmt/model_builder.py", line 327, in build_model
    model = build_base_model(model_opt, vocabs, use_gpu(opt), checkpoint)
  File "/usr/local/lib/python3.7/dist-packages/onmt/model_builder.py", line 242, in build_base_model
    model = build_task_specific_model(model_opt, vocabs)
  File "/usr/local/lib/python3.7/dist-packages/onmt/model_builder.py", line 158, in build_task_specific_model
    encoder, src_emb = build_encoder_with_embeddings(model_opt, vocabs)
  File "/usr/local/lib/python3.7/dist-packages/onmt/model_builder.py", line 131, in build_encoder_with_embeddings
    encoder = build_encoder(model_opt, src_emb)
  File "/usr/local/lib/python3.7/dist-packages/onmt/model_builder.py", line 73, in build_encoder
    return str2enc[enc_type].from_opt(opt, embeddings)
  File "/usr/local/lib/python3.7/dist-packages/onmt/encoders/transformer.py", line 120, in from_opt
    add_qkvbias=opt.add_qkvbias
  File "/usr/local/lib/python3.7/dist-packages/onmt/encoders/transformer.py", line 103, in __init__
    for i in range(num_layers)])
  File "/usr/local/lib/python3.7/dist-packages/onmt/encoders/transformer.py", line 103, in <listcomp>
    for i in range(num_layers)])
  File "/usr/local/lib/python3.7/dist-packages/onmt/encoders/transformer.py", line 38, in __init__
    attn_type="self", add_qkvbias=add_qkvbias)
  File "/usr/local/lib/python3.7/dist-packages/onmt/modules/multi_headed_attn.py", line 118, in __init__
    assert model_dim % head_count == 0
AssertionError
I found this issue with the same error; the suggested solution involves the hyperparameters, but I checked them and can't find the problem. Could you give me a hint on how to solve this? Thank you!
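For what it's worth, the assertion that fails requires the model (hidden) dimension to be evenly divisible by the number of attention heads. Below is a minimal sketch of that check, with a readable error instead of a bare AssertionError; the value pairs (512/8, 500/8) are just illustrative, not taken from my config:

```python
def check_attention_dims(model_dim: int, head_count: int) -> None:
    """Reproduce the divisibility check from multi_headed_attn.py,
    but with an informative message."""
    if model_dim % head_count != 0:
        raise ValueError(
            f"model_dim={model_dim} is not divisible by "
            f"head_count={head_count}; each head would get "
            f"{model_dim / head_count} dimensions."
        )

# A 512-dim model with 8 heads passes (each head gets 64 dims)...
check_attention_dims(512, 8)

# ...while 500 dims with 8 heads triggers the same failure as the assert.
try:
    check_attention_dims(500, 8)
except ValueError as e:
    print(e)
```

In an OpenNMT-py v3-style YAML this would correspond to keys along the lines of `hidden_size` and `heads` (and the embedding size must usually match `hidden_size` for a Transformer), but please check the exact option names against your own config.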