Low-Resource Machine Translation
This repository contains the code for a project developed for the Deep Natural Language Processing course. The goal of the project is to replicate the experiments performed by Dabre et al. on low-resource machine translation: starting from a machine translation model pretrained on a large dataset, we fine-tune it on a low-resource language.
Implementation details
The initial model chosen for the task is MarianMT, a transformer-based model pretrained on a large English-Chinese corpus. The model is fine-tuned on three low-resource languages from the ALT dataset (Vietnamese, Indonesian, and Filipino). The fine-tuning is performed using the Huggingface