This model can be used for Chinese-to-English translation and text-to-text generation.
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Further details about the dataset for this model can be found in the OPUS readme: zho-eng
* dataset: opus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: opus-2020-07-17.zip
* test set translations: opus-2020-07-17.test.txt
* test set scores: opus-2020-07-17.eval.txt
* brevity_penalty: 0.948
* ref_len: 82826.0
| testset | BLEU | chr-F |
|---|---|---|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |
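The brevity_penalty reported above is the standard BLEU brevity penalty, which discounts the score when the system output is shorter than the reference. A minimal sketch of the formula (the example lengths below are illustrative, not the actual system output lengths):

```python
import math

def brevity_penalty(cand_len: int, ref_len: int) -> float:
    # Standard BLEU brevity penalty (Papineni et al., 2002):
    # 1.0 when the candidate is at least as long as the reference,
    # exp(1 - r/c) when the candidate is shorter.
    if cand_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / cand_len)

# Illustrative lengths: a candidate 10% shorter than the reference.
print(round(brevity_penalty(90, 100), 3))
```

A reported penalty of 0.948 thus indicates the system's translations were, in aggregate, slightly shorter than the 82826-token reference set.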
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
  author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year = {2020},
  address = {Lisbon, Portugal}
}
```
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
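Once loaded, the tokenizer and model can be used for translation with the standard transformers generate/decode pattern. A minimal sketch (the input sentence and default generation settings below are illustrative, not prescribed by the model card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model (downloads the weights on first use).
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

# Illustrative Chinese input; any Chinese text works here.
inputs = tokenizer("你好，世界", return_tensors="pt")

# Translate with default generation settings and decode to a string.
outputs = model.generate(**inputs)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```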