Dataset:
wmt18
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz); we have contacted the WMT organizers.
Translation dataset based on the data from statmt.org.
Versions exist for different years using a combination of data sources. The base `wmt` builder allows you to create a custom dataset by choosing your own data and language pair. This can be done as follows:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder

# Copy the wmt loading scripts locally so they can be customized.
inspect_dataset("wmt18", "path/to/scripts")

# Build a custom config from your own choice of data sources and language pair.
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)

# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()

# Streamable version
ds = builder.as_streaming_dataset()
```
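If one of the prebuilt language pairs is enough, a minimal sketch of the standard `load_dataset` call (using the `cs-en` config from the split table below) looks like this:

```python
from datasets import load_dataset

# Load a prebuilt wmt18 config instead of a custom builder.
ds = load_dataset("wmt18", "cs-en")
print(ds["validation"][0])

# Streaming also works for the prebuilt configs.
ds_stream = load_dataset("wmt18", "cs-en", streaming=True)
```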
An example of 'validation' looks as follows.
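A minimal sketch of a `cs-en` record; the sentences are placeholders, but each example is a single `translation` dict keyed by language code:

```python
# Illustrative only: the sentence values are assumptions.
example = {
    "translation": {
        "cs": "Příklad věty v češtině.",
        "en": "An example sentence in English.",
    }
}
```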
The data fields are the same among all splits.
cs-en

| name | train | validation | test |
|---|---|---|---|
| cs-en | 11046024 | 3005 | 2983 |
```bibtex
@InProceedings{bojar-EtAl:2018:WMT1,
  author    = {Bojar, Ond{\v{r}}ej and Federmann, Christian and Fishel, Mark
               and Graham, Yvette and Haddow, Barry and Huck, Matthias and
               Koehn, Philipp and Monz, Christof},
  title     = {Findings of the 2018 Conference on Machine Translation (WMT18)},
  booktitle = {Proceedings of the Third Conference on Machine Translation,
               Volume 2: Shared Task Papers},
  month     = {October},
  year      = {2018},
  address   = {Belgium, Brussels},
  publisher = {Association for Computational Linguistics},
  pages     = {272--307},
  url       = {http://www.aclweb.org/anthology/W18-6401}
}
```
Thanks to @thomwolf and @patrickvonplaten for adding this dataset.