Dataset: copenlu/citeworth
Task: text classification
Language: en
Multilinguality: monolingual
Size: 1M<n<10M
Language creators: found
Annotation creators: expert-generated
Source datasets: extended|s2orc
License: cc-by-nc-4.0

Scientific document understanding is challenging as the data is highly domain specific and diverse. However, datasets for tasks with scientific text require expensive manual annotation and tend to be small and limited to only one or a few fields. At the same time, scientific documents contain many potential training signals, such as citations, which can be used to build large labelled datasets. Given this, we present an in-depth study of cite-worthiness detection in English, where a sentence is labelled for whether or not it cites an external source. To accomplish this, we introduce CiteWorth, a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection built from a massive corpus of extracted plain-text scientific documents. We show that CiteWorth is high-quality, challenging, and suitable for studying problems such as domain adaptation. Our best performing cite-worthiness detection model is a paragraph-level contextualized sentence labelling model based on Longformer, exhibiting a 5 F1 point improvement over SciBERT which considers only individual sentences. Finally, we demonstrate that language model fine-tuning with cite-worthiness as a secondary task leads to improved performance on downstream scientific document understanding tasks.
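As an illustrative sketch only, and not the exact architecture or training setup from the paper, sentence-level cite-worthiness detection can be framed as binary sequence classification. The example below assumes the Hugging Face transformers library and the public allenai/scibert_scivocab_uncased checkpoint; the classification head is randomly initialised and would need fine-tuning on CiteWorth before its predictions are meaningful, and the 0/1 label convention is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative only: a binary classification head on top of SciBERT.
# The head is randomly initialised here and must be fine-tuned on CiteWorth.
model_name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A single sentence to score for cite-worthiness.
sentence = "Prior work has shown that citation context improves retrieval."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed label convention: 0 = not cite-worthy, 1 = cite-worthy.
prediction = logits.argmax(dim=-1).item()
print(prediction)
```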
The data is structured as follows:
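As a minimal sketch, assuming the Hugging Face datasets library and the copenlu/citeworth dataset ID listed above, the splits, features, and a sample record can be inspected directly at load time rather than relying on a hard-coded field list:

```python
from datasets import load_dataset

# Load CiteWorth from the Hugging Face Hub (dataset ID taken from the card above).
# Exact configuration and split names are not guaranteed here; load_dataset will
# report the available options if a configuration name is required.
dataset = load_dataset("copenlu/citeworth")

# Inspect the splits, the schema reported by the datasets library,
# and the first example of the first split.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split.features)
print(first_split[0])
```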
The data is derived from the S2ORC dataset, specifically the 20200705v1 release. It is licensed under the CC BY-NC 2.0 license. For details on the dataset creation process, see section 3 of our paper.
Please use the following citation when referencing this work or using the data:
@inproceedings{wright2021citeworth,
    title = {{CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding}},
    author = {Dustin Wright and Isabelle Augenstein},
    booktitle = {Findings of ACL-IJCNLP},
    publisher = {Association for Computational Linguistics},
    year = 2021
}