Dataset:
albertvillanova/meqsum
The MeQSum corpus is a dataset for medical question summarization. It contains 1,000 consumer health questions, each paired with a summarized version.
[More Information Needed]
English (en).
{ "CHQ": "SUBJECT: who and where to get cetirizine - D\\nMESSAGE: I need\\/want to know who manufscturs Cetirizine. My Walmart is looking for a new supply and are not getting the recent", "Summary": "Who manufactures cetirizine?", "File": "1-131188152.xml.txt" }
The dataset consists of a single train split containing 1,000 examples.
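The split and fields above can be inspected programmatically. The following is a minimal sketch using the Hugging Face datasets library, assuming the dataset loads under its default configuration; the field names ("CHQ", "Summary", "File") follow the example instance shown earlier.

```python
# Minimal sketch: load MeQSum with the Hugging Face `datasets` library.
# Assumes the default configuration of albertvillanova/meqsum.
from datasets import load_dataset

# The card lists a single "train" split with 1,000 examples.
dataset = load_dataset("albertvillanova/meqsum", split="train")
print(len(dataset))  # expected: 1000

# Each example carries the fields shown in the instance above.
example = dataset[0]
print(example["CHQ"])      # original consumer health question
print(example["Summary"])  # summarized question
print(example["File"])     # source file identifier
```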
[More Information Needed]
[More Information Needed]
Who are the source language producers?
[More Information Needed]
[More Information Needed]
Who are the annotators?
[More Information Needed]
[More Information Needed]
[More Information Needed]
[More Information Needed]
[More Information Needed]
[More Information Needed]
[More Information Needed]
If you use the MeQSum corpus, please cite:
@inproceedings{ben-abacha-demner-fushman-2019-summarization,
    title = "On the Summarization of Consumer Health Questions",
    author = "Ben Abacha, Asma and Demner-Fushman, Dina",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1215",
    doi = "10.18653/v1/P19-1215",
    pages = "2228--2234",
    abstract = "Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16{\%}. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization.",
}
Thanks to @albertvillanova for adding this dataset.