Dataset Card for "guardian_authorship"
Dataset Summary
A dataset for cross-topic authorship attribution. The dataset is provided by Stamatatos (2013).
1- The cross-topic scenarios are based on Table 4 in Stamatatos (2017) (e.g., cross_topic_1 corresponds to row 1: P S U&W).
2- The cross-genre scenarios are based on Table 5 in the same paper (e.g., cross_genre_1 corresponds to row 1: B P S&U&W).
3- The same-topic/genre scenario is created by grouping all the datasets as follows. For example, to use same_topic with a 60/40 train/test split:

    train_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>",
                            split='train[:60%]+validation[:60%]+test[:60%]')
    test_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>",
                           split='train[-40%:]+validation[-40%:]+test[-40%:]')

IMPORTANT: train+validation+test[:60%] generates the wrong splits because the data is imbalanced; the percentage slice must be applied to each split separately, as shown above.
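For concreteness, here is a minimal, runnable sketch of the same template, assuming the cross_topic_1 configuration (the configuration number is arbitrary; any cross_topic_<#> config works the same way):

```python
from datasets import load_dataset

# Same-topic/genre scenario: take 60% of each split for training and the
# remaining 40% of each split for testing. cross_topic_1 is used here only
# as an illustrative configuration.
train_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[:60%]+validation[:60%]+test[:60%]",
)
test_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[-40%:]+validation[-40%:]+test[-40%:]",
)

print(len(train_ds), len(test_ds))
```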
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
cross_genre_1
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB

An example of 'train' looks as follows.

{
    "article": "File 1a\n",
    "author": 0,
    "topic": 4
}
cross_genre_2
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB

An example of 'validation' looks as follows.

{
    "article": "File 1a\n",
    "author": 0,
    "topic": 1
}
cross_genre_3
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB

An example of 'validation' looks as follows.

{
    "article": "File 1a\n",
    "author": 0,
    "topic": 2
}
cross_genre_4
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB

An example of 'validation' looks as follows.

{
    "article": "File 1a\n",
    "author": 0,
    "topic": 3
}
cross_topic_1
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.34 MB
- Total amount of disk used: 5.43 MB

An example of 'validation' looks as follows.

{
    "article": "File 1a\n",
    "author": 0,
    "topic": 1
}
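Instances like the ones above can be inspected by loading a configuration and indexing into a split; a minimal sketch, assuming the cross_genre_1 configuration (any listed configuration works the same way):

```python
from datasets import load_dataset

# Load one configuration and inspect a single training instance.
ds = load_dataset("guardian_authorship", name="cross_genre_1")

example = ds["train"][0]
print(example["author"])          # integer author label
print(example["topic"])           # integer topic label
print(example["article"][:200])   # first 200 characters of the article text
```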
Data Fields
The data fields are the same among all splits.
cross_genre_1
- author: a classification label, with possible values including catherinebennett (0), georgemonbiot (1), hugoyoung (2), jonathanfreedland (3), martinkettle (4).
- topic: a classification label, with possible values including Politics (0), Society (1), UK (2), World (3), Books (4).
- article: a string feature.

cross_genre_2
- author: a classification label, with possible values including catherinebennett (0), georgemonbiot (1), hugoyoung (2), jonathanfreedland (3), martinkettle (4).
- topic: a classification label, with possible values including Politics (0), Society (1), UK (2), World (3), Books (4).
- article: a string feature.

cross_genre_3
- author: a classification label, with possible values including catherinebennett (0), georgemonbiot (1), hugoyoung (2), jonathanfreedland (3), martinkettle (4).
- topic: a classification label, with possible values including Politics (0), Society (1), UK (2), World (3), Books (4).
- article: a string feature.

cross_genre_4
- author: a classification label, with possible values including catherinebennett (0), georgemonbiot (1), hugoyoung (2), jonathanfreedland (3), martinkettle (4).
- topic: a classification label, with possible values including Politics (0), Society (1), UK (2), World (3), Books (4).
- article: a string feature.

cross_topic_1
- author: a classification label, with possible values including catherinebennett (0), georgemonbiot (1), hugoyoung (2), jonathanfreedland (3), martinkettle (4).
- topic: a classification label, with possible values including Politics (0), Society (1), UK (2), World (3), Books (4).
- article: a string feature.
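Because author and topic are stored as integer class labels, the mapping back to the names listed above can be recovered from the split's features. A minimal sketch, assuming the cross_topic_1 configuration:

```python
from datasets import load_dataset

ds = load_dataset("guardian_authorship", name="cross_topic_1", split="train")

# ClassLabel features provide int2str/str2int to convert between the integer
# values stored in the examples and the human-readable label names.
author_feature = ds.features["author"]
topic_feature = ds.features["topic"]

example = ds[0]
print(author_feature.int2str(example["author"]))  # e.g. "catherinebennett"
print(topic_feature.int2str(example["topic"]))    # e.g. "Politics"
```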
Data Splits
| name | train | validation | test |
|---|---:|---:|---:|
| cross_genre_1 | 63 | 112 | 269 |
| cross_genre_2 | 63 | 62 | 319 |
| cross_genre_3 | 63 | 90 | 291 |
| cross_genre_4 | 63 | 117 | 264 |
| cross_topic_1 | 112 | 62 | 207 |
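The counts in the table can be verified directly from a loaded configuration; a minimal sketch for cross_topic_1 (chosen only as an example):

```python
from datasets import load_dataset

# The printed num_rows should match the cross_topic_1 row of the table above
# (112 train / 62 validation / 207 test).
ds = load_dataset("guardian_authorship", name="cross_topic_1")
for split_name, split in ds.items():
    print(split_name, split.num_rows)
```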
Dataset Creation
Curation Rationale
More Information Needed
Source Data
Initial Data Collection and Normalization
More Information Needed
Who are the source language producers?
More Information Needed
Annotations
Annotation process
More Information Needed
Who are the annotators?
More Information Needed
Personal and Sensitive Information
More Information Needed
Considerations for Using the Data
Social Impact of Dataset
More Information Needed
Discussion of Biases
More Information Needed
Other Known Limitations
More Information Needed
Additional Information
Dataset Curators
More Information Needed
Licensing Information
More Information Needed
Citation Information
@article{stamatatos2013robustness,
    author  = {Stamatatos, Efstathios},
    title   = {On the robustness of authorship attribution based on character n-gram features},
    journal = {Journal of Law and Policy},
    volume  = {21},
    pages   = {421--439},
    month   = {01},
    year    = {2013}
}
@inproceedings{stamatatos2017authorship,
    title     = {Authorship attribution using text distortion},
    author    = {Stamatatos, Efstathios},
    booktitle = {Proc. of the 15th Conf. of the European Chapter of the Association for Computational Linguistics},
    volume    = {1},
    pages     = {1138--1149},
    year      = {2017}
}
Contributions
Thanks to @thomwolf, @eltoto1219, @malikaltakrori for adding this dataset.