Model:
HooshvareLab/bert-fa-base-uncased-clf-persiannews
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to make ParsBERT usable in other scopes. Please follow the ParsBERT repo for the latest information about previous and current models.
The task targets labeling texts in a supervised manner across two existing datasets: DigiMag and Persian News.
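The fine-tuned model can be used directly for inference; a minimal sketch, assuming the standard `transformers` pipeline API (the example sentence and the predicted label are illustrative, not taken from the card):

```python
from transformers import pipeline

# Load the Persian News classifier published on the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-clf-persiannews",
)

# A Persian example sentence ("The national team won this match").
result = classifier("تیم ملی در این مسابقه پیروز شد")
print(result)  # a list of {'label': ..., 'score': ...} dicts
```

The pipeline returns the highest-scoring class by default; pass `top_k=None` to get scores for all eight classes.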
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
Label | # |
---|---|
Social | 2170 |
Economic | 1564 |
International | 1975 |
Political | 2269 |
Science Technology | 2436 |
Cultural Art | 2558 |
Sport | 1381 |
Medical | 2085 |
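As a quick sanity check, the per-class counts in the table above do sum to the stated 16,438 articles; a small script using the counts as listed:

```python
# Per-class article counts from the Persian News dataset table above.
class_counts = {
    "Social": 2170,
    "Economic": 1564,
    "International": 1975,
    "Political": 2269,
    "Science Technology": 2436,
    "Cultural Art": 2558,
    "Sport": 1381,
    "Medical": 2085,
}

total = sum(class_counts.values())
print(total)  # → 16438, matching the stated dataset size
```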
Download: You can download the dataset from here.
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
---|---|---|---|
Persian News | 97.44* | 97.19 | 95.79 |
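To put the F1 scores in perspective, a quick calculation (the values are taken from the table; framing the gap as a relative reduction in residual error is my own addition, not part of the card):

```python
# F1 scores (%) on Persian News, from the table above.
f1 = {"ParsBERT v2": 97.44, "ParsBERT v1": 97.19, "mBERT": 95.79}

# Absolute gain of ParsBERT v2 over mBERT, in F1 points.
gain = f1["ParsBERT v2"] - f1["mBERT"]
print(round(gain, 2))  # → 1.65

# Relative reduction in residual error (100 - F1) versus mBERT.
error_reduction = ((100 - f1["mBERT"]) - (100 - f1["ParsBERT v2"])) / (100 - f1["mBERT"])
print(round(error_reduction * 100, 1))  # → 39.2
```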
Task | Notebook |
---|---|
Text Classification | |
Please cite in publications as the following:
@article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} }
Post a GitHub issue on the ParsBERT Issues repo.