Model:
projecte-aina/roberta-base-ca-v2-cased-sts
The roberta-base-ca-v2-cased-sts is a Semantic Textual Similarity (STS) model for the Catalan language, fine-tuned from the roberta-base-ca-v2 model, a RoBERTa base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for more details).
The roberta-base-ca-v2-cased-sts model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well to all use cases.
To get the model's correct prediction scores¹, with values between 0.0 and 5.0, use the following code:
```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit

model = 'projecte-aina/roberta-base-ca-v2-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)

def prepare(sentence_pairs):
    sentence_pairs_prep = []
    for s1, s2 in sentence_pairs:
        sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
    return sentence_pairs_prep

sentence_pairs = [
    ("El llibre va caure per la finestra.", "El llibre va sortir volant."),
    ("M'agrades.", "T'estimo."),
    ("M'agrada el sol i la calor", "A la Garrotxa plou molt."),
]

predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)

# Convert the normalized scores back to the original 0-5 interval
for prediction in predictions:
    prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.118301674983813}, {'label': 'SIMILARITY', 'score': 2.1799755855125853}, {'label': 'SIMILARITY', 'score': 0.9511617858568939}]
```
¹ Avoid using the widget scores, since they are normalized and do not reflect the original annotation values.
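For context, when a text-classification model has a single regression label, the Hugging Face pipeline (and the Hub widget) normalizes the raw score with a sigmoid; the `scipy.special.logit` call in the snippet above simply inverts that step. A minimal round-trip check, using an illustrative value:

```python
from scipy.special import expit, logit

raw = 2.118                # illustrative raw regression score on the 0-5 scale
widget_score = expit(raw)  # sigmoid-normalized value, as shown by the widget
assert abs(logit(widget_score) - raw) < 1e-9  # logit undoes the sigmoid
```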
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
We used the STS dataset in Catalan called STS-ca for training and evaluation.
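As a reference, a minimal sketch of loading STS-ca with the `datasets` library; the Hub identifier below is an assumption, so verify it against the dataset card:

```python
from datasets import load_dataset

# Assumed Hub identifier for STS-ca; check the dataset card if it differs.
sts_ca = load_dataset("projecte-aina/sts-ca")
print(sts_ca)              # expected splits: train / validation / test
print(sts_ca["train"][0])  # a sentence pair with a gold similarity score in [0, 5]
```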
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set, and finally evaluated it on the test set.
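A minimal sketch of that setup with the Hugging Face `Trainer`, assuming a single-label regression head; the output directory and dataset wiring are placeholders, not the official training script:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "projecte-aina/roberta-base-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)  # regression head

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-sts",  # placeholder
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,  # keep the checkpoint that is best on the dev set
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,  # tokenized STS-ca train split
#                   eval_dataset=...)   # tokenized STS-ca development split
# trainer.train()
```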
The model was fine-tuned to maximize the average of the Pearson and Spearman correlation scores.
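Following that description, the combined score reported below is the arithmetic mean of the two correlations (the table appears to report it multiplied by 100); a minimal scipy-based sketch, not the official evaluation code:

```python
from scipy.stats import pearsonr, spearmanr

def combined_score(predictions, references):
    """Average of the Pearson and Spearman correlations, each in [-1, 1]."""
    pearson = pearsonr(predictions, references)[0]
    spearman = spearmanr(predictions, references)[0]
    return (pearson + spearman) / 2
```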
We evaluated the roberta-base-ca-v2-cased-sts on the STS-ca test set against standard multilingual and monolingual baselines:
| Model | STS-ca (Combined score) |
| --- | --- |
| roberta-base-ca-v2-cased-sts | 79.07 |
| roberta-base-ca-cased-sts | 80.19 |
| mBERT | 74.26 |
| XLM-RoBERTa | 61.61 |
For more details, check the fine-tuning and evaluation scripts in the official GitHub repository.
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
For further information, send an email to aina@bsc.es.
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may contain biases and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.