Model: PORTULAN/albertina-ptbr
This is the model card for Albertina PT-BR. You may be interested in some of the other models in the Albertina (encoders) and Gervásio (decoders) families.
Albertina PT-* is a foundation large language model for the Portuguese language.
It is an encoder of the BERT family, based on the Transformer neural architecture and developed over the DeBERTa model, with highly competitive performance for this language. It has different versions that were trained for different variants of Portuguese (PT), namely the European variant from Portugal (PT-PT) and the American variant from Brazil (PT-BR), and it is distributed free of charge and under a highly permissive license.
Albertina PT-BR is the version for American Portuguese from Brazil, trained on the brWaC data set.
You may also be interested in Albertina PT-BR No-brWaC, trained on data sets other than brWaC and thus distributed under a more permissive license. To the best of our knowledge, these are encoders specifically for this language and variant that, at the time of their initial distribution, set a new state of the art, and they are made publicly available and distributed for reuse.
Albertina PT-BR is developed by a joint team from the University of Lisbon and the University of Porto, Portugal. For further details, check the respective publication:

```bibtex
@misc{albertina-pt,
  title={Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*},
  author={João Rodrigues and Luís Gomes and João Silva and António Branco and Rodrigo Santos and Henrique Lopes Cardoso and Tomás Osório},
  year={2023},
  eprint={2305.06721},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Please use the above canonical reference when using or citing this model.
This model card is for Albertina-PT-BR , with 900M parameters, 24 layers and a hidden size of 1536.
This model is distributed respecting the license granted by the brWaC data set on which it was trained, namely that it is "available solely for academic research purposes, and you agreed not to use it for any commercial applications".
Albertina PT-BR was trained over the 2.7 billion token brWaC data set.
Albertina PT-PT , in turn, was trained over a 2.2 billion token data set that resulted from gathering some openly available corpora of European Portuguese from the following sources:
We filtered the PT-PT corpora using the BLOOM pre-processing pipeline, resulting in a data set of 8 million documents, containing around 2.2 billion tokens. We skipped the default filtering of stopwords since it would disrupt the syntactic structure, and also the filtering for language identification given the corpus was pre-selected as Portuguese.
As our codebase, we resorted to DeBERTa V2 XLarge, for English.
To train Albertina-PT-BR, the brWaC data set was tokenized with the original DeBERTa tokenizer with a 128-token sequence truncation and dynamic padding. The model was trained using the maximum available memory capacity, resulting in a batch size of 896 samples (56 samples per GPU, without gradient accumulation steps). We chose a learning rate of 1e-5 with linear decay and 10k warm-up steps based on the results of exploratory experiments. In total, around 200k training steps were taken across 50 epochs. The model was trained for 1 day and 11 hours on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1,360 GB of RAM.
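The schedule described above (linear warm-up to the peak learning rate, then linear decay to zero at the last step) can be sketched as follows. This is an illustrative reconstruction from the stated hyperparameters, not the authors' training code; the function name is ours.

```python
def lr_at_step(step, peak_lr=1e-5, warmup_steps=10_000, total_steps=200_000):
    """Linear warm-up to peak_lr over warmup_steps, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        # warm-up phase: learning rate grows linearly from 0 to peak_lr
        return peak_lr * step / warmup_steps
    # decay phase: learning rate shrinks linearly to 0 at the final step
    remaining = total_steps - step
    return max(0.0, peak_lr * remaining / (total_steps - warmup_steps))

print(lr_at_step(5_000))    # halfway through warm-up: 5e-6
print(lr_at_step(10_000))   # peak: 1e-5
print(lr_at_step(200_000))  # end of training: 0.0
```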
To train Albertina PT-PT, the data set was tokenized with the original DeBERTa tokenizer with a 128-token sequence truncation and dynamic padding. The model was trained using the maximum available memory capacity, resulting in a batch size of 832 samples (52 samples per GPU, applying gradient accumulation in order to approximate the batch size of the PT-BR model). Similarly to the PT-BR variant above, we opted for a learning rate of 1e-5 with linear decay and 10k warm-up steps. However, since the number of training examples is approximately twice that of the PT-BR variant, we reduced the number of training epochs to half and completed only 25 epochs, which resulted in approximately 245k steps. The model was trained for 3 days on a2-highgpu-8gb Google Cloud A2 VMs with 8 GPUs, 96 vCPUs and 680 GB of RAM.
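The effective-batch-size arithmetic implied by the two paragraphs above can be made explicit. This is our reading of the stated figures (in particular, 2 accumulation steps for PT-PT is inferred from 52 × 8 × 2 = 832), not a value confirmed in the text:

```python
def effective_batch_size(per_gpu, num_gpus, accumulation_steps):
    """Effective batch size = samples per GPU x number of GPUs x gradient-accumulation steps."""
    return per_gpu * num_gpus * accumulation_steps

# PT-PT: 52 samples per GPU on 8 GPUs, with (inferred) 2 accumulation steps
print(effective_batch_size(52, 8, 2))   # 832
# PT-BR: 56 samples per GPU on 16 GPUs, no gradient accumulation
print(effective_batch_size(56, 16, 1))  # 896
```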
The two model versions were evaluated on downstream tasks organized into two groups.
In one group, we have the two data sets from the ASSIN 2 benchmark, namely STS and RTE, that were used to evaluate the previous state-of-the-art model BERTimbau Large. In the other group of data sets, we have the translations into PT-BR and PT-PT of the English data sets used for a few of the tasks in the widely used GLUE benchmark, which allowed us to test both Albertina PT-* variants on a wider variety of downstream tasks.
ASSIN 2 is a PT-BR data set of approximately 10,000 sentence pairs, split into 6,500 for training, 500 for validation, and 2,448 for testing, annotated with semantic relatedness scores (range 1 to 5) and with binary entailment judgments. This data set supports the task of semantic textual similarity (STS), which consists of assigning a score of how semantically related two sentences are; and the task of recognizing textual entailment (RTE), which, given a pair of sentences, consists of determining whether the first entails the second.
| Model | RTE (Accuracy) | STS (Pearson) |
|---|---|---|
| Albertina-PT-BR | 0.9130 | 0.8676 |
| BERTimbau-large | 0.8913 | 0.8531 |
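For reference, the STS metric reported above is the Pearson correlation between predicted and gold relatedness scores. A minimal sketch, using the standard formula and purely hypothetical scores (the function and sample values are ours, not from the evaluation):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [1.0, 2.5, 3.0, 4.5, 5.0]  # hypothetical gold relatedness scores (1-5 scale)
pred = [1.2, 2.4, 3.5, 4.4, 4.8]  # hypothetical model predictions
print(round(pearson(gold, pred), 4))
```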
We resorted to PLUE (Portuguese Language Understanding Evaluation), a data set obtained by automatically translating GLUE into PT-BR. We address four of the tasks in PLUE, namely RTE, WNLI, MRPC and STS-B:
| Model | RTE (Accuracy) | WNLI (Accuracy) | MRPC (F1) | STS-B (Pearson) |
|---|---|---|---|---|
| Albertina-PT-BR | 0.7545 | 0.4601 | 0.9071 | 0.8910 |
| BERTimbau-large | 0.6546 | 0.5634 | 0.8870 | 0.8842 |
| Albertina-PT-PT | 0.7960 | 0.4507 | 0.9151 | 0.8799 |
We resorted to GLUE-PT, a PT-PT version of the GLUE benchmark. We automatically translated the same four tasks from GLUE using DeepL Translate, which specifically offers translation from English to PT-PT as an option.
| Model | RTE (Accuracy) | WNLI (Accuracy) | MRPC (F1) | STS-B (Pearson) |
|---|---|---|---|---|
| Albertina-PT-PT | 0.8339 | 0.4225 | 0.9171 | 0.8801 |
| Albertina-PT-BR | 0.7942 | 0.4085 | 0.9048 | 0.8847 |
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='PORTULAN/albertina-ptbr')
>>> unmasker("A culinária brasileira é rica em sabores e [MASK], tornando-se um dos maiores patrimônios do país.")
[{'score': 0.6145166158676147, 'token': 23395, 'token_str': 'aromas',
  'sequence': 'A culinária brasileira é rica em sabores e aromas, tornando-se um dos maiores patrimônios do país.'},
 {'score': 0.1720353364944458, 'token': 21925, 'token_str': 'cores',
  'sequence': 'A culinária brasileira é rica em sabores e cores, tornando-se um dos maiores patrimônios do país.'},
 {'score': 0.1438736468553543, 'token': 10392, 'token_str': 'costumes',
  'sequence': 'A culinária brasileira é rica em sabores e costumes, tornando-se um dos maiores patrimônios do país.'},
 {'score': 0.02997930906713009, 'token': 117371, 'token_str': 'cultura',
  'sequence': 'A culinária brasileira é rica em sabores e cultura, tornando-se um dos maiores patrimônios do país.'},
 {'score': 0.015540072694420815, 'token': 22647, 'token_str': 'nuances',
  'sequence': 'A culinária brasileira é rica em sabores e nuances, tornando-se um dos maiores patrimônios do país.'}]
```
The model can be used by fine-tuning it for a specific task:
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
>>> from datasets import load_dataset
>>> model = AutoModelForSequenceClassification.from_pretrained("PORTULAN/albertina-ptbr", num_labels=2)
>>> tokenizer = AutoTokenizer.from_pretrained("PORTULAN/albertina-ptbr")
>>> dataset = load_dataset("PORTULAN/glue-ptpt", "rte")
>>> def tokenize_function(examples):
...     return tokenizer(examples["sentence1"], examples["sentence2"], padding="max_length", truncation=True)
>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
>>> training_args = TrainingArguments(output_dir="albertina-ptbr-rte", evaluation_strategy="epoch")
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_datasets["train"],
...     eval_dataset=tokenized_datasets["validation"],
... )
>>> trainer.train()
```
When using or citing this model, kindly cite the following publication:

```bibtex
@misc{albertina-pt,
  title={Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*},
  author={João Rodrigues and Luís Gomes and João Silva and António Branco and Rodrigo Santos and Henrique Lopes Cardoso and Tomás Osório},
  year={2023},
  eprint={2305.06721},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
The research reported here was partially supported by:

- PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016;
- the research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022;
- the innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629 of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização;
- and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.