Model:
bowphs/LaBerta
The paper Exploring Large Language Models for Classical Philology is the first effort to systematically provide state-of-the-art language models for Classical Philology. LaBerta is a RoBERTa-base-sized, monolingual, encoder-only variant.
This model was trained on the Corpus Corporum.
Further information can be found in our paper or in our GitHub repository.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('bowphs/LaBerta')
model = AutoModelForMaskedLM.from_pretrained('bowphs/LaBerta')
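To sanity-check the pre-trained model, you can query the masked-LM head directly with the `fill-mask` pipeline. A minimal sketch; the Latin example sentence is illustrative and not taken from the paper:

```python
from transformers import pipeline

# Load LaBerta into a fill-mask pipeline (tokenizer is loaded automatically).
fill_mask = pipeline('fill-mask', model='bowphs/LaBerta')

# Use the tokenizer's own mask token rather than hard-coding '<mask>'.
sentence = f'Gallia est omnis divisa in partes {fill_mask.tokenizer.mask_token}.'

# Print the top predicted fillers with their scores.
for prediction in fill_mask(sentence):
    print(prediction['token_str'], prediction['score'])
```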
Please check out the awesome Hugging Face tutorials on how to fine-tune our models.
When fine-tuned on PoS data from EvaLatin 2022, LaBerta achieves the following results; a fine-tuning sketch follows the table.
| Task | Classical | Cross-genre | Cross-time |
|---|---|---|---|
| PoS tagging | 98.11 | 96.73 | 93.33 |
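For reference, here is a minimal token-classification fine-tuning sketch using the `Trainer` API. The toy sentence, label set, and hyperparameters are placeholders, not the EvaLatin 2022 setup from the paper:

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)

# Hypothetical toy data standing in for real PoS-annotated Latin.
examples = {
    'tokens': [['Gallia', 'est', 'omnis', 'divisa']],
    'pos_tags': [[0, 1, 2, 3]],  # integer-encoded labels into label_list
}
label_list = ['PROPN', 'AUX', 'DET', 'VERB']

# RoBERTa-style tokenizers need add_prefix_space=True for pre-split words.
tokenizer = AutoTokenizer.from_pretrained('bowphs/LaBerta', add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    'bowphs/LaBerta', num_labels=len(label_list))

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level tags to subwords,
    # masking special tokens and subword continuations with -100.
    enc = tokenizer(batch['tokens'], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(batch['pos_tags']):
        previous = None
        labels = []
        for word_id in enc.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous = word_id
        all_labels.append(labels)
    enc['labels'] = all_labels
    return enc

dataset = Dataset.from_dict(examples).map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='laberta-pos', num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer=tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```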
If you have any questions or problems, feel free to reach out.
```bibtex
@incollection{riemenschneiderfrank:2023,
  address   = "Toronto, Canada",
  author    = "Riemenschneider, Frederick and Frank, Anette",
  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)",
  note      = "to appear",
  publisher = "Association for Computational Linguistics",
  title     = "Exploring Large Language Models for Classical Philology",
  url       = "https://arxiv.org/abs/2305.13698",
  year      = "2023",
}
```