iarfmoose/roberta-base-bulgarian

RoBERTa-base-bulgarian

The RoBERTa model was originally introduced in the paper "RoBERTa: A Robustly Optimized BERT Pretraining Approach" (Liu et al., 2019, https://arxiv.org/abs/1907.11692). This is a version of RoBERTa-base pretrained on Bulgarian text.

Intended uses

This model can be used for cloze tasks (masked language modeling) or fine-tuned on other tasks in Bulgarian.
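As a quick sanity check, the model can be loaded with the 🤗 Transformers fill-mask pipeline. This is a minimal sketch: the example sentence is illustrative, and it assumes the tokenizer uses RoBERTa's standard `<mask>` token.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-base-bulgarian")

# "Това е <mask> изречение." = "This is a <mask> sentence."
for prediction in fill_mask("Това е <mask> изречение."):
    print(prediction["token_str"], round(prediction["score"], 3))
```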

Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.

Training data

This model was trained on the following data:

Training procedure

The model was pretrained using a masked language-modeling objective with dynamic masking, as described in the RoBERTa paper.

It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
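The card does not include the training script, but the sketch below shows one plausible way to reproduce this setup with 🤗 Transformers, where `DataCollatorForLanguageModeling` re-samples the masked positions on every batch (dynamic masking). The tokenizer path and corpus file are hypothetical placeholders; the batch size and step count come from the card.

```python
from datasets import load_dataset
from transformers import (
    RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

# Hypothetical paths: the card does not specify the tokenizer or corpus files.
tokenizer = RobertaTokenizerFast.from_pretrained("./bulgarian-tokenizer")
model = RobertaForMaskedLM(RobertaConfig(vocab_size=tokenizer.vocab_size))

dataset = load_dataset("text", data_files={"train": "bulgarian_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

# Dynamic masking: masked positions are re-drawn each time a batch is collated.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=8,  # batch size of 8, per the card
        max_steps=200_000,              # 200k steps, per the card
    ),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```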