In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open-sources a German Europeana ConvBERT model!
We use the open-source Europeana newspapers that were provided by The European Library. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in this repository.
For results on Historic NER, please refer to this repository.
With Transformers >= 4.3 our German Europeana ConvBERT model can be loaded as follows:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/convbert-base-german-europeana-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
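As a quick sanity check, the loaded tokenizer and model can be run on a short German sentence to obtain contextual embeddings. This is a minimal sketch; the example sentence is an arbitrary illustration:

```python
import torch

# Tokenize a short German sentence (any text works here)
inputs = tokenizer("Die Bayerische Staatsbibliothek ist in München.", return_tensors="pt")

# Forward pass without gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden state: one vector per (sub)token, shape (1, seq_len, hidden_size)
print(outputs.last_hidden_state.shape)
```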
All other German Europeana models are available on the Hugging Face model hub.
For questions about our Europeana BERT, ELECTRA and ConvBERT models, just open a new discussion here!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the Hugging Face team, it is possible to download both cased and uncased models from their S3 storage!