Model:
aubmindlab/bert-base-arabert
AraBERT is an Arabic pretrained language model based on Google's BERT architecture. AraBERT uses the same BERT-Base configuration. More details are available in the AraBERT Paper and in the AraBERT Meetup.
There are two versions of the model, AraBERTv0.1 and AraBERTv1, the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the Farasa Segmenter.
We evaluate AraBERT models on different downstream tasks and compare them to mBERT and other state-of-the-art models (to the best of our knowledge). The tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD.
AraBERT now comes in 4 new variants to replace the old v1 versions:
More details are available in the AraBERT folder, in the README, and in the AraBERT Paper.
Model | HuggingFace Model Name | Size (MB/Params) | Pre-Segmentation | Dataset (Sentences/Size/nWords)
---|---|---|---|---
AraBERTv0.2-base | bert-base-arabertv02 | 543MB / 136M | No | 200M / 77GB / 8.6B
AraBERTv0.2-large | bert-large-arabertv02 | 1.38GB / 371M | No | 200M / 77GB / 8.6B
AraBERTv2-base | bert-base-arabertv2 | 543MB / 136M | Yes | 200M / 77GB / 8.6B
AraBERTv2-large | bert-large-arabertv2 | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B
AraBERTv0.1-base | bert-base-arabertv01 | 543MB / 136M | No | 77M / 23GB / 2.7B
AraBERTv1-base | bert-base-arabert | 543MB / 136M | Yes | 77M / 23GB / 2.7B
All models are available in the HuggingFace model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
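For example, a PyTorch checkpoint can be loaded directly with the transformers library; this is a minimal sketch using bert-base-arabertv02 from the table above:

```python
from transformers import AutoTokenizer, AutoModel

# Any checkpoint from the table works; bert-base-arabertv02 is shown here.
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02")
model = AutoModel.from_pretrained("aubmindlab/bert-base-arabertv02")
```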
We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
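As an illustration, the spacing fix can be approximated with a couple of regular expressions; this is a hedged sketch of the idea, not the library's exact implementation:

```python
import re

def space_out(text: str) -> str:
    """Illustrative sketch of the vocabulary fix described above:
    detach punctuation and numbers from surrounding words."""
    # Put spaces around punctuation (any non-word, non-space character)
    text = re.sub(r"([^\w\s])", r" \1 ", text)
    # Separate digit runs from adjacent characters
    text = re.sub(r"(\d+)", r" \1 ", text)
    # Collapse repeated whitespace
    return re.sub(r"\s+", " ", text).strip()

print(space_out("صفقة2020,تمت!"))  # -> "صفقة 2020 , تمت !"
```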
The new vocabulary was learned using the BertWordPieceTokenizer from the tokenizers library, and should now support the Fast tokenizer implementation from the transformers library.
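A minimal sketch of learning such a vocabulary with the tokenizers library follows; the corpus path and vocab_size are illustrative assumptions, not the exact settings used for AraBERT:

```python
from tokenizers import BertWordPieceTokenizer

# Sketch only: corpus path and vocab_size are assumptions.
tokenizer = BertWordPieceTokenizer(lowercase=False)
tokenizer.train(files=["arabic_corpus.txt"], vocab_size=64000)
tokenizer.save_model(".")  # writes vocab.txt to the current directory
```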
P.S.: All the old BERT code should work with the new models; just change the model name and check the new preprocessing function. Please read the section below on how to use the preprocessing function.
We used ~3.5 times more data and trained for longer. For dataset sources, see the Dataset section.
Model | Hardware | Num of Examples (seq len 128 / 512) | 128 (Batch Size / Num of Steps) | 512 (Batch Size / Num of Steps) | Total Steps | Total Time (in Days)
---|---|---|---|---|---|---
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4
The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA .
The dataset consists of 77GB, or 200,095,961 lines, or 8,655,948,860 words, or 82,232,988,358 characters (before applying Farasa segmentation).
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1, but without the websites that we previously crawled.
It is recommended to apply our preprocessing function before training or testing on any dataset. Install farasapy to segment text for AraBERT v1 & v2: `pip install farasapy`
```python
from arabert.preprocess import ArabertPreprocessor

model_name = "bert-base-arabert"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>> "و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```
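The segmented output can then be tokenized with the matching checkpoint; this is a minimal continuation of the snippet above:

```python
from transformers import AutoTokenizer

# Tokenize the segmented output with the matching (v1) checkpoint.
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
tokens = tokenizer.tokenize(arabert_prep.preprocess(text))
print(tokens)
```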
Accepted model names are:

- bert-base-arabertv01
- bert-base-arabert
- bert-base-arabertv02
- bert-base-arabertv2
- bert-large-arabertv02
- bert-large-arabertv2
- araelectra-base
- aragpt2-base
- aragpt2-medium
- aragpt2-large
- aragpt2-mega
The TF1.x models are available in the HuggingFace models repo. You can download them as follows:
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where MODEL_NAME is any model under the aubmindlab name
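As a sketch, the extracted TF1 checkpoint can be converted to PyTorch with the transformers library; the file names inside the archive are assumptions here:

```python
from transformers import BertConfig, BertForPreTraining

# Sketch only: paths are assumptions about the extracted archive layout.
config = BertConfig.from_json_file("./MODEL_NAME/config.json")
model = BertForPreTraining.from_pretrained(
    "./MODEL_NAME/model.ckpt.index", from_tf=True, config=config
)
model.save_pretrained("./MODEL_NAME-pytorch")
```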
Google Scholar has our BibTeX entry wrong (missing name); use this instead:
```bibtex
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks also to the AUB MIND Lab members for the continuous support, and to Yakshof and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
Wissam Antoun : Linkedin | Twitter | Github | wfa07@mail.aub.edu | wissam.antoun@gmail.com
Fady Baly : Linkedin | Twitter | Github | fgb06@mail.aub.edu | baly.fady@gmail.com