This repository was created as part of the Flax/JAX community week organized by Hugging Face. The aim of this project is to pretrain a GPT-2 language model specifically for the Tamil language.
To set up the project, run the following command:
pip install -r requirements.txt
The model is pretrained on Tamil text using a causal language modeling (CLM) objective, i.e., it learns to predict the next token given all preceding tokens.
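As a quick illustration of the CLM objective (not the project's actual Flax training script), the sketch below shows how the loss is computed with the transformers API: the labels are simply the input ids, and the model shifts them internally so that each position predicts the next token. The checkpoint name and sample sentence are placeholders.

```python
# Minimal sketch of the causal language modeling (CLM) objective with the
# transformers PyTorch API; illustrative only, not the project's training code.
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("ஒரு ஊரிலே ஒரு காக்கைக்கு", return_tensors="pt")

# For CLM the labels are the input ids themselves; the model shifts them
# internally so that each position is trained to predict the *next* token.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # cross-entropy over next-token predictions
```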
The GPT-2 model is trained on the Tamil portion of the OSCAR dataset and the Tamil portion of the IndicNLP corpus.
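For reference, the Tamil OSCAR split can be loaded with the datasets library as sketched below. The OSCAR config name is the standard Tamil subset; the IndicNLP corpus is distributed as plain text files, so the file path shown is only an illustrative assumption.

```python
# Sketch of loading the training corpora; the IndicNLP file path is assumed.
from datasets import load_dataset

oscar_ta = load_dataset("oscar", "unshuffled_deduplicated_ta", split="train")
indicnlp_ta = load_dataset("text", data_files={"train": "data/ta/ta.txt"}, split="train")

print(oscar_ta[0]["text"][:200])  # peek at the first document
```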
You can use the raw model for text generation, but it is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
To perform training, run the following steps (a sketch of the first two preparation steps is shown after the commands):
>>> export MODEL_DIR=<model_dir>
>>> python src/create_config.py
>>> python src/train_tokenizer.py
>>> bash scripts/train_gpt2-oscar-tamil.sh
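The first two steps above roughly correspond to creating a GPT-2 configuration and training a byte-level BPE tokenizer on the Tamil corpus. The sketch below shows one way to do this; the vocabulary size, file paths, and other hyperparameters are assumptions, not necessarily what src/create_config.py and src/train_tokenizer.py use.

```python
# Illustrative sketch of the configuration and tokenizer preparation steps;
# the actual scripts in src/ may differ in hyperparameters and paths.
from transformers import GPT2Config
from tokenizers import ByteLevelBPETokenizer

model_dir = "gpt-2-tamil"  # corresponds to $MODEL_DIR

# 1. Create and save a GPT-2 (small) configuration for the new model.
config = GPT2Config.from_pretrained("gpt2", vocab_size=50257)
config.save_pretrained(model_dir)

# 2. Train a byte-level BPE tokenizer on the Tamil corpus and save it.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["data/ta/ta.txt"],  # assumed location of the raw corpus
    vocab_size=50257,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save(f"{model_dir}/tokenizer.json")
```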
To perform language generation with the model, the transformers pipeline can be used directly. First, convert the Flax checkpoint to PyTorch:
python src/convert_flax_to_pytorch.py
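A sketch of what that conversion step amounts to is shown below; the actual src/convert_flax_to_pytorch.py may differ. `from_flax=True` is a standard transformers argument that loads Flax weights into the equivalent PyTorch model.

```python
# Sketch of converting the Flax checkpoint to PyTorch; illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("abinayam/gpt-2-tamil", from_flax=True)
tokenizer = AutoTokenizer.from_pretrained("abinayam/gpt-2-tamil")

# Save PyTorch weights alongside the tokenizer so the pipeline can load them.
model.save_pretrained("gpt-2-tamil")
tokenizer.save_pretrained("gpt-2-tamil")
```

With the PyTorch weights available, text can then be generated with the pipeline: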
>>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed
>>> model_name = 'abinayam/gpt-2-tamil'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> set_seed(42)
>>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு"
>>> max_len = 300
>>> no_seq = 5
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq)
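The generator returns a list of num_return_sequences dictionaries, each with a 'generated_text' key holding the prompt followed by the generated continuation.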