Model:
AlexWortega/taskGPT2-xl-v0.2a
I finetuned GPT-2 on text2code, chain-of-thought (CoT), math, and FLAN tasks; on some tasks it performs better than GPT-JT.
```python
from transformers import pipeline

pipe = pipeline(model='AlexWortega/taskGPT2-xl')
pipe('''"I love this!" Is it positive? A:''')
```
or
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/taskGPT2-xl")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/taskGPT2-xl")
```
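With the tokenizer and model loaded as above, a minimal generation sketch might look like the following (the prompt and decoding settings are illustrative, not the exact ones used for the model):

```python
# Minimal generation sketch using the tokenizer and model loaded above.
# The prompt and max_new_tokens value are illustrative assumptions.
inputs = tokenizer('''"I love this!" Is it positive? A:''', return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```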
The weights of taskGPT2-xl are licensed under version 2.0 of the Apache License.
I used datasets from the Hugging Face Hub:
I used Novograd with a learning rate of 2e-5 and a global batch size of 6 (3 per data-parallel worker), training with both data parallelism and pipeline parallelism. During training, input sequences are truncated to 512 tokens, and sequences shorter than 512 tokens are concatenated into one long sequence to improve data efficiency. A packing sketch is shown below.
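The following is a minimal sketch of the sequence-packing step described above (a hypothetical helper, not the exact training code): short tokenized examples are concatenated and split into fixed 512-token blocks.

```python
# Hypothetical packing helper: concatenate token-id lists and yield
# fixed-size blocks of block_size tokens (the last partial block is dropped).
def pack_sequences(tokenized_examples, block_size=512):
    buffer = []
    for ids in tokenized_examples:
        buffer.extend(ids[:block_size])  # truncate overly long inputs
        while len(buffer) >= block_size:
            yield buffer[:block_size]
            buffer = buffer[block_size:]

# Example: three short "sequences" (300 + 300 + 100 tokens) pack into one block.
examples = [[1] * 300, [2] * 300, [3] * 100]
blocks = list(pack_sequences(examples))
print(len(blocks), len(blocks[0]))  # 1 512
```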
# Metrics
SOON
# Citation

```bibtex
@article{taskgpt2xl2023,
  title={GPT2xl is underrated task solver},
  author={Nickolich Aleksandr and Karina Romanova and Arseniy Shahmatov and Maksim Gersimenko},
  year={2023}
}
```