Model:
theblackcat102/pythia-1.4b-deduped-sft-r2
This model was supervised fine-tuned exclusively on data collected through the Open Assistant crowd-sourcing platform.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "theblackcat102/pythia-1.4b-deduped-sft-r2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding=True below
model = AutoModelForCausalLM.from_pretrained(model_name).half().eval().cuda()

input_text = """<|startoftoken|>system
You are a helpful assistant<|endoftoken|><|startoftoken|>human
What's the population of the earth?<|endoftoken|><|startoftoken|>assistant
"""

inputs = tokenizer(input_text, return_tensors="pt", padding=True).to(0)
outputs = model.generate(
    **inputs,
    early_stopping=True,
    max_new_tokens=128,  # was args.max_new_tokens; fixed value so the snippet runs standalone
    do_sample=True,
    top_k=50,            # was args.top_k
    temperature=0.7,     # was args.temperature
    pad_token_id=tokenizer.eos_token_id,  # dialogue_collator.py line 36
)
output = tokenizer.decode(outputs[0], truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
print(output)
```
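For repeated queries it can help to wrap the prompt construction in a small helper. This is a minimal sketch, not part of the model's repository: the `chat` function, its sampling defaults, and the output slicing are illustrative, and it reuses the `model` and `tokenizer` loaded above together with the `<|startoftoken|>`/`<|endoftoken|>` format shown in the example.

```python
def chat(model, tokenizer, question, system="You are a helpful assistant", max_new_tokens=128):
    # Build a prompt in the same <|startoftoken|>role ... <|endoftoken|> format as above,
    # ending with the assistant tag so the model continues as the assistant.
    prompt = (
        f"<|startoftoken|>system\n{system}<|endoftoken|>"
        f"<|startoftoken|>human\n{question}<|endoftoken|>"
        f"<|startoftoken|>assistant\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_k=50,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(chat(model, tokenizer, "What's the population of the earth?"))
```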
The model was fine-tuned with the Open Assistant SFT trainer:

```bash
deepspeed trainer_sft.py --configs defaults pythia-1-4b-ost --deepspeed
```
This model was trained for 200 iterations; beyond that point the accuracy started to drop and the loss began to increase, which is a sign of overfitting.
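If you re-run the fine-tuning, one way to guard against this is to stop automatically once the eval loss stops improving. A minimal sketch assuming a standard Hugging Face `Trainer` setup (which the Open Assistant trainer builds on); `model`, `train_dataset`, and `eval_dataset` are assumed to be defined elsewhere, and the argument values are illustrative:

```python
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=100,                   # matches eval_steps in the config below
    save_steps=100,
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # assumed: your tokenized SFT dataset
    eval_dataset=eval_dataset,    # assumed: held-out validation split
    # Stop if eval loss fails to improve for 3 consecutive evaluations.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```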
The full training configuration:

```yaml
defaults:
  learning_rate: 1e-5
  gradient_checkpointing: false
  gradient_accumulation_steps: 32
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 2
  weight_decay: 0.00
  warmup_steps: 600
  eval_steps: 250
  save_steps: 250
  max_length: 512
  num_train_epochs: 2
  logging_steps: 10
  max_grad_norm: 2.0
  save_total_limit: 4
  fp16: true
  eval_accumulation_steps:
  freeze_layer:
  datasets:
    - oa_private:
        data_path: .cache
        split: sft
        val_split: 0.01
        fraction: 1
        file: 2023-02-26_oasst_default.jsonl
  cache_dir: .cache
  loss_fn: CrossEntropyLoss
  eval_size:
  log_dir: "base"
  quantization: false
  seq2seqmodel: false
  poly_eps: 1.0
  fuse_gelu: false
  log_wandb: true
  samples_mixing: true # uses collator that mixes samples in the batch to create a single sample with possible multiple tasks within
  verbose: false

pythia-1-4b-ost:
  learning_rate: 1e-6
  model_name: EleutherAI/pythia-1.4b-deduped
  weight_decay: 0.01
  max_length: 1024
  warmup_steps: 100
  gradient_checkpointing: false
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 5
  per_device_eval_batch_size: 6
  eval_steps: 100
  save_steps: 100
  num_train_epochs: 50
  save_total_limit: 4
```
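The `--configs defaults pythia-1-4b-ost` arguments select sections of this file, with later sections overriding keys from earlier ones (e.g. `learning_rate` drops from 1e-5 to 1e-6). A rough sketch of that merge logic; the `load_config` helper and file path are illustrative, not the repository's actual code:

```python
import yaml

def load_config(path, sections):
    # Merge the requested YAML sections left to right, so keys in later
    # sections (pythia-1-4b-ost) override the earlier ones (defaults).
    with open(path) as f:
        all_sections = yaml.safe_load(f)
    merged = {}
    for name in sections:
        merged.update(all_sections[name])
    return merged

config = load_config("configs/config.yaml", ["defaults", "pythia-1-4b-ost"])
print(config["learning_rate"])  # 1e-6: the model-specific value overrides 1e-5
```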
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
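Alternatively, emissions can be measured during the run itself with a tracker such as codecarbon. This is a minimal sketch, not something used in the original training setup; the project name is illustrative and `trainer` is any training loop such as the one sketched above:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="pythia-1.4b-sft")  # project name is illustrative
tracker.start()
try:
    trainer.train()  # the fine-tuning run (any training code works here)
finally:
    emissions = tracker.stop()  # returns estimated emissions in kg CO2-equivalent
    print(f"Estimated emissions: {emissions:.4f} kg CO2eq")
```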