Model:
stanford-crfm/levanter-backpack-1b
This is a 1.4B-parameter version of the Backpack architecture, intended to combine strong modeling performance with an interface for interpretability and control.
This model was trained on the OpenWebText corpus.
This model was trained for 450k gradient steps with a cosine-decaying learning rate from 1e-4 to zero and a linear warmup of 5k steps.
This model is a Backpack language model, trained to minimize the cross-entropy loss.
This model was trained with Levanter and JAX.
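For reference, the learning-rate schedule described above (linear warmup for 5k steps to a peak of 1e-4, then cosine decay to zero over the remaining 450k gradient steps) can be written as a short function. This is only an illustrative sketch of the stated schedule, not the exact training configuration:

import math

PEAK_LR = 1e-4
WARMUP_STEPS = 5_000
TOTAL_STEPS = 450_000

def learning_rate(step: int) -> float:
    """Linear warmup to the peak LR, then cosine decay to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * PEAK_LR * (1.0 + math.cos(math.pi * progress))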
Please install transformers, safetensors, and torch to use this model.
pip install transformers safetensors torch
Run the following Python code:
import torch
import transformers
from transformers import AutoModelForCausalLM

model_id = "stanford-crfm/levanter-backpack-1b"

# The Backpack architecture is provided as custom code in the model repo,
# so trust_remote_code=True is required.
config = transformers.AutoConfig.from_pretrained(model_id, trust_remote_code=True)
torch_model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, trust_remote_code=True
)
torch_model.eval()

# Random token IDs as a dummy input: batch size 1, sequence length 512.
input_ids = torch.randint(0, 50264, (1, 512), dtype=torch.long)
torch_out = torch_model(input_ids, position_ids=None)

# Convert logits to per-position probability distributions over the vocabulary.
torch_out = torch.nn.functional.softmax(torch_out.logits, dim=-1)
print(torch_out.shape)
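The printed shape should be (batch, sequence length, vocabulary size), i.e. one probability distribution per input position. As a further illustration, the snippet below continues from the code above and sketches text generation. It assumes the model repo provides a GPT-2-style tokenizer loadable via AutoTokenizer and that the custom Backpack model class supports generate(); please verify both on the model page before relying on this:

from transformers import AutoTokenizer

# Assumption: the repo ships a compatible tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

prompt = "The Backpack architecture is"
inputs = tokenizer(prompt, return_tensors="pt")

# Assumption: the remote-code model class inherits generation support from transformers.
with torch.no_grad():
    output_ids = torch_model.generate(
        inputs["input_ids"],
        max_new_tokens=32,
        do_sample=True,
        top_p=0.95,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))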