Model:
pszemraj/bart-base-instructiongen-w-inputs
Use this text2text model to find out what LLM instruction (and inputs, if relevant) might have generated <arbitrary input text>!
This model is a fine-tuned version of facebook/bart-base on the pszemraj/fleece2instructions-inputs-alpaca-cleaned dataset.
It achieves the results shown in the table at the end of this card on the evaluation set.
This model is intended to be used to generate instructions from arbitrary text. You can then use these instructions + your data to fine-tune an LLM on instructions w.r.t. a specific domain. This model is primarily intended to enable low-resource domain adaptation, rather than "I want to generate even better prompts for the FLAN-V2 dataset!".
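As a sketch of that workflow, a generated instruction (and inputs) pair plus your own domain text can be packed into an Alpaca-style JSONL record for fine-tuning. The record layout mirrors the common Alpaca format, and all field values below are invented for illustration:

```python
import json

# A generated (instruction, inputs) pair plus your own domain text can be
# assembled into an Alpaca-style record for instruction tuning.
# All values here are invented for illustration.
record = {
    "instruction": "Summarize the maintenance report in one sentence.",
    "input": "Pump 4 was serviced on 2023-03-01; bearings were replaced.",
    "output": "Pump 4 received a bearing replacement during its March service.",
}

# Serialize one record per line (JSONL), a common format for fine-tuning data.
line = json.dumps(record)
print(line)
```

Repeating this over a corpus of domain documents yields an instruction-tuning dataset without manual prompt writing.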
The fleece2instructions-inputs-alpaca-cleaned dataset, obtained from the alpaca-lora repo under the ODC-BY license, has been converted to a text2text format for use with language models. In this dataset, the original 'inputs' and 'instructions' columns are combined into a single 'instructions_inputs' column. To clearly separate the two types of content, each piece of text is prefixed with either an <instruction> or <inputs> token. These tokens not only facilitate model comprehension, but also allow for easy regex separation of model outputs during inference.
As such, users can expect the output of this model to be similarly structured with <instruction> and <inputs> tokens.
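Pulling the two segments apart at inference time only needs a small regular expression keyed on those tokens. A minimal sketch in Python (the generated string here is invented for illustration):

```python
import re

# Example model output in the documented format: an <instruction> segment
# optionally followed by an <inputs> segment (the text itself is illustrative).
generated = (
    "<instruction> Summarize the following article in one sentence. "
    "<inputs> The quick brown fox jumps over the lazy dog."
)

def split_output(text: str):
    """Split generated text into (instruction, inputs) on the
    <instruction>/<inputs> prefix tokens; inputs may be absent."""
    match = re.match(
        r"\s*<instruction>\s*(?P<instruction>.*?)\s*(?:<inputs>\s*(?P<inputs>.*))?$",
        text,
        flags=re.DOTALL,
    )
    if match is None:
        return None, None
    return match.group("instruction"), match.group("inputs")

instruction, inputs = split_output(generated)
print(instruction)  # Summarize the following article in one sentence.
print(inputs)       # The quick brown fox jumps over the lazy dog.
```

When the model emits only an instruction (no `<inputs>` segment), the second element comes back as `None`.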
This is just the base model; for better performance (at the cost of slower, more compute-intensive inference), see the bart-large version. Further exploration/data may lead to even better models!
Refer to the fleece2instructions-inputs-alpaca-cleaned dataset.
The following hyperparameters were used during training:

Training results:

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 1.1147 | 1.0 | 680 | 0.9901 | 61.8451 | 38.8293 | 58.3372 | 59.8658 | 25.2401 |
| 0.9565 | 2.0 | 1360 | 0.9579 | 62.3604 | 39.5109 | 58.8843 | 60.4494 | 24.9917 |