Model: Qiliang/distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum

Task: text-to-text generation (text2text-generation)

License: apache-2.0

This model is a fine-tuned version of sshleifer/distilbart-xsum-12-3 on an unknown dataset. Its results on the evaluation set are reported in the training results table below.
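A minimal usage sketch, assuming the `transformers` library is installed; the example dialogue is invented purely for illustration:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
summarizer = pipeline(
    "summarization",
    model="Qiliang/distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum",
)

# Made-up customer-support dialogue used only to illustrate the call.
dialogue = (
    "Customer: My order arrived damaged, what can I do? "
    "Agent: Sorry to hear that! Please send us a photo and we will ship a replacement."
)

# max_length roughly matches the ~17-token generations reported in the table below.
print(summarizer(dialogue, max_length=32, min_length=5)[0]["summary_text"])
```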
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
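The hyperparameter list itself is not reproduced in this card. The sketch below only illustrates the kind of `Seq2SeqTrainingArguments` setup typically used for such a fine-tune; every value is a placeholder assumption (apart from the three epochs visible in the results table), not the model's actual settings.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical configuration sketch: the card's actual hyperparameter values
# are not listed, so the values below are placeholders, not the real settings.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-xsum-12-3-finetuned",  # placeholder path
    learning_rate=2e-5,                           # placeholder
    per_device_train_batch_size=8,                # placeholder
    per_device_eval_batch_size=8,                 # placeholder
    num_train_epochs=3,                           # matches the 3 epochs shown in the table below
    eval_strategy="epoch",                        # evaluate once per epoch, as the per-epoch table suggests
                                                  # (named evaluation_strategy in older transformers versions)
    predict_with_generate=True,                   # generate summaries during eval so ROUGE can be computed
)
```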
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 397  | 2.8069          | 42.233  | 23.7538 | 39.2701 | 39.2701   | 17.0    |
| 2.8673        | 2.0   | 794  | 2.7736          | 48.2389 | 29.6927 | 43.5004 | 43.5004   | 17.4    |
| 1.8043        | 3.0   | 1191 | 2.7952          | 45.7353 | 29.1566 | 45.8429 | 45.7353   | 16.6    |
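ROUGE scores of this kind are typically computed with the `evaluate` library's `rouge` metric; a short sketch, assuming the `evaluate` and `rouge_score` packages are installed and using made-up predictions and references:

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative strings only; in practice these would be model outputs and gold summaries.
predictions = ["the customer will receive a replacement for the damaged order"]
references = ["customer gets a replacement for a damaged order"]

scores = rouge.compute(predictions=predictions, references=references)
# Returns rouge1, rouge2, rougeL and rougeLsum F-measures in [0, 1];
# tables like the one above usually report them scaled to percentages.
print(scores)
```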