Model: philschmid/tiny-bert-sst2-distilled
Task: Text Classification
License: apache-2.0

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on the glue dataset (SST-2). Its results on the evaluation set are reported per epoch in the training results table below.
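A minimal usage sketch (not part of the original card), assuming the model is loaded through the transformers text-classification pipeline; the example sentence and the returned label names are illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned student model for SST-2-style sentiment classification.
classifier = pipeline(
    "text-classification",
    model="philschmid/tiny-bert-sst2-distilled",
)

# Returns a list like [{"label": ..., "score": ...}]; the exact label names
# come from the model's config and are not documented in this card.
print(classifier("a charming and often affecting journey"))
```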
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
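The concrete hyperparameter values are not listed in this card. The sketch below is a generic fine-tuning setup, not the author's distillation recipe: only the epoch count (7) is taken from the results table, the batch size and learning rate are placeholders, and the knowledge-distillation loss implied by the model name is omitted.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Student checkpoint named in this card; the distillation teacher is not documented.
student_id = "google/bert_uncased_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(student_id)
model = AutoModelForSequenceClassification.from_pretrained(student_id, num_labels=2)

# GLUE SST-2: single-sentence binary sentiment classification.
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True),
    batched=True,
)

args = TrainingArguments(
    output_dir="tiny-bert-sst2",
    num_train_epochs=7,               # matches the 7 epochs in the results table
    per_device_train_batch_size=128,  # placeholder, not a documented value
    learning_rate=5e-5,               # placeholder, not a documented value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
print(trainer.evaluate())  # reports eval_loss; add compute_metrics for accuracy
```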
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.77          | 1.0   | 66   | 1.6939          | 0.8165   |
| 0.729         | 2.0   | 132  | 1.5090          | 0.8326   |
| 0.5242        | 3.0   | 198  | 1.5369          | 0.8257   |
| 0.4017        | 4.0   | 264  | 1.7025          | 0.8326   |
| 0.327         | 5.0   | 330  | 1.6743          | 0.8245   |
| 0.2749        | 6.0   | 396  | 1.7305          | 0.8337   |
| 0.2521        | 7.0   | 462  | 1.7305          | 0.8326   |
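As a sanity check, the accuracy column above can be approximated by re-scoring the released model on the GLUE SST-2 validation split. This is a rough sketch, not the evaluation script behind the table, so small differences from the reported numbers are expected.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "philschmid/tiny-bert-sst2-distilled"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

validation = load_dataset("glue", "sst2", split="validation")

correct = 0
for batch in validation.iter(batch_size=32):
    inputs = tokenizer(
        batch["sentence"], padding=True, truncation=True, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    # Compare integer class predictions against the dataset labels directly,
    # which avoids relying on the model's label-name mapping.
    preds = logits.argmax(dim=-1)
    correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"validation accuracy: {correct / len(validation):.4f}")
```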