Model: Qiliang/bart-large-cnn-samsum-ElectrifAi_v13

bart-large-cnn-samsum-ElectrifAi_v13

This model is a fine-tuned version of philschmid/bart-large-cnn-samsum on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.9198
  • Rouge1: 46.6683
  • Rouge2: 22.3077
  • Rougel: 37.436
  • Rougelsum: 44.2797
  • Gen Len: 86.6

Model description

More information needed

Intended uses & limitations

More information needed
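The card gives no intended-use details, so the snippet below is only a starting point: a minimal sketch, assuming the standard transformers summarization pipeline, with a made-up dialogue in the SAMSum style.

```python
# A minimal sketch, assuming the standard transformers summarization
# pipeline; the sample dialogue is made up for illustration.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Qiliang/bart-large-cnn-samsum-ElectrifAi_v13",
)

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)

print(summarizer(dialogue)[0]["summary_text"])
```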

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
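As a rough guide, this is how the values above map onto Transformers' Seq2SeqTrainingArguments. The output directory, evaluation strategy, and predict_with_generate flag are assumptions not stated in the card; the Adam settings and linear schedule listed above are the Trainer defaults in this version.

```python
# A sketch of the listed hyperparameters; output_dir, evaluation_strategy,
# and predict_with_generate are assumptions not stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-samsum-ElectrifAi_v13",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption: the card reports per-epoch eval
    predict_with_generate=True,   # needed to compute ROUGE on generated text
)
```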

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 10   | 3.0963          | 46.6161 | 24.9843 | 36.3484 | 42.7551   | 81.4    |
| No log        | 2.0   | 20   | 2.9398          | 49.7463 | 23.695  | 36.3679 | 45.2876   | 84.6    |
| No log        | 3.0   | 30   | 2.9198          | 46.6683 | 22.3077 | 37.436  | 44.2797   | 86.6    |
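For context, ROUGE numbers like those above are typically produced with the evaluate library; a minimal sketch follows, with placeholder predictions and references (the actual evaluation set is not documented).

```python
# A minimal sketch, assuming the `evaluate` library's ROUGE metric;
# the predictions and references here are placeholders.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Anna and Ben will meet for lunch at 12:30."],
    references=["Anna and Ben are meeting for lunch tomorrow at 12:30."],
)
# Returns rouge1 / rouge2 / rougeL / rougeLsum F-measures in [0, 1];
# tables like the one above report them scaled by 100.
print(scores)
```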

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.8.0
  • Tokenizers 0.13.2