Model:
sileod/deberta-v3-large-tasksource-rlhf-reward-model
Fine-tuned for 1 epoch with a learning rate of 1e-5.
The training data are described in the paper: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.
Validation accuracy is currently the best publicly reported: 75.16% (vs. 69.25% for OpenAssistant/reward-model-deberta-v3-large-v2).
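The validation accuracy quoted above is pairwise preference accuracy: the share of held-out (chosen, rejected) reply pairs for which the reward model assigns the chosen reply the higher score. A minimal sketch of the metric, with made-up illustrative scores rather than actual model outputs:

```python
def pairwise_accuracy(chosen_scores, rejected_scores):
    """Fraction of pairs where the chosen reply outscores the rejected one."""
    assert len(chosen_scores) == len(rejected_scores)
    wins = sum(c > r for c, r in zip(chosen_scores, rejected_scores))
    return wins / len(chosen_scores)

# Illustrative reward scores for four preference pairs (not real outputs).
chosen = [1.2, 0.4, -0.1, 2.0]
rejected = [0.3, 0.9, -0.5, 1.1]
print(pairwise_accuracy(chosen, rejected))  # 3 of 4 pairs correct -> 0.75
```

A random reward model would score about 0.5 on this metric, so the 75.16% figure reflects how often the model's scalar reward agrees with the human preference label.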