Model:

sileod/deberta-v3-large-tasksource-rlhf-reward-model


Reward model based on deberta-v3-large-tasksource-nli, fine-tuned on Anthropic/hh-rlhf for 1 epoch with a 1e-5 learning rate.
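A minimal sketch of scoring a prompt/response pair with this checkpoint, assuming the standard `transformers` sequence-classification interface and that the single classification logit is the scalar reward (the `reward_score` helper name is ours, not part of the model):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "sileod/deberta-v3-large-tasksource-rlhf-reward-model"

def reward_score(prompt: str, response: str, model, tokenizer) -> float:
    """Return the model's scalar reward for a prompt/response pair.

    Assumes the model exposes a single classification logit as the reward.
    """
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, 0].item()

# Downloading the checkpoint is left to the caller, e.g.:
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID).eval()
# score = reward_score("How do I boil an egg?", "Place it in boiling water.", model, tokenizer)
```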

The data are described in the paper: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.

Validation accuracy is 75.16%, currently the best publicly reported (vs. 69.25% for OpenAssistant/reward-model-deberta-v3-large-v2).
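Validation accuracy here is pairwise accuracy on hh-rlhf: the fraction of preference pairs where the reward model scores the human-chosen response above the rejected one. A self-contained sketch of the metric, with a hypothetical `toy_reward` standing in for the model's scalar output:

```python
def toy_reward(response: str) -> float:
    # Stand-in for the model's scalar reward: penalize "rude" responses,
    # otherwise favor longer ones. Purely illustrative.
    return -len(response) if "rude" in response else float(len(response))

def pairwise_accuracy(pairs, reward_fn):
    """Fraction of (chosen, rejected) pairs where chosen outscores rejected."""
    correct = sum(
        1 for chosen, rejected in pairs if reward_fn(chosen) > reward_fn(rejected)
    )
    return correct / len(pairs)

pairs = [
    ("Here is a helpful, polite answer.", "A rude dismissal."),
    ("Sure, step one is to check the docs.", "No. rude."),
]
print(pairwise_accuracy(pairs, toy_reward))  # 1.0 on this toy data
```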