Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence

Junru Lu1,*, Jiazheng Li2,*, Siyu An3, Meng Zhao3, Yulan He1,2,4, Di Yin3, Xing Sun3
1University of Warwick, 2King’s College London, 3Tencent YouTu Lab, 4The Alan Turing Institute

Abstract

Direct Preference Optimization (DPO) has emerged as a prominent algorithm for the direct and robust alignment of Large Language Models (LLMs) with human preferences, offering a more straightforward alternative to the complex Reinforcement Learning from Human Feedback (RLHF). Despite its promising efficacy, DPO faces a notable drawback: verbosity, a common over-optimization phenomenon also observed in RLHF.

While previous studies mainly attributed verbosity to biased labels within the data, we propose that the issue also stems from an inherent algorithmic length reliance in DPO. Specifically, we suggest that the mismatch between the sequence-level Kullback–Leibler (KL) divergence terms of the chosen and rejected sequences in DPO's implicit reward leads to overestimated or underestimated rewards when the two responses differ in token length.

Empirically, we utilize datasets with different label lengths to demonstrate the presence of biased rewards. We then introduce an effective down-sampling approach, named SamPO, to eliminate the potential length reliance. Our experimental evaluations, conducted across three LLMs of varying scales and a diverse array of conditional and open-ended benchmarks, highlight the efficacy of SamPO in mitigating verbosity, achieving improvements of 5% to 12% over DPO through debiased rewards.


Analysis of Token-Level Rewards

[Figure: token-level reward analysis]

The length disparity between pairwise responses, illustrated by typical examples, forces DPO to overestimate or underestimate the actual rewards. The figure highlights the key difference between SamPO and DPO.
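The intuition above can be sketched in code. Standard DPO sums per-token log-ratios over the full sequence, so a longer response accumulates a larger sequence-level KL term; a SamPO-style variant instead sums an equal number of randomly down-sampled tokens from each response. This is a minimal illustrative sketch, not the paper's exact implementation: the function name, the `beta` value, and the flat 1-D tensor interface are assumptions made for clarity.

```python
import torch


def implicit_rewards(policy_logps, ref_logps, beta=0.1, downsample=False):
    """Compute DPO-style implicit rewards from per-token log-probabilities.

    policy_logps / ref_logps: dicts with 'chosen' and 'rejected' 1-D tensors
    of per-token log-probabilities (one entry per generated token).

    With downsample=False this mimics standard DPO: the reward sums the
    log-ratio over ALL tokens, so sequence length directly scales the
    sequence-level KL term. With downsample=True, both responses contribute
    the same number of randomly sampled tokens (the shorter length),
    removing the length mismatch, in the spirit of SamPO.
    """
    min_len = min(policy_logps["chosen"].numel(),
                  policy_logps["rejected"].numel())
    rewards = {}
    for key in ("chosen", "rejected"):
        ratio = policy_logps[key] - ref_logps[key]  # per-token log-ratio
        if downsample:
            # Randomly keep min_len tokens so both sides sum equally many.
            idx = torch.randperm(ratio.numel())[:min_len]
            ratio = ratio[idx]
        rewards[key] = beta * ratio.sum()
    return rewards
```

For example, if every token carries the same log-ratio, standard summation rewards the longer response more simply because it has more tokens, while the down-sampled variant yields identical rewards for both responses.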

BibTeX

@misc{lu2024eliminatingbiasedlengthreliance,
      title={Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence}, 
      author={Junru Lu and Jiazheng Li and Siyu An and Meng Zhao and Yulan He and Di Yin and Xing Sun},
      year={2024},
      eprint={2406.10957},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.10957}, 
}