Hot Topics
Articles related to "Direct Preference Optimization"
Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering
cs.AI updates on arXiv.org 2025-07-29T04:22:16.000000Z
How Important is the Reference Model in Direct Preference Optimization (DPO)? An Empirical Study on Optimal KL-Divergence Constraints and Necessity
MarkTechPost@AI 2024-08-01T06:34:34.000000Z