<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>DPO on Answer</title>
    <link>https://answer.freetools.me/tags/dpo/</link>
    <description>Recent content in DPO on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 11 Mar 2026 14:25:15 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/dpo/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How Are Large Models Trained? A Three-Stage Technical Panorama from Pretraining to Alignment</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%98%AF%E5%A6%82%E4%BD%95%E8%A2%AB%E8%AE%AD%E7%BB%83%E5%87%BA%E6%9D%A5%E7%9A%84%E4%BB%8E%E9%A2%84%E8%AE%AD%E7%BB%83%E5%88%B0%E5%AF%B9%E9%BD%90%E7%9A%84%E4%B8%89%E9%98%B6%E6%AE%B5%E6%8A%80%E6%9C%AF%E5%85%A8%E6%99%AF/</link>
      <pubDate>Wed, 11 Mar 2026 14:25:15 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%98%AF%E5%A6%82%E4%BD%95%E8%A2%AB%E8%AE%AD%E7%BB%83%E5%87%BA%E6%9D%A5%E7%9A%84%E4%BB%8E%E9%A2%84%E8%AE%AD%E7%BB%83%E5%88%B0%E5%AF%B9%E9%BD%90%E7%9A%84%E4%B8%89%E9%98%B6%E6%AE%B5%E6%8A%80%E6%9C%AF%E5%85%A8%E6%99%AF/</guid>
      <description>An in-depth analysis of the complete technical pipeline for training large language models: from massive-scale data collection and cleaning and tokenizer construction, through self-supervised learning and distributed training in the pretraining stage, to supervised fine-tuning and RLHF/DPO alignment, revealing how hundred-billion-parameter models evolve from zero to usable.</description>
    </item>
    <item>
      <title>Why DPO Has Replaced RLHF as the Mainstream Method for Large Model Alignment: The Mathematical Revolution from Reward-Function Reparameterization to Preference Optimization</title>
      <link>https://answer.freetools.me/dpo%E4%B8%BA%E4%BD%95%E8%83%BD%E5%8F%96%E4%BB%A3rlhf%E6%88%90%E4%B8%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%AF%B9%E9%BD%90%E7%9A%84%E4%B8%BB%E6%B5%81%E6%96%B9%E6%B3%95%E4%BB%8E%E5%A5%96%E5%8A%B1%E5%87%BD%E6%95%B0%E9%87%8D%E5%8F%82%E6%95%B0%E5%8C%96%E5%88%B0%E5%81%8F%E5%A5%BD%E4%BC%98%E5%8C%96%E7%9A%84%E6%95%B0%E5%AD%A6%E9%9D%A9%E5%91%BD/</link>
      <pubDate>Mon, 09 Mar 2026 05:13:58 +0800</pubDate>
      <guid>https://answer.freetools.me/dpo%E4%B8%BA%E4%BD%95%E8%83%BD%E5%8F%96%E4%BB%A3rlhf%E6%88%90%E4%B8%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%AF%B9%E9%BD%90%E7%9A%84%E4%B8%BB%E6%B5%81%E6%96%B9%E6%B3%95%E4%BB%8E%E5%A5%96%E5%8A%B1%E5%87%BD%E6%95%B0%E9%87%8D%E5%8F%82%E6%95%B0%E5%8C%96%E5%88%B0%E5%81%8F%E5%A5%BD%E4%BC%98%E5%8C%96%E7%9A%84%E6%95%B0%E5%AD%A6%E9%9D%A9%E5%91%BD/</guid>
      <description>An in-depth analysis of the mathematical principles and engineering practice of Direct Preference Optimization (DPO). From the Bradley-Terry preference model to the core insight of reward-function reparameterization, it systematically explains how DPO avoids the complexity of RLHF. Covers performance comparisons between DPO and PPO, the evolution of variant methods such as IPO/KTO/ORPO, and best practices such as β hyperparameter tuning and overfitting avoidance. Includes real-world model case studies such as Zephyr and complete mathematical derivations.</description>
    </item>
  </channel>
</rss>
