<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI Technology on Answer</title>
    <link>https://answer.freetools.me/categories/ai%E6%8A%80%E6%9C%AF/</link>
    <description>Recent content in AI Technology on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>zh-cn</language>
    <lastBuildDate>Mon, 09 Mar 2026 07:42:35 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/categories/ai%E6%8A%80%E6%9C%AF/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Not All Tokens Deserve Equal Treatment: How Mixture-of-Depths Reshapes the Transformer Compute Paradigm</title>
      <link>https://answer.freetools.me/%E4%B8%8D%E6%98%AF%E6%89%80%E6%9C%89-token-%E9%83%BD%E5%80%BC%E5%BE%97%E8%A2%AB%E5%90%8C%E7%AD%89%E5%AF%B9%E5%BE%85mixture-of-depths-%E5%A6%82%E4%BD%95%E9%87%8D%E5%A1%91-transformer-%E7%9A%84%E8%AE%A1%E7%AE%97%E8%8C%83%E5%BC%8F/</link>
      <pubDate>Mon, 09 Mar 2026 07:42:35 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%8D%E6%98%AF%E6%89%80%E6%9C%89-token-%E9%83%BD%E5%80%BC%E5%BE%97%E8%A2%AB%E5%90%8C%E7%AD%89%E5%AF%B9%E5%BE%85mixture-of-depths-%E5%A6%82%E4%BD%95%E9%87%8D%E5%A1%91-transformer-%E7%9A%84%E8%AE%A1%E7%AE%97%E8%8C%83%E5%BC%8F/</guid>
      <description>An in-depth analysis of the Mixture-of-Depths architecture proposed by Google DeepMind, examining how dynamic compute allocation reshapes the Transformer efficiency paradigm. From the evolution of conditional computation to the design details of the routing mechanism, and on to follow-up improvements such as MoDification, it presents the core insights and practical trade-offs of this line of work.</description>
    </item>
    <item>
      <title>Why Quantized Training Can Learn Models at 8-Bit Precision: The Mathematics from Numerical Stability to Error Compensation</title>
      <link>https://answer.freetools.me/%E9%87%8F%E5%8C%96%E8%AE%AD%E7%BB%83%E4%B8%BA%E4%BD%95%E8%83%BD%E7%94%A88%E4%BD%8D%E7%B2%BE%E5%BA%A6%E5%AE%8C%E6%88%90%E6%A8%A1%E5%9E%8B%E5%AD%A6%E4%B9%A0%E4%BB%8E%E6%95%B0%E5%80%BC%E7%A8%B3%E5%AE%9A%E6%80%A7%E5%88%B0%E8%AF%AF%E5%B7%AE%E8%A1%A5%E5%81%BF%E7%9A%84%E6%95%B0%E5%AD%A6%E5%8E%9F%E7%90%86/</link>
      <pubDate>Mon, 09 Mar 2026 04:59:59 +0800</pubDate>
      <guid>https://answer.freetools.me/%E9%87%8F%E5%8C%96%E8%AE%AD%E7%BB%83%E4%B8%BA%E4%BD%95%E8%83%BD%E7%94%A88%E4%BD%8D%E7%B2%BE%E5%BA%A6%E5%AE%8C%E6%88%90%E6%A8%A1%E5%9E%8B%E5%AD%A6%E4%B9%A0%E4%BB%8E%E6%95%B0%E5%80%BC%E7%A8%B3%E5%AE%9A%E6%80%A7%E5%88%B0%E8%AF%AF%E5%B7%AE%E8%A1%A5%E5%81%BF%E7%9A%84%E6%95%B0%E5%AD%A6%E5%8E%9F%E7%90%86/</guid>
      <description>An in-depth analysis of the mathematics behind quantized neural network training: why does low-precision training preserve model performance? From quantization error analysis to FP8 format design, and from the straight-through estimator to the information-theoretically optimal NF4, it reveals how much numerical precision deep learning actually needs.</description>
    </item>
    <item>
      <title>Why a Single Graphics Card Can Crush an Entire CPU Cluster: How GPU Parallel Computing Became the Bedrock of Deep Learning</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E5%BC%A0%E6%98%BE%E5%8D%A1%E8%83%BD%E5%B9%B2%E7%BF%BB%E6%95%B4%E4%B8%AAcpu%E9%9B%86%E7%BE%A4gpu%E5%B9%B6%E8%A1%8C%E8%AE%A1%E7%AE%97%E5%A6%82%E4%BD%95%E6%88%90%E4%B8%BA%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E7%9A%84%E5%9F%BA%E7%9F%B3/</link>
      <pubDate>Mon, 09 Mar 2026 04:26:49 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E5%BC%A0%E6%98%BE%E5%8D%A1%E8%83%BD%E5%B9%B2%E7%BF%BB%E6%95%B4%E4%B8%AAcpu%E9%9B%86%E7%BE%A4gpu%E5%B9%B6%E8%A1%8C%E8%AE%A1%E7%AE%97%E5%A6%82%E4%BD%95%E6%88%90%E4%B8%BA%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E7%9A%84%E5%9F%BA%E7%9F%B3/</guid>
      <description>A deep dive into why GPUs suit deep learning: from architectural design philosophy, the SIMT execution model, and memory bandwidth advantages to Tensor Core hardware acceleration, it explains the technical principles that made GPU parallel computing the bedrock of deep learning.</description>
    </item>
  </channel>
</rss>
