<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>MoE on Answer</title>
    <link>https://answer.freetools.me/tags/moe/</link>
    <description>Recent content in MoE on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Mon, 09 Mar 2026 07:42:35 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/moe/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Not All Tokens Deserve Equal Treatment: How Mixture-of-Depths Reshapes the Transformer Compute Paradigm</title>
      <link>https://answer.freetools.me/%E4%B8%8D%E6%98%AF%E6%89%80%E6%9C%89-token-%E9%83%BD%E5%80%BC%E5%BE%97%E8%A2%AB%E5%90%8C%E7%AD%89%E5%AF%B9%E5%BE%85mixture-of-depths-%E5%A6%82%E4%BD%95%E9%87%8D%E5%A1%91-transformer-%E7%9A%84%E8%AE%A1%E7%AE%97%E8%8C%83%E5%BC%8F/</link>
      <pubDate>Mon, 09 Mar 2026 07:42:35 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%8D%E6%98%AF%E6%89%80%E6%9C%89-token-%E9%83%BD%E5%80%BC%E5%BE%97%E8%A2%AB%E5%90%8C%E7%AD%89%E5%AF%B9%E5%BE%85mixture-of-depths-%E5%A6%82%E4%BD%95%E9%87%8D%E5%A1%91-transformer-%E7%9A%84%E8%AE%A1%E7%AE%97%E8%8C%83%E5%BC%8F/</guid>
      <description>An in-depth look at the Mixture-of-Depths architecture proposed by Google DeepMind, examining how dynamic compute allocation reshapes the Transformer efficiency paradigm. From the evolution of conditional computation to the design details of the routing mechanism, through to follow-up improvements such as MoDification, it presents the core insights and practical trade-offs of this technical direction.</description>
    </item>
    <item>
      <title>Why Is MoE Gated Routing So Hard to Train? Technical Dilemmas from Load Balancing to Expert Collapse</title>
      <link>https://answer.freetools.me/moe%E7%9A%84%E9%97%A8%E6%8E%A7%E8%B7%AF%E7%94%B1%E4%B8%BA%E4%BD%95%E5%A6%82%E6%AD%A4%E9%9A%BE%E4%BB%A5%E8%AE%AD%E7%BB%83%E4%BB%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%88%B0%E4%B8%93%E5%AE%B6%E5%9D%8D%E7%BC%A9%E7%9A%84%E6%8A%80%E6%9C%AF%E5%9B%B0%E5%A2%83/</link>
      <pubDate>Mon, 09 Mar 2026 04:56:00 +0800</pubDate>
      <guid>https://answer.freetools.me/moe%E7%9A%84%E9%97%A8%E6%8E%A7%E8%B7%AF%E7%94%B1%E4%B8%BA%E4%BD%95%E5%A6%82%E6%AD%A4%E9%9A%BE%E4%BB%A5%E8%AE%AD%E7%BB%83%E4%BB%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%88%B0%E4%B8%93%E5%AE%B6%E5%9D%8D%E7%BC%A9%E7%9A%84%E6%8A%80%E6%9C%AF%E5%9B%B0%E5%A2%83/</guid>
      <description>An in-depth analysis of the core difficulties in training gated routing for MoE (Mixture of Experts) models: from the mathematical roots of expert collapse to the trade-offs of auxiliary losses, and from the breakthrough of Loss-Free Balancing to the architectural innovations of DeepSeekMoE.</description>
    </item>
    <item>
      <title>Why Does a Model with Hundreds of Billions of Parameters Only Need to Activate Tens of Billions? Three Decades of Technical Breakthroughs in the MoE Architecture</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8F%AA%E9%9C%80%E6%BF%80%E6%B4%BB%E7%99%BE%E4%BA%BFmoe%E6%9E%B6%E6%9E%84%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Sun, 08 Mar 2026 13:47:29 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8F%AA%E9%9C%80%E6%BF%80%E6%B4%BB%E7%99%BE%E4%BA%BFmoe%E6%9E%B6%E6%9E%84%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>An in-depth look at the principles and evolution of the Mixture of Experts architecture. From Jordan and Jacobs' theoretical prototype in 1991 to DeepSeek-V3's revolutionary 2024 design that activates only 37B of its 671B total parameters, it systematically explains MoE's core mechanisms: sparse activation, gated routing, and load balancing. It covers milestone models such as Switch Transformer, Mixtral 8x7B, and GShard, and analyzes expert specialization, distributed training challenges, and the technical breakthrough of auxiliary-loss-free load balancing strategies.</description>
    </item>
  </channel>
</rss>
