<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Large Model Architecture on Answer</title>
    <link>https://answer.freetools.me/tags/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%9E%B6%E6%9E%84/</link>
    <description>Recent content in Large Model Architecture on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 12 Mar 2026 20:51:25 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%9E%B6%E6%9E%84/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Layer Normalization's Learnable Parameters: Why gamma and beta Are Disappearing from Large Models</title>
      <link>https://answer.freetools.me/layer-normalization%E7%9A%84%E5%8F%AF%E5%AD%A6%E4%B9%A0%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88gamma%E5%92%8Cbeta%E6%AD%A3%E5%9C%A8%E4%BB%8E%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%B8%AD%E6%B6%88%E5%A4%B1/</link>
      <pubDate>Thu, 12 Mar 2026 20:51:25 +0800</pubDate>
      <guid>https://answer.freetools.me/layer-normalization%E7%9A%84%E5%8F%AF%E5%AD%A6%E4%B9%A0%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88gamma%E5%92%8Cbeta%E6%AD%A3%E5%9C%A8%E4%BB%8E%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%B8%AD%E6%B6%88%E5%A4%B1/</guid>
      <description>From LayerNorm's original design to the simplification trend in modern large models: an in-depth analysis of the technical principles, mechanisms, and evolution of the gamma and beta parameters. Covers T5's removal of beta, the rise of RMSNorm, the differences between Pre-LN and Post-LN, and the latest breakthrough of Dynamic Tanh replacing normalization layers.</description>
    </item>
    <item>
      <title>Calculating Transformer Parameter Counts: A Complete Formula Derivation from Embedding to FFN</title>
      <link>https://answer.freetools.me/transformer%E5%8F%82%E6%95%B0%E9%87%8F%E8%AE%A1%E7%AE%97%E4%BB%8Eembedding%E5%88%B0ffn%E7%9A%84%E5%AE%8C%E6%95%B4%E5%85%AC%E5%BC%8F%E6%8E%A8%E5%AF%BC/</link>
      <pubDate>Thu, 12 Mar 2026 19:55:07 +0800</pubDate>
      <guid>https://answer.freetools.me/transformer%E5%8F%82%E6%95%B0%E9%87%8F%E8%AE%A1%E7%AE%97%E4%BB%8Eembedding%E5%88%B0ffn%E7%9A%84%E5%AE%8C%E6%95%B4%E5%85%AC%E5%BC%8F%E6%8E%A8%E5%AF%BC/</guid>
      <description>An in-depth analysis of how to calculate the parameter count of a Transformer model, from the Embedding layer through the Attention layer to the FFN layer. Derives each component's parameter contribution with mathematical formulas and verifies the results against real models such as GPT-3 and LLaMA.</description>
    </item>
    <item>
      <title>Twenty Years of Positional Encoding: From Sinusoidal to RoPE, How Transformers Understand "Position"</title>
      <link>https://answer.freetools.me/%E4%BD%8D%E7%BD%AE%E7%BC%96%E7%A0%81%E7%9A%84%E4%BA%8C%E5%8D%81%E5%B9%B4%E6%BC%94%E8%BF%9B%E4%BB%8Esinusoidal%E5%88%B0ropetransformer%E5%A6%82%E4%BD%95%E7%90%86%E8%A7%A3%E4%BD%8D%E7%BD%AE/</link>
      <pubDate>Mon, 09 Mar 2026 05:07:24 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%BD%8D%E7%BD%AE%E7%BC%96%E7%A0%81%E7%9A%84%E4%BA%8C%E5%8D%81%E5%B9%B4%E6%BC%94%E8%BF%9B%E4%BB%8Esinusoidal%E5%88%B0ropetransformer%E5%A6%82%E4%BD%95%E7%90%86%E8%A7%A3%E4%BD%8D%E7%BD%AE/</guid>
      <description>An in-depth analysis of the technical evolution of positional encoding in Transformers: from the trigonometric design of Sinusoidal encoding, to the paradigm shift toward relative positional encoding, to the mathematical elegance of RoPE's complex-number rotation, and ALiBi's long-sequence extrapolation capability. Covers the positional encoding choices of major models, the YaRN long-context extension technique, Llama 4's iRoPE innovation, and a practical selection guide.</description>
    </item>
    <item>
      <title>Why Does a Model with Hundreds of Billions of Parameters Only Need to Activate Tens of Billions? Thirty Years of Technical Breakthroughs in the MoE Architecture</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8F%AA%E9%9C%80%E6%BF%80%E6%B4%BB%E7%99%BE%E4%BA%BFmoe%E6%9E%B6%E6%9E%84%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Sun, 08 Mar 2026 13:47:29 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%8F%AA%E9%9C%80%E6%BF%80%E6%B4%BB%E7%99%BE%E4%BA%BFmoe%E6%9E%B6%E6%9E%84%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>An in-depth analysis of the principles and evolution of the Mixture of Experts architecture. From Jordan and Jacobs's theoretical prototype in 1991 to the revolutionary 2024 design of DeepSeek-V3, which activates only 37B of its 671B total parameters, this article systematically explains MoE's core mechanisms: sparse activation, gated routing, and load balancing. Covers milestone models such as Switch Transformer, Mixtral 8x7B, and GShard, and analyzes expert specialization, distributed training challenges, and the technical breakthrough of auxiliary-loss-free load balancing strategies.</description>
    </item>
  </channel>
</rss>
