<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>GPT on Answer</title>
    <link>https://answer.freetools.me/tags/gpt/</link>
    <description>Recent content in GPT on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 12 Mar 2026 19:55:07 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/gpt/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Calculating Transformer Parameter Counts: A Complete Formula Derivation from Embedding to FFN</title>
      <link>https://answer.freetools.me/transformer%E5%8F%82%E6%95%B0%E9%87%8F%E8%AE%A1%E7%AE%97%E4%BB%8Eembedding%E5%88%B0ffn%E7%9A%84%E5%AE%8C%E6%95%B4%E5%85%AC%E5%BC%8F%E6%8E%A8%E5%AF%BC/</link>
      <pubDate>Thu, 12 Mar 2026 19:55:07 +0800</pubDate>
      <guid>https://answer.freetools.me/transformer%E5%8F%82%E6%95%B0%E9%87%8F%E8%AE%A1%E7%AE%97%E4%BB%8Eembedding%E5%88%B0ffn%E7%9A%84%E5%AE%8C%E6%95%B4%E5%85%AC%E5%BC%8F%E6%8E%A8%E5%AF%BC/</guid>
      <description>An in-depth look at how to calculate the parameter count of a Transformer model, from the embedding layer through the attention layers to the FFN layers, deriving each component's parameter contribution with mathematical formulas and verifying the results against real models such as GPT-3 and LLaMA.</description>
    </item>
    <item>
      <title>Weight Tying in Transformers: Why One Line of Code Can Save 200 Million Parameters</title>
      <link>https://answer.freetools.me/transformer%E7%9A%84%E6%9D%83%E9%87%8D%E5%85%B1%E4%BA%AB%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E8%A1%8C%E4%BB%A3%E7%A0%81%E8%83%BD%E7%9C%81%E4%B8%8B%E4%B8%A4%E4%BA%BF%E5%8F%82%E6%95%B0/</link>
      <pubDate>Thu, 12 Mar 2026 06:33:31 +0800</pubDate>
      <guid>https://answer.freetools.me/transformer%E7%9A%84%E6%9D%83%E9%87%8D%E5%85%B1%E4%BA%AB%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E8%A1%8C%E4%BB%A3%E7%A0%81%E8%83%BD%E7%9C%81%E4%B8%8B%E4%B8%A4%E4%BA%BF%E5%8F%82%E6%95%B0/</guid>
      <description>An in-depth analysis of sharing weights between the input embedding layer and the output layer in Transformer models, from intuitive understanding to mathematical derivation, revealing the deeper logic behind this seemingly simple design decision.</description>
    </item>
    <item>
      <title>Self-Attention vs. Cross-Attention: How Transformers Use Two Mechanisms to Handle &#34;the Same Sequence&#34; and &#34;Two Worlds&#34;</title>
      <link>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</link>
      <pubDate>Thu, 12 Mar 2026 03:15:16 +0800</pubDate>
      <guid>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</guid>
      <description>A deep dive into the principles, mathematics, historical evolution, and practical applications of Self-Attention and Cross-Attention in Transformers. From GPT's autoregressive generation to the encoder-decoder architecture of machine translation, this article shows how these two attention mechanisms shape the design philosophy of modern large models.</description>
    </item>
    <item>
      <title>Teacher Forcing: Why This &#34;Cheating&#34; Technique Has Dominated Sequence Model Training for Thirty Years</title>
      <link>https://answer.freetools.me/teacher-forcing%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E4%BD%9C%E5%BC%8A%E6%8A%80%E6%9C%AF%E7%BB%9F%E6%B2%BB%E4%BA%86%E5%BA%8F%E5%88%97%E6%A8%A1%E5%9E%8B%E8%AE%AD%E7%BB%83%E4%B8%89%E5%8D%81%E5%B9%B4/</link>
      <pubDate>Thu, 12 Mar 2026 02:39:25 +0800</pubDate>
      <guid>https://answer.freetools.me/teacher-forcing%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E4%BD%9C%E5%BC%8A%E6%8A%80%E6%9C%AF%E7%BB%9F%E6%B2%BB%E4%BA%86%E5%BA%8F%E5%88%97%E6%A8%A1%E5%9E%8B%E8%AE%AD%E7%BB%83%E4%B8%89%E5%8D%81%E5%B9%B4/</guid>
      <description>An in-depth analysis of the Teacher Forcing training technique, the root cause of the exposure bias problem, and the remedies researchers have proposed over three decades. From Scheduled Sampling to Professor Forcing, from TeaForN to Minimum Risk Training, a comprehensive look at the core challenge of sequence model training.</description>
    </item>
    <item>
      <title>Causal Language Models vs. Masked Language Models: The Essential Differences Between Two Pretraining Paradigms</title>
      <link>https://answer.freetools.me/%E5%9B%A0%E6%9E%9C%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E4%B8%8E%E6%8E%A9%E7%A0%81%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E4%B8%A4%E7%A7%8D%E9%A2%84%E8%AE%AD%E7%BB%83%E8%8C%83%E5%BC%8F%E7%9A%84%E6%9C%AC%E8%B4%A8%E5%B7%AE%E5%BC%82/</link>
      <pubDate>Wed, 11 Mar 2026 21:12:01 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%9B%A0%E6%9E%9C%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E4%B8%8E%E6%8E%A9%E7%A0%81%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E4%B8%A4%E7%A7%8D%E9%A2%84%E8%AE%AD%E7%BB%83%E8%8C%83%E5%BC%8F%E7%9A%84%E6%9C%AC%E8%B4%A8%E5%B7%AE%E5%BC%82/</guid>
      <description>A deep analysis of the two major Transformer pretraining paradigms: how causal language models (CLM) and masked language models (MLM) work, their differences in attention mechanisms, training objectives, and application scenarios, and why modern large models overwhelmingly adopt decoder-only architectures.</description>
    </item>
    <item>
      <title>Why SwiGLU Became Standard in Large Models: Fifteen Years of Evolution from ReLU to Gated Activation Functions</title>
      <link>https://answer.freetools.me/swiglu%E4%B8%BA%E4%BD%95%E6%88%90%E4%B8%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E6%A0%87%E9%85%8D%E4%BB%8Erelu%E5%88%B0%E9%97%A8%E6%8E%A7%E6%BF%80%E6%B4%BB%E5%87%BD%E6%95%B0%E7%9A%84%E5%8D%81%E4%BA%94%E5%B9%B4%E6%BC%94%E8%BF%9B/</link>
      <pubDate>Wed, 11 Mar 2026 15:12:58 +0800</pubDate>
      <guid>https://answer.freetools.me/swiglu%E4%B8%BA%E4%BD%95%E6%88%90%E4%B8%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E6%A0%87%E9%85%8D%E4%BB%8Erelu%E5%88%B0%E9%97%A8%E6%8E%A7%E6%BF%80%E6%B4%BB%E5%87%BD%E6%95%B0%E7%9A%84%E5%8D%81%E4%BA%94%E5%B9%B4%E6%BC%94%E8%BF%9B/</guid>
      <description>An in-depth look at the evolution of activation functions in large language models: from ReLU's limitations to GELU's smoothing, and from GLU's gating mechanism to SwiGLU's combination of the two. Drawing on the experimental data in Google's 2020 GLU paper, this article explains why modern large models such as LLaMA and Mistral chose SwiGLU as the activation function in their FFN layers, and the trade-off between parameter count and performance.</description>
    </item>
    <item>
      <title>Why Large Models Can Learn New Tasks from a Few Examples: A Technical Deep Dive from Implicit Gradient Descent to Induction Heads</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%83%BD%E4%BB%8E%E5%87%A0%E4%B8%AA%E4%BE%8B%E5%AD%90%E4%B8%AD%E5%AD%A6%E4%BC%9A%E6%96%B0%E4%BB%BB%E5%8A%A1%E4%BB%8E%E9%9A%90%E5%BC%8F%E6%A2%AF%E5%BA%A6%E4%B8%8B%E9%99%8D%E5%88%B0induction-head%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%AF%86/</link>
      <pubDate>Mon, 09 Mar 2026 01:56:34 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%83%BD%E4%BB%8E%E5%87%A0%E4%B8%AA%E4%BE%8B%E5%AD%90%E4%B8%AD%E5%AD%A6%E4%BC%9A%E6%96%B0%E4%BB%BB%E5%8A%A1%E4%BB%8E%E9%9A%90%E5%BC%8F%E6%A2%AF%E5%BA%A6%E4%B8%8B%E9%99%8D%E5%88%B0induction-head%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%AF%86/</guid>
      <description>An in-depth analysis of the mechanisms underlying in-context learning (ICL) in large language models. From the unexpected discovery in GPT-3 in 2020, through Microsoft Research's 2023 implicit fine-tuning theory, to Anthropic's induction head mechanism, this article systematically surveys the core technology that changed the AI application paradigm. Topics include the dual form between Transformer attention and gradient descent, phase transitions during training, the quality gap between ICL and fine-tuning, and the key factors that affect ICL performance.</description>
    </item>
    <item>
      <title>Why Do Large Models Exhibit Emergent Abilities? A Scientific Deep Dive from Scaling Laws to Phase Transition Theory</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%B8%BA%E4%BB%80%E4%B9%88%E4%BC%9A%E4%BA%A7%E7%94%9F%E6%B6%8C%E7%8E%B0%E8%83%BD%E5%8A%9B%E4%BB%8Escaling-laws%E5%88%B0%E7%9B%B8%E5%8F%98%E7%90%86%E8%AE%BA%E7%9A%84%E7%A7%91%E5%AD%A6%E8%A7%A3%E5%AF%86/</link>
      <pubDate>Sun, 08 Mar 2026 13:28:16 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%B8%BA%E4%BB%80%E4%B9%88%E4%BC%9A%E4%BA%A7%E7%94%9F%E6%B6%8C%E7%8E%B0%E8%83%BD%E5%8A%9B%E4%BB%8Escaling-laws%E5%88%B0%E7%9B%B8%E5%8F%98%E7%90%86%E8%AE%BA%E7%9A%84%E7%A7%91%E5%AD%A6%E8%A7%A3%E5%AF%86/</guid>
      <description>An in-depth look at the scientific mechanisms behind emergent abilities in large language models. From Wei et al.'s 2022 definition of emergent abilities, through the Stanford team's 2023 &#34;mirage&#34; critique, to the 2024 theoretical breakthrough from the pretraining-loss perspective, this article systematically covers the definition of emergence, concrete cases, theoretical explanations, and the academic debate. Key concepts include induction heads, the BIG-Bench benchmark, chain-of-thought reasoning, and pretraining loss thresholds, along with the far-reaching implications of emergent abilities for AI safety and development.</description>
    </item>
    <item>
      <title>The Tokenizer Determines the World a Large Model &#34;Sees&#34;: A Technical Deep Dive from the BPE Algorithm to the Strawberry Problem</title>
      <link>https://answer.freetools.me/tokenizer%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9C%8B%E5%88%B0%E7%9A%84%E4%B8%96%E7%95%8C%E4%BB%8Ebpe%E7%AE%97%E6%B3%95%E5%88%B0%E8%8D%89%E8%8E%93%E9%97%AE%E9%A2%98%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%AF%86/</link>
      <pubDate>Sun, 08 Mar 2026 13:12:23 +0800</pubDate>
      <guid>https://answer.freetools.me/tokenizer%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9C%8B%E5%88%B0%E7%9A%84%E4%B8%96%E7%95%8C%E4%BB%8Ebpe%E7%AE%97%E6%B3%95%E5%88%B0%E8%8D%89%E8%8E%93%E9%97%AE%E9%A2%98%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%AF%86/</guid>
      <description>An in-depth analysis of the principles and design trade-offs of large language model tokenizers. From Philip Gage's 1994 data compression algorithm to Sennrich et al.'s 2015 application to NLP, this article systematically covers how the BPE algorithm works, the vocabulary-size trade-off, efficiency differences across languages, and classic cases such as the &#34;strawberry problem&#34;. It also compares the GPT-4 and GPT-4o tokenizers, analyzes Chinese token efficiency and the impact on mathematical ability, and looks ahead to tokenizer-free architectures.</description>
    </item>
  </channel>
</rss>
