<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Self-Attention on Answer</title>
    <link>https://answer.freetools.me/tags/self-attention/</link>
    <description>Recent content in Self-Attention on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 12 Mar 2026 18:36:22 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/self-attention/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Self-Attention Computation, Fully Explained: A Complete Technical Walkthrough from Matrix Multiplication to Gradient Flow</title>
      <link>https://answer.freetools.me/self-attention%E8%AE%A1%E7%AE%97%E5%85%A8%E8%A7%A3%E4%BB%8E%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95%E5%88%B0%E6%A2%AF%E5%BA%A6%E6%B5%81%E5%8A%A8%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 18:36:22 +0800</pubDate>
      <guid>https://answer.freetools.me/self-attention%E8%AE%A1%E7%AE%97%E5%85%A8%E8%A7%A3%E4%BB%8E%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95%E5%88%B0%E6%A2%AF%E5%BA%A6%E6%B5%81%E5%8A%A8%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth walkthrough of the complete Self-Attention computation pipeline in the Transformer, from the intuition behind Query/Key/Value to the implementation details of multi-head attention. Covers attention score computation, the rationale for scaling, masking, residual connections, and other core techniques, along with frequent interview questions and common pitfalls.</description>
    </item>
    <item>
      <title>Self-Attention and Cross-Attention: How the Transformer Uses Two Mechanisms to Handle "One Sequence" and "Two Worlds"</title>
      <link>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</link>
      <pubDate>Thu, 12 Mar 2026 03:15:16 +0800</pubDate>
      <guid>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</guid>
      <description>An in-depth analysis of the technical principles, mathematical formulations, historical evolution, and practical applications of Self-Attention and Cross-Attention in the Transformer. From GPT's autoregressive generation to the encoder-decoder architecture of machine translation, it shows how these two attention mechanisms shape the design philosophy of modern large models.</description>
    </item>
    <item>
      <title>What Is the Transformer Attention Mechanism Actually Computing? A Complete Walkthrough from QKV to Multi-Head Attention</title>
      <link>https://answer.freetools.me/transformer-%E7%9A%84%E6%B3%A8%E6%84%8F%E5%8A%9B%E6%9C%BA%E5%88%B6%E7%A9%B6%E7%AB%9F%E5%9C%A8%E8%AE%A1%E7%AE%97%E4%BB%80%E4%B9%88%E4%BB%8E-qkv-%E5%88%B0%E5%A4%9A%E5%A4%B4%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 12:31:47 +0800</pubDate>
      <guid>https://answer.freetools.me/transformer-%E7%9A%84%E6%B3%A8%E6%84%8F%E5%8A%9B%E6%9C%BA%E5%88%B6%E7%A9%B6%E7%AB%9F%E5%9C%A8%E8%AE%A1%E7%AE%97%E4%BB%80%E4%B9%88%E4%BB%8E-qkv-%E5%88%B0%E5%A4%9A%E5%A4%B4%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth explanation of the core principles of the Transformer attention mechanism: from the intuition behind Query, Key, and Value to the mathematical derivation of scaled dot-product attention, and from the design philosophy of multi-head attention to the essential difference between self-attention and cross-attention. Grounded in the original 2017 paper and recent research, it systematically traces how attention lets a model "understand" the relationships between words in a sequence.</description>
    </item>
  </channel>
</rss>
