<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>T5 on Answer</title>
    <link>https://answer.freetools.me/tags/t5/</link>
    <description>Recent content in T5 on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>zh-cn</language>
    <lastBuildDate>Thu, 12 Mar 2026 05:34:41 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/t5/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How Relative Position Bias Changes a Transformer&#39;s Sequence Understanding: Seven Years of Technical Evolution from Shaw to ALiBi</title>
      <link>https://answer.freetools.me/%E7%9B%B8%E5%AF%B9%E4%BD%8D%E7%BD%AE%E5%81%8F%E7%BD%AE%E5%A6%82%E4%BD%95%E6%94%B9%E5%8F%98transformer%E7%9A%84%E5%BA%8F%E5%88%97%E7%90%86%E8%A7%A3%E8%83%BD%E5%8A%9B%E4%BB%8Eshaw%E5%88%B0alibi%E7%9A%84%E4%B8%83%E5%B9%B4%E6%8A%80%E6%9C%AF%E6%BC%94%E8%BF%9B/</link>
      <pubDate>Thu, 12 Mar 2026 05:34:41 +0800</pubDate>
      <guid>https://answer.freetools.me/%E7%9B%B8%E5%AF%B9%E4%BD%8D%E7%BD%AE%E5%81%8F%E7%BD%AE%E5%A6%82%E4%BD%95%E6%94%B9%E5%8F%98transformer%E7%9A%84%E5%BA%8F%E5%88%97%E7%90%86%E8%A7%A3%E8%83%BD%E5%8A%9B%E4%BB%8Eshaw%E5%88%B0alibi%E7%9A%84%E4%B8%83%E5%B9%B4%E6%8A%80%E6%9C%AF%E6%BC%94%E8%BF%9B/</guid>
      <description>An in-depth analysis of the technical principles and evolution of relative position encoding in Transformers. From Shaw&#39;s pioneering 2018 paper to T5&#39;s bucketing strategy, ALiBi&#39;s linear biases, and Swin&#39;s 2D relative position encoding, it systematically explains why &amp;#34;distance matters more than coordinates&amp;#34; and how relative position information takes effect in the attention computation. Covers mathematical formulas, implementation details, performance comparisons, and engineering trade-offs.</description>
    </item>
    <item>
      <title>Self-Attention and Cross-Attention: How Transformers Use Two Mechanisms to Handle &amp;#34;the Same Sequence&amp;#34; and &amp;#34;Two Worlds&amp;#34;</title>
      <link>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</link>
      <pubDate>Thu, 12 Mar 2026 03:15:16 +0800</pubDate>
      <guid>https://answer.freetools.me/%E8%87%AA%E6%B3%A8%E6%84%8F%E5%8A%9B%E4%B8%8E%E4%BA%A4%E5%8F%89%E6%B3%A8%E6%84%8F%E5%8A%9Btransformer%E5%A6%82%E4%BD%95%E7%94%A8%E4%B8%A4%E7%A7%8D%E6%9C%BA%E5%88%B6%E5%A4%84%E7%90%86%E5%90%8C%E4%B8%80%E5%BA%8F%E5%88%97%E4%B8%8E%E4%B8%A4%E4%B8%AA%E4%B8%96%E7%95%8C/</guid>
      <description>An in-depth analysis of the technical principles, mathematical formulas, historical evolution, and practical applications of Self-Attention and Cross-Attention in Transformers. From GPT&#39;s autoregressive generation to the encoder-decoder architecture of machine translation, it reveals how these two attention mechanisms shape the design philosophy of modern large models.</description>
    </item>
  </channel>
</rss>
