<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Attention Mask on Answer</title>
    <link>https://answer.freetools.me/tags/attention-mask/</link>
    <description>Recent content in Attention Mask on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 12 Mar 2026 22:55:24 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/attention-mask/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Variable-Length Sequence Handling: How Large Models Cope with Inputs of Differing Lengths</title>
      <link>https://answer.freetools.me/%E5%8F%98%E9%95%BF%E5%BA%8F%E5%88%97%E5%A4%84%E7%90%86%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E5%BA%94%E5%AF%B9%E9%95%BF%E7%9F%AD%E4%B8%8D%E4%B8%80%E7%9A%84%E8%BE%93%E5%85%A5/</link>
      <pubDate>Thu, 12 Mar 2026 22:55:24 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%8F%98%E9%95%BF%E5%BA%8F%E5%88%97%E5%A4%84%E7%90%86%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E5%BA%94%E5%AF%B9%E9%95%BF%E7%9F%AD%E4%B8%8D%E4%B8%80%E7%9A%84%E8%BE%93%E5%85%A5/</guid>
      <description>An in-depth look at the core techniques large language models use to handle variable-length sequences: from the dilemma of choosing a padding strategy to how the attention mask works, and from sequence-packing training optimizations to Flash Attention's varlen implementation, showing how this seemingly simple preprocessing step profoundly affects the efficiency of both training and inference.</description>
    </item>
    <item>
      <title>The Padding Trap in Large Models: Why Decoder Inference Requires Left Padding While BERT Uses Right Padding</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84padding%E9%99%B7%E9%98%B1%E4%B8%BA%E4%BB%80%E4%B9%88decoder%E6%8E%A8%E7%90%86%E5%BF%85%E9%A1%BB%E5%B7%A6%E5%A1%AB%E5%85%85%E8%80%8Cbert%E5%8D%B4%E7%94%A8%E5%8F%B3%E5%A1%AB%E5%85%85/</link>
      <pubDate>Thu, 12 Mar 2026 02:54:34 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84padding%E9%99%B7%E9%98%B1%E4%B8%BA%E4%BB%80%E4%B9%88decoder%E6%8E%A8%E7%90%86%E5%BF%85%E9%A1%BB%E5%B7%A6%E5%A1%AB%E5%85%85%E8%80%8Cbert%E5%8D%B4%E7%94%A8%E5%8F%B3%E5%A1%AB%E5%85%85/</guid>
      <description>A deep dive into how padding, truncation, and the attention mask work together in large models. Starting from the generation mechanism of decoder-only models, it explains why GPT inference must use left padding while BERT uses right padding, covering positional-encoding interactions, sequence-packing optimizations, Flash Attention handling, training-versus-inference differences, and other key technical details.</description>
    </item>
    <item>
      <title>Attention Mask: How a Transformer Controls Information Flow with a Single Matrix</title>
      <link>https://answer.freetools.me/attention-masktransformer%E5%A6%82%E4%BD%95%E9%80%9A%E8%BF%87%E4%B8%80%E4%B8%AA%E7%9F%A9%E9%98%B5%E6%8E%A7%E5%88%B6%E4%BF%A1%E6%81%AF%E6%B5%81%E5%90%91/</link>
      <pubDate>Wed, 11 Mar 2026 22:55:24 +0800</pubDate>
      <guid>https://answer.freetools.me/attention-masktransformer%E5%A6%82%E4%BD%95%E9%80%9A%E8%BF%87%E4%B8%80%E4%B8%AA%E7%9F%A9%E9%98%B5%E6%8E%A7%E5%88%B6%E4%BF%A1%E6%81%AF%E6%B5%81%E5%90%91/</guid>
      <description>An in-depth explanation of how the attention mask works in Transformers: from the lower-triangular matrix design of the causal mask to the batching mechanism of the padding mask, showing why a simple matrix can guarantee causality, handle variable-length sequences, and enable computational optimizations. Covers the mathematical principles, implementation details, common pitfalls, and modern optimization techniques.</description>
    </item>
  </channel>
</rss>
