<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>FlashAttention on Answer</title>
    <link>https://answer.freetools.me/tags/flashattention/</link>
    <description>Recent content in FlashAttention on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 12 Mar 2026 10:44:33 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/flashattention/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Double the Sequence Length, Quadruple the Inference Time? The Technical Truth About Transformer Attention Complexity</title>
      <link>https://answer.freetools.me/%E5%BA%8F%E5%88%97%E9%95%BF%E5%BA%A6%E5%A2%9E%E5%8A%A0%E4%B8%80%E5%80%8D%E6%8E%A8%E7%90%86%E6%97%B6%E9%97%B4%E7%BF%BB%E5%9B%9B%E5%80%8Dtransformer%E6%B3%A8%E6%84%8F%E5%8A%9B%E5%A4%8D%E6%9D%82%E5%BA%A6%E7%9A%84%E6%8A%80%E6%9C%AF%E7%9C%9F%E7%9B%B8/</link>
      <pubDate>Thu, 12 Mar 2026 10:44:33 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%BA%8F%E5%88%97%E9%95%BF%E5%BA%A6%E5%A2%9E%E5%8A%A0%E4%B8%80%E5%80%8D%E6%8E%A8%E7%90%86%E6%97%B6%E9%97%B4%E7%BF%BB%E5%9B%9B%E5%80%8Dtransformer%E6%B3%A8%E6%84%8F%E5%8A%9B%E5%A4%8D%E6%9D%82%E5%BA%A6%E7%9A%84%E6%8A%80%E6%9C%AF%E7%9C%9F%E7%9B%B8/</guid>
      <description>An in-depth analysis of the O(n²) complexity bottleneck in the Transformer attention mechanism, covering the GPU memory hierarchy, the differences between the Prefill and Decode phases, KV Cache optimization, and FlashAttention's IO-aware algorithm, revealing the root cause of why sequence length affects inference speed and the paths to optimizing it.</description>
    </item>
    <item>
      <title>Why GPU Memory Is Never Enough: Breaking Through from the Memory Wall to KV Cache Fragmentation</title>
      <link>https://answer.freetools.me/gpu%E6%98%BE%E5%AD%98%E4%B8%BA%E4%BD%95%E6%80%BB%E6%98%AF%E4%B8%8D%E5%A4%9F%E7%94%A8%E4%BB%8E%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0kv-cache%E7%A2%8E%E7%89%87%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Fri, 06 Mar 2026 22:30:03 +0800</pubDate>
      <guid>https://answer.freetools.me/gpu%E6%98%BE%E5%AD%98%E4%B8%BA%E4%BD%95%E6%80%BB%E6%98%AF%E4%B8%8D%E5%A4%9F%E7%94%A8%E4%BB%8E%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0kv-cache%E7%A2%8E%E7%89%87%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>A deep dive into the root causes of GPU memory bottlenecks, from the hardware-level memory wall to the software-level challenges of KV Cache management, with a comprehensive analysis of the principles and trade-offs of breakthrough techniques such as PagedAttention and FlashAttention.</description>
    </item>
    <item>
      <title>Why Is Large Model Inference So Slow? Breaking Through from Memory Bandwidth Bottlenecks to KV Cache Optimization</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%BA%E4%BD%95%E8%BF%99%E4%B9%88%E6%85%A2%E4%BB%8E%E5%86%85%E5%AD%98%E5%B8%A6%E5%AE%BD%E7%93%B6%E9%A2%88%E5%88%B0kv-cache%E4%BC%98%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Fri, 06 Mar 2026 12:41:49 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%BA%E4%BD%95%E8%BF%99%E4%B9%88%E6%85%A2%E4%BB%8E%E5%86%85%E5%AD%98%E5%B8%A6%E5%AE%BD%E7%93%B6%E9%A2%88%E5%88%B0kv-cache%E4%BC%98%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>An in-depth analysis of the performance bottlenecks in large language model inference, tracing the complete technical evolution from memory bandwidth limits to KV Cache optimization. Covers core techniques including FlashAttention, PagedAttention, GQA, and continuous batching, along with framework selection advice for vLLM and TensorRT-LLM.</description>
    </item>
  </channel>
</rss>
