<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>GPU on Answer</title>
    <link>https://answer.freetools.me/tags/gpu/</link>
    <description>Recent content in GPU on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Fri, 13 Mar 2026 06:48:36 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/gpu/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The GPU's SIMT Execution Model: Why 32 Threads Must Execute the Same Instruction</title>
      <link>https://answer.freetools.me/gpu%E7%9A%84simt%E6%89%A7%E8%A1%8C%E6%A8%A1%E5%9E%8B%E4%B8%BA%E4%BB%80%E4%B9%8832%E4%B8%AA%E7%BA%BF%E7%A8%8B%E5%BF%85%E9%A1%BB%E6%89%A7%E8%A1%8C%E5%90%8C%E4%B8%80%E6%9D%A1%E6%8C%87%E4%BB%A4/</link>
      <pubDate>Fri, 13 Mar 2026 06:48:36 +0800</pubDate>
      <guid>https://answer.freetools.me/gpu%E7%9A%84simt%E6%89%A7%E8%A1%8C%E6%A8%A1%E5%9E%8B%E4%B8%BA%E4%BB%80%E4%B9%8832%E4%B8%AA%E7%BA%BF%E7%A8%8B%E5%BF%85%E9%A1%BB%E6%89%A7%E8%A1%8C%E5%90%8C%E4%B8%80%E6%9D%A1%E6%8C%87%E4%BB%A4/</guid>
      <description>An in-depth look at how the GPU's SIMT execution model enables massively parallel computing: from the warp mechanism to branch divergence handling, and from memory coalescing to latency hiding, revealing the core technical principles that let a GPU execute thousands of threads at once.</description>
    </item>
    <item>
      <title>How the GPU Rendering Pipeline Turns a 3D World into Screen Pixels: Thirty Years of Evolution from Fixed Function to Programmable Shaders</title>
      <link>https://answer.freetools.me/gpu%E6%B8%B2%E6%9F%93%E7%AE%A1%E7%BA%BF%E5%A6%82%E4%BD%95%E5%B0%863d%E4%B8%96%E7%95%8C%E5%8F%98%E6%88%90%E5%B1%8F%E5%B9%95%E5%83%8F%E7%B4%A0%E4%BB%8E%E5%9B%BA%E5%AE%9A%E5%8A%9F%E8%83%BD%E5%88%B0%E5%8F%AF%E7%BC%96%E7%A8%8B%E7%9D%80%E8%89%B2%E5%99%A8%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%BC%94%E8%BF%9B/</link>
      <pubDate>Fri, 13 Mar 2026 03:55:18 +0800</pubDate>
      <guid>https://answer.freetools.me/gpu%E6%B8%B2%E6%9F%93%E7%AE%A1%E7%BA%BF%E5%A6%82%E4%BD%95%E5%B0%863d%E4%B8%96%E7%95%8C%E5%8F%98%E6%88%90%E5%B1%8F%E5%B9%95%E5%83%8F%E7%B4%A0%E4%BB%8E%E5%9B%BA%E5%AE%9A%E5%8A%9F%E8%83%BD%E5%88%B0%E5%8F%AF%E7%BC%96%E7%A8%8B%E7%9D%80%E8%89%B2%E5%99%A8%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%BC%94%E8%BF%9B/</guid>
      <description>An in-depth analysis of how the GPU rendering pipeline works: the complete path from vertex shader to fragment shader, the core mathematics of rasterization algorithms, and the evolution from the fixed-function pipeline to modern hybrid rendering architectures.</description>
    </item>
    <item>
      <title>The Seed Parameter: Why This Integer Determines a Large Model's Output Trajectory</title>
      <link>https://answer.freetools.me/seed%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E6%95%B4%E6%95%B0%E8%83%BD%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%BE%93%E5%87%BA%E8%BD%A8%E8%BF%B9/</link>
      <pubDate>Thu, 12 Mar 2026 05:41:44 +0800</pubDate>
      <guid>https://answer.freetools.me/seed%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E6%95%B4%E6%95%B0%E8%83%BD%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%BE%93%E5%87%BA%E8%BD%A8%E8%BF%B9/</guid>
      <description>An in-depth look at how the seed parameter works in large language models: from the underlying implementation of pseudorandom number generators, to the mathematics of temperature sampling, to the root causes of GPU non-determinism. Covers system_fingerprint, batch invariance, and the full engineering practice of achieving reproducible output in production.</description>
    </item>
    <item>
      <title>Why One Graphics Card Can Crush an Entire CPU Cluster: How GPU Parallel Computing Became the Bedrock of Deep Learning</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E5%BC%A0%E6%98%BE%E5%8D%A1%E8%83%BD%E5%B9%B2%E7%BF%BB%E6%95%B4%E4%B8%AAcpu%E9%9B%86%E7%BE%A4gpu%E5%B9%B6%E8%A1%8C%E8%AE%A1%E7%AE%97%E5%A6%82%E4%BD%95%E6%88%90%E4%B8%BA%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E7%9A%84%E5%9F%BA%E7%9F%B3/</link>
      <pubDate>Mon, 09 Mar 2026 04:26:49 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E5%BC%A0%E6%98%BE%E5%8D%A1%E8%83%BD%E5%B9%B2%E7%BF%BB%E6%95%B4%E4%B8%AAcpu%E9%9B%86%E7%BE%A4gpu%E5%B9%B6%E8%A1%8C%E8%AE%A1%E7%AE%97%E5%A6%82%E4%BD%95%E6%88%90%E4%B8%BA%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E7%9A%84%E5%9F%BA%E7%9F%B3/</guid>
      <description>A deep dive into why GPUs suit deep learning: from architectural design philosophy, the SIMT execution model, and memory bandwidth advantages to Tensor Core hardware acceleration, revealing the technical principles that made GPU parallel computing the bedrock of deep learning.</description>
    </item>
    <item>
      <title>Why Flash Attention Speeds Up Attention Severalfold Without Losing Accuracy: A Technical Breakout from the GPU Memory Wall to IO-Aware Algorithms</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88flash-attention%E8%83%BD%E5%B0%86%E6%B3%A8%E6%84%8F%E5%8A%9B%E8%AE%A1%E7%AE%97%E6%8F%90%E9%80%9F%E6%95%B0%E5%80%8D%E8%80%8C%E4%B8%8D%E6%8D%9F%E5%A4%B1%E7%B2%BE%E5%BA%A6%E4%BB%8Egpu%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0io%E6%84%9F%E7%9F%A5%E7%AE%97%E6%B3%95%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Mon, 09 Mar 2026 03:57:50 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88flash-attention%E8%83%BD%E5%B0%86%E6%B3%A8%E6%84%8F%E5%8A%9B%E8%AE%A1%E7%AE%97%E6%8F%90%E9%80%9F%E6%95%B0%E5%80%8D%E8%80%8C%E4%B8%8D%E6%8D%9F%E5%A4%B1%E7%B2%BE%E5%BA%A6%E4%BB%8Egpu%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0io%E6%84%9F%E7%9F%A5%E7%AE%97%E6%B3%95%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>A deep dive into how Flash Attention breaks through the GPU memory wall via IO-aware algorithm design to accelerate attention computation severalfold. From the GPU memory hierarchy to tiled computation, a complete look at the core technique that reshaped large-model training.</description>
    </item>
    <item>
      <title>WebGPU Is Not an Upgraded WebGL: A Twenty-Year Rebuild from the Global State Machine to Modern GPU Architecture</title>
      <link>https://answer.freetools.me/webgpu%E4%B8%8D%E6%98%AFwebgl%E7%9A%84%E5%8D%87%E7%BA%A7%E7%89%88%E4%BB%8E%E5%85%A8%E5%B1%80%E7%8A%B6%E6%80%81%E6%9C%BA%E5%88%B0%E7%8E%B0%E4%BB%A3gpu%E6%9E%B6%E6%9E%84%E7%9A%84%E4%BA%8C%E5%8D%81%E5%B9%B4%E9%87%8D%E6%9E%84/</link>
      <pubDate>Sun, 08 Mar 2026 14:40:00 +0800</pubDate>
      <guid>https://answer.freetools.me/webgpu%E4%B8%8D%E6%98%AFwebgl%E7%9A%84%E5%8D%87%E7%BA%A7%E7%89%88%E4%BB%8E%E5%85%A8%E5%B1%80%E7%8A%B6%E6%80%81%E6%9C%BA%E5%88%B0%E7%8E%B0%E4%BB%A3gpu%E6%9E%B6%E6%9E%84%E7%9A%84%E4%BA%8C%E5%8D%81%E5%B9%B4%E9%87%8D%E6%9E%84/</guid>
      <description>An in-depth analysis of the architectural differences between WebGPU and WebGL: from WebGL's 2011 OpenGL ES heritage to WebGPU's 2025 Vulkan/Metal/DX12 underpinnings, and from the programming pitfalls of a global state machine to compute shaders unlocking general-purpose GPU computing. Systematically explains how WebGPU achieves 3-8x GEMM performance gains, why it accelerates LLM inference by 3.8x, and the principles behind its stateless API, asynchronous architecture, and workgroup model.</description>
    </item>
    <item>
      <title>Why GPU Memory Is Never Enough: A Technical Breakout from the Memory Wall to KV Cache Fragmentation</title>
      <link>https://answer.freetools.me/gpu%E6%98%BE%E5%AD%98%E4%B8%BA%E4%BD%95%E6%80%BB%E6%98%AF%E4%B8%8D%E5%A4%9F%E7%94%A8%E4%BB%8E%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0kv-cache%E7%A2%8E%E7%89%87%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Fri, 06 Mar 2026 22:30:03 +0800</pubDate>
      <guid>https://answer.freetools.me/gpu%E6%98%BE%E5%AD%98%E4%B8%BA%E4%BD%95%E6%80%BB%E6%98%AF%E4%B8%8D%E5%A4%9F%E7%94%A8%E4%BB%8E%E5%86%85%E5%AD%98%E5%A2%99%E5%88%B0kv-cache%E7%A2%8E%E7%89%87%E5%8C%96%E7%9A%84%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>A deep dive into the root causes of GPU memory bottlenecks, from the hardware-level memory wall to the software-level challenges of KV Cache management, with a thorough analysis of the principles and trade-offs of breakthrough techniques such as PagedAttention and FlashAttention.</description>
    </item>
  </channel>
</rss>
