<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>SGLang on Answer</title>
    <link>https://answer.freetools.me/tags/sglang/</link>
    <description>Recent content in SGLang on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 12 Mar 2026 14:29:39 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/sglang/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Why Temperature=0 Does Not Mean Deterministic Output: A Complete Technical Analysis of Non-Determinism in LLM Inference</title>
      <link>https://answer.freetools.me/temperature0%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%8D%E7%AD%89%E4%BA%8E%E7%A1%AE%E5%AE%9A%E6%80%A7%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E9%9D%9E%E7%A1%AE%E5%AE%9A%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 14:29:39 +0800</pubDate>
      <guid>https://answer.freetools.me/temperature0%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%8D%E7%AD%89%E4%BA%8E%E7%A1%AE%E5%AE%9A%E6%80%A7%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E9%9D%9E%E7%A1%AE%E5%AE%9A%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth analysis of the root causes of non-determinism in LLM inference: from floating-point non-associativity to batch-size variation, and from the fallacy of the &#34;concurrency + floating point&#34; hypothesis to the batch-invariance solution, fully explaining why setting Temperature=0 still fails to yield reproducible output.</description>
    </item>
    <item>
      <title>How Prefix Caching Lets Repeated Prompts Pass Through LLM Inference at &#34;Zero Cost&#34;</title>
      <link>https://answer.freetools.me/prefix-caching-%E5%A6%82%E4%BD%95%E8%AE%A9%E9%87%8D%E5%A4%8D%E6%8F%90%E7%A4%BA%E8%AF%8D%E5%9C%A8%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%AD%E9%9B%B6%E6%88%90%E6%9C%AC%E9%80%9A%E8%BF%87/</link>
      <pubDate>Mon, 09 Mar 2026 07:03:42 +0800</pubDate>
      <guid>https://answer.freetools.me/prefix-caching-%E5%A6%82%E4%BD%95%E8%AE%A9%E9%87%8D%E5%A4%8D%E6%8F%90%E7%A4%BA%E8%AF%8D%E5%9C%A8%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%AD%E9%9B%B6%E6%88%90%E6%9C%AC%E9%80%9A%E8%BF%87/</guid>
      <description>An in-depth analysis of Prefix Caching in LLM inference. Starting from how the KV Cache works, it systematically covers the two main technical approaches, vLLM&#39;s Block-Level Hashing and SGLang&#39;s RadixAttention; analyzes the commercial Prompt Caching practices of OpenAI and Anthropic; examines the Learned Prefix Caching eviction strategy proposed in a NeurIPS 2025 paper; and provides a prompt-design optimization guide. Covers core performance data such as cache hit rate, an 80% reduction in TTFT, and 90% cost savings.</description>
    </item>
  </channel>
</rss>
