<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>LLM on Answer</title>
    <link>https://answer.freetools.me/tags/llm/</link>
    <description>Recent content in LLM on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Fri, 13 Mar 2026 08:07:25 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/llm/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Boundaries and Breakthroughs of LLM Code Generation: A Technical Analysis from Syntactic Understanding to Semantic Reasoning</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%BB%A3%E7%A0%81%E7%94%9F%E6%88%90%E8%83%BD%E5%8A%9B%E7%9A%84%E8%BE%B9%E7%95%8C%E4%B8%8E%E7%AA%81%E7%A0%B4%E4%BB%8E%E8%AF%AD%E6%B3%95%E7%90%86%E8%A7%A3%E5%88%B0%E8%AF%AD%E4%B9%89%E6%8E%A8%E7%90%86%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Fri, 13 Mar 2026 08:07:25 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%BB%A3%E7%A0%81%E7%94%9F%E6%88%90%E8%83%BD%E5%8A%9B%E7%9A%84%E8%BE%B9%E7%95%8C%E4%B8%8E%E7%AA%81%E7%A0%B4%E4%BB%8E%E8%AF%AD%E6%B3%95%E7%90%86%E8%A7%A3%E5%88%B0%E8%AF%AD%E4%B9%89%E6%8E%A8%E7%90%86%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth analysis of the real capability boundaries of large language models in code generation tasks, unfolding across three levels: syntactic understanding, static semantic analysis, and dynamic semantic reasoning. It examines model hallucination, security risks, and the limitations of evaluation benchmarks, helping developers understand and use code generation tools correctly.</description>
    </item>
    <item>
      <title>Variable-Length Sequence Processing: How LLMs Handle Inputs of Different Lengths</title>
      <link>https://answer.freetools.me/%E5%8F%98%E9%95%BF%E5%BA%8F%E5%88%97%E5%A4%84%E7%90%86%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E5%BA%94%E5%AF%B9%E9%95%BF%E7%9F%AD%E4%B8%8D%E4%B8%80%E7%9A%84%E8%BE%93%E5%85%A5/</link>
      <pubDate>Thu, 12 Mar 2026 22:55:24 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%8F%98%E9%95%BF%E5%BA%8F%E5%88%97%E5%A4%84%E7%90%86%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E5%BA%94%E5%AF%B9%E9%95%BF%E7%9F%AD%E4%B8%8D%E4%B8%80%E7%9A%84%E8%BE%93%E5%85%A5/</guid>
      <description>A deep dive into the core techniques LLMs use to handle variable-length sequences: from the trade-offs of padding strategies to how attention masks work, and from sequence packing for training optimization to Flash Attention&#39;s varlen implementation, revealing how this seemingly simple preprocessing step profoundly affects training and inference efficiency.</description>
    </item>
    <item>
      <title>Confidence Calibration: When an LLM Says &#34;I&#39;m 80% Sure&#34;, Does It Really Know What It&#39;s Talking About?</title>
      <link>https://answer.freetools.me/%E7%BD%AE%E4%BF%A1%E5%BA%A6%E6%A0%A1%E5%87%86%E5%BD%93%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%AF%B4%E6%88%91%E6%9C%8980%E6%8A%8A%E6%8F%A1%E6%97%B6%E5%AE%83%E7%9C%9F%E7%9A%84%E7%9F%A5%E9%81%93%E8%87%AA%E5%B7%B1%E5%9C%A8%E8%AF%B4%E4%BB%80%E4%B9%88%E5%90%97/</link>
      <pubDate>Thu, 12 Mar 2026 15:13:23 +0800</pubDate>
      <guid>https://answer.freetools.me/%E7%BD%AE%E4%BF%A1%E5%BA%A6%E6%A0%A1%E5%87%86%E5%BD%93%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%AF%B4%E6%88%91%E6%9C%8980%E6%8A%8A%E6%8F%A1%E6%97%B6%E5%AE%83%E7%9C%9F%E7%9A%84%E7%9F%A5%E9%81%93%E8%87%AA%E5%B7%B1%E5%9C%A8%E8%AF%B4%E4%BB%80%E4%B9%88%E5%90%97/</guid>
      <description>A deep dive into confidence calibration for large language models: starting from Guo et al.&#39;s seminal 2017 paper, it systematically covers evaluation methods such as ECE and reliability diagrams, explains the root causes of LLM overconfidence, details calibration techniques such as temperature scaling and Platt scaling, and explores key applications in medical AI and hallucination detection. Also covers how RLHF harms calibration, recent advances in verbalized confidence, and &#34;knowing when not to know&#34; as a core proposition of AI safety.</description>
    </item>
    <item>
      <title>Why Temperature=0 Does Not Mean Deterministic Output: A Complete Technical Analysis of Non-Determinism in LLM Inference</title>
      <link>https://answer.freetools.me/temperature0%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%8D%E7%AD%89%E4%BA%8E%E7%A1%AE%E5%AE%9A%E6%80%A7%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E9%9D%9E%E7%A1%AE%E5%AE%9A%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 14:29:39 +0800</pubDate>
      <guid>https://answer.freetools.me/temperature0%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%8D%E7%AD%89%E4%BA%8E%E7%A1%AE%E5%AE%9A%E6%80%A7%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E9%9D%9E%E7%A1%AE%E5%AE%9A%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the root causes of non-determinism in LLM inference: from floating-point non-associativity to batch-size variation, and from the fallacy of the &#34;concurrency + floating point&#34; hypothesis to the batch-invariance solution, fully explaining why setting Temperature=0 still cannot yield reproducible output.</description>
    </item>
    <item>
      <title>Why LLMs Give a Different Answer Every Time: A Complete Technical Analysis from the Temperature Parameter to Batch Invariance</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%AF%8F%E6%AC%A1%E5%9B%9E%E7%AD%94%E9%83%BD%E4%B8%8D%E4%B8%80%E6%A0%B7%E4%BB%8E%E6%B8%A9%E5%BA%A6%E5%8F%82%E6%95%B0%E5%88%B0%E6%89%B9%E6%AC%A1%E4%B8%8D%E5%8F%98%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 14:00:43 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%AF%8F%E6%AC%A1%E5%9B%9E%E7%AD%94%E9%83%BD%E4%B8%8D%E4%B8%80%E6%A0%B7%E4%BB%8E%E6%B8%A9%E5%BA%A6%E5%8F%82%E6%95%B0%E5%88%B0%E6%89%B9%E6%AC%A1%E4%B8%8D%E5%8F%98%E6%80%A7%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth analysis of the technical roots of randomness in LLM output, from the mathematics of the temperature parameter to batch invariance as the overlooked true cause, and how to achieve reproducible output in production.</description>
    </item>
    <item>
      <title>The LLM Context Window: A Complete Analysis from Token Limits to Effective Context Management</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E4%B8%8A%E4%B8%8B%E6%96%87%E7%AA%97%E5%8F%A3%E4%BB%8Etoken%E9%99%90%E5%88%B6%E5%88%B0%E6%9C%89%E6%95%88%E4%B8%8A%E4%B8%8B%E6%96%87%E7%AE%A1%E7%90%86%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 08:57:03 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E4%B8%8A%E4%B8%8B%E6%96%87%E7%AA%97%E5%8F%A3%E4%BB%8Etoken%E9%99%90%E5%88%B6%E5%88%B0%E6%9C%89%E6%95%88%E4%B8%8A%E4%B8%8B%E6%96%87%E7%AE%A1%E7%90%86%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the technical nature of LLM context windows: from the O(n²) complexity of the attention mechanism to KV cache memory consumption, and from the &#34;lost in the middle&#34; phenomenon to the gap between nominal and effective context length, systematically covering the root causes of context limits, management strategies, and best practices.</description>
    </item>
    <item>
      <title>Logprobs Explained: The Hidden Information in LLM Output</title>
      <link>https://answer.freetools.me/logprobs%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%BE%93%E5%87%BA%E7%9A%84%E9%9A%90%E8%97%8F%E4%BF%A1%E6%81%AF/</link>
      <pubDate>Thu, 12 Mar 2026 07:08:36 +0800</pubDate>
      <guid>https://answer.freetools.me/logprobs%E6%B7%B1%E5%BA%A6%E8%A7%A3%E6%9E%90%E5%A4%A7%E6%A8%A1%E5%9E%8B%E8%BE%93%E5%87%BA%E7%9A%84%E9%9A%90%E8%97%8F%E4%BF%A1%E6%81%AF/</guid>
      <description>From information-theoretic foundations to engineering practice: a deep dive into the principles behind logprobs, numerical stability, confidence estimation, and applications in hallucination detection.</description>
    </item>
    <item>
      <title>How LLMs Choose the Next Word: The Complete Technical Pipeline from Probability Prediction to Text Generation</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E9%80%89%E6%8B%A9%E4%B8%8B%E4%B8%80%E4%B8%AA%E8%AF%8D%E4%BB%8E%E6%A6%82%E7%8E%87%E9%A2%84%E6%B5%8B%E5%88%B0%E6%96%87%E6%9C%AC%E7%94%9F%E6%88%90%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E9%93%BE%E8%B7%AF/</link>
      <pubDate>Thu, 12 Mar 2026 06:53:41 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E9%80%89%E6%8B%A9%E4%B8%8B%E4%B8%80%E4%B8%AA%E8%AF%8D%E4%BB%8E%E6%A6%82%E7%8E%87%E9%A2%84%E6%B5%8B%E5%88%B0%E6%96%87%E6%9C%AC%E7%94%9F%E6%88%90%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E9%93%BE%E8%B7%AF/</guid>
      <description>A deep dive into the core techniques of LLM text generation: from the nature of logits and the softmax transformation, to the trade-offs among decoding strategies, to neural text degeneration and best practices for coordinating sampling parameters, revealing the full technical truth behind &#34;the model making a decision&#34;.</description>
    </item>
    <item>
      <title>The Seed Parameter: Why This Integer Can Determine an LLM&#39;s Output Trajectory</title>
      <link>https://answer.freetools.me/seed%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E6%95%B4%E6%95%B0%E8%83%BD%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%BE%93%E5%87%BA%E8%BD%A8%E8%BF%B9/</link>
      <pubDate>Thu, 12 Mar 2026 05:41:44 +0800</pubDate>
      <guid>https://answer.freetools.me/seed%E5%8F%82%E6%95%B0%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E6%95%B4%E6%95%B0%E8%83%BD%E5%86%B3%E5%AE%9A%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%BE%93%E5%87%BA%E8%BD%A8%E8%BF%B9/</guid>
      <description>A deep dive into the seed parameter in large language models: from the underlying implementation of pseudo-random number generators, to the mathematics of temperature sampling, to the roots of GPU non-determinism. Covers system_fingerprint, batch invariance, and complete engineering practices for reproducible output in production.</description>
    </item>
    <item>
      <title>From Input Text to Output: A Complete Walkthrough of LLM Inference</title>
      <link>https://answer.freetools.me/%E4%BB%8E%E8%BE%93%E5%85%A5%E6%96%87%E6%9C%AC%E5%88%B0%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E7%9A%84%E5%AE%8C%E6%95%B4%E6%B5%81%E7%A8%8B%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Thu, 12 Mar 2026 04:10:51 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%BB%8E%E8%BE%93%E5%85%A5%E6%96%87%E6%9C%AC%E5%88%B0%E8%BE%93%E5%87%BA%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E7%9A%84%E5%AE%8C%E6%95%B4%E6%B5%81%E7%A8%8B%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the complete technical pipeline of LLM inference, from tokenization, embedding, positional encoding, and attention computation to autoregressive generation, revealing every step the model takes to turn input text into an output response.</description>
    </item>
    <item>
      <title>The LLM Padding Trap: Why Decoder Inference Requires Left Padding While BERT Uses Right Padding</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84padding%E9%99%B7%E9%98%B1%E4%B8%BA%E4%BB%80%E4%B9%88decoder%E6%8E%A8%E7%90%86%E5%BF%85%E9%A1%BB%E5%B7%A6%E5%A1%AB%E5%85%85%E8%80%8Cbert%E5%8D%B4%E7%94%A8%E5%8F%B3%E5%A1%AB%E5%85%85/</link>
      <pubDate>Thu, 12 Mar 2026 02:54:34 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84padding%E9%99%B7%E9%98%B1%E4%B8%BA%E4%BB%80%E4%B9%88decoder%E6%8E%A8%E7%90%86%E5%BF%85%E9%A1%BB%E5%B7%A6%E5%A1%AB%E5%85%85%E8%80%8Cbert%E5%8D%B4%E7%94%A8%E5%8F%B3%E5%A1%AB%E5%85%85/</guid>
      <description>A deep dive into how padding, truncation, and attention masks work together in large language models. Starting from the generation mechanism of decoder-only models, it explains why GPT inference must use left padding while BERT uses right padding, covering positional-encoding interactions, sequence packing, Flash Attention handling, training/inference discrepancies, and other core technical details.</description>
    </item>
    <item>
      <title>How LLM Instruction Fine-Tuning Works: A Complete Technical Analysis from Pretraining to Instruction Following</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E6%8C%87%E4%BB%A4%E5%BE%AE%E8%B0%83%E6%98%AF%E5%A6%82%E4%BD%95%E5%B7%A5%E4%BD%9C%E7%9A%84%E4%BB%8E%E9%A2%84%E8%AE%AD%E7%BB%83%E5%88%B0%E6%8C%87%E4%BB%A4%E9%81%B5%E5%BE%AA%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 23:10:04 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E6%8C%87%E4%BB%A4%E5%BE%AE%E8%B0%83%E6%98%AF%E5%A6%82%E4%BD%95%E5%B7%A5%E4%BD%9C%E7%9A%84%E4%BB%8E%E9%A2%84%E8%AE%AD%E7%BB%83%E5%88%B0%E6%8C%87%E4%BB%A4%E9%81%B5%E5%BE%AA%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the full technical pipeline of instruction fine-tuning: starting from the limitations of pretrained models, it details the core mechanism, loss masking strategies, dataset construction methods, the relationship to RLHF, and key decisions in practice.</description>
    </item>
    <item>
      <title>Why Does a Hundred-Billion-Parameter Model Have a Vocabulary of Only 32K? A Complete Analysis from Compression Efficiency to Compute Optimality</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%AF%8D%E8%A1%A8%E5%8F%AA%E6%9C%8932k%E4%BB%8E%E5%8E%8B%E7%BC%A9%E6%95%88%E7%8E%87%E5%88%B0%E8%AE%A1%E7%AE%97%E6%9C%80%E4%BC%98%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 19:30:52 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%8D%83%E4%BA%BF%E5%8F%82%E6%95%B0%E6%A8%A1%E5%9E%8B%E7%9A%84%E8%AF%8D%E8%A1%A8%E5%8F%AA%E6%9C%8932k%E4%BB%8E%E5%8E%8B%E7%BC%A9%E6%95%88%E7%8E%87%E5%88%B0%E8%AE%A1%E7%AE%97%E6%9C%80%E4%BC%98%E7%9A%84%E5%AE%8C%E6%95%B4%E8%A7%A3%E6%9E%90/</guid>
      <description>Why does a hundred-billion-parameter model have a vocabulary of only 32K? An in-depth look, from compression efficiency to compute optimality, at how vocabulary size affects model performance, multilingual processing efficiency, and memory footprint, plus the optimal vocabulary-size calculation revealed by a NeurIPS 2024 paper.</description>
    </item>
    <item>
      <title>The Technical Principles of Prompt Engineering: Why the Same Meaning, Phrased Differently, Gets Wildly Different Answers from an LLM</title>
      <link>https://answer.freetools.me/%E6%8F%90%E7%A4%BA%E8%AF%8D%E5%B7%A5%E7%A8%8B%E7%9A%84%E6%8A%80%E6%9C%AF%E5%8E%9F%E7%90%86%E4%B8%BA%E4%BB%80%E4%B9%88%E5%90%8C%E6%A0%B7%E7%9A%84%E6%84%8F%E6%80%9D%E4%B8%8D%E5%90%8C%E7%9A%84%E9%97%AE%E6%B3%95%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%9B%9E%E7%AD%94%E5%A4%A9%E5%B7%AE%E5%9C%B0%E5%88%AB/</link>
      <pubDate>Wed, 11 Mar 2026 18:59:19 +0800</pubDate>
      <guid>https://answer.freetools.me/%E6%8F%90%E7%A4%BA%E8%AF%8D%E5%B7%A5%E7%A8%8B%E7%9A%84%E6%8A%80%E6%9C%AF%E5%8E%9F%E7%90%86%E4%B8%BA%E4%BB%80%E4%B9%88%E5%90%8C%E6%A0%B7%E7%9A%84%E6%84%8F%E6%80%9D%E4%B8%8D%E5%90%8C%E7%9A%84%E9%97%AE%E6%B3%95%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%9B%9E%E7%AD%94%E5%A4%A9%E5%B7%AE%E5%9C%B0%E5%88%AB/</guid>
      <description>Starting from the mathematics of the attention mechanism, this article dissects the core techniques of prompt engineering: why does the same meaning, phrased differently, produce wildly different outputs? It covers chain-of-thought reasoning, the U-shaped attention curve, few-shot learning, system prompt priority, sampling-parameter coordination, and prompt-injection defense, with code examples and visualizations to help you truly understand the technology behind prompts.</description>
    </item>
    <item>
      <title>Chat Templates: The Most Easily Overlooked Hidden Language in LLM Applications</title>
      <link>https://answer.freetools.me/%E5%AF%B9%E8%AF%9D%E6%A8%A1%E6%9D%BF%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%BA%94%E7%94%A8%E4%B8%AD%E6%9C%80%E5%AE%B9%E6%98%93%E8%A2%AB%E5%BF%BD%E8%A7%86%E7%9A%84%E9%9A%90%E5%BD%A2%E8%AF%AD%E8%A8%80/</link>
      <pubDate>Wed, 11 Mar 2026 14:38:05 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%AF%B9%E8%AF%9D%E6%A8%A1%E6%9D%BF%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%BA%94%E7%94%A8%E4%B8%AD%E6%9C%80%E5%AE%B9%E6%98%93%E8%A2%AB%E5%BF%BD%E8%A7%86%E7%9A%84%E9%9A%90%E5%BD%A2%E8%AF%AD%E8%A8%80/</guid>
      <description>A deep dive into the design principles, format differences, and technical evolution of LLM chat templates. From ChatML to Llama 3 and Mistral&#39;s [INST] format, to the ChatBug security vulnerability, it reveals how this critical bridge between user input and model output affects model performance, security, and ecosystem fragmentation.</description>
    </item>
    <item>
      <title>How Tokenizers Shape an LLM&#39;s Worldview: Thirty Years of Technical Evolution from BPE to the Byte Latent Transformer</title>
      <link>https://answer.freetools.me/tokenizer-%E5%A6%82%E4%BD%95%E5%A1%91%E9%80%A0%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E7%9A%84%E4%B8%96%E7%95%8C%E8%A7%82%E4%BB%8E-bpe-%E5%88%B0-byte-latent-transformer-%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E6%BC%94%E8%BF%9B/</link>
      <pubDate>Wed, 11 Mar 2026 14:01:29 +0800</pubDate>
      <guid>https://answer.freetools.me/tokenizer-%E5%A6%82%E4%BD%95%E5%A1%91%E9%80%A0%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E7%9A%84%E4%B8%96%E7%95%8C%E8%A7%82%E4%BB%8E-bpe-%E5%88%B0-byte-latent-transformer-%E7%9A%84%E4%B8%89%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E6%BC%94%E8%BF%9B/</guid>
      <description>A deep dive into how LLM tokenizers work: the technical differences among the three mainstream algorithms (BPE, WordPiece, and Unigram), the deep effects of tokenization on arithmetic reasoning, multilingual processing, and character-level tasks, and the future of tokenizer-free architectures such as the Byte Latent Transformer.</description>
    </item>
    <item>
      <title>How LLMs Are Evaluated: A Complete Technical Analysis from Standardized Tests to Human Preferences</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E8%AF%84%E4%BC%B0%E4%BB%8E%E6%A0%87%E5%87%86%E5%8C%96%E8%80%83%E8%AF%95%E5%88%B0%E4%BA%BA%E7%B1%BB%E5%81%8F%E5%A5%BD%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 13:52:30 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%A6%82%E4%BD%95%E8%AF%84%E4%BC%B0%E4%BB%8E%E6%A0%87%E5%87%86%E5%8C%96%E8%80%83%E8%AF%95%E5%88%B0%E4%BA%BA%E7%B1%BB%E5%81%8F%E5%A5%BD%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the evolution of LLM evaluation. From standardized benchmarks such as MMLU and GSM8K, to Chatbot Arena&#39;s human-preference leaderboard, to core challenges such as data contamination and benchmark saturation, it reveals how to scientifically assess what a model can really do.</description>
    </item>
    <item>
      <title>Why the First Token Is Always Slow in LLM Inference: A Complete Technical Analysis from Prefill to Decode</title>
      <link>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%BA%E4%BB%80%E4%B9%88%E7%AC%AC%E4%B8%80%E4%B8%AA-token-%E6%80%BB%E6%98%AF%E5%BE%88%E6%85%A2%E4%BB%8E-prefill-%E5%88%B0-decode-%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 12:42:37 +0800</pubDate>
      <guid>https://answer.freetools.me/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86%E4%B8%BA%E4%BB%80%E4%B9%88%E7%AC%AC%E4%B8%80%E4%B8%AA-token-%E6%80%BB%E6%98%AF%E5%BE%88%E6%85%A2%E4%BB%8E-prefill-%E5%88%B0-decode-%E7%9A%84%E5%AE%8C%E6%95%B4%E6%8A%80%E6%9C%AF%E8%A7%A3%E6%9E%90/</guid>
      <description>A deep dive into the essential differences between the prefill and decode phases of LLM inference. From arithmetic intensity and memory-bandwidth bottlenecks to the KV cache mechanism, it explains why first-token latency differs so dramatically from subsequent token generation speed, and how optimizations such as continuous batching and chunked prefill work.</description>
    </item>
    <item>
      <title>How the Temperature Parameter Controls an LLM&#39;s &#34;Creativity&#34; and &#34;Determinism&#34;</title>
      <link>https://answer.freetools.me/temperature-%E5%8F%82%E6%95%B0%E5%A6%82%E4%BD%95%E6%8E%A7%E5%88%B6%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%88%9B%E9%80%A0%E6%80%A7%E4%B8%8E%E7%A1%AE%E5%AE%9A%E6%80%A7/</link>
      <pubDate>Wed, 11 Mar 2026 12:14:36 +0800</pubDate>
      <guid>https://answer.freetools.me/temperature-%E5%8F%82%E6%95%B0%E5%A6%82%E4%BD%95%E6%8E%A7%E5%88%B6%E5%A4%A7%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%88%9B%E9%80%A0%E6%80%A7%E4%B8%8E%E7%A1%AE%E5%AE%9A%E6%80%A7/</guid>
      <description>A deep dive into the mathematics, physical origins, and practical use of the Temperature parameter in large language models. From the softmax function to the Boltzmann distribution, it reveals how this seemingly simple parameter reshapes the model&#39;s output distribution, and how to choose an appropriate temperature for different tasks.</description>
    </item>
    <item>
      <title>Why Do LLMs Confidently Talk Nonsense? A Technical Dissection from Probabilistic Generation to the Attention Mechanism</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%BC%9A%E4%B8%80%E6%9C%AC%E6%AD%A3%E7%BB%8F%E5%9C%B0%E8%83%A1%E8%AF%B4%E5%85%AB%E9%81%93%E4%BB%8E%E6%A6%82%E7%8E%87%E7%94%9F%E6%88%90%E5%88%B0%E6%B3%A8%E6%84%8F%E5%8A%9B%E6%9C%BA%E5%88%B6%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%89%96/</link>
      <pubDate>Sat, 07 Mar 2026 09:12:30 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88%E5%A4%A7%E6%A8%A1%E5%9E%8B%E4%BC%9A%E4%B8%80%E6%9C%AC%E6%AD%A3%E7%BB%8F%E5%9C%B0%E8%83%A1%E8%AF%B4%E5%85%AB%E9%81%93%E4%BB%8E%E6%A6%82%E7%8E%87%E7%94%9F%E6%88%90%E5%88%B0%E6%B3%A8%E6%84%8F%E5%8A%9B%E6%9C%BA%E5%88%B6%E7%9A%84%E6%8A%80%E6%9C%AF%E8%A7%A3%E5%89%96/</guid>
      <description>A deep dive into the technical nature of LLM hallucination: from Transformer architectural limits and training-data defects to the softmax bottleneck, it shows why hallucination is not a bug but an inevitable product of the architecture, and examines the effectiveness limits of mitigations such as RAG and chain-of-thought.</description>
    </item>
  </channel>
</rss>
