<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Vanishing Gradient on Answer</title>
    <link>https://answer.freetools.me/tags/%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1/</link>
    <description>Recent content in Vanishing Gradient on Answer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 12 Mar 2026 23:08:50 +0800</lastBuildDate>
    <atom:link href="https://answer.freetools.me/tags/%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Why RNNs Cannot Remember Information Beyond Twenty Steps: Forty Years of Technical Breakthroughs from Vanishing Gradients to Modern Sequence Models</title>
      <link>https://answer.freetools.me/rnn%E4%B8%BA%E4%BB%80%E4%B9%88%E6%97%A0%E6%B3%95%E8%AE%B0%E4%BD%8F%E8%B6%85%E8%BF%87%E4%BA%8C%E5%8D%81%E6%AD%A5%E7%9A%84%E4%BF%A1%E6%81%AF%E4%BB%8E%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E5%88%B0%E7%8E%B0%E4%BB%A3%E5%BA%8F%E5%88%97%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%9B%9B%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</link>
      <pubDate>Thu, 12 Mar 2026 23:08:50 +0800</pubDate>
      <guid>https://answer.freetools.me/rnn%E4%B8%BA%E4%BB%80%E4%B9%88%E6%97%A0%E6%B3%95%E8%AE%B0%E4%BD%8F%E8%B6%85%E8%BF%87%E4%BA%8C%E5%8D%81%E6%AD%A5%E7%9A%84%E4%BF%A1%E6%81%AF%E4%BB%8E%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E5%88%B0%E7%8E%B0%E4%BB%A3%E5%BA%8F%E5%88%97%E6%A8%A1%E5%9E%8B%E7%9A%84%E5%9B%9B%E5%8D%81%E5%B9%B4%E6%8A%80%E6%9C%AF%E7%AA%81%E5%9B%B4/</guid>
      <description>An in-depth analysis of the mathematical essence of the vanishing gradient problem in recurrent neural networks, from Hochreiter&#39;s seminal 1991 discovery to the LSTM&#39;s constant error carousel mechanism, revealing how this problem, which plagued deep learning for thirty years, drove a complete technical evolution from gated recurrent units to the Transformer.</description>
    </item>
    <item>
      <title>LSTM Long Short-Term Memory Networks: Why This Gating Mechanism Dominated Sequence Modeling for Twenty Years</title>
      <link>https://answer.freetools.me/lstm%E9%95%BF%E7%9F%AD%E6%9C%9F%E8%AE%B0%E5%BF%86%E7%BD%91%E7%BB%9C%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E9%97%A8%E6%8E%A7%E6%9C%BA%E5%88%B6%E7%BB%9F%E6%B2%BB%E4%BA%86%E5%BA%8F%E5%88%97%E5%BB%BA%E6%A8%A1%E4%BA%8C%E5%8D%81%E5%B9%B4/</link>
      <pubDate>Thu, 12 Mar 2026 12:33:22 +0800</pubDate>
      <guid>https://answer.freetools.me/lstm%E9%95%BF%E7%9F%AD%E6%9C%9F%E8%AE%B0%E5%BF%86%E7%BD%91%E7%BB%9C%E4%B8%BA%E4%BB%80%E4%B9%88%E8%BF%99%E4%B8%AA%E9%97%A8%E6%8E%A7%E6%9C%BA%E5%88%B6%E7%BB%9F%E6%B2%BB%E4%BA%86%E5%BA%8F%E5%88%97%E5%BB%BA%E6%A8%A1%E4%BA%8C%E5%8D%81%E5%B9%B4/</guid>
      <description>An in-depth analysis of the LSTM&#39;s core principles, mathematical derivation, and gradient flow mechanism, with comparisons to the GRU and the Transformer: why the LSTM solves the RNN vanishing gradient problem, and in which scenarios the LSTM still outperforms the Transformer.</description>
    </item>
    <item>
      <title>Vanishing and Exploding Gradients: Why Could Deep Neural Networks Once Be Stacked Only Five Layers Deep?</title>
      <link>https://answer.freetools.me/%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E4%B8%8E%E6%A2%AF%E5%BA%A6%E7%88%86%E7%82%B8%E4%B8%BA%E4%BB%80%E4%B9%88%E6%B7%B1%E5%B1%82%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E6%9B%BE%E7%BB%8F%E5%8F%AA%E8%83%BD%E5%A0%86%E5%8F%A0%E4%BA%94%E5%B1%82/</link>
      <pubDate>Thu, 12 Mar 2026 00:23:55 +0800</pubDate>
      <guid>https://answer.freetools.me/%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E4%B8%8E%E6%A2%AF%E5%BA%A6%E7%88%86%E7%82%B8%E4%B8%BA%E4%BB%80%E4%B9%88%E6%B7%B1%E5%B1%82%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E6%9B%BE%E7%BB%8F%E5%8F%AA%E8%83%BD%E5%A0%86%E5%8F%A0%E4%BA%94%E5%B1%82/</guid>
      <description>From Hochreiter&#39;s 1991 discovery of the vanishing gradient problem to ResNet breaking the 1000-layer training barrier in 2015, deep learning&#39;s &amp;#34;depth&amp;#34; dilemma took twenty-five years of technical breakthroughs to resolve. This article analyzes the mathematical essence of the gradient problem, its historical evolution, and its solutions.</description>
    </item>
    <item>
      <title>Weight Initialization: Why a Single Line of Code Can Decide a Neural Network&#39;s Life or Death</title>
      <link>https://answer.freetools.me/%E6%9D%83%E9%87%8D%E5%88%9D%E5%A7%8B%E5%8C%96%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E8%A1%8C%E4%BB%A3%E7%A0%81%E8%83%BD%E5%86%B3%E5%AE%9A%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E7%94%9F%E6%AD%BB/</link>
      <pubDate>Wed, 11 Mar 2026 22:16:36 +0800</pubDate>
      <guid>https://answer.freetools.me/%E6%9D%83%E9%87%8D%E5%88%9D%E5%A7%8B%E5%8C%96%E4%B8%BA%E4%BB%80%E4%B9%88%E4%B8%80%E8%A1%8C%E4%BB%A3%E7%A0%81%E8%83%BD%E5%86%B3%E5%AE%9A%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E7%94%9F%E6%AD%BB/</guid>
      <description>From the failure of zero initialization to the mathematical derivations behind Xavier and He initialization: an in-depth analysis of the technical principles of neural network weight initialization, with a practical guide.</description>
    </item>
    <item>
      <title>Why Transformer Attention Divides by √dₖ: A Complete Mathematical Analysis from Variance to Vanishing Gradients</title>
      <link>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88transformer%E7%9A%84%E6%B3%A8%E6%84%8F%E5%8A%9B%E8%A6%81%E9%99%A4%E4%BB%A5d%E2%82%96%E4%BB%8E%E6%96%B9%E5%B7%AE%E5%88%B0%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E7%9A%84%E5%AE%8C%E6%95%B4%E6%95%B0%E5%AD%A6%E8%A7%A3%E6%9E%90/</link>
      <pubDate>Wed, 11 Mar 2026 19:16:29 +0800</pubDate>
      <guid>https://answer.freetools.me/%E4%B8%BA%E4%BB%80%E4%B9%88transformer%E7%9A%84%E6%B3%A8%E6%84%8F%E5%8A%9B%E8%A6%81%E9%99%A4%E4%BB%A5d%E2%82%96%E4%BB%8E%E6%96%B9%E5%B7%AE%E5%88%B0%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1%E7%9A%84%E5%AE%8C%E6%95%B4%E6%95%B0%E5%AD%A6%E8%A7%A3%E6%9E%90/</guid>
      <description>An in-depth analysis of the mathematics behind the √dₖ scaling factor in the Transformer&#39;s scaled dot-product attention: from dot-product variance growing with dimension and the vanishing gradients caused by Softmax saturation, to the deeper connection with Xavier initialization. Covers complete mathematical derivations, numerical examples, and a comparison with additive attention.</description>
    </item>
    <item>
      <title>Residual Connections: Why Can Transformers Stack a Hundred Layers Without Vanishing Gradients?</title>
      <link>https://answer.freetools.me/%E6%AE%8B%E5%B7%AE%E8%BF%9E%E6%8E%A5%E4%B8%BA%E4%BB%80%E4%B9%88-transformer-%E8%83%BD%E5%A0%86%E5%8F%A0%E5%88%B0%E7%99%BE%E5%B1%82%E8%80%8C%E4%B8%8D%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1/</link>
      <pubDate>Wed, 11 Mar 2026 12:51:06 +0800</pubDate>
      <guid>https://answer.freetools.me/%E6%AE%8B%E5%B7%AE%E8%BF%9E%E6%8E%A5%E4%B8%BA%E4%BB%80%E4%B9%88-transformer-%E8%83%BD%E5%A0%86%E5%8F%A0%E5%88%B0%E7%99%BE%E5%B1%82%E8%80%8C%E4%B8%8D%E6%A2%AF%E5%BA%A6%E6%B6%88%E5%A4%B1/</guid>
      <description>An in-depth analysis of the design principles of residual connections in the Transformer: from the vanishing gradient problem to the mathematical essence of identity mappings, and from the Pre-Norm vs. Post-Norm trade-off to DeepNet&#39;s 1000-layer training, revealing the core architectural component that makes modern large models possible.</description>
    </item>
  </channel>
</rss>
