Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726

About this episode

Today, we're joined by Maohao Shen, a PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning, enabling self-reflection, self-correction, and the exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens (continue, reflect, and explore) to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori’s two-stage training process: format tuning, which teaches the model to understand and use the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori’s performance, how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research. The complete show notes for this episode can be found at https://twimlai.com/go/726.
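
To make the COAT idea concrete, here is a minimal Python sketch of the decoding loop it implies: before each reasoning step, the model emits one of the three special action tokens to decide its next move. The token strings, the `DummyModel` stub, and its interface (`next_action`, `generate_step`, `is_done`) are illustrative assumptions for this sketch, not Satori’s actual implementation.

```python
import random

# The three COAT meta-actions described in the episode. The exact token
# strings are placeholders, not the ones used by Satori.
CONTINUE = "<|continue|>"  # extend the current reasoning step
REFLECT = "<|reflect|>"    # pause and verify the steps so far
EXPLORE = "<|explore|>"    # abandon this line of attack and try another
ACTIONS = [CONTINUE, REFLECT, EXPLORE]

class DummyModel:
    """Stand-in for a format-tuned LLM; this interface is hypothetical."""

    def next_action(self, trajectory: str) -> str:
        # A real model would score the three action tokens autoregressively;
        # here we just pick one at random so the sketch runs.
        return random.choice(ACTIONS)

    def generate_step(self, trajectory: str) -> str:
        return " ...one reasoning step... "

    def is_done(self, trajectory: str) -> bool:
        return trajectory.count("reasoning step") >= 4

def coat_rollout(model: DummyModel, prompt: str, max_steps: int = 32) -> str:
    """Autoregressive search: the model chooses continue/reflect/explore
    before each step, so self-correction happens within a single generation
    rather than via an external search procedure."""
    trajectory = prompt
    for _ in range(max_steps):
        action = model.next_action(trajectory)
        trajectory += action + model.generate_step(trajectory)
        if model.is_done(trajectory):
            break
    return trajectory

print(coat_rollout(DummyModel(), "Q: 17 * 24 = ?"))
```

The point of the sketch is the control flow: because the action tokens are ordinary vocabulary items, reflection and exploration are learned behaviors of a single autoregressive pass, which is what the two-stage training (format tuning, then RL) optimizes.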
