From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731

About this episode

Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, explains why RL offers a more robust alternative to prompting, and describes how it can improve multi-step tool use. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies Bespoke Labs has used, and the company's open-source libraries, such as Curator. Finally, we touch on the MiniCheck model for hallucination detection and MiniChart for chart-based QA. The complete show notes for this episode can be found at https://twimlai.com/go/731.
