Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
How do LLMs use their depth?
Published: 10/27/2025

Thought Communication in Multiagent Collaboration
Published: 10/27/2025

Reasoning with Sampling: Base Models Outperform RL
Published: 10/26/2025

Continual Learning via Sparse Memory Finetuning
Published: 10/26/2025

Direct Preference Optimization with Unobserved Preference Heterogeneity: The Necessity of Ternary Preferences
Published: 10/24/2025

The Coverage Principle: How Pre-Training Enables Post-Training
Published: 10/24/2025

The Era of Real-World Human Interaction: RL from User Conversations
Published: 10/24/2025

Agent Learning via Early Experience
Published: 10/24/2025

Demystifying the Mechanisms Behind Emergent Exploration in Goal-conditioned RL
Published: 10/22/2025

Rewriting History: A Recipe for Interventional Analyses to Study Data Effects on Model Behavior
Published: 10/22/2025

A Definition of AGI
Published: 10/22/2025

Provably Learning from Language Feedback
Published: 10/21/2025

In-Context Learning for Pure Exploration
Published: 10/21/2025

On the Role of Preference Variance in Preference Optimization
Published: 10/20/2025

Training LLM Agents to Empower Humans
Published: 10/20/2025

Richard Sutton Declares LLMs a Dead End
Published: 10/20/2025

Demystifying Reinforcement Learning in Agentic Reasoning
Published: 10/19/2025

Emergent Coordination in Multi-Agent Language Models
Published: 10/19/2025

Learning-to-Measure: In-Context Active Feature Acquisition
Published: 10/19/2025

Andrej Karpathy's Insights: AGI, Intelligence, and Evolution
Published: 10/19/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
