Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
Episodes
1923 episodes
GPT 5.5 and the Agentic AI Leap: From Babysitters to Co-Scientists
In this episode we unpack OpenAI's GPT-5.5, an agentic AI that plans, uses tools, runs its own code, and self-corrects until the job is done. We explore how this leap reshapes workflows in coding, data analysis, and scientific discovery — with ...
Workspace Agents: OpenAI’s Digital Nervous System for Your Business
A deep dive into OpenAI’s April 2026 announcements about workspace agents in ChatGPT—no-code, memory-enabled agents that run multi-step workflows across your apps and services, even after you close your laptop. We unpack how Codex translates pl...
ChatGPT Images 2.0: The New Era of Strategic Design
OpenAI’s announcement introduces ChatGPT Images 2.0, a sophisticated visual generation model designed to function as a strategic design system rather than a simple art tool. This updated version features enhanced precision ...
Hyperagents: The Self-Improving AI That Rewrites Its Own Learning
Dive into hyperagents—AI that can rewrite its own learning process by merging problem solving with meta-improvement into one editable program. Learn how they guard against self-corruption with persistent memory, how cross-domain transfer works,...
Move 37 and the AI Creativity Revolution
From a baffling early-game move that shocked pros to a broader reckoning with how AI reshapes strategy and science, this episode dives into the 2016 Lee Sedol–AlphaGo match. We unpack move 37, its field-shaping genius, and how AlphaGo’s unconve...
Claude Design and the Speed of AI UI
We dive into Claude Design, powered by Opus 4.7, to see how it serves as a true collaborative partner that turns napkin sketches into interactive prototypes and production-ready code. Learn how a built-in ‘your brand’ system auto-syncs typograp...
The Hutter Prize Challenge
We unpack the €500,000 Hutter Prize, which challenges researchers to losslessly compress 1 GB of English Wikipedia (the enwik9 file). Rather than counting raw facts, compression serves as a verifiable proxy for artificial general intelligence by probing an AI...
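The prize's core idea is that better prediction means better compression. As a minimal sketch (using the standard zlib codec, not a prize-grade compressor), here is how a lossless compression ratio is measured and verified:

```python
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Return compressed size / original size for a lossless codec."""
    compressed = zlib.compress(data, level)
    # Lossless means decompression must reproduce the input exactly --
    # the same check the Hutter Prize applies to submissions.
    assert zlib.decompress(compressed) == data
    return len(compressed) / len(data)

# Structured, predictable text compresses far better than random noise,
# which is why compression tracks how well a model "understands" the data.
text = b"the quick brown fox jumps over the lazy dog " * 100
print(f"ratio: {compression_ratio(text):.3f}")
```

Prize entries replace zlib's generic dictionary matching with learned models of English, which is where the link to intelligence comes in.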
GPT Rosalind: AI Architecting the Future of Drug Discovery
We explore OpenAI's April 2026 release of GPT Rosalind, a life-sciences-focused AI that links genomics, protein structures, and metabolic pathways via a Codex plugin to accelerate discovery. The system runs multi-omics analyses in parallel, handles ...
Literal Logic to Autonomous Co-Workers: Claude Opus 4.7
We dive into Anthropic's Claude Opus 4.7—the shift from reactive chat to a truly autonomous co‑worker. Learn how adaptive thinking and an 'extra high' effort mode drive long‑horizon planning, self‑critique, and test‑before‑code workflows, plus ...
Google DeepMind Gemini ER 1.6: AI for Real-World Robotics
We unpack DeepMind's Gemini ER 1.6, an embodied reasoning model that grounds language in physical space with precise pointing, multi-camera success checks, and agentic action. See how its 'frontal lobe' plans tools and tasks, writes on-the-fly ...
Automating Work with Claude Code Routines
A look at Claude Code Routines—cloud-powered, trigger-driven automation that can diagnose issues, draft fixes, and prepare PRs without you even opening your laptop. We cover the wake-ups: scheduled runs, GitHub events, and secure API triggers w...
Autonomous AI Agents in Research: Codex, Claude Code, and the Future of the Workflow
In this Intellectually Curious deep dive, we unpack a VoxDev webinar featuring Aniket Panjwani on how autonomous AI agents are transforming research workflows. From iterative loops and skill-based wrappers to Git-backed safety and disciplined p...
SkillClaw: Collective Skill Evolution for Multi-User Agent Ecosystems
A deep-dive into SkillClaw, a framework where deployed AI agents log daily successes, failures, and workarounds; at night, a centralized Agentic Evolver reviews the data, tests updates in a validation suite, and patches a shared skill repositor...
Claude Code Ultraplan Moves Terminal Work to the Cloud
Dive into Ultraplan, Anthropic's cloud-backed workflow that offloads heavy compute from your workstation to a dedicated web session. We explore how you trigger it from the CLI, the GitHub-only requirement, and why it runs on Anthropic's cloud. ...
Claude Managed Agents: From Chat to Cloud-Hosted Teams
A deep dive into the April 2026 launch of Claude Managed Agents, a move from standalone models to a managed, stateful runtime that handles sandboxing, memory, and multi-agent orchestration. We examine real-world deployments (Rakuten, Asana, Not...
Meta Muse Spark: Your Personal Superintelligence
We dive into Meta's Muse Spark, a natively multimodal AI that maps your world in real time, reasons with parallel internal agents, and updates you with actionable guidance—from fixing a screeching espresso machine to optimizing meals and workou...
Taming Intermittent Demand Forecasting With AI
A Turkish automotive spare-parts case study shows how intermittent and lumpy demand can be tamed with AI. We compare the classic Croston's method, built on exponential smoothing, to an ensemble of models, including RNNs, and a linear-regression meta...
SSD Unleashed: How Simple Self-Distillation Turns AI Guesses into Mastery
A deep dive into Simple Self-Distillation (SSD): how large language models can improve by training on their own unverified outputs with zero external supervision. We unpack the Precision Exploration Conflict, the roles of locks (need for precis...
NLBA1 and the Battery Truth: How a Romanian Gadget Rescues Dead Laptops
We unpack the amazing NLBA1 diagnostic tool—how it bypasses the OS to read a battery’s raw chemistry via SMBus/I2C, and how it performs a rigorous recalibration under stress to prove safety before lifting permanent fault locks. We also explore ...
Andrej Karpathy's Self-Organizing, AI-Powered Knowledge Base
Explore Andrej Karpathy's blueprint for turning a messy pile of notes, articles, and data into a self-organizing, AI-powered knowledge base. Start by dumping raw documents into a single folder, clip content into Markdown, and let an LLM synthes...
The LLM is the Computer
A deep dive into Percepta's breakthrough: shrinking memory bottlenecks with 2D attention, enabling a native virtual computer inside a language model. We unpack convex-hull memory queries, a WebAssembly interpreter running in vanilla PyTorch wei...
Generative Engine Optimization: The AI-Powered Rewrite of Discovery
We dissect the shift from traditional SEO to generative engine optimization (GEO). With zero-click searches surging, visibility now hinges on information density, machine-readable schemas, and credible human validation. Learn why structured dat...
Gaia20ehk: A Planetary Collision That Shapes New Worlds
A real-time cosmic collision 11,000 light-years away unfolds as two giant planets in the Gaia20ehk system spiral inward, grazing in 2016 and colliding head-on in 2021. Archival data decoded at the University of Washington reveal a glowing debri...
The Late Paleozoic Oxygen Pulse
We pull from geochemical models and paleobiology studies to explore the late Paleozoic oxygen surge—when atmospheric oxygen climbed to an estimated 30–35 percent and giant insects and vast forests thrived. Learn how dense air made flight easier and allowed...
TurboQuant: The 3-Bit Breakthrough Making AI Faster and Smaller
Google Research's TurboQuant uses polar quant and Quantized Johnson-Lindenstrauss to shrink the KV cache to roughly 3 bits per value, delivering up to 8x speedups and sixfold memory savings on high-end GPUs without sacrificing accuracy. We unpa...
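TurboQuant itself relies on random rotations and near-optimal scalar quantizers; as a much simpler illustration of what "about 3 bits per value" means, here is plain uniform scalar quantization of a float vector into 8 levels (a toy sketch, not TurboQuant's algorithm):

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int = 3):
    """Uniform scalar quantization: map floats to 2**bits integer codes."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8)  # 3-bit codes: 0..7
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate floats from integer codes."""
    return codes * scale + lo

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float32)
codes, lo, scale = quantize_uniform(x, bits=3)
x_hat = dequantize(codes, lo, scale)
# Reconstruction error is bounded by half a quantization step.
print("max abs error:", np.abs(x - x_hat).max())
```

The memory math is the point: 3-bit codes replace 16-bit floats, roughly the sixfold KV-cache saving the episode describes, while the rounding error stays bounded by half a step.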