Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
SkillClaw: Collective Skill Evolution for Multi-User Agent Ecosystems
A deep-dive into SkillClaw, a framework where deployed AI agents log daily successes, failures, and workarounds; at night, a centralized Agentic Evolver reviews the data, tests updates in a validation suite, and patches a shared skill repository for all users. We explore practical examples—from Slack integration fixes to the SAM3 model—demonstrating how crowdsourced learning prevents repeated mistakes and accelerates human–AI collaboration in business automation.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
SPEAKER_01So, uh, I actually have to confess something to you right up front. For the last three days, I mean literally three days in a row, I have made the exact same formatting error in my spreadsheet software.
SPEAKER_00Oh no, three days.
SPEAKER_01Yeah. Every single morning I click the wrong column, the dates turn into absolute gibberish, and I have to spend, you know, five minutes manually undoing it. It's just that universal frustration of repeating a totally preventable mistake.
SPEAKER_00It's maddening, really, because I mean you know better, but the muscle memory just takes over.
SPEAKER_01Exactly. And uh it really got me thinking about how we fail to learn from our mistakes sometimes. Which actually brings us to our mission for this deep dive. We are exploring a fascinating new research paper on a framework called SkillClaw.
SPEAKER_00Yes, such a great concept.
SPEAKER_01It's this really uplifting system that allows AI agents to collectively learn from their errors. So, you know, no user ever has to start from scratch again.
SPEAKER_00It's basically aimed at solving this kind of uh amnesia that currently limits AI systems.
SPEAKER_01Okay, let's unpack this because I feel like a lot of people listening might push back here. Don't AI agents already know like practically everything?
SPEAKER_00Well, I mean, the base models, the massive neural networks underlying the AI, they are extremely capable. But when you deploy an AI agent to do a specific task, like uh executing a multi-step workflow on your computer, it relies on deployed skills.
SPEAKER_01Right. The specific instructions.
SPEAKER_00Exactly. And those deployed skills are mostly static. So if an agent hits a bug, it might, you know, trial and error its way to a fix for you in that specific session.
SPEAKER_01Okay.
SPEAKER_00But the very next user who asks for that same task, they face the exact same error all over again.
SPEAKER_01Wow. So it's like a massive company where a million employees are working super hard, but they absolutely refuse to share their notes with each other. Everyone is just constantly reinventing the wheel.
SPEAKER_00That's exactly what's happening. Yeah. So to fix this, uh, SkillClaw introduces this brilliant day and night cycle.
SPEAKER_01Okay, I love the sound of that. How does it work?
SPEAKER_00Well, during the day, as different people use their AI agents, the system aggregates what they call agent trajectories.
SPEAKER_01Meaning what exactly?
SPEAKER_00Basically, it logs every click, every failure, and every successful workaround across the whole network.
SPEAKER_01And then at night?
SPEAKER_00At night, a centralized system called the agentic evolver takes over. It reviews all those daily logs, identifies the patterns, and actually edits the shared skill repository.
SPEAKER_01Whoa, wait, hold on. You're saying the AI is essentially rewriting its own underlying instruction manual while we sleep.
SPEAKER_00Yep, that's exactly what it's doing.
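[Editor's note: the day-and-night cycle described here can be sketched in code. The paper's actual data format isn't shown in the episode, so the record fields and function names below are illustrative assumptions, not SkillClaw's real implementation.]

```python
from dataclasses import dataclass, field

# Hypothetical shape of one "agent trajectory" record logged during the day.
@dataclass
class Trajectory:
    skill_id: str          # which deployed skill the agent was using
    steps: list            # every action the agent took, in order
    succeeded: bool        # did the task ultimately complete?
    workaround: str = ""   # free-text note if the agent recovered from an error

# Daytime: agents across the network append their trajectories to a shared log.
daily_log: list[Trajectory] = []

def record(trajectory: Trajectory) -> None:
    daily_log.append(trajectory)

# Nighttime: group the day's logs by skill so the evolver can spot patterns,
# e.g. many agents hitting the same failure and applying the same workaround.
def group_by_skill(log: list[Trajectory]) -> dict[str, list[Trajectory]]:
    grouped: dict[str, list[Trajectory]] = {}
    for t in log:
        grouped.setdefault(t.skill_id, []).append(t)
    return grouped
```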
SPEAKER_01I have to be honest, leaving an AI totally unsupervised to change its own code sounds uh a little reckless. How does it not just break everything?
SPEAKER_00It's a valid concern. Yeah. But the key is that it doesn't just guess or blindly push updates. The evolver looks at those logs to see exactly how the daytime agents organically solved a problem.
SPEAKER_01Oh, I see.
SPEAKER_00Yeah. So it drafts a code update based on those proven workarounds. Then, crucially, before anyone wakes up, it rigorously tests that new code against a validation suite.
SPEAKER_01So it proves the new method works first.
SPEAKER_00Exactly. It only updates the shared library if the new code actually passes the tests.
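[Editor's note: the "test before publish" gate just described amounts to a simple rule: a drafted patch only replaces the shared skill if it passes every validation check. This is a minimal sketch of that logic; the function and parameter names are hypothetical.]

```python
# The evolver drafts `patched_skill` from the day's proven workarounds, then
# runs it through the validation suite before touching the shared repository.
def nightly_update(skill_repo: dict, skill_id: str,
                   patched_skill, validation_suite: list) -> bool:
    for check in validation_suite:
        if not check(patched_skill):
            return False              # any failure: keep the old skill untouched
    skill_repo[skill_id] = patched_skill  # all checks passed: publish for everyone
    return True
```

The key design point from the episode is the asymmetry: a failed patch is silently discarded and every user keeps the old behavior, while a passing patch is live for the whole network by morning.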
SPEAKER_01Okay, let's ground this a bit for everyone listening. What does this actually look like in practice?
SPEAKER_00Well, the paper includes a great example analyzing Slack integrations. Agents kept failing to summarize messages because of a bad API port configuration. Right. Think of an API port like a delivery driver repeatedly trying to drop off a package at the wrong door of an apartment building.
SPEAKER_01The digital equivalent of my spreadsheet formatting error.
SPEAKER_00Precisely. The agents kept hitting the wrong door, eventually figuring out the right one through trial and error. But the Nightly Evolver saw this pattern, updated the delivery manual for all future drivers, and by the next morning, that error was completely eliminated for every user.
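[Editor's note: the Slack example boils down to the "wrong door" pattern: try the configured port, fall back through candidates, and remember which one worked. A rough illustration, with made-up port numbers and an injectable probe so it can be tested without a live network:]

```python
import socket

def default_probe(host: str, port: int) -> bool:
    # Attempt a TCP connection; close it immediately if it succeeds.
    with socket.create_connection((host, port), timeout=2):
        return True

# Daytime trial-and-error: try the (mis)configured port first, then known
# alternatives. The nightly evolver would bake the working port into the skill
# so future agents never knock on the wrong door again.
def find_working_port(host: str, configured: int, fallbacks: list[int],
                      probe=default_probe) -> int:
    for port in [configured, *fallbacks]:
        try:
            if probe(host, port):
                return port
        except OSError:
            continue
    raise ConnectionError(f"no working port found for {host}")
```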
SPEAKER_01That is just incredible. And you know, uncovering where AI agents can make the most impact for your business is exactly what Embersilk does. Oh, absolutely. Yeah. If you need help with AI training, automation, integration, or software development, you really should check out Embersilk.com for your AI needs. They are fantastic at streamlining these exact kinds of workflows.
SPEAKER_00And you know, these updates get incredibly sophisticated. They also tested a coding AI model called SAM3.
SPEAKER_01Okay, what happened there?
SPEAKER_00Initially, if a file the AI needed to do its job was missing, the agent would just crash entirely.
SPEAKER_01Oh, totally unhelpful.
SPEAKER_00Right. But the evolved skill didn't just patch a typo, it actually taught the SAM3 model to intelligently search the workspace for nearby assets.
SPEAKER_01Wait, really? So it learned a better problem-solving strategy entirely.
SPEAKER_00Exactly. It's so uplifting when you think about human-machine collaboration going forward, when you're no longer fighting static tools, right?
SPEAKER_01You wake up and your software has actually adapted to the weird edge cases you ran into the day before.
SPEAKER_00Yes. We are building this incredibly optimistic future where our technology acts as a collaborative, constantly improving partner. It's ready to actively elevate human potential.
SPEAKER_01A future where our tools are always growing with us. It really is inspiring and it leaves you with something to ponder. If our AI assistants are perfectly crowdsourcing their mistakes and evolving while we sleep, what hidden inefficiencies in our own daily human routines might they soon be able to point out to us?
SPEAKER_00Oh, that's a great thought. Right.
SPEAKER_01Well, if you enjoyed this deep dive, please subscribe to the show. Hey, leave us a five star review if you can. It really does help get the word out. Thanks for tuning in.