Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
From Local News to GroundSource: AI That Predicts Floods 24 Hours Ahead
This episode explains how satellites miss localized weather and how GroundSource uses 20+ years of local journalism to train an AI that converts unstructured headlines into precise, actionable flood forecasts. Through a strict four-step prompt—classification, temporal reasoning, spatial precision, and location reconciliation—the system achieves 82% practical precision across millions of articles, enabling near-global forecasts up to 24 hours before a flood. We discuss implications for emergency planning, challenges of AI reliability, and what other unstructured human memories we could transform into data for humanity.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
SPEAKER_01: So picture this. It is a gorgeous, um, sunny morning. I decide today is the day I debut the brand new pristine white shoes.
SPEAKER_00: Oh no, I see where this is going.
SPEAKER_01: Right. I checked my weather app, clear skies, zero percent chance of rain, you know. I walk out the door and bam, I am instantly caught in this sudden, uh, highly localized downpour that seemed to literally just exist directly over my head.
SPEAKER_00: Yeah, the classic weather app betrayal.
SPEAKER_01: Exactly. My shoes were completely ruined. But uh my app failed because standard weather apps rely on satellites. And satellites, well, they have blind spots. So today we are taking an optimistic deep dive into how researchers are fixing those blind spots. Not with more satellites, but by feeding over 20 years of local news into an AI to create this brilliantly predictive tool called GroundSource.
SPEAKER_00: Right, because to accurately predict where water is going to pool tomorrow, you really need to know exactly what it did yesterday. The problem is we've kind of been stuck in what scientists call a data desert.
SPEAKER_01: A data desert. Okay, let's unpack this a bit.
SPEAKER_00: Well, satellites are incredible tools, obviously. But if there is heavy cloud cover, or you know, the satellite just isn't flying over that specific spot when a really quick-moving localized event happens, that data simply never gets recorded.
SPEAKER_01: So it is like trying to write a definitive global history book, but your only source material is a batch of blurry photos taken from space once a week.
SPEAKER_00: Exactly. You are going to miss a ton of the action on the ground.
SPEAKER_01: Which is a great reminder of just how powerful AI can be when we apply it creatively to solve problems. And speaking of AI, this podcast is sponsored by Embersilk. If you need help with AI training, automation, integration, or even software development, or if you are just figuring out where agents could make the most impact for your business or personal life, you definitely need to check out Embersilk.com for your AI needs.
SPEAKER_00: Yeah, and to fix that blurry history book problem, scientists realized they needed a completely different lens. So they turned to the world's ultimate unstructured memory bank, which is local journalism.
SPEAKER_01: Wait, local news, like local newspapers?
SPEAKER_00: Yeah, exactly. They fed the Gemini large language model over five million news articles from the year 2000 to the present, spanning more than 150 countries.
SPEAKER_01: Okay, here is where it gets really interesting, though, because I mean LLMs hallucinate, right? If a journalist writes something like, uh, the team was flooded with fan mail, isn't Gemini going to accidentally map a catastrophic water event right over a sports stadium? How do they actually trust this data?
SPEAKER_00: That was definitely their biggest hurdle. And they solved it by forcing the AI through a very strict, um, four-step prompt.
SPEAKER_01: Oh, a four-step prompt. Okay.
SPEAKER_00: Right. So first is classification. That means distinguishing between an actual past physical event and a metaphor, like your fan mail example. Next is temporal reasoning.
SPEAKER_01: Like uh figuring out the time?
SPEAKER_00: Exactly. Anchoring relative phrases like "last Tuesday" to the article's actual publication date to get the exact timing. Third is spatial precision, identifying the really granular street or neighborhood affected. And finally, location reconciliation.
SPEAKER_01: Wait, what does location reconciliation actually mean in this context?
SPEAKER_00: It means matching those specific neighborhood names to precise geographic boundaries on a digital map. Data scientists call these map polygons.
SPEAKER_01: Ah, okay, map polygons.
SPEAKER_00: Yeah. By forcing the AI to extract the when and where as separate, strictly formatted steps, they sort of strip away its tendency to get creative.
SPEAKER_01: So it turns the AI from a storyteller into like a strict data entry clerk.
SPEAKER_00: Exactly, a very rigid clerk, which allowed them to achieve an incredible 82% practical precision rate.
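The four-step pipeline described above can be sketched in code. This is a hypothetical illustration, not the researchers' actual prompt or schema: the field names, the prompt wording, and the helper functions are all assumptions. The point it demonstrates is the one from the conversation, that forcing the model to answer each step as a separate, strictly formatted field, and then validating that output, strips away the model's tendency to get creative.

```python
# Hypothetical sketch of the four-step extraction described in the episode.
# Prompt wording, field names, and helpers are illustrative assumptions.
import json
from datetime import date, timedelta

EXTRACTION_PROMPT = """You are a strict data-entry clerk. For the article below,
answer each numbered step as a JSON field. Do not add commentary.
1. classification: "physical_flood" only if a real past flood occurred;
   "metaphor_or_other" for figurative uses like "flooded with fan mail".
2. event_date: resolve relative phrases ("last Tuesday") against the
   publication date given; ISO format YYYY-MM-DD.
3. location_text: the most granular place named (street or neighborhood).
4. polygon_id: the map polygon the location reconciles to, or null.
Article (published {pub_date}): {article}"""

def resolve_relative_date(phrase: str, pub_date: date) -> date:
    """Temporal reasoning (step 2): anchor a relative phrase to the
    article's publication date. Handles only 'last <weekday>' here."""
    weekdays = ["monday", "tuesday", "wednesday", "thursday",
                "friday", "saturday", "sunday"]
    words = phrase.lower().split()
    if len(words) == 2 and words[0] == "last" and words[1] in weekdays:
        delta = (pub_date.weekday() - weekdays.index(words[1])) % 7
        return pub_date - timedelta(days=delta or 7)  # strictly in the past
    raise ValueError(f"unsupported phrase: {phrase}")

def validate_record(raw: str) -> dict:
    """Reject anything that is not a complete, strictly typed
    four-field record (the 'rigid clerk' check)."""
    rec = json.loads(raw)
    assert rec["classification"] in {"physical_flood", "metaphor_or_other"}
    date.fromisoformat(rec["event_date"])  # raises if malformed
    assert isinstance(rec["location_text"], str) and rec["location_text"]
    assert "polygon_id" in rec
    return rec
```

The design choice the episode highlights is that validation like this is cheap and mechanical, so millions of articles can be filtered for malformed or metaphorical extractions without any human in the loop.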
SPEAKER_01: Wow. 82% precision on millions of old articles. So they aren't just saying, hey, find the floods. They are methodically extracting hard structured data from millions of fragmented local stories. Which brings us back to my ruined shoes.
SPEAKER_00: Right, the shoes.
SPEAKER_01: Yeah. How does Google actually apply that massive new data set to stop people from getting caught in the rain tomorrow?
SPEAKER_00: Well, by using GroundSource to extract 2.6 million historical records, scientists suddenly have this massive new baseline of how water behaves in hyperlocal areas.
SPEAKER_01: Oh wow, 2.6 million.
SPEAKER_00: Yeah, it is huge. And Google's Flood Hub uses that historical baseline to provide near-global urban forecasts up to 24 hours before a water event even happens.
SPEAKER_01: So what does this all mean for you listening? It is a massive practical shift. Going from being totally blind in these localized areas to having a full 24-hour head start completely changes how emergency services operate. Families can prepare, cities can plan, and really no one is caught off guard.
SPEAKER_00: It is incredibly inspiring. It shifts our entire approach from reactive to proactive, and it leaves us with a really compelling question to ponder.
SPEAKER_01: I love a good question. Let's hear it.
SPEAKER_00: If we can use AI to turn decades of old scattered news into a life-saving, predictive map for water, what other unstructured human memories could we transform into data for the progress of humanity?
SPEAKER_01: What an amazing thought to leave on. Think about that next time you check your weather app. If you enjoyed this podcast, please subscribe to the show. Hey, leave us a five-star review if you can. It really does help get the word out. Thanks for tuning in.