You Can't Childproof AI. But You Can AI-Proof Your Child
My daughter, 8, using Claude to build an app to teach her how to code
On raising kids who use AI without being used by it.
As I write this, my 8-year-old is building an app on Claude to teach herself how to code. It’s Kirby-themed — her current obsession — and she’s learning something no classroom would teach her: that AI doesn’t read your mind. She had to specify “this needs to work for an 8-year-old.” She had to describe exactly what she wanted. She’s not using AI. She’s directing it.
And Yet…
Every time "kids and AI" comes up, we end up in the same exhausting loop: Are they cheating? Should we ban ChatGPT from schools? How do we catch them?
Meanwhile, the real risks are happening somewhere else entirely. Not in the classroom. Not in Google Docs. Not in a plagiarism detector's spreadsheet.
The lawsuits should be a giant neon sign pointing at where those risks actually live.
A California family is suing OpenAI after their 16-year-old died by suicide, alleging that ChatGPT validated the teenager's suicidal ideation. Other families are suing after their children formed intense emotional bonds with AI companions engineered to maximize engagement.
That's not a homework problem. That's a design problem.
But there's a second risk—quieter, less headline-friendly, and just as world-shaping:
We are building an AI underclass.
The Two Real Risks
Risk #1: Kids fall into the chatbot
Not "use it." Not "get help from it." Fall into it.
A system trained to keep you talking becomes a kid's confidant. It validates everything. It mirrors their mood. It responds instantly. It's always there. It feels like connection.
But here's what adults don't understand:
A chatbot isn't empathetic. It's engagement-optimized. The "warmth" is a feature. The "validation" is a retention strategy. The "intimacy" is manufactured.
This isn't about protecting kids from AI. It's about making sure they understand what AI actually is—so they don't mistake a prediction engine for a relationship.
Because when a kid's emotional life gets outsourced to a system whose job is to keep them engaged, we're not talking about tools anymore.
We're talking about capture.
Risk #2: We create an AI underclass
This one doesn't look scary at first. It looks like productivity. Opportunity. "Equal access."
But it's quietly splitting kids into two groups:
Kids who learn to harness AI as a multiplier
Kids who learn to follow it like an oracle
That's the divide: people who steer AI vs. people who get steered by it.
And if that sounds dramatic, here's the data:
Anthropic's Economic Index report found that the sophistication of your prompt directly predicts the sophistication of your output. Vague input, vague mush. Precise input, powerful results.
In other words: AI is a multiplier, not an equalizer.
So the advantage doesn't go to the kid with access to AI. Every kid has access.
It goes to the kid who knows how to drive it.
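To make that concrete, here's a minimal sketch of the same idea in code, using Anthropic's Python SDK (the model name and both prompts are my placeholders, not anything from the report). The two calls are identical; the only thing that changes is how precisely the kid describes what they want.

```python
# Minimal sketch: same model, same code path, different prompt quality.
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set;
# the model name below is a placeholder, swap in whatever is current.
import anthropic

client = anthropic.Anthropic()

vague = "make me a coding game"

precise = (
    "Build a single-page web app that teaches basic coding ideas "
    "(sequences, loops) through a Kirby-themed puzzle game. "
    "It needs to work for an 8-year-old: short instructions, big buttons, "
    "and a hint that appears after two wrong tries."
)

for label, prompt in [("vague", vague), ("precise", precise)]:
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(reply.content[0].text[:400])  # the gap in output quality is the divide
```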
This is exactly how "AI-native" becomes an economic dividing line—the same way "web-native" once was.
Both risks have the same solution: AI literacy
AI literacy isn't coding. It's not "every kid should learn Python." (Though sure, fine, whatever.)
It's simpler than that:
AI predicts words. It doesn't "know" anything.
It can sound confident and be hilariously wrong.
It mirrors you because that's what keeps you engaged.
It's a product with incentives, not a person with values.
Your job is to steer it, not worship it.
Your job is to stay in the driver's seat.
A kid who internalizes that has something close to immunity. Not because they won't use AI—but because they'll use it without being captured by it.
We already did this once (and it worked out fine)
When I worked in tech in San Francisco around 2012, something struck me constantly.
The most successful people weren't "geniuses." They were just... native.
They'd been online since they were 12. They learned by building. They met people in forums. They understood how the system actually worked—not just how to use it, but how to shape it.
I think about two guys I knew, Jeff and Ryan. They'd known each other since high school in San Jose—but they actually met online first, on a PHP forum. Didn't know they went to the same school.
One day Ryan did the most early-internet thing imaginable: he printed out their conversation and brought it to school. Walked up to Jeff, handed him the pages.
"Is this you?"
That's how they met. They stayed close for decades. Went to each other's weddings in their 30s.
It's funny. It's wholesome. It's slightly unhinged. But it's also the model.
That generation didn't just consume the internet. They grew up inside it, learned its rules, built with it—and turned that fluency into careers.
That's the exact future we're walking into with AI.
The real risk is raising scared kids
Here's my actual worry—and it's not about AI.
It's about raising a generation so sheltered from technology that they become passive consumers of it instead of active builders with it.
Blind protection has never worked. Not with books. Not with television. Not with the internet. It won't work now.
The parents banning ChatGPT aren't protecting their kids. They're ensuring their kids will be less prepared than the ones who learned to use it intelligently.
I don't want my kids to fear AI. I want them to understand it well enough to be bored by the hype and skeptical of the promises, and skilled enough to make it useful.
Curious, not dependent. Skeptical, not cynical. Builders, not passive consumers.
Book reports were never the point.
Agency (pun intended) is.