How My Dad Prepared Me for AI Without Even Knowing It
Every generation faces a moment when a new technology forces us to ask: How will our kids understand the world? Not just consume information, but interpret it, question it, and resist the pull of easy certainty.
I was born in 1985, which means I’ve lived through several waves of anxiety about what was supposedly going to ruin children’s minds: first the Internet, then Wikipedia and fears that “kids won’t cite sources,” and now AI, which seems to resurrect every worry we’ve ever had about truth, ownership, and intellectual honesty. AI really does raise big questions about information and interpretation. But these aren’t new problems. And long before anyone had heard of ChatGPT, my dad, a historian, was quietly preparing me for this moment.
What My Historian Father Taught Me About Information
When I was in high school, Stephen Ambrose, an enormously popular historian, was accused of lifting passages from other writers without proper attribution. Doris Kearns Goodwin faced similar accusations soon after. Teachers panicked—if even the experts were falling short, what did that mean for the rest of us?
Our head of school asked my father to speak to the students. She didn't want moral panic; she wanted context.
My dad explained that none of this was new. People have been borrowing, adapting, and misattributing ideas since the beginning of recorded history. He loved quoting E.H. Carr: “A fact is like a sack—it won’t stand up till you’ve put something in it.” Truth requires effort: understanding context, identifying bias, interrogating sources. The real lesson of the scandals wasn’t about catching cheaters; it was about learning to participate responsibly in the long chain of human knowledge—something messy, human, and ongoing.
That framework turns out to be exactly what I want my kids to bring to AI today.
How AI Showed Up in Real Life With My Kids
Over Thanksgiving, AI didn’t show up as a distraction or a threat. It arrived in surprisingly human ways — extending curiosity, rescuing small moments, and amplifying creativity.
The Board Game Spinner That Didn’t Exist
When we opened a board game in our Airbnb and discovered that the spinner was missing, the meltdown felt inevitable. But the spinner’s picture was still printed in the instructions, so I took a photo and asked ChatGPT to “spin.” The AI gave dramatic, over-the-top play-by-play commentary with every result (even after the kids begged it to stop narrating), but it worked. The game continued, the mood lifted, and everyone laughed. In this case, AI didn’t replace real-world play. It rescued it.
A Pluto Question at the Top of the Mountain
Later that week, after a long hike, my son asked why Pluto isn’t a planet anymore. I grew up with nine planets and knew the headlines but not the details. So we asked ChatGPT. We learned about dwarf planets, orbital paths, and mass, but more importantly, the explanation turned into a conversation that lasted the entire walk down. AI didn’t shut down curiosity; it expanded it.
Nanobanana and the Pokémon That Saved a Travel Day
On our marathon travel day home from Mexico, the kind of day that includes delays, diversions, and landing in the wrong city, we leaned hard into imagination to get through it. Enter nanobanana, Google’s image-generation tool, which my kids mentioned casually as if every family spends travel days “doing nanobanana.”
They quickly discovered that if they described a character aloud, nanobanana would generate an image they could upload into a virtual Pokémon card app to create custom decks. The results ranged from outrageous to surprisingly thoughtful. One favorite was Poop-Your-Pants Pokémon, a frantic brown creature surrounded by cartoon stink lines with a special move called “Emergency Exit.” Another was Family Fusion Pokémon, created by uploading a photo of the four of us and asking nanobanana to merge us into a single neon-colored guardian with the attack “Teamwork Strike.”
Once the images were generated, they assigned stats, invented backstories, and built entire decks. Somewhere during this process, the day stopped feeling like a disaster and became something we knew we’d remember — not because it was perfect, but because we created something together.
Kids Don’t Blindly Trust AI — And That’s the Point
Halfway through that same travel day, we urgently needed a place to get breakfast in Austin (it's a long story). ChatGPT kept recommending spots that were all closed until after we had to leave. My daughter looked at the screen, frowned, and said, "Sometimes ChatGPT is totally not that smart."
We opened Google Maps instead and found a place in five minutes. Good old-fashioned search saved the day—but the moment that mattered was watching her recognize that one tool had failed and deciding what to try next. That little jolt of skepticism is exactly the literacy my father tried to teach us. It's the historian's instinct: not to accept information wholesale but to ask whether it makes sense, what assumptions it rests on, what might be missing, and how we would check it. She wasn't dazzled by AI or intimidated by it. She was evaluating, forming her own judgment. It's the same muscle my dad wanted us to build, and it's the muscle our kids will need most.
Parenting in an Age of Infinite Information
We don't need to lock AI away from kids any more than my father wanted to ban Wikipedia. The challenge isn't the existence of information; it's helping kids learn to make meaning from it.
That instinct—the willingness to look inside the sack and decide what belongs there—is what my father spent his career teaching. It's what I watched my kids practice over Thanksgiving, sometimes without even realizing it. A missing spinner, a demoted planet, an improvised Pokémon, a closed restaurant: small moments where they learned that truth isn't something you receive. It's something you build, one question at a time.