Two Women Had a Business Meeting. AI Called It Childcare.

How biased data quietly rewrites gender roles — and what it means for the next generation of family tech.

The “Emily / Sophia” Problem

Every Tuesday and Thursday from 8:30 to 9:30 AM, I hop on a call with my co-founder, Emily. Boston ↔ England. Our calendar just says “Emily / Sophia.” It’s my personal Gmail and her consultancy address — back from before we had a company name, much less a domain.

My Morning Routine

When I tested our AI calendar analysis — meant to surface important family commitments outside standard work hours — it flagged our meeting. Great — it worked!

But it interpreted the event as the two of us meeting to discuss childcare. Two women on a recurring event? Must be child-rearing-related.

Not a cross-Atlantic startup stand-up. Not product decisions. Not fundraising. Just… who’s watching whose kids.

Yes, I could “prompt” it, explaining that actual babysitting events in my life look like “Emily babysitting” or “Emily at home.” But I shouldn’t have to teach a system that two women can be co-founders.

Would the same model assume “David/Michael” is childcare? We all know the answer.

The Haircut AI Refused to See

This weekend, my kids — Rex and Noa — had haircuts. Hold My Juice asked a perfect question to help me better prepare as a parent.

Hold My Juice Asks…

Sitting in the salon, I watched my son toss his long blond hair like Thor…

My son, Rex, 5

…and my daughter settle into her spa day — it’s hard to be 8!

My daughter, Noa, 8

I replied:

“Both kids are total hams. They love the wash and the blow dry.”

And what did the AI save?

The interpretation

It asked about both kids and then discarded Rex entirely. I even retried the whole thing and got the same result. Why? Because somewhere in its training diet, “liking the salon” is coded as a girl thing.

The system literally couldn’t see him. A boy enjoying self-expression didn’t fit its math.

The Same Bias — Now Closer to Home

Joy Buolamwini and Timnit Gebru’s Gender Shades study showed that commercial facial-analysis systems almost never misclassified light-skinned men, yet mislabeled dark-skinned women up to 35% of the time. When data skews, systems “see” some people and “mis-see” others.

Language models repeat the same pattern: nurses = she, doctors = he; worried parent = mom, fun parent = dad. In parenting tech, those shortcuts hit harder — because they don’t just shape code, they shape childhood.

Every time AI quietly decides who’s visible, it trains the next generation of systems — and the next generation of kids — to see the world the same way.

The Invention of Normal

Sociologist George Herbert Mead gave that inner voice a name: the generalized other, the sense of what “people like me” do. It shapes how we understand what’s normal, acceptable, or expected.

But Mead’s “other” wasn’t neutral. It was modeled on the social center of his time — white, male, able-bodied, middle class — and that lens quietly became the template for who counted as “normal.” Everyone else was defined in relation to it.

AI inherits the same pattern. It doesn’t evolve with society; it learns from the past and plays it back as truth. Historical norms become training data, and training data becomes tomorrow’s “intelligence.” The cycle freezes progress.

So when AI assumes two women are coordinating childcare or filters out a boy’s love of the salon, it’s not just an error — it’s that old generalized other, encoded in math.

How Bias Spreads — and What We’re Doing Differently

AI didn’t invent bias — it just automated our idea of normal. The real work is unlearning it.

When you train on decades of rigid examples, you get rigid results. When biased outputs feed back into the next generation of models, the bias compounds. When teams look alike, blind spots get shipped as features. And when algorithms optimize for what’s most common, they mistake frequency for truth.

At Hold My Juice, we start from a different assumption: bias is the default, not the exception. So we design against it from day one.

We use messy, real family data — not sanitized templates that erase difference. We stress-test for bias before launch. We turn user corrections into permanent test cases, so every “that’s not us” makes the system smarter. And wherever nuance matters, humans stay in the loop.
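To make that last point concrete, here’s a minimal sketch of what “user corrections become permanent test cases” can look like in practice. It’s a hypothetical Python/pytest example with made-up names (classify_event, RegressionCase), not our production code: each correction a family gives us becomes a fixture the next model version has to pass.

```python
# A minimal sketch of "user corrections become permanent test cases."
# All names here (classify_event, RegressionCase, REGRESSION_CASES) are
# illustrative assumptions, not Hold My Juice's actual code or labels.

from dataclasses import dataclass


@dataclass(frozen=True)
class RegressionCase:
    event_text: str       # the raw calendar or journal text a family corrected
    wrong_label: str      # what the model originally said
    expected_label: str   # what the family told us it actually is


# Every "that's not us" correction gets appended here and versioned with the
# code, so a future model update can't quietly reintroduce the same mistake.
REGRESSION_CASES = [
    RegressionCase(
        event_text="Emily / Sophia, Tue & Thu 8:30-9:30 AM, recurring",
        wrong_label="childcare_coordination",
        expected_label="work_meeting",
    ),
    RegressionCase(
        event_text="Both kids are total hams. They love the wash and the blow dry.",
        wrong_label="daughter_only_activity",
        expected_label="activity_for_both_kids",
    ),
]


def classify_event(text: str) -> str:
    """Stand-in for the production classifier call; swap in the real model here."""
    raise NotImplementedError


def test_past_corrections_still_hold():
    # Runs in CI before every release: each earlier correction must still hold.
    for case in REGRESSION_CASES:
        label = classify_event(case.event_text)
        assert label != case.wrong_label, f"regressed to old bias on: {case.event_text!r}"
        assert label == case.expected_label
```

The point of the pattern is that the correction outlives the conversation: once “Emily / Sophia is a work meeting” is written down as a test, a retrained model that drifts back toward the old assumption fails the build instead of reaching a family.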

What’s at Stake — and How We Show Up

Every “Emily/Sophia = childcare” quietly tells girls that women’s time belongs at home. Every “Rex doesn’t count” whispers to boys that beauty, play, and self-expression come with boundaries. Those aren’t harmless bugs — they’re mirrors shaping how our kids see themselves.

We’re not chasing perfection. We’re chasing progress you can measure: fewer blind spots, faster fixes, clearer accountability. AI will never be neutral — but it can be self-aware. Our job is to surface bias, own it, and keep shrinking it.

Two women, meeting twice a week, building something better.

Not a babysitting roster — a product that actually sees your family.


We’re turning these lessons into Hold My Juice — an AI family assistant that helps you stay on top of life with more joy, not judgment. It learns what actually matters in your family — the vibe, the chaos, the humor — and keeps you organized without flattening who you are.

If you want tech that feels more like another family member than another inbox, join the waitlist.

My kids, rocking their new ‘dos

And special thanks to Honey & Comb Salon in Boston — and to Kristen, for keeping my kids’ hair flips legendary.
