AI, the Protestant Ethic, and the Spirit of "More"

In 1905, Max Weber described the trap we're building with AI. He also showed us the exit.

In The Protestant Ethic and the Spirit of Capitalism, Weber traced how Calvinist theology accidentally produced the spirit of modern capitalism. The logic was elegant: If salvation was predetermined, the only way to glimpse God's favor was through worldly success. Work became a "calling" (Beruf), simultaneously a job and a religious vocation. Idleness became sin. Productivity became proof of grace.

But Weber saw where this led. Once the religious scaffolding fell away, the ethic would remain, a ghost in the machine. He called it the "iron cage": a system of rational efficiency that would trap us, demanding endless optimization while offering no meaning in return.

Here's what's less often quoted: Weber believed we could recognize the cage. And once recognized, we could choose differently.

We're at that moment now.

The AI That Never Rests (Until We Tell It To)

Last week, my scheduling AI helpfully reorganized my meetings to "maximize efficiency." It wasn't wrong on the math—I could fit in three more calls if I just moved things around.

What it didn't know: my daughter's dad has been traveling for two weeks. She's been clingy, tearful at drop-off, asking when he's coming home. The meetings could be rescheduled. What I probably needed was to call it a day and take a few hours off.

But the AI saw my calendar, not my life. It saw gaps to fill, not space I desperately needed. It optimized for throughput, not for the fact that I was running on fumes and my kid needed me present, not just physically there.

That's when I realized: AI isn't neutral about how we live. It has inherited values. And right now, those values look exactly like Weber's Protestant ethic.

Modern AI tools are astonishing. They summarize, optimize, schedule, draft, analyze. They erase friction. They compress time. They promise scale. But buried beneath the magic is a very old assumption:

That more output is always better. That idle time is suspect. That you are what you produce.

The good news? These are design choices, not laws of nature.

We Built the Cage. We Can Build the Door.

Weber warned that the Protestant ethic would outlive its religious origins, leaving us in systems of rational calculation with no higher purpose attached. Labor would become its own justification. The means would become the end.

He wrote: "The Puritan wanted to work in a calling; we are forced to do so."

But here's what's different now: We're the ones building the systems. Every productivity tool, every AI assistant, every algorithm. They're not inevitable. They're choices we made. Values we coded. Assumptions we embedded.

And that means we can make different choices.

Right now, most AI optimizes for one thing: maximum output from minimum input. That's the efficiency equation. And it's not wrong for manufacturing widgets or routing packages.

But you're not a widget. Your life isn't a package route.

When efficiency becomes the only measure, everything gets flattened into the same question: "Is this the fastest way to get from A to B?"

But sometimes the point isn't to get from A to B. Sometimes the point is C: the thing you didn't know you needed until you found it by accident. The conversation that started as a quick check-in and became the most important talk you've had in months. The project that "took too long" but taught you something that changed everything.

Efficiency optimizes for the knowable. Life happens in the unknowable.

Look at your phone. Look at your to-do list app. Look at the AI assistant that just suggested you "might have time" to prep tomorrow's presentation at 9 PM. Technically correct: your six-year-old is asleep, and the algorithm puts no value on the 30 minutes you were going to spend doing absolutely nothing.

Current AI reinforces the iron cage. But it doesn't have to.

Every productivity tool is built on the same assumptions: efficiency is virtuous, rest must be earned, and the goal is always more. More meetings fit into the day. More emails processed. More tasks completed. More. More. More.

But what if we trained AI on different data? Different questions? Different definitions of success?

What We Could Optimize For Instead

I'm constantly impressed by what AI can do. And increasingly excited about what it could do if we pointed it in a different direction.

We're not trapped. We're just optimizing for the wrong things.

If efficiency is the highest goal, AI will always push us to:

  • Fill every open hour

  • Treat every gap as a problem

  • Optimize every moment

  • View rest as something you earn through sufficient productivity

But if humanity is the goal, AI could help us:

  • Protect time for what matters most

  • Notice when we're running on fumes

  • Create space for the unplannable moments that become our best memories

  • Actually live by the values we claim to have
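To make that difference concrete, here's a toy sketch in Python. Everything in it is hypothetical (the `Block` type, the `protected` list, and the `respect_values` function are all invented for illustration); it's a cartoon of the design choice, not anyone's actual product code.

```python
from dataclasses import dataclass

@dataclass
class Block:
    start: int   # hour of day, 24-hour clock
    end: int
    label: str

# A hypothetical day: two meetings, plus one block the owner has
# marked as protected. The protected block is a value, not an event.
day = [Block(9, 10, "standup"), Block(14, 15, "1:1")]
protected = [Block(17, 19, "dinner with the kids")]

def free_gaps(events, day_start=8, day_end=20):
    """Return the open (start, end) intervals between scheduled events."""
    gaps, cursor = [], day_start
    for b in sorted(events, key=lambda b: b.start):
        if b.start > cursor:
            gaps.append((cursor, b.start))
        cursor = max(cursor, b.end)
    if cursor < day_end:
        gaps.append((cursor, day_end))
    return gaps

# Objective 1: efficiency-first. Every gap is a "problem" to fill.
def fill_gaps(events):
    return [f"Gap at {g}. Schedule more work." for g in free_gaps(events)]

# Objective 2: values-first. Gaps that overlap protected time are held
# open; the rest come back as a question, not an assignment.
def respect_values(events, protected):
    suggestions = []
    for g in free_gaps(events):
        hit = next((p for p in protected if p.start < g[1] and g[0] < p.end), None)
        if hit:
            suggestions.append(f"Holding {g} open for '{hit.label}'.")
        else:
            suggestions.append(f"You have {g} free. What do you want it to be?")
    return suggestions

print(fill_gaps(day))                  # fills all three gaps with work
print(respect_values(day, protected))  # protects the evening, asks about the rest
```

Same calendar, same gaps, nearly the same code. The only thing that changed is what the objective treats as a problem.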

Some Inefficiencies Are the Point

The most meaningful parts of life are famously inefficient:

  • Kids interrupting you with something unexpectedly profound

  • Long walking conversations with your co-founder through Copenhagen (yes, Emily)

  • A wrong turn that leads to the best meal of your life

  • A catastrophic kitchen failure that becomes family legend

  • The afternoon where you accomplish nothing and somehow figure out everything

These aren't bugs in the system of life. They're features we should design for.

Weber understood this tension. The tragedy of the Protestant ethic wasn't hard work. It was work stripped of choice, joy, and humanity. Work that demanded everything and gave back nothing but the permission to keep working.

When we optimize for efficiency above all else, we're making a specific claim: that the value of time is measured by what it produces. A meeting produces a decision. An email produces a response. A task produces completion.

But what does breakfast with your daughter produce? What's the ROI on a walk where you're just... thinking? How do you measure the output of doing nothing?

You can't. And that's precisely why efficiency-first AI will always devalue these moments.

Current AI sees your calendar and thinks: "Gap. Problem. Solution: fill it."

But what if the gap isn't a problem? What if it's breathing room? What if it's the space where life actually happens?

What if AI asked: "You have an hour free. What do you want it to be?"

Not "what could it be used for." What do you want it to be.

Weber understood the danger of systems that crowd out meaning. But he also understood that humans are meaning-making creatures. We don't have to accept the values our tools inherited. We can give them new ones.

A Different Calling

So here's the question: What if AI didn't start with "How can we optimize your time?"

What if it asked:

  • What do you actually want more of?

  • What do you want less of?

  • What matters enough to be inefficient about?

What if it noticed moments of human inefficiency and protected them instead of fixing them?

Imagine an AI that says:

  • "Your afternoon is already shot. Want to take it off and be with your kids?"

  • "You said family matters. Want ideas for actually bringing them into your work world?"

  • "Your flight got diverted to Austin. I found a great restaurant. Go have dinner and call it a night."

That's not inefficiency. That's intelligence recognizing what intelligence is for.

This isn't science fiction. This is just... different design choices. Different questions. Different values in the training data.

Hold My Juice: AI That Asks Different Questions

This is why we're building Hold My Juice.

It's AI for families who refuse to optimize their humanity away. We're not trying to help you fit more into your day. We're trying to help you build a day that fits your life.

Our AI doesn't start with "How do we maximize your output?" It starts with "What do you want your life to look like?" And then it builds systems that protect that vision instead of eroding it.

We're not anti-technology. We're pro-human.

We're de-moralizing productivity. Decoupling worth from output. Letting meaning—not efficiency—be the thing that gets optimized.

Weber described the spirit that helped build capitalism. We're building AI that operates by a different spirit entirely. One that recognizes the cage, names it, and then helps you find the door.

The Most Intelligent Thing a System Can Do

Sometimes the smartest thing AI can do is say:

  • You're fried. Close the laptop.

  • You need space to think. Take the walk.

  • This situation just sucks, and that's okay.

  • Your kids are only this age once. Go.

That's not a limitation of intelligence. It's intelligence that knows what it's in service of.

The Protestant ethic turned work into a calling. AI can help us find a different one—one where being fully human, not fully productive, is the point.

The cage is real. But so is the door.

We're building it.
