Big Ideas

Episode 1: You Get to Die (and Other Rights AI Will Never Have)

Published on December 18, 2025 · 4 min read

Introducing Enkrypt AI's new series on "Rights, Wanting, and Why AI Can't Tell You How to Live" - Merritt Baer, CSO, Enkrypt AI

At BSides recently, I found myself saying something that made a few people laugh and a few more people look uncomfortable—which is usually how I know I’ve hit on a truth worth staying with:

“Your AI usage matters in terms of data and how you manage it. But AI can’t tell you how to live. And unlike AI, you get to die.”

I’m thinking of Max Weber’s line (paraphrased), “science cannot tell us how to live.” Now that AI has reshaped the nature of data, security practices must change. And that leads security leaders to bigger, broader, and more important questions. Questions about what we want from security, and, even more expansively, what world we want to live in. I spend a lot of time thinking through how security behaviors change—how they must change—in an age where AI exists. The nature of data is changing. The nature of identity is changing.

And while CISOs might not be “romantic,” we are, for better or worse, people who care about how things fit together. Supply chain security? That’s about where your chips are from. Which is about geopolitics. Which is about war. Which is about energy prices. Which is—one way or another—about people.

Humans. The ones who get old, who build code, who design kitchens and drink tea in them, who fall in love, taste breakfast, get a backache… and yes, eventually get to die.

Why a series on “rights” or “wanting”?

At Enkrypt AI, we talk a lot about the technical scaffolding around AI systems: data controls, model evaluation, red teaming, guardrails, governance. This will always matter—because these are the mechanisms by which we constrain real risks.

But something else has been happening in my conversations with CISOs, founders, regulators, and engineers: AI is forcing everyone into a deeper conversation about what we value. What are you trying to do? (And then we can get closer to scripting the how.)

Not because AI wants anything. It doesn’t, of course. It can’t.

But AI’s absence of wanting, its lack of heart, ego, mortality, and all the messy human constraints, shines a brighter light on our own. What do we want out of systems that now “behave” but do not “care”? What do we owe to the humans upstream and downstream of our AI? What are our rights in a world where machines can generate but cannot desire?

This series—whatever we ultimately call it (“Rights,” “Wanting,” “The Obligations of the Living”)—is about exploring those tensions.

AI obligates us. Or invites us. Maybe both.

AI doesn’t have morals, but its deployment forces us to make behavioral decisions:

When AI becomes part of our infrastructure, it obligates us to ask harder questions. Responsible engineers and security leaders know that good safety and security look a lot like good behavior over time. And values (desired outcomes) drive behaviors.

Parallel Processing Changed Expectations from and for Machines

For decades, computing was fundamentally serial. Faster clocks, smarter instruction pipelines, greater computing power—these were important, but they essentially gave us larger, more capable versions of a known system. Even when we distributed workloads, we were mostly decomposing problems that humans already understood how to sequence. The machine was fast, but it still proceeded one step at a time, in sequence. The breaking points and hacking techniques reflected that.

GPUs—and later TPUs, NPUs, and custom accelerators—weren’t just faster CPUs. They were architectures optimized to run the same operations across massive volumes of data simultaneously. Matrix multiplication. Vector operations. Gradient updates. The unglamorous math at the heart of modern machine learning.

Once you could do millions of those operations at once, something important happened: We stopped telling computers how to solve problems, and started giving them enough compute to approximate solutions statistically.
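To make that shift concrete, here is a small illustrative sketch in Python with NumPy (not from the original post; matrix sizes and the timing approach are arbitrary). It contrasts the serial mindset, spelling out every multiply-add one step at a time, with the parallel mindset of describing the whole operation and letting optimized, vectorized kernels (and on GPUs, thousands of cores) execute it in bulk.

```python
import time
import numpy as np

# Two modest matrices; real training workloads are orders of magnitude larger.
A = np.random.rand(150, 150)
B = np.random.rand(150, 150)

def matmul_serial(A, B):
    """Serial mindset: tell the machine every step, one multiply-add at a time."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

start = time.perf_counter()
C_serial = matmul_serial(A, B)
serial_time = time.perf_counter() - start

# Parallel mindset: describe the whole operation and let optimized,
# data-parallel kernels do it at once.
start = time.perf_counter()
C_vectorized = A @ B
vectorized_time = time.perf_counter() - start

assert np.allclose(C_serial, C_vectorized)
print(f"serial loops: {serial_time:.3f}s, vectorized: {vectorized_time:.5f}s")
```

The point is not the timing itself but the change in posture: you stop sequencing the work yourself and start describing an operation the hardware can spread across enormous amounts of data at once.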

This is the real shift: not intelligence, not reasoning, but context and scale. Now the fundamental question is trust.

From Algorithms to Systems That Behave

Functionally, parallel processing allowed models to move from brittle, rules-based tools to systems that appear adaptive.

Large language models don’t reason in the human sense. They don’t plan. They don’t understand. What they do is compress vast amounts of human-generated data into high-dimensional statistical representations—and then sample from them efficiently enough to be useful in real time.
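As an illustration of what “sample from them” means in practice, here is a hypothetical, stripped-down sketch: given the scores (logits) a model assigns to candidate next tokens, the system converts them to probabilities and draws one. Nothing here is a real model; the vocabulary and the scores are invented for the example.

```python
import numpy as np

# A toy "vocabulary" and the scores (logits) a model might assign to each
# candidate next token given some context. These numbers are made up.
vocab = ["the", "secure", "system", "fails", "learns"]
logits = np.array([2.1, 0.3, 1.4, -0.5, 0.9])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn raw scores into a probability distribution and draw from it."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature          # temperature reshapes the distribution
    probs = np.exp(scaled - scaled.max())  # softmax, stabilized against overflow
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
for _ in range(3):
    idx = sample_next_token(logits, temperature=0.8, rng=rng)
    print(vocab[idx])
```

There is no understanding anywhere in that loop; the apparent fluency comes from how well the underlying probabilities compress the data the model was trained on, and from how cheaply parallel hardware lets you draw from them in real time.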

Meanwhile, parallelism—the shift of computing from rules and instructions to statistical approximation—is inseparable from silicon, power, and supply chains, meaning AI’s “intelligence” is as much geopolitical and infrastructural as it is technical.

Parallel processing created what we think of as current AI—but it also centralized power. Compute is not evenly distributed. Access, availability, and content outputs are not neutral. And—critically—even if everything is running “as it should be,” we are making a series of trust-based, confidence-level decisions.

Mortality Is Still the Boundary Condition

This is where I’ll return to the line that made people uncomfortable: AI doesn’t get to die.

And this isn’t just about doing security and safety for the AI you need to guardrail; it’s about living—and doing executive security work—in a world where AI exists.

AI will change the behaviors of security teams, not only because those teams will use AI for security, but also because they will live in an enterprise, and a broader world, that AI has changed.

In my view, mortality isn’t a flaw—it’s a design principle. What you do in your life matters, because you and I shall not live forever. But we create systems that survive.
