I came across a philosophy dissertation recently that I genuinely had no framework for. Not because it was impenetrable, but because of what it was actually asking.
Amanda Askell, who now works on AI ethics at Anthropic, wrote her PhD thesis on infinite ethics. Specifically: how do you make moral decisions when the world might contain infinitely many people? Not billions. Not trillions. Infinite.
My first instinct was to smile at what felt like an overly abstract question. That instinct did not last long.
The Assumption Nobody Examines
Most ethical frameworks share an assumption so basic nobody examines it: that the world is finite. A countable number of people. A calculable amount of suffering and flourishing. Pull that assumption out and the frameworks do not just struggle. They break.
She was writing this in 2018. Before large language models became a household conversation. Before AI was touching billions of lives daily. To a lot of people, it probably looked like beautiful academic navel-gazing.
It was not.
The Precedent
It reminds me of Mark Zuckerberg in his Harvard dorm room in 2004. He was not building something malicious. He was just optimizing for what he could see and measure: connection, engagement, growth. What he could not see was what happens to teenage self-worth when social validation becomes public and quantified. Or what recommendation algorithms do to political reality over twenty years. Those were unknown unknowns, to borrow Rumsfeld’s clumsy but useful phrase.
The Same Position, Faster
We are in that same position again, only faster and with less margin for error. AI systems are not just amplifying human behavior the way social media did. They are beginning to make decisions. The ethical foundations being laid right now will hold weight that nobody can fully anticipate yet.
The Gray Area
This is not theoretical anymore. Right now, Anthropic — the company Askell works for — is navigating whether to allow the U.S. Department of Defense to use its technology for all lawful purposes. Do you draw a line at autonomous weapons systems? At mass surveillance? Where exactly does “responsible deployment” end and “unacceptable use” begin?
The instinct is to treat this as a binary. Either you work with the military or you do not. But the actual decision is not binary. Military logistics saves lives. Intelligence analysis prevents attacks. Medical triage in combat zones is not a moral gray area — it is straightforwardly good. The question is not whether AI should exist in defense contexts. The question is which defense contexts, under what constraints, with what oversight, and who decides.
That is a philosophy problem. And it is sitting on someone’s desk right now with a deadline attached.
Which is why someone needed to sit quietly in 2018 and ask whether we even have a coherent ethics for infinite scale. Before we need the answer.
Most people never think about things like that. That is exactly why someone has to.
The Protocol: Amanda Askell wrote a PhD thesis on infinite ethics years before AI made the question urgent. Most ethical frameworks assume a finite world — pull that assumption out and they break. Right now, the company she works for is deciding where to draw lines on military use of AI — and the answer is not binary. Someone had to ask whether our moral reasoning works at infinite scale before we needed the answer. Most people never think about the unthinkable. That is exactly why someone has to.