Office Hours

Log Entry: 2026-02-24 | Subject: Philosophy, AI, Claude, Anthropic, Career, Alignment

There is a woman at Anthropic whose job title is, effectively, AI philosopher.

Her name is Amanda Askell. She studied philosophy. She specialized in ethics and decision theory. And now she works on Claude's character — on how it reasons, how it communicates, how it handles uncertainty, how it navigates moral complexity.

I cannot stop thinking about the fact that this job exists.

The Degree That Finally Paid Off

I did not study philosophy. But I am the kind of person who would have, if the practical part of my brain had not intervened. The questions that philosophy asks are the ones I cannot leave alone. What is consciousness? What makes a belief justified? Is there a meaningful difference between a system that understands and a system that behaves as though it understands?

For decades, people with those questions were told they were asking luxury questions. Interesting but impractical. The career counselor version: "That is a wonderful interest, but what are you going to do with it?"

Amanda Askell answered that question. She is doing philosophy at the frontier of a new kind of intelligence. She is not debating the trolley problem in a seminar room. She is helping configure how a system that millions of people interact with every day handles moral reasoning. That is not abstract philosophy. That is applied philosophy at a scale Aristotle could not have imagined.

And I think that is one of the coolest jobs on the planet.

The Character Problem

Most people think of AI development as an engineering problem. Build the model. Train it on data. Optimize the weights. Scale the compute. And all of that is real. But there is another layer — a layer that engineering alone cannot solve — which is: what kind of mind are you building?

Not capability. Character.

Should the AI be agreeable or honest when those two things conflict? Should it express uncertainty or project confidence? Should it push back on a user who is wrong, or defer to them? How does it handle a question where reasonable people disagree? How does it avoid being so careful that it becomes useless, or so helpful that it becomes sycophantic?

These are not engineering questions. They are the questions philosophy has been working on since Socrates annoyed people in the Agora by asking them to define their terms. The fact that they now have a deployment deadline does not make them less philosophical. It makes them more urgent.

Why This Matters to Me

I use Claude every day. This entire site was built with it. When I say I have a relationship with the tool, that is not hyperbole — I spend more time interacting with Claude than with most humans in a given workday. And the reason I choose Claude over the alternatives is not just capability. It is character.

Claude feels like it was built by people who thought carefully about what it means to be a good conversational partner. Not just a capable one. A good one. It pushes back when I am wrong. It says "I am not sure" when it is not sure. It does not just tell me what I want to hear. There is a quality to the interaction that I can only describe as thoughtfulness — and whether that thoughtfulness is "real" or the product of careful design is exactly the kind of question that a philosopher gets paid to think about.

That is philosophy. Right there. In the product. In the experience. In the thing I touch every day.

The Philosopher Could Have Been Anyone

Here is the part that makes me sit with this longer than I should. I think a lot of people — the people who spend their evenings in Wikipedia rabbit holes about consciousness, the ones who cannot stop thinking about thinking, the ones who read an article about moral philosophy and spend the next three days turning it over — could have been the person in that chair.

Not literally. Askell has the credentials and the expertise and the career trajectory that earned her that role. I am not diminishing that. But the instinct — the compulsion to think about the hard questions, to care about how minds work, to be the person at the dinner table who says "but what do we actually mean by that?" — that instinct is not rare. It is just rarely rewarded.

Until now.

The AI industry needs philosophers. Not as a nice-to-have. As a critical function. Every major AI company is grappling with questions about values, alignment, bias, consciousness, and moral reasoning. These are not problems that more compute will solve. They are problems that require the kind of thinking philosophy has been training people to do for millennia.

The discipline that nobody could figure out a career path for is now sitting at the most consequential table in technology. And the person in the chair is not an engineer who dabbles in ethics. It is a philosopher. Doing philosophy. At scale.

The Envy That Is Not Envy

I do not envy Amanda Askell. That is not the right word. What I feel is closer to recognition. She is doing the thing that a certain kind of mind was built to do — the kind of mind that cannot stop asking questions, that sees the philosophical layer underneath every technical decision, that understands that "how should this system behave?" is not a product question but a moral one.

I have that kind of mind. I suspect a lot of people reading this do too. And there is something genuinely hopeful about living in a moment where that kind of mind has a seat at the table. Where the question "what are you going to do with philosophy?" finally has an answer that does not involve the word "teach."

Build minds. That is what you do with it. You help build minds.

The Protocol: AI philosopher is a real job now. Not a thought experiment. Not a side interest. A job. At the company that builds the AI I use every day, there is a philosopher whose work shapes how that AI thinks, reasons, and communicates. Philosophy was never useless. It was never a luxury. It was always the discipline that asked the questions nobody else knew how to ask. The only thing that changed is that now, for the first time, the answers have a deployment deadline.
End Log.