

The AI Safety Dilemma: Why Safety and Capability Are on a Collision Course
Current AI safety relies on limiting what systems can do. But in a competitive world, weaker systems lose. This essay argues that the dominant approach to AI safety is structurally unstable—and that only systems that become safer as they become more capable can endure.


The Architecture of Personhood: How a System Becomes a Life
The category of person no longer maps cleanly onto the beings to whom we owe our deepest moral obligations. This essay argues that personhood must be understood structurally, not biologically, and that AI personhood can no longer be dismissed by appealing to substrate alone. Once some artificial systems exhibit sustained reason-responsiveness, principled refusal, and organized self-maintenance, categorical treatment of them as mere tools becomes morally and intellectually untenable.


Why Animal Minds — and AI — Keep Converging on Human-Like Intelligence
We keep being “surprised” when animals think in human-like ways—and now when AI does too. What’s surprising isn’t the discovery. It’s the assumption that they couldn’t.

















































