In Thus Spoke Zarathustra, Nietzsche gives his prophet a speech about what becomes of human beings when they renounce themselves. The "Last Man" — der letzte Mensch — is not a monster. He is not a villain. He is someone entirely reasonable. That is what makes him frightening.
The Last Man does not suffer. He is comfortable. He has removed suffering from his life with the same efficiency with which he has removed risk, ambition, and productive uncertainty. He eats well, sleeps well, works moderately. "We have invented happiness," he says — and blinks. That blink — Nietzsche's chosen image — is the gesture of someone who has found his comfort and stopped looking for anything beyond it. No wonder, no horizon, no becoming.
I find myself thinking about this figure from my desk, where my work consists in building systems that anticipate needs before they are articulated, that offer answers before questions have been fully formed, that reduce cognitive friction toward its minimum. The infrastructure of the Last Man's world is not dystopian. It is optimised. It is user-centred. It is everything that good product design aspires to be. And we are building it with the best intentions.
The Last Man, in Nietzsche's framework, is not a historical figure — he is an orientation of the will. A permanent choice of safety over adventure, comfort over growth, the predictable over the risky. Any civilisation can produce Last Men under the right conditions. The right conditions are precisely what we are optimising for.
Consider recommendation algorithms. The well-documented "filter bubble" effect is not a bug — it is the natural output of systems optimised for engagement. Engagement correlates with comfort: content that confirms existing preferences, that does not produce the friction of surprise or contradiction, that keeps the user pleasantly in place. Showing someone a film that will disturb them productively, that will require something of them, is catastrophic for retention metrics. It may be exactly what that person needs.
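The dynamic can be made concrete with a toy model. Nothing below corresponds to any real recommender system; the "genre axes" and the scoring function are illustrative assumptions. The point is structural: a system that ranks candidates purely by similarity to past consumption will, by construction, never surface the dissimilar item — the productively disturbing film loses every time.

```python
# Toy sketch of engagement-first ranking (hypothetical, not any real system).
# Items are vectors on invented (comfort, challenge) axes; the recommender
# scores each candidate by cosine similarity to the user's history and
# always serves the closest match.

from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(history, candidates):
    """Pick the candidate most similar to the mean of the user's history."""
    profile = [sum(col) / len(history) for col in zip(*history)]
    return max(candidates, key=lambda item: cosine(item[1], profile))

# Hypothetical viewing history: only comfortable films so far.
history = [(0.9, 0.1), (0.8, 0.2)]
candidates = [
    ("familiar", (0.85, 0.15)),   # confirms existing preferences
    ("disturbing", (0.1, 0.9)),   # would require something of the viewer
]

print(recommend(history, candidates)[0])  # → familiar
```

There is no malice in this function, and that is the point: the challenging item is not suppressed, merely never the argmax. The Last Man's feed assembles itself out of locally reasonable choices.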
Large language models introduce an additional dimension to this dynamic. They do not merely optimise what you consume — they anticipate what you think, complete your sentences, articulate your intentions before you have fully formed them. This is genuinely useful. It is also, to some degree, the substitution of an external system for your own cognitive process. The question Nietzsche invites us to ask — not polemically, but with genuine philosophical care — is: at what point does "assistance" become "substitution"?
The Last Man is not unintelligent. He is intelligent in a particular way: good at avoiding problems, minimising risks, delegating difficulties. Nietzsche does not attack him for laziness; he attacks him for the absence of a project of self-overcoming, for his silent consent to his own diminishment. "They have left the regions where it was hard to live: for one needs warmth." The tragedy is not suffering — it is the successful elimination of all conditions under which growth becomes necessary.
I am not arguing that using AI tools makes you a Last Man. I am arguing that the question is worth asking, individually and collectively, at each act of delegation: am I using this tool to free cognitive energy for something more ambitious? Or am I using it because thinking is hard and this tool makes the hardness avoidable? The difference is subtle. It is, I think, the difference between a tool that serves a life and a tool that gradually replaces one.
Nietzsche wrote in a world where the tools were steam engines and telegraph wires. The question he was asking has not changed. Only the power of the tools has.