At the airport train station in Marseille, a few years ago, I made a choice that I have not been able to stop thinking about. I was standing on the platform with a broken SIM card, no connection to the outside world, and an escalating sense of the kind of low-grade panic that only the digitally dependent will recognise. A woman nearby, travelling alone with three young children, was lost. She asked me for help finding a taxi.
Then a man appeared and offered me Wi-Fi from his phone. The world snapped back open. Messages flooded in. In the space of ten minutes, I had a car, a plan, a solution. I left the platform. I left the woman. I left the children. On the drive, I was already calculating the next step. It was only later, much later, when the operational clarity lifted, that I understood what I had actually done: I had chosen the person who offered me utility over the person who needed help.
I was using the logic of a system — triage, optimise, execute — on a human situation. And the system had worked perfectly. That is what frightened me.
This is the question at the centre of La paresse de penser, my forthcoming book: what happens when we internalise the logic of the machines we build? Not as a dramatic transformation, but as a slow drift — a gradual preference for the efficient over the considered, the probable over the true, the optimised path over the one that requires us to stop and think.
I want to be clear about what this book is and is not. I have spent nine years as an Applied Scientist at Microsoft, building NLP systems that are used by tens of millions of people. I have filed nine patents in AI and Human-Computer Interaction. I teach large language model architectures at EPITA Paris. I believe in this work. But believing in a technology does not exempt you from examining its effects — particularly when you are close enough to understand how it actually operates.
The book is built around three observations that have accumulated over those nine years. The first is structural: the large language models at the centre of today's AI revolution are, fundamentally, next-token predictors. They are not searching for truth; they are searching for plausibility. They generate what is most likely, most fluent, most statistically coherent. This is a powerful capability. It is also, as anyone who has studied cognitive psychology will recognise, a precise description of how human heuristic thinking works. We too prefer the probable to the verified, the familiar to the accurate, the fluent to the rigorous. Kahneman called it System 1. AI researchers call it a language model. The architecture is different. The behavioural profile is strikingly similar.
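If you want to see that mechanism stripped bare, here is a minimal sketch in Python. The candidate tokens and their scores are invented for the illustration; no real model assigns these exact numbers. But the core move is faithful: convert scores into probabilities, then take the most plausible continuation. There is no step, anywhere, that checks whether it is true.

```python
# A toy illustration of next-token prediction. The scores below are
# invented for this example, not taken from any real model.
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for continuations of "The capital of Australia is".
# The fluent, familiar answer can outrank the correct one:
# plausibility, not truth, drives the ranking.
logits = {"Sydney": 4.1, "Canberra": 3.2, "Melbourne": 2.0, "blue": -5.0}

probs = softmax(logits)
for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token:10s} {p:.3f}")

# Greedy decoding takes the argmax: the most plausible token wins,
# whether or not it is the accurate one.
print("next token:", max(probs, key=probs.get))
```

Real systems sample from this distribution rather than always taking the top token, but the principle is the same: the machinery optimises for likelihood, and truth enters only insofar as true statements happen to be common in the training data.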
The second observation is about direction. When a human relies on a heuristic, they remain the agent of that choice — even if unconsciously. With training, reflection, and the right cultural environment, they can learn to notice their shortcuts and correct for them. What concerns me is what happens when the shortcut is externalised: when the machine takes the shortcut on your behalf, and you sign off on the result without reading it. This is not hypothetical. It is the daily behaviour of millions of people using AI writing assistants, AI summarisation tools, and AI decision-support systems. The shortcut is still there. It has simply been outsourced.
The third observation is the most personal. Building systems designed to make people more productive — to reduce friction, to anticipate needs, to complete thoughts — I have noticed a threshold. A point at which "better assisted" begins to shade into "relieved of the need to think." I am not sure exactly where that threshold is. I am not sure it is the same for everyone. But I am increasingly convinced that we are not paying enough attention to it.
La paresse de penser — "the laziness of thinking" — will be published in May 2026 by FYP Éditions, in France. It is written in French, in the tradition of the French essay: a sustained attempt to think a difficult question through without pretending to resolve it. I am writing it as an engineer, not a philosopher, which has its limitations. But it also means I am not speculating about these systems from the outside. I have built some of them. I know what they optimise for. And I know what they cannot do — which is often quite different from what their users believe.
The book does not offer a solution. I do not think there is one to offer, at least not in the form of a technology fix or a regulatory framework. What it offers is a way of looking: a set of questions to bring to your next interaction with an AI system, the way you might bring a slightly more careful eye to a beautiful argument whose conclusion you are not quite sure you trust.
We are building the world's best infrastructure for cognitive laziness. Whether we use it that way is still, for now, a choice.