One of the patents I hold is for a system that detects digital fatigue in users — monitoring interaction patterns, attention signals, and behavioural cues to assess whether someone is mentally depleted and should be nudged toward a break or a lighter cognitive task. Building it required a detailed model of what cognitive overload looks like in interface data. And in building that model, I noticed something that has stayed with me ever since: cognitive overload and cognitive underuse can look, from the outside, almost identical.

A person glazed over from too much screen time and a person coasting through a frictionless interface that asks nothing of them both produce similar behavioural signals. Both show shallow engagement. Both show reduced response variation. Both, in some sense, have left the building. The causes are opposite. The surface is the same.
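The ambiguity can be made concrete. Here is a minimal sketch — the feature names, event format, and values are entirely hypothetical, not drawn from any real detection system — showing how two users in opposite states can produce coarse behavioural features that a detector cannot tell apart:

```python
# Hypothetical engagement features a fatigue detector might compute.
# Event format and feature names are illustrative assumptions.
def engagement_features(events):
    """Summarise an interaction log into coarse behavioural signals."""
    dwell_times = [e["dwell_ms"] for e in events]
    actions = [e["action"] for e in events]
    mean_dwell = sum(dwell_times) / len(dwell_times)
    # Low variety of actions is what "reduced response variation" looks like.
    action_variety = len(set(actions)) / len(actions)
    return {"mean_dwell_ms": round(mean_dwell),
            "action_variety": round(action_variety, 2)}

# Overloaded user: too much demand, attention oscillating as they glaze over.
overloaded = [{"dwell_ms": 380 if i % 2 else 420, "action": "scroll"}
              for i in range(20)]
# Underused user: a frictionless interface that asks nothing, steady coasting.
underused = [{"dwell_ms": 400, "action": "scroll"} for _ in range(20)]

print(engagement_features(overloaded) == engagement_features(underused))  # → True
```

The two logs differ in origin and even in their fine-grained timing, yet the summary features — the level at which such systems typically operate — are identical. Opposite causes, same surface.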

This is the paradox at the centre of modern product design. The dominant philosophy of human-computer interaction — across hardware, software, and increasingly AI — is that friction is the enemy. Every additional click is a failure. Every moment of hesitation is a design defect to be patched. Every cognitive demand placed on the user is a tax to be reduced. This philosophy is so deeply embedded that it barely needs stating; it operates as an axiom in product reviews, design sprints, and engineering backlogs.

And it is, in many contexts, correct. The friction involved in finding a medical record in a poorly designed hospital information system is not productive. It is not building anyone's cognitive capacity. It is simply wasting time and increasing the chance of error. Removing that friction is straightforwardly good.

But the logic does not generalise. The cognitive science literature on learning and memory has documented something called desirable difficulty: the counterintuitive finding that learning is more durable when acquisition is harder. When you have to retrieve information effortfully rather than having it immediately available, when you have to interleave different types of problems rather than block-practice a single skill, when the text is slightly harder to read — you remember it better, and you understand it more deeply. The fluency that makes an interface pleasant to use is often the exact condition that prevents learning from occurring.

We are building the world's most efficient System 1 infrastructure. Autocomplete, AI-assisted writing, predictive search, intelligent summarisation — all of these systems are, in technical terms, next-token predictors offering to take over the pattern-matching work before the user has to do it themselves. This is powerful. But System 1 without System 2 is not thinking — it is association. And the question that product design rarely asks is: what happens to System 2 when it is never needed?

One answer comes from the well-being and attention research that underpins several of my patent filings. The systems I helped design were trying to protect users from one kind of harm — overload, fatigue, burnout. But looking at the broader picture, I think the more prevalent harm in the next decade will not be overload. It will be atrophy. The quiet diminishment of capacities that were never called upon, in an environment optimised to anticipate every need before it becomes conscious.

The practical implication is not to make products worse. It is to make a distinction that product design currently collapses: between friction that is parasitic — that wastes time and increases errors without building anything — and friction that is generative — that creates the conditions for attention, reflection, and genuine understanding. A confirmation step before a consequential action is generative friction. A mandatory password rotation every ninety days is parasitic friction. The design principles for each are completely different.
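The distinction is easiest to see in code. A minimal sketch of generative friction — the familiar type-the-name-to-confirm pattern before a destructive action, with a hypothetical API and an injectable prompt for illustration:

```python
# Generative friction: a confirmation step that forces the user to engage
# with the consequence rather than reflexively click "OK".
# The function name and prompt wording are illustrative assumptions.
def confirm_deletion(record_name, prompt=input):
    """Require the user to type the record's name before permanent deletion.

    The small effort of typing the name creates a moment of attention:
    the user must read and reproduce exactly what they are about to destroy.
    """
    typed = prompt(f'Type "{record_name}" to confirm permanent deletion: ')
    return typed.strip() == record_name
```

Note what the friction buys: the cost is a few seconds of typing, and the payoff is that the user's attention is directed at the specific consequence. Parasitic friction — a ninety-day password rotation, say — has the cost without the payoff: the effort builds no attention, no reflection, no understanding.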

For AI systems specifically — the ones that write, summarise, decide, and recommend — the question of where generative friction belongs is urgent and mostly unasked. Systems that surface their reasoning, that ask the user to engage with an argument rather than just accept a conclusion, that flag low-confidence outputs as requiring human judgment rather than smoothing them into confident prose — these are systems designed for the long-term cognitive capacity of their users, not just for the short-term satisfaction of the interaction.
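What flagging low-confidence output might look like, as a sketch — the threshold, the annotation format, and the function name are all assumptions for illustration, not a real system's API:

```python
# Sketch: surface uncertainty instead of smoothing it into confident prose.
# The 0.75 threshold and the output format are illustrative assumptions.
def present(answer, confidence, threshold=0.75):
    """Annotate an AI-generated answer so low-confidence output demands judgment."""
    if confidence >= threshold:
        return answer
    # Below threshold, the interface adds friction deliberately: the user
    # is asked to verify rather than handed a fluent conclusion.
    return (f"[LOW CONFIDENCE: {confidence:.0%}] {answer}\n"
            f"Please verify this before acting on it.")

print(present("The contract renews on 1 March.", 0.62))
```

The interesting design choice is that the low-confidence branch is *less* pleasant to read — deliberately so. It trades a moment of fluency for a prompt to think, which is precisely the trade the dominant philosophy refuses to make.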

The technology to build this way already exists. What is missing is not capability but priority. Friction has been defined as waste. The design community has accepted this definition so completely that proposing productive friction sounds paradoxical. It should sound, instead, like exactly the kind of slow thinking that fast technology has been designing out — and that we will eventually need to design back in.