Jacques Ellul published The Technological Society in 1954 (in the original French, as La Technique ou l'Enjeu du siècle). The world he described — a civilisation entirely restructured by the internal logic of technique — was, at the time, a projection. Reading him now, from a desk where language models self-optimise and autonomous agents make decisions without human intervention, the projection has collapsed into the present.
Ellul was not a technophobe. He was something more difficult: he was clear-sighted. His central thesis is not that machines are bad, or that technical progress is illusory, or that we should return to some pre-industrial state. His thesis is that technique — which he uses broadly to mean any rational method optimised for efficiency — has developed its own autonomous logic that imposes itself on all its participants: engineers, institutions, states, individuals. This autonomy is not the product of anyone's intention. It is the emergent property of a system in which efficiency is the supreme value.
What this autonomy means in practice: once a technique is available and efficient, it tends to be adopted because it is efficient, not because its users have collectively deliberated about its consequences. Adoption is not a considered collective choice. It is the resultant of thousands of individual and institutional decisions that each follow the same logic — this technique works better, therefore we use it. The "better" is always defined in the technique's own terms: faster, cheaper, more scalable. What cannot be measured in those terms is elided.
Ellul identifies several properties that distinguish the modern technological phenomenon. Two are particularly relevant to AI. The first is what he calls technical self-augmentation: techniques call for and condition one another. The automobile required the highway, which required suburban sprawl, which required the automobile to function. The large language model requires the data centre, which requires the energy infrastructure, which requires an industrial policy, which is shaped in turn by the technology companies. At each step, the previous technique constrains the choices available for the next one.
The second property is what Ellul calls the formation of the technicien — the technical specialist, trained to think within the categories of technique. The technicien is not malevolent. He is simply blind to the dimensions of reality that resist technical formalisation. He optimises. He improves. He solves problems within the frame of the problem as technically stated. The question of whether the problem should be framed differently — whether the objective itself merits examination — does not come naturally.
Reading this as an engineer, the recognition is uncomfortable. I am part of the system Ellul describes. My daily work is building more efficient techniques. Every patent I have filed is a technical solution to a technical problem, which creates a new set of constraints that makes certain ethical and cognitive questions harder to ask. I have not often paused mid-sprint to ask whether the sprint itself was in the right direction.
The application to AI is direct. Building increasingly capable AI systems follows a perfectly coherent technical logic. Reducing cognitive friction is a legitimate improvement objective. Making decisions faster, information more accessible, workflows more fluent — all of this is the grammar of technical progress. But the Ellulian question persists: in optimising for all of this, what are we doing to our capacity to think for ourselves?
Ellul did not claim to have a political answer to this question. He thought technique was a civilisational problem, not an engineering one. The implication for those of us inside the system is not to stop building, but to refuse to be blind to what we are building. The minimum that Ellul asks of the technicien is awareness. That may be less than enough. But it is more than we are currently managing.