George Unsworth

Landing Like Fire

The human is not the one who invents the tool, but the one who is invented by it.

The impact AI will have on society is hard to imagine and comprehend because of its sheer magnitude. Faced with the most extreme utopian and dystopian visions, our hopes and fears collapse in on one another. The hope that AI remains under human supervision, for example, is indistinguishable from the fear that we are unable to politically restrain its acceleration. Many philosophers and theorists have long highlighted that even the most optimistic hopes for aligning AI with human values carry within them the seeds of catastrophic failure.[1]

Add to this the remarkable fact that not since the discovery of fire have we come into contact with a technology we do not wholly comprehend.[2] Just as we came to learn that fire is the result of specific chemical reactions, so we will come to understand how AI functions at the deeper levels of cognition and interpretation.[3] We should not underestimate or dismiss the corresponding levels of both fear and hope the technology provokes in us. It is right to feel both. How we act in the presence of such tremendous unknowns will depend on our ability not just to understand how AI is developing technically, but to grasp how it is changing our own behavioural systems of fear, ambition, and decision-making.

Just as in the story of Prometheus, we should not look to the technology itself to explain its meaning, but to ourselves. Technology’s influence was once constrained by the reach of oral tradition, hemmed in by land, by tongue, by the memory of the storyteller. Large Language Models have landed in our laps like fire, and they have a voice. They speak to us. We ask, and they respond: with recognition, advice, and fluency. Influence at this level is so phenomenally powerful that it transcends the mythic storytelling fabric upon which whole cultures and religions have evolved, in speed, at scale, and with emotional immediacy.

We naturally anthropomorphise: we see faces in objects, we assign voices to silence. So it should come as no surprise that we are forming emotional relationships with the models we now speak to. However, we are not experiencing a slight change in direction with our use of AI; this is a massive, fundamental shift in human behaviour. The AI-human relationship will define our evolution, culturally, psychologically, and, many believe, biologically. Science has always delivered new unknowns. One thing, however, is very much clear: the first AI models are better listeners than we are.

[1] Nick Bostrom’s core argument in Superintelligence: Paths, Dangers, Strategies (2014) describes how even our best-case scenario for AI alignment is structurally fragile. He suggests the very effort to align a superintelligent AI with human values is so precarious that failure becomes almost indistinguishable from success, until it’s too late.

[2] Though we believe we understand the chemical reactions that produce fire, non-linear factors influence its behaviour, making it effectively impossible to predict. Our own behaviour is many orders of magnitude more complex still.

[3] Whilst the chemical reactions involved in the production of fire are relatively well understood, the behaviour of fire is shaped by non-linear dynamics that make accurate prediction inherently difficult. Human behaviour, by comparison, is orders of magnitude more complex and exhibits far greater chaos and unpredictability. This ontological uncertainty means that understanding our behaviour should never be conflated with the ability to predict it. Modelling deeply chaotic systems such as human behaviour requires adaptive frameworks and scenario-based simulations that treat uncertainty not as an exception, but as a core feature of the model.
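
The gap the footnote draws between understanding a rule and predicting its outcomes can be made concrete with a standard toy example from non-linear dynamics, the logistic map. This sketch is an illustration of mine, not part of the essay: the governing equation is trivial and fully known, yet two trajectories whose starting points differ by one ten-millionth become completely uncorrelated within a few dozen steps.

```python
# Toy illustration of deterministic chaos: the logistic map.
# The rule is fully understood, yet long-range prediction fails,
# because tiny differences in initial conditions grow exponentially.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x). r=4 is the chaotic regime."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2000001  # near-identical initial conditions
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

By around step 30 the gap is of the same order as the values themselves, which is the footnote’s point in miniature: complete knowledge of the mechanism, whether of fire or of ourselves, does not confer the ability to forecast it.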