Humans

In 2015 Channel 4 produced a television drama series called Humans. It ran to three series, ending in 2018, and is set in a world identical in many ways to our own, except for some very significant advances in robotics. As a result, human society now includes a large number of ‘synths’, who live and work alongside humans, often in menial roles. For the most part the synths are insentient, but some are evidently conscious.

It’s an enjoyable series, though at times some of the acting is less than convincing and the budget was probably fairly modest. Setting it in a world identical to our own, with no other technological advances – apart from the robots who look like humans – enabled the producers to save quite a bit on props, sets, and special effects. But this also serves to focus attention on the central drama of the evolving relationship between human and synth, which unfolds in an entertaining and inventive way. The series contains a number of storylines, many of which explore interesting philosophical, sociological and psychological themes. There is even an emerging movement involving human children choosing to identify and live as synths.

Questions around artificial intelligence, and its consequences for humanity, are a staple of science fiction and have been explored in a number of works. In Blade Runner, one of the best-known examples of the genre, the distinction between human and machine intelligence rests on the assumption that only human beings are capable of feeling genuine emotion. The point is that an emotion is not just an instinctive reaction to a stimulus, but a state about which we are capable of self-reflection: to experience emotions requires more than the capacity to process information. Similarly, in Humans the theme is consciousness, and one thing the show undoubtedly gets right is that consciousness must also entail free will. If this is true, then Asimov’s three laws of robotics, contrived to protect humans from being harmed by a machine possessed of artificial intelligence, go out the window. For artificial intelligence to be truly intelligent it must be conscious, and if truly conscious it must also have free will.

Thus, in the first series of Humans, one of the conscious synths attacks and kills a human. She agrees to turn herself in on condition that she be accorded a fair trial, as a human, in recognition of her being a conscious being. Things don’t go quite according to plan, and the inevitable rivalry, mistrust and conflict between the two species escalate with depressing predictability.

The humans are very often – but not always – portrayed in a bad light: as capricious, deceitful and morally inconsistent. Shamefully, they continue to exploit and abuse the synths, even when they do finally realise that the synths are indeed truly conscious. Meanwhile, the conscious synths are frequently portrayed as having clearer and more logically consistent reasons for their actions – even when those actions are unethical, as they too succumb to violence and hatred.

In an interesting sub-theme, by the third series some of the synths start to develop a sense of wonder at their own existence, leading them to compose a quasi-religious narrative of a creator, which in turn prompts them to describe certain things as miraculous. This suggests – perhaps unfashionably in the current climate – that religious instincts, whatever they may be, are not a purely human idiosyncrasy but a universal constant of conscious minds.

The underlying philosophical question driving the narrative of the series, and many of its subplots, is this: if machines can be conscious, what does that make them? Should they be considered equivalent to human beings, and treated accordingly, with due regard to human rights and so on; or are they still merely machines with an anomalous software modification? Can the two ‘species’ overcome their mutual fear and distrust in order to co-exist, or must they inevitably compete in a zero-sum fight for survival?

These questions lead us to the very heart of what it is to be human, feeding the drama that is played out in the many twists and turns of the plot. However, these tantalising excursions into the nature of consciousness also provoke another question that doesn’t really get asked – never mind answered – perhaps because it is much more difficult.

If machine consciousness is possible, it must be a consequence of programming – whether deliberate or a glitch. But if consciousness can be reduced to computer algorithms, and effortlessly transferred between devices – as happens in the series – then what does that make us? Are we, effectively, no more than machines?

Either consciousness is unique and irreducible, which would make the notion of truly conscious AI logically impossible, or we – and life itself – are really nothing more than a computer simulation.
