BOOKS | TECHNOLOGY

The New Breed by Kate Darling review — should we fear the march of the robots?

A new book argues that we need to ditch our ‘Frankenstein complex’ about a robot takeover. Review by James Bloodworth
Metal head: the Maschinenmensch from Fritz Lang’s 1927 film Metropolis
EUREKA ENTERTAINMENT

What comes to mind when we think about robots? Probably a mechanical version of ourselves. Popular culture is rife with humanoid creations, from the Maschinenmensch of Fritz Lang’s Metropolis to Harrison Ford’s replicant lover Rachael in Blade Runner. A member of my family played a remote-controlled woman in the 1949 Stanley Holloway comedy The Perfect Woman.

The humanoid portrayal of robots in speculative fiction is a product of our solipsistic tendency to “anthropomorphise” things. This lends itself to what Isaac Asimov, the science-fiction writer, once called “the Frankenstein complex”, a belief that robots will either replace humans or violently subordinate us, perhaps because history is littered with examples of us doing precisely that to each other.

In The New Breed Kate Darling, a researcher at the MIT Media Lab in the US and an expert in robot ethics, argues that we should stop thinking of robots as “quasi-humans”. Instead we should view them like domestic animals: as partners — albeit mechanical ones — rather than adversaries.

“Technological determinism” — of the sort that wildly exaggerates advances in robotics — has stoked our morbid fear of human replaceability. The reality is more prosaic, as was demonstrated during the recent attempt to fully automate the assembly line at the Tesla factory in Silicon Valley. The experiment resulted in what Elon Musk, the chief executive of Tesla, described as “manufacturing hell” when robots failed to recognise defects. A mea culpa followed from Musk, usually an arch-proponent of automation. “Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated,” he tweeted.

Humans are underrated because computer processing power is frequently confused with human intelligence, which is far more complex. Moore’s law — the prediction made by American engineer Gordon Moore that computer processing power doubles every two years — has proved largely true. Yet as Darling notes, “intelligence isn’t as simple as a linear graph of processing power”. Robots that possess the intelligence or innate skills of humans remain a long way off. Moral panics around an imminent robot takeover are, she says, “faith-based, not science”.


Darling makes a strong case that we should look to animals for an idea of how our relationship with robots will unfold. They are not “less developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely”.

We will also run into the same ethical dilemmas with robots as we have encountered with animals. Animals are incapable of following human moral codes, yet during the Middle Ages they were put on trial for crimes. In Paris in 1394 a sow was hanged “for having sacrilegiously eaten a consecrated wafer”. Convicted pigs were buried alive in Saint-Quentin, imprisoned in Pont-de-l’Arche, and tortured in Falaise. Even worms, leeches and moles faced trial.

Today we assign blame to the owners of animals rather than the creatures themselves. Yet Darling notes that it is strange that “animals are remarkably absent” from the discussion about robots and responsibility for physical harm to humans. She worries that our tendency to anthropomorphise may lead us in the wrong direction; that we will project our moral codes on to robots instead of holding their corporate controllers accountable.

That anthropomorphising instinct runs deep. Think of such phenomena as “face pareidolia”, in which onlookers see images of Jesus or Elvis in toast, or the popularity of Tamagotchis, the 1990s craze for digital pets. And lest we forget the quirkiness of human sexuality, there is such a thing as objectophilia, a romantic or sexual attraction to objects.

Metropolis was an early example of popular culture’s fascination with humanoid creations
ALAMY

Darling has interesting insights and marshals her arguments well. Yet I’m less optimistic than she is about the next wave of robotic automation in the world of work, which is predicted to make between one in ten and one in six jobs obsolete.


Looking at recent decades of deindustrialisation in the West, she glibly asserts that “once [manufacturing jobs] started to dwindle, the service sector blossomed as new jobs appeared in distribution, sales, and management”.

This is true in a very basic sense: as jobs in industry and manufacturing declined in the West we transitioned to a world with different jobs rather than no jobs. But there are many places — the Welsh valleys, the rust belt of the US and northern Europe — where people are still waiting for this elusive “blossoming”. The economic and social fallout from the last wave of deindustrialisation lingers on. We are arguably still living with the political fallout too, from the rise of Donald Trump to Britain’s exit from the European Union. Why assume the next wave of technological upheaval will be better managed?

When advanced robots do arrive — and they will, even if we exaggerate the extent to which humans are replaceable — projecting lifelike characteristics on to them “could make us feel that they deserve moral consideration”. But will it stop corporations and governments from treating human beings like robots? In looking to create a just and fair world under automation, Darling urges us to “look beyond the robots and instead to the [capitalist] systems and choices that put us at risk”.

I agree. But what should we do in the meantime? Isn’t the call to work towards a less profit-orientated system a bit like telling us to hope for the best?

It won’t be us deciding which jobs get automated; it may not even be the governments we elect. Instead it will be people such as Jeff Bezos, Elon Musk and the politburo of the Chinese Communist Party. You don’t have to be suffering from a Frankenstein complex to worry that the machines of the future may yet be used to subordinate us.
The New Breed: How to Think About Robots by Kate Darling, Allen Lane, 336pp; £20