RED BOX | COMMENT

Maybot talks robots

Matt Chorley
The Times

It must have been a brave adviser who suggested to the prime minister that she give a speech about robots, rather than like one.

But as Theresa May wakes in a snowy Davos, she is determined to talk tech.

She’d rather not talk about Brexit. There is to be no repeat of “citizens of nowhere” (though Team May is stressing that she is “uncomfortable” with the idea of Davos). Defence and security is a bit tricky, given rows back home about spending.

Now, it’s easy to joke that on the list of possible topics robots and artificial intelligence comes just above the weather, but actually this is exactly the sort of big cultural issue that can get overlooked by a political system gripped by short-termism.

For a long time politicians on both sides of the Atlantic were reluctant to take on the cool kids in Silicon Valley with their space hoppers and breakout areas and codes.

There was a not entirely unjustified fear that they would look like analogue Grandad if they started asking questions about the popular microblogging websites. By definition, politicians caught up in the business of running the country are less likely to be early adopters of new technology. Tony Blair didn’t have a mobile phone until he left Downing Street in 2007. I was there when David Cameron asked “what is the Buzzfeed?” of someone from “the Buzzfeed”. Amber Rudd was mocked for using the phrase “necessary hashtags” while talking about WhatsApp.

It’s a reluctance that the tech firms were happy to encourage. For years they believed they were transnational: without borders or rules or, for journalists, a press office with something as retro as a phone number.

There are signs that this is changing. The move towards treating social media sites not just as platforms but as publishers, making them take responsibility for what they publish, is important, and one that May will hint at today.

She will tell the world’s biggest investment companies to put pressure on social media providers to remove terrorist and extremist content.

“No one wants to be known as ‘the terrorists’ platform’ or the first-choice app for paedophiles,” she says, pointing out that it’s not just the law that matters to these firms, but their reputation too.

If Facebook can stop me uploading a video because the music track is copyrighted, why can’t it stop someone uploading a video showing a beheading? If Twitter can read my tweets to target ads at me, why can’t it curb abuse?

It’s also important to remember that tech and artificial intelligence are not all about Facebook and Twitter and Netflix.

There are big ethical questions about the way companies and public bodies can use data and algorithms, especially online, to research, segment, target and decide. (Something we discuss in this week’s podcast.)

The government has promised a £9 million Centre for Data Ethics and Innovation to get to grips with the emergence of artificial intelligence, which offers great gains for countries such as the UK but raises real concerns too.

Just this week we’ve seen how an insurance website charges people more if they log in with a Hotmail email address. A computer system used for sentencing in a Florida court was accused of being racist after treating black suspects more harshly. In 2016 Microsoft’s chatbot, Tay, was quickly corrupted by Twitter users so that when it was asked “Do you support genocide?”, it replied: “I do indeed.”

Who is responsible if a driverless car crashes? Or if artificial intelligence controls where a missile goes? Or wrongly diagnoses an illness? Who rules the robots?

Matt Chorley’s analysis first appeared in The Times Red Box morning email. Sign up at thetimes.co.uk/redbox