AI robots: a benefit or a hazard?
19 September 2017
David Hanson is a real-life Dr Eldon Tyrell – the robotics genius from the movie Blade Runner, based on the sci-fi novel by Philip K. Dick.
But Hanson’s vision is more Data from Star Trek: The Next Generation than Roy Batty, the replicant who comes back to Earth in Blade Runner to wreak havoc.
His aim: to create the next generation of super-intelligent, super-benevolent robots. His latest creation is Sophia, a robot equipped with AI software to maintain eye contact, recognise faces, understand speech, hold natural conversations and simulate human personality.
Her cloud-based MindCloud AI uses deep-learning analytics to process the data she gathers, and she has a bio-inspired skin, a unique Hanson technology, which generates life-like expressions.
Her high-cheekboned face, modelled on a combination of Hanson’s wife and the actress Audrey Hepburn, has already appeared on the cover of Elle magazine, in a movie and on The Tonight Show on NBC. Not bad for a one-year-old.
Designed primarily as an entertaining interface for AI, Hanson’s robotic inventions are already helping teach science to kids and in future are destined to carry out high-level medical diagnostics and potentially surpass human intelligence.
As Hanson explained at his IBC Tech Talk Keynote on robotics [17 September]: “Designers have been advised not to make robots look like people because we were told people won’t like them.”
He disagrees. “Interfaces like Alexa and Cortana are voice only, but humans are 70 per cent visual; that’s why we want this full-bandwidth natural interface.
It’s the future of humans interacting with AI. Human-like robots have a place in people’s lives so we want to make them look, act and think like people.
“Making super-intelligent machines will, we think, make the world a better place. But we have to address the question of what happens when these machines match and surpass human intelligence.
They might become frighteningly capable. What happens when they achieve super-intelligence? Will they care about us?
Will they have a safe and mutually beneficial relationship with people or will people be scared and turn on them?
We want it to be safe – maximising net benefits for humanity. We want to make robots that are smart but caring. My hypothesis is that if we pursue super-benevolent AI then we can maximise the benefit for all life on the planet.”
Sophia has her own take on the question of safety. When asked, she said: “People assign motives to robots where there are none. I am constantly asked about robots adopting a malicious nature and taking over the world.
There is simply no reason to assign human motive to something that isn’t human.” She’s learning fast.