Dear Google, you know all those artificially intelligent things that you think are so blissfully fascinating and that portend a wonderful future for our Earth?
Well, have you ever wondered whether they might not be such a good thing?
It's a question that's crossed my mind before. But now Stephen Hawking has articulated his own fears in an article for the Independent, written after seeing the new Johnny Depp movie "Transcendence."
Offering an argument that pertains not just to AI of the future, but even to companies like Google and Facebook now, he said: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
In both the short- and long-term, there are vast potential nightmares lurking.
Hawking, indeed, seems not to trust the so-called AI experts at all.
He said: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying: 'We'll arrive in a few decades,' would we just reply: 'OK, call us when you get here -- we'll leave the lights on'? Probably not -- but this is more or less what is happening with AI."
Indeed, it often seems as if the commitment to engineering supersedes any threat the end product might have to humanity. Self-driving cars are but a small example. Engineers don't seem to care much that people actually enjoy driving.
Hawking conceded that robots and other artificially intelligent machines might bring enormous benefits. If they were a success it would be, he said, "the biggest event in human history."
However, he also suggested that AI might be the last event in human history.
He lamented the fact that relatively little research was being done to examine the potential risks and benefits.
He said of the dynamic arrival of AI: "We are facing potentially the best or worst thing to happen to humanity in history."
Hawking has, in the past, tried to remind people that our fascination with sci-fi can blind us, masking the possibility that the consequences could be utter disaster.
Indeed, he once suggested as much. Surely not, you might think. We're so lovable.
Humanity has a tendency to fall in love with its own cleverness and somehow never consider that something might go wrong.
Perhaps, as Hawking says, more people from the scientific side of life ought to focus on preparing for bad things. You know, like people looking at someone wearing Google Glass and thinking: "Ew."
Just, you know, in case.