Elon Musk, Stephen Hawking win Luddite award as AI 'alarmists'
The world's most famous visionaries, including Microsoft co-founder Bill Gates, are included in a "loose coalition" of experts blamed for artificial intelligence scare tactics.
Rochelle Garner, Features Editor / News
Rochelle Garner is features editor for CNET News. A native of the mythical land known as Silicon Valley, she has written about the technology industry for more than 20 years. She has worked for an odd mix of publications, from National Geographic magazine to MacWEEK and Bloomberg News.
You probably don't think of Elon Musk or Stephen Hawking as technophobes. After all, Musk's Tesla Motors practically created the electric-vehicle market, and Hawking is the brain behind the theory of everything.
Still, both visionaries were identified as members of a "loose coalition" of "alarmists" that won the 2015 Luddite Award, a prize given by the Information Technology & Innovation Foundation (ITIF). For good measure, Bill Gates, the man most responsible for bringing computers into our homes, was also named.
Why would such august company, responsible for some of the greatest technological and scientific advances of the past 30 years, be deemed Luddites? Because, the ITIF says, they "stirred fear and hysteria" by warning that artificial intelligence could too easily run amok. The so-called alarmists were among 10 nominees for the ITIF's second annual Luddite Award, but earned top "honors" after garnering 27 percent of 3,680 votes.
The term Luddite comes from early 19th-century England, where weavers and textile workers protested power looms and other labor-saving devices. Merriam-Webster defines a Luddite as someone opposed to technological advances.
Musk "is the antithesis of a Luddite, but I do think he's giving aid and comfort to the Luddite community," said Rob Atkinson, president of the Washington, DC-based think tank. Musk, Hawking and AI experts say "this is the largest existential threat to humanity. That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation," Atkinson added.
Tesla and Hawking didn't respond to emails seeking comment.
To be sure, Musk hasn't been shy about the potential dangers of AI. His worry is that machines endowed with human-level intelligence could harm us all. In 2014, he tweeted that AI is "potentially more dangerous than nukes." In May of that year, Hawking co-wrote an article for The Independent, saying: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
Last January, the two signed an open letter issued by the Future of Life Institute calling for research to ensure that advances in the field don't grow beyond humanity's control. In July, they signed another letter urging a ban on autonomous weapons that "select and engage targets without human intervention." The Future of Life Institute researches ways to reduce the potential risks of artificial intelligence running amok. It was founded by mathematicians and computer science experts, including Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.