AI 'Kill Switch' Promised by Tech Heavyweights. What It Means for Safety

The kill switch would effectively allow tech companies to shut down their AI tools in the case of a catastrophe.

Ian Sherr Contributor and Former Editor at Large / News
Ian Sherr (he/him/his) grew up in the San Francisco Bay Area, so he's always had a connection to the tech world. As an editor at large at CNET, he wrote about Apple, Microsoft, VR, video games and internet troubles. Aside from writing, he tinkers with tech at home, is a longtime fencer -- the kind with swords -- and began woodworking during the pandemic.

Tech industry giants have agreed to a set of commitments governing how they build and deploy AI.

Francesco Carta fotografo/Getty Images

A new pledge on artificial intelligence safety was made last week by world governments and leading tech companies. During a summit in Seoul, South Korea, they promised investments in research, testing and safety -- and even a "kill switch" for AI.

Amazon, Google, Meta, Microsoft, OpenAI and Samsung were among the companies that made the voluntary, nonbinding commitments to steer AI away from bioweapons, disinformation and automated cyberattacks, according to statements from the summit and reporting from Reuters and the AP.

The companies agreed to build a "kill switch" into their AI tools, effectively allowing them to shut down their systems in the case of a catastrophe.

"We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people," UN Secretary-General Antonio Guterres said in a statement. "How we act now will define our era."

The promises made by governments and leading tech companies mark the latest in a series of efforts to build rules and guardrails as the use of AI continues to grow. In the past year and a half since OpenAI released its ChatGPT generative AI chatbot, companies have flocked to the technology to help with automation and communications. 

Companies are using AI to help monitor infrastructure safety, identify cancer in patient scans and tutor children on their math homework. (For hands-on CNET reviews of generative AI products including Gemini, Claude, ChatGPT and Microsoft Copilot, along with AI news, tips and explainers, see our AI Atlas resource page.)

Read more: AI Atlas, Your Guide to Today's Artificial Intelligence

The Seoul summit took place as Microsoft, on the other side of the Pacific Ocean, was unveiling its latest AI tools at its Build conference for developers and engineers, and a week after Google's I/O developer conference where the search giant presented advances in its Gemini AI systems and also made note of its AI safety efforts.

AI safety first steps

AI experts are raising alarms that, despite promises of safety, AI development carries extreme risks.

"Society's response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts," a group of 15 experts, including AI pioneer Geoffrey Hinton, wrote in the journal Science earlier that week. "There is a responsible path -- if we have the wisdom to take it."

The agreement between governments and leading AI companies last week follows a previous set of commitments made in November, when delegates from 28 countries agreed to contain potentially "catastrophic risks" from AI, including through legislation.

Correction, May 22: This story originally misstated the location of this week's AI summit. It took place in Seoul, South Korea.

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.