Google reportedly won't build AI weapons after Project Maven controversy

Ethical guidelines are on the way, reports The New York Times.

Erin Carson Former Senior Writer

Google might have ethical guidelines in the works regarding AI and defense contracts. 

Chesnot / Getty Images

Google may be working on a set of principles to guide its work with defense and intelligence contracting. 

Google reportedly promised its employees ethical guidelines on military contracts following backlash over the company's work on the Defense Department's Project Maven, a pilot program that uses artificial intelligence to analyze drone footage and could be used to improve drone strikes. The new guidelines would prohibit the use of artificial intelligence technology in weaponry, The New York Times reported on Wednesday, and are expected to be announced within the next few weeks.

Earlier this year, more than 3,000 employees reportedly signed a letter to CEO Sundar Pichai speaking out against Project Maven, saying "Google should not be in the business of war." Last month, Gizmodo reported that nearly a dozen employees had quit their jobs in protest of Google's continued involvement in the project. 

During a companywide meeting last week, Pichai told employees that Google wanted to adopt guidelines that would "stand the test of time," reported the Times. 

Google isn't the only major tech company working with the US military. Amazon reportedly provides image recognition technology to the Department of Defense, and Microsoft offers cloud services to military and defense agencies.