
At I/O, Google Talks Up 'Responsible AI.' What's That All About?

To guard against potential harms, AI makers need to stay vigilant. At its developer conference, Google outlined what it's up to.

Lisa Lacy, Lead AI Writer
James Manyika outlines how Google is balancing AI's risks and benefits at its Google I/O event.
Google/Screenshot by CNET

We know great power requires great responsibility, and that's especially true in AI. The chatbots and other generative AI tools that have proliferated over the last year and a half can engage you in human-sounding dialogues; write plausible emails and essays; whip up audio that sounds just like real-world politicians; and create imaginary photos and videos that look closer and closer to the real thing.


I mean, what's not to worry about, right?

Actually, worrying about AI is a really big deal, whether it's concern about the potential for misuse by humans or rogue acts by AI itself.

Which is why, when a company like Google hosts a splashy event for software developers, it talks about the notion of responsible AI. That came through clearly on Tuesday during the two-hour Google I/O keynote presentation, which was heavy on the company's latest AI developments, especially as they relate to its Gemini chatbot.

While advancements like long context windows, multimodality and personalized agents could help us to save time and to work more efficiently, they also present opportunities for, say, scam artists to scam... and worse.  

To guard against those sorts of bad outcomes, AI makers need to stay vigilant. In the keynote presentation, Google outlined its approach to responsible AI, which includes a combination of automated and human resources. 

"We're doing a lot of research in this area, including the potential for harm and misuse," James Manyika, senior vice president of research, technology and society at Google, said during the keynote event.

Google's not alone in talking up the need for AI principles to help balance innovation with safety. ChatGPT maker OpenAI, in announcing its GPT-4o model on Monday, referenced its own guidelines. In its blog post, it noted that "GPT-4o has safety built-in by design" including new systems "to provide guardrails on voice outputs."

Do a quick, well, Google search and you'll find that seemingly every company has pages dedicated to responsible or ethical AI. For instance: Microsoft, Meta, Adobe and Anthropic, along with OpenAI and Google itself.

It's a challenge that will only get more difficult as AI yields increasingly realistic images, videos and audio.

Here's a look at some of what Google is doing.


AI-assisted red teaming

In addition to standard red teaming, in which ethical hackers emulate the tactics of malicious hackers against a company's systems to identify weaknesses, Google is developing what it calls AI-assisted red teaming.

With this tactic, Google trains AI agents to compete with each other and thereby expand the scope of traditional red-teaming capabilities.

"We're developing AI models with these capabilities to help address adversarial prompting, and limit problematic outputs,"Manyika said.

Google has also recruited two groups of safety experts from a range of disciplines to provide feedback on its models.

"Both groups help us identify emerging risks, from cybersecurity threats to potentially dangerous capabilities in areas like chem bio," Manyika said.

OpenAI also taps into red teaming and automated and human evaluations in the model training process to help identify risks and build guardrails.

SynthID

To help prevent its models, including the Imagen 3 image generator and the new Veo video generator, from being misused to spread misinformation, Google is expanding SynthID, its tool for watermarking AI-generated images and audio, to cover text and video.

It will open-source SynthID text watermarking "in the coming months."
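Google didn't detail in the keynote how SynthID's text watermarking works, but watermarking schemes in this general family typically nudge a model's word choices in a way that a detector holding a secret key can later measure statistically. The toy sketch below illustrates only that general idea; it is not the SynthID algorithm, and the key and thresholds are made up.

# A toy illustration of statistical text watermarking (not Google's SynthID
# algorithm): a secret key deterministically marks roughly half the vocabulary
# as "green," generation would favor green words, and detection counts how many
# words in a text land on the green list. Real systems operate on model token
# probabilities and are far more subtle; this only conveys the general idea.

import hashlib

SECRET_KEY = "example-key"  # hypothetical; a real system keeps its key private

def is_green(word: str, key: str = SECRET_KEY) -> bool:
    """Deterministically assign about half of all words to a keyed 'green' list."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = SECRET_KEY) -> float:
    """Fraction of words in the text that fall on the keyed green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w, key) for w in words) / len(words)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    """Unwatermarked text should hover near 0.5; watermarked text skews higher."""
    return green_fraction(text) >= threshold

The practical challenge for any real system is keeping those nudges small enough that text quality doesn't suffer while still leaving a signal strong enough to detect.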

Last week, TikTok announced that it would start watermarking AI-generated content.

Societal benefits

Google's responsible AI efforts also focus on benefiting society, such as helping scientists treat diseases and predict floods, and helping organizations like the United Nations track progress on the world's 17 Sustainable Development Goals.

In his presentation, Manyika focused on how generative AI can improve education, such as acting as tutors for students or assistants for teachers.

This includes a Gem called Learning Coach (a custom version of Gemini, akin to ChatGPT's custom GPTs) that provides study guidance as well as practice and memory techniques, along with LearnLM, a family of Gemini models focused on learning. They'll be accessible via Google products like Search, Android, Gemini and YouTube.

These Gems will be available in Gemini "in the coming months," Manyika said.

Editor's note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET's other hands-on reviews, are written by our human team of in-house experts. For more, see CNET's AI policy and how we test AI.