Google's Genesis AI Tool Could Write the News. It Should Be Stopped

Commentary: Generative AI isn't fit to write news stories. We should ensure it stays away.

Jackson Ryan, Former Science Editor
Sundar Pichai discusses Google's plans for AI at the company's I/O conference in May 2023. (Getty Images)

As Google CEO Sundar Pichai opened the tech giant's annual I/O developer conference in May, the phrase "Making AI helpful for everyone" was emblazoned on the crisp white screen behind him. Pichai noted this idea was the most profound way to advance the company's mission of organizing the world's information.

The world's information, for the most part, flows through the company's practically unstoppable search engine and its myriad apps and tools. As Pichai noted at I/O, 2 billion people are using six of Google's core apps. A good portion of all that information — the news — is written and delivered by tens of thousands of journalists, writers and content creators, from globe-spanning publications to small startups, from freelancers pinch-hitting across magazines to news cadets honing their craft at local papers.

Google believes it can help them by introducing AI to the mix. But AI isn't what journalists need. And the search giant's foray into newsrooms should concern readers, too.

Google is developing a tool, code-named Genesis, that "can take in information — details of current events, for example — and generate news content," according to anonymous sources cited by The New York Times. The company has approached organizations like the Times, The Washington Post and News Corp, which owns The Wall Street Journal. It's unclear whether Google is pitching the tool for use in news-gathering or is looking to collaborate on its development.

Google didn't respond to requests for further comment for this article.

A Google spokesperson told CNET on July 20 that "these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles," but the very description of the tool by the Times suggests the opposite. Publishing executives who've seen Google's pitch for Genesis described it as "unsettling," according to the Times. The tool can reportedly automate some tasks, providing "options for headlines or different writing styles," according to the spokesperson.

Just over seven months ago, I wrote that ChatGPT wouldn't be coming for journalism jobs (at least in the near future) because it simply can't do what a journalist does. OpenAI's flagship AI app is a word organizer, not a truth collector or imaginative storyteller. It can't go out and report from a crime scene or interview a doctor, a schoolteacher or anyone else. It also isn't trained on up-to-the-minute data. Though I was specifically discussing ChatGPT's abilities as a journalist, the argument could broadly be applied to large language models and generative AI at the end of 2022. Their deficiencies were too numerous and their hallucinations too common to present a real threat, I thought.

That was then. Now I'm not so sure.

Not because I think ChatGPT, large language models or generative AI have gained those capabilities and can adequately do a journalist's job — they can't. But this doesn't seem to matter. The tech giants have gone ahead and manufactured the tools anyway. They may not be designed to replace journalists, but what little information we have about their capabilities suggests they potentially could do just that.

Watching Trinity

Maybe it's because I'm exhausted by the jamming of AI-sized pegs into human-sized holes, or just because I'm fresh from watching Oppenheimer (it's definitely the latter), but the lasting consequences of developing a tool like Genesis feel bigger than anything we've seen from generative AI so far, with resounding repercussions for the people who will be most affected by the misuse or abuse of that tool: you — the reader.

The Oppenheimer analogy is hauntingly apt here. When scientists learned how to split the atom, it was immediately apparent to them that the reaction could help build a devastating atomic bomb. The first test of such a weapon, Trinity, was conducted in the New Mexico desert on July 16, 1945. The bomb detonated; it worked. Not even a month later, two atomic bombs had been dropped on the Japanese cities of Hiroshima and Nagasaki. I don't wish to diminish the horrors of the A-bomb or equate the power of generative AI with those tragedies. I only want to highlight how quickly we can move from theory to practice, with little understanding of the long-term consequences. It's alarming.

Genesis, as it's currently understood, can't generate news. It takes one element of the journalistic endeavor — writing — and makes it appear as if it's the whole damn show. It isn't. It does a disservice to journalists of all stripes to even suggest this and should concern readers who understand that important stories are more than words placed in sequence. Journalism is sourcing, verifying, fact-checking, spending hours on phones, years in documents. 

Yes, Google claims Genesis could aid a journalist, but the early description suggests the tool would function much like an aggregation tool: it can quickly piece together something resembling a news article. But if it can only remix previous reporting, wouldn't we be better served by an AI that simply presents those original reports to us, with all their sourcing, verifying and fact-checking already done? Isn't this just recreating and exacerbating the same problems we already have as humans writing stories?

This isn't a holier-than-thou tirade about journalists being the arbiters of truth, up in our standing-desk castles, infallible geniuses with knowledge of all things. We aren't and can't be. We're human. And there are already pages and pages of aggregation on the internet. Entire websites are built on it. Every small detail, aggregated for content. For instance, practically every sentence Cillian Murphy has uttered during the press tour for Oppenheimer has hit the internet in one way or another. Sites take one piece of news, remix and rehash and republish it in a fight to the death for eyeballs and Google rankings. 

It results in a homogenous slop of similar-sounding stories flooding the web, TV and social media. Twitter, uh sorry, "X" users will drop "5 key takeaways" ripped from someone else's tweet, and TikTok creators post videos they didn't produce of stories they didn't research, without sourcing where that information came from or even checking if it's true. This slop is, at least partly, why the very idea of journalism has been blurred. We equate the slop with the substance because we see so much of the slop. 

Add a generative AI to the mix and we could ramp up the slop. More worryingly, we introduce risk. Timeliness matters. Accuracy matters. Readers deserve both and want both. AI might provide the former, but what about the latter? It feels like we're standing at the fringes of the Trinity test, watching the bomb go off and imagining it'll never be used in a potentially catastrophic way.

There's no stopping the rise of AI in newsrooms. Organizations are exploring how generative AI can be used judiciously and appropriately as one part of their tool kit, in conjunction with human writers and editors. After some hard firsthand experience, CNET has established parameters for when generative AI is a reasonable tool for journalists to use and when it isn't. You can read our full AI policy right here.

That sort of experience has played out elsewhere, too. At Gizmodo's io9 a few weeks back, AI generated an error-ridden story about Star Wars films and TV shows. These episodes serve as warnings about the rush to develop AI as a provider of news.

Push back against Genesis

Google says it's exploring its AI-enabled tools "in partnership" with publishers, which means there's time for publishers to push back. If the demos are unsettling, as one anonymous executive told the Times, then perhaps we should prevent the use of Genesis. Fortunately, unlike the Trinity test, these AI experiments aren't being conducted in secret, in a remote tract of New Mexico desert.

It isn't too late. The standards can be set now: AI shouldn't be used to generate news articles based on current events. It shouldn't be given the capability to do so. 

The only reason we know about Genesis at all is because journalists, speaking to sources with knowledge of the secretive product, were able to verify its existence. They were able to prod Google and get the company on the record. They provided context, by speaking with experts, about the potential promise and pitfalls of such a tool. Eventually, they wrote something down.

Human beings broke the Genesis story. Genesis could never.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.