AI model creates fake UN speeches that are scarily real

It only takes a few words for the model to generate hateful rhetoric.

Dhara Singh CNET News Intern
Dhara Singh is one of CNET's summer interns and a student at the Columbia Graduate School of Journalism. She loves digging deep into the social issues that arise from everyday technology. Aside from writing, you can catch her discussing Game of Thrones or on a random New York City adventure with her DSLR.

AI generated speeches add to the concerns over fake news.

James Martin/CNET

It only takes half a day for an AI model to teach itself how to write fake UN speeches, according to a research study published this week. 

The open-source language model, which was trained using Wikipedia text and transcripts of over 7,000 speeches given at the UN General Assembly, was able to easily mimic speeches in the tone of political leaders, according to UN researchers Joseph Bullock and Miguel Luengo-Oroz.  

The researchers said they only had to feed the model a couple of words for it to produce coherent, "high-quality" generated texts. For example, when the researchers fed the model, "The Secretary-General strongly condemns the deadly terrorist attacks that took place in Mogadishu," the model generated a speech showing support for the UN's decision. The researchers said the AI text was nearly indistinguishable from human-made text.

But not all the results are worth clapping over. A change of a few words can mean the difference between a diplomatic speech and a hateful diatribe. 

The researchers highlighted that language models can be used for malicious purposes. For example, when the researchers fed the model an inflammatory phrase such as, "Immigrants are to blame," it generated a discriminatory speech that alleged immigrants are to blame for the spread of HIV/AIDS.

In an era of political deepfakes, the study adds to concerns about fake news. The accessibility of data makes it easier for more people to use AI to generate fake text, the researchers said. It only took them 13 hours and $7.80 to train the model. 

"Monitoring and responding to automated hate speech -- which can be disseminated at a large scale, and often indistinguishable from human speech -- is becoming increasingly challenging and will require new types of counter measures and strategies at both the technical and regulatory level," the researchers said in the study.

Some AI research groups, such as Elon Musk-backed nonprofit OpenAI, have refrained from releasing advanced text-generation models for fear of malicious use. 

The United Nations, in an effort to curtail the negative effects of AI, has launched an action plan to scale up its response to hate speech, including its digital dimension, according to researcher Miguel Luengo-Oroz. 

"We, at the UN, are working hard to make sure that AI is used ethically to ensure that we leave no one behind," said Luengo-Oroz. 

Update, June 10: Includes comments from researchers.