The open-source language model, which was trained using Wikipedia text and transcripts of over 7,000 speeches given at the UN General Assembly, was able to easily mimic speeches in the tone of political leaders, according to UN researchers Joseph Bullock and Miguel Luengo-Oroz.
The researchers said they only had to feed the model a couple of words for it to produce coherent, "high-quality" generated texts. For example, when the researchers fed the model the phrase, "The Secretary-General strongly condemns the deadly terrorist attacks that took place in Mogadishu," it generated a speech showing support for the UN's decision. The researchers said the AI text was nearly indistinguishable from human-made text.
But not all the results are worth clapping over. A change of a few words can mean the difference between a diplomatic speech and a hateful diatribe.
The researchers highlighted that language models can be used for malicious purposes. For example, when the researchers fed the model an inflammatory phrase such as, "Immigrants are to blame," it generated a discriminatory speech that alleged immigrants are to blame for the spread of HIV/AIDS.
In an era of political disinformation, the study adds to concerns about fake news. The accessibility of data makes it easier for more people to use AI to generate fake text, the researchers said. It took them only 13 hours and $7.80 to train the model.
"Monitoring and responding to automated hate speech -- which can be disseminated at a large scale, and often indistinguishable from human speech -- is becoming increasingly challenging and will require new types of counter measures and strategies at both the technical and regulatory level," the researchers said in the study.
Some AI research groups, such as Elon Musk-backed nonprofit OpenAI, have withheld their most powerful language models for fear of malicious use.
In an effort to curtail the negative effects of AI, the United Nations has launched an action plan to scale up its response to hate speech, including its digital dimension, according to researcher Miguel Luengo-Oroz.
"We, at the UN, are working hard to make sure that AI is used ethnically to ensure that we leave no one behind," said Luengo-Oroz.
Update, June 10: Includes comments from researchers.