Twitter said Monday it's going to make policy changes around how it deals with manipulated videos such as deepfakes, and it's asking the public for help.
"We think that a lot of people will have an interest in this space," said Twitter Chief Legal Officer Vijaya Gadde at the WSJ Tech Live conference in Laguna Beach, California.
Deepfakes use artificial intelligence to create videos of people doing or saying something they didn't. Social networks, including Facebook and Twitter, have been grappling with how to handle manipulated media ahead of the 2020 elections. Earlier this year, Twitter and Facebook left up an altered video of House Speaker Nancy Pelosi that made it seem like she was slurring her words, a move that drew criticism, especially from Democrats.
The US intelligence community's 2019 Worldwide Threat Assessment also noted that deepfakes could be used to meddle in elections both in the US and in allied nations.
Gadde didn't say when Twitter will roll out these new policy changes around manipulated videos. The company is looking at what to do once deepfakes are detected, including whether to label the videos or take them down.
Twitter said in a tweet that it will start gathering public feedback in the coming weeks.