Deepfakes Pose a Growing Danger, New Research Says

Bree Fowler Senior Writer
Deepfakes are here. (Illustration: Getty)

What's happening

A new report from VMware shows that cybersecurity professionals are seeing more deepfakes being used in cyberattacks.

Why it matters

Deepfakes use artificial intelligence to manipulate video and audio, making it seem like someone is saying or doing something they're not.

Deepfakes are increasingly being used in cyberattacks, a new report said, as the threat of the technology moves from hypothetical harms to real ones.

Reports of attacks using the face- and voice-altering technology jumped 13% last year, according to VMware's annual Global Incident Response Threat Report, which was released Monday. In addition, 66% of the cybersecurity professionals surveyed for this year's report said they had spotted at least one such attack in the past year.

"Deepfakes in cyberattacks aren't coming," Rick McElroy, principal cybersecurity strategist at VMware, said in a statement. "They're already here."    

Deepfakes use artificial intelligence to make it look as if a person is doing or saying things he or she actually isn't. The technology entered the mainstream in 2019, sparking fears it could convincingly re-create other people's faces and voices. Victims could see their likeness used for artificially created pornography and the technique could be used to sow political upheaval, experts warned.

While early deepfakes were largely easy to spot, the technology has since evolved and become much more convincing. In March, a video posted to social media appeared to show Ukrainian President Volodymyr Zelenskyy directing his soldiers to surrender to Russian forces. It was quickly denounced by Zelenskyy but showed the potential for harm posed by deepfakes.

Recently, the FBI warned that fraudsters have started using deepfakes to interview for remote or work-from-home jobs in information technology, programming and other software-related roles. The scammers also tried to pass along personally identifiable information stolen from someone else in order to pass background checks, according to the FBI's public service announcement.

According to the VMware study, which polled 125 cybersecurity and incident response professionals, email was the top delivery method for last year's deepfake attacks, accounting for 78% of them. That ties in with the continued rise in business-email compromise attempts, where an attacker will pretend to be someone they're not in hopes of getting their target to hand over company information or pay a fake invoice.

The report also found that 60% of those polled had seen an overall increase in cyberattacks since the start of the Russia-Ukraine war. Ransomware attacks show no sign of letting up, with more than half of those surveyed reporting they'd experienced a ransomware attack in the past 12 months.