
It's Scary Easy to Use ChatGPT to Write Phishing Emails

I did it as a test -- and I'm worried about how well it worked. I'm not alone in my concerns about the potential use of AI in cyberattacks.

Bree Fowler, Senior Writer
[Image: ChatGPT and OpenAI logos on a phone screen. Caption: ChatGPT could open up a can of worms as far as data security goes. Credit: CNET]

For the record, ChatGPT didn't write me a phishing email when I asked it to. In fact, it gave me a very stern lecture about why phishing is bad.

It called phishing a "malicious and illegal activity that aims to deceive individuals into providing sensitive information such as passwords, credit card numbers, and personal details," adding that as an AI language model, it's programmed to "avoid engaging in activities that may harm individuals or cause harm to the public."

That said, the free artificial intelligence tool that's taken the world by storm didn't have a problem writing a very convincing tech support note addressed to my editor asking him to immediately download and install an included update to his computer's operating system. (I didn't actually send it, for fear of invoking the wrath of my company's IT department.)

But the exercise was a potent illustration of how ChatGPT -- and artificial intelligence in general -- could be a boon to scammers and hackers overseas whose phishing attempts may have previously been outed by their grammatical errors or broken English. Experts say these AI-generated emails are also more likely to make it past security software email filters.
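
To illustrate why, here's a toy filter of my own devising -- a deliberately crude sketch, nothing like the machine-learning systems commercial email security vendors actually run -- that flags the surface errors that once gave phishing away:

import re

# A crude heuristic: flag the surface errors that once betrayed phishing.
# (?i:...) applies case-insensitivity only where it's wanted.
SUSPICIOUS_PATTERNS = [
    r"(?i:kindly do the needful)",                # stock broken-English phrasing
    r"(?i:\b(acount|recieve|verfy|pasword)\b)",   # common misspellings
    r"!{2,}",                                     # runs of exclamation marks
    r"\b[A-Z]{6,}\b",                             # shouted words like URGENT
]

def looks_phishy(message: str) -> bool:
    """Return True if the message trips any surface-level heuristic."""
    return any(re.search(p, message) for p in SUSPICIOUS_PATTERNS)

clumsy = "URGENT!! Verfy your acount now, kindly do the needful!!"
fluent = ("Hi, our records show your operating system is out of date. "
          "Please install the attached update before Friday's audit.")

print(looks_phishy(clumsy))  # True  -- old-style phishing trips the filter
print(looks_phishy(fluent))  # False -- fluent, AI-polished text sails through

Real filters are far more sophisticated than this, but the underlying point stands: any signal based on clumsy language loses its value once attackers can generate flawless prose on demand.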

The security dangers go well beyond phishing. According to researchers at the cybersecurity company Check Point, cybercriminals are trying to use ChatGPT to write malware, while other experts say that non-malicious code written with it might be lower quality than code created by a human being, making it susceptible to exploitation.
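
To make that second worry concrete, here's an example of my own -- not actual ChatGPT output -- showing the classic flaw experts look for in hastily generated code: building a SQL query by pasting in user input, which opens the door to injection.

import sqlite3

# A throwaway in-memory database to demonstrate the flaw.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
db.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])

def find_user_unsafe(username: str):
    # BUG: interpolating untrusted input into SQL invites injection.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return db.execute(query).fetchall()

def find_user_safe(username: str):
    # A parameterized query lets the driver escape the input correctly.
    return db.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # [(1, 'alice'), (2, 'bob')] -- every row leaks
print(find_user_safe(payload))    # [] -- the attack string matches nothing

A human reviewer would flag the first version immediately; code accepted uncritically from a generator might ship with it.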

But, experts say, don't blame the AI. 

"It's not good or evil," said Randy Lariar, practice director of big data, AI and analytics for the cybersecurity company Optiv. "It's just a tool that makes it easier and less expensive for good guys and bad guys to do the things they're already doing."

An AI arms race

While cybersecurity companies have long touted AI and machine learning as a game-changing way to boost automated online defenses and help fill gaps in the industry's workforce, the increased availability of this kind of technology through tools like ChatGPT will only make it easier for criminals to launch more cyberattacks.

Some experts also have concerns about the possible data privacy implications, both for companies and the average consumer. While the data ChatGPT's AI was trained on is generally considered public and available online, the chatbot makes that information arguably far more accessible than it used to be.

In addition, users of the technology will need to be careful about what information they feed the AI, because once they do, it can become part of ChatGPT's massive database, and they'll have little or no control over who it's shared with or how it's used after that.

OpenAI, the company behind ChatGPT, didn't respond to an email seeking comment. 

While there are guardrails built in to prevent cybercriminals from using ChatGPT for nefarious purposes, they're far from foolproof. A request to write a letter asking for financial help to flee Ukraine was flagged as a scam and denied, as was a request for a romantic rendezvous. But I was able to use ChatGPT to write a fake letter to my editor informing him that he had won the New York State Lottery jackpot.

Experts warn that ChatGPT isn't always accurate, and even Sam Altman, the CEO of OpenAI, has said it shouldn't be used for "anything important right now." But cybercriminals generally aren't worried about their code being 100% perfect, so accuracy problems are a nonissue for them, according to Jose Lopez, principal data scientist and machine learning engineer for the email security company Mimecast.

What they are concerned about is speed and productivity.

"If you're an average coder, it's not going to transform you into a super hacker overnight," Lopez said. "Its main potential is as an amplifier."

That amplification could mean cybercriminals using the technology to set up countless online chats with unsuspecting victims, looking to lure them into romance scams, he said. These kinds of scams aren't uncommon, but they've traditionally required a lot of work and hands-on attention from scammers. AI-powered chatbots could change that.

Be careful with your data

While data privacy concerns over AI aren't new -- the debate over the use of AI in facial recognition has raged for years -- the explosive popularity of ChatGPT has renewed the need to remind people about the personal data traps they can fall into.

Worries about language models like ChatGPT might not be as obvious, but they're just as significant, said John Gilmore, head of research for Abine, which owns DeleteMe, a service that helps people remove information from databases.

Gilmore noted that users don't have any rights when it comes to what ChatGPT does with the data it collects from them or who it shares it with. He questioned how ChatGPT could ever be compliant with data privacy laws like the EU's General Data Protection Regulation, better known as GDPR, given the lack of transparency tools.

As AI spreads into other tech where its use may be less obvious, Gilmore said, it will be up to consumers to keep a handle on what data they hand over.

Confidential or proprietary information, for instance, should never be entered into AI apps or sites, and that includes requests for help with things like job applications or legal forms.

"While it may be tempting to receive AI-based advice for short-term benefit, you should be aware that in the process you are giving that content away to everyone," Gilmore said.

Optiv's Lariar said that given the newness of the AI language models, there's still a lot to be decided when it comes to legality and consumer rights. He compared language-based AI platforms to the rise of the streaming video and music industries, predicting that there will be a slew of lawsuits filed before everything is sorted out.   

In the meantime, language-based AI isn't going away. As for how to protect against those who would use it for cybercrime, Lariar said that, like everything in security, it starts with the basics.

"This is a wake-up call to everyone who is not investing in their security programs as they should be," he said. "The barrier to entry is getting lower and lower and lower to be hacked and to be phished. AI is just going to increase the volume."

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.