Visa Uses Fraudsters' AI Tools to Beat Them at Their Own Game

A new tool assesses the likelihood of fraudulent transactions in real time. What does it mean for you?

Lisa Lacy Lead AI Writer
A hacker attempts to steal financial information. (Seksan Mongkhonkhamsao/Getty Images)

About 42 million US adults have been impacted by identity theft, according to a 2023 survey from analytics firm Gallup. Perhaps as a result, identity fraud ranked as respondents' top ongoing concern — more so even than violent crime.

It's a problem that's getting worse, thanks in part to increasingly powerful AI tools that are easily accessible to cybercriminals. The good news is that these tools are also available to financial institutions and cybersecurity firms.


One such institution, Visa, is now using generative AI to better determine the likelihood of one of the most common types of fraud — and hopefully prevent it, to reduce not only financial loss but also the headaches that go with it.

The tech Visa is using is based on the same type of technology we use via AI chatbots to boost our resumes, generate lifelike images and create wild poetry. But instead of ingesting content like books, articles and social media posts to understand how words are used, this anti-fraud tool has been trained on financial transactions.

How likely are fraudulent transactions? 

An enumeration attack, a type of brute-force attack, occurs when a bad actor uses automated scripts and botnets to submit hundreds of thousands of card-not-present transactions, which include web-based payments like online orders, where no one has to physically hand over a card to the merchant.

The miscreant's goal is to enter a massive number of possible combinations of account numbers, expiration dates and three-digit security codes to see what sticks. An approval signifies legitimate account details, which hackers can then sell on the dark web or use to make unauthorized purchases.
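To see why these attacks arrive as floods of transactions, it helps to count the combinations a script must try. The sketch below is illustrative only, not Visa's methodology; the function name and parameters are hypothetical.

```python
# Illustrative only: the rough size of the guess space an enumeration
# attack probes for a single account number (names are hypothetical).

def enumeration_search_space(expiry_years: int = 5, cvv_digits: int = 3) -> int:
    """Count the expiry-date/security-code combinations a script might try.

    Assumes the attacker already has (or is separately guessing) the
    account number and cycles through every plausible expiration month
    and every possible security code.
    """
    expiry_months = 12 * expiry_years  # any month within the next 5 years
    security_codes = 10 ** cvv_digits  # 000-999 for a three-digit code
    return expiry_months * security_codes

# 60 expiry candidates x 1,000 codes = 60,000 guesses per account number.
print(enumeration_search_space())  # 60000
```

Multiply that by thousands of candidate account numbers and the "hundreds of thousands" of automated transactions Visa describes follow naturally.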

Enumeration attacks have resulted in more than $1 billion in losses in the last year on Visa's network alone, making them one of the most prevalent types of fraud, according to Michael Jabbara, senior vice president and global head of fraud services at Visa.

How can generative AI help?

The Visa Account Attack Intelligence, or VAAI, tool applies deep learning technology to card-not-present transactions to help identify the financial institutions and merchants fraudsters are targeting. It's been available since 2019.

Now Visa is adding what it calls the VAAI Score to better determine the likelihood of enumeration attacks by assigning each transaction a risk score in real time. This score will help issuers make better decisions when it comes to blocking transactions, Paul Fabara, chief risk and client services officer at Visa, said in a statement. That means cardholders making legitimate purchases don't have to worry about the card issuer declining their transactions as a preventative measure.

The VAAI Score will be available to US issuers in August.

The VAAI Score has been trained on more than 15 billion Visa transactions to learn normal and abnormal transaction patterns and then evaluate each card-not-present transaction against previous spending patterns to determine the risk score, according to a press release.
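At its simplest, scoring a transaction against past spending means measuring how far it deviates from what's normal for that cardholder. The toy sketch below uses a basic statistical deviation check; Visa's actual model is a deep learning system trained on billions of transactions and far richer features, so treat this as an assumption-laden analogy, with all names invented.

```python
# A minimal sketch of real-time risk scoring against past spending,
# using a simple statistical deviation (NOT Visa's actual VAAI Score).
from statistics import mean, stdev

def risk_score(amount: float, history: list[float]) -> float:
    """Return a 0-1 risk score for a card-not-present charge: how far
    the amount deviates from the cardholder's historical purchases,
    expressed as a z-score squashed into [0, 1]."""
    mu, sigma = mean(history), stdev(history)
    z = abs(amount - mu) / sigma if sigma else 0.0
    return min(z / 4.0, 1.0)  # cap: 4+ standard deviations -> maximum risk

history = [25.0, 30.0, 27.5, 32.0, 29.0]    # typical purchase amounts
print(round(risk_score(28.0, history), 2))   # in-pattern charge -> low score
print(round(risk_score(950.0, history), 2))  # wild outlier -> 1.0
```

An issuer could then set a threshold on the score: approve below it, step up verification or decline above it, which is the decision the real VAAI Score is meant to inform.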

Fraudsters don't actually want to buy anything when making an enumeration attack on, say, an online flower shop — they just want to validate credentials, according to Jabbara. But it's still challenging for card issuers to differentiate between these fraudulent transactions and legitimate activity.

"We've been able to bring in a generative model that allows us to generate new data that resembles the attack data," he said. That helps the model get even better at identifying potentially fraudulent transactions.
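The idea of generating data that resembles attack data can be sketched in miniature: produce synthetic records with the telltale shape of an enumeration burst (machine-speed pacing, tiny amounts, overwhelmingly declined) to augment a training set. This is a toy illustration of the concept, not Visa's generative model; every field name here is made up.

```python
# A toy take on synthesizing attack-like training data: records that
# mimic an enumeration burst (rapid, low-value, mostly declined
# card-not-present attempts). Field names are hypothetical.
import random

def synthetic_enumeration_burst(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    burst, t = [], 0
    for _ in range(n):
        t += rng.randint(40, 120)  # milliseconds apart: machine-speed pacing
        burst.append({
            "t_offset_ms": t,
            "amount": round(rng.uniform(0.5, 2.0), 2),  # tiny test charges
            "card_present": False,                      # always card-not-present
            "approved": rng.random() < 0.02,            # nearly everything declines
        })
    return burst

burst = synthetic_enumeration_burst(1000)
decline_rate = sum(not tx["approved"] for tx in burst) / len(burst)
print(f"{decline_rate:.0%} declined")  # overwhelmingly declines, like a real attack
```

Training on synthetic bursts like these, alongside real attack traffic, is one way a detection model can see far more attack examples than the live network alone provides.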

Now the VAAI Score can help detect an attack at the individual level, which means Visa can block the bad actor rather than blocking multiple transactions from the same retailer.

"Then those legitimate orders can continue to flow through," Jabbara said. "So it's kind of a radical enhancement in terms of detection and mitigation without customer impact."

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.