Bots frequently outsmart us. They're able to break through barriers like CAPTCHA and "I am not a robot" verification tests to access sensitive data. Telling them apart from humans will likely get harder as they're trained using advanced technologies like machine learning and neural networks.
But there may be a secret weapon to effectively set humans and bots apart: memes.
That's right. An image with a surprised-looking cat and some snarky text may be good for more than just laughs and emotional comfort. It may also help keep bots from signing into our personal accounts.
Researchers from the University of Delaware published a study online last month suggesting memes can be effectively used to tell humans and bots apart. They propose memes could be "one of the strongest techniques to distinguish between a human and a bot based on conscience and interpretation." After all, bots don't get cultural references and online humor the way humans do, the authors argue. The study was published in a volume of the Advances in Intelligent Systems and Computing book series.
"Everybody is on social media these days, and they're all acquainted with memes," said Ishaani Priyadarshini, a Ph.D. candidate in electrical and computer engineering at the University of Delaware and an author of the study. "Humans can understand [memes] at a better depth than a bot."
It's become increasingly important to ensure the effectiveness of methods that block bots from accessing personal and sensitive data. More than half of web traffic is made up of bots, according to the study, and they can wreak havoc by swaying elections or sabotaging online shopping.
Here's how using memes to block bots might work. After entering the correct username and password, users are directed to a page displaying a meme and asked to interpret its meaning from a set of options. For example, an image of a little girl running furiously with the text "Coffee is ready" could show the following four options:
Option 1: The child is running away from coffee
Option 2: The child is scared of coffee
Option 3: The child is holding coffee
Option 4: The child wants coffee
Humans will likely choose Option 4 in this scenario, but a bot trained to recognize images and expressions might not be able to do the same, the study suggests. A bot might gather that an anxious-looking child is running and holding something, but it probably won't be able to pick up underlying context and meaning.
Users are granted access if they choose the correct option. The meme and options used in a specific authentication process are then deleted from the database to ensure people won't know the correct answer beforehand. Thankfully, there's a nearly endless supply of memes online, and the database could reuse a meme template with varying texts, fonts and colors and still effectively stump a bot.
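The flow described above — serve a random meme challenge, check the chosen interpretation, and retire the meme so its answer can't be learned in advance — can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the meme database, image names, and function names are placeholders, not part of the researchers' actual system.

```python
import secrets

# Hypothetical in-memory "meme database". Each challenge pairs an image
# reference with interpretation options and the index of the correct one.
# All filenames, captions, and options are illustrative placeholders.
MEME_DB = [
    {
        "image": "running_child_coffee.png",
        "options": [
            "The child is running away from coffee",
            "The child is scared of coffee",
            "The child is holding coffee",
            "The child wants coffee",
        ],
        "answer": 3,  # index of the human-obvious interpretation
    },
    # ...a real pool would hold many meme/option variations...
]


def issue_challenge(pool):
    """Pick a random challenge and remove it from the pool, so the same
    meme/option pairing is never served twice (mirroring the study's
    delete-after-use step)."""
    index = secrets.randbelow(len(pool))
    return pool.pop(index)


def verify(challenge, chosen_index):
    """Grant access only if the user picked the expected interpretation."""
    return chosen_index == challenge["answer"]
```

In practice the pool would be replenished by generating new variations of a meme template (different text, fonts, colors), which is what the authors suggest keeps the supply effectively endless.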
"This may be by far the most robust authentication system to tell a human and a bot apart, thereby making it almost impossible to break due to the unending number of memes and modifications that may be applied," the study says.
Given that bots are becoming more sophisticated, will they one day be smart enough to understand memes, too?
"A thousand situations may be created out of a single meme [template], and it might be difficult for a bot to interpret all of them in a different manner by just looking at it, or just by inferring some of the text that's there on the meme," Priyadarshini said. "That's the hope."
In the meantime, this may be our chance to turn an internet phenomenon to our defense before pesky bots get in on the joke. Time will tell who gets the last laugh.