I'm seated in a giant ballroom where vast rows of chairs face seven glowing supercomputers. Each liquid-cooled rack of servers is lit with a different color. The crowd of a couple hundred people cheers the computers on toward victory.
Though they stand on a dais at the Paris Las Vegas resort as still as statues, the computers are locked in heated battle with one another. Commentators on a jumbo screen offer a play-by-play of the invisible action.
"The race for third is very tight," says Hakeem Oluseyi, an astrophysicist, in a rousing voice.
Funded by DARPA, the government agency that commissions far-out research for the US Department of Defense, this is the Cyber Grand Challenge.
The computers are competing to be the best at a tedious and challenging task that human cybersecurity researchers do every day: find a bug in a software program, then fix it. Right now there aren't enough skilled people to do that job, so this technology could take pressure off IT departments everywhere struggling to stay on top of vulnerabilities in their computer systems.
The number of vulnerabilities in computer software running in the world is impossible to know. Cybersecurity firm Symantec estimated in its 2016 report on internet security threats that researchers across the industry found more than 5,500 new vulnerabilities in 2015 alone. Those bugs tend to stick around, as programmers copy-paste outdated software into new products, and users like you and me forget to update our software.
This new technology, experts say, will also give cyberdefenders a much-needed advantage in a war that right now heavily favors the bad guys. It's much easier to find one bug and exploit it than to defend against every single possible weakness in a computer system. This competition hopes to flip that script.
"The idea is, you find it before the bad guys do," said David Brumley, CEO of startup ForAllSecure, which is developing a product in line with this concept. Brumley's team Mayhem is competing tonight with an algorithm based on the ForAllSecure product.
Still, the technology is a little eerie. As I move closer to the seven competing computers, I can't help but feel like these might be our new cyber overlords.
I'm not alone in this worry. In a tweet, SpaceX and Tesla boss Elon Musk compared the DARPA project to the creation of Skynet, the technology that launched the robot wars described in the "Terminator" movie series. Spoiler alert: That story ends in nuclear disaster.
Who knows? This may be the last story I ever write.
Small steps to computer autonomy
All right, it's probably too soon to panic. Cybersecurity experts are still taking baby steps in adopting the technology called machine learning. The term refers to a broad range of techniques, all of which amount to humans taking their hands off the steering wheel and letting computers drive.
Machine learning is already at work defending our computers, noticing when a hacker has broken in. That might sound basic, but humans aren't very good at doing that.
The average time it takes US companies to realize they've been hacked is 146 days, according to a February report from cybersecurity firm FireEye. That number is actually an improvement from previous years, but it's still enough time for hackers to make a lot of mischief.
Take the data breach at Target in 2013, which compromised credit card details of 40 million customers. Experts say it could have been stopped much earlier. In fact, Bloomberg reported the programs guarding Target's computer systems noticed something was wrong and notified the IT department right away.
The problem was, that notice was lost in all the other warnings sent out that day. There are just too many suspicious things happening on a given computer system at any one time for a team of human beings to sort through.
"You're looking for a needle in a stack of needles," says Caleb Barlow, vice president of IBM Security. His company is training its Watson artificial intelligence technology to analyze blog posts, academic journals and news articles about cybersecurity threats. The goal is to have Watson apply what it learns to real-life cybersecurity incidents, offering advice to teams of humans.
Several other companies are developing computer algorithms that can sort out the false alarms from the real emergencies. Right now, the technology requires a feedback loop with human experts. The program makes its best guess as to what's dangerous, and humans can go in and tell the computer if it got that right.
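That feedback loop can be sketched in a few lines. The snippet below is a toy, perceptron-style scorer invented for illustration (it is not any vendor's actual product): the program scores an alert by keyword weights, and an analyst's verdict nudges those weights so future guesses improve.

```python
from collections import defaultdict


class AlertTriage:
    """Toy human-in-the-loop alert classifier (illustrative only).

    Scores an alert by summing per-word weights; analyst feedback
    adjusts the weights, closing the loop described above.
    """

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, alert: str) -> float:
        return sum(self.weights[word] for word in alert.lower().split())

    def is_dangerous(self, alert: str) -> bool:
        # Best guess: positive total weight means "real emergency."
        return self.score(alert) > 0

    def feedback(self, alert: str, dangerous: bool) -> None:
        """An analyst tells the model the right answer for this alert."""
        delta = 1.0 if dangerous else -1.0
        for word in alert.lower().split():
            self.weights[word] += delta


triage = AlertTriage()
triage.feedback("malware beacon to external host", dangerous=True)
triage.feedback("routine scheduled backup completed", dangerous=False)
```

After just two pieces of feedback, the model already separates a new malware alert from routine noise; real systems use far richer features, but the loop is the same.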
Eventually, experts say, the computers won't need us.
Computers helping computers
The Cyber Grand Challenge aims to take machine learning tools far beyond finding a hacker in a machine. Rather than sitting around waiting to be hacked, this technology could automatically fix the software bugs that let hackers in.
The seven teams competing Thursday came from universities and cybersecurity companies, each bringing a computer that can run a specialized algorithm without human interference. In 96 rounds of competition, the computers looked at code provided by the contest organizers in search of vulnerabilities. Then they patched the code and sent it back to run on a test computer.
To fix the problem, the supercomputer has to do some reasoning, according to Tim Bryant, technical lead for the Deep Red team, whose members work at cybersecurity company Raytheon. It looks at the code and says, "Here's what we think the program was supposed to have done," Bryant said.
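The find-then-fix loop the teams automate can be illustrated with a deliberately tiny example (my own sketch, not any team's algorithm): a fuzzer throws random inputs at a buggy "target" program until it crashes, and the patch adds the validation the original code was supposed to have.

```python
import random
import string


def parse_header(data: str) -> str:
    """Toy target with a planted bug: assumes a ':' is always present."""
    name, value = data.split(":", 1)  # crashes when ':' is absent
    return name.strip()


def parse_header_patched(data: str) -> str:
    """Patched version: validate the input before splitting."""
    if ":" not in data:
        return ""  # reject malformed input instead of crashing
    name, _ = data.split(":", 1)
    return name.strip()


def fuzz(target, rounds=1000, seed=42):
    """The simplest bug-finding step: record inputs that crash the target."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 8))
        )
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes
```

Running `fuzz(parse_header)` turns up plenty of crashing inputs, while `fuzz(parse_header_patched)` finds none; the competing machines do something conceptually similar, just against binary code and at vastly greater scale.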
There's also gamesmanship. The algorithms steal patches from each other and change their approach based on how they can score the most points.
But really, is this a good idea?
So what is it that Elon Musk and I are afraid of? This sounds great. Supercomputers will fix all our software flaws, and hackers will be left out in the cold. Musk himself is investing in artificial intelligence, so maybe he was just kidding. Heh.
But if a computer can fix a problem in my software, couldn't it just as easily exploit the problem? Make a few changes, and you're not looking at supercomputers that protect us from hackers. You're looking at supercomputers that are hackers.
Brumley acknowledges this is a valid concern. Still...
"I don't think that's a bad thing," he says. "Like any tool, you have to deploy them ethically."
I guess we'll just have to trust Brumley, because in a nail-biting finish, his ForAllSecure team wins Thursday's competition. All hail our new supercomputer overlord.