In this age of disinformation, where fake news thrives and the public has trust issues with technology, Google designed a machine that can deceive humans.
Gosh, what could go wrong?
Google's engineers used artificial intelligence to give birth to something called Duplex -- a digital assistant that can call a local business to make appointments on your behalf over the phone. And the receptionists on the other end of the call have no idea they are conversing back and forth with a robot.
Not only is the typical robotic voice gone, but Google took extra steps to disguise the system to sound more like a human. There are imperfections -- umms and uhhs -- and little quirks of phrasing added ("Oh, I gotcha"). And in examples provided by Google, even our casual, nuanced ways of speaking don't trip up the computer.
For a robot's banter to be indistinguishable from a human is quite a breakthrough, but it's one that comes loaded with ethical and moral questions.
In the demonstrations Google provided, the robot never identified itself as a robot. Does Google have the responsibility to disclose when its AI is posing as a human? Google said in an interview with CNET that it's looking into ways to be transparent and identify itself. (And see update, below.)
But if Google wanted to be transparent, why go through the extra steps of hiding the robot interface? Why not just make a smooth-sounding voice, rather than one that tries to act more human with "umms" and "mm-hmms"? The computer voice even talks with "upspeak" -- a typical Valley girl trait of ending statements to sound like questions.
Does a company that wants to be transparent need to trick our ears into thinking a robot is human?
It's unclear how the final product will sound when it launches in the summer. But in this demonstration, Google has shown us how far this technology can go -- and how well a machine can spoof a human, bantering back and forth on a phone, undetected.
I'm disturbed by what this technology means and why we even needed to take artificial intelligence this far. It's one thing to be able to understand the quirks of human conversation. But what good does it bring our society to have a robot pose as human?
People have a hard enough time knowing what's fake news, or which photos to trust. Now you won't be able to believe your own ears.
I'd hate to think about what happens when this technology advances beyond Google's grip. Get ready for robocallers to be less robot, and more chit-chatty.
Is Silicon Valley considering the implications of technology before building it? Instead of asking if we can make something, is anyone asking if we should?
Because if this advanced bot technology is used without transparency, we'll be the ones left asking questions every time we pick up the phone: "Hi... um, are you alive?"
And if the voice on the other end stumbles before answering, you still may not know for sure.
First published May 9, 5:31 p.m. ET.
Update, May 11 at 1:50 p.m. ET: After this editorial published, Google told CNET explicitly it will launch Duplex with "disclosure built-in."