Samsung's Neon AI has an ethics problem, and it's as old as sci-fi canon
Commentary: Do Neons dream of electric sheep?
For decades, ethicists, philosophers and science fiction writers have wrestled with what seems increasingly like an inevitability in the evolution of humankind's technological discovery: the creation of a new species of artificial humanity. Enter stage right: Neon, the eerily realistic interactive CGI avatar. It's the literal brainchild of Samsung-funded Star Labs' Pranav Mistry, who also serves as CEO of the company he says is building "the first computerized artificial human." What better place for such a species' debutante ball than the Las Vegas consumer electronics frenzy, CES?
"Neon is like a new kind of life," Mistry said this week at CES. "There are millions of species on our planet, and we hope to add one more."
That's big talk. And it's hard to see, just now, whether Neon will live up to the terrifying promises of its creator, or whether it will ultimately be proven to be a glorified chatbot with a bit more nuance than the notoriously creepy AI news anchor revealed in 2018.
But the big talk is why we're here. Whether Mistry's ambitious language reflects the realistic functionality of Neon matters less to me than the ethics of creating a sentient life form on a planet where billions of animals are currently burning to death in searing contortions thanks to climate change and wildfires.
Two years ago, Samsung said it would hire 1,000 AI specialists and spend $22 billion on AI by 2020. One wonders whether that budget included a line item for AI ethicists at Star Labs (we asked, but Samsung hadn't responded at the time of publication). In the blizzard of AI-product press releases, it's hard to discern whether a sincere conversation on AI ethics is being had at all by the heads of the world's biggest tech companies, or if those conversations are solely relegated to the ethical pushback of outside organizations, which seemed to surge in the last year.
Mistry is quick to point out that Neon isn't making technology for Samsung devices and that it operates as its own company. But Samsung is still its backer.
The electric dreams of buzzy technologists, flush with West Coast capital, rarely seem to include sober descriptions of potentially world-changing technology. Instead, companies favor token nods to ethics and self-aggrandizing language, which writers like me too often parrot reflexively in our speedy reporting.
Samsung, for its part, has said privacy, security and ethics are important when it comes to AI.
"We should really worry about ethics," Samsung Chief Strategy Officer Young Sohn said in a November 2018 interview. "What is right? What is wrong? ... And the research? Great. But research for purpose, not for using that data to take advantage of all human beings out there."
We haven't heard much talk from "innovators" about reducing economic inequality by distributing the surplus labor value created by artificial intelligence. Rather, profits grow in the bank accounts of billionaires while economic class disparity increases at a startling rate amid unaddressed AI-related labor displacement anxieties.
Nor have we heard much from technology "leaders" about the use of artificial intelligence to reduce global human rights violations. Rather, we have a crop of technology companies that are using AI to help the US military kill people, to create facial recognition systems used for government surveillance, and to help addiction-by-design social websites spread micro-targeted political propaganda. The list goes on.
If a company -- any company -- succeeds in creating artificial humans, why should we believe dignity will be intrinsic to the design of a new species? While Neons don't currently have a physical embodiment, the use of the word "currently" in the company-distributed FAQ gives pause. Will it take the ever-more-likely creation of an artificial physical body to prompt designers' ethical accountability for the manufacture of an artificial human?
Inspired by dystopia
In an interview with CNET's Shara Tibken on Tuesday, Mistry said Neons don't yet have a physical form, but they could one day take advantage of holographic technology.
In a December interview with LiveMint, Mistry cemented his work's connection to the precedents of science fiction.
"In Blade Runner 2049, Officer K develops a relationship with his AI hologram companion, Joi," he said. "While films may disrupt our sense of reality, 'virtual humans' or 'digital humans' will be reality. A digital human could extend its role to become a part of our everyday lives: a virtual news anchor, virtual receptionist, or even an AI-generated film star."
You've got to wonder what would compel a person to base their hopes for AI on a cyber-dystopian cautionary tale like Blade Runner. Philip K. Dick's 1968 novel (and the Ridley Scott movies that followed it) imagines the torture and subsequent rebellion of a slave species of AI nearly indistinguishable from the homegrown humans who created them. It's hard to imagine any reading of the material so shallow as to mistake it for encouragement.
But maybe it's an apt comparison, intentionally or not. In a world depicted as being both glamorously futuristic for some and nightmarishly decayed for the rest, the antagonist of the story is a corporation that creates sentient life recklessly. That corporation's slogan: "more human than human."
Meanwhile, Mistry said in a press release that "Neons will be integrated into our world and will serve as new links for a better future, a world where 'humans are human' and 'machines are humane.'"
I hope it's all big talk. The weight of creation is terrible. To create a species is to morally chain yourself to its well-being and free will, its evolutionary freedom and rights of existence.
What does Mistry -- or any of his funders, or any of us -- have to say of the rights of such a human-like species? With only the sound of applause disrupting the silence of these concerns, what kind of horrible gods would we be?