A Neon avatar spoke to me and answered my questions at CES. When it was done, I had to dust myself off after my trip through the uncanny valley, but I was still impressed by what this ambitious tech could become. The lifelike avatar looks like a human, but the demo was wooden and robotic.
Neon was created by Samsung's Star Labs -- a research lab functioning as an independent company under Samsung. Star Labs CEO Pranav Mistry spoke at the first public demo about the company's ambition to create truly lifelike AI. The technology underlying Neon is the Core R3 engine, which stands for reality, real time and responsive. Neon was first announced on Monday night at CES, and we learned more at the public demo, as well as earlier on Tuesday.
In short, Neon isn't meant to replace digital assistants like Amazon's Alexa or Samsung's Bixby. At this point, we know more about what it isn't than what it is. Mistry says Neon isn't intended to replace humans or act as a search engine. Instead, he intends for his avatars to learn and respond to voice interaction the way an actual human would. What it could eventually become, and why it exists, is still unclear. Star Labs mentioned possibilities like using it as a yoga instructor, a hotel concierge or a banker.
At the demo, Mistry clarified that they're not showing it because they're ready to sell it, but they are excited about it and they want to get others excited. Neon is an open platform, and Mistry invited the crowd to contribute to it.
Samsung has shared a few clips of Neon in action, and at the booth you can see the avatars moving around; they all appear convincingly human. The actual live demo was far less polished, however, which makes it clear the avatars shown on a loop at the booth are preanimated.
During the demo, a representative used a tablet to command a Neon avatar. The avatar made different expressions, told a story, raised and lowered her eyebrows, and more. It was compelling in action, but the avatar stood very still and performed only the single command it was given. It didn't look natural. During the question-and-answer phase, Neon's responses felt canned and were delivered rigidly.
Neon was able to switch between languages on the fly, and the voice sounded human, although it also felt oddly disconnected from the body. I initially suspected even the question-and-answer portion was prescripted: I stuck around for multiple demos, and each time the presenter asked the exact same questions.
I was pleasantly surprised, then, when I got to ask Neon my own questions and it responded well. Pizza was its favorite food. It doesn't like football. It simply calls itself Neon. When I asked it why it doesn't like football, it paused for a few seconds and froze completely (as you do when you're thinking), then told me that was a real brain burner. When I asked it what it wanted in life, it said it wasn't interested in the topic. Not every answer was connected to the actual question, and it couldn't build on the context of the previous question in the football example. In this regard, it's a significant step behind smart assistants like Alexa and Google Assistant, both of which can respond to a huge variety of questions instantaneously.
Neon feels ambitious, but it obviously still needs a lot of work. To be fair, it isn't even in beta yet; that's planned for later this year. Mistry also announced Neonworld, an event coming later this year specifically to show off the next iteration, and discussed Spectra, the next step in the Neon engine, which will add intelligence and emotions to the avatar and, he says, bring it much closer to an actual human in real time.
For now, the primitive demo filled me with skepticism. Neon's getting a lot of hype at CES. The team at Star Labs is excited by the breakthroughs they have made and wanted to show the world. That's understandable, and the raw, unpolished demo was refreshing to an extent, even if it makes me question whether Neon can ever live up to the hype.