
Putting vision systems into perspective

Start-up Tyzx brings stereo vision to computers, letting them pinpoint objects in space. The technology could be a boon for everything from surveillance to carpet cleaning.

Stephen Shankland principal writer
A Silicon Valley start-up believes it can improve computer vision by combining a custom-designed chip with the way humans see.

Human brains judge how far away objects are by comparing the slightly different view each eye sees. Tyzx hopes to build this stereo vision process into video cameras.

The Palo Alto, Calif.-based start-up has encoded a processing scheme into a custom chip called DeepSea, allowing the processor to determine not only the color of each tiny patch of an image but also how far away that patch is from the camera.
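The geometry behind this kind of range finding is well established: the farther away a point is, the less it shifts between the two camera views, and distance can be recovered from that shift. The sketch below illustrates the standard formula; the camera parameters are made-up numbers for illustration, not Tyzx's actual specifications.

```python
# Depth from stereo disparity: distance Z = focal length * baseline / disparity.
# This is the textbook relation for parallel stereo cameras; the numbers
# below are illustrative, not drawn from any real DeepSea camera.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance to a point given its pixel shift between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 8 pixels between two cameras 10 cm apart,
# imaged with a 400-pixel focal length, lies 5 meters away.
print(depth_from_disparity(400, 0.10, 8))  # → 5.0
```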

The technology could be a boon for surveillance systems, strengthening the ability to track people in banks, stores or airports. But stereo vision could have wider uses as well, helping focus a computer's attention and cutting down on the amount of data that needs to be crunched.

For instance, a vacuuming robot trying to discern a table leg through pattern recognition could avoid getting caught up in examining the wallpaper in the background. Similarly, vehicles could use the technology to detect obstacles in their path while filtering out visual noise.

"The biggest value is the segmentation. It separates out the portion of the image that interests you," said Takeo Kanade, a stereo vision computing pioneer at Carnegie Mellon University and a member of an independent Tyzx advisory board. "You have not only appearance but also distance to each point. That makes the subsequent processing, such as object detection and recognition, significantly easier."
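The segmentation Kanade describes can be surprisingly simple once every pixel carries a distance: isolating a foreground object becomes a threshold on depth rather than a pattern-recognition problem. A minimal sketch, with invented depth values:

```python
# Depth-based segmentation: keep only pixels within a distance band.
# The depth map here is a toy example -- a nearby object (~1.2 m)
# standing in front of a far wall (~8 m).

def segment_by_depth(depth_map, near_m, far_m):
    """Mask of pixels whose distance falls inside [near_m, far_m]."""
    return [[near_m <= d <= far_m for d in row] for row in depth_map]

depths = [
    [8.0, 1.2, 8.0],
    [8.0, 1.3, 8.0],
]
mask = segment_by_depth(depths, 0.5, 2.0)
print(mask)  # → [[False, True, False], [False, True, False]]
```

Later stages, such as object recognition, then run only on the masked-in pixels, which is exactly the data reduction the article describes.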

Tyzx's first customers are mostly research labs, with other potential business partners evaluating the technology, Chief Executive Ron Buck said in an interview. Those who have bought the systems include MD Robotics, the company that makes the robotic arm for the Space Shuttle and, in the future, for the International Space Station. And ChevronTexaco is employing the equipment for "augmented reality" work--supplementing what ordinary people see with computer imagery for tasks such as operating oil platform cranes in bad weather.

The company hopes to win customers in the military and surveillance industries, and, as costs go down, to expand into broader "intelligent environments" where, for example, doors could open automatically or a house could send a medical alert if someone has been sitting still for an unusually long time. But Tyzx faces a solid challenge translating the idea into a workable product.

"I believe it's a great idea," Kanade said. "Conceptually it's easy, but computationally it's not."

Tyzx is backed by Vulcan Ventures, the investment firm of Microsoft co-founder Paul Allen. It has fewer than 20 employees, some of whom have years of experience in the field.

John Woodfill and Gaile Gordon launched the company in early 2001, but much of their work precedes that date. A key formula used in the custom chip dates back to 1990, and Tyzx has had prototype chips for about a year, Buck said. It's only recently, though, that Tyzx's ideas have become economically feasible.

Eyes on the prize
Stereo vision may indeed be a leap ahead for computers, but there's still a long way to go before machines can achieve the sophistication of human sight.

"Because vision comes so naturally to us, we don't appreciate the problem intuitively," said David Touretzky, a computational neuroscientist at Carnegie Mellon. "I don't think we got that appreciation until people started trying to build computer systems to see."

A large fraction of the brains of primates such as monkeys, apes and humans is devoted to processing visual information, Touretzky said. There are more than 20 different specialized areas for tasks such as recognizing motion, color, shapes and spatial relationships between objects.

"These areas are all interconnected in ways not fully understood yet," Touretzky said, but together these parts of the brain can discern the difference between the edge of a shadow and the edge of an object or compensate for color shifts that occur when the sun comes out.

Tyzx isn't the only company trying to capitalize on stereo computer vision. Microsoft Research is working on technology that extracts 3D information from 2D pictures. Point Grey Research already has cameras on the market, though its processing algorithms require a full-fledged computer.

In Japan, a company called ViewPlus is working in collaboration with Point Grey Research. Its products, though, combine as many as 60 cameras into a spherical system that produces 20 simultaneous video information streams.

These other companies are taking a fundamentally different approach from Tyzx's in one respect: Their systems compare more than two images.

Carnegie Mellon's Kanade said it might seem that comparing three images would be a harder computational task, but having more data to work with can actually make the process simpler.

DeepSea processing
The key development at Tyzx is its custom chip, which runs an algorithm called census correspondence that quickly finds similarities across two streams of video images broken into a 512-by-512 grid of pixels, or picture elements. The chip can perform this comparison 125 times per second, yet the 33MHz DeepSea consumes much less power than full-fledged processors such as Intel's Pentium.
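The census approach, co-developed by Tyzx co-founder John Woodfill, replaces raw brightness comparisons with bit strings: each pixel is encoded by which of its neighbors are darker, and matching pixels across the two images reduces to counting differing bits. The toy version below uses a 3x3 window and plain Python lists; it is a sketch of the idea, not the DeepSea chip's actual implementation.

```python
# Simplified census correspondence: encode each pixel by comparing it
# to its 3x3 neighborhood, then find the best match along a scanline
# by minimizing the Hamming distance between census codes.

def census_transform(img, y, x):
    """Bit vector: 1 where a 3x3 neighbor is darker than the center."""
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
    return bits

def hamming(a, b):
    """Number of differing bits -- the matching cost between two codes."""
    return bin(a ^ b).count("1")

def best_disparity(left, right, y, x, max_d):
    """Horizontal shift whose census codes match best along the scanline."""
    target = census_transform(left, y, x)
    costs = [(hamming(target, census_transform(right, y, x - d)), d)
             for d in range(min(max_d, x - 1) + 1)]
    return min(costs)[1]

# A dark spot at column 3 in the left view appears at column 2 in the
# right view, so its disparity is 1 (a nearer point would shift more).
left = [
    [5, 5, 5, 5, 5],
    [5, 5, 5, 1, 5],
    [5, 5, 5, 5, 5],
]
right = [
    [5, 5, 5, 5, 5],
    [5, 5, 1, 5, 5],
    [5, 5, 5, 5, 5],
]
print(best_disparity(left, right, 1, 3, 2))  # → 1
```

Because the census code depends only on brightness orderings, not absolute values, the match survives lighting differences between the two cameras, and both the transform and the Hamming comparison are cheap bit operations well suited to a low-power custom chip.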

"It allows incredibly compute-intensive searching for matching pixels to happen very fast at a very low price. It allows us to bring stereo vision to computers," CEO Buck said.

Another important development needed to reach Tyzx's low-price targets is camera sensors built using the comparatively inexpensive complementary metal-oxide semiconductor (CMOS) technology--the same process used to build most computer chips, Buck said. Digital cameras today use more elaborate--but more expensive--"charge-coupled devices," or CCDs.

Kanade has an appreciation for the difficulties involved. About 10 years ago he built an expensive but pioneering stereo vision system with many processors that could determine range information by comparing the images from multiple cameras.

Since then, more powerful computer processing abilities have elevated the potential of the field, which Kanade believes will take off once stereo cameras are as cheap as today's ordinary video cameras.

"I'm very impressed with the various attempts which made real-time stereo possible. I think the Tyzx effort may be one of the eventual successes," Kanade said.