
Nvidia AI Tech Lets Computers Understand the 3D World From 2D Photos

The graphics giant is eager to build on creating interactive "digital twins" of real-world locations.

Stephen Shankland Former Principal Writer

Nvidia AI tech constructs 3D models out of a collection of 2D photographs.

Nvidia; animation by Stephen Shankland/CNET

Graphics chips are good at taking 3D scenes, like video game battlefields or airplane designs, and rendering them as 2D images on a screen. Nvidia, a top maker of such chips, is now using AI to do the exact opposite.

In a talk at Nvidia's GTC, the company's annual GPU Technology Conference, researchers described how they can reconstruct a 3D scene from just a few camera images. To do so, Nvidia uses a processing technique called a neural radiance field, or NeRF. Nvidia's version is far faster than earlier methods -- so fast that it can run at video rates of 60 frames per second.

A NeRF ingests photo information and trains a neural network, an AI processing system somewhat like a human brain, to understand the scene, including how light rays travel from it to any given point surrounding it. That means you can place a virtual camera anywhere to get a new view of that scene.
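The core idea can be sketched in a few lines of code. This is not Nvidia's implementation -- it's a minimal, untrained toy: a small neural network maps a 3D point and a viewing direction to a color and a density, and rendering a new view means sampling that function along each camera ray and blending the results. All names and network sizes here are illustrative assumptions.

```python
# Toy sketch of the NeRF concept (illustrative only, not Nvidia's code).
# A tiny network maps (3D point, viewing direction) -> (color, density);
# a renderer blends samples along a camera ray into one pixel color.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network with random, untrained weights.
# The original NeRF parameterizes direction as two angles (a 5D input);
# for simplicity we pass the 3D unit direction vector, giving 6 inputs.
W1 = rng.normal(size=(6, 64))
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 4))  # outputs: (r, g, b, density)
b2 = np.zeros(4)

def radiance_field(point, view_dir):
    """Map a 3D point and viewing direction to (rgb color, density)."""
    x = np.concatenate([point, view_dir])   # 6D network input
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))    # sigmoid: colors in [0, 1]
    density = np.log1p(np.exp(out[3]))      # softplus: nonnegative
    return rgb, density

def render_ray(origin, direction, n_samples=32, t_near=0.1, t_far=4.0):
    """Standard volume rendering: blend samples along one camera ray."""
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for t in ts:
        rgb, density = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-density * dt)  # opacity of this sample
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color
```

Training would adjust the weights so that rendered rays match the input photos; once trained, placing a "virtual camera" anywhere just means calling `render_ray` for every pixel of the new viewpoint. Nvidia's speedup comes from a much more efficient encoding and GPU implementation than this naive loop.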

It may not seem useful, but reconstructing 3D scenes is important for computers trying to understand the real world. One example Nvidia also showed off at GTC is autonomous vehicle technology that turns video into a 3D model of streets so developers can replay many variations of that scene to improve their vehicles' behavior.

Creating computer models of the real world also could be useful in building the 3D realms called the metaverse that the tech industry is eager for you to inhabit for entertainment, shopping, work, chats and games. Nvidia, with its Omniverse technology, is keen on making it easier to create interactive "digital twins" of real-world areas like roads and warehouses.

Nvidia's work also showcases the growing capability of artificial intelligence technology. By aping real brains and the way they learn from real-world data, the computing industry has found a way to program computers to recognize patterns in complex data. You'll likely be familiar with some AI uses, like detecting faces for camera focusing or processing Amazon Alexa voice commands. But AI is spreading everywhere, like detecting fraudulent financial transactions nearly instantly, designing computer chips and scrubbing bogus businesses off Google Maps.

Chip circuitry to accelerate AI is spreading across the tech world, too, from Nvidia's enormous new H100 processor designed to train neural network AI models to Apple iPhones that run those models.
