
Better software for rescue mission bots

University of Missouri researchers are developing graphics software that enables search-and-rescue teams to virtually navigate spaces they cannot or have not yet entered.

Elizabeth Armstrong Moore
The software identifies distinct objects and maps them in different colors for easier navigation. University of Missouri

Researchers at the University of Missouri are developing computer graphics visualization software that lets search-and-rescue teams virtually navigate spaces that are unsafe for humans to enter.

Remote-controlled robots have already proved invaluable in search-and-rescue missions, reaching places that humans often can't--or shouldn't. (Think earthquakes, bomb threats, or the recent mine explosion in West Virginia.) But software developed in Columbia, Mo., aims to improve what we do with the data these bots collect.

"We are developing computer graphics visualization software to allow the user to interactively navigate the 3D data captured from the robot's scans," says Ye Duan, associate professor of computer science at MU's College of Engineering. "I worked with my students to develop computer software that helps the user to analyze the data and conduct virtual navigation, so they can have an idea of the structure before they enter it."

The bot, built at the Missouri University of Science and Technology, carries a light detection and ranging (LIDAR) unit that wirelessly transmits spatial data to emergency responders preparing to enter hazard zones. The LIDAR unit captures up to 500,000 point measurements per second and can scan through windows and doors.

The team's robot weighs 200 pounds. University of Missouri

The group's new software converts these data points into 3D maps, separating out individual objects, generating floor plans, and color-coding areas based on levels of stability. Creating these maps takes anywhere from 30 minutes to about two hours, depending on the volume of data gathered, a turnaround the researchers hope to keep shortening.
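The article doesn't describe the MU team's implementation, but as a rough illustration of what that kind of pipeline involves, here is a minimal Python sketch using the open-source Open3D library. The file name, parameter values, and the plane-then-cluster approach are all assumptions for the sake of example, not the researchers' actual method.

```python
# Hypothetical sketch of a LIDAR point-cloud pipeline: downsample a scan,
# separate the dominant floor plane, cluster the remaining points into
# distinct objects, and give each object its own color for easier navigation.
# This is an illustrative example, not the University of Missouri software.
import numpy as np
import open3d as o3d

def build_colored_map(scan_path: str) -> o3d.geometry.PointCloud:
    """Turn a raw LIDAR scan file into a color-coded 3D point cloud."""
    pcd = o3d.io.read_point_cloud(scan_path)       # e.g. a .ply or .pcd export
    pcd = pcd.voxel_down_sample(voxel_size=0.05)   # thin out very dense scans

    # Fit the dominant plane (typically the floor) with RANSAC and split it
    # from the free-standing objects above it.
    _, inlier_idx = pcd.segment_plane(
        distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    floor = pcd.select_by_index(inlier_idx)
    objects = pcd.select_by_index(inlier_idx, invert=True)

    # Group the remaining points into separate objects and color each cluster.
    labels = np.array(objects.cluster_dbscan(eps=0.1, min_points=20))
    palette = np.random.default_rng(0).random((labels.max() + 2, 3))
    objects.colors = o3d.utility.Vector3dVector(palette[labels + 1])

    floor.paint_uniform_color([0.6, 0.6, 0.6])     # neutral gray for the floor
    return floor + objects

# Example usage (assumes a scan file named "scan.ply" exists):
# o3d.visualization.draw_geometries([build_colored_map("scan.ply")])
```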

As for hardware, the team is already working on a proposal to make the robot smaller, lighter, and more flexible so that it could, in effect, catch up with the sophisticated software. Just named on Kiplinger's list of 8 Robots That Will Change Your Life, it weighs a whopping 200 pounds, making it admittedly heavy for many of the precarious settings it needs to navigate.

Of course, this set-up won't likely be limited to search-and-rescue missions. "This system could be used for routine structure inspections, which could help prevent tragedies such as the Minneapolis bridge collapse in 2007," Duan says. "It also could allow the military to perform unmanned terrain acquisition to reduce wartime casualties."

Updated at 7:22 p.m. PST: Due to erroneous information provided by the University of Missouri's news service, a previous version of this report incorrectly reported that the LIDAR unit is able to see through walls.