When a jumping spider tackles a fly from a distance, its pounce must be precisely executed. To achieve this, the spiders have multiple layers of retinas in their principal eyes. As the image grows sharper on one layer and blurrier on another, the spider can compare the defocus between layers to instantly judge the exact distance needed for a lethal jump. The same setup has now allowed Harvard researchers to develop a sophisticated new lens, or "metalens," for microbots and other tiny tech.
In a study published earlier this month, a team of researchers designed a metalens depth sensor that can simultaneously produce two images with different blur. But instead of using layered retinas to capture multiple images at once, as jumping spiders do, the metalens splits the light and forms two differently defocused images side by side on a single sensor. That data is then fed to an algorithm that compares the blur between the two images and computes the depth of the scene.
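The principle behind comparing two differently defocused images is often called depth from differential defocus: for Gaussian-like blur, the difference between the two images is proportional to the image's Laplacian, with a coefficient that encodes the change in blur, and hence distance. The toy sketch below illustrates this relationship on a simulated 1-D signal; the signal shape, blur levels, and fitting step are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius=20):
    """Normalized 1-D Gaussian blur kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(signal, sigma):
    """Simulate defocus as Gaussian blur."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

# A sharp in-focus "scene": a single Gaussian bump (demo value).
x = np.arange(-50, 51)
scene = np.exp(-x**2 / (2 * 5.0**2))

# Two slightly different defocus levels, standing in for the two
# images the metalens forms (demo values).
sigma1, sigma2 = 2.0, 2.2
I1, I2 = blur(scene, sigma1), blur(scene, sigma2)

# Discrete Laplacian of the average of the two images.
mid = (I1 + I2) / 2
lap = np.convolve(mid, [1.0, -2.0, 1.0], mode="same")

# For Gaussian blur, I2 - I1 ~ (delta_sigma^2 / 2) * Laplacian.
# A least-squares fit recovers the blur change between the images.
coef = np.sum((I2 - I1) * lap) / np.sum(lap * lap)
est_delta = 2 * coef
true_delta = sigma2**2 - sigma1**2

print(f"estimated blur change {est_delta:.3f} vs true {true_delta:.3f}")
```

In a real sensor this blur change would be measured per pixel and mapped to distance through the optics' calibration, but the core computation is this lightweight comparison of two images, which is what makes the approach attractive for tiny, power-limited devices.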
"Metalenses are a game changing technology because of their ability to implement existing and new optical functions much more efficiently, faster and with much less bulk and complexity than existing lenses," said the paper's co-author Frederico Capasso in a Harvard release.
Currently, depth sensors in cars and video game consoles use multiple cameras to measure distances. Facial identification on smartphones, for instance, uses thousands of laser dots to map your face's shape. But the new metalens, researchers hope, could allow depth-sensing cameras to be integrated into nanotechnology, microbots and smaller wearables.