Imagine driving home after a long day at work. Suddenly, a car careens out of an obscured side street and turns right in front of you. Luckily, your autonomous car saw this vehicle long before it came within your line of sight and slowed to avoid a crash. This might seem like magic, but a novel technique developed at Caltech could bring it closer to reality.
With the advent of autonomous vehicles, advanced spacecraft, and other technologies that rely on sensors for navigation, there is an ever-increasing need for technologies that can scan for obstacles, pedestrians, or other objects. But what if something is hidden behind another object?
In a paper recently published in the journal Nature Photonics, Caltech researchers and their colleagues describe a new method that essentially transforms nearby surfaces into lenses that can be used to indirectly image previously obscured objects.
The technology, developed in the laboratory of Changhuei Yang, Thomas G. Myers Professor of Electrical Engineering, Bioengineering, and Medical Engineering; and Heritage Medical Research Institute investigator, is a form of non-line-of-sight (NLOS) sensing—or sensing that detects an object of interest outside of the viewer’s line of sight. The new method, dubbed UNCOVER, does this by using nearby flat surfaces, such as walls, like a lens to clearly view the hidden object.
Most current NLOS imaging technologies detect light from a hidden object that has been passively reflected by a surface such as a wall. However, because surfaces such as walls predominantly scatter light, these techniques do not produce clear images. Computational imaging methods can be used to extract information from the scattered light and improve image clarity, but they cannot generate high-resolution images.
UNCOVER, however, directly counteracts scattering through its use of wavefront shaping. Wavefront shaping was previously considered unviable for NLOS imaging because it requires a guidestar, an approximate point source of light that allows details of the hidden object to be deduced, and no such guidestar is ordinarily available when the object is hidden from view.
“We know that lenses image a point onto another point. If you are looking through a bad ‘lens’ with matte surfaces, the image of a point is now blurred, and the light spreads all over the place, but you can grind and polish the matte surface to navigate the light to the correct position,” explains electrical engineering graduate student Ruizhi Cao, the first author of the Nature Photonics paper. “That is how a guidestar helps you in principle: It tells us where the tiny bumps are, so that we know how to correctly polish the surface.”
Yang and his colleagues found that the hidden object itself could be used as the guidestar. The result is an NLOS imaging method that pieces the scattered light back together into a clear image of the hidden object.
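The guidestar principle described above can be illustrated with a toy numerical sketch. This is not the paper's actual algorithm: it simply models the scattering wall as a random phase mask, uses a guidestar measurement to learn that phase ("where the tiny bumps are"), and applies the conjugate phase so the scattered light adds back up into a sharp focus ("polishing the surface"). The one-dimensional model and the enhancement calculation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256  # number of optical modes in this 1-D toy model

# A scattering "wall": each mode picks up a random, unknown phase.
wall_phase = rng.uniform(0, 2 * np.pi, n)
wall = np.exp(1j * wall_phase)

# A guidestar: light from an approximate point source traverses the
# wall, so measuring its field reveals the wall's phase profile.
guidestar_field = wall * 1.0  # unit-amplitude source through the wall
measured_phase = np.angle(guidestar_field)

# Wavefront shaping: illuminate with the conjugate phase so the
# wall's random phases are cancelled on the way through.
shaped_input = np.exp(-1j * measured_phase)
corrected_output = shaped_input * wall

# With correction, all modes add in phase, producing a sharp focus;
# without it, random phases add incoherently into diffuse light.
focus_corrected = abs(corrected_output.sum()) ** 2
focus_uncorrected = abs(wall.sum()) ** 2

print(focus_corrected / focus_uncorrected)  # enhancement factor >> 1
```

In this sketch the corrected intensity grows like the square of the number of controlled modes, while the uncorrected speckle grows only linearly, which is why wavefront shaping can turn a matte wall into something lens-like.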
According to Cao, the imaging method might be useful for autonomous driving, rescue missions, and other remote-sensing applications. In the case of autonomous driving, Cao says: “We can see all the traffic on the crossroads with this method. This might help the cars to foresee the potential danger that one is not able to see directly.”
The use of UNCOVER might not only allow automobiles to see as well as humans but also help humans become better drivers. Whereas a human driver might be able to spot an upcoming jaywalker a few feet away, an autonomous car outfitted with UNCOVER technology could potentially spot such a hazard on the next block, provided that the imaging conditions are optimal.
UNCOVER imaging could also prove useful beyond Earth—for example, in future robotic missions to explore Mars, Cao says: “We are counting on the rovers to take images of another planet to help us develop a better understanding about that planet. However, for those rovers, some places might be hard to reach because of limited resources and power. With the non-line-of-sight imaging technique, we don’t need the rover itself to do that. What is needed is to find a place where the light can reach.”
The Nature Photonics paper is titled “High-resolution non-line-of-sight imaging employing active focusing.” Other coauthors include Frederic de Goumoens, Baptiste Blochet, and Jian Xu.
Funding for the research was provided by Caltech’s Center for Sensing to Intelligence (S2I).