Nature, however, shows that vision isn't limited to light. Bats navigate by sound echoes, while sharks detect the electric fields given off by their prey. Inspired by these natural adaptations, researchers have found that radio waves, whose wavelengths are far longer than those of visible light, offer capabilities beyond human sight, such as seeing through smoke, fog, and even some solid objects.
Robots have traditionally relied on cameras and LiDAR for high-resolution imaging, or on conventional radar to see through obstacles, though radar delivers only low-detail images. A team at the University of Pennsylvania's School of Engineering and Applied Science (Penn Engineering) has now developed PanoRadar, a new tool that lets robots build detailed 3D maps of their surroundings from radio signals, significantly enhancing their vision.
"We wondered if we could integrate the robustness of radio signals with the high resolution provided by visual sensors," said Mingmin Zhao, Assistant Professor in Computer and Information Science.
In a paper to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), Zhao and researchers from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab and the PRECISE Center introduced PanoRadar. The system, developed with doctoral student Haowen Lai, recent graduate Gaoxiang Luo, and research assistant Yifei (Freddy) Liu, employs AI to help robots navigate difficult conditions such as smoke-filled interiors and foggy roads.
PanoRadar works much like a lighthouse, sweeping its surroundings with a rotating array of antennas. The antennas emit radio waves and record the reflections, which AI then processes into detailed, high-resolution 3D images. Although the sensor costs far less than a LiDAR system, combining measurements from many rotation angles boosts its resolution to a comparable level. "The breakthrough is in how we process radio wave measurements," Zhao explained. "Our algorithms extract rich 3D data from the surroundings."
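To make the rotating-scan idea concrete, the sketch below simulates, in broad strokes, how returns recorded at many rotation angles can be combined coherently, in the spirit of a synthetic aperture. The carrier frequency, rotation radius, and far-field toy geometry are illustrative assumptions, not details drawn from the paper.

```python
import numpy as np

# Hypothetical toy model of the rotating-array idea (not the authors' code):
# an antenna rotates through many azimuth angles, records a complex
# reflection at each one, and coherent summation over those angles sharpens
# the effective angular resolution, much like a synthetic aperture.

C = 3e8                 # speed of light, m/s
FREQ = 77e9             # assumed mmWave carrier frequency (illustrative)
WAVELEN = C / FREQ
RADIUS = 0.05           # assumed rotation radius of the antenna, m

def synthetic_aperture_image(angles, measurements, scan_azimuths):
    """Coherently combine measurements taken at many rotation angles.

    angles:        antenna rotation angles (radians) where data was recorded
    measurements:  complex reflections recorded at each angle
    scan_azimuths: directions (radians) at which to evaluate the image
    """
    image = np.zeros(len(scan_azimuths), dtype=complex)
    for i, az in enumerate(scan_azimuths):
        # Phase each measurement would acquire at the antenna's position
        # for a far-field reflector in direction `az` (two-way path,
        # measured relative to the rotation center).
        path = 2 * RADIUS * np.cos(angles - az)
        phase = np.exp(-1j * 2 * np.pi * path / WAVELEN)
        # Coherent sum: contributions align only for the true direction.
        image[i] = np.sum(measurements * np.conj(phase))
    return np.abs(image)

# Simulate a single reflector at 30 degrees and image it.
angles = np.linspace(0, 2 * np.pi, 720, endpoint=False)
target_az = np.deg2rad(30)
path = 2 * RADIUS * np.cos(angles - target_az)
measurements = np.exp(-1j * 2 * np.pi * path / WAVELEN)

scan = np.linspace(0, 2 * np.pi, 360, endpoint=False)
img = synthetic_aperture_image(angles, measurements, scan)
print("Estimated reflector azimuth: %.1f deg" % np.rad2deg(scan[np.argmax(img)]))
```

Because the contributions only add in phase for the true direction, the coherent sum sharpens the angular resolution well beyond what any single measurement provides.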
A major challenge was maintaining high-resolution imaging as the robot moved. "Combining data from numerous positions with sub-millimeter precision was essential for LiDAR-level detail," said Lai, the paper's lead author. Motion errors had to be minimized to avoid compromising image quality.
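To see why sub-millimeter accuracy matters, consider a toy phase-correction example: at millimeter wavelengths, a half-millimeter position error already shifts the signal phase by a sizable fraction of a cycle. The sketch below, which assumes a 77 GHz carrier and Gaussian position jitter and is not the team's actual motion-estimation algorithm, shows how an accurate motion estimate restores coherence.

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): even a tiny error
# in the assumed antenna position corrupts the phase of each measurement.
# Correcting measurements with an accurate motion estimate restores
# coherence before the data is combined.

WAVELEN = 3e8 / 77e9   # assumed mmWave wavelength, about 3.9 mm

def phase_correct(measurements, range_errors):
    """Remove the two-way phase shift caused by known position errors.

    measurements: complex returns recorded while the robot moved
    range_errors: estimated deviation (m) of each antenna position from
                  the ideal trajectory, e.g. from odometry or SLAM
    """
    correction = np.exp(1j * 4 * np.pi * range_errors / WAVELEN)
    return measurements * correction

# A 0.5 mm position error is already about an eighth of a wavelength at
# 77 GHz, enough to badly degrade a coherent sum:
rng = np.random.default_rng(0)
errors = rng.normal(0, 0.5e-3, size=1000)             # sub-mm jitter, m
ideal = np.ones(1000, dtype=complex)                   # perfectly coherent returns
jittered = ideal * np.exp(-1j * 4 * np.pi * errors / WAVELEN)

print("coherent gain, no correction: %.2f" % abs(jittered.sum() / 1000))
print("coherent gain, corrected:     %.2f" % abs(phase_correct(jittered, errors).sum() / 1000))
```

With this level of jitter the uncorrected coherent gain collapses to roughly a quarter of its ideal value, while the corrected sum recovers it fully, which is why precise motion estimation was essential.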
Teaching PanoRadar to interpret its data was another key task. "Indoor spaces have patterns and structures we used to train our system," Luo noted. Initially, the AI cross-referenced LiDAR data to verify its accuracy, improving its capabilities over time.
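A rough sketch of this kind of cross-modal supervision appears below: a small network learns to regress LiDAR-like depth from radar input, with co-collected LiDAR scans serving as ground truth. The architecture, tensor shapes, and loss here are placeholders for illustration, not the WAVES Lab model.

```python
import torch
import torch.nn as nn

# Minimal sketch of LiDAR-supervised training (not the authors' model):
# a network maps radar input to a depth map, and paired LiDAR scans
# provide the regression targets.

class RadarToDepth(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy encoder-decoder over a single-channel radar "heatmap"
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # predicted depth map
        )

    def forward(self, radar):
        return self.net(radar)

model = RadarToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # depth regression against LiDAR ground truth

# Stand-in for a batch of paired (radar, LiDAR) training frames;
# real data would come from a co-calibrated sensor rig.
radar_batch = torch.randn(4, 1, 64, 256)
lidar_depth = torch.rand(4, 1, 64, 256)

for step in range(3):
    pred = model(radar_batch)
    loss = loss_fn(pred, lidar_depth)   # LiDAR supervises the radar network
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: L1 loss {loss.item():.4f}")
```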
Field tests demonstrated that PanoRadar excelled where conventional sensors struggled. "The system maintained precise tracking through smoke and even mapped areas with glass walls," Liu said. This is due to radio waves' resistance to being blocked by airborne particles and their ability to reflect off materials that confound LiDAR.
Future plans include integrating PanoRadar with other sensors like cameras and LiDAR for enhanced multi-sensor systems. The team will also test the system on different robotic and autonomous platforms. "Using multiple sensing methods is vital for complex tasks," Zhao said. "By combining them, we can create robots ready for real-world conditions."
Research Report: Enabling Visual Recognition at Radio Frequency
Related Links
University of Pennsylvania School of Engineering and Applied Science