Scientists from Carnegie Mellon University have developed a cheap way to sense humans through walls by using two Wi-Fi routers to image a human’s 3D shape and pose.
The researchers outline in a new paper how they used a deep neural network based on DensePose that maps Wi-Fi signals (phase and amplitude) to UV coordinates, the 2D coordinates assigned to points on a 3D model's surface when it is unwrapped onto a flat image for texture mapping.
DensePose was developed by researchers at Imperial College London, Facebook AI, and University College London.
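The network's input comes from the phase and amplitude of the Wi-Fi signal rather than from pixels. As a rough illustration (not the authors' code), assuming the raw channel measurements arrive as complex numbers in a NumPy array, the two components the paper mentions can be separated like this:

```python
import numpy as np

# Toy complex Wi-Fi channel samples; the values and array shape here are
# illustrative only, not real measurements from the paper.
csi = np.array([1 + 1j, 0.5 - 0.5j, -1 + 0j])

amplitude = np.abs(csi)    # signal strength of each sample
phase = np.angle(csi)      # phase of each sample, in radians
```

In the actual system these two arrays, taken across many antennas and frequencies, form the input the network learns to translate into a body-surface (UV) map.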
The Carnegie Mellon researchers’ key achievement, first reported by Vice, is that they can accurately map multiple subjects’ poses with off-the-shelf 1D sensors (Wi-Fi antennas) rather than expensive RGB cameras, LiDAR, or radar. They were also able to use Wi-Fi to sense humans and estimate their pose, rather than merely locating an object in a room.
“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing Wi-Fi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing,” researchers Jiaqi Geng, Dong Huang, and Fernando De la Torre explain in their paper, DensePose From WiFi.
The researchers argue that their Wi-Fi approach to imaging humans in households could