AI Case Study
Researchers at MIT have developed a deep neural network that can identify people through walls by detecting their movements from WiFi-band radio signals, with roughly 83% accuracy.
A group of researchers at MIT has developed a deep neural network algorithm that estimates human movements through walls by analysing wireless signals. The algorithm estimates 2D poses and is expected to find applications in surveillance, healthcare, the military, robotics, gaming, etc.
Public And Social Sector
Education And Academia
"The research is based on a fundamentally different approach to dealing with occlusions in pose estimation, and potentially other visual recognition tasks. While visible light is easily blocked by walls and opaque objects, radio frequency (RF) signals in the WiFi range can traverse such occlusions. Further, they reflect off the human body, providing an opportunity to track people through walls. Recent advances in wireless systems have leveraged those properties to detect people and track their walking speed through occlusions. Past systems, however, are quite coarse: they either track only one limb at any time, or generate a static and coarse description of the body, where body parts observed at different times are collapsed into one frame. Use of wireless signals to produce a detailed and accurate description of the pose, similar to that achieved by a state-of-the-art computer vision system, has remained intractable."
R And D
Current visual recognition systems work only when there is light and no obstruction. This algorithm gets around that limitation by using radio signals.
"After 100 participants trained the system, it could correctly identify which researchers were which 83 percent of the time, based on 'their style of moving.'"
"During training the system uses synchronized wireless and visual inputs, extracts pose information from the visual stream, and uses it to guide the training process. Once trained, the network uses only the wireless signal for pose estimation. We show that, when tested on visible scenes, the radio-based system is almost as accurate as the vision-based system used to train it. Yet, unlike vision-based pose estimation, the radio-based system can estimate 2D poses through walls despite never being trained on such scenarios."
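The cross-modal supervision described above can be illustrated with a toy sketch: a vision-based "teacher" provides pose targets during training, and an RF-based "student" learns to reproduce them so that, at inference, only the wireless signal is needed. This is a deliberately simplified stand-in, not the actual RF-Pose architecture (which uses convolutional networks on RF heatmaps); the linear models, dimensions, and synthetic data below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: RF feature vector size and number of 2D keypoints.
N_SAMPLES, RF_DIM, N_KEYPOINTS = 200, 32, 14

# Synthetic stand-in for synchronized wireless inputs.
rf_signals = rng.normal(size=(N_SAMPLES, RF_DIM))

# "Teacher": a vision-based pose estimator. Here its output is faked as a
# fixed linear map of the scene, purely to produce training targets.
true_map = rng.normal(size=(RF_DIM, 2 * N_KEYPOINTS))
teacher_poses = rf_signals @ true_map  # flattened (x, y) per keypoint

# "Student": an RF-based model trained by gradient descent to match the
# teacher's pose estimates (cross-modal supervision).
student_w = np.zeros((RF_DIM, 2 * N_KEYPOINTS))
lr = 0.1
for _ in range(500):
    pred = rf_signals @ student_w
    grad = rf_signals.T @ (pred - teacher_poses) / N_SAMPLES
    student_w -= lr * grad

# At inference time only the wireless signal is used.
new_rf = rng.normal(size=(1, RF_DIM))
estimated_pose = (new_rf @ student_w).reshape(N_KEYPOINTS, 2)

mse = np.mean((rf_signals @ student_w - teacher_poses) ** 2)
print(f"training MSE vs teacher: {mse:.6f}")
```

The key design point mirrored here is that the visual stream appears only in the loss during training; the student's forward pass never sees an image, which is what lets the trained network operate through walls where the camera cannot.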
"RF-Pose, a neural network system that parses wireless signals and extracts accurate 2D human poses."
"RF-Pose transmits a low-power wireless signal (1,000 times lower power than WiFi) and observes its reflections from the environment. Using only the radio reflections as input, it estimates the human skeleton."
"synchronized wireless and vision data"
"More than 50 hours of data collection experiments from 50 different environments, including different buildings around our campus. The environments span offices, cafeterias, lecture and seminar rooms, stairs, and walking corridors. People performed natural everyday activities without any interference from our side. Their activities include walking, jogging, sitting, reading, using mobile phones and laptops, eating, etc. Our data includes hundreds of different people of varying ages."
"RF-Pose is trained with 70% of the data from visible scenes, and tested with the remaining 30% of the data from visible scenes and all the data from through-wall scenarios."
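The evaluation protocol above can be sketched as a simple split: visible-scene data is divided 70/30 between training and testing, while every through-wall sample is reserved for testing only. The record layout, field names, and counts below are assumptions for illustration; the paper may well split by environment rather than by individual sample.

```python
import random

random.seed(0)

# Hypothetical dataset: 50 environments, 20 samples each; assume every
# fifth environment is a through-wall scenario.
records = [
    {"env": e, "through_wall": e % 5 == 0, "sample": i}
    for e in range(50)
    for i in range(20)
]

visible = [r for r in records if not r["through_wall"]]
through_wall = [r for r in records if r["through_wall"]]

random.shuffle(visible)
cut = int(0.7 * len(visible))
train_set = visible[:cut]                # 70% of visible-scene data
test_set = visible[cut:] + through_wall  # 30% visible + all through-wall

print(len(train_set), len(test_set))
```

Holding out all through-wall data tests exactly the claim in the quote above: the network must generalise to occluded scenes it never saw during training.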