RF-Enabled Cyber Physical Human Systems (CPHS)

CPS Award #2238653

As technology advances and an increasing number of devices enter our homes and workplaces, humans have become an integral component of cyber-physical systems (CPS). One of the grand challenges of cyber-physical human systems (CPHS) is how to design autonomous systems in which human-system collaboration is optimized through an improved understanding of human behavior.  A new frontier within this landscape is afforded by the advent of low-cost, low-power millimeter (mm)-wave RF transceivers, which enables the deployment of RF sensors almost anywhere as part of the Internet-of-Things (IoT), smart environments, personal devices, and even wearables.  RF sensors use radio detection and ranging (radar) signal processing (RSP) techniques to provide a unique, information-rich dataset of high-resolution measurements of distance, direction-of-arrival, and micro-Doppler (mD) signature (mDS) in a non-contact, non-intrusive fashion in most weather conditions and in the dark.  The mDS is a profile of the instantaneous radial velocity of the surface of the body as a function of time, which reveals unique patterns for different human motions and has biometric properties.  As a result, companies such as Google, Amazon, Infineon, and Vayyar have begun to develop radar-based products, but these only begin to scratch the surface of what can be made possible through the exploitation of RF sensors in CPHS.
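To make the mDS concrete: a common way to compute it is as the magnitude of a short-time Fourier transform of the radar's slow-time phase history. The sketch below simulates a single point scatterer (e.g., a swinging limb) oscillating radially in front of a mm-wave radar and extracts its micro-Doppler signature; the carrier frequency, pulse repetition frequency, and motion parameters are illustrative assumptions, not values from any particular system.

```python
import numpy as np
from scipy import signal

# Illustrative parameters (assumptions): a 77 GHz radar sampling the scene
# at a 1 kHz pulse repetition frequency (slow-time sample rate).
fc = 77e9                  # carrier frequency (Hz)
c = 3e8                    # speed of light (m/s)
prf = 1000.0               # pulse repetition frequency (Hz)
t = np.arange(0, 2.0, 1 / prf)

# Radial position of the scatterer: a 5 cm swing at 1.5 Hz about 5 m range,
# loosely mimicking the periodic motion of an arm while walking.
r = 5.0 + 0.05 * np.sin(2 * np.pi * 1.5 * t)

# Slow-time phase history of the backscattered signal (unit amplitude).
# The two-way path delay puts the factor 4*pi*fc/c in the phase.
x = np.exp(-1j * 4 * np.pi * fc * r / c)

# Micro-Doppler signature: magnitude of the short-time Fourier transform,
# i.e., instantaneous Doppler (radial velocity) versus time.
f, tau, S = signal.stft(x, fs=prf, nperseg=128, noverlap=96,
                        return_onesided=False)
mds = np.abs(np.fft.fftshift(S, axes=0))   # put zero Doppler at the center
```

The resulting `mds` array is the familiar sinusoidal micro-Doppler trace of an oscillating scatterer; real human motion superimposes many such traces (torso, limbs, hands), which is what gives the mDS its discriminative, biometric character.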

My research aims to make possible new CPHS applications in service of society and a better quality-of-life by re-imagining the way RF sensors are designed and utilized in CPHS.  The utilization of RF sensors in CPHS is currently limited by the difficulty of acquiring large amounts of data to train deep models, the wide range and continuity of possible human movements, the presence of other people and obstacles, and the dynamic nature of the environment.  In the face of all these challenges, RF sensors currently do not alter any aspect of their interrogation of the scene: they continue transmitting the same signal at the same rate in the same direction and processing the incoming data without taking any advantage of prior knowledge or of knowledge gained through other sensors.

My research tackles these challenges through three core ideas:

1. Physics-aware Machine Learning: We must take advantage of what we already know about the physics of RF backscatter and the biomechanics of the human body to bridge the data gap and empower deep neural networks (DNNs) to better learn and recognize human motion.

2. Spatio-temporal Kinematic Parsing: We should stop trying to force DNNs to categorize human movements into broad class labels that have limited utility because they are so generic and susceptible to misclassification when other people or obstacles are present.  We need to break down the motion recognition problem temporally, sequentially, and functionally into more fundamental units that can enable better semantic understanding of motion, similar to the way language is broken down into words and sentences governed by grammar.

3. Fully-Adaptive (Cognitive) Radar: We need to re-design RF sensors as fully-adaptive transceivers that integrate artificial intelligence (AI) and machine learning (ML) into their closed-loop decision-making process, a process based not only on received RF signals but on data from other sensors as well.
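The closed-loop adaptation in (3) can be illustrated with a toy perception-action cycle. In the sketch below, the two waveform names and the measurement model are entirely hypothetical; an epsilon-greedy rule lets a simulated sensor learn which transmit configuration yields more informative returns. This illustrates the principle of closing the sensing loop, not any specific cognitive-radar algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transmit configurations the sensor may choose between.
WAVEFORMS = ["wide_beam_low_prf", "narrow_beam_high_prf"]

def measure(waveform, target_moving):
    """Toy measurement model (assumption): the narrow-beam, high-PRF mode
    returns more energy from a moving target, plus Gaussian noise."""
    snr = 1.0 if waveform == "narrow_beam_high_prf" else 0.4
    return snr * (1.0 if target_moving else 0.1) + 0.05 * rng.standard_normal()

# Epsilon-greedy closed loop: mostly exploit the waveform with the best
# running-mean reward, occasionally explore the alternatives.
value = {w: 0.0 for w in WAVEFORMS}
count = {w: 0 for w in WAVEFORMS}
for step in range(200):
    if rng.random() < 0.1:                                  # explore
        w = WAVEFORMS[rng.integers(len(WAVEFORMS))]
    else:                                                   # exploit
        w = max(WAVEFORMS, key=lambda k: value[k])
    reward = measure(w, target_moving=True)
    count[w] += 1
    value[w] += (reward - value[w]) / count[w]              # running mean

best = max(WAVEFORMS, key=lambda k: value[k])
```

A fully-adaptive radar would of course reason over far richer state (tracks, scene context, cues from other sensors) and act on more degrees of freedom (waveform, beam, revisit rate), but the perceive-decide-act loop above is the essential structural change from today's fixed-transmission sensors.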