Physics-Aware Machine Learning (PhML) for Radar Datasets

Radar data is not inherently acquired as an image; it is a time-varying, complex-valued data series to which radar signal processing techniques are applied to generate 2D, 3D, and 4D data representations. The observed signal and its RF data representations are strongly related to the principles of electromagnetic backscatter and target kinematics. Thus, robust, pervasive, and accurate radar-based sensing requires the development of novel deep neural network architectures tailored to RF datasets.

Physics-aware ML is a foundational paradigm for achieving this goal: it integrates physics-based models with data-driven deep learning (DL) to reap the benefits of both approaches. Physics-based models allow prediction of the expected radar return for any sensor position, observation angle, and environmental scene. However, no model is perfect, and the dynamic nature of the sensing environment ensures that some part of the signal will always remain unknown; this can be modeled as noise, bias, or error uncertainty. Physics-aware machine learning combines the strengths of DL and physics-based modeling to optimize the trade-offs between prior and new knowledge, models and data, uncertainty, complexity, and computation time, for greater accuracy and robustness. My research develops novel physics-aware machine learning approaches specifically for RF datasets, providing improved methods for synthesizing RF training data, designing physics-aware generative adversarial networks, and developing deep neural network-based classification methods.

Physics-Aware Machine Learning for Dynamic, Data-Driven Radar Target Recognition
Dynamic Data-Driven Application Systems (DDDAS) Conference, Cambridge, MA, October 6-10, 2022.
S.Z. Gurbuz
[ PDF ]

Physics-aware ML is an emerging field that strives to integrate physics-based models with data-driven deep learning (DL) to reap the benefits of both approaches. This paper addresses the challenge of designing physics-aware synthetic data generation techniques for training deep models for automatic target recognition (ATR). In particular, it presents physics-based methods for data synthesis, the limitations of current generative adversarial network (GAN)-based methods, new ways in which domain knowledge may be integrated into GAN architectures, and domain adaptation of signatures from different, but related, sources of RF data. The use of a physics-aware loss term with a multi-branch GAN (MBGAN) resulted in a 9% improvement in classification accuracy over that attained with real data alone, and a 6% improvement over that attained using data generated by a Wasserstein GAN with gradient penalty. The implications of fully adaptive transmissions for DL-based ATR in Dynamic Data-Driven Application Systems (DDDAS) are discussed.
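The physics-aware loss above rewards synthetic signatures whose kinematics match those of real data. A minimal numpy sketch of one plausible formulation, comparing micro-Doppler envelopes via correlation and curve matching (the function names, threshold, and weights are illustrative, not the paper's exact terms):

```python
import numpy as np

def envelope(spectrogram, freqs, thresh_db=-20.0):
    """Upper micro-Doppler envelope: the highest frequency bin per time
    step whose power exceeds `thresh_db` relative to the global peak."""
    power_db = 10 * np.log10(spectrogram + 1e-12)
    mask = power_db >= power_db.max() + thresh_db
    env = np.full(spectrogram.shape[1], freqs[0])
    for t in range(spectrogram.shape[1]):
        idx = np.nonzero(mask[:, t])[0]
        if idx.size:
            env[t] = freqs[idx.max()]
    return env

def physics_aware_penalty(env_real, env_fake, w_corr=1.0, w_curve=1.0):
    """Kinematic discrepancy between two envelopes: (1 - Pearson
    correlation) plus a normalized Euclidean curve-matching distance.
    Added to the adversarial loss, this penalizes physically
    inconsistent synthetic samples."""
    corr = np.corrcoef(env_real, env_fake)[0, 1]
    curve = np.linalg.norm(env_real - env_fake) / np.sqrt(env_real.size)
    return w_corr * (1.0 - corr) + w_curve * curve
```

In a GAN training loop, this penalty would be weighted and summed with the usual generator loss; a perfectly matching envelope contributes zero penalty.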

Physics-Aware Generative Adversarial Network for Human Activity Recognition
IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 3, pp. 2994-3008, June 2023.
M.M. Rahman, S.Z. Gurbuz, and M.G. Amin
[ Link to Pub ] [ PDF ]

Generative adversarial networks (GANs) have recently been proposed for the synthesis of RF micro-Doppler signatures to address the issue of low sample support and enable the training of deeper neural networks (DNNs) for enhanced RF signal classification. However, GANs suffer from systematic kinematic inconsistencies that degrade performance when GAN-synthesized data is used to train DNNs for human activity recognition. As a solution to this problem, this paper proposes the design of a multi-branch GAN (MBGAN), which integrates domain knowledge into its architecture and incorporates physics-aware metrics based on correlation and curve matching into the loss function. The quality of the synthetic samples is evaluated via image quality metrics, the ability to synthesize data that reflects human physical properties and generalizes to broader subject profiles, and the achieved classification accuracy. Our experimental results show the proposed approach generates synthetic training data that more accurately matches target kinematics, resulting in an increase of 9% in classification accuracy when classifying 14 different ambulatory human activities.
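The image-quality evaluation of synthetic spectrograms mentioned above can be illustrated with a standard metric such as peak signal-to-noise ratio; the paper's exact choice of metrics is not reproduced here, so this is only a hedged sketch:

```python
import numpy as np

def psnr(real, synth, peak=1.0):
    """Peak signal-to-noise ratio between a real and a synthesized
    spectrogram, both assumed normalized to [0, peak]. Higher values
    indicate closer pixel-level agreement; identical images give inf."""
    mse = np.mean((real - synth) ** 2)
    if mse == 0:
        return np.inf
    return 10 * np.log10(peak ** 2 / mse)
```

Pixel-level metrics like this complement, but do not replace, the kinematic-fidelity and downstream-accuracy evaluations, since a synthetic signature can be visually plausible yet physically inconsistent.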

Physics-guided Deep Neural Networks for Radar-based UAV Recognition in Different Environments with No Prior In Situ Data
IEEE Transactions on Aerospace and Electronic Systems (under review)
S.J. Kearney and S.Z. Gurbuz

Classifying UAV radar micro-Doppler signatures with deep neural networks (DNNs) suffers a degradation in performance when there is a difference in the distribution of training and test data, due to the differences in clutter backscatter and signal-to-interference-plus-noise ratio (SINR) across environments. This is further exacerbated for low-SINR samples, in which the helicopter rotor modulation (HERM) lines characteristic of UAV blade rotations become only faintly visible. As HERM lines are a critical feature enabling the recognition and discrimination of various types of UAVs, this paper addresses the two-fold challenge of 1) how to accurately synthesize HERM lines in training data for UAV classification, and 2) how to enhance recognition performance in previously unseen, different environments, especially when the SINR is low. Towards these aims, we propose a physics-guided approach in which a semantic segmentation network, trained only using low-fidelity CAD models of UAVs, is integrated into the discriminator of a Generative Adversarial Network (GAN) for training data synthesis, and onto the front end of a DNN for classification. Our results show that physics-guided semantic segmentation improves the representation of HERM lines in GAN-synthesized samples and enables improved classification of low-SINR samples exhibiting HERM lines when no prior in situ data at that SINR is used during training. Consequently, our approach provides a way of enhancing UAV recognition performance in different operational environments, from which it is often difficult to obtain training data prior to deployment.
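HERM lines appear as equally spaced spectral lines at multiples of the blade-pass frequency. A hedged numpy sketch of how such a physics-derived cue could be encoded and stacked onto a spectrogram as an extra input channel (the analytic mask and function names are illustrative; the paper uses a learned segmentation network rather than a closed-form mask):

```python
import numpy as np

def herm_line_mask(freqs, blade_pass_hz, n_harmonics=5, tol_hz=2.0):
    """Binary frequency mask marking bins within `tol_hz` of the first
    `n_harmonics` multiples of the blade-pass frequency -- the equally
    spaced HERM lines produced by rotor blade flashes."""
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * blade_pass_hz) <= tol_hz
    return mask

def with_herm_channel(spectrogram, freq_mask):
    """Broadcast the 1-D frequency mask over time and stack it onto the
    spectrogram as a second channel, so a downstream classifier sees
    the physics-guided cue explicitly."""
    mask_2d = np.repeat(freq_mask[:, None], spectrogram.shape[1], axis=1)
    return np.stack([spectrogram, mask_2d.astype(spectrogram.dtype)])
```

A learned segmentation front end plays the same role as `freq_mask` here, but can cope with unknown rotation rates and low SINR.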

CV-SincNet: Learning Complex Sinc Filters from Raw Radar Data for Computationally Efficient Human Motion Recognition
IEEE Transactions on Radar Systems, vol. 1, pp. 493-504, 2023
S. Biswas, C. O. Ayna, S.Z. Gurbuz, and A.C. Gurbuz
[ Link to Pub ] [ PDF ]

The utilization of radio-frequency (RF) sensing in cyber-physical human systems, such as human-computer interfaces or smart environments, is an emerging application that requires real-time human motion recognition. However, current state-of-the-art radar-based recognition techniques rely on computing various RF data representations, such as range-Doppler or range-angle maps, micro-Doppler signatures, or higher-dimensional representations, which incur substantial computational complexity. Consequently, classification of raw radar data has garnered increasing interest, while remaining limited in the accuracy attainable even for recognition of simple gross motor activities. To help address this challenge, this paper proposes a more interpretable complex-valued neural network design. Complex sinc filters are designed to learn frequency-based relationships directly from the complex raw radar data in the initial layer of the proposed model. The complex-valued sinc layer consists of windowed band-pass filters that learn the center frequency and bandwidth of each filter. A challenging RF dataset consisting of 100 words from American Sign Language (ASL) is selected to verify the model. About a 40% improvement in classification accuracy was achieved over the application of a 1D CNN to raw RF data, and an 8% improvement over real-valued SincNet. Our proposed approach achieved a 4% improvement in accuracy over that attained with a 2D CNN applied to micro-Doppler spectrograms, while also reducing the overall computational latency by 71%.
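The sinc layer's key idea is that each band-pass kernel is parameterized only by its cutoff frequencies, which become the learnable weights. A minimal real-valued numpy sketch of the filter construction (in CV-SincNet the kernels operate on complex raw samples; the sampling rate and kernel size here are illustrative):

```python
import numpy as np

def sinc_bandpass(f_low, f_high, kernel_size=65, fs=2000.0):
    """Windowed band-pass kernel built as the difference of two
    windowed-sinc low-pass filters with cutoffs `f_high` and `f_low`,
    tapered by a Hamming window. Only the two cutoffs would be learned
    during training; the rest of the kernel is determined analytically."""
    n = np.arange(kernel_size) - (kernel_size - 1) / 2
    t = n / fs
    lowpass = lambda fc: 2 * fc / fs * np.sinc(2 * fc * t)
    kernel = lowpass(f_high) - lowpass(f_low)
    return kernel * np.hamming(kernel_size)
```

Because the kernel is a closed-form function of two scalars, the layer is far more interpretable and parameter-efficient than a free-form first convolutional layer.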

Automatic data-driven frequency-warped cepstral feature design for micro-Doppler classification
IEEE Transactions on Aerospace and Electronic Systems, vol. 54, no. 4, pp. 1724–1738, August 2018
B. Erol, S.Z. Gurbuz, and M.G. Amin
[ Link to Pub ] [ PDF ]

Micro-Doppler signature analysis and speech processing share a common approach, as both rely on the extraction of features from the signal’s time-frequency distribution for classification. As a result, features such as the mel-frequency cepstral coefficients (MFCCs), which have shown success in speech processing, have been proposed for use in micro-Doppler classification. MFCCs were originally designed to take into account the auditory properties of the human ear by filtering the signal using a filter bank spaced according to the mel-frequency scale. However, the physics underlying radar micro-Doppler is unrelated to that of human hearing or speech. This work shows that the mel-scale filter bank results in the loss of frequency components significant to the classification of radar micro-Doppler. A novel method for frequency-warped cepstral feature design is proposed as a means of optimizing the efficacy of features in a data-driven fashion specifically for micro-Doppler analysis. It is shown that the proposed frequency-warped cepstral coefficients outperform MFCCs on both simulated and measured datasets for four-class and eight-class human activity classification problems.
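The cepstral pipeline is the same whatever the frequency warping: triangular filter-bank energies, a log, then a DCT; only the filter center spacing changes. A numpy sketch, assuming centers given explicitly so that a mel spacing or a data-driven warped spacing can be plugged in (variable names are illustrative):

```python
import numpy as np

def tri_filterbank(center_hz, fs, n_fft):
    """Triangular filters centered at `center_hz`. Mel-spaced centers
    give MFCCs; the paper instead learns the spacing from data."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    edges = np.concatenate(([0.0], center_hz, [fs / 2]))
    bank = np.zeros((len(center_hz), freqs.size))
    for i in range(len(center_hz)):
        lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = (freqs - lo) / (c - lo)
        falling = (hi - freqs) / (hi - c)
        bank[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    return bank

def dct2(x):
    """Type-II DCT, used to decorrelate the log filter-bank energies."""
    N = x.size
    n = np.arange(N)
    return np.array([(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N))).sum()
                     for k in range(N)])

def cepstral_coeffs(power_spectrum, bank, n_coeffs=12):
    """Log filter-bank energies followed by a DCT, truncated to the
    first `n_coeffs` cepstral coefficients."""
    energies = np.log(bank @ power_spectrum + 1e-12)
    return dct2(energies)[:n_coeffs]
```

The paper's contribution lies in choosing `center_hz` to preserve the frequency components that matter for micro-Doppler, rather than those that matter for hearing.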

DNN transfer learning from diversified micro-Doppler for motion classification 
IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 5, pp. 2164–2180, Oct. 2019
B. Erol, S.Z. Gurbuz, M.G. Amin
https://arxiv.org/abs/1811.08361v1 (November 2018)
[ Link to Pub ] [ PDF ]

Recently, deep neural networks (DNNs) have been the subject of intense research for the classification of radio frequency (RF) signals, such as synthetic aperture radar (SAR) imagery or micro-Doppler signatures. However, a fundamental challenge is the typically small amount of data available, due to the high costs and resources required for measurements. Small datasets limit the depth of the DNNs that can be implemented, and thus limit performance. In this work, a novel method for generating diversified radar micro-Doppler signatures using Kinect-based motion capture simulations is proposed as a training database for transfer learning with DNNs. In particular, it is shown that, together with residual learning, the proposed DivNet approach allows for the construction of deeper neural networks and offers improved performance in comparison to transfer learning from optical imagery. Furthermore, it is shown that initializing the network using diversified synthetic micro-Doppler signatures enables not only robust performance for previously unseen target profiles, but also class generalization. Results are presented for 7-class and 11-class human activity recognition scenarios using a 4-GHz continuous-wave (CW) software-defined radar.

A Kinect-Based Human Micro-Doppler Simulator
IEEE Aerospace and Electronic Systems Magazine, vol. 30, no. 5, May 2015
B. Erol, C. Karabacak, S.Z. Gürbüz
Link to Pub ] [ PDF ]

Until recently, human surveillance has primarily been accomplished using video cameras. However, radar offers unique advantages over optical sensors, such as the ability to operate at long ranges, under adverse weather conditions, and at night, when optical devices are unable to acquire meaningful data. Radar is capable of recognizing human activities by classifying the micro-Doppler signature of a subject. Micro-Doppler is caused by any rotating or vibrating parts of a target, and results in frequency modulations centered about the main Doppler shift caused by the translational motion of the target. Thus, the rotation of a helicopter blade, the wheels of a vehicle, or the treads of a tank all produce micro-Doppler. In the case of humans, the complex motion of the limbs that occurs in the course of any activity results in a micro-Doppler signature visually distinguishable from those of other targets, even animals, which can then be exploited for human detection, automatic target recognition (ATR), and activity classification.
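The modulation described above follows directly from the two-way Doppler relation f_D = 2 v_r / λ: for a rotating part, the radial velocity oscillates sinusoidally, so the Doppler shift does too. A short sketch for a single blade tip rotating in a plane containing the radar line of sight (parameter names and values are illustrative):

```python
import numpy as np

def blade_tip_doppler(rot_hz, radius_m, wavelength_m, t):
    """Instantaneous Doppler shift of a blade tip at radius `radius_m`
    rotating at `rot_hz`, seen by a radar in the rotation plane. The
    radial velocity is a sinusoid, so the micro-Doppler traces a
    sinusoid centered on the body's bulk Doppler (taken as zero here)."""
    omega = 2 * np.pi * rot_hz          # angular rate, rad/s
    v_radial = omega * radius_m * np.cos(omega * t)
    return 2 * v_radial / wavelength_m  # two-way Doppler relation
```

The peak excursion 2ωR/λ is what makes fast-rotating parts (e.g., rotor blades) spread energy far from the body Doppler line in a spectrogram.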