I obtained my Ph.D. in Computer Science at Washington State University, where I was advised by Professor Jana Doppa.
My general research interests are in the area of robust and trustworthy machine learning. My research focuses on developing efficient algorithms and
theory to improve the reliability and safety of deep learning across diverse problem settings and data domains. My current work includes:
Robustness and reliability of deep learning models for the time-series domain with diverse applications including
mobile health, smart grid management, human activity monitoring, and agriculture automation.
Trustworthy machine learning for sequential data.
Uncertainty quantification for robust and effective Human-ML collaborative systems using conformal prediction.
Current Research Projects
Adversarial Robustness against Time-Series Perturbations
The rapid growth of the Internet of Things (IoT), mobile applications, and data analytics has produced many systems built on predictive models over time-series data collected from diverse sources. Important applications include smart home automation, mobile health,
smart grid management, and finance. Safe and reliable deployment of such machine learning (ML) systems requires robustness to
adversarial and natural perturbations of time-series data.
*Depiction of real-world data affected by noise.
When the collected data is distorted, is the commonly used Minkowski distance still appropriate for comparing the similarity of signals?
Time-series perturbations such as time shifts and frequency distortions significantly degrade an ML model's ability to classify inputs correctly.
How can we enhance the robustness of these classifiers using adversarial perturbations and appropriate similarity measures between
time-series instances?
Related papers: TSA-STAT (JAIR'22) &
DTW-AR (TPAMI'22)
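As a toy illustration of why lock-step Minkowski distances are brittle under time shifts, the sketch below (a minimal NumPy example, not code from the papers above) compares a lock-step l1 distance with a classic dynamic-time-warping (DTW) distance on a slightly shifted sinusoid:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
x = np.sin(t)                # clean signal
x_shift = np.sin(t + 0.5)    # same signal with a small time shift

# Lock-step Minkowski (p=1) distance vs. elastic DTW distance.
lockstep = np.sum(np.abs(x - x_shift))
dtw = dtw_distance(x, x_shift)
```

The time shift inflates the lock-step distance point by point, while DTW realigns the two signals and reports a much smaller dissimilarity, which is the intuition behind using elastic measures for perturbed time-series data.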
*Depiction of real-world data affected by sensor rotation.
Time-series data acquisition is prone to natural perturbations (such as hardware orientation changes or amplified noise) that significantly distort the data.
Such distortion yields signals that appear dissimilar under Minkowski distances, while elastic measures, which handle such distortions better, are expensive to compute. Therefore, how can we enhance the robustness of deep models during training
so that they correctly recognize distorted inputs?
Related papers: RO-TS (AAAI'22) &
StatOpt (ICCAD'22)
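The min-max idea behind robust training can be sketched in a few lines. The example below is a generic adversarial-training loop on a toy logistic-regression model with an FGSM-style inner maximization under an l-infinity budget; it is an assumption-laden illustration, not the RO-TS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs standing in for time-series features.
X = np.vstack([rng.normal(-1, 1, (100, 20)), rng.normal(1, 1, (100, 20))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_w(w, X, y):
    """Gradient of the mean logistic loss with respect to the weights."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def perturb(w, X, y, eps):
    """Inner maximization: one gradient-sign step under an l-inf budget eps."""
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # d(loss_i) / d(x_i)
    return X + eps * np.sign(grad_x)

w = np.zeros(20)
eps, lr = 0.1, 0.5
for _ in range(200):
    X_adv = perturb(w, X, y, eps)        # approximate worst-case inputs
    w -= lr * loss_grad_w(w, X_adv, y)   # outer minimization step

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
```

Training on the perturbed inputs rather than the clean ones is what makes the outer step a (heuristic) solution to the min-max objective.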
Out-Of-Distribution (OOD) Detection for Safety-Critical Applications
One failure scenario in AI safety is confident predictions on Out-Of-Distribution (OOD) examples, especially in real-world contexts with low tolerance for error.
Such examples, unseen during training or outside the intended deployment context, pose a significant risk of unsafe decision-making.
OOD detectors can enable AI systems to function reliably when presented with unforeseen examples and can improve the predictive uncertainty estimates of deep learning models.
Related papers: SR Score (TIST'23)
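To make the idea concrete, here is a minimal score-based OOD detector using the standard maximum-softmax-probability baseline (a generic illustration with synthetic stand-in logits, not the SR score from the paper above):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-in logits: in-distribution (ID) inputs yield peaked logits
# (one class boosted), OOD inputs yield flat, uninformative logits.
logits_id = rng.normal(0, 1, (500, 10))
logits_id[np.arange(500), rng.integers(0, 10, 500)] += 6
logits_ood = rng.normal(0, 1, (500, 10))

msp_id = softmax(logits_id).max(axis=1)    # maximum softmax probability
msp_ood = softmax(logits_ood).max(axis=1)

# Calibrate a threshold on held-out ID scores (5th percentile) and
# flag any input scoring below it as OOD.
tau = np.quantile(msp_id, 0.05)
detect_rate = np.mean(msp_ood < tau)
```

The same recipe (score function plus threshold calibrated on in-distribution data) underlies most score-based detectors; stronger scores simply separate the two score distributions better.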
Uncertainty Quantification for Reliable Machine Learning
In common practice, models are trained on specific data and distributed as black boxes.
A significant concern with this practice is the difficulty of rigorously quantifying the uncertainty of black-box models and capturing how far predictions deviate from the ground truth.
Safe deployment of deep neural networks in high-stakes real-world applications requires
theoretically sound uncertainty quantification. We study conformal prediction for
uncertainty quantification of deep models to build effective human-ML collaborative systems. We particularly focus on overcoming challenges such as imbalanced data distributions, data heterogeneity, distribution shift, and coverage for sub-populations.
Related papers: NCP (AAAI'23) &
aPRCP (UAI'23)
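A minimal sketch of split conformal prediction for regression, assuming a toy linear "black-box" model f: residuals on a held-out calibration set give a quantile q, and the interval [f(x) - q, f(x) + q] covers the true label with probability at least 1 - alpha:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data: y = 2x + noise. Any pretrained black-box
# predictor could replace f below.
x = rng.uniform(-1, 1, 2000)
y = 2 * x + rng.normal(0, 0.3, 2000)
f = lambda x: 2 * x

# Split conformal: compute nonconformity scores on a calibration split.
x_cal, y_cal = x[:1000], y[:1000]
x_test, y_test = x[1000:], y[1000:]

alpha = 0.1
scores = np.abs(y_cal - f(x_cal))
n = len(scores)
# Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Empirical coverage of the interval [f(x) - q, f(x) + q] on test data.
covered = np.mean(np.abs(y_test - f(x_test)) <= q)
```

The coverage guarantee is distribution-free and holds for any black-box predictor; the research challenges mentioned above (imbalance, heterogeneity, distribution shift) arise when this basic exchangeability assumption is strained.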
Past Research Projects
Cybersecurity for Wireless Implantable Medical Devices
Implantable and Wearable Medical Devices (IMDs) are an emerging technology in personal healthcare. They enable efficient diagnostics and scalable,
real-time monitoring of a patient's health status. Information security is a serious challenge for
these devices, as malicious attacks threaten the health and/or the privacy of patients.
At the same time, IMD architectures have limited resources, such as energy supply, processing power, and memory.
Hence, balancing security and confidentiality with the efficiency of these devices is essential for IMD technologies to progress.
Highlighted papers: Biometric-based authentication &
IMD plain-text authentication