Research on the robustness of DNNs

Adversarial Fooling beyond Flipping the Label
Konda Reddy Mopuri*, Vaisakh Shaj*, R. Venkatesh Babu
AMLCV, CVPR, 2020
PDF

Analyzes adversarial attacks beyond the fooling rate. Existing metrics consider only the percentage of label flips; we propose metrics that measure an attack's strength on visual and semantic scales.
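
As a rough illustration (not the paper's exact metrics), one can weight each label flip by how far the adversarial label lands from the clean one in a semantic hierarchy such as WordNet; the metric and function below are hypothetical sketches of this idea.

```python
# Hypothetical sketch: score label flips by semantic severity using
# Wu-Palmer similarity in WordNet (requires: nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def semantic_flip_severity(clean_labels, adv_labels):
    """Return (fooling rate, mean semantic distance over flipped pairs)."""
    severities = []
    for c, a in zip(clean_labels, adv_labels):
        if c == a:                          # label not flipped: no contribution
            continue
        sc, sa = wn.synsets(c)[0], wn.synsets(a)[0]
        severities.append(1.0 - sc.wup_similarity(sa))
    fooling_rate = len(severities) / len(clean_labels)
    return fooling_rate, sum(severities) / max(len(severities), 1)

# Flipping 'dog' -> 'wolf' is a mild failure; 'dog' -> 'airliner' is severe.
print(semantic_flip_severity(['dog', 'cat'], ['wolf', 'airliner']))
```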

BatchOut: Batch-level feature augmentation to improve robustness to adversarial examples
Akshayvarun Subramanya, Konda Reddy Mopuri, R. Venkatesh Babu
ICVGIP, 2018

Performs feature-space augmentation to learn robust deep neural networks. In other words, it performs efficient adversarial training in the feature domain rather than in the expensive image space.
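
A minimal sketch of the idea, assuming a network split into a feature extractor and a classifier; the batch-pairing strategy and step size k below are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

# toy feature extractor + classifier (stand-ins for a real CNN)
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1))
classifier = nn.Linear(8, 10)
criterion = nn.CrossEntropyLoss()

def batchout_step(x, y, k=0.05):
    f = features(x).flatten(1)               # clean intermediate features
    perm = torch.randperm(x.size(0))         # pair each sample with another
    f_aug = f + k * (f[perm] - f).detach()   # shift features toward its pair
    return criterion(classifier(f_aug), y)   # train on augmented features

loss = batchout_step(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))
loss.backward()
```

Because the perturbation is applied to features rather than pixels, no inner attack optimization is needed, which is what makes this augmentation cheap compared to image-space adversarial training.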

GAT: Gray-box Adversarial Training
Vivek B S, Konda Reddy Mopuri, R. Venkatesh Babu
ECCV, 2018
PDF

We demonstrate that the pseudo-robustness of adversarially trained models stems from shortcomings in the existing evaluation procedure. To improve the evaluation, we present a procedure built on robustness plots and a derived metric (worst-case performance) that assess the susceptibility of learned models. Further, harnessing these observations, we propose a novel variant of adversarial training, termed Gray-box Adversarial Training, to learn robust models.
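
One simple way to realize such a plot, sketched below under assumptions not taken from the paper (FGSM as the attack, an epsilon grid as the sweep): record accuracy at each attack strength and report the minimum as the worst-case performance.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def worst_case_performance(model, x, y, eps_grid):
    accs = []
    for eps in eps_grid:                      # one point per attack strength
        preds = model(fgsm(model, x, y, eps)).argmax(1)
        accs.append((preds == y).float().mean().item())
    return accs, min(accs)                    # robustness curve + worst case

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
curve, worst = worst_case_performance(model, x, y, [0.0, 0.02, 0.04, 0.08])
```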

Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions
Konda Reddy Mopuri*, Phani Krishna Uppala*, R. Venkatesh Babu
ECCV, 2018
PDF

First attempt to capture the distribution of UAPs for a given CNN classifier in the absence of training data. We extract proxy data, called Class Impressions, from the target classifier to craft image-agnostic adversarial perturbations. In an adversarial machine learning framework, we learn a GAN-inspired generative model to capture the set of UAPs for one or more CNN classifiers. The learned generative model can act as an oracle, seeding UAPs that fool not only the known classifiers but also ones unseen during its training.
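
A minimal sketch of synthesizing a single class impression, assuming a frozen, pretrained classifier; initialization, optimizer, and step count are illustrative.

```python
import torch

def class_impression(model, target_class, shape=(1, 3, 224, 224),
                     steps=200, lr=0.1):
    x = torch.rand(shape, requires_grad=True)   # start from random noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]       # maximize the class's logit
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)                     # stay in a valid image range
    return x.detach()                           # proxy sample for that class
```

Impressions generated this way stand in for the missing training data when fitting the generative model of perturbations.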

NAG: Network for Adversary Generation
Konda Reddy Mopuri*, Utkarsh Ojha*, Utsav Garg, R. Venkatesh Babu
ECCV, 2018
PDF

First attempt to capture the distribution of UAPs for a given CNN classifier. In an adversarial machine learning framework, we learn a GAN-inspired generative model to capture the set of UAPs for one or more target CNN classifiers. The learned generative model can act as an oracle, seeding UAPs that fool not only the known classifiers but also ones unseen during its training.
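
A minimal sketch of this setup, with a toy generator and a simplified fooling loss; the architecture, the 32x32 input size, and the omission of the paper's diversity term are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbGen(nn.Module):
    """Map a latent vector to an L_inf-bounded image-agnostic perturbation."""
    def __init__(self, zdim=64, eps=10 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Linear(zdim, 3 * 32 * 32), nn.Tanh())

    def forward(self, z):
        return self.eps * self.net(z).view(-1, 3, 32, 32)

def fooling_loss(target_model, gen, x, z):
    delta = gen(z)                                    # candidate UAP
    clean_pred = target_model(x).argmax(1)            # benign prediction
    log_probs = F.log_softmax(target_model(x + delta), dim=1)
    # minimizing the clean label's log-probability encourages a label flip
    return log_probs.gather(1, clean_pred.unsqueeze(1)).mean()
```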

GD-UAP: Generalizable and data-free objective across vision tasks to craft UAPs
Konda Reddy Mopuri*, Aditya Ganeshan*, R. Venkatesh Babu
Trans. on PAMI, 2018
arXiv / Codes

A generalized, data-free objective for crafting image-agnostic adversarial perturbations. Independent of the underlying task, our objective achieves fooling by corrupting the features extracted at multiple layers, and is therefore generalizable across vision tasks such as object recognition, semantic segmentation, and depth estimation. In the black-box attack scenario, our objective outperforms data-dependent objectives. Further, by exploiting simple priors about the data distribution, it remarkably boosts the fooling ability of the crafted perturbations.
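
A minimal sketch of the data-free objective: feed only the perturbation through the network and inflate the activations it produces at several layers, while keeping it inside an L_inf ball. The layer choice, optimizer, and budget below are assumptions for illustration.

```python
import torch

def gd_uap(model, layers, shape=(1, 3, 224, 224), eps=10 / 255,
           steps=1000, lr=0.01):
    acts = []
    hooks = [l.register_forward_hook(lambda m, i, o: acts.append(o))
             for l in layers]
    delta = torch.empty(shape).uniform_(-eps, eps).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        acts.clear()
        opt.zero_grad()
        model(delta)                                    # no data needed
        loss = -sum(torch.log(a.norm()) for a in acts)  # inflate activations
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                    # imperceptibility budget
    for h in hooks:
        h.remove()
    return delta.detach()
```

Nothing in this loss refers to labels or task outputs, which is why the same objective transfers across recognition, segmentation, and depth-estimation models.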

Fast Feature Fool: A data independent approach to universal adversarial perturbations
Konda Reddy Mopuri*, Utsav Garg*, R. Venkatesh Babu
BMVC, 2017
arXiv / Codes

We propose the first data-free approach to generate image-agnostic perturbations for CNNs trained for object recognition. These perturbations are transferable across multiple network architectures trained on either the same or different data. In the absence of data, our method efficiently generates universal perturbations by fooling the features learned at multiple layers, thereby causing CNNs to misclassify.
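
This objective is a close relative of the GD-UAP sketch above; under the same hook setup, an illustrative FFF-style loss swaps the activation norm for the mean post-ReLU activation at each convolutional layer.

```python
# illustrative variant of the loss line in the sketch above: maximize the
# product of mean activations (in log form; epsilon added for stability)
loss = -sum(torch.log(a.relu().mean() + 1e-12) for a in acts)
```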
