Research related to the Adaptability aspect of DNNs

Back to Home

Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data
Gaurav Kumar Nayak, Konda Reddy Mopuri, Saksham Jain, Anirban Chakraborty
IEEE Trans. on PAMI, 2021
PDF

Pretrained deep models hold their learnt knowledge in the form of the model parameters. These parameters act as memory for the trained models and help them generalize well on unseen data. However, in the absence of training data, the utility of a trained model is limited to either inference or better initialization towards a target task. In this paper, we go further and extract synthetic data by leveraging the learnt model parameters. We dub these "Data Impressions", which act as a proxy to the training data and can be used to realize a variety of tasks. They are useful in scenarios where only the pretrained models are available and the training data is not shared (e.g., due to privacy or sensitivity concerns). We show the applicability of data impressions in solving several computer vision tasks such as unsupervised domain adaptation, continual learning, and knowledge distillation. Extensive experiments performed on several benchmark datasets demonstrate competitive performance achieved using data impressions in the absence of the original training data.
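The core idea — recovering synthetic inputs from a frozen model alone — can be sketched as below. This is a hedged illustration, not the paper's exact recipe: the optimization loop, the Dirichlet-sampled soft labels, and all hyperparameters are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def synthesize_data_impression(teacher, target_probs, shape=(1, 3, 32, 32),
                               steps=200, lr=0.05, temperature=1.0):
    """Optimize a random input until the frozen teacher's softened output
    matches a sampled soft-label vector (e.g., drawn from a Dirichlet)."""
    teacher.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = teacher(x)
        # Match the teacher's softened prediction to the target soft label
        loss = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                        target_probs, reduction="batchmean")
        loss.backward()
        opt.step()
    return x.detach()  # one "data impression" for this soft label
```

Repeating this for many sampled soft labels would yield a surrogate set that can stand in for the unavailable training data.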

Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation
Konda Reddy Mopuri*, Gaurav Kumar Nayak*, Anirban Chakraborty
WACV, 2021
PDF

Knowledge Distillation (KD) is an effective method to transfer learned knowledge across DNNs. Typically, the dataset originally used for training the Teacher is chosen as the "Transfer Set" to conduct KD. However, this data may not always be available. In such scenarios, existing approaches either iteratively compose a synthetic set representative of the original training dataset, or learn a generative model to compose such a transfer set. Both these approaches, however, involve complex optimization and are computationally expensive. As a simple alternative, we investigate the effectiveness of "arbitrary transfer sets" such as random noise and publicly available synthetic/natural datasets that are completely unrelated to the original training dataset.
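For context, vanilla KD on such an arbitrary transfer set might look like the following sketch; the models, loader, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def distill_on_arbitrary_set(teacher, student, transfer_loader,
                             epochs=1, lr=1e-3, temperature=4.0):
    """Vanilla KD: the student mimics the teacher's softened outputs on an
    arbitrary transfer set (e.g., random noise or unrelated images)."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in transfer_loader:
            with torch.no_grad():
                t_logits = teacher(x)          # no labels are needed
            s_logits = student(x)
            loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                            F.softmax(t_logits / temperature, dim=1),
                            reduction="batchmean") * temperature ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Note that no ground-truth labels enter the loop — only the teacher's outputs supervise the student, which is what makes an unrelated transfer set usable.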

Zero-Shot Knowledge Distillation in Deep Neural Networks
Konda Reddy Mopuri*, Gaurav Kumar Nayak*, Vaisakh Shaj*, Anirban Chakraborty, R. Venkatesh Babu
ICML, 2019
PDF / Codes

We aim to develop novel data-free methods to train the Student from the Teacher. Without using any meta-data about the target dataset, we attempt to synthesise samples ("Data Impressions") from the complex Teacher model and utilise these as surrogates for the original training data to transfer its learning to the Student via knowledge distillation. We therefore dub this procedure "Zero-Shot Knowledge Distillation".

Learning Representations with Strong Supervision for Image Search
Konda Reddy Mopuri, Vishal B Athreya, R. Venkatesh Babu
SPCOM, 2018 [Best Paper]
Link

Tasks such as scene retrieval suffer from features learned under label-level weak supervision and require stronger supervision to better understand the contents of an image. In this paper, we exploit the features learned by caption-generating models to learn novel task-specific image representations. In particular, we consider a captioning system and a dense region description model, and demonstrate that, owing to the richer supervision provided during their training, the features learned by them are better than those of CNNs trained on object recognition.

Towards semantic visual representation: augmenting image representation with natural language descriptors
Konda Reddy Mopuri, R. Venkatesh Babu
ICVGIP, 2016
Link

We attempt to enrich the image representation with tag encodings that leverage their semantics. Our approach utilizes neural-network-based natural language descriptors to represent the tag information. By complementing the visual features learned by convnets, our approach results in an efficient multi-modal image representation.
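One simple way to realize such a fused representation — a sketch assuming the tag encodings are pretrained word embeddings that are averaged, normalized, and concatenated with the CNN feature — is:

```python
import numpy as np

def fuse_image_and_tags(visual_feat, tag_embeddings):
    """Build a multi-modal image representation by concatenating an
    L2-normalized CNN feature with the averaged, L2-normalized tag
    embeddings (one embedding vector per tag)."""
    v = visual_feat / (np.linalg.norm(visual_feat) + 1e-12)
    t = tag_embeddings.mean(axis=0)          # pool the per-tag vectors
    t = t / (np.linalg.norm(t) + 1e-12)
    return np.concatenate([v, t])
```

Normalizing each modality separately before concatenation keeps either feature from dominating purely by magnitude.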

Object Level Deep Feature (OLDF) Pooling for Compact Image Representation
Konda Reddy Mopuri, R. Venkatesh Babu
Deep Vision Workshop, CVPR, 2015
PDF

We demonstrate the effectiveness of an objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to the spatial layout of the objects in the scene and achieves invariance to general geometric transformations such as translation, rotation, and scaling.
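A minimal sketch of this kind of objectness-guided pooling; the top-k selection and max-pooling here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def objectness_pool(region_features, objectness_scores, top_k=20):
    """Pool per-region CNN features into one compact image vector, using
    objectness scores as a prior to keep only object-like regions."""
    # Indices of the top-k regions by objectness, highest first
    order = np.argsort(objectness_scores)[::-1][:top_k]
    # Max-pool across the selected regions, dimension-wise
    return region_features[order].max(axis=0)
```

Because the pooling discards region positions, the resulting vector is insensitive to where the objects appear in the frame.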
