Lectures
You can download the lectures here. We will try to upload each lecture before its corresponding class.
-
(dl-0) Introduction
tl;dr: Introduction to Deep Learning, a brief history of Artificial Neural Networks, and the logistics of this course.
[slides]
-
-
(dl-3) MLP - Representational Power of Networks of Neurons
tl;dr: A multi-layered network of neurons can learn any arbitrary function!
[slides]
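A quick illustrative sketch (not part of the course materials; all weights are hand-picked for this toy example): two steep sigmoid units in a single hidden layer build a localized "bump", and sums of such bumps are how a one-hidden-layer net can approximate arbitrary 1-D functions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two steep sigmoid units build a "bump" that is ~1 on (0.3, 0.6) and ~0 elsewhere;
# summing many such bumps approximates arbitrary 1-D functions.
x = np.linspace(0, 1, 200)
step_at_03 = sigmoid(100.0 * x - 30.0)   # turns on near x = 0.3
step_at_06 = sigmoid(100.0 * x - 60.0)   # turns on near x = 0.6
bump = step_at_03 - step_at_06           # output weights +1 and -1

print(bump[(x > 0.35) & (x < 0.55)].min().round(3))  # close to 1 inside the bump
print(bump[(x < 0.2) | (x > 0.7)].max().round(3))    # close to 0 outside
```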
-
(dl-4) Optimization (Gradient Descent)
tl;dr: It's all about finding the right set of model parameters!
[slides]
Suggested Readings:
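For intuition, here is a minimal numpy sketch of vanilla gradient descent on a made-up least-squares problem; the data, learning rate, and step count are all illustrative, not from the lecture.

```python
import numpy as np

# Toy problem: recover w_true in y = X @ w_true under a squared-error loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true

w = np.zeros(3)                        # the model parameters we want to find
lr = 0.1                               # learning rate (step size)
for step in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)   # gradient of 0.5 * mean squared error
    w -= lr * grad                     # gradient descent update
print(np.round(w, 3))                  # should be close to w_true
```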
-
(dl-5) Backpropagation-1
tl;dr: Gradient Descent applied to a simple computational graph
[slides]
Suggested Readings:
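An illustrative sketch (not from the slides) of backpropagation on the tiny graph f = (x + y) * z, applying the chain rule one node at a time; the input values are arbitrary.

```python
# Backprop on a tiny computational graph: f = (x + y) * z
x, y, z = 3.0, -4.0, 2.0

# forward pass
q = x + y          # q = -1
f = q * z          # f = -2

# backward pass (chain rule, one node at a time)
df_dq = z          # d(q*z)/dq
df_dz = q          # d(q*z)/dz
df_dx = df_dq * 1  # dq/dx = 1
df_dy = df_dq * 1  # dq/dy = 1
print(df_dx, df_dy, df_dz)   # 2.0 2.0 -1.0
```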
-
(dl-6) Backpropagation-2
tl;dr: Gradient Descent applied to an MLP and more!
[slides]
Suggested Readings:
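A toy numpy sketch (illustrative only) of the forward and backward passes for a one-hidden-layer MLP with a ReLU and squared-error loss; the layer sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5))          # batch of 4 inputs, 5 features each
t = rng.normal(size=(4, 2))          # targets
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

# forward pass
h_pre = x @ W1 + b1
h = np.maximum(h_pre, 0)             # ReLU hidden layer
y = h @ W2 + b2
loss = 0.5 * np.mean((y - t) ** 2)

# backward pass: propagate gradients layer by layer
dy = (y - t) / t.size                # dL/dy
dW2 = h.T @ dy
db2 = dy.sum(axis=0)
dh = dy @ W2.T
dh_pre = dh * (h_pre > 0)            # gradient through the ReLU
dW1 = x.T @ dh_pre
db1 = dh_pre.sum(axis=0)
print(loss, dW1.shape, dW2.shape)    # one gradient per parameter matrix
```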
-
(dl-7) Cross-Entropy Loss
tl;dr: The loss function for training NN classifiers
[slides]
Suggested Readings:
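A small numpy sketch (illustrative, not an official reading) of softmax followed by the cross-entropy loss over a batch of logits; the example logits and labels are made up.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-probability assigned to the correct class."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])
labels = np.array([0, 2])             # correct classes
print(cross_entropy(logits, labels))  # small, since both predictions are right
```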
-
(dl-8) Building Blocks of CNNs
tl;dr: Basic constituent elements of Convolutional Neural Networks (CNNs)
[slides]
Suggested Readings:
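For intuition, a naive numpy sketch (illustrative only, nothing like how real libraries implement these) of two basic CNN building blocks: a 2-D convolution and max pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution (really cross-correlation, as used in CNNs)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.random.rand(8, 8)
edge = np.array([[1.0, 0.0, -1.0]] * 3)    # a simple vertical-edge filter
print(max_pool(conv2d(img, edge)).shape)   # (3, 3)
```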
-
(dl-9) Evolution of CNN Architectures
tl;dr: How different families of architectures evolved over time
[slides]
Suggested Readings:
-
-
-
(dl-12) Visualizing and Understanding CNNs
tl;dr: What do CNNs learn? Why do they predict what they predict?
[slides]
Suggested Readings:
- Rich Feature Hierarchies by Girshick et al. CVPR 2014
- Visualizing and Understanding CNNs by Zeiler and Fergus ECCV 2014
- Deep Inside CNNs by Simonyan et al. ICLRW 2014
- Class Activation Maps (CAM) by Zhou et al. CVPR 2016
- Grad-CAM by Selvaraju et al. NIPSW 2016, ICCV 2017
- CNN-Fixations by Mopuri et al. TIP 2018
- Texture Synthesis using CNNs by Gatys et al. NeurIPS 2015
- DeepDream
- Neural style transfer by Gatys et al. CVPR 2016
- Understanding Deep Representations by Inverting Them by Mahendran et al. CVPR 2015
- Inverting Visual Representations with CNNs by Dosovitskiy et al. CVPR 2016
-
(dl-13) Recurrent Neural Networks
tl;dr: Deep learning models to handle sequence prediction tasks
[slides]
Suggested Readings:
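A minimal numpy sketch (illustrative only) of a vanilla RNN cell unrolled over a toy sequence; the sizes and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, inp = 16, 8
Wxh = rng.normal(scale=0.1, size=(inp, hidden))
Whh = rng.normal(scale=0.1, size=(hidden, hidden))
b = np.zeros(hidden)

def rnn_step(x_t, h_prev):
    """One step of a vanilla RNN: new state from the current input and the old state."""
    return np.tanh(x_t @ Wxh + h_prev @ Whh + b)

h = np.zeros(hidden)
sequence = rng.normal(size=(10, inp))   # a toy sequence of 10 input vectors
for x_t in sequence:
    h = rnn_step(x_t, h)                # the same weights are reused at every step
print(h.shape)                          # (16,)
```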
-
(dl-14) Word Embeddings
tl;dr: How do we represent words (or language) as vectors?
[slides]
Suggested Readings:
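A tiny illustrative sketch (the vocabulary and vectors below are made up; real embeddings are learned, e.g. by word2vec or GloVe) of looking up word vectors and comparing them with cosine similarity.

```python
import numpy as np

# Toy embedding table: each word maps to a row of E.
vocab = {"king": 0, "queen": 1, "apple": 2}
E = np.array([[0.90, 0.80, 0.10],
              [0.85, 0.75, 0.20],
              [0.10, 0.20, 0.90]])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

king, queen, apple = (E[vocab[w]] for w in ("king", "queen", "apple"))
print(cosine(king, queen))  # high: related words end up close together
print(cosine(king, apple))  # lower: unrelated words are further apart
```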
-
(dl-15) Encoder-Decoder Models and Attention
tl;dr: Configuring a pair of NNs for sequence-to-sequence learning problems, and learning to focus selectively
[slides]
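An illustrative numpy sketch (not from the slides) of scaled dot-product attention: a decoder query attends over encoder states and returns a weighted context vector; all shapes and data are made up.

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: a weighted sum of values, where the
    weights say how much each query should focus on each key."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ values, weights

rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 4))     # 6 encoder states (used as keys and values)
dec = rng.normal(size=(1, 4))     # 1 decoder query
context, w = attention(dec, enc, enc)
print(context.shape, w.round(2))  # (1, 4) and attention weights that sum to 1
```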
-
-
(dl-17) Generative Models
tl;dr: Tools to model the density over the space of data.
[slides]
Suggested Readings:
-
-
(dl-19) Variational Autoencoders
tl;dr: Autoencoders but also generative models!
[slides]
Suggested Readings:
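An illustrative numpy sketch (not a working VAE; the encoder outputs below are made up) of the reparameterization trick and the closed-form KL term that appears in the ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend an encoder network produced these for one input x
# (in a real VAE both are outputs of a neural network).
mu, logvar = np.array([0.5, -0.2]), np.array([-1.0, 0.3])

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow back through mu and logvar.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# KL(q(z|x) || N(0, I)) in closed form: the regularizer in the ELBO; the other
# ELBO term is the reconstruction log-likelihood from the decoder.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
print(z, kl)
```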
-
(dl-20) Generative Adversarial Networks (GAN)
tl;dr: A pair of NNs that compete adversarially to realize implicit density modelling!
[slides]
Suggested Readings:
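An illustrative numpy sketch (not a working GAN; the "discriminator" is just a logistic stand-in and the data is synthetic) of the discriminator and generator losses that the two networks alternately minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    """Stand-in for a real discriminator network: outputs P(x is real)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

w = rng.normal(size=3)
real = rng.normal(loc=2.0, size=(8, 3))   # samples from the data
fake = rng.normal(loc=0.0, size=(8, 3))   # samples from the generator

d_real, d_fake = discriminator(real, w), discriminator(fake, w)

# The discriminator wants real -> 1 and fake -> 0; the generator wants the
# opposite for its fakes. The two are trained alternately on these losses.
d_loss = -np.mean(np.log(d_real + 1e-8) + np.log(1 - d_fake + 1e-8))
g_loss = -np.mean(np.log(d_fake + 1e-8))   # "non-saturating" generator loss
print(d_loss, g_loss)
```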