Deep Learning / Jan-May 2024

Updates

  • Apr 18: New Lecture is up: (dl-21) Diffusion Models [slides]
  • Apr 15: New Lecture is up: (dl-20) Generative Adversarial Networks (GAN) [slides]
  • Apr 01: New Lecture is up: (dl-19) Variational Autoencoders [slides]
  • Mar 20: New Lecture is up: (dl-18) Generative Models [slides]
  • Mar 18: New Lecture is up: (dl-17) Autoencoders [slides]
  • Mar 11: Assignment-3 is up; the deadline is Mar 25!
  • Mar 11: New Lecture is up: (dl-16) Applications of Transformers [slides]

Course Description

Deep Learning has lately become the driving force behind numerous high-performing AI/ML products deployed in the real world across diverse disciplines. Tech giants such as Google, Microsoft, Facebook, and Amazon have been heavily recruiting Deep Learning talent in the past few years to develop applications in Computer Vision, Natural Language Processing, etc. Various organizations, such as OpenAI and DeepMind, that aim to develop safe and responsible Artificial General Intelligence (AGI) also conduct research in Deep Learning. Hence, it has recently become one of the most sought-after courses. In this course, we will discuss (with a focus on implementation) the building blocks required to realize Deep Learning solutions, all the way from a simple neuron model to Generative AI.

Deep Learning (AI2100, AI5100 and CS5480) Course Contents

Starting from an artificial neuron model, the aim of this course is to understand feed-forward and recurrent architectures of Artificial Neural Networks, all the way to the latest Generative AI models driven by Deep Neural Networks. Specifically, we will discuss the basic neuron models (McCulloch-Pitts, Perceptron), Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN, LSTM, and GRU). We will study these models’ representational ability and how to train them using Gradient Descent via the Backpropagation algorithm. We will then discuss the encoder-decoder architecture, the attention mechanism and its variants, followed by self-attention and Transformers. The next part of the course will be on Generative AI, wherein we will discuss Variational Autoencoders, GANs, Diffusion Models, GPT, BERT, etc. We will briefly discuss multi-modal representation learning (e.g., CLIP). Towards the end, students will be briefly exposed to some of the advanced topics and/or recent trends in deep learning.
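As a flavor of the implementation focus, here is a minimal sketch (not course material; all names and hyperparameters are illustrative) of training a single sigmoid neuron with gradient descent to learn the AND function — the same chain-rule gradient computation that Backpropagation generalizes to multi-layer networks:

```python
import numpy as np

# Hypothetical illustration: one sigmoid neuron learning AND via gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 0.0, 0.0, 1.0])                           # AND targets

w = rng.normal(size=2) * 0.1  # small random initial weights
b = 0.0
lr = 0.5                      # learning rate (illustrative choice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    p = sigmoid(X @ w + b)
    # For binary cross-entropy with a sigmoid output, the gradient of the
    # loss w.r.t. the pre-activation simplifies to (p - y).
    grad_z = (p - y) / len(X)
    w -= lr * (X.T @ grad_z)  # gradient descent step on weights
    b -= lr * grad_z.sum()    # and on the bias

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)
```

After training, the neuron classifies all four input patterns of AND correctly; stacking such neurons into layers and propagating these gradients backwards through each layer is exactly the Backpropagation algorithm covered in the course.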

Prerequisites

A course on Machine Learning (e.g., AI2000, CS3390, EE2802, EE5913, EE5610) and programming experience in Python.

Logistics

Class Room: A-LH1

Timings: Slot-B (Monday-10:00-10:55, Wednesday-09:00-09:55, Thursday-11:00-11:55)

Visit this page regularly for updates and information regarding the course.


Instructors

Teaching Assistants

Susmit Agrawal

Sairam Rebbapragada

Deepika Vemuri

Rupa Kumari

Kartik Srinivas

Savarana Datta Reddy

Lokesh Badisa

Adhvik Mani Sai

Dhatri Nanda

Vojeswitha Reddy

Vikhyath Kothamasu

Adepu Adarsh Sai

Arun Siddhardha

Vaishnavi W