From ML Basics to Deep Learning

Review, Linear Regression Math, and Introduction to Neural Networks

Part 1: ML Review - What We Covered

The 4 Ingredients of Machine Learning

ML Paradigms Recap

Techniques We Covered

Part 2: ML Review - Topics We Didn't Cover

The Labeled Data Bottleneck

Semi-Supervised Learning

Transfer Learning

Self-Supervised Learning (The Key to LLMs)

Reinforcement Learning

RL Applications

Part 3: Linear Regression - The Math Behind It

Linear Regression: Problem Setup

The Optimization Problem
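
This slide presumably states the least-squares objective; a sketch in standard notation (X the n×d design matrix, w the weights, y the targets — symbols assumed, not taken from the slide):

```latex
\min_{w \in \mathbb{R}^{d}} \; L(w)
= \frac{1}{n} \sum_{i=1}^{n} \left( w^\top x_i - y_i \right)^2
= \frac{1}{n} \left\lVert X w - y \right\rVert_2^2
```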

The Bias Absorption Trick
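
The trick as usually stated: append a constant 1 to every input so the bias b becomes just another weight, and the model collapses to a single inner product (tilde notation assumed):

```latex
\hat{y} = w^\top x + b
= \begin{bmatrix} w \\ b \end{bmatrix}^{\top} \begin{bmatrix} x \\ 1 \end{bmatrix}
= \tilde{w}^\top \tilde{x}
```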

The Closed-Form Solution
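
The closed form in question is the normal-equation solution w = (XᵀX)⁻¹Xᵀy. A minimal NumPy sketch on synthetic noiseless data (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_aug = np.hstack([X, np.ones((100, 1))])  # absorb the bias as a constant feature
true_w = np.array([2.0, -1.0, 0.5, 3.0])   # last entry plays the role of the bias
y = X_aug @ true_w

# Solve the normal equations (X^T X) w = X^T y.
# np.linalg.solve is preferred over forming the inverse explicitly.
w_hat = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ y)
print(w_hat)
```

On noiseless data the recovered weights match `true_w`; in practice `np.linalg.lstsq` is the numerically safer call.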

When Closed-Form Fails

Part 4: From ML to Deep Learning

The AI → ML → DL Hierarchy

What is Deep Learning?

Why Deep Learning Now?

Part 5: From Neurons to Neural Networks

Biological Inspiration

The Artificial Neuron (Perceptron)
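
A single artificial neuron computes a weighted sum of its inputs plus a bias, then applies a threshold. With hand-picked (purely illustrative) weights it implements logical AND:

```python
import numpy as np

def neuron(x, w, b):
    # Fire (output 1) iff the weighted sum plus bias is positive.
    return 1 if np.dot(w, x) + b > 0 else 0

w, b = np.array([1.0, 1.0]), -1.5  # chosen so only input (1, 1) crosses the threshold
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron(np.array(x), w, b))
```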

Activation Functions
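
As a quick reference, the standard definitions of the three classic activations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                # squashes to (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)        # zeroes out negatives, identity otherwise

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```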

The XOR Problem
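
XOR is not linearly separable, so no single neuron can represent it, but one hidden layer suffices. A sketch with hand-picked (illustrative) weights:

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))
```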

Part 6: Deep Neural Networks

Multi-Layer Perceptron (MLP)

Forward Pass: Python Example
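
A guess at the kind of example this slide shows: the forward pass of a tiny two-layer MLP, with randomly initialized (placeholder) weights:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # layer 2: 4 hidden units -> 1 output

x = np.array([1.0, -0.5, 2.0])
h = relu(W1 @ x + b1)  # hidden layer: affine map + nonlinearity
y = W2 @ h + b2        # output layer: affine map only (e.g. for regression)
print(y)
```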

Universal Approximation Theorem

Training: The Big Picture

Computational Graphs

The Chain Rule
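
Backpropagation is the chain rule applied along the computational graph: for a composition of functions, the derivative factors layer by layer.

```latex
L = f(g(h(x)))
\quad\Longrightarrow\quad
\frac{\partial L}{\partial x}
= \frac{\partial f}{\partial g}
  \cdot \frac{\partial g}{\partial h}
  \cdot \frac{\partial h}{\partial x}
```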

Part 7: Gradient Descent & Optimization

Why Gradient Descent?

Gradient Intuition

Stochastic Gradient Descent (SGD)
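
The update rule is w ← w − η∇L(w), with the gradient in SGD estimated from a random minibatch. A sketch using the exact gradient of a toy 1-D loss L(w) = (w − 3)² for clarity:

```python
# Gradient descent on L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2.0 * (w - 3.0)
    w -= lr * grad
print(w)  # converges toward the minimizer w = 3
```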

Problems with Vanilla SGD

SGD with Momentum
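
Momentum accumulates an exponentially decaying velocity and steps along it, damping zig-zags and accelerating along consistent directions. Same toy loss as a sketch:

```python
# Heavy-ball momentum on L(w) = (w - 3)^2.
w, v = 0.0, 0.0
lr, beta = 0.1, 0.9  # beta = 0.9 is the usual default
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    v = beta * v - lr * grad  # velocity: decayed history plus current step
    w += v
print(w)
```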

AdaGrad: Adaptive Learning Rates
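
AdaGrad divides each coordinate's step by the root of its accumulated squared gradients, so frequently updated coordinates get smaller steps. A one-dimensional sketch on the same toy loss:

```python
# AdaGrad on L(w) = (w - 3)^2: the accumulator only ever grows,
# so the effective learning rate decays over time.
w, acc = 0.0, 0.0
lr, eps = 1.0, 1e-8
for _ in range(500):
    grad = 2.0 * (w - 3.0)
    acc += grad ** 2
    w -= lr * grad / (acc ** 0.5 + eps)
print(w)
```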

RMSProp: Fixing AdaGrad
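
RMSProp replaces AdaGrad's ever-growing sum with an exponential moving average of squared gradients, so the effective learning rate stops decaying to zero. Sketch:

```python
# RMSProp on L(w) = (w - 3)^2.
w, s = 0.0, 0.0
lr, beta, eps = 0.1, 0.9, 1e-8
for _ in range(300):
    grad = 2.0 * (w - 3.0)
    s = beta * s + (1 - beta) * grad ** 2  # EMA of squared gradients
    w -= lr * grad / (s ** 0.5 + eps)
print(w)  # settles close to the minimizer w = 3
```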

Adam: The Best of Both Worlds

Adam: Default Hyperparameters
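
The commonly cited defaults are lr = 0.001, β₁ = 0.9, β₂ = 0.999, ε = 1e-8. A sketch of the full update on the same toy loss (a larger lr is used here only so the example converges in few steps):

```python
# Adam on L(w) = (w - 3)^2 with the standard beta/eps defaults.
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2.0 * (w - 3.0)
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) EMA
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment EMA
    m_hat = m / (1 - beta1 ** t)             # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (v_hat ** 0.5 + eps)
print(w)
```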

Summary & Next Steps

Key Takeaways

Next Lecture Preview