Deep Learning

5 questions after completing the first 6 DL Foundations pages. Check your understanding before continuing.

12 questions covering neural networks, backpropagation, training loops, and CNNs. Pass: 9/12.

Understand SGD, Momentum, and Adam optimizers from scratch. Implement and compare them in NumPy.
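A minimal sketch of the three update rules this lesson covers, written in NumPy. The function names and default hyperparameters here are illustrative assumptions, not the lesson's exact API:

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Vanilla SGD: step directly against the gradient.
    return w - lr * grad

def momentum_step(w, grad, v, lr=0.01, beta=0.9):
    # Momentum: accumulate a decaying velocity of past gradients.
    v = beta * v + grad
    return w - lr * v, v

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-parameter step sizes from first and second moment estimates.
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Comparing them on a simple quadratic like f(w) = w² (gradient 2w) makes the differences in convergence behavior easy to see.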

Build a full training loop in NumPy: batches, epochs, forward pass, backprop, and weight updates.
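The loop above can be sketched end to end on a toy problem. This is an assumed minimal setup (a single linear layer fitting y = 2x + 1), not the lesson's exact project, but it exercises every listed piece: shuffled batches, epochs, a forward pass, backprop through the loss, and weight updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noiseless linear target so the loop converges exactly.
X = rng.normal(size=(100, 1))
y = 2 * X + 1

W = np.zeros((1, 1))
b = np.zeros(1)
lr, epochs, batch_size = 0.1, 50, 20

for epoch in range(epochs):
    idx = rng.permutation(len(X))              # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        pred = xb @ W + b                      # forward pass
        err = pred - yb                        # dLoss/dpred for MSE (up to 2/N)
        dW = xb.T @ err / len(xb)              # backprop to the weights
        db = err.mean(axis=0)
        W -= lr * dW                           # weight update
        b -= lr * db
```

The same skeleton scales to multi-layer networks: only the forward pass and the gradient computation grow.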

Understand overfitting and apply L2 regularization and dropout to prevent it in NumPy.
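Both techniques reduce to a few lines of NumPy. A hedged sketch (helper names and the default rates are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_gradient(w, grad, lam=1e-3):
    # L2 regularization adds lam * w to the gradient, shrinking
    # weights toward zero on every update (weight decay).
    return grad + lam * w

def dropout(a, p=0.5, training=True):
    # Inverted dropout: zero each unit with probability p during training
    # and rescale survivors by 1/(1-p), so eval needs no scaling at all.
    if not training:
        return a
    mask = (rng.random(a.shape) >= p) / (1 - p)
    return a * mask
```

At evaluation time, dropout is a no-op and the L2 term is simply omitted from the gradient.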

Bridge NumPy implementations to PyTorch. Build QNetwork and PolicyNetwork with nn.Module for RL.
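A sketch of what the Q-network side of this looks like as an `nn.Module`; the hidden width and layer count here are assumptions, not the lesson's exact architecture:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action (illustrative sizes)."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one output per action
        )

    def forward(self, state):
        return self.net(state)
```

A PolicyNetwork follows the same pattern, typically ending in a softmax over actions instead of raw Q-values. The payoff of `nn.Module` over the NumPy version is that autograd and optimizers replace the hand-written backprop and update code.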

15 drill problems covering neural networks, forward pass, backpropagation, optimizers, and training.

Review deep learning and see why RL needs neural networks: the bridge to DQN and policy gradients.

Review ML Foundations and see why linear models fail on complex patterns: the motivation for neural networks.

Neural networks, backpropagation, CNNs, PyTorch patterns, and a mini-project, all directly reusable for DQN, policies, and actor-critic.