<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>DL Foundations on Reinforcement Learning Curriculum</title>
    <link>https://codefrydev.in/Reinforcement/tags/dl-foundations/</link>
    <description>Recent content in DL Foundations on Reinforcement Learning Curriculum</description>
    <image>
      <title>Reinforcement Learning Curriculum</title>
      <url>https://codefrydev.in/Reinforcement/og-default.png</url>
      <link>https://codefrydev.in/Reinforcement/og-default.png</link>
    </image>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 20 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://codefrydev.in/Reinforcement/tags/dl-foundations/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Biological Inspiration: From Brain Neurons to Artificial Neurons</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/biological-inspiration/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/biological-inspiration/</guid>
      <description>How the biological neuron — dendrites, soma, axon — maps onto the artificial neuron with inputs, weights, bias, and activation.</description>
    </item>
    <item>
      <title>The Perceptron: Learning from Mistakes</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/perceptron/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/perceptron/</guid>
      <description>The perceptron learning rule, training on AND and OR gates, and why XOR exposes the fundamental limitation of single-layer networks.</description>
    </item>
    <item>
      <title>Activation Functions: Adding Non-Linearity</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/activation-functions/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/activation-functions/</guid>
      <description>ReLU, sigmoid, tanh, and softmax — what they compute, when to use each, and why non-linearity is essential for deep networks.</description>
    </item>
    <item>
      <title>Multi-Layer Perceptrons: Stacking Layers to Break Linearity</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/mlp/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/mlp/</guid>
      <description>MLP architecture, parameter counting, and how stacking non-linear layers allows networks to solve XOR and approximate any function.</description>
    </item>
    <item>
      <title>Forward Propagation: Computing the Network Output</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/forward-propagation/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/forward-propagation/</guid>
      <description>Layer-by-layer forward pass through an MLP — computing pre-activations, applying activations, and understanding intermediate representations.</description>
    </item>
    <item>
      <title>Checkpoint: DL Foundations Mid-Point</title>
      <link>https://codefrydev.in/Reinforcement/assessment/checkpoint-dl-mid/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/assessment/checkpoint-dl-mid/</guid>
      <description>5 questions after completing the first 6 DL Foundations pages. Check your understanding before continuing.</description>
    </item>
    <item>
      <title>Loss Functions: Measuring How Wrong the Network Is</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/loss-functions-dl/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/loss-functions-dl/</guid>
      <description>MSE for regression, cross-entropy for classification, and the TD error loss in DQN — how loss functions guide neural network training.</description>
    </item>
    <item>
      <title>Backpropagation: Teaching Networks by Propagating Errors</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/backpropagation/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/backpropagation/</guid>
      <description>The chain rule applied backwards through a neural network — computing gradients for every weight and verifying them with numerical finite differences.</description>
    </item>
    <item>
      <title>Phase 5 Assessment: Deep Learning Foundations</title>
      <link>https://codefrydev.in/Reinforcement/assessment/phase-5-dl/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/assessment/phase-5-dl/</guid>
      <description>12 questions covering neural networks, backpropagation, training loops, and CNNs. Pass: 9/12.</description>
    </item>
    <item>
      <title>Optimizers: SGD, Momentum, and Adam</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/optimizers/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/optimizers/</guid>
      <description>Understand SGD, Momentum, and Adam optimizers from scratch. Implement and compare them in NumPy.</description>
    </item>
    <item>
      <title>The Training Loop</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/training-loop/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/training-loop/</guid>
      <description>Build a full training loop in NumPy: batches, epochs, forward pass, backprop, and weight updates.</description>
    </item>
    <item>
      <title>Regularization and Overfitting</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/regularization/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/regularization/</guid>
      <description>Understand overfitting and apply L2 regularization and dropout to prevent it in NumPy.</description>
    </item>
    <item>
      <title>CNN Basics: Convolutions and Pooling</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/cnn-basics/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/cnn-basics/</guid>
      <description>Learn convolution and pooling from scratch in NumPy. See how Atari DQN uses CNNs to process raw pixels.</description>
    </item>
    <item>
      <title>PyTorch: Building Neural Networks with nn.Module</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/pytorch-nn-practice/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/pytorch-nn-practice/</guid>
      <description>Bridge NumPy implementations to PyTorch. Build QNetwork and PolicyNetwork with nn.Module for RL.</description>
    </item>
    <item>
      <title>DL Mini-Project: Digits Classifier in NumPy</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/dl-mini-project/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/dl-mini-project/</guid>
      <description>Build a 2-layer MLP to classify handwritten digits using only NumPy. Full pipeline: data, init, training, evaluation.</description>
    </item>
    <item>
      <title>DL Foundations Drills</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/drills/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/drills/</guid>
      <description>15 drill problems covering neural networks, forward pass, backpropagation, optimizers, and training.</description>
    </item>
    <item>
      <title>DL Foundations Review &amp; Bridge to RL</title>
      <link>https://codefrydev.in/Reinforcement/dl-foundations/review-and-bridge/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/dl-foundations/review-and-bridge/</guid>
      <description>Review deep learning and see why RL needs neural networks — the bridge to DQN and policy gradients.</description>
    </item>
  </channel>
</rss>
