<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Learning path modules (interactive hubs) on Reinforcement Learning Curriculum</title>
    <link>https://codefrydev.in/Reinforcement/learning-path/modules/</link>
    <description>Recent content in Learning path modules (interactive hubs) on Reinforcement Learning Curriculum</description>
    <image>
      <title>Reinforcement Learning Curriculum</title>
      <url>https://codefrydev.in/Reinforcement/og-default.png</url>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/</link>
    </image>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 01 Jan 0001 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://codefrydev.in/Reinforcement/learning-path/modules/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Phase 0 — Programming from zero</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-0/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-0/</guid>
      <description>Install Python, run your first script, and learn variables, conditionals, loops, and functions before starting RL.</description>
    </item>
    <item>
      <title>Phase 1 — Math foundations for RL</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-1/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-1/</guid>
      <description>Probability, statistics, linear algebra, and calculus with RL-motivated examples. Read in order: 1a through 1d, then the self-check.</description>
    </item>
    <item>
      <title>Phase 2 — Prerequisites (tools and libraries)</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-2/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-2/</guid>
      <description>Python, NumPy, PyTorch, Gym/Gymnasium, and related tools the curriculum assumes. Complete tasks on the prerequisites index, then the Phase 2 quiz.</description>
    </item>
    <item>
      <title>Phase 3 — Math for RL (deep dive)</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-3/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-3/</guid>
      <description>Deeper pass through the same math areas as Phase 1, with more drills and RL-motivated examples before you start the core RL volumes.</description>
    </item>
    <item>
      <title>Phase 4 — ML foundations</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-4/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-4/</guid>
      <description>Supervised learning, regression, classification, gradient descent, and evaluation—before neural networks for RL.</description>
    </item>
    <item>
      <title>Phase 5 — DL foundations</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-5/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-5/</guid>
      <description>Neural networks, backpropagation, CNNs, PyTorch patterns, and a mini-project—directly reusable for DQN, policies, and actor-critic.</description>
    </item>
    <item>
      <title>Phase 6 — RL foundations (tabular)</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-6/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-6/</guid>
      <description>Volumes 1–2: MDPs, dynamic programming, Monte Carlo, TD, SARSA, and tabular Q-learning. Core theory before function approximation.</description>
    </item>
    <item>
      <title>Phase 7 — Deep RL</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-7/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-7/</guid>
      <description>Volumes 3–5: value function approximation, DQN family, policy gradients, actor-critic, and advanced policy optimization (chapters 21–50).</description>
    </item>
    <item>
      <title>Phase 8 — Advanced topics</title>
      <link>https://codefrydev.in/Reinforcement/learning-path/modules/phase-8/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://codefrydev.in/Reinforcement/learning-path/modules/phase-8/</guid>
      <description>Volumes 6–10: model-based RL, exploration, offline RL, MARL, real-world RL, safety, and RL with LLMs (chapters 51–100).</description>
    </item>
  </channel>
</rss>
