Deep Q-Networks: Experience Replay and Target Networks

In the Q-learning post, we trained an agent to navigate a 4×4 frozen lake using a simple lookup table — 16 states × 4 actions = 64 numbers. But what happens when the state space isn't a grid? CartPole has four continuous state variables: cart position, cart velocity, pole angle, and pole angular velocity. Even if you discretised each into 100 bins, you'd need 100⁴ = 100 million Q-values. An Atari game frame is 210×160 pixels with 128 colours — that's $128^{33{,}600}$ possible screens, a number with more than 70,000 digits. Tables don't work here.

The solution: replace the Q-table with a neural network. Feed in the state, get out Q-values for every action. But naively combining neural networks with Q-learning is unstable — the network chases a moving target while training on correlated sequential data. DeepMind solved both problems with two elegant tricks: experience replay and a target network.

By the end of this post, you'll implement a Deep Q-Network from scratch in PyTorch, train it to balance a pole, and understand why these two tricks are what make it work.
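Here's a minimal sketch of what that swap looks like: a small fully connected network standing in for the Q-table. The layer width, class name, and example state values are illustrative choices, not the exact architecture we'll settle on:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action — a learned Q-table."""

    def __init__(self, state_dim: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),  # one output per action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q_net = QNetwork()
# cart position, cart velocity, pole angle, pole angular velocity
state = torch.tensor([0.0, 0.1, -0.02, 0.3])
q_values = q_net(state)            # shape (2,): Q(s, left), Q(s, right)
action = q_values.argmax().item()  # greedy action — same argmax as on a table row
```

Instead of 100 million table entries, this network has under a thousand weights, and those weights generalise: nearby states produce similar Q-values for free.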
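To preview the two tricks, here's a rough sketch of each — the buffer size, batch size, and sync interval below are placeholder values, not the hyperparameters we'll tune later:

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn

online_net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
target_net = copy.deepcopy(online_net)  # frozen copy that supplies the bootstrap target

replay = deque(maxlen=10_000)  # experience replay: a ring buffer of transitions

def store(state, action, reward, next_state, done):
    replay.append((state, action, reward, next_state, done))

def sample_batch(batch_size=64):
    # Uniform random sampling breaks the correlation between consecutive steps
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    return (torch.stack(states),
            torch.tensor(actions),
            torch.tensor(rewards, dtype=torch.float32),
            torch.stack(next_states),
            torch.tensor(dones, dtype=torch.float32))

def td_targets(rewards, next_states, dones, gamma=0.99):
    # Targets come from the *frozen* network, so they stay put while the
    # online network trains; no gradients flow through them.
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * max_next_q

# Every few hundred steps, sync the frozen copy so it doesn't go stale:
#   target_net.load_state_dict(online_net.state_dict())
```

Replay decorrelates the data the network trains on; the target network keeps the regression target fixed between syncs. We'll wire both into a full training loop later in the post.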