Reinforcement Learning: Q Learning
Explore how Q-learning emerges from SARSA by shifting from real actions to optimal ones.
Imagine you are back in Grid City, watching your little drone learn how to navigate the windy streets. By now the drone has learned from full episodes with Monte Carlo methods, and then from immediate corrections with SARSA. If you watch the drone closely during SARSA training, you notice a certain rhythm: in each step, it behaves according to its ε-greedy policy, takes an action, receives a reward, picks the next action from the same ε-greedy policy, and then updates using that next action. That next action is always present in the SARSA update. It is the final “A” in the name SARSA.
But something strange begins to happen as the drone gets better. Even when it learns that, say, “RIGHT” is usually an excellent move from state 13, the ε-greedy policy still forces it to occasionally pick “UP”, a clumsy, wind-blown, battery-draining mistake. And because SARSA uses the action the drone actually chooses in its update, the value it learns for state 13 becomes slightly pessimistic. The drone wants to learn how good the best action is, but SARSA keeps whispering, “Don’t forget, sometimes you explore and take the bad action too.”
To understand exactly what this means, it helps to zoom in on a single, simple moment in the drone’s life and slow time almost to a stop.
Imagine state 13 in Grid City again. The drone has two actions that matter: UP and RIGHT. Over many flights, it has learned something like $Q(13, \text{RIGHT}) \approx +2.0$ and $Q(13, \text{UP}) \approx -1.5$. The exact numbers don’t matter; only the ordering does.
So if you just look at the Q-table, the story is very clear: RIGHT is good, UP is bad. If the drone were perfectly greedy, it would choose RIGHT from 13 every single time and sail happily toward the goal.
But the drone is not perfectly greedy. It is ε-greedy. That means that even when it completely “knows” that RIGHT is better, it will still choose UP with some small probability ε. Maybe ε is 0.1. Maybe it is shrinking over time but not yet zero. Either way, every now and then, the drone sighs, shrugs, and takes the bad action just to keep exploring.
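The ε-greedy choice described above can be sketched in a few lines of Python. The Q-values and the action names are illustrative, not taken from a real training run:

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """Pick the greedy action with probability 1 - epsilon; otherwise explore."""
    if random.random() < epsilon:
        return random.choice(actions)                      # explore: any action, even a bad one
    return max(actions, key=lambda a: Q[(state, a)])       # exploit: best known action

# Hypothetical Q-values for state 13 in Grid City
Q = {(13, "RIGHT"): 2.0, (13, "UP"): -1.5}
action = epsilon_greedy(Q, 13, ["RIGHT", "UP"], epsilon=0.1)
```

With $\varepsilon = 0.1$, roughly one call in ten ignores the Q-table entirely; that occasional "UP" is exactly the sigh-and-shrug exploration described above.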
Now picture one specific step during SARSA training. The drone is at state 13; it uses ε-greedy and chooses RIGHT this time. It receives some reward $R_{t+1}$, lands in a new state $S_{t+1}$, and then uses ε-greedy again to pick the next action $A_{t+1}$. Only after choosing this next action does SARSA update $Q(S_t, A_t)$:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\right]$$
That $\gamma\, Q(S_{t+1}, A_{t+1})$ term is the key. The next action $A_{t+1}$ is sampled from the same ε-greedy policy, which means sometimes it is a great action and sometimes it is a silly exploratory move. Over time, the update for $Q(13, \text{RIGHT})$ averages over all of that behavior. It learns not just “what happens if I take RIGHT and then play optimally,” but rather “what happens if I take RIGHT and then continue following this slightly noisy ε-greedy policy forever.” Mathematically, SARSA is learning the value of the ε-greedy policy itself.
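The SARSA step can be written out directly. This is a minimal sketch of the one-step update; the transition values (the step cost, the hypothetical Q-table entries) are made up for illustration:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One SARSA step: the target uses the next action actually chosen (on-policy)."""
    target = r + gamma * Q[(s_next, a_next)]     # depends on a_next, exploratory or not
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Hypothetical transition: from state 13 take RIGHT, pay a small step cost, land in 14
Q = {(13, "RIGHT"): 2.0, (14, "UP"): -1.5, (14, "RIGHT"): 3.0}
sarsa_update(Q, 13, "RIGHT", r=-0.1, s_next=14, a_next="UP", alpha=0.5, gamma=1.0)
```

Because the exploratory `a_next` happened to be UP, the target is $-0.1 + (-1.5) = -1.6$, and $Q(13, \text{RIGHT})$ is dragged downward: the pessimism described above, made concrete.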
This is exactly what “on-policy” means in the TD world. The behavior policy (ε-greedy with respect to Q) and the policy being evaluated in the update rule are one and the same. At first, this is comforting. The drone is learning about how it truly behaves, randomness and all. But in some environments, especially ones where exploration is dangerous, it starts to feel limiting.
One afternoon, you imagine a different kind of learner, a more determined drone. This drone still behaves ε-greedily, because exploration is important, but in its heart, in its internal calculations, it no longer cares about the sometimes-stupid action it just chose. Instead, it imagines what it should have done next, not what it did. When it lands in the next state $S_{t+1}$, it peeks at all possible actions and selects the best one, not to execute, but to use in its value update. It dreams of the optimal future even while behaving imperfectly.
To understand exactly what this means, it helps to compare what’s happening inside this new learner’s head with what SARSA was doing.
With SARSA, the update target was always tied to the action actually taken in the next state. After stepping from $S_t$ to $S_{t+1}$ with reward $R_{t+1}$, SARSA looked at the next action $A_{t+1}$ sampled from the ε-greedy policy and used

$$R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1})$$

as the “snapshot of the future” for that update.
Our new, more stubborn learner wants to break that dependency. It still uses ε-greedy to act, but when it updates, it quietly asks a different question:
“If I stood in this next state and behaved perfectly from now on, what is the best I could hope to get?”
In symbols, that means that instead of using $Q(S_{t+1}, A_{t+1})$, it looks at all possible actions $a$ in $S_{t+1}$, and takes the largest of them:

$$\max_{a} Q(S_{t+1}, a)$$
The moment the drone swaps $Q(S_{t+1}, A_{t+1})$ for $\max_{a} Q(S_{t+1}, a)$ in its update,
the entire personality of the learning algorithm changes.
It is as if the drone has finally separated its behavior from its beliefs. Its behavior is still messy, still exploratory, still ε-greedy. But its beliefs, its Q-values, now evolve according to a world in which it expects itself to act optimally from the next step onward.
This is a startling transformation. The same real experience is flowing in, the same transitions, the same rewards, the same occasional crashes into walls… but the internal interpretation of that experience is now entirely different.
Once this mental shift happens, the updated formula that emerges is almost inevitable. The drone still wants to correct its old estimate using a one-step TD target. But the target is no longer “what happened next according to the ε-greedy policy.” It is “what would happen next if I were perfect from here onward.”
If you plug this idea into the familiar TD update pattern, you get a new target:

$$R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a)$$
And the update becomes

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)\right]$$
This is the defining rule of Q-learning.
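Written as code, the only line that changes from the SARSA sketch is the target. Again, the Q-table entries and the step cost are illustrative placeholders:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: the target uses the best action in s_next, not the one taken."""
    best_next = max(Q[(s_next, b)] for b in actions)   # greedy lookahead over all actions
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Same hypothetical transition as before: state 13, action RIGHT, land in state 14
Q = {(13, "RIGHT"): 2.0, (14, "UP"): -1.5, (14, "RIGHT"): 3.0}
q_learning_update(Q, 13, "RIGHT", r=-0.1, s_next=14,
                  actions=["UP", "RIGHT"], alpha=0.5, gamma=1.0)
```

Note that the update never asks which action the drone actually takes in state 14; the lookahead uses $\max(-1.5, 3.0) = 3.0$ even if the drone then explores UP.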
Notice how similar it looks to SARSA’s update. The structure is identical:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[\text{target} - Q(S_t, A_t)\right]$$
but the meaning of the target has changed. Under the hood, this tiny change flips the algorithm from on-policy to off-policy. The behavior policy, the way the drone actually behaves in Grid City, is still ε-greedy. It still occasionally flails UP from state 13, still sometimes crashes into buildings, still wastes battery in windy loops. But the target that shapes the Q-values is tied to a different policy: the greedy one that always picks $\arg\max_{a} Q(s, a)$. The drone is living one life and dreaming of another.
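The on-policy/off-policy split is easiest to see by computing both targets from one and the same transition. The numbers here are hypothetical:

```python
# One transition, two interpretations (illustrative values).
gamma = 1.0
r = -0.1                               # reward for the step just taken
Q_next = {"UP": -1.5, "RIGHT": 3.0}    # hypothetical Q-values in the next state
a_next = "UP"                          # the exploratory action the drone actually picked

sarsa_target = r + gamma * Q_next[a_next]             # follows the behavior policy
q_learning_target = r + gamma * max(Q_next.values())  # follows the greedy policy
```

Same state, same reward, same exploration, yet SARSA’s target is $-1.6$ while Q-learning’s is $2.9$: the experience is shared, but the interpretation is not.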
What’s Next
And yet, even here, you can sense the next question emerging. Q-learning bootstraps from a one-step lookahead:

$$R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a)$$
Monte Carlo bootstraps from the entire episode.
SARSA bootstraps from the next exploratory action.
Q-learning bootstraps from the next optimal action.
But what about everything in between?
What if the drone doesn’t want to look just one step ahead, nor wait until the end of the episode?
What if it wants to blend these approaches, peek two steps ahead, or five, or a flexible number of steps so it can shape its updates with richer, more nuanced information?
This is the doorway that leads to n-step TD methods, where the drone learns not from just one future, but from a horizon of futures, carefully balanced between immediate predictions and long-range intuition.
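As a one-formula preview of that doorway, a SARSA-style n-step target looks like this (a sketch in the same notation as the one-step updates, with $n$ real rewards followed by a single bootstrapped estimate):

$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^{n}\, Q(S_{t+n}, A_{t+n})$$

With $n = 1$ this collapses to the familiar one-step TD target; as $n$ stretches to the end of the episode, it becomes the full Monte Carlo return.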
And that, naturally, leads to one of the most elegant and powerful ideas in reinforcement learning: a method that unifies Monte Carlo and TD, using all n-step returns at once, weighted by decaying powers of a parameter λ.
A method called TD(λ).