Reinforcement Learning: N-Step SARSA
A clear walkthrough of n-step returns, showing how they unify Monte Carlo and temporal-difference methods through a controllable lookahead horizon.
Every method so far has made a very specific commitment about how far into the future it wants to look.
Monte Carlo says:
“I will wait. I will watch the entire episode unfold. Only when the story is finished will I decide what the first step was worth.”
One-step TD methods like SARSA and Q-learning say:
“I won’t wait that long. I’ll peek one step ahead, trust my current estimate for the rest, and update immediately.”
And suddenly a question feels unavoidable. Why only one step? Why all the way to the end? Why not something in between?
To see what this really means, let’s slow time again and zoom into a single episode in Grid City. Imagine the drone starts at state $S_0$. Over the next few moments, it experiences the following trajectory:

$$S_0, A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3, S_3, \dots, R_T, S_T$$

Assume $S_T$ is terminal.
Now imagine you are standing at time $t = 0$, just after leaving $S_0$. You want to estimate how good the pair $(S_0, A_0)$ was. Monte Carlo would say (writing $\gamma$ for the discount factor and $T$ for the final time step):

$$G_0 = R_1 + \gamma R_2 + \gamma^2 R_3 + \dots + \gamma^{T-1} R_T$$

and only after reaching the terminal state $S_T$ would it update:

$$Q(S_0, A_0) \leftarrow Q(S_0, A_0) + \alpha \big[ G_0 - Q(S_0, A_0) \big]$$
One-step TD would instead say:

$$G_{0:1} = R_1 + \gamma Q(S_1, A_1)$$
It peeks one step ahead, then bootstraps. But now let’s try something different. Suppose we look two steps ahead before bootstrapping. From $S_0$, we observe:

$$R_1, \; S_1, \; A_1, \; R_2, \; S_2$$

and then instead of waiting for the whole episode, we stop at $S_2$ and bootstrap from there. That gives us the 2-step return:

$$G_{0:2} = R_1 + \gamma R_2 + \gamma^2 Q(S_2, A_2)$$
Notice what happened. We used two real rewards, then trusted our estimate for the rest. If we go three steps before bootstrapping, we get:

$$G_{0:3} = R_1 + \gamma R_2 + \gamma^2 R_3 + \gamma^3 Q(S_3, A_3)$$
And if we keep going until the episode ends, we get:

$$G_0 = R_1 + \gamma R_2 + \gamma^2 R_3 + \dots + \gamma^{T-1} R_T,$$

which is exactly the Monte Carlo return. One-step TD is the 1-step return:

$$G_{0:1} = R_1 + \gamma Q(S_1, A_1).$$
Monte Carlo is the full $n$-step return, where $n$ equals the remaining episode length. And in between? There is a whole spectrum of possibilities. For any $n \geq 1$ and any time $t$, the $n$-step return is

$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n Q(S_{t+n}, A_{t+n}),$$

where the bootstrap term is dropped if the episode terminates before step $t + n$.
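To make the dial concrete, here is a minimal sketch (not from the original walkthrough) of computing $G_{t:t+n}$ from a recorded trajectory. The names `rewards` and `q_values` and their indexing convention are assumptions chosen for illustration:

```python
def n_step_return(rewards, q_values, t, n, gamma=0.99):
    """Sketch of the n-step return G_{t:t+n} for one recorded episode.

    Assumed conventions: rewards[k] holds R_{k+1}, the reward received
    after the step taken at time k, and q_values[k] holds the current
    estimate Q(S_k, A_k).
    """
    T = len(rewards)             # number of transitions in the episode
    horizon = min(t + n, T)      # never look past the terminal state
    # real, observed rewards for the first n steps (or fewer near the end)
    G = sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))
    # bootstrap from the current estimate only if the episode is still running
    if horizon < T:
        G += gamma ** (horizon - t) * q_values[horizon]
    return G
```

With `n=1` this collapses to the one-step TD target $R_{t+1} + \gamma Q(S_{t+1}, A_{t+1})$; with `n` at least the remaining episode length it is exactly the Monte Carlo return.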
Now pause and reflect. Each choice of $n$ changes the personality of the learner.
If $n = 1$, we bootstrap quickly. Low variance. Slight bias.
If $n$ is very large, we approach Monte Carlo. Unbiased. Higher variance.
We are no longer trapped between two extremes. We have a dial we can turn.
Now, how do we update? In general, at time $t + n$, we update the state–action pair visited at time $t$. The learning signal travels backward with a delay of $n$ steps.
Mathematically, the update is always

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big[ G_{t:t+n} - Q(S_t, A_t) \big]$$
Conceptually, what changes is how long we allow reality to unfold before trusting our estimate.
If $n = 1$, we trust our current value function almost immediately. If $n$ is large, we trust actual observed rewards for longer, and if $n$ spans the whole episode, we only update once the episode ends.
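Putting the pieces together, here is a hedged sketch of one episode of $n$-step SARSA. The environment interface (`reset()`, `step(action)` returning `(next_state, reward, done)`, an `n_actions` attribute) and the dictionary-based `Q` are illustrative assumptions, not a fixed API:

```python
import random

def n_step_sarsa_episode(env, Q, n=3, alpha=0.1, gamma=0.99, epsilon=0.1):
    """One episode of n-step SARSA, updating Q in place (a sketch)."""
    def policy(state):
        # epsilon-greedy over the assumed discrete action space
        if random.random() < epsilon:
            return random.randrange(env.n_actions)
        return max(range(env.n_actions), key=lambda a: Q.get((state, a), 0.0))

    states = [env.reset()]
    actions = [policy(states[0])]
    rewards = []                      # rewards[t] will hold R_{t+1}

    T = float("inf")                  # episode length, unknown until termination
    t = 0
    while True:
        if t < T:
            next_state, reward, done = env.step(actions[t])
            rewards.append(reward)
            if done:
                T = t + 1
            else:
                states.append(next_state)
                actions.append(policy(next_state))

        tau = t - n + 1               # the time whose pair gets updated now
        if tau >= 0:
            horizon = int(min(tau + n, T))
            # n-step return: real rewards first ...
            G = sum(gamma ** (k - tau) * rewards[k] for k in range(tau, horizon))
            # ... then a bootstrapped tail if the episode is still running at tau + n
            if tau + n < T:
                G += gamma ** n * Q.get((states[tau + n], actions[tau + n]), 0.0)
            sa = (states[tau], actions[tau])
            old = Q.get(sa, 0.0)
            Q[sa] = old + alpha * (G - old)

        if tau == T - 1:              # every visited pair has been updated
            return Q
        t += 1
```

Note how no update happens until $\tau = t - n + 1$ becomes non-negative, and how the remaining pairs are flushed after the terminal state is reached: the learning signal really does arrive with a delay of $n$ steps.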
Let’s keep it short this time. We will revisit the same concept in “Generalized Advantage Estimation (GAE)”. There we will derive a generalized way of balancing between the 1-step TD bootstrap and the full Monte Carlo return.