Reinforcement Learning: SARSA
A simple guide to TD learning, step-size dynamics, and SARSA’s on-policy updates.
Before we introduce SARSA, let’s step back and look carefully at how our Monte Carlo (MC) learner updates its knowledge. In the MC setting, whenever a particular state–action pair (s, a) appears in an episode, we eventually obtain a sample return:

G = r1 + γ r2 + γ² r3 + …
and after observing this return, we refine our estimate Q(s, a). The more often (s, a) appears, the more returns we accumulate, and the more accurate our estimate becomes. Suppose the pair (s, a) has appeared n times across all episodes. Let the observed returns be:

G1, G2, …, Gn
The Monte Carlo principle says: the best estimate of Q(s, a) is simply the average of all observed returns. So after n visits:

Qn = (G1 + G2 + … + Gn) / n
This looks simple, but computing the full sum every time is inefficient. We can rewrite it in an incremental form:

Qn = Qn−1 + (1/n)(Gn − Qn−1)
Now the update depends only on the previous estimate, the new sample return, and a single learning rate. The expression above can be viewed as a special case of a much more general learning rule used throughout reinforcement learning:

NewEstimate ← OldEstimate + StepSize × (Target − OldEstimate)
where the target is whatever new evidence we just observed. This “error correction” structure is central to almost every RL algorithm.
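To see that the incremental form really computes the same thing as the full average, here’s a minimal Python sketch (the function name is my own):

```python
def incremental_average(samples):
    """Running mean via Q_n = Q_{n-1} + (1/n) * (G_n - Q_{n-1})."""
    q = 0.0
    for n, g in enumerate(samples, start=1):
        q += (1.0 / n) * (g - q)
    return q

returns = [10, -4, 8, 2, 0]
# The incremental form agrees with the full-sum average (up to float noise).
print(abs(incremental_average(returns) - sum(returns) / len(returns)) < 1e-12)  # True
```

No history needs to be stored: the running estimate and the visit count are enough.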
Before the n-th visit, our estimate Qn−1 summarizes everything we’ve learned so far: our prior knowledge. From the new episode, we receive a fresh return Gn, a single snapshot of the future: our new information.
Our new estimate should blend these two:
- Not too much new information (because it’s noisy),
- Not too much old information (or we’ll never improve).
If instead we took:

Qn = Gn

we would be throwing away all past experience and trusting a single sample. That’s extremely unstable. If we set:

Qn = Qn−1

we would never learn anything new. To strike a balance, we use a step-size parameter α (between 0 and 1):

Qn = Qn−1 + α(Gn − Qn−1)
Here:
- Gn is the target, what we observed now and the direction we want to move toward.
- Qn−1 is our current belief.
- Gn − Qn−1 is the error between new information and old belief.
- α controls how aggressively we learn. In MC learning, α = 1/n.
- Qn = Qn−1 + α(Gn − Qn−1) is how we blend the two.
This single formula is the core of many RL algorithms, and it is exactly the door through which we now enter SARSA and Q-learning (we’ll discuss Q-learning in more detail in the next blog).
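The whole rule fits in one line of code. A tiny sketch (the `rl_update` name is mine) makes the blending behavior concrete:

```python
def rl_update(old_estimate, target, alpha):
    """NewEstimate = OldEstimate + alpha * (Target - OldEstimate)."""
    return old_estimate + alpha * (target - old_estimate)

# alpha = 1 trusts only the new sample; alpha = 0 never learns;
# anything in between blends old belief and new evidence.
print(rl_update(5.0, 9.0, 1.0))  # 9.0
print(rl_update(5.0, 9.0, 0.0))  # 5.0
print(rl_update(5.0, 9.0, 0.5))  # 7.0
```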
Sample-average vs constant-step-size
In Monte Carlo methods, the step-size is strictly tied to how many times we have visited a state–action pair:

α = 1/k, where k is the visit count
This makes the estimate converge to the true average of all observed returns, which is perfect if the environment is stationary and we want long-term averaging. The first few samples influence the estimate a lot, and later samples influence it less and less, so the learner converges to the true average but is slow to adapt if the environment changes. In many RL problems (especially control), we want an agent that:
- adapts quickly when the environment changes,
- responds immediately to new rewards,
- does not let old data dominate forever.
That’s why SARSA switches to a constant step-size:

α = constant (here, α = 0.5)
Suppose we’re estimating the value of some state–action pair and the sequence of observed returns is:
| Visit k | Return Gk |
|---|---|
| 1 | 10 |
| 2 | -4 |
| 3 | 8 |
| 4 | 2 |
| 5 | 0 |
| 6 | 6 |
| 7 | -2 |
| 8 | 4 |
| 9 | 1 |
| 10 | 3 |
Let’s start with an initial estimate Q0 = 0 and compare both update rules, MC (α = 1/k) and constant α = 0.5. The updates look like this:
| k | Return Gk | Monte Carlo (α = 1/k) | Constant α = 0.5 |
|---|---|---|---|
| 1 | 10 | Q1 = 10 | Q1 = 5.00 |
| 2 | -4 | Q2 = 10 + 0.5(-4 − 10) = 3.00 | Q2 = 5 + 0.5(-4 − 5) = 0.50 |
| 3 | 8 | Q3 = 3 + (1/3)(8 − 3) = 4.67 | Q3 = 0.5 + 0.5(8 − 0.5) = 4.25 |
| 4 | 2 | Q4 = 4.67 + 0.25(2 − 4.67) = 4.00 | Q4 = 4.25 + 0.5(2 − 4.25) = 3.12 |
| 5 | 0 | Q5 = 4.00 + 0.2(0 − 4.00) = 3.20 | Q5 = 3.12 + 0.5(0 − 3.12) = 1.56 |
| 6 | 6 | Q6 = 3.20 + (1/6)(6 − 3.20) = 3.67 | Q6 = 1.56 + 0.5(6 − 1.56) = 3.78 |
| 7 | -2 | Q7 = 3.67 + (1/7)(-2 − 3.67) = 2.86 | Q7 = 3.78 + 0.5(-2 − 3.78) = 0.89 |
| 8 | 4 | Q8 = 2.86 + (1/8)(4 − 2.86) = 3.00 | Q8 = 0.89 + 0.5(4 − 0.89) = 2.44 |
| 9 | 1 | Q9 = 3.00 + (1/9)(1 − 3.00) = 2.78 | Q9 = 2.44 + 0.5(1 − 2.44) = 1.72 |
| 10 | 3 | Q10 = 2.78 + 0.1(3 − 2.78) = 2.80 | Q10 = 1.72 + 0.5(3 − 1.72) = 2.36 |
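Both columns can be reproduced in a few lines of Python (a sketch; `run_updates` is my own helper, with values rounded to two decimals as in the table):

```python
def run_updates(returns, step_size):
    """Apply Q <- Q + alpha*(G - Q) per return; step_size maps k -> alpha."""
    q, history = 0.0, []
    for k, g in enumerate(returns, start=1):
        q += step_size(k) * (g - q)
        history.append(round(q, 2))
    return history

returns = [10, -4, 8, 2, 0, 6, -2, 4, 1, 3]
mc = run_updates(returns, lambda k: 1.0 / k)  # sample-average column
const = run_updates(returns, lambda k: 0.5)   # constant-alpha column
print(mc[-1], const[-1])  # 2.8 2.36
```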
Let’s put it into a diagram to visualize it better
As we can see, the Monte Carlo (α = 1/k) estimate gradually stabilizes as more data is collected, converging toward the true average of the returns. After many samples it becomes very stable, but as a result it reacts slowly to new information.
With a constant step-size of α = 0.5, the estimate adapts quickly and becomes more responsive to the latest returns. Instead of computing the true mean, it produces a running, exponentially weighted estimate, which makes it particularly effective in non-stationary environments.
Now that we’ve seen how α = 0.5 behaves, let’s look at two more extreme cases: a very small learning rate and a very large one. This will help us understand why SARSA/Q-learning relies heavily on choosing an appropriate step-size. When we set α close to 0,
we’re telling the algorithm:
Trust your old estimate much more than the new sample.
This makes the Q-value update extremely conservative. The agent changes its belief slowly, almost reluctantly. This is good when:
- the environment is stable,
- returns are noisy,
- we want to smooth fluctuations.
But it also means:
- sudden changes in reward patterns take a long time to be reflected,
- early mistakes dominate for a while,
- the learning curve becomes smooth but sluggish.
On the opposite end of the spectrum, setting α close to 1 means:
Trust the new return almost completely. Forget most of the past.
This makes the Q-value jump dramatically with each new return, similar to:
- a highly reactive trader who changes strategy after every win/loss,
- an anxious driver who overreacts to every event on the road.
With α close to 1:
- the Q-value becomes very sensitive to recent samples,
- noise dominates,
- the curve becomes jagged and volatile,
- the estimate can quickly adapt to changes (great for non-stationary tasks).
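To see both extremes at once, here’s a toy sketch (the shifted return sequence is my own invention): feed each learner returns that jump from 0 to 10 halfway through, a crude non-stationary shift.

```python
def track(returns, alpha):
    """Constant-alpha updates; returns the trajectory of estimates."""
    q, hist = 0.0, []
    for g in returns:
        q += alpha * (g - q)
        hist.append(q)
    return hist

shifted = [0] * 20 + [10] * 20  # the reward pattern suddenly changes
slow = track(shifted, 0.05)     # sluggish: still far from 10 at the end
fast = track(shifted, 0.9)      # reactive: locks onto 10 almost immediately
print(round(slow[-1], 2), round(fast[-1], 2))
```

The small-α learner is still catching up twenty steps after the shift, while the large-α learner has essentially forgotten the old regime.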
From one episode at a time to one step at a time
Up to this point, everything we’ve done has followed the Monte-Carlo philosophy: learn only after an entire episode finishes. In an MC learner:
- During a game, Q-values never change.
- We record rewards and keep track of returns, but do not update anything yet.
- Only when the episode terminates, when we finally know the full return G, do we go back and update every visited state–action pair.
This creates a very particular learning rhythm:
Play whole game → compute all G-values → update all Q-values → start next game.
Imagine an MDP with four states, s1, s2, s3, s4, and only one action, a.
Assume the agent plays a single episode and experiences the following transitions:
- From s1, take action a, get reward 3, move to s2
- From s2, take action a, get reward −1, move to s3
- From s3, take action a, get reward 5, move to terminal s4
The episode ends when the agent reaches s4. In a Monte-Carlo learner, nothing is updated yet. We simply record the trajectory:
| Time t | State | Action | Reward | Next State |
|---|---|---|---|---|
| 1 | s1 | a | 3 | s2 |
| 2 | s2 | a | -1 | s3 |
| 3 | s3 | a | 5 | s4 |
Now that the game is over, we compute the returns (using γ = 0.9):

- For (s3, a): G3 = 5
- For (s2, a): G2 = −1 + 0.9 × 5 = 3.5
- For (s1, a): G1 = 3 + 0.9 × 3.5 = 6.15

Then and only then we update every visited state–action pair:

Q(s, a) ← Q(s, a) + α(G − Q(s, a))
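Computing returns this way is a backward pass over the episode, easy to express in code; a minimal sketch (assuming the same γ = 0.9 used in the worked numbers; the function name is mine):

```python
def mc_returns(rewards, gamma=0.9):
    """Compute G_t = r_t + gamma * G_{t+1} by scanning the episode backwards."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

# Rewards from the episode: 3 at t=1, -1 at t=2, 5 at t=3.
print([round(g, 2) for g in mc_returns([3, -1, 5])])  # [6.15, 3.5, 5.0]
```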
This deep-backup approach requires the full trajectory to be known; every return depends on all future rewards. Look again at the first step of the episode, the transition from s1 to s2 with reward 3.
At that moment, we already know three important pieces of information:
- The immediate reward: r = 3
- The next state: s2
- And crucially, our current estimate of the future from that next state: Q(s2, a)
But Monte Carlo learning ignores all of this until the game ends. What if we took advantage of something we do know right away? When the agent moves from s1 to s2, we have a perfectly valid estimate of the remaining return:

r + γ Q(s2, a)

This is not the actual return; MC would say it’s incomplete. But it is a reasonable guess about the future based on our current knowledge, and unlike the Monte-Carlo return, we don’t have to wait for the game to end to compute it. And that raises a powerful idea: maybe we don’t need the full return G. Maybe we can update immediately using

r + γ Q(s′, a′)
This is the heart of the transition from Monte-Carlo (wait for the full return) to Temporal-Difference (use a predicted return).
Step 1: Transition s1 → s2
Suppose our current estimates are Q(s1, a) = 3, Q(s2, a) = 7, Q(s3, a) = 2, and we use α = 0.2 and γ = 0.9.
Reward = 3
Old value: Q(s1, a) = 3
TD (Temporal Difference) target: r + γQ(s2, a) = 3 + 0.9 × 7 = 9.3
Error: 9.3 − 3 = 6.3
Update: Q(s1, a) ← 3 + 0.2 × 6.3 = 4.26
Table: After Step 1
| (s1, a) | (s2, a) | (s3, a) | |
|---|---|---|---|
| Old Q | 3 | 7 | 2 |
| Reward rk | 3 | - | - |
| TD target r + γQ(s′) | 9.3 | - | - |
| Error | 9.3 − 3 = 6.3 | - | - |
| Update α · error | 0.2 × 6.3 = 1.26 | - | - |
| New Q | 4.26 | - | - |
Step 2: Transition s2 → s3
Reward = −1
Old value: Q(s2, a) = 7
TD (Temporal Difference) target: r + γQ(s3, a) = −1 + 0.9 × 2 = 0.8
Error: 0.8 − 7 = −6.2
Update: Q(s2, a) ← 7 + 0.2 × (−6.2) = 5.76
Table: After Step 2
| (s1, a) | (s2, a) | (s3, a) | |
|---|---|---|---|
| Old Q | 3 | 7 | 2 |
| Reward rk | 3 | −1 | - |
| TD target r + γQ(s′) | 9.3 | 0.8 | - |
| Error | 9.3 − 3 = 6.3 | 0.8 − 7 = −6.2 | - |
| Update α · error | 0.2 × 6.3 = 1.26 | 0.2 × −6.2 = −1.24 | - |
| New Q | 4.26 | 5.76 | - |
Step 3: Transition s3 → s4
Since s4 is terminal, the TD target is just the reward, 5. Proceeding similarly:
Table: After Step 3
| (s1, a) | (s2, a) | (s3, a) | |
|---|---|---|---|
| Old Q | 3 | 7 | 2 |
| Reward rk | 3 | −1 | 5 |
| TD target r + γQ(s′) | 9.3 | 0.8 | 5 |
| Error | 9.3 − 3 = 6.3 | 0.8 − 7 = −6.2 | 5 − 2 = 3 |
| Update α · error | 0.2 × 6.3 = 1.26 | 0.2 × −6.2 = −1.24 | 0.2 × 3 = 0.6 |
| New Q | 4.26 | 5.76 | 2.6 |
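All three updates can be replayed with a small sketch (the dictionary representation and the `td_step` name are my own; unseen terminal pairs default to value 0):

```python
def td_step(Q, s, a, r, s_next, a_next, alpha=0.2, gamma=0.9):
    """One-step TD update; unknown (terminal) next pairs contribute 0."""
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = {("s1", "a"): 3.0, ("s2", "a"): 7.0, ("s3", "a"): 2.0}
td_step(Q, "s1", "a", 3, "s2", "a")   # Q(s1,a): 3 -> 4.26
td_step(Q, "s2", "a", -1, "s3", "a")  # Q(s2,a): 7 -> 5.76
td_step(Q, "s3", "a", 5, "s4", "a")   # Q(s3,a): 2 -> 2.6 (s4 is terminal)
```

Note that each update happens the moment the transition is observed, not at the end of the episode.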
From One Action to Many Actions
So far, our TD examples involved a very simplified world: every state had only a single action. That made updates easy because the agent never had to choose; there was only one possible action to consider. But real reinforcement learning problems involve multiple actions, and therefore:
- a policy that selects among them,
- Q-values that must be learned for every state–action pair, not just a single pair,
- and updates that depend on both the next state and the next action the agent actually takes.
This is exactly where SARSA comes in. In the single-action case, the TD target was simply

r + γQ(s′, a)

because the agent could only take action a in the next state. But with multiple actions, we must ask a new question:
After reaching a new state s′, which action will the agent actually take?
The answer depends on the policy the agent is following, usually something like an ε-greedy exploration policy. So now the TD target depends not only on the next state s′ but also on the next chosen action a′:

r + γQ(s′, a′)
This gives the SARSA update its name:
State — Action — Reward — State — Action
(the five elements involved in each update)
Update Rule
In the Monte-Carlo world, learning is “all-at-once”:
Finish the entire episode → compute all returns → update all Q’s.
Temporal-Difference learning, with SARSA as its 1-step control variant, fundamentally changes this learning rhythm. Instead of waiting for the full return G, we approximate it on the fly using what we already know about the future.
After our “From One Action to Many Actions” discussion, we now think in terms of full SARSA tuples: (s, a, r, s′, a′).
At time step t, the agent:
- is in state s,
- takes action a,
- receives reward r,
- lands in next state s′,
- then picks the next action a′ according to its (typically ε-greedy) policy.
Right after this transition we can build a TD target:

target = r + γQ(s′, a′)

and then do an incremental update:

Q(s, a) ← Q(s, a) + α[r + γQ(s′, a′) − Q(s, a)]

The bracketed term is the TD error:

δ = r + γQ(s′, a′) − Q(s, a)

So the update is simply:

Q(s, a) ← Q(s, a) + αδ
Compare this with Monte Carlo:
- MC target: full return G
- TD target (SARSA): one-step “bootstrapped” guess r + γQ(s′, a′)
And the core learning rule is the same pattern:

NewEstimate ← OldEstimate + α(Target − OldEstimate)
The only difference is what we use as the target.
- MC: target is what actually happened (after the episode ends).
- TD/SARSA: target is what we currently predict will happen (right now).
That’s why TD (Temporal-Difference) is called bootstrapping: it learns from its own predictions.
At this point it’s natural to worry that a TD learner is “less accurate” than a Monte Carlo learner, because its target

r + γQ(s′, a′)

is only a rough guess of the full return G. For example, suppose the Monte-Carlo return from the current episode is 41, while our TD learner (like SARSA) uses a target of about 40. It’s tempting to say “41 is the true value and 40 is just an approximation.” But from a statistical point of view, the TD target can actually be more reliable.

The quantity Q(s′, a′) we plug into the TD target summarizes what happened in the previous episodes after visiting (s′, a′). In contrast, the Monte-Carlo return from this episode is just one noisy sample. So MC uses a single fresh sample, while TD blends many past samples into its estimate.
In practice this tradeoff (slightly biased but lower-variance targets) often makes TD methods like SARSA more data-efficient and more effective than pure Monte-Carlo learning.
With this SARSA update rule, our agent can adjust its Q-values step by step as it interacts with the environment. In our on-policy Monte Carlo setup, we already saw how to turn Q-values into behavior using an ε-greedy policy to balance exploration and exploitation.
Now we’re ready to see how SARSA fits into that same on-policy control picture.
SARSA as an On-Policy Control Algorithm
In the on-policy Monte Carlo blog, we already built the full control loop:
- use an ε-greedy policy w.r.t. current Q-values,
- generate episodes by following that policy,
- use the observed returns to update Q,
- then make the policy greedier with respect to the new Q.
SARSA keeps exactly the same high-level idea and the same ε-greedy exploration strategy. The difference is not how we choose actions, but how and when we update Q.
Instead of waiting until the end of the episode and using the full return G, SARSA updates on every step using the 1-step TD target:

r + γQ(s′, a′)
The tuple (s, a, r, s′, a′)
is generated by the same ε-greedy policy. That’s why SARSA is called on-policy. It learns the value of the very policy it is using to act (ε-greedy w.r.t. Q), just like our on-policy MC learner but with bootstrapped, step-by-step TD updates instead of full-episode Monte Carlo returns.
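To make the full loop concrete, here’s a self-contained sketch of on-policy SARSA control on a toy chain world. The environment, names, and hyperparameters are all my own assumptions, not from the blog, just enough to show ε-greedy action selection and per-step TD updates working together:

```python
import random

def epsilon_greedy(Q, s, actions, eps):
    """Explore with probability eps; otherwise act greedily (ties broken randomly)."""
    if random.random() < eps:
        return random.choice(actions)
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

def sarsa(episodes=500, alpha=0.2, gamma=0.9, eps=0.1):
    """Tabular SARSA on a toy chain: states 0..2, state 3 terminal.

    Action 1 moves right, action 0 moves left; reward 1 for reaching state 3.
    """
    actions = [0, 1]
    Q = {(s, a): 0.0 for s in range(3) for a in actions}
    for _ in range(episodes):
        s = 0
        a = epsilon_greedy(Q, s, actions, eps)
        for _ in range(100):  # safety cap on episode length
            s_next = min(s + 1, 3) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == 3 else 0.0
            if s_next == 3:  # terminal: no bootstrap term in the target
                Q[(s, a)] += alpha * (r - Q[(s, a)])
                break
            a_next = epsilon_greedy(Q, s_next, actions, eps)  # the second "A" in SARSA
            Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
            s, a = s_next, a_next
    return Q

random.seed(0)
Q = sarsa()
# The greedy policy w.r.t. the learned Q should move right in every state.
print({s: max([0, 1], key=lambda a: Q[(s, a)]) for s in range(3)})
```

Because `a_next` is drawn from the same ε-greedy policy that generates behavior, the learned values reflect that policy’s occasional exploratory moves, exactly the on-policy property discussed above.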
What’s Next
When you first look at the SARSA update, it feels like everything finally fits together. On every step the agent adjusts its Q-value using the action it actually took, completing a tidy loop: act, observe, update, improve. Our drone or car now learns continuously rather than waiting for full episodes the way Monte Carlo does. But if you stare at SARSA’s update a little longer, something subtle appears. Because the next action comes from the same ε-greedy policy used to explore, every estimate is shaped by that small bit of randomness. In tricky or risky regions, SARSA ends up learning the value of a slightly jittery, exploration-heavy policy, not the clean, greedy policy we ultimately care about.
This naturally raises a deeper question. What if we want to keep exploring because exploration is essential, yet learn about a policy that behaves more confidently than our exploration-heavy one? Is there a way for the agent to act with curiosity but learn with conviction? In the next blog, we’ll follow this tension to a surprisingly elegant solution.