Reinforcement Learning: Off-Policy Monte Carlo
Explore how importance sampling lets RL evaluate policies without executing them.
In the last blog, our drone learned by doing. And “doing” meant crashing into buildings, wasting battery, drifting into restricted airspace, and spiraling around in the wind until it finally figured things out. The agent must live through its own mistakes in order to learn from them. That’s fine when we’re training a tiny drone in Grid City… but this approach quickly hits a wall when the world becomes more serious. Let’s look into a couple of scenarios to understand it better.
Scenario 1: A Trading Bot
Imagine you’re training a trading agent. Unlike a drone, it can’t afford early crashes. One bad trade may cost millions. But luckily your firm already has:
- 10 years of historical trades,
- decisions made by human experts,
- logs from a conservative legacy algorithm.
This existing trader isn’t optimal: it’s slow, cautious, maybe outdated. But it’s safe, battle-tested, and profitable enough to trust in production. If the bot learns on-policy, it must behave using its own early policy, and that policy will be awful.
What you actually want is:
Learn an optimal or improved policy from the trades already available without executing the bad ones.
Scenario 2: A self-driving car
Now imagine a self-driving car. The current system, deployed in the real world, follows:
Policy $b$ (safe-but-slow)
- Keeps a large buffer distance.
- Only overtakes when the road is nearly empty.
- Brakes early near intersections.
- Avoids tight gaps aggressively.
This policy is extremely safe, but traffic efficiency is terrible: trips take longer, users complain, and competitors are faster. Your research team has designed a new, more assertive driving policy:
Policy $\pi$ (fast-but-unsafe-to-test)
- Closes the gap sooner to cars ahead.
- Overtakes more frequently.
- Accelerates through borderline yellow lights.
- Navigates tighter spaces in traffic.
This policy might dramatically improve travel time, but there is a critical problem. You cannot put $\pi$ on the road to collect experience, because one bad decision can cost a life. So now:
I want to know how good $\pi$ is… but I don’t want my car to actually drive according to $\pi$ yet.
When you say “how good”, what you really mean is: If my self-driving car were to always follow the risky policy $\pi$, what would its expected long-term return be?
This adds an important nuance:
- You don’t actually want to deploy $\pi$ on the real vehicle.
- Because doing so could:
- break traffic laws
- increase collision risk
- endanger passengers or other road users
What you're really saying is:
I want to evaluate the consequences of $\pi$ before letting my real car behave that way in the real world.
Prediction vs Control
In both of our stories, the trading bot and the self-driving car, there’s a subtle but very important shift in what question we’re asking. Up until now, our Monte Carlo drone was mostly answering:
What should I do in each state to get the best long-term return?
That’s the control problem. But in the self-driving example with $b$ and $\pi$, we suddenly started asking a different question:
If I were to follow this policy all the time, how good would it be?
That’s the prediction problem.
So in reinforcement learning, we usually separate things into two tasks:
- Prediction → Evaluate how good a given policy is (Policy Evaluation).
- Control → Find a good (or optimal) policy (Policy Evaluation + Policy Improvement).
In the last post we learned an on-policy control algorithm using action-values $Q(s, a)$. In this post, we take a step sideways: we focus on off-policy prediction, evaluating a target policy from data generated by another policy. For clarity, we will write this in terms of state values $v_\pi(s)$, but all of this can be done with Q-values as well. If the policy $\pi$ is fixed, it already tells us which action will be used, hence the value we care about is
$$v_\pi(s) = \mathbb{E}_\pi\big[G_t \mid S_t = s\big]$$
which means: if I start in state $s$, and from that point onward I strictly follow policy $\pi$, what total discounted reward should I expect?
Bridging the Gap
That equation may look familiar and unfamiliar at the same time. Familiar, because we are still talking about return. Unfamiliar, because in the previous blog our main object was not $v_\pi(s)$. It was the action-value $Q(s, a)$, and the whole story revolved around choosing the best action.
There, the drone stood in a state, looked at all available actions, and asked a control-style question:
“What should I do next?”
That is why the key relationship looked like
$$V(s) = \max_a Q(s, a)$$
The value of a state was obtained by looking across actions and picking the best one. The max was the mathematical expression of control. It encoded a very specific attitude: from this state, I am free to choose whichever action gives the highest long-term return.
But now the question has quietly changed.
In off-policy prediction, we are no longer asking what the best action is. We are asking what happens if we commit to a particular policy and keep following it. The policy may be cautious, reckless, random, or somewhere in between. Whatever it is, once the policy is fixed, we are no longer free to take a max over actions. The policy itself decides how actions are chosen.
To make this concrete, imagine the self-driving car reaches a yellow light. There are only two actions: GO and STOP.
If we were solving a control problem, we would compare the two action-values and keep the larger one:
$$V(s) = \max\big(Q(s, \text{GO}),\ Q(s, \text{STOP})\big)$$
But suppose instead that the risky policy $\pi$ does not always do the same thing. Maybe in this state it chooses
$$\pi(\text{GO} \mid s) = 0.9, \qquad \pi(\text{STOP} \mid s) = 0.1$$
Now the policy is not saying “always pick the better one.” It is saying “most of the time go, sometimes brake.” So the value of state $s$ under this policy must reflect both possibilities. That means the max disappears and an average appears:
$$v_\pi(s) = 0.9\, q_\pi(s, \text{GO}) + 0.1\, q_\pi(s, \text{STOP})$$
More generally,
$$v_\pi(s) = \sum_a \pi(a \mid s)\, q_\pi(s, a)$$
And since $q_\pi(s, a)$ is just the expected return obtained by taking action $a$ in state $s$ and then continuing with policy $\pi$, we can write
$$v_\pi(s) = \sum_a \pi(a \mid s)\, \mathbb{E}\big[G_t \mid S_t = s, A_t = a\big]$$
or equivalently,
$$v_\pi(s) = \mathbb{E}_{a \sim \pi}\big[q_\pi(s, a)\big]$$
This is the conceptual bridge from the previous blog to this one. In on-policy control, the state value came from the best action. In off-policy prediction, the state value comes from the policy’s action distribution. The moment we stop asking “what is the best move?” and start asking “what return does this policy produce?”, the $\max$ naturally turns into an expectation.
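To see the difference numerically, here is a minimal sketch. The action-values are hypothetical numbers chosen to match the toy traffic-light MDP that appears later in this post; the policy probabilities are the ones from the example above.

```python
# Hypothetical action-values for the yellow-light state (illustrative).
q = {"GO": 8.9, "STOP": -1.0}

# Control view: the state is worth whatever the BEST action is worth.
v_control = max(q.values())  # 8.9

# Prediction view: the state is worth the policy-weighted AVERAGE.
pi = {"GO": 0.9, "STOP": 0.1}  # the stochastic risky policy
v_pi = sum(pi[a] * q[a] for a in q)  # 0.9 * 8.9 + 0.1 * (-1) ≈ 7.91
```

Note that the prediction-view value is lower than the control-view value: committing to a stochastic policy means sometimes taking the worse action.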
Importance Sampling
So let’s slow things down and understand what it means to evaluate a policy we never executed.
Under the risky policy, the self-driving car would sometimes GO, sometimes STOP, in proportions dictated by $\pi$. If we could run that policy, we would simply average the observed returns according to the actions it chose. In other words, the value of state $s$ under $\pi$ is just the expected return produced by the actions that policy prefers.
But we don’t actually have any samples drawn from $\pi$. Every return we’ve ever seen came from the safe policy $b$. So the question becomes:
How do we rewrite an expectation taken under one policy in terms of data generated by another?
To do this, we perform a small algebraic trick: multiply and divide each term by $b(a \mid s)$ (which is non-zero for all actions the safe policy ever took). Since we’re multiplying by 1, nothing changes numerically:
$$v_\pi(s) = \sum_a \pi(a \mid s)\, q_\pi(s, a) = \sum_a b(a \mid s)\, \frac{\pi(a \mid s)}{b(a \mid s)}\, q_\pi(s, a)$$
Now rearrange the terms:
$$v_\pi(s) = \mathbb{E}_{a \sim b}\!\left[\frac{\pi(a \mid s)}{b(a \mid s)}\, q_\pi(s, a)\right]$$
This expression is beautiful because:
- Now everything on the outside matches the distribution we actually collected data from.
- The ratio $\frac{\pi(a \mid s)}{b(a \mid s)}$ adjusts each sample to reflect how strongly $\pi$ preferred that action compared to $b$.
This weighting term is what allows off-policy Monte Carlo to evaluate $\pi$ without ever executing it. This correction factor is exactly where importance sampling weights come from. With this reformulation in hand, we can now compute off-policy values using only data collected under $b$.
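The identity above can be checked numerically in a couple of lines. This is a sketch using the illustrative probabilities and action-values from this post; any numbers with $b(a \mid s) > 0$ would work.

```python
pi = {"GO": 0.9, "STOP": 0.1}   # target policy
b  = {"GO": 0.2, "STOP": 0.8}   # behavior policy
q  = {"GO": 8.9, "STOP": -1.0}  # illustrative action-values

# Left side: expectation taken directly under pi.
lhs = sum(pi[a] * q[a] for a in q)

# Right side: expectation under b, each term corrected by pi/b.
rhs = sum(b[a] * (pi[a] / b[a]) * q[a] for a in q)

assert abs(lhs - rhs) < 1e-12  # multiplying by b/b changed nothing
```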
Putting it into action
Let’s shrink Scenario 2 (self-driving car) into the simplest possible MDP. Just one state. Two actions. One reward. No transitions. This lets us focus on the core idea without algebra swallowing the message.
Environment
One decision at a traffic light:
- State: at the intersection (call it $s$)
- Actions:
- GO
- STOP
- Transition Model:
- If you GO:
- 99% chance → safe → reward = +10
- 1% chance → crash → reward = −100
- If you STOP:
- always safe → reward = −1
We have two policies.
Behavior Policy $b$ (the safe policy used during data collection, where STOP is preferred):
$$b(\text{GO} \mid s) = 0.2, \qquad b(\text{STOP} \mid s) = 0.8$$
Target Policy $\pi$ (the risky policy we want to evaluate, where GO is preferred):
$$\pi(\text{GO} \mid s) = 0.9, \qquad \pi(\text{STOP} \mid s) = 0.1$$
What we want:
$$v_\pi(s) = \mathbb{E}_\pi\big[G_t \mid S_t = s\big]$$
But we never actually drive using $\pi$ because it’s unsafe. All real data comes from $b$.
Suppose we let the car approach this intersection 3 times, each time following the safe policy. The collected episodes:
| Episode | Action Taken | Reward |
| --- | --- | --- |
| 1 | STOP | -1 |
| 2 | GO (safe outcome) | +10 |
| 3 | STOP | -1 |
Everything in this dataset was generated by $b$, not $\pi$. If we did on-policy MC for the safe policy, we’d simply average the rewards: $\frac{-1 + 10 - 1}{3} \approx 2.67$. That’s what we were doing in on-policy Monte Carlo: we generated multiple episodes using the same policy and simply averaged the returns we observed. Every time a state (or state–action pair) appeared, we recorded its full return and updated its value as the mean of all those returns over multiple episodes.
That tells us how good the safe policy is. To answer “what if we had driven under $\pi$?”, we must adjust each episode according to its importance sampling ratio:
$$\rho = \frac{\pi(a \mid s)}{b(a \mid s)}$$
This ratio tells us:
“How much more (or less) likely was this action under the risky policy compared to the safe one?”
- If $\pi$ strongly prefers the action → weight is large (> 1).
- If $\pi$ rarely takes that action → weight is small (< 1).
Let’s compute them.
Episode 1, Action: STOP
$$\pi(\text{STOP} \mid s) = 0.1 \quad \text{and} \quad b(\text{STOP} \mid s) = 0.8 \quad \Rightarrow \quad \rho_1 = \frac{0.1}{0.8} = 0.125$$
Episode 2, Action: GO
$$\pi(\text{GO} \mid s) = 0.9 \quad \text{and} \quad b(\text{GO} \mid s) = 0.2 \quad \Rightarrow \quad \rho_2 = \frac{0.9}{0.2} = 4.5$$
Episode 3, Action: STOP
$$\rho_3 = \frac{0.1}{0.8} = 0.125$$
Now our dataset looks like this:
| Ep | Action | Reward | Weight |
| --- | --- | --- | --- |
| 1 | STOP | -1 | 0.125 |
| 2 | GO | +10 | 4.5 |
| 3 | STOP | -1 | 0.125 |
Now instead of averaging raw returns, we weight them.
Plug in numbers:
$$\hat v_\pi(s) = \frac{0.125 \cdot (-1) + 4.5 \cdot 10 + 0.125 \cdot (-1)}{3} = \frac{44.75}{3} \approx 14.92$$
Now suddenly the estimate explodes upward. Why?
Because the one GO event gets amplified by 4.5 times.
In our three episodes, the safe policy produced two STOP actions and one GO. Under the safe policy that makes sense: it stops most of the time. But the risky policy behaves very differently. It almost always goes. Under the risky policy, GO is far more common than under the safe one. So every GO sample becomes extremely influential. Importance sampling compensates for this mismatch.
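The whole calculation fits in a few lines. This sketch uses the three logged episodes from the table above and compares the naive average with the importance-sampled one.

```python
# Logged episodes (action, reward), all generated by the safe policy b.
episodes = [("STOP", -1.0), ("GO", 10.0), ("STOP", -1.0)]

pi = {"GO": 0.9, "STOP": 0.1}  # risky target policy
b  = {"GO": 0.2, "STOP": 0.8}  # safe behavior policy

# Naive average: estimates the SAFE policy's value.
naive = sum(r for _, r in episodes) / len(episodes)  # (-1 + 10 - 1) / 3 ≈ 2.67

# Importance-sampled average: estimates the RISKY policy's value.
is_estimate = sum((pi[a] / b[a]) * r for a, r in episodes) / len(episodes)
# (0.125 * (-1) + 4.5 * 10 + 0.125 * (-1)) / 3 = 44.75 / 3 ≈ 14.92
```

For reference, the true value under the risky policy is $0.9 \cdot (0.99 \cdot 10 + 0.01 \cdot (-100)) + 0.1 \cdot (-1) = 7.91$, so with only three episodes the estimate overshoots badly; this variance problem is exactly what the rest of the post digs into.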
From a Single State to a Full Episode
Real environments, whether drones, trading bots, or self-driving cars, don’t make just one decision. They make many decisions per episode:
- the drone flies through a whole grid,
- the trading bot opens, adjusts, and closes positions,
- the self-driving car navigates an entire route.
So we now face the more general question:
How do we evaluate a target policy when each episode consists of multiple decisions, each taken under the behavior policy?
To extend importance sampling beyond a single action, we must look at the entire trajectory.
A real drone flight episode might look like:
$$S_0, A_0, R_1,\ S_1, A_1, R_2,\ \dots,\ S_{T-1}, A_{T-1}, R_T$$
And the return we care about is no longer just a single reward; instead it is the whole discounted sum:
$$G_0 = R_1 + \gamma R_2 + \gamma^2 R_3 + \dots + \gamma^{T-1} R_T$$
When the behavior policy $b$ generated this trajectory, it selected each action according to its own probabilities:
$$b(A_0 \mid S_0),\ b(A_1 \mid S_1),\ \dots,\ b(A_{T-1} \mid S_{T-1})$$
Under the target policy $\pi$, the same actions would have been chosen with probabilities:
$$\pi(A_0 \mid S_0),\ \pi(A_1 \mid S_1),\ \dots,\ \pi(A_{T-1} \mid S_{T-1})$$
We extend the same idea we used in the single-state case. Instead of one importance ratio, we now have one ratio per step, multiplied together:
$$\rho_{0:T-1} = \prod_{k=0}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)}$$
Why a product?
Because the probability of a trajectory under a policy is the product of action probabilities along the path. If we want to convert expectations from one policy’s trajectory distribution to another’s, we must correct every decision along the way.
So the Monte Carlo return update becomes: instead of averaging raw returns $G$, we average the corrected returns $\rho_{0:T-1}\, G$.
Look carefully at what changed compared to ordinary Monte Carlo.
Ordinary Monte Carlo:
$$V(s) \approx \frac{1}{N} \sum_{i=1}^{N} G^{(i)}$$
Off-policy Monte Carlo:
$$V(s) \approx \frac{1}{N} \sum_{i=1}^{N} \rho^{(i)}\, G^{(i)}$$
Just one extra multiplier. That’s it. But that one multiplier changes everything.
Now comes the uncomfortable truth. Imagine a short 3-decision episode where the safe policy is conservative at every step, and the risky policy is aggressive at every step. Suppose at each of the three states, the risky policy chooses the aggressive action with probability $0.9$, while the safe policy chooses it with probability $0.2$. If one episode happens to contain aggressive actions all three times, then the ratio becomes
$$\left(\frac{0.9}{0.2}\right)^{3} = 4.5^3 = 91.125 \approx 91$$
So one “lucky” aggressive trajectory can count as ninety-one episodes worth of evidence. That’s not a small correction anymore. That’s a hijacking of your estimate.
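A small sketch of that product, using per-step probabilities of 0.9 for the risky policy and 0.2 for the safe one (the numbers from this example):

```python
def trajectory_ratio(pi_probs, b_probs):
    """Product of per-step importance ratios pi(a|s) / b(a|s)."""
    rho = 1.0
    for p_pi, p_b in zip(pi_probs, b_probs):
        rho *= p_pi / p_b
    return rho

# Three aggressive steps in a row:
rho = trajectory_ratio([0.9, 0.9, 0.9], [0.2, 0.2, 0.2])
# (0.9 / 0.2) ** 3 = 4.5 ** 3 = 91.125
```

The blow-up is multiplicative: each additional mismatched step multiplies the weight by another 4.5, so longer episodes make the problem exponentially worse.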
And there’s an even harsher truth hidden inside that product: it tells us how much more (or less) likely the entire trajectory would have been under $\pi$ compared to $b$. If the target policy can choose an action in a state, the behavior policy must assign it non-zero probability. Formally:
$$\pi(a \mid s) > 0 \;\Rightarrow\; b(a \mid s) > 0$$
Otherwise the ratio either explodes (division by zero) or is undefined, and you literally cannot evaluate that part of the target policy from your data. That means:
The behavior policy must sometimes do everything that $\pi$ might do. No data → no evaluation.
Weighted importance sampling
From here we have two ways to use importance sampling in the off-policy Monte Carlo method.
- **Ordinary Importance Sampling**
$$V(s) = \frac{1}{N} \sum_{i=1}^{N} \rho^{(i)}\, G^{(i)}$$
This estimator is unbiased: if you collected infinitely many episodes, its average would exactly equal the true value of the target policy.
However, because the weights are products of many probability ratios, they can become extremely large. A single rare trajectory (one that is very unlikely under $b$ but much more likely under $\pi$) can dominate the estimate, so the variance can be huge.
- **Weighted (self-normalized) Importance Sampling**
$$V(s) = \frac{\sum_{i=1}^{N} \rho^{(i)}\, G^{(i)}}{\sum_{i=1}^{N} \rho^{(i)}}$$
This estimator is slightly biased (since it’s a ratio of random sums), but it usually has much lower variance. It effectively treats the weights as relative, normalizing them to sum to 1, and is therefore far more stable in practice.
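Both estimators are one line each. A sketch on the three weighted samples from the traffic-light example:

```python
# (importance weight, return) for each logged episode.
samples = [(0.125, -1.0), (4.5, 10.0), (0.125, -1.0)]

weighted_sum = sum(w * g for w, g in samples)  # 44.75
total_weight = sum(w for w, _ in samples)      # 4.75

# Ordinary IS: divide by the NUMBER of episodes. Unbiased, high variance.
ordinary = weighted_sum / len(samples)         # 44.75 / 3 ≈ 14.92

# Weighted IS: divide by the SUM OF WEIGHTS. Slightly biased, much stabler.
weighted = weighted_sum / total_weight         # 44.75 / 4.75 ≈ 9.42
```

On this tiny dataset the weighted estimate (≈ 9.42) already sits much closer to the true value of 7.91 than the ordinary estimate (≈ 14.92).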
And there’s a beautifully simple incremental form of that weighted update that feels almost like a TD update even though it’s still Monte Carlo. Suppose you maintain a running “total weight” $C(s)$ for each state, and a value estimate $V(s)$. For each new weighted sample $(W, G)$ for that state, you do
$$C(s) \leftarrow C(s) + W$$
$$V(s) \leftarrow V(s) + \frac{W}{C(s)} \big(G - V(s)\big)$$
That update has a very comforting interpretation. The term $G - V(s)$ is the familiar prediction error. The learning rate is $\frac{W}{C(s)}$, which automatically shrinks as you accumulate more weighted evidence. So big-weight episodes can move the estimate more, but only in proportion to the total evidence you’ve already seen.
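In code, the incremental rule reproduces the batch weighted-IS answer exactly. A sketch, replaying the three weighted samples from the toy example:

```python
V = 0.0  # value estimate for the state
C = 0.0  # accumulated total weight for the state

for W, G in [(0.125, -1.0), (4.5, 10.0), (0.125, -1.0)]:
    C += W                   # accumulate evidence
    V += (W / C) * (G - V)   # move toward G with step size W / C

# V now equals the batch weighted-IS estimate: 44.75 / 4.75 ≈ 9.42
```

No replay buffer of past returns is needed; two running numbers per state are enough.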
An important thing to keep in mind here: $V(s)$ still estimates “the expected return when we start in state $s$ and then behave according to policy $\pi$”, only now with episode-length weights instead of one-step weights.
From Off-Policy Prediction to Off-Policy Control
So far in this post, we’ve stayed in the prediction world:
Given a fixed target policy $\pi$, how do we estimate $v_\pi$ using episodes generated by another policy $b$?
We can now safely evaluate risky or experimental policies from logged data, but in reinforcement learning we rarely want to stop at prediction. Eventually, we want control:
Can I use off-policy Monte Carlo ideas not just to evaluate a policy, but to actually learn a better one?
Conceptually, off-policy MC control is “just” off-policy prediction + policy improvement:
- **Behavior policy $b$:** used to explore and generate episodes (e.g., the safe self-driving policy, the conservative trader).
- **Target policy $\pi$:** a greedy (or almost greedy) policy with respect to our current value estimates.
- Evaluate $\pi$ off-policy using importance sampling.
- Improve $\pi$ by making it greedier w.r.t. updated values.
- Repeat.
For control, it’s more natural to work with action-values $Q(s, a)$ instead of just state values:
- State values $v_\pi(s)$ tell us how good it is to be in state $s$ if we follow $\pi$.
- Action values $q_\pi(s, a)$ tell us how good it is to take action $a$ in state $s$ and then follow $\pi$.
Once we have $Q(s, a)$, we can easily define a greedy target policy:
$$\pi(s) = \arg\max_a Q(s, a)$$
Now the game looks like this:
- $b$: behavior policy, kept exploratory and safe.
- $\pi$: target policy, greedy (or $\epsilon$-greedy) with respect to $Q$.
We generate episodes under $b$, but we update $Q$ as if we had followed $\pi$, using importance sampling on the state–action level.
For a single episode
$$S_0, A_0, R_1,\ S_1, A_1, R_2,\ \dots,\ S_{T-1}, A_{T-1}, R_T$$
and for each time step $t$, we can compute:
- the return from that point on:
$$G_t = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{T-t-1} R_T$$
- the tail importance ratio (the action $A_t$ is already fixed, so the correction only covers the decisions after it):
$$\rho_{t+1:T-1} = \prod_{k=t+1}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)}$$
Then $\rho_{t+1:T-1}\, G_t$ is a corrected Monte Carlo return. The same old “full return from time $t$” idea, except now it’s been reweighted so that even though the episode was generated by $b$, it counts as evidence for $\pi$.
Now a weighted-IS style update for each visited pair $(S_t, A_t)$, with $W_t = \rho_{t+1:T-1}$, looks like:
$$C(S_t, A_t) \leftarrow C(S_t, A_t) + W_t$$
$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \frac{W_t}{C(S_t, A_t)} \big(G_t - Q(S_t, A_t)\big)$$
In control, we usually make the target policy greedy with respect to our current $Q$. In its purest form it’s deterministic. After processing the whole episode, we update the target policy:
$$\pi(s) \leftarrow \arg\max_a Q(s, a)$$
We keep behaving with $b$ (which must still explore all actions that $\pi$ might want), but our evaluation target keeps getting greedier.
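Putting the pieces together on the one-step traffic-light MDP from earlier, here is a minimal sketch of off-policy MC control. Because each episode has a single step, the tail importance ratio after the taken action is an empty product (so $W = 1$), but the update structure is the general weighted-IS one.

```python
import random

random.seed(0)
ACTIONS = ["GO", "STOP"]
b = {"GO": 0.2, "STOP": 0.8}  # safe behavior policy

def reward(action):
    """One-step traffic-light MDP: GO is +10 (99%) or -100 (1%), STOP is -1."""
    if action == "GO":
        return 10.0 if random.random() < 0.99 else -100.0
    return -1.0

Q = {a: 0.0 for a in ACTIONS}  # action-values for the single state
C = {a: 0.0 for a in ACTIONS}  # accumulated weights per action

for _ in range(5000):
    # Generate a one-step episode with the behavior policy.
    a = "GO" if random.random() < b["GO"] else "STOP"
    G = reward(a)
    # Weighted-IS update; the tail ratio after the last action is empty, so W = 1.
    W = 1.0
    C[a] += W
    Q[a] += (W / C[a]) * (G - Q[a])

# Greedy target policy with respect to the learned Q.
pi_greedy = max(ACTIONS, key=lambda a: Q[a])
```

With enough episodes the estimates settle near the true values (about 8.9 for GO and exactly −1 for STOP), so the greedy target policy chooses GO even though the behavior policy mostly stopped.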
From Episodes to Step-by-Step Learning
Off-policy Monte Carlo control gives us a powerful idea: we can learn a better policy without ever acting according to it. But MC has one big limitation: it only learns after an episode ends. In long or complicated tasks, this makes learning slow, unstable, and sometimes impractical.
So the natural next question is:
Can we update our estimates during an episode instead of waiting until the end?
This is exactly what Temporal-Difference (TD) learning does. Instead of waiting for the full return, TD methods learn step-by-step, adjusting their value estimates continuously.