Reinforcement Learning: Value Iteration
Value Iteration: a faster path to optimal policies for MDPs.
Our drone has come a long way. It began with a random policy, wandering aimlessly. Then it learned to evaluate that policy, estimating how good each state was. After that, it improved the policy, updating arrows to point toward better futures. And finally, through repeated cycles of Policy Iteration,
evaluation → improvement → evaluation → improvement → …
the drone found a truly optimal plan: a set of actions that reliably guide it toward the rooftop while avoiding no-fly zones.
That’s thoughtful. It’s wise.
But it’s slow.
It requires sweeping through the grid dozens of times just to evaluate a single policy, before even improving it! What if, instead of waiting for an entire policy to converge, the drone could update its utilities and improve its decisions at the same time? This is where an even more powerful idea emerges, Value Iteration.
Let’s look closely at what happens inside Policy Iteration.
During Policy Evaluation, we repeatedly apply the Bellman backup for the current policy π:

V_π(s) ← Σ_{s'} P(s' | s, π(s)) · [ R(s, π(s), s') + γ · V_π(s') ]
During Policy Improvement, we update:

π(s) ← argmax_a Σ_{s'} P(s' | s, a) · [ R(s, a, s') + γ · V_π(s') ]
But why keep these two phases separate? Why not fold the improvement step directly into the evaluation step? Whenever you update the value of a state s, instead of using the policy’s action π(s), why not simply use the best action available right now?
In other words, “Don’t wait to improve the policy later. Improve the decision now, during the value update itself.” This transforms the Bellman equation from a policy-based evaluator into an optimality-based evaluator.
So we replace the fixed action π(s) with a max over all possible actions:

V(s) ← max_a Σ_{s'} P(s' | s, a) · [ R(s, a, s') + γ · V(s') ]
This is the Bellman Optimality Equation.
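As a sketch, this max-backup translates almost line for line into code. Everything here is illustrative scaffolding, not our grid-city: `P[s][a]` is assumed to hold `(probability, next_state, reward)` triples, and `gamma` is an assumed discount factor.

```python
# One Bellman optimality backup for a single state s:
#   V(s) <- max_a  sum_{s'} P(s'|s,a) * (R(s,a,s') + gamma * V(s'))
def bellman_backup(s, V, P, gamma=0.9):
    return max(
        sum(prob * (reward + gamma * V[s2]) for prob, s2, reward in P[s][a])
        for a in P[s]
    )

# Toy example: state 0 has two actions; "stay" earns nothing,
# "go" earns +1 and lands in state 1 (whose current value is 0).
P = {0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]}}
V = {0: 0.0, 1: 0.0}
print(bellman_backup(0, V, P))  # the max over the two action-values
```

Note that the backup itself never mentions a policy; the max quietly plays the role that Policy Improvement played before.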
Suppose we want to update the value of state 0. At the start of Value Iteration, all non-terminal values are initialized to 0, just like before.

The drone must consider all four possible actions from state 0, so for each action a it computes:

Q(0, a) = Σ_{s'} P(s' | 0, a) · [ R(0, a, s') + γ · V(s') ]

Because every V(s') is still 0, each action-value reduces to the expected immediate reward of taking that action. The best action is the one with the highest value; here it is LEFT, with value 0. The final value update is:

V(0) ← max_a Q(0, a) = 0
Just like that, the drone has completed one Value Iteration update, improving the utility of state 0 by considering every possible action, not just the one suggested by a policy.
After a few sweeps of Value Iteration, the utilities across the grid begin to change rapidly. Good states grow brighter, dangerous states grow darker, and the “terrain of value” settles into its true shape. Eventually, Value Iteration converges: further sweeps change the values by less and less, until the updates make no meaningful difference.
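The sweep-until-convergence loop can be sketched like this. The two-state toy MDP, the `(probability, next_state, reward)` representation in `P`, and the threshold `theta` are all assumptions for illustration, not the grid-city model:

```python
def value_iteration(P, gamma=0.9, theta=1e-8):
    """Repeat Bellman optimality backups over all states until the
    largest change in any value drops below theta."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

# Two-state toy MDP: from either state, "go" moves to the other state
# for +1 reward; "stay" earns nothing. Always going forever is worth
# 1 + gamma + gamma^2 + ... = 1 / (1 - gamma) = 10 from each state.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 1.0)]},
}
V = value_iteration(P)
```

Unlike Policy Iteration, there is no inner evaluation loop and no explicit policy anywhere; only the values are tracked until they stop moving.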
At this point, the drone has discovered the optimal value function V*(s): the best possible estimate of long-term future reward from every state. But a value function alone doesn’t tell the drone what action to take. An important question remains:
“Once the values have converged, how do we actually get the final best actions, the optimal policy?”
Once we have V*, the optimal value function, the drone doesn’t need to guess or experiment. For every state, it can now compute:
Which action leads to the highest long-term value?
Formally, the optimal policy is:

π*(s) = argmax_a Σ_{s'} P(s' | s, a) · [ R(s, a, s') + γ · V*(s') ]
This looks similar to the “max” inside the Bellman equation because it is the same idea, just reused for the final decision.
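That final argmax pass can be sketched in a few lines. As before, the `(probability, next_state, reward)` representation in `P`, the toy two-state MDP, and `gamma = 0.9` are hypothetical choices for illustration:

```python
def extract_policy(V, P, gamma=0.9):
    """Greedy policy: for each state, pick the action whose one-step
    lookahead value is highest under the converged values V."""
    return {
        s: max(
            P[s],
            key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]),
        )
        for s in P
    }

# Toy two-state MDP: "go" moves to the other state for +1 reward,
# "stay" earns nothing. Its optimal values are V* = 10 in both states,
# so the greedy policy chooses "go" everywhere.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 1.0)]},
}
V = {0: 10.0, 1: 10.0}
print(extract_policy(V, P))  # {0: 'go', 1: 'go'}
```

This is a single pass over the states, done once after the values have converged; no further sweeps are needed.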
From a Perfect Map to the Real Sky
Over the past few chapters in our drone’s journey, we explored the full power of MDPs. We learned how a drone can navigate a city perfectly, as long as it has access to a perfectly defined world:
- a known reward for every rooftop and alley,
- a precise model of how the wind pushes it around,
- and the ability to compute utilities through Policy Iteration or Value Iteration.
In our little grid-city, life is clean. Predictable. Fully observable. But the real sky is rarely so kind. The drone faces a world that does not hand over its transition probabilities in neat tables. No city official gives it a spreadsheet explaining:
- “When you try to fly UP, here's a 10% chance you'll drift left.”
- “This block always costs −0.04 battery.”
- “That rooftop gives +1 reward.”
If real life were that cooperative, we could solve everything with MDPs overnight. But the truth is harsher:
real-world environments do not reveal their rules.
Our drone must fly through a city where:
- rewards are unknown,
- transition dynamics are hidden,
- the map changes with weather, traffic, and time,
- and even the set of reachable states may be a mystery.
It’s like releasing the drone into a foggy metropolis where the only feedback is “you moved… something happened… try again.” No clear map. No guaranteed model. No perfect grid.
In other words:
The MDPs we solved are fully observable. Reality is not.
And this is where the classical MDP toolbox reaches its limit. Policy Iteration and Value Iteration shine in worlds where the rules are known. But when the world is hidden or unpredictable, a new toolbox is needed, one built not on knowing the environment, but on learning it.
That toolbox is Reinforcement Learning.
In reinforcement learning, the drone learns the reward function, the transition model, or even the optimal policy directly through trial, error, exploration, and experience. It does not start with a map. It discovers the map. It reshapes its understanding with every flight. So as our story in the Grid City comes to an end, a new adventure begins:
What happens when the drone must learn to fly without ever being told the rules?
That is the world of reinforcement learning and that’s where we’ll go next.
Stay tuned. The sky is about to get unpredictable.