Penn State 2021, March 16: Discrete-state dynamic programming

I will discuss the combination of numerical methods used to solve the model described in “Optimal Dynamic…”

Discrete-state models are often used even when the state variables are naturally continuous. Good approximations require fine discretizations of the states.
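
As a minimal illustration (the bounds, grid size, and law of motion below are assumptions, not the lecture’s example), a continuous state such as a capital stock can be replaced by a fine grid, with continuous transitions projected onto the nearest grid point:

    import numpy as np

    k_min, k_max, n_k = 0.1, 10.0, 1000        # fine grid for a good approximation
    grid = np.linspace(k_min, k_max, n_k)

    def project(k_next):
        """Map a continuous next-period state to the nearest grid index."""
        return int(np.abs(grid - k_next).argmin())

    # e.g., a hypothetical law of motion k' = 0.9 k + 0.3, rounded to the grid
    idx = project(0.9 * grid[500] + 0.3)

The finer the grid, the smaller the projection error, at the cost of more states.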

Infinite-horizon discrete-state DP has many advantages. There is no need to approximate the value function with a continuous function: the approximation is a finite, but possibly large, list (or matrix, or tensor) of values at the states. Although the number of unknowns may be large, discrete-state problems can exploit many properties of dynamic programming, yielding algorithms far faster than the usual value function iteration (a Gauss-Jacobi scheme).
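
Here is a minimal sketch of that baseline on a small synthetic model (the payoffs, transition probabilities, sizes, and discount factor below are placeholders, not the lecture’s model). Every state is updated using only the previous iterate, which is exactly the Gauss-Jacobi structure:

    import numpy as np

    n_s, n_a, beta = 200, 20, 0.95             # assumed sizes and discount factor
    rng = np.random.default_rng(0)
    u = rng.normal(size=(n_s, n_a))            # flow payoff u(s, a)
    P = rng.random(size=(n_a, n_s, n_s))       # transition P(s' | s, a)
    P /= P.sum(axis=2, keepdims=True)          # normalize to probability rows

    def bellman(V):
        # Q[s, a] = u(s, a) + beta * sum_{s'} P(s' | s, a) * V(s')
        EV = np.einsum('ajk,k->ja', P, V)      # expected continuation value
        return (u + beta * EV).max(axis=1)

    V = np.zeros(n_s)
    for it in range(2000):
        V_new = bellman(V)                     # all states use the OLD iterate:
        if np.max(np.abs(V_new - V)) < 1e-8:   # this is the Gauss-Jacobi scheme
            break
        V = V_new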

The key property is the monotonicity of the Bellman operator. Some key techniques that exploit it are (see the sketch after this list):

Gauss-Seidel

Upwind Gauss-Seidel

Alternating Sweep Gauss-Seidel

Asynchronous parallel Gauss-Jacobi, Gauss-Seidel
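
As a minimal sketch of the Gauss-Seidel ideas (reusing u, P, beta, and n_s from the Gauss-Jacobi sketch above), updating V in place lets each state immediately see the freshest values; monotonicity of the Bellman operator guarantees this still converges to the fixed point. Alternating the sweep direction, or sweeping “upwind” in the direction the state tends to move, propagates new information faster:

    def gauss_seidel_sweep(V, forward=True):
        order = range(n_s) if forward else reversed(range(n_s))
        for s in order:
            # in-place update: state s immediately uses the freshest V
            V[s] = (u[s] + beta * P[:, s, :] @ V).max()
        return V

    V = np.zeros(n_s)
    for it in range(2000):
        V_old = V.copy()
        V = gauss_seidel_sweep(V, forward=(it % 2 == 0))   # alternating sweep
        if np.max(np.abs(V - V_old)) < 1e-8:
            break

In the asynchronous parallel variants, workers update disjoint blocks of states without waiting for one another; monotonicity again guarantees convergence.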

Another advantage of discrete-state DP is the ability to exploit known monotonicity properties of the value function.
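
A closely related and commonly exploited property is monotonicity of the optimal policy. Assuming the optimal action index is nondecreasing in the state (true in many economic models, though not in the random placeholder model above), the argmax search at each state can start where the previous state’s search ended. A minimal sketch, reusing the model from the sketches above:

    def monotone_bellman(V, u, P, beta):
        n_s = u.shape[0]
        V_new = np.empty(n_s)
        a_lo = 0                                # lowest action worth considering
        for s in range(n_s):
            q = u[s, a_lo:] + beta * P[a_lo:, s, :] @ V
            best = int(q.argmax())              # search only actions >= a_lo
            V_new[s] = q[best]
            a_lo += best                        # the policy never steps back down
        return V_new

This cuts the work of one sweep from order n_s * n_a comparisons to order n_s + n_a.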

Some problems do not have concave value functions, as is the case with our example of optimal taxation; the maximization inside the Bellman operator then becomes a difficult global optimization problem.
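
To see why nonconcavity matters, consider a hypothetical two-peaked maximand: a local hill-climb over the action grid can stall at the lower peak, while exhaustive enumeration, which is cheap in a discrete-action setting, always finds the global maximum.

    import numpy as np

    a = np.linspace(0.0, 1.0, 201)              # discretized action grid
    q = np.exp(-200 * (a - 0.2) ** 2) + 1.5 * np.exp(-200 * (a - 0.8) ** 2)

    i = 0                                       # local ascent from the left
    while i + 1 < len(q) and q[i + 1] > q[i]:
        i += 1                                  # stalls at the lower peak, a = 0.2

    i_global = int(q.argmax())                  # enumeration finds a = 0.8
    print(a[i], a[i_global])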

I will discuss the many options for accelerating the solution of such DPs and how to adjust the discretization to improve its approximation of the continuous problem.