6.231 | Fall 2015 | Graduate

Dynamic Programming and Stochastic Control


Course Meeting Times

Lectures: 2 sessions / week, 1.5 hours / session

Recitations: 1 session / week, 1 hour / session

Course Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

We will place increased emphasis on approximations, even as we discuss exact Dynamic Programming, including references to large-scale problem instances, simple approximation methods, and forward references to the approximate Dynamic Programming formalism. However, the more mathematically formal parts of approximate Dynamic Programming, which require a good understanding of the exact Dynamic Programming material, will be the focal point of the last part of the course.

The course will roughly follow this schedule:

  • Finite-horizon Problems: Perfect Information
  • Finite-horizon Problems: Imperfect Information
  • Infinite Horizon Problems
  • Suboptimal Control and Approximate Dynamic Programming Methods
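To make the first topic concrete, here is a minimal sketch of finite-horizon dynamic programming with perfect state information, solved by backward induction on the Bellman recursion J_k(x) = min_u E[g(x, u, w) + J_{k+1}(f(x, u, w))]. The inventory model below (state space, costs, and demand distribution) is a made-up illustration, not taken from the course materials.

```python
# Finite-horizon DP by backward induction on a toy inventory problem.
# State x_k: inventory level in {0,...,3}; control u_k: order quantity;
# disturbance w_k: random demand. All numbers here are assumed for
# illustration only.

N = 3                                    # horizon (number of stages)
STATES = range(4)                        # inventory levels
ACTIONS = range(4)                       # order quantities
DEMAND = [(0, 0.1), (1, 0.7), (2, 0.2)]  # (demand w, probability)
HOLD, ORDER, SHORT = 1.0, 2.0, 4.0       # per-unit costs (assumed)

def step(x, u, w):
    """Next state and stage cost: x_{k+1} = max(0, min(3, x+u) - w)."""
    x_next = max(0, min(3, x + u) - w)
    cost = ORDER * u + HOLD * x_next + SHORT * max(0, w - (x + u))
    return x_next, cost

# Backward induction: start from J_N = 0, then recurse
# J_k(x) = min over u of E[ g(x,u,w) + J_{k+1}(f(x,u,w)) ].
J = {x: 0.0 for x in STATES}
policy = []                              # policy[k][x] = optimal u at stage k
for k in reversed(range(N)):
    J_new, mu = {}, {}
    for x in STATES:
        best_u, best_val = None, float("inf")
        for u in ACTIONS:
            val = 0.0
            for w, p in DEMAND:
                x_next, g = step(x, u, w)
                val += p * (g + J[x_next])
            if val < best_val:
                best_u, best_val = u, val
        J_new[x], mu[x] = best_val, best_u
    J, policy = J_new, [mu] + policy

print({x: round(J[x], 2) for x in STATES})  # optimal expected cost-to-go at stage 0
```

The same backward pass structure carries over to the imperfect-information and infinite-horizon settings discussed later in the course, with the state replaced by an information state or the recursion replaced by a fixed-point equation.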


Prerequisites

Solid knowledge of undergraduate probability, at the level of 6.041 Probabilistic Systems Analysis and Applied Probability, is required, especially conditional distributions and expectations, and Markov chains. Mathematical maturity and the ability to write down precise and rigorous arguments are also important. A class in analysis (e.g. 18.100C Real Analysis) will be helpful, although this prerequisite will not be strictly enforced.

Readings and Resources

Bertsekas, Dimitri P. Dynamic Programming and Optimal Control, Volume I. 3rd ed. Athena Scientific, 2005. ISBN: 9781886529267.

Bertsekas, Dimitri P. Dynamic Programming and Optimal Control, Volume II: Approximate Dynamic Programming. 4th ed. Athena Scientific, 2012. ISBN: 9781886529441.

The two volumes can also be purchased as a set. ISBN: 9781886529083.

Errata (PDF)

Videos from a 6-lecture, 12-hour short course at Tsinghua University, Beijing, China, 2014. Available on the Tsinghua course site and on YouTube. Based on the course textbook.

Course Requirements

A term paper or project of one of the following types will be required:

  • Read some of the literature and provide a critical report, with suggestions for further work.
  • Formulate a new model, motivated by some application that interests you, and study it, analytically or computationally.

There will be short project presentations during exam week. A fairly complete version of your paper needs to be handed in before the presentation. More detailed instructions, together with pointers to the literature and possible topics, can be found in the Project Description.

There will be one midterm exam and nine problem sets.


Grading

Homework: 30%
Quizzes: 30%
Project: 40%
