Dynamic Programming and Stochastic Control

Figure: Label correcting methods for shortest paths. Diagram in which nodes can be inserted into or removed from a list of active nodes. See Lecture 3 for more information. (Figure by MIT OpenCourseWare, adapted from course notes by Prof. Dimitri Bertsekas.)
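The figure above illustrates the key operation of a label correcting method: nodes whose labels improve are inserted into a list of active nodes, and nodes are removed from the list to be examined. A minimal Python sketch of a generic label correcting method is shown below; the dict-based graph representation and all names are illustrative, not taken from the course materials.

```python
from collections import deque

def label_correcting(graph, source, target):
    """Generic label correcting shortest-path method (illustrative sketch).

    graph: dict mapping node -> list of (neighbor, arc_length) pairs.
    A node is inserted into the active list when its label improves
    and removed from the list when it is examined.
    """
    INF = float("inf")
    d = {source: 0.0}          # labels: current best path lengths
    upper = INF                # best known path length to the target
    active = deque([source])   # list of active nodes (the OPEN list)

    while active:
        i = active.popleft()   # remove a node from the active list
        for j, aij in graph.get(i, []):
            dij = d[i] + aij
            # Improve j's label only if it beats both the current
            # label and the best known path length to the target.
            if dij < d.get(j, INF) and dij < upper:
                d[j] = dij
                if j == target:
                    upper = dij          # tighten the upper bound
                elif j not in active:
                    active.append(j)     # insert j into the active list
    return upper
```

Different rules for choosing which node to remove from the active list yield different methods (for instance, removing the node with the smallest label recovers Dijkstra's algorithm); the FIFO rule used here is one simple choice.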

Instructor(s): Prof. Dimitri Bertsekas

MIT Course Number: 6.231

As Taught In: Fall 2011

Level: Graduate


Course Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
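For the finite-horizon case mentioned above, the central computational tool is the backward dynamic programming recursion. The following is a minimal sketch for finite state, control, and disturbance spaces; the function signature and all names are illustrative assumptions, not an interface from the course.

```python
def finite_horizon_dp(states, controls, N, stage_cost, terminal_cost,
                      transition, disturbances):
    """Backward DP recursion for a finite-horizon stochastic control
    problem (illustrative sketch).

    transition(x, u, w) -> next state; disturbances: list of (w, prob).
    Returns cost-to-go tables J[k][x] and a policy mu[k][x].
    """
    J = [dict() for _ in range(N + 1)]
    mu = [dict() for _ in range(N)]
    for x in states:
        J[N][x] = terminal_cost(x)            # terminal condition
    for k in range(N - 1, -1, -1):            # proceed backward in time
        for x in states:
            best_cost, best_u = float("inf"), None
            for u in controls:
                # expected stage cost plus expected cost-to-go
                cost = sum(p * (stage_cost(x, u, w)
                                + J[k + 1][transition(x, u, w)])
                           for w, p in disturbances)
                if cost < best_cost:
                    best_cost, best_u = cost, u
            J[k][x] = best_cost
            mu[k][x] = best_u
    return J, mu
```

This exhaustive enumeration over states and controls is exactly what becomes intractable for large state spaces, which motivates the approximation methods the description mentions.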

Bertsekas, Dimitri. 6.231 Dynamic Programming and Stochastic Control, Fall 2011. (MIT OpenCourseWare: Massachusetts Institute of Technology), http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-231-dynamic-programming-and-stochastic-control-fall-2011 (Accessed). License: Creative Commons BY-NC-SA


For more information about using these materials and the Creative Commons license, see our Terms of Use.

