2.997 | Spring 2004 | Graduate

Decision Making in Large Scale Systems

Course Description

This course is an introduction to the theory and application of large-scale dynamic programming. Topics include Markov decision processes, dynamic programming algorithms, simulation-based algorithms, theory and algorithms for value function approximation, and policy search methods. The course examines games and applications in areas such as dynamic resource allocation, finance and queueing networks.
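To make the listed topics concrete, the following is a minimal sketch of value iteration, the basic dynamic programming algorithm for Markov decision processes covered in the course. The two-state, two-action MDP below is a made-up illustration, not an example from the course materials.

```python
import numpy as np

# A tiny illustrative MDP (not from the course):
# P[a, s, s'] = probability of moving from state s to s' under action a,
# R[s, a]     = expected one-step reward in state s under action a,
# gamma       = discount factor.
P = np.array([
    [[0.9, 0.1],    # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],    # action 1
     [0.0, 1.0]],
])
R = np.array([
    [1.0, 0.0],     # rewards in state 0 under actions 0, 1
    [0.0, 2.0],     # rewards in state 1 under actions 0, 1
])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-10):
    """Repeatedly apply the Bellman optimality operator until convergence."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and greedy policy
        V = V_new

V_star, policy = value_iteration(P, R, gamma)
print(V_star, policy)
```

Value iteration converges because the Bellman operator is a contraction in the discount factor; the "large-scale" part of the course concerns what to do when, unlike here, the state space is too big to store V exactly, motivating the value function approximation and simulation-based methods listed above.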
Learning Resource Types
Lecture Notes
Projects with Examples
Problem Sets
[Figure: diagram of the game Tetris.] Tetris game strategies can be analyzed using the value function approximation techniques described in this course; see lecture session 10. (Illustration courtesy of MIT OpenCourseWare.)