16.323 | Spring 2008 | Graduate

Principles of Optimal Control

Lecture Notes

Keywords

LQR = linear-quadratic regulator
LQG = linear-quadratic Gaussian
HJB = Hamilton-Jacobi-Bellman

Lec 1. Nonlinear optimization: unconstrained nonlinear optimization, line search methods

(PDF - 1.9 MB)
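
As a quick illustration of the line search methods in this lecture, a minimal sketch of gradient descent with a backtracking (Armijo) line search; the quadratic test function and the step parameters are assumptions chosen only for the example.

    # Gradient descent with a backtracking (Armijo) line search on an
    # ill-conditioned quadratic. All numerical values are illustrative assumptions.
    import numpy as np

    D = np.diag([1.0, 10.0])

    def f(x):
        return 0.5 * x @ D @ x

    def grad_f(x):
        return D @ x

    def backtracking_gradient_descent(x0, alpha0=1.0, rho=0.5, c=1e-4,
                                      tol=1e-8, max_iter=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            alpha = alpha0
            # shrink the step until the Armijo sufficient-decrease condition holds
            while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
                alpha *= rho
            x = x - alpha * g
        return x

    print(backtracking_gradient_descent([2.0, 1.0]))   # should approach [0, 0]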
Lec 2. Nonlinear optimization: constrained nonlinear optimization, Lagrange multipliers

Penalty/barrier functions are also often used, but will not be discussed here.

(PDF - 1.2 MB)
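
For reference, the first-order conditions behind the Lagrange-multiplier approach, with a small worked example; the specific cost and constraint are chosen only for illustration.

    \begin{align*}
      &\text{Stationarity: } \nabla f(x^*) + \lambda^* \nabla g(x^*) = 0, \qquad g(x^*) = 0. \\
      &\text{Example: } \min\; x_1^2 + x_2^2 \;\text{ s.t. }\; x_1 + x_2 = 1: \\
      &\quad L = x_1^2 + x_2^2 + \lambda\,(x_1 + x_2 - 1), \\
      &\quad \tfrac{\partial L}{\partial x_1} = 2x_1 + \lambda = 0, \quad
             \tfrac{\partial L}{\partial x_2} = 2x_2 + \lambda = 0, \quad
             x_1 + x_2 = 1 \\
      &\quad \Rightarrow\; x_1^* = x_2^* = \tfrac12, \quad \lambda^* = -1.
    \end{align*}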
Lec 3. Dynamic programming: principle of optimality, dynamic programming, discrete LQR

(PDF - 1.0 MB)
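
A minimal sketch of the finite-horizon discrete LQR recursion that dynamic programming produces; the system matrices and cost weights below are assumptions for illustration only.

    # Finite-horizon discrete LQR via the backward Riccati recursion.
    # Cost: sum_k (x_k' Q x_k + u_k' R u_k) + x_N' Qf x_N,  dynamics x_{k+1} = A x_k + B u_k.
    # All numerical values are illustrative assumptions.
    import numpy as np

    def discrete_lqr(A, B, Q, R, Qf, N):
        P = Qf
        gains = []
        for _ in range(N):                      # sweep backward in time
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return list(reversed(gains))            # gains[k] applies at time step k

    A = np.array([[1.0, 0.1], [0.0, 1.0]])      # double integrator, dt = 0.1
    B = np.array([[0.005], [0.1]])
    Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)

    K = discrete_lqr(A, B, Q, R, Qf, N=50)
    x = np.array([[1.0], [0.0]])
    for k in range(50):                         # closed-loop simulation, u_k = -K_k x_k
        x = A @ x - B @ (K[k] @ x)
    print(x.ravel())                            # state should be driven toward the origin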
Lec 4. HJB equation: dynamic programming in continuous time, HJB equation, continuous LQR

(PDF)
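
For reference, the HJB equation and the continuous-time LQR solution it yields, stated here for the standard infinite-horizon quadratic cost (the specific cost form is an assumption matching the usual setup).

    \begin{align*}
      &\text{HJB: } -\frac{\partial J^*}{\partial t}
        = \min_{u}\Big[\, g(x,u,t) + \frac{\partial J^*}{\partial x}\, f(x,u,t) \Big] \\
      &\text{For } \dot{x} = A x + B u,\quad
        J = \int_0^\infty \big(x^T Q x + u^T R u\big)\,dt: \\
      &\quad 0 = A^T P + P A - P B R^{-1} B^T P + Q, \qquad
        u^* = -R^{-1} B^T P\, x.
    \end{align*}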
Lec 5. Calculus of variations

Most books cover this material well, but Kirk (chapter 4) does a particularly nice job.

(PDF)
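
The central result of this material, the Euler-Lagrange equation, in its standard form:

    \begin{equation*}
      \frac{d}{dt}\left(\frac{\partial g}{\partial \dot{x}}\right)
      - \frac{\partial g}{\partial x} = 0
      \qquad \text{for extremals of } J = \int_{t_0}^{t_f} g(x,\dot{x},t)\,dt .
    \end{equation*}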
Lec 6. Calculus of variations applied to optimal control

(PDF)
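
The standard first-order necessary conditions from this material can be summarized with the Hamiltonian; stated here for the simplest case (fixed final time, unconstrained control), with the boundary and transversality conditions omitted.

    \begin{align*}
      &H(x,u,\lambda,t) = g(x,u,t) + \lambda^T f(x,u,t), \\
      &\dot{x} = \frac{\partial H}{\partial \lambda} = f(x,u,t), \qquad
       \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
       \frac{\partial H}{\partial u} = 0 .
    \end{align*}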
Lec 7. Numerical solution in MATLAB

(PDF)
Lec 8. Properties of optimal control solution

Bryson and Ho, section 3.5 and Kirk, section 4.4

(PDF)
Lec 9. Constrained optimal control

Bryson and Ho, section 3.x and Kirk, section 5.3

(PDF)
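
When the control is restricted to an admissible set U, the unconstrained stationarity condition is replaced by the minimum principle, stated here in its standard form:

    \begin{equation*}
      u^*(t) = \arg\min_{u \in U} H\big(x^*(t), u, \lambda^*(t), t\big),
      \qquad \text{i.e.} \quad
      H(x^*, u^*, \lambda^*, t) \le H(x^*, u, \lambda^*, t) \;\; \forall\, u \in U .
    \end{equation*}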
Lec 10. Singular arcs

Bryson, chapter 8 and Kirk, section 5.6

(PDF)
Lec 11. Estimators/Observers

Bryson, chapter 12 and Gelb, Optimal Estimation

(PDF)
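
A minimal sketch of the discrete-time Kalman filter, the standard estimator for this setting; the model matrices and noise covariances below are assumptions for illustration.

    # Discrete-time Kalman filter: predict with the model, correct with the measurement.
    # x_{k+1} = A x_k + w_k (cov Qw),  y_k = C x_k + v_k (cov Rv).
    # All numerical values are illustrative assumptions.
    import numpy as np

    def kalman_step(xhat, P, y, A, C, Qw, Rv):
        # predict
        xhat = A @ xhat
        P = A @ P @ A.T + Qw
        # update
        S = C @ P @ C.T + Rv                    # innovation covariance
        L = P @ C.T @ np.linalg.inv(S)          # Kalman gain
        xhat = xhat + L @ (y - C @ xhat)
        P = (np.eye(len(xhat)) - L @ C) @ P
        return xhat, P

    rng = np.random.default_rng(0)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    C = np.array([[1.0, 0.0]])
    Qw = 1e-4 * np.eye(2); Rv = np.array([[0.01]])

    x = np.array([1.0, 0.0]); xhat = np.zeros(2); P = np.eye(2)
    for _ in range(100):
        x = A @ x + rng.multivariate_normal(np.zeros(2), Qw)
        y = C @ x + rng.multivariate_normal(np.zeros(1), Rv)
        xhat, P = kalman_step(xhat, P, y, A, C, Qw, Rv)
    print(x, xhat)                              # estimate should track the true state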
Lec 12. Stochastic optimal control

Kwakernaak and Sivan, chapters 3.6, 5; Bryson, chapter 14; and Stengel, chapter 5

(PDF)
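
One standard way to summarize the LQG result in this material (the separation, or certainty-equivalence, principle): the stochastic optimal controller is the LQR state-feedback gain applied to the Kalman-filter state estimate, stated here for the standard infinite-horizon setup.

    \begin{equation*}
      u(t) = -K\,\hat{x}(t), \qquad
      K = R^{-1} B^T P \;\;\text{(LQR gain)}, \qquad
      \hat{x}(t) \;\text{from the Kalman filter.}
    \end{equation*}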
Lec 13. LQG robustness

Stengel, chapter 6

Question: how well do the large gain and phase margins discussed for LQR (6-29) map over to LQG?

(PDF)
Lec 14. 16.31 Feedback Control Systems: multiple-input multiple-output (MIMO) systems, singular value decomposition

(PDF)
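
A small sketch of how the singular value decomposition measures MIMO gain; the example matrix G is an arbitrary assumption.

    # Singular values of a MIMO gain matrix: the largest/smallest singular values bound
    # the gain ||G u|| / ||u|| over all input directions u. G is an arbitrary example.
    import numpy as np

    G = np.array([[10.0, 0.5],
                  [0.2,  0.1]])
    U, s, Vh = np.linalg.svd(G)
    print("sigma_max =", s[0], " sigma_min =", s[-1])
    print("highest-gain input direction:", Vh[0])   # first right singular vector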
Lec 15. Signals and system norms: H∞ synthesis, a different type of optimal controller

(PDF)
Lec 16. Model predictive control

(PDF)
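
A minimal sketch of the receding-horizon idea behind MPC, here for the unconstrained linear-quadratic case so the inner finite-horizon problem can be solved with the discrete LQR recursion from Lec 3; all matrices are illustrative assumptions, and a practical MPC would add state and input constraints and a numerical optimizer.

    # Receding-horizon (MPC) loop for an unconstrained linear-quadratic problem:
    # at each step, solve a finite-horizon problem from the current state, apply only
    # the first control, then re-solve at the next step. Matrices are illustrative assumptions.
    import numpy as np

    def finite_horizon_lqr_first_gain(A, B, Q, R, Qf, N):
        # Backward Riccati recursion; return the gain for the first step of the horizon.
        P = Qf
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K                                # after the full sweep, K is the k = 0 gain

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)

    x = np.array([[1.0], [0.0]])
    for _ in range(50):
        K0 = finite_horizon_lqr_first_gain(A, B, Q, R, Qf, N=20)
        u = -K0 @ x                             # apply only the first move of the plan
        x = A @ x + B @ u
    print(x.ravel())                            # state regulated toward the origin

For this unconstrained, time-invariant example the first-step gain is identical at every iteration, so the loop reduces to a fixed LQR feedback law; the receding-horizon structure pays off once state and input constraints are added, which is the setting MPC is designed for.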

Course Info

As Taught In: Spring 2008
Level: Graduate
Learning Resource Types: Problem Sets, Exams, Lecture Notes, Programming Assignments