
Dynamic Programming: Continuous-time Optimal Control

Article Outline

Keywords

Problem Formulation
Example
Hamilton-Jacobi-Bellman Equation
Pontryagin Minimum Principle
See also
References

Keywords: Dynamic programming; Continuous-time optimal control

Even though dynamic programming [1] was originally developed for systems with discrete types of decisions, it can be applied to continuous problems as well. This article discusses the application of dynamic programming to the solution of continuous-time optimal control problems.

Problem Formulation

Consider the following continuous-time dynamical system:

ż(t) = f(z(t), u(t)),  0 ≤ t ≤ T,    (1)

where z(t) ∈ Rⁿ is the state vector at time t with time derivative given by ż(t), u(t) ∈ U ⊂ Rᵐ is the control vector at time t, U is the set of control constraints, and T is the terminal time. The function f(z(t), u(t)) is continuously differentiable with respect to z and continuous with respect to u.
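To make the connection to dynamic programming concrete, the following is a minimal sketch of one standard way to apply it to a system of the form (1): discretize time with a forward Euler step, discretize the state space and the control set U with grids, and minimize a stage cost backward in time via the Bellman recursion. The scalar dynamics f(z, u) = −z + u, the quadratic stage cost, the grids, the zero terminal cost, and the horizon T = 1 are all illustrative assumptions and are not taken from this article; the article's cost functional is not part of this excerpt.

```python
import numpy as np

# Illustrative, assumed problem data (not from the article):
T, N = 1.0, 50                        # horizon and number of Euler steps
dt = T / N
z_grid = np.linspace(-2.0, 2.0, 81)   # discretized state space (scalar z here)
u_grid = np.linspace(-1.0, 1.0, 21)   # discretized control constraint set U

def f(z, u):
    """Assumed right-hand side of the dynamics ż(t) = f(z(t), u(t))."""
    return -z + u

def stage_cost(z, u):
    """Assumed quadratic running cost, integrated over one step of length dt."""
    return (z**2 + u**2) * dt

# Backward Bellman recursion on the time-discretized system:
#   J_k(z) = min_{u in U} [ stage_cost(z, u) + J_{k+1}(z + dt * f(z, u)) ]
J = np.zeros_like(z_grid)             # assumed zero terminal cost J_N = 0
for k in range(N - 1, -1, -1):
    Q = np.empty((z_grid.size, u_grid.size))
    for j, u in enumerate(u_grid):
        z_next = z_grid + dt * f(z_grid, u)   # forward Euler successor states
        # Evaluate the cost-to-go at z_next by linear interpolation on the grid
        Q[:, j] = stage_cost(z_grid, u) + np.interp(z_next, z_grid, J)
    J = Q.min(axis=1)                         # minimize over the control grid

print("approximate optimal cost-to-go at z(0) = 1.0:",
      np.interp(1.0, z_grid, J))
```

Finer grids and smaller time steps improve the approximation, at the usual cost of dynamic programming: the computation grows exponentially with the state dimension n.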