Mathematics and Statistics > Encyclopedia of Optimization > Dynamic Programming: Continuous-time Optimal Control


# Dynamic Programming: Continuous-time Optimal Control

### Article Outline

- Keywords
- Problem Formulation
- Example
- Hamilton-Jacobi-Bellman Equation
- Pontryagin Minimum Principle
- See also
- References

Keywords: Dynamic programming; Continuous-time optimal control

Even though *dynamic programming* [1] was originally developed for systems with discrete types of decisions, it can be applied to continuous problems as well. This article discusses the application of dynamic programming to the solution of continuous-time optimal control problems.
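The idea can be illustrated numerically: discretize time, state, and control, then apply the backward Bellman recursion. The sketch below (my own illustrative example, not from the article) does this for the scalar system ż = u with running cost z² + u² and terminal cost z(T)²; for this problem the exact cost-to-go happens to be V(z, t) = z², which gives a check on the numerics.

```python
import numpy as np

# Illustrative sketch: continuous-time optimal control solved by dynamic
# programming after discretization. System: zdot = u; cost: integral of
# (z^2 + u^2) dt plus terminal cost z(T)^2. Grids and step sizes are
# arbitrary choices for the example.
dt = 0.01
T = 2.0
steps = int(T / dt)
z_grid = np.linspace(-2.0, 2.0, 201)   # discretized state space
u_grid = np.linspace(-3.0, 3.0, 121)   # discretized control set U

V = z_grid ** 2                        # terminal cost z(T)^2
for _ in range(steps):                 # march backward in time
    # next state for every (state, control) pair under Euler integration
    z_next = z_grid[:, None] + u_grid[None, :] * dt
    stage = (z_grid[:, None] ** 2 + u_grid[None, :] ** 2) * dt
    V_next = np.interp(z_next, z_grid, V)   # interpolate cost-to-go
    V = np.min(stage + V_next, axis=1)      # Bellman minimization over u

# Exact cost-to-go is V(z, t) = z^2, so the numeric value at z = 1
# should be close to 1 (up to discretization error).
v_at_1 = np.interp(1.0, z_grid, V)
```

The recursion is the discrete-time Bellman equation; as dt and the grid spacings shrink, the computed value function approaches the solution of the Hamilton-Jacobi-Bellman equation discussed below.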

## Problem Formulation

Consider the following continuous-time dynamical system:

$$\dot{z}(t) = f(z(t), u(t)), \qquad 0 \le t \le T, \tag{1}$$

where *z*(*t*) ∊ **R**^{n} is the state vector at time *t* with time derivative given by ż(*t*), *u*(*t*) ∊ *U* ⊂ **R**^{m} is the control vector at time *t*, *U* is the set of control constraints, and *T* is the terminal time. The function *f*(*z*(*t*), *u*(*t*)) is continuously differentiable with respect to *z* and continuous with respect to *u*. The