2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning

Hilton Hawaiian Village, Honolulu, Hawaii, USA, April 1-5, 2007

http://liu.ece.uic.edu/ADPRL07



Professor Frank L. Lewis, University of Texas at Arlington, will deliver the following keynote talk at IEEE ADPRL 2007.


Adaptive Dynamic Programming for Robust Optimal Control Using Nonlinear Network Learning Structures

Abstract: Adaptive control systems use on-line tuning methods to produce feedback controllers that stabilize a system without knowledge of its dynamics. However, severe assumptions on the system structure are traditionally needed, such as linearity in the parameters. It is by now known how to relax such assumptions by using neural networks as nonlinear approximators.

Most developments in intelligent control, including neural networks and fuzzy logic, that have rigorously verifiable performance have centered on using the approximation properties of these nonlinear network structures in feedback-linearization-type control system topologies, possibly extended using backstepping, singular perturbations, dynamic inversion, and related techniques.

However, naturally occurring and biological systems are optimal, for they have limited resources such as fuel, energy, or time. Likewise, many man-made systems, including electric power systems and aerospace systems, must be optimal due to cost and resource constraints.

Unfortunately, feedback linearization, backstepping, and standard adaptive control approaches do not provide optimal controllers.

On-line methods for dynamic programming are known in the computational intelligence community, in which neural networks are tuned on-line to solve for the optimal cost in Bellman's relation using numerically efficient techniques rooted in "approximate dynamic programming" or "neurodynamic programming". It is also known that if the neural network takes the control signal, as well as the state, as an input, then the system dynamics can in fact be unknown; only the performance measure need be known.

In this talk, a framework is laid for rigorous mathematical control systems design with performance guarantees using such nearly optimal on-line tuning methods. Both discrete-time and continuous-time systems are considered, and connections are drawn between the computational intelligence and feedback control theory approaches. Some effective recent design methods are given for solving Hamilton-Jacobi equations on-line using learning methods to obtain nearly optimal H2 and H-infinity robust controllers. The result is a class of adaptive controllers that converge to optimal feedback controllers without knowing the system dynamics.
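To make the idea concrete, here is a minimal sketch of one such action-dependent scheme: Q-function policy iteration for a discrete-time linear quadratic regulator. The quadratic Q-function takes both state and control as inputs, so the learner never reads the plant matrices; they are used only to simulate transitions. The specific plant, initial gain, noise level, and sample counts are illustrative assumptions, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant (unknown to the learner; used only to generate transitions)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)            # state cost weight (the known performance measure)
Rc = np.array([[1.0]])    # control cost weight

def cost(x, u):
    # One-step performance measure c(x, u) = x'Qx + u'Ru
    return x @ Qc @ x + u @ Rc @ u

def features(z):
    # Quadratic basis so that Q(x, u) = z' H z with z = [x; u] (6 symmetric terms)
    z1, z2, z3 = z
    return np.array([z1*z1, 2*z1*z2, 2*z1*z3, z2*z2, 2*z2*z3, z3*z3])

def unpack(w):
    # Rebuild the symmetric kernel H from the 6 learned weights
    return np.array([[w[0], w[1], w[2]],
                     [w[1], w[3], w[4]],
                     [w[2], w[4], w[5]]])

K = np.array([[-1.0, -2.0]])   # assumed initial stabilizing gain
for _ in range(10):            # policy iteration on the Q-function
    Phi, y = [], []
    for _ in range(200):
        x = rng.uniform(-1, 1, 2)
        u = K @ x + rng.normal(0.0, 0.5, 1)     # exploration noise
        xn = A @ x + B @ u                      # "measured" next state
        z, zn = np.concatenate([x, u]), np.concatenate([xn, K @ xn])
        # Bellman identity for the current policy: Q_K(x,u) = c(x,u) + Q_K(x', K x')
        Phi.append(features(z) - features(zn))
        y.append(cost(x, u))
    w, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = unpack(w)
    # Greedy improvement: u = argmin_u Q(x,u) = -H_uu^{-1} H_ux x
    K = -np.linalg.solve(H[2:, 2:], H[2:, :2])

print("learned gain:", K)
```

Because the Q-function has the control as an input, the policy-improvement step needs no model: minimizing over u uses only the learned kernel H, so the iteration converges to the optimal feedback gain while A and B remain unknown to the learner.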