Abstracts.
Irwin W. Sandberg, University of Texas at Austin
Radial basis functions are of interest in connection with a
variety of approximation problems in the neural networks area, and in other areas
as well. Here we show that the members of some interesting families of
shift-varying input-output maps that take a function space into a function space can be
uniformly approximated, over an infinite time or space domain, in a certain special way using
Gaussian radial basis functions.
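To illustrate the basic ingredient (a toy sketch only: finite-interval least-squares fitting with a fixed grid of Gaussian radial basis functions, which is not the special approximation scheme of the talk; the target map, centers, and widths below are our assumptions):

    # Toy sketch: fit a scalar map with Gaussian radial basis functions by
    # linear least squares. Target, centers, and widths are assumptions.
    import numpy as np

    def gaussian_design(x, centers, width):
        # One Gaussian bump per center, evaluated at each sample point.
        return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

    x = np.linspace(-3.0, 3.0, 200)
    y = np.tanh(x)                               # hypothetical target map
    centers = np.linspace(-3.0, 3.0, 15)
    Phi = gaussian_design(x, centers, width=0.5)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares weights
    print("max approximation error:", np.max(np.abs(Phi @ w - y)))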
Khurram Waheed, Michigan State University
Fathi M. Salem, Michigan State University
This talk presents a
comprehensive exposition of our work on modeling of neural
networks and their hardware implementations. In this effort
we discuss several architectures and their hardware implementations
leading to the latest layered system architecture, named micro-learner,
which integrates digital/analog neuro-processing, memory, and A/D and D/A
conversion. The chip requires only an external counter to proceed through
its three phases of learning, (weight) storing, and processing, or (weight)
reading/writing to an external unit. The chip is envisioned to be mounted on a
probe or probe array for on-line processing or control without
interfacing with external platforms.
Finally, the contributions of Tony Michel to the
foundational and analytical basis of neural networks and large-scale
systems, and their impact on the modeling of neural
networks, will be discussed.
Lei Yang, Arizona State University
Russell Enns, The Boeing Company
Yu-Tsung Wang, Scientific Monitoring, Inc.
Jennie Si, Arizona State University
This talk is about approximate dynamic programming (ADP), which has gone by many
different names, such as "reinforcement learning", "adaptive critics",
"neuro-dynamic programming", and "adaptive dynamic
programming". Years of research have shown that these
apparently diverse areas are actually addressing much the same issue:
optimization over time by using learning and approximation to handle
problems that severely challenge conventional methods due to their very
large scale and/or lack of sufficient prior knowledge. In the present paper,
we first address, in a unified fashion, the results and challenges of some of
the most important work under the scope of approximate dynamic
programming, providing an introduction to results obtained in
areas such as neural networks, adaptive/optimal/robust control, computer
science/machine learning, decision theory (especially the study of Markov
decision processes), engineering, and operations research. Then we present
results developed and tested by the authors. We introduce the fundamentals
and the basic framework of our direct neuro-dynamic programming (direct NDP). We address the generalization
issue by demonstrating a continuous state complex control problem.
Specifically we will provide details of how to use direct NDP to perform
stabilization, command tracking, and re-configuration for an Apache
helicopter. This is probably one of the first studies in which an ADP-type
algorithm has been applied to a complex, realistic, continuous-state
problem. Until now, reinforcement learning has been mostly successful in
discrete-state-space problems. On the other hand, prior ADP-based approaches
to controlling continuous-state-space systems have all been limited to
smaller, linearized, or decoupled problems. Therefore the work presented
here complements and advances the existing literature in the general area of
learning approaches in approximate dynamic programming.
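To convey the flavor of such schemes, here is a minimal actor-critic caricature (a hedged sketch only, not the authors' direct NDP; the scalar plant, features, and learning rates are our assumptions):

    # Minimal actor-critic caricature of ADP on a scalar linear plant.
    # Plant, stage cost, features, and learning rates are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    wc = np.zeros(2)      # critic weights: J(x) ~ wc[0]*x**2 + wc[1]
    wa = 0.0              # actor gain:    u = -wa*x
    gamma, lr_c, lr_a = 0.95, 0.01, 0.001
    x = 1.0
    for step in range(5000):
        u = -wa * x
        cost = x**2 + 0.1 * u**2                         # stage cost
        x_next = 0.9 * x + 0.5 * u + 0.01 * rng.standard_normal()
        feats = np.array([x**2, 1.0])
        feats_n = np.array([x_next**2, 1.0])
        td = cost + gamma * feats_n @ wc - feats @ wc    # TD(0) error
        wc += lr_c * td * feats                          # critic update
        # Actor: nudge the gain to lower the critic's next-step value;
        # d(x_next)/d(wa) = -0.5*x for this plant.
        wa -= lr_a * 2.0 * wc[0] * x_next * (-0.5 * x)
        x = x_next
    print("learned feedback gain:", wa)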
Jay Farrell, University of California, Riverside
Olfactory-based mechanisms have been hypothesized for a variety of
biological behaviors: homing by Pacific salmon, homing by green
sea turtles, foraging by Antarctic procellariiform seabirds,
foraging by lobsters, foraging by blue crabs, and mate-seeking and
foraging by insects. Typically, olfactory-based mechanisms
proposed for biological entities combine a large-scale orientation
behavior based in part on olfaction with a multisensor local
search in the vicinity of the source. Long-range olfactory-based
search is documented in moths at ranges of 100-1000 m and in
Antarctic procellariiform seabirds over thousands of kilometers.
This presentation considers the development of algorithms to
replicate these feats in autonomous vehicles. The goal of the
autonomous vehicle will be to locate the source of a chemical that
is transported in a turbulent fluid flow. Autonomous vehicles with
such capabilities have applications in searching for
environmentally interesting phenomena, unexploded ordnance,
undersea wreckage, and sources of hazardous chemicals or
pollutants. This talk will discuss the chemical plume
tracing problem, discuss aspects of the problem that make it
challenging, present a solution to the problem, and present
results from in-water experiments.
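As one concrete point of reference, a common baseline in the plume-tracing literature is the moth-inspired "surge/cast" behavior, caricatured below (this is not the solution presented in the talk; the wind direction, step size, and intermittent-detection model are our assumptions):

    # Caricature of surge (upwind on detection) and cast (crosswind search).
    import numpy as np

    def step(pos, detected, upwind, cast_sign):
        heading = upwind if detected else upwind + cast_sign * np.pi / 2
        return pos + 0.5 * np.array([np.cos(heading), np.sin(heading)])

    rng = np.random.default_rng(1)
    pos, upwind, cast_sign = np.array([50.0, 5.0]), np.pi, 1.0
    for t in range(300):
        # Crude intermittency: detection is likelier near the plume axis y=0.
        detected = rng.random() < np.exp(-abs(pos[1]) / 5.0)
        if not detected:
            cast_sign = -cast_sign       # alternate casting direction
        pos = step(pos, detected, upwind, cast_sign)
    print("final position (source near origin):", pos)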
Gary G. Yen, Oklahoma State University
In this talk, we propose a new evolutionary
approach to multiobjective optimization problems--the Dynamic
MultiObjective Evolutionary Algorithm (DMOEA). In our DMOEA, a novel cell-based
rank and density estimation strategy is proposed to efficiently compute
dominance and diversity information when the population size dynamically
increases or decreases. In addition, a population growing strategy and a
population declining strategy are designed to determine whether an individual
will survive or be eliminated, based on some qualitative indicators.
Meanwhile, an objective space compression strategy is devised to
continuously refine the quality of the resulting Pareto front. By examining
the selected performance metrics on three recently designed benchmark
functions, DMOEA is found to be competitive with, or even superior to, five
state-of-the-art MOEAs in terms of maintaining the diversity of the individuals
along the trade-off surface, tending to extend the Pareto front to new areas
and finding a well-approximated Pareto optimal front. Moreover, DMOEA is
evaluated by using different parameter settings on the chosen test functions
to verify the robustness of its convergence to an optimal population size, if one
exists. From the simulation results, DMOEA has shown the potential of
autonomously determining the optimal population size, which is found to be
insensitive to the initial population size chosen.
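A sketch of the cell-based density idea in objective space (our toy version, not the DMOEA implementation; the grid resolution and random population are assumptions):

    # Cell-based density: individuals sharing a grid cell in objective
    # space count toward each other's density. Resolution is an assumption.
    import numpy as np

    def cell_density(objectives, n_cells=10):
        lo, hi = objectives.min(axis=0), objectives.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)
        cells = np.floor((objectives - lo) / span * (n_cells - 1e-9)).astype(int)
        counts = {}
        for key in map(tuple, cells):
            counts[key] = counts.get(key, 0) + 1
        return np.array([counts[tuple(c)] for c in cells])

    pop = np.random.default_rng(2).random((50, 2))  # 50 individuals, 2 objectives
    density = cell_density(pop)
    print("individuals in crowded cells:", int(np.sum(density > 2)))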
M. A. Pai, Univ. of Illinois at Urbana-Champaign
Trong B. Nguyen, Pacific Northwest National Lab
Trajectory sensitivity analysis (TSA) has been applied in control system
problems for a long time in areas such as optimization and adaptive
control. Applications in power systems in conjunction with Lyapunov/transient
energy functions first appeared in the 1980s. More recently, it has found
applications on its own by defining a suitable metric on the trajectory
sensitivities with respect to the parameters of interest. In this talk we
present the theoretical as well as practical applications of TSA for dynamic
security applications in power systems. We also discuss a technique, based on
trajectory sensitivities, for computing the critical value of any parameter
that induces stability in the system.
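The underlying computation can be sketched as follows: for dynamics dx/dt = f(x, p), the trajectory sensitivity S = dx/dp obeys the variational equation dS/dt = (df/dx) S + df/dp, integrated along the nominal trajectory. A toy single-machine example (the swing-equation parameters are illustrative assumptions):

    # Trajectory sensitivity of a toy swing equation w.r.t. Pmax via the
    # variational equations. All parameter values are assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    M, D, Pm = 5.0, 1.0, 0.8      # inertia, damping, mechanical power

    def augmented(t, z, Pmax):
        delta, omega, s1, s2 = z  # state plus sensitivity w.r.t. Pmax
        f0, f1 = omega, (Pm - Pmax * np.sin(delta) - D * omega) / M
        dfdx = np.array([[0.0, 1.0],
                         [-Pmax * np.cos(delta) / M, -D / M]])
        dfdp = np.array([0.0, -np.sin(delta) / M])
        ds = dfdx @ np.array([s1, s2]) + dfdp
        return [f0, f1, ds[0], ds[1]]

    sol = solve_ivp(augmented, (0.0, 10.0), [0.5, 0.0, 0.0, 0.0],
                    args=(1.5,), max_step=0.01)
    print("d(delta)/d(Pmax) at t = 10 s:", sol.y[2, -1])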
Vijay Vittal, Iowa State University
Power systems are nowadays operated closer to their stability limits, as deregulation introduces many more economic objectives for operation. As open access transactions increase, weak connections, unexpected events, hidden failures in protection systems, human errors, and other causes may make the system lose balance and even lead to catastrophic failures. As a result, several innovative emergency control procedures and special protection systems are being introduced to maintain the reliability of the system. These control approaches fall under the category of corrective control and are initiated after the occurrence of the disturbance.
This talk addresses the topic of designing a corrective control strategy after large disturbances. When a power system is subjected to large disturbances, such as the simultaneous loss of several generating units or major transmission lines, and the vulnerability analysis indicates that the system is approaching a catastrophic failure, control actions need to be taken to limit the extent of the disturbance. In our approach, the system is separated into smaller islands at a slightly reduced capacity. The basis for forming the islands is to minimize the generation-load imbalance in each island, thereby facilitating the restoration process. An analytical approach to forming the islands using a two-time-scale procedure is developed. Then, by employing a carefully designed load shedding scheme based on the rate of frequency decline, we limit the extent of the disruption, and
we are able to restore the system rapidly. We refer to this corrective control scheme as controlled islanding followed by load shedding based on the rate of frequency decline.
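To make the last ingredient concrete, a toy version of shedding logic keyed to the rate of frequency decline is sketched below (the thresholds and shed fractions are our assumptions, not the scheme designed in this work):

    # Toy mapping from measured df/dt in an island to a load-shed fraction.
    # Thresholds and fractions are illustrative assumptions.
    def shed_fraction(df_dt_hz_per_s):
        if df_dt_hz_per_s > -0.1:
            return 0.0    # mild decline: ride through
        if df_dt_hz_per_s > -0.5:
            return 0.05   # moderate decline: shed 5% of island load
        return 0.15       # steep decline: shed 15% immediately

    for df_dt in (-0.05, -0.3, -0.8):
        print(f"df/dt = {df_dt:+.2f} Hz/s -> shed {shed_fraction(df_dt):.0%}")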
Traditionally power system control design has largely been based on a model based approach. Currently several innovations in the area of synchronized phasor measurements have significantly improved the availability of real time signals over a wide portion of the network. As a result several new control approaches based on measurements are feasible. These approaches will be examined.
A new class of controls called special protection systems (SPS) is now finding greater acceptance in power systems with the enhanced need to provide reliability in a competitive restructured utility environment. These systems are triggered only when large disturbances occur. They provide a degree of safety in preventing cascading outages. The SPS being explored will be presented and their potential for greater use will be examined.
Anjan Bose, Washington State University
The power system networks in North America and Europe are the largest man-made
interconnected systems in the world. The Eastern Interconnection in North America
that stretches from the East Coast to
the Rocky
Mountains is the largest in terms of geographic area covered, total installed
generation capacity and total
length of transmission lines. Moreover, all the rotating generators in one network rotate synchronously, producing alternating current at the same frequency; that is, all the generators operate together in dynamic equilibrium. Any imbalance in the energy distribution of the system caused by disturbances tends to perturb the system. Large disturbances, usually caused by short circuits of high-voltage equipment, can make the power system become unstable.
Large power systems exhibit a large range of dynamical characteristics, very slow to very
fast, and various controllers have been developed over time to control various phenomena.
Many of the controls are on-off switches (circuit breakers) that can isolate
short-circuited or malfunctioning equipment, or shed load or generation.
Other controls are discrete,
like tap-changers in transformers or the switching of capacitor/reactor
banks. Still others are continuous controls, like voltage controllers and power system
stabilizers in rotating generators or the newer power electronic controls in FACTS
devices (Flexible AC Transmission Systems refers to modern electronic devices such as High
Voltage DC Transmission or Static VAR Controllers that can control power flows or voltage).
However, all the controls (especially the fast ones) are local controls, that is, the
input and the control variables are in the same locale (substation). Most dynamic
phenomena in the power system, on the other hand, are regional or sometimes system-wide.
Thus designers of power system control have been constrained to handle system-wide
stability problems with local controllers. The only system-wide control in the power
system is the balancing of the slowly changing system electrical load by adjusting
generation levels; this slow dynamical phenomenon allows a slow communication system to reach all the generators in the system in time for the adjustments to be effective. The only other way to implement non-local control has been to dedicate a communication channel between the input variable in one substation and the control variable in another, an expensive proposition that has limited its use.
The tremendous breakthroughs in computer communications of the last decade, both in
cost and bandwidth, have opened opportunities that are yet to be fully utilized in the
control of power systems. The availability of many new control devices, e.g., FACTS
devices, and of accurate time synchronizing signals through the GPS are also factors
in this new equation. It is certainly possible now to design fast system-wide controls.
However, much research and development is needed to bring such designs to fruition.
In this talk, we first survey the state of the art in stability control of
power systems. Then we outline the new technologies that can be brought to bear
on this problem. Finally, we lay out a possible development path, from system-wide
controls in which
simple extensions of existing controls can start helping power system operations
right away, to concepts that will require significant time and effort to control more
complex phenomena. The goal, as always, is to provide more efficient operation, that
is, be able to transmit more power over existing transmission lines with more
flexibility.
David W. Porter, Johns Hopkins University Applied Physics Lab
Engineering projects involving hydrogeology are faced with uncertainties
because the earth is
heterogeneous, and typical data sets are fragmented and disparate. In
theory, predictions provided
by computer simulations using calibrated models constrained by geological
boundaries provide
answers to support management decisions, and geostatistical methods quantify
safety margins. In
practice, currently
existing methods are limited by the data types and models that can
be included,
computational demands, or simplifying assumptions. Data fusion modeling
(DFM) removes many
of the limitations and is capable of providing data integration and model
calibration with
quantified uncertainty for a variety of hydrological, geological, and
geophysical data types and
models. The benefits of DFM for waste management, water supply, and
geotechnical applications
are savings in time and cost through the ability to produce visual models
that fill in missing data
and predictive numerical models to aid management optimization. DFM has the
ability to update
field-scale models in real time using PC or workstation systems and is
ideally suited for parallel
processing implementation. DFM is a spatial state estimation and system
identification methodology
that uses three sources of information: measured data, physical laws, and
statistical models for
uncertainty in spatial heterogeneities. What is new in
the present DFM is the solution
of the causality problem
in data-assimilation Kalman filter methods to achieve computational
practicality. The Kalman
filter is generalized by introducing information filter methods due to
Bierman coupled with a
Markov random field representation for spatial variation. A Bayesian penalty
function is implemented
with Gauss-Newton methods. This leads to a computational problem similar to
numerical simulation
of the partial differential equations (PDEs) of groundwater.
As a matter of fact,
extensions of PDE
solver ideas to break down computations over space form the computational
heart of DFM. State
estimates and uncertainties can be computed for heterogeneous hydraulic
conductivity fields in
multiple geological layers from the usually sparse hydraulic conductivity
data and the often more
plentiful head data. Further, a system identification theory is
derived based on statistical
likelihood principles. A maximum likelihood theory is provided to estimate
statistical parameters
such as Markov model parameters that determine the geostatistical variogram.
Field-scale application
of DFM at the DOE Savannah River Site is presented and compared with manual
calibration.
DFM calibration runs converge in less than 1 hour on a Pentium
PC for a 3D
model with more
than 15,000 nodes. Run time is approximately linear with the number of
nodes. Furthermore,
conditional simulation is used to quantify the statistical variability in
model predictions such as
contaminant breakthrough curves.
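For orientation, the information-filter measurement update mentioned above takes the following form in its simplest dense version (a sketch only; the DFM implementation exploits the Markov random field structure for computational practicality, and the matrices below are toy assumptions):

    # Dense information-filter measurement update: fuse z = H x + v,
    # v ~ N(0, R), into Y = P^-1 and y = P^-1 x_hat. Toy numbers only.
    import numpy as np

    def information_update(Y, y, H, R, z):
        Rinv = np.linalg.inv(R)
        return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

    Y = 0.1 * np.eye(2)                    # weak prior on 2 unknowns
    yv = np.zeros(2)
    H = np.array([[1.0, 1.0]])             # one head-like observation
    Y, yv = information_update(Y, yv, H, np.array([[0.5]]), np.array([2.0]))
    print("posterior estimate:", np.linalg.solve(Y, yv))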
Alfred Fettweis, Ruhr-Universitaet Bochum
The wave-digital approach to numerical integration owes its advantageous
behavior partly to the use of wave concepts, but in particular to the
use of passivity and losslessness properties that occur naturally in
physical systems. For handling such systems when they are nonlinear, one is naturally led
to certain formulations that turn out to be of fundamental physical
significance, yet are violated by some basic relations in special relativity
theory. By starting from the classical relativistic kinematics and making some
assumptions that are at least not a priori physically unreasonable, however,
one is led to a modified version of relativistic dynamics that is in
complete accord with the formulations just mentioned, yields expressions of
appealing elegance (including a four-vector, thus a Lorentz-invariant
quadruplet, that is of immediate physical significance and coincides with a
four-vector already considered by Minkowski), and is, at least at first
sight, in good agreement with some reasonable analytic expectations. In this
alternative approach, Newton's second law is altered in a slightly different
way than in classical relativity, and, as a consequence, Newton's third law,
which is taken over untouched in classical theory, has also to be subjected
to some modification. For problems concerning collisions of particles or
action of fields (electromagnetic, gravitational) upon particles the
alternative approach yields exactly the same dynamic behavior as the
classical theory. Corresponding experiments are thus unable to
differentiate, and the same holds for some other available experimental
results.
The present talk builds on the same basic concepts as those that have
previously been published and, in some respect, expands them. On the
other hand, an earlier additional requirement, which had led to an
unavoidable factor of 1/2 in the expression for the equivalence between mass
and energy, is abandoned as unnecessary. This way, e.g., a remarkable agreement with certain
results in electromagnetics is obtained. For testing the validity further,
the crucial issue to be considered now appears to be the kinetic energy of
fast particles. A classical experiment by Bertozzi addresses this issue, but
it is not yet sufficiently clear how the results obtained there should
properly be interpreted in the present context.
It is hoped that the present talk can contribute to clarifying some of the
issues involved, even if the conventional theory should in the end be
confirmed, by accurate and unequivocal measurements, to be the one agreeing
best with reality.
Lyubomir T. Gruyitch, University of Technology Belfort
The goal is to present some original recent results that are not widely
known, together with new ones, in the areas in which Professor Anthony N. Michel
has been a world-leading scientist.
Properties of time are summarized. They are linked with the
features of physical variables expressed by the Physical Continuity
and Uniqueness Principle, which implies the Time
Continuity and
Uniqueness Principle. They appear important for modeling of physical
systems and for studies of their qualitative dynamical properties.
The complete transfer function matrix is defined for MIMO
time-invariant linear continuous-time and discrete-time systems. It is
crucial for zero-pole cancellation, system minimal realization, synthesis of
stabilizing, tracking
and/or
optimal control for the systems.
A new Lyapunov methodology for nonlinear systems, called
the consistent Lyapunov methodology, enables us to establish the
necessary and sufficient conditions for: i) asymptotic stability,
ii) a direct construction of a Lyapunov function for a given
nonlinear dynamical system, and iii) a set to be the exact domain
of asymptotic stability. The conditions are not expressed in terms of the existence
of a Lyapunov function.
The extended concepts of definite vector functions and of vector
Lyapunov functions open new directions for studies of complex nonlinear
dynamical systems and for their control synthesis. This is shown by
synthesizing stabilizing (output) tracking control for time-invariant
nonlinear 2D systems.
Jinglai Shen, University of Michigan
Amit K. Sanyal, University of Michigan
N. Harris McClamroch, University of Michigan
A rigid base body, supported by a fixed pivot point, is free to rotate in three
dimensions. Multiple elastic subsystems are rigidly mounted on the rigid body;
the elastic degrees of freedom are constrained relative to the rigid base body.
A mathematical model is developed for this multibody attitude system that
exposes the dynamic coupling between the rotational degrees of freedom of the
base body and the deformation or shape degrees of freedom of the elastic
subsystems. The model is used to assess passive dissipation assumptions that
guarantee asymptotic stability of an equilibrium solution. These results are
motivated and inspired by a 1980 publication of R. K. Miller and A. N. Michel.
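To indicate the flavor of such dissipation-based results (a sketch in our own notation, not the model of the talk): taking the total energy $V = \frac{1}{2}\omega^{\top}J(q)\,\omega + \frac{1}{2}\dot{q}^{\top}M\dot{q} + U(q)$ as a Lyapunov candidate, where $\omega$ is the base-body angular velocity and $q$ collects the elastic shape coordinates, passive damping in the elastic coordinates gives $\dot{V} = -\dot{q}^{\top}C\dot{q} \le 0$ with $C > 0$, and an invariance argument of LaSalle type is then needed to pass from $\dot{V} \le 0$ to asymptotic stability of the equilibrium.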
Hai Lin, University of Notre Dame
Panos J. Antsaklis, University of Notre Dame
In this talk, a class of discrete time uncertain linear hybrid
systems, affected by both parameter variations and exterior
disturbances, is considered. The main question is whether there
exists a controller such that the closed loop system exhibits
desired behavior under dynamic uncertainty and exterior
disturbances. The notion of {\em attainability} is introduced to
refer to the specified behavior that can be forced upon the plant by
a control mechanism. We give a method for attainability checking
that employs the predecessor operator and backward reachability
analysis, and a procedure for controller design that uses finite
automata and linear programming techniques. Finally, Networked
Control Systems (NCS) are proposed as a promising application area
of the results and tools developed here, and the ultimate
boundedness control problem for the NCS with uncertain delay,
packet dropout and quantization effects is formulated as a
regulation problem for an uncertain hybrid system.
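A toy version of the predecessor computation is sketched below (one step, box-valued sets, and an LP feasibility test per sampled state; the system matrices and sets are our assumptions, and this is not the full procedure of the talk):

    # One-step predecessor check for x+ = A x + B u + w with box target,
    # input, and disturbance sets. All matrices and sets are assumptions.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    t_lo, t_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # target box
    w_lo, w_hi = np.array([-0.05, -0.05]), np.array([0.05, 0.05])

    def in_pre(x, u_max=1.0):
        # Worst-case disturbance erodes the target; then ask whether some
        # admissible u keeps A x + B u inside the eroded box (an LP).
        hi = t_hi - w_hi - A @ x     # need B u <= hi componentwise
        lo = t_lo - w_lo - A @ x     # need B u >= lo componentwise
        res = linprog(c=[0.0], A_ub=np.vstack([B, -B]),
                      b_ub=np.concatenate([hi, -lo]),
                      bounds=[(-u_max, u_max)])
        return res.status == 0       # feasible LP -> x is in Pre(target)

    print(in_pre(np.array([0.5, 0.2])), in_pre(np.array([2.0, 0.0])))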
Kevin Passino, The Ohio State University
Resource allocation involves partitioning of resources
(e.g.,
processor
time or machine processing capacity) and dedication of these to tasks
(jobs in a computer system or parts in a manufacturing system) in order
to optimize some performance objective (e.g., maximize task completion
throughput rate). Such resource allocation functionalities are
commonly found in parallel and distributed computing systems and
flexible manufacturing systems, but can also represent biological
attentional processes in humans. Distributed multi-processor resource
allocation demands that multiple processors each decide how to allocate
their resources (e.g., computing power) to multiple task types.
Network-based cooperative resource allocation involves having multiple
processors work together over an imperfect communication network to
share the processing load in order to optimize throughput. Here, we will
show that one class of network-based cooperative schedulers exhibits
stable behavior (i.e., results in bounded buffer levels). Simulation
results will be presented to provide insight into scheduler performance
and resource allocation dynamics.
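A toy simulation of the setting (not the schedulers analyzed in the talk; arrival rates, service rates, and the sharing rule are our assumptions) suggests how cooperation keeps buffers bounded:

    # Four processors share load over a capacity-limited link each step.
    # Rates and the sharing rule are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    buffers = np.zeros(4)
    peak = 0.0
    for t in range(10000):
        buffers += rng.poisson(0.8, size=4)         # task arrivals
        buffers = np.maximum(buffers - 1.0, 0.0)    # each serves 1 task/step
        i, j = np.argmax(buffers), np.argmin(buffers)
        transfer = min(2.0, (buffers[i] - buffers[j]) / 2)  # limited link
        buffers[i] -= transfer
        buffers[j] += transfer
        peak = max(peak, buffers.max())
    print("peak buffer level over the run:", peak)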
Michael K. Sain, University of Notre Dame
For a transfer function which is the
ratio, say, of two polynomials with
real coefficients, it is clear that
the number of zeros in the extended
complex plane is equal to the number
of poles in the extended complex plane.
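For instance (our illustration, counting multiplicities in the extended
plane): $g(s) = \frac{s+2}{(s+1)(s+3)}$ has poles at $s=-1$ and $s=-3$, a
finite zero at $s=-2$, and, since the denominator degree exceeds the
numerator degree by one, a zero at $s=\infty$; hence two poles and two zeros.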
A matrix of such transfer functions does not,
in general, enjoy the same property. In
particular, when the matrix is deficient
in either row or column rank, such an
accounting does not give equality. To
recapture the essence of the above classical
equality, one may take into account the
kernel and the cokernel of the matrix of
transfer functions, and recast the ideas
of poles and zeros in terms of spaces.
In this talk we summarize these notions
and show how they can apply to the well known
systems theory problem of exact model
matching, where they provide surprising
insights about pole and zero "cancellation."
The exposition takes place in the
"cosmo-logical space" of vectors,
modules, and mappings.
John J. Murray, State University of New York at Stony Brook
Chadwick J. Cox, Accurate Automation Corp.
Richard E. Saeks, Accurate Automation Corp.
The centerpiece of Dynamic Programming is the Hamilton-Jacobi-Bellman (HJB)
Equation, which can be used to solve for the optimal cost functional
for a
nonlinear optimal control problem, while one can solve a second partial
differential equation for the corresponding optimal control law.
The direct solution of the Hamilton-Jacobi-Bellman
Equation, however, is computationally untenable.
In the Adaptive Dynamic Programming Algorithm, one
starts with an initial cost functional/control law pair
and constructs a sequence of
cost functional/control law pairs in real-time.
The goal of the present talk is to provide a proof of the
Adaptive Dynamic Programming Theorem, to the effect
that (with the appropriate technical assumptions) this process converges
for a prescribed nonlinear optimal control problem with
unknown input-affine state dynamics and input-quadratic performance measure.
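In the linear-quadratic special case, an iteration of this cost-functional/control-law type reduces to Kleinman-style policy iteration, sketched below for intuition (a model-based sketch under our assumptions; the talk concerns the nonlinear, unknown-dynamics setting):

    # Kleinman-style policy iteration for the LQR special case.
    # System, weights, and the initial stabilizing gain are assumptions.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])

    K = np.array([[0.0, 1.0]])   # initial stabilizing control law
    for _ in range(10):
        Acl = A - B @ K
        # Policy evaluation: cost of the current law solves a Lyapunov eqn.
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: the greedy law for that cost functional.
        K = np.linalg.solve(R, B.T @ P)
    K_opt = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))
    print("gap to Riccati gain:", np.max(np.abs(K - K_opt)))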
Kelvin T. Erickson, University of Missouri-Rolla
E. Keith Stanek, University of Missouri-Rolla
Egemen Cetinkaya, Sprint PCS
Shari Dunn-Norman, University of Missouri-Rolla
Ann Miller, University of Missouri-Rolla
Supervisory control and data acquisition (SCADA) systems are commonly used
in the offshore oil and gas industry for remote monitoring and control of
offshore platforms. Using a generalized platform system architecture, the
reliability of the entire system is estimated using
probabilistic risk assessment. A
fault tree was constructed to show the effect of contributing events on
system-level reliability. Probabilistic methods provide a unifying method to
assess physical faults, contributing effects, human actions, and other
events having a high degree of uncertainty. The probability of various end
events, both acceptable and unacceptable, is calculated from the
probabilities of the basic initiating failure events.
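A minimal illustration of the calculation with independent basic events (the gate structure and probabilities are made-up assumptions, not the platform SCADA tree from this study):

    # Top-event probability from independent basic events via AND/OR gates.
    def p_and(*ps):          # all inputs must fail
        out = 1.0
        for p in ps:
            out *= p
        return out

    def p_or(*ps):           # any single failure suffices
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

    p_radio, p_backup_link, p_rtu, p_sensor = 1e-2, 5e-2, 1e-3, 2e-3
    # Communications fail only if both links fail; monitoring is lost if
    # communications OR the RTU OR the sensor fails.
    p_comms = p_and(p_radio, p_backup_link)
    print(f"P(loss of monitoring) = {p_or(p_comms, p_rtu, p_sensor):.2e}")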
Derong Liu, University of Illinois at Chicago
Yi Zhang, University of Illinois at Chicago
Sanqing Hu, University of Illinois at Chicago
In this talk, we consider
call admission control algorithms for SIR-based
power-controlled DS-CDMA cellular networks.
We consider networks that handle multiple classes of services.
When a new call (or a handoff call) arrives at a base station
requesting
admission, our algorithms will calculate the desired power
control set points for
the new call and all existing calls.
We will provide necessary and
sufficient conditions under which the power control algorithm will
have a feasible solution. These conditions are obtained by
deriving the inverse of the matrix used in the calculation
of power control set points.
If there is no feasible solution to power control
or if the desired power levels to be received
at the base station for some calls are
larger than the maximum allowable power limits, the admission request will be rejected.
Otherwise, the admission request will be granted. When
higher priority is desired for
handoff calls,
we will allow different thresholds for
new calls and handoff calls.
We will develop an adaptive algorithm that adjusts these thresholds in
real-time as the environment changes.
The performance of our
algorithms will be shown through computer simulation and
compared with existing algorithms.
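For context, a generic version of the feasibility test behind such admission decisions is sketched below (the talk derives its conditions via an explicit matrix inverse; the link gains, SIR targets, and noise level here are illustrative assumptions):

    # Generic SIR-balancing feasibility test: p = F p + b has a positive
    # solution iff the spectral radius of F is below 1. Toy numbers only.
    import numpy as np

    G = np.array([[1.0, 0.1, 0.2],     # G[i, j]: gain from user j to the
                  [0.2, 1.0, 0.1],     # base station serving user i
                  [0.1, 0.3, 1.0]])
    gamma = np.array([2.0, 2.0, 2.0])  # target SIRs
    noise = 1e-3

    F = gamma[:, None] * G / np.diag(G)[:, None]
    np.fill_diagonal(F, 0.0)           # normalized cross interference
    b = gamma * noise / np.diag(G)
    rho = np.max(np.abs(np.linalg.eigvals(F)))
    if rho < 1.0:
        p = np.linalg.solve(np.eye(3) - F, b)   # power control set points
        print("feasible; powers:", p)
    else:
        print("infeasible: the admission request would be rejected")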