Introduction


Introduction to System Modeling and Control





• Introduction
• Basic Definitions
• Different Model Types
• System Identification
• Neural Network Modeling
Mathematical Modeling (MM)




• A mathematical model represents a physical system in terms of mathematical equations.
• It is derived from physical laws (e.g., Newton's laws, Hooke's law, circuit laws, etc.) in combination with experimental data.
• It quantifies the essential features and behavior of a physical system or process.
• It may be used for prediction, design modification, and control.
Engineering Modeling Process
[Diagram: an engineering system, theory, and data are combined into a mathematical model (e.g., m dv/dt + bv = f); the model is solved numerically to produce solution data, and is also used for model reduction, control design, and graphical visualization/animation.]
Example: Automobile
• Engine Design and Control
• Heat & Vibration Analysis
• Structural Analysis
Definition of System
• System: an aggregation or assemblage of things so combined by man or nature as to form an integral and complex whole.
• From an engineering point of view, a system is an interconnection of components or functional units that act together to perform a certain objective, e.g., an automobile, machine tool, robot, aircraft, etc.
System Variables
Every system is associated with three types of variables:
[Block diagram: input u → System (internal state x) → output y]
• Input variables (u) originate outside the system and are not affected by what happens inside the system.
• State variables (x) constitute a minimal set of system variables necessary to describe completely the state of the system at any given time.
• Output variables (y) are a subset, or a functional combination, of the state variables that one is interested in monitoring or regulating.
Mathematical Model Types
Lumped-parameter vs. distributed; continuous-time vs. discrete-event.

Most general (state-space) model:
ẋ = f(x, u, t)
y = h(x, u, t)

Input-Output Model:
y^(n) = f(y^(n−1), …, ẏ, y, u^(n), …, u̇, u, t)

Linear Time-Invariant (LTI) model:
ẋ = Ax + Bu
y = Cx + Du

LTI Input-Output Model:
y^(n) + a_1 y^(n−1) + … + a_{n−1} ẏ + a_n y = b_0 u^(n) + … + b_{n−1} u̇ + b_n u

Transfer Function Model:
Y(s) = G(s)U(s)

Discrete-time model: ẋ(t) is replaced by x(t+1), and y^(i)(t) by y(t+i).
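As a concrete illustration (not from the slides), the sketch below builds the LTI state-space form for a mass-spring-damper with assumed values of m, b, k and converts it to the equivalent transfer function using SciPy.

```python
# Minimal sketch: the same LTI system in state-space and transfer-function form.
# The numerical values of m, b, k are assumed for illustration only.
import numpy as np
from scipy import signal

m, b, k = 1.0, 0.5, 4.0                   # assumed mass, damping, stiffness

# State vector x = [y, dy/dt]:  dx/dt = A x + B u,  y = C x + D u
A = np.array([[0.0, 1.0],
              [-k/m, -b/m]])
B = np.array([[0.0], [1.0/m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Equivalent transfer function Y(s)/U(s) = 1/(m s^2 + b s + k)
num, den = signal.ss2tf(A, B, C, D)
print(num, den)
```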
Example: Accelerometer (Text 6.6.1)
Consider the mass-spring-damper system (which may be used as an accelerometer or seismograph) shown below:
[Free-body diagram: mass M acted on by the spring force fs and the damper force fd; x is the position of the mass and u the position of the base, with y = u − x.]
• fs(y): position-dependent spring force, y = u − x
• fd(ẏ): velocity-dependent damper force
Newton's 2nd law: M ẍ = fd(ẏ) + fs(y); substituting x = u − y gives
M ÿ + fd(ẏ) + fs(y) = M ü
Linearized model:
M ÿ + b ẏ + k y = M ü
Example II: Delay Feedback
Consider the digital system shown below:
[Block diagram: the input u and the output fed back through a unit delay z⁻¹ are summed to form y.]
Input-output equation:
y(k) = y(k−1) + u(k−1)
Equivalent to an integrator:
y(k) = Σ_{j=0}^{k−1} u(j)
Transfer Function
The transfer function is the algebraic input-output relationship of a linear time-invariant system in the s (or z) domain.
[Block diagram: U → G → Y]
Example: Accelerometer System
m ÿ + b ẏ + k y = u  ⇒  G(s) = Y(s)/U(s) = 1/(m s² + b s + k), where s corresponds to d/dt.
Example: Digital Integrator
y(k) = y(k−1) + u(k−1)  ⇒  G(z) = Y(z)/U(z) = z⁻¹/(1 − z⁻¹), where z is the forward-shift operator.
Comments on TF
• The transfer function is a property of the system, independent of the input and output signals.
• It is an algebraic representation of differential equations.
• Systems from different disciplines (e.g., mechanical and electrical) may have the same transfer function.
Accelerometer Transfer Function
• Accelerometer model: M ÿ + b ẏ + k y = M ü
• Transfer function: Y/A = 1/(s² + 2ζωn s + ωn²), where A is the base acceleration ü
• ωn = (k/M)^(1/2), ζ = b/(2Mωn)
• Natural frequency ωn, damping factor ζ
• The model can be used to evaluate the sensitivity of the accelerometer:
– Impulse response
– Frequency response
Bode Diagrams
[Bode plot of the accelerometer transfer function: magnitude (dB) and phase (deg) from U to Y versus normalized frequency ω/ωn (rad/sec).]
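A Bode plot like the one above can be reproduced numerically; the sketch below does so for the accelerometer transfer function with assumed values of ωn and ζ.

```python
# Sketch: frequency response of G(s) = 1/(s^2 + 2*zeta*wn*s + wn^2).
# wn and zeta are assumed values for illustration.
import numpy as np
from scipy import signal

wn, zeta = 100.0, 0.3                    # assumed natural frequency (rad/s) and damping
G = signal.TransferFunction([1.0], [1.0, 2*zeta*wn, wn**2])

w = np.logspace(-1, 1, 200) * wn         # 0.1*wn ... 10*wn
w, mag_db, phase_deg = signal.bode(G, w) # magnitude in dB, phase in degrees
print(mag_db[0], phase_deg[-1])          # flat gain ~1/wn^2 at low freq; phase -> -180 deg
```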
Mixed Systems
• Most systems in mechatronics are of the mixed type, e.g., electromechanical, hydromechanical, etc.
• Each subsystem within a mixed system can be modeled as a single-discipline system first.
• Power transformations among the various subsystems are then used to integrate them into the overall system.
• The overall mathematical model may be assembled into a system of equations or a transfer function.
Electro-Mechanical Example
Input: voltage u; output: angular velocity ω
[Schematic: armature circuit with resistance Ra and inductance La carrying current ia, driving a dc motor with load inertia J, viscous friction B, and speed ω.]
Electrical subsystem (loop method):
u = Ra ia + La dia/dt + eb, where eb is the back-emf voltage
Mechanical subsystem:
Tmotor = J dω/dt + B ω
Electro-Mechanical Example
Power transformation:
• Torque-current: Tmotor = Kt ia
• Voltage-speed: eb = Kb ω
where Kt is the torque constant and Kb the velocity constant; for an ideal motor Kt = Kb.
Combining the previous equations results in the following mathematical model:
La dia/dt + Ra ia + Kb ω = u
J dω/dt + B ω − Kt ia = 0
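The sketch below simulates this model for a step voltage input; the parameter values are assumed for illustration only.

```python
# Sketch: simulating La*dia/dt + Ra*ia + Kb*w = u and J*dw/dt + B*w - Kt*ia = 0.
import numpy as np
from scipy.integrate import solve_ivp

Ra, La, J, B, Kt, Kb = 1.0, 0.5, 0.01, 0.1, 0.05, 0.05   # assumed parameters

def motor(t, x, u=1.0):                  # x = [ia, w]; step input u = 1 V
    ia, w = x
    dia = (u - Ra*ia - Kb*w) / La
    dw = (Kt*ia - B*w) / J
    return [dia, dw]

sol = solve_ivp(motor, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
print(sol.y[1, -1])                      # steady-state speed ~ Kt*u/(Ra*B + Kt*Kb)
```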
Brushless D.C. Motor
• A brushless PMSM has a wound stator, a PM rotor assembly, and a position sensor.
• The combination of an inner PM rotor and outer windings offers the advantages of
– low rotor inertia,
– efficient heat dissipation, and
– reduced motor size.
dq-Coordinates
[Diagram: three-phase windings a, b, c and the rotating d-q reference frame at electrical angle θe.]
θe = pθ + θ0, where θe is the electrical angle, θ0 is an offset, and p = number of poles/2.
Mathematical Model
did/dt = −(R/L) id + p ωm iq + (1/L) vd
diq/dt = −(R/L) iq − p ωm id − (Ke/L) ωm + (1/L) vq
J dωm/dt = Te = Kt iq
where p = number of poles/2 and Ke = back-emf constant.
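For completeness, a sketch of this dq-frame model as an ODE right-hand side suitable for a numerical solver; all parameter values are assumed for illustration.

```python
# Sketch: right-hand side of the dq-frame PMSM model above, usable with
# scipy.integrate.solve_ivp. Parameter values are assumed, not from the slides.
R, L, J, p, Ke, Kt = 0.5, 1e-3, 1e-4, 2.0, 0.1, 0.1

def pmsm(t, x, vd=0.0, vq=1.0):
    id_, iq, wm = x                      # d/q currents and mechanical speed
    did = -(R/L)*id_ + p*wm*iq + vd/L
    diq = -(R/L)*iq - p*wm*id_ - (Ke/L)*wm + vq/L
    dwm = Kt*iq / J                      # J*dwm/dt = Te = Kt*iq
    return [did, diq, dwm]
```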
System Identification
Experimental determination of the system model. There are two methods of system identification:
– Parametric identification: the input-output model coefficients are estimated to "fit" the input-output data.
– Frequency-domain (non-parametric): the Bode diagram [G(jω) vs. ω on a log-log scale] is estimated directly from the input-output data. The input can be either a swept sinusoid or a random signal.
Electro-Mechanical Example
Transfer function with La = 0:
Ω(s)/U(s) = Kt / [Ra(Js + B) + Kt Kb] = k/(Ts + 1)
[Step response plot: the response to a step input u rises with time constant T toward the steady-state value k·u; shown for k = 10, T = 0.1 s.]
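The graphical method can be mimicked in code: estimate k from the steady-state value and T from the 63.2% rise time. The sketch below uses synthetic step-response data for illustration.

```python
# Sketch: first-order gain k and time constant T from step-response data.
import numpy as np

def estimate_first_order(t, y, u_step):
    """k from the steady-state value, T from the 63.2% rise time."""
    y_ss = y[-1]                         # assumes the record reaches steady state
    k = y_ss / u_step
    idx = np.argmax(y >= 0.632 * y_ss)   # first sample past 63.2% of the final value
    return k, t[idx]

# Synthetic data from y(t) = k*u*(1 - exp(-t/T)) with k = 10, T = 0.1
t = np.linspace(0, 0.5, 501)
y = 10 * 1.0 * (1 - np.exp(-t / 0.1))
print(estimate_first_order(t, y, 1.0))   # ~ (10, 0.1)
```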
Comments on First-Order Identification
The graphical method is
– difficult to optimize with noisy data and multiple data sets,
– only applicable to low-order systems, and
– difficult to automate.
Least Squares Estimation
• Given a linear system with uniformly sampled input-output data (u(k), y(k)), then
y(k) = a1 y(k−1) + … + an y(k−n) + b1 u(k−1) + … + bn u(k−n) + noise
• The least-squares curve-fitting technique may be used to estimate the coefficients of the above model, called an ARMA (AutoRegressive Moving Average) model.
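A minimal sketch of this estimate (with n = 2 and synthetic data generated from known coefficients, so the result can be checked):

```python
# Sketch: least-squares fit of y(k) = a1*y(k-1)+...+an*y(k-n)+b1*u(k-1)+...+bn*u(k-n).
import numpy as np

def estimate_arma(u, y, n):
    rows, rhs = [], []
    for k in range(n, len(y)):
        past_y = y[k-n:k][::-1]          # y(k-1), ..., y(k-n)
        past_u = u[k-n:k][::-1]          # u(k-1), ..., u(k-n)
        rows.append(np.concatenate([past_y, past_u]))
        rhs.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta[:n], theta[n:]          # (a1..an), (b1..bn)

# Synthetic data from known coefficients, then recover them
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6*y[k-1] - 0.2*y[k-2] + 0.5*u[k-1] + 0.1*u[k-2] + 0.01*rng.standard_normal()
a_hat, b_hat = estimate_arma(u, y, 2)
print(a_hat, b_hat)                      # ~ [0.6, -0.2], [0.5, 0.1]
```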
Frequency-Domain Identification
Method I (sweeping sinusoid): apply a sinusoidal input of amplitude Ai and frequency ω; after the transient dies out (t >> 0), the output is a sinusoid of amplitude Ao with phase shift φ.
Magnitude = (Ao/Ai) expressed in dB; Phase = φ
Method II (random input): a random signal is applied to the system, and the transfer function is determined by analyzing the spectra of the input and the output.
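Method II can be sketched numerically: the frequency response is estimated as the ratio of the input-output cross-spectrum to the input auto-spectrum. The simulated plant and sample rate below are assumptions for illustration only.

```python
# Sketch: non-parametric frequency-response estimate G(jw) ~ Suy(w)/Suu(w).
import numpy as np
from scipy import signal

fs = 1000.0                              # assumed sample rate (Hz)
rng = np.random.default_rng(2)
u = rng.standard_normal(100_000)         # random excitation

# Assumed second-order plant used only to generate "measured" output data
wn, zeta = 2*np.pi*50, 0.2
bz, az, _ = signal.cont2discrete(([wn**2], [1.0, 2*zeta*wn, wn**2]), dt=1/fs)
y = signal.lfilter(np.squeeze(bz), az, u)

f, Suu = signal.welch(u, fs=fs, nperseg=4096)
f, Suy = signal.csd(u, y, fs=fs, nperseg=4096)
G = Suy / Suu                            # estimated frequency response
mag_db = 20*np.log10(np.abs(G))
phase_deg = np.angle(G, deg=True)
```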
Photo Receptor Drive Test Fixture
Experimental Bode Plot
[Experimental Bode plot with low-order and high-order model fits: magnitude (dB) and phase (deg) versus frequency (Hz), roughly 0.1 Hz to 1×10³ Hz.]
Nonlinear System Modeling & Control
Neural Network Approach

Introduction
• Real-world nonlinear systems are often difficult to characterize by first-principles modeling.
• First-principles models are often not suitable for control design.
• Modeling is often accomplished with input-output maps of experimental data from the system.
• Neural networks provide a powerful tool for data-driven modeling of nonlinear systems.
Input-Output (NARMA) Model
[Block diagram: delayed inputs u[k−1], …, u[k−m] and delayed outputs y[k−1], …, y[k−m] (through unit delays z⁻¹) feed the nonlinear map g to produce y[k].]
y[k] = g(y[k−m], …, y[k−1], u[k−m], …, u[k−1])
What is a Neural Network?
• Artificial Neural Networks (ANNs) are massively parallel computational machines (programs or hardware) patterned after biological neural nets.
• ANNs are used in a wide array of applications requiring reasoning/information processing, including
– pattern recognition/classification
– monitoring/diagnostics
– system identification & control
– forecasting
– optimization
Advantages and Disadvantages of ANNs
• Advantages:
– Learning from data
– Parallel architecture
– Adaptability
– Fault tolerance and redundancy
• Disadvantages:
– Hard to design
– Unpredictable behavior
– Slow training
– "Curse" of dimensionality
Biological Neural Nets
• The neuron is the building block of biological neural networks.
• A single-cell neuron consists of the cell body (soma), dendrites, and an axon.
• The dendrites receive signals from the axons of other neurons.
• The pathway between neurons is a synapse of variable strength.
Artificial Neural Networks
• They are used to learn a given input-output relationship from input-output data (exemplars).
• The neural network type depends primarily on its activation function.
• Most popular ANNs:
– Sigmoidal multilayer networks
– Radial basis function networks
– NLPN (Sadegh et al. 1998, 2010)
Multilayer Perceptron
• The MLP is used to learn, store, and produce input-output relationships:
y = Σi wi σ(Σj vij xj)
[Diagram: inputs x1, x2 pass through input weights vij, the activation function σ, and output weights wi to produce y.]
• The activation function σ(x) is a suitable nonlinear function:
– Sigmoidal: σ(x) = tanh(x)
– Gaussian: σ(x) = e^(−x²)
– Triangular (to be described later)
Sigmoidal and Gaussian Activation Functions
[Plot: sigmoidal and Gaussian activation functions σ(x) over −5 ≤ x ≤ 5.]
Multilayer Networks
[Diagram: input x propagates through weight layers W0, …, Wp to produce the output y.]
Wk,ij: weight from node i in layer k−1 to node j in layer k
y = Wpᵀ σ(Wp−1ᵀ σ( … σ(W1ᵀ σ(W0ᵀ x))))
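A minimal NumPy sketch of this forward pass (layer sizes and weight values are arbitrary):

```python
# Sketch: y = Wp' s(W_{p-1}' s( ... s(W0' x))) with a tanh activation.
import numpy as np

def forward(x, weights):
    """weights = [W0, ..., Wp]; Wk maps layer k (rows) to layer k+1 (columns)."""
    a = x
    for W in weights[:-1]:
        a = np.tanh(W.T @ a)             # hidden layers apply the activation
    return weights[-1].T @ a             # linear output layer

rng = np.random.default_rng(3)
W0 = rng.standard_normal((2, 8))         # 2 inputs -> 8 hidden nodes
W1 = rng.standard_normal((8, 1))         # 8 hidden nodes -> 1 output
print(forward(np.array([0.5, -1.0]), [W0, W1]))
```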
Universal Approximation Theorem (UAT)
A single-hidden-layer perceptron network with a sufficiently large number of neurons can approximate any continuous function arbitrarily closely.
Comments:
– The UAT does not say how large the network should be.
– Optimal design and training may be difficult.
Training
• Objective: given a set of training input-output data (x, yt), find the network weights that minimize the expected error
L = E(‖y − yt‖²)
• Steepest descent method: adjust the weights in the direction of steepest descent of L to make dL as negative as possible:
dL = E(eᵀ dy) ≤ 0, e = y − yt
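A minimal steepest-descent training sketch for a single-hidden-layer tanh network; the target function, network size, and learning rate are assumed for illustration and do not come from the slides.

```python
# Sketch: gradient descent on L = mean(|y - yt|^2) for y = w' tanh(V' x).
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 1))    # training inputs
Yt = np.sin(np.pi * X)                   # target outputs (assumed example)

n_h, lr = 10, 0.05                       # hidden nodes and learning rate (assumed)
V = rng.standard_normal((1, n_h)) * 0.5  # input-to-hidden weights
w = rng.standard_normal((n_h, 1)) * 0.5  # hidden-to-output weights

for _ in range(2000):
    H = np.tanh(X @ V)                   # hidden activations, shape (N, n_h)
    E = H @ w - Yt                       # error e = y - yt
    grad_w = 2 * H.T @ E / len(X)        # dL/dw
    grad_V = 2 * X.T @ ((E @ w.T) * (1 - H**2)) / len(X)   # dL/dV
    w -= lr * grad_w                     # steepest-descent updates
    V -= lr * grad_V

print(np.mean((np.tanh(X @ V) @ w - Yt) ** 2))   # final training error
```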
Neural Network Approximation of NARMA Model
[Diagram: the delayed inputs and outputs u[k−1], …, y[k−m] feed a neural network that approximates g.]
Question: Is an arbitrary neural network model consistent with a physical system (i.e., one that has an internal realization)?
State-Space Model
[Block diagram: u → system → y, with internal states x1, …, xn]
x[k+1] = f(x[k], u[k])
y[k] = h(x[k])
A Class of Observable State-Space Realizable Models
• Consider the input-output model:
y[k] = g(y[k−m], …, y[k−1], u[k−m], …, u[k−1])
• When does the input-output model have a state-space realization
x[k+1] = f(x[k], u[k]), y[k] = h(x[k]) ?
Comments on State Realization of Input-Output Models
• A generic input-output model does not necessarily have a state-space realization (Sadegh 2001, IEEE Trans. on Automatic Control).
• There are necessary and sufficient conditions for realizability.
• Once these conditions are satisfied, the state-space model may be constructed symbolically or computationally.
• A general class of input-output models may be constructed that is guaranteed to admit a state-space realization.
Fluid Power Application

INTRODUCTION
APPLICATIONS:
• Robotics
• Manufacturing
• Automobile industry
• Hydraulics
EXAMPLE: EHPV control (electro-hydraulic poppet valve)
– Highly nonlinear
– Time-varying characteristics
– Control schemes needed to open two or more valves simultaneously
Motivation
• The valve opening is controlled by means of the solenoid input current.
• The standard approach is to calibrate the current-opening relationship for each valve.
• Manual calibration is time consuming and inefficient.
Research Goals
• Precisely control the conductivity of each valve using a nominal input-output relationship.
• Auto-calibrate the input-output relationship.
• Use the auto-calibration for precise control without requiring the exact input-output relationship.
INTRODUCTION
EXAMPLE:
• Several EHPVs were used to control the hydraulic piston.
• Each EHPV is supplied with its own learning controller.
• The learning controller employs a neural network (NLPN) in the feedback.
• Satisfactory results were obtained for a single EHPV used for pressure control.
Control Design
• Nonlinear system ('lifted' to a square system):
x_{k+n} = F(x_k, u_k)
• Feedback control law:
u = φ̂(x_d, x_d⁺) − Kp (x − x_d)
where φ̂(x_d, x_d⁺) is the neural network output and x_d is the desired state.
• The neural network controller is directly trained based on the time history of the tracking error.
Learning Control Block Diagram
Experimental Results
Experimental Results