
EE616
Dr. Janusz Starzyk
Computer Aided Analysis of Electronic Circuits
• Innovations in numerical techniques had a profound impact on CAD:
– Sparse matrix methods.
– Multi-step methods for the solution of differential equations.
– Adjoint techniques for sensitivity analysis.
– Sequential quadratic programming in optimization.
Fundamental Concepts
• NETWORK ELEMENTS:
– One-port (terminal voltage $v$, terminal current $i$):
Resistor: voltage controlled, $i = f(v)$, or current controlled, $v = f(i)$, with the condition $f(0) = 0$.
Capacitor: $q = f(v)$ and $i = \dfrac{dq}{dt}$.
Inductor: $\phi = f(i)$ and $v = \dfrac{d\phi}{dt}$.
Independent voltage source: $v = \text{const.}$
Independent current source: $i = \text{const.}$
– Two-port (port variables $v_1, i_1$ and $v_2, i_2$):
Voltage to voltage transducer (VVT): $i_1 = 0$, $v_2 = \mu v_1$
Voltage to current transducer (VCT): $i_1 = 0$, $i_2 = g v_1$
Current to voltage transducer (CVT): $v_1 = 0$, $v_2 = r i_1$
Current to current transducer (CCT): $v_1 = 0$, $i_2 = \alpha i_1$
Ideal transformer (IT): $v_1 = n v_2$, $i_1 = -\dfrac{1}{n} i_2$
Ideal gyrator (IG): $v_1 = r i_2$, $v_2 = -r i_1$
Positive impedance converter (PIC): $v_1 = k_1 v_2$, $i_1 = -k_2 i_2$
Negative impedance converter (NIC): $v_1 = k_1 v_2$, $i_1 = k_2 i_2$
Ideal operational amplifier (OPAMP): $v_1 = 0$, $i_1 = 0$
The OPAMP is equivalent to a nullor constructed from two singular one-ports:
nullator: $v = 0$ and $i = 0$
norator: $v, i$ arbitrary
(Figure: the OPAMP two-port and its nullor equivalent, with the nullator across the input port and the norator across the output port.)
Network Scaling
Typical designs deal with network elements having resistance from ohms to megohms, capacitance from fF to mF, and inductance from mH to H, within a frequency range up to $10^9$ Hz. Consider the following
EXAMPLE: Calculate the derivative
$$f'(x_o) \approx \frac{f(x_o + \Delta x) - f(x_o)}{\Delta x}$$
Let $f(x_o) = 1$ and $f(x_o + \Delta x) = 1.0000086$, known with 6-digit accuracy, with $\Delta x = 10^{-5}$.
Then $f'(x_o) = 0.86$, but because of roundoff errors $f(x_o + \Delta x) \approx 1.00001$, giving $f'(x_o) \approx 1$, which is a 16% error.
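This arithmetic can be checked directly. A minimal Python sketch reproducing the example's numbers (the rounding step stands in for 6-digit storage):

```python
dx = 1e-5
f_x0 = 1.0
f_x1 = 1.0000086                        # value known to more digits

exact = (f_x1 - f_x0) / dx              # 0.86
f_x1_6dig = round(f_x1, 5)              # keep ~6 significant digits -> 1.00001
rounded = (f_x1_6dig - f_x0) / dx       # 1.0

print(exact, rounded)                   # 0.86 vs 1.0
print(abs(rounded - exact) / exact)     # ~0.16, i.e. the 16% error
```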
Scaling is used to bring network impedances close to unity.
Impedance scaling: design values have subscript $d$ and scaled values subscript $s$. For scaling factor $K$ we get:
$$R_s = \frac{R_d}{K}$$
$$Z_{Cs} = \frac{Z_{Cd}}{K} = \frac{1}{K s C_d} \quad\Rightarrow\quad C_s = K C_d$$
$$Z_{Ls} = \frac{Z_{Ld}}{K} = \frac{s L_d}{K} \quad\Rightarrow\quad L_s = \frac{L_d}{K}$$
Frequency scaling: $\omega = \omega_s \omega_o$ has an effect on the reactive elements:
$$Z_C = \frac{1}{j\omega C_d} = \frac{1}{j\omega_s \omega_o C_d} = \frac{1}{j\omega_s C_s}$$
$$Z_L = j\omega L_d = j\omega_s \omega_o L_d = j\omega_s L_s$$
with:
$$C_s = \omega_o C_d \qquad L_s = \omega_o L_d$$
For both impedance and frequency scaling we have:
$$R_s = \frac{R_d}{K} \qquad L_s = \frac{L_d \omega_o}{K} \qquad C_s = C_d \omega_o K$$
VVT, CCT, IT, PIC, NIC, and OPAMP remain unchanged.
For the VCT the transconductance $g$ is multiplied by $K$.
For the CVT and IG the transresistance $r$ is divided by $K$.
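These rules are easy to mechanize. A minimal Python sketch for an R/L/C triple; the function name and calling convention are illustrative, not from the text:

```python
def scale_elements(R_d, L_d, C_d, K, w0):
    """Combined impedance scaling (factor K) and frequency scaling (w0, rad/s):
    R_s = R_d/K,  L_s = L_d*w0/K,  C_s = C_d*w0*K."""
    return R_d / K, L_d * w0 / K, C_d * w0 * K

# Example: a 1 kOhm / 10 mH / 100 nF design with K = 1e3, w0 = 1e4
R_s, L_s, C_s = scale_elements(1e3, 10e-3, 100e-9, K=1e3, w0=1e4)
print(R_s, L_s, C_s)   # 1.0, 0.1, 1.0 -- values brought close to unity
```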
NODAL EQUATIONS
For an $(n+1)$-terminal network:
$$\mathbf{Y}\mathbf{V} = \mathbf{J}$$
or:
$$\begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1,n+1} \\ y_{21} & y_{22} & \cdots & y_{2,n+1} \\ \vdots & & & \vdots \\ y_{n+1,1} & y_{n+1,2} & \cdots & y_{n+1,n+1} \end{bmatrix} \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_{n+1} \end{bmatrix} = \begin{bmatrix} j_1 \\ j_2 \\ \vdots \\ j_{n+1} \end{bmatrix}$$
(Figure: an $(n+1)$-terminal network with node voltages $V_1, V_2, \ldots, V_{n+1}$ and injected node currents $j_1, j_2, \ldots, j_{n+1}$.)
$\mathbf{Y}$ is called the indefinite admittance matrix. For a network with R, L, C elements and VCTs we can obtain $\mathbf{Y}$ directly from the network.
For the VCT, with controlling voltage $V_1$ measured from node $k$ to node $m$ and controlled current $gV_1$ flowing from node $i$ to node $j$, the entries entering $\mathbf{Y}$ are:
$$\begin{array}{c|cc} & \text{from } k & \text{to } m \\ \hline \text{from } i & g & -g \\ \text{to } j & -g & g \end{array}$$
When $k = i$ and $m = j$ we have a one-port with $g = Y$ and $i = Yv$:
$$\begin{array}{c|cc} & \text{from } k & \text{to } m \\ \hline \text{from } k & Y & -Y \\ \text{to } m & -Y & Y \end{array}$$
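These stamps translate directly into code. A minimal numpy sketch of building $\mathbf{Y}$ by stamping; the function names and the 0-based node numbering are my own, not from the text:

```python
import numpy as np

def stamp_admittance(Y, k, m, y):
    """Stamp a one-port admittance y connected between nodes k and m."""
    Y[k, k] += y
    Y[m, m] += y
    Y[k, m] -= y
    Y[m, k] -= y

def stamp_vct(Y, k, m, i, j, g):
    """Stamp a VCT: controlling voltage from node k to m,
    controlled current g*V1 flowing from node i to node j."""
    Y[i, k] += g
    Y[i, m] -= g
    Y[j, k] -= g
    Y[j, m] += g

# 3-terminal example: a 0.5 S conductance between nodes 0 and 1,
# plus a VCT sensing V(0)-V(2) and injecting current from node 1 to node 2
Y = np.zeros((3, 3))
stamp_admittance(Y, 0, 1, 0.5)
stamp_vct(Y, k=0, m=2, i=1, j=2, g=0.1)
print(Y)   # every row and column of the indefinite Y sums to zero
```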
Linear Equations and Gaussian Elimination:
For a linear network the nodal equations are linear. Nonlinear networks can be solved by linearization about an operating point. Thus the solution of linear equations is basic to many problems.
Consider the system of linear equations:
$$\mathbf{A}\mathbf{x} = \mathbf{b}$$
or:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
The solution can be obtained by inverting matrix $\mathbf{A}$:
$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$$
but this approach is not practical.
Gaussian elimination:
Rewrite the equations in explicit form and denote $b_i$ by $a_{i,n+1}$ to simplify notation:
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= a_{1,n+1} \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= a_{2,n+1} \\ &\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= a_{n,n+1} \end{aligned}$$
How to start Gaussian elimination?
Divide the first equation by $a_{11}$, obtaining:
$$x_1 + a_{12}^{(1)}x_2 + \cdots + a_{1n}^{(1)}x_n = a_{1,n+1}^{(1)}$$
where
$$a_{1j}^{(1)} = \frac{a_{1j}}{a_{11}}, \qquad j = 1, 2, \ldots, n+1.$$
Multiply this equation by $a_{21}$ and subtract it from the second. The coefficients of the new second equation are
$$a_{2j}^{(1)} = a_{2j} - a_{21}a_{1j}^{(1)}, \qquad j = 1, 2, \ldots, n+1.$$
With this transformation $a_{21}$ becomes zero. Similarly for the other equations, setting
$$a_{ij}^{(1)} = a_{ij} - a_{i1}a_{1j}^{(1)}, \qquad j = 1, 2, \ldots, n+1, \quad i = 2, \ldots, n$$
makes all coefficients of the first column zero, with the exception of $a_{11}^{(1)}$.
We repeat this process, selecting the diagonal elements as divisors, and obtain the general formulas
$$a_{kj}^{(k)} = a_{kj}^{(k-1)} / a_{kk}^{(k-1)}$$
$$a_{ij}^{(k)} = a_{ij}^{(k-1)} - a_{ik}^{(k-1)} a_{kj}^{(k)}$$
for $k = 1, \ldots, n$; $i = k+1, \ldots, n$; $j = k+1, \ldots, n+1$,
where the superscript shows how many changes were made. The resulting equations have the form:
$$\begin{aligned} x_1 + a_{12}^{(1)}x_2 + \cdots + a_{1n}^{(1)}x_n &= a_{1,n+1}^{(1)} \\ x_2 + \cdots + a_{2n}^{(2)}x_n &= a_{2,n+1}^{(2)} \\ &\vdots \\ x_n &= a_{n,n+1}^{(n)} \end{aligned}$$
Back substitution is used to obtain the solution: the last equation gives $x_n$, which is used to obtain $x_{n-1}$, and so on. In general:
$$x_i = a_{i,n+1}^{(i)} - \sum_{j=i+1}^{n} a_{ij}^{(i)} x_j, \qquad i = n-1, \ldots, 1.$$
Gaussian elimination requires $\approx \dfrac{n^3}{3}$ operations, while back substitution requires $\approx \dfrac{n^2}{2}$.
EXAMPLE: Example 2.5.b (p. 70)
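A direct transcription of these formulas into Python; this sketch assumes all divisors $a_{kk}$ are nonzero (no pivoting, see Section 2.6) and stores $\mathbf{b}$ as column $n+1$ of the augmented matrix:

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with back substitution, following the
    formulas above (no pivoting)."""
    n = len(b)
    # augmented matrix: column n holds b, i.e. a[i, n] = a_{i,n+1}
    a = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    # forward elimination
    for k in range(n):
        a[k, k:] /= a[k, k]                 # divide row k by the pivot a_kk
        for i in range(k + 1, n):
            a[i, k:] -= a[i, k] * a[k, k:]  # zero out column k below the pivot
    # back substitution: x_i = a_{i,n+1} - sum_{j>i} a_ij x_j
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = a[i, n] - a[i, i + 1:n] @ x[i + 1:]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
print(gauss_solve(A, b))   # [0.8, 1.4]
```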
Triangular decomposition:
Triangular decomposition has an advantage over Gaussian elimination, as it gives a simple solution to systems with different right-hand-side vectors and to the transpose systems required in sensitivity computations.
Assume we can factor the matrix $\mathbf{A}$ as follows:
$$\mathbf{A} = \mathbf{L}\mathbf{U}$$
where
$$\mathbf{L} = \begin{bmatrix} l_{11} & & & 0 \\ l_{21} & l_{22} & & \\ \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \qquad \mathbf{U} = \begin{bmatrix} 1 & u_{12} & u_{13} & \cdots & u_{1n} \\ & 1 & u_{23} & \cdots & u_{2n} \\ & & \ddots & & \vdots \\ 0 & & & & 1 \end{bmatrix}$$
$\mathbf{L}$ stands for lower triangular and $\mathbf{U}$ for upper triangular. Replacing $\mathbf{A}$ by $\mathbf{L}\mathbf{U}$, the system of equations takes the form:
$$\mathbf{L}\mathbf{U}\mathbf{x} = \mathbf{b}$$
Define an auxiliary vector $\mathbf{z}$ by
$$\mathbf{U}\mathbf{x} = \mathbf{z}$$
Then $\mathbf{L}\mathbf{z} = \mathbf{b}$, and $\mathbf{z}$ can be found easily from:
$$\begin{aligned} l_{11}z_1 &= b_1 \\ l_{21}z_1 + l_{22}z_2 &= b_2 \\ &\vdots \\ l_{n1}z_1 + l_{n2}z_2 + \cdots + l_{nn}z_n &= b_n \end{aligned}$$
so
$$z_1 = b_1 / l_{11}$$
and
$$z_i = \left( b_i - \sum_{j=1}^{i-1} l_{ij} z_j \right) \Big/ l_{ii}, \qquad i = 2, \ldots, n.$$
This is called forward elimination. The solution of $\mathbf{U}\mathbf{x} = \mathbf{z}$ is called backward substitution. We have
$$\begin{aligned} x_1 + u_{12}x_2 + \cdots + u_{1n}x_n &= z_1 \\ x_2 + \cdots + u_{2n}x_n &= z_2 \\ &\vdots \\ x_n &= z_n \end{aligned}$$
so
$$x_n = z_n$$
and
$$x_i = z_i - \sum_{j=i+1}^{n} u_{ij} x_j, \qquad i = n-1, \ldots, 1.$$
To find the LU decomposition, consider a $4 \times 4$ matrix. Taking the product of $\mathbf{L}$ and $\mathbf{U}$ we have:
$$\mathbf{A} = \begin{bmatrix}
l_{11} & l_{11}u_{12} & l_{11}u_{13} & l_{11}u_{14} \\
l_{21} & l_{21}u_{12} + l_{22} & l_{21}u_{13} + l_{22}u_{23} & l_{21}u_{14} + l_{22}u_{24} \\
l_{31} & l_{31}u_{12} + l_{32} & l_{31}u_{13} + l_{32}u_{23} + l_{33} & l_{31}u_{14} + l_{32}u_{24} + l_{33}u_{34} \\
l_{41} & l_{41}u_{12} + l_{42} & l_{41}u_{13} + l_{42}u_{23} + l_{43} & l_{41}u_{14} + l_{42}u_{24} + l_{43}u_{34} + l_{44}
\end{bmatrix}$$
From the first column we have
$$l_{i1} = a_{i1}, \qquad i = 1, \ldots, 4$$
from the first row we find
$$u_{1j} = a_{1j} / l_{11}, \qquad j = 2, \ldots, 4$$
from the second column we have
$$l_{i2} = a_{i2} - l_{i1}u_{12}, \qquad i = 2, \ldots, 4$$
and so on…
In a machine implementation $\mathbf{L}$ and $\mathbf{U}$ overwrite $\mathbf{A}$, with $\mathbf{L}$ occupying the lower and $\mathbf{U}$ the upper triangle of $\mathbf{A}$.
In general, the algorithm of LU decomposition can be written as follows (Crout algorithm):
1. Set $k = 1$.
2. Compute column $k$ of $\mathbf{L}$ using
$$l_{ik} = a_{ik} - \sum_{m=1}^{k-1} l_{im}u_{mk}, \qquad i \ge k.$$
If $k = n$, stop.
3. Compute row $k$ of $\mathbf{U}$ using
$$u_{kj} = \left( a_{kj} - \sum_{m=1}^{k-1} l_{km}u_{mj} \right) \Big/ l_{kk}, \qquad j > k.$$
4. Set $k = k + 1$ and go to step 2.
This technique is implemented in the text by the CROUT subroutine. A modification dealing with rows only is realized by LUROW, and a modification of Gaussian elimination which gives the LU decomposition is realized by the LUG subroutine.
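The following is a minimal Python sketch of the Crout algorithm and the two substitutions, assuming no pivoting is needed; it is an illustration, not the text's CROUT subroutine. As described above, the factors overwrite a working copy of $\mathbf{A}$, with the unit diagonal of $\mathbf{U}$ not stored:

```python
import numpy as np

def crout_lu(A):
    """Crout LU decomposition: on return, the lower triangle (incl.
    diagonal) holds L and the strict upper triangle holds U (the unit
    diagonal of U is implied). No pivoting."""
    A = np.asarray(A, float)
    n = A.shape[0]
    for k in range(n):
        # column k of L: l_ik = a_ik - sum_m l_im * u_mk   (i >= k)
        for i in range(k, n):
            A[i, k] -= A[i, :k] @ A[:k, k]
        # row k of U: u_kj = (a_kj - sum_m l_km * u_mj) / l_kk   (j > k)
        for j in range(k + 1, n):
            A[k, j] = (A[k, j] - A[k, :k] @ A[:k, j]) / A[k, k]
    return A

def lu_solve(LU, b):
    """Forward elimination Lz = b, then backward substitution Ux = z."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                       # forward
        z[i] = (b[i] - LU[i, :i] @ z[:i]) / LU[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward (unit diagonal of U)
        x[i] = z[i] - LU[i, i + 1:] @ x[i + 1:]
    return x

LU = crout_lu([[4.0, 2.0], [2.0, 3.0]])
print(lu_solve(LU, [8.0, 7.0]))   # [1.25, 1.5]
```

Because the factors are kept, each additional right-hand side costs only the forward and backward substitutions.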
Features of LU decomposition:
1. Simple calculation of the determinant: $\det \mathbf{A} = \prod_{i=1}^{n} l_{ii}$.
2. If only the right-hand-side vector $\mathbf{b}$ is changed, there is no need to recalculate the decomposition; only the forward and backward substitutions are performed, which takes $\approx n^2$ operations.
3. The transpose system $\mathbf{A}^T\mathbf{x} = \mathbf{c}$, required for sensitivity calculation, can be solved easily since $\mathbf{A}^T = \mathbf{U}^T\mathbf{L}^T$.
4. The number of operations required for LU decomposition is $M = \frac{1}{3}\left(n^3 - n\right)$ (equivalent to Gaussian elimination).
Example 2.5.1
2.6 PIVOTING:
The element by which we divide in Gaussian elimination is called the pivot; it must not be zero. To improve accuracy the pivot element should have a large absolute value.
Partial pivoting: search for the largest element in the column.
Full pivoting: search for the largest element in the matrix.
Example 2.6.1
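In code, partial pivoting amounts to a column search and a row swap before each elimination step. A minimal sketch of such a helper (hypothetical, not from the text) that could be called at the top of the elimination loop in the gauss_solve sketch above:

```python
import numpy as np

def pivot_partial(a, k):
    """Partial pivoting: pick the largest |a[i, k]| for rows i >= k
    and swap that row with row k, so the divisor is as large as possible."""
    p = k + int(np.argmax(np.abs(a[k:, k])))
    if p != k:
        a[[k, p]] = a[[p, k]]   # swapping rows leaves the solution unchanged
```

Full pivoting would instead search the whole remaining submatrix and also swap columns, which permutes the unknowns.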
SPARSE MATRIX PRINCIPLES
When many coefficients of the matrix $\mathbf{A}$ are zero, we use sparse matrix techniques to reduce the number of operations. This not only reduces the time required to solve the system of equations but also reduces memory requirements, since zero coefficients are not stored at all.
(Read Section 2.7.)
Pivot selection strategies are motivated mostly by the possibility of reducing the number of operations.
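Library sparse solvers combine these ideas. A small illustration with SciPy's sparse LU, assuming SciPy is available; the tridiagonal test matrix is just a stand-in for a sparse nodal matrix:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# a sparse tridiagonal test matrix in compressed-column storage
n = 1000
A = sparse.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)            # sparse LU; pivot ordering chosen to limit fill-in
x = lu.solve(b)         # each solve is only forward/backward substitution
print(np.allclose(A @ x, b))   # True
```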