APPLICATIONS OF THE JACOBIAN METHOD
EASTERN MEDITERRANEAN UNIVERSITY
Department of Industrial Engineering
Nonlinear Optimization
Spring 2014-15
Instructor: Prof. Dr. Sahand Daneshvar
Submitted by: AAKASH AHMED
Student number: 145322
Constrained Derivatives (Jacobian) Method
Minimize z = f(X)
subject to
g(X) = 0
where

X = (x1, x2, ..., xn),  g = (g1, g2, ..., gm)

The functions f(X) and gi(X), i = 1, 2, ..., m, are twice continuously differentiable.
The idea of using constrained derivatives is to develop a closed-form expression
for the first partial derivatives of f(X) at all points that satisfy the constraints g(X) = 0.
The corresponding stationary points are identified as the points at which these partial
derivatives vanish. The sufficiency conditions introduced earlier can then be
used to check the identity of the stationary points.
To clarify the proposed concept, consider f(x1, x2) illustrated in Figure 18.4. This
function is to be minimized subject to the constraint

g(x1, x2) = x2 - b = 0

where b is a constant. From the figure, the curve designated by the three points A,
B, and C represents the values of f(x1, x2) for which the given constraint is always
satisfied. The constrained derivatives method defines the gradient of f(x1, x2) at
any point on the curve ABC. Point B, at which the constrained derivative vanishes,
is a stationary point for the constrained problem.
The method is now developed mathematically. By Taylor's theorem, for X + ΔX
in the feasible neighborhood of X, we have

f(X + ΔX) - f(X) = ∇f(X) ΔX + O(Δxj²)

and

g(X + ΔX) - g(X) = ∇g(X) ΔX + O(Δxj²)

As Δxj → 0, these reduce to

∂f(X) = ∇f(X) ∂X  and  ∂g(X) = ∇g(X) ∂X
Figure 18.4: Demonstration of the idea of the Jacobian method
For feasibility, we must have g(X) = 0 and ∂g(X) = 0, and it follows that

∂f(X) - ∇f(X) ∂X = 0
∇g(X) ∂X = 0

This gives (m + 1) equations in (n + 1) unknowns, ∂f(X) and ∂X. Note that ∂f(X)
is a dependent variable, and hence is determined as soon as ∂X is known.
This means that, in effect, we have m equations in n unknowns.
If m > n, at least (m - n) equations are redundant. Eliminating redundancy,
the system reduces to m ≤ n. If m = n, the solution is ∂X = 0, and X has no feasible
neighborhood, which means that the solution space consists of one point only. The
remaining case, where m < n, requires further elaboration.
Define

X = (Y, Z)

such that

Y = (y1, y2, ..., ym),  Z = (z1, z2, ..., z(n-m))

The vectors Y and Z are called the dependent and independent
variables, respectively. Rewriting the gradient vectors of f and g in
terms of Y and Z, we get

∇f(Y, Z) = (∇_Y f, ∇_Z f)
∇g(Y, Z) = (∇_Y g, ∇_Z g)
Define

J = ∇_Y g,  C = ∇_Z g

J, of size m × m, is called the Jacobian matrix and C, of size m × (n - m), the control
matrix. The Jacobian J is assumed nonsingular. This is always possible because the given
m equations are independent by definition. The components of the vector Y must thus be
selected from among those of X such that J is nonsingular.
The original set of equations in ∂f(X) and ∂X may be written as

∂f(Y, Z) = ∇_Y f ∂Y + ∇_Z f ∂Z

and

J ∂Y = -C ∂Z

Because J is nonsingular, its inverse J⁻¹ exists. Hence

∂Y = -J⁻¹ C ∂Z

Substituting for ∂Y in the equation for ∂f(Y, Z) gives ∂f as a function of ∂Z, that is,

∂f(Y, Z) = (∇_Z f - ∇_Y f J⁻¹ C) ∂Z

From this equation, the constrained derivative with respect to the independent
vector Z is given by

∇_c f = ∂f(Y, Z) / ∂Z = ∇_Z f - ∇_Y f J⁻¹ C

where ∇_c f is the constrained gradient vector of f with respect to Z, which must
vanish at a stationary point.
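The constrained gradient formula is easy to check numerically. The sketch below uses a small hypothetical problem that is not from the text: minimize f = x1² + x2² + x3² subject to x1 + x2 + x3 - 3 = 0, with dependent Y = (x1) and independent Z = (x2, x3), so that J = [1] and C = [1, 1].

import numpy as np

def constrained_gradient(x):
    """grad_c f = grad_Z f - grad_Y f @ inv(J) @ C for the sample problem."""
    grad_f = 2.0 * x                      # analytic gradient of f = sum x_i^2
    grad_Y, grad_Z = grad_f[:1], grad_f[1:]
    J = np.array([[1.0]])                 # dg/dY
    C = np.array([[1.0, 1.0]])            # dg/dZ
    return grad_Z - grad_Y @ np.linalg.solve(J, C)

x = np.array([0.5, 1.0, 1.5])             # feasible: g(x) = 0
print(constrained_gradient(x))            # -> [1. 2.], not yet stationary
print(constrained_gradient(np.array([1.0, 1.0, 1.0])))
# -> [0. 0.] at x = (1, 1, 1), the constrained minimum of this sample problem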
The sufficiency conditions are similar to those developed earlier. The
Hessian matrix will correspond to the independent vector Z, and the elements of the
Hessian matrix must be the constrained second derivatives. To show how this is
obtained, let

∇_c f = ∇_Z f - W C,  where W = ∇_Y f J⁻¹

It thus follows that the "i"th row of the (constrained) Hessian matrix is ∂(∇_c f)/∂zi.
Notice that W is a function of Y and Y is a function of Z. Thus, the partial derivative of
∇_c f with respect to zi is based on the following chain rule:

∂wj/∂zi = Σk (∂wj/∂yk)(∂yk/∂zi)
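One way to realize this chain rule in practice is to treat Y as an implicit function of Z, recover it from g(Y, Z) = 0, and second-difference the reduced objective; this is equivalent to differencing ∇_c f through Y. The sketch below continues the same hypothetical sample problem used above (it is not an example from the text).

import numpy as np
from scipy.optimize import fsolve

# Sample problem: minimize f = x1^2 + x2^2 + x3^2
# subject to g = x1 + x2 + x3 - 3 = 0, with Y = (x1) and Z = (x2, x3).
f = lambda x: x[0]**2 + x[1]**2 + x[2]**2
g = lambda x: x[0] + x[1] + x[2] - 3.0

def solve_y(z):
    """Recover the dependent variable from g(Y, Z) = 0 (Y is a function of Z)."""
    return fsolve(lambda y: g([y[0], z[0], z[1]]), x0=[1.0])[0]

def reduced_f(z):
    """f expressed in the independent variables only."""
    return f([solve_y(z), z[0], z[1]])

def constrained_hessian(z, h=1e-4):
    """Constrained second derivatives by central differences of reduced_f."""
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zpp = np.array(z, float); zpp[i] += h; zpp[j] += h
            zpm = np.array(z, float); zpm[i] += h; zpm[j] -= h
            zmp = np.array(z, float); zmp[i] -= h; zmp[j] += h
            zmm = np.array(z, float); zmm[i] -= h; zmm[j] -= h
            H[i, j] = (reduced_f(zpp) - reduced_f(zpm)
                       - reduced_f(zmp) + reduced_f(zmm)) / (4 * h * h)
    return H

# At the stationary point Z = (1, 1) the constrained Hessian is [[4, 2], [2, 4]],
# which is positive definite, so the point is a constrained minimum.
print(constrained_hessian([1.0, 1.0]))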
Example 1:

Consider the following problem:

Hence, the incremental value of the constrained f is given as ∂_c f = ∇_c f ∂Z.
Example 2: Application of the Jacobian Method to an LP Problem. Consider the linear program

Maximize z = 2x1 + 3x2

subject to

x1 + x2 + x3 = 5
x1 - x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
To account for the nonnegativity constraints xj ≥ 0, substitute xj = wj². With this
substitution, the nonnegativity conditions become implicit and the original problem
becomes

Maximize z = 2w1² + 3w2²

subject to

w1² + w2² + w3² = 5
w1² - w2² + w4² = 3
To apply the Jacobian method, let Y = (w1, w2) and Z = (w3, w4).
(In the terminology of linear programming, Y and Z correspond to the basic and
nonbasic variables, respectively.) Thus

∇_Y f = (4w1, 6w2),  ∇_Z f = (0, 0)

J = | 2w1    2w2 |        C = | 2w3    0   |
    | 2w1   -2w2 |            | 0      2w4 |

so that the constrained gradient is

∇_c f = ∇_Z f - ∇_Y f J⁻¹ C = (-5w3, w4)

Setting ∇_c f = 0 and examining the extreme points of the solution space for the
different choices of basic variables identifies the optimum x = (x1, x2, x3, x4) =
(0, 5, 0, 8) with z = 15; there, with Y = (w2, w4), the sensitivity coefficients
∇_Y f J⁻¹ give (u1, u2) = (3, 0).
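The algebra above can be reproduced symbolically. The following sketch, assuming the partition Y = (w1, w2), Z = (w3, w4) stated in the text, recovers J, C, and the constrained gradient with SymPy.

import sympy as sp

w1, w2, w3, w4 = sp.symbols('w1 w2 w3 w4')
f = 2*w1**2 + 3*w2**2
g = sp.Matrix([w1**2 + w2**2 + w3**2 - 5,
               w1**2 - w2**2 + w4**2 - 3])

Y, Z = sp.Matrix([w1, w2]), sp.Matrix([w3, w4])
J = g.jacobian(Y)                      # Jacobian matrix (w.r.t. dependent vars)
C = g.jacobian(Z)                      # control matrix (w.r.t. independent vars)
grad_Y = sp.Matrix([[sp.diff(f, w1), sp.diff(f, w2)]])
grad_Z = sp.Matrix([[sp.diff(f, w3), sp.diff(f, w4)]])

# grad_c f = grad_Z f - grad_Y f J^(-1) C
grad_c = sp.simplify(grad_Z - grad_Y * J.inv() * C)
print(grad_c)   # -> Matrix([[-5*w3, w4]])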
The corresponding dual objective value is 5u1 + 3u2 = 15, which equals the optimal
primal objective value. The given solution also satisfies the dual constraints and hence
is optimal and feasible. This shows that the sensitivity coefficients are the same as the
dual variables. In fact, both have the same interpretation.
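As a check on the claim that the sensitivity coefficients equal the dual variables, the sketch below computes the duals from the optimal basis with NumPy. That x2 and x4 are the basic variables at the optimum follows from the optimal extreme point x = (0, 5, 0, 8) of this LP.

import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])   # constraint columns: x1, x2, x3, x4
c = np.array([2.0, 3.0, 0.0, 0.0])      # objective coefficients
basic = [1, 3]                          # x2 and x4 are basic at the optimum

B = A[:, basic]
u = c[basic] @ np.linalg.inv(B)         # dual values u = c_B B^(-1)
print(u)                                # -> [3. 0.], the sensitivity coefficients
print(5*u[0] + 3*u[1])                  # -> 15.0, the optimal objective value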
Figure: Extreme points of the solution space of the linear program