ALTERNATE ITERATIVE NUMERICAL ALGORITHMS FOR MINIMIZATION OF UNCONSTRAINED NONLINEAR FUNCTIONS

In this paper, we propose a few alternate numerical algorithms for the minimization of unconstrained nonlinear functions using the modified Adomian decomposition method. A comparative study among the new algorithms and Newton's algorithm is then carried out by means of examples.

AMS Subject Classification: 90C27, 93B40, 65K10


Introduction
Optimization problems with or without constraints arise in various fields such as science, engineering, economics, and management sciences, where numerical information is processed. In recent times, many problems in business situations and engineering designs have been modeled as optimization problems for taking optimal decisions. In fact, numerical optimization techniques have made deep inroads into almost all branches of engineering and mathematics. Several methods [10,11,12] are available for solving unconstrained minimization problems. These methods can be classified into two categories: non-gradient and gradient methods. The non-gradient methods require only the objective function values, not the derivatives of the function, in finding the minimum. The gradient methods require, in addition to the function values, the first and in some cases the second derivatives of the objective function. Since more information about the function being minimized is used through the derivatives, gradient methods are generally more efficient than non-gradient methods. All unconstrained minimization methods are iterative in nature: they start from an initial trial solution and proceed towards the minimum point in a sequential manner.
Since the beginning of the 1980s, the Adomian decomposition method has been applied to a wide class of functional equations [3,4,5]. Adomian gives the solution as an infinite series, usually converging to an accurate solution. Abbaoui et al. [2] applied the standard Adomian decomposition to the simple iteration method for solving the equation f(x) = 0, where f(x) is a nonlinear function, and proved the convergence of the series solution. Babolian et al. [6] modified the standard Adomian method proposed in [2]. Abbasbandy [1] applied the modified Adomian decomposition method to construct numerical algorithms for solving nonlinear equations.
Vinay Kanwar et al. [14] introduced new algorithms, called the external touch technique and the orthogonal intersection technique, for solving nonlinear equations. In this paper, we introduce a few new algorithms for the minimization of nonlinear functions using the modified Adomian decomposition method. A comparative study is then carried out among the new algorithms and Newton's algorithm by means of examples.

New Algorithms
Consider the nonlinear optimization problem: Minimize {f(x), x ∈ R, f : R → R}, where f is a nonlinear, twice differentiable function.
Consider the function G(x) = x − g(x)/g′(x), where g(x) = f′(x) and f(x) is the function to be minimized. G′(x) is defined around the critical point x* if we assume that g′(x*) ≠ 0. Let α be a root of g(x) = 0, let g be a C^2 function on an interval containing α, and suppose that |g′(α)| > 0. By Taylor's expansion near x, for a small h we have

g(x + h) ≈ g(x) + h g′(x) = 0,  so that  h = −g(x)/g′(x).

The Newton-Raphson method is then given by

x_{n+1} = x_n − g(x_n)/g′(x_n),  n = 0, 1, 2, . . . ,   (1)

which is started from an initial guess x_0 of the root. This process has the local convergence property. A more constructive theorem was given by Kantorovich and coworkers [9,13].
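The Newton-Raphson iteration above can be sketched in a few lines of Python (a minimal illustration, not part of the paper; the stopping tolerance and iteration cap are assumptions):

```python
def newton_minimize(fp, fpp, x0, tol=1e-12, max_iter=100):
    """Minimize f by Newton-Raphson on g(x) = f'(x) = 0:
    x_{n+1} = x_n - g(x_n)/g'(x_n), with g' = f''."""
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)   # g(x)/g'(x)
        x -= step
        if abs(step) < tol:     # stop once the Newton step is negligible
            break
    return x

# f(x) = x^4 - x - 10 (the function of Example 3.1 below):
# f'(x) = 4x^3 - 1, f''(x) = 12x^2; the minimizer is (1/4)^(1/3) ~ 0.629961.
x_star = newton_minimize(lambda x: 4*x**3 - 1, lambda x: 12*x**2, x0=1.0)
```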
The Taylor expansion of g(x) to a higher order is

g(x + h) = g(x) + h g′(x) + (h^2/2) g″(x) + O(h^3),

and we are looking for h such that g(x + h) = 0, that is,

h = c + N(h),   (2)

where c = −g(x)/g′(x) is a constant (for fixed x) and N(h) = −(g″(x)/(2g′(x))) h^2 is a nonlinear function of h; for approximating h we can apply the Adomian decomposition method. The Adomian decomposition technique consists of representing the solution of (2) as a series

h = Σ_{i=0}^∞ h_i,   (3)

and the nonlinear function is decomposed as

N(h) = Σ_{n=0}^∞ A_n,   (4)

where the A_n's are Adomian polynomials of h_0, h_1, . . . , h_n given by

A_n = (1/n!) [ d^n/dλ^n N( Σ_{i=0}^∞ λ^i h_i ) ]_{λ=0},  n = 0, 1, 2, . . . .   (5)

Substituting (3) and (4) into (2) yields

Σ_{i=0}^∞ h_i = c + Σ_{n=0}^∞ A_n.   (6)

The convergence of the series in (6) will yield

h_0 = c,  h_{n+1} = A_n,  n = 0, 1, 2, . . . .

The polynomials A_n are generated for all kinds of nonlinearity by Wazwaz [15]. The first few polynomials are given by

A_0 = N(h_0),
A_1 = h_1 N′(h_0),
A_2 = h_2 N′(h_0) + (h_1^2/2!) N″(h_0),
A_3 = h_3 N′(h_0) + h_1 h_2 N″(h_0) + (h_1^3/3!) N‴(h_0).

Other polynomials can be generated in a similar manner. Let H_m = Σ_{i=0}^m h_i denote the (m + 1)-term approximation of h. Since the series converges very rapidly, the sum H_m can serve as a practical solution in each iteration. We will show that the number of terms required to obtain an accurate computable solution is very small.
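As a concrete check of the recursion h_0 = c, h_{n+1} = A_n for the quadratic nonlinearity N(h) = −(g″/(2g′))h^2, the first three terms can be computed directly (a sketch, not from the paper; h_0 + h_1 reproduces the Householder correction and h_0 + h_1 + h_2 the m = 2 correction):

```python
def adomian_terms(g, gp, gpp, x):
    """First three Adomian terms for h = c + N(h) with
    c = -g/g' and N(h) = k*h^2, where k = -g''/(2g').
    Since N'(h) = 2*k*h: h0 = c, h1 = A0 = N(h0), h2 = A1 = h1*N'(h0)."""
    k = -gpp(x) / (2.0 * gp(x))
    h0 = -g(x) / gp(x)
    h1 = k * h0**2              # = -g^2 g'' / (2 g'^3)
    h2 = h1 * (2.0 * k * h0)    # = -g^3 g''^2 / (2 g'^5)
    return h0, h1, h2

# g(x) = 4x^3 - 1 (i.e. f(x) = x^4 - x - 10), evaluated at x = 1:
h0, h1, h2 = adomian_terms(lambda x: 4*x**3 - 1,
                           lambda x: 12*x**2,
                           lambda x: 24*x, 1.0)
```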
For m = 1, x_{n+1} = x_n + h_0 + h_1 gives

Algorithm 2.1:  x_{n+1} = x_n − g(x_n)/g′(x_n) − g(x_n)^2 g″(x_n)/(2 g′(x_n)^3),   (7)

which is Householder's iteration [8] and the same as the Adomian decomposition method for m = 1, see [1].
New Algorithm -I. Since g(x) = f′(x), Algorithm 2.1 becomes

x_{n+1} = x_n − f′(x_n)/f″(x_n) − f′(x_n)^2 f‴(x_n)/(2 f″(x_n)^3).   (8)

For m = 2, x_{n+1} = x_n + h_0 + h_1 + h_2 gives

Algorithm 2.2:  x_{n+1} = x_n − g(x_n)/g′(x_n) − g(x_n)^2 g″(x_n)/(2 g′(x_n)^3) − g(x_n)^3 g″(x_n)^2/(2 g′(x_n)^5),   (9)

which is the same as the Adomian decomposition method for m = 2, see [1].
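A direct implementation of New Algorithm-I might look as follows (a sketch; the stopping rule on the step size and the iteration cap are assumptions):

```python
def new_algorithm_I(fp, fpp, fppp, x0, tol=1e-12, max_iter=50):
    """New Algorithm-I:
    x_{n+1} = x_n - f'/f'' - f'^2 f''' / (2 f''^3)."""
    x = x0
    for _ in range(max_iter):
        a, b, c = fp(x), fpp(x), fppp(x)
        step = a/b + a**2 * c / (2.0 * b**3)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 - x - 10: f' = 4x^3 - 1, f'' = 12x^2, f''' = 24x.
x_star = new_algorithm_I(lambda x: 4*x**3 - 1,
                         lambda x: 12*x**2,
                         lambda x: 24*x, x0=1.0)
```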
New Algorithm -II. Since g(x) = f′(x), Algorithm 2.2 becomes

x_{n+1} = x_n − f′(x_n)/f″(x_n) − f′(x_n)^2 f‴(x_n)/(2 f″(x_n)^3) − f′(x_n)^3 f‴(x_n)^2/(2 f″(x_n)^5).   (10)
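New Algorithm-II adds the m = 2 correction term; a minimal sketch (same assumed stopping rule as above):

```python
def new_algorithm_II(fp, fpp, fppp, x0, tol=1e-12, max_iter=50):
    """New Algorithm-II:
    x_{n+1} = x_n - f'/f'' - f'^2 f'''/(2 f''^3) - f'^3 f'''^2/(2 f''^5)."""
    x = x0
    for _ in range(max_iter):
        a, b, c = fp(x), fpp(x), fppp(x)
        step = a/b + a**2 * c / (2.0 * b**3) + a**3 * c**2 / (2.0 * b**5)
        x -= step
        if abs(step) < tol:
            break
    return x

# Same test function, f(x) = x^4 - x - 10:
x_star = new_algorithm_II(lambda x: 4*x**3 - 1,
                          lambda x: 12*x**2,
                          lambda x: 24*x, x0=1.0)
```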

High Order Iterations
Under some conditions of regularity of g and its derivatives, we obtain good techniques. For example, if we consider the third-order Taylor expansion

g(x + h) = g(x) + h g′(x) + (h^2/2) g″(x) + (h^3/6) g‴(x) + O(h^4),

then h = c + N(h) with c = −g(x)/g′(x) and N(h) = −(g″(x)/(2g′(x))) h^2 − (g‴(x)/(6g′(x))) h^3. Then, for m = 0, x_{n+1} = x_n − g(x_n)/g′(x_n), and for m = 1,

Algorithm 2.3:  x_{n+1} = x_n − g(x_n)/g′(x_n) − g(x_n)^2 g″(x_n)/(2 g′(x_n)^3) + g(x_n)^3 g‴(x_n)/(6 g′(x_n)^4),   (11)

which is the same as the Adomian decomposition method [1].
New Algorithm -III. Since g(x) = f′(x), Algorithm 2.3 becomes

x_{n+1} = x_n − f′(x_n)/f″(x_n) − f′(x_n)^2 f‴(x_n)/(2 f″(x_n)^3) + f′(x_n)^3 f^(4)(x_n)/(6 f″(x_n)^4).   (12)
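New Algorithm-III uses up to the fourth derivative of f; a minimal sketch (the stopping rule is again an assumption):

```python
def new_algorithm_III(fp, fpp, fppp, fpppp, x0, tol=1e-12, max_iter=50):
    """New Algorithm-III:
    x_{n+1} = x_n - f'/f'' - f'^2 f'''/(2 f''^3) + f'^3 f''''/(6 f''^4)."""
    x = x0
    for _ in range(max_iter):
        a, b, c, d = fp(x), fpp(x), fppp(x), fpppp(x)
        step = a/b + a**2 * c / (2.0 * b**3) - a**3 * d / (6.0 * b**4)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 - x - 10: f' = 4x^3 - 1, f'' = 12x^2, f''' = 24x, f'''' = 24.
x_star = new_algorithm_III(lambda x: 4*x**3 - 1,
                           lambda x: 12*x**2,
                           lambda x: 24*x,
                           lambda x: 24.0, x0=1.0)
```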

Convergence Analysis of New Algorithms -I, II, III
The convergence analysis of New Algorithm-I (8) is the same as that of Householder's iteration [8] and of the Adomian decomposition method [1] for m = 1; the convergence analysis of New Algorithm-II (10) is the same as that of the Adomian decomposition method [1] for m = 2; and the convergence analysis of New Algorithm-III (12) is the same as that of the Adomian decomposition method [1] for m = 1 in the high-order iterations.

Numerical Illustrations
Example 3.1. Consider the function f(x) = x^4 − x − 10, x ∈ R. The minimizing point of the function is 0.629961, which is obtained in 4 iterations by Newton's algorithm and in 3 iterations by New Algorithm-I, New Algorithm-II and New Algorithm-III for the initial value x_0 = 1; the variation in the number of iterations is also seen for the initial values x_0 = 2 and x_0 = 3.
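The iteration counts quoted above can be checked numerically (a sketch; convergence is measured here as |x_n − x*| < 10^-5 with x* = (1/4)^(1/3), an assumed stopping criterion since the paper does not state one):

```python
def count_iterations(update, x0, x_star, tol=1e-5, max_iter=100):
    """Number of update steps needed to bring x within tol of x_star."""
    x, n = x0, 0
    while abs(x - x_star) > tol and n < max_iter:
        x = update(x)
        n += 1
    return n

fp   = lambda x: 4*x**3 - 1     # f'(x)  for f(x) = x^4 - x - 10
fpp  = lambda x: 12*x**2        # f''(x)
fppp = lambda x: 24*x           # f'''(x)

newton = lambda x: x - fp(x)/fpp(x)
alg1   = lambda x: x - fp(x)/fpp(x) - fp(x)**2*fppp(x)/(2*fpp(x)**3)

x_star = 0.25**(1.0/3.0)        # root of f'(x) = 0, ~0.629961
n_newton = count_iterations(newton, 1.0, x_star)   # 4, as in the example
n_alg1   = count_iterations(alg1, 1.0, x_star)     # 3, as in the example
```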

[Table: number of iterations taken by Newton's algorithm and New Algorithms I-III for the initial values x_0 = 1, 2, 3.]
The minimizing point of the function is equal to −1, which is obtained in 7 iterations by Newton's algorithm, in 5 iterations by New Algorithm-I, in 3 iterations by New Algorithm-II, and in 5 iterations by New Algorithm-III for the initial value x_0 = 1; the variation in the number of iterations is also seen for the initial values x_0 = 2 and x_0 = 3.

Conclusion
In this paper, we have introduced a few numerical algorithms, namely New Algorithm-I, New Algorithm-II and New Algorithm-III, for the minimization of nonlinear functions. From the above illustrations it is clear that the rate of convergence of these new algorithms is faster than that of Newton's algorithm. In real-life problems, the variables cannot be chosen arbitrarily; rather, they have to satisfy certain specified conditions called constraints. Such problems are known as constrained optimization problems. In the near future, we plan to extend the proposed new algorithms to constrained optimization problems.