New global convergence results for a nonmonotone line search algorithm

In this paper, we study a class of nonmonotone line searches, investigate the convergence properties of the resulting algorithm for general nonconvex functions, and prove its global convergence. Moreover, a conjugate gradient algorithm with a nonmonotone line search based on the Wolfe rule is considered, and its global convergence is established.


Introduction
Our problem is to minimize a function of n variables,

min { f(x) : x ∈ R^n },  (1.1)

where f : R^n → R is smooth and its gradient g(x) is available. Conjugate gradient methods for solving (1.1) are iterative methods of the form

x_{k+1} = x_k + α_k d_k,  (1.2)

where the search direction is defined recursively:

d_1 = −g_1,   d_k = −g_k + β_k d_{k−1}  for k ≥ 2,

where g_k = ∇f(x_k), α_k is a step size obtained by some line search, and β_k is a scalar. There are many ways to select β_k; some well-known choices are the Fletcher–Reeves, Polak–Ribière–Polyak, Hestenes–Stiefel, and Dai–Yuan formulas,

β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²,   β_k^{PRP} = g_k^T y_{k−1} / ‖g_{k−1}‖²,   β_k^{HS} = g_k^T y_{k−1} / (d_{k−1}^T y_{k−1}),   β_k^{DY} = ‖g_k‖² / (d_{k−1}^T y_{k−1}),

respectively, where y_{k−1} = g_k − g_{k−1} and ‖·‖ denotes the Euclidean norm.

The technique of nonmonotone line search was first proposed in [1] and has received many successful applications and extensions in both unconstrained and constrained optimization. A large portion of optimization methods require monotonicity of the objective values to guarantee their global convergence. This is usually achieved by a suitable line search technique, even when the initial point is far away from the optimum. Among the most popular line search techniques are the Armijo rule, the Goldstein rule, and the Wolfe rule (see [4,5,13]). However, enforcing monotonicity may considerably reduce the rate of convergence when the iteration is trapped near a narrow curved valley, which can result in very short steps or zigzagging. Therefore, it may be advantageous to allow the iterative sequence to occasionally generate points with nonmonotone objective values while retaining global convergence of the minimization algorithm. Several numerical tests show that the nonmonotone line search technique is efficient and competitive for both unconstrained and constrained optimization [7,12,14]. Note that the famous watchdog technique for constrained optimization proposed in [2] can also be viewed as a strategy of the nonmonotone type. The forcing function introduced in [11] is an important class of functions which can be used to measure sufficiency of descent and prove convergence; a detailed steplength analysis with forcing functions is given in [11]. Han and Liu [9] used the idea of the forcing function and proposed a general line search rule. We combine forcing functions with the nonmonotone line search technique and give a general line search rule, called the nonmonotone F-rule, for unconstrained minimization problems. We show that some common nonmonotone line search rules, such as the nonmonotone Armijo rule, the nonmonotone Goldstein rule, and the nonmonotone Wolfe rule, are special cases of the nonmonotone F-rule. Finally, we prove the global convergence of the resulting nonmonotone descent methods under mild conditions. The remainder of this paper is organized as follows. In Section 2 we describe our nonmonotone F-rule and show that the aforementioned common nonmonotone line search rules are particular cases of it. In Section 3 we establish the global convergence of nonmonotone descent methods for unconstrained optimization. Some conclusions are given in Section 4.
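As a concrete illustration of iteration (1.2) with the four β_k formulas above, the following sketch implements a safeguarded nonlinear conjugate gradient method. It is not the algorithm of this paper: the line search here is a plain monotone Armijo backtracking, and the descent restart safeguard is an added assumption for robustness.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, beta_rule="PRP", tol=1e-8, max_iter=500):
    """Minimal nonlinear conjugate gradient sketch (illustrative only)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # plain monotone Armijo backtracking (not the nonmonotone rule)
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                      # y_{k-1} = g_k - g_{k-1}
        if beta_rule == "FR":              # Fletcher-Reeves
            beta = g_new.dot(g_new) / g.dot(g)
        elif beta_rule == "PRP":           # Polak-Ribiere-Polyak
            beta = g_new.dot(y) / g.dot(g)
        elif beta_rule == "HS":            # Hestenes-Stiefel
            beta = g_new.dot(y) / d.dot(y)
        else:                              # Dai-Yuan
            beta = g_new.dot(g_new) / d.dot(y)
        d = -g_new + max(beta, 0.0) * d
        if g_new.dot(d) >= 0:              # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = conjugate_gradient(f, grad, np.zeros(2))
```

For the quadratic above the minimizer solves A x = b, so all four β_k choices should drive the gradient to zero.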
This article studies the global convergence of a class of conjugate gradient algorithms with a nonmonotone Wolfe-rule line search; the idea comes from the F-rule search techniques. We study conditions for the global convergence of four conjugate gradient methods with nonmonotone line searches. Under strengthened conditions, we prove the global convergence of the modified method for nonconvex functions.
In the convergence analysis and implementation of the conjugate gradient method we use an extended Wolfe-rule nonmonotone line search. Next, we present the nonmonotone F-rule. We begin with two definitions: the forcing function and the reverse modulus of continuity of the gradient.

Technique of the Nonmonotone Line Search
We first state the general assumption of this section.

Assumption 2.1. f is strongly convex and differentiable on the level set L_0, and its gradient g(x) is Lipschitz continuous on L_0.

Then the mapping δ : [0, ∞) → [0, ∞) defined by

δ(t) = inf { ‖x − y‖ : x, y ∈ L_0, ‖g(x) − g(y)‖ ≥ t }

is the reverse modulus of continuity of the gradient g(x). Now we give the nonmonotone F-rule for line searches as follows.
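For concreteness, two standard instances of a forcing function (assuming the usual definition from [11]: σ : [0, ∞) → [0, ∞) with σ(0) = 0, and σ(t_k) → 0 implies t_k → 0) are:

```latex
\sigma(t) = c\,t \qquad \text{and} \qquad \sigma(t) = c\,t^{2}, \qquad c > 0.
```

The F-function σ(t) = (m_2/m_1) t used later in Corollary 3.6 is of the first form.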
The step size α_k is accepted if

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} f(x_{k−j}) − σ(t_k),  (2.3)

where σ is a forcing function and t_k measures the descent along d_k. When m(k) ≡ 0, the above nonmonotone F-rule reduces to the rule of sufficient decrease in [11].
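A minimal backtracking procedure satisfying one concrete instance of rule (2.3) can be sketched as follows; the choices t_k = α_k‖d_k‖ and σ(t) = c t² are illustrative assumptions, not the paper's exact quantities.

```python
import numpy as np

def nonmonotone_step(f, fk_history, x, g, d, M=5, alpha0=1.0,
                     sigma=lambda t: 1e-4 * t * t, shrink=0.5, max_backtracks=50):
    """Backtracking step for one concrete instance of the nonmonotone F-rule:
    accept alpha when
        f(x + alpha d) <= max over the last M+1 stored f-values  -  sigma(t),
    with t = alpha * ||d|| as the descent measure (an assumption; the paper's
    exact argument of sigma is not reproduced here)."""
    f_ref = max(fk_history[-(M + 1):])   # max_{0<=j<=m(k)} f(x_{k-j})
    alpha = alpha0
    for _ in range(max_backtracks):
        t = alpha * np.linalg.norm(d)
        if f(x + alpha * d) <= f_ref - sigma(t):
            return alpha
        alpha *= shrink
    return alpha
```

Because the reference value is a running maximum rather than f(x_k), an occasional increase of the objective is tolerated, which is the point of the nonmonotone strategy.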
With the above line search, at each iteration the initial trial step r_k need no longer remain fixed but can be adjusted automatically. Adapting the initial trial step can yield better results and a larger accepted step size α_k, thereby reducing the number of iterations.

Remarks 2.5.
1. In order to use the nonmonotone line search NLS to compute the step length factor α_k, the search direction d_k must be a descent direction. In the next section we prove that the conjugate gradient algorithm studied here keeps the search direction d_k a descent direction.
2. Adapting the initial trial step makes it possible to accept a larger step size α_k.
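One common way to adjust the initial trial step automatically is a Barzilai–Borwein-type quotient; the paper's exact update formula for r_k is not reproduced here, so the following is only an illustrative sketch.

```python
import numpy as np

def initial_trial_step(x_prev, x, g_prev, g, fallback=1.0):
    """Adaptive initial trial step r_k (illustrative, not the paper's formula).
    Uses the Barzilai-Borwein-type quotient s^T s / s^T y, which tends to give
    larger accepted steps than a fixed r_k and hence fewer backtracks."""
    s = x - x_prev
    y = g - g_prev
    sy = s.dot(y)
    if sy <= 0:          # safeguard in nonconvex regions
        return fallback
    return s.dot(s) / sy
```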

Convergence Analysis
In this section we establish the global convergence properties of optimization methods with the nonmonotone F-rule. To establish our result, we need some additional mild conditions.

Lemma 3.1. Let the search direction d_k be a descent direction and the step size factor α_k be obtained by the nonmonotone line search NLS. Then the iterates {x_k} generated by (1.2) satisfy {x_k} ⊂ L_0.
Proof. By the nonmonotone line search NLS (2.3), the step factor α_k of the nonmonotone line search NLS1, obtained through (1.2) and the modified definition of the method, satisfies the required inequality. Applying this inequality to the second term in (3.2), we conclude that (3.1) holds for all k ∈ N.
For simplicity, we introduce the notation l(k); namely, l(k) is a non-negative integer satisfying the two conditions

k − m(k) ≤ l(k) ≤ k   and   f(x_{l(k)}) = max_{0≤j≤m(k)} f(x_{k−j}).

With this notation, the nonmonotone line search NLS1 (1.4) can be rewritten as a descent condition relative to f(x_{l(k)}).

Lemma 3.3. Under Assumption (A_1), the sequence {f(x_{l(k)})} decreases monotonically.

Proof.
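The bookkeeping behind l(k) can be sketched as follows (a minimal helper, assuming m(k) = min(k, M) as in the Grippo–Lampariello–Lucidi scheme):

```python
def l_of_k(f_values, k, M):
    """Return l(k): an index attaining max_{0<=j<=m(k)} f(x_{k-j}),
    where m(k) = min(k, M) and f_values[i] stores f(x_i)."""
    m_k = min(k, M)
    window = range(k - m_k, k + 1)   # indices k-m(k), ..., k
    return max(window, key=lambda i: f_values[i])
```

For example, with stored values [5.0, 3.0, 4.0, 2.0], k = 3, and M = 2, the window is {1, 2, 3} and l(k) = 2, since f(x_2) = 4.0 is the largest value in the window.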
Letting k → ∞ on both sides, we obtain lim_{k→∞} σ(t_{l(k)−1}) = 0.

Theorem 3.5. Let f : R^n → R satisfy Assumption 2.1. Let the sequence {x_k} be defined by (1.2), where the steplength α_k is defined by the nonmonotone F-rule (2.3). If the direction d_k satisfies the descent condition and ‖d_k‖ ≤ m_1 ‖g_k‖, where σ(·) is a forcing function and m_1 > 0, then the sequence {x_k} ⊂ L_0 and lim_{k→∞} ‖g_k‖ = 0.

Proof. We prove by induction that, for any given j ≥ 1, (3.13) and (3.14) hold. If j = 1, then since {l_1(k)} ⊂ {l(k)}, (3.13) and (3.14) follow from (3.11). Assume that (3.13) and (3.14) hold for a given j; we consider the case of j + 1. Using the same argument as in deriving (3.11), we deduce that (3.13) holds. For any k, rearranging the terms involving x_{L_1(k)} and noting (3.17), by the uniform continuity of f(x) and taking limits as k → ∞, we obtain the desired relation. Finally, noting that α_k ≤ λ_1, from this, (1.2), and (3.24) we obtain the required bound, and from the Lipschitz continuity (2.1) and (3.27) the conclusion follows.

Conclusions
The programs were implemented in Matlab 6.5 on a PC. Test functions are from [10]; the number in brackets after each function name is the number of variables. NLS is the line search method proposed in this paper, and GLL is the nonmonotone line search of Grippo, Lampariello, and Lucidi. Here n_i denotes the number of iterations and n_f the number of function evaluations; the number of gradient evaluations is n_i + 1. We performed calculations for different values of M; M = 0 corresponds to the monotone line search. To compare the algorithms fairly, the parameters were uniformly taken as σ = 1, β = 0.2, γ_2 = 0.9, ε = 10^{−6}. Our conjugate gradient method was tested in two variants: one in which the initial trial step is corrected according to the formula above, and one in which the initial trial step is fixed. From the comparison, the proposed line search termination criterion has the following advantages:
1. For both the monotone (M = 0) and nonmonotone (M > 0) line search, the NLS method reduces the number of iterations and function evaluations.
2. Correcting the initial trial step according to the formula usually performs better than keeping it fixed.
3. The nonmonotone strategy is effective for most functions, especially high-dimensional ones or when the initial trial step is fixed.

Corollary 3.6. Let f : R^n → R satisfy Assumption 2.1. Let the sequence {x_k} be defined by (1.2), where the steplength α_k is defined by the nonmonotone F-rule (2.3) with F-function σ(t) = (m_2/m_1) t, and the direction d_k satisfies

g_k^T d_k ≤ −m_2 ‖g_k‖²   (3.23)

and

‖d_k‖ ≤ m_1 ‖g_k‖   (3.24)

where m_1, m_2 > 0. Then the sequence {x_k} ⊂ L_0 and lim_{k→∞} ‖g_k‖ = 0.

Proof. This follows directly from [3] and Theorem 3.5.

Corollary 3.7. Under Assumption 2.1, consider any iterative method (1.2), where d_k satisfies (3.23)–(3.24) and α_k is obtained by the nonmonotone line search (1.4)–(1.5). Then there exists a constant m_3 such that ‖g_{k+1}‖ ≤ m_3 ‖g_k‖ for all k.
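Conditions (3.23)–(3.24) are straightforward to verify numerically; the following helper (illustrative, not from the paper) checks both for a given pair (g_k, d_k).

```python
import numpy as np

def satisfies_conditions(g, d, m1, m2):
    """Check the two direction conditions of Corollary 3.6:
        g^T d <= -m2 ||g||^2    (3.23)  sufficient descent
        ||d|| <= m1 ||g||       (3.24)  bounded direction
    The steepest-descent direction d = -g satisfies both with m1 = m2 = 1."""
    gg = g.dot(g)
    return bool(g.dot(d) <= -m2 * gg and
                np.linalg.norm(d) <= m1 * np.linalg.norm(g))
```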