TRUST REGION WITH NONLINEAR CONJUGATE GRADIENT METHOD

Descent direction methods and trust region methods are usually used to solve the unconstrained optimization problem (P): min_{x∈R^n} f(x). In this work, we are interested in convergence results for trust region methods which employ the Dai-Yuan version of the nonlinear conjugate gradient method as a subprogram at each iteration. Further, we penalize the constrained quadratic subproblems and convert them into a series of unconstrained problems.

AMS Subject Classification: 65K05, 90C30


Introduction
The aim of this paper is to solve the following unconstrained optimization problem

(P): min_{x∈R^n} f(x),

where the function f is assumed to be nonlinear and differentiable.
At the current point x_k ∈ R^n, a model of the variation of f, for an increment d of x_k, is supposed to be given. In differentiable optimization, it is reasonable to consider a quadratic model of the form

Ψ_k(d) = ⟨g_k, d⟩ + (1/2)⟨H_k d, d⟩,

where g_k = ∇f(x_k) is the gradient of f at x_k and H_k = ∇²f(x_k) is the Hessian of f at x_k. Both g_k and H_k are assumed to be computed using the Euclidean scalar product ⟨u, v⟩ = Σ_i u_i v_i.
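For concreteness, this model is cheap to evaluate once g_k and H_k are available. The sketch below is ours (NumPy assumed; the example function and data are illustrative, not the paper's):

```python
import numpy as np

def quadratic_model(g, H, d):
    """Psi_k(d) = <g_k, d> + 0.5 <H_k d, d>: predicted variation of f."""
    return g @ d + 0.5 * d @ (H @ d)

# Illustrative data: f(x) = x1^2 + 2*x2^2 at the point x_k = (1, 1)
g = np.array([2.0, 4.0])                    # g_k = grad f(x_k)
H = np.array([[2.0, 0.0], [0.0, 4.0]])      # H_k = Hessian of f at x_k
d = np.array([-0.1, -0.2])                  # candidate increment
print(quadratic_model(g, H, d))             # -0.91: predicted decrease of f
```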
In trust region methods [1,2,3,7], we consider that Ψ_k is a model of the variation of f which is acceptable in a neighborhood of the form

B(0, △_k) = { d ∈ R^n : ‖d‖ ≤ △_k },

where △_k > 0 and ‖·‖ is the Euclidean norm. The domain B(0, △_k) is called the trust region of the model Ψ_k and the positive number △_k is called the trust radius.
To find the increment d_k to be given to x_k, we minimize the quadratic model Ψ_k over the trust region [7]. Therefore, we have to solve the quadratic subproblem

(RC_k): min { Ψ_k(d) : d ∈ B(0, △_k) }.

A possible approach for solving constrained problems is the penalty method [7]. Namely, we introduce the constraints into the objective in penalized form, start from a non-feasible solution, and force the unconstrained optima toward the feasible set.
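Before formalizing this, here is a minimal sketch of the penalized idea, assuming one common choice of penalty, p(d) = max(0, ‖d‖ − △_k)², which is continuous, vanishes inside the trust region and is positive outside it (the function names are ours, not the paper's):

```python
import numpy as np

def penalty(d, delta):
    """p(d): zero for ||d|| <= delta, strictly positive otherwise."""
    return max(0.0, np.linalg.norm(d) - delta) ** 2

def penalized_model(g, H, d, delta, mu):
    """Phi(d, mu) = Psi_k(d) + mu * p(d): an unconstrained surrogate
    of the trust region subproblem (RC_k)."""
    return g @ d + 0.5 * d @ (H @ d) + mu * penalty(d, delta)
```

As µ increases, minimizers of Φ(·, µ) are driven toward the ball ‖d‖ ≤ △_k; this is the mechanism exploited in the convergence results below.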
By solving a sequence of problems of the form

(p_2): min_{d∈R^n} Φ(d, µ_k), where Φ(d, µ_k) = Ψ_k(d) + µ_k p(d), (1.4)

the constrained subproblem is replaced by unconstrained ones. The sequence of values {µ_k} has the following properties:
i) 0 < µ_k < µ_{k+1} for every k, and µ_k → ∞;
and p(d) is a continuous penalty function such that p(d) ≥ 0 for all d ∈ R^n, with p(d) = 0 if and only if d is feasible, that is, ‖d‖ ≤ △_k.

Algorithm 1.
Initialization step. Set constants η_1, η_2, γ_1, γ_2 and γ_3 such that 0 < η_1 ≤ η_2 < 1 and 0 < γ_1 ≤ γ_2 < 1 ≤ γ_3. Set the trust radius △_0 and an initial iterate x_0. Compute f(x_0). Put k = 0 and go to Step 1.
Step 1. Construct the model function Φ(d, µ_k) according to the relation (1.4) and determine an approximate solution d_k of the problem (p_2): min_{d∈R^n} Φ(d, µ_k).
Step 2. Compute f(x_k + d_k) and evaluate the performance criterion

ρ_k = ( f(x_k) − f(x_k + d_k) ) / ( Ψ_k(0) − Ψ_k(d_k) ).

Step 3. Update the trust radius:
If ρ_k ≥ η_2, the step is a success and d_k is accepted: set x_{k+1} = x_k + d_k and △_{k+1} ∈ [△_k, γ_3 △_k].
If η_1 ≤ ρ_k < η_2, the step is a success and d_k is accepted: set x_{k+1} = x_k + d_k and △_{k+1} ∈ [γ_2 △_k, △_k].
If ρ_k < η_1, the step is a failure and d_k is rejected: set x_{k+1} = x_k and △_{k+1} ∈ [γ_1 △_k, γ_2 △_k].
Step 4. If ‖∇f(x_{k+1})‖ ≤ ε, then the algorithm is stopped. Otherwise, set k = k + 1 and go back to Step 1.
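The complete loop can be sketched as follows. This is a simplified reading of Algorithm 1: the inner subproblem is solved here by the Cauchy step rather than by the penalized Dai-Yuan conjugate gradient routine the paper intends, and all constants are illustrative:

```python
import numpy as np

def cauchy_step(g, H, delta):
    """Steepest-descent step on Psi_k, clipped to the trust region."""
    gHg = g @ (H @ g)
    alpha = delta / np.linalg.norm(g)
    if gHg > 0:
        alpha = min(alpha, (g @ g) / gHg)
    return -alpha * g

def trust_region(f, grad, hess, x0, delta=1.0, eps=1e-8, max_iter=500,
                 eta1=0.25, eta2=0.75, gamma1=0.25, gamma3=2.0):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:            # Step 4: stopping test
            break
        d = cauchy_step(g, H, delta)            # Step 1: approximate d_k
        pred = -(g @ d + 0.5 * d @ (H @ d))     # predicted reduction
        rho = (f(x) - f(x + d)) / pred          # Step 2: performance ratio
        if rho >= eta2:                         # Step 3: very successful
            x, delta = x + d, gamma3 * delta
        elif rho >= eta1:                       # successful, keep the radius
            x = x + d
        else:                                   # failure: d_k rejected
            delta = gamma1 * delta
    return x

# Example: minimize f(x) = x1^2 + 2*x2^2 from the point (3, -2)
f = lambda x: x[0]**2 + 2*x[1]**2
grad = lambda x: np.array([2*x[0], 4*x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(trust_region(f, grad, hess, [3.0, -2.0]))  # converges close to (0, 0)
```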

Convergence Results
In a manner similar to line search methods, the determination of optimal steps d_k is not a necessary condition for global convergence. Under certain conditions, a good approximation of these steps may be acceptable. It suffices to determine an approximate solution d_k, in the interior of the trust region, which produces a sufficient reduction of the model function. This reduction can be achieved by the method of the Cauchy point d_k^c.

Definition 1. [3] We call Cauchy point of the quadratic subproblem (RC_k) the point, denoted d_k^c, that solves the problem

min { Ψ_k(−α g_k) : α ≥ 0, ‖α g_k‖ ≤ △_k }.

Therefore, such a point is the minimizer of Ψ_k in the trust region along the direction of steepest descent of Ψ_k.
Proposition 2. [3] The Cauchy point d_k^c is unique and it is given by

d_k^c = −α_k g_k, where α_k = △_k/‖g_k‖ if ⟨H_k g_k, g_k⟩ ≤ 0, and α_k = min( ‖g_k‖²/⟨H_k g_k, g_k⟩, △_k/‖g_k‖ ) otherwise. (3.1)

Proof. Indeed, the case where g_k = 0 is obvious. Suppose now that ⟨H_k g_k, g_k⟩ ≤ 0; then Ψ_k(−α g_k) decreases as α grows, and therefore α must be taken as large as possible while keeping ‖α g_k‖ ≤ △_k. The result follows from the fact that, in the remaining case, the minimum of the function α ↦ Ψ_k(−α g_k) is attained at α = ‖g_k‖²/⟨H_k g_k, g_k⟩, truncated at the boundary of the trust region.

As we have seen, the Cauchy point d_k^c allows one to decide whether an approximate solution of the trust region problem is to be validated or not. Namely, the point d_k must produce a decrease of the model function comparable to that of the Cauchy point:

Ψ_k(0) − Ψ_k(d_k) ≥ c_1 ( Ψ_k(0) − Ψ_k(d_k^c) ), with c_1 ∈ (0, 1]. (3.2)
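Formula (3.1) translates directly into code. A sketch in the paper's notation (NumPy assumed; it mirrors the helper used in the earlier algorithm sketch):

```python
import numpy as np

def cauchy_point(g, H, delta):
    """Cauchy point d_k^c of the subproblem (RC_k), following (3.1)."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                  # obvious case: x_k is stationary
        return np.zeros_like(g)
    gHg = g @ (H @ g)                 # curvature of Psi_k along -g_k
    if gHg <= 0:                      # model decreases along -g_k forever:
        alpha = delta / gnorm         # go to the trust region boundary
    else:                             # interior minimizer, possibly clipped
        alpha = min(gnorm**2 / gHg, delta / gnorm)
    return -alpha * g
```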
The following proposition is known as the Powell condition.

Proposition 3. The Cauchy point satisfies the decrease estimate

Ψ_k(0) − Ψ_k(d_k^c) ≥ c_2 ‖g_k‖ min( △_k, ‖g_k‖/‖H_k‖ ), (3.3)

with c_2 = 1/2.

Proof. We consider the two cases of (3.1) for the Hessian matrix H_k; in each case, substituting the corresponding value of α_k into Ψ_k(−α_k g_k) and bounding ⟨H_k g_k, g_k⟩ by ‖H_k‖ ‖g_k‖² gives (3.3).

Theorem 4. [1] Let d_k be an arbitrary vector satisfying the sufficient decrease condition (3.2). Then d_k satisfies (3.3) with c_2 = c_1/2. In particular, if d_k is the exact solution of (RC_k), then it satisfies (3.3) with c_2 = 1/2.
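Condition (3.3) gives a cheap numerical acceptance test for a candidate step. The check below follows the reconstruction of (3.3) above, with the spectral norm standing in for ‖H_k‖ and c_2 left as a parameter; it is an illustration, not the paper's code:

```python
import numpy as np

def satisfies_powell(g, H, d, delta, c2=0.5):
    """Test the Powell condition (3.3):
    Psi_k(0) - Psi_k(d) >= c2 * ||g|| * min(delta, ||g|| / ||H||)."""
    decrease = -(g @ d + 0.5 * d @ (H @ d))   # Psi_k(0) = 0 for this model
    gnorm = np.linalg.norm(g)
    Hnorm = np.linalg.norm(H, 2)              # spectral norm of H_k
    bound = gnorm * delta if Hnorm == 0 else gnorm * min(delta, gnorm / Hnorm)
    return decrease >= c2 * bound
```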

Convergence of penalty methods
The following two lemmas are used to prove the convergence of the method. By d_k we denote the solution of (p_2), and by (p_1) we denote the constrained subproblem (RC_k); in this section Ψ denotes the model at the current iterate.
Lemma 5. [7] For any value of k, we have
i) Φ(d_{k+1}, µ_{k+1}) ≥ Φ(d_k, µ_k);
ii) p(d_{k+1}) ≤ p(d_k);
iii) Ψ(d_{k+1}) ≥ Ψ(d_k).

Proof.
i) Since the sequence {µ_k} is increasing and p(d_{k+1}) ≥ 0, we have

Φ(d_{k+1}, µ_{k+1}) = Ψ(d_{k+1}) + µ_{k+1} p(d_{k+1}) ≥ Ψ(d_{k+1}) + µ_k p(d_{k+1}) ≥ Ψ(d_k) + µ_k p(d_k) = Φ(d_k, µ_k),

where the last inequality holds because d_k minimizes Φ(·, µ_k).
ii) Since d_k minimizes Φ(·, µ_k) and d_{k+1} minimizes Φ(·, µ_{k+1}), we have

Ψ(d_k) + µ_k p(d_k) ≤ Ψ(d_{k+1}) + µ_k p(d_{k+1}) and Ψ(d_{k+1}) + µ_{k+1} p(d_{k+1}) ≤ Ψ(d_k) + µ_{k+1} p(d_k),

and adding the two inequalities we obtain

(µ_{k+1} − µ_k)( p(d_{k+1}) − p(d_k) ) ≤ 0.

Since µ_{k+1} > µ_k, we have p(d_{k+1}) ≤ p(d_k). Assertion iii) then follows from the first of the two inequalities above.
Lemma 6. [7] Let d* be an optimal solution of (p_1). Then, for each k we have

Ψ(d*) ≥ Ψ(d_k) + µ_k p(d_k) ≥ Ψ(d_k).

Proof. Since d* is feasible, p(d*) = 0, and since d_k minimizes Φ(·, µ_k),

Ψ(d*) = Ψ(d*) + µ_k p(d*) ≥ Ψ(d_k) + µ_k p(d_k) ≥ Ψ(d_k),

the last inequality following from µ_k p(d_k) ≥ 0.
Theorem 7. Suppose that d_k is an exact global minimum of (p_2) and assume that {µ_k} is a divergent sequence. Then, every limit point d* of the sequence {d_k} is a solution of (p_1).
Proof. Let d̄ be a solution of (p_1). By Lemma 6, for every k,

µ_k p(d_k) ≤ Ψ(d̄) − Ψ(d_k). (3.5)

Assume that d* is a limit point of {d_k}. Then, there exists an infinite subsequence K such that lim_{k∈K} d_k = d*. Since the right-hand side of (3.5) is bounded while µ_k → ∞, taking the limit in inequality (3.5) we find lim_{k∈K} p(d_k) = 0. So, by continuity of p, we have p(d*) = 0 and consequently ‖d*‖ ≤ △.
The optimality of d* follows directly from Lemma 6. Indeed, the relation Ψ(d_k) ≤ Ψ(d̄) holds for every k, and passing to the limit along K gives Ψ(d*) ≤ Ψ(d̄); since d* is feasible, it is therefore a solution of (p_1).
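The behaviour established by Lemmas 5-6 and Theorem 7 can be observed numerically: as µ_k grows, Φ(d_k, µ_k) and Ψ(d_k) increase, p(d_k) decreases to 0, and d_k approaches the constrained solution. A toy sketch with a one-dimensional model (all data illustrative; SciPy's generic minimizer stands in for the exact solver):

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: Psi(d) = -2d + d^2 has unconstrained minimum at d = 1,
# but the trust region is |d| <= 0.5, so the constrained solution is d* = 0.5.
delta = 0.5
psi = lambda d: -2.0 * d[0] + d[0] ** 2
p = lambda d: max(0.0, abs(d[0]) - delta) ** 2     # zero on [-0.5, 0.5]

d = np.array([0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:              # increasing penalty parameters
    d = minimize(lambda v: psi(v) + mu * p(v), d).x
    # Phi(d_k, mu_k) increases toward Psi(d*) = -0.75, p(d_k) -> 0, d_k -> 0.5
    print(mu, d[0], psi(d) + mu * p(d), p(d))
```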

Global Convergence
Theorem 8. Let f : R^n → R be a continuously differentiable function, bounded below on R^n, and let {x_k} be a sequence generated by Algorithm 1, with {H_k} uniformly bounded. If d_k satisfies the condition (3.2), then we have

lim inf_{k→∞} ‖∇f_k‖ = 0. (3.6)

Proof. Assume, by contradiction, that there exist an index k_1 and a constant γ > 0 such that

‖∇f_k‖ ≥ γ for all k ≥ k_1. (3.7)

We claim that

△_k → 0 and d_k → 0. (3.8)

According to the condition of Powell (3.3) and the fact that {H_k} is bounded [3], we have Ψ_k(0) − Ψ_k(d_k) ≥ C △_k, where C is a positive constant which is independent of k. Using Step 3 of Algorithm 1 (ρ_k ≥ η_2), we have f(x_k) − f(x_{k+1}) ≥ η_2 ( Ψ_k(0) − Ψ_k(d_k) ). The last inequality can be written as f(x_k) − f(x_{k+1}) ≥ η_2 C △_k. Since f is bounded below, the series Σ_k ( f(x_k) − f(x_{k+1}) ) is convergent, and we deduce from the above inequality that Σ_{k≥1} △_k < ∞; since ‖d_k‖ ≤ △_k, the assertions of (3.8) are deduced.
We now show that the ratio ρ_k → 1. Since the model Ψ_k and the function f share the same first-order behaviour at x_k, and since {H_k} is bounded, the actual and predicted reductions differ by a term of order ‖d_k‖²:

|f(x_k + d_k) − f(x_k) − Ψ_k(d_k)| = O(‖d_k‖²). (3.9)

Finally, we can successively write

|ρ_k − 1| = |f(x_k + d_k) − f(x_k) − Ψ_k(d_k)| / ( Ψ_k(0) − Ψ_k(d_k) ) ≤ O(‖d_k‖²) / (C △_k). (3.10)

From (3.8), △_k → 0 and ‖d_k‖ ≤ △_k, and so ‖d_k‖²/△_k → 0. Then, combining (3.9) and (3.10), we get |ρ_k − 1| → 0. This shows that ρ_k → 1. Accordingly, ρ_k > η_1 for k large enough. Due to the updating rule of △_k (see Step 3 of Algorithm 1), this implies that △_k ≥ △ > 0 for some △ > 0 and all large k. But this contradicts the first assertion of (3.8). This contradiction completes the proof.
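The crucial step of this proof, ρ_k → 1 as △_k → 0, is easy to observe numerically: the mismatch between actual and predicted reduction is of order ‖d_k‖², while the predicted reduction is of order △_k. A small illustration on a smooth function of our own choosing (Cauchy steps, NumPy assumed):

```python
import numpy as np

f = lambda x: np.exp(x[0]) + x[1] ** 4                 # smooth test function
grad = lambda x: np.array([np.exp(x[0]), 4 * x[1] ** 3])
hess = lambda x: np.array([[np.exp(x[0]), 0.0],
                           [0.0, 12 * x[1] ** 2]])

x = np.array([0.5, 1.0])
g, H = grad(x), hess(x)
for delta in [0.4, 0.1, 0.01, 0.001]:
    gHg = g @ (H @ g)
    alpha = min(delta / np.linalg.norm(g), (g @ g) / gHg)
    d = -alpha * g                                     # Cauchy step
    pred = -(g @ d + 0.5 * d @ (H @ d))                # predicted reduction
    rho = (f(x) - f(x + d)) / pred                     # performance ratio
    print(delta, rho)                                  # rho -> 1 as delta -> 0
```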
Theorem 9. Suppose, in addition to the assumptions and conditions of Theorem 8, that ∇f is Lipschitz continuous and bounded on the set containing the iterates. Then lim_{k→∞} ‖∇f_k‖ = 0.

Proof. Suppose that there exist a constant ε > 0 and a subsequence {x_k}_{k≥m} ⊂ B(x_m, R), where B(x_m, R) is the closed ball of center x_m and radius R, for which we have

‖∇f_k‖ ≥ ε for all k ≥ m. (3.12)

Let l ≥ m be such that x_{l+1} is the first iterate after x_m outside B(x_m, R). Since ‖∇f_k‖ ≥ ε for all k = m, m+1, ..., l, we can write a lower bound on the decrease f(x_m) − f(x_{l+1}) over these iterations. Using the condition of Powell (3.3) and the fact that {H_k} is bounded [3], we have, as in the proof of Theorem 8, that this decrease is at least proportional to Σ_{k=m}^{l} △_k ≥ R. So, for all k ≥ m, leaving the ball costs f a fixed positive decrease [3]; since f is bounded below, this can happen only finitely often. Finally, using the fact that ‖d_k‖ ≤ △_k together with (3.8), consecutive iterates become arbitrarily close. Then, by the uniform continuity of ∇f, we have ‖∇f_k‖ < ε for k large enough. This is in contradiction with (3.12). Hence the theorem is proved.
The results, showing the number of iterations and the convergence time, are illustrated in the following tables.
