APPLICATION OF THE AVERAGING METHOD TO OPTIMAL CONTROL PROBLEM OF SYSTEM WITH FAST PARAMETERS

The application of the averaging method to a nonlinear optimal control problem for a system with fast oscillating parameters is justified. According to this method, the fast oscillating parameters are replaced by their averages. It is proved that the optimal control of the averaged problem is asymptotically optimal for the original problem. The convergence of the optimal solutions of the original problem to the optimal solutions of the averaged problem is studied. AMS Subject Classification: 34K35, 49K15, 49J15, 49J30


Introduction
In this paper we consider the following two optimal control problems with fast oscillating coefficients.
The first is a purely nonlinear problem:

ẋ = X(t/ε, x, u(t)), x(0, u) = x₀ (1.1)

with the quality criterion (1.2). Here ẋ = dx/dt. The second one is linear in the control function:

ẋ = f(t/ε, x) + f₁(x)u(t), x(0, u) = x₀ (1.3)

with the quality criterion (1.4). We also consider the criterion (1.5), quadratic in the control, for problem (1.3). Here ε > 0 is a small parameter, T > 0 is a given constant, x is a phase vector in R^d, and u(t) is an m-dimensional control vector which belongs to some functional set.
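Since the systems above are abstract, a small numerical sketch may help fix ideas. The fast system and its average below are toy examples of our own choosing (X(τ, x, u) = −(1 + sin τ)x + u, so the time average is X₀(x, u) = −x + u), not objects from the paper; we integrate both with Euler's method and compare:

```python
import math

def euler(rhs, x0, t_end, n):
    """Explicit Euler for x' = rhs(t, x) on [0, t_end]; returns the trajectory."""
    dt = t_end / n
    x, t, traj = x0, 0.0, [x0]
    for _ in range(n):
        x += dt * rhs(t, x)
        t += dt
        traj.append(x)
    return traj

def sup_err(eps, n=100_000):
    """Sup-norm gap between the fast system and its average for the constant control u = 1."""
    u = 1.0
    # original system: x' = X(t/eps, x, u) = -(1 + sin(t/eps)) * x + u
    fast = euler(lambda t, x: -(1.0 + math.sin(t / eps)) * x + u, 1.0, 1.0, n)
    # averaged system: y' = X0(y, u) = -y + u
    avgd = euler(lambda t, y: -y + u, 1.0, 1.0, n)
    return max(abs(a - b) for a, b in zip(fast, avgd))

print(sup_err(0.1), sup_err(0.01))  # the gap shrinks roughly like eps
```

The sup-norm gap decays roughly like ε, which is the kind of closeness estimate, uniform in the control, that the paper establishes.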

If the limits (1.6) and (1.7) exist, then together with the original problems we consider the averaged problems (1.8) and (1.9) with the quality criteria (1.10)-(1.12). The main result of this paper is the proof of the convergence of the optimal controls and optimal trajectories of the original problems to the optimal controls and trajectories of the averaged problems.
Also, in this case it is established that the optimal control of the averaged problem is asymptotically optimal for the original one. This means that the minimum of the quality criterion is attained with an accuracy of the order of the small parameter ε.
N. N. Moiseev was the first to draw attention to the possibility of applying the averaging method to optimal control problems on asymptotically large time intervals [1,2]. In particular, he singled out two approaches to finding asymptotically optimal controls: the first is the averaging of the boundary value problem for the maximum principle, the second is the direct averaging of the controlled motion equations.
N. N. Moiseev emphasized the practical application of these approaches to specific applied problems, without a sufficient mathematical foundation. The works of V. A. Plotnikov [3] and his school give a rigorous mathematical foundation for applying the averaging method to optimization problems. However, in this case the set of admissible controls of the averaged system was constructed in a special way on the basis of the set of admissible controls of the original system. This way is technically difficult, because it requires solving an integral equation. Another approach, considered for example in [4,5], is related to the construction of a differential inclusion for the original problem, to which the averaging method is then applied. That is, the original control problem for a system of ordinary differential equations is replaced by a differential inclusion, which is a new object of a more complex nature.
An averaging method different from the previous ones was obtained in [6]. Namely, the right-hand side of the system is first averaged over time, while the control function u(t) is treated as a parameter (it is not averaged). The advantages of this approach are as follows: firstly, the averaged object is constructed rather simply, by averaging the right-hand sides over time; secondly, the set of admissible controls of the original and averaged problems is the same, and it does not require, as in [3], any additional constructions.
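The direct-averaging recipe of [6] — average the right-hand side over the fast time while the control u is frozen as a parameter — can be sketched numerically. The function X below is a toy illustration of ours, not the paper's:

```python
import math

def X(tau, x, u):
    """A toy fast right-hand side; the control u enters only as a frozen parameter."""
    return -(1.0 + math.sin(tau)) * x + u

def averaged_rhs(x, u, big_t=1000.0, n=200_000):
    """Midpoint approximation of X0(x, u) = lim_{T->oo} (1/T) * integral_0^T X(s, x, u) ds."""
    ds = big_t / n
    return sum(X((k + 0.5) * ds, x, u) for k in range(n)) * ds / big_t

# For this X the exact average is X0(x, u) = -x + u:
print(averaged_rhs(2.0, 1.0))  # close to -1.0
```

Because no averaging is applied to u, the original and averaged problems share the same set of admissible controls.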
However, the authors had to impose the condition of asymptotic constancy on the control functions: namely, for each control u(t) ∈ U there exists a constant u₀ such that |u(t) − u₀| is asymptotically small. We note that a similar condition appeared in [1,2].
The main difficulty in the approaches mentioned above is to prove the closeness of the corresponding solutions (with the same initial conditions) of the original and averaged objects on asymptotically large (of order ε⁻¹) intervals. Once such closeness is obtained, the further reasoning is analogous.
The purpose of this paper is to extend the approach of [6] to the optimal control problem without the condition of asymptotic constancy.
Concerning other aspects of the application of asymptotic methods to optimal control problems, we mention the papers [7,8,9].
As for the averaging method itself, its rigorous justification was first obtained by Bogolyubov in [10].
Later this method was generalized and developed for various classes of differential equations, for example, impulsive [11] and functional-differential [12] equations.
For the justification of our averaging scheme, we use the approach connected with the Krasnosel'skii-Krein theorem [13], which we give in the Preliminaries.
This article is organized as follows.
In Section 2 we introduce some preliminary definitions and results about the averaging method; in particular, we give the Krasnosel'skii-Krein theorem. Then we give rigorous formulations of the considered problems and state the main results. Section 3 is devoted to the proofs of the main results. In Subsection 3.1 we prove analogs of the averaging theorem for the case where the right-hand sides depend on functional parameters. In Subsection 3.2 the correspondence between the solutions of the original problem and the solutions of the averaged optimal control problem in the nonlinear case is proved. Subsection 3.3 is devoted to similar questions for linear control problems.

Preliminaries, Statement of the Problems and Main Results
In our paper, the concept of the integral continuity of a vector function with respect to a parameter µ will be used [13]. Namely, let X(t, x, µ) be a d-dimensional vector-valued function defined in the domain Q = {t ∈ [0, T], x ∈ D, µ ∈ Λ}. We say that the function X(t, x, µ) is integrally continuous in the region Q with respect to the parameter µ at a condensation point µ₀ ∈ Λ if for any t ∈ [0, T] and x ∈ D the limit holds:

lim_{µ→µ₀} ∫₀ᵗ X(s, x, µ) ds = ∫₀ᵗ X(s, x, µ₀) ds.

It is well known [13] that the existence of the limits (1.6) and (1.7) is equivalent to the integral continuity of the corresponding functions. Concerning integral continuity, we will use the following theorem [13].
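A quick numerical illustration of integral continuity, with an example of our own choosing: take X(t, x, µ) = sin(t/µ) for µ > 0 and X(t, x, 0) = 0. The integrand has no pointwise limit as µ → 0, yet its integrals do converge:

```python
import math

def integral(mu, t_end=1.0, n=500_000):
    """Midpoint approximation of integral_0^t_end sin(s/mu) ds."""
    ds = t_end / n
    return sum(math.sin((k + 0.5) * ds / mu) for k in range(n)) * ds

# The exact value is mu * (1 - cos(t_end/mu)), which tends to 0 as mu -> 0,
# even though sin(s/mu) itself has no pointwise limit:
for mu in (0.1, 0.01, 0.001):
    print(mu, integral(mu))
```

This is exactly the mode of convergence needed for the averaging limits (1.6) and (1.7).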
Theorem 2.1 (Krasnosel'skii-Krein). Let: 1) X(t, x, µ) be continuous with respect to t in Q; 2) X(t, x, µ) be uniformly continuous with respect to x in Q, uniformly with respect to t and µ; 3) X(t, x, µ) be integrally continuous with respect to µ at the point µ = µ₀. Let x_µ(t) be a family of functions, piecewise continuous with respect to t ∈ [0, T], with values in the domain D, which converges to a limit function x_{µ₀}(t) as µ → µ₀ uniformly with respect to t ∈ [0, T]. Then

lim_{µ→µ₀} ∫₀ᵗ X(s, x_µ(s), µ) ds = ∫₀ᵗ X(s, x_{µ₀}(s), µ₀) ds

uniformly with respect to t ∈ [0, T].

For Problem (1.1)-(1.2) we assume that the following conditions are satisfied.
2.1 the admissible controls are m-dimensional vector functions u(·) such that u(·) ∈ U, where U is a compact set in L₂([0, T]);

2.2 the function X(t, x, u) is defined and continuous in all its arguments in the domain Q₀ = {t ≥ 0, x ∈ R^d, u ∈ R^m}, and:
1) X(t, x, u) satisfies on Q₀ the linear growth condition with respect to x with constant M, that is, |X(t, x, u)| ≤ M(1 + |x|);
2) X(t, x, u) satisfies in the domain Q₀ the Lipschitz condition with respect to x ∈ R^d and u ∈ R^m with constant λ, that is, |X(t, x, u) − X(t, x₁, u₁)| ≤ λ(|x − x₁| + |u − u₁|) for any (t, x, u) and (t, x₁, u₁) in Q₀;

2.3 the limit (1.6) exists uniformly with respect to x ∈ R^d and u ∈ R^m;

2.4 the function L(t, x, u) is defined and continuous in all its arguments in the domain {t ∈ [0, T], x ∈ R^d, u ∈ R^m}.

For problem (1.3)-(1.4) we assume that the following conditions hold:

2.5 the admissible controls are also m-dimensional vector functions;

2.6 the function f(t, x) is defined and continuous in all its variables in the domain Q₂ = {t ≥ 0, x ∈ R^d}, the d × m-dimensional matrix f₁(x) is defined for x ∈ R^d, and the following conditions are satisfied:
1) f(t, x) satisfies in the domain Q₂ the linear growth condition with respect to x with constant M;
2) f(t, x) and f₁(x) satisfy in their domains of definition the Lipschitz condition with respect to x with constant λ > 0;

2.7 the limit (1.6) exists uniformly with respect to x ∈ R^d;

2.8 the scalar functions A(t, x) and B(t, u) are defined for t ∈ [0, T], x ∈ R^d, u ∈ U and continuous in all their variables; moreover,
1) A(t, x) ≥ 0 and B(t, u) ≥ a|u|^p for some constant a > 0 and each t ∈ [0, T];
2) the function Ψ : R^d → R¹ is continuous with respect to x and nonnegative.
For Problem (1.3)-(1.5) we assume that the admissible controls are m-dimensional vector functions with values in a convex closed set V ⊂ R^m; in addition, conditions 2.6 and 2.7 are also satisfied. We note that conditions 1) and 2) of 2.2 and 2.6, combined with the Caratheodory theorem, imply that for every admissible control u(t) there exists a unique solution x(t, u) of the Cauchy problem on the whole interval [0, T]; moreover, x(t, u) is an absolutely continuous function. Also, according to conditions 2.3 and 2.7, the functions X₀ and f₀ satisfy the linear growth condition with respect to x ∈ R^d with constant M and the Lipschitz condition with constant λ. Therefore, for each admissible control u(t) there exist unique solutions y(t, u) of the Cauchy problems (1.8) and (1.9) on the whole interval [0, T]. Here x(t, u) and y(t, u) are absolutely continuous functions.
Our first theorem, which plays an important role below for the control problem (1.1)-(1.2), guarantees the closeness of the corresponding solutions of the Cauchy problems for systems (1.1) and (1.8) for small values of ε.
Note that it is also of independent interest and is a generalization of Bogolyubov's first theorem of the averaging method to the case where the right-hand side depends on functional parameters.
Remark 2.3. We note that ε₀ from Theorem 2.2 does not depend on the admissible controls. That is, the estimate (2.3) is uniform over all admissible controls.
Remark 2.4. If, under the conditions of Theorem 2.2, the function X(t, x, u) is bounded on Q, then ε₀ = ε₀(η) depends only on η and does not depend on x₀. Thus, in this case the estimate (2.3) is uniform with respect to x₀ ∈ R^d.
Remark 2.5. The assertion of Theorem 2.2 does not follow from any scheme of partial averaging presented in [15].
The next theorem establishes a connection between the optimal controls and quality criteria of the original problem (1.1)-(1.2) and the averaged problem (1.8)-(1.10). In particular, the optimal control of the averaged problem is asymptotically optimal for the original problem, and there exists a sequence of optimal pairs of the original problems converging, uniformly on [0, T], to an optimal pair of the averaged problem. If, in addition, the averaged problem (1.8)-(1.10) has a unique solution, then the convergences (2.5) and (2.6) hold as ε → 0.
Of course, the compactness of the set U is rather restrictive. However, if we confine ourselves to the class of problems (1.3)-(1.4), that is, problems linear in the control, then the condition of strong compactness can be removed, replacing it, roughly speaking, with weak compactness.
We note that problems of the type (1.3)-(1.4) are quite often encountered in practice, for example, in all sorts of regulators.
First, we prove the result about averaging.
Theorem 2.7. Suppose that conditions 2.5-2.7 are satisfied. If the sequence u_ε converges weakly to u₀ in L_p([0, T]) as ε → 0, then the solution x_ε(t) of the Cauchy problem (1.3) with u(t) = u_ε(t) converges uniformly on the interval [0, T] to y(t), the solution of the corresponding Cauchy problem (1.9) with control u(t) = u₀(t). Using Theorem 2.7, we can establish a connection between the optimal controls of the original and averaged problems.
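A numerical sketch of this statement on a toy linear system of our own (f = −x, f₁ = 1, not the paper's system): the controls u_ε(t) = sin(t/ε) converge weakly, but not strongly, to u₀ ≡ 0 in L₂([0, 1]), and yet the trajectories converge uniformly:

```python
import math

def solve(u, n=100_000):
    """Explicit Euler for x' = -x + u(t), x(0) = 1, on [0, 1]; returns the trajectory."""
    dt = 1.0 / n
    x, t, traj = 1.0, 0.0, [1.0]
    for _ in range(n):
        x += dt * (-x + u(t))
        t += dt
        traj.append(x)
    return traj

def sup_gap(eps):
    x_eps = solve(lambda t: math.sin(t / eps))  # oscillating control u_eps
    y = solve(lambda t: 0.0)                    # its weak limit u0 = 0
    return max(abs(a - b) for a, b in zip(x_eps, y))

print(sup_gap(0.1), sup_gap(0.01))  # uniform gap shrinks as eps -> 0
```

Note that ‖u_ε − u₀‖_{L₂} stays bounded away from zero here, so the uniform convergence of trajectories really rests on weak, not strong, convergence of the controls.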
For the quality functional (1.5), assertion (2.10) can be strengthened by replacing the weak convergence with strong convergence.
Corollary 2.9. Under the conditions of Theorem 2.8, for problem (1.3)-(1.5) all the assertions of Theorem 2.8 are valid with the weak convergence of optimal controls replaced by strong convergence in L₂([0, T]).

Proofs of the Theorems and the Corollary
In what follows, we carry out all the proofs for T = 1, i.e. for the interval [0, 1].

The proofs of Theorems 2.2 and 2.6
Proof of Theorem 2.2. Let us take any η > 0 and fix it. We now estimate the difference between x(t, u) and y(t, u) for all ε > 0 and for any admissible control u(t). For simplicity, we denote x(t, u) = x(t) and y(t, u) = y(t); we also omit the dependence of x(t) on ε. Since U is compact in L₂([0, 1]), for the given η there exists a finite (ηe⁻λ/(4λ))-net u₁(t), ..., u_N(t), where N = N(η).
Then for the chosen control u(t) there exists a representative u_j(t) from the above net such that (3.1) holds. Next, note that by using the linear growth conditions with respect to x for X(t, x, u) and X₀(x, u), combined with the Gronwall inequality for x(t) and y(t) on the interval [0, 1], it is easy to obtain the estimates (3.2), where the constant C depends only on M and x₀. Using condition 2) of 2.2, we obtain (3.3). Now we estimate the first summand I₁ in (3.3). Since each function from L₂([0, 1]) can be approximated in the L₂-norm by a continuous function (for example, by partial Fourier sums with respect to sines and cosines), and each function continuous on a segment can be approximated by a piecewise constant one, for u_j(t) we can choose a continuous function u_c(t) and a piecewise constant function u_p(t) such that the inequalities (3.5) and (3.6) hold. Then from (3.4)-(3.6) we have the estimate (3.7). Dividing, if necessary, the interval [0, 1] by points {t_k}₀^R (t₀ = 0, t_R = 1), we may assume that all components of the vector-valued function u_p(t) take constant values on each interval [t_k, t_{k+1}). Here, of course, the natural number R = R(η) is fixed for a fixed choice of η.
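As an aside, the piecewise-constant approximation step used here is easy to check numerically; the control below is a sample of our own choosing, approximated by its left-endpoint values on n equal subintervals:

```python
import math

def l2_error(u, n_pieces, n_quad=100_000):
    """L2([0,1]) error of replacing u by its left-endpoint value on n_pieces equal intervals."""
    dt = 1.0 / n_quad
    total = 0.0
    for k in range(n_quad):
        t = (k + 0.5) * dt
        i = min(int(t * n_pieces), n_pieces - 1)  # index of the subinterval containing t
        total += (u(t) - u(i / n_pieces)) ** 2 * dt
    return math.sqrt(total)

u = lambda t: math.sin(3.0 * t) + t * t  # a sample continuous control
print(l2_error(u, 8), l2_error(u, 64))   # the error decays like 1/n_pieces
```

For a Lipschitz control the L₂ error decays like 1/n, so the piecewise-constant stand-in u_p(t) can be made as close as the net accuracy requires.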
Now we choose a positive integer n and divide the interval [0, 1] into n equal parts by the points t_i = i/n (i = 0, ..., n). We assume that n is so large that there are points t_i inside each interval [t_k, t_{k+1}). As a result, we obtain n intervals of the form [t_i, t_{i+1}). If for some k and i we have t_i < t_k < t_{i+1}, then we divide the interval [t_i, t_{i+1}) into the two intervals [t_i, t_k) and [t_k, t_{i+1}). As a result, the segment [0, 1] is divided into no more than n + R intervals, the length of each of which does not exceed 1/n. We again denote the points of the partition by t_i, and denote the total number of intervals [t_i, t_{i+1}) by K = K(η). In this case, obviously, K ≤ n + R and u_p(t) = u_p(t_i) for t ∈ [t_i, t_{i+1}). Let us denote y_i = y(t_i) and u_p(t_i) = u_pi. Then (3.8) holds. Let us estimate each term in (3.8). Using the linear growth condition and the Lipschitz condition for X(t, x, u), we obtain the estimate (3.9). Using (3.2) and the choice of the partition length, we obtain the corresponding bound. Then from (3.9) we obtain the estimate for J₂. Similarly, we can estimate the term J₃ in (3.8).
Then for the chosen η > 0 we can specify a number n such that for all ε > 0 the inequality (3.11) holds. Let us fix n. Now we estimate J₄ in (3.8). By condition 2.3, the function X(s/ε, x, u) is integrally continuous at the point ε = 0, uniformly with respect to x ∈ R^d and u ∈ R^m. Therefore, there exists ε₀ such that for any ε < ε₀ and the fixed n the value J₄ satisfies the estimate (3.12). The above results can be applied to each function u₁(t), ..., u_N(t) from the indicated net. Since the net is finite, ε₀ can be chosen as a universal parameter for every function in the net. Then from (3.3), (3.7), (3.11), and (3.12), for ε < ε₀ the inequality (2.3) follows uniformly with respect to all admissible controls, which proves Theorem 2.2.
Using condition 2) of 2.2 and Gronwall's inequality, we obtain (3.13). Using condition 2.4, we get (3.14). The estimate (3.2) is uniform with respect to all admissible u(t). Therefore, it follows from (3.2) that x(t, u) does not leave the ball B_C of radius C centered at zero for t ∈ [0, 1]. Using condition 1) of 2.4 and the Cantor theorem, we conclude that the function L(t, x, u) is uniformly continuous with respect to x ∈ B_C, uniformly with respect to t ∈ [0, 1] and u ∈ R^m. Therefore, the continuity of J_ε(u) in the L₂-norm follows from (3.13) and (3.14).
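The Gronwall-type argument behind this continuity — an integrally small change of the control produces a uniformly small change of the trajectory, up to the factor e^{λT} — can be checked on a toy scalar system of ours (right-hand side sin(x) + u(t), Lipschitz constant λ = 1, T = 1; not the paper's system):

```python
import math

def trajectory(u, n=100_000):
    """Explicit Euler for x' = sin(x) + u(t), x(0) = 0, on [0, 1]."""
    dt = 1.0 / n
    x, t, traj = 0.0, 0.0, [0.0]
    for _ in range(n):
        x += dt * (math.sin(x) + u(t))
        t += dt
        traj.append(x)
    return traj

x1 = trajectory(lambda t: math.cos(t))        # control u1
x2 = trajectory(lambda t: math.cos(t) + 0.1)  # perturbed control u2; integral of |u1 - u2| = 0.1
sup_diff = max(abs(a - b) for a, b in zip(x1, x2))

# Gronwall bound: sup |x1 - x2| <= e^{lambda * T} * integral_0^T |u1 - u2| dt = e * 0.1
print(sup_diff, math.e * 0.1)
```

The observed gap sits below the Gronwall bound, as the estimate predicts.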
Similarly, we establish that the functional J₀(u) is continuous with respect to u.
If the problem (1.8)-(1.10) has a unique solution, then any sequence (u*_{ε_n}(t), x*_{ε_n}(t)) converges to the same limit. This proves the last assertion of Theorem 2.6.

Proof of Theorem 2.7 and Corollary
Proof of Theorem 2.7. Rewriting equation (1.3), we have (3.30). From conditions 2.6 we obtain the corresponding bounds. The weak convergence of u_ε to u₀ implies the strong boundedness of u_ε, that is, sup_ε ‖u_ε‖_{L_p} < ∞. The last inequality implies the equicontinuity of the family x_ε(t) on [0, 1]. Let x_{ε_n}(t) be a sequence which converges, uniformly with respect to t ∈ [0, 1], to some function x₀(t) as ε_n → 0. From (3.30) we obtain (3.37). Taking into account the uniform bound on ‖u_{ε_n}‖_{L_p} from (3.37), we obtain that the last term in (3.34) tends to zero.
Passing now to the limit in (3.34) as ε_n → 0 and using (3.35) and (3.36), we conclude that x₀(t) is a solution of the Cauchy problem (1.9); therefore, by uniqueness, x₀(t) ≡ y(t).
Therefore x_{ε_n}(t) ⇒ y(t) as ε_n → 0. Thus, any convergent sequence of functions from the family converges to the same limit. This proves Theorem 2.7.
The proof of the existence of an optimal solution (x*_ε(t), u*_ε(t)) for each ε > 0, via the selection of a minimizing sequence u_ε^{(n)}(t) weakly convergent to u*_ε(t) and the passage to the limit, is standard (see, for example, [15]).
Here we use the lower semicontinuity of the integral ∫₀¹ B(t, u(t)) dt with respect to u, which follows from the convexity of B(t, u).
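This lower-semicontinuity phenomenon is visible already for the model convex integrand B(t, u) = u² (our illustration, not the paper's functional): u_n(t) = sin(2πnt) converges weakly to u₀ ≡ 0 in L₂([0, 1]), and the functional values stay at 1/2, strictly above the value 0 at the weak limit:

```python
import math

def quad_cost(u, n_quad=200_000):
    """Midpoint approximation of integral_0^1 u(t)^2 dt (model convex integrand B(t,u) = u^2)."""
    dt = 1.0 / n_quad
    return sum(u((k + 0.5) * dt) ** 2 for k in range(n_quad)) * dt

# u_n(t) = sin(2*pi*n*t) converges weakly to 0 in L2([0,1]) as n -> oo,
# but the cost stays at 1/2 > 0 = cost of the weak limit:
for n in (1, 10, 100):
    print(n, quad_cost(lambda t, n=n: math.sin(2.0 * math.pi * n * t)))
```

So along a weakly convergent minimizing sequence the functional can drop in the limit, but never jump up, which is exactly what the existence argument needs.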
The fact that u*_ε(t) belongs to the set V for every t ∈ [0, 1] follows from Mazur's lemma [16] and the convexity and closedness of V.
The existence of an optimal pair (y*(t), u*(t)) for problem (1.8)-(1.10) is proved similarly. Now let ū be an arbitrary constant vector in V. Obviously, the control u(t) ≡ ū is admissible for problem (1.3)-(1.4), and the corresponding bound holds for each ε > 0. In a way analogous to the estimate (3.33), we can show the existence of a constant C₁ independent of ε. Then the continuity of A, B, and Ψ implies the existence of a constant C₂, independent of ε, such that J*_ε ≤ C₂ for all positive ε. It follows from conditions 2.8 and (3.41) that the set {u*_ε} is weakly compact in L_p[0, 1]. Let u*_{ε_n}(t) be a sequence of optimal controls that converges weakly to u₀(t). It follows from Mazur's lemma [16] that u₀(t) ∈ V for t ∈ [0, 1]; hence u₀(t) is an admissible control.
Let x₀(t) be the solution of the Cauchy problem (1.9) with u(t) = u₀(t).
According to Theorem 2.7, the solution x_{ε_n}(t) of the Cauchy problem (1.3) converges uniformly on [0, 1] to x₀(t) as ε_n → 0, and then for any η > 0 the corresponding estimates hold. By Theorem 2.7, the solution x_{ε_n}(t, u*) of the Cauchy problem (1.3) converges uniformly with respect to t ∈ [0, 1] to y*(t) as ε_n → 0. From this and (3.43) we obtain (3.44) for sufficiently small ε_n. On the other hand, let us consider the auxiliary systems (3.47) and (3.48). Applying Theorem 2.7 to systems (3.47) and (3.48), we obtain (3.49). From this and the uniform convergence of x*_{ε_n} to x₀, we obtain that the corresponding supremum is bounded by some constant C₃ > 0 independent of n. Thus, from (3.49), for any η > 0 and all sufficiently small ε_n the required inequality holds, which proves assertion 1) of the theorem. We now prove assertion 2). Since x_ε(t, u*) converges to y*(t) uniformly with respect to t ∈ [0, 1] as ε → 0, we can obtain results analogous to (3.44); that is, for any η > 0 and for small values of ε the estimates (3.52) and (3.53) hold, from which assertion 2) follows. Let us prove assertion 3). For this, we show that (x₀(t), u₀(t)) is in fact an optimal solution of the problem (1.9)-(1.11).
We have (3.54). Let us pass to the limit in the last equality as n → ∞. From (3.52) and conditions 2.8 it follows that (x₀(t), u₀(t)) is an optimal pair. The last assertion of the theorem is proved similarly to the corresponding assertion of Theorem 2.6. Proof of Corollary. Suppose that J_ε(u) has the form (1.5). All the arguments of Theorem 2.8 remain valid for it. Then from (3.54) we obtain the required strong convergence.

Conclusion
Thus, in this paper the optimal control problem for a system with fast oscillating parameters was considered. The application of the averaging method to such nonlinear systems is justified. Furthermore, the convergence of the optimal controls and optimal trajectories of the original problems to the optimal controls and trajectories of the averaged problems is proved. It is also proved that the optimal control of the averaged problem is asymptotically optimal for the original problem. This means that the minimum of the quality criterion is attained with an accuracy of the order of the small parameter ε.