STOCHASTIC STABILITY OF VECTOR SDE WITH APPLICATIONS TO A STOCHASTIC EPIDEMIC MODEL

In this paper we consider stochastic stability, namely, asymptotic stability in mean, asymptotic mean-square stability, asymptotic stability a.s., exponential p-stability and stability via a stochastic Lyapunov function for vector stochastic differential equations. We apply these results to stochastic epidemic models induced by bacteriophages in marine bacteria populations. The novelty of the paper consists of new stability results for vector SDEs and their application to a new stochastic epidemic model. AMS Subject Classification: 34F05, 34E10, 60K15, 34D20, 34D05, 92D30


Introduction
In this paper, we study different types of stochastic stability for the deterministic equilibrium state of a vector stochastic differential equation (SDE). We apply these stability results to a stochastic epidemic model of bacteriophages in marine bacteria populations.
The general conception of stochastic stability was developed by H. Kushner [2]. Starting with the classical theory of Lyapunov [1] for ordinary differential equations, Kushner studied the general properties of the "stochastic Lyapunov function". This level of ideas was not, however, always sufficient to resolve some of the simplest problems for linear stochastic differential equations (SDEs). R. Z. Khasminskii [4] gave necessary and sufficient conditions for the asymptotic stochastic stability of linear stochastic systems.
This was the starting point for the investigations by A. Friedman and M. Pinsky [8,9] on the asymptotic behavior of solutions of linear stochastic differential systems [8] and on the stability and spiraling properties of stochastic equations [9]. M. Pinsky [10] examined various conditions for the asymptotic stability of the origin for an SDE. The basic notion is the S-function, the logarithm of the classical Lyapunov function. Using this function, local asymptotic stability may be obtained, i.e., the probability that the solution converges to zero can be made arbitrarily close to 1 by taking the initial condition sufficiently close to the origin. To prove global stability, i.e., that this probability is equal to 1, the notion of the G-function was introduced. Global stability of the zero solution was established for nonlinear and linear SDEs using the existence of a mass distribution.
The difficult step in the application of Lyapunov theory to analyze the stability of deterministic systems, as well as the stability of stochastic systems, is the construction of a suitable Lyapunov function. There is indeed no general systematic procedure for generating Lyapunov functions. For deterministic systems, results such as the Popov criterion and the circle conditions have been obtained for a particular class of feedback systems containing a linear time-invariant forward-path element and either a nonlinear or a time-varying feedback element.
This motivated J. L. Willems [7] to consider a similar class of stochastic feedback systems, in which the gain of the feedback element has a stochastic white-noise component. He dealt with the stability of a particular class of stochastic systems: feedback systems whose feedback gain has a deterministic component, which may be nonlinear and/or time-varying, and a stochastic component which is white noise. Lyapunov functions are constructed and criteria for global stability are derived, similar to the results available for related deterministic feedback systems, such as the Routh-Hurwitz criterion, the Popov criterion, and the circle criteria.
Paper [6] by R. Brockett and J. Willems deals with average-value criteria for stochastic stability. The results obtained in that paper fall into two categories. In the first class the colored case (K(t) is a colored process) is considered, and it was shown how to use linear techniques to obtain conditions for almost sure asymptotic stability for the following model: dz/dt = Fz + Gw, K = Hz, dx/dt = Ax − BKCx, with w white noise. The second class of results considers the white-noise case, where K(t) is white noise, and it was shown how to use frequency-domain stability criteria for linear systems in order to obtain criteria for mean-square stability of the above model.
In the paper [5] by F. Kozin, a few ideas and their application to the study of stability of linear stochastic systems were presented. In particular, it was shown how they lead to stronger sufficient conditions and, in the case of Ito differential equations, to the exact stability boundary for second-order systems.
The asymptotic stability of linear stochastic evolution equations in Hilbert space was treated by U. G. Haussmann [11]. He established exponential mean-square stability for the zero solution of the equation dx + Ax dt = B(x) dw(t) in Hilbert space: E|x(t)|^2 ≤ A E|x_0|^2 e^{−αt}. He also proved exponential stability almost surely (a.s.) (or with probability 1) for the zero solution of the same equation: |x(t)|^2 ≤ A E|x_0|^2 e^{−αt} a.s. P. L. Chow [13] considered the following equation in a separable Banach space: dx = A(x) dt + B(x) dw(t), x(0) = x_0, with nonlinear functions A(x) and B(x) and white noise w(t). The stability and asymptotic stability of an equilibrium state x (A(x) = 0) were studied for this equation. P. L. Chow developed a Lyapunov method in separable Banach spaces for asymptotic a.s. stability criteria.
The book [12] by R. Z. Khasminskii contains various criteria for the stochastic stability of differential equations under random perturbations of their parameters, including SDEs.
The stability of stochastic semigroups of operators, and of the stochastic equations associated with them, was considered by A. V. Skorokhod [14].
V. Mandrekar [15] gave a review of results related to extensions of Lyapunov methods in the stability theory of infinite-dimensional (stochastic and deterministic) evolution equations.
The book [25] presents a systematic study of stochastic differential delay equations driven by nonlinear integrators, detailing various exponential stabilities for stochastic differential equations and large-scale systems. It illustrates the practical use of stochastic stabilization, stochastic destabilization, stochastic flows, and stochastic oscillators in numerous real-world situations. The book [27] covers the basic principles and applications of various types of stochastic systems, with much theory and many applications not previously available in book form. The paper [26] deals with almost sure sample stability of nonlinear stochastic dynamic systems. A stochastic version of the LaSalle theorem [29] was presented in [28]. In the paper [32] the authors studied an SDE version of the classical SIS epidemic model, with noise introduced in the disease-transmission term. They showed that the SDE has a unique positive global solution, and they established conditions for extinction and persistence of the disease. Allen [31] discusses a stochastic model of the above SIS epidemic model. This was done by constructing a stochastic differential equation (SDE) approximation to the continuous-time Markov chain model. The latter is obtained by assuming that events occurring at a constant rate in the deterministic model occur according to a Poisson process with the same rate. McCormack and Allen [?] construct an SDE approximation to an SIS multihost epidemic model and explore the stochastic and deterministic models numerically. In fact, three types of stochastic models are commonly used in mathematical biology: discrete-time and continuous-time Markov chain models and stochastic differential equation models. See, for example, Allen [34] for applications of Markov-chain models, and Allen [34,36] and Gard [33] for applications of stochastic differential equation models in mathematical biology. Allen [35] also considered applications of SDEs to study persistence times for two interacting populations. We note that, besides considering applications of SDEs to biological models, Allen [36] also considered partial differential equations for the transition probabilities of the different states of the system. It turned out that this PDE is the forward Kolmogorov equation that describes the diffusion process.
In this paper we consider stochastic stability, namely, asymptotic stability in mean, asymptotic mean-square stability, asymptotic stability a.s., exponential p-stability and stability via a stochastic Lyapunov function for stochastic epidemic models induced by bacteriophages in marine bacteria populations.
The importance of studying the stability of a deterministic equilibrium for a stochastic system follows from the same idea as studying the mean (or average, or expected) value in statistics. It gives us insight into the long-run behavior of the system (for asymptotic stabilities) around the equilibrium state. For example, mean stability describes the behavior of the averaged system near the equilibrium state, while mean-square stability describes the behavior of the spread of the system near the equilibrium point, similar to the variance in statistics. In the case of biological applications, e.g., an epidemic in a marine bacteriophage population, we have three equilibrium states: the vanishing equilibrium, the boundary equilibrium and the positive equilibrium: E_0 = (0, 0, 0), E_f = (1, 0, 0), E_+ = (s*, i*, v*). They are described by a three-dimensional vector (depending on the densities of susceptible (s), infected (i) and virus-carrying (v) bacteria, respectively). Mean stability near E_0 means that the averaged system has no bacteria; near E_f, that it is free of virus infection; and near E_+, that all three types, susceptible, infected and virus-carrying, are present. Asymptotic stability in mean describes the same situation in the long run for the averaged system. Mean-square stability gives us information about the spread of the susceptible, infected and virus-carrying bacteria. Mean-square stability near E_0 says that there is no spread of bacteria; near E_f, that the spread is stable for the susceptible and there is no spread for the infected and virus-carrying; near E_+, that the spread is stable for all three types.
A deterministic mathematical model for marine bacteriophage infection was proposed in [16], where its essential mathematical features are analyzed. As the authors noticed, the model was suggested to one of them by Professor A. Okubo (State University of New York, Stony Brook) in 1994 during his visit to Urbino University, Italy. The idea was to model the observed infection of some marine bacteria such as Cytophaga marinoflava, Cyanobacterium, etc. (see [17]). Other experimental evidence of viral invasion of bacteria can also be found in [18,19,20].
The system of three equations for marine bacteriophage infection proposed in [16] is similar to the one describing SIR models [21,22,23,24], where the role of the removed class R is played by the phages v.
An epidemic model with a free-living stage, proposed in [20], is quite similar to the one presented in [16].
The work [22] on disease transmission in invariable host populations also has some similarities with the model in [16].
The rest of the paper is organized as follows. Section 2 introduces the stochastic epidemic model of bacteriophages in marine bacteria populations. Section 3 gives various definitions of stochastic stability and the relationships between them. In Section 4 we present some results on stochastic stability for vector SDEs, including asymptotic mean stability, asymptotic mean-square stability, asymptotic stability almost surely, and exponential p-stability. Section 5 deals with applications of the stability results of Section 4 to the epidemic model of bacteriophages. Section 6 presents some results on p-stability using a stochastic Lyapunov function. Section 7 concludes the paper. We note that References [1]-[15] contain bibliography on stability and References [16]-[24] contain bibliography on marine bacteriophage infection. The remaining References [25]-[36] give a review of applications of SDEs in biology and some sources on stability of SDEs and nonlinear stochastic systems.

Stochastic Epidemics Model of Bacteriophages in the Marine Bacteria Populations
We consider the epidemics induced by bacteriophages in marine bacteria populations. The marine environment consists of the layer of the sea in which the bacteriophages (viruses) and bacteria are assumed to be homogeneously distributed. The total population density of bacteria at time t is denoted by N(t) (number of bacteria/liter), while the total population density of viruses at time t is denoted by V(t) (number of viruses/liter). The total bacteria population N(t) can be divided into two groups: the susceptible bacteria with density S(t) and the infected bacteria with density I(t).
Let C > 0 be the carrying capacity, α > 0 the intrinsic growth rate, K ≥ 0 the effective contact rate per bacterium with viruses, b > 1 the replication rate of viruses, µ the death rate of viruses, and λ the lysis death rate.
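For orientation, the deterministic skeleton of the model, in the form proposed in [16] and written with the parameters above, can be sketched as follows (this display is our reconstruction of the drift terms; the noise terms of the stochastic model (1) are not shown here):

```latex
\begin{aligned}
\frac{dS}{dt} &= \alpha S\Bigl(1 - \frac{S+I}{C}\Bigr) - K S V,\\
\frac{dI}{dt} &= K S V - \lambda I,\\
\frac{dV}{dt} &= -K S V - \mu V + b \lambda I.
\end{aligned}
```

Here each infection event removes one free virus (the term −KSV in the V equation), and the lysis of an infected bacterium releases b new viruses (the term bλI).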
Our purpose is to study the stochastic stability of the model (1).

Definitions of Stochastic Stability
This section contains the definitions of different types of stochastic stability. For completeness, we present eleven notions of stochastic stability, starting from stability in probability and ending with exponential p-stability. Remark 2 explains the relationships between the different types of stochastic stability.
Let (Ω, F, F_t, P) be a probability space with filtration F_t and probability measure P, and let x_t be a stochastic process in R^n, n ≥ 1, measurable with respect to F_t. The symbol E stands for the expectation (or mean value) with respect to the probability P, and |x| stands for the norm of x in R^n.

Definition 1. The state x* is said to be stable in probability (or stochastically stable) if for every x_0 = x (deterministic), ρ > 0 and ǫ > 0 there exists δ := δ(ρ, ǫ) > 0 such that if |x − x*| < δ, then P{sup_{t≥0} |x_t − x*| > ǫ} < ρ.

Definition 2. The state x* is said to be asymptotically stable in probability (or asymptotically stochastically stable) if it is stable in probability and for every ǫ > 0 there exists δ := δ(ǫ) > 0 such that |x − x*| < δ implies P{lim_{t→+∞} x_t = x*} ≥ 1 − ǫ.

Definition 3. The state x* is said to be almost surely stable (or stable with probability 1) if for every ǫ > 0 there exists δ := δ(ǫ) > 0 such that |x − x*| < δ implies sup_{t≥0} |x_t − x*| < ǫ with probability 1.

Definition 4. The state x* is said to be almost surely asymptotically stable if Definition 3 holds and P{lim_{t→+∞} x_t = x*} = 1.

Definitions 5-10. Stability in mean and asymptotic stability in mean, mean stability and asymptotic mean stability, and p-stability and asymptotic p-stability (p > 0) are defined in the same way with respect to the quantities E|x_t − x*|, |E x_t − x*| and E|x_t − x*|^p, respectively: the state x* is stable in the corresponding sense if the quantity remains small whenever |x − x*| is small, and asymptotically stable in the corresponding sense if, in addition, the quantity tends to 0 as t → +∞.

Definition 11. The state x* is exponentially p-stable if there exist A > 0 and α > 0 such that E|x_t − x*|^p ≤ A|x − x*|^p e^{−αt}.

Remark 1. Usually a 2-stable (p = 2) (asymptotically 2-stable, exponentially 2-stable) state is called mean-square stable (asymptotically mean-square stable, exponentially mean-square stable).
Remark 2. We note that exponential p-stability implies asymptotic stability in mean (and hence stability in mean); from the latter follow asymptotic mean stability (and hence mean stability) and asymptotic stochastic stability. Also, almost sure asymptotic stability implies almost sure stability, and asymptotic stability in probability implies stability in probability.

Some Results on Stochastic Stability for Vector SDE
We consider the following nonlinear vector stochastic differential equation in R^n:

dx_t = A(x_t) dt + B(x_t) dw(t),   x_0 = x ∈ R^n,   (2)

where A(x) and B(x) are a nonlinear vector in R^n and an n × m matrix, respectively, A(x) := (a_i(x), i = 1, 2, ..., n), B(x) := (b_ij(x), i = 1, 2, ..., n, j = 1, 2, ..., m), and w(t) := (w_1(t), ..., w_m(t)) is an m-dimensional standard Wiener process. Our purpose is to study various kinds of stability for the equation (2). Let x* be an equilibrium state (or steady state, or critical state) for the deterministic equation dx/dt = A(x), i.e., A(x*) = 0. As we will see, this is the case in our application. We suppose throughout that the entries a_i(x) and b_ij(x) are differentiable functions of x.
Now we expand A(x_t) and B(x_t) near the equilibrium point x*:

A(x_t) = A(x*) + ∂A(x*)/∂x (x_t − x*) + r_1(x_t − x*),   (5)

B(x_t) = B(x*) + ∂B(x*)/∂x (x_t − x*) + r_2(x_t − x*),   (6)

where ∂A(x*)/∂x is the Jacobian matrix at the point x* and the remainders r_1, r_2 are of higher order in (x_t − x*). As r tends to zero, that is, as (x_t − x*) → 0, we deduce that x_t → x* in one or another kind of convergence.
Taking into account the expansions (5) and (6), and using A(x*) = 0, we rewrite equation (2) in the following form:

dx_t = [∂A(x*)/∂x (x_t − x*) + r_1] dt + [B(x*) + ∂B(x*)/∂x (x_t − x*) + r_2] dw(t).   (7)

Hence, near the equilibrium point x*, the general system (7) reduces, with error O(r^2), to the equivalent matrix form

dx_t = ∂A(x*)/∂x (x_t − x*) dt + [B(x*) + ∂B(x*)/∂x (x_t − x*)] dw(t).   (8)

The solution of equation (8) is called the first approximation of the solution of equation (2). The assertion on stability (instability) of the equilibrium state by first approximation is formulated as follows: if the equilibrium state of (8) is stable (unstable), then the equilibrium state of (2) is also stable (unstable).

Asymptotic Mean Stability
In this section we study asymptotic mean stability for the SDE (2). We rewrite the vector SDE (2) in the first-approximation form, with error O(R^2):

dx_t = ∂A(x*)/∂x (x_t − x*) dt + [B(x*) + ∂B(x*)/∂x (x_t − x*)] dw(t).   (9)

Suppose that

E ∫_0^t Tr{[B(x*) + ∂B(x*)/∂x (x_s − x*)][B(x*) + ∂B(x*)/∂x (x_s − x*)]^T} ds < ∞,   (10)

where T stands for the transpose of a matrix or vector. Under condition (10) the stochastic integral in (9) is a martingale, and we can conclude that

E ∫_0^t [B(x*) + ∂B(x*)/∂x (x_s − x*)] dw(s) = 0.   (11)

Now, taking the expectation of both sides of (9) and taking into account (11), we obtain for m(t) := E x_t − x* the equation

dm(t)/dt = ∂A(x*)/∂x m(t).   (12)

From the last equation we obtain the solution

m(t) = e^{(∂A(x*)/∂x) t} m(0).   (13)

The behavior of the solution (13) depends on the eigenvalues of the matrix ∂A(x*)/∂x. The following result follows from the representation (13).
Theorem 12 (Asymptotic Mean Stability Result). If all the eigenvalues of the matrix ∂A(x*)/∂x have negative real parts, then the equilibrium state x* is asymptotically mean stable (see Definition 8). If the real part of at least one eigenvalue is positive, then the equilibrium state x* is unstable.
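As a numerical illustration of Theorem 12, the test reduces to checking the signs of the real parts of the eigenvalues of the Jacobian; the sketch below uses made-up 3 × 3 matrices, not Jacobians of the model:

```python
import numpy as np

def mean_stable(jacobian: np.ndarray) -> bool:
    """Theorem 12 test: all eigenvalues of the Jacobian at x* must have
    negative real parts for asymptotic mean stability."""
    return bool(np.all(np.linalg.eigvals(jacobian).real < 0))

# Hypothetical 3x3 Jacobians (illustrative values only).
A_stable = np.array([[-1.0, 0.5, 0.0],
                     [0.0, -2.0, 1.0],
                     [0.0, 0.0, -0.5]])
A_unstable = np.array([[0.3, 0.0, 0.0],
                       [0.0, -1.0, 0.0],
                       [0.0, 0.0, -1.0]])

print(mean_stable(A_stable))    # triangular: eigenvalues -1, -2, -0.5
print(mean_stable(A_unstable))  # eigenvalue 0.3 > 0, hence unstable
```

Since both matrices are triangular, the eigenvalues can be read off the diagonal, which makes the check easy to verify by hand.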

Asymptotic Mean-Square Stability
In this section we study the asymptotic mean-square stability of the equilibrium state x*. To study this kind of stability we need to know the behavior of the expression (see Definition 10 with p = 2) E|x_t − x*|^2 as t → +∞. We use the Ito formula [12,14] to derive the differential equations for the mixed second moments of the vector (x_t − x*). For the matrix of second moments P(t) := E[(x_t − x*)(x_t − x*)^T] we have the differential equation

dP(t)/dt = A* P(t) + P(t)(A*)^T + E[B(x_t)B^T(x_t)],   (14)

whose first-order approximation takes the following form:

dP(t)/dt = A* P(t) + P(t)(A*)^T + D* + ∂B(x*)B^T(x*)/∂x E(x_t − x*),   (15)

where a*_ik are the entries of the Jacobian matrix A*, D* is the value of B(x_t)B^T(x_t) at the point x*, D* := B(x*)B^T(x*), and ∂B(x*)B^T(x*)/∂x is the same notation as in (6). We note that (15) is a system of n × n equations, and this system can be solved explicitly.
Hence, asymptotic mean-square stability follows from the solution of this system.

Asymptotic Stability Almost Sure
To study asymptotic stability almost surely, we set z(t) = |x_t − x*|^{-1}(x_t − x*). By the Ito formula we obtain for z(t) an SDE of the form (17), with drift coefficient A*(z) and diffusion coefficients B*_k(z), E*_k(z). If the functions A*(z) and E*_k(z) satisfy a Lipschitz condition, then there exists a unique solution of the SDE (17). Hence, z(t) is a homogeneous Markov process on the unit sphere S in R^n centered at zero. Now we apply the Ito formula to the function R(t) := |x_t − x*|. Since R(t) > 0, we finally obtain the representation

R(t) = R(0) exp{ ∫_0^t G*(z(s)) ds + M(t) },   (19)

where G*(z) is a function on the sphere S and M(t) is a martingale term. For R(t) to go to zero with probability 1 (or almost surely), it is necessary and sufficient that the expression under the exponent sign in (19) tend to −∞ with probability 1.
Theorem 13 (Asymptotic Stability Almost Surely). Let

q* := lim sup_{t→+∞} (1/t) ∫_0^t G*(z(s)) ds < 0   (20)

with probability 1. Then the equilibrium state x* is asymptotically stable almost surely (or with probability 1).
Proof. Follows from the representation (19) and the condition (20).
Remark 3. The process z(t) is a Feller process with a compact phase space, and that is why it has an ergodic distribution, say ρ(dz). If ∫_S G*(z)ρ(dz) < 0, then the equilibrium state x* is asymptotically stable almost surely since, by the ergodic theorem for such Markov processes, (1/t) ∫_0^t G*(z(s)) ds → ∫_S G*(z)ρ(dz) a.s. as t → +∞. We note that if ∫_S G*(z)ρ(dz) > 0, then the equilibrium state x* is unstable.
Remark 4. The process z(t) is ergodic under certain conditions. The ergodic properties of the z-process are determined by its nature in the neighborhood of its singularities. A singularity of a Markov process is defined as a point at which the diffusion component vanishes. For the z-process given by (17), the singularities are the solutions of B*_k(z) = 0.
Remark 5. We also note that it is impossible to calculate the value q* in (20) effectively. In particular, it is impossible to find the ergodic distribution of the process z(t) effectively. That is why it is of interest to find sufficient stability conditions by first approximation.

Exponential p-Stability
Here we consider the result concerning exponential p-stability for the process x_t. Using the Ito formula we obtain the SDE (21) for |x_t − x*|^p, where B_k(x) := (b_kj), j = 1, 2, ..., m. The corresponding stability statement, whose proof follows, asserts that if q_1 < 0 then the equilibrium state x* is exponentially p-stable, while if q_2 > 0 then x* is unstable, where q_1 and q_2 are the supremum and infimum, respectively, of the drift coefficient in the equation for |x_t − x*|^p.
Proof. Let L_0 be a constant which we specify later. From (5), (6) and (21) we obtain the representation (22), where C*(x_t − x*) is defined in (23). From here we obtain the required bounds. Let us take δ_1, δ and p such that −q_1 > C when q_1 < 0, and C < q_2 when q_2 > 0. We can then see which expression to take for L_0, and that the process |x_t − x*|^p e^{L_0 t} is a supermartingale. From this representation, the Kolmogorov-Doob inequality and the fact that the process |x_{t∧τ_δ} − x*|^p e^{L_0 (t∧τ_δ)} is a supermartingale for such L_0 < 0, with τ_δ the first exit time of the process x_t from the sphere of radius δ > 0 centered at zero, we obtain asymptotic stability almost surely. The instability result follows from the representation (21), the Kolmogorov-Doob inequality and the fact that the process |x_{t∧τ_δ} − x*|^{−p} e^{L_0 (t∧τ_δ)} is a supermartingale for q_2 > 0 and L_0 > 0, with τ_δ the first exit time of the process x_t from the sphere of radius δ > 0 centered at zero. Remark 6 (Asymptotic p-mean stability). We note that exponential p-stability implies asymptotic p-mean stability (see Definitions 10-11).
Let us consider the equilibria of the deterministic part. From the equilibrium equations it follows that there exist three equilibria: the vanishing equilibrium E_0 = (0, 0, 0); the boundary equilibrium E_f = (1, 0, 0), in which f stands for 'free of virus infection'; and the positive equilibrium E_+ = (s*, i*, v*), where i* = a s*(1 − s*)/(a s* + l) and v* = a l(1 − s*)/(a s* + l). By analyzing the eigenvalues of the Jacobian matrix of the system at each equilibrium E_0, E_f and E_+, it can be shown that E_0 is always an unstable saddle point.

Equilibrium Points of Stochastic Model
We consider the stochastic model (24) for the bacteriophage infection of bacteria (we omit the subscript t in s, i and v for simplicity). Here, the vectors x, w(t), A(x) and the matrix B(x) have the forms given in (25)-(26). Hence, we can rewrite our system (24) in the matrix form (27), dx_t = A(x_t) dt + B(x_t) dw(t), where the matrix B(x) is defined in (26).
As we know (see Section 5.1), the equilibrium points can be found from the solution of the equation A(x*) = 0. In our case x* = (s*, i*, v*)^T and, in fact, we have the three equilibrium points listed in (28) (see Section 5.1). We note that B(x*_i) = 0_{3×4} for i = 1, 2, whereas B(x*_3) ≠ 0_{3×4}, where 0_{3×4} is the 3 × 4 matrix with zero entries.
The Jacobian matrix A* := ∂A(x*)/∂x at an equilibrium point x* takes, in this particular case, the form (29). At the point x*_1 = (0, 0, 0)^T := E_0 we obtain the matrix A*_1 in (30); at the point x*_2 = (1, 0, 0)^T := E_f, the matrix A*_2 in (31); and at the point x*_3 = (s*, i*, v*)^T := E_+, where s*, i* and v* are defined in Section 5.1, the matrix A*_3 in (32). Also, the values of B(x*) at the points x*_i, i = 1, 2, 3, are given in (34)-(35), where again 0_{3×4} is the 3 × 4 zero matrix. Hence, the first-order approximation system near the equilibrium point x* = (s*, i*, v*) for our system (24) takes the form (36), where the matrices A(x*) and B(x*) are defined in (29)-(35), x* in (28) and ∂B(x*)/∂x in (6). We also need the transpose B^T(x) of the matrix B(x) in (26), given in (37), and the product of the matrices B(x) and B^T(x), given in (38).

Mean Stability
In this section we study the mean stability (see Definition 7) of the equilibrium points x*_i, i = 1, 2, 3, in (28) and (33) for the stochastic model (24). Taking the expectation of both sides of equation (36) and taking into account the property (11), we obtain equation (39). The solution of this equation takes the form (40), where (s_0, i_0, v_0) := (s, i, v).
As we can see, the mean stability of the stochastic model (24) depends on the eigenvalues of the Jacobian matrix A* in (29), where the equilibrium points are defined in (28) and (33). By analyzing the eigenvalues of the Jacobian matrix A* of the stochastic system at each equilibrium point E_0, E_f and E_+ (the corresponding matrices are A*_1, A*_2 and A*_3 in (30), (31) and (32), respectively), we obtain the following results.

Mean Stability Near Equilibrium Point E_0 = (0, 0, 0). In this case the equation (40) takes the form (41), where the matrix A*_1 is defined in (30). The Jacobian matrix A*_1 in (41) has one positive eigenvalue λ = a > 0; hence the equilibrium point E_0 = (0, 0, 0) is mean unstable.

Mean Stability Near Equilibrium Point E_f = (1, 0, 0)
In this case the equation (40) takes the form (42), where the matrix A*_2 is defined in (31). The Jacobian matrix A*_2 in (42) has all eigenvalues negative for b ∈ (1, b*], where b* = 1 + µ/(KC). Hence, in this case the equilibrium point E_f is asymptotically mean stable (see Definition 8). If b > b*, then the equilibrium point E_f is mean unstable.
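The threshold b* can be checked numerically. Since the system equations are not reproduced above, the Jacobian below is our assumption, written for the dimensionless form of the model of [16] with growth rate a, lysis rate l and scaled virus death rate m, so that the threshold reads b* = 1 + m:

```python
import numpy as np

def jacobian_Ef(a, l, m, b):
    """Assumed Jacobian A*_2 at E_f = (1, 0, 0) for the dimensionless
    bacteriophage model (our reconstruction, not taken from the text)."""
    return np.array([[-a,  -a,    -1.0],
                     [0.0, -l,     1.0],
                     [0.0,  b * l, -(1.0 + m)]])

def asymptotically_mean_stable(A):
    # Theorem 12: all eigenvalues must have negative real parts.
    return bool(np.all(np.linalg.eigvals(A).real < 0))

a, l, m = 1.0, 1.0, 0.5          # threshold b* = 1 + m = 1.5
print(asymptotically_mean_stable(jacobian_Ef(a, l, m, b=1.2)))  # b < b*
print(asymptotically_mean_stable(jacobian_Ef(a, l, m, b=2.0)))  # b > b*
```

The block structure of the matrix shows why the threshold appears: the lower 2 × 2 block has positive determinant l(1 + m − b) exactly when b < 1 + m.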

Mean Stability Near Equilibrium Point E_+ = (s*, i*, v*)
In this case the equation (40) takes the form (43), where the matrix A*_3 is defined in (32) and the equilibrium point (s*, i*, v*) is defined in (33). For the equilibrium E_+ to be positive we need s* < 1 and b > 1. It is known that for a special case of s* the model (43) is asymptotically mean stable. Remark 7. We note that asymptotic mean stability (Definition 8) implies mean stability (Definition 7). Hence, the equilibrium points E_f and E_+ are also mean stable for the relevant values of b.

Mean-Square Stability
In this section we study the mean-square stability of the stochastic model (24). For this purpose we need to write the system of equations (15) for our stochastic model. We note that this gives a system of 9 = 3 × 3 equations (n = 3).
We recall that the matrix A* is defined in (29), and the matrix D* := B(x*)B^T(x*) takes the form (44). Taking into account the forms of the matrices A*, D*, ∂B(x*)B^T(x*)/∂x (x_t − x*) and the system of equations (15), we obtain the system of equations (45) for the stochastic model (24).

5.4.1. Mean-Square Stability Near Equilibrium Point E_0 = (0, 0, 0). We rewrite the system (45) near the equilibrium point E_0 = (0, 0, 0) as (46). Actually, we need only the first three equations in (46). We can rewrite the system of the first three equations in the form (47)-(48), where the matrix C*_1 is given there. This system is deterministic with respect to E(s^2, i^2, v^2). We need here a result on the stability of deterministic inhomogeneous equations.
Lemma. Suppose we have the deterministic equation in R^n

dx(t)/dt = Ax(t) + f(t, x(t)),   x(0) = x_0,   (49)

and suppose that the solutions of the equation are bounded and that

|f(t, x)| ≤ C|x|   for |x| ≤ h.   (50)
If the matrix A has all eigenvalues with negative real parts, then the solutions of equation (49) are stable.
If the matrix A has at least one eigenvalue with positive real part, then, even if the condition (50) holds, the solution of (49) is unstable.
Proof. The solution of equation (49) can be written as

x(t) = e^{At}x_0 + ∫_0^t e^{A(t−s)} f(s, x(s)) ds.

From this representation and the boundedness of the solution we obtain the estimate

|x(t)| ≤ K e^{−αt}|x_0| + K C ∫_0^t e^{−α(t−s)} |x(s)| ds,

where K and α > 0 are such that |e^{At}| ≤ K e^{−αt}. The result now follows from the Gronwall-Bellman inequality.
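The Gronwall step in the proof above can be spelled out as follows (a sketch, with constants K and α > 0 such that |e^{At}| ≤ K e^{−αt}):

```latex
|x(t)|e^{\alpha t} \le K|x_0| + KC\int_0^t e^{\alpha s}|x(s)|\,ds
\quad\Longrightarrow\quad
|x(t)| \le K|x_0|\,e^{(KC-\alpha)t},
```

so for KC < α the solution tends to zero, which gives the stability claim of the Lemma.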
We note that in this case the eigenvalue condition of the Lemma fails (see Section 5.2 and equation (41)).

Mean-Square Stability Near Equilibrium Point E_f = (1, 0, 0)
We rewrite the system (45) near the equilibrium point E_f = (1, 0, 0) as (52). The system (52) is deterministic, and the system of its first three equations may be represented in the form (53), where the matrix C*_2 takes the form (54). The system (53) satisfies all the conditions of the Lemma (see Section 5.2 and equation (42) with the matrix A*_2 in (31)). Hence we have the following result: the solution of the system (53) is mean-square stable near the equilibrium point E_f = (1, 0, 0); hence the solution of the initial nonlinear system (24) is also mean-square stable near this equilibrium point. Moreover, the solution of (53) is asymptotically mean-square stable; hence the solution of (24) is also asymptotically mean-square stable. Remark 8. From the corresponding inequalities we also obtain asymptotic stability in mean for the system (53) and the system (24) near the equilibrium point E_f = (1, 0, 0).
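The asymptotic mean-square stability statement above can be illustrated numerically on a scalar toy SDE (our own illustration, not the system (24)). For dX = −aX dt + σX dW the second moment obeys dE X^2/dt = (−2a + σ^2) E X^2, so E X^2 decays whenever 2a > σ^2; a Monte Carlo Euler-Maruyama sketch:

```python
import numpy as np

# Toy scalar SDE dX = -a X dt + sigma X dW (illustrative only).
# Its second moment satisfies dE[X^2]/dt = (-2a + sigma^2) E[X^2],
# so E[X^2] decays whenever 2a > sigma^2.
rng = np.random.default_rng(0)
a, sigma = 1.0, 0.5
T, dt, n_paths = 1.0, 0.01, 5000
n_steps = int(T / dt)

x = np.ones(n_paths)  # X_0 = 1 on every path
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x = x + (-a * x) * dt + sigma * x * dw  # Euler-Maruyama step

print(np.mean(x**2))  # theory predicts about exp((-2a + sigma^2) T)
```

With these parameters the theoretical value is e^{−1.75} ≈ 0.17, and the sample second moment should land near it, well below the initial value 1.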

Mean-Square Stability Near Equilibrium Point E_+ = (s*, i*, v*)
We note that in this case the corresponding system has a form similar to (45), with the equilibrium point E_+ = (s*, i*, v*) defined in (33).
The system of the first three equations in (45) may be represented in the form (55), where the matrix C*_3 takes the form (56) and the functions f_i(t, s, i, v), i = 1, 2, 3, are defined in (57). The system of equations (55) with the matrix (56) and the vector function (57) is deterministic.
From the Lemma and (55)-(57) we have the following result: if v* > a(1 − 2s* − i*) (which is the case, since a(1 − 2s* − i*) − v* = −as* < 0), then the solution of the system of equations (55) is asymptotically mean-square stable under some relations between the coefficients a and b; hence, the solution of the initial system of equations (24) is asymptotically mean-square stable as well. Now we consider the local stability analysis of the equilibrium point E_+ = (s*, i*, v*) in detail.
The eigenvalue problem for the Jacobian matrix (36) provides a characteristic equation of the form

λ^3 + a_1(s*, v*)λ^2 + a_2(s*, v*)λ + a_3(s*, v*) = 0,   (58)

with coefficients a_i(s*, v*), i = 1, 2, 3. If s* exceeds the threshold value (b − 1)/(b + 1), then l < (a+b)^2/(ab) < (a+b)/(b−1), and from the Routh-Hurwitz criterion we obtain that for any s* ∈ (0, 1) the equilibrium E_+ is locally asymptotically stable if and only if a_1 > 0, a_3 > 0 and a_1 a_2 > a_3. If s* equals the threshold value (b − 1)/(b + 1), then we have a Hopf bifurcation; in this case l > (a+b)^2/(ab), and we need to distinguish two situations.
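The Routh-Hurwitz conditions for a cubic characteristic equation, as used above, can be checked mechanically; the helper below is generic (the model-specific coefficients a_i(s*, v*) are not reproduced here):

```python
import numpy as np

def routh_hurwitz_cubic(a1: float, a2: float, a3: float) -> bool:
    """All roots of l^3 + a1*l^2 + a2*l + a3 have negative real parts
    iff a1 > 0, a3 > 0 and a1*a2 > a3 (Routh-Hurwitz for a cubic)."""
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# Cross-check against the roots themselves for an illustrative polynomial:
# (l + 1)^3 = l^3 + 3l^2 + 3l + 1 has the triple root -1.
roots = np.roots([1.0, 3.0, 3.0, 1.0])
print(routh_hurwitz_cubic(3.0, 3.0, 1.0))   # criterion satisfied
print(bool(np.all(roots.real < 0)))         # roots confirm stability
```

For the model one would substitute the symbolic coefficients a_i(s*, v*) evaluated at the chosen parameter values.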

Asymptotic Stability Almost Sure
To study asymptotic stability almost surely we need to calculate the function G*(z) in (19); here z(t) := |x_t − x*|^{-1}(x_t − x*). Taking into account the expressions for the matrices A* in (29) and B(x) in (26), we obtain the expressions (59)-(60) for the values (z, A*z), Σ_{k=1}^n (z, B*_k z)^2 and Σ_{k=1}^n |B*_k z|^2. We note that the expressions for Σ_{k=1}^n (z, B*_k z)^2 and Σ_{k=1}^n |B*_k z|^2 do not exist at the equilibrium points E_0 = (0, 0, 0) and E_f = (1, 0, 0).
We can study only stability at the equilibrium point E + = (s * , i * , v * ), where E + is defined in (33).
The expression for G*(z) takes the form (61), where the terms on the right-hand side of (61) are defined in (59) and (60).
From (20) we obtain the following result: if

lim sup_{t→+∞} (1/t) ∫_0^t G*(z(s)) ds < 0 with probability 1,

then the equilibrium state E_+ is asymptotically stable for the system (36) and, hence, for the initial system (24). As we know, the process z(t) is a Feller process with a compact phase space, and that is why it has an ergodic distribution, say ρ(dz). If ∫_{S_3} G*(z)ρ(dz) < 0, then the equilibrium state (s*, i*, v*) is asymptotically stable almost surely, by the ergodic theorem for such Markov processes. We note that if ∫_{S_3} G*(z)ρ(dz) > 0, then the equilibrium state (s*, i*, v*) is unstable.
Of course, the process z(t) is ergodic under certain conditions. The ergodic properties of the z-process are determined by its nature in the neighborhood of its singularities. A singularity of a Markov process is defined as a point at which the diffusion component vanishes. For the z-process given by (17), the singularities are the solutions of B*_k(z) = 0.

Exponential p-Stability
To study exponential p-stability (see Section 4.4) we need to calculate the expression (62) and then to find its supremum and infimum. After calculating all the terms in (62) we obtain the expressions (63) and (64), where C*(s, i, v) is defined in (65). Now, to study exponential p-stability (see Section 4.4), we need to solve the optimization problems (66)-(67) with the constraints indicated there. Set q_1 := sup of the expression in (66) and q_2 := inf of the expression in (67). Then, after solving the optimization problems (66)-(67), we obtain that if q_1 < 0 (see (70)), then the equilibrium state E_+ = (s*, i*, v*) is exponentially p-stable for our stochastic system (24).

Stochastic Lyapunov Function and p-Stability
In this section we give an idea of how the method of stochastic Lyapunov functions can be used to prove exponential p-stability for our model. Consider equation (2):
$$dx_t = A(x_t)\,dt + B(x_t)\,dw(t), \qquad x_0 = x \in R^n, \qquad (72)$$
where $A(x)$ and $B(x)$ are a nonlinear vector in $R^n$ and an $n \times m$ matrix, respectively, $A(x) := (a_i(x),\ i = 1, 2, \ldots, n)$, $B(x) := (b_{ij}(x),\ i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, m)$, $w(t) := (w_1(t), w_2(t), \ldots, w_m(t))$ is an $m$-dimensional standard Wiener process, and $x_t := (x_t^1, \ldots, x_t^n)$. The infinitesimal operator of the Markov diffusion process $x_t$ in (72) has the following form:
$$LV(x) = \sum_{i=1}^n a_i(x)\,\frac{\partial V(x)}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n \big(B(x)B^T(x)\big)_{ij}\,\frac{\partial^2 V(x)}{\partial x_i\,\partial x_j}. \qquad (73)$$
The main result in the theory of stochastic stability with a stochastic Lyapunov function is the following: if there exists a Lyapunov function $V(x)$ such that $LV(x - x^*) \le -\gamma V(x - x^*)$, $\gamma > 0$, then the equilibrium state $x^*$ is exponentially p-stable, where $L$ is defined in (73).
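As a classical one-dimensional illustration of the operator (73) (a textbook example, not part of the model), consider $dx_t = -a x_t\,dt + \sigma x_t\,dw(t)$ with $V(x) = x^2$. Then

```latex
LV(x) = (-a x)\,\frac{dV}{dx} + \frac{1}{2}\,\sigma^2 x^2\,\frac{d^2V}{dx^2}
      = -2a x^2 + \sigma^2 x^2 = -(2a - \sigma^2)\,V(x),
```

so the zero state is exponentially 2-stable (mean-square stable) exactly when $2a > \sigma^2$.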
If there exists a Lyapunov function $V(x)$ such that
$$LV(x - x^*) \le 0, \qquad (74)$$
then the equilibrium state $x^*$ is asymptotically stable. We now apply this result to our model (24). In this case the corresponding operator $L$ has the form (75). From (74) and (75) we obtain the following result: if there exists a Lyapunov function $V(s - s^*, i - i^*, v - v^*)$ satisfying (74) with $L$ as in (75), then the equilibrium state $E_+$ is asymptotically stable for our stochastic model (24). Remark 9. As a Lyapunov function we can take, for example, the function displayed in the text, whose explicit form involves constants $L, M \in R_+$.
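The Lyapunov condition $LV(x) \le -\gamma V(x)$ can be checked mechanically in the linear case. For $dx = Ax\,dt + Bx\,dw$ with quadratic $V(x) = |x|^2$, formula (73) gives $LV(x) = x^T(A + A^T + B^T B)x$, so a negative largest eigenvalue of the symmetric matrix $A + A^T + B^T B$ yields a valid rate $\gamma$. A minimal sketch with assumed matrices (not the linearization of system (24)):

```python
import numpy as np

# Assumed illustrative matrices for dx = A x dt + B x dw
# (a sketch; NOT the actual linearization of system (24)).
A = -2.0 * np.eye(3)
B = 0.5 * np.eye(3)

# For V(x) = |x|^2, the operator (73) gives
#   LV(x) = x^T (A + A^T + B^T B) x,
# so LV <= -gamma * V holds with gamma = -lambda_max(A + A^T + B^T B)
# whenever that largest eigenvalue is negative.
M = A + A.T + B.T @ B
lam_max = float(np.max(np.linalg.eigvalsh(M)))
gamma = -lam_max
print(gamma)  # 3.75 here: the Lyapunov condition holds with this rate
```

With these matrices $A + A^T + B^T B = -3.75\,I$, so the condition holds with $\gamma = 3.75$; any $\gamma$ computed this way certifies exponential 2-stability of the zero solution.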

Conclusion
In this paper we considered stochastic stability, namely, asymptotic stability in mean, asymptotic mean-square stability, asymptotic stability a.s., exponential p-stability and stability via a stochastic Lyapunov function for vector stochastic differential equations (SDEs) near equilibrium states of the deterministic system. We applied these results to a stochastic epidemic model induced by bacteriophages in marine bacteria populations. The novelty of the paper consists of new stability results for vector SDEs and their application to a new stochastic epidemic model. The importance of studying the stability of a deterministic equilibrium for a stochastic system follows from the same idea as studying the mean (or average, or expected) value in statistics: it gives insight into the long-run behavior of the system (for asymptotic stabilities) around the equilibrium state. For example, mean stability describes the behavior of the averaged system near the equilibrium state, while mean-square stability describes the behavior of the spread of the system near the equilibrium point, similar to the variance in statistics. In the case of biological applications, e.g., the epidemic in a marine bacteriophage population, we have three equilibrium states: the vanishing equilibrium, the boundary equilibrium and the positive equilibrium,
$$E_0 = (0, 0, 0), \qquad E_f = (1, 0, 0), \qquad E_+ = (s^*, i^*, v^*).$$
They are described by a three-dimensional vector (depending on the densities of susceptible (s), infected (i) and virus-carrying (v) bacteria, respectively). Mean stability near $E_0$ means that the averaged system has no bacteria; near $E_f$, it is free of virus infection; near $E_+$, it has all three types of bacteria: susceptible, infected and virus-carrying. Asymptotic stability in mean describes the same situation in the long run for the averaged system. Mean-square stability gives information about the spread of the susceptible, infected and virus-carrying bacteria. Mean-square stability near $E_0$ says that there is no spread of bacteria; near $E_f$, the spread is stable for susceptible bacteria and there is no spread for infected and virus-carrying ones; near $E_+$, the spread is stable for all three types.
There are many other interesting problems to be considered. One of them is to look at the existence of random attractors or random equilibria for the models under consideration. Another is to take the delay factor into account. The authors would like to consider these problems in their future research.
$E_f$ is a locally asymptotically stable node for all $b \in (1, b^*)$, critically stable at $b = b^*$, and an unstable saddle point for $b > b^*$, where $b^* = 1 + \mu/(KC)$. The positive equilibrium $E_+$ is feasible only if $s^* < 1$ and $b > 1$. There exists a special value $s^{**}$ such that, at this value, the model has a simple Hopf bifurcation. Correspondingly, there exists a bifurcation value $b^{**}$ for $b$ such that $E_+$ is asymptotically stable if $b^* < b < b^{**}$ and unstable if $b > b^{**}$.
At this value the model has a simple Hopf bifurcation. Correspondingly, the Jacobian matrix $A_3^*$ in (43) has all eigenvalues with negative real parts only for $b^* < b < b^{**}$, where $b^{**}$ is the bifurcation value. Hence, in this case the equilibrium point $E_+$ is asymptotically stable. If $b > b^{**}$, then $E_+$ is mean unstable.
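The bifurcation value $b^{**}$ can be located numerically as the point where a complex-conjugate pair of Jacobian eigenvalues crosses the imaginary axis. A toy sketch with an assumed 2x2 Jacobian $J(b)$ (for illustration only; the model's actual Jacobian $A_3^*$ in (43) is 3x3 and depends on the model parameters):

```python
import numpy as np

def spectral_abscissa(b):
    # Toy Jacobian with eigenvalues (b - 2) +/- i: a complex pair
    # crosses the imaginary axis at b = 2 (a Hopf-type scenario).
    J = np.array([[b - 2.0, -1.0], [1.0, b - 2.0]])
    return float(np.max(np.linalg.eigvals(J).real))

# Scan b and report the first value where the largest real part reaches zero:
# below it all eigenvalues have negative real parts (stable), above it not.
bs = np.linspace(1.0, 3.0, 201)
b_crit = next(b for b in bs if spectral_abscissa(b) >= -1e-9)
print(b_crit)
```

The same scan applied to the model's Jacobian $A_3^*$, evaluated at $E_+$ as a function of $b$, would produce an estimate of $b^{**}$.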