SOME COMPARISON RESULTS FOR MOVING LEAST-SQUARE APPROXIMATIONS

Some properties of moving least-squares approximations for two concrete weight functions are investigated. The technique used is based on properties of differential equations and on applications of the theory of Lyapunov functions.

Let x_1, ..., x_m be distinct points in a domain D ⊂ R^d, let f be a real-valued function defined on D, let P_l be an l-dimensional space of polynomials with basis p_1, ..., p_l, and let w : R^d × R^d → R be a smooth weight function.
Following [6], we will use the following definition.
1. The moving least-squares approximation of order l at a point x is the value p*(x), where p* ∈ P_l minimizes the weighted least-squares error
$$\sum_{i=1}^{m} \frac{(p(x_i) - f(x_i))^2}{w(x, x_i)}$$
among all p ∈ P_l.
The equivalent statement is the following constrained problem: minimize
$$\sum_{i=1}^{m} w(x, x_i)\, a_i^2$$
subject to
$$\sum_{i=1}^{m} a_i p_j(x_i) = p_j(x), \quad j = 1, \dots, l.$$
Here we assumed:
(H1) 1. w(x, y) > 0 for all x ≠ y;
2. rank(E^t) = l, where E is the m × l matrix with entries E_{ij} = p_j(x_i).
2. The approximation defined by the moving least-squares method is
$$\hat{f}(x) = \sum_{i=1}^{m} a_i(x) f(x_i),$$
where
$$a(x) = D^{-1} E A_0^{-1} c(x), \qquad D = \operatorname{diag}\bigl(w(x, x_1), \dots, w(x, x_m)\bigr), \qquad A_0 = E^t D^{-1} E, \qquad c(x) = (p_1(x), \dots, p_l(x))^t.$$
3. If w(x_i, x_i) = 0 for all i = 1, ..., m, then the approximation is interpolatory.
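For illustration, the following Python sketch computes the coefficient vector a(x) directly from the formula in item 2, for d = 1 and an exp-type weight; the nodes, the parameter values, and all helper names are chosen only for this example and are not part of the construction above.

import numpy as np

def mls_coefficients(x, nodes, w, degree):
    # E_{ij} = p_j(x_i) for the monomial basis 1, t, ..., t^degree
    E = np.vander(nodes, degree + 1, increasing=True)
    # c(x) = (p_1(x), ..., p_l(x))^t
    c = np.vander(np.array([x]), degree + 1, increasing=True)[0]
    # D = diag(w(x, x_i)); invertible as long as x is not a node
    Dinv = np.diag([1.0 / w(x, xi) for xi in nodes])
    A0 = E.T @ Dinv @ E                       # A_0 = E^t D^{-1} E
    return Dinv @ E @ np.linalg.solve(A0, c)  # a(x) = D^{-1} E A_0^{-1} c(x)

w_exp = lambda x, y: np.exp(1.0 * (x - y) ** 2)  # the weight w_1 introduced below, alpha = 1

nodes = np.linspace(0.0, 1.0, 7)
f_vals = np.sin(2 * np.pi * nodes)
a = mls_coefficients(0.37, nodes, w_exp, degree=2)
print("MLS value at x = 0.37:", a @ f_vals)      # sum_i a_i(x) f(x_i)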
For the approximation order of the moving least-squares approximation (see [6] and [2]) it is not difficult to obtain (for convenience we suppose P_l is the span of the standard monomial basis, see [2]):
$$\hat{f}(x) - f(x) = \sum_{i=1}^{m} a_i(x) R_i(x) \qquad (6)$$
and
$$|R_i(x)| \le C_1 \max\{|f^{(l+1)}(\xi)| : \xi \in D\} \qquad (7)$$
(C_1 = const.), where R_i(x) is the Taylor remainder of f of order l at the point x. Of course, if D is a bounded domain in R^d and the function f is (l + 1)-times continuously differentiable in D, then there exists a constant C_2 such that max{|f^{(l+1)}(x)| : x ∈ D} ≤ C_2. Therefore, (6) and (7) yield
$$|\hat{f}(x) - f(x)| \le C_1 C_2 \sqrt{m}\, \|a(x)\|_2. \qquad (8)$$
It follows from (8) that the error of the moving least-squares approximation is bounded from above by the 2-norm of the coefficient vector a(x).
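The reproduction property behind (6) can be checked numerically: by the constraints of the dual problem, sum_i a_i(x) p(x_i) = p(x) for every p ∈ P_l, so only the Taylor remainders survive in the error, and ||a(x)||_2 is the computable quantity in the bound (8). A short check, with the same illustrative setup as before:

import numpy as np

nodes = np.linspace(0.0, 1.0, 7)
x, degree = 0.37, 2
E = np.vander(nodes, degree + 1, increasing=True)
c = np.vander(np.array([x]), degree + 1, increasing=True)[0]
Dinv = np.diag(np.exp(-1.0 * (x - nodes) ** 2))   # D^{-1} for w_1, alpha = 1
a = Dinv @ E @ np.linalg.solve(E.T @ Dinv @ E, c)

# exactness on the basis 1, t, t^2 (the constraints of the dual problem)
for p in (lambda t: t * 0 + 1, lambda t: t, lambda t: t ** 2):
    assert np.isclose(a @ p(nodes), p(x))

print("||a(x)||_2 =", np.linalg.norm(a))          # the quantity entering (8)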
In this article, we will consider two families of weight functions (α, β ≥ 0):
$$w_1(\alpha, x, y) = e^{\alpha \|x - y\|^2}, \qquad w_2(\alpha, \beta, x, y) = \|x - y\|^{2\beta}\, e^{\alpha \|x - y\|^2}.$$
Usually the moving least-squares approximation generated by the weight function w_1 is called the exp-moving least-squares approximation.
Our goal in this short note is to compare the upper bounds generated by the use of w i , i = 1, 2.
Let us note the following facts:
1. If α = 0 in w_1, then we obtain the classical least-squares approximation.
2. If β = 0 in w_2, then w_2(α, 0, x, y) = w_1(α, x, y).
3. The moving least-squares approximation generated by the weight function w_2(α, 1, x, y) is studied in Levin's works, and we will call it the Levin approach; see for example [6]. In this case the approximation is interpolatory.
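The two families can be written down in a few lines; the check below (d = 1 for brevity) confirms that w_2 vanishes at coinciding arguments, which is exactly the interpolation condition noted earlier, and that β = 0 recovers w_1. The function names are illustrative only.

import numpy as np

def w1(alpha, x, y):
    return np.exp(alpha * (x - y) ** 2)

def w2(alpha, beta, x, y):
    return ((x - y) ** 2) ** beta * np.exp(alpha * (x - y) ** 2)

print(w1(1.0, 0.5, 0.5))                             # 1.0: not interpolatory
print(w2(1.0, 1.0, 0.5, 0.5))                        # 0.0: interpolatory (Levin approach)
print(w2(1.0, 0.0, 0.3, 0.7) == w1(1.0, 0.3, 0.7))   # True: beta = 0 gives w1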

The Weight Family w_1 Generates "Decreasing Bounds" with Respect to α

Throughout this section, we will suppose that the conditions (H1) hold true and w(x, y) = w_1(α, x, y).
Obviously A_0 = A_0(α, x) and moreover
$$A_0(\alpha, x) = E^t D^{-1} E.$$
Here, in the right-hand side, only the matrix D = diag(w_1(α, x, x_1), ..., w_1(α, x, x_m)) depends on α and x.
To simplify notation, we will write a(α) instead of a(α, x) and set H = diag(‖x − x_1‖², ..., ‖x − x_m‖²), so that dD/dα = HD. Differentiating a(α) = D^{-1} E A_0^{-1} c(x) with respect to α (only the matrix D depends on α), we obtain that a(α) is a solution of the equation
$$\frac{da}{d\alpha} = -\left(I - D^{-1} E A_0^{-1} E^t\right) H a. \qquad (10)$$
Let us set
$$L(a) = \langle a, Ha \rangle.$$
Our goal is to prove that L is a Lyapunov function for (10). Indeed:
1. L(0) = 0.
2. Let µ_* (resp. µ^*) be the smallest (resp. largest) eigenvalue of H or, equivalently, the smallest (resp. largest) diagonal entry of H, because H is a diagonal matrix. Then
$$\mu_* \|a\|_2^2 \le L(a) \le \mu^* \|a\|_2^2$$
for any a ∈ R^m.
3. For any a ∈ R^m we have L(a) = ⟨a, Ha⟩ ≥ 0, because the matrix H is positive definite.
These bounds follow from the Rayleigh–Ritz theorem. Therefore L is a positive definite, decrescent (and, of course, radially unbounded) Lyapunov function for (10).

Theorem 2.1. Let the conditions (H1) hold true and let x be a fixed point in D. Then there exists a constant µ ≥ 1 such that, for α_1 ≤ α_2,
$$\|a(\alpha_2, x)\|_2 \le \mu\, \|a(\alpha_1, x)\|_2.$$
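Equation (10) can be verified numerically by comparing a central finite difference of a(α) with the right-hand side of (10); the setup below (d = 1, illustrative nodes and parameters of our choosing) is a sanity check only, not part of the proof.

import numpy as np

nodes = np.linspace(0.0, 1.0, 7)
x, degree = 0.37, 2
E = np.vander(nodes, degree + 1, increasing=True)
c = np.vander(np.array([x]), degree + 1, increasing=True)[0]
H = np.diag((x - nodes) ** 2)            # H = diag(||x - x_i||^2)

def a_of(alpha):
    Dinv = np.diag(np.exp(-alpha * (x - nodes) ** 2))   # D^{-1} for w_1
    return Dinv @ E @ np.linalg.solve(E.T @ Dinv @ E, c)

alpha, eps = 1.0, 1e-6
fd = (a_of(alpha + eps) - a_of(alpha - eps)) / (2 * eps)

Dinv = np.diag(np.exp(-alpha * (x - nodes) ** 2))
A0 = E.T @ Dinv @ E
P = Dinv @ E @ np.linalg.solve(A0, E.T)  # D^{-1} E A_0^{-1} E^t
rhs = -(np.eye(len(nodes)) - P) @ H @ a_of(alpha)
print(np.max(np.abs(fd - rhs)))          # near zero: a(alpha) satisfies (10)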
Corollary 2.1. Let the conditions (H1) hold true. Let x be a fixed point in D.
Let L_i(f), i = 1, 2, be the two moving least-squares approximations of order l at the point x generated by the weight functions w_1(α_i, x, y), respectively.
Then, if α_1 ≤ α_2,
$$\|a(\alpha_2, x)\|_2 \le \mu\, \|a(\alpha_1, x)\|_2,$$
so the upper bound (8) generated by w_1(α_2, x, y) does not exceed µ times the bound generated by w_1(α_1, x, y), where the constant µ is defined in the proof of Theorem 2.1.
The proof of Corollary 2.1 follows from (8) and Theorem 2.1.
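The content of Corollary 2.1 can be inspected numerically by tabulating ||a(α, x)||_2 for growing α; the concrete nodes and evaluation point below are illustrative only.

import numpy as np

nodes = np.linspace(0.0, 1.0, 7)
x, degree = 0.37, 2
E = np.vander(nodes, degree + 1, increasing=True)
c = np.vander(np.array([x]), degree + 1, increasing=True)[0]

def coeff_norm(alpha):
    Dinv = np.diag(np.exp(-alpha * (x - nodes) ** 2))
    a = Dinv @ E @ np.linalg.solve(E.T @ Dinv @ E, c)
    return np.linalg.norm(a)

for alpha in (0.0, 1.0, 5.0, 25.0, 125.0):
    print(f"alpha = {alpha:7.1f}   ||a(alpha, x)||_2 = {coeff_norm(alpha):.6f}")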
The Weight Family w_2 Generates "Increasing Bounds" with Respect to β

Throughout this section, we will suppose that the conditions (H1) hold true and w(x, y) = w_2(α, β, x, y), where α ≥ 0 is fixed. Obviously A_0 = A_0(β, x) and moreover
$$A_0(\beta, x) = E^t D^{-1} E.$$
Here, in the right-hand side of the equality, only the matrix D = diag(w_2(α, β, x, x_1), ..., w_2(α, β, x, x_m)) depends on β and x.

Theorem 3.1. Let the conditions (H1) hold true. Then the upper bound (8) generated by w_2(α, β, x, y) is a non-decreasing function of β.
Example 3.1. It is not difficult to see that the errors are increasing functions of β, a somewhat "strange" fact, because β = 1 gives the interpolatory approximation.
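The behaviour described in Example 3.1 can be reproduced with a short experiment measuring the error at a fixed point as β grows; the test function sin 2πt and all parameters are chosen only for this illustration.

import numpy as np

nodes = np.linspace(0.0, 1.0, 7)
f_vals = np.sin(2 * np.pi * nodes)
x, degree, alpha = 0.37, 2, 1.0
E = np.vander(nodes, degree + 1, increasing=True)
c = np.vander(np.array([x]), degree + 1, increasing=True)[0]

def mls_error(beta):
    w = ((x - nodes) ** 2) ** beta * np.exp(alpha * (x - nodes) ** 2)  # w_2
    Dinv = np.diag(1.0 / w)
    a = Dinv @ E @ np.linalg.solve(E.T @ Dinv @ E, c)
    return abs(a @ f_vals - np.sin(2 * np.pi * x))

for beta in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"beta = {beta:.1f}   |f(x) - MLS(x)| = {mls_error(beta):.3e}")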