ON PARAMETER ESTIMATION OF SEMI-VARYING COEFFICIENT MODELS WITH CORRELATED RANDOM ERRORS

This paper deals with the estimation of semi-varying coefficient models with correlated random errors. The estimators of the coefficient functions are obtained by generalized weighted least squares. Beyond the well-known Gauss-Markov theorem, we also present an estimator of the parameter σ². AMS Subject Classification: 62G05, 62G08

Most research has focused on the case in which the random errors in a model are independent and identically distributed.
In practice, however, the random errors are sometimes correlated, so it is necessary to discuss this case.
In this paper, we discuss the semi-varying coefficient model with correlated random errors, which can be written as

Y = X^T α(U) + Z^T β + e, (1.2)

where α(·) is a vector of unknown coefficient functions, β is a parameter vector, and e is the random error.

Estimation
Suppose that we have a random sample of size n: {Y_r, X_r, Z_r, U_r} (r = 1, 2, ..., n) is a sample from model (1.2), where e = (e_1, e_2, ..., e_n)^T ~ N(0, σ²Σ) and Σ is a given positive definite matrix. Model (1.2) can then be written as

Y* = M + e, (2.1)

where Y* = Y − Zβ with β known and M = (X_1^T α(U_1), ..., X_n^T α(U_n))^T. Since Σ is a given positive definite matrix, there exists an orthogonal matrix P such that Σ = P^T Λ P, where Λ = diag(λ_1, λ_2, ..., λ_n) and the λ_r > 0 (r = 1, 2, ..., n) are the eigenvalues of Σ. Left-multiplying (2.1) by P gives

P Y* = P M + P e, (2.2)

where Cov(Pe) = σ² P Σ P^T = σ² Λ, so the transformed errors are uncorrelated. According to the principle of weighted least squares, if we assume that the inverse of the matrix X^T W X exists for every u, then the estimated coefficient functions at u can be expressed as

α̂(u) = (X^T W X)^{-1} X^T W Y*. (2.3)

The estimator of M is then M̂ = S Y*, where S is a smoothing matrix that depends only on the observations (U_r, X_r^T), r = 1, 2, ..., n. Substituting M̂ back into the model gives a linear model in β, and applying least squares to that linear model yields the estimator β̂. Since β̂ is inferred from α̂(u), the resulting estimator of C^T α(U) is a two-step estimator; because of the complexity of the local fitting, its statistical inference properties are difficult to analyze, so we mainly focus on the properties of α̂(u). Suppose that EY* = Xα(u). Then:

Theorem 1. Consider model (2.2) and let α̂(u) be the generalized weighted least squares estimator. Suppose that EY* = Xα(u). Then:
(1) When X is a matrix with full column rank and the inverse of the matrix X^T W X exists, then E α̂(u) = α(u); hence α̂(u) is an unbiased estimator of α(u), and we call α̂(u) a weighted linear unbiased estimate of α(u).
(2) When X is a matrix with full column rank and W is an idempotent matrix, then α̂(u) is the unique best weighted linear unbiased estimate of α(u).
(3) When X is a matrix with full column rank, W is an idempotent matrix, and lim_{n→∞} (X^T W X)^{-1} = 0, then α̂(u) is a consistent estimator of α(u).
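The estimator α̂(u) = (X^T W X)^{-1} X^T W Y* can be sketched in code. The following is a minimal sketch, assuming a diagonal weight matrix W (the specific choice of W, e.g. kernel weights, is not fully given in this excerpt); the toy data with constant coefficient functions and no noise are our own illustration.

```python
import numpy as np

def gwls_alpha_hat(X, y_star, w):
    """Generalized weighted least squares estimate
    alpha_hat(u) = (X^T W X)^{-1} X^T W Y*, with W = diag(w)."""
    XtW = X.T * w                       # X^T W for a diagonal W
    return np.linalg.solve(XtW @ X, XtW @ y_star)

# Toy check: constant coefficient functions alpha = (2, -1) and no noise,
# so the estimator should recover alpha exactly at any u.
rng = np.random.default_rng(0)
n = 50
U = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ np.array([2.0, -1.0])      # EY* = X alpha(u) with constant alpha

# Hypothetical Gaussian kernel weights centred at u = 0.5 (an assumed choice of W)
h = 0.2
w = np.exp(-0.5 * ((U - 0.5) / h) ** 2)
print(gwls_alpha_hat(X, y_star, w))     # close to [2. -1.]
```

Because Y* lies exactly in the column space of X here, the weighted least squares solution reproduces the constant coefficients regardless of the weights, which illustrates the unbiasedness claim of Theorem 1(1).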

Simulation
In this section, we use Matlab to conduct a simulation study of the estimator.
Figure: The estimate of sin(15U).
Figure: The estimate of 4U(1 − U).

Figure: The estimate of cos(5πU).
Figure: The estimate of exp(4U).
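A simulation of this kind can also be sketched in Python (the paper uses Matlab). The sketch below assumes a single varying coefficient with X = 1, Gaussian kernel weights, an AR(1)-type Σ, and sample size and bandwidth of our own choosing; the target function 4u(1 − u) is one of those shown in the figures above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 400, 0.05

# Assumed data-generating design: one varying coefficient with X = 1,
# so Y*_r = alpha(U_r) + e_r with alpha(u) = 4u(1 - u).
U = np.sort(rng.uniform(0, 1, n))
alpha_true = 4 * U * (1 - U)

# Correlated errors e ~ N(0, sigma^2 Sigma); an AR(1)-type Sigma is an assumed example.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
e = np.linalg.cholesky(0.1 * Sigma) @ rng.normal(size=n)
y_star = alpha_true + e

def alpha_hat(u):
    # Gaussian kernel weights (an assumed choice of the weight matrix W);
    # with X = 1, (X^T W X)^{-1} X^T W Y* reduces to a weighted average.
    w = np.exp(-0.5 * ((U - u) / h) ** 2)
    return np.sum(w * y_star) / np.sum(w)

grid = np.linspace(0.1, 0.9, 17)
est = np.array([alpha_hat(u) for u in grid])
print(np.max(np.abs(est - 4 * grid * (1 - grid))))  # max estimation error on the grid
```

Plotting `est` against `4 * grid * (1 - grid)` reproduces the kind of comparison shown in the figures.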
(2) When W is an idempotent matrix, then α̂(u) is the unique best weighted linear unbiased estimation of α(u).
(3) When X is a matrix with full column rank, W is an idempotent matrix, and lim_{n→∞} (X^T W X)^{-1} = 0, then α̂(u) is a consistent estimator of α(u).
In order to prove Theorem 2, we first give some definitions and lemmas.
Definition 2. (see [2]) Consider model (2.2) with c ∈ R^p and b ∈ R^n. Then b^T W Y is called the best weighted linear unbiased estimate of c^T α(u) if:
(1) b^T W Y is a weighted linear unbiased estimate of c^T α(u);
(2) for any g^T W Y that is a weighted linear unbiased estimate of c^T α(u), we have Var(b^T W Y) ≤ Var(g^T W Y).

Proof of Theorem 2. (1) By Lemma 2, c^T α̂(u) must be unique, so α̂(u) is the unique weighted linear unbiased estimate of α(u).
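The "weighted linear unbiased" property invoked in Definition 2 and the proof follows in one line under the assumption EY* = Xα(u); a sketch of the computation:

```latex
\begin{align*}
E\hat{\alpha}(u)
  &= (X^{T}WX)^{-1}X^{T}W\,EY^{*} \\
  &= (X^{T}WX)^{-1}X^{T}WX\,\alpha(u) = \alpha(u).
\end{align*}
```

Linearity of α̂(u) in Y* is immediate from its closed form, so α̂(u) is a weighted linear unbiased estimate.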
(2) Let W be an idempotent matrix, and let g^T W Y be any weighted linear unbiased estimate of c^T α(u). Then for any α(u) we have

Var(g^T W Y) ≥ Var(c^T α̂(u)). (3.1)

From Definition 2, it follows that α̂(u) is the best weighted linear unbiased estimate of α(u). Equality in (3.1) holds if and only if (I_n − P_{W^{1/2}})g = 0, in which case g^T W Y = c^T α̂(u). Hence α̂(u) is the unique best weighted linear unbiased estimate of α(u).
(3) We have E(c^T α̂(u)) = c^T α(u). Since X is a matrix with full column rank, W is an idempotent matrix, and lim_{n→∞} (X^T W X)^{-1} = 0, the variance of c^T α̂(u) tends to zero as n → ∞. Therefore, c^T α̂(u) is a consistent estimator of c^T α(u).
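The variance bound behind part (3) can be made explicit. The following is a sketch, assuming the transformed model (2.2) with error covariance σ²Λ, a symmetric idempotent W, and λ_max = max_r λ_r finite:

```latex
\begin{align*}
\operatorname{Var}\!\left(c^{T}\hat{\alpha}(u)\right)
  &= \sigma^{2}\, c^{T}(X^{T}WX)^{-1}X^{T}W\Lambda W X(X^{T}WX)^{-1}c \\
  &\le \sigma^{2}\lambda_{\max}\, c^{T}(X^{T}WX)^{-1}c
  \;\longrightarrow\; 0 \quad (n \to \infty),
\end{align*}
```

using W Λ W ⪯ λ_max W² = λ_max W. Together with unbiasedness, Chebyshev's inequality then yields the consistency of c^T α̂(u).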

The Estimation of σ² and Optimality
Let us set

σ̂² = (1/n) Y*^T (I_n − S)^T Σ^{-1} (I_n − S) Y*.
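A residual-based estimate of σ² of this form can be sketched in code. The sketch below assumes the smoothing matrix S from Section 2, the weighting Σ^{-1}, and the normalization 1/n; all three are illustrative assumptions, since the displayed formula is only partially recoverable from this excerpt.

```python
import numpy as np

def sigma2_hat(y_star, S, Sigma):
    """Residual-based estimate of sigma^2 (a sketch):
    sigma2_hat = Y*^T (I - S)^T Sigma^{-1} (I - S) Y* / n."""
    n = len(y_star)
    r = y_star - S @ y_star                 # residuals (I - S) Y*
    return r @ np.linalg.solve(Sigma, r) / n

# Sanity check: if Y* lies exactly in the range of the smoother (S Y* = Y*),
# the residuals vanish and sigma2_hat is 0.
n = 10
S = np.eye(n)                               # degenerate smoother, for the check only
y = np.arange(n, dtype=float)
Sigma = np.eye(n)
print(sigma2_hat(y, S, Sigma))              # 0.0
```

With S = 0 and Σ = I_n the estimator reduces to the mean of the squared observations, which matches the usual residual-sum-of-squares form.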