Consider a linear predictor with Np coefficients,
         Np
  y(k) = SUM p(i) x(k-i) ,
         i=1
where x(k) is the input signal. The prediction error is
e(k) = x(k) - y(k) .
To minimize the mean-square prediction error, solve
R p = r,
where R is a symmetric positive definite covariance matrix, p is the vector of predictor coefficients, and r is a vector of correlation values. The matrix R and the vector r are defined as follows.
  R(i,j) = Cov(i,j) = E[x(k-i) x(k-j)] ,   1 <= i,j <= Np,
  r(i)   = Cov(0,i) = E[x(k) x(k-i)] ,     1 <= i <= Np.
The solution is found using a Cholesky decomposition of the matrix R. The resulting mean-square prediction error can be expressed as
  perr = Ex - 2 p'r + p'R p = Ex - p'r ,
where the second equality uses R p = r at the minimum, and
where Ex is the mean-square value of the input signal,
Ex = Cov(0,0) = E[x(k)^2].
In practice, the expectation operator E[.] is replaced by a sum over k on a finite interval. Minimizing the prediction error over this interval defines the so-called covariance method for determining the linear prediction coefficients.
If the coefficient matrix is not numerically positive definite, or if the prediction error energy becomes negative at some stage in the calculation, the remaining predictor coefficients are set to zero. This is equivalent to truncating the coefficient matrix at the largest order for which it remains positive definite.
Predictor coefficients are usually expressed algebraically as vectors with 1-offset indexing. The correspondence to the 0-offset C-arrays is as follows.
  p(1) <==> pc[0]       predictor coefficient corresponding to lag 1
  p(i) <==> pc[i-1]     1 <= i <= Np