Linear prediction

Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.

In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis (a subfield of mathematics), linear prediction can be viewed as a part of mathematical modelling or optimization.

The prediction model

The most common representation is

\widehat{x}(n) = \sum_{i=1}^p a_i x(n-i)\,

where \widehat{x}(n) is the predicted signal value, x(n-i) the previously observed values, and a_i the predictor coefficients. The error generated by this estimate is

e(n) = x(n) - \widehat{x}(n)\,

where x(n) is the true signal value.

These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the parameters a_i are chosen.
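
As a concrete illustration, here is a minimal sketch in Python with NumPy (the coefficients and the toy signal are invented for this example) that evaluates the predictor and its error:

    import numpy as np

    # Hypothetical predictor coefficients a_1..a_p (here p = 2) and a toy signal.
    a = np.array([1.5, -0.7])
    x = np.array([0.0, 1.0, 1.8, 2.0, 1.6, 0.8])

    p = len(a)
    for n in range(p, len(x)):
        # \widehat{x}(n) = sum_{i=1}^p a_i x(n-i)
        x_hat = sum(a[i - 1] * x[n - i] for i in range(1, p + 1))
        e = x[n] - x_hat  # prediction error e(n) = x(n) - \widehat{x}(n)
        print(n, round(x_hat, 3), round(e, 3))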

For multi-dimensional signals the error metric is often defined as

e(n) = \|x(n) - \widehat{x}(n)\|\,

where \|\cdot\| is a suitably chosen vector norm. Predictions such as \widehat{x}(n) are routinely used within Kalman filters and smoothers[1] to estimate current and past signal values, respectively.

Estimating the parameters

The most common choice in optimizing the parameters a_i is the root mean square criterion, which is also called the autocorrelation criterion. In this method we minimize the expected value of the squared error E[e^2(n)]; setting the partial derivative with respect to each a_j to zero yields the equations

\sum_{i=1}^p a_i R(j-i) = R(j),

for 1 ≤ j ≤ p, where R is the autocorrelation of the signal x(n), defined as

\ R(i) = E\{x(n)x(n-i)\}\,,

and E is the expected value. In the multi-dimensional case this corresponds to minimizing the L2 norm.

The above equations are called the normal equations or Yule-Walker equations. In matrix form the equations can be equivalently written as

Ra = r,\,

where the autocorrelation matrix R is a symmetric, p × p Toeplitz matrix with elements r_{i,j} = R(i − j), 0 ≤ i, j < p, the vector r is the autocorrelation vector r_j = R(j), 0 < j ≤ p, and a is the parameter vector.
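
A minimal sketch of the autocorrelation method in Python with NumPy and SciPy; the biased autocorrelation estimator and the synthetic AR(2) test signal are choices made for this example, and the function name is hypothetical:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def lpc_autocorrelation(x, p):
        # Estimate a_1..a_p by solving the normal equations Ra = r.
        N = len(x)
        # Biased sample autocorrelation estimates R(0)..R(p).
        R = np.array([np.dot(x[:N - i], x[i:]) / N for i in range(p + 1)])
        # Toeplitz system: first column is R(0)..R(p-1), right-hand side R(1)..R(p).
        return solve_toeplitz(R[:p], R[1:p + 1])

    # Synthesize an AR(2) process x(n) = 1.5 x(n-1) - 0.7 x(n-2) + white noise.
    rng = np.random.default_rng(0)
    x = np.zeros(10000)
    for n in range(2, len(x)):
        x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + rng.standard_normal()
    print(lpc_autocorrelation(x, 2))  # approximately [1.5, -0.7]

Note that solve_toeplitz itself uses the Levinson recursion discussed below, so the solve step costs O(p^2) operations rather than the O(p^3) of a general solver.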

Another, more general, approach is to minimize the sum of squares of the errors defined in the form

e(n) = x(n) - \widehat{x}(n) = x(n) - \sum_{i=1}^p a_i x(n-i) = - \sum_{i=0}^p a_i x(n-i)

where the optimization is now carried out over all a_i subject to the constraint a_0 = -1.
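
Over a finite block of samples, this sum-of-squares minimization becomes an ordinary least-squares problem, sometimes called the covariance method of LPC in contrast to the autocorrelation method above. A sketch under that interpretation; the function name and the toy sinusoid are mine:

    import numpy as np

    def lpc_least_squares(x, p):
        # Minimize sum_n e(n)^2 over n = p..N-1. Each row of X holds
        # [x(n-1), ..., x(n-p)]; the target is x(n).
        X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
        a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return a

    # A pure sinusoid obeys x(n) = 2 cos(w) x(n-1) - x(n-2) exactly.
    x = np.sin(0.3 * np.arange(200))
    print(lpc_least_squares(x, 2))  # approximately [1.911, -1.0]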

On the other hand, if the mean square prediction error is constrained to be unity and the prediction-error equation is included as an additional row on top of the normal equations, the augmented set of equations is obtained as

\ Ra = [-1, 0, ... , 0]^{\mathrm{T}}

where the index i now ranges from 0 to p, R is a (p + 1) × (p + 1) matrix, and the leading -1 on the right-hand side follows from the convention a_0 = -1.

Specification of the parameters of the linear predictor is a wide topic, and a large number of other approaches have been proposed. Nevertheless, the autocorrelation method remains the most common; it is used, for example, for speech coding in the GSM standard.

Solving the matrix equation Ra = r is computationally a relatively expensive process. Gaussian elimination for matrix inversion is probably the oldest solution, but this approach does not efficiently exploit the symmetry of R. A faster algorithm is the Levinson recursion, proposed by Norman Levinson in 1947, which recursively calculates the solution. In particular, the autocorrelation equations above can be solved more efficiently by the Durbin algorithm.[2]
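
A sketch of the Durbin (Levinson-Durbin) recursion, given the autocorrelation values R(0)..R(p); the variable names are mine, and the test values are the theoretical normalized autocorrelations of the AR(2) example used earlier:

    import numpy as np

    def levinson_durbin(R, p):
        # Solves sum_i a_i R(j-i) = R(j), j = 1..p, in O(p^2) operations.
        a = np.zeros(p + 1)  # a[1..p] hold the predictor coefficients
        err = R[0]           # prediction error power at order 0
        for m in range(1, p + 1):
            # Reflection coefficient for stepping from order m-1 to order m.
            k = (R[m] - np.dot(a[1:m], R[m - 1:0:-1])) / err
            a_prev = a.copy()
            a[m] = k
            a[1:m] = a_prev[1:m] - k * a_prev[m - 1:0:-1]
            err *= 1.0 - k * k  # error power never increases with the order
        return a[1:], err

    # Normalized autocorrelations of an AR(2) process with a = [1.5, -0.7].
    R = np.array([1.0, 1.5 / 1.7, 1.5 * (1.5 / 1.7) - 0.7])
    a, err = levinson_durbin(R, 2)
    print(a)  # approximately [1.5, -0.7]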

Later, Delsarte et al. proposed an improvement to the Levinson recursion called the split Levinson recursion, which requires about half the number of multiplications and divisions. It exploits a special symmetry of the parameter vectors on subsequent recursion levels: calculations for the optimal predictor containing p terms make use of similar calculations for the optimal predictor containing p − 1 terms.

Another way of identifying the model parameters is to iteratively calculate state estimates with a Kalman filter and obtain maximum likelihood estimates within an expectation–maximization algorithm.

References

  1. ^ Einicke, G. A. (2012). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: Intech.
  2. ^ Ramirez, M. A. (2008). "A Levinson Algorithm Based on an Isometric Transformation of Durbin's". IEEE Signal Processing Letters, vol. 15, pp. 99–102.

External links

  • PLP and RASTA (and MFCC, and inversion) in Matlab