Autoregressive
Author: World Heritage Encyclopedia
Subject: AR, Ornstein–Uhlenbeck process, Dickey–Fuller test, SETAR (model), STAR model

In statistics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it describes certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values. It is a special case of the more general ARMA model of time series.


Definition

The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as

X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t \,

where \varphi_1, \ldots, \varphi_p are the parameters of the model, c is a constant, and \varepsilon_t is white noise. This can be equivalently written using the backshift operator B as

X_t = c + \sum_{i=1}^p \varphi_i B^i X_t + \varepsilon_t

so that, moving the summation term to the left side and using polynomial notation, we have

\phi (B)X_t= c + \varepsilon_t \, .

An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.

Some constraints are necessary on the values of the parameters of this model in order that the model remains wide-sense stationary. For example, processes in the AR(1) model with |φ1| ≥ 1 are not stationary. More generally, for an AR(p) model to be wide-sense stationary, the roots of the polynomial \textstyle z^p - \sum_{i=1}^p \varphi_i z^{p-i} must lie within the unit circle, i.e., each root z_i must satisfy |z_i|<1.
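As a concrete illustration, the AR(p) recursion and the AR(2) stationarity condition can be sketched in code. This is a minimal sketch, not from the source; the function names and parameter values are illustrative.

```python
import math
import random

def simulate_ar(phis, c=0.0, sigma=1.0, n=500, seed=0, burn_in=100):
    """Simulate X_t = c + sum_i phi_i * X_{t-i} + eps_t with Gaussian noise."""
    rng = random.Random(seed)
    p = len(phis)
    x = [0.0] * p                      # arbitrary start; burn-in washes it out
    for _ in range(n + burn_in):
        eps = rng.gauss(0.0, sigma)
        x.append(c + sum(phi * x[-1 - i] for i, phi in enumerate(phis)) + eps)
    return x[p + burn_in:]             # drop the start-up transient

def ar2_is_stationary(phi1, phi2):
    """Check that both roots of z^2 - phi1*z - phi2 satisfy |z| < 1."""
    disc = phi1**2 + 4 * phi2
    if disc >= 0:                      # two real roots
        r1 = (phi1 + math.sqrt(disc)) / 2
        r2 = (phi1 - math.sqrt(disc)) / 2
        return abs(r1) < 1 and abs(r2) < 1
    return math.sqrt(-phi2) < 1        # complex pair: |z|^2 = -phi2
```

For example, `ar2_is_stationary(0.5, 0.3)` holds, while `ar2_is_stationary(1.5, 0.3)` fails because one root exceeds 1 in modulus.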

Intertemporal effect of shocks

In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model X_t = c + \varphi_1 X_{t-1} + \varepsilon_t. A non-zero value for \varepsilon_t at say time t=1 affects X_1 by the amount \varepsilon_1. Then by the AR equation for X_2 in terms of X_1, this affects X_2 by the amount \varphi_1 \varepsilon_1. Then by the AR equation for X_3 in terms of X_2, this affects X_3 by the amount \varphi_1^2 \varepsilon_1. Continuing this process shows that the effect of \varepsilon_1 never ends, although if the process is stationary then the effect diminishes toward zero in the limit.

Because each shock affects X values infinitely far into the future from when they occur, any given value Xt is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression

\phi (B)X_t= \varepsilon_t \,

(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as

X_t= \frac{1}{\phi (B)}\varepsilon_t \, .

When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to \varepsilon_t has an infinite order—that is, an infinite number of lagged values of \varepsilon_t appear on the right side of the equation.
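The polynomial division can be made concrete: the coefficients \psi_j of the infinite-order expansion of 1/\phi(B) satisfy the recursion \psi_0 = 1, \psi_j = \sum_i \varphi_i \psi_{j-i}. A small sketch (the function name is illustrative):

```python
def ma_infinity_weights(phis, n_terms=10):
    """Expand 1/phi(B) so that X_t = sum_j psi_j * eps_{t-j}.
    The weights satisfy psi_0 = 1 and psi_j = sum_i phi_i * psi_{j-i}."""
    psi = [1.0]
    for j in range(1, n_terms):
        psi.append(sum(phi * psi[j - i]
                       for i, phi in enumerate(phis, start=1) if j - i >= 0))
    return psi
```

For an AR(1) with \varphi = 0.5 the weights are 1, 0.5, 0.25, ...: the geometric decay of a shock's effect described above.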

Characteristic polynomial

The autocorrelation function of an AR(p) process can be expressed as

\rho(\tau) = \sum_{k=1}^p a_k y_k^{-|\tau|} ,

where y_k are the roots of the polynomial

\phi(B) = 1- \sum_{k=1}^p \varphi_k B^k

where B is the backshift operator, where \phi(.) is the function defining the autoregression, and where \varphi_k are the coefficients in the autoregression.

The autocorrelation function of an AR(p) process is a sum of decaying exponentials.

  • Each real root contributes a component to the autocorrelation function that decays exponentially.
  • Similarly, each pair of complex conjugate roots contributes an exponentially damped oscillation.

Graphs of AR(p) processes

The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.

For an AR(1) process with a positive \varphi, only the previous term in the process and the noise term contribute to the output. If \varphi is close to 0, then the process still looks like white noise, but as \varphi approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low pass filter.

For an AR(2) process, the previous two terms and the noise term contribute to the output. If both \varphi_1 and \varphi_2 are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If \varphi_1 is positive while \varphi_2 is negative, then the process favors changes in sign between terms of the process. The output oscillates.

Example: An AR(1) process

An AR(1) process is given by:

X_t = c + \varphi X_{t-1}+\varepsilon_t\,

where \varepsilon_t is a white noise process with zero mean and constant variance \sigma_\varepsilon^2. (Note: The subscript on \varphi_1 has been dropped.) The process is wide-sense stationary if |\varphi|<1 since it is obtained as the output of a stable filter whose input is white noise. (If \varphi=1 then X_t has infinite variance, and is therefore not wide sense stationary.) Consequently, assuming |\varphi|<1, the mean \operatorname{E} (X_t) is identical for all values of t. If the mean is denoted by \mu, it follows from

\operatorname{E} (X_t)=\operatorname{E} (c)+\varphi\operatorname{E} (X_{t-1})+\operatorname{E}(\varepsilon_t),



that

\mu = c + \varphi \mu + 0 ,

and hence

\mu = \frac{c}{1-\varphi} .
In particular, if c = 0, then the mean is 0.

The variance is

\textrm{var}(X_t) = \frac{\sigma_\varepsilon^2}{1-\varphi^2} ,
where \sigma_\varepsilon is the standard deviation of \varepsilon_t. This can be shown by noting that

\textrm{var}(X_t) = \varphi^2\textrm{var}(X_{t-1}) + \sigma_\varepsilon^2,

and then by noticing that the quantity above is a stable fixed point of this relation.
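The fixed-point argument can be checked numerically; iterating the variance relation converges to \sigma_\varepsilon^2/(1-\varphi^2) whenever |\varphi| < 1. A sketch (the function name is illustrative):

```python
def ar1_variance(phi, sigma_eps, iters=200):
    """Iterate var <- phi^2 * var + sigma_eps^2; for |phi| < 1 this
    converges to the stable fixed point sigma_eps^2 / (1 - phi^2)."""
    v = 0.0
    for _ in range(iters):
        v = phi**2 * v + sigma_eps**2
    return v
```

With \varphi = 0.5 and \sigma_\varepsilon = 1 the iteration settles at 1/(1 - 0.25) ≈ 1.333.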

The autocovariance is given by

B_n = \operatorname{E}(X_{t+n}X_t) - \mu^2 = \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|n|} .
It can be seen that the autocovariance function decays with a decay time (also called the time constant) of \tau=-1/\ln(\varphi) [to see this, write B_n=K\varphi^{|n|} where K is independent of n. Then note that \varphi^{|n|}=e^{|n|\ln\varphi} and match this to the exponential decay law e^{-n/\tau}].

The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:


\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\,\sum_{n=-\infty}^\infty B_n e^{-i\omega n} =\frac{1}{\sqrt{2\pi}}\,\left(\frac{\sigma_\varepsilon^2}{1+\varphi^2-2\varphi\cos(\omega)}\right).

This expression is periodic due to the discrete nature of the X_j, which is manifested as the cosine term in the denominator. If we assume that the sampling time (\Delta t=1) is much smaller than the decay time (\tau), then we can use a continuum approximation to B_n:

B(t)\approx \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|t|}

which yields a Lorentzian profile for the spectral density:

\Phi(\omega) \approx \frac{1}{\sqrt{2\pi}}\,\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\frac{\gamma}{\pi(\gamma^2+\omega^2)}

where \gamma=1/\tau is the angular frequency associated with the decay time \tau.

An alternative expression for X_t can be derived by first substituting c+\varphi X_{t-2}+\varepsilon_{t-1} for X_{t-1} in the defining equation. Continuing this process N times yields

X_t = c\sum_{k=0}^{N-1}\varphi^k + \varphi^N X_{t-N} + \sum_{k=0}^{N-1}\varphi^k\varepsilon_{t-k} .

For N approaching infinity, \varphi^N will approach zero and:

X_t = \frac{c}{1-\varphi} + \sum_{k=0}^\infty\varphi^k\varepsilon_{t-k} .
It is seen that X_t is white noise convolved with the \varphi^k kernel plus the constant mean. If the white noise \varepsilon_t is a Gaussian process then X_t is also a Gaussian process. In other cases, the central limit theorem indicates that X_t will be approximately normally distributed when \varphi is close to one.
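The convolution view can be checked directly: measured about its mean, X_t equals the truncated sum \sum_k \varphi^k \varepsilon_{t-k} plus c/(1-\varphi). A sketch under these assumptions (the function name is illustrative):

```python
def ar1_via_kernel(eps, phi, c=0.0):
    """X_t = c/(1-phi) + sum_{k=0}^{t} phi^k * eps_{t-k}:
    white noise convolved with the phi^k kernel plus the constant mean."""
    mean = c / (1 - phi)
    return [mean + sum(phi**k * eps[t - k] for k in range(t + 1))
            for t in range(len(eps))]
```

A single unit shock \varepsilon_0 = 1 then echoes as 1, \varphi, \varphi^2, ..., matching the intertemporal-effect discussion earlier.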

Choosing the maximum lag

The partial autocorrelation of an AR(p) process equals zero at lags larger than p, so the appropriate maximum lag is the largest lag beyond which the partial autocorrelations are all zero.

Calculation of the AR parameters

There are many ways to estimate the coefficients, such as the ordinary least squares procedure, the method of moments (through the Yule-Walker equations), or Markov chain Monte Carlo methods.

The AR(p) model is given by the equation

X_t = \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t.\,

It is based on parameters \varphi_i where i = 1, ..., p. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule-Walker equations.

Yule-Walker equations

The Yule-Walker equations are the following set of equations.

\gamma_m = \sum_{k=1}^p \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2\delta_{m,0},

where m = 0, ..., p, yielding p + 1 equations. Here \gamma_m is the autocovariance function of Xt, \sigma_\varepsilon is the standard deviation of the input noise process, and \delta_{m,0} is the Kronecker delta function.

Because the last part of an individual equation is non-zero only if m = 0, the set of equations can be solved by representing the equations for m > 0 in matrix form, thus getting the equation


\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \gamma_p \\ \end{bmatrix} = \begin{bmatrix} \gamma_0 & \gamma_{-1} & \gamma_{-2} & \dots \\ \gamma_1 & \gamma_0 & \gamma_{-1} & \dots \\ \gamma_2 & \gamma_{1} & \gamma_{0} & \dots \\ \vdots & \vdots & \vdots & \ddots \\ \gamma_{p-1} & \gamma_{p-2} & \gamma_{p-3} & \dots \\ \end{bmatrix} \begin{bmatrix} \varphi_{1} \\ \varphi_{2} \\ \varphi_{3} \\ \vdots \\ \varphi_{p} \\ \end{bmatrix}

which can be solved for all \{\varphi_m; m=1,2, \cdots ,p\}. The remaining equation for m = 0 is

\gamma_0 = \sum_{k=1}^p \varphi_k \gamma_{-k} + \sigma_\varepsilon^2 ,

which, once \{\varphi_m ; m=1,2, \cdots ,p \} are known, can be solved for \sigma_\varepsilon^2 .
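For p = 2 the m = 1, 2 equations form a 2×2 linear system that can be solved in closed form. A sketch (the function names and example values are illustrative):

```python
def yule_walker_ar2(g0, g1, g2):
    """Solve the m = 1, 2 Yule-Walker equations for (phi1, phi2):
        g1 = phi1*g0 + phi2*g1     (using gamma_{-1} = gamma_1)
        g2 = phi1*g1 + phi2*g0
    """
    det = g0 * g0 - g1 * g1
    phi1 = (g0 * g1 - g1 * g2) / det
    phi2 = (g0 * g2 - g1 * g1) / det
    return phi1, phi2

def noise_variance(phis, gammas):
    """Recover sigma_eps^2 from the m = 0 equation:
    gamma_0 = sum_k phi_k * gamma_k + sigma_eps^2."""
    return gammas[0] - sum(phi * g for phi, g in zip(phis, gammas[1:]))
```

For instance, the autocovariances γ0 = 1, γ1 = 0.625, γ2 = 0.5125 (those of an AR(2) with φ1 = 0.5, φ2 = 0.2) give back exactly those coefficients.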

An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first p+1 elements \rho(\tau) of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating [1]

\rho(\tau) = \sum_{k=1}^p \varphi_k \rho(k-\tau)

Examples for some low-order AR(p) processes

  • p=1
    • \gamma_1 = \varphi_1 \gamma_0
    • Hence \rho_1 = \gamma_1 / \gamma_0 = \varphi_1
  • p=2
    • The Yule-Walker equations for an AR(2) process are
      \gamma_1 = \varphi_1 \gamma_0 + \varphi_2 \gamma_{-1}
      \gamma_2 = \varphi_1 \gamma_1 + \varphi_2 \gamma_0
      • Remember that \gamma_{-k} = \gamma_k
      • Using the first equation yields \rho_1 = \gamma_1 / \gamma_0 = \frac{\varphi_1}{1-\varphi_2}
      • Using the recursion formula yields \rho_2 = \gamma_2 / \gamma_0 = \frac{\varphi_1^2 - \varphi_2^2 + \varphi_2}{1-\varphi_2}

Estimation of AR parameters

The above equations (the Yule-Walker equations) provide several routes to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:

  • Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
  • Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of Xt on the p previous values of the same series. This can be thought of as a forward-prediction scheme. The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule-Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
  • Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model:
X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon^*_t \,.
Here prediction of values of Xt would be based on the p future values of the same series. This way of estimating the AR parameters is due to Burg,[2] and is called the Burg method:[3] Burg and later authors called these particular estimates "maximum entropy estimates",[4] but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation.[5]

Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial p values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
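The forward-prediction least-squares scheme is especially simple for p = 1: regress X_t on X_{t-1}. A self-contained sketch with simulated data (the function name and parameter values are illustrative):

```python
import random

def fit_ar1_ols(x):
    """Conditional least-squares estimate of phi for a zero-mean AR(1):
    minimize sum_t (x_t - phi * x_{t-1})^2 over phi."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# simulate a zero-mean AR(1) with phi = 0.6, then estimate phi back
rng = random.Random(42)
x = [0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] + rng.gauss(0.0, 1.0))
phi_hat = fit_ar1_ols(x)
```

The estimate lands close to the true value 0.6, and converges further as the sample grows.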


Spectrum

The power spectral density of an AR(p) process with noise variance \operatorname{Var}(Z_t) = \sigma_Z^2 is[1]

S(f) = \frac{\sigma_Z^2}{| 1-\sum_{k=1}^p \varphi_k e^{-2 \pi i k f} |^2}.


For white noise (AR(0))

S(f) = \sigma_Z^2.


For AR(1)

S(f) = \frac{\sigma_Z^2}{| 1- \varphi_1 e^{-2 \pi i f} |^2}
    = \frac{\sigma_Z^2}{ 1 + \varphi_1^2 - 2 \varphi_1 \cos(2 \pi f) }
  • If \varphi_1 > 0 there is a single spectral peak at f=0, often referred to as red noise. As \varphi_1 approaches 1, there is stronger power at low frequencies, i.e. larger time lags. The process then acts as a low-pass filter: applied to full-spectrum light, everything except the red light would be filtered out.
  • If \varphi_1 < 0 there is a minimum at f=0, often referred to as blue noise. The process similarly acts as a high-pass filter: everything except the blue light would be filtered out.
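The red-noise/blue-noise behaviour can be checked numerically from the closed-form density (a sketch; the function name is illustrative):

```python
import math

def ar1_psd(phi, sigma2, f):
    """Power spectral density of an AR(1) process at frequency f in [0, 1/2]."""
    return sigma2 / (1 + phi**2 - 2 * phi * math.cos(2 * math.pi * f))

# phi > 0: peak at f = 0 (red noise); phi < 0: peak at f = 1/2 (blue noise)
```

With \varphi_1 = 0.5 the density at f = 0 is 1/(1-0.5)^2 = 4, against 1/(1+0.5)^2 ≈ 0.44 at f = 1/2.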


AR(2) processes can be split into three groups depending on the characteristics of their roots:

z_1,z_2 = \frac{1}{2}\left(\varphi_1 \pm \sqrt{\varphi_1^2 + 4\varphi_2}\right)
  • When \varphi_1^2 + 4\varphi_2 < 0, the process has a pair of complex-conjugate roots, creating a mid-frequency peak at:
f^* = \frac{1}{2\pi}\cos^{-1}\left(\frac{\varphi_1(\varphi_2-1)}{4\varphi_2}\right)

Otherwise the process has real roots, and:

  • When \varphi_1 > 0 it acts as a low-pass filter on the white noise with a spectral peak at f=0
  • When \varphi_1 < 0 it acts as a high-pass filter on the white noise with a spectral peak at f=1/2.

The process is stable when the roots are within the unit circle, or equivalently when the coefficients are in the triangle -1 \le \varphi_2 \le 1 - |\varphi_1|.

The full PSD function can be expressed in real form as:

S(f) = \frac{\sigma_Z^2}{1 + \varphi_1^2 + \varphi_2^2 - 2\varphi_1(1-\varphi_2)\cos(2\pi f) - 2\varphi_2\cos(4\pi f)}
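For the complex-root case, the mid-frequency peak of the real-form PSD can be verified against the f^* formula numerically. A sketch; the coefficient values are illustrative:

```python
import math

def ar2_psd(phi1, phi2, sigma2, f):
    """Real-form power spectral density of a stationary AR(2) process."""
    return sigma2 / (1 + phi1**2 + phi2**2
                     - 2 * phi1 * (1 - phi2) * math.cos(2 * math.pi * f)
                     - 2 * phi2 * math.cos(4 * math.pi * f))

phi1, phi2 = 0.5, -0.7                   # complex-conjugate roots, stationary
f_star = math.acos(phi1 * (phi2 - 1) / (4 * phi2)) / (2 * math.pi)
grid = [k / 1000.0 for k in range(501)]  # frequencies in [0, 1/2]
f_peak = max(grid, key=lambda f: ar2_psd(phi1, phi2, 1.0, f))
```

The grid maximum lands within grid resolution of f^*, confirming the peak-frequency formula for these coefficients.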

Implementations in statistics packages

  • R, the stats package includes an ar function.[6]
  • MATLAB and Octave: the TSA toolbox contains several estimation functions for univariate, multivariate, and adaptive autoregressive models.[7]

n-step-ahead forecasting

Once the parameters of the autoregression

X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t \,

have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use t to refer to the first period for which data is not yet available; substitute the known prior values Xt-i for i=1, ..., p into the autoregressive equation while setting the error term \varepsilon_t equal to zero (because we forecast Xt to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use t to refer to the next period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of X one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from prior steps.
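The procedure described above is a short recursion (a sketch; the function name is illustrative):

```python
def forecast_ar(phis, c, history, steps):
    """n-step-ahead forecasts: future error terms are set to their expected
    value of zero, and each forecast feeds back in as a lagged value."""
    vals = list(history)                 # most recent observation last
    out = []
    for _ in range(steps):
        nxt = c + sum(phi * vals[-1 - i] for i, phi in enumerate(phis))
        vals.append(nxt)
        out.append(nxt)
    return out
```

With an AR(1) having c = 0, \varphi = 0.5, and last observation 8, the forecasts decay geometrically: 4, 2, 1, ...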

There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term \varepsilon_t \, for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the n-step-ahead predictions; the confidence interval will become wider as n increases because of the use of an increasing number of estimated values for the right-side variables.

Evaluating the quality of forecasts

The predictive performance of the autoregressive model can be assessed as soon as estimation has been done if cross-validation is used. In this approach, some of the initially available data is used for parameter estimation purposes, and some (from observations later in the data set) is held back for out-of-sample testing. Alternatively, after some time has passed since the parameter estimation was conducted, more data will have become available and predictive performance can be evaluated using the new data.

In either case, there are two aspects of predictive performance that can be evaluated: one-step-ahead and n-step-ahead performance. For one-step-ahead performance, the estimated parameters are used in the autoregressive equation along with observed values of X for all periods prior to the one being predicted, and the output of the equation is the one-step-ahead forecast; this procedure is used to obtain forecasts for each of the out-of-sample observations. To evaluate the quality of n-step-ahead forecasts, the forecasting procedure in the previous section is employed to obtain the predictions.

Given a set of predicted values and a corresponding set of actual values for X for various time periods, a common evaluation technique is to use the mean squared prediction error; other measures are also available (see Forecasting#Forecasting accuracy).
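The mean squared prediction error is a one-liner (a minimal sketch; the function name is illustrative):

```python
def mspe(actual, predicted):
    """Mean squared prediction error over matched forecast periods."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

For example, actual values [1, 2, 3] against forecasts [1, 1, 3] give an MSPE of 1/3.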

The question of how to interpret the measured forecasting accuracy arises—for example, what is a "high" (bad) or a "low" (good) value for the mean squared prediction error? There are two possible points of comparison. First, the forecasting accuracy of an alternative model, estimated under different modeling assumptions or different estimation techniques, can be used for comparison purposes. Second, the out-of-sample accuracy measure can be compared to the same measure computed for the in-sample data points (that were used for parameter estimation) for which enough prior data values are available (that is, dropping the first p data points, for which p prior data points are not available). Since the model was estimated specifically to fit the in-sample points as well as possible, it will usually be the case that the out-of-sample predictive performance will be poorer than the in-sample predictive performance. But if the predictive quality deteriorates out-of-sample by "not very much" (which is not precisely definable), then the forecaster may be satisfied with the performance.

References
  • Mills, Terence C. (1990) Time Series Techniques for Economists. Cambridge University Press
  • Percival, Donald B. and Andrew T. Walden. (1993) Spectral Analysis for Physical Applications. Cambridge University Press
  • Pandit, Sudhakar M. and Wu, Shien-Ming. (1983) Time Series and System Analysis with Applications. John Wiley & Sons
  • Yule, G. Udny (1927) Philosophical Transactions of the Royal Society of London, Ser. A, Vol. 226, 267–298.
  • Walker, Gilbert (1931) Proceedings of the Royal Society of London, Ser. A, Vol. 131, 518–532.

External links

  • AutoRegression Analysis (AR) by Paul Bourke

This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply.