Deviance information criterion

The deviance information criterion (DIC) is a hierarchical modeling generalization of the AIC (Akaike information criterion) and BIC (Bayesian information criterion, also known as the Schwarz criterion). It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. Like AIC and BIC, it is an asymptotic approximation as the sample size becomes large. It is only valid when the posterior distribution is approximately multivariate normal.

Define the deviance as D(\theta) = -2\log(p(y|\theta)) + C, where y are the data, \theta are the unknown parameters of the model and p(y|\theta) is the likelihood function. C is a constant that cancels out in all calculations that compare different models, and which therefore does not need to be known.
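As a concrete illustration, the following is a minimal Python sketch of the deviance for a hypothetical normal model with known standard deviation; the model, the data y, and the function and parameter names are assumptions made for illustration and are not part of the article.

```python
import numpy as np
from scipy.stats import norm

def deviance(theta, y, sigma=1.0):
    """Deviance D(theta) = -2 * log p(y | theta) for an illustrative normal
    model with mean theta and known standard deviation sigma. The additive
    constant C is set to zero, since it cancels when comparing models."""
    log_likelihood = np.sum(norm.logpdf(y, loc=theta, scale=sigma))
    return -2.0 * log_likelihood
```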

The expectation \bar{D} = \mathbf{E}^\theta[D(\theta)], taken with respect to the posterior distribution of \theta, is a measure of how well the model fits the data; the larger this is, the worse the fit.

There are two calculations in common usage for the effective number of parameters of the model. The first, as described in Spiegelhalter et al. (2002, p. 587), is p_D = \bar{D} - D(\bar{\theta}), where \bar{\theta} is the expectation (posterior mean) of \theta. The second, as described in Gelman et al. (2004, p. 182), is p_D = p_V = \frac{1}{2}\widehat{\operatorname{var}}\left(D(\theta)\right), half the posterior variance of the deviance. The larger the effective number of parameters is, the easier it is for the model to fit the data, and so the deviance needs to be penalized.
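As a rough sketch of how the two versions can be estimated from MCMC output (assuming posterior draws theta_samples and a deviance function such as the illustrative one above; all names here are hypothetical):

```python
import numpy as np

def effective_parameters(theta_samples, y, deviance_fn):
    """Estimate the two common versions of the effective number of
    parameters from posterior draws theta_samples."""
    D_samples = np.array([deviance_fn(t, y) for t in theta_samples])
    D_bar = D_samples.mean()                     # posterior mean deviance
    theta_bar = np.mean(theta_samples, axis=0)   # posterior mean of theta
    p_D = D_bar - deviance_fn(theta_bar, y)      # Spiegelhalter et al. (2002)
    p_V = 0.5 * D_samples.var(ddof=1)            # Gelman et al. (2004)
    return p_D, p_V
```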

The deviance information criterion is calculated as

\mathit{DIC} = p_D+\bar{D},

or, substituting p_D = \bar{D} - D(\bar{\theta}), equivalently as

\mathit{DIC} = D(\bar{\theta})+2 p_D.

From this latter form, the connection with Akaike's information criterion is evident: AIC has the same structure, with the deviance evaluated at the maximum likelihood estimate instead of at \bar{\theta}, and with the actual number of parameters in place of p_D.

The idea is that models with smaller DIC should be preferred to models with larger DIC. Models are penalized both by the value of \bar{D}, which favors a good fit, and (in common with AIC and BIC) by the effective number of parameters p_D. Since \bar{D} will decrease as the number of parameters in a model increases, the p_D term compensates for this effect by favoring models with a smaller number of parameters.

The advantage of DIC over other criteria in the case of Bayesian model selection is that the DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation. AIC and BIC require calculating the likelihood at its maximum over \theta, which is not readily available from the MCMC simulation. But to calculate DIC, simply compute \bar{D} as the average of D(\theta) over the samples of \theta, and D(\bar{\theta}) as the value of D evaluated at the average of the samples of \theta. Then the DIC follows directly from these approximations. Claeskens and Hjort (2008, Ch. 3.5) show that the DIC is large-sample equivalent to the natural model-robust version of the AIC.
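A minimal sketch of this recipe, again assuming a set of posterior draws and an illustrative deviance function as defined earlier (the function names are hypothetical):

```python
import numpy as np

def dic(theta_samples, y, deviance_fn):
    """Estimate DIC = D(theta_bar) + 2 * p_D from MCMC draws of theta."""
    D_samples = np.array([deviance_fn(t, y) for t in theta_samples])
    D_bar = D_samples.mean()                                    # average of D(theta) over draws
    D_at_mean = deviance_fn(np.mean(theta_samples, axis=0), y)  # D at the posterior mean
    p_D = D_bar - D_at_mean
    return D_at_mean + 2.0 * p_D                                # equivalently D_bar + p_D
```

In practice one would pass the retained MCMC draws for each candidate model to such a routine and prefer the model with the smaller value.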

In the derivation of DIC, it is assumed that the specified parametric family of probability distributions that generate future observations encompasses the true model. This assumption does not always hold, and it is desirable to consider model assessment procedures in that scenario. Also, the observed data are used both to construct the posterior distribution and to evaluate the estimated models, so DIC tends to select over-fitted models. These issues were addressed by Ando (2007), who proposed the Bayesian predictive information criterion (BPIC). Ando (2010, Ch. 8) provides a discussion of various Bayesian model selection criteria. To avoid the over-fitting problems of DIC, Ando (2011) developed Bayesian model selection criteria from a predictive viewpoint. The criterion is calculated as

\mathit{IC} = \bar{D} + 2p_D = -2\mathbf{E}^\theta[\log(p(y|\theta))] + 2p_D.

The first term is a measure of how well the model fits the data, while the second term is a penalty on the model complexity. Note that the p in this expression is the predictive distribution rather than the likelihood above.

References

  • Ando, T. (2010). Bayesian Model Selection and Statistical Modeling. CRC Press. Chapter 7.
  • Claeskens, G. and Hjort, N. L. (2008). Model Selection and Model Averaging. Cambridge University Press. Section 3.5.
  • van der Linde, A. (2005). "DIC in variable selection". Statistica Neerlandica 59: 45–56. doi:10.1111/j.1467-9574.2005.00278.x