Algorithmic probability

In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. In a theoretical sense, the prior is universal. It is used in inductive inference theory and in the analysis of algorithms. Because it is not computable, it must be approximated in practice.[1]

It addresses the questions: given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis for how it was caused from among all possible hypotheses, how can one evaluate the competing hypotheses, and how can one predict future data?

Algorithmic probability combines several ideas: Occam's razor, Epicurus' principle of multiple explanations, and special coding methods from modern computing theory. The prior obtained from the formula is used in Bayes' rule for prediction.[2]
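
In symbols, writing M for the universal prior, this is the familiar form of Bayes' rule with M substituted for the prior (a schematic rendering; the notation here is ours, not the article's):

    P(H_i \mid D) = \frac{P(D \mid H_i)\, M(H_i)}{\sum_j P(D \mid H_j)\, M(H_j)}

where D is the observed data and the H_j range over the hypotheses (programs) under consideration.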

Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.[3]

In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.[4]

A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.[5] The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.

Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation: the simplest hypothesis (the shortest program) receives the highest probability, and increasingly complex hypotheses (longer programs) receive increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (the invariance theorem), and which can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
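
In the usual formulation (following, e.g., Li and Vitányi; the notation here is ours), a hypothesis encoded as a program p for a universal machine U receives the prior weight 2^{-\ell(p)}, where \ell(p) is the length of p in bits, so each additional bit of program length halves the probability. Writing M_U(x) for the resulting prior probability of an observation x, the invariance theorem states that for any two universal machines U and V there is a constant c > 0, depending only on U and V, such that

    c^{-1}\, M_V(x) \;\le\; M_U(x) \;\le\; c\, M_V(x)  \quad \text{for all } x,

so changing the reference machine shifts the prior by at most a bounded multiplicative factor.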

Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,[6] publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."[7] He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I[8] and Part II.[9]

He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The universal probability distribution is the probability distribution over all possible output strings produced by random input.[10]

The algorithmic probability of any given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Certain long objects with short programs have high probability.
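
In symbols (notation as above), M(q) = \sum_{p \,:\, U(p) = q*} 2^{-\ell(p)}, where the sum is over programs p whose output begins with q. Because the true sum is incomputable, any concrete computation can only approximate it from below by enumerating programs up to a bounded length on a fixed machine. The sketch below does this for a deliberately tiny toy interpreter (entirely our own invention, not a universal machine, and not prefix-free, so the numbers are only illustrative): it shows that a regular prefix such as 0000, which has short generating programs, accumulates more weight than an irregular one.

    from itertools import product

    def run_toy(program: str, max_out: int):
        """Toy interpreter (hypothetical): a program is 1^k 0 w, and its
        output is the payload w repeated (k+1) times, truncated to max_out.
        Programs with no '0' header terminator are treated as diverging."""
        if "0" not in program:
            return None
        k = program.index("0")      # unary repeat count: k ones before the first 0
        w = program[k + 1:]         # payload
        if not w:
            return ""
        return (w * (k + 1))[:max_out]

    def approx_M(q: str, max_len: int = 14) -> float:
        """Lower-bound approximation of the algorithmic probability of the
        prefix q on the toy machine: sum 2^(-len(p)) over all programs p
        of length <= max_len whose output starts with q."""
        total = 0.0
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                p = "".join(bits)
                out = run_toy(p, len(q))
                if out is not None and out.startswith(q):
                    total += 2.0 ** (-n)
        return total

    print(approx_M("0000"))  # regular prefix: several short programs contribute
    print(approx_M("0110"))  # less regular prefix: fewer, longer programs contribute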

Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations. It was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal theory of inductive inference, Solomonoff's is mathematically rigorous.
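
Concretely, after seeing a sequence x_1 ... x_n, Solomonoff's predictor assigns to each possible next symbol a the conditional probability

    P(x_{n+1} = a \mid x_1 \dots x_n) = \frac{M(x_1 \dots x_n a)}{M(x_1 \dots x_n)},

the ratio of the universal probability of the extended sequence to that of the sequence observed so far.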

Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. Solomonoff invented a single universal prior probability that can be substituted for each actual prior probability in Bayes's rule, with Kolmogorov complexity emerging as a by-product.[11]
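
The connection can be stated precisely. For the discrete version m of the universal prior, Levin's coding theorem (a standard result, not stated in this article) gives

    -\log_2 m(x) = K(x) + O(1),

where K(x) is the prefix Kolmogorov complexity of x: assigning high universal probability to x is the same, up to an additive constant in the exponent, as x having a short description.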

Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this is a variant of Leonid Levin's search algorithm,[12] which limits the time spent computing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include the use of training sequences.
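
A minimal sketch of the time-allocation idea (our own simplification of Levin-style search, with hypothetical run_program and is_success interfaces): the total step budget doubles each phase, and within phase t a program of length n receives 2^(t-n) steps, so shorter programs always get exponentially more time.

    from itertools import product

    def levin_style_search(run_program, is_success, max_phase=20):
        """Levin-style search sketch: in phase t a program of length n gets
        2**(t - n) steps, so every length class receives the same total
        budget per phase and shorter programs run longer individually."""
        for t in range(1, max_phase + 1):
            for n in range(1, t + 1):
                steps = 2 ** (t - n)
                for bits in product("01", repeat=n):
                    p = "".join(bits)
                    out = run_program(p, steps)   # assumed to respect the step limit
                    if out is not None and is_success(out):
                        return p, t               # shorter successful programs are found first
        return None

    # Example (reusing the toy interpreter sketched earlier, with the step
    # limit stood in for by the output bound):
    #   levin_style_search(lambda p, s: run_toy(p, 4),
    #                      lambda out: out.startswith("0000"))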

References

  1. ^ Hutter, M., Legg, S., and Vitányi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
  2. ^ Li, M. and Vitányi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008, p. 347.
  3. ^ Ibid., p. 341.
  4. ^ Ibid., p. 339.
  5. ^ Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.
  6. ^ Solomonoff, R., "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73–88, August 1997.
  7. ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Mass. (Nov. 1960 revision of the Feb. 4, 1960 report).
  8. ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part I", Information and Control, Vol. 7, No. 1, pp. 1–22, March 1964.
  9. ^ Solomonoff, R., "A Formal Theory of Inductive Inference, Part II", Information and Control, Vol. 7, No. 2, pp. 224–254, June 1964.
  10. ^ Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning", The Computer Journal, Vol. 46, No. 6, p. 598, 2003.
  11. ^ Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", IEEE Information Theory Society Newsletter, Vol. 61, No. 1, March 2011, p. 11.
  12. ^ Levin, L.A., "Universal Search Problems", Problemy Peredaci Informacii, 9, pp. 115–116, 1973.

Further reading

  • Rathmanner, S. and Hutter, M., "A Philosophical Treatise of Universal Induction", Entropy, 2011, 13, 1076–1136: a very clear philosophical and mathematical analysis of Solomonoff's theory of inductive inference.

External links

  • Detailed description of algorithmic probability in Scholarpedia
  • Solomonoff's publications