Conditional probability

In probability theory, a conditional probability is the probability that an event will occur given that another event is known to occur or to have occurred. If the events are A and B respectively, this is described as "the probability of A given B". It is commonly denoted by P(A|B), or sometimes PB(A). P(A|B) may or may not be equal to P(A), the unconditional probability of A. If the two are equal, A and B are said to be independent. For example, if a fair coin is flipped twice, "the outcome of the second flip" is independent of "the outcome of the first flip".

In the Bayesian interpretation of probability, the conditioning event is interpreted as evidence for the conditioned event: P(A) is the probability of A before accounting for the evidence E, and P(A|E) is the probability of A after accounting for it. (Under the frequentist interpretation, P(A|E) is instead the long-run relative frequency of A among trials in which E occurs.)

Definition

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space with P(B) > 0, the conditional probability of A given B is defined as the quotient of the joint probability of A and B, and the probability of B:

P(A|B) = \frac{P(A \cap B)}{P(B)}

This may be visualized as restricting the sample space to B. The logic behind this equation is that if the outcomes are restricted to B, this set serves as the new sample space.

Note that this is a definition, not a theorem: the quantity P(A ∩ B)/P(B) is simply denoted P(A|B) and called the conditional probability of A given B.
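As a concrete illustration of this quotient, the following minimal Python sketch restricts a small sample space to B and renormalises; the coin-flip events and helper names (prob, cond_prob) are illustrative choices, not part of the definition.

```python
from fractions import Fraction

# Sample space for two flips of a fair coin; each outcome is equally likely.
omega = [(a, b) for a in "HT" for b in "HT"]

def prob(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

def cond_prob(A, B):
    """P(A|B) = P(A and B) / P(B), defined only when P(B) > 0."""
    A_and_B = [w for w in A if w in B]
    return prob(A_and_B) / prob(B)

A = [w for w in omega if w[1] == "H"]   # second flip is heads
B = [w for w in omega if w[0] == "H"]   # first flip is heads
print(cond_prob(A, B))                  # 1/2, equal to P(A): the flips are independent
```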

As an axiom of probability

Some authors, such as De Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A \cap B) = P(A|B)P(B)

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events:[1]

P(A \cup B) = P(A) + P(B)
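A short sketch of the multiplication axiom in use, with an illustrative example (drawing two cards without replacement from a standard deck) that is not taken from the article:

```python
from fractions import Fraction

# Multiplication rule: P(A and B) = P(A|B) P(B).
p_first_ace = Fraction(4, 52)                # P(B): the first card is an ace
p_second_ace_given_first = Fraction(3, 51)   # P(A|B): the second is an ace given the first was
p_both_aces = p_second_ace_given_first * p_first_ace
print(p_both_aces)                           # 1/221
```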

Definition with σ-algebra

If P(B) = 0, then P(A|B) is left undefined by the simple definition above. However, it is possible to define a conditional probability with respect to a σ-algebra of such events (such as those arising from a continuous random variable).

For example, if X and Y are non-degenerate and jointly continuous random variables with joint density f_{X,Y}(x, y), then, if B has positive measure,

P(X \in A \mid Y \in B) = \frac{\int_{y\in B}\int_{x\in A} f_{X,Y}(x,y)\,dx\,dy}{\int_{y\in B}\int_{x\in\Omega} f_{X,Y}(x,y)\,dx\,dy} .

The case where B has zero measure can be dealt with directly only when B = {y_0} represents a single point, in which case

P(X \in A \mid Y = y_0) = \frac{\int_{x\in A} f_{X,Y}(x,y_0)\,dx}{\int_{x\in\Omega} f_{X,Y}(x,y_0)\,dx} .

If A has measure zero then the conditional probability is zero. An indication of why the more general case of zero measure cannot be dealt with in a similar way can be seen by noting that the limit, as all \delta y_i approach zero, of

P(X \in A \mid Y \in \bigcup_i[y_i,y_i+\delta y_i]) \approx \frac{\sum_{i} \int_{x\in A} f_{X,Y}(x,y_i)\,dx\,\delta y_i}{\sum_{i}\int_{x\in\Omega} f_{X,Y}(x,y_i) \,dx\, \delta y_i}

depends on the relationship of the \delta y_i as they approach zero. See conditional expectation for more information.
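As a numerical sketch of conditioning on the single point Y = y_0, one can approximate the two integrals above for an assumed joint density; the bivariate normal and the SciPy quadrature below are illustrative choices only.

```python
from scipy import integrate
from scipy.stats import multivariate_normal

# Joint density f_{X,Y}: a standard bivariate normal with correlation 0.5
# (an illustrative choice; the article leaves f_{X,Y} arbitrary).
rho = 0.5
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
f_xy = lambda x, y: rv.pdf([x, y])

y0 = 1.0   # condition on the zero-measure event Y = y0

# P(X in A | Y = y0) with A = (0, inf); the tails are truncated at +/-10,
# where this density is negligible.
num, _ = integrate.quad(lambda x: f_xy(x, y0), 0.0, 10.0)
den, _ = integrate.quad(lambda x: f_xy(x, y0), -10.0, 10.0)
print(num / den)   # ~0.72 for this particular density
```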

Conditioning on a random variable

Conditioning on an event may be generalized to conditioning on a random variable. Let X be a random variable; for simplicity, assume X is discrete, taking values x_n. Let A be an event. The conditional probability of A given X is defined as the random variable:

P(A|X) \text{ taking on the value } P(A\mid X=x_n) \text{ if } X=x_n

More formally:

P(A|X)(\omega)=P(A\mid X=X(\omega)) .

The conditional probability P(A|X) is a function of X: if the function g is defined as

g(x)= P(A\mid X=x),

then

P(A|X) = g\circ X.

Note that P(A|X) and X are now both random variables. From the law of total probability, the expected value of P(A|X) is equal to the unconditional probability of A.
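A small sketch of P(A|X) as the random variable g(X), using an assumed two-stage experiment (roll a fair die, then flip that many fair coins) that is not from the article; it also checks that the expected value of P(A|X) equals P(A).

```python
from fractions import Fraction

def g(x):
    """g(x) = P(A | X = x) = 1 - (1/2)^x, where A = "at least one head in x flips"."""
    return 1 - Fraction(1, 2) ** x

# P(A|X) is the random variable g(X).  By the law of total probability,
# its expected value over the die roll X equals the unconditional P(A).
p_A = sum(Fraction(1, 6) * g(x) for x in range(1, 7))
print(p_A)   # 107/128
```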

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we must predict the outcome.

  • Let A be the value rolled on die 1
  • Let B be the value rolled on die 2

What is the probability that A = 2? Table 1 shows the sample space of 36 equally likely outcomes. A = 2 in 6 of these outcomes, so P(A = 2) = 6/36 = 1/6.

Table 1. The sum A + B for each of the 36 outcomes
       B=1  B=2  B=3  B=4  B=5  B=6
A=1     2    3    4    5    6    7
A=2     3    4    5    6    7    8
A=3     4    5    6    7    8    9
A=4     5    6    7    8    9   10
A=5     6    7    8    9   10   11
A=6     7    8    9   10   11   12

Suppose it is revealed that A + B ≤ 5. Table 2 shows that A + B ≤ 5 for exactly 10 of the 36 outcomes. For 3 of these 10 outcomes, A = 2, so the probability that A = 2 given that A + B ≤ 5 is P(A = 2 | A + B ≤ 5) = 3/10 = 0.3. The enumeration sketch after Table 2 verifies these counts.

Table 2. The sum A + B; the 10 outcomes with A + B ≤ 5 are marked with *
       B=1  B=2  B=3  B=4  B=5  B=6
A=1     2*   3*   4*   5*   6    7
A=2     3*   4*   5*   6    7    8
A=3     4*   5*   6    7    8    9
A=4     5*   6    7    8    9   10
A=5     6    7    8    9   10   11
A=6     7    8    9   10   11   12
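The enumeration sketch referred to above; it simply counts outcomes in the tables (the variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of the two dice from Tables 1 and 2.
outcomes = list(product(range(1, 7), repeat=2))

A = [(a, b) for a, b in outcomes if a == 2]        # die 1 shows 2
B = [(a, b) for a, b in outcomes if a + b <= 5]    # A + B <= 5

print(Fraction(len(A), len(outcomes)))                   # 1/6
print(Fraction(len([w for w in A if w in B]), len(B)))   # 3/10
```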

Statistical independence

Events A and B are defined to be statistically independent if:

P(A \cap B) \ = \ P(A) P(B)
\Leftrightarrow P(A|B) \ = \ P(A)
\Leftrightarrow P(B|A) \ = \ P(B).

That is, the occurrence of A does not affect the probability of B, and vice versa. Although the derived forms may seem more intuitive, they are not the preferred definition, since the conditional probabilities may be undefined when P(A) or P(B) is 0; moreover, the preferred definition is symmetric in A and B.
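A brief sketch checking the product definition on a pair of dice events; the particular events (A = "die 1 is even", B = "the sum is 7") are an assumed example, not from the article, and happen to be independent.

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))
P = lambda E: Fraction(len(E), len(omega))   # uniform measure on the 36 outcomes

A = [w for w in omega if w[0] % 2 == 0]   # die 1 is even
B = [w for w in omega if sum(w) == 7]     # the sum is 7
AB = [w for w in A if w in B]

print(P(AB) == P(A) * P(B))    # True: A and B are independent
print(P(AB) / P(B) == P(A))    # True: equivalently, P(A|B) = P(A)
```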

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[2] The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

P(B|A) = \frac{P(A|B) P(B)}{P(A)}.

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).

Alternatively, noting that AB = BA, and applying conditional probability:

P(A \cap B) = P(A|B)P(B)= P(B \cap A) = P(B|A)P(A)

Rearranging gives the result.
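A numerical sketch of how far P(A|B) can be from P(B|A); the disease prevalence and test characteristics below are assumed illustrative figures, not data from the article.

```python
# Illustrative numbers: a disease with 1% prevalence and a test with
# 90% sensitivity, P(positive | disease), and a 9% false-positive rate.
p_disease = 0.01
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.09

# P(positive) by the law of total probability
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(disease | positive) is far from P(positive | disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)   # ~0.092, not 0.90
```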

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the formula for total probability:

P(A) \, = \, \sum_n P(A \cap B_n) \, = \, \sum_n P(A|B_n)P(B_n).

where the events B_n form a countable partition of the sample space, i.e. B_i \cap B_j = \emptyset for i \neq j and \bigcup_n B_n = \Omega. This fallacy may arise through selection bias.[3] For example, in the context of a medical claim, let S_C be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases C does not cause S, so that P(S_C) is low. Suppose also that medical attention is sought only if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S_C) is high; the probability actually observed by the doctor is P(S_C|H).
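A minimal sketch of the total-probability formula over an assumed two-event partition, showing that the marginal P(A) need not resemble either conditional probability (the numbers are illustrative):

```python
# Law of total probability over an illustrative partition {B1, B2}.
p_B = [0.95, 0.05]            # P(B1), P(B2): a partition of the sample space
p_A_given_B = [0.02, 0.60]    # P(A|B1), P(A|B2)

p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
print(p_A)   # 0.049: the marginal P(A) is close to neither conditional probability
```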

Over- or under-weighting priors

Failing to take prior probability into account, partially or completely, is called base rate neglect. The opposite error, insufficient adjustment away from the prior probability, is called conservatism.

Formal derivation

Formally, P(A|B) is defined as the probability of A according to a new probability function on the sample space, under which outcomes not in B have probability 0 and which is consistent (up to rescaling) with the original probability measure.[4][5]

Let Ω be a sample space with elementary events {ω}. Suppose we are told the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. For events in B, it is reasonable to assume that the relative magnitudes of the probabilities will be preserved. For some constant scale factor α, the new distribution will therefore satisfy:

\text{1. }\omega \in B : P(\omega|B) = \alpha P(\omega)
\text{2. }\omega \notin B : P(\omega|B) = 0
\text{3. }\sum_{\omega \in \Omega} {P(\omega|B)} = 1.

Substituting 1 and 2 into 3 to select α:

\begin{align}
\sum_{\omega \in \Omega} {P(\omega | B)} &= \sum_{\omega \in B} {\alpha P(\omega)} + \cancelto{0}{\sum_{\omega \notin B} 0} \\
&= \alpha \sum_{\omega \in B} {P(\omega)} \\
&= \alpha \cdot P(B)
\end{align}

\implies \alpha = \frac{1}{P(B)}

So the new probability distribution is

\text{1. }\omega \in B : P(\omega|B) = \frac{P(\omega)}{P(B)}
\text{2. }\omega \notin B : P(\omega| B) = 0

Now for a general event A,

\begin{align}
P(A|B) &= \sum_{\omega \in A \cap B} {P(\omega | B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega|B)} \\
&= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\
&= \frac{P(A \cap B)}{P(B)}
\end{align}
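The derivation above amounts to zeroing out outcomes outside B and rescaling by α = 1/P(B); a minimal sketch over an assumed three-outcome sample space:

```python
from fractions import Fraction

# Start from a probability mass function on elementary outcomes, zero out
# everything outside B, and rescale by alpha = 1/P(B).  The outcome labels
# here are illustrative.
p = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
B = {"w1", "w2"}

p_B = sum(p[w] for w in B)
p_given_B = {w: (p[w] / p_B if w in B else Fraction(0)) for w in p}

A = {"w2", "w3"}
print(sum(p_given_B[w] for w in A))   # 1/3 = P(A ∩ B) / P(B)
```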

See also

References

External links

  • MathWorld.
  • F. Thomas Bruss, Der Wyatt-Earp-Effekt oder die betörende Macht kleiner Wahrscheinlichkeiten ("The Wyatt Earp effect, or the beguiling power of small probabilities", in German), Spektrum der Wissenschaft (German edition of Scientific American), Vol. 2, 110–113, 2007.
  • Conditional Probability Problems with Solutions