Chebyshev inequality

For the similarly named inequality involving series, see Chebyshev's sum inequality.
Nor should it be confused with Chebyshev's inequalities for the prime-counting function \pi(x) in number theory.

In probability theory, Chebyshev's inequality (also spelled Tchebysheff's inequality; Russian: Нера́венство Чебышева) guarantees that, for any probability distribution, "nearly all" values are close to the mean. The precise statement is that no more than 1/k² of the distribution's values can be more than k standard deviations away from the mean; equivalently, at least 1 − 1/k² of the distribution's values lie within k standard deviations of the mean. The inequality has great utility because it applies to completely arbitrary distributions (unknown except for their mean and variance); for example, it can be used to prove the weak law of large numbers.

In practical usage, in contrast to the empirical rule (which applies only to normal distributions), Chebyshev's inequality guarantees only that a minimum of 75% of values lie within two standard deviations of the mean and 88.89% within three standard deviations.[1][2]

The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis.

History

The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé.[3]:98 The theorem was first stated without proof by Bienaymé in 1853[4] and later proved by Chebyshev in 1867.[5] His student Andrey Markov provided another proof in his 1884 PhD thesis.[6]

Statement

Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces.

Probabilistic statement

Let X be a random variable with finite expected value μ and finite non-zero variance σ2. Then for any real number k > 0,

   \Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}.
 

Only the case k > 1 provides useful information. When k < 1 the right-hand side is greater than one, so the inequality becomes vacuous, as the probability of any event cannot be greater than one. When k = 1 it just says the probability is less than or equal to one, which is always true for probabilities.

As an example, using k = √2 shows that at least half of the values lie in the interval (μ − √2 σ, μ + √2 σ).

Because it can be applied to completely arbitrary distributions (unknown except for mean and variance), the inequality generally gives a poor bound compared to what might be possible if something is known about the distribution involved.

k       Min. % within k standard deviations of mean    Max. % beyond k standard deviations from mean
1       0%                                              100%
√2      50%                                             50%
2       75%                                             25%
3       88.8889%                                        11.1111%
4       93.75%                                          6.25%
5       96%                                             4%
6       97.2222%                                        2.7778%
7       97.9592%                                        2.0408%
8       98.4375%                                        1.5625%
9       98.7654%                                        1.2346%
10      99%                                             1%
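
The table values follow directly from the bound; a minimal sketch (illustrative only, using nothing beyond the formula 1 − 1/k²) reproduces them:

    # Reproduce the table above: at least 1 - 1/k^2 of any distribution
    # lies within k standard deviations of the mean.
    import math

    for k in [1, math.sqrt(2), 2, 3, 4, 5, 6, 7, 8, 9, 10]:
        within = max(0.0, 1 - 1 / k**2)   # minimum fraction within k sigma
        beyond = 1 - within               # maximum fraction beyond k sigma
        print(f"k = {k:7.4f}   min within = {within:9.4%}   max beyond = {beyond:9.4%}")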

Measure-theoretic statement

Let (X, Σ, μ) be a measure space, and let f be an extended real-valued measurable function defined on X. Then for any real number t > 0,

\mu(\{x\in X\,:\,\,|f(x)|\geq t\}) \leq {1\over t^2} \int_X |f|^2 \, d\mu.

More generally, if g is an extended real-valued measurable function, nonnegative and nondecreasing on the range of f, then

\mu(\{x\in X\,:\,\,f(x)\geq t\}) \leq {1\over g(t)} \int_X g\circ f\, d\mu.

The previous statement then follows by defining g(t) as t^2 if t\ge 0 and 0 otherwise, and taking |f| instead of f.

Example

Suppose we randomly select a journal article from a source with an average of 1000 words per article and a standard deviation of 200 words. We can then infer that the probability that it has between 600 and 1400 words (i.e. within k = 2 standard deviations of the mean) must be at least 75%, because by Chebyshev's inequality there is no more than a 1/k² = 1/4 chance of falling outside that range. But if we additionally know that the distribution is normal, we can say there is a 75% chance the word count is between 770 and 1230 (which is an even tighter bound).
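
A short numerical check of this example (a sketch only; the normal-distribution interval is recomputed with Python's standard-library NormalDist, and the figures 1000 and 200 are those stated above):

    from statistics import NormalDist

    mu, sigma = 1000, 200          # stated mean and standard deviation of word counts
    k = 2                          # two standard deviations: 600 to 1400 words

    cheb_within = 1 - 1 / k**2     # Chebyshev: at least 75% of articles in (600, 1400)
    print(f"Chebyshev lower bound for 600-1400 words: {cheb_within:.0%}")

    # If the distribution were normal, the central 75% interval is much narrower:
    z = NormalDist().inv_cdf(0.5 + 0.75 / 2)       # ~1.15
    print(f"Normal 75% interval: ({mu - z * sigma:.0f}, {mu + z * sigma:.0f})")   # ~(770, 1230)
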
Note

This example should be treated with caution as the inequality is only stated for probability distributions rather than for finite sample sizes. The inequality has since been extended to apply to finite sample sizes (see below).

Sharpness of bounds

As shown in the example above, the theorem will typically provide rather loose bounds. However, the bounds provided by Chebyshev's inequality cannot, in general (remaining sound for variables of arbitrary distribution), be improved upon. For example, for any k ≥ 1, the following example meets the bounds exactly.

   X = \begin{cases}
       -1, & \text{with probability }\frac{1}{2k^2} \\
        0, & \text{with probability }1 - \frac{1}{k^2} \\
        1, & \text{with probability }\frac{1}{2k^2}
       \end{cases}
 

For this distribution, the mean μ = 0 and the standard deviation σ = 1/k, so
   \Pr(|X-\mu| \ge k\sigma) = \Pr(|X|\ge1) = \frac{1}{k^2}.
 

Equality holds only for distributions that are a linear transformation of this one.
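
A quick exact check, for an illustrative choice of k, that this three-point distribution attains the bound (a sketch; the value k = 3 is arbitrary):

    import math

    k = 3.0                                     # any k >= 1
    pts  = [-1.0, 0.0, 1.0]
    prob = [1 / (2 * k**2), 1 - 1 / k**2, 1 / (2 * k**2)]

    mu    = sum(p * x for x, p in zip(pts, prob))               # 0
    var   = sum(p * (x - mu)**2 for x, p in zip(pts, prob))     # 1/k^2
    sigma = math.sqrt(var)                                      # 1/k

    tail = sum(p for x, p in zip(pts, prob) if abs(x - mu) >= k * sigma)
    print(tail, 1 / k**2)   # both equal 1/k^2: the bound is attained exactly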

Proof (of the two-sided version)

Probabilistic proof

Markov's inequality states that for any non-negative random variable Y and any positive number a, we have Pr(Y ≥ a) ≤ E(Y)/a. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable Y = (X − μ)² with a = (kσ)², since Pr(|X − μ| ≥ kσ) = Pr((X − μ)² ≥ (kσ)²).

It can also be proved directly. For any event A, let IA be the indicator random variable of A, i.e. IA equals 1 if A occurs and 0 otherwise. Then

\begin{align} & {} \qquad \Pr(|X-\mu| \geq k\sigma) = \operatorname{E}(I_{|X-\mu| \geq k\sigma}) = \operatorname{E}(I_{[(X-\mu)/(k\sigma)]^2 \geq 1}) \\[6pt] & \leq \operatorname{E}\left(\left({X-\mu \over k\sigma} \right)^2 \right) = {1 \over k^2} {\operatorname{E}((X-\mu)^2) \over \sigma^2} = {1 \over k^2}. \end{align}

The direct proof shows why the bounds are quite loose in typical cases: the number 1 to the right of "≥" is replaced by [(X − μ)/(kσ)]2 to the left of "≥" whenever the latter exceeds 1. In some cases it exceeds 1 by a very wide margin.

Measure-theoretic proof

Fix t and let A_t be defined as A_t = \{x\in X\mid f(x)\ge t\}, and let 1_{A_t} be the indicator function of the set A_t. Then, it is easy to check that, for any x,

0\leq g(t) 1_{A_t}\leq g(f(x))\,1_{A_t},

since g is nondecreasing on the range of f, and therefore,

\begin{align}g(t)\mu(A_t)&=\int_X g(t)1_{A_t}\,d\mu\\ &\leq\int_{A_t} g\circ f\,d\mu\\ &\leq\int_X g\circ f\,d\mu.\end{align}

The desired inequality follows from dividing the above inequality by g(t).

Extensions

Several extensions of Chebyshev's inequality have been developed.

Asymmetric two-sided case

An asymmetric two-sided version of this inequality is also known.[7]

When the distribution is asymmetric or is unknown

P( k_1 < X < k_2 ) \ge \frac{ 4 [ ( \mu - k_1 )( k_2 - \mu ) - \sigma^2 ] }{ ( k_2 - k_1 )^2 } ,

where σ2 is the variance and μ is the mean.
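
A small helper that evaluates this bound for a given mean, variance and pair of endpoints (a sketch; the function name and the clipping to [0, 1] are mine):

    def asymmetric_chebyshev_lower_bound(mu, var, k1, k2):
        """Lower bound on P(k1 < X < k2) for any X with mean mu and variance var,
        following the asymmetric two-sided inequality above."""
        assert k1 < mu < k2, "the interval must contain the mean"
        bound = 4 * ((mu - k1) * (k2 - mu) - var) / (k2 - k1) ** 2
        return max(0.0, min(1.0, bound))

    # The symmetric special case k1 = mu - 2*sigma, k2 = mu + 2*sigma recovers 1 - 1/4:
    print(asymmetric_chebyshev_lower_bound(0.0, 1.0, -2.0, 2.0))   # 0.75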

Bivariate case

A version for the bivariate case is known.[8]

Let X1 and X2 be two random variables with means μ1 and μ2 and finite variances σ1² and σ2² respectively. Then

P( k_{ 11 } \le X_1 \le k_{ 12 }, k_{ 21 } \le X_2 \le k_{ 22 }) \ge 1 - \sum T_i

where for i = 1,2,

T_i = \frac{ 4 \sigma_i^2 + [ 2 \mu_i - ( k_{ i1 } + k_{ i2 } ) ]^2 } { ( k_{ i2 } - k_{ i1 } )^2 }

Two correlated variables

Berge derived an inequality for two correlated variables X1 and X2.[9] Let ρ be the correlation coefficient between X1 and X2 and let σi2 be the variance of Xi. Then

P\left( \bigcap_{ i = 1}^2 \left[ \frac{ | X_i - \mu_i | } { \sigma_i } < k \right] \right) \ge 1 - \frac{ 1 + \sqrt{ 1 - \rho^2 } } { k^2 }

Lal later obtained an alternative bound[10]

P\left( \bigcap_{ i = 1}^2 \left[ \frac{ | X_i - \mu_i | }{ \sigma_i } \le k_i \right] \right) \ge 1 - \frac{ k_1^2 + k_2^2 + \sqrt{ ( k_1^2 + k_2^2 )^2 - 4 k_1^2 k_2^2 \rho } } { 2 ( k_1 k_2 )^2 }

Isii derived a further generalisation.[11] Let

Z = P\left( \bigcap_{ i = 1}^2 ( - k_1 < X_i < k_2 )\right)

with 0 < k1 ≤ k2.

There are now three cases.

Case A: If 2k_1^2 > 1 - \rho and k_2 - k_1 \ge 2 \lambda where

\lambda = \frac{ k_1( 1 + \rho ) + \sqrt{ ( 1 - \rho^2 )( k_1^2 + \rho ) } }{ 2k_1 - 1 + \rho }

then

Z \le \frac{ 2 \lambda^2 } { 2 \lambda^2 + 1 + \rho }

Case B: If the conditions in case A are not met but k1k2 ≥ 1 and

2 ( k_1 k_2 - 1 )^2 \ge 2( 1 - \rho^2 ) + ( 1 - \rho )( k_2 - k_1 )^2

then

Z \le \frac{ ( k_2 - k_1 )^2 + 4 + \sqrt{ 16 ( 1 - \rho^2 ) + 8 ( 1 - \rho )( k_2 - k_1 ) } }{ ( k_1 +k_2 )^2 }

Case C: If the conditions in cases A or B are not met there is no universal bound other than 1.

Multivariate case

The general case is known as the Birnbaum–Raymond–Zuckerman inequality after the authors who proved it for two dimensions.[12]

P\left[ \sum_{ i = 1 }^n \frac{ ( X_i - \mu_i )^2 }{ \sigma_i^2 t_i^2 } \ge k^2 \right] \le \frac{ 1 }{ k^2 } \sum_{ i = 1 }^n \frac{ 1 }{ t_i^2 }

where Xi is the ith random variable, μi is the ith mean and σi2 is the ith variance.

If the variables are independent this inequality can be sharpened.[13]

P\left[ \bigcap_{i = 1 }^n \frac{ | X_i - \mu_i | }{ \sigma_i } \le k_i \right] \ge \prod_{ i = 1 }^n \left( 1 - \frac{ 1 }{ k_i^2 } \right)

Olkin and Pratt derived an inequality for n correlated variables.[14]

P\left( \bigcap_{i = 1 }^n \frac{ | X_i - \mu_i | }{ \sigma_i } < k_i \right) \ge 1 - \frac{ \left[ \sqrt{ u } + \sqrt{ n - 1 } \sqrt{ n \sum \frac{ 1 }{ k_i^2 } - u } \right]^2 }{ n^2 }

where the sum is taken over the n variables and

u = \sum_{ i = 1 }^n \frac{ 1 }{ k_i^2 } + 2 \sum_{ i < j } \frac{ \rho_{ ij } } { k_i k_j }

where ρij is the correlation between Xi and Xj.

Olkin and Pratt's inequality was subsequently generalised by Godwin.[15]

Vector version

Ferentinos[8] has shown that, for a vector X = (x1, x2, x3, ...) with mean μ = (μ1, μ2, μ3, ...), variance σ² = (σ1², σ2², σ3², ...) and an arbitrary norm || · ||,

P(|| X - \mu || \ge k || \sigma ||) \le \frac{ 1 } { k^2 }

A second related inequality has also been derived by Chen.[16] Let N be the dimension of the stochastic vector X and let E[X] be the mean of X. Let S be the covariance matrix and k > 0. Then

P( ( X - E[ X ] )^T S^{ -1 } ( X - E[ X ] ) < k ) \ge 1 - \frac{ N }{ k }

where YT is the transpose of Y.
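
A Monte-Carlo sanity check of Chen's bound (a sketch; the bivariate normal, its parameters and the use of numpy are illustrative choices, not part of the result):

    import numpy as np

    rng = np.random.default_rng(0)
    mean = np.array([1.0, -2.0])
    S = np.array([[2.0, 0.6],
                  [0.6, 1.0]])             # covariance matrix
    N, k = 2, 5.0                          # dimension and threshold

    X = rng.multivariate_normal(mean, S, size=200_000)
    d = X - mean
    m2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)   # (X-mu)^T S^-1 (X-mu)

    print((m2 < k).mean())   # empirical probability (about 0.92 here)
    print(1 - N / k)         # Chen's lower bound: 0.6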

Infinite dimensions

There is a straightforward extension of the vector version of Chebyshev's inequality to infinite dimensional settings. Let X be a random variable which takes values in a Fréchet space \mathcal X (equipped with seminorms \|\cdot\|_\alpha). This includes most common settings of vector-valued random variables, e.g., when \mathcal X is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.

Suppose that X is of "strong order two", meaning that

\mathbb E\big(\| X\|_\alpha^2 \big) < \infty

for every seminorm \|\cdot\|_\alpha. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.[17]

Let \mu \in \mathcal X be the Pettis integral of X (i.e., the vector generalization of the mean), and let \sigma_\alpha := \sqrt{\mathbb E\|X - \mu\|_\alpha^2} be the standard deviation with respect to the seminorm \|\cdot\|_\alpha.

In this setting, the general version of Chebyshev's inequality states that

\mathbb P\big( \|X - \mu\|_\alpha \ge k \sigma_\alpha \big) \le \frac{ 1 } { k^2 }

for all k > 0.

Proof. The proof is straightforward, and essentially the same as the finitary version. If \sigma_\alpha = 0, then X is constant (and equal to \mu) almost surely, so the inequality is trivial.

On the event \|X - \mu\|_\alpha \ge k \sigma_\alpha we have that \|X - \mu\|_\alpha > 0, so we may safely divide by \|X - \mu\|_\alpha. The crucial trick in Chebyshev's inequality is to recognize that 1 = \tfrac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2}.

We calculate:

\mathbb P\big( \|X - \mu\|_\alpha \ge k \sigma_\alpha \big) = \int_\Omega 1_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d \mathbb P = \int_\Omega \frac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2} \cdot 1_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d \mathbb P \le \int_\Omega \frac{\|X - \mu\|_\alpha^2}{(k\sigma_\alpha)^2} \cdot 1_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d \mathbb P.

Next, we use the fact that an indicator function is bounded above by 1 to calculate that this is bounded by

\frac{1}{k^2 \sigma_\alpha^2} \int_\Omega \|X - \mu\|_\alpha^2 \, \mathrm d \mathbb P = \frac{\mathbb E\|X - \mu\|_\alpha^2}{k^2 \sigma_\alpha^2} = \frac{\sigma_\alpha^2}{k^2 \sigma_\alpha^2} = \frac{1}{k^2}.

This completes the proof. ⃞

Higher moments

An extension to higher moments is also possible:

P( | X - \operatorname{ E } ( X ) | \ge k ) \le \frac{ \operatorname{ E }(| X - \operatorname{ E }( X ) |^n ) } { k^n }

where k > 0 and n ≥ 2.

Exponential version

A related inequality sometimes known as the exponential Chebyshev's inequality[18] is the inequality

P(X \ge \varepsilon) \le e^{ -t \varepsilon } \operatorname{ E } (e^{ t X })

where t > 0.

Let K(t) be the cumulant generating function of X,

K( t ) = \log( \operatorname{ E } ( e^{ t X } ) ).

Taking the Legendre–Fenchel transformation of K(t) and using the exponential Chebyshev's inequality we have

- \log( P( X \ge \varepsilon ) ) \ge \sup_t( t \varepsilon - K( t ) ).

This inequality may be used to obtain exponential inequalities for unbounded variables.[19]
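
As a concrete illustration (a sketch that assumes X is standard normal, so that K(t) = t²/2), optimising the exponential bound over a grid of t recovers the Gaussian-type tail bound e^(−ε²/2), which for moderate ε is far tighter than Chebyshev's 1/ε²:

    import math

    def exponential_chebyshev_bound(eps, mgf, t_grid):
        """inf over the grid of exp(-t*eps) * E(exp(tX))."""
        return min(math.exp(-t * eps) * mgf(t) for t in t_grid)

    mgf = lambda t: math.exp(t**2 / 2)     # standard normal: K(t) = t^2/2
    t_grid = [i / 100 for i in range(1, 1001)]
    for eps in (1.0, 2.0, 3.0):
        expo = exponential_chebyshev_bound(eps, mgf, t_grid)
        print(eps, expo, math.exp(-eps**2 / 2), 1 / eps**2)   # grid value, exact, Chebyshev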

Inequalities for bounded variables

If X has finite support contained in the interval [a, b], let M = max( |a|, |b| ) where |x| is the absolute value of x. If the mean of X is zero then for all k > 0[20]

\frac{ E( | X |^r ) - k^r }{ M^r } \le P( | X | \ge k ) \le \frac{ E( | X |^r ) }{ k^r }

The second of these inequalities with r = 2 is the Chebyshev bound. The first provides a lower bound for P( | X | ≥ k ).


Sharp bounds for a bounded variate have been derived by Niemitalo[21]

Let 0 ≤ XM where M > 0. Then

Case 1

P( X < k ) = 0 \text{ if } E( X ) > k \text{ and } E( X^2 ) < k E( X ) + M E( X ) - kM


Case 2

P( X < k ) \ge 1 - \frac{ k E( X ) + M E( X ) - E( X^2 ) }{ kM }


\text{ if } [ E( X )> k \text{ and } E( X^2 ) \ge kE( X ) + ME( X ) - kM ] \text{ or } [ E( X ) \le k \text{ and } E( X^2 ) \ge kE( X ) ]


Case 3

P( X < k ) \ge \frac{ E( X )^2 - 2 k E( X ) + k^2 }{ E( X^2 ) - 2 k E( X ) + k^2 } \text{ if } E( X ) \le k \text{ and } E( X^2 )< kE( X )
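
A direct transcription of the three cases into a helper function (a sketch; the function name, argument order and the example distribution are mine):

    def bounded_lower_bound(k, M, e1, e2):
        """Lower bound on P(X < k) for a variate with 0 <= X <= M,
        E(X) = e1 and E(X^2) = e2, following the three cases above."""
        if e1 > k and e2 < k * e1 + M * e1 - k * M:            # case 1
            return 0.0
        if (e1 > k and e2 >= k * e1 + M * e1 - k * M) or \
           (e1 <= k and e2 >= k * e1):                         # case 2
            return 1 - (k * e1 + M * e1 - e2) / (k * M)
        return (e1**2 - 2 * k * e1 + k**2) / (e2 - 2 * k * e1 + k**2)   # case 3

    # Example: X uniform on [0, 1], so M = 1, E(X) = 1/2, E(X^2) = 1/3.
    print(bounded_lower_bound(0.75, 1.0, 0.5, 1 / 3))   # ~0.43, a valid lower bound on the true 0.75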

Finite samples

Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known but are instead replaced by their sample estimates.[22]

P( | X - m | \ge ks ) \le \frac{ g_{ N + 1 }\left( \frac{ N k^2 }{ N - 1 + k^2 } \right) }{ N + 1 } \left( \frac{ N }{ N + 1 } \right)^{ 1 / 2 }

where N is the sample size, m is the sample mean, k is a constant and s is the sample standard deviation. The function gQ(x) is defined as follows.

Let x ≥ 1, Q = N + 1, and let R be the greatest integer less than Q / x. Let

a^2 = \frac{ Q( Q - R ) } { 1 + R( Q - R ) }

Now

g_Q(x) = R \quad \text{if }R \text{ is even}
g_Q(x) = R \quad \text{if }R \text{ is odd and }x < a^2
g_Q(x) = R - 1 \quad \text{if } R \text{ is odd and } x \ge a^2

This inequality holds when the population moments do not exist and when the sample is weakly exchangeably distributed.
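
Because of the function gQ, the Saw–Yang–Mo bound is most easily stated as a small algorithm; a direct transcription (a sketch; the helper names are mine) is:

    import math

    def g(Q, x):
        """g_Q(x) as defined above, for x >= 1."""
        qx = Q / x
        R = math.floor(qx)
        if R == qx:              # R is the greatest integer strictly less than Q/x
            R -= 1
        a2 = Q * (Q - R) / (1 + R * (Q - R))
        if R % 2 == 0 or x < a2:
            return R
        return R - 1

    def saw_yang_mo_bound(N, k):
        """Upper bound on P(|X - m| >= k s) for a sample of size N."""
        x = N * k**2 / (N - 1 + k**2)
        return g(N + 1, x) / (N + 1) * math.sqrt(N / (N + 1))

    print(saw_yang_mo_bound(100, 3))   # compare with the distributional bound 1/9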

Kabán gives a somewhat less complex version of this inequality.[23]

P( | X - m | \ge ks ) \le \frac{ 1 }{ [ N( N + 1 ) ]^{ 1 / 2 } }\left[ \left( \frac{ N - 1 }{ k^2 } + 1 \right) \right]

If the standard deviation is a multiple of the mean then a further inequality can be derived,[23]

P( | X - m | \ge ks ) \le \frac{ N - 1 }{ N } \frac{ 1 }{ k^2 } \frac{ s^2 }{ m^2 } + \frac{ 1 }{ N }

A table of values for the Saw–Yang–Mo inequality for finite sample sizes (N < 100) has been determined by Konijn.[24]

For fixed N and large k the Saw–Yang–Mo inequality is approximately[25]

P( | X - m | \ge ks ) \le \frac{ 1 }{ N + 1 }

Beasley et al have suggested a modification of this inequality[25]

P( | X - m | \ge ks ) \le \frac{ 1 }{ k^2( N + 1 ) }

In empirical testing this modification is conservative but appears to have low statistical power. Its theoretical basis currently remains unexplored.

Dependence of sample size

The bounds these inequalities give for a finite sample are less tight than those the Chebyshev inequality gives for a distribution. To illustrate this, let the sample size be N = 100 and let k = 3. Chebyshev's inequality states that at most approximately 11.11% of the distribution can lie outside these limits; Kabán's version of the inequality for a finite sample states that at most approximately 12.05% of the sample lies outside these limits. The dependence of the confidence intervals on sample size is illustrated further below.

For N = 10, the 95% confidence interval is approximately ±13.5789 standard deviations.

For N = 100 the 95% confidence interval is approximately ±4.9595 standard deviations; the 99% confidence interval is approximately ±140.0 standard deviations.

For N = 500 the 95% confidence interval is approximately ±4.5574 standard deviations; the 99% confidence interval is approximately ±11.1620 standard deviations.

For N = 1000 the 95% and 99% confidence intervals are approximately ±4.5141 and approximately ±10.5330 standard deviations respectively.

The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively.

Comparative bounds

Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. Samuelson's inequality states that all values of a sample will lie within √(N − 1) standard deviations of the mean. Chebyshev's bound improves as the sample size increases.

When N = 10, Samuelson's inequality states that all members of the sample lie within 3 standard deviations of the mean: in contrast Chebyshev's states that 95% of the sample lies within 13.5789 standard deviations of the mean.

When N = 100, Samuelson's inequality states that all members of the sample lie within approximately 9.9499 standard deviations of the mean: Chebyshev's states that 99% of the sample lies within 140.0 standard deviations of the mean.

When N = 500, Samuelson's inequality states that all members of the sample lie within approximately 22.3383 standard deviations of the mean: Chebyshev's states that 99% of the sample lies within 11.1620 standard deviations of the mean.

It is likely that better bounds for finite samples than these exist.

Sharpened bounds

Chebyshev's inequality is important because of its applicability to any distribution. As a result of its generality it may not (and usually does not) provide as sharp a bound as alternative methods that can be used if the distribution of the random variable is known. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed.

Standardised variables

Sharpened bounds can be derived by first standardising the random variable.[26]

Let X be a random variable with finite variance Var(X). Let Z be the standardised form defined as

Z = \frac {X - \operatorname{E}(X) } { \operatorname{Var}(X)^{ 1/2 } }

Cantelli's lemma is then

P(Z \ge k) \le \frac{ 1 } { 1 + k^2 }

This inequality is sharp; it is attained by the distribution that takes the values k and −1/k with probabilities 1/(1 + k²) and k²/(1 + k²) respectively.

If k > 1 and the distribution of X is symmetric then we have

P(Z \ge k) \le \frac { 1 } { 2 k^2 } .

Equality holds if and only if Z = −k, 0 or k with probabilities 1/(2k²), 1 − 1/k² and 1/(2k²) respectively.[26] An extension to a two-sided inequality is also possible.

Let u, v > 0. Then we have[26]

P(Z \le -u \text{ or } Z \ge v) \le \frac{ 4 + (u - v)^2 } { (u + v)^2 } .

Semivariances

An alternative method of obtaining sharper bounds is through the use of semivariances (partial moments). The upper (σ₊²) and lower (σ₋²) semivariances are defined as

\sigma_+^2 = \frac { \sum (x - m)^2 } { n - 1 }
\sigma_-^2 = \frac { \sum (m - x)^2 } { n - 1 }

where m is the arithmetic mean of the sample, n is the number of elements in the sample and the sum for the upper (lower) semivariance is taken over the elements greater (less) than the mean.

The variance of the sample is the sum of the two semivariances

\sigma^2 = \sigma_+^2 + \sigma_-^2

In terms of the lower semivariance Chebyshev's inequality can be written[27]

\Pr(x \le m - a \sigma_-) \le \frac { 1 } { a^2 }

Putting

a = \frac{ k \sigma } { \sigma_- }

Chebyshev's inequality can now be written

\Pr(x \le m - k \sigma) \le \frac { 1 } { k^2 } \frac { \sigma_-^2 } { \sigma^2 }

A similar result can also be derived for the upper semivariance.

If we put

\sigma_u^2 = \max(\sigma_-^2, \sigma_+^2) ,

Chebyshev's inequality can be written

\Pr(x \le m - k \sigma) \le \frac { 1 } { k^2 } \frac { \sigma_u^2 } { \sigma^2 } .

Because σu² ≤ σ², use of the semivariance sharpens the original inequality.

If the distribution is known to be symmetric, then

\sigma_+^2 = \sigma_-^2 = \frac{ 1 } { 2 } \sigma^2

and

\Pr(x \le m - k \sigma) \le \frac { 1 } { 2 k^2 } .

This result agrees with that derived using standardised variables.
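
A sketch of the computation on a sample (illustrative only; the toy data are mine and the definitions are exactly those given above):

    def lower_semivariance_bound(data, k):
        """Upper bound on Pr(x <= m - k*sigma) using the lower semivariance."""
        n = len(data)
        m = sum(data) / n
        var_minus = sum((m - x)**2 for x in data if x < m) / (n - 1)   # lower semivariance
        var_plus  = sum((x - m)**2 for x in data if x > m) / (n - 1)   # upper semivariance
        var = var_plus + var_minus                                     # total variance
        return (1 / k**2) * (var_minus / var)

    data = [1, 2, 2, 3, 3, 3, 4, 9, 15, 40]   # a right-skewed toy sample
    print(lower_semivariance_bound(data, 2))  # well below Chebyshev's 1/4 for the lower tail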

Note
The inequality with the lower semivariance has been found to be of use in estimating downside risk in finance and agriculture.[27][28][29]

Selberg's inequality

Selberg derived a bound for P(a ≤ X ≤ b).[30] To simplify the notation let

Y = \alpha X + \beta

where

\alpha = \frac{ 2 k }{ b - a }

and

\beta = \frac{ - ( b + a ) k }{ b - a }.

The result of this linear transformation is to make P(a ≤ X ≤ b) equal to P(|Y| ≤ k).

The mean (μX) and variance (σX²) of X are related to the mean (μY) and variance (σY²) of Y:

\mu_Y = \alpha \mu_X + \beta
\sigma_Y^2 = \alpha^2 \sigma_X^2.

With this notation Selberg's inequality states that

P( | Y | < k ) \ge \frac{ ( k - \mu_Y )^ 2 }{ ( k - \mu_Y )^2 + \sigma_Y^2 } \quad\text{ if }\quad \sigma_Y^2 \le \mu_Y ( k - \mu_Y )
P( | Y | < k ) \ge 1 - \frac{ \sigma_Y^2 + \mu_Y^2 }{ k^2 } \quad\text{ if }\quad \mu_Y ( k - \mu_Y ) \le \sigma_Y^2 \le k^2 - \mu_Y^2
P( | Y | < k ) \ge 0 \quad\text{ if }\quad k^2 - \mu_Y^2 \le \sigma_Y^2.

These are known to be the best possible bounds.[31]
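
A sketch of Selberg's bound for P(a ≤ X ≤ b); since the scaling constant k cancels out of the final bound it is fixed at 1 here, and the function name is mine:

    def selberg_lower_bound(mu_x, var_x, a, b):
        """Lower bound on P(a <= X <= b) given only the mean and variance of X."""
        k = 1.0                              # any k > 0 gives the same bound
        alpha = 2 * k / (b - a)
        beta = -(b + a) * k / (b - a)
        mu_y = alpha * mu_x + beta           # mean of Y = alpha*X + beta
        var_y = alpha**2 * var_x             # variance of Y
        if var_y <= mu_y * (k - mu_y):
            return (k - mu_y)**2 / ((k - mu_y)**2 + var_y)
        if var_y <= k**2 - mu_y**2:
            return 1 - (var_y + mu_y**2) / k**2
        return 0.0

    # The symmetric interval mu +/- 2*sigma recovers the Chebyshev figure of 75%:
    print(selberg_lower_bound(0.0, 1.0, -2.0, 2.0))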

Cantelli's inequality

Cantelli's inequality[32] due to Francesco Paolo Cantelli states that for a real random variable (X) with mean (μ) and variance (σ2)

P(X - \mu \ge a) \le \frac{\sigma^2}{ \sigma^2 + a^2 }

where a ≥ 0.

This inequality can be used to prove a one-tailed variant of Chebyshev's inequality with k > 0[33]

\Pr(X - \mu \geq k \sigma) \leq \frac{ 1 }{ 1 + k^2 }.

The bound on the one-tailed variant is known to be sharp. To see this consider the random variable X that takes the values

X = 1 with probability \frac{ \sigma^2 } { 1 + \sigma^2 }
X = - \sigma^2 with probability \frac{ 1 } { 1 + \sigma^2 }.

Then E(X) = 0 and E(X2) = σ2 and P(X < 1) = 1 / (1 + σ2).

An application – distance between the mean and the median

The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let μ, ν, and σ be respectively the mean, the median, and the standard deviation. Then

\left | \mu - \nu \right | \leq \sigma.

There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite.

The proof is as follows. Setting k = 1 in the statement for the one-sided inequality gives:

\Pr(X - \mu \geq \sigma) \leq \frac{ 1 }{ 2 }.

Changing the sign of X and of μ, we get

\Pr(X \leq \mu - \sigma) \leq \frac{ 1 }{ 2 }.

Since any median ν satisfies both Pr(X ≤ ν) ≥ 1/2 and Pr(X ≥ ν) ≥ 1/2, these two bounds force μ − σ ≤ ν ≤ μ + σ. Thus the median is within one standard deviation of the mean.

For a proof using Jensen's inequality see An inequality relating means and medians.

Bhattacharyya's inequality

Bhattacharyya[34] extended Cantelli's inequality using the third and fourth moments of the distribution.

Let μ = 0 and σ² be the variance. Let γ = E(X³)/σ³ and κ = E(X⁴)/σ⁴.

If k2kγ − 1 > 0 then

P(X > k\sigma) \le \frac{ \kappa - \gamma^2 - 1 }{ (\kappa - \gamma^2 - 1) (1 + k^2) + (k^2 - k\gamma - 1) }.

The necessity of k2kγ − 1 > 0 requires that k be reasonably large.

Mitzenmacher and Upfal's inequality

Mitzenmacher and Upfal[35] note that

[ X - E( X ) ]^{ 2k } \ge 0

for any real k > 0 and that

E ( [ X - E( X ) ]^{ 2k } )

is the 2kth central moment. They then show that for t > 0

P( | X - E( X ) | > t [ E( X - E( X ) )^{ 2k } ]^{ 1 / 2k } ) \le \min\left[ 1, \frac{ 1 }{ t^{ 2k } } \right].

For k = 1 we obtain Chebyshev's inequality. For t ≥ 1, k > 1 and assuming that the 2kth moment exists, this bound is tighter than Chebyshev's inequality.

Related inequalities

Several other related inequalities are also known.

Zelen's inequality

Zelen has shown that[36]

P( X - \mu \ge k \sigma ) \le [ 1 + k^2 + \frac{ ( k^2 - k \theta_3 - 1 )^2 }{ \theta_4 - \theta_3^2 - 1 } ]^{ -1 }

with

k \ge \frac{ \theta_3 + \sqrt{ \theta_3^2 + 4 } }{ 2 }

and

\theta_m = \frac{ M_m }{ \sigma^m }

where Mm is the mth moment and σ is the standard deviation.

He, Zhang and Zhang's inequality

For any collection of n nonnegative independent random variables Xi with expectation 1,[37]

P\left[ \frac{ \sum_{ i = 1 }^n X_i }{ n } - 1 \ge \frac{ 1 }{ n } \right] \le \frac{ 7 }{ 8 }

Hoeffding’s lemma

Let X be a random variable with a ≤ X ≤ b and E[ X ] = 0. Then for any s > 0, we have

E[ e^{ sX } ] \le e^{ \frac{ s^2 ( b - a )^2 }{ 8 } }

van Zuijlen's bound

van Zuijlen has proved the following result.[38]

Let Xi be a set of independent Rademacher random variables: P( Xi = 1 ) = P( Xi = −1 ) = 0.5. Then

P \Bigl( \Bigl| \frac{ \sum_{ i = 1 }^n X_i } { \sqrt n } \Bigr| \le 1 \Bigr) \ge 0.5.

The bound is sharp and better than that which can be derived from the normal distribution (approximately P > 0.31).

Unimodal distributions

A distribution function F is unimodal at ν if F is convex on (−∞, ν) and concave on (ν, ∞).[39] An empirical distribution can be tested for unimodality with the dip test.[40]

In 1823 Gauss showed that for a unimodal distribution with a mode of zero[41]

P( | X | \ge k ) \le \frac{ 4 \operatorname{ E }( X^2 ) } { 9k^2 } \quad\text{if} \quad k^2 \ge \frac{ 4 } { 3 } \operatorname{E} (X^2),
P( | X | \ge k ) \le 1 - \frac{ k } { \sqrt{ 3 \operatorname{ E }( X^2 ) } } \quad \text{if} \quad k^2 \le \frac{ 4 } { 3 } \operatorname{ E }( X^2 ).

If the second condition holds then the second bound is always less than or equal to the first.

If the mode (ν) is not zero and the mean (μ) and standard deviation (σ) are both finite then denoting the root mean square deviation from the mode by ω, we have

\sigma \le \omega \le 2 \sigma,

and

| \nu - \mu | \le \sqrt{ \frac{ 3 }{ 4 } } \omega.

Winkler in 1866 extended Gauss' inequality to rth moments [42] where r > 0 and the distribution is unimodal with a mode of zero:

P( | X | \ge k ) \le \left( \frac{ r } { r + 1 } \right)^r \frac{ \operatorname{ E }( | X |^r ) } { k^r } \quad \text{if} \quad k^r \ge \frac{ r^r } { ( r + 1 )^{ r + 1 } } \operatorname{ E }( | X |^r ),
P( | X | \ge k) \le 1 - \left[ \frac{ k^r }{ ( r + 1 ) \operatorname{ E }( | X |^r ) } \right]^{ 1 / r } \quad \text{if} \quad k^r \le \frac{ r^r } { ( r + 1 )^{ r + 1 } } \operatorname{ E }( | X |^r ).

Gauss' bound has been subsequently sharpened and extended to apply to departures from the mean rather than the mode: see the Vysochanskiï–Petunin inequality for details.

The Vysochanskiï–Petunin inequality has been extended by Dharmadhikari and Joag-Dev[43]

P( | X | > k ) \le \max\left( \left[ \frac{ r }{( r + 1 ) k } \right]^r \operatorname{E}( | X |^r ), \frac{ s }{( s - 1 ) k^r } \operatorname{E}( | X |^r ) - \frac{ 1 }{ s - 1 } \right)

where s is a constant satisfying both s > r + 1 and s( s − r − 1 ) = r^r, and r > 0.

It can be shown that these inequalities are the best possible and that further sharpening of the bounds requires that additional restrictions be placed on the distributions.

Unimodal symmetrical distributions

The bounds on this inequality can also be sharpened if the distribution is both unimodal and symmetrical.[44] An empirical distribution can be tested for symmetry with a number of tests including McWilliam's R*.[45] It is known that the variance of a unimodal symmetrical distribution with finite support [a, b] is less than or equal to (b − a)²/12.[46]

Let the distribution be supported on the finite interval [ −NN ] and the variance be finite. Let the mode of the distribution be zero and rescale the variance to 1. Let k > 0 and assume k < 2N/3. Then[44]

P( X \ge k ) \le \frac{ 1 }{ 2 } - \frac{ k }{ 2 \sqrt{ 3 } } \quad \text{if} \quad 0 \le k \le \frac{ 2 }{ \sqrt{ 3 } },
P( X \ge k ) \le \frac{ 2 }{ 9k^2 } \quad \text{if} \quad \frac{ 2 }{ \sqrt{ 3 } } \le k \le \frac{ 2N }{ 3 }.

If 0 < k ≤ 2 / √3 the bounds are reached with the density[44]

f( x ) = \frac{ 1 }{ 2 \sqrt{ 3 } } \quad \text{if} \quad | x | < \sqrt{ 3 }
f( x ) = 0 \quad \text{if} \quad | x | \ge \sqrt{ 3 }

If 2 / √3 < k ≤ 2N / 3 the bounds are attained by the distribution

( 1 - \beta_k ) \delta_0 ( x ) + \beta_k f_k( x ),

where βk = 4/(3k²), δ0 is the Dirac delta function and where

f_k( x ) = \frac{ 1 }{ 3k } \quad \text{if} \quad | x | < \frac{ 3k }{ 2 },
f_k( x ) = 0 \quad \text{if} \quad | x | \ge \frac{ 3k }{ 2 }.

The existence of these densities shows that the bounds are optimal. Since N is arbitrary these bounds apply to any value of N.

The Camp–Meidell inequality is a related inequality.[47] For an absolutely continuous unimodal and symmetrical distribution

P( | X - \mu | \ge k \sigma ) \le 1 - \frac{ k }{ \sqrt{ 3 } } \quad \text{if} \quad k \le \frac{ 2 }{ \sqrt { 3 } }
P( | X - \mu | \ge k \sigma ) \le \frac{ 4 }{ 9k^2 } \quad \text{if} \quad k > \frac{ 2 }{ \sqrt { 3 } }

The second of these inequalities is the same as the Vysochanskiï–Petunin inequality.

DasGupta has shown that if the distribution is known to be normal[48]

P( | X - \mu | \ge k \sigma ) \le \frac{ 1 }{ 3 k^2 }

Notes

Effects of symmetry and unimodality

Symmetry of the distribution decreases the inequality's bounds by a factor of 2 while unimodality sharpens the bounds by a factor of 4/9.

Unimodal distributions

Because the mean and the mode in a unimodal distribution differ by at most √3 standard deviations,[49] at most 5% of a symmetrical unimodal distribution lies outside (2√10 + 3√3)/3 standard deviations of the mean (approximately 3.840 standard deviations). This is sharper than the bound provided by the Chebyshev inequality (approximately 4.472 standard deviations).

These bounds on the mean are less sharp than those that can be derived from symmetry of the distribution alone, which shows that at most 5% of the distribution lies outside approximately 3.162 standard deviations of the mean. The Vysochanskiï–Petunin inequality further sharpens this bound by showing that for such a distribution at most 5% lies outside 4√5/3 (approximately 2.981) standard deviations of the mean.

Symmetrical unimodal distributions

For any symmetrical unimodal distribution:

  • approximately 5.784% of the distribution lies outside 1.96 standard deviations of the mode
  • 5% of the distribution lies outside 2√10/3 (approximately 2.11) standard deviations of the mode

DasGupta's inequality states that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure (approximately 1.96 standard deviations of the mean).
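
The standard-deviation figures quoted in these notes can be checked by inverting each bound at the 5% level (a sketch using only the formulas already given):

    import math

    p = 0.05
    print(math.sqrt(1 / p))          # Chebyshev, 1/k^2:                            ~4.472 sd
    print(math.sqrt(1 / (2 * p)))    # symmetry alone, 1/(2k^2):                    ~3.162 sd
    print(math.sqrt(4 / (9 * p)))    # Vysochanskii-Petunin, 4/(9k^2):              ~2.981 sd
    print(math.sqrt(2 / (9 * p)))    # symmetric unimodal about the mode, 2/(9k^2): ~2.108 sd
    print(math.sqrt(1 / (3 * p)))    # DasGupta, normal, 1/(3k^2):                  ~2.582 sd
    print((2 * math.sqrt(10) + 3 * math.sqrt(3)) / 3)   # combined figure quoted above: ~3.840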

Bounds for specific distributions

DasGupta has determined a set of best possible bounds for a normal distribution for this inequality.[48]

Steliga and Szynal have extended these bounds to the Pareto distribution.[7]

Zero means

When the mean (μ) is zero Chebyshev's inequality takes a simple form. Let σ2 be the variance. Then

P(| X | \ge 1) \le \min(1, \sigma^2)

With the same conditions Cantelli's inequality takes the form

P(X \ge 1) \le \frac{ \sigma^2 }{ 1 + \sigma^2 }

Unit variance

If in addition E( X2 ) = 1 and E( X4 ) = ψ then for any 0 ≤ ε ≤ 1[50]

P( | X | > \epsilon ) \ge \frac{ ( 1 - \epsilon^2 )^2 }{ \psi - 1 + ( 1 - \epsilon^2 )^2 } \ge \frac{( 1 - \epsilon^2 )^2 }{ \psi }

The first inequality is sharp.

It is also known that for a random variable obeying the above conditions that[51]

P( X \ge \epsilon ) \ge \frac{ C_0 }{ \psi } - \frac{ C_1 }{ \sqrt{ \psi } } \epsilon + \frac{ C_2 }{ \psi \sqrt{ \psi } } \epsilon

where

C_0 = 2 \sqrt{ 3 } - 3 \quad ( \approxeq 0.464 )
C_1 = 1.397
C_2 = 0.0231

It is also known that[51]

P( X > 0 ) \ge \frac{ C_0 }{ \psi }

The value of C0 is optimal and the bounds are sharp if

\psi \ge \frac{ 3 }{ \sqrt{ 3 } + 1 } \quad ( \approxeq 1.098 )

If

\psi \le \frac{ 3 }{ \sqrt{ 3 } + 1 }

then the sharp bound is

P( X > 0 ) \ge \frac{ 2 }{ 3 + \psi + \sqrt{ ( 1 + \psi )^2 - 4 } }

Integral Chebyshev inequality

There is a second (less well known) inequality also named after Chebyshev[52]

If f, g : [a, b] → R are two monotonic functions of the same monotonicity, then

\frac{ 1 }{ b - a } \int_a^b \! f(x) g(x) \,dx \ge \left[ \frac{ 1 }{ b - a } \int_a^b \! f(x) \,dx \right] \left[ \frac{ 1 }{ b - a } \int_a^b \! g(x) \,dx \right]

If f and g are of opposite monotonicity, then the above inequality works in the reverse way.

This inequality is related to Jensen's inequality,[53] Kantorovich's inequality,[54] the Hermite–Hadamard inequality[54] and Walter's conjecture.[55]
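
A quick numerical check of the integral inequality (a sketch; the pair f(x) = x, g(x) = x² on [0, 1] is an arbitrary choice of two increasing functions):

    # Both f and g are increasing on [0, 1], so the average of the product
    # should dominate the product of the averages (1/4 >= 1/6).
    n = 10_000
    xs = [(i + 0.5) / n for i in range(n)]   # midpoint rule on [0, 1]

    f = lambda x: x
    g = lambda x: x**2
    avg = lambda h: sum(h(x) for x in xs) / n

    print(avg(lambda x: f(x) * g(x)))   # ~0.25
    print(avg(f) * avg(g))              # ~0.1667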

Other inequalities

There are also a number of other inequalities associated with Chebyshev.

Haldane's transformation

One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted,[56] using an equation derived by Kendall,[57] that if a variate (x) has a zero mean, unit variance and both finite skewness (γ) and kurtosis (κ) then the variate can be converted to a normally distributed standard score (z):

z = x - \frac{ \gamma }{ 6 } (x^2 - 1) + \frac{ x }{ 72 } [ 2 \gamma^2 (4 x^2 - 7) - 3 \kappa (x^2 - 3) ] + \cdots

This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions.

While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic.
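
A sketch of the transformation as written above, truncated after the terms shown. The kurtosis κ is treated here as the excess kurtosis (the convention under which the correction vanishes for a normal variate); that reading is an assumption:

    def haldane_z(x, gamma, kappa):
        """Approximate standard normal score for a zero-mean, unit-variance
        variate x with skewness gamma and (excess) kurtosis kappa, using the
        truncated series in the text."""
        return (x
                - gamma / 6 * (x**2 - 1)
                + x / 72 * (2 * gamma**2 * (4 * x**2 - 7) - 3 * kappa * (x**2 - 3)))

    # With gamma = 0 and kappa = 0 the correction terms vanish and z = x:
    print(haldane_z(1.5, 0.0, 0.0))   # 1.5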

Chernoff bounds

If the random variable X can be written as a sum of independent random variables, it is possible to obtain sharper bounds. Let δ > 0. Then[58]

\delta - (1 + \delta) \log(1 + \delta) < \frac{ -\delta^2 }{ 2 + \delta } .

With this inequality it can be shown that

P(X > (1 + \delta) \mu) \le e^{ \frac{ -\delta^2 \mu }{ 2 + \delta } },
P(X < (1 - \delta) \mu) \le e^{ \frac{ -\delta^2 \mu }{ 2 + \delta } }.

where μ is the mean of the distribution. Further discussion may be found in the article on Chernoff bounds.

Notes

Caution concerning use of Chebyshev's inequality


See also

References

Further reading

  • A. Papoulis (1991), Probability, Random Variables, and Stochastic Processes, 3rd ed. McGraw–Hill. ISBN 0-07-100870-5. pp. 113–114.
  • G. Grimmett and D. Stirzaker (2001), Probability and Random Processes, 3rd ed. Oxford. ISBN 0-19-857222-0. Section 7.3.

External links

  • Formal proof in the Mizar system.