
Bell's theorem

Bell's theorem is a no-go theorem famous for drawing an important line in the sand between quantum mechanics (QM) and the world as we know it classically. In its simplest form, Bell's theorem states:[1]

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

When introduced in 1927, the philosophical implications of the new quantum theory were troubling to many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Einstein and co-authors Boris Podolsky and Nathan Rosen (collectively EPR) argued by means of a paradox that QM was incomplete. This provided hope that a more complete (and less troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", often interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated within the physics community, notably between Nobel laureates Einstein and Niels Bohr.

In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox", physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of QM.

In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, Stuart Freedman and John Clauser (1972) and Alain Aspect et al. (1981) convincingly demonstrated that the predictions of QM are correct in this regard. While this does not demonstrate that QM is complete, one is forced to reject at least one of locality, realism, or the freedom to choose measurement settings (rejecting the last leads to alternative superdeterministic theories, none of which has yet replicated the predictions of QM).

Cornell solid-state physicist David Mermin has described the various appraisals of the importance of Bell's theorem within the physics community as ranging from "indifference" to "wild extravagance".[2] Lawrence Berkeley particle physicist Henry Stapp declared: "Bell's theorem is the most profound discovery of science."[3]

Overview

Bell's theorem states that the concept of local realism, favoured by Einstein,[4] yields predictions that disagree with those of quantum mechanical theory. Because numerous experiments agree with the predictions of quantum mechanical theory, and show correlations that are, according to Bell, greater than could be explained by local hidden variables, the experimental results have been taken by many as refuting the concept of local realism as an explanation of the physical phenomena under test. If Bell's conditions are correct, any hidden variable theory that reproduces the results of quantum mechanical theory would have to involve superluminal effects, in contradiction of the principle of locality.


The theorem applies to any quantum system of two entangled qubits. The most common examples concern systems of particles that are entangled in spin or polarization.

Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument[5][6]), Bell considered an experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."[5] The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−), that is, spin in the positive or negative direction along the chosen axis.

The probability of the same result being obtained at the two locations varies, depending on the relative angles at which the two spin measurements are made, and is subject to some uncertainty for all relative angles other than perfectly parallel alignments (0° or 180°). Bell's theorem thus applies only to the statistical results from many trials of the experiment. For this reason, the terms "correlated", "anti-correlated", and "uncorrelated" apply only to sets of many pairs of measurements.

The correlation of two binary variables can be defined as the average of the product of the two outcomes of the pairs of measurements; this agrees with the definition of covariance between real-valued random variables. Under this definition, if the pairs of outcomes are always the same, the correlation is +1, regardless of which value each pair of outcomes takes. If the pairs of outcomes are always opposite, the correlation is −1. Finally, if the pairs of outcomes are perfectly balanced, matching 50% of the time and opposite 50% of the time, the correlation, being an average, is 0.

Measuring the spin of these entangled particles along anti-parallel directions, i.e. along the same axis but in opposite directions, the set of all results is correlated. If the measurements are instead performed along parallel directions, they always yield opposite results, and the set of measurements shows perfect anti-correlation. Finally, measurements at perpendicular directions have a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below.

Anti-parallel     Pair 1  Pair 2  Pair 3  Pair 4  …  Pair n
Alice, 0°           +       −       +       −    …    +
Bob, 180°           +       −       +       −    …    +
Correlation = ( +1 +1 +1 +1 … +1 ) / n = +1
(100% identical)

Parallel          Pair 1  Pair 2  Pair 3  Pair 4  …  Pair n
Alice, 0°           +       −       +       −    …    +
Bob, 0° or 360°     −       +       −       +    …    −
Correlation = ( −1 −1 −1 −1 … −1 ) / n = −1
(100% opposite)

Orthogonal        Pair 1  Pair 2  Pair 3  Pair 4  …  Pair n
Alice, 0°           +       +       −       −    …    +
Bob, 90° or 270°    −       +       −       +    …    +
Correlation = ( −1 +1 +1 −1 … +1 ) / n = 0
(50% identical, 50% opposite)
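The correlation defined above, as the average of the products of paired ±1 outcomes, can be computed directly. The following sketch uses short illustrative runs matching the three cases in the table:

```python
def correlation(alice, bob):
    """Average of the products of paired +/-1 outcomes."""
    return sum(a * b for a, b in zip(alice, bob)) / len(alice)

# Illustrative four-pair runs for the three basic cases
anti_parallel = correlation([+1, -1, +1, -1], [+1, -1, +1, -1])  # always same
parallel      = correlation([+1, -1, +1, -1], [-1, +1, -1, +1])  # always opposite
orthogonal    = correlation([+1, +1, -1, -1], [-1, +1, -1, +1])  # half and half

print(anti_parallel, parallel, orthogonal)  # 1.0 -1.0 0.0
```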

With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables could agree with a linear dependence of the correlation on the angle but, according to Bell's inequality, could not agree with the dependence predicted by quantum mechanical theory, namely, that the correlation is the cosine of the angle. Experimental results match the curve predicted by quantum mechanics.[1]
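The two dependences can be compared numerically. In this sketch, θ is measured from the perfectly correlated (anti-parallel) case, and the linear form is one example of a curve compatible with local hidden variables, not the only one:

```python
import numpy as np

# Angle between Alice's axis and the reverse of Bob's axis;
# theta = 0 corresponds to the perfectly correlated case above.
theta = np.radians([0, 45, 90, 135, 180])

E_qm  = np.cos(theta)                # quantum prediction: cosine of the angle
E_lhv = 1.0 - 2.0 * theta / np.pi    # a linear dependence compatible with LHV

# The curves agree at 0, 90 and 180 degrees but differ in between:
print(E_qm[1], E_lhv[1])  # ~0.707 vs 0.5 at 45 degrees
```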

Bell achieved his breakthrough by first deriving the results that he posits local realism would necessarily yield. Bell claimed that, without making any assumptions about the specific form of the theory beyond requirements of basic consistency, the mathematical inequality he discovered was clearly at odds with the results (described above) predicted by quantum mechanics and, later, observed experimentally. If correct, Bell's theorem appears to rule out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables). Bell concluded:

In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that a theory could not be Lorentz invariant.[5]

Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole[7] and the communication loophole.[7] Over the years experiments have been gradually improved to better address these loopholes, but no experiment to date has simultaneously fully addressed all of them.[7] However, it is generally considered unreasonable that such an experiment, if conducted, would give results that are inconsistent with the prior experiments. For example, Anthony Leggett has commented:

[While] no single existing experiment has simultaneously blocked all of the so-called ‘‘loopholes’’, each one of those loopholes has been blocked in at least one experiment. Thus, to maintain a local hidden variable theory in the face of the existing experiments would appear to require belief in a very peculiar conspiracy of nature.[8]

To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and is treated as a fundamental principle of physics in mainstream quantum mechanics textbooks.[9][10]

Importance of the theorem

Bell's theorem, derived in his seminal 1964 paper titled On the Einstein Podolsky Rosen paradox,[5] has been called, on the assumption that the theory is correct, "the most profound in science".[11] Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute.[12] Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."[13]

The title of Bell's seminal article refers to the famous paper by Einstein, Podolsky and Rosen[14] that challenged the completeness of quantum mechanics. In his paper, Bell started from the same two assumptions as did EPR, namely (i) reality (that microscopic objects have real properties determining the outcomes of quantum mechanical measurements), and (ii) locality (that reality in one location is not influenced by measurements performed simultaneously at a distant location). Bell was able to derive from those two assumptions an important result, namely Bell's inequality, implying that at least one of the assumptions must be false.

In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more hidden variables than merely the element of physical reality in the EPR paper; secondly, Bell's inequality was, in part, open to experimental test, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper dealt only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories[15] as well, and it was also realised[16] that the theorem is not so much about hidden variables as about the outcomes of measurements that could have been performed instead of the one actually performed. The existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.

After the EPR paper, quantum mechanics was in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of a finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two hypothetical observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It is the conclusion of EPR that once Alice measures spin in one direction (e.g. on the x axis), Bob's measurement in that direction is determined with certainty, as being the opposite outcome to that of Alice, whereas immediately before Alice's measurement Bob's outcome was only statistically determined (i.e., was only a probability, not a certainty); thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly.

In QM, predictions are formulated in terms of probabilities — for example, the probability that an electron will be detected in a particular place, or the probability that its spin is up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness is its inability to predict those values precisely. The possibility existed that some unknown theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilities predicted by QM. If such a hidden variables theory exists, then because the hidden variables are not described by QM the latter would be an incomplete theory.

Two assumptions drove the desire to find a local realist theory:

  1. Objects have a definite state that determines the values of all other measurable properties, such as position and momentum.
  2. Effects of local actions, such as measurements, cannot travel faster than the speed of light (in consequence of special relativity). Thus if observers are sufficiently far apart, a measurement made by one can have no effect on a measurement made by the other.

In the form of local realism used by Bell, the predictions of the theory result from the application of classical probability theory to an underlying parameter space. By a simple argument based on classical probability, he showed that correlations between measurements are bounded in a way that is violated by QM.

Bell's theorem seemed to put an end to local realism. This is because, if the theorem is correct, then either quantum mechanics or local realism is wrong, as they are mutually exclusive. The paper noted that "it requires little imagination to envisage the experiments involved actually being made",[5] to determine which of them is correct. It took many years and many improvements in technology to perform tests along the lines Bell envisaged. The tests are, in theory, capable of showing whether local hidden variable theories as envisaged by Bell accurately predict experimental results. The tests are not capable of determining whether Bell has accurately described all local hidden variable theories.

The Bell test experiments have been interpreted as showing that the Bell inequalities are violated in favour of QM. The no-communication theorem shows that the observers cannot use the effect to communicate (classical) information to each other faster than the speed of light, but the ‘fair sampling’ and ‘no enhancement’ assumptions require more careful consideration (below). That interpretation follows not from any clear demonstration of super-luminal communication in the tests themselves, but solely from Bell's theory that the correctness of the quantum predictions necessarily precludes any local hidden-variable theory. If that theoretical contention is not correct, then the "tests" of Bell's theory to date do not show anything either way about the local or non-local nature of the phenomena.

Bell inequalities

Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated. According to quantum mechanics they are entangled, while local realism would limit the correlation of subsequent measurements of the particles.

Different authors subsequently derived inequalities similar to Bell's original inequality, and these are here collectively termed Bell inequalities. All Bell inequalities describe experiments in which the predicted result from quantum entanglement differs from that flowing from local realism. The inequalities assume that each quantum-level object has a well-defined state that accounts for all its measurable properties and that distant objects do not exchange information faster than the speed of light. These well-defined states are typically called hidden variables, the properties that Einstein posited when he stated his famous objection to quantum mechanics: "God does not play dice."

Bell showed that under quantum mechanics, the mathematics of which contains no local hidden variables, the Bell inequalities can nevertheless be violated: the properties of a particle are not definite in themselves, but may be correlated with those of another particle through quantum entanglement, so that their state becomes well defined only once a measurement is made on either particle. That restriction agrees with the Heisenberg uncertainty principle, a fundamental concept in quantum mechanics.

In Bell's words:

Theoretical physicists live in a classical world, looking out into a quantum-mechanical world. The latter we describe only subjectively, in terms of procedures and results in our classical domain. (…) Now nobody knows just where the boundary between the classical and the quantum domain is situated. (…) More plausible to me is that we will find that there is no boundary. The wave functions would prove to be a provisional or incomplete description of the quantum-mechanical part. It is this possibility, of a homogeneous account of the world, which is for me the chief motivation of the study of the so-called "hidden variable" possibility.

(…) A second motivation is connected with the statistical character of quantum-mechanical predictions. Once the incompleteness of the wave function description is suspected, it can be conjectured that random statistical fluctuations are determined by the extra "hidden" variables — "hidden" because at this stage we can only conjecture their existence and certainly cannot control them.

(…) A third motivation is in the peculiar character of some quantum-mechanical predictions, which seem almost to cry out for a hidden variable interpretation. This is the famous argument of Einstein, Podolsky and Rosen. (…) We will find, in fact, that no local deterministic hidden-variable theory can reproduce all the experimental predictions of quantum mechanics. This opens the possibility of bringing the question into the experimental domain, by trying to approximate as well as possible the idealized situations in which local hidden variables and quantum mechanics cannot agree.[17]

In probability theory, repeated measurements of system properties can be regarded as repeated sampling of random variables. In Bell's experiment, Alice can choose a detector setting to measure either A(a) or A(a') and Bob can choose a detector setting to measure either B(b) or B(b'). Measurements of Alice and Bob may be somehow correlated with each other, but the Bell inequalities say that if the correlation stems from local random variables, there is a limit to the amount of correlation one might expect to see.

Original Bell's inequality

The original inequality that Bell derived was:[5]

1 + \rho(B, C) \geq |\rho(A, B) - \rho(A, C)|,

where ρ is the "correlation" of the particle pairs and A, B, and C are settings of the apparatus. This inequality is not used in practice. For one thing, it is true only for genuinely "two-outcome" systems, not for the "three-outcome" ones (with possible outcomes of zero as well as +1 and −1) encountered in real experiments. For another, it applies only to a very restricted set of hidden variable theories, namely those for which the outcomes on both sides of the experiment are always exactly anticorrelated when the analysers are parallel, in agreement with the quantum mechanical prediction.

Nevertheless, a simple limit of Bell's inequality has the virtue of being quite intuitive. If the results of three different statistical coin-flips A, B, and C have the property that:

  1. A and B are the same (both heads or both tails) 99% of the time
  2. B and C are the same 99% of the time,

then A and C are the same at least 98% of the time. The number of mismatches between A and B (1/100) plus the number of mismatches between B and C (1/100) together give the maximum possible number of mismatches between A and C (a simple Boole–Fréchet inequality).
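A quick simulation illustrates the bound; the 1% flip probabilities below are the illustrative figures from the list above:

```python
import random

random.seed(0)
n = 100_000
A = [random.choice([0, 1]) for _ in range(n)]
B = [a ^ (random.random() < 0.01) for a in A]   # B disagrees with A ~1% of the time
C = [b ^ (random.random() < 0.01) for b in B]   # C disagrees with B ~1% of the time

def mismatch(X, Y):
    """Fraction of positions where the two outcome sequences differ."""
    return sum(x != y for x, y in zip(X, Y)) / len(X)

# Boole-Frechet: mismatches(A, C) can never exceed
# mismatches(A, B) + mismatches(B, C).
print(mismatch(A, B), mismatch(B, C), mismatch(A, C))
```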

In quantum mechanics, however, by letting A, B, and C be the values of the spin of two entangled particles measured relative to some axis at 0 degrees, θ degrees, and 2θ degrees respectively, the overlap of the wavefunction between the different angles is proportional to cos(θ) ≈ 1 − θ²/2. The probability that A and B give the same answer is 1 − ε², where ε is proportional to θ. This is also the probability that B and C give the same answer.

But A and C are the same 1 − (2ε)² of the time. Choosing the angle so that ε = 0.1, A and B are 99% correlated, B and C are 99% correlated, but now A and C are only 96% correlated!

Imagine that two entangled particles in a spin singlet are shot out to two distant locations, and the spins of both are measured in the direction A. The spins are 100% correlated (actually, anti-correlated, but for this argument that is equivalent). The same is true if both spins are measured in directions B or C. It is safe to conclude that any hidden variables that determine the A, B, and C measurements in the two particles are 100% correlated, and can be used interchangeably. If A is measured on one particle and B on the other, the correlation between them is 99%. If B is measured on one and C on the other, the correlation is 99%. This allows us to conclude that the hidden variables determining A and B are 99% correlated, and B and C are 99% correlated.

But if A is measured in one particle and C in the other, the quantum mechanical results are only 96% correlated, which is a contradiction. This intuitive formulation is due to David Mermin, while the small-angle limit is emphasized in Bell's original article.
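A minimal calculation reproduces these numbers. As an assumption of this sketch, the match probability at relative angle t is taken in the cos²(t) form, whose small-angle expansion gives the 1 − ε² used above:

```python
import math

# Pick the angle t so that measurements t apart agree 99% of the time,
# assuming the match probability at relative angle t is cos(t)**2.
t = math.acos(math.sqrt(0.99))

p_AB = math.cos(t) ** 2        # A vs B at angle t   -> 0.99
p_BC = math.cos(t) ** 2        # B vs C at angle t   -> 0.99
p_AC = math.cos(2 * t) ** 2    # A vs C at angle 2t  -> ~0.9604

# A local hidden variable account would require p_AC >= p_AB + p_BC - 1 = 0.98
print(p_AC)  # ~0.9604, below the classical floor of 0.98
```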

CHSH inequality

Main article: CHSH inequality

In addition to Bell's original inequality,[5] the form given by John Clauser, Michael Horne, Abner Shimony and R. A. Holt[18] (the CHSH form) is especially important, as it gives classical limits to the expected correlation for the above experiment conducted by Alice and Bob:

\ (1) \quad \mathbf{C}[A(a), B(b)] + \mathbf{C}[A(a), B(b')] + \mathbf{C}[A(a'), B(b)] - \mathbf{C}[A(a'), B(b')] \leq 2

where C denotes correlation.

Correlation of observables X, Y is defined as

\mathbf{C}(X,Y) = \operatorname{E}(X Y)

where \operatorname{E}(Z) denotes the expected (average) value of Z.

This is a non-normalized form of the correlation coefficient considered in statistics (see Quantum correlation).

To formulate Bell's theorem, we formalize local realism as follows:

  1. There is a probability space \Lambda and the observed outcomes by both Alice and Bob result by random sampling of the parameter \lambda \in \Lambda.
  2. The values observed by Alice or Bob are functions of the local detector settings and the hidden parameter only. Thus
    • Value observed by Alice with detector setting \scriptstyle a is \scriptstyle A(a,\lambda)
    • Value observed by Bob with detector setting \scriptstyle b is \scriptstyle B(b,\lambda)

Implicit in assumption 1) above, the hidden parameter space \scriptstyle\Lambda has a probability measure \scriptstyle\rho and the expectation of a random variable X on \scriptstyle\Lambda with respect to \scriptstyle\rho is written

\operatorname{E}(X) = \int_\Lambda X(\lambda) \rho(\lambda) d \lambda

where for accessibility of notation we assume that the probability measure has a density.

Bell's inequality. The CHSH inequality (1) holds under the hidden variables assumptions above.

For simplicity, let us first assume the observed values are +1 or −1; we remove this assumption in Remark 1 below.

Let \lambda \in \Lambda. Then at least one of

B(b, \lambda) + B(b', \lambda), \quad B(b, \lambda) - B(b', \lambda)

is 0. Thus

\begin{align}
 &\quad A(a, \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) + A(a', \lambda) B(b, \lambda) - A(a', \lambda) \ B(b', \lambda)\\
 &= A(a, \lambda) \left[B(b, \lambda) +  B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) -  B(b', \lambda)\right]\\
 &\leq 2

\end{align}

and therefore

\begin{align}
 &\quad \mathbf{C}(A(a), B(b)) + \mathbf{C}(A(a), B(b')) +
        \mathbf{C}(A(a'), B(b)) - \mathbf{C}(A(a'), B(b'))&\\
 &= \int_\Lambda A(a, \lambda) B(b, \lambda) \rho(\lambda) d \lambda +
    \int_\Lambda A(a, \lambda) B(b', \lambda) \rho(\lambda) d \lambda +
    \int_\Lambda A(a', \lambda) B(b, \lambda) \rho(\lambda) d \lambda -
    \int_\Lambda A(a', \lambda) B(b', \lambda) \rho(\lambda) d \lambda&\\
 &= \int_\Lambda \big\{
                   A(a, \lambda) B(b, \lambda) + 
                   A(a, \lambda) B(b', \lambda) +
                   A(a', \lambda) B(b, \lambda) -
                   A(a', \lambda) B(b', \lambda)
                 \big\} \rho(\lambda) d \lambda&\\
 &= \int_\Lambda \big\{
                   A(a, \lambda) \left[
                     B(b, \lambda) + B(b', \lambda)
                   \right] + A(a', \lambda) \left[
                     B(b, \lambda) - B(b', \lambda)
                   \right]
                 \big\} \rho(\lambda) d \lambda\\
 &\leq 2

\end{align}
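The bound just derived can be checked by brute force for deterministic local models: each side's outcome for each setting is a fixed value in {+1, −1}, and no assignment of the four values exceeds 2. A sketch:

```python
from itertools import product

# Enumerate all 16 deterministic local assignments of +/-1 outcomes to the
# settings (a, a') for Alice and (b, b') for Bob, and take the largest CHSH sum.
best = max(
    Aa * Bb + Aa * Bbp + Aap * Bb - Aap * Bbp
    for Aa, Aap, Bb, Bbp in product([+1, -1], repeat=4)
)
print(best)  # 2 -- no local deterministic assignment exceeds the CHSH bound
```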

Remark 1

The correlation inequality (1) still holds if the variables A(a,\lambda), B(b,\lambda) are allowed to take on any real values between −1 and +1. Indeed, the relevant idea is that each summand in the above average is bounded above by 2. This is easily seen as true in the more general case:

\begin{align}
 &\quad A(a, \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) + A(a', \lambda) B(b, \lambda) - A(a', \lambda) B(b', \lambda)\\
 &=     A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right]\\
 &\leq \big| A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] + A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right] \big|\\
 &\leq \big| A(a, \lambda) \left[B(b, \lambda) + B(b', \lambda)\right] \big| + \big| A(a', \lambda) \left[B(b, \lambda) - B(b', \lambda)\right] \big|\\
 &\leq \big| B(b, \lambda) + B(b', \lambda) \big| + \big| B(b, \lambda) - B(b', \lambda) \big| \leq 2

\end{align}

To justify the upper bound 2 asserted in the last inequality, without loss of generality, we can assume that

B(b, \lambda) \geq B(b', \lambda) \geq 0

In that case

\begin{align}
     &  \big|B(b, \lambda) + B(b', \lambda)\big| + \big|B(b, \lambda) - B(b', \lambda)\big|\\
     &= B(b, \lambda) + B(b', \lambda) +  B(b, \lambda) - B(b', \lambda)\\
     &= 2B(b, \lambda) \leq 2

\end{align}

Remark 2

Though the important component of the hidden parameter \scriptstyle\lambda in Bell's original proof is associated with the source and is shared by Alice and Bob, there may be others that are associated with the separate detectors, these others being conditionally independent given the first, and with conditional probability distributions only depending on the corresponding local setting (if dependent on the settings at all). This argument was used by Bell in 1971, and again by Clauser and Horne in 1974,[15] to justify a generalisation of the theorem forced on them by the real experiments, in which detectors were never 100% efficient. The derivations were given in terms of the averages of the outcomes over the local detector variables. The formalisation of local realism was thus effectively changed, replacing A and B by averages and retaining the symbol \scriptstyle\lambda but with a slightly different meaning. It was henceforth restricted (in most theoretical work) to mean only those components that were associated with the source.

However, with the extension proved in Remark 1, CHSH inequality still holds even if the instruments themselves contain hidden variables. In that case, averaging over the instrument hidden variables gives new variables:

\overline{A}(a, \lambda), \quad \overline{B}(b, \lambda)

on \scriptstyle\Lambda, which still have values in the range [−1, +1] to which we can apply the previous result.

Bell inequalities are violated by quantum mechanical predictions

In the usual quantum mechanical formalism, the observables X and Y are represented as self-adjoint operators on a Hilbert space. To compute the correlation, assume that X and Y are represented by matrices in a finite dimensional space and that X and Y commute; this special case suffices for our purposes below. The von Neumann measurement postulate states: a series of measurements of an observable X on a series of identical systems in state \scriptstyle\phi produces a distribution of real values. By the assumption that observables are finite matrices, this distribution is discrete. The probability of observing λ is non-zero if and only if λ is an eigenvalue of the matrix X and moreover the probability is

\|\operatorname{E}_X(\lambda) \phi\|^2

where EX (λ) is the projector corresponding to the eigenvalue λ. The system state immediately after the measurement is

\|\operatorname{E}_X(\lambda) \phi\|^{-1} \operatorname{E}_X(\lambda) \phi.

From this, we can show that the correlation of commuting observables X and Y in a pure state \scriptstyle\psi is

\langle X Y \rangle = \langle X Y \psi \mid \psi \rangle

We apply this fact in the context of the EPR paradox. The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labelled a and a′; these settings correspond to measurement of spin along the z or the x axis. Bob can choose between two detector settings labelled b and b′; these correspond to measurement of spin along the z′ or x′ axis, where the x′ – z′ coordinate system is rotated 135° relative to the xz coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices:

S_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
       S_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

These are the Pauli spin matrices normalized so that the corresponding eigenvalues are +1, −1. As is customary, we denote the eigenvectors of Sx by

\left|+x\right\rang, \quad \left|-x\right\rang.

Let \scriptstyle\phi be the spin singlet state for a pair of electrons discussed in the EPR paradox. This is a specially constructed state described by the following vector in the tensor product

\left|\phi\right\rang = \frac{1}{\sqrt{2}} \left(\left|+x\right\rang \otimes \left|-x\right\rang -
      \left|-x\right\rang \otimes \left|+x\right\rang \right)

Now let us apply the CHSH formalism to the measurements that can be performed by Alice and Bob.

\begin{align}
 A(a)  &= S_z \otimes I\\
 A(a') &= S_x \otimes I\\
 B(b)  &= -\frac{1}{\sqrt{2}} \ I \otimes (S_z + S_x)\\
 B(b') &=  \frac{1}{\sqrt{2}} \ I \otimes (S_z - S_x)

\end{align}

The operators \scriptstyle B(b'), \scriptstyle B(b) correspond to Bob's spin measurements along x′ and z′. Note that the A operators commute with the B operators, so we can apply our calculation for the correlation. In this case, we can show that the CHSH inequality fails. In fact, a straightforward calculation shows that

\langle A(a) B(b) \rangle = \langle A(a') B(b) \rangle = \langle A(a') B(b') \rangle = \frac{1}{\sqrt{2}}

and

\langle A(a) B(b') \rangle = -\frac{1}{\sqrt{2}}

so that

\langle A(a) B(b) \rangle + \langle A(a') B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a) B(b') \rangle = \frac{4}{\sqrt{2}} = 2 \sqrt{2} > 2

Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that \scriptstyle 2 \sqrt{2} is indeed the upper bound for quantum mechanics called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.
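The 2√2 value can be reproduced numerically from the matrices above. This sketch uses NumPy and writes the singlet in the standard z-basis, which is legitimate because the singlet has the same form in any basis:

```python
import numpy as np

# Pauli spin matrices from the text (eigenvalues +1 and -1)
Sx = np.array([[0., 1.], [1., 0.]])
Sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

# Spin singlet state, written in the z-basis as (|01> - |10>)/sqrt(2)
phi = np.array([0., 1., -1., 0.]) / np.sqrt(2)

# Alice's and Bob's observables as defined above
A_a  = np.kron(Sz, I2)
A_ap = np.kron(Sx, I2)
B_b  = -np.kron(I2, Sz + Sx) / np.sqrt(2)
B_bp =  np.kron(I2, Sz - Sx) / np.sqrt(2)

def corr(X, Y):
    """Correlation <XY> in the pure state phi (X and Y commute here)."""
    return phi @ X @ Y @ phi

S = corr(A_a, B_b) + corr(A_ap, B_bp) + corr(A_ap, B_b) - corr(A_a, B_bp)
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the CHSH bound of 2
```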

Practical experiments testing Bell's theorem

Main article: Bell test experiments

Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence.

Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The setting (orientations) of the analysers are selected by the experimenter.

Bell test experiments to date overwhelmingly violate Bell's inequality. A table of Bell test experiments performed prior to 1986 is given in §4.5 of Redhead (1987).[19] Of the thirteen experiments listed, only two reached results contradicting quantum mechanics; moreover, according to the same source, when those experiments were repeated, "the discrepancies with QM could not be reproduced".

Nevertheless, the issue is not conclusively settled. According to Shimony's 2004 Stanford Encyclopedia overview article:[7]

Most of the dozens of experiments performed so far have favored Quantum Mechanics, but not decisively because of the 'detection loopholes' or the 'communication loophole.' The latter has been nearly decisively blocked by a recent experiment and there is a good prospect for blocking the former.

To explore the 'detection loophole', one must distinguish between two classes of Bell inequality: homogeneous and inhomogeneous.

The standard assumption in Quantum Optics is that "all photons of given frequency, direction and polarization are identical" so that photodetectors treat all incident photons on an equal basis. Such a fair sampling assumption generally goes unacknowledged, yet it effectively limits the range of local theories to those that conceive of the light field as corpuscular. The assumption excludes a large family of local realist theories, in particular, Max Planck's description. We must remember the cautionary words of Albert Einstein[20] shortly before he died: "Nowadays every Tom, Dick and Harry ('jeder Kerl' in German original) thinks he knows what a photon is, but he is mistaken".

Those who maintain the concept of duality, or simply of light being a wave, recognize the possibility or actuality that the emitted atomic light signals have a range of amplitudes and, furthermore, that the amplitudes are modified when the signal passes through analyzing devices such as polarizers and beam splitters. It follows that not all signals have the same detection probability.[21]

Two classes of Bell inequalities

The fair sampling problem was faced openly in the 1970s. In early designs of their 1972 experiment, Freedman and Clauser[22] used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH[18]) hypothesis. Shortly afterwards, however, Clauser and Horne[15] made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires comparing certain coincidence rates in two separated detectors with the singles rates of the two detectors. No one needed to perform such an experiment, because the singles rates of all detectors in the 1970s were at least ten times the coincidence rates; taking this low detector efficiency into account, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates the IBI, we require detectors whose efficiency exceeds 82.8% for singlet states,[23] while also having very low dark rates and short dead and resolving times. This is well above the 30% achievable,[24] so Shimony's optimism in the Stanford Encyclopedia, quoted in the preceding section, appears overstated.
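For reference (notation varies across the literature), the inhomogeneous inequality of Clauser and Horne can be written in terms of the singles detection probabilities p_1, p_2 and the coincidence probabilities p_{12}:

-1 \le p_{12}(a,b) - p_{12}(a,b') + p_{12}(a',b) + p_{12}(a',b') - p_1(a') - p_2(b) \le 0

The 82.8% figure is the standard detection-efficiency threshold for a CHSH test with a maximally entangled state: in one common form of the counting argument, with symmetric detector efficiency \eta the quantum violation survives only if \eta \cdot 2\sqrt{2} > 4 - 2\eta, i.e.

\eta > \frac{2}{1+\sqrt{2}} = 2\left(\sqrt{2}-1\right) \approx 0.828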

Practical challenges

Because detectors detect only a fraction of all emitted photons, Clauser and Horne[15] recognized that testing Bell's inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):

A light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.

Given this assumption, there is a Bell inequality relating the coincidence rates with polarizers to the coincidence rates without polarizers.

The experiment was performed by Freedman and Clauser,[22] who found that this Bell inequality was violated. Thus the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman–Clauser experiment reveals that local hidden variables imply a new phenomenon of signal enhancement:

In the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer.

This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known[25] as stochastic resonance). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word loophole is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena.

Theoretical challenges

Most advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories.[26]

If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated: once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process that travels backwards in time along the past light cone. This is the idea behind the transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time.[27]

A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that the superdeterminism loophole cannot be dismissed.[28][29]

The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of observer B observer A will see when they compare notes. If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.

This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness: if the results of an experiment are always observed to be definite, then there is some quantity that determines what the outcome would have been even if the experiment is not performed.

Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined.

In 1989, E. T. Jaynes[30] claimed that there are two hidden assumptions in the Bell inequality that could limit its generality. According to him:

  1. Bell interpreted the conditional probability P(X|Y) as a causal influence, i.e. Y exerted a causal influence on X in reality. However, P(X|Y) only expresses logical inference (deduction). Causes cannot travel faster than light or backward in time, but deduction can.
  2. Bell's inequality does not apply to all possible hidden variable theories, only to a certain class of local hidden variable theories. In fact, it may simply have missed the kind of hidden variable theories that Einstein was most interested in.

However, Richard D. Gill has argued that Jaynes misunderstood Bell's analysis. Gill also points out that in the same paper in which Jaynes argued against Bell, Jaynes cautiously praised a short proof by Steve Gull showing that the singlet correlations could not be reproduced by a computer simulation of a local hidden variables theory.[31]

Final remarks

The violations of Bell's inequalities, due to quantum entanglement, provide a definitive demonstration of something that was already strongly suspected: that quantum physics cannot be represented by any version of the classical picture of physics.[32] Some earlier elements that had seemed incompatible with classical pictures included apparent complementarity and (hypothesized) wavefunction collapse. Complementarity is now seen not as an independent ingredient of the quantum picture but rather as a direct consequence of the quantum decoherence expected from the quantum formalism itself. The possibility of wavefunction collapse is now seen as one possible problematic ingredient of some interpretations, rather than as an essential part of quantum mechanics. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.[33]

The EPR paper "pinpointed" the unusual properties of the entangled states, e.g. the above-mentioned singlet state, which are the foundation for present-day applications of quantum physics such as quantum cryptography; one application involves the measurement of quantum entanglement as a physical source of bits for Rabin's oblivious transfer protocol. This strange non-locality was originally intended as a reductio ad absurdum, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin states. Bell's theorem showed that the "entangledness" prediction of quantum mechanics has a degree of non-locality that cannot be explained away by any local theory.

In well-defined Bell experiments (see the section on practical experiments above) one can now falsify either quantum mechanics or Einstein's quasi-classical assumptions. Many experiments of this kind have been performed, and the results support quantum mechanics, though some point out that it is theoretically possible that detectors give a biased sample of photons, so that until the relative number of "unpaired" photons is small enough, the final word has not yet been spoken. According to Marek Zukowski, quoted in Science magazine (2011),[34] experimenters expected the first loophole-free experiment to be done within five years. According to Anton Zeilinger (2013), one of the foremost experimenters in this field, a loophole-free experiment is very close and would be a major achievement.

What makes Bell's theorem unique and powerful is that it does not refer to any particular physical theory: it shows that nature violates the most general assumptions behind classical pictures, not just the details of some particular models. No combination of local deterministic and local random variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.[35]


References

  • A. Aspect et al., Experimental Tests of Realistic Local Theories via Bell's Theorem, Phys. Rev. Lett. 47, 460 (1981)
  • A. Aspect et al., Experimental Realization of Einstein–Podolsky–Rosen–Bohm Gedankenexperiment: A New Violation of Bell's Inequalities, Phys. Rev. Lett. 49, 91 (1982).
  • A. Aspect et al., Experimental Test of Bell's Inequalities Using Time-Varying Analyzers, Phys. Rev. Lett. 49, 1804 (1982).
  • A. Aspect and P. Grangier, About resonant scattering and other hypothetical effects in the Orsay atomic-cascade experiment tests of Bell inequalities: a discussion and some new experimental data, Lettere al Nuovo Cimento 43, 345 (1985)
  • B. d'Espagnat, The Quantum Theory and Reality, Scientific American 241, 158 (1979)
  • J. S. Bell, On the problem of hidden variables in quantum mechanics, Rev. Mod. Phys. 38, 447 (1966)
  • J. S. Bell, On the Einstein Podolsky Rosen Paradox, Physics 1, 3, 195–200 (1964)
  • J. S. Bell, Introduction to the hidden variable question, Proceedings of the International School of Physics 'Enrico Fermi', Course IL, Foundations of Quantum Mechanics (1971) 171–81
  • J. S. Bell, Bertlmann’s socks and the nature of reality, Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42 (1981) pp C2 41–61
  • J. S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge University Press 1987) [A collection of Bell's papers, including all of the above.]
  • J. F. Clauser and A. Shimony, Bell's theorem: experimental tests and implications, Reports on Progress in Physics 41, 1881 (1978)
  • J. F. Clauser and M. A. Horne, Phys. Rev D 10, 526–535 (1974)
  • E. S. Fry, T. Walther and S. Li, Proposal for a loophole-free test of the Bell inequalities, Phys. Rev. A 52, 4381 (1995)
  • E. S. Fry, and T. Walther, Atom based tests of the Bell Inequalities — the legacy of John Bell continues, pp 103–117 of Quantum [Un]speakables, R.A. Bertlmann and A. Zeilinger (eds.) (Springer, Berlin-Heidelberg-New York, 2002)
  • R. B. Griffiths, Consistent Quantum Theory, Cambridge University Press (2002)
  • L. Hardy, Nonlocality for 2 particles without inequalities for almost all entangled states. Physical Review Letters 71 (11) 1665–1668 (1993)
  • M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000)
  • P. Pearle, Hidden-Variable Example Based upon Data Rejection, Physical Review D 2, 1418–25 (1970)
  • A. Peres, Quantum Theory: Concepts and Methods, Kluwer, Dordrecht, 1993.
  • P. Pluch, Theory of Quantum Probability, PhD Thesis, University of Klagenfurt, 2006.
  • B. C. van Fraassen, Quantum Mechanics, Clarendon Press, 1991.
  • M.A. Rowe, D. Kielpinski, V. Meyer, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, Experimental violation of Bell's inequalities with efficient detection,(Nature, 409, 791–794, 2001).
  • S. Sulcs, The Nature of Light and Twentieth Century Experimental Physics, Foundations of Science 8, 365–391 (2003)
  • S. Gröblacher et al., An experimental test of non-local realism,(Nature, 446, 871–875, 2007).
  • D. N. Matsukevich, P. Maunz, D. L. Moehring, S. Olmschenk, and C. Monroe, Bell Inequality Violation with Two Remote Atomic Qubits, Phys. Rev. Lett. 100, 150404 (2008).

Further reading

The following are intended for general audiences.

  • Amir D. Aczel, Entanglement: The greatest mystery in physics (Four Walls Eight Windows, New York, 2001).
  • A. Afriat and F. Selleri, The Einstein, Podolsky and Rosen Paradox (Plenum Press, New York and London, 1999)
  • J. Baggott, The Meaning of Quantum Theory (Oxford University Press, 1992)
  • N. David Mermin, "Is the moon there when nobody looks? Reality and the quantum theory", in Physics Today, April 1985, pp. 38–47.
  • Louisa Gilder, The Age of Entanglement: When Quantum Physics Was Reborn (New York: Alfred A. Knopf, 2008)
  • Brian Greene, The Fabric of the Cosmos (Vintage, 2004, ISBN 0-375-72720-5)
  • Nick Herbert, Quantum Reality: Beyond the New Physics (Anchor, 1987, ISBN 0-385-23569-0)
  • D. Wick, The infamous boundary: seven decades of controversy in quantum physics (Birkhauser, Boston 1995)
  • R. Anton Wilson, Prometheus Rising (New Falcon Publications, 1997, ISBN 1-56184-056-4)
  • Gary Zukav "The Dancing Wu Li Masters" (Perennial Classics, 2001, ISBN 0-06-095968-1)

External links

  • "On the Einstein Podolsky Rosen Paradox", Bell's original paper.
  • Another version of Bell's paper.
  • An explanation of Bell's Theorem, based on N. D. Mermin's article,
  • Mermin: Spooky Actions At A Distance? Oppenheimer Lecture
  • Quantum Entanglement Includes a simple explanation of Bell's Inequality.
  • Bell's theorem on arXiv.org
  • Interactive experiments with single photons: entanglement and Bell's theorem
  • Bell's Inequalities: Obscurantist Obfuscation or Condign Confabulation?
  • Internet Encyclopedia of Philosophy