
Fermi–Dirac statistics
In quantum statistics, a branch of physics, Fermi–Dirac statistics describes the distribution of particles in systems comprising many identical particles that obey the Pauli exclusion principle. It is named after Enrico Fermi and Paul Dirac, each of whom discovered it independently, although Fermi defined the statistics earlier than Dirac.[1][2]

Fermi–Dirac (F–D) statistics applies to identical particles with half-odd-integer spin in a system in thermodynamic equilibrium. Additionally, the particles in this system are assumed to have negligible mutual interaction. This allows the many-particle system to be described in terms of single-particle energy states. The result is the F–D distribution of particles over these states and includes the condition that no two particles can occupy the same state, which has a considerable effect on the properties of the system. Since F–D statistics applies to particles with half-integer spin, these particles have come to be called fermions. It is most commonly applied to electrons, which are fermions with spin 1/2. Fermi–Dirac statistics is a part of the more general field of statistical mechanics and uses the principles of quantum mechanics.


History

Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current.[3] It was also difficult to understand why the emission currents generated by applying high electric fields to metals at room temperature were almost independent of temperature.

The difficulty encountered by the electronic theory of metals at that time stemmed from the assumption that, in line with classical statistical theory, all electrons were equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant k. This statistical problem remained unsolved until the discovery of F–D statistics.

F–D statistics was first published in 1926 by Enrico Fermi[1] and Paul Dirac.[2] According to one account, Pascual Jordan developed the same statistics in 1925, which he called Pauli statistics, but it was not published in a timely manner.[4] According to Dirac, it was first studied by Fermi, and Dirac called it Fermi statistics and the corresponding particles fermions.[5]

F–D statistics was applied in 1926 by Fowler to describe the collapse of a star to a white dwarf.[6] In 1927 Sommerfeld applied it to electrons in metals[7] and in 1928 Fowler and Nordheim applied it to field electron emission from metals.[8] Fermi–Dirac statistics continues to be an important part of physics.

Fermi–Dirac distribution

For a system of identical fermions, the average number of fermions in a single-particle state i is given by the Fermi–Dirac (F–D) distribution,[9]

\bar{n}_i = \frac{1}{e^{(\epsilon_i-\mu) / k T} + 1}

where k is Boltzmann's constant, T is the absolute temperature, \epsilon_i is the energy of the single-particle state i, and \mu is the total chemical potential. At zero temperature, \mu is equal to the Fermi energy plus the potential energy per electron. For the case of electrons in a semiconductor, \mu is typically called the Fermi level or electrochemical potential.[10][11]

The F–D distribution is only valid if the number of fermions in the system is large enough that adding one more fermion to the system has a negligible effect on \mu.[12] Since the F–D distribution was derived using the Pauli exclusion principle, which allows at most one electron to occupy each possible state, the result is that 0 < \bar{n}_i < 1.[13]
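The distribution is easy to evaluate numerically. The sketch below simply codes the formula above, with energies in joules; the 5 eV chemical potential is an arbitrary illustrative value, not taken from the text.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # one electronvolt in joules

def fermi_dirac(energy, mu, T):
    """Average occupancy of a single-particle state with the given energy
    (the F-D distribution), with energies in joules and T in kelvin."""
    return 1.0 / (math.exp((energy - mu) / (K_B * T)) + 1.0)

mu = 5 * EV      # illustrative chemical potential of 5 eV (assumed value)
kT = K_B * 300   # thermal energy at room temperature

print(fermi_dirac(mu, mu, 300))            # -> 0.5 at the chemical potential
print(fermi_dirac(mu + 10 * kT, mu, 300))  # well above mu: occupancy near 0
print(fermi_dirac(mu - 10 * kT, mu, 300))  # well below mu: occupancy near 1
```

Note that the occupancy is exactly 1/2 when \epsilon_i = \mu, and it approaches 0 or 1 a few kT above or below the chemical potential, consistent with 0 < \bar{n}_i < 1.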


Distribution of particles over energy

The above Fermi–Dirac distribution gives the distribution of identical fermions over single-particle energy states, where no more than one fermion can occupy a state. Using the F–D distribution, one can find the distribution of identical fermions over energy, where more than one fermion can have the same energy.[15]

The average number of fermions with energy \epsilon_i can be found by multiplying the F–D distribution \bar{n}_i by the degeneracy g_i (i.e. the number of states with energy \epsilon_i),[16]

\begin{align}
\bar{n}(\epsilon_i) & = g_i \, \bar{n}_i \\
 & = \frac{g_i}{e^{(\epsilon_i-\mu) / k T} + 1}
\end{align}

When g_i \ge 2, it is possible that \bar{n}(\epsilon_i) > 1, since there is more than one state that can be occupied by fermions with the same energy \epsilon_i.

When a quasi-continuum of energies \epsilon has an associated density of states g(\epsilon) (i.e. the number of states per unit energy range per unit volume[17]), the average number of fermions per unit energy range per unit volume is,

\bar { \mathcal{N} }(\epsilon) = g(\epsilon) \ F(\epsilon)

where F(\epsilon) is called the Fermi function and is the same function used for the F–D distribution \bar{n}_i,[18]

F(\epsilon) = \frac{1}{e^{(\epsilon-\mu) / k T} + 1}

so that,

\bar { \mathcal{N} }(\epsilon) = \frac{g(\epsilon)}{e^{(\epsilon-\mu) / k T} + 1} .
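For a concrete g(\epsilon), one can use the free-electron-gas density of states, g(\epsilon) = \frac{1}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\epsilon} (a standard textbook result, not derived in this article). The sketch below integrates \bar{\mathcal{N}}(\epsilon) over energy at a copper-like chemical potential of 7 eV (an assumed value) and recovers an electron density close to copper's conduction-electron density of roughly 8.5×10^28 m^-3.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
K_B = 1.380649e-23       # Boltzmann constant, J/K
EV = 1.602176634e-19     # one electronvolt in joules

def dos_free_electrons(energy):
    """Free-electron-gas density of states per unit energy per unit volume,
    g(E) = (1 / 2 pi^2) (2 m / hbar^2)^(3/2) sqrt(E).  A standard textbook
    result, assumed here for illustration."""
    if energy <= 0.0:
        return 0.0
    return (2 * M_E / HBAR**2) ** 1.5 * math.sqrt(energy) / (2 * math.pi**2)

def fermi_function(energy, mu, T):
    return 1.0 / (math.exp((energy - mu) / (K_B * T)) + 1.0)

def fermions_per_energy(energy, mu, T):
    """Average number of fermions per unit energy range per unit volume."""
    return dos_free_electrons(energy) * fermi_function(energy, mu, T)

# Integrating over all energies gives the electron density.  With a
# copper-like chemical potential of 7 eV (assumed value), the result is
# close to copper's conduction-electron density of ~8.5e28 m^-3.
mu, T = 7 * EV, 300.0
n_pts = 200_000
e_max = mu + 20 * K_B * T   # occupancy is negligible beyond ~20 kT above mu
de = e_max / n_pts
density = de * sum(fermions_per_energy((i + 0.5) * de, mu, T)
                   for i in range(n_pts))
print(f"{density:.2e}")   # roughly 8.4e28 electrons per m^3
```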

Quantum and classical regimes

The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. Using this approach, it can be shown that the classical situation occurs if the concentration of particles corresponds to an average interparticle separation \bar{R} that is much greater than the average de Broglie wavelength \bar{\lambda} of the particles,[19]

\bar{R} \ \gg \ \bar{\lambda} \ \approx \ \frac{h}{\sqrt{3mkT}}

where h is Planck's constant, and m is the mass of a particle.

For the case of conduction electrons in a typical metal at T = 300 K (i.e. approximately room temperature), the system is far from the classical regime because \bar{R} \approx \bar{\lambda}/25. This is due to the small mass of the electron and the high concentration (i.e. small \bar{R}) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.[19]
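This estimate is easy to reproduce. The sketch below evaluates the formula above for \bar{\lambda} and uses an assumed conduction-electron density of 8.5×10^28 m^-3, a value typical of copper (the text does not name a specific metal).

```python
import math

H = 6.62607015e-34       # Planck constant, J s
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_E = 9.1093837015e-31   # electron mass, kg

T = 300.0    # room temperature, K
n = 8.5e28   # conduction-electron density, m^-3 (copper-like assumed value)

lam = H / math.sqrt(3 * M_E * K_B * T)   # average de Broglie wavelength
R = n ** (-1.0 / 3.0)                    # average interparticle separation

print(f"lambda = {lam:.2e} m, R = {R:.2e} m, R/lambda = {R / lam:.3f}")
# R comes out roughly 25-30 times smaller than lambda, so the classical
# condition R >> lambda fails badly and F-D statistics is required.
```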

Another example of a system that is not in the classical regime is the system consisting of the electrons of a star that has collapsed to a white dwarf. Although the white dwarf's temperature is high (typically T = 10,000 K on its surface[20]), its high electron concentration and the small mass of each electron preclude using a classical approximation, and again Fermi–Dirac statistics is required.[6]

Three derivations of the Fermi–Dirac distribution

Derivation starting with grand canonical ensemble

The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble.[21] In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir).

Because the fermions do not interact, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir. In other words, each single-particle level is a separate, tiny grand canonical ensemble. By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = ϵ). The resulting partition function for that single-particle level therefore has just two terms:

\begin{align}\mathcal Z & = \exp(0(\mu - 0)/k_B T) + \exp(1(\mu - \epsilon)/k_B T) \\ & = 1 + \exp((\mu - \epsilon)/k_B T)\end{align}

and the average particle number for that single-particle substate is given by

\langle N\rangle = k_B T \frac{1}{\mathcal Z} \left(\frac{\partial \mathcal Z}{\partial \mu}\right)_{V,T} = \frac{1}{\exp((\epsilon-\mu)/k_B T)+1}

This result applies for each single-particle level, and thus gives the Fermi-Dirac distribution for the entire state of the system.[21]

The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution):

\langle (\Delta N)^2 \rangle = k_B T \left(\frac{d\langle N\rangle}{d\mu}\right)_{V,T} = \langle N\rangle (1 - \langle N\rangle)

This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas,[22] where the ability of an energy level to contribute to transport phenomena is proportional to \langle (\Delta N)^2 \rangle.
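Both results above can be checked numerically: differentiating the two-term partition function with respect to μ reproduces the F–D occupancy, and differentiating ⟨N⟩ reproduces the Bernoulli variance ⟨N⟩(1 − ⟨N⟩). The 5 eV and 300 K values below are arbitrary illustrative choices.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # one electronvolt in joules

def Z(mu, eps, T):
    """Two-term grand partition function of one single-particle level:
    empty (N=0, E=0) plus occupied (N=1, E=eps)."""
    return 1.0 + math.exp((mu - eps) / (K_B * T))

def avg_N(mu, eps, T):
    """<N> = kT (1/Z) dZ/dmu, with the derivative taken numerically."""
    h = K_B * T * 1e-6
    dZ = (Z(mu + h, eps, T) - Z(mu - h, eps, T)) / (2 * h)
    return K_B * T * dZ / Z(mu, eps, T)

def var_N(mu, eps, T):
    """<(Delta N)^2> = kT d<N>/dmu, again by a numerical derivative."""
    h = K_B * T * 1e-4
    return K_B * T * (avg_N(mu + h, eps, T) - avg_N(mu - h, eps, T)) / (2 * h)

def fermi_dirac(eps, mu, T):
    return 1.0 / (math.exp((eps - mu) / (K_B * T)) + 1.0)

# Arbitrary illustrative values: a level 0.05 eV above the chemical potential.
mu, eps, T = 5 * EV, 5.05 * EV, 300.0
n = avg_N(mu, eps, T)
print(n, fermi_dirac(eps, mu, T))        # the two expressions agree
print(var_N(mu, eps, T), n * (1 - n))    # Bernoulli variance <N>(1 - <N>)
```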

Derivations starting with canonical distribution

It is also possible to derive Fermi–Dirac statistics in the canonical ensemble.

Standard derivation

Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium.[12] Since there is negligible interaction between the fermions, the energy E_R of a state R of the many-particle system can be expressed as a sum of single-particle energies,

E_R = \sum_{r} n_r \epsilon_r

where n_r is called the occupancy number and is the number of particles in the single-particle state r with energy \epsilon_r. The summation is over all possible single-particle states r.

The probability that the many-particle system is in the state R is given by the normalized canonical distribution,[23]

P_R = \frac { e^{-\beta E_R} }
{ \displaystyle \sum_{R'} e^{-\beta E_{R'}} } 

where \beta = 1/kT, k is Boltzmann's constant, T is the absolute temperature, e^{-\beta E_R} is called the Boltzmann factor, and the summation is over all possible states R' of the many-particle system. The average value of an occupancy number n_i is[23]

\bar{n}_i \ = \ \sum_{R} n_i \ P_R

Note that the state R of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying n_1, n_2, \dots, so that

P_R = P_{n_1,n_2,\dots} = \frac{ e^{-\beta (n_1\epsilon_1+n_2\epsilon_2+\cdots)} }
                               { \displaystyle \sum_{n_1,n_2,\dots} e^{-\beta (n_1\epsilon_1+n_2\epsilon_2+\cdots)} }

Since the occupancy number n_i can only take the values 0 and 1, the sum over all states can be split into a sum over n_i and a restricted sum \sideset{ }{^{(i)}}\sum over the occupancy numbers of the remaining single-particle states, giving

\bar{n}_i = \frac{ \displaystyle \sum_{n_i=0}^1 n_i \, e^{-\beta n_i\epsilon_i} \sideset{ }{^{(i)}}\sum_{n_1,n_2,\dots} e^{-\beta (n_1\epsilon_1+n_2\epsilon_2+\cdots)} }
                 { \displaystyle \sum_{n_i=0}^1 e^{-\beta n_i\epsilon_i} \sideset{ }{^{(i)}}\sum_{n_1,n_2,\dots} e^{-\beta (n_1\epsilon_1+n_2\epsilon_2+\cdots)} }

where the  ^{(i)} on the summation sign indicates that the sum is not over n_i and is subject to the constraint that the total number of particles associated with the summation is N_i = N-n_i . Note that \Sigma^{(i)} still depends on n_i through the N_i constraint, since in one case n_i=0 and \Sigma^{(i)} is evaluated with N_i=N , while in the other case n_i=1 and \Sigma^{(i)} is evaluated with N_i=N-1 .  To simplify the notation and to clearly indicate that \Sigma^{(i)} still depends on n_i through N-n_i , define

Z_i(N-n_i) \equiv \ \sideset{ }{^{(i)}}\sum_{n_1,n_2,...} e^{-\beta (n_1\epsilon_1+n_2\epsilon_2+\cdots)} \;

so that the previous expression for \bar{n}_i can be rewritten and evaluated in terms of the Z_i,

\begin{align}
\bar{n}_i & = \frac{ \displaystyle \sum_{n_i=0}^1 n_i \, e^{-\beta n_i\epsilon_i} \, Z_i(N-n_i)}
                   { \displaystyle \sum_{n_i=0}^1 e^{-\beta n_i\epsilon_i} \, Z_i(N-n_i)} \\
 & = \frac{ 0 + e^{-\beta\epsilon_i} \, Z_i(N-1)} {Z_i(N) + e^{-\beta\epsilon_i} \, Z_i(N-1)} \\
 & = \frac{1}{[Z_i(N)/Z_i(N-1)] \, e^{\beta\epsilon_i} + 1}
\end{align}

The ratio Z_i(N)/Z_i(N-1) can be expressed in terms of the chemical potential \mu: since -kT \ln Z_i is a Helmholtz free energy and \mu is approximately the change in that free energy when one particle is added, Z_i(N)/Z_i(N-1) \approx e^{-\beta\mu}. Substituting this into the last expression gives the Fermi–Dirac distribution,

\bar{n}_i = \frac{1}{e^{(\epsilon_i - \mu)/kT} + 1}

Derivation using Lagrange multipliers

The same result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers.[27]

Suppose we have a number of energy levels, labeled by index i, each level having energy εi  and containing a total of ni  particles. Suppose each level contains gi  distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi  associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel.

The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation

w(n_i,g_i)=\frac{g_i!}{n_i!(g_i-n_i)!} \ .

For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011, for a total of three ways, which equals 3!/(2!1!). The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated:

W = \prod_i w(n_i,g_i) = \prod_i \frac{g_i!}{n_i!(g_i-n_i)!}.

Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles, and a fixed energy. We constrain our solution using Lagrange multipliers forming the function:

f(n_i)=\ln(W)+\alpha(N-\sum n_i)+\beta(E-\sum n_i \epsilon_i).

Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers:

n_i = \frac{g_i}{e^{\alpha+\beta \epsilon_i}+1}.

By a process similar to that outlined in the Maxwell-Boltzmann statistics article, it can be shown thermodynamically that \beta = \frac{1}{kT} and \alpha = - \frac{\mu}{kT} where \mu is the chemical potential, k is Boltzmann's constant and T is the temperature, so that finally, the probability that a state will be occupied is:

\bar{n}_i = \frac{n_i}{g_i} = \frac{1}{e^{(\epsilon_i-\mu)/kT}+1}.
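The stationarity condition can be verified numerically: substituting the Fermi–Dirac population numbers back into the derivative of the Lagrange function (after Stirling's approximation) gives zero. The parameter values below are arbitrary illustrative choices, not tied to any physical system.

```python
import math

def dfdn(n, g, alpha, beta, eps):
    """Derivative of f = ln(W) + alpha (N - sum n_i) + beta (E - sum n_i eps_i)
    with respect to one n_i, after Stirling's approximation:
    d/dn [g ln g - n ln n - (g - n) ln(g - n)] - alpha - beta * eps."""
    return math.log(g - n) - math.log(n) - alpha - beta * eps

def fd_population(g, alpha, beta, eps):
    """Fermi-Dirac population numbers n_i = g_i / (e^(alpha + beta eps_i) + 1)."""
    return g / (math.exp(alpha + beta * eps) + 1.0)

# Arbitrary illustrative parameters.
g, alpha, beta, eps = 4.0, -1.2, 2.5, 0.8

n = fd_population(g, alpha, beta, eps)
print(dfdn(n, g, alpha, beta, eps))   # vanishes at the F-D populations
```

The derivative is positive below the F–D value and negative above it, confirming that the Fermi–Dirac populations maximize W subject to the constraints.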
