Random matrices

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.



In nuclear physics, random matrices were introduced by Eugene Wigner[1] to model the spectra of heavy atoms. He postulated that the spacings between the lines in the spectrum of a heavy atom should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[2] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean field approximation.

In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture[3] asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.

Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics,[4] quantum gravity in two dimensions,[5] mesoscopic physics,[6] and more.[7][8][9][10][11]

Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart for statistical analysis of large samples;[12] see estimation of covariance matrices.

Significant results have been shown that extend the classical scalar Chernoff, Bernstein, and Hoeffding inequalities to the largest eigenvalues of finite sums of random Hermitian matrices.[13] Corollary results are derived for the maximum singular values of rectangular matrices.

In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[14] to describe computation errors in operations such as matrix multiplication. See also[15] for more recent results.

Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modelled by the distribution of eigenvalues of certain random matrices.[16] The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.

Gaussian ensembles

The most studied random matrix ensembles are the Gaussian ensembles.

The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density

\frac{1}{Z_{\text{GUE}(n)}} e^{- \frac{n}{2} \mathrm{tr} H^2}

on the space of n × n Hermitian matrices H = (H_{ij})_{i,j=1}^n. Here Z_{\text{GUE}(n)} = 2^{n/2} (\pi/n)^{n^2/2} is a normalization constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.
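As a concrete illustration, a matrix with this distribution can be sampled in a few lines. The sketch below uses NumPy; the function name is illustrative, and the scaling is chosen to match the density exp(−(n/2) tr H²) above (variance 1/n for each entry).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gue(n, rng):
    """Draw one matrix from GUE(n), scaled so that the density is
    proportional to exp(-(n/2) tr H^2): diagonal entries have
    variance 1/n, and off-diagonal entries satisfy E|H_ij|^2 = 1/n."""
    a = rng.normal(size=(n, n))
    b = rng.normal(size=(n, n))
    x = (a + 1j * b) / 2              # i.i.d. complex Gaussian entries
    return (x + x.conj().T) / np.sqrt(n)

h = sample_gue(4, rng)
hermitian = np.allclose(h, h.conj().T)   # Hermitian by construction
```

Conjugating h by any fixed unitary matrix leaves its distribution unchanged, which is exactly the invariance the name of the ensemble refers to.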

The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density

\frac{1}{Z_{\text{GOE}(n)}} e^{- \frac{n}{4} \mathrm{tr} H^2}

on the space of n × n real symmetric matrices H = (H_{ij})_{i,j=1}^n. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.

The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density

\frac{1}{Z_{\text{GSE}(n)}} e^{- n \mathrm{tr} H^2} \,

on the space of n × n quaternionic Hermitian matrices H = (H_{ij})_{i,j=1}^n. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

The joint probability density for the eigenvalues λ1,λ2,...,λn of GUE/GOE/GSE is given by

\frac{1}{Z_{\beta, n}} \prod_{k=1}^n e^{-\frac{\beta n}{4}\lambda_k^2} \prod_{i < j} \left| \lambda_j - \lambda_i \right|^\beta~,

where the Dyson index β (β = 1 for GOE, β = 2 for GUE, β = 4 for GSE) counts the number of real components per matrix element, and Z_{\beta, n} is a normalisation constant which can be computed explicitly; see Selberg integral. In the case of GUE (β = 2), this formula describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of order \beta) at coinciding eigenvalues \lambda_j = \lambda_i.
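The repulsion is easy to observe numerically. The sketch below (NumPy; names illustrative) samples GOE-type matrices and checks that very small nearest-neighbour spacings are far rarer than they would be for independent points; rescaling by the global mean spacing is a crude stand-in for a proper unfolding of the spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

def bulk_spacings(n, trials, rng):
    """Nearest-neighbour eigenvalue spacings from the middle of
    GOE-type spectra, rescaled to unit mean (crude unfolding)."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2                         # real symmetric Gaussian matrix
        ev = np.linalg.eigvalsh(h)                # sorted eigenvalues
        out.extend(np.diff(ev[n // 4 : 3 * n // 4]))  # stay in the bulk
    s = np.asarray(out)
    return s / s.mean()

s = bulk_spacings(100, 50, rng)
# For independent (Poisson) points, P(spacing < 0.05) would be about 0.05;
# level repulsion makes such near-degeneracies much rarer.
frac_tiny = np.mean(s < 0.05)
```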


Wigner matrices are random Hermitian matrices \textstyle H_n = (H_n(i,j))_{i,j=1}^n such that the entries

\left\{ H_n(i, j)~, \, 1 \leq i \leq j \leq n \right\}

on or above the main diagonal are independent random variables with zero mean, and the entries

\left\{ H_n(i, j)~, \, 1 \leq i < j \leq n \right\}

have identical second moments.

Invariant matrix ensembles are random Hermitian matrices with a density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form \textstyle \frac{1}{Z_n} e^{- n \mathrm{tr} V(H)}~, where the function V is called the potential.

The Gaussian ensembles are the only common special cases of these two classes of random matrices.

Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.

Global regime

In the global regime, one is interested in the distribution of linear statistics of the form N_{f, H} = n^{-1} \, \mathrm{tr} f(H).

Empirical spectral measure

The empirical spectral measure μH of H is defined by

\mu_{H}(A) = \frac{1}{n} \, \# \left\{ \text{eigenvalues of }H\text{ in }A \right\} = N_{1_A, H}, \quad A \subset \mathbb{R}.

Usually, the limit of \mu_{H} is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).
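In code, the empirical spectral measure of an interval is just a normalized eigenvalue count. A minimal NumPy sketch (function name illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_spectral_measure(h, a, b):
    """mu_H([a, b]) = (number of eigenvalues of h in [a, b]) / n."""
    ev = np.linalg.eigvalsh(h)
    return np.count_nonzero((ev >= a) & (ev <= b)) / len(ev)

x = rng.normal(size=(6, 6))
h = (x + x.T) / 2
total = empirical_spectral_measure(h, -np.inf, np.inf)  # whole line: mass 1
```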

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle law. A more general theory was developed by Marchenko and Pastur.[17][18]
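The semicircle limit is insensitive to the entry distribution, and this universality can be checked numerically. The sketch below (NumPy; an illustration, not a proof) builds a Wigner matrix with ±1 entries and compares the fraction of eigenvalues in [-1, 1] against the semicircle prediction, which works out to 1/3 + √3/(2π) ≈ 0.61.

```python
import numpy as np

rng = np.random.default_rng(3)

# A Wigner matrix with non-Gaussian (+/-1) entries, scaled so each entry
# has variance 1/n; its spectrum should still approximate the semicircle
# density sqrt(4 - x^2) / (2 pi) supported on [-2, 2].
n = 400
signs = rng.choice([-1.0, 1.0], size=(n, n))
h = np.triu(signs) + np.triu(signs, 1).T   # symmetrize the upper triangle
h /= np.sqrt(n)
ev = np.linalg.eigvalsh(h)

# Mass the semicircle law assigns to [-1, 1]: 1/3 + sqrt(3)/(2 pi) ~ 0.61.
frac = np.mean(np.abs(ev) <= 1.0)
```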

The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[19]


For the linear statistics N_{f,H} = n^{-1} \sum_j f(\lambda_j), one is also interested in the fluctuations about \int f(\lambda) \, dN(\lambda). For many classes of random matrices, a central limit theorem of the form

\frac{N_{f,H} - \int f(\lambda) \, dN(\lambda)}{\sigma_{f, n}} \overset{D}{\longrightarrow} N(0, 1)

is known; see[20][21] and references therein.

Local regime

In the local regime, one is interested in the spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics

Formally, fix \lambda_0 in the interior of the support of N(\lambda). Then consider the point process

\Xi(\lambda_0) = \sum_j \delta\Big({\cdot} - n \rho(\lambda_0) (\lambda_j - \lambda_0) \Big)~,

where \lambda_j are the eigenvalues of the random matrix.

The point process \Xi(\lambda_0) captures the statistical properties of eigenvalues in the vicinity of \lambda_0. For the Gaussian ensembles, the limit of \Xi(\lambda_0) is known;[2] in particular, for GUE it is a determinantal point process with the kernel

K(x, y) = \frac{\sin \pi(x-y)}{\pi(x-y)}

(the sine kernel).
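The sine kernel and the resulting two-point correlation are simple to write down directly. A NumPy sketch (note that np.sinc uses the normalized convention sin(πt)/(πt), with value 1 at t = 0):

```python
import numpy as np

def sine_kernel(x, y):
    # np.sinc(t) = sin(pi t) / (pi t), continuously extended to 1 at t = 0.
    return np.sinc(x - y)

def rho2(x, y):
    """Two-point correlation of the determinantal process:
    the determinant of [[K(x,x), K(x,y)], [K(y,x), K(y,y)]]."""
    return sine_kernel(x, x) * sine_kernel(y, y) - sine_kernel(x, y) ** 2

# rho2 vanishes as y -> x: nearby eigenvalues repel, while at large
# separation the correlation tends to the uncorrelated value 1.
near = rho2(0.0, 1e-4)
far = rho2(0.0, 10.0)
```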

The universality principle postulates that the limit of \Xi(\lambda_0) as n \to \infty should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on \lambda_0). This was rigorously proved for several models of random matrices: for invariant matrix ensembles[22][23] and for Wigner matrices,[24][25] among others.

Edge statistics

See Tracy–Widom distribution.

Other classes of random matrices

Wishart matrices

Main article: Wishart distribution

Wishart matrices are n × n random matrices of the form H = X X*, where X is an n × n random matrix with independent entries, and X* is its conjugate transpose. In the important special case considered by Wishart, the entries of X are identically distributed Gaussian random variables (either real or complex).

The limit of the empirical spectral measure of Wishart matrices was found[17] by Vladimir Marchenko and Leonid Pastur, see Marchenko–Pastur distribution.
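For the square case, the limiting support can be checked numerically: with X an n × n matrix of i.i.d. standard Gaussians, the eigenvalues of X Xᵀ/n fall, asymptotically, in the Marchenko–Pastur interval [0, 4]. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Wishart-type matrix W = X X^T / n with aspect ratio 1; the
# Marchenko-Pastur law for this case is supported on [0, 4].
n = 300
x = rng.normal(size=(n, n))
w = x @ x.T / n
ev = np.linalg.eigvalsh(w)

# tr(W)/n concentrates near E[x_ij^2] = 1.
mean_ev = ev.mean()
```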

Random unitary matrices

See circular ensembles.

Non-Hermitian random matrices

See circular law.
