Title: Gaussian integral  
Author: World Heritage Encyclopedia
Language: English
Subject: Integral, Missing science topics/ExistingMathG, Normal distribution, Gaussian function, Integrals
Collection: Articles Containing Proofs, Gaussian Function, Integrals, Theorems in Analysis
Publisher: World Heritage Encyclopedia

Gaussian integral

A graph of f(x) = e^{-x^2} and the area between the function and the x-axis, which is equal to \sqrt{\pi}.

The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function e^{-x^2} over the entire real line. It is named after the German mathematician and physicist Carl Friedrich Gauss. The integral is:

\int_{-\infty}^{+\infty} e^{-x^2}\,\mathrm d x = \sqrt{\pi}

This integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related both to the error function and to the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently: in quantum mechanics it arises when finding the probability density of the ground state of the harmonic oscillator, and in the path integral formulation it is used to find the propagator of the harmonic oscillator.

Although no elementary expression exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for

\int e^{-x^2}\,dx,

but the definite integral

\int_{-\infty}^{+\infty} e^{-x^2}\,\mathrm d x

can be evaluated.

The Gaussian integral is encountered very often in physics and numerous generalizations of the integral are encountered in quantum field theory.
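Before turning to the derivations, the value of the integral can be sanity-checked numerically. The following Python sketch (not part of the original article; the truncation point and step count are illustrative choices) approximates the integral with a midpoint Riemann sum and compares it against \sqrt{\pi}, and also checks the finite-limit relation to the error function, \int_{-a}^{a} e^{-x^2}\,dx = \sqrt{\pi}\,\mathrm{erf}(a).

```python
import math

def gaussian_integral(a=10.0, n=200_000):
    """Midpoint Riemann sum of exp(-x**2) over [-a, a].

    With a = 10 the truncated tails are negligible (e**-100 ~ 3.7e-44).
    """
    h = 2 * a / n
    return h * sum(math.exp(-(-a + (i + 0.5) * h) ** 2) for i in range(n))

print(gaussian_integral())   # close to sqrt(pi)
print(math.sqrt(math.pi))    # 1.7724538509055159

# Finite limits give the error function: int_{-a}^{a} e^{-x^2} dx = sqrt(pi)*erf(a)
print(gaussian_integral(1.0), math.sqrt(math.pi) * math.erf(1.0))
```

Because the integrand decays so fast, even this naive quadrature agrees with the closed form to many decimal places.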


Contents

  • 1 Computation
    • 1.1 By polar coordinates
      • 1.1.1 Careful proof
    • 1.2 By Cartesian coordinates
  • 2 Relation to the gamma function
  • 3 Generalizations
    • 3.1 The integral of a Gaussian function
    • 3.2 n-dimensional and functional generalization
    • 3.3 n-dimensional with linear term
    • 3.4 Integrals of similar form
    • 3.5 Higher-order polynomials
  • 4 See also
  • 5 References


Computation

By polar coordinates

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[1] is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)}\, dx\,dy

Evaluating the double integral on the right in polar coordinates and comparing the two computations yields the integral, though one should take care about the improper integrals involved.

On the other hand, \begin{align} \iint_{\mathbf{R}^2} e^{-(x^2+y^2)}\,d(x,y) &= \int_0^{2\pi} \int_0^{\infty} e^{-r^2}r\,dr\,d\theta\\ &= 2\pi \int_0^\infty re^{-r^2}\,dr\\ &= 2\pi \int_{-\infty}^0 \tfrac{1}{2} e^s\,ds && s = -r^2\\ &= \pi \int_{-\infty}^0 e^s\,ds \\ &= \pi (e^0 - e^{-\infty}) \\ &=\pi, \end{align}

where the factor of r comes from the transform to polar coordinates (r\,dr\,d\theta is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = -r^2, so ds = -2r\,dr.

Combining these yields

\left ( \int_{-\infty}^\infty e^{-x^2}\,dx \right )^2=\pi,


\int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt{\pi}.

Careful proof

To justify the improper double integrals and equating the two expressions, we begin with an approximating function:

I(a)=\int_{-a}^a e^{-x^2}dx.

If the integral

\int_{-\infty}^\infty e^{-x^2}\,dx

is absolutely convergent, then its Cauchy principal value, that is, the limit

\lim_{a\to\infty} I(a)

coincides with

\int_{-\infty}^\infty e^{-x^2}\,dx.

To see that this is the case, consider that

\int_{-\infty}^\infty |e^{-x^2}|\, dx < \int_{-\infty}^{-1} -x e^{-x^2}\, dx + \int_{-1}^1 e^{-x^2}\, dx + \int_{1}^{\infty} x e^{-x^2}\, dx < \infty,

so we can compute

\int_{-\infty}^\infty e^{-x^2}\,dx

by just taking the limit

\lim_{a\to\infty} I(a).

Taking the square of I(a) yields

\begin{align} I(a)^2 & = \left ( \int_{-a}^a e^{-x^2}\, dx \right ) \left ( \int_{-a}^a e^{-y^2}\, dy \right ) \\ & = \int_{-a}^a \left ( \int_{-a}^a e^{-y^2}\, dy \right )\,e^{-x^2}\, dx \\ & = \int_{-a}^a \int_{-a}^a e^{-(x^2+y^2)}\,dy\,dx. \end{align}

Using Fubini's theorem, the above double integral can be seen as an area integral

\iint_{[-a,a] \times [-a,a]} e^{-(x^2+y^2)}\,d(x,y),

taken over a square with vertices {(−a, a), (a, a), (a, −a), (−a, −a)} on the xy-plane.

Since the exponential function is greater than 0 for all real numbers, it follows that the integral taken over the square's incircle must be less than I(a)^2, and similarly the integral taken over the square's circumcircle must be greater than I(a)^2. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

\begin{align} x & = r \cos \theta \\ y & = r \sin\theta \\ d(x,y) & = r\, d(r,\theta). \end{align}
\int_0^{2\pi}\int_0^a re^{-r^2}\,dr\,d\theta < I^2(a) < \int_0^{2\pi}\int_0^{a\sqrt{2}} re^{-r^2}\,dr\,d\theta.

Carrying out the integration over r gives


\pi (1-e^{-a^2}) < I^2(a) < \pi (1 - e^{-2a^2}).

By the squeeze theorem, this gives the Gaussian integral

\int_{-\infty}^\infty e^{-x^2}\, dx = \sqrt{\pi}.
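The squeeze \pi (1-e^{-a^2}) < I(a)^2 < \pi (1 - e^{-2a^2}) can be observed numerically. The sketch below (my own illustrative check, not from the article) approximates I(a) by a midpoint sum and verifies both bounds for a few values of a.

```python
import math

def I(a, n=100_000):
    # Midpoint approximation of the truncated integral I(a) = int_{-a}^{a} e^{-x^2} dx
    h = 2 * a / n
    return h * sum(math.exp(-(-a + (i + 0.5) * h) ** 2) for i in range(n))

for a in (0.5, 1.0, 2.0, 3.0):
    lower = math.pi * (1 - math.exp(-a * a))
    upper = math.pi * (1 - math.exp(-2 * a * a))
    squared = I(a) ** 2
    assert lower < squared < upper
    print(f"a={a}: {lower:.6f} < I(a)^2 = {squared:.6f} < {upper:.6f}")
```

As a grows, both bounds approach \pi, exactly as the squeeze theorem requires.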

By Cartesian coordinates

A different technique, which goes back to Laplace (1812),[2] is the following. Let

\begin{align} y & = xs \\ dy & = x\,ds. \end{align}

Since the limits on s as y → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that e^{-x^2} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx.

Thus, over the range of integration, x ≥ 0, and the variables y and s have the same limits. This yields:

\begin{align} I^2 &= 4 \int_0^\infty \int_0^\infty e^{-(x^2 + y^2)} dy\,dx \\ &= 4 \int_0^\infty \left( \int_0^\infty e^{-(x^2 + y^2)} \, dy \right) \, dx \\ &= 4 \int_0^\infty \left( \int_0^\infty e^{-x^2(1+s^2)} x\,ds \right) \, dx \\ &= 4 \int_0^\infty \left( \int_0^\infty e^{-x^2(1 + s^2)} x \, dx \right) \, ds \\ &= 4 \int_0^\infty \left[ \frac{1}{-2(1+s^2)} e^{-x^2(1+s^2)} \right]_{x=0}^{x=\infty} \, ds \\ &= 4 \left (\tfrac{1}{2} \int_0^\infty \frac{ds}{1+s^2} \right ) \\ &= 2 \Big[ \arctan s \Big]_0^\infty \\ &= \pi. \end{align}

Therefore, I = \sqrt{\pi}, as expected.

Relation to the gamma function

The integrand is an even function,

\int_{-\infty}^{\infty} e^{-x^2} dx = 2 \int_0^\infty e^{-x^2} dx

Thus, after the change of variable x=\sqrt{t}, this turns into the Euler integral

2 \int_0^\infty e^{-x^2} dx=2\int_0^\infty \frac{1}{2}\ e^{-t} \ t^{-\frac{1}{2}} dt = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}

where Γ is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt \pi. More generally,

\int_0^\infty e^{-ax^b} dx = \frac{\Gamma\left(\frac{1}{b}\right)}{ba^{\frac{1}{b}}}
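Both facts can be checked with the standard library's gamma function. The sketch below (sample values a = 2, b = 4 are my own illustrative choices) confirms \Gamma(1/2) = \sqrt{\pi} and compares a midpoint quadrature of \int_0^\infty e^{-ax^b}\,dx against the closed form.

```python
import math

def integral_exp(a, b, upper=5.0, n=200_000):
    # Midpoint sum of exp(-a * x**b) over (0, upper]; the truncated tail is
    # negligible for the sample values below (a = 2, b = 4).
    h = upper / n
    return h * sum(math.exp(-a * ((i + 0.5) * h) ** b) for i in range(n))

# Gamma(1/2) = sqrt(pi)
print(math.gamma(0.5), math.sqrt(math.pi))

a, b = 2.0, 4.0
closed_form = math.gamma(1 / b) / (b * a ** (1 / b))
print(integral_exp(a, b), closed_form)
```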


Generalizations

The integral of a Gaussian function

The integral of an arbitrary Gaussian function (with a > 0) is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx= \sqrt{\frac{\pi}{a}}.

An alternative form is

\int_{-\infty}^{\infty}e^{- a x^2 + b x + c}\,dx=\sqrt{\frac{\pi}{a}}\,e^{\frac{b^2}{4a}+c}.

This form is very useful in calculating mathematical expectations of some continuous probability distributions related to the normal distribution.

See, for example, the expectation of the log-normal distribution.
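The completed-square form can be verified numerically. In the sketch below the coefficients a, b, c are arbitrary sample values of my own choosing (with a > 0); a midpoint sum over a wide interval is compared against \sqrt{\pi/a}\,e^{b^2/4a + c}.

```python
import math

def integral_quadratic_exp(a, b, c, lim=20.0, n=400_000):
    # Midpoint sum of exp(-a x^2 + b x + c) over [-lim, lim]; the tails are
    # negligible for the sample coefficients below.
    h = 2 * lim / n
    return h * sum(math.exp(-a * x * x + b * x + c)
                   for x in (-lim + (i + 0.5) * h for i in range(n)))

a, b, c = 1.5, 0.7, -0.3   # arbitrary sample values, a > 0
closed_form = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a) + c)
print(integral_quadratic_exp(a, b, c), closed_form)
```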

n-dimensional and functional generalization

Suppose A is a symmetric positive-definite (hence invertible) n×n precision matrix, that is, the matrix inverse of the covariance matrix. Then,

\int_{-\infty}^\infty \exp\left(-\frac 1 2 \sum_{i,j=1}^{n}A_{ij} x_i x_j \right) \, d^nx =\int_{-\infty}^\infty \exp\left(-\frac 1 2 x^{T} A x \right) \, d^nx=\sqrt{\frac{(2\pi)^n}{\det A}}

where the integral is understood to be over Rn. This fact is applied in the study of the multivariate normal distribution.
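For n = 2 this can be checked directly. The matrix below is an arbitrary symmetric positive-definite example (my own illustrative choice); a two-dimensional midpoint sum of \exp(-\tfrac12 x^T A x) is compared against \sqrt{(2\pi)^n/\det A}.

```python
import math

# Arbitrary symmetric positive-definite 2x2 matrix (illustrative values)
A = [[2.0, 1.0],
     [1.0, 3.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]        # = 5
closed_form = math.sqrt((2 * math.pi) ** 2 / det_A)  # sqrt((2*pi)^n / det A), n = 2

# Two-dimensional midpoint sum of exp(-1/2 x^T A x) over [-L, L]^2
L, n = 7.0, 500
h = 2 * L / n
total = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    for j in range(n):
        y = -L + (j + 0.5) * h
        quad = A[0][0] * x * x + 2 * A[0][1] * x * y + A[1][1] * y * y
        total += math.exp(-0.5 * quad)
total *= h * h

print(total, closed_form)
```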


More generally,

\int x^{k_1}\cdots x^{k_{2N}} \, \exp\left( -\frac{1}{2} \sum_{i,j=1}^{n}A_{ij} x_i x_j \right) \, d^nx =\sqrt{\frac{(2\pi)^n}{\det A}} \, \frac{1}{2^N N!} \, \sum_{\sigma \in S_{2N}}(A^{-1})^{k_{\sigma(1)}k_{\sigma(2)}} \cdots (A^{-1})^{k_{\sigma(2N-1)}k_{\sigma(2N)}}

where the sum runs over the permutations σ of {1, ..., 2N}, and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, ..., 2N} of N copies of A^{-1}.
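The simplest instance of this pairing formula, n = 1 and 2N = 2 (the second moment), can be checked numerically; the scalar A below is an arbitrary positive value of my own choosing.

```python
import math

A = 1.7   # arbitrary positive scalar playing the role of the 1x1 matrix

# Midpoint sum of x^2 * exp(-A x^2 / 2) over [-L, L]
L, n = 12.0, 400_000
h = 2 * L / n
moment = h * sum(x * x * math.exp(-0.5 * A * x * x)
                 for x in (-L + (i + 0.5) * h for i in range(n)))

# With 2N = 2 the sum over S_2 has two terms, each equal to A^{-1}; the
# prefactor 1/(2^N N!) = 1/2 cancels one of them: sqrt(2*pi/A) * A^{-1}
closed_form = math.sqrt(2 * math.pi / A) / A
print(moment, closed_form)
```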


Alternatively,

\int f(\vec x) \exp\left( - \frac 1 2 \sum_{i,j=1}^{n}A_{ij} x_i x_j \right) d^nx=\sqrt{(2\pi)^n\over \det A} \, \left. \exp\left({1\over 2}\sum_{i,j=1}^{n}(A^{-1})_{ij}{\partial \over \partial x_i}{\partial \over \partial x_j}\right)f(\vec{x})\right|_{\vec{x}=0}

for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that (2\pi)^\infty is infinite, and the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:

\frac{\int f(x_1)\cdots f(x_{2N}) e^{-\iint \frac{1}{2}A(x_{2N+1},x_{2N+2}) f(x_{2N+1}) f(x_{2N+2}) d^dx_{2N+1} d^dx_{2N+2}} \mathcal{D}f}{\int e^{-\iint \frac{1}{2} A(x_{2N+1}, x_{2N+2}) f(x_{2N+1}) f(x_{2N+2}) d^dx_{2N+1} d^dx_{2N+2}} \mathcal{D}f} =\frac{1}{2^N N!}\sum_{\sigma \in S_{2N}}A^{-1}(x_{\sigma(1)},x_{\sigma(2)})\cdots A^{-1}(x_{\sigma(2N-1)},x_{\sigma(2N)}).

In the DeWitt notation, the equation looks identical to the finite-dimensional case.

n-dimensional with linear term

If A is again a symmetric positive-definite matrix, then (assuming x and B are column vectors)

\int e^{-\frac{1}{2}\sum_{i,j=1}^{n}A_{ij} x_i x_j+\sum_{i=1}^{n}B_i x_i} d^nx=\int e^{-\frac{1}{2}\vec{x}^T \mathbf{A} \vec{x}+\vec{B}^T \vec{x}} d^nx= \sqrt{ \frac{(2\pi)^n}{\det{A}} }e^{\frac{1}{2}\vec{B}^{T}A^{-1}\vec{B}}.
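For n = 1 the formula reduces to \int e^{-Ax^2/2 + Bx}\,dx = \sqrt{2\pi/A}\,e^{B^2/2A}, which the following sketch verifies numerically (the values of A and B are arbitrary samples, with A > 0).

```python
import math

A, B = 2.5, 1.2   # arbitrary sample values with A > 0

# Midpoint sum of exp(-A x^2 / 2 + B x) over [-L, L]; tails negligible here
L, n = 15.0, 400_000
h = 2 * L / n
numeric = h * sum(math.exp(-0.5 * A * x * x + B * x)
                  for x in (-L + (i + 0.5) * h for i in range(n)))

closed_form = math.sqrt(2 * math.pi / A) * math.exp(B * B / (2 * A))
print(numeric, closed_form)
```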

Integrals of similar form

\int_0^\infty x^{2n} e^{-\frac{x^2}{a^2}}\,dx = \sqrt{\pi}\frac{a^{2n+1} (2n-1)!!}{2^{n+1}}
\int_0^\infty x^{2n+1} e^{-\frac{x^2}{a^2}}\,dx = \frac{n!}{2} a^{2n+2}
\int_0^\infty x^{n}e^{-a\,x^2}\,dx = \frac{\Gamma(\frac{(n+1)}{2})}{2\,a^{\frac{(n+1)}{2}}}
\int_0^\infty x^{2n}e^{-ax^2}\,dx = \frac{(2n-1)!!}{a^n 2^{n+1}} \sqrt{\frac{\pi}{a}}

(n a positive integer, a > 0)

An easy way to derive these is by parameter differentiation.

\int_{-\infty}^\infty x^{2n} e^{-\alpha x^2}\,dx = \left(-1\right)^n\int_{-\infty}^\infty \frac{\partial^n}{\partial \alpha^n} e^{-\alpha x^2}\,dx = \left(-1\right)^n\frac{\partial^n}{\partial \alpha^n} \int_{-\infty}^\infty e^{-\alpha x^2}\,dx = \sqrt{\pi} \left(-1\right)^n\frac{\partial^n}{\partial \alpha^n}\alpha^{-\frac{1}{2}} = \sqrt{\frac{\pi}{\alpha}}\frac{(2n-1)!!}{\left(2\alpha\right)^n}

One could also integrate by parts and find a recurrence relation to solve this.
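All of these moment formulas are easy to test against quadrature. The sketch below (scale a and loop range are illustrative choices) checks the even and odd half-line moments with a small double-factorial helper.

```python
import math

def double_factorial(k):
    # (2n - 1)!! for odd k; by convention (-1)!! = 1
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def half_line_moment(p, a, L=15.0, n=400_000):
    # Midpoint sum of x**p * exp(-x**2 / a**2) over (0, L]
    h = L / n
    return h * sum(((i + 0.5) * h) ** p * math.exp(-(((i + 0.5) * h) / a) ** 2)
                   for i in range(n))

a = 1.3   # arbitrary positive scale
for m in range(4):
    even = math.sqrt(math.pi) * a ** (2 * m + 1) * double_factorial(2 * m - 1) / 2 ** (m + 1)
    odd = math.factorial(m) / 2 * a ** (2 * m + 2)
    assert abs(half_line_moment(2 * m, a) - even) < 1e-6
    assert abs(half_line_moment(2 * m + 1, a) - odd) < 1e-6
print("even and odd moment formulas agree with quadrature")
```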

Higher-order polynomials

Exponentials of other even polynomials can be solved using series. For example, the solution to the integral of the exponential of a quartic polynomial (convergent when the real part of a is negative) is

\int_{-\infty}^{\infty} e^{a x^4+b x^3+c x^2+d x+f}\,dx =\frac12 e^f \ \sum_{\begin{smallmatrix}n,m,p=0 \\ n+p=0 \mod 2\end{smallmatrix}}^{\infty} \ \frac{b^n}{n!} \frac{c^m}{m!} \frac{d^p}{p!} \frac{\Gamma \left (\frac{3n+2m+p+1}{4} \right)}{(-a)^{\frac{3n+2m+p+1}4}}.

The n + p = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
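In the special case b = d = 0, only the n = p = 0 terms survive and the formula collapses to a single sum over m, which can be compared against direct quadrature. The coefficients below are arbitrary sample values of my own choosing (with a < 0 so the integral converges).

```python
import math

a, c, f = -1.0, -0.5, 0.0   # arbitrary sample quartic with b = d = 0

# Series from the text specialised to b = d = 0: only n = p = 0 survives
# (and n + p = 0 is even), leaving a single sum over m.
series = 0.5 * math.exp(f) * sum(
    c ** m / math.factorial(m)
    * math.gamma((2 * m + 1) / 4) / (-a) ** ((2 * m + 1) / 4)
    for m in range(40))

# Direct midpoint quadrature of the same integral
L, n = 5.0, 200_000
h = 2 * L / n
numeric = h * sum(math.exp(a * x ** 4 + c * x * x + f)
                  for x in (-L + (i + 0.5) * h for i in range(n)))

print(series, numeric)
```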

References

  1. ^
  2. ^
  • Weisstein, Eric W. "Gaussian Integral." MathWorld.
  • Griffiths, David. Introduction to Quantum Mechanics. 2nd ed. (back cover).
  • Abramowitz, M., and Stegun, I. A. Handbook of Mathematical Functions. Dover Publications, New York.