Dirichlet process

In probability theory, a Dirichlet process is a random process, that is, a probability distribution whose domain is itself a set of probability distributions. Because of this hierarchical structure of distributions over distributions, we will below distinguish a "draw from" a distribution from a "draw of" a distribution. In the former terminology, the distribution mentioned governs the relative chances of different outcomes; in the latter, the distribution mentioned is itself the random outcome.

Given a Dirichlet process \mathrm{DP}\left(H, \alpha\right), where H (the base distribution or base measure) is an arbitrary distribution and \alpha (the concentration parameter) is a positive real number, a draw from \mathrm{DP} will return a random distribution (the output distribution) over some of the values that can be drawn from H. That is, the support of each draw of the output distribution is a subset of the support of the base distribution. The output distribution will be discrete, meaning that individual values drawn from the output distribution will sometimes repeat themselves even if the base distribution is continuous (i.e., even if two independent draws from the base distribution are distinct with probability one). The extent to which values repeat is determined by \alpha, with higher values causing less repetition. If the base distribution is continuous, so that separate draws from it always return distinct values, then the infinite set of probabilities corresponding to the frequencies of the values that the output distribution can return is distributed according to a stick-breaking process.

Note that the Dirichlet process is a stochastic process, meaning that technically speaking it is an infinite sequence of random variables, rather than a single random distribution. The relation between the two is as follows. Consider the Dirichlet process as defined above, as a distribution over random distributions, and call this process \operatorname{DP}_1(). We can call this the distribution-centered view of the Dirichlet process. First, draw a random output distribution from this process, and then consider an infinite sequence of random variables representing values drawn from this distribution. Note that, conditioned on the output distribution, the variables are independent identically distributed. Now, consider instead the distribution of the random variables that results from marginalizing out (integrating over) the random output distribution. (This makes all the variables dependent on each other. However, they are still exchangeable, meaning that the marginal distribution of one variable is the same as that of all other variables. That is, they are "identically distributed" but not "independent".) The resulting infinite sequence of random variables with the given marginal distributions is another view onto the Dirichlet process, denoted here \operatorname{DP}_2(). We can call this the process-centered view of the Dirichlet process. The conditional distribution of one variable given all the others, or given all previous variables, is defined by the Chinese restaurant process (see below).

Another way to think of a Dirichlet process is as an infinite-dimensional generalization of the Dirichlet distribution. The Dirichlet distribution returns a finite-dimensional set of probabilities (for some size K, specified by the parameters of the distribution), all of which sum to 1. This can be thought of as a finite-dimensional discrete distribution; i.e. a Dirichlet distribution can be thought of as a distribution over K-dimensional discrete distributions. Imagine generalizing a symmetric Dirichlet distribution, defined by a dimension K and concentration parameter \alpha/K, to an infinite set of probabilities; the resulting distribution over infinite-dimensional discrete distributions is called the stick-breaking process (see below). Imagine then using this set of probabilities to create an infinite-dimensional mixture model, with each separate probability from the set associated with a mixture component, and the value of each component drawn separately from a base distribution H; then draw an infinite number of samples from this mixture model. The infinite set of random variables corresponding to the marginal distribution of these samples is a Dirichlet process with parameters H and \alpha.

The Dirichlet process was formally introduced by Thomas Ferguson in 1973.[1]

Introduction

Consider a simple mixture model:

\begin{array}{lcl} \theta_{1,\dots,K} &\sim& H() \\ \boldsymbol\beta &\sim& \operatorname{Dirichlet}(\alpha/K,..., \alpha/K) \\ z_{1,\dots,N} &\sim& \operatorname{Categorical}(\boldsymbol\beta) \\ \phi_i &=& \theta_{z_i} \\ x_{i=1,\dots,N} &\sim& F(\phi_i) \end{array}

This is a basic generative model where the observations x_1,\dots,x_N are distributed according to a mixture of K components, where each component is distributed according to a single parametric family F(\theta) but where different components have different values of \theta, which is drawn in turn from a distribution H. Typically, H will be the conjugate prior distribution of F. In addition, the prior probability of each component is specified by \boldsymbol\beta, which is a size-K vector of probabilities, all of which add up to 1. The \phi_i are not separately drawn random variables; they simply record the value of \theta that is associated with observation i. As K\to\infty, the distribution of the vector \phi_{1,\dots,N} (where \theta,\boldsymbol\beta,z have been marginalized out) becomes the Dirichlet process.

For example, if the observations are apartment prices and the K components represent different neighborhoods, then F might be a Gaussian distribution with unknown mean and unknown variance, with the mean and variance specifying the distribution of prices in that neighborhood. Then the parameter \theta will be a vector of two values, a mean drawn from a Gaussian distribution and a variance drawn from an inverse gamma distribution, which are the conjugate priors of the mean and variance, respectively, of a Gaussian distribution.

Meanwhile, if the observations are words and the K components represent different topics, then F might be a categorical distribution over a vocabulary of size V, with unknown frequencies of each word in the vocabulary, specifying the distribution of words in each particular topic. Then the parameter \theta will be a vector of V values, each representing a probability and all summing to one, drawn from a Dirichlet distribution, which is the conjugate prior of the categorical distribution.

Now imagine we consider the limit as K\to\infty. Conceptually this means that we have no idea how many components are present. The result will be as follows:

\begin{array}{lcl} \theta_{1,\dots,\infty} &\sim& H() \\ \boldsymbol\beta &\sim& \operatorname{Stick}(1,\alpha) \\ z_{1,\dots,N} &\sim& \operatorname{Categorical}(\boldsymbol\beta) \\ x_{i=1,\dots,N} &\sim& F(\theta_{z_i}) \end{array}

In this model, conceptually speaking there are an infinite number of components, each with a separate parameter value, and a correspondingly infinite number of prior probabilities for each component, drawn from a stick-breaking process (see section below). Note that a practical application of such a model would not actually store an infinite number of components. Instead, it would generate the component prior probabilities one at a time from the stick-breaking process, which by construction tends to return the largest probability values first. As each component probability is drawn, a corresponding parameter value is also drawn. At any one time, some of the prior probability mass will be assigned to components and some unassigned. To generate a new observation, a random number between 0 and 1 is drawn uniformly, and if it lands in the unassigned mass, new components are drawn as necessary (each one reducing the amount of unassigned mass) until enough mass has been allocated to place this number in an existing component. Each time a new component probability is generated by the stick-breaking process, a corresponding parameter value is drawn from H.
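The lazy generation scheme just described can be sketched in code. The following is a minimal illustration (not from the source), assuming for concreteness that H is a standard normal over component means, F(\theta) is a unit-variance normal around that mean, and \alpha = 2; components are created only when the uniform draw lands in the still-unassigned stick mass:

```python
import random

random.seed(0)
alpha = 2.0
weights, params = [], []      # component probabilities and their theta values
remaining = 1.0               # stick mass not yet assigned to any component

def draw_observation():
    """Draw one x_i, creating new mixture components lazily as needed."""
    global remaining
    u = random.random()
    cum = 0.0
    for w, theta in zip(weights, params):
        cum += w
        if u < cum:
            return random.gauss(theta, 1.0)      # x_i ~ F(theta_{z_i})
    # u fell in the unassigned mass: break off new pieces of the stick
    while True:
        beta_prime = random.betavariate(1, alpha)
        w = beta_prime * remaining               # new component's probability
        remaining -= w
        weights.append(w)
        params.append(random.gauss(0.0, 1.0))    # theta_k ~ H (assumed normal)
        cum += w
        if u < cum:
            return random.gauss(params[-1], 1.0)

xs = [draw_observation() for _ in range(100)]
```

Note that the assigned weights plus the remaining stick mass always sum to one, which is what guarantees the inner loop terminates.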

Sometimes, the stick-breaking process is denoted as \operatorname{GEM}(), after the initials of its authors Griffiths, Engen and McCloskey, instead of \operatorname{Stick}().

Another view of this model comes from looking back at the finite-dimensional mixture model with mixing probabilities \boldsymbol\beta drawn from a Dirichlet distribution and considering the distribution of a particular component assignment z_i conditioned on all previous components, with the mixing probabilities integrated out. This distribution is a Dirichlet-multinomial distribution. Note that, conditioned on a particular value of \boldsymbol\beta, each z_i is independent of the others, but marginalizing over \boldsymbol\beta introduces dependencies among the component assignments. It can be shown (see the Dirichlet-multinomial distribution article) that

p(z_i = k|\mathbf{z}_{1,\dots,i-1},\alpha,K) = \frac{n_k + \alpha/K}{i - 1 + \alpha}

where k = 1,\dots,K is a particular value of z_i and n_k is the number of times an assignment in the set \{z_1,\dots,z_{i-1}\} has the value k. That is, the probability of assigning an observation to a particular component is roughly proportional to the number of previous observations already assigned to that component.
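As a sanity check on this formula, note that summing the numerator n_k + \alpha/K over k = 1,\dots,K gives (i-1) + \alpha, the denominator, so the conditional probabilities sum to one. A small sketch (the assignment sequence and parameter values here are hypothetical, chosen only for illustration):

```python
def conditional_prob(k, prev_assignments, alpha, K):
    """p(z_i = k | z_1..z_{i-1}) in the finite-K Dirichlet mixture."""
    i = len(prev_assignments) + 1
    n_k = prev_assignments.count(k)
    return (n_k + alpha / K) / (i - 1 + alpha)

prev = [1, 1, 2, 3, 1]     # hypothetical assignments z_1..z_5
alpha, K = 2.0, 4
probs = [conditional_prob(k, prev, alpha, K) for k in range(1, K + 1)]
total = sum(probs)         # equals 1, since the n_k sum to i - 1
```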

Now consider the limit as K\to\infty. For a particular previously observed component k,

p(z_i = k|\mathbf{z}_{1,\dots,i-1},\alpha) = \lim_{K\to\infty} \frac{n_k + \alpha/K}{i - 1 + \alpha} = \frac{n_k}{i - 1 + \alpha}

That is, the probability of seeing a previously observed component is directly proportional to the number of times the component has already been seen. This is often described as a "rich get richer" effect.

For an unseen component k, n_k = 0, and as K\to\infty the probability of seeing this component goes to 0. However, the number of unseen components approaches infinity. Consider instead the set of all unseen components \mathbf{Q}. Note that, if there are L components seen so far, the number of unseen components |\mathbf{Q}| = K - L. Then, consider the probability of seeing any of these components:

\begin{align} p(z_i \in \mathbf{Q}|\mathbf{z}_{1,\dots,i-1},\alpha) & = \lim_{K\to\infty} \sum_{u\in\mathbf{Q}} \frac{\alpha/K}{i - 1 + \alpha} \\ &= \frac{\alpha}{i-1+\alpha} \lim_{K\to\infty} \frac{K-L}{K} \\ &= \frac{\alpha}{i-1+\alpha} \end{align}

In other words:

  1. The probability of seeing an already-seen component is proportional to the number of times that component has been seen.
  2. The probability of seeing any unseen component is proportional to the concentration parameter \alpha.
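These two rules suffice to sample a sequence of component assignments directly. A minimal sketch (an illustration, not from the source; the parameter values are arbitrary):

```python
import random

def chinese_restaurant_process(n, alpha, rng):
    """Sample n component assignments using the two rules above."""
    assignments = []
    counts = []                      # counts[k] = n_k for component k
    for i in range(n):
        total = i + alpha            # i existing observations plus alpha
        r = rng.random() * total
        for k, n_k in enumerate(counts):
            r -= n_k
            if r < 0:                # join existing component k, prob. n_k/total
                counts[k] += 1
                assignments.append(k)
                break
        else:                        # r fell in the alpha mass: new component
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments

rng = random.Random(1)
z = chinese_restaurant_process(200, alpha=1.0, rng=rng)
```

Because new components are labelled in order of first appearance, the first observation always starts component 0, and the labels used are exactly 0, 1, ..., m-1 for some m.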

This process is termed a Chinese restaurant process (CRP). In terms of the CRP, the infinite-dimensional model can equivalently be written:

\begin{array}{lcl} \theta_{1,\dots,\infty} &\sim& H() \\ z_{1,\dots,N} &\sim& \operatorname{CRP}(\alpha) \\ x_{i=1,\dots,N} &\sim& F(\theta_{z_i}) \end{array}

Note that we have marginalized out the mixing probabilities \boldsymbol\beta, and thereby produced a more compact representation of the model.

Now imagine further that we also marginalize out the component assignments z_{1,\dots,N}, and instead we look directly at the distribution of \phi_i = \theta_{z_i}. Then, we can write the model directly in terms of the Dirichlet process:

\begin{array}{lcl} G &\sim& \operatorname{DP}_1(H,\alpha) \\ \phi_{1,\dots,N} &\sim& G \\ x_{i=1,\dots,N} &\sim& F(\phi_i) \end{array}

\operatorname{DP}_1 represents one view (the distribution-centered view) of the Dirichlet process as producing a random, infinite-dimensional discrete distribution G with values drawn from H.

An alternative view of the Dirichlet process (the process-centered view), adhering more closely to its definition as a stochastic process, sees it as directly producing an infinite stream of \boldsymbol\phi values. Notating this view as \operatorname{DP}_2, we can write the model as

\begin{array}{lcl} \phi_{1,\dots} &\sim& \operatorname{DP}_2(H,\alpha) \\ x_{i=1,\dots,N} &\sim& F(\phi_i) \end{array}

In this view, although the Dirichlet process generates an infinite stream of parameter values, we only care about the first N values. Note that some of these values will be the same as previously seen values, in a "rich get richer" scheme, as determined by the Chinese restaurant process.

Formal definition

A Dirichlet process over a set S is a stochastic process whose sample path (i.e., an infinite-dimensional set of random variates drawn from the process) is a probability distribution on S. Its finite-dimensional distributions are Dirichlet distributions: if H is a finite measure on S, \alpha is a positive real number and X is a sample path drawn from a Dirichlet process, written as

X \sim \mathrm{DP}\left(\alpha, H\right) 

then for any measurable partition \left\{B_i\right\}_{i=1}^{n} of S, we have that

\left(X\left(B_1\right),\dots,X\left(B_n\right)\right) \sim \mathrm{Dirichlet}\left(\alpha H\left(B_1\right),\dots, \alpha H\left(B_n\right)\right).
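One useful consequence: taking H to be normalized to a probability measure, the two-cell partition \{B, S \setminus B\} gives the marginal X(B) \sim \mathrm{Beta}(\alpha H(B), \alpha(1 - H(B))), so \mathbb{E}[X(B)] = H(B) and larger \alpha concentrates X(B) around H(B). A small sketch computing these implied moments from the standard Beta formulas (an author's illustration; the parameter values are arbitrary):

```python
def dp_cell_moments(alpha, h_b):
    """Mean and variance of X(B) when X(B) ~ Beta(alpha*H(B), alpha*(1-H(B)))."""
    a = alpha * h_b
    b = alpha * (1.0 - h_b)
    mean = a / (a + b)                            # = H(B)
    var = a * b / ((a + b) ** 2 * (a + b + 1))    # = H(B)(1-H(B))/(alpha+1)
    return mean, var

mean, var = dp_cell_moments(alpha=4.0, h_b=0.25)
```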

The Chinese restaurant process

As shown above, a simple distribution, the so-called Chinese restaurant process, results from considering the conditional distribution of one component assignment given all previous ones in a Dirichlet distribution mixture model with K components, and then taking the limit as K goes to infinity. It can be shown, using the above formal definition of the Dirichlet process and considering the process-centered view of the process, that the conditional distribution of the component assignment of one sample from the process given all previous samples follows a Chinese restaurant process.

Suppose that J samples, \left\{\theta_j\right\}_{j=1}^{J} have already been obtained. According to the Chinese Restaurant Process, the \left(J+1\right)^{\mathrm{th}} sample should be drawn from

\theta_{J+1} \sim \frac{1}{H\left(S\right)+J} \left( H + \sum_{j=1}^{J}\delta_{\theta_j}\right)

where \delta_{\theta} is an atomic distribution centered on \theta. Interpreting this, two properties are clear:

  1. Even if S is an uncountable set, there is a positive probability that two samples will have exactly the same value. Samples from a Dirichlet process are therefore discrete.
  2. The Dirichlet process exhibits a self-reinforcing property; the more often a given value has been sampled in the past, the more likely it is to be sampled again.

The name "Chinese restaurant process" is derived from the following analogy: imagine an infinitely large restaurant containing an infinite number of tables, and able to serve an infinite number of dishes. The restaurant in question operates a somewhat unusual seating policy whereby new diners are seated either at a currently occupied table with probability proportional to the number of guests already seated there, or at an empty table with probability proportional to a constant. Guests who sit at an occupied table must order the same dish as those currently seated, whereas guests allocated a new table are served a new dish at random. The distribution of dishes after J guests are served is a sample drawn as described above. The Chinese Restaurant Process is related to the Polya Urn sampling scheme for finite Dirichlet distributions.

The stick-breaking process

A third approach to the Dirichlet process is provided by the so-called stick-breaking process, which can be used to provide a constructive algorithm (the stick-breaking construction) for generating a Dirichlet process. Let \left\{\beta'_k\right\}_{k=1}^\infty be a set of random variables such that

\beta'_k \sim \mathrm{Beta}\left(1,\alpha\right)

where \alpha is the normalisation constant for the measure H, so that H = \alpha H_\text{norm}. Define \left\{\beta_k\right\}_{k=1}^\infty according to

\beta_k = \beta'_k\cdot\prod_{i=1}^{k-1}\left(1-\beta'_i\right)

and let \left\{\theta_k\right\}_{k=1}^{\infty} be a set of samples from H_\text{norm}. The distribution given by the density f(\theta) = \sum_{k=1}^{\infty}\beta_k\cdot\delta_{\theta_k}(\theta) (where \delta is the Dirac delta measure, here used as an indicator function which evaluates to 0 except for \delta_{\theta_k}(\theta_k)=1), is then a sample from the corresponding Dirichlet process. This method provides an explicit construction of the non-parametric sample, and makes clear the fact that the samples are discrete.

The name 'stick-breaking' comes from the interpretation of \beta_k as the length of the piece of a unit-length stick assigned to the kth value. After the first k − 1 values have their portions assigned, the length of the remainder of the stick, \prod_{i=1}^{k-1}\left(1-\beta'_i\right), is broken according to a sample \beta'_k from a beta distribution. In this analogy, \beta'_k indicates the portion of the remainder to be assigned to the k-th value. The smaller \alpha is, the less of the stick will be left for subsequent values (on average).
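The construction above translates directly into code. A minimal sketch (not from the source), truncating the infinite sequence at a fixed number of atoms and assuming for illustration that H_\text{norm} is a standard normal:

```python
import random

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking construction: weights beta_k and atoms theta_k."""
    weights, atoms = [], []
    remaining = 1.0                        # prod_{i<k} (1 - beta'_i)
    for _ in range(n_atoms):
        beta_prime = rng.betavariate(1, alpha)   # beta'_k ~ Beta(1, alpha)
        weights.append(beta_prime * remaining)   # beta_k = beta'_k * remainder
        remaining *= (1.0 - beta_prime)
        atoms.append(rng.gauss(0.0, 1.0))        # theta_k ~ H_norm (assumed normal)
    return weights, atoms

rng = random.Random(0)
weights, atoms = stick_breaking(alpha=2.0, n_atoms=1000, rng=rng)
```

The truncated weights sum to just under one: the leftover stick mass shrinks geometrically in expectation, which is why fixed truncation is a workable approximation in practice.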

The Polya urn scheme

Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified Polya urn scheme. Imagine that we start with an urn filled with \alpha black balls. Then we proceed as follows:

  1. Each time we need an observation, we draw a ball from the urn.
  2. If the ball is black, we generate a new (non-black) color uniformly, label a new ball this color, drop the new ball into the urn along with the ball we drew, and return the color we generated.
  3. Otherwise, label a new ball with the color of the ball we drew, drop the new ball into the urn along with the ball we drew, and return the color we observed.

The resulting distribution over colors is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new color, we instead pick a random value from a base distribution H and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over values in a Dirichlet process.
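The urn scheme can be sketched as follows (an illustration, not from the source). Integer labels stand in for the generated colors, and \alpha need not be an integer, since only the total black mass matters:

```python
import random

def polya_urn(n, alpha, rng):
    """Modified Polya urn: black mass alpha triggers a fresh label."""
    labels = []                 # one entry per non-black ball in the urn
    next_label = 0
    out = []
    for _ in range(n):
        # P(black) = alpha / (alpha + number of non-black balls)
        if rng.random() * (alpha + len(labels)) < alpha:
            label = next_label  # new color (or a fresh draw from H)
            next_label += 1
        else:
            label = rng.choice(labels)   # copy an existing ball's color
        labels.append(label)    # return the drawn ball plus a new copy
        out.append(label)
    return out

rng = random.Random(2)
colors = polya_urn(300, alpha=1.0, rng=rng)
```

Since each step adds exactly one ball of the chosen color, the count of each color grows in proportion to how often it has been drawn, reproducing the "rich get richer" behaviour of the Chinese restaurant process.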

Applications of the Dirichlet process

Dirichlet processes are frequently used in Bayesian nonparametric statistics. "Nonparametric" here does not mean a parameter-less model, but rather a model in which representations grow as more data are observed. Bayesian nonparametric models have gained considerable popularity in the field of machine learning because of this flexibility, especially in unsupervised learning. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes.[2] The fact that the Dirichlet distribution is a probability distribution on the simplex of vectors of non-negative numbers that sum to one makes it a good candidate to model distributions over distributions or distributions over functions. Additionally, the non-parametric nature of this model makes it an ideal candidate for clustering problems where the number of distinct clusters is unknown beforehand.

As draws from a Dirichlet process are discrete, an important use is as a prior probability in infinite mixture models. In this case, S is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point in turn a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance. One example is the infinite mixture of Gaussians model.[3]

The infinite nature of these models also lends them to natural language processing applications, where it is often desirable to treat the vocabulary as an infinite, discrete set.

Related distributions

  • The Pitman–Yor process is a generalization of the Dirichlet process to accommodate power-law tails.
  • The hierarchical Dirichlet process extends the ordinary Dirichlet process for modelling grouped data.

References

External links

  • Introduction to the Dirichlet Distribution and Related Processes by Frigyik, Kapila and Gupta
  • Yee Whye Teh's overview of Dirichlet processes
  • Webpage for the NIPS 2003 workshop on non-parametric Bayesian methods
  • Peter Green's summary of construction of Dirichlet Processes
  • Peter Green's paper on probabilistic models of Dirichlet Processes with implications for statistical modelling and analysis
  • Zoubin Ghahramani's UAI 2005 tutorial on Nonparametric Bayesian methods
  • GIMM software for performing cluster analysis using Infinite Mixture Models
  • A Toy Example of Clustering using Dirichlet Process. by Zhiyuan Weng
