
# Chinese restaurant process




In probability theory, the Chinese restaurant process is a discrete-time stochastic process, analogous to seating customers at tables in a Chinese restaurant. Imagine a Chinese restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 is seated at an unoccupied table with probability 1. At time n + 1, a new customer chooses uniformly at random to sit at one of the following n + 1 places: directly to the left of one of the n customers already sitting at an occupied table, or at a new, unoccupied table. David J. Aldous attributes the restaurant analogy to Jim Pitman and Lester Dubins in his 1983 book.[1]

At time n, the value of the process is a partition of the set of n customers, where the tables are the blocks of the partition. Mathematicians are interested in the probability distribution of this random partition.

## Contents

• 1 Formal definition
• 2 Generalization
  • 2.1 Derivation
  • 2.2 Expected number of tables
  • 2.3 The Indian buffet process
• 3 Applications
• 4 References

## Formal definition

At any positive-integer time n, the value of the process is a partition Bn of the set {1, 2, 3, ..., n}, whose probability distribution is determined as follows. At time n = 1, the trivial partition { {1} } is obtained with probability 1. At time n + 1 the element n + 1 is either:

1. added to one of the blocks of the partition Bn, with block b chosen with probability |b|/(n + 1), where |b| is the size of the block, or
2. added to the partition Bn as a new singleton block, with probability 1/(n + 1).
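The seating rule above is easy to simulate directly. The following sketch (function and variable names are illustrative, not from any source) draws one random partition by the process just defined:

```python
import random

def crp_partition(n, seed=None):
    """Draw one partition of {1, ..., n} from the Chinese restaurant process.

    Element i + 1 joins an existing block b with probability |b|/(i + 1)
    and opens a new block with probability 1/(i + 1).
    """
    rng = random.Random(seed)
    blocks = []                      # each block is a list of customer labels
    for i in range(n):               # i customers are already seated
        r = rng.random() * (i + 1)   # uniform on [0, i + 1)
        acc = 0
        for b in blocks:
            acc += len(b)            # block b occupies a length-|b| interval
            if r < acc:
                b.append(i + 1)
                break
        else:                        # r landed in the final unit interval
            blocks.append([i + 1])
    return blocks

print(crp_partition(10, seed=0))
```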

The random partition so generated has some special properties. It is exchangeable in the sense that relabeling {1, ..., n} does not change the distribution of the partition, and it is consistent in the sense that the law of the partition of n − 1 obtained by removing the element n from the random partition at time n is the same as the law of the random partition at time n − 1.

The probability assigned to any particular partition (ignoring the order in which customers sit around any particular table) is

\Pr(B_n = B) = \dfrac{\prod_{b\in B} (|b| -1)!}{n!}

where b is a block in the partition B and |b| is the size (i.e. number of elements) of b.
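The closed-form probability can be evaluated exactly with integer arithmetic; as a quick sanity check, the five partitions of {1, 2, 3} have probabilities summing to 1. A minimal sketch (names are illustrative):

```python
import math
from fractions import Fraction

def partition_probability(blocks, n):
    """Pr(B_n = B): product of (|b| - 1)! over the blocks, divided by n!."""
    num = 1
    for b in blocks:
        num *= math.factorial(len(b) - 1)
    return Fraction(num, math.factorial(n))

# For n = 3: the one-block partition has probability 2!/3! = 1/3,
# the all-singletons partition has probability 1/3! = 1/6.
print(partition_probability([[1, 2, 3]], 3))      # 1/3
print(partition_probability([[1], [2], [3]], 3))  # 1/6
```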

## Generalization

This construction can be generalized to a model with two parameters, α and θ,[2][3] commonly called the discount and strength (or concentration) parameters. At time n + 1, the next customer to arrive finds |B| occupied tables and decides to sit at an empty table with probability

\dfrac{\theta + |B| \alpha}{n + \theta},

or at an occupied table b of size |b| with probability

\dfrac{|b| - \alpha}{n + \theta}.

In order for the construction to define a valid probability measure it is necessary to suppose either that α < 0 and θ = −Lα for some L ∈ {1, 2, ...}, or that 0 ≤ α ≤ 1 and θ > −α.
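The two-parameter seating rule can be simulated directly, since the weights s − α over the occupied tables and θ + kα for a new table always sum to m + θ. A sketch (names are illustrative):

```python
import random

def crp2_partition(n, alpha, theta, seed=None):
    """Two-parameter (alpha, theta) Chinese restaurant process.

    With k tables occupied and m customers seated, the next customer opens
    a new table with probability (theta + k*alpha)/(m + theta) and joins a
    table of size s with probability (s - alpha)/(m + theta).
    """
    rng = random.Random(seed)
    sizes = []                                # sizes of the occupied tables
    for m in range(n):                        # m customers already seated
        r = rng.random() * (m + theta)
        acc = 0.0
        for t, s in enumerate(sizes):
            acc += s - alpha                  # weight of occupied table t
            if r < acc:
                sizes[t] += 1
                break
        else:                                 # remaining mass theta + k*alpha
            sizes.append(1)
    return sizes

print(crp2_partition(20, alpha=0.5, theta=1.0, seed=1))
```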

Under this model the probability assigned to any particular partition B of n, in terms of the Pochhammer k-symbol, is

\Pr(B_n = B) = \dfrac{(\theta + \alpha)_{|B|-1, \alpha}}{(\theta+1)_{n-1, 1}} \prod_{b\in B}(1-\alpha)_{|b|-1, 1}

where, by convention, (a)_{0,c} = 1, and for b > 0

(a)_{b,c} = \prod_{i=0}^{b-1}(a+ic) = \begin{cases} a^b & \text{if }c = 0, \\ \\ \dfrac{c^b\,\Gamma(a/c + b)}{\Gamma(a/c)} & \text{otherwise}. \end{cases}

Thus, for the case when \theta > 0 the partition probability can be expressed in terms of the Gamma function as

\Pr(B_n = B) =\dfrac{\Gamma(\theta)}{\Gamma(\theta+n)}\dfrac{\alpha^{|B|}\,\Gamma(\theta/\alpha + |B|) }{\Gamma(\theta/\alpha)}\prod_{b\in B}\dfrac{\Gamma(|b|-\alpha)}{\Gamma(1-\alpha)}.

In the one-parameter case, where \alpha is zero, this simplifies to

\Pr(B_n = B) = \dfrac{\Gamma(\theta)\,\theta^{|B|}}{\Gamma(\theta+n)}\prod_{b\in B} \Gamma(|b|).

Or, when \theta is zero,

\Pr(B_n = B) =\dfrac{\alpha^{|B|-1}\,\Gamma(|B|)}{\Gamma(n)}\prod_{b\in B}\dfrac{\Gamma(|b|-\alpha)}{\Gamma(1-\alpha)}.

As before, the probability assigned to any particular partition depends only on the block sizes, so as before the random partition is exchangeable in the sense described above. The consistency property still holds, as before, by construction.

If α = 0, the probability distribution of the random partition of the integer n thus generated is the Ewens distribution with parameter θ, used in population genetics and the unified neutral theory of biodiversity.
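For small n the one-parameter formula can be checked by brute force: summing it over all set partitions of {1, ..., n} should give 1. A sketch (the enumeration helper is illustrative):

```python
import math

def set_partitions(elements):
    """Yield every partition of the list `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):               # put `first` into block i
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                   # or into its own block

def ewens_probability(blocks, theta):
    """One-parameter (alpha = 0) partition probability."""
    n = sum(len(b) for b in blocks)
    p = math.gamma(theta) * theta ** len(blocks) / math.gamma(theta + n)
    for b in blocks:
        p *= math.gamma(len(b))               # Gamma(|b|) = (|b| - 1)!
    return p

theta = 2.5
total = sum(ewens_probability(p, theta) for p in set_partitions(list(range(4))))
print(total)  # should be very close to 1.0
```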

### Derivation

Here is one way to derive this partition probability. Let Ci be the random block into which the number i is added, for i = 1, 2, 3, ... . Then

\Pr(C_i = c \mid C_1,\ldots,C_{i-1}) = \begin{cases} \dfrac{\theta + |B|\alpha}{\theta + i - 1} & \text{if }c\text{ is a new block}, \\ \\ \dfrac{|b| - \alpha}{\theta + i - 1} & \text{if }c = b\text{ for an existing block }b, \end{cases}

where |B| denotes the number of blocks formed by C1, ..., Ci−1.

The probability that Bn is any particular partition of the set {1, ..., n} is the product of these conditional probabilities as i runs from 1 to n. Now consider the size of block b: it increases by one each time an element is added to it, so when the last element of block b is added, the block size is |b| − 1. For example, consider the sequence of choices (open a new block b)(join b)(join b)(join b): in the end block b has four elements, and in the one-parameter case (α = 0) the numerators these choices contribute are θ · 1 · 2 · 3; in the general case they are (θ + kα)(1 − α)(2 − α)(3 − α), where k is the number of blocks already present when b was opened. Collecting these factors over all blocks, we obtain Pr(Bn = B) as above.
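This derivation can be checked numerically by comparing the product of the conditional seating probabilities with the closed-form Pochhammer expression above (a sketch; function names are illustrative):

```python
def poch(a, b, c):
    """Pochhammer k-symbol (a)_{b,c} = a (a + c) ... (a + (b-1)c); empty product is 1."""
    p = 1.0
    for i in range(b):
        p *= a + i * c
    return p

def closed_form(blocks, alpha, theta):
    """Pr(B_n = B) from the Pochhammer-symbol formula."""
    n = sum(len(b) for b in blocks)
    p = poch(theta + alpha, len(blocks) - 1, alpha) / poch(theta + 1, n - 1, 1)
    for b in blocks:
        p *= poch(1 - alpha, len(b) - 1, 1)
    return p

def sequential(blocks, alpha, theta):
    """Multiply the conditional seating probabilities for elements 1..n
    arriving in label order (each label 1..n appears in exactly one block)."""
    n = sum(len(b) for b in blocks)
    which = {x: j for j, b in enumerate(blocks) for x in b}
    sizes = {}                       # block index -> current size
    p = 1.0
    for i in range(1, n + 1):
        j = which[i]
        if j not in sizes:           # element i opens block j
            p *= (theta + len(sizes) * alpha) / (theta + i - 1)
            sizes[j] = 1
        else:                        # element i joins block j
            p *= (sizes[j] - alpha) / (theta + i - 1)
            sizes[j] += 1
    return p

B = [[1, 3, 4], [2, 5]]
print(closed_form(B, 0.5, 1.0), sequential(B, 0.5, 1.0))  # the two agree
```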

### Expected number of tables

For the one parameter case, with α = 0 and 0 < θ < ∞, the expected number of tables, given that there are n seated customers, is[4]

\begin{align} \sum_{k=1}^n \frac{\theta}{\theta+k-1} = \theta \cdot (\Psi(\theta+n) - \Psi(\theta)) \end{align}

where \Psi(\theta) is the digamma function. In the general case (α > 0) the expected number of occupied tables is[3]

\begin{align} \frac{\Gamma(\theta+n+\alpha)\Gamma(\theta+1)}{\alpha \Gamma(\theta+n)\Gamma(\theta+\alpha)}-\frac{\theta}{\alpha}. \end{align}
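In the α = 0 case the expected number of tables is just the sum ∑ θ/(θ + k − 1), which can be compared against a Monte Carlo estimate (a sketch; names and parameter choices are illustrative):

```python
import random

def crp_table_count(n, theta, rng):
    """Number of occupied tables after n customers (alpha = 0 case).

    With m customers seated, a new table opens with probability
    theta/(m + theta); which occupied table is joined otherwise does not
    affect the count.
    """
    tables = 0
    for m in range(n):
        if rng.random() * (m + theta) < theta:
            tables += 1
    return tables

n, theta = 50, 2.0
exact = sum(theta / (theta + k - 1) for k in range(1, n + 1))
rng = random.Random(0)
reps = 20000
mc = sum(crp_table_count(n, theta, rng) for _ in range(reps)) / reps
print(exact, mc)  # the two should agree to within about 1%
```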

### The Indian buffet process

It is possible to adapt the model such that each data point is no longer uniquely associated with a class (i.e. we are no longer constructing a partition), but may be associated with any combination of the classes. This strains the restaurant-tables analogy and so is instead likened to a process in which a series of diners samples from some subset of an infinite selection of dishes on offer at a buffet. The probability that a particular diner samples a particular dish is proportional to the popularity of the dish among diners so far, and in addition the diner may sample from the untested dishes. This has been named the Indian buffet process and can be used to infer latent features in data.[5]
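A sketch of the buffet analogy (names are illustrative; the Poisson sampler is Knuth's method, adequate for small rates): diner i tries each previously sampled dish k with probability m_k/i, where m_k is the number of earlier diners who tried it, then takes a Poisson(α/i) number of new dishes.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ibp(num_customers, alpha, seed=None):
    """Indian buffet process: returns, per customer, the indices of the
    dishes that customer sampled."""
    rng = random.Random(seed)
    dish_counts = []          # dish_counts[k] = how many diners tried dish k
    rows = []
    for i in range(1, num_customers + 1):
        # try existing dish k with probability m_k / i
        row = [k for k, m in enumerate(dish_counts) if rng.random() < m / i]
        # then a Poisson(alpha / i) number of brand-new dishes
        first_new = len(dish_counts)
        row.extend(range(first_new, first_new + poisson(rng, alpha / i)))
        for k in row:
            if k < first_new:
                dish_counts[k] += 1
            else:
                dish_counts.append(1)
        rows.append(row)
    return rows

print(ibp(10, alpha=2.0, seed=3))
```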

## Applications

The Chinese restaurant process is closely connected to Dirichlet processes and Pólya's urn scheme, and is therefore useful in nonparametric Bayesian methods. The generalized Chinese restaurant process is closely related to the Pitman–Yor process. These processes have been used in many applications, including modeling text, clustering biological microarray data, biodiversity modelling, and detecting objects in images.

## References

1. ^ Aldous, D. J. (1985). "Exchangeability and related topics". École d'Été de Probabilités de Saint-Flour XIII — 1983. Lecture Notes in Mathematics 1117. Berlin: Springer. pp. 1–198.
2. ^ Pitman, Jim (1995). "Exchangeable and Partially Exchangeable Random Partitions". Probability Theory and Related Fields 102 (2): 145–158.
3. ^ a b Pitman, Jim (2006). Combinatorial Stochastic Processes. Berlin: Springer-Verlag.
4. ^ Xinhua Zhang, "A Very Gentle Note on the Construction of Dirichlet Process", September 2008, The Australian National University, Canberra. Online: http://users.cecs.anu.edu.au/~xzhang/pubDoc/notes/dirichlet_process.pdf
5. ^ Griffiths, T.L. and Ghahramani, Z. (2005) Infinite Latent Feature Models and the Indian Buffet Process. Gatsby Unit Technical Report GCNU-TR-2005-001.