Vector quantization

Vector quantization (VQ) is a classical quantization technique from signal processing which allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms.

The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, while rare data have high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.

Vector quantization is closely related to the self-organizing map model.


Contents

  • 1 Training
  • 2 Applications
    • 2.1 Use in data compression
    • 2.2 Video codecs based on vector quantization
    • 2.3 Audio codecs based on vector quantization
    • 2.4 Use in pattern recognition
    • 2.5 Use as clustering algorithm
    • 2.6 Use in data stream mining
  • 3 References
  • 4 External links


Training

A simple training algorithm for vector quantization (sketched in code below) is:

  1. Pick a sample point at random
  2. Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
  3. Repeat
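
As a minimal sketch of this loop (assuming Euclidean distance, a fixed learning rate, and NumPy; all names are illustrative):

```python
import numpy as np

def train_vq(data, num_centroids=16, eps=0.05, steps=10_000, seed=0):
    """Nudge the nearest centroid toward a randomly picked sample, repeatedly."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from distinct random samples of the data.
    centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
    for _ in range(steps):
        x = data[rng.integers(len(data))]                     # 1. random sample
        nearest = np.argmin(np.linalg.norm(centroids - x, axis=1))
        centroids[nearest] += eps * (x - centroids[nearest])  # 2. move it closer
    return centroids                                          # 3. repeat until done
```

Each returned centroid then stands for the group of points to which it is closest.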

A more sophisticated algorithm reduces the bias in the density matching estimation and ensures that all points are used, by including an extra sensitivity parameter (again sketched in code after the list):

  1. Increase each centroid's sensitivity by a small amount
  2. Pick a sample point at random
  3. Find the quantization vector centroid for which the distance to the sample point, minus that centroid's sensitivity, is smallest
  4. Move the chosen centroid toward the sample point by a small fraction of the distance
  5. Set the chosen centroid's sensitivity to zero
  6. Repeat

It is desirable to use a cooling schedule to produce convergence: see simulated annealing. Another (simpler) method is LBG, which is based on k-means.
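
The sensitivity variant, under the same assumptions (the `sensitivity` array implements the "distance minus sensitivity" selection rule; names are illustrative):

```python
import numpy as np

def train_vq_sensitive(data, num_centroids=16, eps=0.05, delta=0.01,
                       steps=10_000, seed=0):
    """VQ training with a sensitivity term, so rarely chosen centroids
    eventually win a sample and all centroids get used."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
    sensitivity = np.zeros(num_centroids)
    for _ in range(steps):
        sensitivity += delta                                # 1. raise all sensitivities
        x = data[rng.integers(len(data))]                   # 2. random sample
        dists = np.linalg.norm(centroids - x, axis=1)
        chosen = np.argmin(dists - sensitivity)             # 3. smallest distance - sensitivity
        centroids[chosen] += eps * (x - centroids[chosen])  # 4. move it closer
        sensitivity[chosen] = 0.0                           # 5. reset its sensitivity
    return centroids
```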

The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.


Applications

Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.

Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group on the dimensions that are available, and then predicting the missing dimensions on the assumption that they take the same values as the group's centroid.
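
A hedged sketch of this kind of imputation (assuming a trained `centroids` array as above and NaN markers for the missing dimensions; the helper name is hypothetical):

```python
import numpy as np

def vq_impute(x, centroids):
    """Fill the missing (NaN) dimensions of x from the nearest centroid,
    matching only on the dimensions that are present."""
    known = ~np.isnan(x)
    dists = np.linalg.norm(centroids[:, known] - x[known], axis=1)
    nearest = centroids[np.argmin(dists)]
    filled = x.copy()
    filled[~known] = nearest[~known]  # missing values copy the centroid's values
    return filled
```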

For density estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
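
In one dimension this is easy to make concrete: each Voronoi cell is the interval between the midpoints around a centroid, and the estimated density on a cell is proportional to the reciprocal of its width. A sketch under that assumption (interior cells only, since the edge cells are unbounded):

```python
import numpy as np

def density_from_centroids_1d(centroids):
    """Relative density estimate: 1 / (Voronoi cell width), normalized."""
    c = np.sort(centroids)
    edges = (c[:-1] + c[1:]) / 2         # cell boundaries at midpoints
    widths = np.diff(edges)              # widths of the interior cells
    density = 1.0 / widths               # denser data -> narrower cells
    density /= np.sum(density * widths)  # normalize to integrate to 1
    return c[1:-1], density              # interior centroids, their densities
```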

Use in data compression

Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in lossy data compression. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data has errors that are inversely proportional to density.

The transformation is usually done by projection or by using a codebook. In some cases, a codebook can also be used to entropy code the discrete value in the same step, by generating a prefix-coded variable-length encoded value as its output.

The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a k-dimensional vector [x_1, x_2, ..., x_k] of amplitude levels. It is compressed by choosing the nearest matching vector from a codebook of n codevectors [y_1, y_2, ..., y_n], each of the same dimension k.

The n codevectors of the codebook form the finite set to which every quantized vector belongs; because this set is far smaller than the set of all possible input vectors, the representation is compact.

Only the index of the codeword in the codebook is sent instead of the quantized values. This conserves space and achieves more compression.
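
A minimal encode/decode sketch along these lines (assuming a codebook trained as in the sketches above; in a real codec the index stream would additionally be entropy coded):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each input vector to the index of its nearest codeword."""
    # dists[i, j] = distance from block i to codeword j
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)        # only these indices are stored or sent

def vq_decode(indices, codebook):
    """Reconstruct (lossily) by looking each index up in the codebook."""
    return codebook[indices]
```

With a codebook of 256 codewords, for example, each k-dimensional block costs a single byte to transmit, regardless of k.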

TwinVQ, used in the VQF file format and in MPEG-4 Audio, is based on transform-domain weighted interleave vector quantization.

Video codecs based on vector quantization

The usage of video codecs based on vector quantization has declined significantly in favor of those based on motion-compensated prediction combined with transform coding, e.g. those defined in MPEG standards, as the low decoding complexity of vector quantization has become less relevant. Bink video, for example, has been based on vector quantization,[1] and pyramid vector quantization has since been proposed for video coding.[2]

Audio codecs based on vector quantization

The Vorbis audio codec, for example, uses vector quantization codebooks in its residue coding.[3]

Use in pattern recognition

VQ was also used in the 1980s for speech[4] and speaker recognition.[5] More recently it has also been used for efficient nearest neighbor search[6] and on-line signature recognition.[7] In pattern recognition applications, one codebook is constructed for each class (each class being a user in biometric applications) using acoustic vectors of that user. In the testing phase, the quantization distortion of a test signal is computed against the whole set of codebooks obtained in the training phase. The codebook that yields the smallest vector quantization distortion identifies the user.
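
A sketch of this matching phase (assuming one codebook per class, trained as above; `codebooks` maps a class label to its codebook, and the names are illustrative):

```python
import numpy as np

def vq_distortion(vectors, codebook):
    """Average quantization error of a test signal against one codebook."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def identify(vectors, codebooks):
    """Return the class label whose codebook distorts the signal least."""
    return min(codebooks, key=lambda label: vq_distortion(vectors, codebooks[label]))
```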

The main advantage of VQ in pattern recognition is its low computational burden when compared with other techniques such as dynamic time warping (DTW) and hidden Markov models (HMM). The main drawback when compared to DTW and HMM is that it does not take into account the temporal evolution of the signals (speech, signature, etc.), because all the vectors are mixed up. To overcome this problem, a multi-section codebook approach has been proposed.[8] The multi-section approach models the signal with several sections (for instance, one codebook for the initial part, another one for the center, and a last codebook for the ending part).

Use as clustering algorithm

Since VQ seeks centroids as density points of nearby samples, it can also be used directly as a prototype-based clustering method: each centroid is then associated with one prototype. By aiming to minimize the expected squared quantization error[9] and introducing a decreasing learning gain fulfilling the Robbins–Monro conditions, multiple iterations over the whole data set with a fixed number of prototypes converge to the solution of the k-means clustering algorithm in an incremental manner.
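
A sketch of this incremental scheme (the per-prototype counts give a learning gain of 1/n, which satisfies the Robbins–Monro conditions; names are illustrative):

```python
import numpy as np

def online_kmeans(data, num_prototypes=8, passes=10, seed=0):
    """Incremental VQ / k-means with a decaying per-prototype learning gain."""
    rng = np.random.default_rng(seed)
    prototypes = data[rng.choice(len(data), num_prototypes, replace=False)].astype(float)
    counts = np.zeros(num_prototypes)
    for _ in range(passes):                  # multiple passes over the data set
        for x in rng.permutation(data):
            i = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            counts[i] += 1
            prototypes[i] += (x - prototypes[i]) / counts[i]   # gain 1/n_i
    return prototypes
```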

Use in data stream mining

Extending VQ-based clustering to data stream mining yields a single-pass algorithm that can evolve new prototypes on demand and on the fly, based on concepts from adaptive resonance theory (ART); it is therefore termed eVQ.[10] This ensures a kind of regularization property, in the sense that cluster prototypes are restricted to move only within local regions. An extension of eVQ has been proposed which recursively calculates not only the prototypes of clusters but also their ranges of influence, in the form of arbitrarily rotated ellipsoids (using inverse covariance matrix update schemes).[11]
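
As a simplified single-pass sketch in the spirit of such algorithms (this is not Lughofer's actual eVQ; the vigilance radius `rho` and the update rule are illustrative assumptions):

```python
import numpy as np

def stream_vq(stream, rho=1.0, eps=0.1):
    """Single-pass VQ over a stream: update the nearest prototype when it is
    close enough, otherwise evolve a new prototype on the fly."""
    prototypes = []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if prototypes:
            dists = [np.linalg.norm(p - x) for p in prototypes]
            i = int(np.argmin(dists))
            if dists[i] <= rho:                        # ART-style vigilance test
                prototypes[i] += eps * (x - prototypes[i])
                continue
        prototypes.append(x.copy())                    # new prototype on demand
    return prototypes
```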

References

Part of this article was originally based on material from the Free On-line Dictionary of Computing and is used under the GFDL.


    1. ^ "Bink video". Book of Wisdom. 2009-12-27. Retrieved 2013-03-16. 
    2. ^ Valin, JM. (October 2012). Pyramid Vector Quantization for Video Coding. . Retrieved 2013-12-17.
    3. ^ "Vorbis I Specification". 2007-03-09. Retrieved 2007-03-09. 
    4. ^ Burton, D. K.; Shore, J. E.; Buck, J. T. (1983). "A generalization of isolated word recognition using vector quantization". IEEE International Conference on Acoustics Speech and Signal Processing ICASSP: 1021–1024.  
    5. ^ Soong, F.; A. Rosenberg; L. Rabiner; B. Juang (1985). "A vector Quantization approach to Speaker Recognition". IEEE Proceedings International Conference on Acoustics, Speech and Signal Processing ICASSP 1: 387–390.  
    6. ^ H. Jegou; M. Douze and C. Schmid (2011). "Product Quantization for Nearest Neighbor Search". Transactions on Pattern Analysis and Machine Intelligence 33 (1): 117–128.  
    7. ^ Faundez-Zanuy, Marcos (2007). "On-line signature recognition based on VQ-DTW". Pattern Recognition 40 (3): 981–992.  
    8. ^ Faundez-Zanuy, Marcos; Juan Manuel Pascual-Gaspar (2011). "Efficient On-line signature recognition based on Multi-section VQ". Pattern Analysis and Applications 14 (1): 37–45.  
    9. ^ Gray, R.M. (1984). "Vector Quantization". IEEE ASSP Magazine 1 (2): 4–29.  
    10. ^ Lughofer, Edwin (2008). "Extensions of Vector Quantization for Incremental Clustering". Pattern Recognition 41 (3): 995–1011.  
    11. ^ Lughofer, Edwin (2013). "eVQ-AM: An Extended Dynamic Version of Evolving Vector Quantization". Proceedings of the 2013 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS): 40–47. 

External links

  • QccPack — Quantization, Compression, and Coding Library (open source)
  • VQ Indexes Compression and Information Hiding Using Hybrid Lossless Index Coding, Wen-Jan Chen and Wen-Tsung Huang