

Early Minimoog by R.A. Moog Inc. (ca. 1970)

A sound synthesizer (often abbreviated as "synthesizer" or "synth", also spelled "synthesiser") is an electronic musical instrument that generates electric signals converted to sound through loudspeakers or headphones. Synthesizers may either imitate other instruments or generate new timbres. They are often played with a keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled via MIDI or CV/Gate.

Synthesizers use various methods to generate a signal. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis. Other sound synthesis methods, such as the subharmonic synthesis used on the mixture trautonium and the granular synthesis that produces soundscapes or "clouds" of sound, are rarely used. (See Types of synthesis below.)


  • 1 History
    • 1.1 Early electric instruments
    • 1.2 Early additive synthesizer – Tonewheel organs
    • 1.3 Emergence of electronics and early electronic instruments
    • 1.4 Graphical sound
    • 1.5 Subtractive synthesis & polyphonic synthesizer
    • 1.6 Monophonic electronic keyboards
    • 1.7 Other innovations
    • 1.8 Electronic music studios as "sound synthesizer"
      • 1.8.1 Origin of the term "sound synthesizer"
    • 1.9 From modular synthesizer to popular music
    • 1.10 Polyphonic keyboards and the digital revolution
  • 2 Impact on popular music
  • 3 Types of synthesis
    • 3.1 Imitative synthesis
  • 4 Components
    • 4.1 Filter
    • 4.2 ADSR envelope
    • 4.3 LFO
  • 5 Patch
  • 6 Control interfaces
    • 6.1 Fingerboard controller
    • 6.2 Wind controllers
    • 6.3 Others
    • 6.4 MIDI control
  • 7 Typical roles
    • 7.1 Synth lead
    • 7.2 Synth pad
    • 7.3 Synth bass
    • 7.4 Arpeggiator
  • 8 See also
  • 9 References
  • 10 Further reading
  • 11 External links


The beginnings of the synthesizer are difficult to trace, as sound synthesizers are often confused with electric or electronic musical instruments in general.[1][2]

Early electric instruments

One of the earliest electric musical instruments, the musical telegraph, was invented in 1876 by American electrical engineer Elisha Gray. He accidentally discovered the sound generation from a self-vibrating electromagnetic circuit, and invented a basic single-note oscillator. This musical telegraph used steel reeds with oscillations created by electromagnets transmitted over a telegraphy line. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.[3][4]

This instrument was a remote electromechanical musical instrument that used telegraphy and electric buzzers to generate fixed-timbre sound. Though it lacked an arbitrary sound-synthesis function, some have erroneously called it the first synthesizer.[1][2]

The Hammond organ (1934).

Early additive synthesizer – Tonewheel organs

In 1897, Thaddeus Cahill patented the Telharmonium,[5] a huge electromechanical instrument that generated tones with rotating tonewheels. Although its size and problems such as crosstalk on the telephone lines used for its distribution limited its success, similar but more compact instruments were developed one after another.

Emergence of electronics and early electronic instruments

In 1906, a huge revolution of electronics had begun. American engineer Lee De Forest invented the world's first amplifying vacuum tube, called the Audion tube. This led to new technologies, including radio and sound film for entertainment. These new technologies also influenced the music industry, and resulted in various early electronic musical instruments that used vacuum tubes, including:

Left: Theremin (RCA AR-1264; 1930). Middle: Ondes Martenot (7G model; 1978). Right: Trautonium (Telefunken Volkstrautonium Ela T42; 1933).

These instruments included the Theremin, Ondes Martenot, and Trautonium, among others. Most of these early instruments used heterodyne circuits to produce audio frequencies, and their sound-synthesis capabilities were initially limited; however, with development over the following decade, these instruments eventually gained expressive ability.

Graphical sound

Also in the 1920s, Arseny Avraamov developed various systems of graphic sonic art,[8] and similar graphical sound systems were developed around the world, one after another.[9] In 1938, the USSR engineer Yevgeny Murzin invented a design for a music-composition tool called the ANS, one of the earliest conceptions of a real-time additive synthesizer using optoelectronics. Although his idea of reconstructing a sound from its visible image was apparently simple, the instrument was not realized until 20 years later, in 1958, because his professional field was not related to music.[10]

Subtractive synthesis & polyphonic synthesizer

Hammond Novachord (1939) and Welte Lichtton orgel (1935)

In the 1930s and 1940s, the basic elements required for modern polyphonic synthesizers appeared in instruments such as the Welte Lichtton orgel (1935) and the Hammond Novachord (1939).

Ondioline (c.1941)

Monophonic electronic keyboards

Georges Jenny built his first ondioline in France in 1941.

Other innovations

Wurlitzer model 44, 1953)

In the late 1940s, Canadian inventor and composer Hugh Le Caine invented the Electronic Sackbut, which provided the earliest real-time control of three aspects of sound (volume, pitch and timbre), corresponding to today's touch-sensitive keyboards and pitch and modulation controllers. The controllers were initially implemented as a multidimensional pressure keyboard in 1945, then changed to a group of dedicated controllers operated by the left hand in 1948.[16]

Also in Japan, electronic musical instruments were developed as early as 1935.[19][20]

Audio console (left) and Synthesizer at the Studio di fonologia musicale di Radio Milano

Electronic music studios as "sound synthesizer"

After World War II, electronic music including electroacoustic music and musique concrète was created by contemporary composers, and numerous electronic music studios were established around the world, especially in Bonn, Cologne, Paris and Milan. These studios were typically filled with electronic equipment including oscillators, filters, tape recorders, audio consoles, etc., and the whole studio functioned as a "sound synthesizer".

RCA Mark II Sound Synthesizer (1957) and Siemens Studio for Electronic Music (1959)

Origin of the term "sound synthesizer"

In 1951–1952, RCA produced a machine called the Electronic Music Synthesizer; however, it was more accurately a composition machine, because it did not produce sounds in real time.[21] RCA then developed the first programmable sound synthesizer, the RCA Mark II Sound Synthesizer, and installed it at the Columbia-Princeton Electronic Music Center in 1957.[22] Prominent composers including Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Halim El-Dabh, Bülent Arel, Charles Wuorinen, and Mario Davidovsky used the RCA Synthesizer extensively in various compositions.[23]

From modular synthesizer to popular music

In 1959–1960, Harald Bode developed a modular synthesizer and sound processor,[24][25] and in 1961 he wrote a paper exploring the concept of a self-contained, portable modular synthesizer using the newly emerging transistor technology.[26] He also served as AES session chairman for music and electronics at the fall conventions in 1962 and 1964.[27] His ideas were subsequently adopted by Donald Buchla, Robert Moog, and others.

The Moog modular synthesizer of 1960s-1970s.

Robert Moog released the first commercially available modern synthesizer in 1964.[28] From the late 1960s to the 1970s, the development of miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, as proposed by Harald Bode in 1961. By the early 1980s companies were selling compact, modestly priced synthesizers to the public. This, along with the development of the Musical Instrument Digital Interface (MIDI), made it easier to integrate and synchronize synthesizers with other electronic instruments for use in musical composition. In the 1990s, synthesizer emulations began to appear in computer software, known as software synthesizers. Later, VST and other plugins were able to emulate classic hardware synthesizers to a moderate degree.

First Movement (Allegro) of Brandenburg Concerto Number 3 played on synthesizer.


The synthesizer had a considerable effect on 20th-century music.[29] Micky Dolenz of The Monkees bought one of the first Moog synthesizers, and the band was the first to release an album featuring a Moog, Pisces, Aquarius, Capricorn & Jones Ltd., in 1967.[30] It reached #1 in the charts. The Perrey and Kingsley album The In Sound From Way Out!, which used the Moog and tape loops, had been released in 1966. A few months later, both the Rolling Stones' "2000 Light Years from Home" and the title track of the Doors' 1967 album Strange Days also featured a Moog, played by Brian Jones and Paul Beaver respectively. In the same year Bruce Haack built a homemade synthesizer, which he demonstrated on Mister Rogers' Neighborhood. The synthesizer included a sampler which recorded, stored, played and looped sounds controlled by switches, light sensors and human skin contact. Wendy Carlos's Switched-On Bach (1968), recorded using Moog synthesizers, also influenced numerous musicians of that era and is one of the most popular recordings of classical music ever made,[31] alongside the records of Isao Tomita (particularly Snowflakes are Dancing in 1974), who in the early 1970s utilized synthesizers to create new artificial sounds (rather than simply mimicking real instruments)[32] and made significant advances in analog synthesizer programming.[33]

The sound of the Moog reached the mass market with Simon and Garfunkel's Bookends in 1968 and The Beatles' Abbey Road the following year; hundreds of other popular recordings subsequently used synthesizers, most famously the portable Minimoog. Electronic music albums by Beaver and Krause, Tonto's Expanding Head Band, The United States of America, and White Noise reached a sizable cult audience, and progressive rock musicians such as Richard Wright of Pink Floyd and Rick Wakeman of Yes were soon using the new portable synthesizers extensively. Stevie Wonder and Herbie Hancock also contributed strongly to the popularisation of synthesizers in Black American music.[34][35] Other early users included Emerson, Lake & Palmer's Keith Emerson, Todd Rundgren, Pete Townshend, and The Crazy World of Arthur Brown's Vincent Crane. In Europe, the first number-one single to feature a Moog prominently was Chicory Tip's 1972 hit "Son of My Father".[36]

Polyphonic keyboards and the digital revolution

By 1978, string synthesizers, which had evolved into multi-keyboard setups incorporating monosynths and more, gradually fell out of fashion in the wake of newer, fully polyphonic keyboard synthesizers constructed with a clearer break from older technology.[39] These newly configured polyphonic keyboard synthesizers, manufactured mainly in the United States and Japan from the mid-1970s to the early 1980s, included the Yamaha CS-80 (1976), the Oberheim Polyphonic and OB-X (1976 and 1979), the Sequential Circuits Prophet-5 (1977), the Roland Jupiter-8 (1981), and the PPG Wave of the same year.

In 1983, however, Yamaha's revolutionary DX7 digital synthesizer brought FM synthesis to the mass market, and the Korg M1 of 1988 confirmed the development of sample-based synthesis, while also heralding the era of the workstation synthesizer, now oriented around the recall of ROM sample sounds for the composition and sequencing of whole songs rather than traditional sound synthesis per se.[42] Over the 1990s, the popularity of electronic dance music employing analog sounds, the appearance of digital analog-modelling synthesizers to recreate these sounds, and the development of the Eurorack modular synthesiser system, initially introduced with the Doepfer A-100 and since adopted by other manufacturers, all contributed to a resurgence of interest in analog technology by the end of the century. At the same time, the beginning of the new century saw improvements in technology that led to the popularisation of digital software synthesizers.[43] In the 2010s, new analog synthesizers, in both keyboard and modular form, regularly co-exist with the many software synthesizers developed since the early 2000s and the variety of digital hardware instruments available.[44]

Impact on popular music

The Prophet-5 synthesizer of the late 1970s-early 1980s.

In the 1970s, Jean Michel Jarre, Larry Fast, and Vangelis released successful synthesizer-led instrumental albums. This helped influence over time the emergence of synthpop, a sub-genre of new wave, in the late 1970s. The work of German electronic bands such as Kraftwerk and Tangerine Dream, British acts Gary Numan and David Bowie and the Japanese Yellow Magic Orchestra were also influential in the development of the genre.[45] English musician Gary Numan's 1979 hits "Are 'Friends' Electric?" and "Cars" used synthesizers heavily.[46][47] OMD's "Enola Gay" (1980) used distinctive electronic percussion and a synthesized melody. Soft Cell used a synthesized melody on their 1981 hit "Tainted Love".[45] Nick Rhodes, keyboardist of Duran Duran, used various synthesizers including the Roland Jupiter-4 and Jupiter-8.[48]

Other artists who had chart hits featuring synthesizers include Howard Jones, Kitaro, Stevie Wonder, Peter Gabriel, Thomas Dolby, Kate Bush, Dónal Lunny and Frank Zappa.

The synthesizer became one of the most important instruments in the music industry.[45]

Types of synthesis

The Hammond organ, introduced in the 1930s.

Wavetable synthesis is useful for reducing required hardware/processing power,[50] and is commonly used in low-end MIDI instruments (such as educational keyboards) and low-end sound cards.

Subtractive synthesis is still utilized on various synths, including virtual analog synths.

Subtractive synthesis is based on filtering harmonically rich waveforms. Due to its simplicity, it is the basis of early synthesizers such as the Moog synthesizer. Subtractive synthesizers use a simple acoustic model that assumes an instrument can be approximated by a simple signal generator (producing sawtooth waves, square waves, etc.) followed by a filter. The combination of simple modulation routings (such as pulse width modulation and oscillator sync), along with the physically unrealistic lowpass filters, is responsible for the "classic synthesizer" sound commonly associated with "analog synthesis"—a term which is often mistakenly used when referring to software synthesizers using subtractive synthesis.
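The generator-plus-filter chain described above can be illustrated with a minimal Python sketch (an illustration of the principle, not any particular synthesizer's implementation; the cutoff and pitch values are arbitrary). A harmonically rich sawtooth is generated, then a one-pole low-pass filter "subtracts" the upper harmonics:

```python
import math

SAMPLE_RATE = 44100

def sawtooth(freq, n_samples, sample_rate=SAMPLE_RATE):
    """Generate a naive sawtooth wave, rich in harmonics."""
    period = sample_rate / freq
    return [2.0 * ((i / period) % 1.0) - 1.0 for i in range(n_samples)]

def one_pole_lowpass(samples, cutoff, sample_rate=SAMPLE_RATE):
    """'Subtract' high harmonics with a simple one-pole low-pass filter."""
    # Smoothing coefficient derived from the RC time constant.
    rc = 1.0 / (2.0 * math.pi * cutoff)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

raw = sawtooth(220.0, 1000)          # harmonically rich source
dark = one_pole_lowpass(raw, 800.0)  # filtered ("subtracted") result
```

Sweeping the cutoff over time, as a filter envelope does, is what produces the characteristic "classic synthesizer" timbre the paragraph describes.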

FM synthesis was hugely successful in the earliest digital synthesizers.

FM synthesis (frequency modulation synthesis) is a process that usually involves the use of at least two signal generators (sine-wave oscillators, commonly referred to as "operators" in FM-only synthesizers) to create and modify a voice. Often, this is done through the analog or digital generation of a signal that modulates the tonal and amplitude characteristics of a base carrier signal. FM synthesis was pioneered by John Chowning, who patented the idea and sold it to Yamaha. Unlike the exponential relationship between voltage-in-to-frequency-out and multiple waveforms in classical 1-volt-per-octave synthesizer oscillators, Chowning-style FM synthesis uses a linear voltage-in-to-frequency-out relationship and sine-wave oscillators. The resulting complex waveform may have many component frequencies, and there is no requirement that they all bear a harmonic relationship. Sophisticated FM synths such as the Yamaha DX-7 series can have 6 operators per voice; some synths with FM can also often use filters and variable amplifier types to alter the signal's characteristics into a sonic voice that either roughly imitates acoustic instruments or creates sounds that are unique. FM synthesis is especially valuable for metallic or clangorous noises such as bells, cymbals, or other percussion.
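A two-operator version of this idea can be sketched in a few lines of Python (a simplified illustration; real FM synthesizers add envelopes per operator, and the ratio and index values here are arbitrary). One sine "operator" modulates the phase of a sine carrier:

```python
import math

def fm_tone(carrier_hz, ratio, mod_index, n_samples, sample_rate=44100):
    """Two-operator FM: a sine modulator varies the phase of a sine carrier.

    mod_index controls sideband richness; ratio sets the modulator frequency
    relative to the carrier (non-integer ratios give inharmonic, bell-like
    spectra).
    """
    mod_hz = carrier_hz * ratio
    out = []
    for i in range(n_samples):
        t = i / sample_rate
        modulator = math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return out

pure = fm_tone(440.0, 2.0, 0.0, 4410)    # index 0 -> plain sine carrier
bright = fm_tone(440.0, 2.0, 5.0, 4410)  # index 5 -> rich sidebands
```

Raising `mod_index` during a note (e.g. from an envelope) brightens the spectrum, which is how FM patches imitate the attack transients of bells and percussion.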

Phase distortion synthesis is a method implemented on Casio CZ synthesizers. It is quite similar to FM synthesis but avoids infringing on the Chowning FM patent. Like FM synthesis, it can be categorized as modulation synthesis; it can also be categorized as distortion synthesis, along with waveshaping synthesis and discrete summation formulas.

Granular synthesis is a type of synthesis based on manipulating very small sample slices.

Physical modelling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate a real instrument, or some other physical source of sound. It involves building models of components of musical objects and creating systems that define action, filters, envelopes and other parameters over time. The range of such instruments is virtually limitless, as one can combine any available models with any number of modulation sources for pitch, frequency and contour: for example, the model of a violin with characteristics of a pedal steel guitar and perhaps the action of a piano hammer. When an initial set of parameters is run through the physical simulation, the simulated sound is generated. Although physical modeling was not a new concept in acoustics and synthesis, it was not until the development of the Karplus-Strong algorithm and the increase in DSP power in the late 1980s that commercial implementations became feasible. Physical modeling on computers becomes better and faster with increased processing power.
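The Karplus-Strong algorithm mentioned above is simple enough to sketch in Python (a minimal textbook form; the pitch and duration values are arbitrary). A burst of noise circulates through a delay line whose length sets the pitch, and averaging adjacent samples acts as the low-pass loss of a real string:

```python
import random
from collections import deque

def karplus_strong(freq, n_samples, sample_rate=44100):
    """Plucked-string physical model (basic Karplus-Strong)."""
    delay = int(sample_rate / freq)          # delay length sets the pitch
    rng = random.Random(0)                   # deterministic "pluck" noise burst
    buf = deque(rng.uniform(-1.0, 1.0) for _ in range(delay))
    out = []
    for _ in range(n_samples):
        x = buf.popleft()
        # Average with the next sample: a low-pass filter that damps high
        # harmonics first, mimicking energy loss in a vibrating string.
        y = 0.5 * (x + buf[0])
        buf.append(y)
        out.append(y)
    return out

note = karplus_strong(220.0, 22050)  # half a second of decaying "pluck"
```

Each pass through the loop damps the high partials a little more, so the tone starts bright and noisy and decays to a mellow sustained pitch, much like a plucked string.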

Analysis/resynthesis is typically known as vocoder.
Sample-based synthesis is currently one of the most popular methods of synthesis.

Sample-based synthesis – one of the simplest synthesis approaches is to record a real instrument as a digitized waveform and then play that recording back at different speeds to produce different tones. This is the technique used in sampling. Most samplers designate a part of the sample for each component of the ADSR envelope, and then repeat that section while changing the volume for that segment of the envelope. This lets the sampler produce a convincingly different envelope using the same note. See also wavetable synthesis and vector synthesis.
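The play-back-at-different-speeds idea can be sketched in Python (a simplified illustration: real samplers interpolate more carefully and loop the sustain region; the function name and the one-cycle sine "recording" are invented for the example):

```python
import math

def resample(samples, speed):
    """Play a recorded waveform back at a different speed.

    speed 2.0 raises pitch an octave (and halves duration); 0.5 lowers it
    an octave. Linear interpolation fills in between stored sample points.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += speed
    return out

# A one-cycle 100-sample sine "recording", played back an octave up:
cycle = [math.sin(2 * math.pi * i / 100) for i in range(100)]
octave_up = resample(cycle, 2.0)  # ~50 samples: twice the frequency
```

The coupling of pitch and duration shown here is exactly why samplers loop a sustain segment under the ADSR envelope, as described above, instead of just speeding the whole recording up.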

Analysis/resynthesis is a form of synthesis that uses a series of bandpass filters or Fourier transforms to analyze the harmonic content of a sound. The resulting analysis data is then used in a second stage to resynthesize the sound using a band of oscillators. The vocoder, linear predictive coding, and some forms of speech synthesis are based on analysis/resynthesis.

Imitative synthesis

Sound synthesis can be used to mimic acoustic sound sources. Generally, a sound that does not change over time includes a fundamental partial or harmonic, and any number of partials. Synthesis may attempt to mimic the amplitude and pitch of the partials in an acoustic sound source.

When natural sounds are analyzed in the frequency domain (as on a spectrum analyzer), their spectra exhibit amplitude spikes at each of the fundamental tone's harmonics, corresponding to resonant properties of the instrument (spectral peaks that are also referred to as formants). Some harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as a sound's harmonic content. A synthesized sound requires accurate reproduction of the original sound in both the frequency domain and the time domain. A sound does not necessarily have the same harmonic content throughout its duration; typically, high-frequency harmonics die out more quickly than the lower harmonics.
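As a rough Python sketch of this idea (the harmonic amplitudes and the per-harmonic decay rate are invented for illustration, not measured from any instrument), a tone can be rebuilt additively from harmonic-vs-amplitude pairs, with higher harmonics dying out faster:

```python
import math

def additive_tone(f0, harmonics, n_samples, sample_rate=44100):
    """Rebuild a tone from its harmonic content.

    harmonics is a list of (harmonic number, amplitude) pairs. Each partial
    decays exponentially, and the decay rate scales with the harmonic
    number, so high partials die out sooner, as in many acoustic sounds.
    """
    out = []
    for i in range(n_samples):
        t = i / sample_rate
        s = 0.0
        for n, amp in harmonics:
            decay = math.exp(-3.0 * n * t)  # higher partials fade faster
            s += amp * decay * math.sin(2 * math.pi * n * f0 * t)
        out.append(s)
    return out

# Hypothetical harmonic content, as if read off a spectrum analyzer:
tone = additive_tone(220.0, [(1, 1.0), (2, 0.5), (3, 0.25)], 4410)
```

Matching both the amplitudes (frequency domain) and their decay over time (time domain) is what the paragraph above calls accurate reproduction in both domains.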

In most conventional synthesizers, for purposes of re-synthesis, recordings of real instruments are composed of several components representing the acoustic responses of different parts of the instrument, the sounds produced by the instrument during different parts of a performance, or the behavior of the instrument under different playing conditions (pitch, intensity of playing, fingering, etc.)


Basic components of an analogue subtractive synthesizer

Synthesizers generate sound through various analogue and digital techniques. Early synthesizers were analog hardware-based, but many modern synthesizers use a combination of DSP software and hardware, or are purely software-based (see softsynth). Digital synthesizers often emulate classic analog designs. Sound is controllable by the operator by means of circuits or virtual stages that may include:

  • Electronic oscillators – generate the raw waveforms that serve as the sound source. Additive synthesis mixes several waveforms, much like a tonewheel organ, while frequency modulation and phase distortion synthesis use one oscillator to modulate another. Subtractive synthesis depends upon filtering a harmonically rich oscillator waveform. Sample-based and granular synthesis use one or more digitally recorded sounds in place of an oscillator.
  • Voltage-controlled filter (VCF) – "shapes" the sound generated by the oscillators in the frequency domain, often under the control of an envelope or LFO. Filters are essential to subtractive synthesis.
  • Voltage-controlled amplifier (VCA) – After the signal generated by one (or a mix of more) VCOs has been modified by filters and LFOs, and its waveform has been shaped (contoured) by an ADSR Envelope Generator, it then passes on to one or more voltage-controlled amplifiers (VCAs). A VCA is a preamp that boosts (amplifies) the electronic signal before passing it on to an external or built-in power amplifier, as well as a means to control its amplitude (volume) using an attenuator. The gain of the VCA is affected by a control voltage (CV), coming from an envelope generator, an LFO, the keyboard or some other source.[51]
  • ADSR envelopes – provide envelope modulation to "shape" the volume or harmonic content of the produced note in the time domain with the principle parameters being attack, decay, sustain and release. These are used in most forms of synthesis. ADSR control is provided by Envelope Generators.
  • Low frequency oscillator (LFO) – an oscillator of adjustable frequency that can be used to modulate the sound rhythmically, for example to create tremolo or vibrato or to control a filter's operating frequency. LFOs are used in most forms of synthesis.
  • Other sound processing effects such as ring modulators may be encountered.


Electronic filters are particularly important in subtractive synthesis, being designed to pass some frequency regions through unattenuated while significantly attenuating ("subtracting") others. The low-pass filter is most frequently used, but band-pass filters, band-reject filters and high-pass filters are also sometimes available.

The filter may be controlled with a second ADSR envelope. An "envelope modulation" ("env mod") parameter on many synthesizers with filter envelopes determines how much the envelope affects the filter. If turned all the way down, the filter produces a flat sound with no envelope. When turned up, the envelope becomes more noticeable, expanding the minimum and maximum range of the filter.

ADSR envelope

Schematic of an ADSR envelope (attack, decay, sustain, release, triggered by key on/off) and an inverted ADSR envelope

When an acoustic musical instrument produces sound, the loudness and spectral content of the sound change over time in ways that vary from instrument to instrument. The "attack" and "decay" of a sound have a great effect on the instrument's sonic character.[52] Sound synthesis techniques often employ an envelope generator that controls a sound's parameters at any point in its duration. Most often this is an "ADSR" (Attack Decay Sustain Release) envelope, which may be applied to overall amplitude control, filter frequency, etc. The envelope may be a discrete circuit or module, or implemented in software. The contour of an ADSR envelope is specified using four parameters:

  • Attack time is the time taken for initial run-up of level from nil to peak, beginning when the key is first pressed.
  • Decay time is the time taken for the subsequent run down from the attack level to the designated sustain level.
  • Sustain level is the level during the main sequence of the sound's duration, until the key is released.
  • Release time is the time taken for the level to decay from the sustain level to zero after the key is released.
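The four parameters above can be turned into a piecewise-linear envelope generator in a few lines of Python (a minimal sketch: hardware envelope generators typically use exponential rather than linear segments, and the parameter values below are arbitrary):

```python
def adsr_envelope(attack, decay, sustain_level, release, gate_time,
                  sample_rate=44100):
    """Piecewise-linear ADSR envelope as a list of 0..1 gain values.

    Times are in seconds (assumed nonzero here), sustain_level is 0..1,
    and gate_time is how long the key is held; release begins at key-off.
    """
    env = []
    # Attack: ramp from 0 to peak.
    n = int(attack * sample_rate)
    env += [i / n for i in range(n)]
    # Decay: fall from peak to the sustain level.
    n = int(decay * sample_rate)
    env += [1.0 - (1.0 - sustain_level) * i / n for i in range(n)]
    # Sustain: hold until the key is released.
    n = int((gate_time - attack - decay) * sample_rate)
    env += [sustain_level] * max(n, 0)
    # Release: fall from the sustain level to zero.
    n = int(release * sample_rate)
    env += [sustain_level * (1.0 - i / n) for i in range(n)]
    return env

env = adsr_envelope(0.01, 0.05, 0.7, 0.2, gate_time=0.5)
```

Multiplying an oscillator's output by this gain curve, sample by sample, shapes the note's loudness; routing the same curve to a filter cutoff shapes its brightness, as described in the Filter section.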

An early implementation of ADSR can be found on the Hammond Novachord in 1938 (which predates the first Moog synthesizer by over 25 years). A seven-position rotary knob set preset ADS parameters for all 72 notes; a pedal controlled release time.[14] The notion of ADSR was specified by Vladimir Ussachevsky (then head of the Columbia-Princeton Electronic Music Center) in 1965 while suggesting improvements for Bob Moog's pioneering work on synthesizers. The parameters were originally notated (T1, T2, Esus, T3), and were later simplified to the current form (attack time, decay time, sustain level, release time) by ARP.[53]

Some electronic musical instruments allow the ADSR envelope to be inverted, which results in opposite behavior compared to the normal ADSR envelope. During the attack phase, the modulated sound parameter fades from the maximum amplitude to zero then, during the decay phase, rises to the value specified by the sustain parameter. After the key has been released the sound parameter rises from sustain amplitude back to maximum amplitude.

8-step envelope on Casio CZ series

A common variation of the ADSR on some synthesizers, such as the General Instrument AY-3-8912 sound chip, included a hold-time parameter only; the sustain level was not programmable. Another common variation in the same vein is the AHDSR (attack, hold, decay, sustain, release) envelope, in which the "hold" parameter controls how long the envelope stays at full volume before entering the decay phase. Multiple attack, decay and release settings may be found on more sophisticated models.

Certain synthesizers also allow for a delay parameter before the attack. Modern synthesizers like the Dave Smith Instruments Prophet '08 have DADSR (delay, attack, decay, sustain, release) envelopes. The delay setting determines the length of silence between hitting a note and the attack. Some software synthesizers, such as Image-Line's 3xOSC (included with their DAW FL Studio) have DAHDSR (delay, attack, hold, decay, sustain, release) envelopes.

LFO section of Access Virus C


A low-frequency oscillator (LFO) generates an electronic signal, usually below 20 Hz. LFO signals create a periodic control signal or sweep, often used in vibrato, tremolo and other effects. In certain genres of electronic music, the LFO signal can control the cutoff frequency of a VCF to make a rhythmic wah-wah sound, or the signature dubstep wobble bass.
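Vibrato of the kind described here can be sketched in Python (a minimal illustration; the 5 Hz rate and ±8 Hz depth are arbitrary example values). A low-frequency sine modulates the instantaneous frequency of an audio-rate oscillator, with the phase accumulated so the frequency changes stay click-free:

```python
import math

def lfo_vibrato(freq, lfo_rate, lfo_depth, n_samples, sample_rate=44100):
    """Audio-rate sine whose pitch wavers under a low-frequency sine LFO.

    lfo_rate is the LFO frequency in Hz (typically below 20 Hz);
    lfo_depth is the peak pitch deviation in Hz.
    """
    out, phase = [], 0.0
    for i in range(n_samples):
        t = i / sample_rate
        inst_freq = freq + lfo_depth * math.sin(2 * math.pi * lfo_rate * t)
        phase += 2 * math.pi * inst_freq / sample_rate  # integrate frequency
        out.append(math.sin(phase))
    return out

wobble = lfo_vibrato(440.0, 5.0, 8.0, 44100)  # 440 Hz tone wavering +/-8 Hz
```

Routing the same LFO signal to an amplifier's gain instead of the oscillator's pitch gives tremolo, and routing it to a filter's cutoff gives the wah-wah and wobble-bass effects mentioned above.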


One of the earliest patch memories (bottom left), on the Oberheim Four-voice (1975/1976)

A synthesizer patch (some manufacturers chose the term program) is a sound setting. Modular synthesizers used cables ("patch cords") to connect the different sound modules together. Since these machines had no memory to save settings, musicians wrote down the locations of the patch cables and knob positions on a "patch sheet" (which usually showed a diagram of the synthesizer). Ever since, an overall sound setting for any type of synthesizer has been known as a patch. In the mid-to-late 1970s, patch memory (allowing storage and loading of 'patches' or 'programs') began to appear in synths like the Oberheim Four-voice (1975/1976)[54] and Sequential Circuits Prophet-5 (1977/1978). After MIDI was introduced in 1983, more and more synthesizers could import or export patches via MIDI SYSEX commands. When a synthesizer patch is uploaded to a personal computer that has patch editing software installed, the user can alter the parameters of the patch and download it back to the synthesizer. Because there is no standard patch language, it is rare that a patch generated on one synthesizer can be used on a different model. However, manufacturers sometimes design a family of synthesizers to be compatible.

Control interfaces

Tangible interface (Reactable)
Guitar-style interface (SynthAxe)

Modern synthesizers often look like small pianos, though with many additional knob and button controls. These are integrated controllers, where the sound synthesis electronics are integrated into the same package as the controller. However, many early synthesizers were modular and keyboardless, while most modern synthesizers may be controlled via MIDI, allowing other means of playing such as:

Fingerboard controller

Left: Ondes Martenot (model 6G, 1960). Right: Mixture Trautonium (replica of a 1952 model).
Ribbon controllers on the Korg Monotron and the Moog 3P (1972)

A ribbon controller or other violin-like user interface may be used to control synthesizer parameters. The concept dates back to Léon Theremin's first conception in 1922[55] and his 1932 Fingerboard Theremin and Keyboard Theremin,[56][57] Maurice Martenot's 1928 Ondes Martenot (sliding a metal ring),[58] and Friedrich Trautwein's 1929 Trautonium (finger pressure); ribbon controllers were also later utilized by Robert Moog.[59][60][61] The ribbon controller has no moving parts. Instead, a finger pressed down and moved along it creates an electrical contact at some point along a pair of thin, flexible longitudinal strips whose electric potential varies from one end to the other. Older fingerboards used a long wire pressed to a resistive plate. A ribbon controller is similar to a touchpad, but a ribbon controller only registers linear motion. Although it may be used to operate any parameter that is affected by control voltages, a ribbon controller is most commonly associated with pitch bending.

Fingerboard-controlled instruments include the Kurzweil synthesizers, Moog synthesizers, and others.

Rock musician Keith Emerson used it with the Moog modular synthesizer from 1970 onward. In the late 1980s, keyboards in the synth lab at Berklee College of Music were equipped with membrane thin ribbon style controllers that output MIDI. They functioned as MIDI managers, with their programming language printed on their surface, and as expression/performance tools. Designed by Jeff Tripp of Perfect Fretworks Co., they were known as Tripp Strips. Such ribbon controllers can serve as a main MIDI controller instead of a keyboard, as with the Continuum instrument.

Further reading

  • Gorges, Peter (2005). Programming Synthesizers. Bremen, Germany: Wizoobooks.
  • Schmitz, Reinhard (2005). Analog Synthesis. Bremen, Germany: Wizoobooks.
  • Shapiro, Peter (2000). Modulations: A History of Electronic Music: Throbbing Words on Sound.

External links

  • Sound Synthesis Theory wikibook
  • Principles of Sound Synthesis at Salford University

References

  1. ^ a b "The Palatin Project-The life and work of Elisha Gray". Palatin Project. 
  2. ^ a b Brown, Jeremy K. (2010). Stevie Wonder: Musician. Infobase Publishing. p. 50.  
  3. ^ "Elisha Gray and "The Musical Telegraph"(1876)", 120 Years of Electronic Music, 2005, archived from the original on 2009-02-22, retrieved 2011-08-01 
  4. ^  
  5. ^ US patent 580,035, Thaddeus Cahill, "Art of and apparatus for generating and distributing music electrically", issued 1897-04-06 
  6. ^ "The Audion Piano (1915)". 120 Years of Electronic Music. 
  7. ^ Glinsky, Albert (2000), Theremin: Ether Music and Espionage, Urbana, Illinois: University of Illinois Press, p. 26,  
  8. ^ Edmunds, Neil (2004), Soviet Music and Society Under Lenin and Stalin, London: Routledge Curzon 
  9. ^ Holzer, Derek (February 2010), Tonewheels – a brief history of optical synthesis, 
  10. ^ Kreichi, Stanislav (10 November 1997), The ANS Synthesizer: Composing on a Photoelectronic Instrument, Theremin Center 
  11. ^ Rhea, Thomas L., "Harald Bode’s Four-Voice Assignment Keyboard (1937)", eContact! (reprint ed.) (Canadian Electroacoustic Community) 13 (4)  (July 2011), originally published as Rhea, Tom (December 1979), "Electronic Perspectives", Contemporary Keyboard 5 (12): 89 
  12. ^ Warbo Formant Organ (photograph), 1937 
  13. ^ "The 'Warbo Formant Orgel' (1937), The 'Melodium' (1938), The 'Melochord' (1947-9), and 'Bode Sound Co' (1963-)", 120 years of Electronic Music 
  14. ^ a b Cirocco, Phil (2006). "The Novachord Restoration Project". Cirocco Modular Synthesizers. 
  15. ^ Steve Howell; Dan Wilson. "Novachord". Hollow Sun.  (see also 'History' page)
  16. ^ Gayle Young (1999). "Electronic Sackbut (1945–1973)". 
  17. ^ 一時代を画する新楽器完成 浜松の青年技師山下氏 [An epoch new musical instrument was developed by a young engineer Mr. Yamashita in Hamamatsu].  
  18. ^ 新電氣樂器 マグナオルガンの御紹介 [New Electric Musical Instrument — Introduction of Magna Organ] (in Japanese). Hamamatsu: 日本樂器製造株式會社. 
  19. ^ Fujii, Koichi (2004). "Chronology of early electroacoustic music in Japan: What types of source materials are available?".  
  20. ^ Holmes, Thom (2008), "Early Electronic Music in Japan", Electronic and experimental music: technology, music, and culture (3rd ed.),  
  21. ^ Davies, Hugh (2001). "Synthesizer [Synthesiser]". In ed. Stanley Sadie and John Tyrrell. The New Grove Dictionary of Music and Musicians (second ed.). London: Macmillan Publishers.  
  22. ^ Holmes, Thom (2008). "Early Synthesizers and Experimenters". Electronic and experimental music: technology, music, and culture (3rd ed.).  
  23. ^ "The RCA Synthesizer & Its Synthesists". Contemporary Keyboard (GPI Publications) 6 (10): 64. October 1980. Retrieved 2011-06-05. 
  24. ^ Harald Bode (The Wurlitzer Company). "Sound Synthesizer Creates New Musical Effects".  
  25. ^ Harald Bode (Bode Sound Co.) (September 1984). "History of Electronic Sound Modification".   (Note: Draft typescript is available at the tail of PDF version, along with HTML version at the Wayback Machine (archived June 9, 2011) without draft.)
  26. ^ Bode, Harald (1961), "European Electronic Music Instrument Design", Journal of the Audio Engineering Society (JAES) ix (1961): 267 
  27. ^ "In Memoriam".  
  28. ^ Catchlove, Lucina (August 2002), "Robert Moog", Remix (Oklahoma City) 
  29. ^ Eisengrein, Doug (September 1, 2005), Renewed Vision, Remix Magazine, retrieved 2008-04-16 
  30. ^ Lefcowitz, Eric (1989), The Monkees Tale, Last Gasp, p. 48,  
  31. ^ Catchlove, Lucinda (April 1, 2002), Wendy Carlos (electronic musician), Remix Magazine 
  32. ^ Tomita at AllMusic. Retrieved 2011-06-04.
  33. ^  
  34. ^ Stevie Wonder, American profile, retrieved 1-09-2014 
  35. ^ Herbie Hancock profile, Sound on Sound, retrieved 1-09-2014 
  36. ^ Chicory Tip (official website) 
  37. ^ The Prophet 5 and 10, retrieved 1-09-2014 
  38. ^ The Synthesizers that shaped modern music, retrieved 1-09-2014 
  39. ^ Sound Synthesis and Sampling by Martin Russ, Taylor and Francis, May 2004, retrieved 1-09-2014 
  40. ^ Synthlearn – the DX7, synthlearn, retrieved 1-09-2014 
  41. ^ Synth FX, Sound On Sound, retrieved 1-09-2014 
  42. ^ The Korg M1, Sound On Sound, retrieved 1-09-2014 
  43. ^ Paul Holmes (22 May 2012), "Electronic and Experimental Music", Routledge, retrieved 1-09-2014 
  44. ^ The revival of analog electronics in a digital world, newelectronics, August 2013, retrieved 1-09-2014 
  45. ^ a b c d e Borthwick, Stuart (2004), Popular Music Genres: An Introduction, Edinburgh University Press, p. 120,  
  46. ^ George-Warren, Holly (2001), The Rolling Stone Encyclopedia of Rock & Roll, Fireside, pp. 707–734,  
  47. ^ Robbins, Ira A (1991), The Trouser Press Record Guide, Maxwell Macmillan International, p. 473,  
  48. ^ Black, Johnny (2003), "The Greatest Songs Ever! Hungry Like the Wolf", Blender Magazine (January/February 2003), retrieved 2008-04-16 
  49. ^ Borthwick 2004, p. 130
  50. ^ Vail, Mark (2000), Vintage Synthesizers: Groundbreaking Instruments and Pioneering Designers of Electronic Music Synthesizers, Backbeat Books, pp. 68–342,  
  51. ^ Reid, Gordon (2000). "Synth Secrets, Part 9: An Introduction to VCAs". Sound on Sound (January 2000). Retrieved 2010-05-25. 
  52. ^ Charles Dodge, Thomas A. Jerse (1997). Computer Music. New York: Schirmer Books. p. 82. 
  53. ^ Pinch, Trevor; Frank Trocco (2004). Analog Days: The Invention and Impact of the Moog Synthesizer. Harvard University Press.  
  54. ^ "Oberheim Polyphonic Synthesizer Programmer (ad)". Contemporary Keyboard Magazine (ad) (September/October 1976): 19. 
  55. ^ Thom Holmes, Thomas B. Holmes (2002), Electronic and experimental music: pioneers in technology and composition, Routledge, p. 59,  
  56. ^ "Radio Squeals Turned to Music for Entire Orchestra", Popular Science (June 1932): 51 
    — the article reported on Léon Theremin's new electronic instruments used at his electric orchestra's first public recital at Carnegie Hall, New York City, including the Fingerboard Theremin, the Keyboard Theremin with fingerboard controller, and the Terpsitone (a performance instrument in the form of a platform on which a dancer could play music through body movement).
  57. ^ Glinsky, Albert (2000), Theremin: ether music and espionage, University of Illinois Press, p. 145,  
  58. ^ Brend, Mark (2005). Strange sounds: offbeat instruments and sonic experiments in pop. Hal Leonard Corporation. p. 22.  
  59. ^ "Moogtonium (1966–1968)". Moog Foundation. Max Brand's version of Mixture Trautonium, built by Robert Moog during 1966–1968.
  60. ^ Synthesizer technique. Hal Leonard Publishing Corporation. 1984. p. 47.  
  61. ^ Pinch, Trevor; Frank Trocco (2004). Analog Days: The Invention and Impact of the Moog Synthesizer. Harvard University Press. p. 62.  
  62. ^ "The "Hellertion"(1929) & the "Heliophon"(1936)", 120 Years of Electronic Music 
  63. ^ Peter Lertes (1933), Elektrische Musik:ein gemeinverständliche Darstellung ihrer Grundlagen, des heutigen Standes der Technik und ihre Zukunftsmöglickkeiten, (Dresden & Leipzig, 1933) 
  64. ^ J. Marx (1947). "Heliophon, ein neues Musikinstrument". Ömz ii (1947): 314. 
  65. ^ Christoph Reuter, Martinetta and Variophon, 
  66. ^ Christoph Reuter, Variophon and Martinetta Enthusiasts Page, 
  67. ^ Zawinul Pepe Joseph,  (also another photograph is shown on gallery page)
  68. ^ Millioniser 2000 Promo Video, Rock Erickson, London, England 1983, July 21, 2009 
  69. ^ Crumar Steiner Masters Touch CV Breath Controller, January 21, 2008 
  70. ^ Yamaha DX100 with BC-1 Breath Controller, December 16, 2007 
  71. ^ The Complete MIDI 1.0 Detailed Specification, MIDI Manufacturers Association Inc., retrieved 2008-04-10 
  72. ^ a b c Rothstein, Joseph (1995), MIDI: A Comprehensive Introduction, A-R Editions, pp. 1–11,  
  73. ^ Webster, Peter Richard; Williams, David Brian (2005), Experiencing Music Technology: Software, Data, and Hardware, Thomson Schirmer, p. 221,  
  74. ^ Royalty Free Music : Funk – incompetech (mp3). Kevin MacLeod ( 
  75. ^ Aitken, Stuart (10 May 2011). "Charanjit Singh on how he invented acid house … by mistake".  
  76. ^ US patent 3,358,070, Alan C. Young (Hammond Co.), "Electronic Organ Arpeggio Effect Device", issued 1967-12-12 
  77. ^ "RMI Harmonic Synthesizer". Jarrography – The ultimate Jean Michel Jarre discography. 


See also

Arpeggiators grew out of the automatic accompaniment systems of electronic organs. A well-known use is on Duran Duran's song "Rio", in which the arpeggiator on a Roland Jupiter-4 is heard playing a C minor chord in random mode. Arpeggiators fell out of favor in the late 1980s and early 1990s and were absent from the most popular synthesizers of the period, but a resurgence of interest in analog synthesizers during the 1990s, together with the use of rapid-fire arpeggios in several popular dance hits, brought them back into fashion.

An arpeggiator interface on a Novation Nova
Trance Lead
A sample of a Eurodance synthesizer riff using a rapid 1/16-note arpeggiator

An arpeggiator is a feature available on several synthesizers that automatically steps through a sequence of notes based on an input chord, thus creating an arpeggio. The notes can often be transmitted to a MIDI sequencer for recording and further editing. An arpeggiator may have controls for the speed, range, and order in which the notes play: upwards, downwards, or in a random order. More advanced arpeggiators allow the user to step through a pre-programmed complex sequence of notes, or to play several arpeggios at once. Some allow the pattern to be sustained after the keys are released: in this way, a sequence of arpeggio patterns may be built up over time by pressing several keys one after the other. Arpeggiators are also commonly found in software sequencers. Some arpeggiators/sequencers expand these features into a full phrase sequencer, which allows the user to trigger complex, multi-track blocks of sequenced data from a keyboard or other input device, typically synchronized with the tempo of a master clock.
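
The stepping behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not the algorithm of any particular instrument; note numbers are assumed to be MIDI pitches, and the `arpeggiate` generator below is a hypothetical helper covering only the up, down, and random orders mentioned here.

```python
import itertools
import random

def arpeggiate(chord, mode="up", octaves=1):
    """Yield an endless stream of notes stepped from a held chord.

    chord   -- iterable of MIDI note numbers (the keys being held)
    mode    -- 'up', 'down', or 'random' (the note order)
    octaves -- range control: how many octaves the pattern spans
    """
    notes = sorted(chord)
    # Duplicate the held notes into higher octaves for the range setting.
    pool = [n + 12 * o for o in range(octaves) for n in notes]
    if mode == "random":
        while True:
            yield random.choice(pool)
    elif mode == "up":
        yield from itertools.cycle(pool)
    elif mode == "down":
        yield from itertools.cycle(pool[::-1])
    else:
        raise ValueError(mode)

# A C minor chord (C-Eb-G) stepped upwards:
steps = arpeggiate([60, 63, 67], mode="up")
print([next(steps) for _ in range(5)])  # [60, 63, 67, 60, 63]
```

A real arpeggiator would additionally clock these steps against a tempo source and emit note-off events; a phrase sequencer extends the same idea to whole multi-track blocks.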


In the 2000s, several equipment manufacturers introduced compact bass synthesizers controlled by a pedalboard or button board.

When programmable music sequencers became widely available in the 1980s (e.g., the Synclavier), bass synths were used to create highly syncopated rhythms and complex, rapid basslines. A particularly influential bass synthesizer was the Roland TB-303, which followed the Firstman SQ-01. Released in late 1981, it featured a built-in sequencer and later became strongly associated with acid house music, a style that gained wide popularity after Phuture used the TB-303 on the single "Acid Tracks" in 1987.[75]

The Roland TB-303 acid bass machine of the 1980s
Acid bass
An example of an acid bass track, using an SH-101 for bass, an MC-202 for the filter hook line, and a TR-808 for drums.

In the 1970s, miniaturized solid-state components allowed self-contained, portable instruments such as the Moog Taurus, a 13-note pedal keyboard played by the feet. The Moog Taurus was used in live performances by a range of pop, rock, and blues-rock bands. An early use of a bass synthesizer came in 1972, on a solo album by John Entwistle (the bassist for The Who) entitled Whistle Rymes. Genesis bass player Mike Rutherford used a Dewtron "Mister Bassman" for the recording of the album Nursery Cryme in August 1971. Stevie Wonder introduced synth bass to a pop audience in the early 1970s, notably on "Superstition" (1972) and "Boogie On Reggae Woman" (1974). In 1977, Parliament's funk single "Flash Light" used a bass synthesizer. Lou Reed, widely considered a pioneer of electric guitar textures, played bass synthesizer on "Families", from his 1979 album The Bells.

The bass synthesizer (or "bass synth") is used to create sounds in the bass range, from simulations of the electric bass or double bass to distorted, buzz-saw-like artificial bass sounds, by generating and combining signals of different frequencies. Bass synth patches may incorporate a range of sounds and tones, including wavetable-style, analog, and FM-style bass sounds, delay effects, distortion effects, and envelope filters. A modern digital synthesizer uses a frequency-synthesizer microprocessor component to generate signals of different frequencies. While most bass synths are controlled by electronic keyboards or pedalboards, some performers use an electric bass with MIDI pickups to trigger a bass synthesizer.

A 1970s-era Minimoog
Minimoog bass
An example of funk-styled grooving synth bass by Kevin MacLeod.[74]

Moog Taurus pedal bass synth
Synth bass
An example of a classic analog bass synthesizer sound. Four sawtooth bass filter sweeps with gradually increasing resonance.

Synth bass

The main feature of a synth pad is a very long attack and decay time with extended sustain. In some instances, pulse-width modulation (PWM) of a square-wave oscillator can be added to create a "vibrating" sound.
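
The "long attack and decay" character can be made concrete with a simple ADSR amplitude sketch. The function and its parameter values below are hypothetical, chosen only to show that a pad envelope works in seconds where a percussive patch would work in milliseconds:

```python
def adsr(t, gate_off, attack=2.0, decay=1.0, sustain=0.8, release=3.0):
    """Amplitude (0..1) at time t seconds, for a key released at gate_off.

    Pad-like values: a 2 s attack, a gentle decay to a high sustain level,
    and a 3 s release tail after the key is let go.
    """
    if t < gate_off:                      # key still held
        if t < attack:                    # ramp up from silence
            return t / attack
        if t < attack + decay:            # fall toward the sustain level
            return 1.0 - (1.0 - sustain) * (t - attack) / decay
        return sustain                    # hold while the key is down
    # Release: fade linearly from whatever level held at key-off.
    level = adsr(gate_off - 1e-9, gate_off, attack, decay, sustain, release)
    return max(0.0, level * (1.0 - (t - gate_off) / release))

print(adsr(1.0, 10.0))  # halfway up the attack ramp: 0.5
```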

A synth pad is a sustained chord or tone generated by a synthesizer, often employed for background harmony and atmosphere in much the same fashion that a string section is often used in acoustic music. Typically, a synth pad plays many whole or half notes, sometimes holding the same note while a lead voice sings or plays an entire musical phrase. Often, the sounds used for synth pads have a vaguely organ, string, or vocal timbre. Much popular music in the 1980s employed synth pads, this being the time of polyphonic synthesizers, as did the then-new styles of smooth jazz and New Age music. One of many well-known songs from the era to incorporate a synth pad is "West End Girls" by the Pet Shop Boys, who were noted users of the technique.

Synth pad

In popular music, a synth lead is generally used for playing the main melody of a song, but it is also often used for creating rhythmic or bass effects. Although most commonly heard in electronic dance music, synth leads have been used extensively in hip-hop since the 1980s and rock songs since the 1970s. Most modern music relies heavily on the synth lead to provide a musical hook to sustain the listener's interest throughout an entire song.

Synth lead

Synth lead
George Duke

Typical roles

Open Sound Control (OSC) is another music data specification, designed for online networking. In contrast with MIDI, OSC allows thousands of synthesizers or computers to share music performance data over the Internet in real time.
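
The wire format behind this is simple: an OSC message is a null-padded address pattern, a type-tag string, and big-endian arguments. The sketch below is a minimal encoder under stated assumptions (only int32 and float32 arguments; the function name is ours, not from the specification):

```python
import struct

def osc_message(address, *args):
    """Pack a minimal OSC message: a null-terminated address padded to a
    multiple of 4 bytes, a type-tag string (',' plus one tag per argument,
    padded the same way), then the arguments in big-endian byte order."""
    def pad(b):
        # OSC strings are null-terminated, then padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans have dedicated tags, not shown here")
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit integer
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit float
        else:
            raise TypeError(type(a))
    return pad(address.encode()) + pad(tags.encode()) + payload

msg = osc_message("/synth/freq", 440)
print(len(msg))  # 20 bytes: 12 (address) + 4 (",i" tags) + 4 (int32)
```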

The General MIDI (GM) software standard was devised in 1991 to serve as a consistent way of describing a set of over 200 tones (including percussion) available to a PC for playback of musical scores.[73] For the first time, a given MIDI preset consistently produced a specific instrumental sound on any GM-conforming device. The Standard MIDI File (SMF) format (extension .mid) combined MIDI events with delta times – a form of time-stamping – and became a popular standard for exchange of music scores between computers. In the case of SMF playback using integrated synthesizers (as in computers and cell phones), the hardware component of the MIDI interface design is often unneeded.
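
The delta times mentioned above are stored as variable-length quantities: seven bits per byte, most significant groups first, with the top bit set on every byte except the last. A minimal encoder sketch (the function name is ours, not from the SMF specification):

```python
def encode_delta_time(ticks):
    """Encode a non-negative tick count as an SMF variable-length quantity."""
    if ticks < 0:
        raise ValueError("delta times are non-negative")
    groups = [ticks & 0x7F]                   # last byte: continuation bit clear
    ticks >>= 7
    while ticks:
        groups.append((ticks & 0x7F) | 0x80)  # earlier bytes: bit 7 set
        ticks >>= 7
    return bytes(reversed(groups))

print(encode_delta_time(128).hex())  # '8100' -- two bytes for 128 ticks
```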

Synthesizers became easier to integrate and synchronize with other electronic instruments and controllers with the introduction of Musical Instrument Digital Interface (MIDI) in 1983.[71] First proposed in 1981 by engineer Dave Smith of Sequential Circuits, the MIDI standard was developed by a consortium now known as the MIDI Manufacturers Association.[72] MIDI is an opto-isolated serial interface and communication protocol.[72] It provides for the transmission from one device or instrument to another of real-time performance data. This data includes note events, commands for the selection of instrument presets (i.e. sounds, or programs or patches, previously stored in the instrument's memory), the control of performance-related parameters such as volume, effects levels and the like, as well as synchronization, transport control and other types of data. MIDI interfaces are now almost ubiquitous on music equipment and are commonly available on personal computers (PCs).[72]
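
As an illustration of what this performance data looks like on the wire, the sketch below builds two common channel messages by hand, assuming only the standard MIDI 1.0 status-byte layout (message type in the high nibble, channel in the low nibble, 7-bit data bytes). The helper names are ours:

```python
def note_on(channel, note, velocity):
    """Three-byte Note On: status 0x9n, then pitch and velocity (0-127)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("out-of-range MIDI value")
    return bytes([0x90 | channel, note, velocity])

def program_change(channel, program):
    """Two-byte Program Change: selects a stored preset (sound/patch)."""
    return bytes([0xC0 | channel, program & 0x7F])

# Middle C at velocity 100 on channel 1 (channels are 0-indexed on the wire):
print(note_on(0, 60, 100).hex())  # '903c64'
```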

Kirk Pearson's "A Man Who Went Missing" uses MIDI samplers to produce effects that would be difficult or even impossible to achieve on acoustic instruments.

MIDI control

Other controllers include the Theremin, light-beam controllers, the touch buttons (touche d'intensité) of the Ondes Martenot, and various types of foot pedals. Envelope-following systems, the most sophisticated being the vocoder, track the power or amplitude of an input audio signal rather than the breath pressure used by wind controllers. The talk box uses still more direct articulation, shaping sound with the vocal tract without breath, although it is rarely categorized as a synthesizer.
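
An envelope follower of this kind can be reduced to a few lines: rectify the input and smooth it with a one-pole low-pass filter, so the control signal tracks the loudness contour of the audio. A minimal sketch, with an assumed smoothing coefficient:

```python
def envelope_follow(samples, smoothing=0.99):
    """Track the amplitude envelope of an audio signal.

    Each output value is a weighted blend of the previous envelope value
    and the rectified (absolute) current sample; smoothing close to 1.0
    gives a slow, smooth envelope, lower values a faster response.
    """
    env, out = 0.0, []
    for s in samples:
        env = smoothing * env + (1.0 - smoothing) * abs(s)
        out.append(env)
    return out

# A constant full-scale input: the envelope rises smoothly toward 1.0.
env = envelope_follow([1.0] * 1000)
print(round(env[-1], 3))  # 1.0
```

A vocoder goes further, running a bank of such followers on band-filtered slices of the input and using each one to control the gain of the corresponding band of a carrier signal.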


Accordion controllers use pressure transducers on bellows for articulation.

Trumpet-style controllers have included products by Steiner/Crumar/Akai, Yamaha, and Morrison. Breath controllers may also be used as an adjunct to a conventional synthesizer; the Crumar Steiner Masters Touch[69] and the Yamaha Breath Controller and compatible products are examples.[70] Several other controllers also provide breath-like articulation capabilities.

A harmonica-style interface was the Millioniser 2000 (c. 1983).[67][68]
