Temporal dynamics of music and language

The temporal dynamics of music and language describes how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure. Both employ a finite set of basic elements (such as tones or words) that are combined in ordered ways to create complete musical or lingual ideas.

Contents

  • Neuroanatomy of language and music
  • Imaging the brain in action
  • Other research methods
  • Recent research
  • Developmental aspects
  • References

Neuroanatomy of language and music

Key areas of the brain are used in both music processing and language processing. One such region is Broca's area, which is devoted to language production and comprehension; patients with lesions, or damage, in Broca's area often exhibit poor grammar, slow speech production and poor sentence comprehension. The inferior frontal gyrus, a gyrus of the frontal lobe, is involved in timing events and in reading comprehension, particularly the comprehension of verbs. Wernicke's area, located on the posterior section of the superior temporal gyrus, is important for understanding vocabulary and written language.

The primary auditory cortex is located on the temporal lobe of the cerebral cortex. This region is important in music processing and plays a central role in determining the pitch and volume of a sound.[1] Damage to this region often results in the loss of the ability to hear any sounds at all. The frontal cortex has been found to be involved in processing the melodies and harmonies of music; for example, when a patient is asked to tap out a beat or to reproduce a tone, this region is very active on fMRI and PET scans.[2] The cerebellum, the "mini brain" at the rear of the skull, appears from brain imaging studies to be similarly involved in processing melodies and determining tempo. The medial prefrontal cortex, along with the primary auditory cortex, has also been implicated in tonality, or determining pitch and volume.[1]

In addition to the specific regions mentioned above, many "information switch points" are active during language and music processing. These regions are believed to act as transmission routes that relay neural impulses, allowing the regions above to communicate and to process information correctly. These structures include the thalamus and the basal ganglia.[2]

Some of the areas mentioned above have been shown, through PET and fMRI studies, to be active in both music and language processing. These areas include the primary motor cortex, Broca's area, the cerebellum, and the primary auditory cortices.[2]

Imaging the brain in action

The imaging techniques best suited to studying temporal dynamics are those that provide information in real time. The methods most used in this research are functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).[3]

Positron emission tomography involves injecting a short-lived radioactive tracer isotope into the blood. When the radioisotope decays, it emits positrons, which are detected by the scanner. The isotope is chemically incorporated into a biologically active molecule, such as glucose, which powers metabolic activity. Whenever brain activity occurs in a given area, these molecules are recruited there. Once the concentration of the biologically active molecule, and of its radioactive "dye", rises far enough, the scanner can detect it.[3] About one second elapses between the onset of brain activity and its detection by the PET device, because the dye takes time to reach a concentration that can be detected.[4]

Example of a PET scan.
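The detection lag described above can be illustrated with a very simple accumulation model. The Python sketch below is not taken from any cited study; the uptake time constant and detection threshold are assumptions chosen only to show why the delay is on the order of a second. It models tracer concentration in an active region as an exponential approach to a plateau and reports when that concentration first crosses a detection threshold.

    # Minimal sketch of the PET detection lag; the constants are illustrative assumptions.
    import numpy as np

    TIME_CONSTANT_S = 0.4      # assumed tracer uptake time constant (seconds)
    DETECTION_THRESHOLD = 0.9  # assumed fraction of the plateau the scanner can resolve

    t = np.linspace(0.0, 3.0, 3001)                      # seconds after activity onset
    concentration = 1.0 - np.exp(-t / TIME_CONSTANT_S)   # normalized tracer level

    # first time point at which the accumulated tracer is strong enough to detect
    detect_idx = np.argmax(concentration >= DETECTION_THRESHOLD)
    print(f"Detectable after about {t[detect_idx]:.2f} s")   # roughly 0.9 s with these values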

Functional magnetic resonance imaging, or fMRI, is a form of traditional MRI that allows brain activity to be observed in near real time. An fMRI device works by detecting changes in neural blood flow associated with brain activity. fMRI devices use a strong, static magnetic field to align the nuclei of atoms within the brain. An additional magnetic field, often called the gradient field, is then applied to elevate the nuclei to a higher energy state.[5] When the gradient field is removed, the nuclei revert to their original state and emit energy, which is detected by the machine and used to form an image. When neurons become active, blood flow to those regions increases, and this oxygen-rich blood displaces oxygen-depleted blood. Hemoglobin molecules in the oxygen-carrying red blood cells have different magnetic properties depending on whether they are oxygenated.[5] By focusing detection on the magnetic disturbances created by hemoglobin, the activity of neurons can be mapped in near real time.[5] Few other techniques allow researchers to study temporal dynamics in real time.
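Because fMRI measures blood flow rather than neural firing itself, the measured signal lags the underlying activity. One common way to describe this is to model the blood-oxygen-level-dependent (BOLD) response as neural activity convolved with a hemodynamic response function (HRF). The Python sketch below is illustrative only; the double-gamma HRF parameters are conventional defaults, not values taken from this article.

    # Illustrative model: predicted BOLD signal = neural activity convolved with an HRF.
    import numpy as np
    from scipy.stats import gamma

    dt = 0.1                          # time step in seconds
    t = np.arange(0, 30, dt)          # 30-second window

    # canonical double-gamma HRF: a peak near 5 s minus a small late undershoot
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
    hrf /= hrf.sum()

    # one second of simulated neural activity starting at t = 2 s
    neural = np.zeros_like(t)
    neural[(t >= 2) & (t < 3)] = 1.0

    bold = np.convolve(neural, hrf)[: len(t)]   # predicted BOLD time course

    print(f"Neural activity begins at {t[np.argmax(neural)]:.1f} s")
    print(f"Predicted BOLD peak at    {t[np.argmax(bold)]:.1f} s")

In this toy model the predicted BOLD peak arrives several seconds after the burst of neural activity, reflecting the sluggishness of the blood-flow response on which fMRI depends.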

A patient undergoing MEG.

Another important tool for analyzing temporal dynamics is magnetoencephalography, or MEG. It maps brain activity by detecting and recording the magnetic fields produced by the electrical currents of neural activity. The device uses a large array of superconducting quantum interference devices, called SQUIDs, to detect this magnetic activity. Because the magnetic fields generated by the human brain are so small, the entire device must be placed in a specially designed room that shields it from external magnetic fields.[5]
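Because MEG records the fields produced by neural currents directly, rather than relying on blood flow or tracer uptake, response timing can be read straight from the recorded trace. The toy Python example below makes this concrete; the sampling rate, noise level and response shape are invented for illustration and are not drawn from any cited study. A simulated evoked response is buried in sensor noise and its onset recovered by simple thresholding.

    # Toy illustration: recovering the onset of a simulated evoked field from noise.
    import numpy as np

    rng = np.random.default_rng(0)

    fs = 1000                              # samples per second
    t = np.arange(0, 0.5, 1 / fs)          # 500 ms of data

    # simulated evoked field: one 10 Hz cycle beginning 100 ms after a stimulus
    signal = np.where((t >= 0.1) & (t < 0.2),
                      np.sin(2 * np.pi * 10 * (t - 0.1)), 0.0)
    trace = signal + 0.05 * rng.standard_normal(t.size)   # add sensor noise

    # crude onset detection: first sample exceeding 4x the pre-stimulus noise level
    noise_sd = trace[t < 0.1].std()
    onset_idx = np.argmax(np.abs(trace) > 4 * noise_sd)
    print(f"Estimated response onset: {t[onset_idx] * 1000:.0f} ms")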

Other research methods

Another common method for studying brain activity during language and music processing is transcranial magnetic stimulation, or TMS. TMS uses induction to create weak electrical currents within the brain by means of a rapidly changing magnetic field. These currents depolarize or hyperpolarize neurons, which can produce or inhibit activity in different regions. The effect of such disruptions on function can be used to assess brain interconnections.[6]

Recent research

Many aspects of language and of musical melodies are processed by the same brain areas. In 2006, Brown, Martinez and Parsons found that generating a melody or a sentence activated many of the same areas, including the primary motor cortex, the supplementary motor area, Broca's area, the anterior insula, the primary auditory cortex, the thalamus, the basal ganglia and the cerebellum.[7]

A 2008 study by Jentschke, Koelsch, Sallat and Friederici found that language impairment may also affect the ability to process music. Children with specific language impairment (SLI) were not as proficient at matching tones to one another, or at keeping tempo with a simple metronome, as children without language disabilities. This highlights the fact that neurological disorders that affect language may also affect musical processing ability.[8]

Stewart, Walsh, Frith and Rothwell (2001) investigated which regions underlie melody and language production by asking subjects to play melodies on a simple keyboard or to produce speech while TMS was applied over candidate regions. TMS applied to the left frontal lobe disrupted the subjects' ability to produce language material but did not comparably inhibit their ability to play musical melodies. This suggests that some differences exist between music and language production.[9]

Developmental aspects

The basic elements of musical and lingual processing appear to be present even before birth. For example, a 2011 French study that monitored fetal heart rates found that, after 28 weeks of gestation, fetuses respond to changes in musical pitch and tempo. Baseline heart rates were determined by two hours of monitoring before any stimulus; descending and ascending frequencies at different tempos were then played near the womb. The study also investigated fetal responses to lingual patterns, such as recordings of different syllables, but found no response to the lingual stimuli. Heart rates increased in response to high-pitched, loud sounds compared with low-pitched, soft sounds. This suggests that the basic elements of sound processing, such as discerning pitch, tempo and loudness, develop before birth, while the processes that discern speech patterns develop afterwards.[10]

A 2010 study investigated the development of lingual skills in children with speech difficulties and found that musical stimulation improved the outcome of traditional speech therapy. Children aged 3.5 to 6 years were separated into two groups: one heard lyric-free music at each speech therapy session, while the other received traditional speech therapy alone. Both phonological capacity and the children's ability to understand speech increased faster in the group exposed to regular musical stimulation.[11]

References

  1. ^ a b Ghazanfar, A. A.; Nicolelis, M. A. (2001). "Feature Article: The Structure and Function of Dynamic Cortical and Thalamic Receptive Fields". Cerebral Cortex 11 (3): 183–193.  
  2. ^ a b c Theunissen, F. E.; David, S. V.; Singh, N. C.; Hsu, A.; Vinje, W. E.; Gallant, J. L. (2001). "Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli". Network: Computation in Neural Systems 12 (3): 289–316. PMID 11563531.
  3. ^ a b Baird, A.; Samson, S. V. (2009). "Memory for Music in Alzheimer's Disease: Unforgettable?". Neuropsychology Review 19 (1): 85–101.  
  4. ^ Bailey, D. L.; Townsend, D. W.; Valk, P. E.; Maisey, M. N. (2003). Positron Emission Tomography: Basic Sciences. Secaucus, NJ: Springer-Verlag.
  5. ^ a b c d Hauk, O.; Wakeman, D. G.; Henson, R. (2011). "Comparison of noise-normalized minimum norm estimates for MEG analysis using multiple resolution metrics". NeuroImage 54 (3): 1966–1974. DOI:10.1016/j.neuroimage.2010.09.053. PMC 3018574. PMID 20884360.
  6. ^ Fitzgerald, P; Fountain, S; Daskalakis, Z (2006). "A comprehensive review of the effects of rTMS on motor cortical excitability and inhibition". Clinical Neurophysiology 117 (12): 2584–2596.  
  7. ^ Brown, S.; Martinez, M. J.; Parsons, L. M. (2006). "Music and language side by side in the brain: A PET study of the generation of melodies and sentences". European Journal of Neuroscience 23 (10): 2791–2803.  
  8. ^ Jentschke, S.; Koelsch, S.; Sallat, S.; Friederici, A. D. (2008). "Children with Specific Language Impairment Also Show Impairment of Music-syntactic Processing". Journal of Cognitive Neuroscience 20 (11): 1940–1951.  
  9. ^ Stewart, L.; Walsh, V.; Frith, U.; Rothwell, J. (2001). "Transcranial Magnetic Stimulation Produces Speech Arrest but Not Song Arrest". Annals of the New York Academy of Sciences 930: 433–435.
  10. ^ Granier-Deferre, C.; Ribeiro, A.; Jacquet, A. Y.; Bassereau, S. (2011). "Near-term fetuses process temporal features of speech". Developmental Science 14 (2): 336–352.
  11. ^ Gross, W.; Linden, U.; Ostermann, T. (2010). "Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study". BMC Complementary and Alternative Medicine 10: 39.