Hearing alerts us to many types of useful information.
Humans experience hearing by detecting sound waves.
Sound waves are periodic compressions of air, water, or other media.
Sound waves vary in amplitude and frequency.
Amplitude is the intensity of a sound wave, perceived as loudness.
Frequency is the number of compressions per second and is measured in hertz (Hz).
Pitch is a psychoacoustical attribute of a sound wave: a person’s subjective perception of its frequency, which cannot be directly measured.
Timbre is tone quality or tone complexity.
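To make these terms concrete, here is a minimal Python sketch (assuming NumPy is available; the tone helper is written just for this illustration) that synthesizes two tones with the same frequency, and therefore the same pitch, but different timbre:

```python
import numpy as np

RATE = 44_100                            # samples per second
t = np.arange(int(RATE * 1.0)) / RATE    # one second of time points

def tone(freq_hz, amplitude, harmonics=(1.0,)):
    # Sum sinusoids at integer multiples of freq_hz; `harmonics` gives the
    # relative amplitude of each multiple. More harmonics means a more complex
    # timbre, while the fundamental frequency (and thus the pitch) stays put.
    wave = sum(a * np.sin(2 * np.pi * freq_hz * (i + 1) * t)
               for i, a in enumerate(harmonics))
    return amplitude * wave / np.max(np.abs(wave))  # scale peak to `amplitude`

pure = tone(440.0, 0.5)                              # a pure 440 Hz sine tone
rich = tone(440.0, 0.5, harmonics=(1.0, 0.5, 0.25))  # same pitch, richer timbre
```

Raising amplitude makes a tone louder, raising freq_hz raises its pitch, and changing harmonics changes only its timbre.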
Children hear higher frequencies than adults; the ability to recognize high frequencies diminishes with age and exposure to loud noises.
Words are a way of categorizing information about our perceptual world; with them we can tell others what we think, know, and imagine.
Positron emission tomography (PET) is an imaging technique that detects changes in blood flow by measuring changes in the uptake of compounds such as oxygen or glucose.
Using PET, Zatorre and colleagues (1992, 1995) found that passively listening to bursts of noise activates the primary auditory cortex (area A1).
More complex auditory stimulation, such as speech syllables, is analyzed in adjacent secondary auditory areas.
Area A1 analyzes incoming auditory signals, while the secondary areas carry out the higher-order processing required to analyze language sound patterns.
Listening to words activates the posterior speech area, including Wernicke’s area.
Making a phonetic discrimination activates the frontal region, including Broca’s area.
Stimulating the facial areas of the motor cortex and the somatosensory cortex elicits vocalization related to movements of the mouth and tongue.
The auditory system must have a way to categorize sounds that vary, recognizing, for example, the same initial consonant in deep, deck, duke …
The auditory system can extract pitch regardless of the sound’s source.
People communicate emotion through alterations in pitch, loudness, and timbre, collectively called prosody.
Aprosodia is a neurological condition characterized by an inability to properly convey or interpret emotional prosody.
The inner ear contains a snail-shaped structure called the cochlea.
The cochlea contains three fluid-filled tunnels: the scala vestibuli, scala media, and scala tympani.
Hair cells are auditory receptors that lie between the basilar membrane and the tectorial membrane in the cochlea.
When displaced by vibrations in the fluid of the cochlea, they excite the cells of the auditory nerve.
Electrical stimulation of area A1 produced simple tones (e.g., ringing sounds).
The insula, located within the lateral fissure, is multifunctional cortical tissue containing regions related to language, taste perception, and the neural structures underlying social cognition.
Languages have many structural elements in common.
Wernicke’s area is a posterior speech area at the rear of the left temporal lobe that regulates language comprehension; it is also called the posterior speech zone.
Wernicke proposed a model explaining how areas in the left hemisphere interact to produce speech.
In this model, Broca’s area produces a word by activating a specific motor programme.
Language is universal in human populations.
Tonotopic representation means that area A1 is organized as an orderly map of sound frequencies: neighboring points on the cortex respond to neighboring frequencies, mirroring the frequency map along the basilar membrane of the cochlea.
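As a rough quantitative aside (not part of the notes themselves), the tonotopic map originates along the cochlea’s basilar membrane, and the Greenwood function is a commonly used approximation of that position-to-frequency map; the constants below are the frequently cited human values, so treat this as a sketch rather than a definitive model:

```python
def greenwood_frequency(x: float) -> float:
    """Approximate characteristic frequency (Hz) at relative position x along
    the human basilar membrane (x = 0 at the apex, x = 1 at the base), using
    the commonly cited human constants from Greenwood (1990)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
# Spans roughly 20 Hz near the apex to about 20,000 Hz at the base,
# matching the human audible range.
```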
Penfield used a weak electrical current to stimulate the brain surface of conscious patients during surgery.
Stimulation of Wernicke’s area was apt to cause some interpretation of a sound (e.g., attributing a buzzing to a familiar source such as a cricket).
Humans learn language early in life and seemingly without effort.
Musical ability is generally a right-hemisphere specialization complementary to language ability, which is in the left hemisphere in most people.
Broca’s area is an anterior speech area in the left hemisphere that functions with the motor cortex to produce the movements needed for speaking.