Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, that is, to each ear. In humans, this process is essential for understanding speech in noisy and reverberant environments.
The process of binaural fusion is important for perceiving the locations of sound sources, especially along the horizontal or azimuth direction, and for sound segregation.[1] Sound segregation refers to the ability to identify acoustic components from one or more sound sources.[2] The binaural auditory system is highly dynamic and capable of rapidly adjusting tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares movements of the two eardrums to extract physical cues and perceive auditory objects.[3]
When stimulation from a sound reaches the ear, the eardrum deflects in a mechanical fashion, and the three middle ear bones (ossicles) transmit the mechanical signal to the cochlea, where hair cells transform the mechanical signal into an electrical signal. The auditory nerve, also called the cochlear nerve, then transmits action potentials to the central auditory nervous system.[3]
In binaural fusion, inputs from both ears integrate and fuse to create a complete auditory picture in the brainstem. The signals sent to the higher auditory nervous system therefore represent this complete picture: information integrated from both ears rather than from a single ear.
The binaural squelch effect is a result of nuclei of the brainstem processing timing, amplitude, and spectral differences between the two ears. Sounds are integrated and then separated into auditory objects. For this effect to take place, neural integration from both sides is required.[4]
Anatomy
In mammals, as sound waves travel from the eardrum to the cochlea in the inner ear, they stimulate the hair cells that line the basilar membrane.[5] Using these hair cells, the cochlea converts auditory information at each ear into electrical impulses, which travel along the auditory nerve (AN) from the cochlea to the cochlear nucleus (CN), located in the pons of the brainstem.[6] From the ventral CN (VCN), nerve signals project to the superior olivary complex (SOC), a set of brainstem nuclei that consists primarily of two nuclei, the medial superior olive (MSO) and the lateral superior olive (LSO), and is the primary site of binaural fusion. The subdivision of the VCN that concerns binaural fusion is the anteroventral cochlear nucleus (AVCN).[3] The AVCN consists of spherical bushy cells and globular bushy cells and can also transmit signals to the medial nucleus of the trapezoid body (MNTB), whose neurons project to the MSO. Transmissions from the SOC travel to the inferior colliculus (IC) via the lateral lemniscus. At the level of the IC, binaural fusion is more complete. The signal then ascends to the medial geniculate body (MGB) of the thalamocortical system, and sensory inputs to the MGB are relayed to the primary auditory cortex.[3][7][8][9]
Function
Binaural fusion is responsible for avoiding the creation of multiple sound images from a sound source and its reflections. The advantages of this phenomenon are more noticeable in small rooms, decreasing as the reflective surfaces are placed farther from the listener.[10]
Central auditory system
The central auditory system converges inputs from both ears onto neurons within the brainstem. This system contains many subcortical nuclei that collect, integrate, and analyze afferent signals from the ears, for extraction and analysis of the dimensions of sounds. The outcome is a representation of auditory space and auditory objects.[3][11]
The cells of the lower auditory pathways are specialized to analyze physical sound parameters.[3] Summation is observed when a sound is perceived as roughly twice as loud when heard with both ears rather than with one. This process, called binaural summation, results from the different acoustics at each ear, which depend on where the sound is coming from.[4]
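To make the idea concrete, the following minimal Python sketch (not drawn from the cited sources) assumes perfect binaural summation, i.e. a doubling of perceived loudness, and models loudness with Stevens' power law with an exponent of 0.3; under those assumptions, a binaural sound is about as loud as a monaural sound roughly 10 dB more intense.

```python
import math

# Toy illustration of binaural loudness summation. Assumptions (not from the cited
# sources): perfect summation (two ears -> perceived loudness doubles) and Stevens'
# power law, loudness ~ intensity**0.3.

def loudness(intensity, exponent=0.3):
    """Perceived loudness (arbitrary units) as a power-law function of intensity."""
    return intensity ** exponent

def level_increase_for_doubled_loudness(exponent=0.3):
    """Level increase (dB) a monaural sound would need to sound twice as loud."""
    intensity_ratio = 2 ** (1 / exponent)   # intensity factor that doubles loudness
    return 10 * math.log10(intensity_ratio)

mono = loudness(1.0)
binaural = 2 * mono      # perfect-summation assumption: listening with both ears doubles loudness
print(f"binaural vs monaural loudness ratio: {binaural / mono:.1f}")
print(f"equivalent monaural level increase: {level_increase_for_doubled_loudness():.1f} dB")
```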
Medial superior olive and lateral superior olive
The MSO contains cells that compare inputs from the left and right cochlear nuclei.[12] The tuning of neurons in the MSO favors low frequencies, whereas those in the LSO favor high frequencies.[13]
GABAB receptors in the LSO and MSO are involved in the balance of excitatory and inhibitory inputs. The GABAB receptors are coupled to G proteins and provide a way of regulating synaptic efficacy. Specifically, GABAB receptors modulate excitatory and inhibitory inputs to the LSO.[3] Whether the GABAB receptor functions as excitatory or inhibitory for the postsynaptic neuron depends on the exact location and action of the receptor.[1]
Sound localization
Sound localization is the ability to correctly identify the directional location of sounds, typically quantified in terms of azimuth (angle around the horizontal plane) and elevation (defined in various ways as an angle from the horizontal plane). The time, intensity, and spectral differences in the sounds arriving at the two ears are used in localization. Lateralization (localization in azimuth) of sounds is accomplished primarily by analyzing interaural time difference (ITD). Localization of high-frequency sounds is aided by analyzing interaural level difference (ILD) and spectral cues.[4]
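As an illustration of the azimuth cue (a textbook approximation, not taken from the cited sources), the classical Woodworth spherical-head formula relates azimuth to ITD; the sketch below assumes a head radius of roughly 8.75 cm and a speed of sound of 343 m/s.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a spherical head.

    Woodworth's formula: ITD = (a / c) * (theta + sin(theta)), for a distant
    source at azimuth theta (radians) in the horizontal plane.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:>2} deg -> ITD ~ {itd_woodworth(az) * 1e6:.0f} microseconds")
```

Under these assumptions, a source directly to one side (90° azimuth) produces an ITD of roughly 0.65 ms, on the order of the maximum commonly cited for an adult human head.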
Mechanism
Auditory nerve and cochlear nucleus
The key mechanisms of the AN and CN are fast synapses that preserve the precise timing, or temporal fine structure, of sounds as they are transduced into action potentials, from the hair cells in the cochlea through to the olivary complex. These mechanisms include the largest and fastest synapses in the mammalian body: the endbulbs of Held, where myelinated AN fibers innervate the AVCN, and the calyx of Held, where neurons from the AVCN innervate the MNTB. The processing and propagation of action potentials through these large excitatory synapses is rapid and temporally precise, so information about the timing of sound waves, which is crucial to binaural processing, is faithfully preserved.[14]
Superior olivary complex
Binaural processing occurs through the interaction of excitatory and inhibitory inputs in the LSO and MSO.[1][3][12] The SOC processes and integrates binaural information, usually described as ITD and ILD. This initial processing of ILD and ITD is regulated by GABAB receptors.[1] The exact mechanisms are still being investigated.[15]
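A rough signal-level sketch of how these two cues could be estimated from the left- and right-ear waveforms (a simplified illustration, not a model of the actual neural circuitry): cross-correlation yields the ITD, and the ratio of root-mean-square levels yields the ILD.

```python
import numpy as np

def estimate_itd_ild(left, right, fs):
    """Estimate ITD (seconds) and ILD (dB) from two ear signals sampled at fs Hz."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # peak cross-correlation lag in samples
    itd = -lag / fs                           # positive ITD: sound reaches the left ear first
    ild = 20 * np.log10(np.sqrt(np.mean(left ** 2)) / np.sqrt(np.mean(right ** 2)))
    return itd, ild

# Toy input: a 500 Hz tone that reaches the left ear 250 microseconds earlier and 3 dB louder.
fs = 48000
t = np.arange(0, 0.05, 1 / fs)
right = np.sin(2 * np.pi * 500 * t)
left = 10 ** (3 / 20) * np.sin(2 * np.pi * 500 * (t + 250e-6))
itd, ild = estimate_itd_ild(left, right, fs)
print(f"ITD ~ {itd * 1e6:.0f} microseconds, ILD ~ {ild:.1f} dB")
```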
Outputs from the MSO and LSO are sent via the lateral lemniscus to the IC, which integrates the spatial localization of sound. In the IC, acoustic cues have been processed and filtered into separate streams, forming the basis of auditory object recognition.[3] Each IC responds primarily to sounds from the contralateral direction.
Lateral superior olive
LSO neurons are excited by inputs from one ear and inhibited by inputs from the other, and are therefore referred to as IE neurons. Excitatory inputs are received at the LSO from spherical bushy cells of the ipsilateral cochlear nucleus, which combine inputs coming from several auditory nerve fibers. Precisely timed inhibitory inputs are received at the LSO from the MNTB, relayed from globular bushy cells of the contralateral cochlear nucleus.[3]
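A toy rate model of this excitatory-inhibitory (IE) comparison, offered only as an illustration and not as a biophysical model: the firing rate is taken to be a sigmoidal function of the level difference, rising with ipsilateral (excitatory) input and falling with contralateral (inhibitory) input, so the output effectively encodes the ILD. The parameter values are arbitrary.

```python
import math

def lso_rate(ipsi_level_db, contra_level_db, gain=0.2, max_rate=200.0):
    """Toy IE neuron: firing rate as a sigmoidal function of the interaural level
    difference (ipsilateral minus contralateral level, in dB)."""
    ild = ipsi_level_db - contra_level_db
    return max_rate / (1.0 + math.exp(-gain * ild))

for ild in (-20, -10, 0, 10, 20):
    print(f"ILD {ild:+3d} dB -> {lso_rate(60 + ild, 60):.0f} spikes/s")
```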
Medial superior olive
MSO neurons are excited bilaterally, meaning that they are excited by inputs from both ears, and they are therefore referred to as EE neurons.[3] MSO neurons extract ITD information from binaural inputs and resolve small differences in the time of arrival of sounds at each ear.[3]
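A classical way to picture ITD extraction is the Jeffress coincidence-detection model, in which each detector adds a fixed internal delay to one input and responds maximally when that delay compensates the external ITD. The sketch below illustrates only that textbook idea; the cited review (Grothe et al., 2010) discusses evidence that ITD coding in the mammalian MSO differs from such a simple delay-line scheme.

```python
import numpy as np

def coincidence_counts(left_spikes, right_spikes, internal_delays, window=50e-6):
    """Toy Jeffress-style detector array: each detector delays the left spike train by a
    fixed internal delay and counts left/right spikes that coincide within +/- window s."""
    counts = []
    for d in internal_delays:
        shifted = left_spikes + d
        hits = sum(np.any(np.abs(right_spikes - t) <= window) for t in shifted)
        counts.append(hits)
    return np.array(counts)

# Toy input: periodic spikes, with the right-ear train lagging by 300 microseconds (the ITD).
itd = 300e-6
left = np.arange(0, 0.05, 0.002)              # left-ear spike times (seconds)
right = left + itd                            # right-ear spikes arrive later
delays = np.linspace(-500e-6, 500e-6, 11)     # candidate internal delays, 100 us apart
counts = coincidence_counts(left, right, delays)
print(f"best internal delay ~ {delays[np.argmax(counts)] * 1e6:.0f} microseconds "
      f"(true ITD = {itd * 1e6:.0f})")
```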
Binaural fusion abnormalities in autism
Research is ongoing into the dysfunction of binaural fusion in individuals with autism. The neurological disorder autism is associated with many symptoms of impaired brain function, including degradation of hearing, both unilateral and bilateral.[16] Individuals with autism who experience hearing loss have symptoms such as difficulty listening against background noise and impaired sound localization. Both the ability to distinguish particular speakers from background noise and the process of sound localization are key products of binaural fusion. They are particularly related to the proper function of the SOC, and there is increasing evidence that morphological abnormalities within the brainstem, namely in the SOC, of autistic individuals are a cause of these hearing difficulties.[17] The neurons of the MSO of individuals with autism display atypical anatomical features, including atypical cell shape and orientation of the cell body as well as stellate and fusiform formations.[18] Data also suggest that neurons of the LSO and MNTB of autistic individuals show distinct dysmorphology, such as irregular stellate and fusiform shapes and a smaller than normal size. Moreover, a significant depletion of SOC neurons is seen in the brainstem of autistic individuals. All of these structures play a crucial role in the proper functioning of binaural fusion, so their dysmorphology may be at least partially responsible for these auditory symptoms in autistic patients.[17]
References
- ^ a b c d Grothe, Benedikt; Koch, Ursula (2011). "Dynamics of binaural processing in the mammalian sound localization pathway--the role of GABA(B) receptors". Hearing Research. 279 (1–2): 43–50. doi:10.1016/j.heares.2011.03.013. PMID 21447375. S2CID 7196476.
- ^ Schwartz, Andrew; McDermott, Josh (2012). "Spatial cues alone produce inaccurate sound segregation: The effect of inter aural time differences". Journal of the Acoustical Society of America. 132 (1): 357–368. Bibcode:2012ASAJ..132..357S. doi:10.1121/1.4718637. PMC 3407160. PMID 22779483.
- ^ a b c d e f g h i j k l Grothe, Benedikt; Pecka, Michael; McAlpine, David (2010). "Mechanisms of sound localization in mammals". Physiol Rev. 90 (3): 983–1012. doi:10.1152/physrev.00026.2009. PMID 20664077.
- ^ a b c Tyler, R.S.; Dunn, C.C.; Witt, S.A.; Preece, J.P. (2003). "Update on bilateral cochlear implantation". Current Opinion in Otolaryngology & Head and Neck Surgery. 11 (5): 388–393. doi:10.1097/00020840-200310000-00014. PMID 14502072. S2CID 7209119.
- ^ Lim, DJ (1980). "Cochlear anatomy related to cochlear micromechanics. A review". J. Acoust. Soc. Am. 67 (5): 1686–1695. Bibcode:1980ASAJ...67.1686L. doi:10.1121/1.384295. PMID 6768784.
- ^ Moore, JK (2000). "Organization of the human superior olivary complex". Microsc Res Tech. 51 (4): 403–412. doi:10.1002/1097-0029(20001115)51:4<403::AID-JEMT8>3.0.CO;2-Q. PMID 11071722. S2CID 10151612.
- ^ Cant, Nell B; Benson, Christina G (2003). "Parallel auditory pathways: projection patterns of the different neuronal populations in the dorsal and ventral cochlear nuclei". Brain Research Bulletin. 60 (5–6): 457–474. doi:10.1016/s0361-9230(03)00050-9. PMID 12787867. S2CID 42563918.
- ^ Herrero, Maria-Trinidad; Barcia, Carlos; Navarro, Juana Mari (2002). "Functional anatomy of thalamus and basal ganglia". Child's Nerv Syst. 18 (8): 386–404. doi:10.1007/s00381-002-0604-1. PMID 12192499. S2CID 8237423.
- ^ Tewfik, Ted L (2019-10-19). "Auditory System Anatomy".
- ^ Litovsky, R.; Colburn, H.; Yost, W. (1999). "The Precedence Effect". Journal of the Acoustical Society of America. 106 (4 Pt 1): 1633–1654. Bibcode:1999ASAJ..106.1633L. doi:10.1121/1.427914. PMID 10530009.
- ^ Masterton, R.B. (1992). "Role of the central auditory system in hearing: the new direction". Trends in Neurosciences. 15 (8): 280–285. doi:10.1016/0166-2236(92)90077-l. PMID 1384196. S2CID 4024835.
- ^ a b Eldredge, D.H.; Miller, J.D. (1971). "Physiology of hearing". Annu. Rev. Physiol. 33: 281–310. doi:10.1146/annurev.ph.33.030171.001433. PMID 4951051.
- ^ Guinan, JJ; Norris, BE; Guinan, SS (1972). "Single auditory units in the superior olivary complex II: Locations of unit categories and tonotopic organization". Int J Neurosci. 4 (4): 147–166. doi:10.3109/00207457209164756.
- ^ Forsythe, Ian D. "Excitatory and inhibitory transmission in the superior olivary complex" (PDF).
- ^ Yin, Tom CT; Smith, Phil H.; Joris, Philip X. (2019). "Neural mechanisms of binaural processing in the auditory brainstem". Comprehensive Physiology. 9 (4): 1503–1575. doi:10.1002/cphy.c180036. PMID 31688966.
- ^ Rosenhall, U; Nordin, V; Sandstrom, M (1999). "Autism and hearing loss". J Autism Dev Disord. 29 (5): 349–357. doi:10.1023/A:1023022709710. PMID 10587881. S2CID 18700224.
- ^ a b Kulesza Jr., Randy J.; Lukose, Richard; Stevens, Lisa Veith (2011). "Malformation of the human superior olive in autism spectrum disorders". Brain Research. 1367: 360–371. doi:10.1016/j.brainres.2010.10.015. PMID 20946889. S2CID 39753895.
- ^ Kulesza, RJ; Mangunay, K (2008). "Morphological features of the medial superior olive in autism". Brain Res. 1200: 132–137. doi:10.1016/j.brainres.2008.01.009. PMID 18291353. S2CID 7388703.
External links
- Saberi, Kourosh; Farahbod, Haleh; Konishi, Masakazu (26 May 1998). "How do owls localize interaurally phase-ambiguous signals?". Proceedings of the National Academy of Sciences. 95 (11): 6465–6468. Bibcode:1998PNAS...95.6465S. doi:10.1073/pnas.95.11.6465. PMC 27804. PMID 9600989.
- Moncrieff, Deborah (2002-12-02). "Binaural Integration: An Overview". audiologyonline.com. Retrieved 2018-03-11.