Statistical Learning, Syllable Processing, and Speech Production in Healthy Hearing and Hearing-Impaired Preschool Children: A Mismatch Negativity Study. Objectives: The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Design: Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Results: Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes specifically in the N2 and early parts of the late discriminative negativity components, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production: the lone difference found in the speech production measures was a mild delay in regulating speech intensity. Conclusions: In addition to previously reported deficits of sound-feature discrimination, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
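For readers unfamiliar with the mismatch negativity (MMN), the core computation is a deviant-minus-standard difference wave quantified in a post-stimulus window. The following is a minimal Python sketch on synthetic epochs; the sampling rate, window, and amplitudes are invented for illustration and do not reproduce the study's pipeline.

```python
import numpy as np

# Minimal sketch of an MMN difference wave using synthetic single-trial
# epochs (trials x samples); all numbers are illustrative assumptions.
fs = 500                                  # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch: -100 to 500 ms
rng = np.random.default_rng(0)

def synth_epochs(n_trials, peak_amp):
    """Noisy epochs with a deflection centered near 200 ms."""
    erp = peak_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return erp + rng.normal(0, 2.0, size=(n_trials, t.size))

standard = synth_epochs(800, peak_amp=-1.0)   # frequent standards (e.g., /ba/)
deviant = synth_epochs(150, peak_amp=-3.0)    # rare deviants (e.g., /da/)

# MMN = average deviant response minus average standard response
mmn = deviant.mean(axis=0) - standard.mean(axis=0)

# Quantify the MMN as mean amplitude in a 100-250 ms post-onset window
win = (t >= 0.1) & (t <= 0.25)
print(f"MMN mean amplitude in window: {mmn[win].mean():.2f} (arbitrary units)")
```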
The Effect of Functional Hearing and Hearing Aid Usage on Verbal Reasoning in a Large Community-Dwelling Population. Objectives: Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. Design: Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. Results: Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p < 0.001). There was no association between the subjective measure of functional hearing and verbal reasoning. Functional hearing significantly interacted with education (p < 0.002), showing a trend for functional hearing to have a greater impact on verbal reasoning among those with a higher level of formal education. Among those with poor hearing, hearing aid usage had a significant positive, but not necessarily causal, effect on both numerical and linguistic verbal reasoning (p < 0.005). The estimated effect of hearing aid usage was less than the effect of poor functional hearing. Structural equation modeling analyses confirmed that controlling for education reduced the effect of functional hearing on verbal reasoning and showed that controlling for executive function eliminated the effect. However, when computer usage was controlled for, the eliminating effect of executive function was weakened. Conclusions: Poor functional hearing was associated with poor verbal reasoning in a 40- to 70-year-old community-dwelling population after controlling for age, gender, and education. The effect of functional hearing on verbal reasoning was significantly reduced among hearing aid users and completely overcome by good executive function skills, which may be enhanced by playing computer games. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
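The analysis pattern described here, verbal reasoning regressed on functional hearing with covariates and a hearing x education interaction, can be sketched as follows; the variable names and simulated data are hypothetical stand-ins for the UK Biobank measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of the reported multiple regression: verbal reasoning
# predicted by functional hearing, controlling for age, gender, and
# education, with a hearing x education interaction term.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "srt": rng.normal(-5, 2, n),        # speech reception threshold (higher = worse hearing)
    "age": rng.uniform(40, 70, n),
    "gender": rng.integers(0, 2, n),
    "education": rng.integers(0, 2, n),  # 0 = lower, 1 = higher formal education
})
# Simulated outcome: worse hearing lowers reasoning, more so with higher education
df["verbal_reasoning"] = (20 - 0.4 * df.srt - 0.6 * df.srt * df.education
                          - 0.05 * df.age + rng.normal(0, 3, n))

model = smf.ols("verbal_reasoning ~ srt * education + age + gender", data=df).fit()
print(model.summary().tables[1])   # inspect the srt:education interaction term
```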
Masking Period Patterns and Forward Masking for Speech-Shaped Noise: Age-Related Effects. Objective: The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to nonsimultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Design: Participants included younger (n = 11), middle-age (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions and assessed how well the temporal window fits accounted for these data. Results: The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. Conclusions: This study demonstrated an age-related increase in susceptibility to nonsimultaneous masking, supporting the hypothesis that exacerbated nonsimultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data, suggesting an association between susceptibility to forward masking and speech understanding in modulated noise. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
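A forward-masking function of the kind measured here is often summarized by fitting a smooth recovery curve; the exponential form, data points, and parameter names below are assumptions of this sketch rather than the authors' temporal-window model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of a forward-masking recovery function: masked threshold
# decaying toward baseline as the masker-signal gap grows.
gaps_ms = np.array([5, 10, 20, 40, 80, 160])          # masker-signal gaps (ms)
thresh_db = np.array([62, 55, 47, 40, 35, 33])        # hypothetical thresholds

def recovery(gap, baseline, span, tau):
    """Threshold = baseline + span * exp(-gap / tau)."""
    return baseline + span * np.exp(-gap / tau)

(baseline, span, tau), _ = curve_fit(recovery, gaps_ms, thresh_db,
                                     p0=[30, 30, 30])
print(f"baseline = {baseline:.1f} dB, span = {span:.1f} dB, tau = {tau:.1f} ms")
# Slower recovery (larger tau) or an elevated baseline in older listeners
# would show up directly in these fitted parameters.
```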
Development of Open-Set Word Recognition in Children: Speech-Shaped Noise and Two-Talker Speech Maskers. Objective: The goal of this study was to establish the developmental trajectories for children's open-set recognition of monosyllabic words in each of two maskers: two-talker speech and speech-shaped noise. Design: Listeners were 56 children (5 to 16 years) and 16 adults, all with normal hearing. Thresholds for 50% correct recognition of monosyllabic words were measured in a two-talker speech or a speech-shaped noise masker in the sound field using an open-set task. Target words were presented at a fixed level of 65 dB SPL throughout testing, while the masker level was adapted. A repeated-measures design was used to compare the performance of three age groups of children (5 to 7 years, 8 to 12 years, and 13 to 16 years) and a group of adults. The pattern of age-related changes during childhood was also compared between the two masker conditions. Results: Listeners in all four age groups performed more poorly in the two-talker speech masker than in the speech-shaped noise masker, but the developmental trajectories differed for the two masker conditions. For the speech-shaped noise masker, children's performance improved with age until about 10 years of age, with few systematic child-adult differences thereafter. In contrast, for the two-talker speech masker, children's thresholds gradually improved between 5 and 13 years of age, followed by an abrupt improvement in performance to adult-like levels. Children's thresholds in the two masker conditions were uncorrelated. Conclusions: Younger children require a more advantageous signal-to-noise ratio than older children and adults to achieve 50% correct word recognition in both masker conditions. However, children's ability to recognize words appears to take longer to mature and follows a different developmental trajectory for the two-talker speech masker than the speech-shaped noise masker. These findings highlight the importance of considering both age and masker type when evaluating children's masked speech perception abilities. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
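The adaptive masker-level procedure can be illustrated with a simple one-up, one-down staircase, which converges on the masker level yielding 50% correct; the simulated listener and step rules below are illustrative assumptions, not the study's exact tracking rules.

```python
import numpy as np

# One-up, one-down adaptive track on masker level with the target fixed at
# 65 dB SPL (as in the study). The listener is simulated with a logistic
# psychometric function; all parameters are invented.
rng = np.random.default_rng(2)
target_db = 65.0
true_snr50 = -3.0        # "true" SNR at 50% correct for the simulated child

def listener_correct(snr_db):
    p = 1.0 / (1.0 + np.exp(-(snr_db - true_snr50) / 2.0))
    return rng.random() < p

masker_db, step, reversals = 60.0, 4.0, []
prev_dir = 0
for trial in range(60):
    correct = listener_correct(target_db - masker_db)
    direction = +1 if correct else -1          # louder masker (harder) if correct
    if prev_dir and direction != prev_dir:
        reversals.append(masker_db)
        step = max(step / 2, 1.0)              # shrink step after reversals
    prev_dir = direction
    masker_db += direction * step

srt_masker = np.mean(reversals[-6:])           # average of the last reversals
print(f"Masker level at 50% correct: {srt_masker:.1f} dB SPL "
      f"(SNR = {target_db - srt_masker:.1f} dB)")
```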
Evaluation of Speech-Evoked Envelope Following Responses as an Objective Aided Outcome Measure: Effect of Stimulus Level, Bandwidth, and Amplification in Adults With Hearing Loss. Objectives: The present study evaluated a novel test paradigm based on speech-evoked envelope following responses (EFRs) as an objective aided outcome measure for individuals fitted with hearing aids. Although intended for use in infants with hearing loss, this study evaluated the paradigm in adults with hearing loss, as a precursor to further evaluation in infants. The test stimulus was a naturally spoken male-voiced token /susaʃi/, modified to enable recording of eight individual EFRs, two from each vowel for different formants and one from each fricative. In experiment I, sensitivity of the paradigm to changes in audibility due to varying stimulus level and use of hearing aids was tested. In experiment II, sensitivity of the paradigm to changes in aided audible bandwidth was evaluated. As well, experiment II aimed to test convergent validity of the EFR paradigm by comparing the effect of bandwidth on EFRs and behavioral outcome measures of hearing aid fitting. Design: Twenty-one adult hearing aid users with mild to moderately severe sensorineural hearing loss participated in the study. To evaluate the effects of level and amplification in experiment I, the stimulus was presented at 50 and 65 dB SPL through an ER-2 insert earphone in unaided conditions and through individually verified hearing aids in aided conditions. Behavioral thresholds of EFR carriers were obtained using an ER-2 insert earphone to estimate the sensation level of EFR carriers. To evaluate the effect of aided audible bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz and presented through the programmed hearing aid. EFRs recorded in the 65 dB SPL aided condition in experiment I represented the full bandwidth condition. EEG was recorded from the vertex to the nape of the neck over 300 sweeps. Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple-Stimulus Hidden Reference and Anchor paradigm were measured in the same bandwidth conditions. Results: In experiment I, an increase in stimulus level above threshold and the use of amplification resulted in a significant increase in the number of EFRs detected per condition. At positive sensation levels, an increase in level demonstrated a significant increase in response amplitude in unaided and aided conditions. At 50 and 65 dB SPL, the use of amplification led to a significant increase in response amplitude for the majority of carriers. In experiment II, the number of EFR detections and the combined response amplitude of all eight EFRs improved with an increase in bandwidth up to 4 kHz. In contrast, behavioral measures continued to improve at wider bandwidths. Further change in EFR parameters was possibly limited by the hearing aid bandwidth. Significant positive correlations were found between EFR parameters and behavioral test scores in experiment II. Conclusions: The EFR paradigm demonstrates sensitivity to changes in audibility due to a change in stimulus level, bandwidth, and use of amplification in clinically feasible test times. The paradigm may thus have potential applications as an objective aided outcome measure. Further investigations exploring stimulus-response relationships in aided conditions and validation studies in children are warranted. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
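The bandwidth manipulation (low-pass filtering the stimulus at 1, 2, and 4 kHz) can be sketched as follows; the synthetic harmonic "token" stands in for the recorded /susaʃi/ stimulus, and the filter order is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Sketch of generating the bandwidth conditions: low-pass filter a speech
# token at 1, 2, and 4 kHz. A harmonic complex at f0 = 98 Hz stands in for
# the recorded stimulus.
fs = 32000
t = np.arange(0, 2.05, 1 / fs)
f0 = 98.0
token = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 40))

def lowpass(x, cutoff_hz, fs, order=8):
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

conditions = {f"{c / 1000:g} kHz": lowpass(token, c, fs) for c in (1000, 2000, 4000)}
conditions["full bandwidth"] = token
for name, sig in conditions.items():
    print(name, f"RMS = {np.sqrt(np.mean(sig ** 2)):.3f}")
```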
Effect of Stimulus Level and Bandwidth on Speech-Evoked Envelope Following Responses in Adults With Normal Hearing. Objective: The use of auditory evoked potentials as an objective outcome measure in infants fitted with hearing aids has gained interest in recent years. This article proposes a test paradigm using speech-evoked envelope following responses (EFRs) for use as an objective aided outcome measure. The method uses a running speech-like, naturally spoken stimulus token /susaʃi/ (fundamental frequency [f0] = 98 Hz; duration 2.05 sec) to elicit EFRs from eight carriers representing low, mid, and high frequencies. Each vowel elicited two EFRs simultaneously, one from the region of formant one (F1) and one from the higher formants region (F2+). The simultaneous recording of two EFRs was enabled by lowering f0 in the region of F1 alone. Fricatives were amplitude modulated to enable recording of EFRs from high-frequency spectral regions. The present study aimed to evaluate the effect of level and bandwidth on speech-evoked EFRs in adults with normal hearing. As well, the study aimed to test convergent validity of the EFR paradigm by comparing it with changes in behavioral tasks due to bandwidth. Design: Single-channel electroencephalogram was recorded from the vertex to the nape of the neck over 300 sweeps in two polarities from 20 young adults with normal hearing. To evaluate the effects of level in experiment I, EFRs were recorded at test levels of 50 and 65 dB SPL. To evaluate the effects of bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz, presented at 65 dB SPL. The 65 dB SPL condition from experiment I represented the full bandwidth condition. EFRs were averaged across the two polarities and estimated using a Fourier analyzer. An F test was used to determine whether an EFR was detected. Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple Stimulus Hidden Reference and Anchors paradigm were measured in identical bandwidth conditions. Results: In experiment I, the increase in level resulted in a significant increase in response amplitudes for all eight carriers (mean increase of 14 to 50 nV) and in the number of detections (mean increase of 1.4 detections). In experiment II, an increase in bandwidth resulted in a significant increase in the number of EFRs detected up to the low-pass filtered 4 kHz condition and carrier-specific changes in response amplitude up to the full bandwidth condition. Scores in both behavioral tasks increased with bandwidth up to the full bandwidth condition. The number of detections and the composite amplitude (sum of all eight EFR amplitudes) correlated significantly with changes in behavioral test scores. Conclusions: Results suggest that the EFR paradigm is sensitive to changes in level and audible bandwidth. This may be a useful tool as an objective aided outcome measure considering its running speech-like stimulus, representation of spectral regions important for speech understanding, level and bandwidth sensitivity, and clinically feasible test times. This paradigm requires further validation in individuals with hearing loss, with and without hearing aids. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
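A minimal version of the detection logic described here, approximating the Fourier analyzer with an FFT bin and applying an F test against neighboring noise bins, might look like the sketch below; the bin counts, alpha level, and signal amplitudes are assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

# Estimate the EFR amplitude at the modulation frequency and apply an F test
# comparing power in that bin with the mean power of neighboring noise bins.
fs, dur, f_mod = 1000, 2.0, 98.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
eeg = 30e-9 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 200e-9, t.size)

spectrum = np.fft.rfft(eeg) / (t.size / 2)          # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)
sig_bin = np.argmin(np.abs(freqs - f_mod))
noise_bins = np.r_[sig_bin - 10:sig_bin - 2, sig_bin + 3:sig_bin + 11]

sig_power = np.abs(spectrum[sig_bin]) ** 2
noise_power = np.mean(np.abs(spectrum[noise_bins]) ** 2)
f_ratio = sig_power / noise_power
f_crit = f_dist.ppf(0.95, dfn=2, dfd=2 * noise_bins.size)
print(f"EFR amplitude ~ {np.abs(spectrum[sig_bin]) * 1e9:.0f} nV, "
      f"F = {f_ratio:.1f}, detected = {f_ratio > f_crit}")
```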
Nonmuscle Myosin Heavy Chain IIA Mutation Predicts Severity and Progression of Sensorineural Hearing Loss in Patients With MYH9-Related Disease. Objectives: MYH9-related disease (MYH9-RD) is an autosomal dominant disorder deriving from mutations in MYH9, the gene for the nonmuscle myosin heavy chain (NMMHC)-IIA. MYH9-RD has a complex phenotype including congenital features, such as thrombocytopenia, and noncongenital manifestations, namely sensorineural hearing loss (SNHL), nephropathy, cataract, and liver abnormalities. The disease is caused by a limited number of mutations affecting different regions of the NMMHC-IIA protein. SNHL is the most frequent noncongenital manifestation of MYH9-RD. However, only scarce and anecdotal information is currently available about the clinical and audiometric features of SNHL of MYH9-RD subjects. The objective of this study was to investigate the severity and propensity for progression of SNHL in a large series of MYH9-RD patients in relation to the causative NMMHC-IIA mutations. Design: This study included the consecutive patients diagnosed with MYH9-RD between July 2007 and March 2012 at four participating institutions. A total of 115 audiograms were analyzed from 63 patients belonging to 45 unrelated families with different NMMHC-IIA mutations. Cross-sectional analyses of audiograms were performed. Regression analysis was performed, and age-related typical audiograms (ARTAs) were derived to characterize the type of SNHL associated with different mutations. Results: Severity of SNHL appeared to depend on the specific NMMHC-IIA mutation. Patients carrying substitutions at the residue R702 located in the short functional SH1 helix had the most severe degree of SNHL, whereas patients with the p.E1841K substitution in the coiled-coil region or mutations at the nonhelical tailpiece presented a mild degree of SNHL even at advanced age. The authors also disclosed the effects of different amino acid changes at the same residue: for instance, individuals with the p.R702C mutation had more severe SNHL than those with the p.R702H mutation, and the p.R1165L substitution was associated with a higher degree of hearing loss than the p.R1165C. In general, mild SNHL was associated with a fairly flat audiogram configuration, whereas severe SNHL correlated with downsloping configurations. ARTA plots showed that the most progressive type of SNHL was associated with the p.R702C, the p.R702H, and the p.R1165L substitutions, whereas the p.R1165C mutation correlated with a milder, nonprogressive type of SNHL than the p.R1165L. ARTA for the p.E1841K mutation demonstrated a mild degree of SNHL with only mild progression, whereas the ARTA for the mutations at the nonhelical tailpiece did not show any substantial progression. Conclusions: These data provide useful tools to predict the progression and the expected degree of severity of SNHL in individual MYH9-RD patients, which is especially relevant in young patients. Consequences in clinical practice are important not only for appropriate patient counseling but also for development of customized, genotype-driven clinical management. The authors recently reported that cochlear implantation has a good outcome in MYH9-RD patients; thus, stricter follow-up and earlier intervention are recommended for patients with unfavorable genotypes. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
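Deriving age-related typical audiograms (ARTAs) from cross-sectional data amounts to regressing threshold on age at each frequency and evaluating the fit at representative ages; the sketch below uses fabricated data solely to show the computation.

```python
import numpy as np

# For one (hypothetical) mutation group: fit threshold vs. age at each
# audiometric frequency, then evaluate the regression at decade ages to
# build the ARTA. Data are simulated for illustration only.
rng = np.random.default_rng(4)
freqs = [250, 500, 1000, 2000, 4000, 8000]
ages = rng.uniform(10, 70, 40)

# Simulated thresholds: worse at high frequencies, progressing ~0.8 dB/yr
thresholds = np.array([
    10 + 0.8 * ages + 5 * i + rng.normal(0, 8, ages.size)
    for i, _ in enumerate(freqs)
])                                           # shape: (n_freqs, n_subjects)

arta_ages = np.arange(20, 71, 10)
for i, freq in enumerate(freqs):
    slope, intercept = np.polyfit(ages, thresholds[i], 1)   # linear regression
    typical = intercept + slope * arta_ages
    print(f"{freq:5d} Hz: " + " ".join(f"{v:5.1f}" for v in typical))
```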
Background Noise Degrades Central Auditory Processing in Toddlers. Objectives: Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. Design: Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. Results: The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. Conclusions: The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments, like day-care centers, to ensure a propitious setting for linguistic development. In addition, the evaluation and improvement of daily listening conditions should be an ordinary part of clinical intervention for children with linguistic problems. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
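The obligatory-response comparison (e.g., P1 amplitude in silent versus noisy conditions) reduces to peak extraction in a latency window followed by a paired test; the windows, amplitudes, and noise levels below are illustrative, and N2 would be handled analogously with a negative-going window.

```python
import numpy as np
from scipy.stats import ttest_rel

# Extract P1 (positive peak, ~50-150 ms assumed) per child in silent and
# noisy conditions, then compare with a paired t test. Data are synthetic.
fs = 500
t = np.arange(-0.1, 0.6, 1 / fs)
rng = np.random.default_rng(5)

def peak_amp(erp, lo, hi, polarity):
    win = (t >= lo) & (t <= hi)
    return erp[win].max() if polarity > 0 else erp[win].min()

n_children = 18
p1 = {"silent": [], "noise": []}
for cond, gain in (("silent", 1.0), ("noise", 0.5)):     # noise shrinks P1 here
    for _ in range(n_children):
        erp = gain * 3 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
        erp += rng.normal(0, 0.5, t.size)
        p1[cond].append(peak_amp(erp, 0.05, 0.15, polarity=+1))

stat, p = ttest_rel(p1["silent"], p1["noise"])
print(f"P1 silent vs noise: t = {stat:.2f}, p = {p:.4f}")
```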
Clinical Outcomes for Adult Cochlear Implant Recipients Experiencing Loss of Usable Acoustic Hearing in the Implanted Ear: Erratum. | Changes to the Editorial Board and New Guidelines for Reporting Population-Based Research: Erratum. No abstract available |
Vestibular, Visual Acuity, and Balance Outcomes in Children With Cochlear Implants: A Preliminary Report. Objectives: There is a high incidence of vestibular loss in children with cochlear implants (CCI). However, the relationship between vestibular loss and various outcomes is unknown in children. The objectives of this study are to (1) determine if age-related changes in peripheral vestibular tests occur; (2) quantify peripheral vestibular function in children with normal hearing and CCI; (3) determine if amount of vestibular loss predicts visual acuity and balance performance. Design: Eleven CCI and 12 children with normal hearing completed the following tests of vestibular function: ocular and cervical vestibular-evoked myogenic potential to assess utricle and saccule function and the video head impulse test to assess semicircular canal function. The relationship between amount of vestibular loss and the following balance and visual acuity outcomes was assessed: dynamic gait index, single-leg stance, the sensory organization test, and tests of visual acuity, including dynamic visual acuity and the gaze stabilization test. Results: (1) There were no significant age-related changes in peripheral vestibular testing with the exception of the n23 cervical vestibular-evoked myogenic potential latency, which was moderately correlated with age. (2) CCI had significantly higher rates of vestibular loss for each test of canal and otolith function. (3) Amount of vestibular loss predicted performance on single-leg stance, the dynamic gait index, some conditions of the sensory organization test, and the dynamic visual acuity test. Age was also a contributing factor for predicting the performance of almost all outcomes. Conclusions: Preliminarily, children with vestibular loss do not recover naturally to levels of their healthy peers, particularly with activities that utilize vestibular input; they have poorer visual acuity and balance function. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved. |
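One of the canal measures used here, the video head impulse test (vHIT), is commonly summarized as a vestibulo-ocular reflex gain (eye velocity relative to head velocity); the traces and gain values below are synthetic and purely illustrative.

```python
import numpy as np

# Illustrative vHIT gain: ratio of eye velocity to head velocity over the
# impulse; a gain well below 1 suggests canal hypofunction.
fs = 250
t = np.arange(0, 0.4, 1 / fs)
head_vel = 150 * np.exp(-((t - 0.1) ** 2) / (2 * 0.03 ** 2))   # deg/s impulse
rng = np.random.default_rng(6)

for label, gain_true in (("normal hearing", 0.95), ("CCI with loss", 0.45)):
    eye_vel = -gain_true * head_vel + rng.normal(0, 3, t.size)
    # Gain from the ratio of areas under the velocity curves
    vor_gain = -np.trapz(eye_vel, t) / np.trapz(head_vel, t)
    print(f"{label}: vHIT gain = {vor_gain:.2f}")
```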
Hearing Impairment in Relation to Severity of Diabetes in a Veteran Cohort. Objective: Type 2 diabetes is epidemic among veterans, approaching three times the prevalence of the general population. Diabetes leads to devastating complications of vascular and neurologic malfunction and appears to impair auditory function. Hearing loss prevention is a major health-related initiative in the Veterans Health Administration. Thus, this research sought to identify, and quantify with effect sizes, differences in hearing, speech recognition, and hearing-related quality of life (QOL) measures associated with diabetes and to determine whether well-controlled diabetes diminishes the differences. Design: The authors examined selected cross-sectional data from the baseline (initial) visit of a longitudinal study of Veterans with and without type 2 diabetes designed to assess the possible differences in age-related trajectories of peripheral and central auditory function between the two groups. In addition, the diabetes group was divided into subgroups on the basis of medical diagnosis of diabetes and current glycated hemoglobin (HbA1c) as a metric of disease severity and control. Outcome measures were pure-tone thresholds, word recognition using sentences presented in noise or time-compressed, and an inventory assessing the self-perceived impact of hearing loss on QOL. Data were analyzed from 130 Veterans ages 24 to 73 (mean 48) years with well-controlled (controlled) diabetes, poorly controlled (uncontrolled) diabetes, prediabetes, and no diabetes. Regression was used to identify any group differences in age, noise exposure history, and other sociodemographic factors, and multiple regression was used to model each outcome variable, adjusting for potential confounders. Results were evaluated in relation to diabetes duration, use of insulin (yes, no), and presence of selected diabetes complications (neuropathy and retinopathy). Results: Compared with nondiabetics, Veterans with uncontrolled diabetes had significant differences in hearing at speech frequencies, including poorer hearing by 3 to 3.5 dB for thresholds at 250 Hz and in a clinical pure-tone average, respectively. Compared with nondiabetic controls, individuals with uncontrolled diabetes also significantly more frequently reported that their hearing adversely impacted QOL on one of the three subscales (ability to adapt). Despite this, although they also had slightly poorer mean scores on both word recognition tasks performed, these differences did not reach statistical significance and all subjects performed well on these tasks. Compared with Veterans with controlled diabetes, those with uncontrolled disease tended to have had diabetes longer, be insulin-dependent, and have a greater prevalence of diabetic retinopathy. Results are generally comparable with the literature with regard to the magnitude of threshold differences and the prevalence of hearing impairment but extend prior work by providing threshold difference and hearing loss prevalence effect sizes by category of diabetes control and by including additional functional measures. Conclusions: In a cohort of Veterans with type 2 diabetes and relatively good hearing, significant effects of disease severity were found for hearing thresholds at a subset of frequencies and for one of the three QOL subscales. Significant differences were concentrated among those with poorly controlled diabetes based on current HbA1c. Results provide evidence that the observed hearing dysfunction in type 2 diabetes might be prevented or delayed through tight metabolic control. Findings need to be corroborated using longitudinal assessments. |
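The adjusted group comparison described here can be sketched as a multiple regression of a threshold outcome on diabetes group plus confounders; the group labels mirror the abstract, but the data and covariate names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Model a hearing threshold as a function of diabetes group (none /
# prediabetes / controlled / uncontrolled) while adjusting for age and a
# noise-exposure covariate. Data are simulated.
rng = np.random.default_rng(7)
groups = ["none", "prediabetes", "controlled", "uncontrolled"]
n = 130
df = pd.DataFrame({
    "group": rng.choice(groups, n),
    "age": rng.uniform(24, 73, n),
    "noise_history": rng.integers(0, 2, n),
})
shift = {"none": 0.0, "prediabetes": 1.0, "controlled": 1.5, "uncontrolled": 3.5}
df["pta_db"] = (5 + 0.3 * df.age + 2 * df.noise_history
                + df.group.map(shift) + rng.normal(0, 5, n))

fit = smf.ols('pta_db ~ C(group, Treatment("none")) + age + noise_history',
              data=df).fit()
print(fit.params.round(2))   # group coefficients = adjusted threshold differences
```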
Peripheral Hearing and Cognition: Evidence From the Staying Keen in Later Life (SKILL) Study. Objectives: Research has increasingly suggested a consistent relationship between peripheral hearing and selected measures of cognition in older adults. However, other studies yield conflicting findings. The primary purpose of the present study was to further elucidate the relationship between peripheral hearing and three domains of cognition and one measure of global cognitive status. It was hypothesized that peripheral hearing loss would be significantly associated with poorer performance across measures of cognition, even after adjusting for documented risk factors. No study to date has examined the relationship between peripheral hearing and such an extensive array of cognitive measures. Design: Eight hundred ninety-four older adult participants from the Staying Keen in Later Life study cohort were eligible, agreed to participate, and completed the baseline evaluation. Inclusion criteria were minimal to include a sample of older adults with a wide range of sensory and cognitive abilities. Multiple linear regression analyses were conducted to evaluate the extent to which peripheral hearing predicted performance on a global measure of cognitive status, as well as multiple cognitive measures in the domains of speed of processing (Digit Symbol Substitution and Copy, Trail Making Test Part A, Letter and Pattern Comparison, and Useful Field of View), executive function (Trail Making Test Part B and Stroop Color-Word Interference Task), and memory (Digit Span, Spatial Span, and Hopkins Verbal Learning Test). Results: Peripheral hearing, measured as the three-frequency pure-tone average (PTA) in the better ear, accounted for a significant, but minimal, amount of the variance in measures of speed of processing, executive function, and memory, as well as global cognitive status. Alternative measures of hearing (i.e., three-frequency PTAs in the right and left ears and a bilateral, six-frequency PTA [three frequencies per ear]) yielded similar findings across measures of cognition and did not alter the study outcomes in any meaningful way. Conclusions: Consistent with literature suggesting a significant relationship between peripheral hearing and cognition, and in agreement with our hypothesis, peripheral hearing was significantly related to 10 of 11 measures of cognition that assessed processing speed, executive function, or memory, as well as global cognitive status. Although evidence, including the present results, suggests a relationship between peripheral hearing and cognition, little is known about the underlying mechanisms. Examination of these mechanisms is critically needed to direct appropriate treatment. |
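The hearing predictor used in this study, a three-frequency better-ear pure-tone average (PTA), and the bilateral six-frequency variant are simple averages; the frequencies (0.5/1/2 kHz) and thresholds below are assumptions for illustration.

```python
import numpy as np

# Three-frequency PTA per ear, better-ear PTA, and the bilateral
# six-frequency PTA (three frequencies per ear). Thresholds are made up.
pta_freqs = [500, 1000, 2000]                     # Hz (assumed frequencies)
right = {250: 15, 500: 20, 1000: 25, 2000: 30, 4000: 45}
left = {250: 20, 500: 30, 1000: 35, 2000: 40, 4000: 55}

pta_right = np.mean([right[f] for f in pta_freqs])
pta_left = np.mean([left[f] for f in pta_freqs])
better_ear_pta = min(pta_right, pta_left)         # better ear = lower PTA

bilateral_pta = np.mean([right[f] for f in pta_freqs] +
                        [left[f] for f in pta_freqs])
print(f"better-ear PTA = {better_ear_pta:.1f} dB HL, "
      f"bilateral PTA = {bilateral_pta:.1f} dB HL")
```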
A Retrospective Multicenter Study Comparing Speech Perception Outcomes for Bilateral Implantation and Bimodal Rehabilitation. Objectives: To compare speech perception outcomes between bilateral implantation (cochlear implants [CIs]) and bimodal rehabilitation (one CI on one side plus one hearing aid [HA] on the other side) and to explore the clinical factors that may cause asymmetric performances in speech intelligibility between the two ears in the case of bilateral implantation. Design: Retrospective data from 2247 patients implanted since 2003 in 15 international centers were collected. Intelligibility scores, measured in quiet and in noise, were converted into percentile ranks to remove differences between centers. The influence of the listening mode among three independent groups, one CI alone (n = 1572), bimodal listening (CI/HA, n = 589), and bilateral CIs (CI/CI, n = 86), was compared in an analysis taking into account the influence of other factors such as duration of profound hearing loss, age, etiology, and duration of CI experience. No within-subject comparison (i.e., monitoring outcome modifications in CI/HA subjects becoming CI/CI) was possible from this dataset. Further analyses were conducted on the CI/CI subgroup to investigate a number of factors, such as implantation side, duration of hearing loss, amount of residual hearing, and use of HAs, that may explain asymmetric performances of this subgroup. Results: Intelligibility ranked scores in quiet and in noise were significantly greater with both CI/CI and CI/HA than with a CI alone, and improvement with CI/CI (+11% and +16% in quiet and in noise, respectively) was significantly better than with CI/HA (+6% and +9% in quiet and in noise, respectively). From the CI/HA group, only subjects with ranked preoperative aided speech scores >60% performed as well as CI/CI participants. Furthermore, CI/CI subjects displayed significantly lower preoperative aided speech scores on average compared with those displayed by CI/HA subjects. Routine clinical data available from the present database did not explain the asymmetrical results of bilateral implantation. Conclusions: This retrospective study, based on basic speech audiometry (no lateralization cues), indicates that, on average, a second CI is likely to provide slightly better postoperative speech outcome than an additional HA for people with very low preoperative performance. These results may be taken into consideration to refine surgical indications for CIs. |
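The percentile-rank conversion used to remove between-center differences can be sketched with a within-center rank transform; the centers, scores, and column names below are invented.

```python
import numpy as np
import pandas as pd

# Intelligibility scores ranked within each center so that between-center
# differences in test material and scoring drop out. Data are simulated.
rng = np.random.default_rng(8)
df = pd.DataFrame({
    "center": rng.integers(1, 16, 300),            # 15 centers
    "score": rng.uniform(0, 100, 300),             # raw % correct in quiet
})
# Percentile rank within center: 0-100, ties averaged
df["pct_rank"] = df.groupby("center")["score"].rank(pct=True) * 100
print(df.head())
```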
Consensus on Hearing Aid Candidature and Fitting for Mild Hearing Loss, With and Without Tinnitus: Delphi Review. Objectives: In many countries including the United Kingdom, hearing aids are a first line of audiologic intervention for many people with tinnitus and aidable hearing loss. Nevertheless, there is a lack of high-quality evidence that they are of benefit for tinnitus, and wide variability in their use in clinical practice, especially for people with mild hearing loss. The aim of this study was to identify a consensus among a sample of UK clinicians on the criteria for hearing aid candidature and clinical practice in fitting hearing aids specifically for mild hearing loss with and without tinnitus. This will allow professionals to establish clinical benchmarks and to gauge their practice with that used elsewhere. Design: The Delphi technique, a systematic methodology that seeks consensus amongst experts through consultation using a series of iterative questionnaires, was used. A three-round Delphi survey explored clinical consensus among a panel of 29 UK hearing professionals. The authors measured panel agreement on 115 statements covering: (i) general factors affecting the decision to fit hearing aids, (ii) protocol-driven factors affecting the decision to fit hearing aids, (iii) general practice, and (iv) clinical observations. Consensus was defined a priori as ≥70% agreement across the panel. Results: Consensus was reached for 58 of the 115 statements. The broad areas of consensus were around factors important to consider when fitting hearing aids; hearing aid technology/features offered; and important clinical assessment to verify hearing aid fit (agreement of 70% or more). For patients with mild hearing loss, the greatest priority was given by clinicians to patient-centered criteria for fitting hearing aids: hearing difficulties, motivation to wear hearing aids, and impact of hearing loss on quality of life (chosen as top five by at least 64% of panelists). Objective measures were given a lower priority: degree of hearing loss and shape of the audiogram (chosen as top five by less than half of panelists). Areas where consensus was not reached were related to the use of questionnaires to predict and verify hearing aid benefit for both hearing and tinnitus; audiometric criteria for fitting hearing aids; and safety of using loud sounds when verifying hearing aid fitting when the patient has tinnitus (agreement of <70%). Conclusions: The authors identified practices that are considered important when recommending or fitting hearing aids for a patient with tinnitus. More importantly perhaps, they identified practical issues where there are divided opinions. Their findings inform the design of clinical trials and open up debate on the potential impact of practice differences on patient outcomes. |
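The Delphi consensus rule reduces to a simple tally against the a priori 70% criterion; the statements and vote counts below are invented for illustration.

```python
# Toy version of the consensus rule: a statement reaches consensus when
# >= 70% of the 29 panelists agree. Votes are invented.
votes = {                      # statement -> number of panelists agreeing
    "fit aids based on hearing difficulties": 26,
    "fit aids based on audiometric criteria alone": 14,
    "use loud sounds to verify fit with tinnitus": 9,
}
PANEL, THRESHOLD = 29, 0.70
for statement, n_agree in votes.items():
    pct = n_agree / PANEL
    verdict = "consensus" if pct >= THRESHOLD else "no consensus"
    print(f"{statement}: {pct:.0%} -> {verdict}")
```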
Peripheral and Central Contributions to Cortical Responses in Cochlear Implant Users. Objectives: The primary goal of this study was to describe relationships between peripheral and central electrophysiologic measures of auditory processing within individual cochlear implant (CI) users. The distinctiveness of neural excitation patterns resulting from the stimulation of different electrodes, referred to as spatial selectivity, was evaluated. The hypothesis was that if central representations of spatial interactions differed across participants semi-independently of peripheral input, then the within-subject relationships between peripheral and central electrophysiologic measures of spatial selectivity would reflect those differences. Cross-subject differences attributable to processing central to the auditory nerve may help explain why peripheral electrophysiologic measures of spatial selectivity have not been found to correlate with speech perception. Design: Eleven adults participated in this and a companion study. All were peri- or post-lingually deafened with more than 1 year of CI experience. Peripheral spatial selectivity was evaluated at 13 cochlear locations using 13 electrodes as probes to elicit electrically evoked compound action potentials (ECAPs). Masker electrodes were varied across the array for each probe electrode to derive channel-interaction functions. The same 13 electrodes were used to evaluate spatial selectivity represented at a cortical level. Electrode pairs were stimulated sequentially to elicit the auditory change complex (ACC), an obligatory cortical potential suggestive of discrimination. For each participant, the relationship between ECAP channel-interaction functions (quantified as channel-separation indices) and ACC N1-P2 amplitudes was modeled using the saturating exponential function y = a(1 − e^(−bx)). Both the a and b coefficients were varied using a least-squares approach to optimize the fits. Results: Electrophysiologic measures of spatial selectivity assessed at peripheral (ECAP) and central (ACC) levels varied across participants. The results indicate that differences in ACC amplitudes observed across participants for the same stimulus conditions were not solely the result of differences in peripheral excitation patterns. This finding supports the view that processing at multiple points along the auditory neural pathway from the periphery to the cortex may vary across individuals with different etiologies and auditory experiences. Conclusions: The distinctiveness of neural excitation resulting from electrical stimulation varies across CI recipients, and this variability was observed in both peripheral and cortical electrophysiologic measures. The ACC amplitude differences observed across participants were partially independent of differences in peripheral neural spatial selectivity. These findings are clinically relevant because they imply that there may be limits (1) to the predictive ability of peripheral measures and (2) to the extent to which improving the selectivity of electrical stimulation via programming options (e.g., current focusing/steering) will result in more specific central neural excitation patterns or will improve speech perception. |
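The within-subject model named here, y = a(1 − e^(−bx)), can be fitted by least squares as follows; the channel-separation indices and ACC amplitudes are fabricated, and only the functional form comes from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit ACC N1-P2 amplitude as a saturating exponential function of the ECAP
# channel-separation index: y = a * (1 - exp(-b * x)). Data are invented.
def saturating(x, a, b):
    return a * (1.0 - np.exp(-b * x))

csi = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])     # channel-separation index
acc = np.array([0.8, 1.6, 2.7, 4.1, 4.8, 5.1, 5.2])      # ACC amplitude (uV)

(a, b), _ = curve_fit(saturating, csi, acc, p0=[5.0, 3.0])
print(f"a = {a:.2f} uV (saturation level), b = {b:.2f} (growth rate)")
```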
Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users. Objectives: The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design: Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford–Kowal–Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results: All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. Conclusions: The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. |
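The mixed-model analysis (repeated measures across experimental MAPs within each participant) can be sketched with a random-intercept regression; the variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Speech score regressed on an electrophysiologic predictor with a random
# intercept per subject, reflecting multiple MAPs per participant.
rng = np.random.default_rng(9)
rows = []
for subj in range(11):
    subj_offset = rng.normal(0, 5)                 # subject-specific baseline
    for map_id in (1, 2, 3):
        ecap_csi = rng.uniform(0.2, 1.0) * map_id / 3
        score = 40 + 30 * ecap_csi + subj_offset + rng.normal(0, 3)
        rows.append({"subject": subj, "map": map_id,
                     "ecap_csi": ecap_csi, "speech_score": score})
df = pd.DataFrame(rows)

mixed = smf.mixedlm("speech_score ~ ecap_csi", df, groups=df["subject"]).fit()
print(mixed.summary())
```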
Electrode Selection and Speech Understanding in Patients With Auditory Brainstem Implants. Objectives: The objective of this study was to evaluate whether speech understanding in auditory brainstem implant (ABI) users who have a tumor pathology could be improved by the selection of a subset of electrodes that were appropriately pitch ranked and distinguishable. It was hypothesized that disordered pitch or spectral percepts and channel interactions may contribute significantly to the poor outcomes in most ABI users. Design: A single-subject design was used with five participants. Pitch ranking information for all electrodes in the patients' clinic maps was obtained using a pitch ranking task and previous pitch ranking information from clinic sessions. A multidimensional scaling task was used to evaluate the stimulus space evoked by stimuli on the same set of electrodes. From this information, a subset of four to six electrodes was chosen and a new map was created, using just this subset, that the subjects took home for 1 month's experience. Closed-set consonant and vowel perception and sentences in quiet were tested at three sessions: with the clinic map before the test map was given, after 1 month with the test map, and after an additional 2 weeks with their clinic map. Results: The results of the pitch ranking and multidimensional scaling procedures confirmed that the ABI users did not have a well-ordered set of percepts related to electrode position, thus supporting the proposal that difficulty in processing of spectral information may contribute to poor speech understanding. However, none of the subjects benefited from a map that reduced the stimulation electrode set to a smaller number of electrodes that were well ordered in place pitch. Conclusions: Although poor spectral processing may contribute to poor understanding in ABI users, it is not likely to be the sole contributor to poor outcomes. |
Between-Frequency and Between-Ear Gap Detections and Their Relation to Perception of Stop Consonants. Objectives: The objective of this study was to examine the hypothesis that between-channel gap detection, which includes between-frequency and between-ear gap detection, and perception of stop consonants, which is mediated by the length of voice-onset time (VOT), share common mechanisms, namely relative-timing operation in monitoring separate perceptual channels. Design: The authors measured gap detection thresholds and identification functions of /ba/ and /pa/ along VOT in 49 native young adult Japanese listeners. There were three gap detection tasks. In the between-frequency task, the leading and trailing markers differed in terms of center frequency (Fc). The leading marker was a broadband noise of 10 to 20,000 Hz. The trailing marker was a 0.5-octave band-passed noise of 1000-, 2000-, 4000-, or 8000-Hz Fc. In the between-ear task, the two markers were spectrally identical but presented to separate ears. In the within-frequency task, the two spectrally identical markers were presented to the same ear. The /ba/-/pa/ identification functions were obtained in a task in which the listeners were presented with synthesized speech stimuli of varying VOTs from 10 to 46 msec and asked to identify them as /ba/ or /pa/. Results: The between-ear gap thresholds were significantly positively correlated with the between-frequency gap thresholds (except those obtained with the trailing marker of 4000-Hz Fc). The between-ear gap thresholds were not significantly correlated with the within-frequency gap thresholds, which were significantly correlated with all the between-frequency gap thresholds. The VOT boundaries and slopes of /ba/-/pa/ identification functions were not significantly correlated with any of these gap thresholds. Conclusions: There was a close relation between the between-ear and between-frequency gap detection, supporting the view that these two types of gap detection share common mechanisms of between-channel gap detection. However, there was no evidence for a relation between the perception of stop consonants and the between-frequency/ear gap detection in native Japanese speakers. |
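The /ba/-/pa/ identification functions whose boundaries and slopes are analyzed in the preceding study are typically summarized by a logistic fit along VOT; the response proportions and starting values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a logistic to the proportion of /pa/ responses along voice-onset time
# (VOT) and read off the category boundary (50% point) and slope.
vot_ms = np.array([10, 16, 22, 28, 34, 40, 46])
p_pa = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_pa, p0=[28.0, 0.3])
print(f"VOT boundary = {boundary:.1f} ms, slope = {slope:.2f} /ms")
```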
Air and Bone Conduction Click and Tone-Burst Auditory Brainstem Thresholds Using Kalman Adaptive Processing in Nonsedated Normal-Hearing Infants. Objectives: To study normative thresholds and latencies for click and tone-burst auditory brainstem response (TB-ABR) for air and bone conduction in normal infants and those discharged from neonatal intensive care units, who passed newborn hearing screening and follow-up distortion product otoacoustic emission. An evoked potential system (Vivosonic Integrity) that incorporates Bluetooth electrical isolation and Kalman-weighted adaptive processing to improve signal-to-noise ratios was employed for this study. Results were compared with other published data. Design: One hundred forty-five infants who passed two-stage hearing screening with transient-evoked otoacoustic emission or automated auditory brainstem response were assessed with clicks at 70 dB nHL and threshold TB-ABR. Tone bursts at frequencies between 500 and 4000 Hz were used for air and bone conduction auditory brainstem response testing using a specified staircase threshold search to establish threshold levels and wave V peak latencies. Results: Median air conduction hearing thresholds using TB-ABR ranged from 0 to 20 dB nHL, depending on stimulus frequency. Median bone conduction thresholds were 10 dB nHL across all frequencies, and median air-bone gaps were 0 dB across all frequencies. There was no significant threshold difference between left and right ears and no significant relationship between thresholds and hearing loss risk factors, ethnicity, or gender. Older age was related to decreased latency for air conduction. Compared with previous studies, mean air conduction thresholds were found at slightly lower (better) levels, while bone conduction levels were better at 2000 Hz and higher at 500 Hz. Latency values were longer at 500 Hz than in previous studies using other instrumentation. Sleep state did not affect air or bone conduction thresholds. Conclusions: This study demonstrated slightly better wave V thresholds for air conduction than previous infant studies. The differences found in the present study, while statistically significant, were within the test step size of 10 dB. This suggests that threshold responses obtained using the Kalman weighting software were within the range of other published studies using traditional signal averaging, given step-size limitations. Thresholds were not adversely affected by variable sleep states. |
Delayed Stream Segregation in Older Adults: More Than Just Informational Masking. Objective: To determine whether the time course for the buildup of auditory stream segregation differs between younger and older adults. Design: Word recognition thresholds were determined for the first and last keywords in semantically anomalous but syntactically correct sentences (e.g., "A rose could paint a fish") when the target sentences were masked by speech-spectrum noise, 3-band vocoded speech, 16-band vocoded speech, intact and colocated speech, and intact and spatially separated speech. A significant reduction in thresholds from the first to the last keyword was interpreted as indicating that stream segregation improved with time. Results: The buildup of stream segregation is slowed for both age groups when the masker is intact, colocated speech. Conclusions: Older adults are more disadvantaged; for them, stream segregation is also slowed even when a speech masker is spatially separated, conveys little meaning (3-band vocoding), or when vocal fine-structure cues are impoverished but envelope cues remain available (16-band vocoding). |