Blog archive

Alexandros G. Sfakianakis
Otorhinolaryngologist
Anapafseos 5
Agios Nikolaos, Crete 72100
2841026182
6032607174

Sunday, August 30, 2015

Ear and Hearing

  • The Effect of Residual Acoustic Hearing and Adaptation to Uncertainty on Speech Perception in Cochlear Implant Users: Evidence from Eye-Tracking.

    McMurray, Bob; Farris-Trimble, Ashley; Seedorff, Michael; Rigler, Hannah, 2015-08-27 08:00:00 AM

    Objectives: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. Design: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. Results: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. 
Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. Conclusion: Residual acoustic hearing did not improve voicing categorization, suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather, listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • The Importance of Acoustic Temporal Fine Structure Cues in Different Spectral Regions for Mandarin Sentence Recognition.

    Li, Bei; Wang, Hui; Yang, Guang; Hou, Limin; Su, Kaiming; Feng, Yanmei; Yin, Shankai, 2015-08-27 08:00:00 AM

    Objectives: To study the relative contribution of acoustic temporal fine structure (TFS) cues in low-, mid-, and high-frequency regions to Mandarin sentence recognition. Design: Twenty-one subjects with normal hearing were involved in a study of Mandarin sentence recognition using acoustic TFS. The acoustic TFS information was extracted from 10 bands, each three equivalent rectangular bandwidths wide, spanning 80 to 8858 Hz, using the Hilbert transform, and was assigned to low-, mid-, and high-frequency regions. Percent-correct recognition scores were obtained with acoustic TFS information presented using one, two, or three frequency regions. The relative weights of the three frequency regions were calculated using the least-squares approach. Results: Mean percent-correct scores for sentence recognition using acoustic TFS were nearly perfect for stimuli with all three frequency regions together. Recognition was approximately 50 to 60% correct with only the low- or mid-frequency region but decreased to approximately 5% correct with only the high-frequency region of acoustic TFS. The mean weights of the low-, mid-, and high-frequency regions were 0.39, 0.48, and 0.13, respectively, and the difference between each pair of frequency regions was statistically significant. Conclusion: The acoustic TFS cues in low- and mid-frequency regions convey greater information for Mandarin sentence recognition, whereas those in the high-frequency region have little effect. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
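The extraction procedure described above (band-splitting, then keeping only the Hilbert fine structure while discarding the envelope) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the filter order, band edges, and synthetic test signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def extract_tfs(signal, fs, band):
    """Band-limit the signal, then keep only the temporal fine structure:
    the cosine of the instantaneous phase from the Hilbert transform
    (the slowly varying envelope is discarded)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    band_sig = sosfiltfilt(sos, signal)
    analytic = hilbert(band_sig)
    return np.cos(np.angle(analytic))  # unit-envelope fine structure

# Demo on a synthetic amplitude-modulated tone (hypothetical stimulus)
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
envelope = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
tfs = extract_tfs(envelope * carrier, fs, (800, 1200))
print(tfs.shape)
```

Because the envelope is replaced by a constant, any recognition supported by this signal must rest on fine-structure cues alone, which is the logic of the study design.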
  • Predicting Speech-in-Noise Recognition from Performance on the Trail Making Test: Results from a Large-Scale Internet Study.

    Ellis, Rachel J.; Molander, Peter; Rönnberg, Jerker; Lyxell, Björn; Andersson, Gerhard; Lunner, Thomas, 2015-08-27 08:00:00 AM

    Objective: The aim of the study was to investigate the utility of an internet-based version of the trail making test (TMT) to predict performance on a speech-in-noise perception task. Design: Data were taken from a sample of 1509 listeners aged between 18 and 91 years. Participants completed computerized versions of the TMT and an adaptive speech-in-noise recognition test. All testing was conducted via the internet. Results: The results indicate that better performance on both the simple and complex subtests of the TMT is associated with better speech-in-noise recognition scores. Thirty-eight percent of the participants had scores on the speech-in-noise test that indicated the presence of a hearing loss. Conclusions: The findings suggest that the TMT may be a useful tool in the assessment, and possibly the treatment, of speech-recognition difficulties. The results indicate that the relation between speech-in-noise recognition and TMT performance relates both to the capacity of the TMT to index processing speed and to the more complex cognitive abilities also implicated in TMT performance. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Finite Verb Morphology in the Spontaneous Speech of Dutch-Speaking Children with Hearing Loss.

    Hammer, Annemiek; Coene, Martine, 2015-08-27 08:00:00 AM

    Objective: In this study, the acquisition of Dutch finite verb morphology is investigated in children with cochlear implants (CIs) with profound hearing loss and in children with hearing aids (HAs) with moderate to severe hearing loss. Comparing these two groups of children increases our insight into how hearing experience and audibility affect the acquisition of morphosyntax. Design: Spontaneous speech samples were analyzed of 48 children with CIs and 29 children with HAs, ages 4 to 7 years. These language samples were analyzed by means of standardized language analysis involving mean length of utterance, the number of finite verbs produced, and target-like subject-verb agreement. The outcomes were interpreted relative to expectations based on the performance of typically developing peers with normal hearing. Outcomes of all measures were correlated with hearing level in the group of HA users and age at implantation in the group of CI users. Results: For both groups, the number of finite verbs that were produced in a 50-utterance sample was on par with mean length of utterance and at the lower bound of the normal distribution. No significant differences were found between children with CIs and HAs on any of the measures under investigation. Yet, both groups produced more subject-verb agreement errors than expected for typically developing hearing peers. No significant correlation was found between the hearing level of the children and the relevant measures of verb morphology, both with respect to the overall number of verbs that were used and the number of errors that children made. Within the group of CI users, the outcomes were significantly correlated with age at implantation. Conclusion: When producing finite verb morphology, profoundly deaf children wearing CIs perform similarly to their peers with moderate-to-severe hearing loss wearing HAs. 
Hearing loss negatively affects the acquisition of subject-verb agreement regardless of the hearing device (CI or HA) that the child is wearing. The results are of importance for speech-language pathologists who are working with children with a hearing impairment, indicating the need to focus on subject-verb agreement in speech-language therapy. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Word Recognition Variability With Cochlear Implants: "Perceptual Attention" Versus "Auditory Sensitivity".

    Moberly, Aaron C.; Lowenstein, Joanna H.; Nittrouer, Susan, 2015-08-27 08:00:00 AM

    Objectives: Cochlear implantation does not automatically result in robust spoken language understanding for postlingually deafened adults. Enormous outcome variability exists, related to the complexity of understanding spoken language through cochlear implants (CIs), which deliver degraded speech representations. This investigation examined variability in word recognition as explained by "perceptual attention" and "auditory sensitivity" to acoustic cues underlying speech perception. Design: Thirty postlingually deafened adults with CIs and 20 age-matched controls with normal hearing (NH) were tested. Participants underwent assessment of word recognition in quiet and perceptual attention (cue-weighting strategies) based on labeling tasks for two phonemic contrasts: (1) "cop"-"cob," based on a duration cue (easily accessible through CIs) or a dynamic spectral cue (less accessible through CIs), and (2) "sa"-"sha," based on static or dynamic spectral cues (both potentially poorly accessible through CIs). Participants were also assessed for auditory sensitivity to the speech cues underlying those labeling decisions. Results: Word recognition varied widely among CI users (20 to 96%), but it was generally poorer than for NH participants. Implant users and NH controls showed similar perceptual attention and auditory sensitivity to the duration cue, while CI users showed poorer attention and sensitivity to all spectral cues. Both attention and sensitivity to spectral cues predicted variability in word recognition. Conclusions: For CI users, both perceptual attention and auditory sensitivity are important in word recognition. Efforts should be made to better represent spectral cues through implants, while also facilitating attention to these cues through auditory training. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
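Cue-weighting strategies of the kind probed by such labeling tasks are often modeled with logistic regression: the relative magnitude of the fitted coefficient for each acoustic cue indexes the perceptual attention paid to it. A minimal sketch with a simulated listener (the cue values, true coefficients, and listener model below are hypothetical, not the authors' data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical listener labeling "cop" vs. "cob": duration cue weighted
# heavily, dynamic spectral cue weakly
n = 400
duration = rng.uniform(-1, 1, n)   # standardized vowel-duration cue
spectral = rng.uniform(-1, 1, n)   # standardized dynamic spectral cue
logit = 4.0 * duration + 1.0 * spectral
labels = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit labeling responses; normalized |coefficients| give cue weights
X = np.column_stack([duration, spectral])
model = LogisticRegression().fit(X, labels)
w = np.abs(model.coef_[0]) / np.abs(model.coef_[0]).sum()
print(f"duration weight {w[0]:.2f}, spectral weight {w[1]:.2f}")
```

A listener whose spectral weight collapses toward zero while the duration weight dominates would show exactly the "poorer attention to spectral cues" pattern reported for CI users.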
  • Subjective Ratings of Fatigue and Vigor in Adults with Hearing Loss Are Driven by Perceived Hearing Difficulties Not Degree of Hearing Loss.

    Hornsby, Benjamin W. Y.; Kipp, Aaron M., 2015-08-27 08:00:00 AM

    Objectives: Anecdotal reports and qualitative research suggest that fatigue is a common, but often overlooked, accompaniment of hearing loss that negatively affects quality of life. However, systematic research examining the relationship between hearing loss and fatigue is limited. In this study, the authors examined relationships between hearing loss and various domains of fatigue and vigor using standardized and validated measures. Relationships between subjective ratings of multidimensional fatigue and vigor and the social and emotional consequences of hearing loss were also explored. Design: Subjective ratings of fatigue and vigor were assessed using the profile of mood states and the multidimensional fatigue symptom inventory-short form. To assess the social and emotional impact of hearing loss, participants also completed, depending on their age, the hearing handicap inventory for the elderly or adults. Responses were obtained from 149 adults (mean age = 66.1 years, range 22 to 94 years), who had scheduled a hearing test and/or a hearing aid selection at the Vanderbilt Bill Wilkerson Center Audiology clinic. These data were used to explore relationships between audiometric and demographic (i.e., age and gender) factors, fatigue, and hearing handicap scores. Results: Compared with normative data, adults seeking help for their hearing difficulties in this study reported significantly less vigor and more fatigue. Reports of severe vigor/fatigue problems (ratings exceeding normative means by +/-1.5 standard deviations) were also increased in the study sample compared with that of normative data. Regression analyses, with adjustments for age and gender, revealed that the subjective percepts of fatigue, regardless of domain, and vigor were not strongly associated with degree of hearing loss. 
However, similar analyses controlling for age, gender, and degree of hearing loss showed a strong association between measures of fatigue and vigor (multidimensional fatigue symptom inventory-short form scores) and the social and emotional consequences of hearing loss (hearing handicap inventory for the elderly/adults scores). Conclusions: Adults seeking help for hearing difficulties are more likely to experience severe fatigue and vigor problems; surprisingly, this increased risk appears unrelated to degree of hearing loss. However, the negative psychosocial consequences of hearing loss are strongly associated with subjective ratings of fatigue, across all domains, and vigor. Additional research is needed to define the pathogenesis of hearing loss-related fatigue and to identify factors that may modulate and mediate (e.g., hearing aid or cochlear implant use) its impact. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Cortical Auditory Evoked Potentials Recorded From Nucleus Hybrid Cochlear Implant Users.

    Brown, Carolyn J.; Jeon, Eun Kyung; Chiou, Li-Kuei; Kirby, Benjamin; Karsten, Sue A.; Turner, Christopher W.; Abbas, Paul J., 2015-08-27 08:00:00 AM

    Objectives: Nucleus Hybrid Cochlear Implant (CI) users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEP) were used to assess differences in outcome. The goals of this study were to determine whether (1) the evoked potential measures could predict which programming strategy resulted in better outcome on the speech perception task or was preferred by the listener, and (2) CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. Design: CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental processor programs (MAPs) that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and the range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise. Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 "onset" response and the auditory "change" response (also known as the auditory change complex [ACC]). Correlations between the evoked potential measures and performance on the speech perception tasks are reported. Results: Differences in performance using the three programming strategies were not large. 
Peak-to-peak amplitude of the ACC was not found to be sensitive enough to accurately predict the programming strategy that resulted in the best performance on either measure of speech perception. All 10 Hybrid CI users had residual low-frequency acoustic hearing. For all 10 subjects, allowing them to use both the acoustic and electrical signals provided by the implant improved performance on the consonant recognition task. For most subjects, it also resulted in slightly larger cortical change responses. However, the impact that listening mode had on the cortical change responses was small, and again, the correlation between the evoked potential and speech perception results was not significant. Conclusions: CAEPs can be successfully measured from Hybrid CI users. The responses that are recorded are similar to those recorded from normal-hearing listeners. The goal of this study was to see if CAEPs might play a role either in identifying the experimental program that resulted in best performance on a consonant recognition task or in documenting benefit from the use of the electrical signal provided by the Hybrid CI. At least for the stimuli and specific methods used in this study, no such predictive relationship was found. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Fast, Continuous Audiogram Estimation Using Machine Learning.

    Song, Xinyu D.; Wallace, Brittany M.; Gardner, Jacob R.; Ledbetter, Noah M.; Weinberger, Kilian Q.; Barbour, Dennis L., 2015-08-27 08:00:00 AM

    Objectives: Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study was to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). Results: The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 +/- 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 +/- 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. 
Conclusions: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
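The general idea behind the machine learning audiogram — treating each tone detection as a binary sample of a latent psychometric surface over (frequency, intensity) and reading the continuous threshold off the 50% contour — can be sketched with a Gaussian process classifier. This uses scikit-learn as a stand-in; the authors' actual kernel, active-sampling strategy, and implementation differ, and the simulated listener below is hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical true threshold (dB HL), a smooth function of log-frequency
def true_threshold(log_f):
    return 20 + 15 * (log_f - np.log2(250)) / np.log2(8000 / 250)

# Simulate tone presentations: detected (1) above threshold, else 0
freqs = rng.uniform(np.log2(250), np.log2(8000), 120)
levels = rng.uniform(0, 60, 120)
detected = (levels > true_threshold(freqs)).astype(int)

# Fit a GP classifier over the (log-frequency, level) plane
X = np.column_stack([freqs, levels])
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 10.0]))
gpc.fit(X, detected)

# Continuous threshold estimate: level where P(detect) crosses 0.5
test_f = np.log2(1000)
grid = np.column_stack([np.full(200, test_f), np.linspace(0, 60, 200)])
p = gpc.predict_proba(grid)[:, 1]
est = grid[np.argmin(np.abs(p - 0.5)), 1]
print(round(float(est), 1))
```

Because every presentation informs the whole surface rather than a single frequency, far fewer samples are needed than in one-frequency-at-a-time staircases, which matches the efficiency result reported above.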
  • Statistical Learning, Syllable Processing, and Speech Production in Healthy Hearing and Hearing-Impaired Preschool Children: A Mismatch Negativity Study.

    Studer-Eichenberger, Esther; Studer-Eichenberger, Felix; Koenig, Thomas, 2015-08-27 08:00:00 AM

    Objectives: The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Design: Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Results: Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes in the N2 and early parts of the late discriminative negativity components specifically, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production studies was a mild delay in regulating speech intensity. Conclusions: In addition to previously reported deficits of sound-feature discriminations, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits. Copyright (C) 2015 Wolters Kluwer Health, Inc. 
All rights reserved.
  • The Effect of Functional Hearing and Hearing Aid Usage on Verbal Reasoning in a Large Community-Dwelling Population.

    Keidser, Gitte; Rudner, Mary; Seeto, Mark; Hygge, Staffan; Rönnberg, Jerker, 2015-08-27 08:00:00 AM

    Objectives: Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. Design: Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. Results: Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p < 0.001). There was no association between the subjective measure of functional hearing and verbal reasoning. Functional hearing significantly interacted with education (p < 0.002), showing a trend for functional hearing to have a greater impact on verbal reasoning among those with a higher level of formal education. Among those with poor hearing, hearing aid usage had a significant positive, but not necessarily causal, effect on both numerical and linguistic verbal reasoning (p < 0.005). The estimated effect of hearing aid usage was less than the effect of poor functional hearing. 
Structural equation modeling analyses confirmed that controlling for education reduced the effect of functional hearing on verbal reasoning and showed that controlling for executive function eliminated the effect. However, when computer usage was controlled for, the eliminating effect of executive function was weakened. Conclusions: Poor functional hearing was associated with poor verbal reasoning in a 40- to 70-year-old community-dwelling population after controlling for age, gender, and education. The effect of functional hearing on verbal reasoning was significantly reduced among hearing aid users and completely overcome by good executive function skills, which may be enhanced by playing computer games. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Masking Period Patterns and Forward Masking for Speech-Shaped Noise: Age-Related Effects.

    Grose, John H.; Menezes, Denise C.; Porter, Heather L.; Griz, Silvana, 2015-08-27 08:00:00 AM

    Objective: The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to nonsimultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Design: Participants included younger (n = 11), middle-age (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions and assessed how well the temporal window fits accounted for these data. Results: The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. Conclusions: This study demonstrated an age-related increase in susceptibility to nonsimultaneous masking, supporting the hypothesis that exacerbated nonsimultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data, suggesting an association between susceptibility to forward masking and speech understanding in modulated noise. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Development of Open-Set Word Recognition in Children: Speech-Shaped Noise and Two-Talker Speech Maskers.

    Corbin, Nicole E.; Bonino, Angela Yarnell; Buss, Emily; Leibold, Lori J., 2015-08-27 08:00:00 AM

    Objective: The goal of this study was to establish the developmental trajectories for children's open-set recognition of monosyllabic words in each of two maskers: two-talker speech and speech-shaped noise. Design: Listeners were 56 children (5 to 16 years) and 16 adults, all with normal hearing. Thresholds for 50% correct recognition of monosyllabic words were measured in a two-talker speech or a speech-shaped noise masker in the sound field using an open-set task. Target words were presented at a fixed level of 65 dB SPL throughout testing, while the masker level was adapted. A repeated-measures design was used to compare the performance of three age groups of children (5 to 7 years, 8 to 12 years, and 13 to 16 years) and a group of adults. The pattern of age-related changes during childhood was also compared between the two masker conditions. Results: Listeners in all four age groups performed more poorly in the two-talker speech than the speech-shaped noise masker, but the developmental trajectories differed for the two masker conditions. For the speech-shaped noise masker, children's performance improved with age until about 10 years of age, with little systematic child-adult differences thereafter. In contrast, for the two-talker speech masker, children's thresholds gradually improved between 5 and 13 years of age, followed by an abrupt improvement in performance to adult-like levels. Children's thresholds in the two masker conditions were uncorrelated. Conclusions: Younger children require a more advantageous signal-to-noise ratio than older children and adults to achieve 50% correct word recognition in both masker conditions. However, children's ability to recognize words appears to take longer to mature and follows a different developmental trajectory for the two-talker speech masker than the speech-shaped noise masker. 
These findings highlight the importance of considering both age and masker type when evaluating children's masked speech perception abilities. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
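The adaptive masker-level procedure described above (target fixed, masker level raised after a correct response and lowered after an error, converging on 50% correct) can be sketched as a one-down/one-up staircase. The simulated listener, step size, and trial count below are illustrative assumptions, not the study's parameters.

```python
import math
import random

def staircase_masker_level(p_correct_at, start=50.0, step=2.0, trials=60, seed=1):
    """One-down/one-up adaptive track: the masker is made louder after a
    correct response and quieter after an error, so the track converges
    on the 50%-correct masker level. `p_correct_at` simulates the listener."""
    rng = random.Random(seed)
    level, reversals, last_dir = start, [], 0
    for _ in range(trials):
        correct = rng.random() < p_correct_at(level)
        direction = 1 if correct else -1   # louder masker = harder task
        if last_dir and direction != last_dir:
            reversals.append(level)        # record level at each reversal
        last_dir = direction
        level += direction * step
    return sum(reversals[-6:]) / len(reversals[-6:])  # mean of last reversals

# Hypothetical listener: psychometric function with its 50% point at 60 dB
listener = lambda lvl: 1 / (1 + math.exp((lvl - 60) / 3))
threshold = staircase_masker_level(listener)
print(round(threshold, 1))
```

Averaging the final reversals, rather than the last level visited, is the standard way to reduce the track's trial-to-trial noise.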
  • Evaluation of Speech-Evoked Envelope Following Responses as an Objective Aided Outcome Measure: Effect of Stimulus Level, Bandwidth, and Amplification in Adults With Hearing Loss.

    Easwar, Vijayalakshmi; Purcell, David W.; Aiken, Steven J.; Parsa, Vijay; Scollie, Susan D., 2015-08-27 08:00:00 AM

    Objectives: The present study evaluated a novel test paradigm based on speech-evoked envelope following responses (EFRs) as an objective aided outcome measure for individuals fitted with hearing aids. Although intended for use in infants with hearing loss, this study evaluated the paradigm in adults with hearing loss, as a precursor to further evaluation in infants. The test stimulus was a naturally male-spoken token /susaʃi/, modified to enable recording of eight individual EFRs, two from each vowel for different formants and one from each fricative. In experiment I, sensitivity of the paradigm to changes in audibility due to varying stimulus level and use of hearing aids was tested. In experiment II, sensitivity of the paradigm to changes in aided audible bandwidth was evaluated. As well, experiment II aimed to test convergent validity of the EFR paradigm by comparing the effect of bandwidth on EFRs and behavioral outcome measures of hearing aid fitting. Design: Twenty-one adult hearing aid users with mild to moderately severe sensorineural hearing loss participated in the study. To evaluate the effects of level and amplification in experiment I, the stimulus was presented at 50 and 65 dB SPL through an ER-2 insert earphone in unaided conditions and through individually verified hearing aids in aided conditions. Behavioral thresholds of EFR carriers were obtained using an ER-2 insert earphone to estimate sensation level of EFR carriers. To evaluate the effect of aided audible bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz and presented through the programmed hearing aid. EFRs recorded in the 65 dB SPL aided condition in experiment I represented the full bandwidth condition. EEG was recorded from the vertex to the nape of the neck over 300 sweeps. 
Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple-Stimulus Hidden Reference and Anchor paradigm were measured in the same bandwidth conditions. Results: In experiment I, an increase in stimulus level above threshold and the use of amplification resulted in a significant increase in the number of EFRs detected per condition. At positive sensation levels, an increase in level demonstrated a significant increase in response amplitude in unaided and aided conditions. At 50 and 65 dB SPL, the use of amplification led to a significant increase in response amplitude for the majority of carriers. In experiment II, the number of EFR detections and the combined response amplitude of all eight EFRs improved with an increase in bandwidth up to 4 kHz. In contrast, behavioral measures continued to improve at wider bandwidths. Further change in EFR parameters was possibly limited by the hearing aid bandwidth. Significant positive correlations were found between EFR parameters and behavioral test scores in experiment II. Conclusions: The EFR paradigm demonstrates sensitivity to changes in audibility due to a change in stimulus level, bandwidth, and use of amplification in clinically feasible test times. The paradigm may thus have potential applications as an objective aided outcome measure. Further investigations exploring stimulus-response relationships in aided conditions and validation studies in children are warranted. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Effect of Stimulus Level and Bandwidth on Speech-Evoked Envelope Following Responses in Adults With Normal Hearing.

    Easwar, Vijayalakshmi; Purcell, David W.; Aiken, Steven J.; Parsa, Vijay; Scollie, Susan D., 2015-08-27 08:00:00 AM

    Objective: The use of auditory evoked potentials as an objective outcome measure in infants fitted with hearing aids has gained interest in recent years. This article proposes a test paradigm using speech-evoked envelope following responses (EFRs) for use as an objective aided outcome measure. The method uses a running speech-like, naturally spoken stimulus token /susaʃi/ (fundamental frequency [f0] = 98 Hz; duration 2.05 sec) to elicit EFRs from eight carriers representing low, mid, and high frequencies. Each vowel elicited two EFRs simultaneously, one from the region of the first formant (F1) and one from the region of the higher formants (F2+). The simultaneous recording of two EFRs was enabled by lowering f0 in the region of F1 alone. Fricatives were amplitude modulated to enable recording of EFRs from high-frequency spectral regions. The present study aimed to evaluate the effect of level and bandwidth on speech-evoked EFRs in adults with normal hearing. In addition, the study aimed to test the convergent validity of the EFR paradigm by comparing it with changes in behavioral tasks due to bandwidth. Design: Single-channel electroencephalogram was recorded from the vertex to the nape of the neck over 300 sweeps in two polarities from 20 young adults with normal hearing. To evaluate the effects of level in experiment I, EFRs were recorded at test levels of 50 and 65 dB SPL. To evaluate the effects of bandwidth in experiment II, EFRs were elicited by /susaʃi/ low-pass filtered at 1, 2, and 4 kHz, presented at 65 dB SPL. The 65 dB SPL condition from experiment I represented the full bandwidth condition. EFRs were averaged across the two polarities and estimated using a Fourier analyzer. An F test was used to determine whether an EFR was detected. 
Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple-Stimulus Hidden Reference and Anchor paradigm were measured in identical bandwidth conditions. Results: In experiment I, the increase in level resulted in a significant increase in response amplitudes for all eight carriers (mean increase of 14 to 50 nV) and in the number of detections (mean increase of 1.4 detections). In experiment II, an increase in bandwidth resulted in a significant increase in the number of EFRs detected up to the 4 kHz low-pass condition, and carrier-specific changes in response amplitude up to the full bandwidth condition. Scores in both behavioral tasks increased with bandwidth up to the full bandwidth condition. The number of detections and the composite amplitude (sum of all eight EFR amplitudes) correlated significantly with changes in behavioral test scores. Conclusions: Results suggest that the EFR paradigm is sensitive to changes in level and audible bandwidth. It may be a useful tool as an objective aided outcome measure considering its running speech-like stimulus, representation of spectral regions important for speech understanding, level and bandwidth sensitivity, and clinically feasible test times. This paradigm requires further validation in individuals with hearing loss, with and without hearing aids. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
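For readers unfamiliar with how such responses are verified objectively, the F-test detection mentioned in the abstract can be sketched as a comparison between the spectral power at the stimulus modulation frequency and the power in neighboring noise bins. This is a generic illustration, not the authors' implementation; the number of noise bins and the detection criterion are assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

def efr_f_test(eeg, fs, mod_freq, n_noise_bins=60):
    """Detect a steady-state response at mod_freq by comparing the power
    in the signal bin against the mean power of flanking noise bins
    (a standard bin-comparison F test; parameters are illustrative)."""
    spectrum = np.fft.rfft(eeg) / len(eeg)           # one-sided, amplitude/2 per bin
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    sig_bin = int(np.argmin(np.abs(freqs - mod_freq)))
    lo = max(sig_bin - n_noise_bins // 2, 1)          # skip the DC bin
    hi = sig_bin + n_noise_bins // 2 + 1
    noise_bins = [b for b in range(lo, hi) if b != sig_bin]
    sig_power = np.abs(spectrum[sig_bin]) ** 2
    noise_power = np.mean(np.abs(spectrum[noise_bins]) ** 2)
    f_ratio = sig_power / noise_power
    # Each complex bin contributes 2 degrees of freedom.
    p_value = f_dist.sf(f_ratio, 2, 2 * len(noise_bins))
    return np.abs(spectrum[sig_bin]), f_ratio, p_value
```

A response is declared "detected" when the p value falls below the chosen criterion (e.g., 0.05); the EFR amplitude estimate is the magnitude of the signal bin.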
  • Nonmuscle Myosin Heavy Chain IIA Mutation Predicts Severity and Progression of Sensorineural Hearing Loss in Patients With MYH9-Related Disease.

    Verver, Eva J. J.; Topsakal, Vedat; Kunst, Henricus P. M.; Huygen, Patrick L. M.; Heller, Paula G.; Pujol-Moix, Nuria; Savoia, Anna; Benazzo, Marco; Fierro, Tiziana; Grolman, Wilko; Gresele, Paolo; Pecci, Alessandro, 2015-08-27 08:00:00 AM

    Objectives: MYH9-related disease (MYH9-RD) is an autosomal dominant disorder deriving from mutations in MYH9, the gene for the nonmuscle myosin heavy chain (NMMHC)-IIA. MYH9-RD has a complex phenotype including congenital features, such as thrombocytopenia, and noncongenital manifestations, namely sensorineural hearing loss (SNHL), nephropathy, cataract, and liver abnormalities. The disease is caused by a limited number of mutations affecting different regions of the NMMHC-IIA protein. SNHL is the most frequent noncongenital manifestation of MYH9-RD. However, only scarce and anecdotal information is currently available about the clinical and audiometric features of SNHL in MYH9-RD subjects. The objective of this study was to investigate the severity and propensity for progression of SNHL in a large series of MYH9-RD patients in relation to the causative NMMHC-IIA mutations. Design: This study included consecutive patients diagnosed with MYH9-RD between July 2007 and March 2012 at four participating institutions. A total of 115 audiograms were analyzed from 63 patients belonging to 45 unrelated families with different NMMHC-IIA mutations. Cross-sectional analyses of audiograms and regression analyses were performed, and age-related typical audiograms (ARTAs) were derived to characterize the type of SNHL associated with different mutations. Results: Severity of SNHL appeared to depend on the specific NMMHC-IIA mutation. Patients carrying substitutions at the residue R702 located in the short functional SH1 helix had the most severe degree of SNHL, whereas patients with the p.E1841K substitution in the coiled-coil region or mutations at the nonhelical tailpiece presented a mild degree of SNHL even at advanced age. 
The authors also disclosed the effects of different amino acid changes at the same residue: for instance, individuals with the p.R702C mutation had more severe SNHL than those with the p.R702H mutation, and the p.R1165L substitution was associated with a higher degree of hearing loss than the p.R1165C. In general, mild SNHL was associated with a fairly flat audiogram configuration, whereas severe SNHL correlated with downsloping configurations. ARTA plots showed that the most progressive type of SNHL was associated with the p.R702C, the p.R702H, and the p.R1165L substitutions, whereas the p.R1165C mutation correlated with a milder, nonprogressive type of SNHL than the p.R1165L. ARTA for the p.E1841K mutation demonstrated a mild degree of SNHL with only mild progression, whereas the ARTA for the mutations at the nonhelical tailpiece did not show any substantial progression. Conclusions: These data provide useful tools to predict the progression and the expected degree of severity of SNHL in individual MYH9-RD patients, which is especially relevant in young patients. Consequences in clinical practice are important not only for appropriate patient counseling but also for development of customized, genotype-driven clinical management. The authors recently reported that cochlear implantation has a good outcome in MYH9-RD patients; thus, stricter follow-up and earlier intervention are recommended for patients with unfavorable genotypes. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Background Noise Degrades Central Auditory Processing in Toddlers.

    Niemitalo-Haapola, Elina; Haapala, Sini; Jansson-Verkasalo, Eira; Kujala, Teija, 2015-08-27 08:00:00 AM

    Objectives: Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. Design: Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. Results: The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. Conclusions: The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. 
Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments, like day-care centers, to ensure a propitious setting for linguistic development. In addition, the evaluation and improvement of daily listening conditions should be an ordinary part of clinical intervention of children with linguistic problems. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
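The mismatch negativity (MMN) analyzed in the study above is conventionally computed as a difference wave: the averaged response to deviant stimuli minus the averaged response to standard stimuli. A minimal sketch of that textbook computation follows; the array shapes are illustrative, and details such as filtering and baseline correction (which vary by study) are omitted.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """Compute the MMN difference wave from two (n_trials, n_samples)
    arrays of single-trial EEG epochs: average each condition across
    trials, then subtract standard from deviant."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
```

The MMN appears as a negative deflection in this difference wave; "absent" or "diminished" responses, as reported for the noisy condition, correspond to the deflection failing to differ reliably from zero or being reduced in amplitude.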
  • Vestibular, Visual Acuity, and Balance Outcomes in Children With Cochlear Implants: A Preliminary Report.

    Janky, Kristen L.; Givens, Diane, 2015-08-27 08:00:00 AM

    Objectives: There is a high incidence of vestibular loss in children with cochlear implants (CCI). However, the relationship between vestibular loss and various outcomes is unknown in children. The objectives of this study are to (1) determine if age-related changes in peripheral vestibular tests occur; (2) quantify peripheral vestibular function in children with normal hearing and CCI; (3) determine if amount of vestibular loss predicts visual acuity and balance performance. Design: Eleven CCI and 12 children with normal hearing completed the following tests of vestibular function: ocular and cervical vestibular-evoked myogenic potential to assess utricle and saccule function and the video head impulse test to assess semicircular canal function. The relationship between amount of vestibular loss and the following balance and visual acuity outcomes was assessed: dynamic gait index, single-leg stance, the sensory organization test, and tests of visual acuity, including dynamic visual acuity and the gaze stabilization test. Results: (1) There were no significant age-related changes in peripheral vestibular testing with the exception of the n23 cervical vestibular-evoked myogenic potential latency, which was moderately correlated with age. (2) CCI had significantly higher rates of vestibular loss for each test of canal and otolith function. (3) Amount of vestibular loss predicted performance on single-leg stance, the dynamic gait index, some conditions of the sensory organization test, and the dynamic visual acuity test. Age was also a contributing factor for predicting the performance of almost all outcomes. Conclusions: Preliminarily, children with vestibular loss do not recover naturally to levels of their healthy peers, particularly with activities that utilize vestibular input; they have poorer visual acuity and balance function. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Effects of Aging on the Encoding of Dynamic and Static Components of Speech.

    Presacco, Alessandro; Jenkins, Kimberly; Lieberman, Rachel; Anderson, Samira, 2015-08-27 08:00:00 AM

    Objectives: The authors investigated aging effects on the envelope of the frequency following response to dynamic and static components of speech. Older adults frequently experience problems understanding speech, despite having clinically normal hearing. Improving audibility with hearing aids provides variable benefit, as amplification cannot restore the temporal precision degraded by aging. Previous studies have demonstrated age-related delays in subcortical timing specific to the dynamic, transition region of the stimulus. However, it is unknown whether this delay is mainly due to a failure to encode rapid changes in the formant transition because of central temporal processing deficits or as a result of cochlear damage that reduces audibility for the high-frequency components of the speech syllable. To investigate the nature of this delay, the authors compared subcortical responses in younger and older adults with normal hearing to the speech syllables /da/ and /a/, hypothesizing that the delays in peak timing observed in older adults are mainly caused by temporal processing deficits in the central auditory system. Design: The frequency following response was recorded to the speech syllables /da/ and /a/ from 15 younger and 15 older adults with normal hearing, normal IQ, and no history of neurological disorders. Both speech syllables were presented binaurally with alternating polarities at 80 dB SPL at a rate of 4.3 Hz through electromagnetically shielded insert earphones. A vertical montage of four Ag-AgCl electrodes (Cz, active, forehead ground, and earlobe references) was used. Results: The responses of older adults were significantly delayed with respect to younger adults for the transition and onset regions of the /da/ syllable and for the onset of the /a/ syllable. 
However, in contrast with the younger adults who had earlier latencies for /da/ than for /a/ (as was expected given the high-frequency energy in the /da/ stop consonant burst), latencies in older adults were not significantly different between the responses to /da/ and /a/. An unexpected finding was noted in the amplitude and phase dissimilarities between the two groups in the later part of the steady-state region, rather than in the transition region. This amplitude reduction may indicate prolonged neural recovery or response decay associated with a loss of auditory nerve fibers. Conclusions: These results suggest that older adults' peak timing delays may arise from decreased synchronization to the onset of the stimulus due to reduced audibility, though the possible role of impaired central auditory processing cannot be ruled out. Conversely, a deterioration in temporal processing mechanisms in the auditory nerve, brainstem, or midbrain may be a factor in the sudden loss of synchronization in the later part of the steady-state response in older adults. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Self-Reported Hearing Difficulties Among Adults With Normal Audiograms: The Beaver Dam Offspring Study.

    Tremblay, Kelly L.; Pinto, Alex; Fischer, Mary E.; Klein, Barbara E. K.; Klein, Ronald; Levy, Sarah; Tweed, Ted S.; Cruickshanks, Karen J., 2015-08-27 08:00:00 AM

    Objective: Clinicians encounter patients who report experiencing hearing difficulty (HD) even when audiometric thresholds fall within normal limits. When there is no evidence of audiometric hearing loss, it generates debate over possible biomedical and psychosocial etiologies. It is possible that self-reported HDs relate to variables within and/or outside the scope of audiology. The purpose of this study is to identify how often, on a population basis, people with normal audiometric thresholds self-report HD and to identify factors associated with such HDs. Design: This was a cross-sectional investigation of participants in the Beaver Dam Offspring Study. HD was defined as a self-reported HD on a four-item scale despite having pure-tone audiometric thresholds within normal limits (<20 dB HL at 0.5, 1, 2, 3, 4, 6, and 8 kHz bilaterally, at each frequency). Distortion product otoacoustic emissions and word-recognition performance in quiet and with competing messages were also analyzed. In addition to hearing assessments, relevant factors such as sociodemographic and lifestyle factors, environmental exposures, medical history, health-related quality of life, and symptoms of neurological disorders were also examined as possible risk factors. The Center for Epidemiological Studies-Depression scale was used to probe symptoms associated with depression, and the Medical Outcomes Study Short-Form 36 mental score was used to quantify psychological stress and social and role disability due to emotional problems. The Visual Function Questionnaire-25 and contrast sensitivity test were used to query vision difficulties. Results: Of the 2783 participants, 686 participants had normal audiometric thresholds. An additional grouping variable was created based on the available scores of HD (four self-report questions), which reduced the total dataset to n = 682 (age range, 21-67 years). The percentage of individuals with normal audiometric thresholds who self-reported HD was 12.0% (82 of 682). 
The prevalence in the entire cohort was therefore 2.9% (82 of 2783). Performance on audiological tests (distortion product otoacoustic emissions and word-recognition tests) did not differ between the group self-reporting HD and the group reporting no HD. A multivariable model controlling for age and sex identified the following risk factors for HD: lower incomes (odds ratio [OR] for incomes of $50,000 or more = 0.55, 95% confidence interval [CI] = 0.30-1.00), noise exposure through loud hobbies (OR = 1.48, 95% CI = 1.15-1.90), or firearms (OR = 2.07, 95% CI = 1.04-4.16). People reporting HD were more likely to have seen a doctor for hearing loss (OR = 12.93, 95% CI = 3.86-43.33) and to report symptoms associated with depression (Center for Epidemiological Studies-Depression [OR = 2.39, 95% CI = 1.03-5.54]), vision difficulties (Visual Function Questionnaire-25 [OR = 0.93, 95% CI = 0.89-0.97]), and neuropathy (e.g., numbness, tingling, and loss of sensation [OR = 1.98, 95% CI = 1.14-3.44]). Conclusions: The authors used a population approach to identify the prevalence and risk factors associated with self-reported HD among people who perform within normal limits on common clinical tests of auditory function. The percentage of individuals with normal audiometric thresholds who self-reported HD was 12.0%, resulting in an overall prevalence of 2.9%. Auditory and nonauditory risk factors were identified, suggesting that future efforts to assess, prevent, and manage these types of HDs might benefit from information outside the traditional scope of audiology. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Electrophysiology and Perception of Speech in Noise in Older Listeners: Effects of Hearing Impairment and Age.

    Billings, Curtis J.; Penman, Tina M.; McMillan, Garnett P.; Ellis, Emily M., 2015-08-27 08:00:00 AM

    Objectives: Speech perception in background noise is difficult for many individuals, and there is considerable performance variability across listeners. The combination of physiological and behavioral measures may help to understand sources of this variability for individuals and groups and prove useful clinically with hard-to-test populations. The purpose of this study was threefold: (1) determine the effect of signal-to-noise ratio (SNR) and signal level on cortical auditory evoked potentials (CAEPs) and sentence-level perception in older normal-hearing (ONH) and older hearing-impaired (OHI) individuals, (2) determine the effects of hearing impairment and age on CAEPs and perception, and (3) explore how well CAEPs correlate with and predict speech perception in noise. Design: Two groups of older participants (15 ONH and 15 OHI) were tested using speech-in-noise stimuli to measure CAEPs and sentence-level perception of speech. The syllable /ba/, used to evoke CAEPs, and sentences were presented in speech-spectrum background noise at four signal levels (50, 60, 70, and 80 dB SPL) and up to seven SNRs (-10, -5, 0, 5, 15, 25, and 35 dB). These data were compared between groups to reveal the hearing impairment effect and then combined with previously published data for 15 young normal-hearing individuals to determine the aging effect. Results: Robust effects of SNR were found for perception and CAEPs. Small but significant effects of signal level were found for perception, primarily at poor SNRs and high signal levels, and in some limited instances for CAEPs. Significant effects of age were seen for both CAEPs and perception, while hearing impairment effects were only found with perception measures. CAEPs correlate well with perception and can predict SNR50s to within 2 dB for ONH. However, prediction error is much larger for OHI and varies widely (from 6 to 12 dB) depending on the model that was used for prediction. 
Conclusions: When background noise is present, SNR dominates both perception-in-noise testing and cortical electrophysiological testing, with smaller and sometimes significant contributions from signal level. A mismatch between behavioral and electrophysiological results was found (hearing impairment effects were primarily only seen for behavioral data), illustrating the possible contributions of higher order cognitive processes on behavior. It is interesting that the hearing impairment effect size was more than five times larger than the aging effect size for CAEPs and perception. Sentence-level perception can be predicted well in normal-hearing individuals; however, additional research is needed to explore improved prediction methods for older individuals with hearing impairment. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
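The SNR50 that the abstract above reports predicting is the signal-to-noise ratio yielding 50% correct sentence recognition. One common way to extract it is to fit a logistic psychometric function to proportion-correct scores across SNRs and read off its midpoint; the sketch below is a generic illustration of that step, not the authors' prediction model, and the parameter names and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, snr50, slope):
    """Two-parameter logistic psychometric function: proportion correct
    as a function of SNR, with midpoint snr50 and growth rate slope."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - snr50)))

def fit_snr50(snrs, prop_correct):
    """Fit the logistic to measured proportion-correct data and return
    the estimated SNR at 50% correct (illustrative helper)."""
    popt, _ = curve_fit(logistic, snrs, prop_correct, p0=[0.0, 0.5])
    return popt[0]
```

In the study's terms, "predicting SNR50 to within 2 dB" means the value estimated from CAEP measures fell within 2 dB of the SNR50 obtained behaviorally from fits like this.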
  • The Effect of Increasing Intracranial Pressure on Ocular Vestibular-Evoked Myogenic Potential Frequency Tuning.

    Jerin, Claudia; Wakili, Reza; Kalla, Roger; Gürkov, Robert, 2015-08-27 08:00:00 AM

    Objective: Ocular vestibular-evoked myogenic potentials (oVEMPs) represent extraocular muscle activity in response to vestibular stimulation. The authors sought to investigate whether posture-induced increase of the intracranial pressure (ICP) modulated oVEMP frequency tuning, that is, the amplitude ratio between 500-Hz and 1000-Hz stimuli. Design: Ten healthy subjects were enrolled in this study. The subjects were positioned in the horizontal plane (0 degree) and in a 30-degree head-downwards position to elevate the ICP. In both positions, oVEMPs were recorded using 500-Hz and 1000-Hz air-conducted tone bursts. Results: When tilting the subject from the horizontal plane to the 30-degree head-down position, oVEMP amplitudes in response to 500-Hz tone bursts distinctly decreased (3.40 [mu]V versus 2.06 [mu]V; p < 0.001), whereas amplitudes to 1000 Hz were only slightly diminished (2.74 [mu]V versus 2.48 [mu]V; p = 0.251). Correspondingly, the 500/1000-Hz amplitude ratio significantly decreased when tilting the subjects from 0- to 30-degree inclination (1.59 versus 1.05; p = 0.029). Latencies were not modulated by head-down position. Conclusions: Increasing ICP systematically alters oVEMPs in terms of absolute amplitudes and frequency tuning characteristics. oVEMPs are therefore in principle suited for noninvasive ICP monitoring. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Changes in Tinnitus After Middle Ear Implant Surgery: Comparisons With the Cochlear Implant.

    Seo, Young Joon; Kim, Hyun Ji; Moon, In Seok; Choi, Jae Young, 2015-08-27 08:00:00 AM

    Objectives: Tinnitus is a very common symptom in patients with hearing loss. Several studies have confirmed that hearing restoration using hearing aids or cochlear implants (CIs) has a suppressive effect on tinnitus in users. The aim of this study was to analyze the effect of other hearing restoration devices, specifically the middle ear implant (MEI), on changes in tinnitus severity. Design: From 2012 to October 2014, 11 adults with tinnitus and hearing loss underwent MEI surgery. Pure-tone audiometry, tinnitus handicap inventory (THI), and visual analog scale scores for loudness, awareness, and annoyance and psychosocial instruments were measured before, immediately after, and 6 months after surgery. Changes in hearing thresholds and THI scores were analyzed and compared with those of 16 CI recipients. Results: In both MEI and CI groups, significant improvements in tinnitus were found after the surgery. The THI scores improved in 91% of patients in the MEI group and in 56% of those in the CI group. Visual analog scale scores and psychosocial scale scores also decreased after surgery, but there were no statistical differences between the groups. Conclusions: The results indicate that the MEI may be as beneficial as the CI in relieving tinnitus in subjects with unilateral tinnitus accompanying hearing loss. Furthermore, this improvement may manifest as hearing restoration or habituation rather than a direct electrical nerve stimulation, which was previously considered as the main mechanism underlying tinnitus suppression by auditory implants. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License, where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Negative Middle Ear Pressure and Composite and Component Distortion Product Otoacoustic Emissions.

    Thompson, Suzanne; Henin, Simon; Long, Glenis R., 2015-08-27 08:00:00 AM

    Objectives: Distortion product otoacoustic emissions (DPOAEs), a by-product of normal outer hair cell function, are used in research and clinical settings to noninvasively test cochlear health. The composite DPOAE recorded in the ear canal is the result of interactions of at least two components: a nonlinear distortion component (the generator component) and a linear reflection component. Negative middle ear pressure (NMEP) results in the tympanic membrane being pulled inward and increases middle ear impedance. This influences both the forward travel of the stimuli used to induce distortion products and the reverse travel of the emissions back to the ear canal. NMEP may therefore limit the effectiveness of DPOAEs in clinical settings. Design: Twenty-six normal-hearing subjects were recruited, and eight were able to reliably and consistently induce NMEP using the Toynbee maneuver. Eight interleaved measures of tympanic peak pressure (TPP) were collected for each subject at normal pressure and NMEP. DPOAEs were then collected both when middle ear pressure was normal and during subject-induced NMEP. All measures were interleaved. Two primary tones were swept logarithmically across frequency (1 second per octave) from f1 = 410 to 6560 Hz and f2 = 500 to 8000 Hz (f2/f1 = 1.22), producing 2f1 - f2 DPOAEs from 320 to 5120 Hz. DPOAEs were collected at three equal-level primary combinations (L65, L70, L75 dB SPL). Before composite and component DPOAE analysis, analysis of the f1 primary confirmed that subjects had successfully induced NMEP. DPOAE and component magnitudes were analyzed separately using repeated measures analyses of variance with three factors: primary level (L65, L70, L75 dB SPL), middle ear pressure (normal pressure versus NMEP), and frequency (500 to 4000 Hz). Results: Mean subject-induced NMEP ranged from -65 to -324 daPa. 
Changes in the magnitude (dB) of the primary tones used to induce the DPOAE provided a reliable indicator of subject-induced NMEP. Composite DPOAE and component levels were significantly affected by NMEP at all the frequencies tested. Changes were most clearly observed for the generator component, with one subject demonstrating a mean decrease of 12 dB in magnitude during NMEP. Results were subject-specific, and there was a correlation between the degree of negative TPP induced and the amount of change in DPOAE level. Conclusions: Mean TPPs collected during NMEP ranged from -65 to -324 daPa and significantly affected composite DPOAE, generator, and reflection component levels. Changes in the magnitude of the swept primaries as a function of frequency were used to confirm that NMEP had been successfully induced. The patterns of change in the composite DPOAEs were clearer and easier to interpret when the components of the DPOAE were separated, with evaluation of the generator component alone. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
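The cubic difference tone arithmetic in the abstract above (2f1 - f2, with the primaries held at a fixed ratio of 1.22) can be checked directly. A small sketch follows; the helper function is illustrative, not from the paper, and the reported upper limit of 5120 Hz appears to be a rounded figure.

```python
def dpoae_freq(f2, ratio=1.22):
    """Frequency of the 2*f1 - f2 distortion product for a primary pair
    in which f2 is `ratio` times f1 (illustrative helper)."""
    f1 = f2 / ratio
    return 2 * f1 - f2

low = dpoae_freq(500)     # ~319.7 Hz, matching the reported 320 Hz
high = dpoae_freq(8000)   # ~5114.8 Hz, close to the reported 5120 Hz
```

The same relation explains the sweep endpoints: f1 runs from 500/1.22 ≈ 410 Hz to 8000/1.22 ≈ 6557 Hz, consistent with the 410 to 6560 Hz range given in the abstract.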
  • Bigger Is Better: Increasing Cortical Auditory Response Amplitude Via Stimulus Spectral Complexity.

    Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey, 2015-08-27 08:00:00 AM

    Objective: To determine the influence of the spectral characteristics of auditory stimuli on cortical auditory evoked potentials (CAEPs). Design: CAEPs were obtained from 15 normal-hearing adults in response to six multitone (MT), four pure-tone (PT), and two narrowband noise stimuli. The sounds were presented at 10, 20, and 40 dB above thresholds that had been estimated behaviorally beforehand. The root mean square amplitude of the CAEP and the detectability of the response were calculated and analyzed. Results: Amplitudes of the CAEPs to the MT stimuli were significantly larger than those to the PT stimuli for frequencies centered around 1, 2, and 4 kHz, whereas no significant difference was found at 0.5 kHz. The objective detection score for the MT stimuli was significantly higher than that for the PT stimuli. For the 1- and 2-kHz stimuli, the CAEP amplitudes to narrowband noise were not significantly different from those evoked by PT. Conclusion: The study supports the notion that spectral complexity, not just bandwidth, has an impact on the CAEP amplitude for stimuli with center frequencies above 0.5 kHz. The implication of these findings is that the clinical test time required to estimate thresholds can potentially be decreased by using complex band-limited MT stimuli rather than conventional PT stimuli. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
  • Auditory Perception and Production of Speech Feature Contrasts by Pediatric Implant Users.

    Mahshie, James; Core, Cynthia; Larsen, Michael D., 2015-08-27 08:00:00 AM

    Objective: The aim of the present research is to examine the relations between auditory perception and production of specific speech contrasts by children with cochlear implants (CIs) who received their implants before 3 years of age and to examine the hierarchy of abilities for perception and production for consonant and vowel features. The following features were examined: vowel height, vowel place, consonant place of articulation (front and back), continuance, and consonant voicing. Design: Fifteen children (mean age = 4;0 and range 3;2 to 5;11) with a minimum of 18 months of experience with their implants and no additional known disabilities served as participants. Perception of feature contrasts was assessed using a modification of the Online Imitative Speech Pattern Contrast test, which uses imitation to assess speech feature perception. Production was examined by having the children name a series of pictures containing consonant and vowel segments that reflected contrasts of each feature. Results: For five of the six feature contrasts, production accuracy was higher than perception accuracy. There was also a significant and positive correlation between accuracy of production and auditory perception for each consonant feature. This correlation was not found for vowels, owing largely to the overall high perception and production scores attained on the vowel features. The children perceived vowel feature contrasts more accurately than consonant feature contrasts. On average, the children had lower perception scores for Back Place and Continuance feature contrasts than for Anterior Place and Voicing contrasts. For all features, the median production scores were 100%; the majority of the children were able to accurately and consistently produce the feature contrasts. The mean production scores for features reflect greater score variability for consonant feature production than for vowel features. 
Back Place of articulation for back consonants and Continuance contrasts appeared to be the most difficult features to produce, as reflected in lower mean production scores for these features. Conclusions: The finding of greater production than auditory perception accuracy for five of the six features examined suggests that the children with CIs were able to produce articulatory contrasts that were not readily perceived through audition alone. Factors that are likely to play a role in the greater production accuracy in addition to audition include the lexical and phonetic properties of the words elicited, a child's phonological representation of the words and motor abilities, and learning through oro-tactile, visual, proprioceptive, and kinesthetic perception. The differences among the features examined, and between perception and production, point to the clinical importance of evaluating these abilities in children with CIs. The present findings further point to the utility of picture naming to establish a child's production accuracy, which in turn is necessary if using imitation as a measure of auditory capacity. Copyright (C) 2015 Wolters Kluwer Health, Inc. All rights reserved.
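The per-feature relation between perception and production accuracy reported above is a Pearson correlation over paired child-level scores. A minimal sketch of that computation (the accuracy values below are invented for illustration, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired accuracy scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical per-child perception vs. production accuracy (proportions)
perception = [0.60, 0.70, 0.75, 0.80, 0.90]
production = [0.80, 0.85, 0.95, 0.90, 1.00]
r = pearson_r(perception, production)
```

A positive `r` here corresponds to the pattern the authors report for consonant features; near-ceiling vowel scores compress the range and can abolish such a correlation, as they found.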
  • Pediatric Cochlear Implantation: Why Do Children Receive Implants Late?

    Fitzpatrick, Elizabeth M.; Ham, Julia; Whittingham, JoAnne, 2015-08-27 08:00:00 AM

    Objectives: Early cochlear implantation has been widely promoted for children who derive inadequate benefit from conventional acoustic amplification. Universal newborn hearing screening has led to earlier identification and intervention, including cochlear implantation in much of the world. The purpose of this study was to examine age and time to cochlear implantation and to understand the factors that affected late cochlear implantation in children who received cochlear implants. Design: In this population-based study, data were examined for all children who underwent cochlear implant surgery in one region of Canada from 2002 to 2013. Clinical characteristics were collected prospectively as part of a larger project examining outcomes from newborn hearing screening. For this study, audiologic details including age and severity of hearing loss at diagnosis, age at cochlear implant candidacy, and age at cochlear implantation were documented. Additional detailed medical chart information was extracted to identify the factors associated with late implantation for children who received cochlear implants more than 12 months after confirmation of hearing loss. Results: The median age of diagnosis of permanent hearing loss for 187 children was 12.6 (interquartile range: 5.5, 21.7) months, and the age of cochlear implantation over the 12-year period was highly variable with a median age of 36.2 (interquartile range: 21.4, 71.3) months. A total of 118 (63.1%) received their first implant more than 12 months after confirmation of hearing loss. Detailed analysis of clinical profiles for these 118 children revealed that late implantation could be accounted for primarily by progressive hearing loss (52.5%), complex medical conditions (16.9%), family indecision (9.3%), geographical location (5.9%), and other miscellaneous known (6.8%) and unknown factors (8.5%). 
Conclusions: This study confirms that despite the trend toward earlier implantation, a substantial number of children can be expected to receive their first cochlear implant well beyond their first birthday because they do not meet audiologic criteria of severe to profound hearing loss for cochlear implantation at the time of identification of permanent hearing loss. This study underscores the importance of carefully monitoring all children with permanent hearing loss to ensure that optimal intervention including cochlear implantation occurs in a timely manner.
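The age figures above are medians with interquartile ranges, the usual summary for skewed age-at-intervention distributions. A small self-contained sketch of that summary (the ages below are hypothetical, not the study's 187 children):

```python
# Median and interquartile range, as reported for age at diagnosis and
# age at implantation. Values below are illustrative only.
def median_iqr(values):
    """Return (median, (Q1, Q3)) using linear interpolation between ranks."""
    s = sorted(values)
    def percentile(p):
        k = (len(s) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (k - lo)
    return percentile(0.5), (percentile(0.25), percentile(0.75))

ages_months = [5, 8, 12, 14, 20, 25, 33, 40]  # hypothetical ages at diagnosis
med, (q1, q3) = median_iqr(ages_months)
```

Reporting the IQR alongside the median, as the authors do, conveys the wide spread in implantation age without letting the late outliers dominate the way a mean would.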
  • Social Support Predicts Hearing Aid Satisfaction.

    Singh, Gurjit; Lau, Sin-Tung; Pichora-Fuller, M. Kathleen, 2015-08-27 08:00:00 AM

Objectives: The goals of the current research were to determine: (i) whether there is a relationship between perceived social support and hearing aid satisfaction, and (ii) how well perceived social support predicts hearing aid satisfaction relative to other correlates previously identified in the literature. Design: In study 1, 173 adult (mean age = 68.9 years; SD = 13.4) users of hearing aids completed a survey assessing attitudes toward health, hearing, and hearing aids, as well as a questionnaire assessing Big-Five personality factors (Openness to Experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) either using paper and pencil or the Internet. In a follow-up study designed to replicate and extend the results from study 1, 161 adult (mean age = 32.8 years; SD = 13.3) users of hearing aids completed a similar survey on the Internet. In study 2, participants also completed a measure of hearing aid benefit and reported the style of their hearing aid. Results: In studies 1 and 2, perceived social support was significantly correlated with hearing aid satisfaction (respectively, r = 0.34, r = 0.51, ps < 0.001). The results of a regression analysis revealed that in study 1, 22% of the variance in hearing aid satisfaction scores was predicted by perceived social support, satisfaction with one's hearing health care provider, duration of daily hearing aid use, and openness. In study 2, 43% of the variance in hearing aid satisfaction was predicted by perceived social support, hearing aid benefit, neuroticism, and hearing aid style. Overall, perceived social support was the best predictor of hearing aid satisfaction in both studies. 
After controlling for response style (i.e., acquiescence or the tendency to respond positively), the correlation between perceived social support and hearing aid satisfaction remained the same in study 1 (r = 0.34, p < 0.001) and was lower in study 2 (r = 0.39, p < 0.001), although the change in correlation was not significant. Conclusions: The results from study 1 provide evidence to suggest that perceived social support is a significant predictor of satisfaction with hearing aids, a finding that was replicated in a different sample of participants investigated in study 2. A significant relationship between perceived social support and hearing aid satisfaction was observed in both studies, even though the composition of the two samples differed in terms of age, relationship status, income, proportion of individuals with unilateral versus bilateral hearing impairment, and lifetime experience with hearing aids. The results from both studies 1 and 2 provide no support for the claim that participant response style accounts for the relationship between hearing aid satisfaction and perceived social support.
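The "22% of the variance" and "43% of the variance" figures above are the R-squared of an ordinary least-squares regression with several predictors. A minimal sketch under synthetic data (predictor names, coefficients, and sample values are all hypothetical; only the R-squared computation mirrors the analysis described):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 161
# Hypothetical predictors: social support, HA benefit, neuroticism, HA style
X = rng.normal(size=(n, 4))
# Hypothetical outcome: satisfaction driven partly by the first two predictors
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)

# Ordinary least-squares fit with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r_squared = 1 - resid.var() / y.var()  # proportion of variance explained
```

Each predictor's standardized coefficient in `beta` indicates its relative contribution, which is how "perceived social support was the best predictor" can be read off a fitted model of this form.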
  • Use of Questionnaire-Based Measures in the Assessment of Listening Difficulties in School-Aged Children.

    Barry, Johanna G.; Tomlin, Danielle; Moore, David R.; Dillon, Harvey, 2015-08-27 08:00:00 AM

    Objectives: In this study, the authors assessed the potential utility of a recently developed questionnaire (Evaluation of Children's Listening and Processing Skills [ECLiPS]) for supporting the clinical assessment of children referred for auditory processing disorder (APD). Design: A total of 49 children (35 referred for APD assessment and 14 from mainstream schools) were assessed for auditory processing (AP) abilities, cognitive abilities, and symptoms of listening difficulty. Four questionnaires were used to capture the symptoms of listening difficulty from the perspective of parents (ECLiPS and Fisher's auditory problem checklist), teachers (Teacher's Evaluation of Auditory Performance), and children, that is, self-report (Listening Inventory for Education). Correlation analyses tested for convergence between the questionnaires and both cognitive and AP measures. Discriminant analyses were performed to determine the best combination of tests for discriminating between typically developing children and children referred for APD. Results: All questionnaires were sensitive to the presence of difficulty, that is, children referred for assessment had significantly more symptoms of listening difficulty than typically developing children. There was, however, no evidence of more listening difficulty in children meeting the diagnostic criteria for APD. Some AP tests were significantly correlated with ECLiPS factors measuring related abilities providing evidence for construct validity. All questionnaires correlated to a greater or lesser extent with the cognitive measures in the study. Discriminant analysis suggested that the best discrimination between groups was achieved using a combination of ECLiPS factors, together with nonverbal Intelligence Quotient (cognitive) and AP measures (i.e., dichotic digits test and frequency pattern test). 
Conclusions: The ECLiPS was particularly sensitive to cognitive difficulties, an important aspect of many children referred for APD, as well as correlating with some AP measures. It can potentially support the preliminary assessment of children referred for APD.
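Discriminant analysis of the kind used above, separating typically developing children from those referred for APD on a combination of questionnaire, cognitive, and AP scores, can be sketched as a two-class Fisher linear discriminant. All feature values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: weight vector maximizing
    between-class separation relative to within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)        # discriminant weights
    threshold = w @ (m0 + m1) / 2           # midpoint decision boundary
    return w, threshold

rng = np.random.default_rng(1)
# Hypothetical features: an ECLiPS factor, nonverbal IQ, dichotic digits score
typical  = rng.normal(loc=[0.5, 100, 80], scale=[0.1, 10, 8], size=(14, 3))
referred = rng.normal(loc=[0.3,  90, 65], scale=[0.1, 10, 8], size=(35, 3))
w, t = fisher_lda(typical, referred)
scores = referred @ w  # projections above t fall on the "referred" side
```

Group sizes (14 and 35) mirror the study's sample; the discrimination quality then depends on how far apart the projected group means sit relative to their spread.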
  • Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.

    Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe, 2015-08-27 08:00:00 AM

Objective: The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Design: Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Finally, previously proposed SDT models were applied to predict NH subject performance with EAS simulations. Results: NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. 
The ASR system showed similar behavior to NH subjects despite a positive signal-to-noise ratio shift for both noise conditions, while demonstrating the synergistic effect for cutoff frequencies >=300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise, while falling short of predicting performance observed in modulated noise. Conclusions: The presented simulation was able to demonstrate the combined-stimulation advantage for NH subjects as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, while CI and EAS user performance was consistently degraded in modulated noise compared with performance in continuous noise. The application of ASR systems seems feasible to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations. In continuous noise, SDT models were largely able to predict the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.
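The CI side of a simulation like this is commonly approximated with a channel vocoder: the signal is split into analysis bands, each band's envelope is extracted, and the envelope modulates a carrier confined to the same band. The sketch below is a generic 12-channel noise vocoder with crude FFT brick-wall filtering; the band edges, channel count, and envelope extraction are illustrative stand-ins, not the part-tone-time-pattern or the MED-EL DUET's actual filter bank:

```python
import numpy as np

def bandpass(x, lo, hi, fs):
    """Crude FFT brick-wall band-pass filter (illustrative only)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, n=len(x))

def noise_vocode(x, fs, edges):
    """Replace each band with band-limited noise carrying its envelope."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi, fs)
        env = np.abs(band)  # rectified envelope (no smoothing, for brevity)
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# A 4-Hz amplitude-modulated tone as a stand-in for a speech-like signal
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
edges = np.geomspace(100, 8000, 13)  # 12 log-spaced bands, assumed edges
vocoded = noise_vocode(speechlike, fs, edges)
```

An EAS simulation in this framework would add low-pass-filtered speech (e.g., `bandpass(x, 0, 500, fs)`) to the vocoded output, corresponding to the residual acoustic hearing condition.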
  • Validity of Automated Threshold Audiometry: A Systematic Review and Meta-Analysis.

    Mahomed, Faheema; Swanepoel, De Wet; Eikelboom, Robert H.; Soer, Maggi, 2015-08-27 08:00:00 AM

Objectives: A systematic literature review and meta-analysis on the validity (test-retest reliability and accuracy) of automated threshold audiometry compared with the gold standard of manual threshold audiometry was conducted. Design: A systematic literature review was completed in peer-reviewed databases on automated compared with manual threshold audiometry. Subsequently, a meta-analysis was conducted on the validity of automated audiometry. A multifaceted approach, covering several databases and using different search strategies, was used to ensure comprehensive coverage and to cross-check search findings. Databases included MEDLINE, SCOPUS, and PubMed, with a secondary search strategy reviewing references from identified reports. Reports including within-subject comparisons of manual and automated threshold audiometry were selected according to inclusion/exclusion criteria before data were extracted. For the meta-analysis, weighted mean differences (and standard deviations) on test-retest reliability for automated compared with manual audiometry were determined to assess the validity of automated threshold audiometry. Results: In total, 29 reports on automated audiometry (method of limits and the method of adjustment techniques) met the inclusion criteria and were included in this review. Most reports included data on adult populations using air conduction testing, with limited data on children, bone conduction testing, and the effects of hearing status on automated audiometry. Meta-analysis test-retest reliability for automated audiometry was within typical test-retest variability for manual audiometry. Accuracy results on the meta-analysis indicated overall average differences between manual and automated air conduction audiometry (0.4 dB; 6.1 SD) to be comparable with test-retest differences for manual (1.3 dB; 6.1 SD) and automated (0.3 dB; 6.9 SD) audiometry. 
No significant differences (p > 0.01; summarized data analysis of variance) were seen in any of the comparisons between test-retest reliability of manual and automated audiometry compared with differences between manual and automated audiometry. Conclusions: Automated audiometry provides an accurate measure of hearing threshold, but validation data are still limited for (a) automated bone conduction audiometry; (b) automated audiometry in children and difficult-to-test populations; and (c) different types and degrees of hearing loss.
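The pooled threshold differences above are weighted mean differences across studies. A fixed-effect pooling step of that kind uses inverse-variance weights; the study-level numbers below are made up purely to illustrate the mechanics:

```python
import math

# (mean threshold difference in dB, standard error) per hypothetical study
studies = [(0.5, 0.8), (-0.2, 0.5), (1.1, 1.2), (0.3, 0.6)]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

Weighting by inverse variance gives precise studies more influence on the pooled estimate, which is how a small overall difference such as the 0.4 dB reported above is obtained from heterogeneous reports.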
