
dc.contributor.author: Ames, Heather
dc.contributor.author: Grossberg, Stephen
dc.date.accessioned: 2011-11-14T18:17:08Z
dc.date.available: 2011-11-14T18:17:08Z
dc.date.issued: 2007-12
dc.identifier.uri: http://hdl.handle.net/2144/1960
dc.description.abstract: Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. Such a transformation enables speech to be understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
dc.description.sponsorship: National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
dc.language.iso: en_US
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems
dc.relation.ispartofseries: BU CAS/CNS Technical Reports; CAS/CNS-TR-2007-022
dc.rights: Copyright 2007 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. the copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission.
dc.title: Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady State Vowel Identification
dc.type: Technical Report
dc.rights.holder: Boston University Trustees

