Center for Spoken Language Understanding (CSLU)
Institute on Development & Disability (IDD)
School of Medicine
Oregon Health & Science University (OHSU)
20000 NW Walker Road
Beaverton, Oregon 97006
Email: kaina at ohsu edu
Phone: (503) 748-1539
Fax: (503) 748-1306
- Assistant Professor, 2007-present
Oregon Health & Science University, Portland, OR
- Chief Scientist, 2005-present
BioSpeech, Inc., Portland, OR
- Lead Speech Synthesis Technologist and Consultant, 2001-2008
Sensory, Inc., Santa Clara, CA
- Visiting Researcher, 1999
AT&T Research Labs, Florham Park, NJ
- Reviewer / Guest Editor for: Journal of the Acoustical Society of America (JASA); Computer Speech and Language; Journal of Speech, Language, and Hearing Research (JSLHR); IEEE journals; scientific conferences such as Interspeech; National Science Foundation (NSF) proposals.
- Postdoctoral Training, 2002-2005
OGI School of Science & Engineering, Portland, OR
- Ph.D. in Computer Science and Engineering, 2001
Oregon Graduate Institute, Portland, OR
- B.A. in Computer Science and B.A. in Mathematics, 1995
Rockford College, Rockford, IL
- Development of a coarticulation model of speech with application to the study of conversational, clear, and disordered speech
- Automatic classification of breathing sounds during sleep for low-cost, ubiquitous, minimally obtrusive screening of apnea
- Quantitative assessment and transformation of clear and conversational speech, with the aim of advancing hearing-aid performance (without extra noise: conversational, clear prosody and conversational spectrum, conversational prosody and clear spectrum, clear; with multi-talker background noise: conversational, clear prosody and conversational spectrum, conversational prosody and clear spectrum, clear)
- Transformation of aphonic speech to improve intelligibility and acceptability (aphonic speech, transformation)
- Transformation of dysarthric speech to improve intelligibility and perceived voice quality (dysarthric speech, transformation)
- Increasing spectral control in concatenative synthesizers to eliminate concatenation errors (baseline, formant + spectral-band + time-domain crossfading)
- Representing acoustic inventories of Text-to-Speech systems with an asynchronous interpolation model, allowing high rates of compression, elimination of concatenation errors, and speaker transformation (compression: original, compression with AIM coder @ 3.4kbps, compression with speex coder @ 3.4kbps for comparison; speaker transformation: transformation-1, transformation-2, transformation-3, transformation-4, transformation-5)
- Improving the accuracy and quality of speaker transformation systems and designing speaker recognizability perceptual tests (transformation of natural speech: source, transformation, target; transformation of TTS synthesis voices: source, transformation, target)
- Multi-purpose speech modification algorithms (original, resynthesis, slow to 300%, speed-up to 50%, lower pitch to 50%, raise pitch to 200%, scale formants to 80%, scale formants to 120%, mimic child, mimic man)
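The duration and pitch modifications listed above can be given a flavor with a minimal overlap-add (OLA) time-stretch sketch. This is a generic textbook technique, not the algorithm used in the work above, and all names and parameter values here are illustrative:

```python
import numpy as np

def ola_time_stretch(x, rate, frame=1024, hop=256):
    """Naive overlap-add time stretch: read analysis frames from the
    input at hop*rate, write them to the output at hop.
    rate > 1 shortens the signal, rate < 1 lengthens it."""
    win = np.hanning(frame)
    out_len = int(len(x) / rate) + frame
    y = np.zeros(out_len)
    norm = np.zeros(out_len)          # running sum of window weights
    t_out = 0
    while True:
        t_in = int(t_out * rate)
        if t_in + frame > len(x):
            break
        y[t_out:t_out + frame] += x[t_in:t_in + frame] * win
        norm[t_out:t_out + frame] += win
        t_out += hop
    norm[norm == 0] = 1.0             # avoid division by zero at the edges
    return y / norm                   # normalize overlapping windows

# 1 s of a 220 Hz tone at 16 kHz, stretched to twice its duration
sr = 16000
x = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
y = ola_time_stretch(x, rate=0.5)
```

Plain OLA recombines frames with incoherent phase, which audibly degrades periodic signals; practical modification systems use pitch-synchronous or waveform-similarity variants (PSOLA, WSOLA) or sinusoidal models instead.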
- Singing synthesis ("The Search is Over")
- 2011/12/01-2013/11/30: National Institutes of Health R21DC012139, "Computer-Based Pronunciation Analysis for Children with Speech Sound Disorders", PI: Kain (OHSU). The aim is to develop speech-production assessment and pronunciation training tools for children with speech sound disorders.
- 2010/05/15-2013/04/30: National Science Foundation IIS-0964468, "HCC: Medium: Synthesis and Perception of Speaker Identity", PI: Kain (OHSU). To achieve the goal of synthesizing speaker identity from a small training corpus, the project addresses problems including trainable, abstract parameterizations of the prosodic patterns that characterize a speaker, as well as voice conversion methods.
- 2009/09/01-2012/08/31: National Science Foundation IIS-0915754, "RI: Small: Modeling Coarticulation for Automatic Speech Recognition", PI: Kain (OHSU). The project performs automatic speech recognition (ASR) within the Asynchronous Interpolation Model (AIM) framework: by decomposing the input speech signal into basis vectors and weights, we search for the phonemic basis vectors and weights that yield the highest-probability match to the input signal.
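The decomposition idea in this grant summary can be sketched abstractly. The snippet below is a toy illustration under the simplifying assumption of one interpolation weight per frame (the actual AIM learns its bases from data and interpolates features asynchronously); the phoneme names and basis vectors are made up:

```python
import numpy as np

# Hypothetical per-phoneme "basis vectors" (e.g., spectral targets).
rng = np.random.default_rng(0)
bases = {"aa": rng.normal(size=12),
         "iy": rng.normal(size=12),
         "s":  rng.normal(size=12)}

def best_interpolation(frame, bases, alphas=np.linspace(0, 1, 21)):
    """Find the pair of phoneme bases and the interpolation weight
    whose linear blend best matches the frame (least-squares error)."""
    best = (None, None, None, np.inf)
    names = list(bases)
    for i, a_name in enumerate(names):
        for b_name in names[i:]:
            for w in alphas:
                recon = w * bases[a_name] + (1 - w) * bases[b_name]
                err = np.sum((frame - recon) ** 2)
                if err < best[3]:
                    best = (a_name, b_name, w, err)
    return best

# A frame built as 70% "aa" plus 30% "iy" should be recovered as such.
frame = 0.7 * bases["aa"] + 0.3 * bases["iy"]
name_a, name_b, w, err = best_interpolation(frame, bases)
```

The exhaustive grid search keeps the sketch readable; a real decoder would score candidates probabilistically and search over phoneme sequences rather than isolated frames.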
- 2011/04/01-2012/03/31: National Institutes of Health 5R42DC008712, "User Adaptation of AAC Device Voices - Phase 2", PI: Klabbers (BioSpeech). Developing and evaluating voice transformation and prosody modification technologies to customize synthetic voices in AAC devices, mimicking the individual user's pre-morbid speech.
- 2009/07/15-2012/06/30: National Science Foundation IIS-0905095, "HCC: Automatic detection of atypical patterns in cross-modal affect", PI: van Santen (OHSU). The long-term goal is to build interactive, agent-based systems for (1) remediation of poor affect communication and (2) diagnosis of the underlying neurological disorders, based on analysis of affective signals.
- 2009/07/17-2012/06/30: National Institutes of Health 5R21DC010035, "Quantitative Modeling of Segmental Timing in Dysarthria", PI: van Santen (OHSU). The project seeks to apply a quantitative modeling framework to segment durations in sentences produced by speakers with a variety of neurological diagnoses and dysarthrias.
- 2007/09/01-2011/08/31: National Science Foundation IIS-0713617, "HCC: High-quality Compression, Enhancement, and Personalization of Text-to-Speech Voices", PI: Kain (OHSU). Developed Text-to-Speech technologies focused on eliminating concatenation errors and on accurate speech modification in the areas of coarticulation, degree of articulation, prosodic effects, and speaker characteristics, using an asynchronous interpolation model.
- 2005/01/10-2010/12/31: National Institutes of Health 5R01DC007129, "Expressive crossmodal affect integration in Autism", PI: van Santen (OHSU). This study performed a comprehensive analysis of crossmodal integration of affect expression in ASD.
- 2008-2009: Nancy Lurie Marks Family Foundation, "In Your Own Voice: Personal AAC Voices for Minimally Verbal Children with Autism Spectrum Disorder", PI: van Santen (OHSU). Adapted a text-to-speech voice to sound like a child's voice.
- 2007/01/01-2008/06/30: National Institutes of Health 1R41DC008712, "User Adaptation of AAC Device Voices - Phase 1", PI: van Santen (BioSpeech). Developed and evaluated voice transformation and prosody modification technologies to customize synthetic voices in AAC devices, mimicking the individual user's pre-morbid speech.
- 2006/09/01-2008/03/31: National Institutes of Health 1R41DC007240, "Voice Transformation for Dysarthria - Phase 1", PI: van Santen (BioSpeech). Developed software that transforms speech compromised by dysarthria into easier-to-understand and more natural-sounding speech. The software resides on a wearable computer, with headset microphone input and powered speaker or line output.
- 2005/01/01-2006/06/30: National Science Foundation IIP-0441125, "STTR Phase 1: Small Footprint Speech Synthesis", PI: Kain (BioSpeech). Created and evaluated speech compression technologies for concatenative text-to-speech synthesizers.
- 2001/10/01-2005/09/30: National Science Foundation IIS-0117911, "Making Dysarthric Speech Intelligible", PI: van Santen (OHSU). Developed new algorithms that enable dysarthric individuals to be more easily understood by the general population.
CS 506/606 - Special Topics: Speech Signal Processing
Description: Speech systems are increasingly commonplace in today's computer systems; examples include speech recognition systems and Text-to-Speech synthesis systems. This course introduces the fundamentals of the underlying speech signal processing that enables such systems. Topics include speech production and perception by humans, frequency transforms, filters, linear predictive features, pitch estimation, speech coding, speech enhancement, and prosodic speech modification.
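As a flavor of one of the course topics, pitch estimation, here is a minimal autocorrelation-based F0 estimator. This is a standard textbook method sketched for illustration only (it is not taken from the course materials), and the sampling rate and search range are arbitrary choices:

```python
import numpy as np

def autocorr_pitch(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate the F0 of a voiced frame by locating the autocorrelation
    peak within the plausible pitch-period (lag) range."""
    frame = frame - frame.mean()               # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)    # lag search bounds
    lag = lo + np.argmax(ac[lo:hi])            # best pitch period in samples
    return sr / lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr             # one 40 ms analysis frame
frame = np.sin(2 * np.pi * 200 * t)            # 200 Hz "voiced" tone
f0 = autocorr_pitch(frame, sr)
```

On real speech this basic estimator needs additional machinery (voicing decisions, octave-error correction, median smoothing), which is why robust trackers refine the same autocorrelation idea.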
CS 553/653 - Speech Synthesis
Description: This course introduces students to the problem of synthesizing speech from text input. Speech synthesis is a challenging area that draws on expertise from a diverse set of scientific fields, including signal processing, linguistics, psychology, statistics, and artificial intelligence. Fundamental advances in each of these areas will be needed to achieve truly human-like synthesis quality, and such advances will also benefit other realms of speech technology (such as speech recognition, speech coding, and speech enhancement). In this course, we consider current approaches to sub-problems such as text analysis, pronunciation, linguistic analysis of prosody, and generation of the speech waveform. Lectures, demonstrations, and readings of relevant literature in the area are supplemented by hands-on student lab exercises.
CS 506/606 - Special Topics: Computational Approaches to Speech and Language Disorders
Description: This course covers a range of speech and language analysis algorithms that have been developed for measurement of speech or language based markers of neurological disorders, for the creation of assistive devices, and for remedial applications. Topics will include introduction to speech and language disorders, robust speech signal processing, statistical approaches to pitch and timing modeling, voice transformation algorithms, speech segmentation, and modeling of disfluency. The class will use a wide array of clinical data, and will be closely tied to several ongoing research projects.
- B. Bush, A. Kain, "Estimating Phoneme Formant Targets and Coarticulation Parameters of Conversational and Clear Speech", ICASSP, 2013.
- S. Mohammadi, A. Kain, J. van Santen, "Making Conversational Vowels More Clear", Proceedings of Interspeech, 2012.
- B. Bush, J.-P. Hosom, A. Kain, and A. Amano-Kusumoto, "Using a genetic algorithm to estimate parameters of a coarticulation model", Interspeech, 2011.
- A. Amano-Kusumoto, J.-P. Hosom, and A. Kain, "Speaking style dependency of formant targets", Interspeech, 2010.
- A. Kain, J. van Santen, "Using Speech Transformation to Increase Speech Intelligibility for the Hearing- and Speaking-impaired", Proceedings of ICASSP, April 2009.
- A. Kain, A. Amano-Kusumoto, and J.-P. Hosom, "Hybridizing Conversational and Clear Speech to Determine the Degree of Contribution of Acoustic Features to Intelligibility", Journal of the Acoustical Society of America, Volume 124, Issue 4, October 2008, Pages 2308-2319.
- A. Kusumoto, A. Kain, P. Hosom, and J. van Santen, "Hybridizing Conversational and Clear Speech", Proceedings of Interspeech, August 2007.
- A. Kain, J. Hosom, X. Niu, J. van Santen, M. Fried-Oken, J. Staehely, "Improving the Intelligibility of Dysarthric Speech", Speech Communication, Volume 49, Issue 9, September 2007, Pages 743-759.
- X. Niu, A. Kain, J. van Santen, "A Noninvasive, Low-cost Device to Study the Velopharyngeal Port During Speech and Some Preliminary Results", Proceedings of Interspeech, September 2006.
- X. Niu, A. Kain, J. van Santen, "Estimation of the Acoustic Properties of the Nasal Tract during the Production of Nasalized Vowels", Proceedings of EUROSPEECH, September 2005.
- A. Kain, X. Niu, J. Hosom, Q. Miao, J. van Santen, "Formant Re-synthesis of Dysarthric Speech", Proceedings of 5th ISCA Workshop on Speech Synthesis, June 2004.
- J. Hosom, A. Kain, T. Mishra, J. van Santen, M. Fried-Oken, J. Staehely, "Intelligibility of modifications to dysarthric speech", Proceedings of ICASSP, May 2003.
Text-to-Speech Synthesis (TTS)
- A. Kain and T. Leen, "Compression of Line Spectral Frequency Parameters using the Asynchronous Interpolation Model", Proceedings of 7th ISCA Workshop on Speech Synthesis, September 2010.
- Q. Miao, A. Kain, J. van Santen, "Perceptual Cost Function for Cross-fading Based Concatenation", Proceedings of Interspeech, 2009.
- R. Moldover, A. Kain, "Compression of Line Spectral Frequency Parameters with Asynchronous Interpolation", Proceedings of ICASSP, April 2009.
- A. Kain, Q. Miao, J. van Santen, "Spectral Control in Concatenative Speech Synthesis", Proceedings of 6th ISCA Workshop on Speech Synthesis, August 2007.
- A. Kain and J. van Santen, "Unit-Selection Text-to-Speech Synthesis Using an Asynchronous Interpolation Model", Proceedings of 6th ISCA Workshop on Speech Synthesis, August 2007.
- E. Klabbers, J. van Santen, A. Kain, "The Contribution of Various Sources of Spectral Mismatch to Audible Discontinuities in a Diphone Database", IEEE Transactions on Audio, Speech, and Language Processing Journal, Volume 15, Issue 3, Pages 949-956, March 2007.
- J. van Santen, A. Kain, E. Klabbers, and T. Mishra, "Synthesis of Prosody using Multi-level Unit Sequences", Speech Communication Journal, Volume 46, Issues 3-4, Pages 365-375, July 2005.
- J. van Santen, A. Kain, and E. Klabbers, "Synthesis by Recombination of Segmental and Prosodic Information", Speech Prosody 2004, March 2004.
- A. Kain and J. van Santen, "A speech model of acoustic inventories based on asynchronous interpolation", Proceedings of EUROSPEECH, Pages 329-332, August 2003.
- J. van Santen, L. Black, G. Cohen, A. Kain, E. Klabbers, T. Mishra, J. de Villiers, X. Niu, "Applications of computer generated expressive speech for communication disorders", Proceedings of EUROSPEECH, Pages 1657-1660, August 2003.
- A. Kain and J. van Santen, "Compression of Acoustic Inventories using Asynchronous Interpolation", Proceedings of IEEE Workshop on Speech Synthesis, Pages 83-86, September 2002.
- J. van Santen, J. Wouters, and A. Kain, "Modification of Speech: A Tribute to Mike Macon", Proceedings of IEEE Workshop on Speech Synthesis, September 2002.
- A. Kain and Y. Stylianou, "Stochastic Modeling of Spectral Adjustment for High Quality Pitch Modification", Proceedings of ICASSP, June 2000, vol. 2, pp. 949-952.
- S. Mohammadi, A. Kain, "Transmutative Voice Conversion", ICASSP, 2013.
- E. Morley, E. Klabbers, J. van Santen, A. Kain, S. Mohammadi, "Synthetic F0 can Effectively Convey Speaker ID in Delexicalized Speech", Interspeech, 2012.
- E. Morley, J. van Santen, E. Klabbers, A. Kain, "F0 Range and Peak Alignment across Speakers and Emotions", ICASSP, 2011.
- E. Klabbers, A. Kain, and J. van Santen, "Evaluation of speaker mimic technology for personalizing SGD voices", Interspeech, 2010.
- H. Duxans, A. Bonafonte, A. Kain, and J. van Santen, "Including Dynamic and Phonetic Information in Voice Conversion Systems", Proceedings of ICSLP, October 2004.
- A. Kain, "High Resolution Voice Transformation", Ph.D. thesis, OGI School of Science & Engineering at Oregon Health & Science University, 2001. The data used in this thesis are available from the Linguistic Data Consortium as the VOICES Corpus.
- A. Kain and M. Macon, "Design and Evaluation of a Voice Conversion Algorithm based on Spectral Envelope Mapping and Residual Prediction", Proceedings of ICASSP, May 2001.
- A. Kain and M. Macon, "Personalizing a speech synthesizer by voice adaptation", Third ESCA/COCOSDA International Speech Synthesis Workshop, November 1998, pp. 225-230.
- A. Kain and M. Macon, "Text-to-speech voice adaptation from sparse training data", Proceedings of ICSLP, November 1998, vol. 7, pp. 2847-2850.
- A. Kain and M. Macon, "Spectral Voice Conversion for Text-to-Speech Synthesis", Proceedings of ICASSP, May 1998, vol. 1, pp. 285-288.
- B. Snider and A. Kain, "Automatic Classification of Breathing Sounds during Sleep", ICASSP, 2013.
- A. Kain and J. van Santen, "Frequency-domain delexicalization using surrogate vowels", Interspeech, 2010.
- J. House, A. Kain, and J. Hines, "ESP - Metaphor for learning: an evolutionary algorithm", Proceedings of GECCO 2000, Las Vegas, NV.
- S. Sutton, R. Cole, J. de Villiers, J. Schalkwyk, P. Vermeulen, M. Macon, Y. Yan, E. Kaiser, B. Rundle, K. Shobaki, P. Hosom, A. Kain, J. Wouters, D. Massaro, M. Cohen, "Universal Speech Tools: The CSLU Toolkit", Proceedings of ICSLP, November 1998, vol. 7, pp. 3221-24.
- N. Malayath, H. Hermansky, A. Kain and R. Carlson, "Speaker-independent Feature Extraction by Oriented Principal Component Analysis", Proceedings of EUROSPEECH 1997.
- J. van Santen and A. Kain, OHSU. System and Method for Compressing Concatenative Acoustic Inventories for Speech Synthesis.
- A. Kain and Y. Stylianou, AT&T Research Laboratories. Stochastic Modeling Of Spectral Adjustment For High Quality Pitch Modification.
- B. R. Snider and A. Kain, "Adaptive Reduction of Additive Noise from Sleep Breathing Sounds", CSLU-2012-001.
- A. Kain, J.-P. Hosom, S. H. Ferguson, B. Bush, "Creating a speech corpus with semi-spontaneous, parallel conversational and clear speech", CSLU-11-003.
- A. Amano-Kusumoto and J.-P. Hosom, "A review of research on speech intelligibility and correlations with acoustic features", CSLU-11-001.