INTERSPEECH 2000: Beijing, China
- Sixth International Conference on Spoken Language Processing, ICSLP 2000 / INTERSPEECH 2000, Beijing, China, October 16-20, 2000. ISCA 2000
Volume 1
Speech Production Control (Special Session)
- Johan Liljencrants, Gunnar Fant, Anita Kruckenberg: Subglottal pressure and prosody in Swedish. 1-4
- Kiyoshi Honda, Shinobu Masaki, Yasuhiro Shimada: Observation of laryngeal control for voicing and pitch change by magnetic resonance imaging technique. 5-8
- Hiroya Fujisaki, Ryou Tomana, Shuichi Narusawa, Sumio Ohno, Changfu Wang: Physiological mechanisms for fundamental frequency control in standard Chinese. 9-12
- René Carré: On vocal tract asymmetry/symmetry. 13-16
- Olov Engwall: Are static MRI measurements representative of dynamic speech? Results from a comparative study using MRI, EPG and EMA. 17-20
- Shinan Lu, Lin He, Yufang Yang, Jianfen Cao: Prosodic control in Chinese TTS system. 21-24
- Yuqing Gao, Raimo Bakis, Jing Huang, Bing Xiang: Multistage coarticulation model combining articulatory, formant and cepstral features. 25-28
- Osamu Fujimura: Rhythmic organization and signal characteristics of speech. 29-35
- Sven E. G. Öhman: Oral culture in the 21st century: the case of speech processing. 36-41
- Jintao Jiang, Abeer Alwan, Lynne E. Bernstein, Patricia A. Keating, Edward T. Auer: On the correlation between facial movements, tongue movements and speech acoustics. 42-45
Linguistics, Phonology, Phonetics, and Psycholinguistics 1, 2
- Sandra P. Whiteside, E. Rixon: Coarticulation patterns in identical twins: an acoustic case study. 46-49
- Philip Hanna, Darryl Stewart, Ji Ming, Francis Jack Smith: Improved lexicon formation through removal of co-articulation and acoustic recognition errors. 50-53
- Anders Lindström, Anna Kasaty: A two-level approach to the handling of foreign items in Swedish speech technology applications. 54-57
- Yasuharu Den, Herbert H. Clark: Word repetitions in Japanese spontaneous speech. 58-61
- Allard Jongman, Corinne B. Moore: The role of language experience in speaker and rate normalization processes. 62-65
- Achim F. Müller, Jianhua Tao, Rüdiger Hoffmann: Data-driven importance analysis of linguistic and phonetic information. 66-69
- Hiroya Fujisaki, Katsuhiko Shirai, Shuji Doshita, Seiichi Nakagawa, Keikichi Hirose, Shuichi Itahashi, Tatsuya Kawahara, Sumio Ohno, Hideaki Kikuchi, Kenji Abe, Shinya Kiriyama: Overview of an intelligent system for information retrieval based on human-machine dialogue through spoken language. 70-73
- Li-chiung Yang: The expression and recognition of emotions through prosody. 74-77
- Marc Swerts, Miki Taniguchi, Yasuhiro Katagiri: Prosodic marking of information status in Tokyo Japanese. 78-81
- Britta Wrede, Gernot A. Fink, Gerhard Sagerer: Influence of duration on static and dynamic properties of German vowels in spontaneous speech. 82-85
- Bo Zheng, Bei Wang, Yufang Yang, Shinan Lu, Jianfen Cao: The regular accent in Chinese sentences. 86-89
- Odile Mella, Dominique Fohr, Laurent Martin, Andreas J. Carlen: A tool for the synchronization of speech and mouth shapes: LIPS. 90-93
- Mohamed-Zakaria Kurdi: Semantic tree unification grammar: a new formalism for spoken language processing. 94-97
Discourse and Dialogue 1, 2
- Akira Kurematsu, Yousuke Shionoya: Identification of utterance intention in Japanese spontaneous spoken dialogue by use of prosody and keyword information. 98-101
- Sherif M. Abdou, Michael S. Scordilis: Improved speech understanding using dialogue expectation in sentence parsing. 102-105
- Helen M. Meng, Carmen Wai, Roberto Pieraccini: The use of belief networks for mixed-initiative dialog modeling. 106-109
- Michael F. McTear, Susan Allen, Laura Clatworthy, Noelle Ellison, Colin Lavelle, Helen McCaffery: Integrating flexibility into a structured dialogue model: some design considerations. 110-113
- Yasuhisa Niimi, Tomoki Oku, Takuya Nishimoto, Masahiro Araki: A task-independent dialogue controller based on the extended frame-driven method. 114-117
- Wei Xu, Alex Rudnicky: Language modeling for dialog system. 118-121
- Kallirroi Georgila, Nikos Fakotakis, George Kokkinakis: Building stochastic language model networks based on simultaneous word/phrase clustering. 122-125
- Li-chiung Yang, Richard Esposito: Prosody and topic structuring in spoken dialogue. 126-129
- Stéphane H. Maes: Elements of conversational computing - a paradigm shift. 130-133
- Ludek Müller, Filip Jurcícek, Lubos Smídl: Rejection and key-phrase spotting techniques using a mumble model in a Czech telephone dialog system. 134-137
- Tim Paek, Eric Horvitz, Eric K. Ringger: Continuous listening for unconstrained spoken dialog. 138-141
- Stefanie Shriver, Alan W. Black, Ronald Rosenfeld: Audio signals in speech interfaces. 142-145
- Péter Pál Boda: Visualisation of spoken dialogues. 146-149
- Mary Zajicek: The construction of speech output to support elderly visually impaired users starting to use the internet. 150-153
Recognition and Understanding of Spoken Language 1, 2
- Kazuyuki Takagi, Rei Oguro, Kazuhiko Ozeki: Effects of word string language models on noisy broadcast news speech recognition. 154-157
- Xiaoqiang Luo, Martin Franz: Semantic tokenization of verbalized numbers in language modeling. 158-161
- Kazuomi Kato, Hiroaki Nanjo, Tatsuya Kawahara: Automatic transcription of lecture speech using topic-independent language modeling. 162-165
- Rocio Guillén, Randal Erman: Extending grammars based on similar-word recognition. 166-169
- Edward W. D. Whittaker, Philip C. Woodland: Particle-based language modelling. 170-173
- Wing Nin Choi, Yiu Wing Wong, Tan Lee, P. C. Ching: Lexical tree decoding with a class-based language model for Chinese speech recognition. 174-177
- Karthik Visweswariah, Harry Printz, Michael Picheny: Impact of bucketing on performance of linearly interpolated language models. 178-181
- Shuwu Zhang, Hirofumi Yamamoto, Yoshinori Sagisaka: An embedded knowledge integration for hybrid language modelling. 182-185
- Lucian Galescu, James F. Allen: Hierarchical statistical language models: experiments on in-domain adaptation. 186-189
- Hirofumi Yamamoto, Kouichi Tanigaki, Yoshinori Sagisaka: A language model for conversational speech recognition using information designed for speech translation. 190-193
- Bob Carpenter, Sol Lerner, Roberto Pieraccini: Optimizing BNF grammars through source transformations. 194-197
- Jian Wu, Fang Zheng: On enhancing Katz-smoothing based back-off language model. 198-201
- Wei Xu, Alex Rudnicky: Can artificial neural networks learn language models? 202-205
- Guergana Savova, Michael Schonwetter, Sergey V. Pakhomov: Improving language model perplexity and recognition accuracy for medical dictations via within-domain interpolation with literal and semi-literal corpora. 206-209
- Karl Weilhammer, Günther Ruske: Placing structuring elements in a word sequence for generating new statistical language models. 210-213
- Yannick Estève, Frédéric Béchet, Renato de Mori: Dynamic selection of language models in a dialogue system. 214-217
- Magne Hallstein Johnsen, Trym Holter, Torbjørn Svendsen, Erik Harborg: Stochastic modeling of semantic content for use in a spoken dialogue system. 218-221
- Tomio Takara, Eiji Nagaki: Spoken word recognition using the artificial evolution of a set of vocabulary. 222-225
- Eric Horvitz, Tim Paek: Deeplistener: harnessing expected utility to guide clarification dialog in spoken language systems. 226-229
- Yunbin Deng, Bo Xu, Taiyi Huang: Chinese spoken language understanding across domain. 230-233
- Sven C. Martin, Andreas Kellner, Thomas Portele: Interpolation of stochastic grammar and word bigram models in natural language understanding. 234-237
- Satoru Kogure, Seiichi Nakagawa: A portable development tool for spoken dialogue systems. 238-241
- Yi-Chung Lin, Huei-Ming Wang: Error-tolerant language understanding for spoken dialogue systems. 242-245
- Akinori Ito, Chiori Hori, Masaharu Katoh, Masaki Kohda: Language modeling by stochastic dependency grammar for Japanese speech recognition. 246-249
- Ruiqiang Zhang, Ezra Black, Andrew M. Finch, Yoshinori Sagisaka: A tagger-aided language model with a stack decoder. 250-253
- Julia Hirschberg, Diane J. Litman, Marc Swerts: Generalizing prosodic prediction of speech recognition errors. 254-257
- Jerome R. Bellegarda, Kim E. A. Silverman: Toward unconstrained command and control: data-driven semantic inference. 258-261
- Ken Hanazawa, Shinsuke Sakai: Continuous speech recognition with parse filtering. 262-265
- Martine Adda-Decker, Gilles Adda, Lori Lamel: Investigating text normalization and pronunciation variants for German broadcast transcription. 266-269
- Mirjam Wester, Eric Fosler-Lussier: A comparison of data-derived and knowledge-based modeling of pronunciation variation. 270-273
- Judith M. Kessens, Helmer Strik, Catia Cucchiarini: A bottom-up method for obtaining information about pronunciation variation. 274-277
- Jiyong Zhang, Fang Zheng, Mingxing Xu, Ditang Fang: Semi-continuous segmental probability modeling for continuous speech recognition. 278-281
- Christos Andrea Antoniou, T. Jeff Reynolds: Acoustic modelling using modular/ensemble combinations of heterogeneous neural networks. 282-285
- Hsiao-Wuen Hon, Shankar Kumar, Kuansan Wang: Unifying HMM and phone-pair segment models. 286-289
- Ming Li, Tiecheng Yu: Multi-group mixture weight HMM. 290-292
- Tetsuro Kitazoe, Tomoyuki Ichiki, Makoto Funamori: Application of pattern recognition neural network model to hearing system for continuous speech. 293-296
- Nathan Smith, Mahesan Niranjan: Data-dependent kernels in SVM classification of speech patterns. 297-300
- Srinivasan Umesh, Richard C. Rose, Sarangarajan Parthasarathy: Exploiting frequency-scaling invariance properties of the scale transform for automatic speech recognition. 301-304
- Masahiro Fujimoto, Jun Ogata, Yasuo Ariki: Large vocabulary continuous speech recognition under real environments using adaptive sub-band spectral subtraction. 305-308
- Liang Gu, Kenneth Rose: Perceptual harmonic cepstral coefficients as the front-end for speech recognition. 309-312
- Yik-Cheung Tam, Brian Kan-Wing Mak: Optimization of sub-band weights using simulated noisy speech in multi-band speech recognition. 313-316
- Robert Faltlhauser, Thilo Pfau, Günther Ruske: On the use of speaking rate as a generalized feature to improve decision trees. 317-320
- Jun Toyama, Masaru Shimbo: Syllable recognition using glides based on a non-linear transformation. 321-324
- M. Kemal Sönmez, Madelaine Plauché, Elizabeth Shriberg, Horacio Franco: Consonant discrimination in elicited and spontaneous speech: a case for signal-adaptive front ends in ASR. 325-328
- Khalid Daoudi, Dominique Fohr, Christophe Antoine: A new approach for multi-band speech recognition based on probabilistic graphical models. 329-332
- Hervé Glotin, Frédéric Berthommier: Test of several external posterior weighting functions for multiband full combination ASR. 333-336
- Kenji Okada, Takayuki Arai, Noburu Kanederu, Yasunori Momomura, Yuji Murahara: Using the modulation wavelet transform for feature extraction in automatic speech recognition. 337-340
- Qifeng Zhu, Abeer Alwan: AM-demodulation of speech spectra and its application to noise robust speech recognition. 341-344
- Astrid Hagen, Andrew C. Morris: Comparison of HMM experts with MLP experts in the full combination multi-band approach to robust ASR. 345-348
- Astrid Hagen, Hervé Bourlard: Using multiple time scales in the framework of multi-stream speech recognition. 349-352
- Hua Yu, Alex Waibel: Streamlining the front end of a speech recognizer. 353-356
- Bhiksha Raj, Michael L. Seltzer, Richard M. Stern: Reconstruction of damaged spectrographic features for robust speech recognition. 357-360
- Janienke Sturm, Hans Kamperman, Lou Boves, Els den Os: Impact of speaking style and speaking task on acoustic models. 361-364
- Shubha Kadambe, Ron Burns: Encoded speech recognition accuracy improvement in adverse environments by enhancing formant spectral bands. 365-368
- Jon Barker, Ljubomir Josifovski, Martin Cooke, Phil D. Green: Soft decisions in missing data techniques for robust automatic speech recognition. 373-376
- Jian Liu, Tiecheng Yu: New tone recognition methods for Chinese continuous speech. 377-380
- Bo Zhang, Gang Peng, William S.-Y. Wang: Reliable bands guided similarity measure for noise-robust speech recognition. 381-384
- Tsuneo Nitta, Masashi Takigawa, Takashi Fukuda: A novel feature extraction using multiple acoustic feature planes for HMM-based speech recognition. 385-388
- Fang Zheng, Guoliang Zhang: Integrating the energy information into MFCC. 389-392
- Omar Farooq, Sekharjit Datta: Speaker independent phoneme recognition by MLP using wavelet features. 393-396
- Laurent Couvreur, Christophe Couvreur, Christophe Ris: A corpus-based approach for robust ASR in reverberant environments. 397-400
- Issam Bazzi, James R. Glass: Modeling out-of-vocabulary words for robust speech recognition. 401-404
- Bojana Gajic, Richard C. Rose: Hidden Markov model environmental compensation for automatic speech recognition on hand-held mobile devices. 405-408
- Andrew C. Morris, Ljubomir Josifovski, Hervé Bourlard, Martin Cooke, Phil D. Green: A neural network for classification with incomplete data: application to robust ASR. 409-412
- Shigeki Matsuda, Mitsuru Nakai, Hiroshi Shimodaira, Shigeki Sagayama: Feature-dependent allophone clustering. 413-416
- Qian Yang, Jean-Pierre Martens: Data-driven lexical modeling of pronunciation variations for ASR. 417-420
- Dat Tran, Michael Wagner: Fuzzy entropy hidden Markov models for speech recognition. 421-424
- Carl Quillen: Adjacent node continuous-state HMM's. 425-428
- Janienke Sturm, Eric Sanders: Modelling phonetic context using head-body-tail models for connected digit recognition. 429-432
- Issam Bazzi, Dina Katabi: Using support vector machines for spoken digit recognition. 433-436
- Jiping Sun, Xing Jing, Li Deng: Data-driven model construction for continuous speech recognition using overlapping articulatory features. 437-440
- Marcel Vasilache: Speech recognition using HMMs with quantized parameters. 441-444
- Yingyong Qi, Jack Xin: A perception and PDE based nonlinear transformation for processing spoken words. 445-448
- Reinhard Blasig, Georg Rose, Carsten Meyer: Training of isolated word recognizers with continuous speech. 449-452
Production of Spoken Language
- Shu-Chuan Tseng: Repair patterns in spontaneous Chinese dialogs: morphemes, words, and phrases. 453-456
- Jianwu Dang, Kiyoshi Honda: Improvement of a physiological articulatory model for synthesis of vowel sequences. 457-460
- Kunitoshi Motoki, Xavier Pelorson, Pierre Badin, Hiroki Matsuzaki: Computation of 3-d vocal tract acoustics based on mode-matching technique. 461-464
- Lucie Ménard, Louis-Jean Boë: Exploring vowel production strategies from infant to adult by means of articulatory inversion of formant data. 465-468
- Gavin Smith, Tony Robinson: Segmentation of a speech waveform according to glottal open and closed phases using an autoregressive-HMM. 469-472
- Rosemary Orr, Bert Cranen, Felix de Jong, Lou Boves: Comparison of inverse filtering of the flow signal and microphone signal. 473-476
- Markus Iseli, Abeer Alwan: Inter- and intra-speaker variability of glottal flow derivative using the LF model. 477-480
Linguistics, Phonology, Phonetics, and Psycholinguistics 3
- Philippe Blache, Daniel Hirst: Multi-level annotation for spoken language corpora. 481-484
- Aijun Li, Fang Zheng, William Byrne, Pascale Fung, Terri Kamm, Yi Liu, Zhanjiang Song, Umar Ruhi, Veera Venkataramani, Xiaoxia Chen: CASS: a phonetically transcribed corpus of Mandarin spontaneous speech. 485-488
- Kazuhide Yamamoto, Eiichiro Sumita: Multiple decision-tree strategy for input-error robustness: a simulation of tree combinations. 489-492
- Zheng Chen, Kai-Fu Lee, Mingjing Li: Discriminative training on language model. 493-496
- Jianfeng Gao, Mingjing Li, Kai-Fu Lee: N-gram distribution based language model adaptation. 497-500
- Francisco Palou, Paolo Bravetti, Ossama Emam, Volker Fischer, Eric Janke: Towards a common phone alphabet for multilingual speech recognition. 501-504