ICASSP 1989: Glasgow, Scotland
- IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '89, Glasgow, Scotland, May 23-26, 1989. IEEE 1989
- Kazunaga Yoshida, Takao Watanabe, Shinji Koga: Large vocabulary word recognition based on demi-syllable hidden Markov model using small amount of training data. 1-4
- Gerhard Rigoll: Speaker adaptation for large vocabulary speech recognition systems using speaker Markov models. 5-8
- Eleftherios D. Frangoulis: Vector quantisation of the continuous distributions of an HMM speech recogniser based on mixtures of continuous distributions. 9-12
- Jerome R. Bellegarda, David Nahamoo: Tied mixture continuous parameter models for large vocabulary isolated speech recognition. 13-16
- Les T. Niles, Harvey F. Silverman, Gary N. Tajchman, Marcia A. Bush: How limited training data can allow a neural network to outperform an 'optimal' statistical classifier. 17-20
- Andreas Krause, Heidi Hackbarth: Scaly artificial neural networks for speaker-independent recognition of isolated words. 21-24
- Hidefumi Sawai, Alex Waibel, Masanori Miyatake, Kiyohiro Shikano: Spotting Japanese CV-syllables and phonemes using time-delay neural networks. 25-28
- Hiroaki Sakoe, Ryosuke Isotani, Kazunaga Yoshida, Ken-ichi Iso, Takao Watanabe: Speaker-independent word recognition using dynamic programming neural networks. 29-32
- Hervé Bourlard, Christian J. Wellekens: Speech dynamics and recurrent neural networks. 33-36
- Frédéric Guyot, Frédéric Alexandre, Jean Paul Haton: Toward a continuous model of the cortical column: Application to speech recognition. 37-40
- E. Ofer, David Malah, Amir Dembo: A unified framework for LPC excitation representation in residual speech coders. 41-44
- Bishnu S. Atal: A model of LPC excitation in terms of eigenvectors of the autocorrelation matrix of the impulse response of the LPC filter. 45-48
- Shihua Wang, Allen Gersho: Phonetically-based vector excitation coding of speech at 3.6 kbps. 49-52
- Anders Bergström, Per Hedelin: Code-book driven glottal pulse analysis. 53-56
- Mark Ireton, Costas S. Xydeas: On improving vector excitation coders through the use of spherical lattice codebooks (SLCs). 57-60
- Claude Lamblin, Jean-Pierre Adoul, Dominique Massaloux, Sarto Morissette: Fast CELP coding based on the Barnes-Wall lattice in 16 dimensions. 61-64
- Nikil S. Jayant, Juin-Hwey Chen: Speech coding with time-varying bit allocations to excitation and LPC parameters. 65-68
- Bishnu S. Atal, Richard V. Cox, Peter Kroon: Spectral quantization and interpolation for CELP coders. 69-72
- Luca Cellario, Giuseppe Ferraris, Daniele Sereno: A 2 ms delay CELP coder. 73-76
- Timothy Thorpe: The mean squared error criterion: Its effect on the performance of speech coders. 77-80
- Erik McDermott, Shigeru Katagiri: Shift-invariant, multi-category phoneme recognition using Kohonen's LVQ2. 81-84
- Gert-Jan Vernooij, Gerrit Bloothooft, Yvonne van Holsteijn: A simulation study on the usefulness of broad phonetic classification in automatic speech recognition. 85-88
- Satoshi Nakamura, Kiyohiro Shikano: Speaker adaptation applied to HMM and neural networks. 89-92
- Anna Maria Colla: Automatic extraction of acoustic prototypes for large vocabulary speech recognition by using speaker-independent features. 93-96
- Li Deng, Patrick Kenny, Matthew Lennig, Vishwa Gupta, Paul Mermelstein: A locus model of coarticulation in an HMM speech recognizer. 97-100
- Abdulmesih Aktas, Harald Höge: Real-time recognition of subword units on a hybrid multi-DSP/ASIC based acoustic front-end. 101-103
- Kathy L. Brown, V. Ralph Algazi: Characterization of spectral transitions with applications to acoustic sub-word segmentation and automatic speech recognition. 104-107
- Torbjørn Svendsen, Kuldip K. Paliwal, Erik Harborg, P. O. Husoy: An improved sub-word based speech recognizer. 108-111
- Alex Waibel, Hidefumi Sawai, Kiyohiro Shikano: Consonant recognition by modular construction of large phonemic time-delay neural networks. 112-115
- Anne-Marie Derouault, Bernard Mérialdo: Improving speech recognition accuracy with contextual phonemes and MMI training. 116-119
- Alain Le Guyader, Dominique Massaloux, Jean-Pierre Petit: Robust and fast code-excited linear predictive coding of speech signals. 120-123
- Redwan Salami, Lajos Hanzo, Derek G. Appleby: A fully vector quantised self-excited vocoder. 124-127
- Ahmet M. Kondoz, K. Y. Lee, Barry G. Evans: Improved quality CELP base-band coding of speech at low-bit rates. 128-131
- Jean E. Menez, Claude R. Galand, Michele M. Rosso, F. Bottau: Adaptive code excited linear predictive coder (ACELPC). 132-135
- Hong Chae Woo, Jerry D. Gibson: Multipulse-based codebooks for CELP coding at 7 kbps. 136-139
- H. Brehm, Manfred Herbert: Lattice quantizers in speech coding. 140-143
- Jae H. Chung, Ronald W. Schafer: A 4.8 kbps homomorphic vocoder using analysis-by-synthesis excitation analysis. 144-147
- Masumi Akamine, Kimio Miseki: ARMA model based speech coding at 8 kb/s. 148-151
- Martin Schultheiß, Arild Lacroix: On the performance of CELP algorithms for low rate speech coding. 152-155
- Tomohiko Taniguchi, Shigeyuki Unagami, Robert M. Gray: Multimode coding: application to CELP. 156-159
- Yair Shoham: Cascaded likelihood vector coding of the LPC information. 160-163
- Yoshua Bengio, Régis Cardin, Piero Cosi, Renato De Mori: Speech coding with multi-layer networks. 164-167
- Nariman Farvardin, Rajiv Laroia: Efficient encoding of speech LSP parameters using the discrete cosine transformation. 168-171
- Baruch Mazor, C. Hudson, D. Borkowski: Transform subbands coding with channel error control. 172-175
- Salvatore D. Morgera, Mohammad Reza Soleymani, Yves Normandin: Combined source-channel coding. 176-179
- Robert E. Bogner, Tzuyin Li: Pattern search prediction of speech. 180-183
- James L. Dixon, Vijay K. Varma, Nelson Sollenberger, David W. Lin: Single DSP implementation of a 16 kbps sub-band speech coder for portable communications. 184-187
- Junji Suzuki, Naohisa Ohta: Variable rate coding scheme for audio signal with subband and embedded coding techniques. 188-191
- Rosario Drogo de Jacovo, Roberto Montagna, Franco Perosino, Daniele Sereno: Some experiments of 7 kHz audio coding at 16 kbit/s. 192-195
- Takehiro Moriya, Hirohito Suda: An 8 kbit/s transform coder for noisy channels. 196-199
- David P. Kemp, Retha A. Sueda, Thomas E. Tremain: An evaluation of 4800 bps voice coders. 200-203
- Y. J. Liu, Joseph Rothweiler: A high quality speech coder at 400 bps. 204-206
- Thomas F. Quatieri, Robert J. McAulay: Phase coherence in speech reconstruction for enhancement and coding applications. 207-210
- William M. Kushner, Vladimir Goncharoff, Chung Wu, Vien Nguyen, John N. Damoulakis: The effects of subtractive-type speech enhancement/noise reduction algorithms on parameter estimation for improved recognition and coding in high noise environments. 211-214
- David M. Howard, Andrew P. Breen: Methods for dynamic excitation control in parallel formant speech synthesis. 215-218
- Michael S. Scordilis, John N. Gowdy: Neural network based generation of fundamental frequency contours. 219-222
- Rolf Carlson, Gunnar Fant, Christer Gobl, Björn Granström, Inger Karlsson, Qiguang Lin: Voice source rules for text-to-speech synthesis. 223-226
- Mazin G. Rahim, Colin C. Goodyear: Articulatory synthesis with the aid of a neural net. 227-230
- René Carré, Mohamad Mrayati: New concept in acoustic-articulatory-phonetic relations-perspectives and applications. 231-234
- John Brian Pickering: Modelling coarticulation. 235-237
- Christian Hamon, Eric Moulines, Francis Charpentier: A diphone synthesis system based on time-domain prosodic modifications of speech. 238-241
- Hector R. Javkin, Kazue Hata, Lucio Mendes, Steven Pearson, Hisayo Ikuta, Abigail Kaun, Gregory DeHaan, Alan Jackson, Beatrix Zimmermann, Tracy Wise, Caroline Henton, Merrilyn Gow, Kenji Matsui, Noriyo Hara, Masaki Kitano, Der-Hwa Lin, Chun-Hong Lin: A multi-lingual text-to-speech system. 242-245
- Hirohisa Iijima, Nobuhiro Miki, Nobuo Nagai: Fundamental consideration of finite element method for the simulation of the vibration of vocal cords. 246-249
- Bert Van Coile: The DEPES development system for text-to-speech synthesis. 250-253
- Jay G. Wilpon, Chin-Hui Lee, Lawrence R. Rabiner: Application of hidden Markov models for recognition of a limited set of words in unconstrained speech. 254-257
- Dirk Van Compernolle: Spectral estimation using a log-distance error criterion applied to speech recognition. 258-261
- Melvyn J. Hunt, Claude Lefèbvre: A comparison of several acoustic representations for speech recognition with degraded and undegraded speech. 262-265
- John H. L. Hansen, Mark A. Clements: Stress compensation and noise reduction algorithms for robust speech recognition. 266-269
- Gary E. Kopec, Marcia A. Bush: An LPC-based spectral similarity measure for speech recognition in the presence of co-channel speech interference. 270-273
- Tomio Takara: Isolated word recognition using continuous state transition-probability and DP-matching. 274-277
- Paul Cosgrove, J. Patrick Wilson, Roy D. Patterson: Formant transition detection in isolated vowels with transitions in initial and final position. 278-281
- Y. Guedon, C. Cocozza-Thivent: Use of the Derin's algorithm in hidden semi-Markov models for automatic speech recognition. 282-285
- Sadaoki Furui: Unsupervised speaker adaptation method based on hierarchical spectral clustering. 286-289
- D. Hsu, John R. Deller Jr.: On the use of HMMs to recognize cerebral palsy speech: isolated word case. 290-293
- Stephen J. Cox, John S. Bridle: Unsupervised speaker adaptation by probabilistic spectrum fitting. 294-297
- Masafumi Nishimura: HMM-based speech recognition using dynamic spectral feature. 298-301
- Ted H. Applebaum, Brian A. Hanson: Enhancing the discrimination of speaker independent hidden Markov models with corrective training. 302-305
- X. Zhang, John S. Mason: Improved training using semi-hidden Markov models in speech recognition. 306-309
- Fred Stentiford, Richard Hemmings: A piecewise approach to connectionist networks for speech recognition. 310-313
- Piero Demichelis, L. Fissore, Pietro Laface, Giorgio Micca, E. Piccolo: On the use of neural networks for speaker independent isolated word recognition. 314-317
- Magne Hallstein Johnsen: A sub-word based speaker independent speech recognizer using a two-pass segmentation scheme. 318-321
- Shigeru Katagiri, Erik McDermott, Manami Yokota: A new algorithm for representing acoustic feature dynamics. 322-325
- Yoshiharu Abe, Kunio Nakajima: Speech recognition using dynamic transformation of phoneme templates depending on acoustic/phonetic environments. 326-329
- Ki Chui Kim, Hwang Soo Lee, Jung Wan Cho: Phonetic recognition using peak weighted binary spectrum. 330-333
- Siyu Zhu, Jiankui Zhao, Cheng Fan: Feature-based recognition of nonsonorant consonants in Chinese speech. 334-337
- Shinta Kimura, Hitoshi Iwamida, Toru Sanada: Extraction and evaluation of phonetic-acoustic rules for continuous speech recognition. 338-341
- H. Hasan, J. M. Pardo, S. Alexandres, C. Casado: Phonetic properties of a large Spanish lexicon and its implications for large vocabulary speech recognition. 342-344
- Lalit R. Bahl, P. S. Gopalakrishnan, Dimitri Kanevsky, David Nahamoo: Matrix fast match: a fast method for identifying a short list of candidate words for decoding. 345-348
- Boneung Koo, Jerry D. Gibson, Steven D. Gray: Filtering of colored noise for speech enhancement and coding. 349-352
- Yariv Ephraim, David Malah, Biing-Hwang Juang: Speech enhancement based upon hidden Markov modeling. 353-356
- Charles H. Rogers, D. Chien, M. Featherstone, Kwang-Shik Min: Neural network enhancement for a two speaker separation system. 357-360
- Marc A. Zissman, Clifford J. Weinstein, Louis D. Braida, Rosalie M. Uchanski, William M. Rabinowitz: Speech-state-adaptive simulation of co-channel talker interference suppression. 361-364
- Hidefumi Kobatake, Katsuhisa Tawa, Akira Ishida: Speech/nonspeech discrimination for speech recognition system under real life noise environments. 365-368
- D. K. Freeman, G. Cosier, C. B. Southcott, I. Boyd: The voice activity detector for the Pan-European digital cellular mobile telephone service. 369-372
- V. Viswanathan, C. Henry: Noise-immune multisensor speech input: formal subjective testing in operational conditions. 373-376
- Saeed Vaseghi, Peter J. W. Rayner: The effects of non-stationary signal characteristics on the performance of adaptive audio restoration systems. 377-380
- Mitsuhiro Yuito, Naoki Matsuo: A new sample-interpolation method for recovering missing speech samples in packet voice communications. 381-384
- Gérard Faucon, Saïd Tazi Mezalek, Régine Le Bouquin: Study and comparison of three structures for enhancement of noisy speech. 385-388
- Victor Zue, James R. Glass, Michael Philips, Stephanie Seneff: Acoustic segmentation and phonetic classification in the SUMMIT system. 389-392
- Kaichiro Hatazaki, Yasuhiro Komori, Takeshi Kawabata, Kiyohiro Shikano: Phoneme segmentation using spectrogram reading knowledge. 393-396
- Shigeki Sagayama: Phoneme environment clustering for speech recognition. 397-400
- K. Frimpong-Ansah, David J. B. Pearce, Wendy J. Holmes, N. G. Dixon: A stochastic/feature based recogniser and its training algorithm. 401-404
- Lawrence R. Rabiner, Chin-Hui Lee, Biing-Hwang Juang, Jay G. Wilpon: HMM clustering for connected word recognition. 405-408
- Claude Montacié, Khalid Choukri, Gérard Chollet: Speech recognition using temporal decomposition and multi-layer feed-forward automata. 409-412
- Steve Renals, Richard Rohwer: Learning phoneme recognition using neural networks. 413-416
- T. D. Harrison, Frank Fallside: A connectionist model for phoneme recognition in continuous speech. 417-420
- Joseph Picone: On modeling duration in context in speech recognition. 421-424
- Michael A. Franzini, Michael J. Witbrock, Kai-Fu Lee: A connectionist approach to continuous speech recognition. 425-428
- J. Mariani: Recent advances in speech processing. 429-440
- Stephen E. Levinson, M. Y. Liberman, Andrej Ljolje, Laura G. Miller: Speaker independent phonetic transcription of fluent speech for large vocabulary speech recognition. 441-444
- Kai-Fu Lee, Hsiao-Wuen Hon, Mei-Yuh Hwang, Sanjoy Mahajan, Raj Reddy: The SPHINX speech recognition system. 445-448
- Douglas B. Paul: The Lincoln robust continuous speech recognizer. 449-452
- L. Fissore, Pietro Laface, Giorgio Micca, Roberto Pieraccini: A word hypothesizer for a large vocabulary continuous speech understanding system. 453-456
- Martin Brenner, Harald Höge, Erwin Marschall, Jorge Romano: Word recognition in continuous speech using a phonological based two-network matching parser and a synthesis based prediction. 457-460
- Takeshi Kawabata, Kiyohiro Shikano: Island-driven continuous speech recognizer using phone-based HMM word spotting. 461-464
- Lalit R. Bahl, Raimo Bakis, Jerome R. Bellegarda, Peter F. Brown, David Burshtein, Subrata K. Das, Peter V. de Souza, P. S. Gopalakrishnan, Frederick Jelinek, Dimitri Kanevsky, Robert L. Mercer, Arthur Nádas, David Nahamoo, Michael A. Picheny: Large vocabulary natural language continuous speech recognition. 465-467
- V. Ralph Algazi, Sang Chung, Michael J. Ready, Kathy L. Brown: Robust LPC analysis and synthesis using the KL transformation of acoustic subwords spectra. 468-471
- John D. Tardelli: Intelligibility measurement of interrupted voice communication systems. 472-475
- Jean-Claude Junqua, Hisashi Wakita: A comparative study of cepstral lifters and distance measures for all pole models of speech in noise. 476-479