ICASSP 1988: New York, NY, USA
- IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '88, New York, New York, USA, April 11-14, 1988. IEEE 1988
- Richard P. Lippmann: Neural nets for computing. 1-6
- Alan B. Poritz: Hidden Markov models: a guided tour. 7-13
- P. Jeffrey Bloom: Application of digital signal processing in professional audio. 14-19
- P. S. Gopalakrishnan, Dimitri Kanevsky, Arthur Nádas, David Nahamoo, Michael A. Picheny: Decoder selection based on cross-entropies. 20-23
- Yariv Ephraim, Lawrence R. Rabiner: On the relations between modeling approaches for information sources [speech recognition]. 24-27
- Yunxin Zhao, Les Atlas, Xinhua Zhuang: Application of the Gibbs distribution to hidden Markov modeling in isolated word recognition. 28-31
- Harald Katterfeldt: A speaker-independent isolated-word recognizer based on polynomial classifiers. 32-35
- David Mansour, Biing-Hwang Juang: A family of distortion measures based upon projection operation for robust speech recognition. 36-39
- Lalit R. Bahl, Peter F. Brown, Peter V. de Souza, Robert L. Mercer: Speech recognition with continuous-parameter hidden Markov models. 40-43
- Bernard Gold, Richard P. Lippmann: A neural network for isolated-word recognition. 44-47
- Richard P. Lippmann, Edward A. Martin: Discriminant clustering using an HMM isolated-word recognizer. 48-51
- Edward A. Martin, Richard P. Lippmann, Douglas B. Paul: Dynamic adaptation of Hidden Markov models for robust isolated-word speech recognition. 52-54
- Jay G. Wilpon, D. M. DeMarco, Rajendra P. Mikkilineni: Isolated word recognition over the DDD telephone network. Results of two extensive field studies. 55-58
- M. L. Rossen, Les T. Niles, Gary N. Tajchman, Marcia A. Bush, J. A. Anderson, Sheila E. Blumstein: A connectionist model for consonant-vowel syllable recognition. 59-62
- Benjamin Monderer, Aurel A. Lazar: Speech signal detection at the output of a cochlear model. 63-66
- Alvaro De Lima-Veiga, Yves Grenier: A multi-step excited model for speech parameter trajectories. 67-70
- Adoram Erell, Yaakov Orgad, Julius L. Goldstein: Psychoacoustically based scalar quantization of the LPC poles. 71-74
- Steve W. Beet, H. E. G. Powrie, Roger K. Moore, Michael J. Tomlinson: Improved speech recognition using a reduced auditory representation. 75-78
- Mohamad K. Asi, Bahaa E. A. Saleh: A linear filter for time scaling of speech. 79-82
- Juergen Schroeter, Jerry N. Larar, M. Mohan Sondhi: Multi-frame approach for parameter estimation of a physiological model of speech production. 83-86
- Naftali Tishby: Information theoretic factorization of speaker and language in hidden Markov models, with application to speaker recognition. 87-90
- Oded Ghitza: Auditory neural feedback as a basis for speech processing. 91-94
- Yi-Teh Lee, Harvey F. Silverman: On a general time-varying model for speech signals. 95-98
- William Huang, Richard Lippmann, Ben Gold: A neural net approach to speech recognition. 99-102
- Yoshua Bengio, Renato De Mori: Use of neural networks for the recognition of place of articulation. 103-106
- Alex Waibel, Toshiyuki Hanazawa, Geoffrey E. Hinton, Kiyohiro Shikano, Kevin J. Lang: Phoneme recognition: neural networks vs. hidden Markov models. 107-110
- Bernard Mérialdo: Phonetic recognition using hidden Markov models and maximum mutual information training. 111-114
- Hy Murveit, Mitchel Weintraub: 1000-word speaker-independent continuous-speech recognition using hidden Markov models. 115-118
- Lawrence R. Rabiner, Jay G. Wilpon, Frank K. Soong: High performance connected digit recognition, using hidden Markov models. 119-122
- Kai-Fu Lee, Hsiao-Wuen Hon: Large-vocabulary speaker-independent continuous speech recognition using HMM. 123-126
- Salim E. Roucos, Mari Ostendorf, Herbert Gish, Alan Derr: Stochastic segment modelling using the estimate-maximize algorithm [speech recognition]. 127-130
- Ming-Whei Feng, Francis Kubala, Richard M. Schwartz, John Makhoul: Improved speaker adaptation using text dependent spectral mappings. 131-134
- Ghassan J. Freij, Frank Fallside: Lexical stress recognition using hidden Markov models. 135-138
- Masato Akagi, Yoh'ichi Tohkura: On the application of spectrum target prediction model to speech recognition. 139-142
- Maurizio Copperi: Rule-based speech analysis and application to CELP coding. 143-146
- Peter Kabal, J.-L. Moncet, C. C. Chu: Synthesis filter optimization and coding: Applications to CELP [speech analysis]. 147-150
- Peter Kroon, Bishnu S. Atal: Strategies for improving the performance of CELP coders at low bit rates [speech analysis]. 151-154
- W. Bastiaan Kleijn, Daniel J. Krasinski, Richard H. Ketchum: Improved speech quality and efficient vector quantization in SELP. 155-158
- Ahmet M. Kondoz, Barry G. Evans: CELP base-band coder for high quality speech coding at 9.6 to 2.4 kbps. 159-162
- Grant A. Davidson, Allen Gersho: Multiple-stage vector excitation coding of speech waveforms. 163-166
- Luis A. Hernández Gómez, Francisco Javier Casajús-Quirós, Ramón García-Gómez: High-quality vector adaptive transform coding at 4.8 kb/s. 167-170
- T. V. Sreenivas: Modelling LPC-residue by components for good quality speech coding. 171-174
- Shigeru Ono, Kazunori Ozawa: 2.4 kbps pitch prediction multi-pulse speech coding. 175-178
- Steven F. Boll, Jack E. Porter, Lawrence G. Bahler: Robust syntax free speech recognition. 179-182
- Hiroshi Matsu'ura, Tsuneo Nitta, Shoichi Hirai, Yoichi Takebayashi, Hiroyuki Tsuboi, Hiroshi Kanazawa: A large vocabulary word recognition system based on syllable recognition and nonlinear word matching. 183-186
- Pei-Yih Ting, Chiu-yu Tseng, Lin-Shan Lee: New speech recognition approaches based upon finite state vector quantization with structural constraints. 187-190
- Shang-Chin Chen, Xue Yang: Speech recognition with high recognition rate by smoothed space pseudo Wigner-Ville distribution (SSPWD) and overlap slide window spectrum methods. 191-194
- Mohsen A. Rashwan, Moustafa M. Fahmy: A new technique for speaker-independent isolated word recognition. 195-198
- Takao Watanabe: Speaker-independent word recognition using dynamic programming matching with statistic time warping cost. 199-202
- L. Fissore, Pietro Laface, Giorgio Micca, Roberto Pieraccini: Very large vocabulary isolated utterance recognition: a comparison between one pass and two pass strategies. 203-206
- Masafumi Nishimura, Kazuhide Sugawara: Speaker adaptation method for HMM-based speech recognition. 207-210
- Satoru Hayamizu, Kazuyo Tanaka, Kozo Ohta: A large vocabulary word recognition system using rule-based network representation of acoustic characteristic variations. 211-214
- Melvyn J. Hunt, Claude Lefèbvre: Speaker dependent and independent speech recognition experiments with an auditory model. 215-218
- Hynek Hermansky, Jean-Claude Junqua: Optimization of perceptually-based ASR front-end [automatic speech recognition]. 219-222
- Kiyoaki Aikawa, Sadaoki Furui: Spectral movement function and its application to speech recognition. 223-226
- Peter Vary, Karl Hellwig, Rudolf Hofmann, Robert J. Sluyter, Claude R. Galand, Michele M. Rosso: Speech codec for the European mobile radio system. 227-230
- I. Lecomte, Michel Lever, L. Lelièvre, M. Delprat, Alain Tassy: Medium band speech coding for mobile radio communications. 231-234
- Richard V. Cox, Joachim Hagenauer, Nambi Seshadri, Carl-Erik W. Sundberg: A sub-band coder designed for combined source and channel coding [speech coding]. 235-238
- Henri G. Suyderhoud, Spiros Dimolitsas: Impact of noise and encoder/decoder mistracking on ADPCM system performance. 239-242
- Vasu Iyengar, Peter Kabal: A low delay 16 kbits/sec speech coder. 243-246
- Michael W. Marcellin, Thomas R. Fischer, Jerry D. Gibson: Predictive trellis coded quantization of speech. 247-250
- Jerry D. Gibson, Greg B. Haschke: Backward adaptive tree coding of speech at 16 kbps. 251-254
- Nader Moayeri, David L. Neuhoff: Decision trees for vector quantizer codebook searching. 255-258
- Fumio Amano, Kohei Iseda, Koji Okazaki, Shigeyuki Unagami: An 8 kbps TC-MQ (time domain compression ADPCM-MQ) speech codec. 259-262
- Bruce Fette, Wilburn Clark, Cynthia Jaskie, Michelle Tugenberg, William Yip: Experiments with a high quality, low complexity 4800 bps residual excited LPC (RELP) vocoder. 263-266
- Jan Robin Rohlicek, Yen-Lu Chow, Salim E. Roucos: Statistical language modeling using a small corpus from an application domain. 267-270
- Laura G. Miller, Stephen E. Levinson: Syntactic analysis for large vocabulary speech recognition using a context-free covering grammar. 271-274
- Wayne H. Ward, Alexander G. Hauptmann, Richard M. Stern, Thomas Chanak: Parsing spoken phrases despite missing words. 275-278
- L. Fissore, Pietro Laface, Giorgio Micca, Roberto Pieraccini: Interaction between fast lexical access and word verification in large vocabulary continuous speech recognition. 279-282
- Douglas B. Paul, Edward A. Martin: Speaker stress-resistant continuous speech recognition. 283-286
- Arnon Cohen, Dan E. Tamir: Selection of optimal features for phoneme recognition in different phonetic environments. 287-290
- Francis Kubala, Yen-Lu Chow, Alan Derr, Ming-Whei Feng, Owen Kimball, John Makhoul, Patti Price, Jan Robin Rohlicek, Salim E. Roucos, Richard M. Schwartz, Jeffrey Vandegrift: Continuous speech recognition results of the BYBLOS system on the DARPA 1000-word resource management database. 291-294
- Allen L. Gorin, David B. Roe: Parallel level-building on a tree machine [speech recognition]. 295-298
- David P. Morgan, Harvey F. Silverman: An event-synchronous signal processing system for connected-speech recognition. 299-302
- D. Bigorgne, Alain Cozannet, Marc Guyomard, Guy Mercier, Laurent Miclet, M. Querre, Jacques Siroux: A versatile speaker-dependant continuous speech understanding system. 303-306
- Werner Verhelst, Oscar Steenhaut: On short-time cepstra of voiced speech. 311-314
- Peter Kabal, Ravi Prakash Ramachandran: Joint solutions for formant and pitch predictors in speech processing. 315-318
- Mohamed Najim, M. Salhi, Driss Aboutajdine, A. Rajouani, M. Zyoute: Reconstruction of Arabic long vowels using time varying linear prediction technique. 319-322
- Eric P. Farges, Mark A. Clements: An analysis-synthesis hidden Markov model of speech. 323-326
- Toomas Altosaar, Matti Karjalainen: Event-based multiple-resolution analysis of speech signals. 327-330
- Bill J. Stanton, Leah H. Jamieson, George D. Allen: Acoustic-phonetic analysis of loud and Lombard speech in simulated cockpit conditions. 331-334
- James L. Lansford, Rao K. Yarlagadda: Adaptive Lp approach to speech coding. 335-338
- Per Hedelin: Phase compensation in all-pole speech analysis. 339-342
- Vladimir Goncharoff, Suresh Chandran: Adaptive speech modification by spectral warping. 343-346
- Jerome R. Bellegarda, David C. Farden: Continuously adaptive linear predictive coding of speech. 347-350
- Christophe d'Alessandro, Jean-Sylvain Liénard: Decomposition of the speech signal into short-time waveforms using spectral segmentation. 351-354
- Edward P. Neuburg: On estimating rate of change of pitch. 355-357
- Edward H. S. Chilton, Barry G. Evans: The spectral autocorrelation applied to the linear prediction residual of speech for robust pitch detection. 358-361
- T. D. Nguyen, James B. Ferguson III, T. W. McNamara: A geometric approach to real time pitch detection. 362-365
- C. S. Chen, Jing Yuan: A robust pitch boundary detector. 366-369
- Robert J. McAulay, Thomas F. Quatieri: Computationally efficient sine-wave synthesis and its application to sinusoidal transform coding. 370-373
- John C. Hardwick, Jae S. Lim: A 4.8 kbps multi-band excitation speech coder. 374-377
- David L. Thomson: Parametric models of the magnitude/phase spectrum for harmonic speech coding. 378-381
- Isabel Trancoso, Joaquim S. Rodrigues, Luís B. Almeida, Jorge S. Marques, António Joaquim Serralheiro, Diana Santos, José M. Tribolet: Quantization issues in harmonic coders [speech coding]. 382-385
- Schuyler Quackenbush: Hardware implementation of a 16 kbps subband coder using vector quantization. 386-389
- Kambiz Nayebi, Thomas P. Barnwell III, Mark J. T. Smith: Analysis of the self-excited subband coder: a new approach to medium band speech coding. 390-393
- Frank K. Soong, Biing-Hwang Juang: Optimal quantization of LSP parameters [speech coding]. 394-397
- Noboru Sugamura, Nariman Farvardin: Quantizer design in LSP speech analysis and synthesis. 398-401
- Mei Yong, Grant A. Davidson, Allen Gersho: Encoding of LPC spectral parameters using switched-adaptive interframe vector prediction [speech coding]. 402-405
- James D. Mills, James L. Melsa: Multiply occupied cells [speech codecs]. 406-409
- Chin-Hui Lee, Lawrence R. Rabiner: A network-based frame-synchronous level building algorithm for connected word recognition. 410-413
- L. Fissore, Egidio P. Giachin, Pietro Laface, Giorgio Micca, Roberto Pieraccini, Claudio Rullent: Experimental results on large-vocabulary continuous speech recognition and understanding. 414-417
- David Lubensky: Learning spectral-temporal dependencies using connectionist networks. 418-421
- Hong C. Leung, Victor W. Zue: Some phonetic recognition experiments using artificial neural nets. 422-425
- Richard Rohwer, Stephen Renals, Mark Terry: Unstable connectionist networks in speech recognition. 426-428
- James R. Glass, Victor W. Zue: Multi-level acoustic segmentation of continuous speech. 429-432
- Rajendra P. Mikkilineni, Jay G. Wilpon, Lawrence R. Rabiner: A procedure to generate training sequences for a connected word recognizer using the segmental k-means training algorithm. 433-436
- Hermann Ney, Andreas Noll: Phoneme modelling using continuous mixture densities. 437-440
- Steve J. Young, N. H. Russell, J. H. S. Thornton: Speech recognition in VODIS II. 441-444
- Frédéric Bimbot, Gérard Chollet, Paul Deléglise, Claude Montacié: Temporal decomposition and acoustic-phonetic decoding of speech. 445-448
- Benjamin Chigier, Robert A. Brennan: Broad class network generation using a combination of rules and statistics for speaker independent continuous speech. 449-452
- Ronald A. Cole, Lily Hou: Segmentation and broad classification of continuous speech. 453-456
- Melvyn J. Hunt: Evaluating the performance of connected-word speech recognition systems. 457-460
- Dominique Vicard: Transient part recognition for continuous speech using transition spotting. 461-464
- V. Ralph Algazi, Kathy L. Brown: Automatic speech recognition using acoustic sub-words and no time alignment. 465-468
- Alexander I. Rudnicky, Zongge Li, Joseph Polifroni, Eric H. Thayer, Julia L. Gale: An unanchored matching algorithm for lexical access. 469-472
- Aaron E. Rosenberg: Connected sentence recognition using diphone-like templates. 473-476
- S. C. Austin, Frank Fallside: Frame compression in hidden Markov models. 477-480
- Andrew Varga, Roger K. Moore, John S. Bridle, Keith Ponting, Martin J. Russell: Noise compensation algorithms for use with hidden Markov model based speech recognition. 481-484
- Kuldip K. Paliwal: A study of line spectrum pair frequencies for speech recognition. 485-488
- Lalit R. Bahl, Raimo Bakis, Peter V. de Souza, Robert L. Mercer: Obtaining candidate words by polling in a large vocabulary speech recognition system. 489-492
- Lalit R. Bahl, Peter F. Brown, Peter V. de Souza,