AVSP 2009: Norwich, UK
- Barry-John Theobald, Richard W. Harvey: Auditory-Visual Speech Processing, AVSP 2009, Norwich, UK, September 10-13, 2009. ISCA 2009
- Lisette Mol, Emiel Krahmer, Marc Swerts: Alignment in iconic gestures: does it make sense? 3-8
- Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki: Aging effect on audio-visual speech asynchrony perception: comparison of time-expanded speech and a moving image of a talker's face. 9-12
- Piero Cosi, Graziano Tisato: LW2a: an easy tool to transform voice WAV files into talking animations. 13-17
- Sascha Fagel: Effects of smiled speech on lips, larynx and acoustics. 18-21
- Alexandra Jesse, Esther Janse: Visual speech information aids elderly adults in stream segregation. 22-27
- Fiona Kyle, Mairéad MacSweeney, Tara Mohammed, Ruth Campbell: The development of speechreading in deaf and hearing children: introducing a new test of child speechreading (ToCS). 28-31
- Girija Chetty, Roland Göcke, Michael Wagner: Audio-visual mutual dependency models for biometric liveness checks. 32-37
- Satoko Hisanaga, Kaoru Sekiyama, Tomohiko Igasaki, Nobuki Murayama: Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials. 38-42
- Samer Al Moubayed, Jonas Beskow: Effects of visual prominence cues on speech intelligibility. 43-46
- Wesley Mattheyses, Lukas Latacz, Werner Verhelst: Multimodal coherency issues in designing and optimizing audiovisual speech synthesis techniques. 47-53
- Sanaul Haq, Philip J. B. Jackson: Speaker-dependent audio-visual emotion recognition. 53-58
- Natalie A. Phillips, Shari R. Baum, Vanessa Taler: Audio-visual speech perception in mild cognitive impairment and healthy elderly controls. 59-64
- Takaaki Kuratate, Kathryn Ayers, Jeesun Kim, Marcia Riley, Denis Burnham: Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of synthetic humans. 65-69
- Christian Kroos, Katherine Hogan: Visual influence on auditory perception: is speech special? 70-75
- Marion Coulon, Bahia Guellaï, Arlette Streri: Auditory-visual perception of talking faces at birth: a new paradigm. 76-79
- Cong-Thanh Do, Abdeldjalil Aïssa-El-Bey, Dominique Pastor, André Goalic: Area of mouth opening estimation from speech acoustics using blind deconvolution technique. 80-85
- Sarah Hilder, Richard W. Harvey, Barry-John Theobald: Comparison of human and machine-based lip-reading. 86-89
- Marieke Hoetjes, Emiel Krahmer, Marc Swerts: Untying the knot between gestures and speech. 90-95
- Olov Engwall, Preben Wik: Can you tell if tongue movements are real or synthesized? 96-101
- Yuxuan Lan, Richard W. Harvey, Barry-John Theobald, Eng-Jon Ong, Richard Bowden: Comparing visual features for lipreading. 102-106
- Takaaki Shochi, Kaoru Sekiyama, Nicole Lees, Mark Boyce, Roland Göcke, Denis Burnham: Auditory-visual infant directed speech in Japanese and English. 107-112
- Akihiro Tanaka, Kaori Asakawa, Hisato Imai: Recalibration of audiovisual simultaneity in speech. 113-116
- Dorothea Kolossa, Steffen Zeiler, Alexander Vorwerk, Reinhold Orglmeister: Audiovisual speech recognition with missing or unreliable data. 117-122
- Axel H. Winneke, Natalie A. Phillips: Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception. 123-126
- Jana Eger, Hans-Heinrich Bothe: Strategies and results for the evaluation of the naturalness of the LIPPS facial animation system. 127-129
- Chris Davis, Jeesun Kim: Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues. 130-133
- Ibrahim Almajai, Ben Milner: Effective visually-derived Wiener filtering for audio-visual speech processing. 134-139
- Aymeric Devergie, Frédéric Berthommier, Nicolas Grimault: Pairing audio speech and various visual displays: binding or not binding? 140-146
- Charlotte Wollermann, Bernhard Schröder: Effects of exhaustivity and uncertainty on audiovisual focus production. 145-150
- Shin'ichi Takeuchi, Takashi Hashiba, Satoshi Tamura, Satoru Hayamizu: Voice activity detection based on fusion of audio and visual information. 151-154
- Samuel Pachoud, Shaogang Gong, Andrea Cavallaro: Space-time audio-visual speech recognition with multiple multi-class probabilistic support vector machines. 155-160
- Zdenek Krnoul: Refinement of lip shape in sign speech synthesis. 161-165
- Kang Liu, Jörn Ostermann: An image-based talking head system. 166
- Zdenek Krnoul, Milos Zelezný: The UWB 3D talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge. 167-168
- Jonas Beskow, Giampiero Salvi, Samer Al Moubayed: Synface - verbal and non-verbal face animation from audio. 169
- Lijuan Wang, Wei Han, Xiaojun Qian, Frank K. Soong: HMM-based motion trajectory generation for speech animation synthesis. 170