10th ICMI 2008: Chania, Crete, Greece
- Vassilios Digalakis, Alexandros Potamianos, Matthew A. Turk, Roberto Pieraccini, Yuri Ivanov:
Proceedings of the 10th International Conference on Multimodal Interfaces, ICMI 2008, Chania, Crete, Greece, October 20-22, 2008. ACM 2008, ISBN 978-1-60558-198-9
- Philip R. Cohen:
Natural interfaces in the field: the case of pen and paper. 1-2
Multimodal system evaluation (oral session)
- Tatiana Evreinova:
Manipulating trigonometric expressions encoded through electro-tactile signals. 3-8
- Manolis Perakakis, Alexandros Potamianos:
Multimodal system evaluation using modality efficiency and synergy metrics. 9-16
- Jérôme Simonin, Noëlle Carbonell, Danielle Pelé:
Effectiveness and usability of an online help agent embodied as a talking head. 17-20
- Chreston A. Miller, Ashley Robinson, Rongrong Wang, Pak Chung, Francis K. H. Quek:
Interaction techniques for the analysis of complex data on high-resolution displays. 21-28
Special session on social signal processing (oral session)
- Sarah Favre, Hugues Salamin, John Dines, Alessandro Vinciarelli:
Role recognition in multiparty recordings using social affiliation networks and discrete distributions. 29-36
- Stavros Petridis, Maja Pantic:
Audiovisual laughter detection based on temporal features. 37-44
- Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez:
Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues. 45-52
- Fabio Pianesi, Nadia Mana, Alessandro Cappelletti, Bruno Lepri, Massimo Zancanaro:
Multimodal recognition of personality traits in social interactions. 53-60
- Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard, Alex Pentland:
Social signals, their function, and automatic analysis: a survey. 61-68
Multimodal systems I (poster session)
- Susumu Harada, Jonathan Lester, Kayur Patel, T. Scott Saponas, James Fogarty, James A. Landay, Jacob O. Wobbrock:
VoiceLabel: using speech to label mobile sensor data. 69-76
- Jan Schehl, Alexander Pfalzgraf, Norbert Pfleger, Jochen Steigner:
The babbleTunes system: talk to your ipod! 77-80
- Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sascha Fagel, Sebastian Möller:
Evaluating talking heads for smart home systems. 81-84
- Teemu Tuomas Ahmaniemi, Vuokko Lantz, Juha Marila:
Perception of dynamic audiotactile feedback to gesture input. 85-92
- Madoka Miki, Chiyomi Miyajima, Takanori Nishino, Norihide Kitaoka, Kazuya Takeda:
An integrative recognition method for speech and gestures. 93-96
- Francis K. H. Quek, Roger W. Ehrich, Thurmon E. Lockhart:
As go the feet...: on the estimation of attentional focus from stance. 97-104
- Ali Choumane, Jacques Siroux:
Knowledge and data flow architecture for reference processing in multimodal dialog systems. 105-108
- Elise Arnaud, Heidi Christensen, Yan-Chen Lu, Jon Barker, Vasil Khalidov, Miles E. Hansard, Bertrand Holveck, Hervé Mathieu, Ramya Narasimha, Elise Taillant, Florence Forbes, Radu Horaud:
The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements. 109-116
- Li Li, Wu Chou:
Towards a minimalist multimodal dialogue framework using recursive MVC pattern. 117-120
- Andreas Ratzka:
Explorative studies on multimodal interaction in a PDA- and desktop-based scenario. 121-128
Multimodal system design and tools (oral session)
- Lode Vanacken, Joan De Boeck, Chris Raymaekers, Karin Coninx:
Designing context-aware multimodal virtual environments. 129-136
- Philip R. Cohen, Colin Swindells, Sharon L. Oviatt, Alexander M. Arthur:
A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces. 137-140
- Alexander Gruenstein, Ian McGraw, Ibrahim Badr:
The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces. 141-148
- Marcos Serrano, David Juras, Laurence Nigay:
A three-dimensional characterization space of software components for rapidly developing multimodal interfaces. 149-156
Multimodal interfaces I (oral session)
- Eve E. Hoggan, Topi Kaaresoja, Pauli Laitinen, Stephen A. Brewster:
Crossmodal congruence: the look, feel and sound of touchscreen widgets. 157-164
- Manuel Giuliani, Alois C. Knoll:
MultiML: a general purpose representation language for multimodal human utterances. 165-172
- Michael Voit, Rainer Stiefelhagen:
Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios. 173-180
- Louis-Philippe Morency, Iwan de Kok, Jonathan Gratch:
Context-based recognition during human interactions: automatic feature selection and encoding dictionary. 181-188
Demo session
- José Luis Hernandez-Rebollar, Ethar Ibrahim Elsakay, José D. Alanís-Urquieta:
AcceleSpell, a gestural interactive game to learn and practice finger spelling. 189-190
- Rajesh Balchandran, Mark E. Epstein, Gerasimos Potamianos, Ladislav Serédi:
A multi-modal spoken dialog system for interactive TV. 191-192
- David Juras, Laurence Nigay, Michael Ortega, Marcos Serrano:
Multimodal slideshow: demonstration of the openinterface interaction development environment. 193-194
- Kouichi Katsurada, Teruki Kirihata, Masashi Kudo, Junki Takada, Tsuneo Nitta:
A browser-based multimodal interaction system. 195-196
- Dominic W. Massaro, Miguel Á. Carreira-Perpiñán, David J. Merrill, Cass Sterling, Stephanie Bigler, Elise Piazza, Marcus Perlman:
IGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations. 197-198
- Jonas Beskow, Jens Edlund, Teodore Gjermani, Björn Granström, Joakim Gustafson, Oskar Jonsson, Gabriel Skantze, Helena Tobiasson:
Innovative interfaces in MonAMI: the reminder. 199-200
- Jonathan Padilla San Diego, Alastair Barrow, Margaret J. Cox, William S. Harwin:
PHANTOM prototype: exploring the potential for learning with multimodal features in dentistry. 201-202
- George Drettakis:
Audiovisual 3d rendering as a tool for multimodal interfaces. 203-204
Multimodal interfaces II (oral session)
- David Damm, Christian Fremerey, Frank Kurth, Meinard Müller, Michael Clausen:
Multimodal presentation and browsing of music. 205-208
- Delphine Devallez, Federico Fontana, Davide Rocchesso:
An audio-haptic interface based on auditory depth cues. 209-216
- Vasil Khalidov, Florence Forbes, Miles E. Hansard, Elise Arnaud, Radu Horaud:
Detection and localization of 3d audio-visual objects using unsupervised clustering. 217-224
- Srinivas Bangalore, Michael Johnston:
Robust gesture processing for multimodal interaction. 225-232
Multimodal modelling (oral session)
- Hayley Hung, Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez:
Investigating automatic dominance estimation in groups from visual attention and speaking activity. 233-236
- Mihai Gurban, Jean-Philippe Thiran, Thomas Drugman, Thierry Dutoit:
Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition. 237-240
- Roel Vertegaal:
A Fitts Law comparison of eye tracking and manual input in the selection of visual targets. 241-248
- Minkyung Lee, Mark Billinghurst:
A Wizard of Oz study for an AR multimodal interface. 249-256
Multimodal systems II (poster session)
- Kazuhiro Otsuka, Shoko Araki, Kentaro Ishizuka, Masakiyo Fujimoto, Martin Heinrich, Junji Yamato:
A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization. 257-264
- Saija Lemmelä, Akos Vetek, Kaj Mäkelä, Dari Trendafilov:
Designing and evaluating multimodal interaction for mobile contexts. 265-272
- Rana El Kaliouby, Mina Mikhail:
Automated sip detection in naturally-evoked video. 273-280
- Toni Pakkanen, Jani Lylykangas, Jukka Raisamo, Roope Raisamo, Katri Salminen, Jussi Rantala, Veikko Surakka:
Perception of low-amplitude haptic stimuli when biking. 281-284
- Muhammad Tahir, Gilles Bailly, Eric Lecolinet, Gérard Mouret:
TactiMote: a tactile remote control for navigating in long lists. 285-288
- Jörn Anemüller, Jörg-Hendrik Bach, Barbara Caputo, Michal Havlena, Jie Luo, Hendrik Kayser, Bastian Leibe, Petr Motlícek, Tomás Pajdla, Misha Pavel, Akihiko Torii, Luc Van Gool, Alon Zweig, Hynek Hermansky:
The DIRAC AWEAR audio-visual platform for detection of unexpected and incongruent events. 289-292
- Kotaro Funakoshi, Kazuki Kobayashi, Mikio Nakano, Seiji Yamada, Yasuhiko Kitamura, Hiroshi Tsujino:
Smoothing human-robot speech interactions by using a blinking-light as subtle expression. 293-296
- Emilia Koskinen, Topi Kaaresoja, Pauli Laitinen:
Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button. 297-304
- Álvaro Hernández Trapote, Beatriz López-Mencía, David Díaz Pardo de Vera, Rubén Fernández Pozo, Javier Caminero:
Embodied conversational agents for voice-biometric interfaces. 305-312