- Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache: Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. ICMI Companion 2023: 228-237
- Théo Deschamps-Berger, Lori Lamel, Laurence Devillers: Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations. ICMI Companion 2023: 337-343
- Abhinav Dhall, Monisha Singh, Roland Goecke, Tom Gedeon, Donghuo Zeng, Yanan Wang, Kazushi Ikeda: EmotiW 2023: Emotion Recognition in the Wild Challenge. ICMI 2023: 746-749
- Steve DiPaola, Suk Kyoung Choi: Art creation as an emergent multimodal journey in Artificial Intelligence latent space. ICMI Companion 2023: 247-253
- Steve DiPaola, Meehae Song: Combining Artificial Intelligence, Bio-Sensing and Multimodal Control for Bio-Responsive Interactives. ICMI Companion 2023: 318-322
- Annika Dix, Clarissa Sabrina Arlinghaus, A. Marie Harkin, Sebastian Pannasch: The Role of Audiovisual Feedback Delays and Bimodal Congruency for Visuomotor Performance in Human-Machine Interaction. ICMI 2023: 555-563
- Emily Doherty, Cara A. Spencer, Lucca Eloy, Nitin Kumar, Rachel Dickler, Leanne M. Hirshfield: Using Speech Patterns to Model the Dimensions of Teamness in Human-Agent Teams. ICMI 2023: 640-648
- Cecilia Domingo: Recording multimodal pair-programming dialogue for reference resolution by conversational agents. ICMI 2023: 731-735
- Daksitha Senel Withanage Don, Philipp Müller, Fabrizio Nunnari, Elisabeth André, Patrick Gebhard: ReNeLiB: Real-time Neural Listening Behavior Generation for Socially Interactive Agents. ICMI 2023: 507-516
- Metehan Doyran, Ronald Poppe, Albert Ali Salah: Embracing Contact: Detecting Parent-Infant Interactions. ICMI 2023: 198-206
- Bernd Dudzik, Tiffany Matej Hrkalovic, Dennis Küster, David St-Onge, Felix Putze, Laurence Devillers: The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild (MSECP-Wild). ICMI 2023: 828-829
- Gauthier Robert Jean Faisandaz, Alix Goguey, Christophe Jouffrais, Laurence Nigay: µGeT: Multimodal eyes-free text selection technique combining touch interaction and microgestures. ICMI 2023: 594-603
- Siska Fitrianie, Iulia Lefter: On Head Motion for Recognizing Aggression and Negative Affect during Speaking and Listening. ICMI 2023: 455-464
- Jack Fitzgerald, Ethan Seefried, James E. Yost, Sangmi Pallickara, Nathaniel Blanchard: Paying Attention to Wildfire: Using U-Net with Attention Blocks on Multimodal Data for Next Day Prediction. ICMI 2023: 470-480
- Yann Frachi, Guillaume Chanel, Mathieu Barthet: Affective gaming using adaptive speed controlled by biofeedback. ICMI Companion 2023: 238-246
- Olga V. Frolova, Aleksandr Nikolaev, Platon Grave, Elena E. Lyakso: Speech Features of Children with Mild Intellectual Disabilities. ICMI Companion 2023: 406-413
- Joan Fruitet, Mélodie Fouillen, Valentine Facque, Hanna Chainay, Stéphanie De Chalvron, Franck Tarpin-Bernard: Engaging with an embodied conversational agent in a computerized cognitive training: an acceptability study with the elderly. ICMI Companion 2023: 359-362
- Jia Fu, Jiarui Tan, Wenjie Yin, Sepideh Pashami, Mårten Björkman: Component attention network for multimodal dance improvisation recognition. ICMI 2023: 114-118
- Monika Gahalawat: Explainable Depression Detection using Multimodal Behavioural Cues. ICMI 2023: 721-725
- Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan Subramanian, Roland Goecke: Explainable Depression Detection via Head Motion Patterns. ICMI 2023: 261-270
- Martina Galletti, Eleonora Pasqua, Francesca Bianchi, Manuela Calanca, Francesca Padovani, Daniele Nardi, Donatella Tomaiuoli: A Reading Comprehension Interface for Students with Learning Disorders. ICMI Companion 2023: 282-287
- Yingxue Gao, Huan Zhao, Yufeng Xiao, Zixing Zhang: GCFormer: A Graph Convolutional Transformer for Speech Emotion Recognition. ICMI 2023: 307-313
- Sushant Gautam: Bridging Multimedia Modalities: Enhanced Multimodal AI Understanding and Intelligent Agents. ICMI 2023: 695-699
- Tan Gemicioglu, R. Michael Winters, Yu-Te Wang, Thomas M. Gable, Ivan J. Tashev: TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices. ICMI 2023: 564-573
- Cristina Gena, Francesca Manini, Antonio Lieto, Alberto Lillo, Fabiana Vernero: Can empathy affect the attribution of mental states to robots? ICMI 2023: 94-103
- Setareh Nasihati Gilani, Kimberly A. Pollard, David R. Traum: Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions. ICMI Companion 2023: 71-75
- Alina Glushkova, Dimitrios Makrygiannis, Sotirios Manitsaris: Embodied edutainment experience in a museum: discovering glass-blowing gestures. ICMI Companion 2023: 288-291
- Amr Gomaa, Michael Feld: Towards Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems. ICMI 2023: 689-694
- Kaan Gönç, Baturay Saglam, Onat Dalmaz, Tolga Çukur, Suleyman Serdar Kozat, Hamdi Dibeklioglu: User Feedback-based Online Learning for Intent Classification. ICMI 2023: 613-621
- Andrey Goncharov, Özge Nilay Yalçin, Steve DiPaola: Expectations vs. Reality: The Impact of Adaptation Gap on Avatars in Social VR Platforms. ICMI Companion 2023: 146-153