Search dblp
Full-text search
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
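The query operators above can be combined programmatically. A minimal sketch, assuming dblp's public publication-search endpoint (`https://dblp.org/search/publ/api`) and its `q`, `h`, and `format` parameters as documented in the dblp F.A.Q. — verify these against the current API description before relying on them:

```python
# Sketch: building dblp full-text search URLs using the operators above
# (prefix match by default, `$` for exact word, space for AND, `|` for OR).
# The endpoint and parameter names are taken from dblp's public search API;
# treat them as an assumption to check against the dblp F.A.Q.
from urllib.parse import urlencode

DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def dblp_search_url(query: str, hits: int = 30, fmt: str = "json") -> str:
    """Build a dblp publication-search URL for the given query string."""
    params = {"q": query, "h": hits, "format": fmt}
    return f"{DBLP_PUBL_API}?{urlencode(params)}"

# prefix search: matches "graph", "graphics", "graphical", ...
url_prefix = dblp_search_url("graph")
# exact word: graph$ matches "graph" but not "graphics"
url_exact = dblp_search_url("graph$")
# boolean AND (space between words) combined with OR (|)
url_bool = dblp_search_url("codd model|relational")
```

Note that `urlencode` percent-encodes the operator characters (`$` becomes `%24`, `|` becomes `%7C`); the dblp server decodes them back before interpreting the query.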
Author search results
no matches
Venue search results
no matches
Publication search results
found 2,605 matches
- 2023
- Konstantin Kuznetsov, Michael Barz, Daniel Sonntag:
  Detection of contract cheating in pen-and-paper exams through the analysis of handwriting style. ICMI Companion 2023: 26-30
- Rajagopal A., Nirmala V., Immanuel Johnraja Jebadurai, Arun Muthuraj Vedamanickam, Prajakta Uthaya Kumar:
  Design of Generative Multimodal AI Agents to Enable Persons with Learning Disability. ICMI Companion 2023: 259-271
- Romina Abadi, Laurie M. Wilcox, Robert Scott Allison:
  Using Augmented Reality to Assess the Role of Intuitive Physics in the Water-Level Task. ICMI 2023: 622-630
- Muneeb Ahmad, Abdullah Alzahrani:
  Crucial Clues: Investigating Psychophysiological Behaviors for Measuring Trust in Human-Robot Interaction. ICMI 2023: 135-143
- Tamim Ahmed, Thanassis Rikakis, Aisling Kelliher, Mohammad Soleymani:
  ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors. ICMI Companion 2023: 11-15
- Pepijn Van Aken, Merel M. Jung, Werner Liebregts, Itir Önal Ertugrul:
  Deciphering Entrepreneurial Pitches: A Multimodal Deep Learning Approach to Predict Probability of Investment. ICMI 2023: 144-152
- Nada Alalyani, Nikhil Krishnaswamy:
  A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. ICMI Companion 2023: 164-173
- Arnaud Allemang-Trivalle:
  Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing. ICMI 2023: 716-720
- Sean Andrist, Dan Bohus, Zongjian Li, Mohammad Soleymani:
  Platform for Situated Intelligence and OpenSense: A Tutorial on Building Multimodal Interactive Applications for Research. ICMI Companion 2023: 105-106
- Marjorie Armando, Isabelle Régner, Magalie Ochs:
  Toward a Tool Against Stereotype Threat in Math: Children's Perceptions of Virtual Role Models. ICMI Companion 2023: 306-310
- Anderson Augusma, Dominique Vaufreydaz, Frédérique Letué:
  Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features. ICMI 2023: 750-754
- Julia Ayache, Marta Bienkiewicz, Kathleen Richardson, Benoît G. Bardy:
  eXtended Reality of socio-motor interactions: Current Trends and Ethical Considerations for Mixed Reality Environments Design. ICMI Companion 2023: 154-158
- Aswin Balasubramaniam:
  Come Fl.. Run with Me: Understanding the Utilization of Drones to Support Recreational Runners' Well Being. ICMI 2023: 700-705
- Alisa Barkar, Mathieu Chollet, Béatrice Biancardi, Chloé Clavel:
  Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking. ICMI Companion 2023: 51-55
- Fábio Barros, António J. S. Teixeira, Samuel S. Silva:
  Developing a Generic Focus Modality for Multimodal Interactive Environments. ICMI Companion 2023: 31-35
- Eleonora Aida Beccaluva, Marta Curreri, Giulia Da Lisca, Pietro Crovari:
  Using Implicit Measures to Assess User Experience in Children: A Case Study on the Application of the Implicit Association Test (IAT). ICMI Companion 2023: 272-281
- Marilou Beyeler, Yi Fei Cheng, Christian Holz:
  Cross-Device Shortcuts: Seamless Attention-guided Content Transfer via Opportunistic Deep Links between Apps and Devices. ICMI 2023: 125-134
- Maneesh Bilalpur, Saurabh Hinduja, Laura A. Cariola, Lisa Sheeber, Nicholas B. Allen, Louis-Philippe Morency, Jeffrey F. Cohn:
  SHAP-based Prediction of Mother's History of Depression to Understand the Influence on Child Behavior. ICMI 2023: 537-544
- Auriane Boudin, Roxane Bertrand, Stéphane Rauzy, Matthis Houlès, Thierry Legou, Magalie Ochs, Philippe Blache:
  SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. ICMI Companion 2023: 344-352
- Jeffrey A. Brooks, Vineet Tiruvadi, Alice Baird, Panagiotis Tzirakis, Haoqi Li, Chris Gagne, Moses Oh, Alan Cowen:
  Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions. ICMI Companion 2023: 353-358
- Yekta Said Can, Elisabeth André:
  Performance Exploration of RNN Variants for Recognizing Daily Life Stress Levels by Using Multimodal Physiological Signals. ICMI 2023: 481-487
- Alexander Cao, Jean Utke, Diego Klabjan:
  Early Classifying Multimodal Sequences. ICMI 2023: 183-189
- Fabio Catania, Tanya Talkar, Franca Garzotto, Benjamin R. Cowan, Thomas F. Quatieri, Satrajit S. Ghosh:
  Multimodal Conversational Agents for People with Neurodevelopmental Disorders. ICMI 2023: 824-825
- Eleonora Ceccaldi, Béatrice Biancardi, Sara Falcone, Silvia Ferrando, Geoffrey Gorisse, Thomas Janssoone, Anna Martin Coesel, Pierre Raimbaud:
  ACE: how Artificial Character Embodiment shapes user behaviour in multi-modal interaction. ICMI 2023: 818-819
- Sutirtha Chakraborty, Joseph Timoney:
  Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues. ICMI Companion 2023: 76-80
- Saikat Chakraborty, Noble Thomas, Anup Nandy:
  Gait Event Prediction of People with Cerebral Palsy using Feature Uncertainty: A Low-Cost Approach. ICMI 2023: 301-306
- Ankur Chemburkar, Shuhong Lu, Andrew Feng:
  Discrete Diffusion for Co-Speech Gesture Synthesis. ICMI Companion 2023: 186-192
- Kapotaksha Das, Mohamed Abouelenien, Mihai G. Burzo, John Elson, Kwaku O. Prakah-Asante, Clay Maranville:
  Towards Autonomous Physiological Signal Extraction From Thermal Videos Using Deep Learning. ICMI 2023: 584-593
- Armand Deffrennes, Lucile Vincent, Marie Pivette, Kevin El Haddad, Jacqueline Deanna Bailey, Monica Perusquía-Hernández, Soraia M. Alarcão, Thierry Dutoit:
  The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. ICMI Companion 2023: 81-85
- Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow:
  Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation. ICMI 2023: 755-762
skipping 2,575 more matches
retrieved on 2024-04-25 22:47 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license