Publication search results
found 27 matches
2010
- Abdul Rehman Abbasi, Matthew N. Dailey, Nitin V. Afzulpurkar, Takeaki Uno: Student mental state inference from unintentional body gestures using dynamic Bayesian networks. J. Multimodal User Interfaces 3(1-2): 21-31 (2010)
- Diego Arnone, Alessandro Rossi, Massimo Bertoncini: An open source integrated framework for rapid prototyping of multimodal affective applications in digital entertainment. J. Multimodal User Interfaces 3(3): 227-236 (2010)
- Lynne Baillie, Lee Morton, Stephen Uzor, David C. Moffatt: An investigation of user responses to specifically designed activities in a multimodal location based game. J. Multimodal User Interfaces 3(3): 179-188 (2010)
- Birgitta Burger, Roberto Bresin: Communication of musical expression by means of mobile robot gestures. J. Multimodal User Interfaces 3(1-2): 109-118 (2010)
- George Caridakis, Kostas Karpouzis, Manolis Wallace, Loïc Kessous, Noam Amir: Multimodal user's affective state analysis in naturalistic interaction. J. Multimodal User Interfaces 3(1-2): 49-66 (2010)
- Ginevra Castellano, Kostas Karpouzis, Christopher E. Peters, Jean-Claude Martin: Special issue on real-time affect analysis and interpretation: closing the affective loop in virtual agents and robots. J. Multimodal User Interfaces 3(1-2): 1-3 (2010)
- Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva, Peter W. McOwan: Affect recognition for interactive companions: challenges and design in real world scenarios. J. Multimodal User Interfaces 3(1-2): 89-98 (2010)
- Luca Chittaro: Distinctive aspects of mobile interaction and their implications for the design of multimodal interfaces. J. Multimodal User Interfaces 3(3): 157-165 (2010)
- Bruno Dumas, Denis Lalanne, Rolf Ingold: Description languages for multimodal interaction: a set of guidelines and its illustration with SMUIML. J. Multimodal User Interfaces 3(3): 237-247 (2010)
- Florian Eyben, Martin Wöllmer, Alex Graves, Björn W. Schuller, Ellen Douglas-Cowie, Roddy Cowie: On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues. J. Multimodal User Interfaces 3(1-2): 7-19 (2010)
- Dennis Hofs, Mariët Theune, Rieks op den Akker: Natural interaction with a virtual guide in a virtual environment. J. Multimodal User Interfaces 3(1-2): 141-153 (2010)
- Loïc Kessous, Ginevra Castellano, George Caridakis: Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. J. Multimodal User Interfaces 3(1-2): 33-48 (2010)
- Werner A. König, Roman Rädle, Harald Reiterer: Interactive design of multimodal user interfaces. J. Multimodal User Interfaces 3(3): 197-213 (2010)
- Pieter-Jan Maes, Marc Leman, Micheline Lesaffre, Michiel Demey, Dirk Moelants: From expressive gesture to sound. J. Multimodal User Interfaces 3(1-2): 67-78 (2010)
- Marilyn Rose McGee-Lennon, Laurence Nigay, Philip D. Gray: The challenges of engineering multimodal interaction. J. Multimodal User Interfaces 3(3): 155-156 (2010)
- Nicole Novielli: HMM modeling of user engagement in advice-giving dialogues. J. Multimodal User Interfaces 3(1-2): 131-140 (2010)
- Christopher E. Peters, Stylianos Asteriadis, Kostas Karpouzis: Investigating shared attention with a virtual agent using a gaze-based interface. J. Multimodal User Interfaces 3(1-2): 119-130 (2010)
- Isabella Poggi, Francesca D'Errico: The mental ingredients of bitterness. J. Multimodal User Interfaces 3(1-2): 79-86 (2010)
- Andrew Ramsay, Marilyn Rose McGee-Lennon, Graham A. Wilson, Steven J. Gray, Philip D. Gray, François De Turenne: Tilt and go: exploring multimodal mobile maps in the field. J. Multimodal User Interfaces 3(3): 167-177 (2010)
- Laurel D. Riek, Philip C. Paul, Peter Robinson: When my robot smiles at me: Enabling human-robot rapport via real-time head gesture mimicry. J. Multimodal User Interfaces 3(1-2): 99-108 (2010)
- Guillaume Rivière, Nadine Couture, Patrick Reuter: The activation of modality in virtual objects assembly. J. Multimodal User Interfaces 3(3): 189-196 (2010)
- Marcos Serrano, Laurence Nigay: A wizard of oz component-based approach for rapidly prototyping and testing input multimodal interfaces. J. Multimodal User Interfaces 3(3): 215-225 (2010)

2009
- Stéphanie Buisine, Yun Wang, Ouriel Grynszpan: Empirical investigation of the temporal relations between speech and facial expressions of emotion. J. Multimodal User Interfaces 3(4): 263-270 (2009)
- Maurizio Mancini, Catherine Pelachaud: Generating distinctive behavior for Embodied Conversational Agents. J. Multimodal User Interfaces 3(4): 249-261 (2009)
- Samer Al Moubayed, Jonas Beskow, Björn Granström: Auditory visual prominence. J. Multimodal User Interfaces 3(4): 299-309 (2009)
- David Díaz Pardo de Vera, Beatriz López-Mencía, Álvaro Hernández Trapote, Luis A. Hernández Gómez: Non-verbal communication strategies to improve robustness in dialogue systems: a comparative study. J. Multimodal User Interfaces 3(4): 285-297 (2009)
- Herwin van Welbergen, Dennis Reidsma, Zsófia Ruttkay, Job Zwiers: Elckerlyc. J. Multimodal User Interfaces 3(4): 271-284 (2009)
retrieved on 2024-05-16 17:53 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license