Search dblp
Full-text search
Please enter a search query
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search (append dollar sign ($) to word): e.g., graph$ matches "graph", but not "graphics"
- boolean and (separate words by space): e.g., codd model
- boolean or (connect words by pipe symbol (|)): e.g., graph|network

(A scripted example of these operators follows the update note below.)
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
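The same operators also apply when querying dblp programmatically. A minimal sketch in Python, assuming the public publication-search endpoint (https://dblp.org/search/publ/api) and its q/format/h parameters as described in the dblp FAQ; the JSON response shape is an assumption to verify before use:

```python
# Minimal sketch: run a dblp full-text query from a script.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def search_dblp(query, hits=30):
    """Return publication hits for a query such as 'graph$ network|topology'."""
    params = urlencode({"q": query, "format": "json", "h": hits})
    with urlopen(f"{DBLP_PUBL_API}?{params}") as resp:
        data = json.load(resp)
    # "hit" is absent when the query has no matches
    return data["result"]["hits"].get("hit", [])

# boolean and (space), boolean or (|), and exact-word ($) combine freely:
for hit in search_dblp("multimodal user interfaces"):
    print(hit["info"].get("title"))
```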
Author search results
no matches
Venue search results
no matches
Refine list
- refine by author: temporarily not available
- refine by venue: temporarily not available
- refine by type: temporarily not available
- refine by access: temporarily not available
- refine by year: temporarily not available
Publication search results
found 29 matches
- 2020
- Ahmed Housni Alsswey, Hosam Al-Samarraie: Elderly users' acceptance of mHealth user interface (UI) design-based culture: the moderator role of age. J. Multimodal User Interfaces 14(1): 49-59 (2020)
- Mriganka Biswas, Marta Romeo, Angelo Cangelosi, Ray Jones: Are older people any different from younger people in the way they want to interact with robots? Scenario based survey. J. Multimodal User Interfaces 14(1): 61-72 (2020)
- Andrea Lorena Aldana Blanco, Steffen Grautoff, Thomas Hermann: ECG sonification to support the diagnosis and monitoring of myocardial infarction. J. Multimodal User Interfaces 14(2): 207-218 (2020)
- Aditya Tirumala Bukkapatnam, Philippe Depalle, Marcelo M. Wanderley: Defining a vibrotactile toolkit for digital musical instruments: characterizing voice coil actuators, effects of loading, and equalization of the frequency response. J. Multimodal User Interfaces 14(3): 285-301 (2020)
- Austin Erickson, Nahal Norouzi, Kangsoo Kim, Ryan Schubert, Jonathan Jules, Joseph J. LaViola, Gerd Bruder, Gregory F. Welch: Sharing gaze rays for visual target identification tasks in collaborative augmented reality. J. Multimodal User Interfaces 14(4): 353-371 (2020)
- Katharina Groß-Vogt, Matthias Frank, Robert Höldrich: Focused Audification and the optimization of its parameters. J. Multimodal User Interfaces 14(2): 187-198 (2020)
- Myounghoon Jeon, Areti Andreopoulou, Brian F. G. Katz: Auditory displays and auditory user interfaces: art, design, science, and research. J. Multimodal User Interfaces 14(2): 139-141 (2020)
- Hayoung Jeong, Taeho Kang, Jiwon Choi, Jong Kim: A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality. J. Multimodal User Interfaces 14(1): 123-137 (2020)
- Seungwon Kim, Mark Billinghurst, Kangsoo Kim: Multimodal interfaces and communication cues for remote collaboration. J. Multimodal User Interfaces 14(4): 313-319 (2020)
- Seungwon Kim, Gun A. Lee, Mark Billinghurst, Weidong Huang: The combination of visual communication cues in mixed reality remote collaboration. J. Multimodal User Interfaces 14(4): 321-335 (2020)
- Steven Landry, Myounghoon Jeon: Interactive sonification strategies for the motion and emotion of dance performances. J. Multimodal User Interfaces 14(2): 167-186 (2020)
- James Leonard, Jérôme Villeneuve, Alexandros Kontogeorgakopoulos: Multisensory instrumental dynamics as an emergent paradigm for digital musical creation. J. Multimodal User Interfaces 14(3): 235-253 (2020)
- Vincenzo Lussu, Radoslaw Niewiadomski, Gualtiero Volpe, Antonio Camurri: The role of respiration audio in multimodal analysis of movement qualities. J. Multimodal User Interfaces 14(1): 1-15 (2020)
- Charlotte Magnusson, Kirsten Rassmus-Gröhn, Bitte Rydeman: Developing a mobile activity game for stroke survivors - lessons learned. J. Multimodal User Interfaces 14(3): 303-312 (2020)
- Justin Mathew, Stéphane Huot, Brian F. G. Katz: Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring. J. Multimodal User Interfaces 14(1): 83-100 (2020)
- Jindrich Matousek, Zdenek Krnoul, Michal Campr, Zbynek Zajíc, Zdenek Hanzlícek, Martin Gruber, Marie Kocurová: Speech and web-based technology to enhance education for pupils with visual impairment. J. Multimodal User Interfaces 14(2): 219-230 (2020)
- Sebastian Merchel, Mehmet Ercan Altinsoy: Psychophysical comparison of the auditory and tactile perception: a survey. J. Multimodal User Interfaces 14(3): 271-283 (2020)
- Joseph W. Newbold, Nicolas E. Gold, Nadia Bianchi-Berthouze: Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications. J. Multimodal User Interfaces 14(2): 153-166 (2020)
- Rafael N. C. Patrick, Tomasz R. Letowski, Maranda E. McBride: A multimodal auditory equal-loudness comparison of air and bone conducted sounds. J. Multimodal User Interfaces 14(2): 199-206 (2020)
- Thomas Pietrzak, Marcelo M. Wanderley: Haptic and audio interaction design. J. Multimodal User Interfaces 14(3): 231-233 (2020)
- Yuri De Pra, Stefano Papetti, Federico Fontana, Hanna Järveläinen, Michele Simonato: Tactile discrimination of material properties: application to virtual buttons for professional appliances. J. Multimodal User Interfaces 14(3): 255-269 (2020)
- Gowdham Prabhakar, Aparna Ramakrishnan, Modiksha Madan, L. R. D. Murthy, Vinay Krishna Sharma, Sachin Deshmukh, Pradipta Biswas: Interactive gaze and finger controlled HUD for cars. J. Multimodal User Interfaces 14(1): 101-121 (2020)
- Stephen Roddy, Brian Bridges: Mapping for meaning: the embodied sonification listening model and its implications for the mapping problem in sonic information design. J. Multimodal User Interfaces 14(2): 143-151 (2020)
- David Rudi, Peter Kiefer, Ioannis Giannopoulos, Martin Raubal: Gaze-based interactions in the cockpit of the future: a survey. J. Multimodal User Interfaces 14(1): 25-48 (2020)
- Hiroki Tanaka, Hidemi Iwasaka, Hideki Negoro, Satoshi Nakamura: Analysis of conversational listening skills toward agent-based social skills training. J. Multimodal User Interfaces 14(1): 73-82 (2020)
- Theophilus Teo, Mitchell Norman, Gun A. Lee, Mark Billinghurst, Matt Adcock: Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration. J. Multimodal User Interfaces 14(4): 373-385 (2020)
- Wei Wei, Qingxuan Jia, Yongli Feng, Gang Chen, Ming Chu: Multi-modal facial expression feature based on deep-neural networks. J. Multimodal User Interfaces 14(1): 17-23 (2020)
- Jing Yang, Prasanth Sasikumar, Huidong Bai, Amit Barde, Gábor Sörös, Mark Billinghurst: The effects of spatial auditory and visual cues on mixed reality remote collaboration. J. Multimodal User Interfaces 14(4): 337-352 (2020)
- Jianlong Zhou, Simon Luo, Fang Chen: Effects of personality traits on user trust in human-machine collaborations. J. Multimodal User Interfaces 14(4): 387-400 (2020)
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt in for them to become active. All settings here will be stored as cookies in your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
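For illustration, the lookup the browser performs when this option is enabled can also be scripted. A minimal sketch, assuming Unpaywall's documented v2 REST endpoint and field names; the DOI and e-mail address below are placeholders:

```python
# Minimal sketch: ask Unpaywall for an open access copy of a DOI.
# Endpoint and field names follow the Unpaywall v2 documentation;
# treat them as assumptions to verify before relying on this.
import json
from urllib.request import urlopen

def best_oa_url(doi, email):
    """Return the best open access URL for a DOI, or None if none is known."""
    with urlopen(f"https://api.unpaywall.org/v2/{doi}?email={email}") as resp:
        record = json.load(resp)
    loc = record.get("best_oa_location")  # null when no OA copy is known
    return (loc.get("url_for_pdf") or loc.get("url")) if loc else None

# Placeholder DOI and contact address, for illustration only:
print(best_oa_url("10.1234/example-doi", "you@example.org"))
```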
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
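Again for illustration only, the availability check behind this option can be reproduced against archive.org's public availability endpoint (response shape per its documentation; verify before use):

```python
# Minimal sketch: check the Wayback Machine for an archived copy of a URL.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url):
    """Return the URL of the closest archived snapshot, or None."""
    query = urlencode({"url": url})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

print(closest_snapshot("http://example.com/"))
```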
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
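As a sketch of what such a reference lookup involves, the Crossref works route can be queried directly. Route and field names are taken from the Crossref REST API documentation; the DOI below is a placeholder, and the "reference" field is only present when the publisher has deposited references:

```python
# Minimal sketch: pull the deposited reference list for a DOI from the
# Crossref REST API.
import json
from urllib.request import urlopen

def crossref_references(doi):
    with urlopen(f"https://api.crossref.org/works/{doi}") as resp:
        work = json.load(resp)["message"]
    return work.get("reference", [])  # absent when no references deposited

# Placeholder DOI, for illustration only:
for ref in crossref_references("10.1234/example-doi"):
    print(ref.get("DOI") or ref.get("unstructured"))
```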
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the API of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
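A corresponding citation lookup can be sketched against the OpenCitations COCI index. The v1 route and field names below are assumptions based on its public documentation, and the DOI is a placeholder:

```python
# Minimal sketch: list the DOIs of articles citing a given DOI via the
# OpenCitations COCI index (one JSON record per incoming citation).
import json
from urllib.request import urlopen

COCI = "https://opencitations.net/index/coci/api/v1"

def citing_dois(doi):
    with urlopen(f"{COCI}/citations/{doi}") as resp:
        records = json.load(resp)
    return [r["citing"] for r in records]

# Placeholder DOI, for illustration only:
print(citing_dois("10.1234/example-doi"))
```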
OpenAlex data
Load additional information about publications from OpenAlex.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
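For completeness, the same kind of lookup against OpenAlex can be sketched via its works route with an external-ID (DOI) lookup, per the OpenAlex documentation; the DOI is again a placeholder:

```python
# Minimal sketch: fetch a publication record from OpenAlex by DOI.
import json
from urllib.request import urlopen

def openalex_work(doi):
    with urlopen(f"https://api.openalex.org/works/doi:{doi}") as resp:
        return json.load(resp)

# Placeholder DOI, for illustration only:
work = openalex_work("10.1234/example-doi")
print(work.get("display_name"), work.get("cited_by_count"))
```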
retrieved on 2024-05-03 14:10 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license