Jonas Beskow
2020 – today
- 2024
- [c96] Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, Gustav Eje Henter, Simon Alexanderson: Fake it to make it: Using synthetic data to remedy the data shortage in joint multi-modal speech-and-gesture synthesis. CVPR Workshops 2024: 1952-1964
- [c95] Shivam Mehta, Ruibo Tu, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter: Unified Speech and Gesture Synthesis Using Flow Matching. ICASSP 2024: 8220-8224
- [c94] Shivam Mehta, Ruibo Tu, Jonas Beskow, Éva Székely, Gustav Eje Henter: Matcha-TTS: A Fast TTS Architecture with Conditional Flow Matching. ICASSP 2024: 11341-11345
- [i21] Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, Gustav Eje Henter, Simon Alexanderson: Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis. CoRR abs/2404.19622 (2024)
- [i20] Shivam Mehta, Harm Lameris, Rajiv Punmiya, Jonas Beskow, Éva Székely, Gustav Eje Henter: Should you use a probabilistic duration model in TTS? Probably! Especially for spontaneous speech. CoRR abs/2406.05401 (2024)
- [i19] Anna Deichler, Simon Alexanderson, Jonas Beskow: Incorporating Spatial Awareness in Data-Driven Gesture Generation for Virtual Agents. CoRR abs/2408.04127 (2024)
- 2023
- [j18] Anna Deichler, Siyang Wang, Simon Alexanderson, Jonas Beskow: Learning to generate pointing gestures in situated embodied conversational agents. Frontiers Robotics AI 10 (2023)
- [j17] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, Gustav Eje Henter: Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models. ACM Trans. Graph. 42(4): 44:1-44:20 (2023)
- [c93] Joakim Gustafson, Éva Székely, Simon Alexandersson, Jonas Beskow: Casual chatter or speaking up? Adjusting articulatory effort in generation of speech and animation for conversational characters. FG 2023: 1-4
- [c92] Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow: Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation. ICMI 2023: 755-762
- [c91] Shivam Mehta, Ambika Kirkland, Harm Lameris, Jonas Beskow, Éva Székely, Gustav Eje Henter: OverFlow: Putting flows on top of neural transducers for better TTS. INTERSPEECH 2023: 4279-4283
- [c90] Joakim Gustafson, Éva Székely, Jonas Beskow: Generation of speech and facial animation with controllable articulatory effort for amusing conversational characters. IVA 2023: 16:1-16:9
- [c89] Jura Miniota, Siyang Wang, Jonas Beskow, Joakim Gustafson, Éva Székely, André Pereira: Hi robot, it's not what you say, it's how you say it. RO-MAN 2023: 307-314
- [c88] Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter: Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis. SSW 2023: 150-156
- [i18] Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter: Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis. CoRR abs/2306.09417 (2023)
- [i17] Shivam Mehta, Ruibo Tu, Jonas Beskow, Éva Székely, Gustav Eje Henter: Matcha-TTS: A fast TTS architecture with conditional flow matching. CoRR abs/2309.03199 (2023)
- [i16] Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow: Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation. CoRR abs/2309.05455 (2023)
- [i15] Shivam Mehta, Ruibo Tu, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter: Unified speech and gesture synthesis using flow matching. CoRR abs/2310.05181 (2023)
- 2022
- [c87] Shivam Mehta, Éva Székely, Jonas Beskow, Gustav Eje Henter: Neural HMMs Are All You Need (For High-Quality Attention-Free TTS). ICASSP 2022: 7457-7461
- [i14] Shivam Mehta, Ambika Kirkland, Harm Lameris, Jonas Beskow, Éva Székely, Gustav Eje Henter: OverFlow: Putting flows on top of neural transducers for better TTS. CoRR abs/2211.06892 (2022)
- [i13] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, Gustav Eje Henter: Listen, denoise, action! Audio-driven motion synthesis with diffusion models. CoRR abs/2211.09707 (2022)
- 2021
- [j16] Patrik Jonell, Birger Moëll, Krister Håkansson, Gustav Eje Henter, Taras Kucherenko, Olga Mikheeva, Göran Hagman, Jasper Holleman, Miia Kivipelto, Hedvig Kjellström, Joakim Gustafson, Jonas Beskow: Multimodal Capture of Patient Behaviour for Improved Detection of Early Dementia: Clinical Feasibility and Preliminary Results. Frontiers Comput. Sci. 3: 642633 (2021)
- [j15] Guillermo Valle Pérez, Gustav Eje Henter, Jonas Beskow, Andre Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson: Transflower: probabilistic autoregressive dance generation with multimodal attention. ACM Trans. Graph. 40(6): 195:1-195:14 (2021)
- [c86] Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely: Integrated Speech and Gesture Synthesis. ICMI 2021: 177-185
- [c85] Jonas Beskow, Charlie Caper, Johan Ehrenfors, Nils Hagberg, Anne Jansen, Chris Wood: Expressive Robot Performance Based on Facial Motion Capture. Interspeech 2021: 2343-2344
- [c84] Joakim Gustafson, Jonas Beskow, Éva Székely: Personality in the mix - investigating the contribution of fillers and speaking style to the perception of spontaneous speech synthesis. SSW 2021: 48-53
- [i12] Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow: Generating coherent spontaneous speech and gesture from text. CoRR abs/2101.05684 (2021)
- [i11] Guillermo Valle Pérez, Gustav Eje Henter, Jonas Beskow, Andre Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson: Transflower: probabilistic autoregressive dance generation with multimodal attention. CoRR abs/2106.13871 (2021)
- [i10] Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely: Integrated Speech and Gesture Synthesis. CoRR abs/2108.11436 (2021)
- [i9] Shivam Mehta, Éva Székely, Jonas Beskow, Gustav Eje Henter: Neural HMMs are all you need (for high-quality attention-free TTS). CoRR abs/2108.13320 (2021)
- [i8] Patrik Jonell, Anna Deichler, Ilaria Torre, Iolanda Leite, Jonas Beskow: Mechanical Chameleons: Evaluating the effects of a social robot's non-verbal behavior on social influence. CoRR abs/2109.01206 (2021)
- 2020
- [j14] Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow: Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows. Comput. Graph. Forum 39(2): 487-496 (2020)
- [j13] Kalin Stefanov, Jonas Beskow, Giampiero Salvi: Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially Aware Language Acquisition. IEEE Trans. Cogn. Dev. Syst. 12(2): 250-259 (2020)
- [j12] Gustav Eje Henter, Simon Alexanderson, Jonas Beskow: MoGlow: probabilistic and controllable motion synthesis using normalising flows. ACM Trans. Graph. 39(6): 236:1-236:14 (2020)
- [c83] Michelle Cohn, Patrik Jonell, Taylor Kim, Jonas Beskow, Georgia Zellou: Embodiment and gender interact in alignment to TTS voices. CogSci 2020
- [c82] Éva Székely, Gustav Eje Henter, Jonas Beskow, Joakim Gustafson: Breathing and Speech Planning in Spontaneous Speech Synthesis. ICASSP 2020: 7649-7653
- [c81] Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow: Generating coherent spontaneous speech and gesture from text. IVA 2020: 1:1-1:3
- [c80] Patrik Jonell, Taras Kucherenko, Ilaria Torre, Jonas Beskow: Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents. IVA 2020: 30:1-30:8
- [c79] Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow: Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings. IVA 2020: 31:1-31:8
- [i7] Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow: Let's face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. CoRR abs/2006.09888 (2020)
- [i6] Patrik Jonell, Taras Kucherenko, Ilaria Torre, Jonas Beskow: Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents. CoRR abs/2009.10760 (2020)
2010 – 2019
- 2019
- [j11] Kalin Stefanov, Giampiero Salvi, Dimosthenis Kontogiorgos, Hedvig Kjellström, Jonas Beskow: Modeling of Human Visual Attention in Multiparty Open-World Dialogues. ACM Trans. Hum. Robot Interact. 8(2): 8:1-8:21 (2019)
- [c78] Chaona Chen, Laura B. Hensel, Yaocong Duan, Robin A. A. Ince, Oliver G. B. Garrod, Jonas Beskow, Rachael E. Jack, Philippe G. Schyns: Equipping social robots with culturally-sensitive facial expressions of emotion using data-driven methods. FG 2019: 1-8
- [c77] Éva Székely, Gustav Eje Henter, Jonas Beskow, Joakim Gustafson: Off the Cuff: Exploring Extemporaneous Speech Delivery with TTS. INTERSPEECH 2019: 3687-3688
- [c76] Éva Székely, Gustav Eje Henter, Jonas Beskow, Joakim Gustafson: Spontaneous Conversational Speech Synthesis from Found Data. INTERSPEECH 2019: 4435-4439
- [c75] Petra Wagner, Jonas Beskow, Simon Betz, Jens Edlund, Joakim Gustafson, Gustav Eje Henter, Sébastien Le Maguer, Zofia Malisz, Éva Székely, Christina Tånnander, Jana Voße: Speech Synthesis Evaluation - State-of-the-Art Assessment and Suggestion for a Novel Research Program. SSW 2019: 105-110
- [c74] Éva Székely, Gustav Eje Henter, Jonas Beskow, Joakim Gustafson: How to train your fillers: uh and um in spontaneous speech synthesis. SSW 2019: 245-250
- [c73] Zofia Malisz, Harald Berthelsen, Jonas Beskow, Joakim Gustafson: PROMIS: a statistical-parametric speech synthesis system with prominence control via a prominence network. SSW 2019: 257-262
- [p3] Gabriel Skantze, Joakim Gustafson, Jonas Beskow: Multimodal conversational interaction with robots. The Handbook of Multimodal-Multisensor Interfaces, Volume 3 (3) 2019
- [i5] Andreas Wedenborn, Preben Wik, Olov Engwall, Jonas Beskow: The effect of a physical robot on vocabulary learning. CoRR abs/1901.10461 (2019)
- [i4] Gustav Eje Henter, Simon Alexanderson, Jonas Beskow: MoGlow: Probabilistic and controllable motion synthesis using normalising flows. CoRR abs/1905.06598 (2019)
- 2018
- [c72] Chaona Chen, Oliver G. B. Garrod, Jiayu Zhan, Jonas Beskow, Philippe G. Schyns, Rachael E. Jack: Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots. FG 2018: 448-452
- [c71] Aravind Elanjimattathil Vijayan, Simon Alexanderson, Jonas Beskow, Iolanda Leite: Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior. ICRA 2018: 1955-1961
- [c70] Hans-Jörg Vögel, Christian Süß, Thomas Hubregtsen, Elisabeth André, Björn W. Schuller, Jérôme Härri, Jörg Conradt, Asaf Adi, Alexander Zadorojniy, Jacques M. B. Terken, Jonas Beskow, Ann Morrison, Kynan Eng, Florian Eyben, Samer Al Moubayed, Susanne Muller, Nicholas Cummins, Viviane S. Ghaderi, Ronee Chadowitz, Raphaël Troncy, Benoit Huet, Melek Önen, Adlen Ksentini: Emotion-Awareness for Intelligent Vehicle Assistants: A Research Agenda. SEFAIAS@ICSE 2018: 11-15
- [c69] Patrik Jonell, Catharine Oertel, Dimosthenis Kontogiorgos, Jonas Beskow, Joakim Gustafson: Crowdsourced Multimodal Corpora Collection Tool. LREC 2018
- [c68] Dimosthenis Kontogiorgos, Vanya Avramova, Simon Alexandersson, Patrik Jonell, Catharine Oertel, Jonas Beskow, Gabriel Skantze, Joakim Gustafson: A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction. LREC 2018
- 2017
- [j10] Simon Alexanderson, Carol O'Sullivan, Jonas Beskow: Real-time labeling of non-rigid motion capture marker sets. Comput. Graph. 69: 59-67 (2017)
- [j9] Simon Alexanderson, Carol O'Sullivan, Michael Neff, Jonas Beskow: Mimebot - Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments. ACM Trans. Appl. Percept. 14(4): 24:1-24:13 (2017)
- [c67] Catharine Oertel, Patrik Jonell, Dimosthenis Kontogiorgos, Joseph Mendelson, Jonas Beskow, Joakim Gustafson: Crowd-Sourced Design of Artificial Attentive Listeners. INTERSPEECH 2017: 854-858
- [c66] Zofia Malisz, Harald Berthelsen, Jonas Beskow, Joakim Gustafson: Controlling Prominence Realisation in Parametric DNN-Based Speech Synthesis. INTERSPEECH 2017: 1079-1083
- [c65] Patrik Jonell, Catharine Oertel, Dimosthenis Kontogiorgos, Jonas Beskow, Joakim Gustafson: Crowd-Powered Design of Virtual Attentive Listeners. IVA 2017: 188-191
- [c64] Muhammad Sikandar Lal Khan, Shafiq ur Réhman, Yongcui Mi, Usman Naeem, Jonas Beskow, Haibo Li: Moveable Facial Features in a Social Mediator. IVA 2017: 205-208
- [c63] Yanxia Zhang, Jonas Beskow, Hedvig Kjellström: Look but Don't Stare: Mutual Gaze Interaction in Social Robots. ICSR 2017: 556-566
- [e3] Slim Ouni, Chris Davis, Alexandra Jesse, Jonas Beskow: 14th International Conference on Auditory-Visual Speech Processing, AVSP 2017, Stockholm, Sweden, August 25-26, 2017. ISCA 2017
- [e2] Jonas Beskow, Christopher E. Peters, Ginevra Castellano, Carol O'Sullivan, Iolanda Leite, Stefan Kopp: Intelligent Virtual Agents - 17th International Conference, IVA 2017, Stockholm, Sweden, August 27-30, 2017, Proceedings. Lecture Notes in Computer Science 10498, Springer 2017, ISBN 978-3-319-67400-1
- [i3] Patrik Jonell, Joseph Mendelson, Thomas Storskog, Goran Hagman, Per Östberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, Hedvig Kjellström: Machine Learning and Social Robotics for Detecting Early Signs of Dementia. CoRR abs/1709.01613 (2017)
- [i2] Kalin Stefanov, Jonas Beskow, Giampiero Salvi: Self-Supervised Vision-Based Detection of the Active Speaker as a Prerequisite for Socially-Aware Language Acquisition. CoRR abs/1711.08992 (2017)
- 2016
- [c62] Simon Alexanderson, David House, Jonas Beskow: Automatic annotation of gestural units in spontaneous face-to-face interaction. MA3HMI@ICMI 2016: 15-19
- [c61] Kalin Stefanov, Akihiro Sugimoto, Jonas Beskow: Look who's talking: visual identification of the active speaker in multi-party human-robot interaction. ASSP4MI@ICMI 2016: 22-27
- [c60] Kalin Stefanov, Jonas Beskow: A Multi-party Multi-modal Dataset for Focus of Visual Attention in Human-human and Human-robot Interaction. LREC 2016
- [c59] Simon Alexanderson, Carol O'Sullivan, Jonas Beskow: Robust online motion capture labeling of finger markers. MIG 2016: 7-13
- [c58] John Andersson, Sebastian Berlin, André Costa, Harald Berthelsen, Hanna Lindgren, Nikolaj Lindberg, Jonas Beskow, Jens Edlund, Joakim Gustafson: WikiSpeech - enabling open source text-to-speech for Wikipedia. SSW 2016: 93-99
- [c57] Jonas Beskow, Harald Berthelsen: A hybrid harmonics-and-bursts modelling approach to speech synthesis. SSW 2016: 208-213
- 2015
- [j8] Simon Alexanderson, Jonas Beskow: Towards Fully Automated Motion Capture of Signs - Development and Evaluation of a Key Word Signing Avatar. ACM Trans. Access. Comput. 7(2): 7:1-7:17 (2015)
- [c56] Gabriel Skantze, Martin Johansson, Jonas Beskow: Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects. ICMI 2015: 67-74
- [c55] Gabriel Skantze, Martin Johansson, Jonas Beskow: A Collaborative Human-Robot Game as a Test-bed for Modelling Multi-party, Situated Interaction. IVA 2015: 348-351
- [c54] Jonas Beskow: Talking Heads, Signing Avatars and Social Robots. SLPAT@Interspeech 2015: 1
- 2014
- [j7] Simon Alexanderson, Jonas Beskow: Animated Lombard speech: Motion capture, facial animation and visual intelligibility of speech produced in adverse conditions. Comput. Speech Lang. 28(2): 607-618 (2014)
- [c53] Samer Al Moubayed, Jonas Beskow, Bajibabu Bollepalli, Joakim Gustafson, Ahmed Hussen Abdelaziz, Martin Johansson, Maria Koutsombogera, José David Águas Lopes, Jekaterina Novikova, Catharine Oertel, Gabriel Skantze, Kalin Stefanov, Gül Varol: Human-robot collaborative tutoring using multiparty multimodal spoken dialogue. HRI 2014: 112-113
- [c52] Samer Al Moubayed, Jonas Beskow, Gabriel Skantze: Spontaneous spoken dialogues with the furhat human-like robot head. HRI 2014: 326
- 2013
- [j6] Nicole Mirnig, Astrid Weiss, Gabriel Skantze, Samer Al Moubayed, Joakim Gustafson, Jonas Beskow, Björn Granström, Manfred Tscheligi: Face-to-Face with a Robot: What do we actually Talk about? Int. J. Humanoid Robotics 10(1) (2013)
- [j5] Samer Al Moubayed, Gabriel Skantze, Jonas Beskow: The furhat Back-Projected humanoid Head-Lip Reading, gaze and Multi-Party Interaction. Int. J. Humanoid Robotics 10(1) (2013)
- [c51] Simon Alexanderson, David House, Jonas Beskow: Aspects of co-occurring syllables and head nods in spontaneous dialogue. AVSP 2013: 169-172
- [c50] Samer Al Moubayed, Jonas Beskow, Bajibabu Bollepalli, Ahmed Hussen Abdelaziz, Martin Johansson, Maria Koutsombogera, José David Águas Lopes, Jekaterina Novikova, Catharine Oertel, Gabriel Skantze, Kalin Stefanov, Gül Varol: Tutoring Robots - Multiparty Multimodal Social Dialogue with an Embodied Tutor. eNTERFACE 2013: 80-113
- [c49] Samer Al Moubayed, Jonas Beskow, Gabriel Skantze: The furhat social companion talking head. INTERSPEECH 2013: 747-749
- [c48] Bajibabu Bollepalli, Jonas Beskow, Joakim Gustafson: Non-linear Pitch Modification in Voice Conversion Using Artificial Neural Networks. NOLISP 2013: 97-103
- [p2] Jens Edlund, Samer Al Moubayed, Jonas Beskow: Co-present or Not? Eye Gaze in Intelligent User Interfaces 2013: 185-203
- 2012
- [j4] Samer Al Moubayed, Jens Edlund, Jonas Beskow: Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections. ACM Trans. Interact. Intell. Syst. 1(2): 11:1-11:25 (2012)
- [c47] Samer Al Moubayed, Gabriel Skantze, Jonas Beskow, Kalin Stefanov, Joakim Gustafson: Multimodal multiparty social interaction with the furhat head. ICMI 2012: 293-294
- [c46] Samer Al Moubayed, Gabriel Skantze, Jonas Beskow: Lip-Reading: Furhat Audio Visual Intelligibility of a Back Projected Animated Face. IVA 2012: 196-203
- [c45] Jens Edlund, Simon Alexandersson, Jonas Beskow, Lisa Gustavsson, Mattias Heldner, Anna Hjalmarsson, Petter Kallionen, Ellen Marklund: 3rd party observer gaze as a continuous measure of dialogue flow. LREC 2012: 1354-1358
- [c44] Mats Blomberg, Gabriel Skantze, Samer Al Moubayed, Joakim Gustafson, Jonas Beskow, Björn Granström: Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis. WOCCI 2012: 87-91
- [i1] Saad Akram, Jonas Beskow, Hedvig Kjellström: Visual Recognition of Isolated Swedish Sign Language Signs. CoRR abs/1211.3901 (2012)
- 2011
- [c43] Samer Al Moubayed, Simon Alexandersson, Jonas Beskow, Björn Granström: A robotic head using projected animated faces. AVSP 2011: 71
- [c42] Jonas Beskow, Simon Alexandersson, Samer Al Moubayed, Jens Edlund, David House: Kinetic data for large-scale analysis and modeling of face-to-face conversation. AVSP 2011: 107-110
- [c41] Samer Al Moubayed, Jonas Beskow, Gabriel Skantze, Björn Granström: Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction. COST 2102 Training School 2011: 114-130
- [c40] Jens Edlund, Samer Al Moubayed, Jonas Beskow: The Mona Lisa Gaze Effect as an Objective Metric for Perceived Cospatiality. IVA 2011: 439-440
- [e1] Giampiero Salvi, Jonas Beskow, Olov Engwall, Samer Al Moubayed: Auditory-Visual Speech Processing, AVSP 2011, Volterra, Italy, September 1-2, 2011. ISCA 2011
- 2010
- [c39] Samer Al Moubayed, Jonas Beskow, Jens Edlund, Björn Granström, David House: Animated Faces for Robotic Heads: Gaze and Beyond. COST 2102 Conference 2010: 19-35
- [c38] Samer Al Moubayed, Jonas Beskow, Björn Granström, David House: Audio-Visual Prosody: Perception, Detection, and Synthesis of Prominence. COST 2102 Training School 2010: 55-71
- [c37] Jonas Beskow, Samer Al Moubayed: Perception of gaze direction in 2D and 3D facial projections. FAA 2010: 24
- [c36] Jonas Beskow, Samer Al Moubayed: Perception of nonverbal gestures of prominence in visual speech animation. FAA 2010: 25
- [c35] Samer Al Moubayed, Jonas Beskow: Prominence detection in Swedish using syllable correlates. INTERSPEECH 2010: 1784-1787
- [c34] Jens Edlund, Jonas Beskow, Kjell Elenius, Kahl Hellmer, Sofia Strömbergsson, David House: Spontal: A Swedish Spontaneous Dialogue Corpus of Audio, Video and Motion Capture. LREC 2010
2000 – 2009
- 2009
- [j3] Giampiero Salvi, Jonas Beskow, Samer Al Moubayed, Björn Granström: SynFace - Speech-Driven Facial Animation for Virtual Speech-Reading Support. EURASIP J. Audio Speech Music. Process. 2009 (2009)
- [j2] Samer Al Moubayed, Jonas Beskow, Björn Granström: Auditory visual prominence. J. Multimodal User Interfaces 3(4): 299-309 (2009)
- [c33] Samer Al Moubayed, Jonas Beskow: Effects of visual prominence cues on speech intelligibility. AVSP 2009: 43-46
- [c32] Jonas Beskow, Giampiero Salvi, Samer Al Moubayed: Synface - verbal and non-verbal face animation from audio. AVSP 2009: 169
- [c31] Jonas Beskow, Jens Edlund, Björn Granström, Joakim Gustafson, David House: Face-to-Face Interaction and the KTH Cooking Show. COST 2102 Training School 2009: 157-168
- [c30] Jonas Beskow, Jens Edlund, Björn Granström, Joakim Gustafson, Gabriel Skantze, Helena Tobiasson: The MonAMI reminder: a spoken dialogue system for face-to-face interaction. INTERSPEECH 2009: 296-299
- [c29] Samer Al Moubayed, Jonas Beskow, Anne-Marie Öster, Giampiero Salvi, Björn Granström, Nic van Son, Ellen Ormel: Virtual speech reading support for hard of hearing in a domestic multi-media setting. INTERSPEECH 2009: 1443-1446
- [p1] Jonas Beskow, Rolf Carlson, Jens Edlund, Björn Granström, Mattias Heldner, Anna Hjalmarsson, Gabriel Skantze: Multimodal Interaction Control. Computers in the Human Interaction Loop 2009: 143-157
- 2008
- [c28] Jonas Beskow, Jens Edlund, Teodore Gjermani, Björn Granström, Joakim Gustafson, Oskar Jonsson, Gabriel Skantze, Helena Tobiasson: Innovative interfaces in MonAMI: the reminder. ICMI 2008: 199-200
- [c27] Jonas Beskow, Gösta Bruce, Laura Enflo, Björn Granström, Susanne Schötz: Recognizing and modelling regional varieties of Swedish. INTERSPEECH 2008: 512-515
- [c26] Jonas Beskow, Björn Granström, Peter Nordqvist, Samer Al Moubayed, Giampiero Salvi, Tobias Herzke, Arne Schulz: Hearing at home - communication support in home environments for hearing impaired persons. INTERSPEECH 2008: 2203-2206
- [c25] Jonas Beskow, Jens Edlund, Björn Granström, Joakim Gustafson, Gabriel Skantze: Innovative Interfaces in MonAMI: The Reminder. PIT 2008: 272-275
- 2007
- [c24] Jonas Beskow, Björn Granström, David House: Analysis and Synthesis of Multimodal Verbal and Non-verbal Interaction for Animated Interface Agents. COST 2102 Workshop (Vietri) 2007: 250-263
- [c23] Jens Edlund, Jonas Beskow: Pushy versus meek - using avatars to influence turn-taking behaviour. INTERSPEECH 2007: 682-685
- 2006
- [c22] Eva Agelfors, Jonas Beskow, Inger Karlsson, Jo Kewley, Giampiero Salvi, Neil Thomas: User Evaluation of the SYNFACE Talking Head Telephone. ICCHP 2006: 579-586
- [c21] Jonas Beskow, Björn Granström, David House: Visual correlates to prominence in several expressive modes. INTERSPEECH 2006
- 2005
- [c20] Jonas Beskow, Mikael Nordenberg: Data-driven synthesis of expressive visual speech using an MPEG-4 talking head. INTERSPEECH 2005: 793-796
- 2004
- [j1] Jonas Beskow: Trainable Articulatory Control Models for Visual Speech Synthesis. Int. J. Speech Technol. 7(4): 335-349 (2004)
- [c19] Jonas Beskow, Loredana Cerrato, Björn Granström, David House, Mikael Nordenberg, Magnus Nordstrand, Gunilla Svanfeldt: Expressive Animated Agents for Affective Dialogue Systems. ADS 2004: 240-243
- [c18] Jonas Beskow, Loredana Cerrato, Piero Cosi, Erica Costantini, Magnus Nordstrand, Fabio Pianesi, Michela Prete, Gunilla Svanfeldt: Preliminary Cross-Cultural Evaluation of Expressiveness in Synthetic Faces. ADS 2004: 301-304
- [c17] Jonas Beskow, Inger Karlsson, Jo Kewley, Giampiero Salvi: SYNFACE - A Talking Head Telephone for the Hearing-Impaired. ICCHP 2004: 1178-1185
- [c16] Jonas Beskow, Olov Engwall, Björn Granström, Preben Wik: Design strategies for a virtual language tutor. INTERSPEECH 2004: 1693-1696
- 2003
- [c15] Olov Engwall, Jonas Beskow: Resynthesis of 3d tongue movements from facial data. INTERSPEECH 2003: 2261-2264
- 2002
- [c14] Jonas Beskow, Jens Edlund, Magnus Nordstrand: Specification and realisation of multimodal output in dialogue systems. INTERSPEECH 2002: 181-184
- 2001
- [c13] David House, Jonas Beskow, Björn Granström: Timing and interaction of visual cues for prominence in audiovisual speech perception. INTERSPEECH 2001: 387-390
- 2000
- [c12] Joakim Gustafson, Linda Bell, Jonas Beskow, Johan Boye, Rolf Carlson, Jens Edlund, Björn Granström, David House, Mats Wirén: Adapt - a multimodal conversational dialogue system in an apartment domain. INTERSPEECH 2000: 134-137
- [c11] Kåre Sjölander, Jonas Beskow: Wavesurfer - an open source speech tool. INTERSPEECH 2000: 464-467
1990 – 1999
- 1999
- [c10] Eva Agelfors, Jonas Beskow, Björn Granström, Magnus Lundeberg, Giampiero Salvi, Karl-Erik Spens, Tobias Öhman: Synthetic visual speech driven from auditory speech. AVSP 1999: 21
- [c9] Dominic W. Massaro, Jonas Beskow, Michael M. Cohen, Christopher L. Fry, Tony Rodriguez: Picture my voice: Audio to visual speech synthesis using artificial neural networks. AVSP 1999: 23
- [c8] Magnus Lundeberg, Jonas Beskow: Developing a 3D-agent for the august dialogue system. AVSP 1999: 26
- 1998
- [c7] Michael M. Cohen, Jonas Beskow, Dominic W. Massaro: Recent Developments In Facial Animation: An Inside View. AVSP 1998: 201-206
- [c6] Eva Agelfors, Jonas Beskow, Martin Dahlquist, Björn Granström, Magnus Lundeberg, Karl-Erik Spens, Tobias Öhman: Synthetic faces as a lipreading support. ICSLP 1998
- [c5] Kåre Sjölander, Jonas Beskow, Joakim Gustafson, Erland Lewin, Rolf Carlson, Björn Granström: Web-based educational tools for speech technology. ICSLP 1998
- 1997
- [c4] Jonas Beskow: Animation of talking agents. AVSP 1997: 149-152
- [c3] Jonas Beskow, Kjell Elenius, Scott McGlashan: OLGA - a dialogue system with an animated talking agent. EUROSPEECH 1997: 1651-1654
- [c2] Jonas Beskow, Martin Dahlquist, Björn Granström, Magnus Lundeberg, Karl-Erik Spens, Tobias Öhman: The teleface project multi-modal speech-communication for the hearing impaired. EUROSPEECH 1997: 2003-2006
- 1995
- [c1] Jonas Beskow: Rule-based visual speech synthesis. EUROSPEECH 1995: 299-302
last updated on 2024-10-11 18:22 CEST by the dblp team
all metadata released as open data under CC0 1.0 license