Shiro Kumano
2020 – today
- 2024
- [j12]Mohammad Soleymani, Shiro Kumano, Emily Mower Provost, Nadia Bianchi-Berthouze, Akane Sano, Kenji Suzuki:
Guest Editorial Best of ACII 2021. IEEE Trans. Affect. Comput. 15(2): 376-379 (2024)
- 2023
- [j11]Yan Zhou, Kenji Suzuki, Shiro Kumano:
State-Aware Deep Item Response Theory using student facial features. Frontiers Artif. Intell. 6 (2023)
- [c41]Ryo Ueda, Hiromi Narimatsu, Yusuke Miyao, Shiro Kumano:
Emotion-Controllable Impression Utterance Generation for Visual Art. ACII 2023: 1-8
- [c40]Hiromi Narimatsu, Mayuko Ozawa, Shiro Kumano:
Collision Probability Matching Loss for Disentangling Epistemic Uncertainty from Aleatoric Uncertainty. AISTATS 2023: 11355-11370
- [c39]Ayane Tashiro, Mai Imamura, Shiro Kumano, Kazuhiro Otsuka:
Analyzing and Recognizing Interlocutors' Gaze Functions from Multimodal Nonverbal Cues. ICMI 2023: 33-41
- [c38]Mai Imamura, Ayane Tashiro, Shiro Kumano, Kazuhiro Otsuka:
Analyzing Synergetic Functional Spectrum from Head Movements and Facial Expressions in Conversations. ICMI 2023: 42-50
- 2022
- [c37]Katsutoshi Masai, Monica Perusquía-Hernández, Maki Sugimoto, Shiro Kumano, Toshitaka Kimura:
Consistent Smile Intensity Estimation from Wearable Optical Sensors. ACII 2022: 1-8
- [c36]Hiromi Narimatsu, Ryo Ueda, Shiro Kumano:
Cross-Linguistic Study on Affective Impression and Language for Visual Art Using Neural Speaker. ACII 2022: 1-8
- [c35]Takayuki Ogasawara, Hanako Fukamachi, Kenryu Aoyagi, Shiro Kumano, Hiroyoshi Togo, Koichiro Oka:
Real-time Auditory Feedback System for Bow-tilt Correction while Aiming in Archery. BIBE 2022: 51-54
- 2021
- [j10]Takayuki Ogasawara, Hanako Fukamachi, Kenryu Aoyagi, Shiro Kumano, Hiroyoshi Togo, Koichiro Oka:
Archery Skill Assessment Using an Acceleration Sensor. IEEE Trans. Hum. Mach. Syst. 51(3): 221-228 (2021)
- [c34]Yan Zhou, Tsukasa Ishigaki, Shiro Kumano:
Deep Explanatory Polytomous Item-Response Model for Predicting Idiosyncratic Affective Ratings. ACII 2021: 1-8
- [c33]Ryo Ishii, Shiro Kumano, Ryuichiro Higashinaka, Shiro Ozawa, Tetsuya Kinebuchi:
Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (41) 2021: 44-57
- 2020
- [c32]Aiko Murata, Shiro Kumano, Junji Watanabe:
Interpersonal physiological linkage is related to excitement during a joint task. CogSci 2020
- [c31]Hiroyuki Ishihara, Shiro Kumano:
Gravity-Direction-Aware Joint Inter-Device Matching and Temporal Alignment between Camera and Wearable Sensors. ICMI Companion 2020: 433-441
2010 – 2019
- 2019
- [j9]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation. Multimodal Technol. Interact. 3(4): 70 (2019)
- [c30]Shiro Kumano, Keishi Nomura:
Multitask Item Response Models for Response Bias Removal from Affective Ratings. ACII 2019: 1-7
- [c29]Monica Perusquía-Hernández, Saho Ayabe-Kanamura, Kenji Suzuki, Shiro Kumano:
The Invisible Potential of Facial Electromyography: A Comparison of EMG and Computer Vision when Distinguishing Posed from Spontaneous Smiles. CHI 2019: 149
- [c28]Keishi Nomura, Aiko Murata, Yuko Yotsumoto, Shiro Kumano:
Bayesian Item Response Model with Condition-specific Parameters for Evaluating the Differential Effects of Perspective-taking on Emotional Sharing. CogSci 2019: 3537
- [c27]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (14) 2019: 45-53
- 2018
- [c26]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. ICMI 2018: 31-39
- 2017
- [j8]Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato:
Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations. IEEE Trans. Multim. 19(1): 107-122 (2017)
- [c25]Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka:
Computational model of idiosyncratic perception of others' emotions. ACII 2017: 42-49
- [c24]Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka:
Comparing empathy perceived by interlocutors in multiparty conversation and external observers. ACII 2017: 50-57
- [c23]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings. HAI 2017: 181-187
- [c22]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing gaze behavior during turn-taking for estimating empathy skill level. ICMI 2017: 365-373
- 2016
- [j7]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(1): 4:1-4:31 (2016)
- [j6]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(2): 20:1-20:20 (2016)
- [c21]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. ICMI 2016: 209-216
- 2015
- [j5]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Analyzing Interpersonal Empathy via Collective Impressions. IEEE Trans. Affect. Comput. 6(4): 324-336 (2015)
- [j4]Dairazalia Sanchez-Cortes, Shiro Kumano, Kazuhiro Otsuka, Daniel Gatica-Perez:
In the Mood for Vlog: Multimodal Inference in Conversational Social Video. ACM Trans. Interact. Intell. Syst. 5(2): 9:1-9:24 (2015)
- [c20]Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato:
Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision. FG 2015: 1-8
- [c19]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Predicting next speaker based on head movement in multi-party meetings. ICASSP 2015: 2319-2323
- [c18]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. ICMI 2015: 99-106
- 2014
- [j3]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Junji Yamato:
Analyzing Perceived Empathy Based on Reaction Time in Behavioral Mimicry. IEICE Trans. Inf. Syst. 97-D(8): 2008-2020 (2014)
- [c17]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings. ICASSP 2014: 694-698
- [c16]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis of Timing Structure of Eye Contact in Turn-changing. GazeIn@ICMI 2014: 15-20
- [c15]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. ICMI 2014: 18-25
- 2013
- [c14]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Ryo Ishii, Junji Yamato:
Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality. ACII 2013: 588-593
- [c13]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Junji Yamato:
Analyzing perceived empathy/antipathy based on reaction time in behavioral coordination. FG 2013: 1-8
- [c12]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato:
Predicting next speaker and timing from gaze transition patterns in multi-party meetings. ICMI 2013: 79-86
- [c11]Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato:
MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. ICMI 2013: 389-396
- [c10]Dairazalia Sanchez-Cortes, Joan-Isaac Biel, Shiro Kumano, Junji Yamato, Kazuhiro Otsuka, Daniel Gatica-Perez:
Inferring mood in ubiquitous conversational video. MUM 2013: 22:1-22:9
- 2012
- [j2]Dan Mikami, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Enhancing Memory-Based Particle Filter with Detection-Based Memory Acquisition for Robustness under Severe Occlusion. IEICE Trans. Inf. Syst. 95-D(11): 2693-2703 (2012)
- [c9]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Understanding communicative emotions from collective external observations. CHI Extended Abstracts 2012: 2201-2206
- [c8]Kazuhiro Otsuka, Shiro Kumano, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Reconstructing multiparty conversation field by augmenting human head motions via dynamic displays. CHI Extended Abstracts 2012: 2243-2248
- [c7]Dan Mikami, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Enhancing Memory-based Particle Filter with Detection-based Memory Acquisition for Robustness under Severe Occlusion. VISAPP (2) 2012: 208-215
- 2011
- [c6]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato:
Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings. FG 2011: 43-50
- [c5]Kazuhiro Otsuka, Kamil Sebastian Mucha, Shiro Kumano, Dan Mikami, Masafumi Matsuda, Junji Yamato:
A system for reconstructing multiparty conversation field based on augmented head motion by dynamic projection. ACM Multimedia 2011: 763-764
- [c4]Lumei Su, Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato, Yoichi Sato:
Early facial expression recognition with high-frame rate 3D sensing. SMC 2011: 3304-3310
2000 – 2009
- 2009
- [j1]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates. Int. J. Comput. Vis. 83(2): 178-194 (2009)
- [c3]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato:
Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings. ICMI 2009: 99-106
- 2008
- [c2]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Combining Stochastic and Deterministic Search for Pose-Invariant Facial Expression Recognition. BMVC 2008: 1-10
- 2007
- [c1]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates. ACCV (1) 2007: 324-334
last updated on 2024-10-07 22:10 CEST by the dblp team
all metadata released as open data under CC0 1.0 license