Shri Narayanan
Name variants: Shrikanth Narayanan – Shrikanth S. Narayanan – Shrikanth Shri Narayanan
Person information
- affiliation: University of Southern California, Signal Analysis and Interpretation Lab, Los Angeles, USA
2020 – today
- 2024
- [j159] Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemyslaw Kazienko, Stanislaw Saganowski, Shrikanth Narayanan: Scaling Representation Learning From Ubiquitous ECG With State-Space Models. IEEE J. Biomed. Health Informatics 28(10): 5877-5889 (2024)
- [c681] Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu Pham, Shrikanth Narayanan, Michael Pazzani: ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization. CVPR Workshops 2024: 5105-5112
- [c680] Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan: Audio-Visual Child-Adult Speaker Classification in Dyadic Interactions. ICASSP 2024: 8090-8094
- [c679] Shanti Stewart, Kleanthis Avramidis, Tiantian Feng, Shrikanth Narayanan: Emotion-Aligned Contrastive Learning Between Images and Music. ICASSP 2024: 8135-8139
- [c678] Sabyasachee Baruah, Shrikanth Narayanan: Character Attribute Extraction from Movie Scripts Using LLMs. ICASSP 2024: 8270-8275
- [c677] Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan: Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization. ICASSP 2024: 8396-8400
- [c676] Tiantian Feng, Rajat Hebbar, Shrikanth Narayanan: TRUST-SER: On The Trustworthiness Of Fine-Tuning Pre-Trained Speech Embeddings For Speech Emotion Recognition. ICASSP 2024: 11201-11205
- [c675] Tiantian Feng, Shrikanth Narayanan: Foundation Model Assisted Automatic Speech Emotion Recognition: Transcribing, Annotating, and Augmenting. ICASSP 2024: 12116-12120
- [c674] Keith Burghardt, Ashwin Rao, Georgios Chochlakis, Sabyasachee Baruah, Siyi Guo, Zihao He, Andrew Rojecki, Shrikanth Narayanan, Kristina Lerman: Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts. ICWSM 2024: 164-176
- [c673] Parsa Hejabi, Akshay Kiran Padte, Preni Golazizian, Rajat Hebbar, Jackson Trager, Georgios Chochlakis, Aditya Kommineni, Ellie Graeden, Shrikanth Narayanan, Benjamin A. T. Graham, Morteza Dehghani: CVAT-BWV: A Web-Based Video Annotation Platform for Police Body-Worn Video. IJCAI 2024: 8674-8678
- [i141] Benjamin A. T. Graham, Lauren Brown, Georgios Chochlakis, Morteza Dehghani, Raquel Delerme, Brittany Friedman, Ellie Graeden, Preni Golazizian, Rajat Hebbar, Parsa Hejabi, Aditya Kommineni, Mayagüez Salinas, Michael Sierra-Arévalo, Jackson Trager, Nicholas Weller, Shrikanth Narayanan: A Multi-Perspective Machine Learning Approach to Evaluate Police-Driver Interaction in Los Angeles. CoRR abs/2402.01703 (2024)
- [i140] Tiantian Feng, Shrikanth Narayanan: Understanding Stress, Burnout, and Behavioral Patterns in Medical Residents Using Large-scale Longitudinal Wearable Recordings. CoRR abs/2402.09028 (2024)
- [i139] Tiantian Feng, Daniel Yang, Digbalay Bose, Shrikanth Narayanan: Can Text-to-image Model Assist Multi-modal Learning for Visual Recognition with Visual Modality Missing? CoRR abs/2402.09036 (2024)
- [i138] Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth Narayanan: Knowledge-guided EEG Representation Learning. CoRR abs/2403.03222 (2024)
- [i137] Alice Baird, Rachel Manzelli, Panagiotis Tzirakis, Chris Gagne, Haoqi Li, Sadie Allen, Sander Dieleman, Brian Kulis, Shrikanth S. Narayanan, Alan Cowen: The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data. CoRR abs/2403.14048 (2024)
- [i136] Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan: The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition. CoRR abs/2403.17125 (2024)
- [i135] Tiantian Feng, Xuan Shi, Rahul Gupta, Shrikanth S. Narayanan: TI-ASU: Toward Robust Automatic Speech Understanding through Text-to-speech Imputation Against Missing Speech Modality. CoRR abs/2404.17983 (2024)
- [i134] Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu H. Pham, Shrikanth Narayanan, Michael Pazzani: ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization. CoRR abs/2404.18831 (2024)
- [i133] Anfeng Xu, Kevin Huang, Tiantian Feng, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan: Exploring Speech Foundation Models for Speaker Diarization in Child-Adult Dyadic Interactions. CoRR abs/2406.07890 (2024)
- [i132] Jihwan Lee, Aditya Kommineni, Tiantian Feng, Kleanthis Avramidis, Xuan Shi, Sudarsana Kadiri, Shrikanth Narayanan: Toward Fully-End-to-End Listened Speech Decoding from EEG Signals. CoRR abs/2406.08644 (2024)
- [i131] Tiantian Feng, Dimitrios Dimitriadis, Shrikanth Narayanan: Can Synthetic Audio From Generative Foundation Models Assist Audio Recognition and Speech Modeling? CoRR abs/2406.08800 (2024)
- [i130] Tuo Zhang, Tiantian Feng, Yibin Ni, Mengqin Cao, Ruying Liu, Katharine Butler, Yanjun Weng, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr: Creating a Lens of Chinese Culture: A Multimodal Dataset for Chinese Pun Rebus Art Understanding. CoRR abs/2406.10318 (2024)
- [i129] Angelly Cabrera, Kleanthis Avramidis, Shrikanth Narayanan: Early Detection of Coffee Leaf Rust Through Convolutional Neural Networks Trained on Low-Resolution Images. CoRR abs/2407.14737 (2024)
- [i128] Tiantian Feng, Tuo Zhang, Salman Avestimehr, Shrikanth S. Narayanan: ModalityMirror: Improving Audio Classification in Modality Heterogeneity Federated Learning with Multimodal Distillation. CoRR abs/2408.15803 (2024)
- [i127] Georgios Chochlakis, Niyantha Maruthu Pandiyan, Kristina Lerman, Shrikanth Narayanan: Larger Language Models Don't Care How You Think: Why Chain-of-Thought Prompting Fails in Subjective Tasks. CoRR abs/2409.06173 (2024)
- [i126] Tiantian Feng, Anfeng Xu, Xuan Shi, Somer Bishop, Shrikanth Narayanan: Egocentric Speaker Classification in Child-Adult Dyadic Interactions: From Sensing to Computational Modeling. CoRR abs/2409.09340 (2024)
- [i125] Zhonghao Shi, Harshvardhan Srivastava, Xuan Shi, Shrikanth Narayanan, Maja J. Mataric: Personalized Speech Recognition for Children with Test-Time Adaptation. CoRR abs/2409.13095 (2024)
- [i124] Aditya Kommineni, Digbalay Bose, Tiantian Feng, So Hyun Kim, Helen Tager-Flusberg, Somer Bishop, Catherine Lord, Sudarsana Kadiri, Shrikanth Narayanan: Towards Child-Inclusive Clinical Video Understanding for Autism Spectrum Disorder. CoRR abs/2409.13606 (2024)
- [i123] Hong Nguyen, Sean Foley, Kevin Huang, Xuan Shi, Tiantian Feng, Shrikanth Narayanan: Speech2rtMRI: Speech-Guided Diffusion Model for Real-time MRI Video of the Vocal Tract during Speech. CoRR abs/2409.15525 (2024)
- [i122] Aditya Ashvin, Rimita Lahiri, Aditya Kommineni, Somer Bishop, Catherine Lord, Sudarsana Reddy Kadiri, Shrikanth Narayanan: Evaluation of state-of-the-art ASR Models in Child-Adult Interactions. CoRR abs/2409.16135 (2024)
- [i121] Girish Narayanswamy, Xin Liu, Kumar Ayush, Yuzhe Yang, Xuhai Xu, Shun Liao, Jake Garrison, Shyam Tailor, Jake Sunshine, Yun Liu, Tim Althoff, Shrikanth Narayanan, Pushmeet Kohli, Jiening Zhan, Mark Malhotra, Shwetak N. Patel, Samy Abdel-Ghaffar, Daniel McDuff: Scaling Wearable Foundation Models. CoRR abs/2410.13638 (2024)
- [i120] Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan: Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors. CoRR abs/2410.13776 (2024)
- [i119] Anne-Maria Laukkanen, Sudarsana Reddy Kadiri, Shrikanth Narayanan, Paavo Alku: Can a Machine Distinguish High and Low Amount of Social Creak in Speech? CoRR abs/2410.17028 (2024)
- 2023
- [j158] Raghuveer Peri, Krishna Somandepalli, Shrikanth Narayanan: A study of bias mitigation strategies for speaker recognition. Comput. Speech Lang. 79: 101481 (2023)
- [j157] Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth S. Narayanan, Theodora Chaspari: Modeling inter-individual differences in ambulatory-based multimodal signals via metric learning: a case study of personalized well-being estimation of healthcare workers. Frontiers Digit. Health 5 (2023)
- [j156] Chi-Chun Lee, Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan: An Engineering View on Emotions and Speech: From Analysis and Predictive Models to Responsible Human-Centered Applications. Proc. IEEE 111(10): 1142-1158 (2023)
- [j155] Rahul Sharma, Krishna Somandepalli, Shrikanth Narayanan: Cross Modal Video Representations for Weakly Supervised Active Speaker Localization. IEEE Trans. Multim. 25: 7825-7836 (2023)
- [c672] Tiantian Feng, Shrikanth Narayanan: PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models. ACII 2023: 1-8
- [c671] Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan: Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models. ACII 2023: 1-8
- [c670] Sabyasachee Baruah, Shrikanth Narayanan: Character Coreference Resolution in Movie Screenplays. ACL (Findings) 2023: 10300-10313
- [c669] Mohammad Rostami, Digbalay Bose, Shrikanth Narayanan, Aram Galstyan: Domain Adaptation for Sentiment Analysis Using Robust Internal Representations. EMNLP (Findings) 2023: 11484-11498
- [c668] Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan: Designing and Evaluating Speech Emotion Recognition Systems: A Reality Check Case Study with IEMOCAP. ICASSP 2023: 1-5
- [c667] Victor Ardulov, Shrikanth Narayanan: Navigating and Reaching Therapeutic Goals with Dynamical Systems in Conversation-Based Interventions. ICASSP 2023: 1-5
- [c666] Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan: Signal Processing Grand Challenge 2023 - E-Prevention: Sleep Behavior as an Indicator of Relapses in Psychotic Patients. ICASSP 2023: 1-2
- [c665] Kleanthis Avramidis, Tiantian Feng, Digbalay Bose, Shrikanth Narayanan: Multimodal Estimation Of Change Points Of Physiological Arousal During Driving. ICASSP Workshops 2023: 1-5
- [c664] Kleanthis Avramidis, Shanti Stewart, Shrikanth Narayanan: On the Role of Visual Context in Enriching Music Representations. ICASSP 2023: 1-5
- [c663] Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan: Contextually-Rich Human Affect Perception Using Multimodal Scene Information. ICASSP 2023: 1-5
- [c662] Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan: Leveraging Label Correlations in a Multi-Label Setting: a Case Study in Emotion. ICASSP 2023: 1-5
- [c661] Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan: Using Emotion Embeddings to Transfer Knowledge between Emotions, Languages, and Annotation Formats. ICASSP 2023: 1-5
- [c660] Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan: A Dataset for Audio-Visual Sound Event Detection in Movies. ICASSP 2023: 1-5
- [c659] Rimita Lahiri, Md. Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan: A Context-Aware Computational Approach for Measuring Vocal Entrainment in Dyadic Conversations. ICASSP 2023: 1-5
- [c658] Ravi Pranjal, Ranjana Seshadri, Rakesh Kumar Sanath Kumar Kadaba, Tiantian Feng, Shrikanth S. Narayanan, Theodora Chaspari: Toward Privacy-Enhancing Ambulatory-Based Well-Being Monitoring: Investigating User Re-Identification Risk in Multimodal Data. ICASSP 2023: 1-5
- [c657] Xuan Shi, Erica Cooper, Xin Wang, Junichi Yamagishi, Shrikanth Narayanan: Can Knowledge of End-to-End Text-to-Speech Models Improve Neural Midi-to-Audio Synthesis Systems? ICASSP 2023: 1-5
- [c656] Tuo Zhang, Tiantian Feng, Samiul Alam, Sunwoo Lee, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr: FedAudio: A Federated Learning Benchmark for Audio Tasks. ICASSP 2023: 1-5
- [c655] Homa Hosseinmardi, Amir Ghasemian, Kristina Lerman, Shrikanth Narayanan, Emilio Ferrara: Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction. ICHI 2023: 91-100
- [c654] Shrikanth Narayanan: Bridging Speech Science and Technology - Now and Into the Future. INTERSPEECH 2023: 1
- [c653] Reed Blaylock, Shrikanth Narayanan: Beatboxing Kick Drum Kinematics. INTERSPEECH 2023: 2583-2587
- [c652] Thomas Melistas, Lefteris Kapelonis, Nikolaos Antoniou, Petros Mitseas, Dimitris Sgouropoulos, Theodoros Giannakopoulos, Athanasios Katsamanis, Shrikanth Narayanan: Cross-Lingual Features for Alzheimer's Dementia Detection from Speech. INTERSPEECH 2023: 3008-3012
- [c651] Rimita Lahiri, Tiantian Feng, Rajat Hebbar, Catherine Lord, So Hyun Kim, Shrikanth Narayanan: Robust Self Supervised Speech Embeddings for Child-Adult Classification in Interactions involving Children with Autism. INTERSPEECH 2023: 3557-3561
- [c650] Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan: Understanding Spoken Language Development of Children with ASD Using Pre-trained Speech Embeddings. INTERSPEECH 2023: 4633-4637
- [c649] Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan: FedMultimodal: A Benchmark for Multimodal Federated Learning. KDD 2023: 4035-4045
- [c648] Digbalay Bose, Rajat Hebbar, Tiantian Feng, Krishna Somandepalli, Anfeng Xu, Shrikanth Narayanan: MM-AU: Towards Multimodal Understanding of Advertisement Videos. ACM Multimedia 2023: 86-95
- [c647] Rajat Hebbar, Digbalay Bose, Shrikanth Narayanan: SEAR: Semantically-grounded Audio Representations. ACM Multimedia 2023: 2785-2794
- [c646] Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan: MovieCLIP: Visual Scene Recognition in Movies. WACV 2023: 2082-2091
- [i118] Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan: A dataset for Audio-Visual Sound Event Detection in Movies. CoRR abs/2302.07315 (2023)
- [i117] Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan: Contextually-rich human affect perception using multimodal scene information. CoRR abs/2303.06904 (2023)
- [i116] Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan: Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP. CoRR abs/2304.00860 (2023)
- [i115] Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan: Signal Processing Grand Challenge 2023 - e-Prevention: Sleep Behavior as an Indicator of Relapses in Psychotic Patients. CoRR abs/2304.08614 (2023)
- [i114] Tiantian Feng, Rajat Hebbar, Shrikanth Narayanan: TrustSER: On the Trustworthiness of Fine-tuning Pre-trained Speech Embeddings For Speech Emotion Recognition. CoRR abs/2305.11229 (2023)
- [i113] Keith Burghardt, Ashwin Rao, Siyi Guo, Zihao He, Georgios Chochlakis, Sabyasachee Baruah, Andrew Rojecki, Shri Narayanan, Kristina Lerman: Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts. CoRR abs/2305.11867 (2023)
- [i112] Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan: Understanding Spoken Language Development of Children with ASD Using Pre-trained Speech Embeddings. CoRR abs/2305.14117 (2023)
- [i111] Tuo Zhang, Tiantian Feng, Samiul Alam, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr: GPT-FL: Generative Pre-trained Model-Assisted Federated Learning. CoRR abs/2306.02210 (2023)
- [i110] Tiantian Feng, Shrikanth Narayanan: PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models. CoRR abs/2306.05350 (2023)
- [i109] Tiantian Feng, Digbalay Bose, Xuan Shi, Shrikanth Narayanan: Unlocking Foundation Models for Privacy-Enhancing Speech Understanding: An Early Study on Low Resource Speech Training Leveraging Label-guided Synthetic Speech Content. CoRR abs/2306.07791 (2023)
- [i108] Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan: FedMultimodal: A Benchmark For Multimodal Federated Learning. CoRR abs/2306.09486 (2023)
- [i107] Tiantian Feng, Brandon M. Booth, Shrikanth Narayanan: Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process. CoRR abs/2307.04445 (2023)
- [i106] Shanti Stewart, Tiantian Feng, Kleanthis Avramidis, Shrikanth Narayanan: Emotion-Aligned Contrastive Learning Between Images and Music. CoRR abs/2308.12610 (2023)
- [i105] Digbalay Bose, Rajat Hebbar, Tiantian Feng, Krishna Somandepalli, Anfeng Xu, Shrikanth Narayanan: MM-AU: Towards Multimodal Understanding of Advertisement Videos. CoRR abs/2308.14052 (2023)
- [i104] Tiantian Feng, Shrikanth Narayanan: Foundation Model Assisted Automatic Speech Emotion Recognition: Transcribing, Annotating, and Augmenting. CoRR abs/2309.08108 (2023)
- [i103] Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan: Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization. CoRR abs/2309.09405 (2023)
- [i102] Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemyslaw Kazienko, Stanislaw Saganowski, Shrikanth Narayanan: Scaling Representation Learning from Ubiquitous ECG with State-Space Models. CoRR abs/2309.15292 (2023)
- [i101] Samiul Alam, Tuo Zhang, Tiantian Feng, Hui Shen, Zhichao Cao, Dong Zhao, JeongGil Ko, Kiran Somasundaram, Shrikanth S. Narayanan, Salman Avestimehr, Mi Zhang: FedAIoT: A Federated Learning Benchmark for Artificial Intelligence of Things. CoRR abs/2310.00109 (2023)
- [i100] Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan: Audio-visual child-adult speaker classification in dyadic interactions. CoRR abs/2310.01867 (2023)
- [i99] Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan: Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models. CoRR abs/2311.03551 (2023)
- [i98] Hong Nguyen, Cuong V. Nguyen, Shrikanth Narayanan, Benjamin Y. Xu, Michael Pazzani: Explainable Severity ranking via pairwise n-hidden comparison: a case study of glaucoma. CoRR abs/2312.02541 (2023)
- 2022
- [j154] Zane Durante, Victor Ardulov, Manoj Kumar, Jennifer Gongola, Thomas D. Lyon, Shrikanth Narayanan: Causal indicators for assessing the truthfulness of child speech in forensic interviews. Comput. Speech Lang. 71: 101263 (2022)
- [j153] Prashanth Gurunath Shivakumar, Shrikanth Narayanan: End-to-end neural systems for automatic children speech recognition: An empirical study. Comput. Speech Lang. 72: 101289 (2022)
- [j152] Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu Jeong Han, Shinji Watanabe, Shrikanth Narayanan: A review of speaker diarization: Recent advances with deep learning. Comput. Speech Lang. 72: 101317 (2022)
- [j151] Zhuohao Chen, Nikolaos Flemotomos, Karan Singla, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan: An automated quality evaluation framework of psychotherapy conversations with local quality estimates. Comput. Speech Lang. 75: 101380 (2022)
- [j150] Gábor Mihály Tóth, Tim Hempel, Krishna Somandepalli, Shri Narayanan: Studying Large-Scale Behavioral Differences in Auschwitz-Birkenau with Simulation of Gendered Narratives. Digit. Humanit. Q. 16(3) (2022)
- [j149] Björn W. Schuller, Yonina C. Eldar, Maja Pantic, Shrikanth Narayanan, Tuomas Virtanen, Jianhua Tao: Editorial: Intelligent Signal Analysis for Contagious Virus Diseases. IEEE J. Sel. Top. Signal Process. 16(2): 159-163 (2022)
- [j148] Anil Ramakrishna, Rahul Gupta, Shrikanth Narayanan: Joint Multi-Dimensional Model for Global and Time-Series Annotations. IEEE Trans. Affect. Comput. 13(1): 473-484 (2022)
- [j147] James Gibson, David C. Atkins, Torrey A. Creed, Zac E. Imel, Panayiotis G. Georgiou, Shrikanth Narayanan: Multi-Label Multi-Task Deep Learning for Behavioral Coding. IEEE Trans. Affect. Comput. 13(1): 508-518 (2022)
- [j146] Md. Nasir, Brian R. Baucom, Craig J. Bryan, Shrikanth Narayanan, Panayiotis G. Georgiou: Modeling Vocal Entrainment in Conversational Speech Using Deep Unsupervised Learning. IEEE Trans. Affect. Comput. 13(3): 1651-1663 (2022)
- [j145] Krishna Somandepalli, Rajat Hebbar, Shrikanth Narayanan: Robust Character Labeling in Movie Videos: Data Resources and Self-Supervised Feature Adaptation. IEEE Trans. Multim. 24: 3355-3368 (2022)
- [c645] Aggelina Chatziagapi, Dimitris Sgouropoulos, Constantinos Karouzos, Thomas Melistas, Theodoros Giannakopoulos, Athanasios Katsamanis, Shrikanth Narayanan: Audio and ASR-based Filled Pause Detection. ACII 2022: 1-7
- [c644] Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins, Shrikanth Narayanan: Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios. EMNLP (Findings) 2022: 5787-5795
- [c643] Tiantian Feng, Hanieh Hashemi, Murali Annavaram, Shrikanth S. Narayanan: Enhancing Privacy Through Domain Adaptive Noise Injection For Speech Emotion Recognition. ICASSP 2022: 7702-7706
- [c642] Kleanthis Avramidis, Mohammad Rostami, Melinda Chang, Shrikanth Narayanan: Automating Detection of Papilledema in Pediatric Fundus Images with Explainable Machine Learning. ICIP 2022: 3973-3977
- [c641] Tiantian Feng, Shrikanth Narayanan: Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling. INTERSPEECH 2022: 5050-5054
- [c640] Tiantian Feng, Raghuveer Peri, Shrikanth Narayanan: User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition on Federated Learning. INTERSPEECH 2022: 5055-5059
- [c639] Nikolaos Flemotomos, Shrikanth Narayanan: Multimodal Clustering with Role Induced Constraints for Speaker Diarization. INTERSPEECH 2022: 5075-5079
- [i97] Tiantian Feng, Shrikanth Narayanan: Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling. CoRR abs/2203.08810 (2022)
- [i96] Rahul Sharma, Shrikanth Narayanan: Audio visual character profiles for detecting background characters in entertainment media. CoRR abs/2203.11368 (2022)
- [i95] Nicholas Mehlman, Anirudh Sreeram, Raghuveer Peri, Shrikanth Narayanan: Mel Frequency Spectral Domain Defenses against Adversarial Attacks on Speech Recognition Systems. CoRR abs/2203.15283 (2022)
- [i94] Rahul Sharma, Shrikanth Narayanan: Using Active Speaker Faces for Diarization in TV shows. CoRR abs/2203.15961 (2022)
- [i93] Nikolaos Flemotomos, Shrikanth Narayanan: Multimodal Clustering with Role Induced Constraints for Speaker Diarization. CoRR abs/2204.00657 (2022)
- [i92] Tiantian Feng, Raghuveer Peri, Shrikanth Narayanan: User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning. CoRR abs/2204.02500 (2022)
- [i91]