Computer Speech & Language, Volume 93, 2025
- Liming Zhou, Xiaowei Xu, Xiaodong Wang: LSRD-Net: A fine-grained sentiment analysis method based on log-normalized semantic relative distance. 101782
- Lele Cao, Valentin Leonhard Buchner, Zineb Senane, Fangkai Yang: GenCeption: Evaluate vision LLMs with unlabeled unimodal data. 101785
- V. Soni Ishwarya, K. Mohanaprasad: A novel Adaptive Kolmogorov Arnold Sparse Masked Attention Model with multi-loss optimization for Acoustic Echo Cancellation in double-talk noisy scenario. 101786
- Zigang Chen, Yuening Zhou, Zhen Wang, Fan Liu, Tao Leng, Haihua Zhu: A bias evaluation solution for multiple sensitive attribute speech recognition. 101787
- Haoxiang Chen, Yanyan Xu, Dengfeng Ke, Kaile Su: DDP-Unet: A mapping neural network for single-channel speech enhancement. 101795
- Igor Abramovski, Alon Vinnikov, Shalev Shaer, Naoyuki Kanda, Xiaofei Wang, Amir Ivry, Eyal Krupka: Summary of the NOTSOFAR-1 challenge: Highlights and learnings. 101796
- Yuxuan Zhang, Zipeng Zhang, Weiwei Guo, Wei Chen, Zhaohai Liu, Houguang Liu: LRetUNet: A U-Net-based retentive network for single-channel speech enhancement. 101798
- Cheng-Hung Hu, Yusuke Yasuda, Tomoki Toda: E2EPref: An end-to-end preference-based framework for speech quality assessment to alleviate bias in direct assessment scores. 101799
- Khanh Quoc Tran, Quang Phan-Minh Huynh, Oanh Thi-Hong Le, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen: ViTASA: New benchmark and methods for Vietnamese targeted aspect sentiment analysis for multiple textual domains. 101800
- Cong Pang, Ye Ni, Lin Zhou, Li Zhao, Feifei Xiong: Exploiting spatial information and target speaker phoneme loss for multichannel directional speech enhancement and recognition. 101801
