Chengzhu Yu
2020 – today
- 2022
- [i11]Xin Zhang, Iván Vallés-Pérez, Andreas Stolcke, Chengzhu Yu, Jasha Droppo, Olabanji Shonibare, Roberto Barra-Chicote, Venkatesh Ravichandran:
Stutter-TTS: Controlled Synthesis and Improved Recognition of Stuttered Speech. CoRR abs/2211.09731 (2022)
- 2021
- [c32]Yufei Liu, Chengzhu Yu, Shuai Wang, Zhenchuan Yang, Yang Chao, Weibin Zhang:
Non-Parallel Any-to-Many Voice Conversion by Replacing Speaker Statistics. Interspeech 2021: 1369-1373
- 2020
- [c31]Qiao Tian, Zewang Zhang, Ling-Hui Chen, Heng Lu, Chengzhu Yu, Chao Weng, Dong Yu:
The Tencent speech synthesis system for Blizzard Challenge 2020. Blizzard Challenge / Voice Conversion Challenge 2020
- [c30]Chengqi Deng, Chengzhu Yu, Heng Lu, Chao Weng, Dong Yu:
Pitchnet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network. ICASSP 2020: 7749-7753
- [c29]Chao Weng, Chengzhu Yu, Jia Cui, Chunlei Zhang, Dong Yu:
Minimum Bayes Risk Training of RNN-Transducer for End-to-End Speech Recognition. INTERSPEECH 2020: 966-970
- [c28]Yusong Wu, Shengchen Li, Chengzhu Yu, Heng Lu, Chao Weng, Liqiang Zhang, Dong Yu:
Peking Opera Synthesis via Duration Informed Attention Network. INTERSPEECH 2020: 1226-1230
- [c27]Liqiang Zhang, Chengzhu Yu, Heng Lu, Chao Weng, Chunlei Zhang, Yusong Wu, Xiang Xie, Zijin Li, Dong Yu:
DurIAN-SC: Duration Informed Attention Network Based Singing Voice Conversion System. INTERSPEECH 2020: 1231-1235
- [c26]Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu:
DurIAN: Duration Informed Attention Network for Speech Synthesis. INTERSPEECH 2020: 2027-2031
- [i10]Liqiang Zhang, Chengzhu Yu, Heng Lu, Chao Weng, Chunlei Zhang, Yusong Wu, Xiang Xie, Zijin Li, Dong Yu:
DurIAN-SC: Duration Informed Attention Network based Singing Voice Conversion System. CoRR abs/2008.03009 (2020)
- [i9]Yusong Wu, Shengchen Li, Chengzhu Yu, Heng Lu, Chao Weng, Liqiang Zhang, Dong Yu:
Peking Opera Synthesis via Duration Informed Attention Network. CoRR abs/2008.03029 (2020)
2010 – 2019
- 2019
- [c25]Yichi Zhang, Meng Yu, Na Li, Chengzhu Yu, Jia Cui, Dong Yu:
Seq2Seq Attentional Siamese Neural Networks for Text-dependent Speaker Verification. ICASSP 2019: 6131-6135
- [c24]Chih-Kuan Yeh, Jianshu Chen, Chengzhu Yu, Dong Yu:
Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching. ICLR (Poster) 2019
- [c23]John H. L. Hansen, Aditya Joglekar, Meena Chandra Shekhar, Vinay Kothapally, Chengzhu Yu, Lakshmish Kaushik, Abhijeet Sangwan:
The 2019 Inaugural Fearless Steps Challenge: A Giant Leap for Naturalistic Audio. INTERSPEECH 2019: 1851-1855
- [i8]Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu:
DurIAN: Duration Informed Attention Network For Multimodal Synthesis. CoRR abs/1909.01700 (2019)
- [i7]Chao Weng, Chengzhu Yu, Jia Cui, Chunlei Zhang, Dong Yu:
Minimum Bayes Risk Training of RNN-Transducer for End-to-End Speech Recognition. CoRR abs/1911.12487 (2019)
- [i6]Chengqi Deng, Chengzhu Yu, Heng Lu, Chao Weng, Dong Yu:
PitchNet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network. CoRR abs/1912.01852 (2019)
- [i5]Liqiang Zhang, Chengzhu Yu, Heng Lu, Chao Weng, Yusong Wu, Xiang Xie, Zijin Li, Dong Yu:
Learning Singing From Speech. CoRR abs/1912.10128 (2019)
- [i4]Yusong Wu, Shengchen Li, Chengzhu Yu, Heng Lu, Chao Weng, Liqiang Zhang, Dong Yu:
Synthesising Expressiveness in Peking Opera via Duration Informed Attention Network. CoRR abs/1912.12010 (2019)
- 2018
- [c22]Chao Weng, Jia Cui, Guangsen Wang, Jun Wang, Chengzhu Yu, Dan Su, Dong Yu:
Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition. INTERSPEECH 2018: 761-765
- [c21]Chengzhu Yu, Chunlei Zhang, Chao Weng, Jia Cui, Dong Yu:
A Multistage Training Framework for Acoustic-to-Word Model. INTERSPEECH 2018: 786-790
- [c20]John H. L. Hansen, Abhijeet Sangwan, Aditya Joglekar, Ahmet Emin Bulut, Lakshmish Kaushik, Chengzhu Yu:
Fearless Steps: Apollo-11 Corpus Advancements for Speech Technologies from Earth to the Moon. INTERSPEECH 2018: 2758-2762
- [c19]Chunlei Zhang, Chengzhu Yu, Chao Weng, Jia Cui, Dong Yu:
An Exploration of Directly Using Word as Acoustic Modeling Unit for Speech Recognition. SLT 2018: 64-69
- [c18]Jia Cui, Chao Weng, Guangsen Wang, Jun Wang, Peidong Wang, Chengzhu Yu, Dan Su, Dong Yu:
Improving Attention-Based End-to-End ASR Systems with Sequence-Based Loss Functions. SLT 2018: 353-360
- [i3]Chih-Kuan Yeh, Jianshu Chen, Chengzhu Yu, Dong Yu:
Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching. CoRR abs/1812.09323 (2018)
- 2017
- [j3]Chunlei Zhang, Chengzhu Yu, John H. L. Hansen:
An Investigation of Deep-Learning Frameworks for Speaker Verification Antispoofing. IEEE J. Sel. Top. Signal Process. 11(4): 684-694 (2017)
- [j2]Dongmei Wang, Chengzhu Yu, John H. L. Hansen:
Robust Harmonic Features for Classification-Based Pitch Estimation. IEEE ACM Trans. Audio Speech Lang. Process. 25(5): 952-964 (2017)
- [j1]Chengzhu Yu, John H. L. Hansen:
Active Learning Based Constrained Clustering For Speaker Diarization. IEEE ACM Trans. Audio Speech Lang. Process. 25(11): 2188-2198 (2017)
- [c17]Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, Andreas Stolcke:
Advances in all-neural speech recognition. ICASSP 2017: 4805-4809
- [c16]Chunlei Zhang, Fahimeh Bahmaninezhad, Shivesh Ranjan, Chengzhu Yu, Navid Shokouhi, John H. L. Hansen:
UTD-CRSS Systems for 2016 NIST Speaker Recognition Evaluation. INTERSPEECH 2017: 1343-1347
- 2016
- [c15]Douglas W. Oard, John H. L. Hansen, Abhijeet Sangwan, Bryan Toth, Lakshmish Kaushik, Chengzhu Yu:
Toward Access to Multi-Perspective Archival Spoken Word Content. ICADL 2016: 77-82
- [c14]Marc Delcroix, Keisuke Kinoshita, Chengzhu Yu, Atsunori Ogawa, Takuya Yoshioka, Tomohiro Nakatani:
Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions. ICASSP 2016: 5270-5274
- [c13]Shivesh Ranjan, Chengzhu Yu, Chunlei Zhang, Finnian Kelly, John H. L. Hansen:
Language recognition using deep neural networks with very limited training data. ICASSP 2016: 5830-5834
- [c12]Chengzhu Yu, Chunlei Zhang, Shivesh Ranjan, Qian Zhang, Abhinav Misra, Finnian Kelly, John H. L. Hansen:
UTD-CRSS system for the NIST 2015 language recognition i-vector machine learning challenge. ICASSP 2016: 5835-5839
- [c11]Chengzhu Yu, Chunlei Zhang, Finnian Kelly, Abhijeet Sangwan, John H. L. Hansen:
Text-Available Speaker Recognition System for Forensic Applications. INTERSPEECH 2016: 1844-1847
- [i2]Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, Andreas Stolcke:
Advances in All-Neural Speech Recognition. CoRR abs/1609.05935 (2016)
- [i1]Chunlei Zhang, Fahimeh Bahmaninezhad, Shivesh Ranjan, Chengzhu Yu, Navid Shokouhi, John H. L. Hansen:
UTD-CRSS Systems for 2016 NIST Speaker Recognition Evaluation. CoRR abs/1610.07651 (2016)
- 2015
- [c10]Takuya Yoshioka, Nobutaka Ito, Marc Delcroix, Atsunori Ogawa, Keisuke Kinoshita, Masakiyo Fujimoto, Chengzhu Yu, Wojciech J. Fabian, Miquel Espi, Takuya Higuchi, Shoko Araki, Tomohiro Nakatani:
The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices. ASRU 2015: 436-443
- [c9]Chunlei Zhang, Gang Liu, Chengzhu Yu, John H. L. Hansen:
I-vector based physical task stress detection with different fusion strategies. INTERSPEECH 2015: 2689-2693
- [c8]Chengzhu Yu, Atsunori Ogawa, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani, John H. L. Hansen:
Robust i-vector extraction for neural network adaptation in noisy environment. INTERSPEECH 2015: 2854-2857
- 2014
- [c7]Chengzhu Yu, Gang Liu, Seongjun Hahm, John H. L. Hansen:
Uncertainty propagation in front end factor analysis for noise robust speaker recognition. ICASSP 2014: 4017-4021
- [c6]Chengzhu Yu, John H. L. Hansen, Douglas W. Oard:
'Houston, we have a solution': a case study of the analysis of astronaut speech during NASA Apollo 11 for long-term speaker modeling. INTERSPEECH 2014: 945-948
- [c5]Chengzhu Yu, Gang Liu, John H. L. Hansen:
Acoustic feature transformation using UBM-based LDA for speaker recognition. INTERSPEECH 2014: 1851-1854
- [c4]Gang Liu, John H. L. Hansen, Chengzhu Yu, Abhinav Misra, Navid Shokouhi:
Investigating State-of-the-Art Speaker Verification in the case of Unlabeled Development Data. Odyssey 2014: 118-122
- [c3]Gang Liu, Chengzhu Yu, Navid Shokouhi, Abhinav Misra, Hua Xing, John H. L. Hansen:
Utilization of unlabeled development data for speaker verification. SLT 2014: 418-423
- 2013
- [c2]Chengzhu Yu, Kamil K. Wójcicki, Philipos C. Loizou, John H. L. Hansen:
A new mask-based objective measure for predicting the intelligibility of binary masked speech. ICASSP 2013: 7030-7033
- [c1]Abhijeet Sangwan, Lakshmish Kaushik, Chengzhu Yu, John H. L. Hansen, Douglas W. Oard:
'Houston, we have a solution': using NASA Apollo program to advance speech and language processing technology. INTERSPEECH 2013: 1135-1139