
Meng Yu 0003
Person information
- unicode name: 于濛
- affiliation: Tencent AI Lab, Bellevue, WA, USA
- affiliation (PhD 2012): University of California, Irvine, Department of Mathematics, CA, USA
Other persons with the same name
- Meng Yu — disambiguation page
- Meng Yu 0001 — Roosevelt University, Department of Computer Science, Chicago, IL, USA (and 6 more)
- Meng Yu 0002 — Sandbridge Technologies, Tarrytown, NY, USA (and 1 more)
- Meng Yu 0004 — Wuhan University of Technology, School of Logistics Engineering, China
- Meng Yu 0005 — East China Jiaotong University, School of Information Engineering, Nanchang, China
- Meng Yu 0006 — National University of Defense Technology, Changsha, China
- Meng Yu 0007 — University of St. Andrews, School of Computer Science, UK
2020 – today
- 2020
- [j2] Ke Tan, Yong Xu, Shi-Xiong Zhang, Meng Yu, Dong Yu: Audio-Visual Speech Separation and Dereverberation With a Two-Stage Multimodal Network. IEEE J. Sel. Top. Signal Process. 14(3): 542-553 (2020)
- [c29] Xuan Ji, Meng Yu, Chunlei Zhang, Dan Su, Tao Yu, Xiaoyu Liu, Dong Yu: Speaker-Aware Target Speaker Enhancement by Jointly Learning with Speaker Embedding Extraction. ICASSP 2020: 7294-7298
- [c28] Aswin Shanmugam Subramanian, Chao Weng, Meng Yu, Shi-Xiong Zhang, Yong Xu, Shinji Watanabe, Dong Yu: Far-Field Location Guided Target Speech Extraction Using End-to-End Speech Recognition Objectives. ICASSP 2020: 7299-7303
- [c27] Rongzhi Gu, Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu: Enhancing End-to-End Multi-Channel Speech Separation Via Spatial Feature Learning. ICASSP 2020: 7319-7323
- [c26] Yong Xu, Meng Yu, Shi-Xiong Zhang, Lianwu Chen, Chao Weng, Jianming Liu, Dong Yu: Neural Spatio-Temporal Beamformer for Target Speech Separation. INTERSPEECH 2020: 56-60
- [c25] Meng Yu, Xuan Ji, Bo Wu, Dan Su, Dong Yu: End-to-End Multi-Look Keyword Spotting. INTERSPEECH 2020: 66-70
- [c24] Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu: DurIAN: Duration Informed Attention Network for Speech Synthesis. INTERSPEECH 2020: 2027-2031
- [c23] Jianwei Yu, Bo Wu, Rongzhi Gu, Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Dong Yu, Xunying Liu, Helen Meng: Audio-Visual Multi-Channel Recognition of Overlapped Speech. INTERSPEECH 2020: 3496-3500
- [i15] Rongzhi Gu, Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu: Enhancing End-to-End Multi-channel Speech Separation via Spatial Feature Learning. CoRR abs/2003.03927 (2020)
- [i14] Yong Xu, Meng Yu, Shi-Xiong Zhang, Lianwu Chen, Chao Weng, Jianming Liu, Dong Yu: Neural Spatio-Temporal Beamformer for Target Speech Separation. CoRR abs/2005.03889 (2020)
- [i13] Jianwei Yu, Bo Wu, Rongzhi Gu, Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Dong Yu, Xunying Liu, Helen Meng: Audio-visual Multi-channel Recognition of Overlapped Speech. CoRR abs/2005.08571 (2020)
- [i12] Meng Yu, Xuan Ji, Bo Wu, Dan Su, Dong Yu: End-to-End Multi-Look Keyword Spotting. CoRR abs/2005.10386 (2020)
- [i11] Daniel Michelsanti, Zheng-Hua Tan, Shi-Xiong Zhang, Yong Xu, Meng Yu, Dong Yu, Jesper Jensen: An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation. CoRR abs/2008.09586 (2020)
- [i10] Aswin Shanmugam Subramanian, Chao Weng, Shinji Watanabe, Meng Yu, Yong Xu, Shi-Xiong Zhang, Dong Yu: Directional ASR: A New Paradigm for E2E Multi-Speaker Speech Recognition with Source Localization. CoRR abs/2011.00091 (2020)
- [i9] Wei Xia, Chunlei Zhang, Chao Weng, Meng Yu, Dong Yu: Self-supervised Text-independent Speaker Verification using Prototypical Momentum Contrastive Learning. CoRR abs/2012.07178 (2020)
2010 – 2019
- 2019
- [c22] Junyi Peng, Yuexian Zou, Na Li, Deyi Tuo, Dan Su, Meng Yu, Chunlei Zhang, Dong Yu: Syllable-Dependent Discriminative Learning for Small Footprint Text-Dependent Speaker Verification. ASRU 2019: 350-357
- [c21] Bo Wu, Meng Yu, Lianwu Chen, Mingjie Jin, Dan Su, Dong Yu: Improving Speech Enhancement with Phonetic Embedding Features. ASRU 2019: 645-651
- [c20] Jian Wu, Yong Xu, Shi-Xiong Zhang, Lianwu Chen, Meng Yu, Lei Xie, Dong Yu: Time Domain Audio Visual Speech Separation. ASRU 2019: 667-673
- [c19] Lianwu Chen, Meng Yu, Dan Su, Dong Yu: Multi-band PIT and Model Integration for Improved Multi-channel Speech Separation. ICASSP 2019: 705-709
- [c18] Yichi Zhang, Meng Yu, Na Li, Chengzhu Yu, Jia Cui, Dong Yu: Seq2Seq Attentional Siamese Neural Networks for Text-dependent Speaker Verification. ICASSP 2019: 6131-6135
- [c17] Rongjin Li, Na Li, Deyi Tuo, Meng Yu, Dan Su, Dong Yu: Boundary Discriminative Large Margin Cosine Loss for Text-independent Speaker Verification. ICASSP 2019: 6321-6325
- [c16] Yong Xu, Chao Weng, Like Hui, Jianming Liu, Meng Yu, Dan Su, Dong Yu: Joint Training of Complex Ratio Mask Based Beamformer and Acoustic Model for Noise Robust ASR. ICASSP 2019: 6745-6749
- [c15] Jian Wu, Yong Xu, Shi-Xiong Zhang, Lianwu Chen, Meng Yu, Lei Xie, Dong Yu: Improved Speaker-Dependent Separation for CHiME-5 Challenge. INTERSPEECH 2019: 466-470
- [c14] Bin Liu, Shuai Nie, Shan Liang, Wenju Liu, Meng Yu, Lianwu Chen, Shouye Peng, Changliang Li: Jointly Adversarial Enhancement Training for Robust End-to-End Speech Recognition. INTERSPEECH 2019: 491-495
- [c13] Guanjun Li, Shan Liang, Shuai Nie, Wenju Liu, Meng Yu, Lianwu Chen, Shouye Peng, Changliang Li: Direction-Aware Speaker Beam for Multi-Channel Speaker Extraction. INTERSPEECH 2019: 2713-2717
- [c12] Rongzhi Gu, Lianwu Chen, Shi-Xiong Zhang, Jimeng Zheng, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu: Neural Spatial Filter: Target Speaker Speech Separation Assisted with Directional Information. INTERSPEECH 2019: 4290-4294
- [c11] Fahimeh Bahmaninezhad, Jian Wu, Rongzhi Gu, Shi-Xiong Zhang, Yong Xu, Meng Yu, Dong Yu: A Comprehensive Study of Speech Separation: Spectrogram vs Waveform Separation. INTERSPEECH 2019: 4574-4578
- [i8] Jian Wu, Yong Xu, Shi-Xiong Zhang, Lianwu Chen, Meng Yu, Lei Xie, Dong Yu: Time Domain Audio Visual Speech Separation. CoRR abs/1904.03760 (2019)
- [i7] Jian Wu, Yong Xu, Shi-Xiong Zhang, Lianwu Chen, Meng Yu, Lei Xie, Dong Yu: Improved Speaker-Dependent Separation for CHiME-5 Challenge. CoRR abs/1904.03792 (2019)
- [i6] Rongzhi Gu, Jian Wu, Shi-Xiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu: End-to-End Multi-Channel Speech Separation. CoRR abs/1905.06286 (2019)
- [i5] Fahimeh Bahmaninezhad, Jian Wu, Rongzhi Gu, Shi-Xiong Zhang, Yong Xu, Meng Yu, Dong Yu: A Comprehensive Study of Speech Separation: Spectrogram vs Waveform Separation. CoRR abs/1905.07497 (2019)
- [i4] Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu: DurIAN: Duration Informed Attention Network For Multimodal Synthesis. CoRR abs/1909.01700 (2019)
- [i3] Ke Tan, Yong Xu, Shi-Xiong Zhang, Meng Yu, Dong Yu: Audio-Visual Speech Separation and Dereverberation with a Two-Stage Multimodal Network. CoRR abs/1909.07352 (2019)
- [i2] Fahimeh Bahmaninezhad, Shi-Xiong Zhang, Yong Xu, Meng Yu, John H. L. Hansen, Dong Yu: A Unified Framework for Speech Separation. CoRR abs/1912.07814 (2019)
- 2018
- [c10] Lianwu Chen, Meng Yu, Yanmin Qian, Dan Su, Dong Yu: Permutation Invariant Training of Generative Adversarial Network for Monaural Speech Separation. INTERSPEECH 2018: 302-306
- [c9] Jun Wang, Jie Chen, Dan Su, Lianwu Chen, Meng Yu, Yanmin Qian, Dong Yu: Deep Extractor Network for Target Speaker Recovery from Single Channel Speech Mixtures. INTERSPEECH 2018: 307-311
- [c8] Meng Yu, Xuan Ji, Yi Gao, Lianwu Chen, Jie Chen, Jimeng Zheng, Dan Su, Dong Yu: Text-Dependent Speech Enhancement for Small-Footprint Robust Keyword Detection. INTERSPEECH 2018: 2613-2617
- [i1] Jun Wang, Jie Chen, Dan Su, Lianwu Chen, Meng Yu, Yanmin Qian, Dong Yu: Deep Extractor Network for Target Speaker Recovery From Single Channel Speech Mixtures. CoRR abs/1807.08974 (2018)
- 2012
- [j1] Meng Yu, Wenye Ma, Jack Xin, Stanley J. Osher: Multi-Channel l1 Regularized Convex Speech Enhancement Model and Fast Computation by the Split Bregman Method. IEEE Trans. Speech Audio Process. 20(2): 661-675 (2012)
- [c7] Meng Yu, Jack Xin: Exploring Off Time Nature for Speech Enhancement. INTERSPEECH 2012: 150-153
- [c6] Meng Yu, Frank K. Soong: Constrained Multichannel Speech Dereverberation. INTERSPEECH 2012: 1938-1941
- [c5] Meng Yu, Ryan Ritch, Jack Xin: A Triple-Microphone Real-Time Speech Enhancement Algorithm Based on Approximate Array Analytical Solutions. INTERSPEECH 2012: 1942-1945
- 2011
- [c4] Shunan Zhang, Michael D. Lee, Meng Yu, Jack Xin: Modeling Category Identification Using Sparse Instance Representation. CogSci 2011
- 2010
- [c3] Meng Yu, Wenye Ma, Jack Xin, Stanley J. Osher: Convexity and Fast Speech Extraction by Split Bregman Method. INTERSPEECH 2010: 398-401
- [c2] Wenye Ma, Meng Yu, Jack Xin, Stanley J. Osher: Reducing Musical Noise in Blind Source Separation by Time-Domain Sparse Filters and Split Bregman Method. INTERSPEECH 2010: 402-405
2000 – 2009
- 2009
- [c1] Jack Xin, Meng Yu, Yingyong Qi, Hsin-I Yang, Fan-Gang Zeng: A Nonlocally Weighted Soft-Constrained Natural Gradient Algorithm for Blind Separation of Reverberant Speech. WASPAA 2009: 81-84
last updated on 2021-01-23 00:48 CET by the dblp team
all metadata released as open data under CC0 1.0 license