Jon Barker
2020 – today
- 2025
- [j25] Simon Leglaive, Matthieu Fraticelli, Hend Elghazaly, Léonie Borne, Mostafa Sadeghi, Scott Wisdom, Manuel Pariente, John R. Hershey, Daniel Pressnitzer, Jon P. Barker:
Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge. Comput. Speech Lang. 89: 101685 (2025)
- 2024
- [c93] Gerardo Roa Dabike, Michael A. Akeroyd, Scott Bannister, Jon Barker, Trevor J. Cox, Bruno Fazenda, Jennifer Firth, Simone Graetzer, Alinka Greasley, Rebecca R. Vos, William M. Whitmer:
The ICASSP SP Cadenza Challenge: Music Demixing/Remixing for Hearing Aids. ICASSP Workshops 2024: 93-94
- [c92] Rhiannon Mogridge, George Close, Robert Sutherland, Thomas Hain, Jon Barker, Stefan Goetze, Anton Ragni:
Non-Intrusive Speech Intelligibility Prediction for Hearing-Impaired Users Using Intermediate ASR Features and Human Memory Models. ICASSP 2024: 306-310
- [c91] Jon Barker, Michael A. Akeroyd, Will Bailey, Trevor J. Cox, John F. Culling, Jennifer Firth, Simone Graetzer, Graham Naylor:
The 2nd Clarity Prediction Challenge: A Machine Learning Challenge for Hearing Aid Intelligibility Prediction. ICASSP 2024: 11551-11555
- [c90] Max Ehrlich, Jon Barker, Namitha Padmanabhan, Larry Davis, Andrew Tao, Bryan Catanzaro, Abhinav Shrivastava:
Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement. WACV 2024: 1506-1516
- [i25] Rhiannon Mogridge, George Close, Robert Sutherland, Thomas Hain, Jon Barker, Stefan Goetze, Anton Ragni:
Non-Intrusive Speech Intelligibility Prediction for Hearing-Impaired Users using Intermediate ASR Features and Human Memory Models. CoRR abs/2401.13611 (2024)
- [i24] Simon Leglaive, Matthieu Fraticelli, Hend Elghazaly, Léonie Borne, Mostafa Sadeghi, Scott Wisdom, Manuel Pariente, John R. Hershey, Daniel Pressnitzer, Jon P. Barker:
Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge. CoRR abs/2402.01413 (2024)
- [i23] Robert Sutherland, George Close, Thomas Hain, Stefan Goetze, Jon Barker:
Using Speech Foundational Models in Loss Functions for Hearing Aid Speech Enhancement. CoRR abs/2407.13333 (2024)
- [i22] Gerardo Roa Dabike, Michael A. Akeroyd, Scott Bannister, Jon P. Barker, Trevor J. Cox, Bruno Fazenda, Jennifer Firth, Simone Graetzer, Alinka Greasley, Rebecca R. Vos, William M. Whitmer:
The first Cadenza challenges: using machine learning competitions to improve music for listeners with a hearing loss. CoRR abs/2409.05095 (2024)
- [i21] Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuoling Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping:
NVLM: Open Frontier-Class Multimodal LLMs. CoRR abs/2409.11402 (2024)
- 2023
- [c89] Gerardo Roa Dabike, Scott Bannister, Jennifer Firth, Simone Graetzer, Rebecca R. Vos, Michael A. Akeroyd, Jon Barker, Trevor J. Cox, Bruno Fazenda, Alinka Greasley, William M. Whitmer:
The First Cadenza Signal Processing Challenge: Improving Music for Those With a Hearing Loss. HCMIR@ISMIR 2023
- [c88] Michael A. Akeroyd, Will Bailey, Jon Barker, Trevor J. Cox, John F. Culling, Simone Graetzer, Graham Naylor, Zuzanna Podwinska, Zehai Tu:
The 2nd Clarity Enhancement Challenge for Hearing Aid Speech Intelligibility Enhancement: Overview and Outcomes. ICASSP 2023: 1-5
- [c87] Trevor J. Cox, Jon Barker, Will Bailey, Simone Graetzer, Michael A. Akeroyd, John F. Culling, Graham Naylor:
Overview of the 2023 ICASSP SP Clarity Challenge: Speech Enhancement for Hearing Aids. ICASSP 2023: 1-2
- [i20] Gerardo Roa Dabike, Michael A. Akeroyd, Scott Bannister, Jon Barker, Trevor J. Cox, Bruno Fazenda, Jennifer Firth, Simone Graetzer, Alinka Greasley, Rebecca R. Vos, William M. Whitmer:
The Cadenza ICASSP 2024 Grand Challenge. CoRR abs/2310.03480 (2023)
- [i19] Gerardo Roa Dabike, Scott Bannister, Jennifer Firth, Simone Graetzer, Rebecca R. Vos, Michael A. Akeroyd, Jon Barker, Trevor J. Cox, Bruno Fazenda, Alinka Greasley, William M. Whitmer:
The First Cadenza Signal Processing Challenge: Improving Music for Those With a Hearing Loss. CoRR abs/2310.05799 (2023)
- [i18] Zehai Tu, Ning Ma, Jon Barker:
Intelligibility prediction with a pretrained noise-robust automatic speech recognition model. CoRR abs/2310.19817 (2023)
- [i17] Trevor J. Cox, Jon Barker, Will Bailey, Simone Graetzer, Michael A. Akeroyd, John F. Culling, Graham Naylor:
Overview of the 2023 ICASSP SP Clarity Challenge: Speech Enhancement for Hearing Aids. CoRR abs/2311.14490 (2023)
- 2022
- [j24] Zhengjun Yue, Erfan Loweimi, Heidi Christensen, Jon Barker, Zoran Cvetkovic:
Acoustic Modelling From Raw Source and Filter Components for Dysarthric Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 30: 2968-2980 (2022)
- [c86] Jack Deadman, Jon Barker:
Improved Simulation of Realistically-Spatialised Simultaneous Speech Using Multi-Camera Analysis in the CHiME-5 Dataset. ICASSP 2022: 591-595
- [c85] Zhengjun Yue, Erfan Loweimi, Zoran Cvetkovic, Heidi Christensen, Jon Barker:
Multi-Modal Acoustic-Articulatory Feature Fusion For Dysarthric Speech Recognition. ICASSP 2022: 7372-7376
- [c84] Zehai Tu, Jack Deadman, Ning Ma, Jon Barker:
Auditory-Based Data Augmentation for End-to-End Automatic Speech Recognition. ICASSP 2022: 7447-7451
- [c83] Zhengjun Yue, Erfan Loweimi, Heidi Christensen, Jon Barker, Zoran Cvetkovic:
Dysarthric Speech Recognition From Raw Waveform with Parametric CNNs. INTERSPEECH 2022: 31-35
- [c82] Jack Deadman, Jon Barker:
Modelling Turn-taking in Multispeaker Parties for Realistic Data Simulation. INTERSPEECH 2022: 266-270
- [c81] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. INTERSPEECH 2022: 1056-1060
- [c80] Zehai Tu, Ning Ma, Jon Barker:
Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners. INTERSPEECH 2022: 3488-3492
- [c79] Zehai Tu, Ning Ma, Jon Barker:
Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction. INTERSPEECH 2022: 3493-3497
- [c78] Jon Barker, Michael Akeroyd, Trevor J. Cox, John F. Culling, Jennifer Firth, Simone Graetzer, Holly Griffiths, Lara Harris, Graham Naylor, Zuzanna Podwinska, Eszter Porter, Rhoddy Viveros Muñoz:
The 1st Clarity Prediction Challenge: A machine learning challenge for hearing aid intelligibility prediction. INTERSPEECH 2022: 3508-3512
- [c77] Emma Barker, Jon Barker, Robert J. Gaizauskas, Ning Ma, Monica Lestari Paramita:
SNuC: The Sheffield Numbers Spoken Language Corpus. LREC 2022: 1978-1984
- [i16] Max Ehrlich, Jon Barker, Namitha Padmanabhan, Larry Davis, Andrew Tao, Bryan Catanzaro, Abhinav Shrivastava:
Leveraging Bitstream Metadata for Fast and Accurate Video Compression Correction. CoRR abs/2202.00011 (2022)
- [i15] Zehai Tu, Jack Deadman, Ning Ma, Jon Barker:
Auditory-Based Data Augmentation for End-to-End Automatic Speech Recognition. CoRR abs/2204.04284 (2022)
- [i14] Zehai Tu, Ning Ma, Jon Barker:
Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners. CoRR abs/2204.04287 (2022)
- [i13] Zehai Tu, Ning Ma, Jon Barker:
Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction. CoRR abs/2204.04288 (2022)
- [i12] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. CoRR abs/2205.01751 (2022)
- 2021
- [c76] Zehai Tu, Ning Ma, Jon Barker:
DHASP: Differentiable Hearing Aid Speech Processing. ICASSP 2021: 296-300
- [c75] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism. ICASSP 2021: 6084-6088
- [c74] Gerardo Roa Dabike, Jon Barker:
The use of Voice Source Features for Sung Speech Recognition. ICASSP 2021: 6513-6517
- [c73] Simone Graetzer, Jon Barker, Trevor J. Cox, Michael Akeroyd, John F. Culling, Graham Naylor, Eszter Porter, Rhoddy Viveros Muñoz:
Clarity-2021 Challenges: Machine Learning Challenges for Advancing Hearing Aid Processing. Interspeech 2021: 686-690
- [c72] Zehai Tu, Ning Ma, Jon Barker:
Optimising Hearing Aid Fittings for Speech in Noise with a Differentiable Hearing Loss Model. Interspeech 2021: 691-695
- [c71] Zhengjun Yue, Jon Barker, Heidi Christensen, Cristina McKean, Elaine Ashton, Yvonne Wren, Swapnil Gadgil, Rebecca Bright:
Parental Spoken Scaffolding and Narrative Skills in Crowd-Sourced Storytelling Samples of Young Children. Interspeech 2021: 2946-2950
- [c70] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
Teacher-Student MixIT for Unsupervised and Semi-Supervised Speech Separation. Interspeech 2021: 3495-3499
- [i11] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism. CoRR abs/2102.03762 (2021)
- [i10] Gerardo Roa Dabike, Jon Barker:
The Use of Voice Source Features for Sung Speech Recognition. CoRR abs/2102.10376 (2021)
- [i9] Zehai Tu, Ning Ma, Jon Barker:
DHASP: Differentiable Hearing Aid Speech Processing. CoRR abs/2103.08569 (2021)
- [i8] Zehai Tu, Ning Ma, Jon Barker:
Optimising Hearing Aid Fittings for Speech in Noise with a Differentiable Hearing Loss Model. CoRR abs/2106.04639 (2021)
- [i7] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
Teacher-Student MixIT for Unsupervised and Semi-supervised Speech Separation. CoRR abs/2106.07843 (2021)
- 2020
- [c69] Zhengjun Yue, Feifei Xiong, Heidi Christensen, Jon Barker:
Exploring Appropriate Acoustic and Language Modelling Choices for Continuous Dysarthric Speech Recognition. ICASSP 2020: 6094-6098
- [c68] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments. ICASSP 2020: 6389-6393
- [c67] Feifei Xiong, Jon Barker, Zhengjun Yue, Heidi Christensen:
Source Domain Data Selection for Improved Transfer Learning Targeting Dysarthric Speech Recognition. ICASSP 2020: 7424-7428
- [c66] Jack Deadman, Jon Barker:
Simulating Realistically-Spatialised Simultaneous Speech Using Video-Driven Speaker Detection and the CHiME-5 Dataset. INTERSPEECH 2020: 349-353
- [c65] Zhengjun Yue, Heidi Christensen, Jon Barker:
Autoencoder Bottleneck Features with Multi-Task Optimisation for Improved Continuous Dysarthric Speech Recognition. INTERSPEECH 2020: 4581-4585
- [i6] Shinji Watanabe, Michael I. Mandel, Jon Barker, Emmanuel Vincent:
CHiME-6 Challenge: Tackling Multispeaker Speech Recognition for Unsegmented Recordings. CoRR abs/2004.09249 (2020)
- [i5] Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker:
On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments. CoRR abs/2011.05958 (2020)
2010 – 2019
- 2019
- [c64] Feifei Xiong, Jon Barker, Heidi Christensen:
Phonetic Analysis of Dysarthric Speech Tempo and Applications to Robust Personalised Dysarthric Speech Recognition. ICASSP 2019: 5836-5840
- [c63] Gerardo Roa Dabike, Jon Barker:
Automatic Lyric Transcription from Karaoke Vocal Tracks: Resources and a Baseline System. INTERSPEECH 2019: 579-583
- 2018
- [j23] Ricard Marxer, Jon Barker, Najwa Alghamdi, Steve Maddock:
The impact of the Lombard effect on audio and visual speech recognition systems. Speech Commun. 100: 58-68 (2018)
- [c62] Feifei Xiong, Jon Barker, Heidi Christensen:
Deep Learning of Articulatory-Based Representations and Applications for Improving Dysarthric Speech Recognition. ITG Symposium on Speech Communication 2018: 1-5
- [c61] Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, Charles K. Nicholas:
Malware Detection by Eating a Whole EXE. AAAI Workshops 2018: 268-276
- [c60] Fitsum A. Reda, Guilin Liu, Kevin J. Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, Bryan Catanzaro:
SDC-Net: Video Prediction Using Spatially-Displaced Convolution. ECCV (7) 2018: 747-763
- [c59] Erfan Loweimi, Jon Barker, Thomas Hain:
Exploring the Use of Group Delay for Generalised VTS Based Noise Compensation. ICASSP 2018: 4824-4828
- [c58] Erfan Loweimi, Jon Barker, Thomas Hain:
On the Usefulness of the Speech Phase Spectrum for Pitch Extraction. INTERSPEECH 2018: 696-700
- [c57] Jon Barker, Shinji Watanabe, Emmanuel Vincent, Jan Trmal:
The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines. INTERSPEECH 2018: 1561-1565
- [c56] Mandar Gogate, Ahsan Adeel, Ricard Marxer, Jon Barker, Amir Hussain:
DNN Driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation. INTERSPEECH 2018: 2723-2727
- [i4] Jon Barker, Shinji Watanabe, Emmanuel Vincent, Jan Trmal:
The fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, task and baselines. CoRR abs/1803.10609 (2018)
- [i3] Mandar Gogate, Ahsan Adeel, Ricard Marxer, Jon Barker, Amir Hussain:
DNN driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation. CoRR abs/1808.00060 (2018)
- [i2] Fitsum A. Reda, Guilin Liu, Kevin J. Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, Bryan Catanzaro:
SDCNet: Video Prediction Using Spatially-Displaced Convolution. CoRR abs/1811.00684 (2018)
- 2017
- [j22] Jon Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe:
Multi-microphone speech recognition in everyday environments. Comput. Speech Lang. 46: 386-387 (2017)
- [j21] Emmanuel Vincent, Shinji Watanabe, Aditya Arie Nugraha, Jon Barker, Ricard Marxer:
An analysis of environment, microphone and data simulation mismatches in robust speech recognition. Comput. Speech Lang. 46: 535-557 (2017)
- [j20] Jon Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe:
The third 'CHiME' speech separation and recognition challenge: Analysis and outcomes. Comput. Speech Lang. 46: 605-626 (2017)
- [j19] José A. González, Angel Manuel Gomez Garcia, Antonio M. Peinado, Ning Ma, Jon Barker:
Spectral Reconstruction and Noise Model Estimation Based on a Masking Model for Noise Robust Speech Recognition. Circuits Syst. Signal Process. 36(9): 3731-3760 (2017)
- [j18] Najwa Alghamdi, Steve Maddock, Jon Barker, Guy J. Brown:
The impact of automatic exaggeration of the visual articulatory features of a talker on the intelligibility of spectrally distorted speech. Speech Commun. 95: 127-136 (2017)
- [c55] Erfan Loweimi, Jon Barker, Thomas Hain:
Statistical normalisation of phase-based feature representation for robust speech recognition. ICASSP 2017: 5310-5314
- [c54] Erfan Loweimi, Jon Barker, Oscar Saz-Torralba, Thomas Hain:
Robust Source-Filter Separation of Speech Signal in the Phase Domain. INTERSPEECH 2017: 414-418
- [c53] Ricard Marxer, Jon Barker:
Binary Mask Estimation Strategies for Constrained Imputation-Based Speech Enhancement. INTERSPEECH 2017: 1988-1992
- [c52] Erfan Loweimi, Jon Barker, Thomas Hain:
Channel Compensation in the Generalised Vector Taylor Series Approach to Robust ASR. INTERSPEECH 2017: 2466-2470
- [p3] Michael I. Mandel, Jon P. Barker:
Multichannel Spatial Clustering Using Model-Based Source Separation. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 51-77
- [p2] Jon P. Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe:
The CHiME Challenges: Robust Speech Recognition in Everyday Environments. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 327-344
- [i1] Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, Charles K. Nicholas:
Malware Detection by Eating a Whole EXE. CoRR abs/1710.09435 (2017)
- 2016
- [c51] Andrew Abel, Ricard Marxer, Jon Barker, Roger Watt, Bill Whitmer, Peter Derleth, Amir Hussain:
A Data Driven Approach to Audiovisual Speech Mapping. BICS 2016: 331-342
- [c50] Máté Attila Tóth, Martin Cooke, Jon Barker:
Misperceptions Arising from Speech-in-Babble Interactions. INTERSPEECH 2016: 630-634
- [c49] María Luisa García Lecumberri, Jon Barker, Ricard Marxer, Martin Cooke:
Language Effects in Noise-Induced Word Misperceptions. INTERSPEECH 2016: 640-644
- [c48] Michael I. Mandel, Jon Barker:
Multichannel Spatial Clustering for Robust Far-Field Automatic Speech Recognition in Mismatched Conditions. INTERSPEECH 2016: 1991-1995
- [c47] Erfan Loweimi, Jon Barker, Thomas Hain:
Use of Generalised Nonlinearity in Vector Taylor Series Noise Compensation for Robust Speech Recognition. INTERSPEECH 2016: 3798-3802
- 2015
- [c46] Ning Ma, Ricard Marxer, Jon Barker, Guy J. Brown:
Exploiting synchrony spectra and deep neural networks for noise-robust automatic speech recognition. ASRU 2015: 490-495
- [c45] Jon Barker, Ricard Marxer, Emmanuel Vincent, Shinji Watanabe:
The third 'CHiME' speech separation and recognition challenge: Dataset, task and baselines. ASRU 2015: 504-511
- [c44] Najwa Alghamdi, Steve Maddock, Guy J. Brown, Jon Barker:
Investigating the impact of artificial enhancement of lip visibility on the intelligibility of spectrally-distorted speech. AVSP 2015: 93-98
- [c43] Najwa Alghamdi, Steve Maddock, Guy J. Brown, Jon Barker:
A comparison of audiovisual and auditory-only training on the perception of spectrally-distorted speech. ICPhS 2015
- [c42] Maryam Al Dabel, Jon Barker:
On the role of discriminative intelligibility model for speech intelligibility enhancement. ICPhS 2015
- [c41] Erfan Loweimi, Jon Barker, Thomas Hain:
Source-filter separation of speech signal in the phase domain. INTERSPEECH 2015: 598-602
- [c40] Lin Lin, Jon Barker, Guy J. Brown:
The effect of cochlear implant processing on speaker intelligibility: a perceptual study and computer model. INTERSPEECH 2015: 1566-1570
- [c39] Ricard Marxer, Martin Cooke, Jon Barker:
A framework for the evaluation of microscopic intelligibility models. INTERSPEECH 2015: 2558-2562
- [c38] Erfan Loweimi, Mortaza Doulaty, Jon Barker, Thomas Hain:
Long-Term Statistical Feature Extraction from Speech Signal and Its Application in Emotion Recognition. SLSP 2015: 173-184
- [c37] Peter Foster, Siddharth Sigtia, Sacha Krstulovic, Jon Barker, Mark D. Plumbley:
Chime-home: A dataset for sound source recognition in a domestic environment. WASPAA 2015: 1-5
- 2014
- [c36] Maryam Al Dabel, Jon Barker:
Speech pre-enhancement using a discriminative microscopic intelligibility model. INTERSPEECH 2014: 2068-2072
- 2013
- [j17] Jon Barker, Emmanuel Vincent:
Special issue on speech separation and recognition in multisource environments. Comput. Speech Lang. 27(3): 619-620 (2013)
- [j16] Jon Barker, Emmanuel Vincent, Ning Ma, Heidi Christensen, Phil D. Green:
The PASCAL CHiME speech separation and recognition challenge. Comput. Speech Lang. 27(3): 621-633 (2013)
- [j15] Ning Ma, Jon Barker, Heidi Christensen, Phil D. Green:
A hearing-inspired approach for distant-microphone speech recognition in the presence of multiple sources. Comput. Speech Lang. 27(3): 820-836 (2013)
- [j14] José L. Carmona, Jon Barker, Angel M. Gomez, Ning Ma:
Speech Spectral Envelope Enhancement by HMM-Based Analysis/Resynthesis. IEEE Signal Process. Lett. 20(6): 563-566 (2013)
- [j13] José A. González, Antonio M. Peinado, Ning Ma, Angel M. Gomez, Jon Barker:
MMSE-Based Missing-Feature Reconstruction With Temporal Modeling for Robust Speech Recognition. IEEE Trans. Speech Audio Process. 21(3): 624-635 (2013)
- [c35] Emmanuel Vincent, Jon Barker, Shinji Watanabe, Jonathan Le Roux, Francesco Nesta, Marco Matassoni:
The second 'CHiME' speech separation and recognition challenge: An overview of challenge systems and outcomes. ASRU 2013: 162-167
- [c34] Emmanuel Vincent, Jon Barker, Shinji Watanabe, Jonathan Le Roux, Francesco Nesta, Marco Matassoni:
The second 'CHiME' speech separation and recognition challenge: Datasets, tasks and baselines. ICASSP 2013: 126-130
- 2012
- [j12] Brian Barber, Jon Barker:
Indication of slowly moving ground targets in non-Gaussian clutter using multi-channel synthetic aperture radar. IET Signal Process. 6(5): 424-434 (2012)
- [j11] Ning Ma, Jon Barker, Heidi Christensen, Phil D. Green:
Combining Speech Fragment Decoding and Adaptive Noise Floor Modeling. IEEE Trans. Speech Audio Process. 20(3): 818-827 (2012)
- [c33] José Andrés González López, Antonio Miguel Peinado Herreros, Angel Manuel Gomez Garcia, Ning Ma, Jon Barker:
Combining missing-data reconstruction and uncertainty decoding for robust speech recognition. ICASSP 2012: 4693-4696
- [c32] Ning Ma, Jon Barker:
Coupling identification and reconstruction of missing features for noise-robust automatic speech recognition. INTERSPEECH 2012: 2638-2641
- [p1] Jon Barker:
Missing-Data Techniques: Recognition with Incomplete Spectrograms. Techniques for Noise Robustness in Automatic Speech Recognition 2012: 369-398
- 2011
- [c31] Juan Andres Morales-Cordovilla, Ning Ma, Victoria E. Sánchez, José L. Carmona, Antonio M. Peinado, Jon Barker:
A pitch based noise estimation technique for robust speech recognition with Missing Data. ICASSP 2011: 4808-4811
- [c30] Ning Ma, Jon Barker, Heidi Christensen, Phil D. Green:
Binaural Cues for Fragment-Based Speech Recognition in Reverberant Multisource Environments. INTERSPEECH 2011: 1657-1660
- [c29] Martin Cooke, Jon Barker, María Luisa García Lecumberri, Krzysztof Wasilewski:
Crowdsourcing for Word Recognition in Noise. INTERSPEECH 2011: 3049-3052
- 2010
- [j10] Jon Barker, Ning Ma, André Coy, Martin Cooke:
Speech fragment decoding techniques for simultaneous speaker identification and speech recognition. Comput. Speech Lang. 24(1): 94-111 (2010)
- [c28] Heidi Christensen, Jon Barker:
Speaker turn tracking with mobile microphones: Combining location and pitch information. EUSIPCO 2010: 954-958
- [c27] Ning Ma, Jon Barker, Heidi Christensen, Phil D. Green:
Distant microphone speech recognition in a noisy indoor environment: combining soft missing data and speech fragment decoding. SAPA@INTERSPEECH 2010: 19-24
- [c26] Heidi Christensen, Jon Barker, Ning Ma, Phil D. Green:
The CHiME corpus: a resource and a challenge for computational hearing in multisource environments. INTERSPEECH 2010: 1918-1921
2000 – 2009
- 2009
- [j9] Jon Barker, Xu Shao:
Energetic and Informational Masking Effects in an Audiovisual Speech Recognition System. IEEE Trans. Speech Audio Process. 17(3): 446-458 (2009)
- [c25] Heidi Christensen, Ning Ma, Stuart N. Wrigley, Jon Barker:
A speech fragment approach to localising multiple speakers in reverberant environments. ICASSP 2009: 4593-4596
- [c24] Heidi Christensen, Jon Barker:
Using location cues to track speaker changes from mobile, binaural microphones. INTERSPEECH 2009: 140-143
- 2008
- [j8] Xu Shao, Jon Barker:
Stream weight estimation for multistream audio-visual speech recognition in a multispeaker environment. Speech Commun. 50(4): 337-353 (2008)
- [c23] Elise Arnaud, Heidi Christensen, Yan-Chen Lu, Jon Barker, Vasil Khalidov, Miles E. Hansard, Bertrand Holveck, Hervé Mathieu, Ramya Narasimha, Elise Taillant, Florence Forbes, Radu Horaud:
The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements. ICMI 2008: 109-116
- 2007
- [j7] André Coy, Jon Barker:
An automatic speech recognition system based on the scene analysis account of auditory perception. Speech Commun. 49(5): 384-401 (2007)
- [j6] Jon Barker, Martin Cooke:
Modelling speaker intelligibility in noise. Speech Commun. 49(5): 402-417 (2007)
- [j5] Ning Ma, Phil D. Green, Jon Barker, André Coy:
Exploiting correlogram structure for robust speech recognition with multiple speech sources. Speech Commun. 49(12): 874-891 (2007)
- [c22] Jon Barker, Xu Shao:
Audio-visual speech fragment decoding. AVSP 2007
- [c21] Ning Ma, Jon Barker, Phil D. Green:
Applying word duration constraints by using unrolled HMMs. INTERSPEECH 2007: 1066-1069
- [c20] Heidi Christensen, Ning Ma, Stuart N. Wrigley, Jon Barker:
Integrating pitch and localisation cues at a speech fragment level. INTERSPEECH 2007: 2769-2772
- 2006
- [j4] Sue Harding, Jon P. Barker, Guy J. Brown:
Mask estimation for missing data speech recognition based on statistics of binaural interaction. IEEE Trans. Speech Audio Process. 14(1): 58-67 (2006)
- [c19] Kalle J. Palomäki, Guy J. Brown, Jon P. Barker:
Recognition of Reverberant Speech using Full Cepstral Features and Spectral Missing Data. ICASSP (1) 2006: 289-292
- [c18] Guy J. Brown, Sue Harding, Jon P. Barker:
Speech Separation Based on The Statistics of Binaural Auditory Features. ICASSP (5) 2006: 949-952
- [c17] Jon Barker, André Coy, Ning Ma, Martin Cooke:
Recent advances in speech fragment decoding techniques. INTERSPEECH 2006
- [c16] André Coy, Jon Barker:
A multipitch tracker for monaural speech segmentation. INTERSPEECH 2006
- [c15] Xu Shao, Jon Barker:
Audio-visual speech recognition in the presence of a competing speaker. INTERSPEECH 2006
- 2005
- [j3] Jon P. Barker, Martin P. Cooke, Daniel P. W. Ellis:
Decoding speech in the presence of other sources. Speech Commun. 45(1): 5-25 (2005)
- [c14] André Coy, Jon Barker:
Recognising Speech in the Presence of a Competing Speaker using a 'Speech Fragment Decoder'. ICASSP (1) 2005: 425-428
- [c13] Sue Harding, Jon Barker, Guy J. Brown:
Mask Estimation Based on Sound Localisation for Missing Data Speech Recognition. ICASSP (1) 2005: 537-540
- [c12] Jon Barker:
Tracking Facial Markers with an Adaptive Marker Collocation Model. ICASSP (2) 2005: 665-668
- [c11] Sue Harding, Jon P. Barker, Guy J. Brown:
Binaural feature selection for missing data speech recognition. INTERSPEECH 2005: 1269-1272
- [c10] André Coy, Jon Barker:
Soft harmonic masks for recognising speech in the presence of a competing speaker. INTERSPEECH 2005: 2641-2644
- 2004
- [j2] Kalle J. Palomäki, Guy J. Brown, Jon P. Barker:
Techniques for handling convolutional distortion with 'missing data' automatic speech recognition. Speech Commun. 43(1-2): 123-142 (2004)
- 2002
- [c9] Kalle J. Palomäki, Guy J. Brown, Jon P. Barker:
Missing data speech recognition in reverberant conditions. ICASSP 2002: 65-68
- 2001
- [c8] Phil D. Green, Jon Barker, Martin Cooke, Ljubomir Josifovski:
Handling Missing and Unreliable Information in Speech Recognition. AISTATS 2001: 112-116
- [c7] Jon Barker, Martin Cooke, Phil D. Green:
Robust ASR based on clean speech models: an evaluation of missing data techniques for connected digit recognition in noise. INTERSPEECH 2001: 213-217
- 2000
- [c6] Jon Barker, Martin Cooke, Daniel P. W. Ellis:
Decoding speech in the presence of other sound sources. INTERSPEECH 2000: 270-273
- [c5] Jon Barker, Ljubomir Josifovski, Martin Cooke, Phil D. Green:
Soft decisions in missing data techniques for robust automatic speech recognition. INTERSPEECH 2000: 373-376
1990 – 1999
- 1999
- [j1] Jon Barker, Martin Cooke:
Is the sine-wave speech cocktail party worth attending? Speech Commun. 27(3-4): 159-174 (1999)
- [c4] Jon P. Barker, Frédéric Berthommier:
Estimation of speech acoustics from visual speech features: A comparison of linear and non-linear models. AVSP 1999: 19
- 1998
- [c3] Jon P. Barker, Frédéric Berthommier, Jean-Luc Schwartz:
Is Primitive AV Coherence An Aid To Segment The Scene? AVSP 1998: 103-108
- [c2] Jon Barker, Gethin Williams, Steve Renals:
Acoustic confidence measures for segmenting broadcast news. ICSLP 1998
- 1997
- [c1] Jon Barker, Martin Cooke:
Modelling the recognition of spectrally reduced speech. EUROSPEECH 1997: 2127-2130
last updated on 2024-10-15 21:35 CEST by the dblp team
all metadata released as open data under CC0 1.0 license