Albert Zeyer
2020 – today
2024
- [c30] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-Based Encoder-Decoder Model for Streaming Speech Recognition. ICASSP 2024: 11331-11335
- [i19] Robin Schmitt, Albert Zeyer, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: The Conformer Encoder May Reverse the Time Dimension. CoRR abs/2410.00680 (2024)

2023
- [i18] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-based Encoder-Decoder Model for Streaming Speech Recognition. CoRR abs/2309.08436 (2023)

2022
- [b1] Albert Zeyer: Neural network based modeling and architectures for automatic speech recognition and machine translation. RWTH Aachen University, Germany, 2022
- [c29] Michael Gansen, Jie Lou, Florian Freye, Tobias Gemmeke, Farhad Merchant, Albert Zeyer, Mohammad Zeineldeen, Ralf Schlüter, Xin Fan: Discrete Steps towards Approximate Computing. ISQED 2022: 1-6
- [c28] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic Segmental Attention for Automatic Speech Recognition. SLT 2022: 229-236
- [i17] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic segmental attention for automatic speech recognition. CoRR abs/2210.14742 (2022)

2021
- [c27] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. Interspeech 2021: 2052-2056
- [c26] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-Based Encoder-Decoder ASR Models. Interspeech 2021: 2856-2860
- [c25] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. Interspeech 2021: 2891-2895
- [i16] Albert Zeyer, Ralf Schlüter, Hermann Ney: A study of latent monotonic attention variants. CoRR abs/2103.16710 (2021)
- [i15] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. CoRR abs/2104.03006 (2021)
- [i14] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models. CoRR abs/2104.05544 (2021)
- [i13] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. CoRR abs/2104.06104 (2021)
- [i12] Albert Zeyer, Ralf Schlüter, Hermann Ney: Why does CTC result in peaky behavior? CoRR abs/2105.14849 (2021)

2020
- [c24] Nick Rossenbach, Albert Zeyer, Ralf Schlüter, Hermann Ney: Generating Synthetic Audio Data for Attention-Based Speech Recognition Systems. ICASSP 2020: 7069-7073
- [c23] Vitalii Bozheniuk, Albert Zeyer, Ralf Schlüter, Hermann Ney: A Comprehensive Study of Residual CNNs for Acoustic Modeling in ASR. ICASSP 2020: 7674-7678
- [c22] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Layer-Normalized LSTM for Hybrid-HMM and End-to-End ASR. ICASSP 2020: 7679-7683
- [c21] Parnia Bahar, Nikita Makarov, Albert Zeyer, Ralf Schlüter, Hermann Ney: Exploring a Zero-Order Direct HMM Based on Latent Attention for Automatic Speech Recognition. ICASSP 2020: 7854-7858
- [c20] Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: A New Training Pipeline for an Improved Neural Transducer. INTERSPEECH 2020: 2812-2816
- [i11] Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: A New Training Pipeline for an Improved Neural Transducer. CoRR abs/2005.09319 (2020)
- [i10] Albert Zeyer, Wei Zhou, Thomas Ng, Ralf Schlüter, Hermann Ney: Investigations on Phoneme-Based End-to-End Speech Recognition. CoRR abs/2005.09336 (2020)
2010 – 2019
2019
- [j1] Muhammad Ali Tahir, Heyun Huang, Albert Zeyer, Ralf Schlüter, Hermann Ney: Training of reduced-rank linear transformations for multi-layer polynomial acoustic features for speech recognition. Speech Commun. 110: 56-63 (2019)
- [c19] Albert Zeyer, Parnia Bahar, Kazuki Irie, Ralf Schlüter, Hermann Ney: A Comparison of Transformer and LSTM Encoder Decoder Models for ASR. ASRU 2019: 8-15
- [c18] Kazuki Irie, Albert Zeyer, Ralf Schlüter, Hermann Ney: Training Language Models for Long-Span Cross-Sentence Evaluation. ASRU 2019: 419-426
- [c17] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On Using 2D Sequence-to-Sequence Models for Speech Recognition. ICASSP 2019: 5671-5675
- [c16] Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: RWTH ASR Systems for LibriSpeech: Hybrid vs Attention. INTERSPEECH 2019: 231-235
- [c15] André Merboldt, Albert Zeyer, Ralf Schlüter, Hermann Ney: An Analysis of Local Monotonic Attention Variants. INTERSPEECH 2019: 1398-1402
- [c14] Kazuki Irie, Albert Zeyer, Ralf Schlüter, Hermann Ney: Language Modeling with Deep Transformers. INTERSPEECH 2019: 3905-3909
- [c13] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On Using SpecAugment for End-to-End Speech Translation. IWSLT 2019
- [i9] Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: RWTH ASR Systems for LibriSpeech: Hybrid vs Attention - w/o Data Augmentation. CoRR abs/1905.03072 (2019)
- [i8] Kazuki Irie, Albert Zeyer, Ralf Schlüter, Hermann Ney: Language Modeling with Deep Transformers. CoRR abs/1905.04226 (2019)
- [i7] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On Using SpecAugment for End-to-End Speech Translation. CoRR abs/1911.08876 (2019)
- [i6] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On using 2D sequence-to-sequence models for speech recognition. CoRR abs/1911.08888 (2019)
- [i5] Nick Rossenbach, Albert Zeyer, Ralf Schlüter, Hermann Ney: Generating Synthetic Audio Data for Attention-Based Speech Recognition Systems. CoRR abs/1912.09257 (2019)

2018
- [c12] Eugen Beck, Albert Zeyer, Patrick Doetsch, André Merboldt, Ralf Schlüter, Hermann Ney: Sequence Modeling and Alignment for LVCSR-Systems. ITG Symposium on Speech Communication 2018: 1-5
- [c11] Albert Zeyer, Tamer Alkhouli, Hermann Ney: RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition. ACL (4) 2018: 128-133
- [c10] Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney: Improved Training of End-to-End Attention Models for Speech Recognition. INTERSPEECH 2018: 7-11
- [c9] Evgeny Matusov, Patrick Wilken, Parnia Bahar, Julian Schamper, Pavel Golik, Albert Zeyer, Joan Albert Silvestre-Cerdà, Adria A. Martinez-Villaronga, Hendrik Pesch, Jan-Thorsten Peter: Neural Speech Translation at AppTek. IWSLT 2018: 104-111
- [i4] Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney: Improved training of end-to-end attention models for speech recognition. CoRR abs/1805.03294 (2018)
- [i3] Albert Zeyer, Tamer Alkhouli, Hermann Ney: RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition. CoRR abs/1805.05225 (2018)

2017
- [c8] Albert Zeyer, Patrick Doetsch, Paul Voigtlaender, Ralf Schlüter, Hermann Ney: A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition. ICASSP 2017: 2462-2466
- [c7] Albert Zeyer, Ilia Kulikov, Ralf Schlüter, Hermann Ney: Faster sequence training. ICASSP 2017: 5285-5289
- [c6] Patrick Doetsch, Albert Zeyer, Paul Voigtlaender, Ilia Kulikov, Ralf Schlüter, Hermann Ney: RETURNN: The RWTH extensible training framework for universal recurrent neural networks. ICASSP 2017: 5345-5349
- [c5] Albert Zeyer, Eugen Beck, Ralf Schlüter, Hermann Ney: CTC in the Context of Generalized Full-Sum HMM Training. INTERSPEECH 2017: 944-948

2016
- [c4] Markus Kitza, Albert Zeyer, Ralf Schlüter, Jahn Heymann, Reinhold Haeb-Umbach: Robust Online Multi-Channel Speech Recognition. ITG Symposium on Speech Communication 2016: 1-5
- [c3] Patrick Doetsch, Albert Zeyer, Hermann Ney: Bidirectional Decoder Networks for Attention-Based End-to-End Offline Handwriting Recognition. ICFHR 2016: 361-366
- [c2] Albert Zeyer, Ralf Schlüter, Hermann Ney: Towards Online-Recognition with Deep Bidirectional LSTM Acoustic Models. INTERSPEECH 2016: 3424-3428
- [c1] Ralf Schlüter, Patrick Doetsch, Pavel Golik, Markus Kitza, Tobias Menne, Kazuki Irie, Zoltán Tüske, Albert Zeyer: Automatic Speech Recognition Based on Neural Networks. SPECOM 2016: 3-17
- [i2] Albert Zeyer, Patrick Doetsch, Paul Voigtlaender, Ralf Schlüter, Hermann Ney: A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition. CoRR abs/1606.06871 (2016)
- [i1] Patrick Doetsch, Albert Zeyer, Paul Voigtlaender, Ilya Kulikov, Ralf Schlüter, Hermann Ney: RETURNN: The RWTH Extensible Training Framework for Universal Recurrent Neural Networks. CoRR abs/1608.00895 (2016)
last updated on 2025-01-09 12:49 CET by the dblp team
all metadata released as open data under CC0 1.0 license