Matthias Bethge
2020 – today
- 2024
- [j34]Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven:
Continual Learning: Applications and the Road Forward. Trans. Mach. Learn. Res. 2024 (2024)
- [c62]Max F. Burg, Thomas Zenkel, Michaela Vystrcilová, Jonathan Oesterle, Larissa Höfling, Konstantin F. Willeke, Jan Lause, Sarah Müller, Paul G. Fahey, Zhiwei Ding, Kelli Restivo, Shashwat Sridhar, Tim Gollisch, Philipp Berens, Andreas S. Tolias, Thomas Euler, Matthias Bethge, Alexander S. Ecker:
Most discriminative stimuli for functional cell type clustering. ICLR 2024
- [c61]Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel:
Does CLIP's generalization performance mainly stem from high train-test similarity? ICLR 2024
- [c60]Vishaal Udandarao, Max F. Burg, Samuel Albanie, Matthias Bethge:
Visual Data-Type Understanding does not emerge from scaling Vision-Language Models. ICLR 2024
- [c59]Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel:
Provable Compositional Generalization for Object-Centric Learning. ICLR 2024
- [c58]Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge:
The Entropy Enigma: Success and Failure of Entropy Minimization. ICML 2024
- [c57]Mark Basting, Robert-Jan Bruintjes, Thaddäus Wiedemer, Matthias Kümmerer, Matthias Bethge, Jan van Gemert:
Scale Learning in Scale-Equivariant Convolutional Networks. VISIGRAPP (2): VISAPP 2024: 567-574
- [i76]Max F. Burg, Thomas Zenkel, Michaela Vystrcilová, Jonathan Oesterle, Larissa Höfling, Konstantin F. Willeke, Jan Lause, Sarah Müller, Paul G. Fahey, Zhiwei Ding, Kelli Restivo, Shashwat Sridhar, Tim Gollisch, Philipp Berens, Andreas S. Tolias, Thomas Euler, Matthias Bethge, Alexander S. Ecker:
Most discriminative stimuli for functional cell type identification. CoRR abs/2401.05342 (2024)
- [i75]Çagatay Yildiz, Nishaanth Kanna Ravichandran, Prishruit Punia, Matthias Bethge, Beyza Ermis:
Investigating Continual Pretraining in Large Language Models: Insights and Implications. CoRR abs/2402.17400 (2024)
- [i74]Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie:
Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress. CoRR abs/2402.19472 (2024)
- [i73]Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge:
No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance. CoRR abs/2404.04125 (2024)
- [i72]Shiven Sinha, Ameya Prabhu, Ponnurangam Kumaraguru, Siddharth Bhat, Matthias Bethge:
Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry. CoRR abs/2404.06405 (2024)
- [i71]Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge:
The Entropy Enigma: Success and Failure of Entropy Minimization. CoRR abs/2405.05012 (2024)
- [i70]Çaglar Hizli, Çagatay Yildiz, Matthias Bethge, St John, Pekka Marttinen:
Identifying latent state transition in non-linear dynamical systems. CoRR abs/2406.03337 (2024)
- [i69]Lukas Thede, Karsten Roth, Olivier J. Hénaff, Matthias Bethge, Zeynep Akata:
Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models. CoRR abs/2406.09384 (2024)
- [i68]Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, Matthias Bethge:
CiteME: Can Language Models Accurately Cite Scientific Claims? CoRR abs/2407.12861 (2024)
- [i67]Karsten Roth, Vishaal Udandarao, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier Hénaff, Samuel Albanie, Matthias Bethge, Zeynep Akata:
A Practitioner's Guide to Continual Multimodal Pretraining. CoRR abs/2408.14471 (2024)
- 2023
- [j33]Zhe Li, Josue Ortega Caro, Evgenia Rusak, Wieland Brendel, Matthias Bethge, Fabio Anselmi, Ankit B. Patel, Andreas S. Tolias, Xaq Pitkow:
Robust deep learning object recognition models rely on low frequency information in natural images. PLoS Comput. Biol. 19(3) (2023)
- [j32]Yongrong Qiu, David A. Klindt, Klaudia P. Szatko, Dominic Gonschorek, Larissa Höfling, Timm Schubert, Laura Busse, Matthias Bethge, Thomas Euler:
Efficient coding of natural scenes improves neural system identification. PLoS Comput. Biol. 19(4) (2023)
- [j31]Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Schölkopf, Ferenc Huszár, Wieland Brendel:
Jacobian-based Causal Discovery with Nonlinear ICA. Trans. Mach. Learn. Res. 2023 (2023)
- [c56]Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Vincent Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf:
Unsupervised Object Learning via Common Fate. CLeaR 2023: 281-327
- [c55]Ilze Amanda Auzina, Çagatay Yildiz, Sara Magliacane, Matthias Bethge, Efstratios Gavves:
Modulated Neural ODEs. NeurIPS 2023
- [c54]Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge:
RDumb: A simple approach that questions our progress in continual test-time adaptation. NeurIPS 2023
- [c53]Thaddäus Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, Wieland Brendel:
Compositional Generalization from First Principles. NeurIPS 2023
- [d2]Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Schölkopf, Ferenc Huszár, Wieland Brendel:
nl-causal-representations. Version v1.0.1. Zenodo, 2023 [all versions]
- [i66]Ilze Amanda Auzina, Çagatay Yildiz, Sara Magliacane, Matthias Bethge, Efstratios Gavves:
Invariant Neural Ordinary Differential Equations. CoRR abs/2302.13262 (2023)
- [i65]Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz:
Playing repeated games with Large Language Models. CoRR abs/2305.16867 (2023)
- [i64]Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge:
RDumb: A simple approach that questions our progress in continual test-time adaptation. CoRR abs/2306.05401 (2023)
- [i63]Thaddäus Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, Wieland Brendel:
Compositional Generalization from First Principles. CoRR abs/2307.05596 (2023)
- [i62]Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel:
Provable Compositional Generalization for Object-Centric Learning. CoRR abs/2310.05327 (2023)
- [i61]Vishaal Udandarao, Max F. Burg, Samuel Albanie, Matthias Bethge:
Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models. CoRR abs/2310.08577 (2023)
- [i60]Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel:
Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity? CoRR abs/2310.09562 (2023)
- [i59]Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven:
Continual Learning: Applications and the Road Forward. CoRR abs/2311.11908 (2023)
- [i58]Luca M. Schulze Buschoff, Elif Akata, Matthias Bethge, Eric Schulz:
Have we built machines that think like people? CoRR abs/2311.16093 (2023)
- [i57]Sebastian Dziadzio, Çagatay Yildiz, Gido M. van de Ven, Tomasz Trzcinski, Tinne Tuytelaars, Matthias Bethge:
Disentangled Continual Learning: Separating Memory Edits from Model Updates. CoRR abs/2312.16731 (2023)
- 2022
- [j30]Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge:
If your data distribution shifts, use self-learning. Trans. Mach. Learn. Res. 2022 (2022)
- [c52]Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard S. Zemel, Matthias Bethge:
Disentanglement and Generalization Under Correlation Shifts. CoLLAs 2022: 116-141
- [c51]Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Vincent Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel:
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain. ICLR 2022
- [d1]Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Schölkopf, Ferenc Huszár, Wieland Brendel:
nl-causal-representations. Version 1.0.0. Zenodo, 2022 [all versions]
- 2021
- [j29]Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker:
Benchmarking Unsupervised Object Representations for Video Sequences. J. Mach. Learn. Res. 22: 183:1-183:61 (2021)
- [j28]Max F. Burg, Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker:
Learning divisive normalization in primary visual cortex. PLoS Comput. Biol. 17(6) (2021)
- [c50]Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge:
DeepGaze IIE: Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling. ICCV 2021: 12899-12908
- [c49]Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel:
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization. ICLR 2021
- [c48]David A. Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan M. Paiton:
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding. ICLR 2021
- [c47]Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel:
Contrastive Learning Inverts the Data Generating Process. ICML 2021: 12979-12990
- [c46]Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel:
How Well do Feature Visualizations Support Causal Understanding of CNN Activations? NeurIPS 2021: 11730-11744
- [c45]Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel:
Partial success in closing the gap between human and machine vision. NeurIPS 2021: 23885-23899
- [c44]Alexander Mathis, Thomas Biasi, Steffen Schneider, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis:
Pretraining boosts out-of-domain robustness for pose estimation. WACV 2021: 1858-1867
- [i56]Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel:
Contrastive Learning Inverts the Data Generating Process. CoRR abs/2102.08850 (2021)
- [i55]Matthias Kümmerer, Matthias Bethge:
State-of-the-Art in Human Scanpath Prediction. CoRR abs/2102.12239 (2021)
- [i54]Evgenia Rusak, Steffen Schneider, Peter V. Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge:
Adapting ImageNet-scale models to complex distribution shifts with self-learning. CoRR abs/2104.12928 (2021)
- [i53]Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge:
Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling. CoRR abs/2105.12441 (2021)
- [i52]Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel:
Partial success in closing the gap between human and machine vision. CoRR abs/2106.07411 (2021)
- [i51]Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel:
How Well do Feature Visualizations Support Causal Understanding of CNN Activations? CoRR abs/2106.12447 (2021)
- [i50]Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter V. Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel:
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain. CoRR abs/2107.08221 (2021)
- [i49]Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter V. Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf:
Unsupervised Object Learning via Common Fate. CoRR abs/2110.06562 (2021)
- [i48]Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard S. Zemel, Matthias Bethge:
Disentanglement and Generalization Under Correlation Shifts. CoRR abs/2112.14754 (2021)
- 2020
- [j27]Jonas Rauber, Roland Zimmermann, Matthias Bethge, Wieland Brendel:
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. J. Open Source Softw. 5(53): 2607 (2020)
- [j26]Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann:
Shortcut learning in deep neural networks. Nat. Mach. Intell. 2(11): 665-673 (2020)
- [c43]Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel:
A Simple Way to Make Neural Networks Robust Against Diverse Image Corruptions. ECCV (3) 2020: 53-69
- [c42]Matthias Tangemann, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge:
Measuring the Importance of Temporal Features in Video Saliency. ECCV (28) 2020: 667-684
- [c41]Ute Schmid, Volker Tresp, Matthias Bethge, Kristian Kersting, Rainer Stiefelhagen:
Künstliche Intelligenz - Die dritte Welle. GI-Jahrestagung 2020: 91-95
- [c40]Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker:
Rotation-invariant clustering of neuronal responses in primary visual cortex. ICLR 2020
- [c39]Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge:
Improving robustness against common corruptions by covariate shift adaptation. NeurIPS 2020
- [c38]Cornelius Schröder, David A. Klindt, Sarah Strauß, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens:
System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina. NeurIPS 2020
- [i47]Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel:
Increasing the robustness of DNNs against image corruptions by playing the Game of Noise. CoRR abs/2001.06057 (2020)
- [i46]Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann:
Shortcut Learning in Deep Neural Networks. CoRR abs/2004.07780 (2020)
- [i45]Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge:
The Notorious Difficulty of Comparing Human and Machine Perception. CoRR abs/2004.09406 (2020)
- [i44]Julius von Kügelgen, Ivan Ustyuzhaninov, Peter V. Gehler, Matthias Bethge, Bernhard Schölkopf:
Towards causal generative scene models via competition of experts. CoRR abs/2004.12906 (2020)
- [i43]Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker:
Unmasking the Inductive Biases of Unsupervised Object Representations for Video Sequences. CoRR abs/2006.07034 (2020)
- [i42]Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge:
Improving robustness against common corruptions by covariate shift adaptation. CoRR abs/2006.16971 (2020)
- [i41]Jonas Rauber, Matthias Bethge:
Fast Differentiable Clipping-Aware Normalization and Rescaling. CoRR abs/2007.07677 (2020)
- [i40]David A. Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan M. Paiton:
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding. CoRR abs/2007.10930 (2020)
- [i39]Jonas Rauber, Matthias Bethge, Wieland Brendel:
EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy. CoRR abs/2008.04175 (2020)
- [i38]Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel:
On the surprising similarities between supervised and self-supervised models. CoRR abs/2010.08377 (2020)
- [i37]Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel:
Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations. CoRR abs/2010.12606 (2020)
- [i36]Claudio Michaelis, Matthias Bethge, Alexander S. Ecker:
Closing the Generalization Gap in One-Shot Object Detection. CoRR abs/2011.04267 (2020)
2010 – 2019
- 2019
- [j25]Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker:
Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Comput. Biol. 15(4) (2019)
- [c37]Wieland Brendel, Matthias Bethge:
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet. ICLR (Poster) 2019
- [c36]Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge:
A rotation-equivariant convolutional neural network model of primary visual cortex. ICLR (Poster) 2019
- [c35]Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel:
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. ICLR 2019
- [c34]Jörn-Henrik Jacobsen, Jens Behrmann, Richard S. Zemel, Matthias Bethge:
Excessive Invariance Causes Adversarial Vulnerability. ICLR (Poster) 2019
- [c33]Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel:
Towards the first adversarially robust neural network model on MNIST. ICLR (Poster) 2019
- [c32]Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Zachary Pitkow, Andreas S. Tolias:
Learning from brains how to regularize machines. NeurIPS 2019: 9525-9535
- [c31]Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge:
Accurate, reliable and fast robustness evaluation. NeurIPS 2019: 12841-12851
- [i35]Wieland Brendel, Matthias Bethge:
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet. CoRR abs/1904.00760 (2019)
- [i34]Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge:
Accurate, reliable and fast robustness evaluation. CoRR abs/1907.01003 (2019)
- [i33]Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel:
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming. CoRR abs/1907.07484 (2019)
- [i32]Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis:
Pretraining boosts out-of-domain robustness for pose estimation. CoRR abs/1909.11229 (2019)
- [i31]Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias:
Learning From Brains How to Regularize Machines. CoRR abs/1911.05072 (2019)
- 2018
- [j24]Philipp Berens, Jeremy Freeman, Thomas Deneux, Nicolay Chenkov, Thomas McColgan, Artur Speiser, Jakob H. Macke, Srinivas C. Turaga, Patrick J. Mineault, Peter Rupprecht, Stephan Gerhard, Rainer W. Friedrich, Johannes Friedrich, Liam Paninski, Marius Pachitariu, Kenneth D. Harris, Ben Bolte, Timothy A. Machado, Dario Ringach, Jasmine Stone, Luke E. Rogerson, Nicolas J. Sofroniew, Jacob Reimer, Emmanouil Froudarakis, Thomas Euler, Miroslav Román Rosón, Lucas Theis, Andreas S. Tolias, Matthias Bethge:
Community-based benchmarking improves spike rate inference from two-photon calcium imaging data. PLoS Comput. Biol. 14(5) (2018)
- [c30]Santiago A. Cadena, Marissa A. Weis, Leon A. Gatys, Matthias Bethge, Alexander S. Ecker:
Diverse Feature Visualizations Reveal Invariances in Early Layers of Deep Neural Networks. ECCV (12) 2018: 225-240
- [c29]Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge:
Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics. ECCV (16) 2018: 798-814
- [c28]Wieland Brendel, Jonas Rauber, Matthias Bethge:
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. ICLR (Poster) 2018
- [c27]Claudio Michaelis, Matthias Bethge, Alexander S. Ecker:
One-Shot Segmentation in Clutter. ICML 2018: 3546-3555
- [c26]Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann:
Generalisation in humans and deep neural networks. NeurIPS 2018: 7549-7561
- [i30]Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge:
Trace your sources in large-scale data: one ring to find them all. CoRR abs/1803.08882 (2018)
- [i29]Claudio Michaelis, Matthias Bethge, Alexander S. Ecker:
One-Shot Segmentation in Clutter. CoRR abs/1803.09597 (2018)
- [i28]Alexander Mathis, Pranav Mamidanna, Taiga Abe, Kevin M. Cury, Venkatesh N. Murthy, Mackenzie W. Mathis, Matthias Bethge:
Markerless tracking of user-defined features with deep learning. CoRR abs/1804.03142 (2018)
- [i27]Lukas Schott, Jonas Rauber, Wieland Brendel, Matthias Bethge:
Robust Perception through Analysis by Synthesis. CoRR abs/1805.09190 (2018)
- [i26]Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge:
One-shot Texture Segmentation. CoRR abs/1807.02654 (2018)
- [i25]Santiago A. Cadena, Marissa A. Weis, Leon A. Gatys, Matthias Bethge, Alexander S. Ecker:
Diverse feature visualizations reveal invariances in early layers of deep neural networks. CoRR abs/1807.10589 (2018)
- [i24]Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge:
Adversarial Vision Challenge. CoRR abs/1808.01976 (2018)
- [i23]Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann:
Generalisation in humans and deep neural networks. CoRR abs/1808.08750 (2018)
- [i22]Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge:
A rotation-equivariant convolutional neural network model of primary visual cortex. CoRR abs/1809.10504 (2018)
- [i21]Jörn-Henrik Jacobsen, Jens Behrmann, Richard S. Zemel, Matthias Bethge:
Excessive Invariance Causes Adversarial Vulnerability. CoRR abs/1811.00401 (2018)
- [i20]Claudio Michaelis, Ivan Ustyuzhaninov, Matthias Bethge, Alexander S. Ecker:
One-Shot Instance Segmentation. CoRR abs/1811.11507 (2018)
- [i19]Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel:
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. CoRR abs/1811.12231 (2018)
- 2017
- [j23]Marcel Nonnenmacher, Christian Behrens, Philipp Berens, Matthias Bethge, Jakob H. Macke:
Signatures of criticality arise from random subsampling in simple population models. PLoS Comput. Biol. 13(10) (2017)
- [c25]Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, Eli Shechtman:
Controlling Perceptual Factors in Neural Style Transfer. CVPR 2017: 3730-3738
- [c24]Felix A. Wichmann, David H. J. Janssen, Robert Geirhos, Guillermo Aguilar, Heiko H. Schütt, Marianne Maertens, Matthias Bethge:
Methods and measurements to compare men against machines. HVEI 2017: 36-45
- [c23]Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge:
Understanding Low- and High-Level Contributions to Fixation Prediction. ICCV 2017: 4799-4808
- [c22]