


3rd CoLLAs 2024: Pisa, Italy
- Vincenzo Lomonaco, Stefano Melacci, Tinne Tuytelaars, Sarath Chandar, Razvan Pascanu:
Conference on Lifelong Learning Agents, 29 July - 1 August 2024, University of Pisa, Pisa, Italy. Proceedings of Machine Learning Research 274, PMLR 2024
- Sayed Mohammadreza Tayaranian Hosseini, Seyyed Hasan Mozafari, Brett H. Meyer, James J. Clark, Warren J. Gross:
Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models. 1-28
- Vihang Prakash Patil, Andreas Radler, Daniel Klotz, Sepp Hochreiter:
Simplified priors for Object-Centric Learning. 29-48
- Erik B. Terres-Escudero, Javier Del Ser, Pablo García Bringas:
A Contrastive Symmetric Forward-Forward Algorithm (SFFA) for Continual Learning Tasks. 49-69
- Subarnaduti Paul, Lars-Joel Frey, Roshni Ramanna Kamath, Kristian Kersting, Martin Mundt:
Masked Autoencoders are Efficient Continual Federated Learners. 70-85
- Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon:
Towards More Diverse Evaluation of Class Incremental Learning: Representation Learning Perspective. 86-101
- Fahad Sarfraz, Bahram Zonooz, Elahe Arani:
Beyond Unimodal Learning: The Importance of Integrating Multiple Modalities for Lifelong Learning. 102-120
- Giulia Lanzillotta, Sidak Pal Singh, Benjamin F. Grewe, Thomas Hofmann:
Local vs Global continual learning. 121-143
- Prashant Shivaram Bhat, Bharath Chennamkulam Renjith, Elahe Arani, Bahram Zonooz:
Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning. 144-160
- Hyemin Jeong, Seong-Woong Kim, Dong-Wan Choi:
Replaying with Realistic Latent Vectors in Generative Continual Learning. 161-178
- Matteo Tiezzi, Federico Becattini, Simone Marullo, Stefano Melacci:
Memory Head for Pre-Trained Backbones in Continual Learning. 179-197
- Martin Schiemer, Clemens JS Schaefer, Mark James Horeni, Yu Emma Wang, Juan Ye, Siddharth Joshi:
Hadamard Domain Training with Integers for Class Incremental Quantized Learning. 198-220
- Dong Wang, Olga Saukh, Xiaoxi He, Lothar Thiele:
Subspace-Configurable Networks. 221-251
- Yue Guo, Xijia Zhang, Simon Stepputtis, Joseph Campbell, Katia P. Sycara:
Adaptive Action Advising with Different Rewards. 252-267
- Norman Di Palo, Leonard Hasenclever, Jan Humplik, Arunkumar Byravan:
Diffusion Augmented Agents: A Framework for Efficient Exploration and Transfer Learning. 268-284
- Anna Vettoruzzo, Joaquin Vanschoren, Mohamed-Rafik Bouguelia, Thorsteinn S. Rögnvaldsson:
Learning to learn without forgetting using attention. 285-300
- Hayato Watahiki, Ryo Iwase, Ryosuke Unno, Yoshimasa Tsuruoka:
Cross-Domain Policy Transfer by Representation Alignment via Multi-Domain Behavioral Cloning. 301-323
- Yao Ma, Samuel Louvan, Zhunxuan Wang:
Gradual Fine-Tuning with Graph Routing for Multi-Source Unsupervised Domain Adaptation. 324-341
- Junwei Su, Difan Zou, Chuan Wu:
On the Limitation and Experience Replay for GNNs in Continual Learning. 342-366
- Tyler L. Hayes, César Roberto de Souza, Namil Kim, Jiwon Kim, Riccardo Volpi, Diane Larlus:
PANDAS: Prototype-based Novel Class Discovery and Detection. 367-387
- Yipeng Zhang, Laurent Charlin, Richard S. Zemel, Mengye Ren:
Integrating Present and Past in Unsupervised Continual Learning. 388-409
- Saurabh Kumar, Henrik Marklund, Benjamin Van Roy:
Maintaining Plasticity in Continual Learning via Regenerative Regularization. 410-430
- Sergi Masip, Pau Rodríguez, Tinne Tuytelaars, Gido M. van de Ven:
Continual Learning of Diffusion Models with Generative Distillation. 431-456
- Xiaoxuan Lei, Lucas Gomez, Hao Yuan Bai, Pouya Bashivan:
iWISDM: Assessing instruction following in multimodal models at scale. 457-480
- William Yue, Bo Liu, Peter Stone:
t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making. 481-497
- Sebastian Dziadzio, Çagatay Yildiz, Gido M. van de Ven, Tomasz Trzcinski, Tinne Tuytelaars, Matthias Bethge:
Infinite dSprites for Disentangled Continual Learning: Separating Memory Edits from Generalization. 498-513
- Pedro Vianna, Muawiz Sajjad Chaudhary, Paria Mehrbod, An Tang, Guy Cloutier, Guy Wolf, Michael Eickenberg, Eugene Belilovsky:
Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation. 514-533
- Ameya Prabhu, Hasan Abed Al Kader Hammoud, Ser-Nam Lim, Bernard Ghanem, Philip Torr, Adel Bibi:
From Categories to Classifiers: Name-Only Continual Learning by Exploring the Web. 534-559
- Christian Di Maio, Andrea Zugarini, Francesco Giannini, Marco Maggini, Stefano Melacci:
Tomorrow Brings Greater Knowledge: Large Language Models Join Dynamic Temporal Knowledge Graphs. 560-576
- Amir El-Ghoussani, Julia Hornauer, Gustavo Carneiro, Vasileios Belagiannis:
Consistency Regularisation for Unsupervised Domain Adaptation in Monocular Depth Estimation. 577-596
- Luca Salvatore Lorello, Marco Lippi, Stefano Melacci:
Continual Learning for Unsupervised Concept Bottleneck Discovery. 597-619
- Fatemeh Amerehi, Patrick Healy:
Label Augmentation for Neural Networks Robustness. 620-640
- Morten Blørstad, Berent Ånund Strømnes Lunde, Nello Blaser:
Stable Update of Regression Trees. 641-651
- Jihwan Kwak, Sungmin Cha, Taesup Moon:
Towards realistic incremental scenario in class incremental semantic segmentation. 652-671
- Marcel Hoffmann, Lukas Galke, Ansgar Scherp:
POWN: Prototypical Open-World Node Classification. 672-691
- Jelena Luketina, Jack Lanchantin, Sainbayar Sukhbaatar, Arthur Szlam:
Compositional Interfaces for Compositional Generalization. 692-709
- Hosung Lee, Sejin Kim, Seungpil Lee, Sanha Hwang, Jihwan Lee, Byung-Jun Lee, Sundong Kim:
ARCLE: The Abstraction and Reasoning Corpus Learning Environment for Reinforcement Learning. 710-731
- Li Guo, Yuxuan Xia, Shengjie Wang:
Enhanced Label Propagation through Affinity Matrix Fusion for Source-Free Domain Adaptation. 732-749
- Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney:
Disentangling the Causes of Plasticity Loss in Neural Networks. 750-783
- Di Fu, Thanh Vinh Vo, Haozhe Ma, Tze-Yun Leong:
Decoupled Prompt-Adapter Tuning for Continual Activity Recognition. 784-797
- Lucas Cazzonelli, Cedric Kulbach, Steffen Thoma:
Optimizing the Learning Rate for the Online Training of Neural Networks. 798-814
- Daniel Anthes, Sushrut Thorat, Peter König, Tim C. Kietzmann:
Keep moving: identifying task-relevant subspaces to maximise plasticity for newly learned tasks. 815-831
- Sameer Ambekar, Zehao Xiao, Jiayi Shen, Xiantong Zhen, Cees G. M. Snoek:
Probabilistic Test-Time Generalization by Variational Neighbor-Labeling. 832-851
- Thomas De Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci:
Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning. 852-868
- Marius-Constantin Dinu, Claudiu Leoveanu-Condrei, Markus Holzleitner, Werner Zellinger, Sepp Hochreiter:
SymbolicAI: A framework for logic-based approaches combining generative models and solvers. 869-914
- Thomas L. Lee, Amos J. Storkey:
Chunking: Continual Learning is not just about Distribution Shift. 915-937
- Cameron Ethan Taylor, Vassilis Vassiliades, Constantine Dovrolis:
Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning. 938-958
- Safa Alver, Ali Rahimi-Kalahroudi, Doina Precup:
Partial Models for Building Adaptive Model-Based Reinforcement Learning Agents. 959-977
- Paolo Cudrano, Xiaoyu Luo, Matteo Matteucci:
The Empirical Impact of Forgetting and Transfer in Continual Visual Odometry. 978-995
- Albin Soutif, Simone Magistri, Joost van de Weijer, Andrew D. Bagdanov:
An Empirical Analysis of Forgetting in Pre-trained Models with Incremental Low-Rank Updates. 996-1012
- Jeffery Dick, Saptarshi Nath, Christos Peridis, Eseoghene Ben-Iwhiwhu, Soheil Kolouri, Andrea Soltoggio:
Statistical Context Detection for Deep Lifelong Reinforcement Learning. 1013-1031
- Md Yousuf Harun, Jhair Gallardo, Junyu Chen, Christopher Kanan:
GRASP: A Rehearsal Policy for Efficient Online Continual Learning. 1032-1052
- Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté:
Sub-goal Distillation: A Method to Improve Small Language Agents. 1053-1075
- Lukas Thede, Karsten Roth, Olivier J. Hénaff, Matthias Bethge, Zeynep Akata:
Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models. 1076-1093
