Sarath Chandar
(also listed as: A. P. Sarath Chandar)
Person information
- affiliation: University of Montreal, Department of Computer Science and Operations Research, Canada
2020 – today
- 2024
- [j7] Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar: Promoting Exploration in Memory-Augmented Adam using Critical Momenta. Trans. Mach. Learn. Res. 2024 (2024)
- [c52] Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar: Fairness-Aware Structured Pruning in Transformers. AAAI 2024: 22484-22492
- [c51] Andreas Madsen, Sarath Chandar, Siva Reddy: Are self-explanations from Large Language Models faithful? ACL (Findings) 2024: 295-337
- [c50] Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar: A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques. ACL (1) 2024: 5732-5745
- [c49] Abdelrahman Zayed, Gonçalo Mordido, Ioana Baldini, Sarath Chandar: Why Don't Prompt-Based Fairness Metrics Correlate? ACL (1) 2024: 9002-9019
- [c48] Louis Clouâtre, Amal Zouaq, Sarath Chandar: MVP: Minimal Viable Phrase for Long Text Understanding. LREC/COLING 2024: 12016-12026
- [c47] Darshan Patil, Janarthanan Rajendran, Glen Berseth, Sarath Chandar: Intelligent Switching for Reset-Free RL. ICLR 2024
- [c46] Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar: Mastering Memory Tasks with World Models. ICLR 2024
- [c45] Andreas Madsen, Siva Reddy, Sarath Chandar: Faithfulness Measurable Masked Language Models. ICML 2024
- [c44] Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar: Lookbehind-SAM: k steps back, 1 step forward. ICML 2024
- [c43] Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, Mojtaba Faramarzi, Sarath Chandar, Bram Adams: On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics - Empirical Study on Brown Build and Risk Prediction. ICSE-SEIP 2024: 275-286
- [i78] Andreas Madsen, Sarath Chandar, Siva Reddy: Are self-explanations from Large Language Models faithful? CoRR abs/2401.07927 (2024)
- [i77] Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar: Mastering Memory Tasks with World Models. CoRR abs/2403.04253 (2024)
- [i76] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar: Towards Practical Tool Usage for Continually Learning LLMs. CoRR abs/2404.09339 (2024)
- [i75] Darshan Patil, Janarthanan Rajendran, Glen Berseth, Sarath Chandar: Intelligent Switching for Reset-Free RL. CoRR abs/2405.01684 (2024)
- [i74] Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté: Sub-goal Distillation: A Method to Improve Small Language Agents. CoRR abs/2405.02749 (2024)
- [i73] Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, Sarath Chandar: Interpretability Needs a New Paradigm. CoRR abs/2405.05386 (2024)
- [i72] Pranshu Malviya, Jerry Huang, Quentin Fournier, Sarath Chandar: Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective. CoRR abs/2405.15895 (2024)
- [i71] Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar, Alex Zhavoronkov: BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning. CoRR abs/2406.03686 (2024)
- [i70] Megh Thakkar, Quentin Fournier, Matthew D. Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar: A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques. CoRR abs/2406.04879 (2024)
- [i69] Abdelrahman Zayed, Gonçalo Mordido, Ioana Baldini, Sarath Chandar: Why Don't Prompt-Based Fairness Metrics Correlate? CoRR abs/2406.05918 (2024)
- [i68] Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, Sarath Chandar: Exploring Quantization for Efficient Pre-Training of Transformer Language Models. CoRR abs/2407.11722 (2024)
- [i67] Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar: Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models. CoRR abs/2408.08470 (2024)
- 2023
- [j6] Andreas Madsen, Siva Reddy, Sarath Chandar: Post-hoc Interpretability for Neural NLP: A Survey. ACM Comput. Surv. 55(8): 155:1-155:42 (2023)
- [j5] Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell: An Empirical Investigation of the Role of Pre-training in Lifelong Learning. J. Mach. Learn. Res. 24: 214:1-214:50 (2023)
- [c42] Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar: Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness. AAAI 2023: 14593-14601
- [c41] Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar: Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning. CoLLAs 2023: 21-42
- [c40] Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar: Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning. CoLLAs 2023: 376-398
- [c39] Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, Sarath Chandar: Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi. CoLLAs 2023: 861-877
- [c38] Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, Partha Talukdar: Self-Influence Guided Data Reweighting for Language Model Pre-training. EMNLP 2023: 2033-2045
- [c37] Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar: Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. EMNLP (Findings) 2023: 4305-4319
- [c36] Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar: EpiK-Eval: Evaluation for Language Models as Epistemic Models. EMNLP 2023: 9523-9557
- [c35] Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran: Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning. UAI 2023: 2529-2540
- [e2] Sarath Chandar, Razvan Pascanu, Hanie Sedghi, Doina Precup: Conference on Lifelong Learning Agents, 22-25 August 2023, McGill University, Montréal, Québec, Canada. Proceedings of Machine Learning Research 232, PMLR 2023
- [i66] Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar: Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning. CoRR abs/2302.02792 (2023)
- [i65] Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar: Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning. CoRR abs/2303.08690 (2023)
- [i64] Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran: Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning. CoRR abs/2303.09032 (2023)
- [i63] Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, Mojtaba Faramarzi, Sarath Chandar, Bram Adams: Towards Lifelong Learning for Software Analytics Models: Empirical Study on Brown Build and Risk Prediction. CoRR abs/2305.09824 (2023)
- [i62] Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar: Should We Attend More or Less? Modulating Attention for Fairness. CoRR abs/2305.13088 (2023)
- [i61] Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar: Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. CoRR abs/2305.14775 (2023)
- [i60] Jarrid Rector-Brooks, Kanika Madan, Moksh Jain, Maksym Korablyov, Cheng-Hao Liu, Sarath Chandar, Nikolay Malkin, Yoshua Bengio: Thompson sampling for improved exploration in GFlowNets. CoRR abs/2306.17693 (2023)
- [i59] Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar: Promoting Exploration in Memory-Augmented Adam using Critical Momenta. CoRR abs/2307.09638 (2023)
- [i58] Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar: Lookbehind Optimizer: k steps back, 1 step forward. CoRR abs/2307.16704 (2023)
- [i57] Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, Sarath Chandar: Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi. CoRR abs/2308.10284 (2023)
- [i56] Andreas Madsen, Siva Reddy, Sarath Chandar: Faithfulness Measurable Masked Language Models. CoRR abs/2310.07819 (2023)
- [i55] Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, Sarath Chandar: EpiK-Eval: Evaluation for Language Models as Epistemic Models. CoRR abs/2310.15372 (2023)
- [i54] Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, Partha Talukdar: Self-Influence Guided Data Reweighting for Language Model Pre-training. CoRR abs/2311.00913 (2023)
- [i53] Arjun Vaithilingam Sudhakar, Prasanna Parthasarathi, Janarthanan Rajendran, Sarath Chandar: Language Model-In-The-Loop: Data Optimal Approach to Learn-To-Recommend Actions in Text Games. CoRR abs/2311.07687 (2023)
- [i52] Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar: Fairness-Aware Structured Pruning in Transformers. CoRR abs/2312.15398 (2023)
- 2022
- [c34] Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar: PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks. AAAI 2022: 589-597
- [c33] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Local Structure Matters Most: Perturbation Study in NLU. ACL (Findings) 2022: 3712-3731
- [c32] Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar: Improving Meta-Learning Generalization with Activation-Based Early-Stopping. CoLLAs 2022: 213-230
- [c31] Pranshu Malviya, Balaraman Ravindran, Sarath Chandar: TAG: Task-based Accumulated Gradients for Lifelong learning. CoLLAs 2022: 366-389
- [c30] Daphné Lafleur, Sarath Chandar, Gilles Pesant: Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints. CP 2022: 30:1-30:16
- [c29] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. EMNLP (Findings) 2022: 5375-5396
- [c28] Paul-Aymeric Martin McRae, Prasanna Parthasarathi, Mido Assran, Sarath Chandar: Memory Augmented Optimizers for Deep Learning. ICLR 2022
- [c27] Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen: Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods. ICML 2022: 22536-22561
- [c26] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Local Structure Matters Most in Most Languages. AACL/IJCNLP (2) 2022: 285-294
- [e1] Sarath Chandar, Razvan Pascanu, Doina Precup: Conference on Lifelong Learning Agents, CoLLAs 2022, 22-24 August 2022, McGill University, Montréal, Québec, Canada. Proceedings of Machine Learning Research 199, PMLR 2022
- [i51] Amir Ardalan Kalantari, Mohammad Amini, Sarath Chandar, Doina Precup: Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers. CoRR abs/2202.00710 (2022)
- [i50] Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen: Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods. CoRR abs/2204.11464 (2022)
- [i49] Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed A. Abdelsalam, Janarthanan Rajendran, Sarath Chandar: An Introduction to Lifelong Supervised Learning. CoRR abs/2207.04354 (2022)
- [i48] Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar: Improving Meta-Learning Generalization with Activation-Based Early-Stopping. CoRR abs/2208.02377 (2022)
- [i47] Enamundram Naga Karthik, Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, Sarath Chandar, Julien Cohen-Adad: Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch? CoRR abs/2210.15091 (2022)
- [i46] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. CoRR abs/2211.05015 (2022)
- [i45] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Local Structure Matters Most in Most Languages. CoRR abs/2211.05025 (2022)
- [i44] Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar: Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness. CoRR abs/2211.11109 (2022)
- [i43] Gonçalo Mordido, Sarath Chandar, François Leduc-Primeau: Sharpness-Aware Training for Accurate Inference on Noisy DNN Accelerators. CoRR abs/2211.11561 (2022)
- [i42] Gabriele Prato, Yale Song, Janarthanan Rajendran, R. Devon Hjelm, Neel Joshi, Sarath Chandar: PatchBlender: A Motion Prior for Video Transformers. CoRR abs/2211.14449 (2022)
- 2021
- [c25] Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, Sarath Chandar: Towered Actor Critic For Handling Multiple Action Types In Reinforcement Learning For Drug Discovery. AAAI 2021: 142-150
- [c24] Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy: A Survey of Data Augmentation Approaches for NLP. ACL/IJCNLP (Findings) 2021: 968-988
- [c23] Louis Clouâtre, Philippe Trempe, Amal Zouaq, Sarath Chandar: MLMLM: Link Prediction with Mean Likelihood Masked Language Model. ACL/IJCNLP (Findings) 2021: 4321-4331
- [c22] Mohamed A. Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar: IIRC: Incremental Implicitly-Refined Classification. CVPR 2021: 11038-11047
- [c21] Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron C. Courville, Sarath Chandar: Continuous Coordination As a Realistic Scenario for Lifelong Learning. ICML 2021: 8016-8024
- [c20] Prasanna Parthasarathi, Mohamed A. Abdelsalam, Sarath Chandar, Joelle Pineau: A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss. SIGDIAL 2021: 469-476
- [c19] Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar: Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task? SIGDIAL 2021: 477-488
- [i41] Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron C. Courville, Sarath Chandar: Continuous Coordination As a Realistic Scenario for Lifelong Learning. CoRR abs/2103.03216 (2021)
- [i40] Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy: A Survey of Data Augmentation Approaches for NLP. CoRR abs/2105.03075 (2021)
- [i39] Pranshu Malviya, Balaraman Ravindran, Sarath Chandar: TAG: Task-based Accumulated Gradients for Lifelong learning. CoRR abs/2105.05155 (2021)
- [i38] Prasanna Parthasarathi, Mohamed A. Abdelsalam, Joelle Pineau, Sarath Chandar: A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss. CoRR abs/2106.10619 (2021)
- [i37] Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar: Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task? CoRR abs/2106.10622 (2021)
- [i36] Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar: Memory Augmented Optimizers for Deep Learning. CoRR abs/2106.10708 (2021)
- [i35] Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar: Demystifying Neural Language Models' Insensitivity to Word-Order. CoRR abs/2107.13955 (2021)
- [i34] Andreas Madsen, Siva Reddy, Sarath Chandar: Post-hoc Interpretability for Neural NLP: A Survey. CoRR abs/2108.04840 (2021)
- [i33] Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish, Sarath Chandar: Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers. CoRR abs/2110.06990 (2021)
- [i32] Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell: An Empirical Investigation of the Role of Pre-training in Lifelong Learning. CoRR abs/2112.09153 (2021)
- 2020
- [j4] Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, Michael Bowling: The Hanabi challenge: A new frontier for AI research. Artif. Intell. 280: 103216 (2020)
- [j3] Shagun Sodhani, Sarath Chandar, Yoshua Bengio: Toward Training Recurrent Neural Networks for Lifelong Learning. Neural Comput. 32(1): 1-35 (2020)
- [c18] Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Simon Blackburn, Karam M. J. Thomas, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio: Learning to Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning. ICML 2020: 3668-3679
- [c17] Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar: The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning. NeurIPS 2020
- [i31] Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio: Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning. CoRR abs/2004.12485 (2020)
- [i30] Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar: PatchUp: A Regularization Technique for Convolutional Neural Networks. CoRR abs/2006.07794 (2020)
- [i29] Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar: The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning. CoRR abs/2007.03158 (2020)
- [i28] Evan Racah, Sarath Chandar: Slot Contrastive Networks: A Contrastive Approach for Representing Objects. CoRR abs/2007.09294 (2020)
- [i27] Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar: How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics. CoRR abs/2008.10427 (2020)
- [i26] Louis Clouâtre, Philippe Trempe, Amal Zouaq, Sarath Chandar: MLMLM: Link Prediction with Mean Likelihood Masked Language Model. CoRR abs/2009.07058 (2020)
- [i25] Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar: Maximum Reward Formulation In Reinforcement Learning. CoRR abs/2010.03744 (2020)
- [i24] Mohamed A. Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar: IIRC: Incremental Implicitly-Refined Classification. CoRR abs/2012.12477 (2020)
2010 – 2019
- 2019
- [c16] Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio: Towards Non-Saturating Recurrent Units for Modelling Long-Term Dependencies. AAAI 2019: 3280-3287
- [c15] Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio: Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. ACL (1) 2019: 32-37
- [c14] Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp: Towards Lossless Encoding of Sentences. ACL (1) 2019: 1577-1583
- [c13] Vardaan Pahuja, Jie Fu, Sarath Chandar, Christopher Joseph Pal: Structure Learning for Neural Module Networks. LANTERN@EMNLP-IJCNLP 2019: 1-10
- [c12] Revanth Reddy, Sarath Chandar, Balaraman Ravindran: Edge Replacement Grammars: A Formal Language Approach for Generating Graphs. SDM 2019: 351-359
- [i23] Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, Michael Bowling: The Hanabi Challenge: A New Frontier for AI Research. CoRR abs/1902.00506 (2019)
- [i22] Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio: Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies. CoRR abs/1902.06704 (2019)
- [i21] Revanth Reddy, Sarath Chandar, Balaraman Ravindran: Edge Replacement Grammars: A Formal Language Approach for Generating Graphs. CoRR abs/1902.07159 (2019)
- [i20] Vardaan Pahuja, Jie Fu, Sarath Chandar, Christopher J. Pal: Structure Learning for Neural Module Networks. CoRR abs/1905.11532 (2019)
- [i19] Chinnadhurai Sankar, Sandeep Subramanian, Christopher J. Pal, Sarath Chandar, Yoshua Bengio: Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. CoRR abs/1906.01603 (2019)
- [i18] Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp: Towards Lossless Encoding of Sentences. CoRR abs/1906.01659 (2019)
- 2018
- [j2] Çaglar Gülçehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio: Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes. Neural Comput. 30(4) (2018)
- [c11] Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar: Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. AAAI 2018: 705-713
- [i17] Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeswar, Alexandre de Brébisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, Yoshua Bengio: A Deep Reinforcement Learning Chatbot (Short Version). CoRR abs/1801.06700 (2018)
- [i16] Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar: Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. CoRR abs/1801.10314 (2018)
- [i15] Ghulam Ahmed Ansari, Sagar J. P, Sarath Chandar, Balaraman Ravindran: Language Expansion In Text-Based Games. CoRR abs/1805.07274 (2018)
- [i14] Shagun Sodhani, Sarath Chandar, Yoshua Bengio: On Training Recurrent Neural Networks for Lifelong Learning. CoRR abs/1811.07017 (2018)
- [i13]