


3rd CoRL 2019: Osaka, Japan
- Leslie Pack Kaelbling, Danica Kragic, Komei Sugiura (eds.): 3rd Annual Conference on Robot Learning, CoRL 2019, Osaka, Japan, October 30 - November 1, 2019, Proceedings. Proceedings of Machine Learning Research 100, PMLR 2019
- Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani: Data Efficient Reinforcement Learning for Legged Robots. 1-10
- Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Joseph J. Lim: To Follow or not to Follow: Selective Imitation Learning from Observations. 11-23
- Ashwin Balakrishna, Brijen Thananjeyan, Jonathan Lee, Felix Li, Arsh Zahed, Joseph E. Gonzalez, Ken Goldberg: On-Policy Robot Imitation Learning from a Converging Supervisor. 24-41
- Kuan Fang, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei: Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation. 42-52
- Yuzhe Qin, Rui Chen, Hao Zhu, Meng Song, Jing Xu, Hao Su: S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes. 53-65
- Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl: Learning by Cheating. 66-75
- Aly Magassouba, Komei Sugiura, Hisashi Kawai: Multimodal Attention Branch Network for Perspective-Free Sentence Generation. 76-85
- Yuning Chai, Benjamin Sapp, Mayank Bansal, Dragomir Anguelov: MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction. 86-99
- Yufei Ye, Dhiraj Gandhi, Abhinav Gupta, Shubham Tulsiani: Object-centric Forward Modeling for Model Predictive Control. 100-109
- Ofir Nachum, Michael Ahn, Hugo Ponte, Shixiang Shane Gu, Vikash Kumar: Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real. 110-121
- Siddharth Ancha, Junyu Nan, David Held: Combining Deep Learning and Verification for Precise Object Instance Detection. 122-141
- Bohan Wu, Iretiayo Akinola, Jacob Varley, Peter K. Allen: MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning. 142-161
- Sarah Bechtle, Yixin Lin, Akshara Rai, Ludovic Righetti, Franziska Meier: Curious iLQR: Resolving Uncertainty in Model-based RL. 162-171
- Michael Burke, Yordan Hristov, Subramanian Ramamoorthy: Hybrid system identification using switching density networks. 172-181
- Rinu Boney, Juho Kannala, Alexander Ilin: Regularizing Model-Based Planning with Energy-Based Models. 182-191
- Vaibhav V. Unhelkar, Shen Li, Julie A. Shah: Semi-Supervised Learning of Decision-Making Models for Human-Robot Collaboration. 192-203
- Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan D. Ratliff: Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping. 204-219
- Keuntaek Lee, Gabriel Nakajima An, Viacheslav Zakharov, Evangelos A. Theodorou: Perceptual Attention-based Predictive Control. 220-232
- Noémie Jaquier, Leonel Dario Rozo, Sylvain Calinon, Mathias Bürger: Bayesian Optimization Meets Riemannian Manifolds in Robot Learning. 233-246
- Noémie Jaquier, David Ginsbourger, Sylvain Calinon: Learning from demonstration with model-based Gaussian process. 247-257
- Masashi Okada, Tadahiro Taniguchi: Variational Inference MPC for Bayesian Model-based Reinforcement Learning. 258-272
- Lukas Schwenkel, Meng Guo, Mathias Bürger: Optimizing Sequences of Probabilistic Manipulation Skills Learned from Demonstration. 273-282
- Meng Guo, Mathias Bürger: Predictive Safety Network for Resource-constrained Multi-agent Systems. 283-292
- Leonidas Koutras, Zoe Doulgeri: A correct formulation for the Orientation Dynamic Movement Primitives for robot control in the Cartesian space. 293-302
- Dan Barnes, Rob Weston, Ingmar Posner: Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information. 303-316
- Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan W. Hurst, Michiel van de Panne: Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real. 317-329
- Daniel S. Brown, Wonjoon Goo, Scott Niekum: Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations. 330-359
- Felix Leibfried, Jordi Grau-Moya: Mutual-Information Regularization in Markov Decision Processes and Actor-Critic Learning. 360-373
- Yilun Du, Toru Lin, Igor Mordatch: Model-Based Planning with Energy-Based Models. 374-383
- Kelvin Wong, Shenlong Wang, Mengye Ren, Ming Liang, Raquel Urtasun: Identifying Unknown Instances for Autonomous Driving. 384-393
- Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer: Vision-and-Dialog Navigation. 394-406
- Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, Raquel Urtasun: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. 407-419
- Somil Bansal, Varun Tolani, Saurabh Gupta, Jitendra Malik, Claire J. Tomlin: Combining Optimal Control and Learning for Visual Navigation in Novel Environments. 420-429
- Bogdan Mazoure, Thang Doan, Audrey Durand, Joelle Pineau, R. Devon Hjelm: Leveraging exploration in off-policy algorithms via normalizing flows. 430-444
- Adam Allevato, Elaine Schaertl Short, Mitch Pryor, Andrea Thomaz: TuneNet: One-Shot Residual Tuning for System Identification and Sim-to-Real Robot Task Transfer. 445-455
- Rika Antonova, Akshara Rai, Tianyu Li, Danica Kragic: Bayesian Optimization in Variational Latent Spaces with Dynamic Compression. 456-465
- Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam: A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots. 466-489
- Matthew Wilson, Tucker Hermans: Learning to Manipulate Object Collections Using Grounded State Representations. 490-502
- Vitor Guizilini, Jie Li, Rares Ambrus, Sudeep Pillai, Adrien Gaidon: Robust Semi-Supervised Monocular Depth Estimation with Reprojected Distances. 503-512
- Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters: Self-Paced Contextual Reinforcement Learning. 513-529
- Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine: Contextual Imagined Goals for Self-Supervised Robotic Learning. 530-539
- Junha Roh, Chris Paxton, Andrzej Pronobis, Ali Farhadi, Dieter Fox: Conditional Driving from Natural Language Instructions. 540-551
- Zhang-Wei Hong, Tsu-Jui Fu, Tzu-Yun Shann, Chun-Yi Lee: Adversarial Active Exploration for Inverse Dynamics Model Learning. 552-565
- Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller: Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models. 566-589
- Iou-Jen Liu, Raymond A. Yeh, Alexander G. Schwing: PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning. 590-602
- Chengshu Li, Fei Xia, Roberto Martín-Martín, Silvio Savarese: HRL4IN: Hierarchical Reinforcement Learning for Interactive Navigation with Mobile Manipulators. 603-616
- Ashish Kumar, Saurabh Gupta, Jitendra Malik: Learning Navigation Subroutines from Egocentric Videos. 617-626
- Steve Heim, Alexander von Rohr, Sebastian Trimpe, Alexander Badri-Spröwitz: A Learnable Safety Measure. 627-639
- Michael Lutter, Boris Belousov, Kim Listmann, Debora Clever, Jan Peters: HJB Optimal Feedback Control with Deep Differential Value Functions and Action Constraints. 640-650
- Eunah Jung, Nan Yang, Daniel Cremers: Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light. 651-660
- Juntong Lin, Xuyun Yang, Peiwei Zheng, Hui Cheng: Connectivity Guaranteed Multi-robot Navigation via Deep Reinforcement Learning. 661-670
- Ekaterina I. Tolstaya, Fernando Gama, James Paulos, George J. Pappas, Vijay Kumar, Alejandro Ribeiro: Learning Decentralized Controllers for Robot Swarms with Graph Neural Networks. 671-682
- Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani: Provably Robust Blackbox Optimization for Reinforcement Learning. 683-696
- Joe Watson, Hany Abdulsamad, Jan Peters: Stochastic Optimal Control as Approximate Input Inference. 697-716
- Andrey Kurenkov, Ajay Mandlekar, Roberto Martín-Martín, Silvio Savarese, Animesh Garg: AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers. 717-734
- Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin A. Riedmiller: Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics. 735-751
- Dylan P. Losey, Mengxi Li, Jeannette Bohg, Dorsa Sadigh: Learning from My Partner's Actions: Roles in Decentralized Robot Teams. 752-765
- Minghan Wei, Volkan Isler: Energy-efficient Path Planning for Ground Robots by Combining Air and Ground Measurements. 766-775
- Orr Krupnik, Igor Mordatch, Aviv Tamar: Multi-Agent Reinforcement Learning with Multi-Step Generative Models. 776-790
- Alexander Sax, Jeffrey O. Zhang, Bradley Emi, Amir Zamir, Silvio Savarese, Leonidas J. Guibas, Jitendra Malik: Learning to Navigate Using Mid-Level Visual Priors. 791-812
- Rohan Chitnis, Tomás Lozano-Pérez: Learning Compact Models for Planning with Exogenous Processes. 813-822
- Arbaaz Khan, Ekaterina I. Tolstaya, Alejandro Ribeiro, Vijay Kumar: Graph Policy Gradients for Large Scale Robot Control. 823-834
- Rémy Portelas, Cédric Colas, Katja Hofmann, Pierre-Yves Oudeyer: Teacher algorithms for curriculum learning of Deep RL in continuously parameterized environments. 835-853
- Kevin Sebastian Luck, Heni Ben Amor, Roberto Calandra: Data-efficient Co-Adaptation of Morphology and Behaviour with Deep Reinforcement Learning. 854-869
- Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy: Disentangled Relational Representations for Explaining and Learning from Demonstration. 870-884
- Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn: RoboNet: Large-Scale Multi-Robot Learning. 885-897
- Yuxiao Chen, Sumanth Dathathri, Tung Phan-Minh, Richard M. Murray: Counter-example Guided Learning of Bounds on Environment Behavior. 898-909
- Swaminathan Gurumurthy, Sumit Kumar, Katia P. Sycara: MAME: Model-Agnostic Meta-Exploration. 910-922
- Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Tom Ouyang, James Guo, Jiquan Ngiam, Vijay Vasudevan: End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds. 923-932
- Michael Noseworthy, Rohan Paul, Subhro Roy, Daehyung Park, Nicholas Roy: Task-Conditioned Variational Autoencoders for Learning Movement Primitives. 933-944
- Devesh K. Jha, Arvind U. Raghunathan, Diego Romeres: Quasi-Newton Trust Region Policy Optimization. 945-954
- Beomjoon Kim, Luke Shimanuki: Learning value functions with relational state representations for guiding task-and-motion planning. 955-968
- Grady R. Williams, Brian Goldfain, Keuntaek Lee, Jason Gibson, James M. Rehg, Evangelos A. Theodorou: Locally Weighted Regression Pseudo-Rehearsal for Adaptive Model Predictive Control. 969-978
- Maximilian Sieb, Xian Zhou, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki: Graph-Structured Visual Imitation. 979-989
- David Hoeller, Farbod Farshidian, Marco Hutter: Deep Value Model Predictive Control. 990-1004
- Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, Nicholas Roy: Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning. 1005-1014
- Yen-Chen Lin, Maria Bauzá, Phillip Isola: Experience-Embedded Visual Foresight. 1015-1024
- Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman: Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning. 1025-1037
- Sandy H. Huang, Isabella Huang, Ravi Pandya, Anca D. Dragan: Nonverbal Robot Feedback for Human Teachers. 1038-1051
- Rares Ambrus, Vitor Guizilini, Jie Li, Sudeep Pillai, Adrien Gaidon: Two Stream Networks for Self-Supervised Ego-Motion Estimation. 1052-1061
- Alan Wu, A. J. Piergiovanni, Michael S. Ryoo: Model-based Behavioral Cloning with Future Image Similarity Learning. 1062-1077
- Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov: Worst Cases Policy Gradients. 1078-1093
- Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine: Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning. 1094-1100
- Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar: Deep Dynamics Models for Learning Dexterous Manipulation. 1101-1112
- Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet: Learning Latent Plans from Play. 1113-1132
- Chaitanya Mitash, Bowen Wen, Kostas E. Bekris, Abdeslam Boularias: Scene-level Pose Estimation for Multiple Instances of Densely Packed Objects. 1133-1145
- Yuchen Xiao, Joshua Hoffman, Christopher Amato: Macro-Action-Based Deep Multi-Agent Reinforcement Learning. 1146-1161
- Bhairav Mehta, Manfred Diaz, Florian Golemo, Christopher J. Pal, Liam Paull: Active Domain Randomization. 1162-1176
- Erdem Biyik, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, Dorsa Sadigh: Asking Easy Questions: A User-Friendly Approach to Active Reward Learning. 1177-1190
- Jieliang Luo, Hui Li: Dynamic Experience Replay. 1191-1200
- Siddharth Patki, Ethan Fahnestock, Thomas M. Howard, Matthew R. Walter: Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments. 1201-1210
- Glen Chou, Necmiye Ozay, Dmitry Berenson: Learning Parametric Constraints in High Dimensions from Demonstrations. 1211-1230
- Ethan N. Evans, Marcus A. Pereira, George I. Boutselis, Evangelos A. Theodorou: Variational Optimization Based Reinforcement Learning for Infinite Dimensional Stochastic Systems. 1231-1246
- Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum: Understanding Teacher Gaze Patterns for Robot Learning. 1247-1258
- Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel, Shixiang Gu: A Divergence Minimization Perspective on Imitation Learning Methods. 1259-1277
- Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters: Receding Horizon Curiosity. 1278-1288
- Ben Abbatematteo, Stefanie Tellex, George Konidaris: Learning to Generalize Kinematic Models to Novel Objects. 1289-1299
- Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar: ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots. 1300-1313
- Martin Weiss, Simon Chamorro, Roger Girgis, Margaux Luck, Samira Ebrahimi Kahou, Joseph Paul Cohen, Derek Nowrouzezahrai, Doina Precup, Florian Golemo, Chris Pal: Navigation Agents for the Visually Impaired: A Sidewalk Simulator and Experiments. 1314-1327
- Björn Lütjens, Michael Everett, Jonathan P. How: Certified Adversarial Robustness for Deep Reinforcement Learning. 1328-1337
- Yunzhi Zhang, Ignasi Clavera, Boren Tsai, Pieter Abbeel: Asynchronous Methods for Model-Based Reinforcement Learning. 1338-1347
- Brian Delhaisse, Leonel Dario Rozo, Darwin G. Caldwell: PyRoboLearn: A Python Framework for Robot Learning Practitioners. 1348-1358
- Marco Capotondi, Giulio Turrisi, Claudio Gaz, Valerio Modugno, Giuseppe Oriolo, Alessandro De Luca: An Online Learning Procedure for Feedback Linearization Control without Torque Measurements. 1359-1368
- Christopher Xie, Yu Xiang, Arsalan Mousavian, Dieter Fox: The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation. 1369-1378
- Ching-An Cheng, Xinyan Yan, Byron Boots: Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods. 1379-1394
- Yazhan Zhang, Weihao Yuan, Zicheng Kan, Michael Yu Wang: Towards Learning to Detect and Predict Contact Events on Vision-based Tactile Sensors. 1395-1404
- Weiming Zhi, Lionel Ott, Fabio Ramos: Kernel Trajectory Maps for Multi-Modal Probabilistic Motion Prediction. 1405-1414
- Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A. Knepper, Yoav Artzi: Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight. 1415-1438
- Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine: Entity Abstraction in Visual Model-Based Reinforcement Learning. 1439-1456
- Muhammad Asif Rana, Anqi Li, Harish Ravichandar, Mustafa Mukadam, Sonia Chernova, Dieter Fox, Byron Boots, Nathan D. Ratliff: Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations. 1457-1468

















