22nd AAMAS 2023: London, UK
- Noa Agmon, Bo An, Alessandro Ricci, William Yeoh:
Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, United Kingdom, 29 May 2023 - 2 June 2023. ACM 2023, ISBN 978-1-4503-9432-1
Keynote Talks
- Karl Tuyls:
Multiagent Learning: From Fundamentals to Foundation Models. 1
- Yejin Choi:
Common Sense: The Dark Matter of Language and Intelligence. 2
- Iain D. Couzin:
Geometric Principles of Individual and Collective Decision-Making. 3
- Edith Elkind:
Proportionality in Multiwinner Voting: The Power of Local Search. 4
Session 1A: Multiagent Reinforcement Learning I
- Mingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann, Shimon Whiteson:
Trust Region Bounds for Decentralized PPO Under Non-stationarity. 5-13
- Jiachen Yang, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden K. Petersen, Daniel M. Faissol, Robert W. Anderson:
Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement. 14-22
- Jiechuan Jiang, Zongqing Lu:
Adaptive Learning Rates for Multi-Agent Reinforcement Learning. 23-30
- Shanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang, Yong Liu:
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning. 31-39
- Woojun Kim, Whiyoung Jung, Myungsik Cho, Youngchul Sung:
A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement Learning. 40-48
- Dmitry Ivanov, Ilya Zisman, Kirill Chernyshev:
Mediated Multi-Agent Reinforcement Learning. 49-57
- Yucong Zhang, Chao Yu:
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement Learning. 58-66
- Fanqi Lin, Shiyu Huang, Tim Pearce, Wenze Chen, Wei-Wei Tu:
TiZero: Mastering Multi-Agent Football with Curriculum Learning and Self-Play. 67-76
Session 1B: Planning
- Maya Lavie, Tehila Caspi, Omer Lev, Roie Zivan:
Ask and You Shall be Served: Representing & Solving Multi-agent Optimization Problems with Service Requesters and Providers. 77-85
- Napendra Solanki, Shweta Jain, Suman Banerjee, Yayathi Pavan Kumar S:
Fairness Driven Efficient Algorithms for Sequenced Group Trip Planning Query Problem. 86-94
- Adrian Price, Ramon Fraga Pereira, Peta Masters, Mor Vered:
Domain-Independent Deceptive Planning. 95-103
- Arseni Pertzovskiy, Roie Zivan, Noa Agmon:
CAMS: Collision Avoiding Max-Sum for Mobile Sensor Teams. 104-112
- Anna Gautier, Marc Rigter, Bruno Lacerda, Nick Hawes, Michael J. Wooldridge:
Risk-Constrained Planning for Multi-Agent Systems with Shared Resources. 113-121
- Chongyang Shi, Shuo Han, Jie Fu:
Quantitative Planning with Action Deception in Concurrent Stochastic Games. 122-130
- Stelios Triantafyllou, Goran Radanovic:
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs. 131-139
- Matheus Aparecido do Carmo Alves, Elnaz Shafipour Yourdshahi, Amokh Varma, Leandro Soriano Marcolino, Jó Ueyama, Plamen Angelov:
On-line Estimators for Ad-hoc Task Execution: Learning Types and Parameters of Teammates for Effective Teamwork. 140-142
Session 1C: Fair Allocations
- Haris Aziz, Jeremy Lindsay, Angus Ritossa, Mashbat Suzuki:
Fair Allocation of Two Types of Chores. 143-151
- Hadi Hosseini, Sujoy Sikdar, Rohit Vaish, Lirong Xia:
Fairly Dividing Mixtures of Goods and Chores under Lexicographic Preferences. 152-160
- Hadi Hosseini, Justin Payan, Rik Sengupta, Rohit Vaish, Vignesh Viswanathan:
Graphical House Allocation. 161-169
- Jiarui Gan, Bo Li, Xiaowei Wu:
Approximation Algorithm for Computing Budget-Feasible EF1 Allocations. 170-178
- Vignesh Viswanathan, Yair Zick:
Yankee Swap: A Fast and Simple Fair Allocation Mechanism for Matroid Rank Valuations. 179-187
- Zeyu Shen, Zhiyi Wang, Xingyu Zhu, Brandon Fain, Kamesh Munagala:
Fairness in the Assignment Problem with Uncertain Priorities. 188-196
- Haris Aziz, Bo Li, Shiji Xing, Yu Zhou:
Possible Fairness for Allocating Indivisible Resources. 197-205
- Hila Shoshan, Noam Hazon, Erel Segal-Halevi:
Efficient Nearly-Fair Division with Capacity Constraints. 206-214
Session 1D: Equilibria and Complexities of Games
- Nils Bertschinger, Martin Hoefer, Simon Krogmann, Pascal Lenzner, Steffen Schuldenzucker, Lisa Wilhelmi:
Equilibria and Convergence in Fire Sale Games. 215-223
- Willem Röpke, Carla Groenland, Roxana Radulescu, Ann Nowé, Diederik M. Roijers:
Bridging the Gap Between Single and Multi Objective Games. 224-232
- Zhijian Duan, Wenhan Huang, Dinghuai Zhang, Yali Du, Jun Wang, Yaodong Yang, Xiaotie Deng:
Is Nash Equilibrium Approximator Learnable? 233-241
- Nicolò Cesa-Bianchi, Tommaso Cesari, Takayuki Osogami, Marco Scarsini, Segev Wasserkrug:
Learning the Stackelberg Equilibrium in a Newsvendor Game. 242-250
- Jiehua Chen, Gergely Csáji, Sanjukta Roy, Sofia Simola:
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity. 251-259
- Panagiotis Kanellopoulos, Maria Kyropoulou, Hao Zhou:
Debt Transfers in Financial Networks: Complexity and Equilibria. 260-268
- Willem Röpke, Diederik M. Roijers, Ann Nowé, Roxana Radulescu:
A Study of Nash Equilibria in Multi-Objective Normal-Form Games. 269-271
- Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueira, Amy Greenwald:
Learning Properties in Simulation-Based Games. 272-280
Session 1E: Human-Agent Teams
- Nikolaos Kondylidis, Ilaria Tiddi, Annette ten Teije:
Establishing Shared Query Understanding in an Open Multi-Agent System. 281-289
- Julie Porteous, Alan Lindsay, Fred Charles:
Communicating Agent Intentions for Human-Agent Decision Making under Uncertainty. 290-298
- Marin Le Guillou, Laurent Prévot, Bruno Berberian:
Trusting Artificial Agents: Communication Trumps Performance. 299-306
- Kate Candon, Jesse Chen, Yoony Kim, Zoe Hsu, Nathan Tsoi, Marynel Vázquez:
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their Behavior. 307-316
- Sagalpreet Singh, Shweta Jain, Shashi Shekhar Jha:
On Subset Selection of Multiple Humans To Improve Human-AI Team Accuracy. 317-325
- Lujain Ibrahim, Mohammad M. Ghassemi, Tuka Alhanai:
Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations. 326-334
- Sangwon Seo, Bing Han, Vaibhav V. Unhelkar:
Automated Task-Time Interventions to Improve Teamwork using Imitation Learning. 335-344
- Stefan Sarkadi, Peidong Mei, Edmond Awad:
Should My Agent Lie for Me? A Study on Attitudes of US-based Participants Towards Deceptive AI in Selected Future-of-work. 345-354
Session 1F: Knowledge Representation and Reasoning I
- Qihui Feng, Daxin Liu, Vaishak Belle, Gerhard Lakemeyer:
A Logic of Only-Believing over Arbitrary Probability Distributions. 355-363
- Carlos Areces, Valentin Cassano, Pablo F. Castro, Raul Fervari, Andrés R. Saravia:
A Deontic Logic of Knowingly Complying. 364-372
- Giulio Mazzi, Daniele Meli, Alberto Castellini, Alessandro Farinelli:
Learning Logic Specifications for Soft Policy Guidance in POMCP. 373-381
- Jaime Arias, Wojciech Jamroga, Wojciech Penczek, Laure Petrucci, Teofil Sidoruk:
Strategic (Timed) Computation Tree Logic. 382-390
- Gaia Belardinelli, Thomas Bolander:
Attention! Dynamic Epistemic Logic Models of (In)attentive Agents. 391-399
- Rustam Galimullin, Fernando R. Velázquez-Quesada:
(Arbitrary) Partial Communication. 400-408
- Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna:
Epistemic Abstract Argumentation Framework: Formal Foundations, Computation and Complexity. 409-417
- Vaishak Belle:
Actions, Continuous Distributions and Meta-Beliefs. 418-426
Session 2A: Multiagent Reinforcement Learning II
- Xuefeng Wang, Xinran Li, Jiawei Shao, Jun Zhang:
AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning. 427-435
- Junjie Sheng, Xiangfeng Wang, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung-Hui Chang, Hongyuan Zha:
Learning Structured Communication for Multi-Agent Reinforcement Learning. 436-438
- Shuai Han, Mehdi Dastani, Shihan Wang:
Model-based Sparse Communication in Multi-agent Reinforcement Learning. 439-447
- Phillip J. K. Christoffersen, Andreas A. Haupt, Dylan Hadfield-Menell:
Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL. 448-456
- Michelle Li, Michael Dennis:
The Benefits of Power Regularization in Cooperative Reinforcement Learning. 457-465
- Yongsheng Mei, Hanhan Zhou, Tian Lan, Guru Venkataramani, Peng Wei:
MAC-PO: Multi-Agent Experience Replay via Collective Priority Optimization. 466-475
- Shaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu, De-Chuan Zhan:
Self-Motivated Multi-Agent Exploration. 476-484
- Yifan Zang, Jinmin He, Kai Li, Haobo Fu, Qiang Fu, Junliang Xing:
Sequential Cooperative Multi-Agent Reinforcement Learning. 485-493
Session 2B: Planning + Task/Resource Allocation
- Saar Cohen, Noa Agmon:
Online Coalitional Skill Formation. 494-503
- Gauthier Picard:
Multi-Agent Consensus-based Bundle Allocation for Multi-mode Composite Tasks. 504-512
- Osnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov, Sarit Kraus:
Allocation Problem in Remote Teleoperation: Online Matching with Offline Reusable Resources and Delayed Assignments. 513-521
- Shaheen Fatima, Michael J. Wooldridge:
Optimal Coalition Structures for Probabilistically Monotone Partition Function Games. 522-524
- Grace Cai, Noble Harasha, Nancy A. Lynch:
A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task Density. 525-533
- Till Hofmann, Vaishak Belle:
Abstracting Noisy Robot Programs. 534-542
- Qian Che, Wanyuan Wang, Fengchen Wang, Tianchi Qiao, Xiang Liu, Jiuchuan Jiang, Bo An, Yichuan Jiang:
Structural Credit Assignment-Guided Coordinated MCTS: An Efficient and Scalable Method for Online Multiagent Planning. 543-551
- Rajiv Ranjan Kumar, Pradeep Varakantham, Shih-Fen Cheng:
Strategic Planning for Flexible Agent Availability in Large Taxi Fleets. 552-560
Session 2C: Fair Allocations + Public Goods Games
- Ankang Sun, Bo Chen, Xuan Vinh Doan:
Equitability and Welfare Maximization for Allocating Indivisible Items. 561-563
- Martin Hoefer, Marco Schmalhofer, Giovanna Varricchio:
Best of Both Worlds: Agents with Entitlements. 564-572
- Inbal Rozencweig, Reshef Meir, Nicholas Mattei, Ofra Amir:
Mitigating Skewed Bidding for Conference Paper Assignment. 573-581
- David Sychrovsky, Jakub Cerný, Sylvain Lichau, Martin Loebl:
Price of Anarchy in a Double-Sided Critical Distribution System. 582-590
- Evangelos Markakis, Christodoulos Santorinaios:
Improved EFX Approximation Guarantees under Ordinal-based Assumptions. 591-599
- Zirou Qiu, Andrew Yuan, Chen Chen, Madhav V. Marathe, S. S. Ravi, Daniel J. Rosenkrantz, Richard Edwin Stearns, Anil Vullikanti:
Assigning Agents to Increase Network-Based Neighborhood Diversity. 600-608
- Jichen Li, Xiaotie Deng, Yukun Cheng, Yuqi Pan, Xuanzhi Xia, Zongjun Yang, Jan Xie:
Altruism, Collectivism and Egalitarianism: On a Variety of Prosocial Behaviors in Binary Networked Public Goods Games. 609-624
- Jacques Bara, Fernando P. Santos, Paolo Turrini:
The Role of Space, Density and Migration in Social Dilemmas. 625-633
Session 2D: Behavioral and Algorithmic Game Theory
- Daniel Chui, Jason D. Hartline, James R. Wright:
Non-strategic Econometrics (for Initial Play). 634-642
- Natalie Collina, Eshwar Ram Arunachaleswaran, Michael Kearns:
Efficient Stackelberg Strategies for Finitely Repeated Games. 643-651
- Libo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock, Jiamou Liu:
Learning Density-Based Correlated Equilibria for Markov Games. 652-660
- Maizi Liao, Wojciech Golab, Seyed Majid Zahedi:
IRS: An Incentive-compatible Reward Scheme for Algorand. 661-669
- Bryce Wiedenbeck, Erik Brinkman:
Data Structures for Deviation Payoffs. 670-678
Session 2E: Humans and AI Agents
- Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang, Yali Du:
PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination. 679-688
- Saaduddin Mahmud, Connor Basich, Shlomo Zilberstein:
Semi-Autonomous Systems with Contextual Competence Awareness. 689-697
- Yubin Kim, Huili Chen, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park:
Joint Engagement Classification using Video Augmentation Techniques for Multi-person HRI in the wild. 698-707
- Haochen Wu, Pedro Sequeira, David V. Pynadath:
Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning. 708-716
- Nele Albers, Mark A. Neerincx, Willem-Paul Brinkman:
Persuading to Prepare for Quitting Smoking with a Virtual Coach: Using States and User Characteristics to Predict Behavior. 717-726
- Yushan Qian, Bo Wang, Shangzhao Ma, Bin Wu, Shuo Zhang, Dongming Zhao, Kun Huang, Yuexian Hou:
Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation. 727-736
- Weilai Xu, Fred Charles, Charlie Hargood:
Generating Stylistic and Personalized Dialogues for Virtual Agents in Narratives. 737-746
- David Obremski, Ohenewa Bediako Akuffo, Leonie Lücke, Miriam Semineth, Sarah Tomiczek, Hanna-Finja Weichert, Birgit Lugrin:
Reducing Racial Bias by Interacting with Virtual Agents: An Intervention in Virtual Reality. 747-755
Session 2F: Knowledge Representation and Reasoning II
- Jinzhao Li, Daniel Fink, Christopher Wood, Carla P. Gomes, Yexiang Xue:
Provable Optimization of Quantal Response Leader-Follower Games with Exponentially Large Action Spaces. 756-765
- Masoud Tabatabaei, Wojciech Jamroga:
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory. 766-774
- Rodica Condurache, Catalin Dima, Youssouf Oualhadj, Nicolas Troquard:
Synthesis of Resource-Aware Controllers Against Rational Agents. 775-783
- Catalin Dima, Wojciech Jamroga:
Computationally Feasible Strategies. 784-792
- Angelo Ferrando, Vadim Malvone:
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information. 793-801
Session 3A: Reinforcement Learning
- Durgesh Kalwar, Omkar Shelke, Somjit Nath, Hardik Meisheri, Harshad Khadilkar:
Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning. 802-809
- Liam Hebert, Lukasz Golab, Pascal Poupart, Robin Cohen:
FedFormer: Contextual Federation with Attention in Reinforcement Learning. 810-818
- Wenhao Li, Baoxiang Wang, Shanchao Yang, Hongyuan Zha:
Diverse Policy Optimization for Structured Action Space. 819-828
- Paul Daoudi, Bogdan Robu, Christophe Prieur, Ludovic Dos Santos, Merwan Barlier:
Enhancing Reinforcement Learning Agents with Local Guides. 829-838
- Peter Vamplew, Benjamin J. Smith, Johan Källström, Gabriel de Oliveira Ramos, Roxana Radulescu, Diederik M. Roijers, Conor F. Hayes, Fredrik Heintz, Patrick Mannion, Pieter J. K. Libin, Richard Dazeley, Cameron Foale:
Scalar Reward is Not Enough. 839-841
- Alexandre Trudeau, Michael Bowling:
Targeted Search Control in AlphaZero for Effective Policy Improvement. 842-850
- Tom Haider, Karsten Roscher, Felippe Schmoeller da Roza, Stephan Günnemann:
Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics Models. 851-859
- Jiajing Ling, Moritz Lukas Schuler, Akshat Kumar, Pradeep Varakantham:
Knowledge Compilation for Constrained Combinatorial Action Spaces in Reinforcement Learning. 860-868
Session 3B: Multiagent Path Finding
- Gilad Fine, Dor Atzmon, Noa Agmon:
Anonymous Multi-Agent Path Finding with Individual Deadlines. 869-877
- Junyoung Park, Changhyun Kwon, Jinkyoo Park:
Learn to Solve the Min-max Multiple Traveling Salesmen Problem with Reinforcement Learning. 878-886
- Hikaru Asano, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno:
Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation. 887-895
- Isseïnie Calviac, Ocan Sankur, François Schwarzentruber:
Improved Complexity Results and an Efficient Solution for Connected Multi-Agent Path Finding. 896-904
- Yaakov Livne, Dor Atzmon, Shawn Skyler, Eli Boyarski, Amir Shapiro, Ariel Felner:
Optimally Solving the Multiple Watchman Route Problem with Heuristic Search. 905-913
- Yuki Miyashita, Tomoki Yamauchi, Toshiharu Sugawara:
Distributed Planning with Asynchronous Execution with Local Navigation for Multi-agent Pickup and Delivery Problem. 914-922
- Jonathan Diller, Qi Han:
Energy-aware UAV Path Planning with Adaptive Speed. 923-931
- Mikkel Abrahamsen, Tzvika Geft, Dan Halperin, Barak Ugav:
Coordination of Multiple Robots along Given Paths with Bounded Junction Complexity. 932-940
Session 3C: Matching
- Haris Aziz, Aditya Ganguly, Evi Micha:
Best of Both Worlds Fairness under Entitlements. 941-948
- Haris Aziz:
Probabilistic Rationing with Categorized Priorities: Processing Reserves Fairly and Efficiently. 949-956
- Telikepalli Kavitha, Rohit Vaish:
Semi-Popular Matchings and Copeland Winners. 957-965
- Dusan Knop, Simon Schierreich:
Host Community Respecting Refugee Housing. 966-975
- Mathieu Mari, Michal Pawlowski, Runtian Ren, Piotr Sankowski:
Online Matching with Delays and Stochastic Arrival Times. 976-984
- Niclas Boehmer, Klaus Heeger:
Adapting Stable Matchings to Forced and Forbidden Pairs. 985-993
- Yinghui Wen, Zhongyi Zhang, Jiong Guo:
Stable Marriage in Euclidean Space. 994-1002
- Niclas Boehmer, Klaus Heeger, Stanislaw Szufa:
A Map of Diverse Synthetic Stable Roommates Instances. 1003-1011
Session 3D: Learning in Games
- Yongzhao Wang, Michael P. Wellman:
Empirical Game-Theoretic Analysis for Mean Field Games. 1025-1033
- Jing Wang, Meichen Song, Feng Gao, Boyi Liu, Zhaoran Wang, Yi Wu:
Differentiable Arbitrating in Zero-sum Markov Games. 1034-1043
- Madelyn Gatchel, Bryce Wiedenbeck:
Learning Parameterized Families of Games. 1044-1052
- Zelai Xu, Yancheng Liang, Chao Yu, Yu Wang, Yi Wu:
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games. 1053-1061
- Jingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire J. Tomlin, David Fridovich-Keil:
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories. 1062-1070
- Chirag Chhablani, Michael Sullins, Ian A. Kash:
Multiplicative Weight Updates for Extensive Form Games. 1071-1078
- Xu Chen, Shuo Liu, Xuan Di:
A Hybrid Framework of Reinforcement Learning and Physics-Informed Deep Learning for Spatiotemporal Mean Field Games. 1079-1087
- Yang Chen, Libo Zhang, Jiamou Liu, Michael Witbrock:
Adversarial Inverse Reinforcement Learning for Mean Field Games. 1088-1096
Session 3E: Learning with Humans and Robots
- Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew L. Olson, Elisabeth André:
GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations. 1097-1106
- Chao Yu, Xinyi Yang, Jiaxuan Gao, Jiayu Chen, Yunfei Li, Jijia Liu, Yunfei Xiang, Ruixin Huang, Huazhong Yang, Yi Wu, Yu Wang:
Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative Exploration. 1107-1115
- Prasanth Sengadu Suresh, Yikang Gui, Prashant Doshi:
Dec-AIRL: Decentralized Adversarial IRL for Human-Robot Teaming. 1116-1124
- Neeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson, Katherine Rose Driggs-Campbell:
Structural Attention-based Recurrent Variational Autoencoder for Highway Vehicle Anomaly Detection. 1125-1134
- Maxence Hussonnois, Thommen George Karimpanal, Santu Rana:
Controlled Diversity with Preference: Towards Learning a Diverse Set of Desired Skills. 1135-1143
- Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley:
Learning from Multiple Independent Advisors in Multi-agent Reinforcement Learning. 1144-1153
Session 3F: Engineering Multiagent Systems
- Samuel H. Christie V., Munindar P. Singh, Amit K. Chopra:
Kiko: Programming Agents to Enact Interaction Models. 1154-1163
- Rui Zhao, Xu Liu, Yizheng Zhang, Minghao Li, Cheng Zhou, Shuai Li, Lei Han:
CraftEnv: A Flexible Collective Robotic Construction Environment for Multi-Agent Reinforcement Learning. 1164-1172
- Michael Dann, John Thangarajah, Minyi Li:
Feedback-Guided Intention Scheduling for BDI Agents. 1173-1181
- Sebastian Rodriguez, John Thangarajah, Michael Winikoff:
A Behaviour-Driven Approach for Testing Requirements via User and System Stories in Agent Systems. 1182-1190
- Hilal Al Shukairi, Rafael C. Cardoso:
ML-MAS: A Hybrid AI Framework for Self-Driving Vehicles. 1191-1199
- Danai Vachtsevanou, Andrei Ciortea, Simon Mayer, Jérémy Lemée:
Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems. 1200-1208
- Débora C. Engelmann, Alison R. Panisson, Renata Vieira, Jomi Fred Hübner, Viviana Mascardi, Rafael H. Bordini:
MAIDS - A Framework for the Development of Multi-Agent Intentional Dialogue Systems. 1209-1217
- Samuel H. Christie V., Munindar P. Singh, Amit K. Chopra:
Mandrake: Multiagent Systems as a Basis for Programming Fault-Tolerant Decentralized Applications. 1218-1220
Session 4A: Reinforcement and Imitation Learning
- Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao, Jiang Bian:
Curriculum Offline Reinforcement Learning. 1221-1229
- Romain Cravic, Nicolas Gast, Bruno Gaujal:
Decentralized Model-Free Reinforcement Learning in Stochastic Games with Average-Reward Objective. 1230-1238
- Haoyuan Sun, Feng Wu:
Less Is More: Refining Datasets for Offline Reinforcement Learning with Reward Machines. 1239-1247
- John Wesley Hostetter, Mark Abdelshiheed, Tiffany Barnes, Min Chi:
A Self-Organizing Neuro-Fuzzy Q-Network: Systematic Design with Offline Hybrid Learning. 1248-1257
- Jinming Ma, Feng Wu:
Learning to Coordinate from Offline Datasets with Uncoordinated Behavior Policies. 1258-1266
- Caroline Wang, Garrett Warnell, Peter Stone:
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal-Conditioning. 1267-1275
- Xu-Hui Liu, Feng Xu, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Ruifeng Chen, Zongzhang Zhang, Yang Yu:
How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement. 1276-1284
- The Viet Bui, Tien Mai, Thanh Hong Nguyen:
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games. 1285-1293
Session 4B: Multi-Armed Bandits + Monte Carlo Tree Search
- Abheek Ghosh, Dheeraj Nagaraj, Manish Jain, Milind Tambe:
Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless Bandits. 1294-1302
- Dexun Li, Pradeep Varakantham:
Avoiding Starvation of Arms in Restless Multi-Armed Bandits. 1303-1311
- Shresth Verma, Aditya Mate, Kai Wang, Neha Madhiwalla, Aparna Hegde, Aparna Taneja, Milind Tambe:
Restless Multi-Armed Bandits for Maternal and Child Health: Results from Decision-Focused Learning. 1312-1320
- Arpita Biswas, Jackson A. Killian, Paula Rodriguez Diaz, Susobhan Ghosh, Milind Tambe:
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks. 1321-1328
- Jialin Yi, Milan Vojnovic:
On Regret-optimal Cooperative Nonstochastic Multi-armed Bandits. 1329-1335
- Siddharth Chandak, Ilai Bistritz, Nicholas Bambos:
Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics. 1336-1344
- Dixant Mittal, Siddharth Aravindan, Wee Sun Lee:
ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search. 1345-1353
- Debraj Chakraborty, Damien Busatto-Gaston, Jean-François Raskin, Guillermo A. Pérez:
Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal Methods. 1354-1362
Session 4C: Auctions + Voting
- Zhiqiang Zhuang, Kewen Wang, Zhe Wang:
Price of Anarchy for First Price Auction with Risk-Averse Bidders. 1363-1369
- Sizhe Gu, Yao Zhang, Yida Zhao, Dengji Zhao:
A Redistribution Framework for Diffusion Auctions. 1370-1378
- Hongyin Chen, Xiaotie Deng, Ying Wang, Yue Wu, Dengji Zhao:
Sybil-Proof Diffusion Auction in Social Networks. 1379-1387
- Munyque Mittelmann, Laurent Perrussel, Sylvain Bouveret:
Representing and Reasoning about Auctions. 1388-1390
- Aris Filos-Ratsikas, Alexandros A. Voudouris:
Revisiting the Distortion of Distributed Voting. 1391-1399
- Dorothea Baumeister, Linus Boes, Christian Laußmann, Simon Rey:
Bounded Approval Ballots: Balancing Expressiveness and Simplicity for Multiwinner Elections. 1400-1408
- Dimitris Fotakis, Laurent Gourvès:
On the Distortion of Single Winner Elections with Aligned Candidates. 1409-1411
- Ari Conati, Andreas Niskanen, Matti Järvisalo:
SAT-based Judgment Aggregation. 1412-1420
Session 4E: Robotics
- Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding, Xiaowei Huang:
Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation. 1421-1429
- Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru, Wenchao Li:
Byzantine Resilience at Swarm Scale: A Decentralized Blocklist Protocol from Inter-robot Accusations. 1430-1438
- Ori Rappel, Michael Amir, Alfred M. Bruckstein:
Stigmergy-based, Dual-Layer Coverage of Unknown Regions. 1439-1447
- Jinlin Chen, Jiannong Cao, Zhiqin Cheng, Wei Li:
Mitigating Imminent Collision for Multi-robot Navigation: A TTC-force Reward Shaping Approach. 1448-1456
- Arnhav Datar, Nischith Shadagopan M. N, John Augustine:
Gathering of Anonymous Agents. 1457-1465
- Enrico Marchesini, Luca Marzari, Alessandro Farinelli, Christopher Amato:
Safe Deep Reinforcement Learning by Verifying Task-Level Properties. 1466-1475
- Yiwei Lyu, John M. Dolan, Wenhao Luo:
Decentralized Safe Navigation for Multi-agent Systems via Risk-aware Weighted Buffered Voronoi Cells. 1476-1484
- Matteo Bettini, Ajay Shankar, Amanda Prorok:
Heterogeneous Multi-Robot Reinforcement Learning. 1485-1494
Session 4F: Innovative Applications
- Longxiang Shi, Zilin Zhang, Shoujin Wang, Binbin Zhou, Minghui Wu, Cheng Yang, Shijian Li:
Efficient Interactive Recommendation via Huffman Tree-based Policy Learning. 1495-1503
- Ge Gao, Song Ju, Markel Sanz Ausin, Min Chi:
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare. 1504-1513
- Shivendra Agrawal, Suresh Nayak, Ashutosh Naik, Bradley Hayes:
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane. 1514-1523
- Qian Shao, Shih-Fen Cheng:
Preference-Aware Delivery Planning for Last-Mile Logistics. 1524-1532
- Yufeng Shi, Mingxiao Feng, Minrui Wang, Wengang Zhou, Houqiang Li:
Multi-Agent Reinforcement Learning with Safety Layer for Active Voltage Control. 1533-1541
- Phuriwat Worrawichaipat, Enrico H. Gerding, Ioannis Kaparias, Sarvapali D. Ramchurn:
Multi-agent Signalless Intersection Management with Dynamic Platoon Formation. 1542-1550
- Harsh Goel, Yifeng Zhang, Mehul Damani, Guillaume Sartoretti:
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control. 1551-1559
- Shuang Chen, Qisen Xu, Liang Zhang, Yongbo Jin, Wenhao Li, Linjian Mo:
Model-Based Reinforcement Learning for Auto-bidding in Display Advertising. 1560-1568
Session 5A: Multiagent Reinforcement Learning III
- Gaurav Dixit, Kagan Tumer:
Learning Inter-Agent Synergies in Asymmetric Multiagent Systems. 1569-1577
- Aamal Abbas Hussain, Francesco Belardinelli, Georgios Piliouras:
Asymptotic Convergence and Performance of Multi-Agent Q-learning Dynamics. 1578-1586
- Wenli Xiao, Yiwei Lyu, John M. Dolan:
Model-based Dynamic Shielding for Safe and Efficient Multi-agent Reinforcement Learning. 1587-1596
- Jihwan Oh, Joonkee Kim, Minchan Jeong, Se-Young Yun:
Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning. 1597-1605
- Briti Gangopadhyay, Pallab Dasgupta, Soumyajit Dey:
Counterexample-Guided Policy Refinement in Multi-Agent Reinforcement Learning. 1606-1614
- Yang Yu, Qiyue Yin, Junge Zhang, Kaiqi Huang:
Prioritized Tasks Mining for Multi-Task Cooperative Multi-Agent Reinforcement Learning. 1615-1623
- Linghui Meng, Jingqing Ruan, Xuantang Xiong, Xiyun Li, Xi Zhang, Dengpeng Xing, Bo Xu:
M3: Modularization for Multi-task and Multi-agent Offline Pre-training. 1624-1633
Session 5B: Graph Neural Networks + Transformers
- Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Wenxin Tang, Runjie Zhou, Yong Jiang:
User Device Interaction Prediction via Relational Gated Graph Attention Network and Intent-aware Encoder. 1634-1642
- Gregory Everett, Ryan J. Beal, Tim Matthews, Joseph Early, Timothy J. Norman, Sarvapali D. Ramchurn:
Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations. 1643-1651
- Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang, Yu Wang:
Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation. 1652-1660
- Ryan Kortvelesy, Steven D. Morad, Amanda Prorok:
Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning. 1661-1669
- Peiwang Tang, Xianchao Zhang:
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem. 1670-1678
- Matteo Gallici, Mario Martin, Ivan Masmitja:
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems. 1679-1687
- Rohit Chowdhury, Raswanth Murugan, Deepak N. Subramani:
Intelligent Onboard Routing in Stochastic Dynamic Environments using Transformers. 1688-1696
Session 5C: Voting I
- Chris Dong, Patrick Lederer:
Characterizations of Sequential Valuation Rules. 1697-1705
- Niclas Boehmer, Nathan Schaar:
Collecting, Classifying, Analyzing, and Using Real-World Ranking Data. 1706-1715
- Michelle Döring, Jannik Peters:
Margin of Victory for Weighted Tournament Solutions. 1716-1724
- Bartosz Kusek, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk, Dusan Knop:
Bribery Can Get Harder in Structured Multiwinner Approval Election. 1725-1733
- Felix Brandt, Patrick Lederer, Sascha Tausch:
Strategyproof Social Decision Schemes on Super Condorcet Domains. 1734-1742
- Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David E. Narváez, Conor Taliancich, Henry B. Welles:
Separating and Collapsing Electoral Control Types. 1743-1751
- Soroush Ebadian, Mohamad Latifian, Nisarg Shah:
The Distortion of Approval Voting with Runoff. 1752-1760
Session 5D: Blue Sky
- Arvid Horned, Loïs Vanhée:
Models of Anxiety for Agent Deliberation: The Benefits of Anxiety-Sensitive Agents. 1761-1767
- Nimrod Talmon:
Social Choice Around Decentralized Autonomous Organizations: On the Computational Social Choice of Digital Communities. 1768-1773
- Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, Pradeep K. Murukannaiah:
Value Inference in Sociotechnical Systems. 1774-1780
- David Radke, Alexi Orchard:
Presenting Multiagent Challenges in Team Sports Analytics. 1781-1785
- Amit K. Chopra, Samuel H. Christie V.:
Communication Meaning: Foundations and Directions for Systems Research. 1786-1791
- Zoi Terzopoulou, Marijn A. Keijzer, Gogulapati Sreedurga, Jobst Heitzig:
The Rule-Tool-User Nexus in Digital Collective Decisions. 1792-1796
- Toryn Q. Klassen, Parand Alizadeh Alamdari, Sheila A. McIlraith:
Epistemic Side Effects: An AI Safety Problem. 1797-1801
- Sebastian Stein, Vahid Yazdanpanah:
Citizen-Centric Multiagent Systems. 1802-1807
Session 5E: Adversarial Learning + Social Networks + Causal Graphs
- Yan Shen, Jian Du, Han Zhao, Zhanghexuan Ji, Chunwei Ma, Mingchen Gao:
FedMM: A Communication Efficient Solver for Federated Adversarial Domain Adaptation. 1808-1816
- Michal Tomasz Godziszewski, Yevgeniy Vorobeychik, Tomasz P. Michalak:
Adversarial Link Prediction in Spatial Networks. 1817-1825
- Haoxin Liu, Yao Zhang, Dengji Zhao:
Distributed Mechanism Design in Social Networks. 1826-1834
- Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanovic:
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks. 1835-1844
- H. Van Dyke Parunak:
How to Turn an MAS into a Graphical Causal Model. 1845-1847
Session 5F: Simulations
- Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Arnau Quera-Bofarull, Balaji Krishnamurthy, B. Aditya Prakash, Ramesh Raskar:
Differentiable Agent-based Epidemiology. 1848-1857 - Deepesh Kumar Lall, Garima Shakya, Swaprava Nath:
Social Distancing via Social Scheduling. 1858-1866 - Arnau Quera-Bofarull, Ayush Chopra, Joseph Aylett-Bullock, Carolina Cuesta-Lázaro, Anisoara Calinescu, Ramesh Raskar, Michael J. Wooldridge:
Don't Simulate Twice: One-Shot Sensitivity Analyses via Automatic Differentiation. 1867-1876 - Bernhard C. Geiger, Alireza Jahani, Hussain Hussain, Derek Groen:
Markov Aggregation for Speeding Up Agent-Based Movement Simulations. 1877-1885 - Nutchanon Yongsatianchot, Noah Chicoine, Jacqueline A. Griffin, Özlem Ergun, Stacy Marsella:
Agent-Based Modeling of Human Decision-makers Under Uncertain Information During Supply Chain Shortages. 1886-1894 - Erik van Haeringen, Charlotte Gerritsen:
Simulating Panic Amplification in Crowds via Density-Emotion Interaction. 1895-1902 - Franziska Klügl, Hildegunn Kyvik Nordås:
Modelling Agent Decision Making in Agent-based Simulation - Analysis Using an Economic Technology Uptake Model. 1903-1911 - Erik van Haeringen, Charlotte Gerritsen, Koen V. Hindriks:
Emotion Contagion in Agent-based Simulations of Crowds: A Systematic Review. 1912-1914
Session 6A: Deep Learning
- Jing Yuan, Shaojie Tang:
Worst-Case Adaptive Submodular Cover. 1915-1922 - Quentin Cohen-Solal, Tristan Cazenave:
Minimax Strikes Back. 1923-1931 - Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu:
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning. 1932-1941 - Woojun Kim, Youngchul Sung:
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning. 1942-1950 - Junqi Qian, Paul Weng, Chenmien Tan:
Learning Rewards to Optimize Global Performance Metrics in Deep Reinforcement Learning. 1951-1960 - Hao Zeng, Qiong Wu, Kunpeng Han, Junying He, Haoyuan Hu:
A Deep Reinforcement Learning Approach for Online Parcel Assignment. 1961-1968 - Mohammad Samin Yasar, Tariq Iqbal:
CoRaL: Continual Representation Learning for Overcoming Catastrophic Forgetting. 1969-1978
Session 6B: Multi-objective Planning and Learning
- Atrisha Sarkar, Kate Larson, Krzysztof Czarnecki:
Revealed Multi-Objective Utility Aggregation in Human Driving. 1979-1987 - Conor F. Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel de Oliveira Ramos, Marcello Restelli, Peter Vamplew, Diederik M. Roijers:
A Brief Guide to Multi-Objective Reinforcement Learning and Planning. 1988-1990 - Zimeng Fan, Nianli Peng, Muhang Tian, Brandon Fain:
Welfare and Fairness in Multi-objective Reinforcement Learning. 1991-1999 - Florence Ho, Shinji Nakadai:
Preference-Based Multi-Objective Multi-Agent Path Finding. 2000-2002 - Lucas Nunes Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé, Bruno C. da Silva:
Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization. 2003-2012 - Zhaori Guo, Timothy J. Norman, Enrico H. Gerding:
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility. 2013-2021
Session 6C: Voting II
- Yongjie Yang:
On the Complexity of the Two-Stage Majority Rule. 2022-2030 - Jan Maly, Simon Rey, Ulle Endriss, Martin Lackner:
Fairness in Participatory Budgeting via Equality of Resources. 2031-2039 - Martin Lackner, Jan Maly, Oliviero Nardi:
Free-Riding in Multi-Issue Decisions. 2040-2048 - Wei-Chen Lee, David Hyland, Alessandro Abate, Edith Elkind, Jiarui Gan, Julian Gutierrez, Paul Harrenstein, Michael J. Wooldridge:
k-Prize Weighted Voting Game. 2049-2057 - Andrei Constantinescu, Roger Wattenhofer:
Computing the Best Policy that Survives a Vote. 2058-2066 - Marie Christin Schmidtlein, Ulle Endriss:
Voting by Axioms. 2067-2075 - Javier Maass, Vincent Mousseau, Anaëlle Wilczynski:
A Hotelling-Downs Game for Strategic Candidacy with Binary Issues. 2076-2084 - Zoi Terzopoulou:
Voting with Limited Energy: A Study of Plurality and Borda. 2085-2093
Session 6D: Mechanism Design
- Thomas Archbold, Bart de Keijzer, Carmine Ventre:
Non-Obvious Manipulability for Single-Parameter Agents and Bilateral Trade. 2107-2115 - Hau Chan, Chenhao Wang:
Mechanism Design for Improving Accessibility to Public Facilities. 2116-2124 - Diodato Ferraioli, Carmine Ventre:
Explicit Payments for Obviously Strategyproof Mechanisms. 2125-2133 - Sumedh Pendurkar, Chris Chow, Luo Jie, Guni Sharon:
Bilevel Entropy based Mechanism Design for Balancing Meta in Video Games. 2134-2142 - Bengisu Guresti, Abdullah Vanlioglu, Nazim Kemal Ure:
IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas. 2143-2151 - Aris Filos-Ratsikas, Panagiotis Kanellopoulos, Alexandros A. Voudouris, Rongsen Zhang:
Settling the Distortion of Distributed Facility Location. 2152-2160 - Tianyi Zhang, Junyu Zhang, Sizhe Gu, Dengji Zhao:
Cost Sharing under Private Valuation and Connection Control. 2161-2169 - Houyu Zhou, Guochuan Zhang, Lili Mei, Minming Li:
Facility Location Games with Thresholds. 2170-2178
Session 6E: Social Networks
- Ahad N. Zehmakan:
Random Majority Opinion Diffusion: Stabilization Time, Absorbing States, and Influential Nodes. 2179-2187 - Wiktoria Kosny, Oskar Skibski:
Axiomatic Analysis of Medial Centrality Measures. 2188-2196 - Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao, Shuai Li:
Online Influence Maximization under Decreasing Cascade Model. 2197-2204 - Jie Zhang, Yuezhou Lv, Zihe Wang:
Node Conversion Optimization in Multi-hop Influence Networks. 2205-2212 - Jesse Milzman, Cody Moser:
Decentralized Core-periphery Structure in Social Networks Accelerates Cultural Innovation in Agent-based Modeling. 2213-2221 - Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith, George Skretas:
Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source. 2222-2230 - Jacques Bara, Paolo Turrini, Giulia Andrighetto:
Enabling Imitation-Based Cooperation in Dynamic Social Networks. 2231-2233 - Jacques Bara, Charlie Pilgrim, Paolo Turrini, Stanislav Zhydkov:
The Grapevine Web: Analysing the Spread of False Information in Social Networks with Corrupted Sources. 2234-2242
Session 6F: Norms
- David Radke, Kate Larson, Tim Brecht:
The Importance of Credo in Multiagent Learning. 2243-2252 - Gideon Ogunniye, Nadin Kökciyan:
Contextual Integrity for Argumentation-based Privacy Reasoning. 2253-2261 - Marc Serramia, William Seymour, Natalia Criado, Michael Luck:
Predicting Privacy Preferences for Smart Devices as Norms. 2262-2270 - Andreasa Morris-Martin, Marina De Vos, Julian A. Padget, Oliver Ray:
Agent-directed Runtime Norm Synthesis. 2271-2279 - Dhaminda B. Abeywickrama, Nathan Griffiths, Zhou Xu, Alex Mouzakitis:
Emergence of Norms in Interactions with Complex Rewards. 2280-2282
Poster Session I
- Michael Winikoff, Galina Sidorenko:
Evaluating a Mechanism for Explaining BDI Agent Behaviour. 2283-2285 - Mattias Appelgren, Alex Lascarides:
Learning Manner of Execution from Partial Corrections. 2286-2288 - Jieting Luo, Mehdi Dastani, Thomas Studer, Beishui Liao:
What Do You Care About: Inferring Values from Emotions. 2289-2291 - Zahra Zahedi, Sailik Sengupta, Subbarao Kambhampati:
'Why didn't you allocate this task to them?' Negotiation-Aware Explicable Task Allocation and Contrastive Explanation Generation. 2292-2294 - Yael Septon, Yotam Amitai, Ofra Amir:
Explaining Agent Preferences and Behavior: Integrating Reward Decomposition and Contrastive Highlights. 2295-2297 - David A. Robb, Xingkun Liu, Helen Hastie:
Explanation Styles for Trustworthy Autonomous Systems. 2298-2300 - Taíssa Ribeiro, Ricardo Rodrigues, Carlos Martinho:
Modeling the Interpretation of Animations to Help Improve Emotional Expression. 2301-2303 - Tatiana Chakravorti, Vaibhav Singh, Sarah Rajtmajer, Michael McLaughlin, Robert Fraleigh, Christopher Griffin, Anthony Kwasnica, David M. Pennock, C. Lee Giles:
Artificial Prediction Markets Present a Novel Opportunity for Human-AI Collaboration. 2304-2306 - Samer B. Nashed, Saaduddin Mahmud, Claudia V. Goldman, Shlomo Zilberstein:
Causal Explanations for Sequential Decision Making Under Uncertainty. 2307-2309 - Haozhe Ma, Thanh Vinh Vo, Tze-Yun Leong:
Hierarchical Reinforcement Learning with Human-AI Collaborative Sub-Goals Optimization. 2310-2312 - Anupama Arukgoda, Erandi Lakshika, Michael Barlow, Kasun Gunawardana:
Context-aware Agents based on Psychological Archetypes for Teamwork. 2313-2315 - Ruben S. Verhagen, Mark A. Neerincx, Can Parlar, Marin Vogel, Myrthe L. Tielman:
Personalized Agent Explanations for Human-Agent Teamwork: Adapting Explanations to User Trust, Workload, and Performance. 2316-2318 - Ping Chen, Xinjia Yu, Su Fang Lim, Zhiqi Shen:
A Teachable Agent to Enhance Elderly's Ikigai. 2319-2321 - Gwendolyn Edgar, Matthew McWilliams, Matthias Scheutz:
Improving Human-Robot Team Performance with Proactivity and Shared Mental Models. 2322-2324 - Khaing Phyo Wai, Minghong Geng, Budhitama Subagdja, Shubham Pateria, Ah-Hwee Tan:
Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models. 2325-2327 - Silvia Poletti, Alberto Testolin, Sebastian Tschiatschek:
Learning Constraints From Human Stop-Feedback in Reinforcement Learning. 2328-2330 - Malek Mechergui, Sarath Sreedharan:
Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. 2331-2333 - David V. Pynadath, Nikolos Gurney, Sarah Kenny, Rajay Kumar, Stacy C. Marsella, Haley Matuszak, Hala Mostafa, Pedro Sequeira, Volkan Ustun, Peggy Wu:
Effectiveness of Teamwork-Level Interventions through Decision-Theoretic Reasoning in a Minecraft Search-and-Rescue Task. 2334-2336 - Stéphane Aroca-Ouellette, Miguel Aroca-Ouellette, Upasana Biswas, Katharina Kann, Alessandro Roncone:
Hierarchical Reinforcement Learning for Ad Hoc Teaming. 2337-2339 - Ben Rachmut, Sofia Amador Nelke, Roie Zivan:
Asynchronous Communication Aware Multi-Agent Task Allocation. 2340-2342 - Francesco Leofante, Alessio Lomuscio:
Towards Robust Contrastive Explanations for Human-Neural Multi-agent Systems. 2343-2345 - Sylvie Doutre, Théo Duchatelle, Marie-Christine Lagasquie-Schiex:
Visual Explanations for Defence in Abstract Argumentation. 2346-2348 - Saravanan Ramanathan, Yihao Liu, Xueyan Tang, Wentong Cai, Jingning Li:
Minimising Task Tardiness for Multi-Agent Pickup and Delivery. 2349-2351 - Xiuyi Fan:
Probabilistic Deduction as a Probabilistic Extension of Assumption-based Argumentation. 2352-2354 - Jonathon Schwartz, Hanna Kurniawati:
Bayes-Adaptive Monte-Carlo Planning for Type-Based Reasoning in Large Partially Observable, Multi-Agent Environments. 2355-2357 - Avraham Natan, Roni Stern, Meir Kalech:
Blame Attribution for Multi-Agent Pathfinding Execution Failures. 2358-2360 - Alessandro Burigana, Paolo Felli, Marco Montali, Nicolas Troquard:
A Semantic Approach to Decidability in Epistemic Planning. 2361-2363 - Willy Arthur Silva Reis, Denis Benevolo Pais, Valdinei Freire, Karina Valdivia Delgado:
Forward-PECVaR Algorithm: Exact Evaluation for CVaR SSPs. 2364-2366 - Nadia Abchiche-Mimouni, Leila Amgoud, Farida Zehraoui:
Explainable Ensemble Classification Model based on Argumentation. 2367-2369 - Peter Stringer, Rafael C. Cardoso, Clare Dixon, Michael Fisher, Louise A. Dennis:
Updating Action Descriptions and Plans for Cognitive Agents. 2370-2372 - Leila Amgoud, Philippe Muller, Henri Trenquier:
Argument-based Explanation Functions. 2373-2375 - Andreas Brännström, Virginia Dignum, Juan Carlos Nieves:
A Formal Framework for Deceptive Topic Planning in Information-Seeking Dialogues. 2376-2378 - Dhananjay Raju, Georgios Bakirtzis, Ufuk Topcu:
Memoryless Adversaries in Imperfect Information Games. 2379-2381 - Mehran Hosseini, Alessio Lomuscio:
Bounded and Unbounded Verification of RNN-Based Agents in Non-deterministic Environments. 2382-2384 - Tung Thai, Mudit Verma, Utkarsh Soni, Sriram Gopalakrishnan, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz:
Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments. 2385-2387 - Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew J. Hausknecht, Romain Laroche, Ida Momennejad, Harm van Seijen, Benjamin Van Durme:
One-Shot Learning from a Demonstration with Hierarchical Latent Language. 2388-2390 - Seth Karten, Siva Kailas, Katia P. Sycara:
Emergent Compositional Concept Communication through Mutual Information in Multi-Agent Teams. 2391-2393 - Michael J. Vezina, François Schwarzentruber, Babak Esfandiari, Sandra Morley:
Reasoning about Uncertainty in AgentSpeak using Dynamic Epistemic Logic. 2394-2396 - Kazi Ashik Islam, Da Qi Chen, Madhav V. Marathe, Henning S. Mortveit, Samarth Swarup, Anil Vullikanti:
Towards Optimal and Scalable Evacuation Planning Using Data-driven Agent Based Models. 2397-2399 - Di Wu, Yuan Yao, Natasha Alechina, Brian Logan, John Thangarajah:
Intention Progression with Maintenance Goals. 2400-2402 - Aleksander Czechowski, Frans A. Oliehoek:
Safety Guarantees in Multi-agent Learning via Trapping Regions. 2403-2405 - Joshua Cook, Tristan Scheiner, Kagan Tumer:
Multi-Team Fitness Critics For Robust Teaming. 2406-2408 - Pankaj Kumar:
Multi-Agent Deep Reinforcement Learning for High-Frequency Multi-Market Making. 2409-2411 - Ali Beikmohammadi, Sindri Magnússon:
TA-Explore: Teacher-Assisted Exploration for Facilitating Fast Reinforcement Learning. 2412-2414 - Meera Hahn, Amit Raj, James M. Rehg:
Which way is 'right'?: Uncovering limitations of Vision-and-Language Navigation Models. 2415-2417 - Chen Yang, Guangkai Yang, Junge Zhang:
Learning Individual Difference Rewards in Multi-Agent Reinforcement Learning. 2418-2420 - Zixuan Chen, Wenbin Li, Yang Gao, Yiyu Chen:
TiLD: Third-person Imitation Learning by Estimating Domain Cognitive Differences of Visual Demonstrations. 2421-2423 - Wei Qiu, Weixun Wang, Rundong Wang, Bo An, Yujing Hu, Svetlana Obraztsova, Zinovi Rabinovich, Jianye Hao, Yingfeng Chen, Changjie Fan:
Off-Beat Multi-Agent Reinforcement Learning. 2424-2426 - Benoît Alcaraz, Olivier Boissier, Rémy Chaput, Christopher Leturc:
AJAR: An Argumentation-based Judging Agents Framework for Ethical Reinforcement Learning. 2427-2429 - Pranav Khanna, Guy Tennenholtz, Nadav Merlis, Shie Mannor, Chen Tessler:
Never Worse, Mostly Better: Stable Policy Improvement in Deep Reinforcement Learning. 2430-2432 - Matthias Gerstgrasser, Tom Danino, Sarah Keren:
Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning. 2433-2435 - Siddarth Shandeep Singh, Benjamin Rosman:
The Challenge of Redundancy on Multi-agent Value Factorisation. 2436-2438 - Hugo Gilbert, Mohamed Ouaguenouni, Meltem Öztürk, Olivier Spanjaard:
Robust Ordinal Regression for Collaborative Preference Learning with Opinion Synergies. 2439-2441 - Claude Formanek, Asad Jeewa, Jonathan P. Shock, Arnu Pretorius:
Off-the-Grid MARL: Datasets and Baselines for Offline Multi-Agent Reinforcement Learning. 2442-2444 - Zun Li, Marc Lanctot, Kevin R. McKee, Luke Marris, Ian Gemp, Daniel Hennes, Kate Larson, Yoram Bachrach, Michael P. Wellman, Paul Muller:
Search-Improved Game-Theoretic Multiagent Reinforcement Learning in General and Negotiation Games. 2445-2447 - Xiao Ma, Wu-Jun Li:
Grey-box Adversarial Attack on Communication in Multi-agent Reinforcement Learning. 2448-2450 - Cevahir Köprülü, Ufuk Topcu:
Reward-Machine-Guided, Self-Paced Reinforcement Learning. 2451-2453 - Chao Li, Chen Gong, Qiang He, Xinwen Hou, Yu Liu:
Centralized Cooperative Exploration Policy for Continuous Control Tasks. 2454-2456 - Chaitanya Kharyal, Tanmay Kumar Sinha, Sai Krishna Gottipati, Fatemeh Abdollahi, Srijita Das, Matthew E. Taylor:
Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning. 2457-2459 - Jizhou Wu, Tianpei Yang, Xiaotian Hao, Jianye Hao, Yan Zheng, Weixun Wang, Matthew E. Taylor:
PORTAL: Automatic Curricula Generation for Multiagent Reinforcement Learning. 2460-2462 - Panayiotis Danassis, Aris Filos-Ratsikas, Haipeng Chen, Milind Tambe, Boi Faltings:
AI-driven Prices for Externalities and Sustainability in Production Markets. 2463-2465 - Jonathan Scarlett, Nicholas Teh, Yair Zick:
For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods. 2466-2468 - Haris Aziz, Sean Morota Chu, Zhaohong Sun:
Matching Algorithms under Diversity-Based Reservations. 2469-2471 - Ben Abramowitz, Nicholas Mattei:
Social Mechanism Design: A Low-Level Introduction. 2472-2474 - Evripidis Bampis, Bruno Escoffier, Paul Youssef:
Online 2-stage Stable Matching. 2475-2477 - Xinming Liu, Joseph Y. Halpern:
Strategic Play By Resource-Bounded Agents in Security Games. 2478-2480 - Zijian Shi, John Cartlidge:
Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology. 2481-2483 - Yongzhao Wang, Michael P. Wellman:
Regularization for Strategy Exploration in Empirical Game-Theoretic Analysis. 2484-2486 - Shengbo Chang, Katsuhide Fujita:
A Scalable Opponent Model Using Bayesian Learning for Automated Bilateral Multi-Issue Negotiation. 2487-2489
Poster Session II
- Yangkun Chen, Joseph Suarez, Junjie Zhang, Chenghui Yu, Bo Wu, Hanmo Chen, Hengman Zhu, Rui Du, Shanliang Qian, Shuai Liu, Weijun Hong, Jinke He, Yibing Zhang, Liang Zhao, Clare Zhu, Julian Togelius, Sharada P. Mohanty, Jiaxin Chen, Xiu Li, Xiaolong Zhu, Phillip Isola:
Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO. 2490-2492 - Francisco Supino Marcondes, José João Almeida, Paulo Novais:
SE4AI Issues on Social Media Agent Design with Use Cases. 2493-2495 - Jayati Deshmukh, Nikitha Adivi, Srinath Srinivasa:
Modeling Application Scenarios for Responsible Autonomy using Computational Transcendence. 2496-2498 - Jérémy Lemée, Samuele Burattini, Simon Mayer, Andrei Ciortea:
Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases. 2499-2501 - Vincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull, Antoine Lesage-Landry:
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads. 2502-2504 - Ágnes Cseh, Pascal Führlich, Pascal Lenzner:
The Swiss Gambit. 2505-2507 - Guoxin Sun, Tansu Alpcan, Seyit Camtepe, Andrew C. Cullen, Benjamin I. P. Rubinstein:
An Adversarial Strategic Game for Machine Learning as a Service using System Features. 2508-2510 - Ran Tao, Pan Zhao, Jing Wu, Nicolas F. Martin, Matthew Tom Harrison, Carla Sofia Santos Ferreira, Zahra Kalantari, Naira Hovakimyan:
Optimizing Crop Management with Reinforcement Learning and Imitation Learning. 2511-2513 - Stavros Orfanoudakis, Georgios Chalkiadakis:
A Novel Aggregation Framework for the Efficient Integration of Distributed Energy Resources in the Smart Grid. 2514-2516 - Huy Quang Ngo, Mingyu Guo, Hung Nguyen:
Near Optimal Strategies for Honeypots Placement in Dynamic and Large Active Directory Networks. 2517-2519 - Sanjay Chandlekar, Arthik Boroju, Shweta Jain, Sujit Gujar:
A Novel Demand Response Model and Method for Peak Reduction in Smart Grids - PowerTAC. 2520-2522 - Michaela Kümpel, Jonas Dech, Alina Hawkin, Michael Beetz:
Robotic Shopping Assistance for Everyone: Dynamic Query Generation on a Semantic Digital Twin as a Basis for Autonomous Shopping Assistance. 2523-2525 - Tasfia Mashiat, Xavier Gitiaux, Huzefa Rangwala, Sanmay Das:
Counterfactually Fair Dynamic Assignment: A Case Study on Policing. 2526-2528 - Chikadibia Ihejimba, Behnam Torabi, Rym Z. Wenkstern:
A Cloud-Based Solution for Multi-Agent Traffic Control Systems. 2529-2531 - Dimitris Michailidis, Sennay Ghebreab, Fernando P. Santos:
Balancing Fairness and Efficiency in Transport Network Design through Reinforcement Learning. 2532-2534 - Yu Zhang:
From Abstractions to Grounded Languages for Robust Coordination of Task Planning Robots. 2535-2537 - Mehdi William Othmani-Guibourg, Jean-Loup Farges, Amal El Fallah Seghrouchni:
Idleness Estimation for Distributed Multiagent Patrolling Strategies. 2538-2540 - Francesco Semeraro, Jon Carberry, Angelo Cangelosi:
Simpler rather than Challenging: Design of Non-Dyadic Human-Robot Collaboration to Mediate Human-Human Concurrent Tasks. 2541-2543 - Lei Wu, Bin Guo, Qiuyun Zhang, Zhuo Sun, Jieyi Zhang, Zhiwen Yu:
Learning to Self-Reconfigure for Freeform Modular Robots via Altruism Multi-Agent Reinforcement Learning. 2544-2546 - Alejandro Romero, Gianluca Baldassarre, Richard J. Duro, Vieri Giuliano Santucci:
Learning Multiple Tasks with Non-stationary Interdependencies in Autonomous Robots. 2547-2549 - John Harwell, London Lowmanstone, Maria L. Gini:
Provably Manipulable 3D Structures using Graph Theory. 2550-2552 - Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru, Wenchao Li:
HoLA Robots: Mitigating Plan-Deviation Attacks in Multi-Robot Systems with Co-Observations and Horizon-Limiting Announcements. 2553-2555 - Atsuyoshi Kita, Nobuhiro Suenari, Masashi Okada, Tadahiro Taniguchi:
Online Re-Planning and Adaptive Parameter Update for Multi-Agent Path Finding with Stochastic Travel Times. 2556-2558 - Kang Zhou, Chi Guo, Huyin Zhang, Wenfei Guo:
RTransNav: Relation-wise Transformer Network for More Successful Object Goal Navigation. 2559-2561 - Benedetta Flammini, Davide Azzalini, Francesco Amigoni:
Multi-Agent Pickup and Delivery in Presence of Another Team of Robots. 2562-2564 - Jesus Bujalance Martin, Fabien Moutarde:
Reward Relabelling for Combined Reinforcement and Imitation Learning on Sparse-reward Tasks. 2565-2567 - Xiangguo Liu, Ruochen Jiao, Bowen Zheng, Dave Liang, Qi Zhu:
Connectivity Enhanced Safe Neural Network Planner for Lane Changing in Mixed Traffic. 2568-2570 - Licheng Wen, Pinlong Cai, Daocheng Fu, Song Mao, Yikang Li:
Bringing Diversity to Autonomous Vehicles: An Interpretable Multi-vehicle Decision-making and Planning Framework. 2571-2573 - Edward Vickery, Aditya A. Paranjape:
Loss of Distributed Coverage Using Lazy Agents Operating Under Discrete, Local, Event-Triggered Communication. 2574-2576 - Cheng Zhao, Liansheng Zhuang, Haonan Liu, Yihong Huang, Jian Yang:
Multi-Agent Path Finding via Reinforcement Learning with Hybrid Reward. 2577-2579 - Andrea Di Pietro, Nicola Basilico, Francesco Amigoni:
Multi-Agent Pickup and Delivery with Task Probability Distribution. 2580-2582 - Yupeng Yang, Yiwei Lyu, Wenhao Luo:
Minimally Constraining Line-of-Sight Connectivity Maintenance for Collision-free Multi-Robot Networks under Uncertainty. 2583-2585 - Jianqi Gao, Qi Liu, Shiyu Chen, Kejian Yan, Xinyi Li, Yanjie Li:
Multi-Agent Path Finding with Time Windows: Preliminary Results. 2586-2588 - Su Zhang, Srijita Das, Sriram Ganapathi Subramanian, Matthew E. Taylor:
Two-Level Actor-Critic Using Multiple Teachers. 2589-2591 - Xiaoyan Hu, Ho-fung Leung:
Provably Efficient Offline RL with Options. 2592-2594 - Gonçalo Querido, Alberto Sardinha, Francisco S. Melo:
Learning to Perceive in Deep Model-Free Reinforcement Learning. 2595-2597 - Yutong Wang, Bairan Xiang, Shinan Huang, Guillaume Sartoretti:
SCRIMP: Scalable Communication for Reinforcement- and Imitation-Learning-Based Multi-Agent Pathfinding. 2598-2600 - Xiangrui Meng, Ying Tan:
Learning Group-Level Information Integration in Multi-Agent Communication. 2601-2603 - Ionela G. Mocanu, Vaishak Belle, Brendan Juba:
Learnability with PAC Semantics for Multi-agent Beliefs. 2604-2606 - Mingyang Sun, Yaqing Hou, Jie Kang, Haiyin Piao, Yifeng Zeng, Hongwei Ge, Qiang Zhang:
Improving Cooperative Multi-Agent Exploration via Surprise Minimization and Social Influence Maximization. 2607-2609 - Wiktor Piotrowski, Roni Stern, Yoni Sher, Jacob Le, Matthew Klenk, Johan de Kleer, Shiwali Mohan:
Learning to Operate in Open Worlds by Adapting Planning Models. 2610-2612 - James Kotary, Vincenzo Di Vito, Ferdinando Fioretto:
End-to-End Optimization and Learning for Multiagent Ensembles. 2613-2615 - Haoxiang Ma, Shuo Han, Nandi Leslie, Charles A. Kamhoua, Jie Fu:
Optimal Decoy Resource Allocation for Proactive Defense in Probabilistic Attack Graphs. 2616-2618 - Mateo Mahaut, Francesca Franzon, Roberto Dessì, Marco Baroni:
Referential Communication in Heterogeneous Communities of Pre-trained Visual Deep Networks. 2619-2621 - Haipeng Chen, Bryan Wilder, Wei Qiu, Bo An, Eric Rice, Milind Tambe:
A Learning Approach to Complex Contagion Influence Maximization. 2622-2624 - Nasik Muhammad Nafi, Raja Farrukh Ali, William H. Hsu:
Analyzing the Sensitivity to Policy-Value Decoupling in Deep Reinforcement Learning Generalization. 2625-2627 - Taylor Dohmen, Ashutosh Trivedi:
Reinforcement Learning with Depreciating Assets. 2628-2630 - Kushal Chauhan, Soumya Chatterjee, Akash Reddy, Aniruddha S, Balaraman Ravindran, Pradeep Shenoy:
Matching Options to Tasks using Option-Indexed Hierarchical Reinforcement Learning. 2631-2633 - Wenze Chen, Shiyu Huang, Yuan Chiang, Ting Chen, Jun Zhu:
DGPO: Discovering Multiple Strategies with Diversity-Guided Policy Optimization. 2634-2636 - Prashank Kadam, Ruiyang Xu, Karl J. Lieberherr:
Accelerating Neural MCTS Algorithms using Neural Sub-Net Structures. 2637-2639 - Jing Dong, Li Shen, Yinggan Xu, Baoxiang Wang:
Provably Efficient Convergence of Primal-Dual Actor-Critic with Nonlinear Function Approximation. 2640-2642 - Xueping Gong, Jiheng Zhang:
Achieving Near-optimal Regrets in Confounded Contextual Bandits. 2643-2645 - Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh:
Proportional Fairness in Obnoxious Facility Location. 2646-2648 - Dorothea Baumeister, Linus Boes:
Distortion in Attribute Approval Committee Elections. 2649-2651 - Justin Payan, Rik Sengupta, Vignesh Viswanathan:
Relaxations of Envy-Freeness Over Graphs. 2652-2654 - Mingwei Yang:
Fairly Allocating (Contiguous) Dynamic Indivisible Items with Few Adjustments. 2655-2657 - Rachael Colley, Théo Delemazure, Hugo Gilbert:
Measuring a Priori Voting Power - Taking Delegations Seriously. 2658-2660 - Debajyoti Kar, Palash Dey, Swagato Sanyal:
Sampling-Based Winner Prediction in District-Based Elections. 2661-2663 - Jiawei Li, Hui Wang, Jilong Wang:
Cedric: A Collaborative DDoS Defense System Using Credit. 2664-2666 - Chaya Levinger, Amos Azaria, Noam Hazon:
Social Aware Coalition Formation with Bounded Coalition Size. 2667-2669 - Ioannis Caragiannis, Shivika Narang:
Repeatedly Matching Items to Agents Fairly and Efficiently. 2670-2672 - Jayakrishnan Madathil, Neeldhara Misra, Aditi Sethia:
The Complexity of Minimizing Envy in House Allocation. 2673-2675 - Luke Thorburn, Maria Polukarov, Carmine Ventre:
Error in the Euclidean Preference Model. 2676-2678 - Alessandro Aloisio:
Distance Hypergraph Polymatrix Coordination Games. 2679-2681 - Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David E. Narváez, Conor Taliancich, Henry B. Welles:
Search versus Search for Collapsing Electoral Control Types. 2682-2684 - Xiaolin Sun, Jacob Masur, Ben Abramowitz, Nicholas Mattei, Zizhan Zheng:
Does Delegating Votes Protect Against Pandering Candidates? 2685-2687 - Dolev Mutzari, Yonatan Aumann, Sarit Kraus:
Resilient Fair Allocation of Indivisible Goods. 2688-2690 - Shaojie Bai, Dongxia Wang, Tim Muller, Peng Cheng, Jiming Chen:
Stability of Weighted Majority Voting under Estimated Weights. 2691-2693 - Gogulapati Sreedurga:
Indivisible Participatory Budgeting with Multiple Degrees of Sophistication of Projects. 2694-2696 - Yuan Luo:
Incentivizing Sequential Crowdsourcing Systems. 2697-2699 - Hugh Zhang:
No-regret Learning Dynamics for Sequential Correlated Equilibria. 2700-2702 - Roland Saur, Han La Poutré, Neil Yorke-Smith:
Fair Pricing for Time-Flexible Smart Energy Markets. 2703-2705 - Farzaneh Farhadi, Maria Chli, Nicholas R. Jennings:
Budget-Feasible Mechanism Design for Cost-Benefit Optimization in Gradual Service Procurement. 2706-2708 - MohammadTaghi Hajiaghayi, Max Springer:
Analysis of a Learning Based Algorithm for Budget Pacing. 2709-2711 - Youzhi Zhang, Bo An, V. S. Subrahmanian:
Finding Optimal Nash Equilibria in Multiplayer Games via Correlation Plans. 2712-2714 - Haolin Liu, Xinyuan Lian, Dengji Zhao:
Diffusion Multi-unit Auctions with Diminishing Marginal Utility Buyers. 2715-2717 - Yuhong Xu, Shih-Fen Cheng, Xinyu Chen:
Improving Quantal Cognitive Hierarchy Model Through Iterative Population Learning. 2718-2720 - Jugal Garg, Thorben Tröbst, Vijay V. Vazirani:
A Nash-Bargaining-Based Mechanism for One-Sided Matching Markets and Dichotomous Utilities. 2721-2723 - Fengjuan Jia, Mengxiao Zhang, Jiamou Liu, Bakh Khoussainov:
Differentially Private Diffusion Auction: The Single-unit Case. 2724-2726 - Fedor Duzhin:
Learning in Teams: Peer Evaluation for Fair Assessment of Individual Contributions. 2727-2729
Poster Session III
- Adway Mitra:
Agent-based Simulation of District-based Elections with Heterogeneous Populations. 2730-2732 - Varun Madhavan, Adway Mitra, Partha Pratim Chakrabarti:
Deep Learning-based Spatially Explicit Emulation of an Agent-Based Simulator for Pandemic in a City. 2733-2735 - Yikun Yang, Fenghui Ren, Minjie Zhang:
A Decentralized Multiagent-Based Task Scheduling Framework for Handling Uncertain Events in Fog Computing. 2736-2738 - Theodor Cimpeanu, Luís Moniz Pereira, The Anh Han:
Co-evolution of Social and Non-social Guilt in Structured Populations. 2739-2741 - Leo Ardon, Jared Vann, Deepeka Garg, Thomas Spooner, Sumitra Ganesh:
Phantom - A RL-driven Multi-Agent Framework to Model Complex Systems. 2742-2744 - Ryo Niwa, Shunki Takami, Shusuke Shigenaka, Masaki Onishi, Wataru Naito, Tetsuo Yasutaka:
Simulation Model with Side Trips at a Large-Scale Event. 2745-2747 - Michael Schlechtinger, Damaris Kosack, Heiko Paulheim, Thomas Fetzer, Franz Krause:
The Price of Algorithmic Pricing: Investigating Collusion in a Market Simulation with AI Agents. 2748-2750 - Ryo Nishida, Masaki Onishi, Koichi Hashimoto:
Crowd Simulation Incorporating a Route Choice Model and Similarity Evaluation using Real Large-scale Data. 2751-2753 - Ayushman Panda, Kamalakar Karlapalem:
Capturing Hiders with Moving Obstacles. 2754-2756 - Maëlle Beuret, Irène Foucherot, Christian Gentil, Joël Savelli:
COBAI : A Generic Agent-based Model of Human Behaviors Centered on Contexts and Interactions. 2757-2759 - Michael Curry, Alexander Trott, Soham Phade, Yu Bai, Stephan Zheng:
Learning Solutions in Large Economic Networks using Deep Multi-Agent Reinforcement Learning. 2760-2762 - Anshul Toshniwal, Fernando P. Santos:
Opinion Dynamics in Populations of Converging and Polarizing Agents. 2763-2765 - Luca Becchetti, Vincenzo Bonifaci, Emilio Cruciani, Francesco Pasquale:
On a Voter Model with Context-Dependent Opinion Adoption. 2766-2768 - Abdullah Al Maruf, Luyao Niu, Bhaskar Ramasubramanian, Andrew Clark, Radha Poovendran:
Cognitive Bias-Aware Dissemination Strategies for Opinion Dynamics with External Information Sources. 2769-2771 - Debajyoti Kar, Mert Kosan, Debmalya Mandal, Sourav Medya, Arlei Silva, Palash Dey, Swagato Sanyal:
Feature-based Individual Fairness in k-clustering. 2772-2774 - Helen Sternbach, Sara Cohen:
Fair Facility Location for Socially Equitable Representation. 2775-2777 - Quentin Elsaesser, Patricia Everaere, Sébastien Konieczny:
S&F: Sources and Facts Reliability Evaluation Method. 2778-2780 - Xiangsen Wang, Xianyuan Zhan:
Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization. 2781-2783 - Yun Hua, Shang Gao, Wenhao Li, Bo Jin, Xiangfeng Wang, Hongyuan Zha:
Learning Optimal "Pigovian Tax" in Sequential Social Dilemmas. 2784-2786 - Daan Di Scala, Pinar Yolum:
PACCART: Reinforcing Trust in Multiuser Privacy Agreement Systems. 2787-2789 - Gonul Ayci, Arzucan Özgür, Murat Sensoy, Pinar Yolum:
Explain to Me: Towards Understanding Privacy Decisions. 2790-2791 - Michael A. Goodrich, Jennifer Leaf, Julie A. Adams, Matthias Scheutz:
The Resilience Game: A New Formalization of Resilience for Groups of Goal-Oriented Autonomous Agents. 2792-2794 - M. Amin Rahimian, Fang-Yi Yu, Carlos Hurtado:
Differentially Private Network Data Collection for Influence Maximization. 2795-2797 - Vivek Mallampati, Harish Ravichandar:
Inferring Implicit Trait Preferences from Demonstrations of Task Allocation in Heterogeneous Teams. 2798-2800 - Abhinav Joshi, Areeb Ahmad, Umang Pandey, Ashutosh Modi:
From Scripts to RL Environments: Towards Imparting Commonsense Knowledge to RL Agents. 2801-2803 - Sihong Luo, Jinghao Chen, Zheng Hu, Chunhong Zhang, Benhui Zhuang:
Hierarchical Reinforcement Learning with Attention Reward. 2804-2806 - Stefano Mariani, Pasquale Roseti, Franco Zambonelli:
Towards Multi-agent Learning of Causal Networks. 2807-2809 - Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan, Bryan Kian Hsiang Low:
FedHQL: Federated Heterogeneous Q-Learning. 2810-2812 - Seán Caulfield Curley, Karl Mason, Patrick Mannion:
Know Your Enemy: Identifying and Adapting to Adversarial Attacks in Deep Reinforcement Learning. 2813-2814 - Namyeong Lee, Jun Moon:
Transformer Actor-Critic with Regularization: Automated Stock Trading using Reinforcement Learning. 2815-2817 - Johan Källström, Fredrik Heintz:
Model-Based Actor-Critic for Multi-Objective Reinforcement Learning with Dynamic Utility Functions. 2818-2820 - Shahaf S. Shperberg, Bo Liu, Peter Stone:
Relaxed Exploration Constrained Reinforcement Learning. 2821-2823 - Rafael Pina, Varuna De Silva, Corentin Artaud:
Causality Detection for Efficient Multi-Agent Reinforcement Learning. 2824-2826 - Peter Sunehag, Alexander Sasha Vezhnevets, Edgar A. Duéñez-Guzmán, Igor Mordatch, Joel Z. Leibo:
Diversity Through Exclusion (DTE): Niche Identification for Reinforcement Learning through Value-Decomposition. 2827-2829 - Devdhar Patel, Joshua Russell, Francesca Walsh, Tauhidur Rahman, Terrence J. Sejnowski, Hava T. Siegelmann:
Temporally Layered Architecture for Adaptive, Distributed and Continuous Control. 2830-2832 - Marc Vincent, Amal El Fallah Seghrouchni, Vincent Corruble, Narayan Bernardin, Rami Kassab, Frédéric Barbaresco:
Multi-objective Reinforcement Learning in Factored MDPs with Graph Neural Networks. 2833-2835 - Chirag Chhablani, Ian A. Kash:
An Analysis of Connections Between Regret Minimization and Actor Critic Methods in Cooperative Settings. 2836-2838 - Thomy Phan, Fabian Ritz, Jonas Nüßlein, Michael Kölle, Thomas Gabor, Claudia Linnhoff-Popien:
Attention-Based Recurrency for Multi-Agent Reinforcement Learning under State Uncertainty. 2839-2841 - Nancirose Piazza, Vahid Behzadan:
A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication. 2842-2844 - Cynthia Huang, Pascal Poupart:
Defensive Collaborative Learning: Protecting Objective Privacy in Data Sharing. 2845-2847 - Jonathan C. Balloch, Zhiyu Lin, Xiangyu Peng, Mustafa Hussain, Aarun Srinivas, Robert Wright, Julia M. Kim, Mark O. Riedl:
Neuro-Symbolic World Models for Adapting to Open World Novelty. 2848-2850 - Andrey Kurenkov, Michael Lingelbach, Tanmay Agarwal, Chengshu Li, Emily Jin, Ruohan Zhang, Li Fei-Fei, Jiajun Wu, Silvio Savarese, Roberto Martín-Martín:
Modeling Dynamic Environments with Scene Graph Memory. 2851-2853 - Shivam Gupta, Ganesh Ghalme, Narayanan C. Krishnan, Shweta Jain:
Group Fair Clustering Revisited - Notions and Efficient Algorithm. 2854-2856 - Mohammad Afzal, Sankalp Gambhir, Ashutosh Gupta, S. Krishna, Ashutosh Trivedi, Alvaro Velasquez:
LTL-Based Non-Markovian Inverse Reinforcement Learning. 2857-2859 - Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith:
The Parameterized Complexity of Welfare Guarantees in Schelling Segregation. 2860-2862 - Siddharth Barman, Vishnu V. Narayan, Paritosh Verma:
Fair Chore Division under Binary Supermodular Costs. 2863-2865 - Julian Chingoma, Adrian Haret:
Deliberation as Evidence Disclosure: A Tale of Two Protocol Types. 2866-2868 - Sandip Banerjee, Rajesh Chitnis, Abhiruk Lahiri:
How Does Fairness Affect the Complexity of Gerrymandering? 2869-2871 - Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy, Yadati Narahari:
Individual-Fair and Group-Fair Social Choice Rules under Single-Peaked Preferences. 2872-2874 - Pooja Kulkarni, Rucha Kulkarni, Ruta Mehta:
Maximin Share Allocations for Assignment Valuations. 2875-2876 - Farhad Mohsin, Qishen Han, Sikai Ruan, Pin-Yu Chen, Francesca Rossi, Lirong Xia:
Computational Complexity of Verifying the Group No-show Paradox. 2877-2879 - Jiehua Chen, Gergely Csáji:
Optimal Capacity Modification for Many-To-One Matching Problems. 2880-2882 - Inwon Kang, Qishen Han, Lirong Xia:
Learning to Explain Voting Rules. 2883-2885 - Mingyu Xiao, Guoliang Qiu, Sen Huang:
MMS Allocations of Chores with Connectivity Constraints: New Methods and New Results. 2886-2888 - Haris Aziz, Evi Micha, Nisarg Shah:
Group Fairness in Peer Review. 2889-2891 - Houyu Zhou, Hau Chan, Minming Li:
Altruism in Facility Location Problems. 2892-2894 - Siqi Chen, Qisong Sun, Heng You, Tianpei Yang, Jianye Hao:
Transfer Learning based Agent for Automated Negotiation. 2895-2898 - Tobias Friedrich, Pascal Lenzner, Louise Molitor, Lars Seifert:
Single-Peaked Jump Schelling Games. 2899-2901 - Francis Rhys Ward, Francesca Toni, Francesco Belardinelli:
Defining Deception in Structural Causal Games. 2902-2904 - Yongzhao Wang, Michael P. Wellman:
Game Model Learning for Mean Field Games. 2905-2907 - Sonja Johnson-Yu, Kai Wang, Jessie Finocchiaro, Aparna Taneja, Milind Tambe:
Modeling Robustness in Decision-Focused Learning as a Stackelberg Game. 2908-2909 - Andrzej Nagórko, Pawel Ciosmak, Tomasz P. Michalak:
Two-phase Security Games. 2910-2912 - Costas Courcoubetis, Antonis Dimakis:
Stationary Equilibrium of Mean Field Games with Congestion-dependent Sojourn Times. 2913-2915 - Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan, Panagiotis Angeloudis:
Last-mile Collaboration: A Decentralized Mechanism with Performance Guarantees and its Implementation. 2916-2918 - Benjamin Estermann, Stefan Kramer, Roger Wattenhofer, Ye Wang:
Deep Learning-Powered Iterative Combinatorial Auctions with Active Learning. 2919-2921 - Zhikang Fan, Weiran Shen:
Revenue Maximization Mechanisms for an Uninformed Mediator with Communication Abilities. 2922-2924
Doctoral Consortium
- Jasmina Gajcin:
Counterfactual Explanations for Reinforcement Learning Agents. 2925-2927 - Yohai Trabelsi:
Bipartite Matching for Repeated Allocation Problems. 2928-2930 - Zun Li:
Artificial Intelligence Algorithms for Strategic Reasoning over Complex Multiagent Systems. 2931-2933 - Jacques Bara:
Emergence of Cooperation on Networks. 2934-2936 - Yotam Amitai:
Enhancing User Understanding of Reinforcement Learning Agents Through Visual Explanations. 2937-2939 - Ashwin Kumar:
Algorithmic Fairness in Temporal Resource Allocation. 2940-2942 - Alexander Rodríguez:
AI & Multi-agent Systems for Data-centric Epidemic Forecasting. 2943-2945 - Archana Vadakattu:
Strategy Extraction for Transfer in AI Agents. 2946-2948 - Zhaori Guo:
Multi-Advisor Dynamic Decision Making. 2949-2951 - Stelios Triantafyllou:
Forward-Looking and Backward-Looking Responsibility Attribution in Multi-Agent Sequential Decision Making. 2952-2954 - Saar Cohen:
Coalition Formation in Sequential Decision-Making under Uncertainty. 2955-2957 - Aditi Sethia:
Fine Grained Complexity of Fair and Efficient Allocations. 2958-2960 - Junlin Lu:
Preference Inference from Demonstration in Multi-objective Multi-agent Decision Making. 2961-2963 - Yifan Xu:
Explanation through Dialogue for Reasoning Systems. 2964-2966 - John Lindqvist:
Logics for Information Aggregation. 2967-2969 - Lucas Nunes Alegre:
Towards Sample-Efficient Multi-Objective Reinforcement Learning. 2970-2972 - Yi Yang:
Verifiably Safe Decision-Making for Autonomous Systems. 2973-2975 - Maxence Hussonnois:
A Toolkit for Encouraging Safe Diversity in Skill Discovery. 2976-2978 - Alexander Masterman:
Citizen Centric Demand Responsive Transport. 2979-2981 - Jan Vermaelen:
Safe Behavior Specification and Planning for Autonomous Robotic Systems in Uncertain Environments. 2982-2984 - Rongsen Zhang:
Mechanism Design for Heterogeneous and Distributed Facility Location Problems. 2985-2987 - Behrad Koohy:
Reinforcement Learning and Mechanism Design for Routing of Connected and Autonomous Vehicles. 2988-2990 - Gönül Ayci:
Uncertainty-aware Personal Assistant and Explanation Method for Privacy Decisions. 2991-2992 - Dimitris Michailidis:
Fair Transport Network Design using Multi-Agent Reinforcement Learning. 2993-2995 - Jonathon Schwartz:
Towards Scalable and Robust Decision Making in Partially Observable, Multi-Agent Environments. 2996-2998 - Willem Röpke:
Reinforcement Learning in Multi-Objective Multi-Agent Systems. 2999-3001 - Tasfia Mashiat:
Characterizing Fairness in Societal Resource Allocation. 3002-3004 - Mohammad Samin Yasar:
Learning Transferable Representations for Non-stationary Environments. 3005-3007 - Aaquib Tabrez:
Effective Human-Machine Teaming through Communicative Autonomous Agents that Explain, Coach, and Convince. 3008-3010 - Stylianos Loukas Vasileiou:
Towards a Logical Account for Human-Aware Explanation Generation in Model Reconciliation Problems. 3011-3013 - Abheek Ghosh:
Contests and Other Topics in Multi-Agent Systems. 3014-3016 - Jonathan Diller:
Planning and Coordination for Unmanned Aerial Vehicles. 3017-3019 - Kate Candon:
Towards Creating Better Interactive Agents: Leveraging Both Implicit and Explicit Human Feedback. 3020-3022 - Shivendra Agrawal:
Assistive Robotics for Empowering Humans with Visual Impairments to Independently Perform Day-to-day Tasks. 3023-3025 - Michael C. Chavrimootoo:
Separations and Collapses in Computational Social Choice. 3026-3028 - Jayati Deshmukh:
Emergent Responsible Autonomy in Multi-Agent Systems. 3029-3031 - Nasik Muhammad Nafi:
Learning Representations and Robust Exploration for Improved Generalization in Reinforcement Learning. 3032-3034 - Lucia Cipolina-Kun:
Enhancing Smart, Sustainable Mobility with Game Theory and Multi-Agent Reinforcement Learning. 3035-3037
Demonstrations
- Cleber Jorge Amaral, Jomi Fred Hübner, Timotheus Kampik:
TDD for AOP: Test-Driven Development for Agent-Oriented Programming. 3038-3040 - Amit K. Chopra, Samuel H. Christie V., Munindar P. Singh:
Interaction-Oriented Programming: Intelligent, Meaning-Based Multiagent Systems. 3041-3043 - Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao, Xiaohan Huang:
Improvement and Evaluation of the Policy Legibility in Reinforcement Learning. 3044-3046 - Mara Cairo, Bevin Eldaphonse, Payam Mousavi, Sahir, Sheikh Jubair, Matthew E. Taylor, Graham Doerksen, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Sean Murphy, Johannes Günther, Laura Petrich, Talat Syed:
Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance. 3047-3049 - Matteo Baldoni, Cristina Baroglio, Roberto Micalizio, Stefano Tedeschi:
Robust JaCaMo Applications via Exceptions and Accountability. 3050-3052 - Sandrine Chausson, Ameer Saadat-Yazdi, Xue Li, Jeff Z. Pan, Vaishak Belle, Nadin Kökciyan, Björn Ross:
A Web-based Tool for Detecting Argument Validity and Novelty. 3053-3055 - Marc Roig Vilamala, Dave Braines, Federico Cerutti, Alun D. Preece:
Visualizing Logic Explanations for Social Media Moderation. 3056-3058 - Sukankana Chakraborty, Sebastian Stein, Ananthram Swami, Matthew Jones, Lewis Hill:
The Influence Maximisation Game. 3059-3061 - William Hunt, Jack Ryan, Ayodeji Opeyemi Abioye, Sarvapali D. Ramchurn, Mohammad Divband Soorati:
Demonstrating Performance Benefits of Human-Swarm Teaming. 3062-3064 - Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars, Matthew E. Taylor:
Hiking up that HILL with Cogment-Verse: Train & Operate Multi-agent Systems Learning from Humans. 3065-3067 - Hazel Watson-Smith, Felix Marcon Swadel, Jo Hutton, Kirstin Marcon, Mark Sagar, Shane Blackett, Tiago Rebeiro, Travers Biddle, Tim Wu:
Real Time Gesturing in Embodied Agents for Dynamic Content Creation. 3068-3069