


19th AAMAS 2020: Auckland, New Zealand
- Amal El Fallah Seghrouchni, Gita Sukthankar, Bo An, Neil Yorke-Smith:

Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '20, Auckland, New Zealand, May 9-13, 2020. International Foundation for Autonomous Agents and Multiagent Systems 2020, ISBN 978-1-4503-7518-4
Keynote Talks
- Carla P. Gomes:

AI for Advancing Scientific Discovery for a Sustainable Future. 1 - Thore Graepel:

Automatic Curricula in Deep Multi-Agent Reinforcement Learning. 2 - Alison J. Heppenstall, Nick Malleson:

Building Cities from Slime Mould, Agents and Quantum Field Theory. 3-4 - Sergey Levine:

Unsupervised Reinforcement Learning. 5-6
Research Papers
- Yehia Abd Alrahman, Giuseppe Perelli, Nir Piterman:

Reconfigurable Interaction for MAS Modelling. 7-15 - Nirav Ajmeri, Hui Guo, Pradeep K. Murukannaiah, Munindar P. Singh:

Elessar: Ethics in Norm-Aware Agents. 16-24 - Michael E. Akintunde, Elena Botoeva, Panagiotis Kouvaros, Alessio Lomuscio:

Formal Verification of Neural Agents in Non-deterministic Environments. 25-33 - Shaull Almagor, Morteza Lahijanian:

Explainable Multi Agent Path Finding. 34-42 - Yackolley Amoussou-Guenou, Bruno Biais, Maria Potop-Butucaru, Sara Tucci Piergiovanni:

Rational vs Byzantine Players in Consensus-based Blockchains. 43-51 - Merlinda Andoni, Valentin Robu, Wolf-Gerrit Früh, David Flynn:

Strategic Decision-Making for Power Network Investments with Distributed Renewable Generation. 52-60 - Alessia Antelmi, Gennaro Cordasco, Carmine Spagnuolo, Vittorio Scarano:

A Design-Methodology for Epidemic Dynamics via Time-Varying Hypergraphs. 61-69 - Antonios Antoniadis, Andrés Cristi, Tim Oosterwijk, Alkmini Sgouritsa:

A General Framework for Energy-Efficient Cloud Computing Mechanisms. 70-78 - Enrique Areyan Viqueira, Cyrus Cousins, Amy Greenwald:

Improved Algorithms for Learning Equilibria in Simulation-Based Games. 79-87 - James Ault, Josiah P. Hanna, Guni Sharon:

Learning an Interpretable Traffic Signal Control Policy. 88-96 - Haris Aziz, Anton Baychkov, Péter Biró:

Summer Internship Matching with Funding Constraints. 97-104 - Davide Azzalini, Alberto Castellini, Matteo Luperto, Alessandro Farinelli, Francesco Amigoni:

HMMs for Anomaly Detection in Autonomous Robots. 105-113 - Nathanaël Barrot, Sylvaine Lemeilleur, Nicolas Paget, Abdallah Saffidine:

Peer Reviewing in Participatory Guarantee Systems: Modelisation and Algorithmic Aspects. 114-122 - Connor Basich, Justin Svegliato, Kyle Hollins Wray, Stefan J. Witwicki, Joydeep Biswas, Shlomo Zilberstein:

Learning to Optimize Autonomy in Competence-Aware Systems. 123-131 - Dorothea Baumeister, Ann-Kathrin Selker, Anaëlle Wilczynski:

Manipulation of Opinion Polls to Influence Iterative Elections. 132-140 - Ryan Beal, Georgios Chalkiadakis, Timothy J. Norman, Sarvapali D. Ramchurn:

Optimising Game Tactics for Football. 141-149 - Xiaohui Bei, Shengxin Liu, Chung Keung Poon, Hongao Wang:

Candidate Selections with Proportional Fairness Constraints. 150-158 - Matteo Bellusci, Nicola Basilico, Francesco Amigoni:

Multi-Agent Path Finding in Configurable Environments. 159-167 - Arthur Boixel, Ulle Endriss:

Automated Justification of Collective Decisions via Constraint Solving. 168-176 - Iago Bonnici, Abdelkader Gouaïch, Fabien Michel:

Input Addition and Deletion in Reinforcement: Towards Learning with Structural Changes. 177-185 - Sirin Botan, Ulle Endriss:

Majority-Strategyproofness in Judgment Aggregation. 186-194 - Felix Brandt, Martin Bullinger:

Finding and Recognizing Popular Coalition Structures. 195-203 - Jan Bürmann, Enrico H. Gerding, Baharak Rastegari:

Fair Allocation of Resources with Uncertain Availability. 204-212 - Martin Bullinger:

Pareto-Optimality in Cardinal Hedonic Games. 213-221 - Yaniel Carreno, Èric Pairet, Yvan R. Pétillot, Ronald P. A. Petrick:

Task Allocation Strategy for Heterogeneous Robot Teams in Offshore Missions. 222-230 - Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong, Yair Zick:

Weighted Envy-Freeness in Indivisible Item Allocation. 231-239 - Hau Chan, Mohammad T. Irfan, Cuong Viet Than:

Schelling Models with Localized Social Influence: A Game-Theoretic Framework. 240-248 - Ziyu Chen, Wenxin Zhang, Yanchen Deng, Dingding Chen, Qiang Li:

RMB-DPOP: Refining MB-DPOP by Reducing Redundant Inference. 249-257 - Samuel H. Christie V., Amit K. Chopra, Munindar P. Singh:

Refinement for Multiagent Protocols. 258-266 - Murat Cubuktepe, Zhe Xu, Ufuk Topcu:

Policy Synthesis for Factored MDPs with Graph Temporal Logic Specifications. 267-275 - Gianlorenzo D'Angelo, Mattia D'Emidio, Shantanu Das, Alfredo Navarra, Giuseppe Prencipe:

Leader Election and Compaction for Asynchronous Silent Programmable Matter. 276-284 - Michael Dann, John Thangarajah, Yuan Yao, Brian Logan:

Intention-Aware Multiagent Scheduling. 285-293 - Giuseppe De Giacomo, Yves Lespérance:

Goal Formation through Interaction in the Situation Calculus: A Formal Account Grounded in Behavioral Science. 294-302 - Frits de Nijs, Peter J. Stuckey:

Risk-Aware Conditional Replanning for Globally Constrained Multi-Agent Sequential Decision Making. 303-311 - Greg d'Eon, Kate Larson:

Testing Axioms Against Human Reward Divisions in Cooperative Games. 312-320 - Palash Dey, Sourav Medya:

Manipulating Node Similarity Measures in Networks. 321-329 - Gaurav Dixit, Stéphane Airiau, Kagan Tumer:

Gaussian Processes as Multiagent Reward Models. 330-338 - Ryan D'Orazio, Dustin Morrill, James R. Wright, Michael Bowling:

Alternative Function Approximation Parameterizations for Solving Games: An Analysis of ƒ-Regression Counterfactual Regret Minimization. 339-347 - Yihan Du, Siwei Wang, Longbo Huang:

Dueling Bandits: From Two-dueling to Multi-dueling. 348-356 - Abhimanyu Dubey, Alex Pentland:

Private and Byzantine-Proof Cooperative Decision-Making. 357-365 - Edith Elkind, Piotr Faliszewski, Sushmita Gupta, Sanjukta Roy:

Algorithms for Swap and Shift Bribery in Structured Elections. 366-374 - Mirgita Frasheri, José Manuel Cano-García, Eva González-Parada, Baran Çürüklü, Mikael Ekström, Alessandro Vittorio Papadopoulos, Cristina Urdiales:

Adaptive Autonomy in Wireless Sensor Networks. 375-383 - Rupert Freeman, Sujoy Sikdar, Rohit Vaish, Lirong Xia:

Equitable Allocations of Indivisible Chores. 384-392 - Kobi Gal, Ta Duy Nguyen, Quang Nhat Tran, Yair Zick:

Threshold Task Games: Theory, Platform and Experiments. 393-401 - Jiarui Gan, Edith Elkind, Sarit Kraus, Michael J. Wooldridge:

Mechanism Design for Defense Coordination in Security Games. 402-410 - Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, Nidhi Hegde:

Multi Type Mean Field Reinforcement Learning. 411-419 - Jugal Garg, Peter McGlaughlin:

Computing Competitive Equilibria with Mixed Manna. 420-428 - Felix Gervits, Dean Thurston, Ravenna Thielstrom, Terry Fong, Quinn Pham, Matthias Scheutz:

Toward Genuine Robot Teammates: Improving Human-Robot Team Performance Using Robot Shared Mental Models. 429-437 - Sina Ghiassian, Banafsheh Rafiee, Yat Long Lo, Adam White:

Improving Performance in Reinforcement Learning by Breaking Generalization in Neural Networks. 438-446 - Ahana Ghosh, Sebastian Tschiatschek, Hamed Mahdavi, Adish Singla:

Towards Deployment of Robust Cooperative AI Agents: An Algorithmic Framework for Learning Adaptive Policies. 447-455 - Hugo Gimbert, Soumyajit Paul, B. Srivathsan:

A Bridge between Polynomial Optimization and Games with Imperfect Recall. 456-464 - Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich:

Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments. 465-473 - John Harwell, London Lowmanstone, Maria L. Gini:

Demystifying Emergent Intelligence and Its Effect on Performance In Large Robot Swarms. 474-482 - Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening:

Cautious Reinforcement Learning with Logical Constraints. 483-491 - Daniel Hennes, Dustin Morrill, Shayegan Omidshafiei, Rémi Munos, Julien Pérolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, Paavo Parmas, Edgar A. Duéñez-Guzmán, Karl Tuyls:

Neural Replicator Dynamics: Multiagent Learning via Hedging Policy Gradients. 492-501 - Khoi D. Hoang, William Yeoh, Makoto Yokoo, Zinovi Rabinovich:

New Algorithms for Continuous Distributed Constraint Optimization Problems. 502-510 - Safwan Hossain, Nisarg Shah:

The Effect of Strategic Noise in Linear Regression. 511-519 - David Earl Hostallero, Daewoo Kim, Sangwoo Moon, Kyunghwan Son, Wan Ju Kang, Yung Yi:

Inducing Cooperation through Reward Reshaping based on Peer Evaluations in Deep Multi-Agent Reinforcement Learning. 520-528 - Taoan Huang, Weiran Shen, David Zeng, Tianyu Gu, Rohit Singh, Fei Fang:

Green Security Game with Community Engagement. 529-537 - Edward Hughes, Thomas W. Anthony, Tom Eccles, Joel Z. Leibo, David Balduzzi, Yoram Bachrach:

Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games. 538-547 - Léonard Hussenot, Matthieu Geist, Olivier Pietquin:

CopyCAT: Taking Control of Neural Policies with Constant Attacks. 548-556 - Matthew Inkawhich, Yiran Chen, Hai Helen Li:

Snooping Attacks on Deep Reinforcement Learning. 557-565 - Gabriel Istrate, Cosmin Bonchis, Claudiu Gatina:

It's Not Whom You Know, It's What You, or Your Friends, Can Do: Coalitional Frameworks for Network Centralities. 566-574 - Harshavardhan Kamarthi, Priyesh Vijayan, Bryan Wilder, Balaraman Ravindran, Milind Tambe:

Influence Maximization in Unknown Social Networks: Learning Policies for Effective Graph Sampling. 575-583 - Naoyuki Kamiyama:

On Stable Matchings with Pairwise Preferences and Matroid Constraints. 584-592 - Ian A. Kash, Michael Sullins, Katja Hofmann:

Combining No-regret and Q-learning. 593-601 - Yasushi Kawase, Atsushi Iwasaki:

Approximately Stable Matchings with General Constraints. 602-610 - David Kempe, Sixie Yu, Yevgeniy Vorobeychik:

Inducing Equilibria in Networked Public Goods Games through Network Structure Modification. 611-619 - Dong-Ki Kim, Miao Liu, Shayegan Omidshafiei, Sebastian Lopez-Cot, Matthew Riemer, Golnaz Habibi, Gerald Tesauro, Sami Mourad, Murray Campbell, Jonathan P. How:

Learning Hierarchical Teaching Policies for Cooperative Agents. 620-628 - David Klaska, Antonín Kucera, Vojtech Rehák:

Adversarial Patrolling with Drones. 629-637 - Grammateia Kotsialou, Luke Riley:

Incentivising Participation in Liquid Democracy with Breadth-First Delegation. 638-644 - Justin Kruger, Zoi Terzopoulou:

Strategic Manipulation with Incomplete Preferences: Possibilities and Impossibilities for Positional Scoring Rules. 645-653 - Chris J. Kuhlman, Achla Marathe, Anil Vullikanti, Nafisa Halim, Pallab Mozumder:

Increasing Evacuation during Disaster Events. 654-662 - Soh Kumabe, Takanori Maehara:

Convexity of Hypergraph Matching Game. 663-671 - Hian Lee Kwa, Jabez Leong Kit, Roland Bouffanais:

Optimal Swarm Strategy for Dynamic Target Search and Tracking. 672-680 - Salvatore La Torre, Gennaro Parlato:

On the Model-Checking of Branching-time Temporal Logic with BDI Modalities. 681-689 - Yaqing Lai, Wufan Wang, Yunjie Yang, Jihong Zhu, Minchi Kuang:

Hindsight Planner. 690-698 - Christopher Leturc, Grégory Bonnet:

A Deliberate BIAT Logic for Modeling Manipulations. 699-707 - Bo Li, Yingkai Li:

Fair Resource Sharing and Dorm Assignment. 708-716 - Henger Li, Wen Shen, Zizhan Zheng:

Spatial-Temporal Moving Target Defense: A Markov Stackelberg Game Model. 717-725 - Jiaoyang Li, Kexuan Sun, Hang Ma, Ariel Felner, T. K. Satish Kumar, Sven Koenig:

Moving Agents in Formation in Congested Environments. 726-734 - Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur:

On Emergent Communication in Competitive Multi-Agent Teams. 735-743 - Baihan Lin, Guillermo A. Cecchi, Djallel Bouneffouf, Jenna M. Reinen, Irina Rish:

A Story of Two Streams: Reinforcement Learning Models from Human Behavior and Neuropsychiatry. 744-752 - Anji Liu, Yitao Liang, Guy Van den Broeck:

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration. 753-761 - Alessio Lomuscio, Edoardo Pirovano:

Parameterised Verification of Strategic Properties in Probabilistic Multi-Agent Systems. 762-770 - Meghna Lowalekar, Pradeep Varakantham, Patrick Jaillet:

Competitive Ratios for Online Multi-capacity Ridesharing. 771-779 - Yuan Luo, Nicholas R. Jennings:

A Budget-Limited Mechanism for Category-Aware Crowdsourcing Systems. 780-788 - Andrei Lupu, Doina Precup:

Gifting in Multi-Agent Reinforcement Learning. 789-797 - Xueguang Lyu, Christopher Amato:

Likelihood Quantile Networks for Coordinating Multi-Agent Reinforcement Learning. 798-806 - Hongyao Ma, Reshef Meir, David C. Parkes, Elena Wu-Yan:

Penalty Bidding Mechanisms for Allocating Resources and Overcoming Present-Bias. 807-815 - Jinming Ma, Feng Wu:

Feudal Multi-Agent Deep Reinforcement Learning for Traffic Signal Control. 816-824 - Saaduddin Mahmud, Moumita Choudhury, Md. Mosaddek Khan, Long Tran-Thanh, Nicholas R. Jennings:

AED: An Anytime Evolutionary DCOP Algorithm. 825-833 - Alberto Marchesi, Francesco Trovò, Nicola Gatti:

Learning Probably Approximately Correct Maximin Strategies in Simulation-Based Games with Infinite Strategy Spaces. 834-842 - Gilberto Marcon dos Santos, Julie A. Adams:

Optimal Temporal Plan Merging. 851-859 - Eric Mazumdar, Lillian J. Ratliff, Michael I. Jordan, S. Shankar Sastry:

Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games. 860-868 - Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo:

Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning. 869-877 - Congcong Miao, Jilong Wang, Heng Yu, Weichen Zhang, Yinyao Qi:

Trajectory-User Linking with Attentive Recurrent Network. 878-886 - Aniket Murhekar, Ruta Mehta:

Approximate Nash Equilibria of Imitation Games: Algorithms and Complexity. 887-894 - Goran Muric, Alexey Tregubov, Jim Blythe, Andrés Abeliuk, Divya Choudhary, Kristina Lerman, Emilio Ferrara:

Massive Cross-Platform Simulations of Online Social Networks. 895-903 - Pavel Naumov, Jia Tao:

Duty to Warn in Strategic Games. 904-912 - Grigory Neustroev, Mathijs Michiel de Weerdt:

Generalized Optimistic Q-Learning with Provable Efficiency. 913-921 - Marc Neveling, Jörg Rothe:

The Complexity of Cloning Candidates in Multiwinner Elections. 922-930 - Xiaodong Nian, Athirai Aravazhi Irissappane, Diederik M. Roijers:

DCRAC: Deep Conditioned Recurrent Actor-Critic for Multi-Objective Partially Observable Environments. 931-938 - Chris Nota, Philip S. Thomas:

Is the Policy Gradient a Gradient? 939-947 - Alessandro Nuara, Francesco Trovò, Dominic Crippa, Nicola Gatti, Marcello Restelli:

Driving Exploration by Maximum Distribution in Gaussian Process Bandits. 948-956 - Svetlana Obraztsova, Maria Polukarov, Edith Elkind, Marek Grzesiuk:

Multiwinner Candidacy Games. 957-965 - Stefan Olafsson, Byron C. Wallace, Timothy W. Bickmore:

Towards a Computational Framework for Automating Substance Use Counseling with Virtual Agents. 966-974 - Declan Oller, Tobias Glasmachers, Giuseppe Cuccu:

Analyzing Reinforcement Learning Benchmarks with Random Weight Guessing. 975-982 - Yaniv Oshrat, Noa Agmon, Sarit Kraus:

Non-Uniform Policies for Multi-Robot Asymmetric Perimeter Patrol in Adversarial Domains. 983-991 - Han-Ching Ou, Arunesh Sinha, Sze-Chuan Suen, Andrew Perrault, Alpan Raval, Milind Tambe:

Who and When to Screen: Multi-Round Active Screening for Network Recurrent Infectious Diseases Under Uncertainty. 992-1000 - Ling Pan, Qingpeng Cai, Longbo Huang:

Multi-Path Policy Optimization. 1001-1009 - Dhaval Parmar, Stefán Ólafsson, Dina Utami, Prasanth Murali, Timothy W. Bickmore:

Navigating the Combinatorics of Virtual Agent Design Space to Maximize Persuasion. 1010-1018 - Lukasz Pelcner, Shaling Li, Matheus Aparecido do Carmo Alves, Leandro Soriano Marcolino, Alex Collins:

Real-time Learning and Planning in Environments with Swarms: A Hierarchical and a Parameter-based Simulation Approach. 1019-1027 - Florian Pescher, Nils Napp, Benoît Piranda, Julien Bourgeois:

GAPCoD: A Generic Assembly Planner by Constrained Disassembly. 1028-1036 - Lasse Peters, David Fridovich-Keil, Claire J. Tomlin, Zachary N. Sunberg:

Inference-Based Strategy Alignment for General-Sum Differential Games. 1037-1045 - Geoffrey Pettet, Ayan Mukhopadhyay, Mykel J. Kochenderfer, Yevgeniy Vorobeychik, Abhishek Dubey:

On Algorithmic Decision Procedures in Emergency Response Systems in Smart and Connected Communities. 1046-1054 - Thomy Phan, Thomas Gabor, Andreas Sedlmeier, Fabian Ritz, Bernhard Kempter, Cornel Klein, Horst Sauer, Reiner N. Schmid, Jan Wieghardt, Marc Zeller, Claudia Linnhoff-Popien:

Learning and Testing Resilience in Cooperative Multi-Agent Systems. 1055-1063 - Silviu Pitis, Michael R. Zhang:

Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes. 1064-1071 - Artem Polyvyanyy, Zihang Su, Nir Lipovetzky, Sebastian Sardiña:

Goal Recognition Using Off-The-Shelf Process Mining Techniques. 1072-1080 - Julie Porteous, João F. Ferreira, Alan Lindsay, Marc Cavazza:

Extending Narrative Planning Domains with Linguistic Resources. 1081-1089 - Divya Ramesh, Anthony Z. Liu, Andres J. Echeverria, Jean Y. Song, Nicholas R. Waytowich, Walter S. Lasecki:

Yesterday's Reward is Today's Punishment: Contrast Effects in Human Feedback to Reinforcement Learning Agents. 1090-1097 - Gabriel de Oliveira Ramos, Roxana Radulescu, Ann Nowé, Anderson R. Tavares:

Toll-Based Learning for Minimising Congestion under Heterogeneous Preferences. 1098-1106 - Alex Raymond, Hatice Gunes, Amanda Prorok:

Culture-Based Explainable Human-Agent Deconfliction. 1107-1115 - Bram M. Renting, Holger H. Hoos, Catholijn M. Jonker:

Automated Configuration of Negotiation Strategies. 1116-1124 - Cinjon Resnick, Abhinav Gupta, Jakob N. Foerster, Andrew M. Dai, Kyunghyun Cho:

Capacity, Bandwidth, and Compositionality in Emergent Language Learning. 1125-1133 - Lillian M. Rigoli, Patrick Nalepka, Hannah M. Douglas, Rachel W. Kallen, Simon G. Hosking, Christopher J. Best, Elliot Saltzman, Michael J. Richardson:

Employing Models of Human Social Motor Behavior for Artificial Agent Trainers. 1134-1142 - Golden Rockefeller, Shauharda Khadka, Kagan Tumer:

Multi-level Fitness Critics for Cooperative Coevolution. 1143-1151 - Manel Rodriguez-Soto, Maite López-Sánchez, Juan A. Rodríguez-Aguilar:

A Structural Solution to Sequential Moral Dilemmas. 1152-1160 - Stephanie Rosenthal, Laura M. Hiatt:

Human-Centered Decision Support for Agenda Scheduling. 1161-1168 - Yael Sabato, Amos Azaria, Noam Hazon:

Viral vs. Effective: Utility Based Influence Maximization. 1169-1177 - Mirko Salaris, Alessandro Riva, Francesco Amigoni:

Multirobot Coverage of Modular Environments. 1178-1186 - Prathyush Sambaturu, Bijaya Adhikari, B. Aditya Prakash, Srinivasan Venkatramanan, Anil Vullikanti:

Designing Effective and Practical Interventions to Contain Epidemics. 1187-1195 - Navyata Sanghvi, Ryo Yonetani, Kris Kitani:

MGpi: A Computational Model of Multiagent Group Perception and Interaction. 1196-1205 - Riccardo Sartea, Georgios Chalkiadakis, Alessandro Farinelli, Matteo Murari:

Bayesian Active Malware Analysis. 1206-1214 - Yash Satsangi, Sungsu Lim, Shimon Whiteson, Frans A. Oliehoek, Martha White:

Maximizing Information Gain in Partially Observable Environments via Prediction Rewards. 1215-1223 - Grant Schoenebeck, Biaoshuai Tao, Fang-Yi Yu:

Limitations of Greed: Influence Maximization in Undirected Networks Re-visited. 1224-1232 - Marc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar:

A Qualitative Approach to Composing Value-Aligned Norm Systems. 1233-1241 - Weiran Shen, Pingzhong Tang, Xun Wang, Yadong Xu, Xiwang Yang:

Learning to Design Coupons in Online Advertising Markets. 1242-1250 - Maayan Shvo, Toryn Q. Klassen, Shirin Sohrabi, Sheila A. McIlraith:

Epistemic Plan Recognition. 1251-1259 - Rui Silva, Miguel Vasco, Francisco S. Melo, Ana Paiva, Manuela Veloso:

Playing Games in the Dark: An Approach for Cross-Modality Transfer in Reinforcement Learning. 1260-1268 - Thiago D. Simão, Romain Laroche, Rémi Tachet des Combes:

Safe Policy Improvement with an Estimated Baseline Policy. 1269-1277 - Arambam James Singh, Akshat Kumar, Hoong Chuin Lau:

Hierarchical Multiagent Reinforcement Learning for Maritime Traffic Management. 1278-1286 - Oskar Skibski, Takamasa Suzuki, Tomasz Grabowski, Tomasz P. Michalak, Makoto Yokoo:

Signed Graph Games: Coalitional Games with Friends, Enemies and Allies. 1287-1295 - Sebastian Stein, Mateusz Ochal, Ioana-Adriana Moisoiu, Enrico H. Gerding, Raghu K. Ganti, Ting He, Tom La Porta:

Strategyproof Reinforcement Learning for Online Resource Allocation. 1296-1304 - Ana-Andreea Stoica, Abhijnan Chakraborty, Palash Dey, Krishna P. Gummadi:

Minimizing Margin of Victory for Fair Political and Educational Districting. 1305-1313 - Charlie Street, Bruno Lacerda, Manuel Mühlig, Nick Hawes:

Multi-Robot Planning Under Uncertainty with Congestion-Aware Models. 1314-1322 - Jingchang Sun, Pingzhong Tang, Yulong Zeng:

Games of Miners. 1323-1331 - Yanchao Sun, Furong Huang:

Can Agents Learn by Analogy?: An Inferable Model for PAC Reinforcement Learning. 1332-1340 - Stanislaw Szufa, Piotr Faliszewski, Piotr Skowron, Arkadii Slinko, Nimrod Talmon:

Drawing a Map of Elections in the Space of Statistical Cultures. 1341-1349 - Akshat Tandon, Kamalakar Karlapalem:

Capturing Oracle Guided Hiders. 1350-1358 - Pingzhong Tang, Xun Wang, Zihe Wang, Yadong Xu, Xiwang Yang:

Optimized Cost per Mille in Feeds Advertising. 1359-1367 - Wei Tang, Chien-Ju Ho, Yang Liu:

Differentially Private Contextual Dynamic Pricing. 1368-1376 - Swapna Thorve, Zhihao Hu, Kiran Lakkaraju, Joshua Letchford, Anil Vullikanti, Achla Marathe, Samarth Swarup:

An Active Learning Method for the Comparison of Agent-based Models. 1377-1385 - Behnam Torabi, Rym Zalila-Wenkstern, Robert Saylor, Patrick Ryan:

Deployment of a Plug-In Multi-Agent System for Traffic Signal Timing. 1386-1394 - Aristide C. Y. Tossou, Christos Dimitrakakis, Jaroslaw Rzepecki, Katja Hofmann:

A Novel Individually Rational Objective In Multi-Agent Multi-Armed Bandits: Algorithms and Regret Bounds. 1395-1403 - Yuushi Toyoda, Gale M. Lucas, Jonathan Gratch:

The Effects of Autonomy and Task Meaning in Algorithmic Management of Crowdwork. 1404-1412 - J. Gregory Trafton, Laura M. Hiatt, Benjamin Brumback, J. Malcolm McCurry:

Using Cognitive Models to Train Big Data Models with Small Data. 1413-1421 - Line van den Berg, Manuel Atencia, Jérôme Euzenat:

Agent Ontology Alignment Repair through Dynamic Epistemic Logic. 1422-1430 - Elise van der Pol, Thomas Kipf, Frans A. Oliehoek, Max Welling:

Plannable Approximations to MDP Homomorphisms: Equivariance under Actions. 1431-1439 - Haozhe Wang, Jiale Zhou, Xuming He:

Learning Context-aware Task Reasoning for Efficient Meta Reinforcement Learning. 1440-1448 - Kai Wang, Andrew Perrault, Aditya Mate, Milind Tambe:

Scalable Game-Focused Learning of Adversary Models: Data-to-Decisions in Network Security Games. 1449-1457 - Zihe Wang, Weiran Shen, Song Zuo:

Bayesian Nash Equilibrium in First-Price Auction with Discrete Value Distributions. 1458-1466 - Tomasz Was, Marcin Waniek, Talal Rahwan, Tomasz P. Michalak:

The Manipulability of Centrality Measures - An Axiomatic Approach. 1467-1475 - Klaus Weber, Kathrin Janowski, Niklas Rach, Katharina Weitz, Wolfgang Minker, Stefan Ultes, Elisabeth André:

Predicting Persuasive Effectiveness for Multimodal Behavior Adaptation using Bipolar Weighted Argument Graphs. 1476-1484 - Pengfei Wei, Xinghua Qu, Yiping Ke, Tze-Yun Leong, Yew-Soon Ong:

Adaptive Knowledge Transfer based on Transfer Neural Kernel Network. 1485-1493 - Jiali Weng, Fuyuan Xiao, Zehong Cao:

Uncertainty Modelling in Multi-agent Information Fusion Systems. 1494-1502 - Jan Wöhlke, Felix Schmitt, Herke van Hoof:

A Performance-Based Start State Curriculum Framework for Reinforcement Learning. 1503-1511 - Baicen Xiao, Qifan Lu, Bhaskar Ramasubramanian, Andrew Clark, Linda Bushnell, Radha Poovendran:

FRESH: Interactive Reward Shaping in High-Dimensional State Spaces using Human Feedback. 1512-1520 - Tao Xiao, Zhengyang Liu, Wenhan Huang:

On the Complexity of Sequential Posted Pricing. 1521-1529 - Tao Xiao, Sujoy Sikdar:

Size-Relaxed Committee Selection under the Chamberlin-Courant Rule. 1530-1538 - Xinping Xu, Minming Li, Lingjie Duan:

Strategyproof Mechanisms for Activity Scheduling. 1539-1547 - Kentaro Yahiro, Makoto Yokoo:

Game Theoretic Analysis for Two-Sided Matching with Resource Allocation. 1548-1556 - Fan Yang, Bruno Lepri, Wen Dong:

Optimal Control in Partially Observable Complex Social Systems. 1557-1565 - Jiachen Yang, Igor Borovikov, Hongyuan Zha:

Hierarchical Cooperative Multi-Agent Reinforcement Learning with Skill Discovery. 1566-1574 - Yaodong Yang, Rasul Tutunov, Phu Sakulwongtana, Haitham Bou-Ammar:

α^α-Rank: Practically Scaling α-Rank through Stochastic Optimisation. 1575-1583 - Yongjie Yang:

On the Complexity of Destructive Bribery in Approval-Based Multi-winner Voting. 1584-1592 - Hedayat Zarkoob, Hu Fu, Kevin Leyton-Brown:

Report-Sensitive Spot-Checking in Peer-Grading Systems. 1593-1601 - Nicholas Zerbel, Kagan Tumer:

The Power of Suggestion. 1602-1610 - Shangtong Zhang, Wendelin Boehmer, Shimon Whiteson:

Deep Residual Reinforcement Learning. 1611-1619 - Wen Zhang, Dengji Zhao, Hanyu Chen:

Redistribution Mechanism on Networks. 1620-1628 - Wen Zhang, Yao Zhang, Dengji Zhao:

Collaborative Data Acquisition. 1629-1637 - Yihan Zhang, Lyon Zhang, Hanlin Wang, Fabián E. Bustamante, Michael Rubenstein:

SwarmTalk - Towards Benchmark Software Suites for Swarm Robotics Platforms. 1638-1646 - Mingde Zhao, Sitao Luan, Ian Porada, Xiao-Wen Chang, Doina Precup:

META-Learning State-based Eligibility Traces for More Sample-Efficient Policy Evaluation. 1647-1655 - Han Zheng, Jing Jiang, Pengfei Wei, Guodong Long, Chengqi Zhang:

Competitive and Cooperative Heterogeneous Deep Reinforcement Learning. 1656-1664 - Aizhong Zhou, Jiong Guo:

Parameterized Complexity of Shift Bribery in Iterative Elections. 1665-1673 - Changxi Zhu, Yi Cai, Ho-fung Leung, Shuyue Hu:

Learning by Reusing Previous Advice in Teacher-Student Paradigm. 1674-1682
Blue Sky Idea Papers
- Dorothea Baumeister, Tobias Hogrebe, Jörg Rothe:

Towards Reality: Smoothed Analysis in Computational Social Choice. 1691-1695 - Sara Bernardini, Ferdian Jovan, Zhengyi Jiang, Simon Watson, Andrew Weightman, Peiman Moradi, Tom Richardson, Rasoul Sadeghian, Sina Sareh:

A Multi-Robot Platform for the Autonomous Operation and Maintenance of Offshore Wind Farms. 1696-1700 - Virginia Dignum, Frank Dignum:

Agents are Dead. Long live Agents! 1701-1705 - Pradeep K. Murukannaiah, Nirav Ajmeri, Catholijn M. Jonker, Munindar P. Singh:

New Foundations of Ethical Multiagent Systems. 1706-1710 - Oren Salzman, Roni Stern:

Research Challenges and Opportunities in Multi-Agent Path Finding and Multi-Agent Pickup and Delivery Problems. 1711-1715 - Candice Schumann, Jeffrey S. Foster, Nicholas Mattei, John P. Dickerson:

We Need Fairness and Explainability in Algorithmic Hiring. 1716-1720 - Samarth Swarup, Henning S. Mortveit:

Live Simulations. 1721-1725 - Vahid Yazdanpanah, Sara Mehryar, Nicholas R. Jennings, Swenja Surminski, Martin J. Siegert, Jos van Hillegersberg:

Multiagent Climate Change Research. 1726-1731
Extended Abstracts
- Kumar Abhishek, Shweta Jain, Sujit Gujar:

Designing Truthful Contextual Multi-Armed Bandits based Sponsored Search Auctions. 1732-1734 - Abhijin Adiga, Sarit Kraus, S. S. Ravi:

Boolean Games: Inferring Agents' Goals Using Taxation Queries. 1735-1737 - Dhaval Adjodah, Dan Calacci, Abhimanyu Dubey, Anirudh Goyal, P. M. Krafft, Esteban Moro, Alex Pentland:

Leveraging Communication Topologies Between Learning Agents in Deep Reinforcement Learning. 1738-1740 - Akshat Agarwal, Sumit Kumar, Katia P. Sycara, Michael Lewis:

Learning Transferable Cooperative Behavior in Multi-Agent Teams. 1741-1743 - Mona Alshehri, Napoleon H. Reyes, Andre L. C. Barczak:

Evolving Meta-Level Reasoning with Reinforcement Learning and A* for Coordinated Multi-Agent Path-planning. 1744-1746 - Gilad Asharov, Tucker Hybinette Balch, Antigoni Polychroniadou, Manuela Veloso:

Privacy-Preserving Dark Pools. 1747-1749 - Carlos Azevedo, Bruno Lacerda, Nick Hawes, Pedro U. Lima:

Long-Run Multi-Robot Planning With Uncertain Task Durations. 1750-1752 - Haris Aziz, Edward Lee:

The Temporary Exchange Problem. 1753-1755 - Haris Aziz, Serge Gaspers, Zhaohong Sun:

Mechanism Design for School Choice with Soft Diversity Constraints. 1756-1758 - Haris Aziz, Serge Gaspers, Zhaohong Sun, Makoto Yokoo:

Multiple Levels of Importance in Matching with Distributional Constraints: Extended Abstract. 1759-1761 - Andrea Baisero, Christopher Amato:

Learning Complementary Representations of the Past using Auxiliary Tasks in Partially Observable Reinforcement Learning. 1762-1764 - Vaibhav Bajaj, Sachit Rao:

Autonomous Shape Formation and Morphing in a Dynamic Environment by a Swarm of Robots: Extended Abstract. 1765-1767 - Wolfram Barfuss:

Reinforcement Learning Dynamics in the Infinite Memory Limit. 1768-1770 - Dorothea Baumeister, Tobias Hogrebe:

Complexity of Election Evaluation and Probabilistic Robustness: Extended Abstract. 1771-1773 - Dorothea Baumeister, Linus Boes, Tessa Seeger:

Irresolute Approval-based Budgeting. 1774-1776 - Hans L. Bodlaender, Tesshu Hanaka, Lars Jaffke, Hirotaka Ono, Yota Otachi, Tom C. van der Zanden:

Hedonic Seat Arrangement Problems. 1777-1779 - Niclas Boehmer, Edith Elkind:

Stable Roommate Problem With Diversity Preferences. 1780-1782 - Rafael H. Bordini, Rem W. Collier, Jomi Fred Hübner, Alessandro Ricci:

Encapsulating Reactive Behaviour in Goal-Based Plans for Programming BDI Agents: Extended Abstract. 1783-1785 - Jose Cadena, Achla Marathe, Anil Vullikanti:

Finding Spatial Clusters Susceptible to Epidemic Outbreaks due to Undervaccination. 1786-1788 - Arthur Casals, Assia Belbachir, Amal El Fallah Seghrouchni:

Adaptive and Collaborative Agent-based Traffic Regulation using Behavior Trees. 1789-1791 - Jhelum Chakravorty, Patrick Nadeem Ward, Julien Roy, Maxime Chevalier-Boisvert, Sumana Basu, Andrei Lupu, Doina Precup:

Option-Critic in Cooperative Multi-agent Systems. 1792-1794 - Hau Chan, David C. Parkes, Karim R. Lakhani:

The Price of Anarchy of Self-Selection in Tullock Contests. 1795-1797 - Meghan Chandarana, Michael Lewis, Katia P. Sycara, Sebastian A. Scherer:

Human-in-the-loop Planning and Monitoring of Swarm Search and Service Missions. 1798-1800 - Gang Chen:

A New Framework for Multi-Agent Reinforcement Learning - Centralized Training and Exploration with Decentralized Execution via Policy Distillation. 1801-1803 - Weiwei Chen:

Aggregation of Support-Relations of Bipolar Argumentation Frameworks. 1804-1806 - Yang Chen, Jiamou Liu, He Zhao, Hongyi Su:

Social Structure Emergence: A Multi-agent Reinforcement Learning Framework for Relationship Building. 1807-1809 - Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis:

The Fair Contextual Multi-Armed Bandit. 1810-1812 - Yukun Cheng, Xiaotie Deng, Yuhao Li:

Limiting the Deviation Incentives in Resource Sharing Networks. 1813-1815 - Giovanni Ciatto, Davide Calvaresi, Michael Ignaz Schumacher, Andrea Omicini:

An Abstract Framework for Agent-Based Explanations in AI. 1816-1818 - Theodor Cimpeanu, The Anh Han:

Fear of Punishment Promotes the Emergence of Cooperation and Enhanced Social Welfare in Social Dilemmas. 1819-1821 - Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi:

Voting with Random Classifiers (VORACE). 1822-1824 - Zeyuan Cui, Shijun Liu, Li Pan, Qiang He:

Translating Embedding with Local Connection for Knowledge Graph Completion. 1825-1827 - Matteo D'Auria, Eric O. Scott, Rajdeep Singh Lather, Javier Hilty, Sean Luke:

Distributed, Automated Calibration of Agent-based Model Parameters and Agent Behaviors. 1828-1830 - Guohui Ding, Joewie J. Koh, Kelly Merckaert, Bram Vanderborght, Marco M. Nicotra, Christoffer Heckman, Alessandro Roncone, Lijun Chen:

Distributed Reinforcement Learning for Cooperative Multi-Robot Object Manipulation. 1831-1833 - Yinzhao Dong, Chao Yu, Paul Weng, Ahmed Moustafa, Hui Cheng, Hongwei Ge:

Decomposed Deep Reinforcement Learning for Robotic Control. 1834-1836 - Nagat Drawel, Jamal Bentahar, Hongyang Qu:

Computationally Grounded Quantitative Trust with Time. 1837-1839 - Gábor Erdélyi, Yongjie Yang:

Microbribery in Group Identification. 1840-1842 - Alessandro Farinelli, Antonello Contini, Davide Zorzi:

Decentralized Task Assignment for Multi-item Pickup and Delivery in Logistic Scenarios. 1843-1845 - Michele Flammini, Bojana Kodric, Martin Olsen, Giovanna Varricchio:

Distance Hedonic Games. 1846-1848 - Ganesh Ghalme, Swapnil Dhamal, Shweta Jain, Sujit Gujar, Y. Narahari:

Ballooning Multi-Armed Bandits. 1849-1851 - Mahak Goindani, Jennifer Neville:

Cluster-Based Social Reinforcement Learning. 1852-1854 - Nate Gruver, Jiaming Song, Mykel J. Kochenderfer, Stefano Ermon:

Multi-agent Adversarial Inverse Reinforcement Learning with Latent Variables. 1855-1857 - Shubham Gupta, Rishi Hazra, Ambedkar Dukkipati:

Networked Multi-Agent Reinforcement Learning with Emergent Communication. 1858-1860 - Shubham Gupta, Ambedkar Dukkipati:

Winning an Election: On Emergent Strategic Communication in Multi-Agent Networks. 1861-1863 - MohammadTaghi Hajiaghayi, Marina Knittel:

Matching Affinity Clustering: Improved Hierarchical Clustering at Scale with Guarantees. 1864-1866 - Allen Huang, Geoff Nitschke:

Automating Coordinated Autonomous Vehicle Control. 1867-1868 - Yuzhong Huang, Andrés Abeliuk, Fred Morstatter, Pavel Atanasov, Aram Galstyan:

Anchor Attention for Hybrid Crowd Forecasts Aggregation. 1869-1871 - Hangtian Jia, Chunxu Ren, Yujing Hu, Yingfeng Chen, Tangjie Lv, Changjie Fan, Hongyao Tang, Jianye Hao:

Mastering Basketball With Deep Reinforcement Learning: An Integrated Curriculum Training Approach. 1872-1874 - Jinmingwu Jiang, Kaigui Wu:

Multi-agent Path Planning based on MA-RRT* Fixed Nodes. 1875-1877 - Fatema T. Johora, Hao Cheng, Jörg P. Müller, Monika Sester:

An Agent-Based Model for Trajectory Modelling in Shared Spaces: A Combination of Expert-Based and Deep Learning Approaches. 1878-1880 - Jan Karwowski, Jacek Mandziuk, Adam Zychowski:

Anchoring Theory in Sequential Stackelberg Games. 1881-1883 - Eliahu Khalastchi, Meir Kalech:

Efficient Hybrid Fault Detection for Autonomous Robots. 1884-1886 - Raphael Koster, Dylan Hadfield-Menell, Gillian K. Hadfield, Joel Z. Leibo:

Silly Rules Improve the Capacity of Agents to Learn Stable Enforcement and Compliance Behaviors. 1887-1888 - Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati:

Signaling Friends and Head-Faking Enemies Simultaneously: Balancing Goal Obfuscation and Goal Legibility. 1889-1891 - Pankaj Kumar:

Deep Reinforcement Learning for Market Making. 1892-1894 - Chaya Levinger, Noam Hazon, Amos Azaria:

Computing the Shapley Value for Ride-Sharing and Routing Games. 1895-1897 - Jiaoyang Li, Andrew Tinka, Scott Kiesel, Joseph W. Durham, T. K. Satish Kumar, Sven Koenig:

Lifelong Multi-Agent Path Finding in Large-Scale Warehouses. 1898-1900 - Qingbiao Li, Fernando Gama, Alejandro Ribeiro, Amanda Prorok:

Graph Neural Networks for Decentralized Path Planning. 1901-1903 - Bingyu Liu, Shangyu Xie, Yuan Hong:

PANDA: Privacy-Aware Double Auction for Divisible Resources without a Mediator. 1904-1906 - Xiang Liu, Weiwei Wu, Minming Li, Wanyuan Wang:

Two-sided Auctions with Budgets: Fairness, Incentives and Efficiency. 1907-1909 - Shih-Yun Lo, Elaine Schaertl Short, Andrea Lockerd Thomaz:

Robust Following with Hidden Information in Travel Partners. 1910-1912 - Marin Lujak, Alberto Fernández, Eva Onaindia:

A Decentralized Multi-Agent Coordination Method for Dynamic and Constrained Production Planning. 1913-1915 - Xiaobai Ma, Jayesh K. Gupta, Mykel J. Kochenderfer:

Normalizing Flow Model for Policy Representation in Continuous Action Multi-agent Systems. 1916-1918 - Enrico Marchesini, Alessandro Farinelli:

Genetic Deep Reinforcement Learning for Mapless Navigation. 1919-1921 - Sourav Medya, Tianyi Ma, Arlei Silva, Ambuj K. Singh:

A Game Theoretic Approach For k-Core Minimization. 1922-1924 - Erinc Merdivan, Sten Hanke, Matthieu Geist:

Modified Actor-Critics. 1925-1927 - Rupert Mitchell, Jenny Fletcher, Jacopo Panerati, Amanda Prorok:

Multi-Vehicle Mixed Reality Reinforcement Learning for Autonomous Multi-Lane Driving. 1928-1930 - Shuwa Miura, Shlomo Zilberstein:

Maximizing Plan Legibility in Stochastic Environments. 1931-1933 - Marina Moreira, Brian Coltin, Rodrigo Ventura:

Cooperative Real-Time Inertial Parameter Estimation. 1934-1936 - Francesca Mosca, Jose M. Such, Peter McBurney:

Towards a Value-driven Explainable Agent for Collective Privacy. 1937-1939 - Prasanth Murali, Ameneh Shamekhi, Dhaval Parmar, Timothy W. Bickmore:

Argumentation is More Important than Appearance for Designing Culturally Tailored Virtual Agents. 1940-1942 - Rohit Murali, Suravi Patnaik, Stephen Cranefield:

Mining International Political Norms from the GDELT Database. 1943-1945 - Sai Ganesh Nagarajan, David Balduzzi, Georgios Piliouras:

Robust Self-organization in Games: Symmetries, Conservation Laws and Dimensionality Reduction. 1946-1948 - Yusuke Nakata, Sachiyo Arai:

Mini-batch Bayesian Inverse Reinforcement Learning for Multiple Dynamics. 1949-1950 - Shivika Narang, Yadati Narahari:

A Study of Incentive Compatibility and Stability Issues in Fractional Matchings. 1951-1953 - Van Nguyen, Tran Cao Son, Vasileiou Loukas Stylianos, William Yeoh:

Conditional Updates of Answer Set Programming and Its Application in Explainable Planning. 1954-1956 - Eoin O'Neill, David Lillis, Gregory M. P. O'Hare, Rem W. Collier:

Explicit Modelling of Resources for Multi-Agent MicroServices using the CArtAgO Framework. 1957-1959 - Cristobal Pais:

Vulcano: Operational Fire Suppression Management Using Deep Reinforcement Learning. 1960-1962 - Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan:

Hierarchical Reinforcement Learning with Integrated Discovery of Salient Subgoals. 1963-1965 - Zhaoqing Peng, Junqi Jin, Lan Luo, Yaodong Yang, Rui Luo, Jun Wang, Weinan Zhang, Miao Xu, Chuan Yu, Tiejian Luo, Han Li, Jian Xu, Kun Gai:

Sequential Advertising Agent with Interpretable User Hidden Intents. 1966-1968 - Olga Petrova, Karel Durkota, Galina Alperovich, Karel Horák, Michal Najman, Branislav Bosanský, Viliam Lisý:

Discovering Imperfectly Observable Adversarial Actions using Anomaly Detection. 1969-1971 - I. S. W. B. Prasetya, Mehdi Dastani:

Aplib: An Agent Programming Library for Testing Games. 1972-1974 - Amirarsalan Rajabi, Chathika Gunaratne, Alexander V. Mantzaris, Ivan Garibay:

Modeling Disinformation and the Effort to Counter It: A Cautionary Tale of When the Treatment Can Be Worse Than the Disease. 1975-1977 - Francesco Riccio, Roberto Capobianco, Daniele Nardi:

GUESs: Generative modeling of Unknown Environments and Spatial Abstraction for Robots. 1978-1980 - Guillermo Romero Moreno, Long Tran-Thanh, Markus Brede:

Continuous Influence Maximisation for the Voter Dynamics: Is Targeting High-Degree Nodes a Good Strategy? 1981-1983 - Sandhya Saisubramanian, Ece Kamar, Shlomo Zilberstein:

Mitigating the Negative Side Effects of Reasoning with Imperfect Models: A Multi-Objective Approach. 1984-1986 - Anirban Santara, Rishabh Madan, Pabitra Mitra, Balaraman Ravindran:

ExTra: Transfer-guided Exploration. 1987-1989 - Amit Sarker, Abdullahil Baki Arif, Moumita Choudhury, Md. Mosaddek Khan:

C-CoCoA: A Continuous Cooperative Constraint Approximation Algorithm to Solve Functional DCOPs. 1990-1992 - Jaelle Scheuerman, Jason L. Harman, Nicholas Mattei, K. Brent Venable:

Heuristic Strategies in Uncertain Approval Voting Environments. 1993-1995 - Murat Sensoy, Maryam Saleki, Simon Julier, Reyhan Aydogan, John Reid:

Not all Mistakes are Equal. 1996-1998 - Elnaz Shafipour Yourdshahi, Matheus Aparecido do Carmo Alves, Leandro Soriano Marcolino, Plamen Angelov:

On-line Estimators for Ad-hoc Task Allocation. 1999-2001 - Hitoshi Shimizu, Tatsushi Matsubayashi, Akinori Fujino, Hiroshi Sawada:

Theme Park Simulation based on Questionnaires for Maximizing Visitor Surplus. 2002-2004 - Itay Shtechman, Rica Gonen, Erel Segal-Halevi:

Fair Cake-Cutting Algorithms with Real Land-Value Data. 2005-2007 - Shoeb Siddiqui, Ganesh Vanahalli, Sujit Gujar:

BitcoinF: Achieving Fairness For Bitcoin In Transaction Fee Only Model. 2008-2010 - Joseph Singleton, Richard Booth:

An Axiomatic Approach to Truth Discovery. 2011-2013 - Thomas Spooner, Rahul Savani:

Robust Market Making via Adversarial Reinforcement Learning. 2014-2016 - Nanda Kishore Sreenivas, Shrisha Rao:

Analyzing the Effects of Memory Biases and Mood Disorders on Social Performance. 2017-2019 - Joseph Suarez, Yilun Du, Igor Mordatch, Phillip Isola:

Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks. 2020-2022 - Zoi Terzopoulou, Alexander Karpov, Svetlana Obraztsova:

Restricted Domains of Dichotomous Preferences with Possibly Incomplete Information. 2023-2025 - Alvaro Velasquez, Daniel Melcer:

Verification-Guided Tree Search. 2026-2028 - Timothy Verstraeten, Eugenio Bargiacchi, Pieter J. K. Libin, Diederik M. Roijers, Ann Nowé:

Thompson Sampling for Factored Multi-Agent Bandits. 2029-2031 - Rose E. Wang, Sarah A. Wu, James A. Evans, Joshua B. Tenenbaum, David C. Parkes, Max Kleiman-Weiner:

Too Many Cooks: Coordinating Multi-agent Collaboration Through Inverse Planning. 2032-2034 - Shufan Wang, Jian Li:

Online Algorithms for Multi-shop Ski Rental with Machine Learned Predictions. 2035-2037 - Yu Wang, Yilin Shen, Hongxia Jin:

An Interpretable Multimodal Visual Question Answering System using Attention-based Weighted Contextual Features. 2038-2040 - Kaisheng Wu, Yong Qiao, Kaidong Chen, Fei Rong, Liangda Fang, Zhao-Rong Lai, Qian Dong, Liping Xiong:

Automatic Synthesis of Generalized Winning Strategy of Impartial Combinatorial Games. 2041-2043 - Yuanming Xiao, Atena M. Tabakhi, William Yeoh:

Embedding Preference Elicitation Within the Search for DCOP Solutions. 2044-2046 - Yuyu Xu, David C. Jeong, Pedro Sequeira, Jonathan Gratch, Javed Aslam, Stacy Marsella:

A Supervised Topic Model Approach to Learning Effective Styles within Human-Agent Negotiation. 2047-2049 - Hiroaki Yamada, Naoyuki Kamiyama:

An Information Distribution Method for Avoiding Hunting Phenomenon in Theme Parks. 2050-2052 - Tianpei Yang, Jianye Hao, Zhaopeng Meng, Zongzhang Zhang, Yujing Hu, Yingfeng Chen, Changjie Fan, Weixun Wang, Zhaodong Wang, Jiajie Peng:

Efficient Deep Reinforcement Learning through Policy Transfer. 2053-2055 - Vahid Yazdanpanah, Mehdi Dastani, Shaheen Fatima, Nicholas R. Jennings, Devrim Murat Yazan, W. Henk Zijm:

Task Coordination in Multiagent Systems. 2056-2058 - Harel Yedidsion, Shani Alkoby, Peter Stone:

The Sequential Online Chore Division Problem - Definition and Application. 2059-2061 - Nutchanon Yongsatianchot, Stacy Marsella:

A Computational Model of Hurricane Evacuation Decision. 2062-2064 - Chao Yu, Tianpei Yang, Wenxuan Zhu, Yinzhao Dong, Guangliang Li:

Interactive RL via Online Human Demonstrations. 2065-2067 - Guy Zaks, Gilad Katz:

CoMet: A Meta Learning-Based Approach for Cross-Dataset Labeling Using Co-Training. 2068-2070 - Zhiwei Zeng, Zhiqi Shen, Jing Jih Chin, Cyril Leung, Yu Wang, Ying Chi, Chunyan Miao:

Explainable and Contextual Preferences based Decision Making with Assumption-based Argumentation for Diagnostics and Prognostics of Alzheimer's Disease. 2071-2073 - Shuangfeng Zhang, Yuan Liu, Xingren Chen, Xin Zhou:

A POMDP-based Method for Analyzing Blockchain System Security Against Long Delay Attack: (Extended Abstract). 2074-2076 - Yi Zhang, Yu Qian, Yichen Yao, Haoyuan Hu, Yinghui Xu:

Learning to Cooperate: Application of Deep Reinforcement Learning for Online AGV Path Finding. 2077-2079 - Yijie Zhang, Roxana Radulescu, Patrick Mannion, Diederik M. Roijers, Ann Nowé:

Opponent Modelling for Reinforcement Learning in Multi-Objective Normal Form Games. 2080-2082 - Zhi Zhang, Jiachen Yang, Hongyuan Zha:

Integrating Independent and Centralized Multi-agent Reinforcement Learning for Traffic Signal Network Optimization. 2083-2085 - Dengji Zhao, Yiqing Huang, Liat Cohen, Tal Grinshpoun:

Coalitional Games with Stochastic Characteristic Functions Defined by Private Types. 2086-2088 - Adam Zychowski, Jacek Mandziuk:

A Generic Metaheuristic Approach to Sequential Security Games. 2089-2091
Demonstrations
- Cleber Jorge Amaral, Timotheus Kampik, Stephen Cranefield:

A Framework for Collaborative and Interactive Agent-oriented Developer Operations. 2092-2094 - Zehong Cao, Kaichiu Wong, Quan Bai, Chin-Teng Lin:

Hierarchical and Non-Hierarchical Multi-Agent Interactions Based on Unity Reinforcement Learning. 2095-2097 - João Carneiro, Rui Andrade, Patrícia Alves, Luís Conceição, Paulo Novais, Goreti Marreiros:

A Consensus-based Group Decision Support System using a Multi-Agent MicroServices Approach: Demonstration. 2098-2100 - Kristijonas Cyras, Amin Karamlou, Myles Lee, Dimitrios Letsios

, Ruth Misener, Francesca Toni:
AI-assisted Schedule Explainer for Nurse Rostering. 2101-2103 - Daniel Gebbran, Gregor Verbic, Archie C. Chapman, Sleiman Mhanna:

Coordination of Prosumer Agents via Distributed Optimal Power Flow: An Edge Computing Hardware Prototype. 2104-2106 - David Minarsch, Marco Favorito, Ali Hosseini, Jonathan Ward:

Trading Agent Competition with Autonomous Economic Agents. 2107-2110 - Artur Niewiadomski

, Magdalena Kacprzak, Damian Kurpiewski
, Michal Knapik
, Wojciech Penczek
, Wojciech Jamroga:
MsATL: A Tool for SAT-Based ATL Satisfiability Checking. 2111-2113 - Tiago Pinto, Luis Gomes, Pedro Faria, Filipe Sousa, Zita A. Vale:

MARTINE: Multi-Agent based Real-Time INfrastructure for Energy. 2114-2116 - Hedieh Ranjbartabar, Deborah Richards, Ayse Aysin Bilgin, Cat Kutay, Samuel Mascarenhas:

User-Models to Drive an Adaptive Virtual Advisor: Demonstration. 2117-2119 - Behnam Torabi, Rym Zalila-Wenkstern:

DALI: An Agent-Plug-In System to "Smartify" Conventional Traffic Control Systems. 2120-2122 - Agnieszka M. Zbrzezny, Andrzej Zbrzezny

, Sabina Szymoniak, Olga Siedlecka-Lamch, Miroslaw Kurkowski:
VerSecTis - An Agent based Model Checker for Security Protocols. 2123-2125
JAAMAS Track Papers
- Johan Arcile, Raymond Devillers, Hanna Klaudel:

VERIFCAR: A Framework for Modeling and Model checking Communicating Autonomous Vehicles. 2126-2127 - Haris Aziz:

Strategyproof Multi-Item Exchange Under Single-Minded Dichotomous Preferences. 2128-2130 - Cristina Cornelio, Maria Silvia Pini, Francesca Rossi, K. Brent Venable:

Sequential Voting in Multi-agent Soft Constraint Aggregation. 2131-2133 - Dave de Jonge, Dongmo Zhang:

Strategic Negotiations for Extensive-Form Games. 2134-2136 - John A. Doucette, Alan Tsang, Hadi Hosseini, Kate Larson, Robin Cohen:

Inferring True Voting Outcomes in Homophilic Social Networks. 2137-2139 - Rica Gonen, Ozi Egri:

COMBIMA: Truthful, Budget Maintaining, Dynamic Combinatorial Market. 2140-2142 - Noam Hazon, Mira Gonen:

Probabilistic Physical Search on General Graphs: Approximations and Heuristics. 2143-2145 - Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor:

A Very Condensed Survey and Critique of Multiagent Deep Reinforcement Learning. 2146-2148 - Jieting Luo, John-Jules Ch. Meyer, Max Knobbout:

A Formal Framework for Reasoning about Opportunistic Propensity in Multi-agent Systems. 2149-2151 - Andreasa Morris-Martin, Marina De Vos, Julian A. Padget:

Norm Emergence in Multiagent Systems: A Viewpoint Paper. 2152-2154 - Olabambo I. Oluwasuji, Obaid Malik, Jie Zhang, Sarvapali D. Ramchurn:

Solving the Fair Electric Load Shedding Problem in Developing Countries. 2155-2157 - Roxana Radulescu, Patrick Mannion, Diederik M. Roijers, Ann Nowé:

Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey. 2158-2160 - Avi Rosenfeld, Ariella Richardson:

Why, Who, What, When and How about Explainability in Human-Agent Systems. 2161-2164 - Felipe Leno da Silva, Garrett Warnell, Anna Helena Reali Costa, Peter Stone:

Agents Teaching Agents: A Survey on Inter-agent Transfer Learning. 2165-2167
Doctoral Consortium
- Carlos Azevedo:

Long-Run Multi-Robot Planning Under Uncertain Task Durations. 2168-2170 - Davide Azzalini:

Modeling and Comparing Robot Behaviors for Anomaly Detection. 2171-2173 - Connor Basich:

Competence-Aware Systems for Long-Term Autonomy. 2174-2175 - Arthur Boixel:

Computer-aided Reasoning about Collective Decision Making. 2176-2178 - Elizabeth Bondi:

Vision for Decisions: Utilizing Uncertain Real-Time Information and Signaling for Conservation. 2179-2181 - Jan Bürmann:

Efficiency and Fairness of Resource Utilisation under Uncertainty. 2182-2184 - Martin Bullinger:

Computing Desirable Partitions in Coalition Formation Games. 2185-2187 - Theodor Cimpeanu:

Cost Effective Interventions in Complex Networks Using Agent-Based Modelling and Simulations. 2188-2190 - John Harwell:

A Theoretical Framework for Self-Organized Task Allocation in Large Swarms. 2191-2192 - Johan Källström:

Adaptive Agent-Based Simulation for Individualized Training. 2193-2195 - Andreasa Morris-Martin:

Decentralised Runtime Norm Synthesis. 2196-2198 - Francesca Mosca:

Value-Aligned and Explainable Agents for Collective Decision Making: Privacy Application. 2199-2200 - Sindhu Padakandla:

Reinforcement Learning Algorithms for Autonomous Adaptive Agents. 2201-2203 - Michael Pernpeintner:

Achieving Emergent Governance in Competitive Multi-Agent Systems. 2204-2206 - Roxana Radulescu:

A Utility-Based Perspective on Multi-Objective Multi-Agent Decision Making. 2207-2208 - Jaelle Scheuerman:

Computational Methods for Simulating Biased Agents. 2209-2210 - Joseph Singleton:

Truth Discovery: Who to Trust and What to Believe. 2211-2213 - Ana-Andreea Stoica:

Algorithmic Fairness for Networked Algorithms. 2214-2216 - Charlie Street:

Towards Multi-Robot Coordination under Temporal Uncertainty. 2217-2218 - Zhaohong Sun:

New Challenges in Matching with Constraints. 2219-2221 - Zoi Terzopoulou:

Incomplete Opinions in Collective Decision Making. 2222-2224 - Miguel Vasco:

Multimodal Representation Learning for Robotic Cross-Modality Policy Transfer. 2225-2227 - Kai Wang:

Balance Between Scalability and Optimality in Network Security Games. 2228-2230 - Wenlong Wang:

Implementing Securities Based Decision Markets with Stochastic Decision Rules. 2231-2233 - Mengxiao Zhang:

Incentive Mechanisms for Data Privacy Preservation and Pricing. 2234-2236