20th AAMAS 2021: Virtual Event, UK
- Frank Dignum, Alessio Lomuscio, Ulle Endriss, Ann Nowé:
AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021. ACM 2021, ISBN 978-1-4503-8307-3
Blue Sky Ideas Track
- Niclas Boehmer, Rolf Niedermeier:
Broadening the Research Agenda for Computational Social Choice: Multiple Preference Profiles and Multiple Solutions. 1-5
- Gabriel Istrate:
Models We Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation. 6-11
- Amol Kelkar:
Cognitive Homeostatic Agents. 12-16
- Jeffrey O. Kephart:
Multi-modal Agents for Business Intelligence. 17-22
- Alexander Mey, Frans A. Oliehoek:
Environment Shift Games: Are Multiple Agents the Solution, and not the Problem, to Non-Stationarity? 23-27
- Reuth Mirsky, Peter Stone:
The Seeing-Eye Robot Grand Challenge: Rethinking Automated Care. 28-33
- Decebal Constantin Mocanu, Elena Mocanu, Tiago Pinto, Selima Curci, Phuong H. Nguyen, Madeleine Gibescu, Damien Ernst, Zita A. Vale:
Sparse Training Theory for Scalable and Efficient Agents. 34-38
- Gauthier Picard, Clément Caron, Jean-Loup Farges, Jonathan Guerra, Cédric Pralet, Stéphanie Roussel:
Autonomous Agents and Multiagent Systems Challenges in Earth Observation Satellite Constellations. 39-44
- Avi Rosenfeld:
Better Metrics for Evaluating Explainable Artificial Intelligence. 45-50
- Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou-Ammar, Jun Wang, Matthew E. Taylor:
Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems. 51-56
- Vahid Yazdanpanah, Enrico H. Gerding, Sebastian Stein, Mehdi Dastani, Catholijn M. Jonker, Timothy J. Norman:
Responsibility Research for Trustworthy Autonomous Systems. 57-62
- Dengji Zhao:
Mechanism Design Powered by Social Interactions. 63-67
Main Track
- Amal Abdulrahman, Deborah Richards, Ayse Aysin Bilgin:
Reason Explanation for Encouraging Behaviour Change Intention. 68-77
- Kenshi Abe, Yusuke Kaneko:
Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games. 78-87
- Ramin Ahadi, Wolfgang Ketter, John Collins, Nicolò Daina:
Siting and Sizing of Charging Infrastructure for Shared Autonomous Electric Fleets. 88-96
- Lucas Nunes Alegre, Ana L. C. Bazzan, Bruno C. da Silva:
Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection. 97-105
- Andrea Aler Tubella, Andreas Theodorou, Juan Carlos Nieves:
Interrogating the Black Box: Transparency through Information-Seeking Dialogues. 106-114
- Nicolas Anastassacos, Julian García, Stephen Hailes, Mirco Musolesi:
Cooperation and Reputation Dynamics with Reinforcement Learning. 115-123
- Siddharth Aravindan, Wee Sun Lee:
State-Aware Variational Thompson Sampling for Deep Q-Networks. 124-132
- Haris Aziz, Hau Chan, Ágnes Cseh, Bo Li, Fahimeh Ramezani, Chenhao Wang:
Multi-Robot Task Allocation-Complexity and Approximation. 133-141
- Matteo Baldoni, Cristina Baroglio, Roberto Micalizio, Stefano Tedeschi:
Robustness Based on Accountability in Multiagent Organizations. 142-150
- Jacques Bara, Omer Lev, Paolo Turrini:
Predicting Voting Outcomes in Presence of Communities. 151-159
- Eugenio Bargiacchi, Timothy Verstraeten, Diederik M. Roijers:
Cooperative Prioritized Sweeping. 160-168
- Siddharth Barman, Paritosh Verma:
Existence and Computation of Maximin Fair Allocations Under Matroid-Rank Valuations. 169-177
- Dorothea Baumeister, Tobias Alexander Hogrebe:
Complexity of Scheduling and Predicting Round-Robin Tournaments. 178-186
- Dorothea Baumeister, Linus Boes, Robin Weishaupt:
Complexity of Sequential Rules in Judgment Aggregation. 187-195
- Ryan Beal, Georgios Chalkiadakis, Timothy J. Norman, Sarvapali D. Ramchurn:
Optimising Long-Term Outcomes using Real-World Fluent Objectives: An Application to Football. 196-204
- Ondrej Biza, Dian Wang, Robert Platt Jr., Jan-Willem van de Meent, Lawson L. S. Wong:
Action Priors for Large Action Spaces in Robotics. 205-213
- Sirin Botan, Ronald de Haan, Marija Slavkovik, Zoi Terzopoulou:
Egalitarian Judgment Aggregation. 214-222
- Sirin Botan:
Manipulability of Thiele Methods on Party-List Profiles. 223-231
- Fabien Boucaud, Catherine Pelachaud, Indira Thouvenin:
Decision Model for a Virtual Agent that can Touch and be Touched. 232-241
- Yasser Bourahla, Manuel Atencia, Jérôme Euzenat:
Knowledge Improvement and Diversity under Interaction-Driven Adaptation of Learned Ontologies. 242-250
- Felix Brandt, Martin Bullinger, Patrick Lederer:
On the Indecisiveness of Kelly-Strategyproof Social Choice Functions. 251-259
- Robert Bredereck, Aleksander Figiel, Andrzej Kaczmarczyk, Dusan Knop, Rolf Niedermeier:
High-Multiplicity Fair Allocation Made More Practical. 260-268
- Federico Cacciamani, Andrea Celli, Marco Ciccone, Nicola Gatti:
Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies. 269-278
- Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou:
Imitation Learning from Pixel-Level Demonstrations by HashReward. 279-287
- Pierre Cardi, Laurent Gourvès, Julien Lesca:
Worst-case Bounds for Spending a Common Budget. 288-296
- Vishal Chakraborty, Phokion G. Kolaitis:
Classifying the Complexity of the Possible Winner Problem on Partial Chains. 297-305
- Rahul Chandan, Dario Paccagnan, Jason R. Marden:
Tractable Mechanisms for Computing Near-Optimal Utility Functions. 306-313
- Kangjie Chen, Shangwei Guo, Tianwei Zhang, Shuxin Li, Yang Liu:
Temporal Watermarks for Deep Reinforcement Learning Models. 314-322
- Lin Chen, Lei Xu, Zhimin Gao, Ahmed Imtiaz Sunny, Keshav Kasichainula, Weidong Shi:
A Game Theoretical Analysis of Non-Linear Blockchain System. 323-331
- Mingxi Cheng, Chenzhong Yin, Junyao Zhang, Shahin Nazarian, Jyotirmoy Deshmukh, Paul Bogdan:
A General Trust Framework for Multi-Agent Systems. 332-340
- Shushman Choudhury, Jayesh K. Gupta, Peter Morales, Mykel J. Kochenderfer:
Scalable Anytime Planning for Multi-Agent MDPs. 341-349
- Serafino Cicerone, Alessia Di Fonso, Gabriele Di Stefano, Alfredo Navarra:
MOBLOT: Molecular Oblivious Robots. 350-358
- Saar Cohen, Noa Agmon:
Spatial Consensus-Prevention in Robotic Swarms. 359-367
- Rodica Condurache, Catalin Dima, Youssouf Oualhadj, Nicolas Troquard:
Rational Synthesis in the Commons with Careless and Careful Agents. 368-376
- Elena Congeduti, Alexander Mey, Frans A. Oliehoek:
Loss Bounds for Approximate Influence-Based Abstraction. 377-385
- Jiaxun Cui, William Macke, Harel Yedidsion, Aastha Goyal, Daniel Urieli, Peter Stone:
Scalable Multiagent Driving Policies for Reducing Traffic Congestion. 386-394
- Panayiotis Danassis, Zeki Doruk Erden, Boi Faltings:
Improved Cooperation by Exploiting a Common Signal. 395-403
- Dave de Jonge, Filippo Bistaffa, Jordi Levy:
A Heuristic Algorithm for Multi-Agent Vehicle Routing with Automated Negotiation. 404-412
- Argyrios Deligkas, Themistoklis Melissourgos, Paul G. Spirakis:
Walrasian Equilibria in Markets with Small Demands. 413-419
- Chuang Deng, Zhihai Rong, Lin Wang, Xiaofan Wang:
Modeling Replicator Dynamics in Stochastic Games Using Markov Chain Method. 420-428
- Louise A. Dennis, Nir Oren:
Explaining BDI Agent Behaviour through Dialogue. 429-437
- Palash Dey, Suman Kalyan Maity, Sourav Medya, Arlei Silva:
Network Robustness via Global k-cores. 438-446
- Zehao Dong, Sanmay Das, Patrick J. Fowler, Chien-Ju Ho:
Efficient Nonmyopic Online Allocation of Scarce Reusable Resources. 447-455
- Yali Du, Bo Liu, Vincent Moens, Ziqi Liu, Zhicheng Ren, Jun Wang, Xu Chen, Haifeng Zhang:
Learning Correlated Communication Topology in Multi-Agent Reinforcement Learning. 456-464
- Miroslav Dudík, Xintong Wang, David M. Pennock, David M. Rothschild:
Log-time Prediction Markets for Interval Securities. 465-473
- Pierre El Mqirmi, Francesco Belardinelli, Borja G. León:
An Abstraction-based Method to Check Multi-Agent Deep Reinforcement-Learning Behaviors. 474-482
- Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng:
Safe Multi-Agent Reinforcement Learning via Shielding. 483-491
- Hélène Fargier, Jérôme Mengin:
A Knowledge Compilation Map for Conditional Preference Statements-based Languages. 492-500
- Johan Ferret, Olivier Pietquin, Matthieu Geist:
Self-Imitation Advantage Learning. 501-509
- Alina Filimonov, Reshef Meir:
Strategyproof Facility Location Mechanisms on Discrete Trees. 510-518
- Fabrice Gaignier, Yannis Dimopoulos, Jean-Guy Mailly, Pavlos Moraitis:
Probabilistic Control Argumentation Frameworks. 519-527
- Rustam Galimullin, Thomas Ågotnes:
Quantified Announcements and Common Knowledge. 528-536
- Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart:
Partially Observable Mean Field Reinforcement Learning. 537-545
- Anis Gargouri, Sébastien Konieczny, Pierre Marquis, Srdjan Vesic:
On a Notion of Monotonic Support for Bipolar Argumentation Frameworks. 546-554
- Siddharth Gupta, Meirav Zehavi:
Multivariate Analysis of Scheduling Fair Competitions. 555-564
- Vaibhav Gupta, Daksh Anand, Praveen Paruchuri, Akshat Kumar:
Action Selection for Composable Modular Deep Reinforcement Learning. 565-573
- Lewis Hammond, James Fox, Tom Everitt, Alessandro Abate, Michael J. Wooldridge:
Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice. 574-582
- Lewis Hammond, Alessandro Abate, Julian Gutierrez, Michael J. Wooldridge:
Multi-Agent Reinforcement Learning with Temporal Logic Specifications. 583-592
- Paul Harrenstein, Grzegorz Lisowski, Ramanujan Sridharan, Paolo Turrini:
A Hotelling-Downs Framework for Party Nominees. 593-601
- Keyang He, Bikramjit Banerjee, Prashant Doshi:
Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards. 602-610
- Taoan Huang, Bistra Dilkina, Sven Koenig:
Learning Node-Selection Strategies in Bounded-Suboptimal Conflict-Based Search for Multi-Agent Path Finding. 611-619
- Léonard Hussenot, Robert Dadashi, Matthieu Geist, Olivier Pietquin:
Show Me the Way: Intrinsic Motivation from Demonstrations. 620-628
- Ercument Ilhan, Jeremy Gow, Diego Perez Liebana:
Action Advising with Advice Imitation in Deep Reinforcement Learning. 629-637
- Aviram Imber, Benny Kimelfeld:
Computing the Extremal Possible Ranks with Incomplete Preferences. 638-646
- Aviram Imber, Benny Kimelfeld:
Probabilistic Inference of Winners in Elections by Independent Random Voters. 647-655
- Katsuya Ito, Kentaro Minami, Kentaro Imajo, Kei Nakagawa:
Trader-Company Method: A Metaheuristics for Interpretable Stock Price Prediction. 656-664
- Pallavi Jain, Nimrod Talmon, Laurent Bulteau:
Partition Aggregation for Participatory Budgeting. 665-673
- Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel:
Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning. 674-682
- Venkateswara Rao Kagita, Arun K. Pujari, Vineet Padmanabhan, Haris Aziz, Vikas Kumar:
Committee Selection using Attribute Approvals. 683-691
- Takehiro Kawasaki, Ryoji Wada, Taiki Todo, Makoto Yokoo:
Mechanism Design for Housing Markets over Social Networks. 692-700
- Shakil M. Khan, Yves Lespérance:
Knowing Why - On the Dynamics of Knowledge about Actual Causes in the Situation Calculus. 701-709
- Jackson A. Killian, Andrew Perrault, Milind Tambe:
Beyond "To Act or Not to Act": Fast Lagrangian Approaches to General Multi-Action Restless Bandits. 710-718
- Tabajara Krausburg, Jürgen Dix, Rafael H. Bordini:
Feasible Coalition Sequences. 719-727
- Rajiv Ranjan Kumar, Pradeep Varakantham, Shih-Fen Cheng:
Adaptive Operating Hours for Improved Performance of Taxi Fleets. 728-736
- Martin Lackner, Jan Maly:
Approval-Based Shortlisting. 737-745
- Stefan Lauren, Francesco Belardinelli, Francesca Toni:
Aggregating Bipolar Opinions. 746-754
- Omer Lev, Neel Patel, Vignesh Viswanathan, Yair Zick:
The Price is (Probably) Right: Learning Market Equilibria from Samples. 755-763
- Sheng Li, Jayesh K. Gupta, Peter Morales, Ross E. Allen, Mykel J. Kochenderfer:
Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning. 764-772
- Wenhao Li, Xiangfeng Wang, Bo Jin, Junjie Sheng, Yun Hua, Hongyuan Zha:
Structured Diversification Emergence via Reinforced Organization Control and Hierarchical Consensus Learning. 773-781
- Yuyu Li, Jianmin Ji:
Parallel Curriculum Experience Replay in Distributed Reinforcement Learning. 782-789
- Yu Liang, Amulya Yadav:
Let the DOCTOR Decide Whom to Test: Adaptive Testing Strategies to Tackle the COVID-19 Pandemic. 790-798
- Enrico Liscio, Michiel van der Meer, Luciano Cavalcante Siebert, Catholijn M. Jonker, Niek Mouter, Pradeep K. Murukannaiah:
Axies: Identifying and Evaluating Context-Specific Values. 799-808
- Minghuan Liu, Tairan He, Minkai Xu, Weinan Zhang:
Energy-Based Imitation Learning. 809-817
- Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters:
Deceptive Reinforcement Learning for Privacy-Preserving Planning. 818-826
- Emiliano Lorini:
A Logic of Evaluation. 827-835
- Matteo Luperto, Luca Fochetta, Francesco Amigoni:
Exploration of Indoor Environments through Predicting the Layout of Partially Observed Rooms. 836-843
- Xueguang Lyu, Yuchen Xiao, Brett Daley, Christopher Amato:
Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning. 844-852
- Xiaoteng Ma, Yiqin Yang, Chenghao Li, Yiwen Lu, Qianchuan Zhao, Jun Yang:
Modeling the Interaction between Agents in Cooperative Multi-Agent Reinforcement Learning. 853-861
- Tejasvi Malladi, Karpagam Murugappan, Depak Sudarsanam, Ramasubramanian Suriyanarayanan, Arunchandar Vasan:
To hold or not to hold? - Reducing Passenger Missed Connections in Airlines using Reinforcement Learning. 862-870
- Peta Masters, Michael Kirley, Wally Smith:
Extended Goal Recognition: A Planning-Based Model for Strategic Deception. 871-879
- Aditya Mate, Andrew Perrault, Milind Tambe:
Risk-Aware Interventions in Public Health: Planning with Restless Multi-Armed Bandits. 880-888
- Giulio Mazzi, Alberto Castellini, Alessandro Farinelli:
Identification of Unexpected Decisions in Partially Observable Monte-Carlo Planning: A Rule-Based Approach. 889-897
- Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Francisco C. Santos:
Cooperation between Independent Reinforcement Learners under Wealth Inequality and Collective Risks. 898-906
- Nieves Montes, Carles Sierra:
Value-Guided Synthesis of Parametric Normative Systems. 907-915
- Francesca Mosca, Jose M. Such:
ELVIRA: An Explainable Agent for Value and Utility-Driven Multiuser Privacy. 916-924
- Muhammad Faizan, Vasanth Sarathy, Gyan Tatiya, Shivam Goel, Saurav Gyawali, Mateo Guaman Castro, Jivko Sinapov, Matthias Scheutz:
A Novelty-Centric Agent Architecture for Changing Worlds. 925-933
- Cyrus Neary, Zhe Xu, Bo Wu, Ufuk Topcu:
Reward Machines for Cooperative Multi-Agent Reinforcement Learning. 934-942
- Thomas Nedelec, Jules Baudet, Vianney Perchet, Noureddine El Karoui:
Adversarial Learning in Revenue-Maximizing Auctions. 955-963
- Yaru Niu, Rohan R. Paleja, Matthew C. Gombolay:
Multi-Agent Graph-Attention Communication and Teaming. 964-973
- Michael Noukhovitch, Travis LaCroix, Angeliki Lazaridou, Aaron C. Courville:
Emergent Communication under Competition. 974-982
- Caspar Oesterheld, Vincent Conitzer:
Safe Pareto Improvements for Delegated Game Playing. 983-991
- Han-Ching Ou, Haipeng Chen, Shahin Jabbari, Milind Tambe:
Active Screening for Recurrent Diseases: A Reinforcement Learning Approach. 992-1000
- Deval Patel, Arindam Khan, Anand Louis:
Group Fairness for Knapsack Problems. 1001-1009
- Manon Prédhumeau, Lyuba Mancheva, Julie Dugdale, Anne Spalanzani:
An Agent-Based Model to Predict Pedestrians Trajectories with an Autonomous Vehicle in Shared Spaces. 1010-1018
- Ben Rachmut, Roie Zivan, William Yeoh:
Latency-Aware Local Search for Distributed Constraint Optimization. 1019-1027
- Md. Musfiqur Rahman, Ayman Rasheed, Md. Mosaddek Khan, Mohammad Ali Javidian, Pooyan Jamshidi, Md. Mamun-Or-Rashid:
Accelerating Recursive Partition-Based Causal Structure Learning. 1028-1036
- Lokman Rahmani, David Minarsch, Jonathan Ward:
Peer-to-peer Autonomous Agent Communication Network. 1037-1045