AISafety@IJCAI 2021: Online Event
- Huáscar Espinoza, John A. McDermid, Xiaowei Huang, Mauricio Castillo-Effen, Xin Cynthia Chen, José Hernández-Orallo, Seán Ó hÉigeartaigh, Richard Mallah, Gabriel Pedroza:
Proceedings of the Workshop on Artificial Intelligence Safety 2021 co-located with the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), Virtual, August, 2021. CEUR Workshop Proceedings 2916, CEUR-WS.org 2021
Session 1: Trustworthiness of Knowledge-Based AI
- Vahid Yazdanpanah, Sebastian Stein, Enrico H. Gerding, Nicholas R. Jennings:
Applying Strategic Reasoning for Accountability Ascription in Multiagent Teams.
- William J. Howe, Roman V. Yampolskiy:
Impossibility of Unambiguous Communication as a Source of Failure in AI Systems.
Session 2: Robustness of Machine Learning Approaches
- Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, Xiaowei Huang:
Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles.
- Romie Banerjee, Feng Liu, Pei Ke:
Towards Robust Perception Using Topological Invariants.
- Lena Heidemann, Adrian Schwaiger, Karsten Roscher:
Measuring Ensemble Diversity and Its Effects on Model Robustness.
Session 3: Perception and Adversarial Attacks
- Shashank Kotyan, Danilo Vasconcellos Vargas:
Deep Neural Network Loses Attention to Adversarial Images.
- Kavya Gupta, Jean-Christophe Pesquet, Béatrice Pesquet-Popescu, Fateh Kaakai, Fragkiskos D. Malliaros:
An Adversarial Attacker for Neural Networks in Regression Problems.
- Suruchi Gupta, Ihsan Ullah, Michael Madden:
Coyote: A Dataset of Challenging Scenarios in Visual Perception for Autonomous Vehicles.
Session 4: Qualification / Certification of AI-Based Systems
- Florian Geissler, Syed Sha Qutub, Sayanta Roychowdhury, Ali Asgari Khoshouyeh, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman, Michael Paulitsch:
Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision.
- Christophe Gabreau, Béatrice Pesquet-Popescu, Fateh Kaakai, Baptiste Lefèvre:
Artificial Intelligence for Future Skies: On-going Standardization Activities to Build the Next Certification/Approval Framework for Airborne and Ground Aeronautic Products.
- Michael Klaes, Rasmus Adler, Lisa Jöckel, Janek Groß, Jan Reich:
Using Complementary Risk Acceptance Criteria to Structure Assurance Cases for Safety-Critical AI Components.
Poster Papers
- Roman Yampolskiy:
Uncontrollability of Artificial Intelligence.
- Tom Haider, Felippe Schmoeller Roza, Dirk Eilers, Karsten Roscher, Stephan Günnemann:
Domain Shifts in Reinforcement Learning: Identifying Disturbances in Environments.
- James D. Miller, Roman Yampolskiy, Olle Häggström, Stuart Armstrong:
Chess as a Testing Grounds for the Oracle Approach to AI Safety.
- Ayan Banerjee, Imane Lamrani, Katina Michael, Diana M. Bowman, Sandeep K. S. Gupta:
Socio-technical co-Design for Accountable Autonomous Software.
- Nadisha-Marie Aliman, Leon Kester:
Epistemic Defenses against Scientific and Empirical Adversarial AI Attacks.
- Roman Yampolskiy:
On the Differences between Human and Machine Intelligence.
- Christopher Lazarus, Mykel J. Kochenderfer:
A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks.