Michael Backes 0001
Person information
- affiliation: CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
- affiliation: Saarland University, Computer Science Department, Saarbrücken, Germany
Other persons with the same name
- Michael Backes 0002 — University of Namibia, Department of Physics, Windhoek, Namibia (and 1 more)
- Michael Backes 0003 — Vestron, Los Angeles, CA, USA
2020 – today
- 2024
- [j33] Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. Proc. Priv. Enhancing Technol. 2024(4): 818-839 (2024)
- [c267] Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang: FAKEPCD: Fake Point Cloud Detection via Source Attribution. AsiaCCS 2024
- [c266] Ge Han, Ahmed Salem, Zheng Li, Shanqing Guo, Michael Backes, Yang Zhang: Detection and Attribution of Models Trained on Generated Data. ICASSP 2024: 4875-4879
- [c265] Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch: Memorization in Self-Supervised Learning Improves Downstream Generalization. ICLR 2024
- [c264] Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao: Position: TrustLLM: Trustworthiness in Large Language Models. ICML 2024
- [c263] Yukun Jiang, Xinyue Shen, Rui Wen, Zeyang Sha, Junjie Chu, Yugeng Liu, Michael Backes, Yang Zhang: Games and Beyond: Analyzing the Bullet Chats of Esports Livestreaming. ICWSM 2024: 761-773
- [c262] Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang: Composite Backdoor Attacks Against Large Language Models. NAACL-HLT (Findings) 2024: 1459-1472
- [c261] Alfusainey Jallow, Michael Schilling, Michael Backes, Sven Bugiel: Measuring the Effects of Stack Overflow Code Snippet Evolution on Open-Source Software Security. SP 2024: 1083-1101
- [c260] Xinyue Shen, Yiting Qu, Michael Backes, Yang Zhang: Prompt Stealing Attacks Against Text-to-Image Generation Models. USENIX Security Symposium 2024
- [c259] Yixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, Yang Zhang: Quantifying Privacy Risks of Prompts in Visual Prompt Learning. USENIX Security Symposium 2024
- [c258] Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang: SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models. USENIX Security Symposium 2024
- [c257] Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang: Instruction Backdoor Attacks Against Customized LLMs. USENIX Security Symposium 2024
- [c256] Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang: Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models. WACV 2024: 4827-4837
- [i171] Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yue Zhao: TrustLLM: Trustworthiness in Large Language Models. CoRR abs/2401.05561 (2024)
- [i170] Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch: Memorization in Self-Supervised Learning Improves Downstream Generalization. CoRR abs/2401.12233 (2024)
- [i169] Junjie Chu, Zeyang Sha, Michael Backes, Yang Zhang: Conversation Reconstruction Attack Against GPT Models. CoRR abs/2402.02987 (2024)
- [i168] Junjie Chu, Yugeng Liu, Ziqing Yang, Xinyue Shen, Michael Backes, Yang Zhang: Comprehensive Assessment of Jailbreak Attacks Against LLMs. CoRR abs/2402.05668 (2024)
- [i167] Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang: Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization. CoRR abs/2402.09179 (2024)
- [i166] Yiyong Liu, Rui Wen, Michael Backes, Yang Zhang: Efficient Data-Free Model Stealing with Label Diversity. CoRR abs/2404.00108 (2024)
- [i165] Yiting Qu, Xinyue Shen, Yixin Wu, Michael Backes, Savvas Zannettou, Yang Zhang: UnsafeBench: Benchmarking Image Safety Classifiers on Real-World and AI-Generated Images. CoRR abs/2405.03486 (2024)
- [i164] Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. CoRR abs/2405.05784 (2024)
- [i163] Xaver Fabian, Marco Patrignani, Marco Guarnieri, Michael Backes: Do You Even Lift? Strengthening Compiler Security Guarantees Against Spectre Attacks. CoRR abs/2405.10089 (2024)
- [i162] Xinyue Shen, Yixin Wu, Michael Backes, Yang Zhang: Voice Jailbreak Attacks Against GPT-4o. CoRR abs/2405.19103 (2024)
- [i161] Ziqing Yang, Michael Backes, Yang Zhang, Ahmed Salem: SOS! Soft Prompt Attack Against Open-Source Large Language Models. CoRR abs/2407.03160 (2024)
- [i160] Wai Man Si, Michael Backes, Yang Zhang: ICLGuard: Controlling In-Context Learning Behavior for Applicability Authorization. CoRR abs/2407.06955 (2024)
- [i159] Boyang Zhang, Yicong Tan, Yun Shen, Ahmed Salem, Michael Backes, Savvas Zannettou, Yang Zhang: Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification. CoRR abs/2407.20859 (2024)
- [i158] Minxing Zhang, Ahmed Salem, Michael Backes, Yang Zhang: Vera Verto: Multimodal Hijacking Attack. CoRR abs/2408.00129 (2024)
- [i157] Yuan Xin, Zheng Li, Ning Yu, Dingfan Chen, Mario Fritz, Michael Backes, Yang Zhang: Inside the Black Box: Detecting Data Leakage in Pre-trained Language Encoders. CoRR abs/2408.11046 (2024)
- [i156] Yixin Wu, Yun Shen, Michael Backes, Yang Zhang: Image-Perfect Imperfections: Safety, Bias, and Authenticity in the Shadow of Text-To-Image Model Evolution. CoRR abs/2408.17285 (2024)
- [i155] Rui Wen, Zheng Li, Michael Backes, Yang Zhang: Membership Inference Attacks Against In-Context Learning. CoRR abs/2409.01380 (2024)
- [i154] Rui Wen, Michael Backes, Yang Zhang: Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm? CoRR abs/2409.03741 (2024)
- 2023
- [j32] Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Álvarez: Adversarial vulnerability bounds for Gaussian process classification. Mach. Learn. 112(3): 971-1009 (2023)
- [j31] Giorgio Di Tizio, Patrick Speicher, Milivoj Simeonovski, Michael Backes, Ben Stock, Robert Künnemann: Pareto-optimal Defenses for the Web Infrastructure: Theory and Practice. ACM Trans. Priv. Secur. 26(2): 18:1-18:36 (2023)
- [c255] Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, Yang Zhang: Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models. CCS 2023: 3403-3417
- [c254] Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang: Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. CVPR 2023: 16373-16383
- [c253] Rui Wen, Zhengyu Zhao, Zhuoran Liu, Michael Backes, Tianhao Wang, Yang Zhang: Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning? ICLR 2023
- [c252] Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang: Generated Graph Detection. ICML 2023: 23412-23428
- [c251] Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang: Data Poisoning Attacks Against Multimodal Encoders. ICML 2023: 39299-39313
- [c250] Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang: Backdoor Attacks Against Dataset Distillation. NDSS 2023
- [c249] Sanam Ghorbani Lyastani, Michael Backes, Sven Bugiel: A Systematic Study of the Consistency of Two-Factor Authentication User Journeys on Top-Ranked Websites. NDSS 2023
- [c248] Hamed Rasifard, Rahul Gopinath, Michael Backes, Hamed Nemati: SEAL: Capability-Based Access Control for Data-Analytic Scenarios. SACMAT 2023: 67-78
- [c247] Yiting Qu, Xinlei He, Shannon Pierson, Michael Backes, Yang Zhang, Savvas Zannettou: On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning. SP 2023: 293-310
- [c246] Haiming Wang, Zhikun Zhang, Tianhao Wang, Shibo He, Michael Backes, Jiming Chen, Yang Zhang: PrivTrace: Differentially Private Trajectory Synthesis by Adaptive Markov Models. USENIX Security Symposium 2023: 1649-1666
- [c245] Wai Man Si, Michael Backes, Yang Zhang, Ahmed Salem: Two-in-One: A Model Hijacking Attack Against Text Generation Models. USENIX Security Symposium 2023: 2223-2240
- [c244] Cristian-Alexandru Staicu, Sazzadur Rahaman, Ágnes Kiss, Michael Backes: Bilingual Problems: Studying the Security Risks Incurred by Native Extensions in Scripting Languages. USENIX Security Symposium 2023: 6133-6150
- [c243] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Yang Zhang: FACE-AUDITOR: Data Auditing in Facial Recognition Systems. USENIX Security Symposium 2023: 7195-7212
- [c242] Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, Yang Zhang: UnGANable: Defending Against GAN-based Face Manipulation. USENIX Security Symposium 2023: 7213-7230
- [i153] Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang: Backdoor Attacks Against Dataset Distillation. CoRR abs/2301.01197 (2023)
- [i152] Xinyue Shen, Yiting Qu, Michael Backes, Yang Zhang: Prompt Stealing Attacks Against Text-to-Image Generation Models. CoRR abs/2302.09923 (2023)
- [i151] Ziqing Yang, Zeyang Sha, Michael Backes, Yang Zhang: From Visual Prompt Learning to Zero-Shot Transfer: Mapping Is All You Need. CoRR abs/2303.05266 (2023)
- [i150] Xinlei He, Xinyue Shen, Zeyuan Chen, Michael Backes, Yang Zhang: MGTBench: Benchmarking Machine-Generated Text Detection. CoRR abs/2303.14822 (2023)
- [i149] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Yang Zhang: FACE-AUDITOR: Data Auditing in Facial Recognition Systems. CoRR abs/2304.02782 (2023)
- [i148] Xinyue Shen, Zeyuan Chen, Michael Backes, Yang Zhang: In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT. CoRR abs/2304.08979 (2023)
- [i147] Wai Man Si, Michael Backes, Yang Zhang, Ahmed Salem: Two-in-One: A Model Hijacking Attack Against Text Generation Models. CoRR abs/2305.07406 (2023)
- [i146] Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang: Watermarking Diffusion Model. CoRR abs/2305.12502 (2023)
- [i145] Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, Yang Zhang: Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models. CoRR abs/2305.13873 (2023)
- [i144] Peihua Ma, Yixin Wu, Ning Yu, Yang Zhang, Michael Backes, Qin Wang, Cheng-I Wei: Vision-language models boost food composition compilation. CoRR abs/2306.01747 (2023)
- [i143] Yihan Ma, Zhengyu Zhao, Xinlei He, Zheng Li, Michael Backes, Yang Zhang: Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis. CoRR abs/2306.07754 (2023)
- [i142] Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang: Generated Graph Detection. CoRR abs/2306.07758 (2023)
- [i141] Matthis Kruse, Michael Backes, Marco Patrignani: Secure Composition of Robust and Optimising Compilers. CoRR abs/2307.08681 (2023)
- [i140] Wai Man Si, Michael Backes, Yang Zhang: Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing. CoRR abs/2308.03558 (2023)
- [i139] Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang: "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. CoRR abs/2308.03825 (2023)
- [i138] Bartlomiej Surma, Tahleen A. Rahman, Monique M. B. Breteler, Michael Backes, Yang Zhang: You Are How You Walk: Quantifying Privacy Risks in Step Count Data. CoRR abs/2308.04933 (2023)
- [i137] Yugeng Liu, Tianshuo Cong, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang: Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models. CoRR abs/2308.07847 (2023)
- [i136] Minxing Zhang, Michael Backes, Xiao Zhang: Generating Less Certain Adversarial Examples Improves Robust Generalization. CoRR abs/2310.04539 (2023)
- [i135] Yiyong Liu, Michael Backes, Xiao Zhang: Transferable Availability Poisoning Attacks. CoRR abs/2310.05141 (2023)
- [i134] Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang: Prompt Backdoors in Visual Prompt Learning. CoRR abs/2310.07632 (2023)
- [i133] Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang: Composite Backdoor Attacks Against Large Language Models. CoRR abs/2310.07676 (2023)
- [i132] Yuan Xin, Michael Backes, Xiao Zhang: Provably Robust Cost-Sensitive Learning via Randomized Smoothing. CoRR abs/2310.08732 (2023)
- [i131] Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem: Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning. CoRR abs/2310.11397 (2023)
- [i130] Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes, Qi Li, Chao Shen: Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights. CoRR abs/2310.11850 (2023)
- [i129] Yixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, Yang Zhang: Quantifying Privacy Risks of Prompts in Visual Prompt Learning. CoRR abs/2310.11970 (2023)
- [i128] Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang: SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models. CoRR abs/2310.12665 (2023)
- [i127] Yixin Wu, Ning Yu, Michael Backes, Yun Shen, Yang Zhang: On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts. CoRR abs/2310.16613 (2023)
- [i126] Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang: Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models. CoRR abs/2310.19410 (2023)
- [i125] Boyang Zhang, Xinyue Shen, Wai Man Si, Zeyang Sha, Zeyuan Chen, Ahmed Salem, Yun Shen, Michael Backes, Yang Zhang: Comprehensive Assessment of Toxicity in ChatGPT. CoRR abs/2311.14685 (2023)
- [i124] Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang: FAKEPCD: Fake Point Cloud Detection via Source Attribution. CoRR abs/2312.11213 (2023)
- 2022
- [j30] Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian M. Molloy: Backdoor smoothing: Demystifying backdoor attacks on deep neural networks. Comput. Secur. 120: 102814 (2022)
- [c241] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang: Graph Unlearning. CCS 2022: 499-513
- [c240] Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang: On the Privacy Risks of Cell-Based NAS Architectures. CCS 2022: 1427-1441
- [c239] Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang: Auditing Membership Leakages of Multi-Exit Networks. CCS 2022: 1917-1931
- [c238] Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang: Membership Inference Attacks by Exploiting Loss Trajectory. CCS 2022: 2085-2098
- [c237] Trung Tin Nguyen, Michael Backes, Ben Stock: Freely Given Consent?: Studying Consent Notice of Third-Party Tracking and Its Violations of GDPR in Android Apps. CCS 2022: 2369-2383
- [c236] Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini: Finding MNEMON: Reviving Memories of Node Embeddings. CCS 2022: 2643-2657
- [c235] Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, Yang Zhang: Why So Toxic?: Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. CCS 2022: 2659-2673
- [c234] Michael Backes, Pascal Berrang, Lucjan Hanzlik, Ivan Pryvalov: A Framework for Constructing Single Secret Leader Election from MPC. ESORICS (2) 2022: 672-691
- [c233] Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang: Dynamic Backdoor Attacks Against Machine Learning Models. EuroS&P 2022: 703-718
- [c232] Xinyue Shen, Xinlei He, Michael Backes, Jeremy Blackburn, Savvas Zannettou, Yang Zhang: On Xing Tian and the Perseverance of Anti-China Sentiment Online. ICWSM 2022: 944-955
- [c231] Ahmed Salem, Michael Backes, Yang Zhang: Get a Model! Model Hijacking Attack Against Machine Learning Models. NDSS 2022
- [c230] Lukas Bieringer, Kathrin Grosse, Michael Backes, Battista Biggio, Katharina Krombholz: Industrial practitioners' mental models of adversarial machine learning. SOUPS @ USENIX Security Symposium 2022: 97-116
- [c229] Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang: ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. USENIX Security Symposium 2022: 4525-4542
- [c228] Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang: Inference Attacks Against Graph Neural Networks. USENIX Security Symposium 2022: 4543-4560
- [i123] Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang: Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. CoRR abs/2201.07513 (2022)
- [i122] Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini: Finding MNEMON: Reviving Memories of Node Embeddings. CoRR abs/2204.06963 (2022)
- [i121] Xinyue Shen, Xinlei He, Michael Backes, Jeremy Blackburn, Savvas Zannettou, Yang Zhang: On Xing Tian and the Perseverance of Anti-China Sentiment Online. CoRR abs/2204.08935 (2022)
- [i120] Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang: Auditing Membership Leakages of Multi-Exit Networks. CoRR abs/2208.11180 (2022)
- [i119] Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang: Membership Inference Attacks by Exploiting Loss Trajectory. CoRR abs/2208.14933 (2022)
- [i118] Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang: On the Privacy Risks of Cell-Based NAS Architectures. CoRR abs/2209.01688 (2022)
- [i117] Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, Yang Zhang: Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. CoRR abs/2209.03463 (2022)
- [i116] Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang: Data Poisoning Attacks Against Multimodal Encoders. CoRR abs/2209.15266 (2022)
- [i115] Haiming Wang, Zhikun Zhang, Tianhao Wang, Shibo He, Michael Backes, Jiming Chen, Yang Zhang: PrivTrace: Differentially Private Trajectory Synthesis by Adaptive Markov Model. CoRR abs/2210.00581 (2022)
- [i114] Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, Yang Zhang: UnGANable: Defending Against GAN-based Face Manipulation. CoRR abs/2210.00957 (2022)
- [i113] Yixin Wu, Ning Yu, Zheng Li, Michael Backes, Yang Zhang: Membership Inference Attacks Against Text-to-image Generation Models. CoRR abs/2210.00968 (2022)
- [i112] Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, Yang Zhang: Backdoor Attacks in the Supply Chain of Masked Image Modeling. CoRR abs/2210.01632 (2022)
- [i111] Sanam Ghorbani Lyastani, Michael Backes, Sven Bugiel: A Systematic Study of the Consistency of Two-Factor Authentication User Journeys on Top-Ranked Websites (Extended Version). CoRR abs/2210.09373 (2022)
- [i110] Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes: Towards Good Practices in Evaluating Transfer Adversarial Attacks. CoRR abs/2211.09565 (2022)
- [i109] Yiting Qu, Xinlei He, Shannon Pierson, Michael Backes, Yang Zhang, Savvas Zannettou: On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning. CoRR abs/2212.06573 (2022)
- [i108] Michael Backes, Pascal Berrang, Lucjan Hanzlik, Ivan Pryvalov: A framework for constructing Single Secret Leader Election from MPC. IACR Cryptol. ePrint Arch. 2022: 1040 (2022)
- 2021
- [c227] Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang: BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. ACSAC 2021: 554-569
- [c226]