"PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models."

Hongwei Yao, Jian Lou, Zhan Qin (2023)


DOI: 10.48550/ARXIV.2310.12439

access: open

type: Informal or Other Publication

metadata version: 2023-10-27
