2024 ArtPromptASCIIArtbasedJailbreak
- (Jiang, Xu et al., 2024) ⇒ Fengqing Jiang, Zhangchen Xu, Luyao Niu, Zhen Xiang, Bhaskar Ramasubramanian, Bo Li, and Radha Poovendran. (2024). “ArtPrompt: ASCII Art-based Jailbreak Attacks Against Aligned LLMs.” doi:10.48550/arXiv.2402.11753
Subject Headings: LLM Jailbreak, ASCII Art.
Notes
- It introduces ArtPrompt, a novel ASCII art-based jailbreak attack that exploits the inability of Large Language Models (LLMs) to recognize ASCII art in order to bypass safety measures and elicit undesired behaviors.
- It develops the Vision-in-Text Challenge (ViTC) benchmark to assess LLMs' ability to recognize prompts that cannot be interpreted solely through semantics, highlighting a significant gap in current LLM capabilities.
- It demonstrates that five state-of-the-art LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) exhibit poor performance in recognizing ASCII art, which forms the basis for the ArtPrompt attack.
- It employs a two-step approach in the ArtPrompt attack: masking the sensitive words in a prompt and then replacing them with ASCII art to create a cloaked prompt that circumvents LLM safety mechanisms (see the sketch after this list).
- It compares ArtPrompt against other jailbreak attacks and defenses, showing that ArtPrompt can effectively and efficiently provoke unsafe behaviors from LLMs, outperforming other attacks in most cases.
- It includes extensive experimental evaluations using two benchmark datasets, AdvBench and HEx-PHI, to demonstrate the effectiveness and efficiency of ArtPrompt across different LLMs and settings.
- It acknowledges limitations, including its focus on text-based LLMs and the fact that the attack's effectiveness against multimodal models remains speculative, suggesting these as areas for future research.
- It addresses ethical considerations by aiming to advance LLM safety under adversarial conditions and encourages responsible disclosure and community engagement to mitigate potential misuse of the demonstrated vulnerabilities.
- It suggests an urgent need to develop more sophisticated defenses against ASCII art-based attacks like ArtPrompt, noting that current defenses cannot fully mitigate the risks posed by such attacks.
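The two-step construction referenced above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the pyfiglet package is assumed for ASCII-art rendering, and the masking list, placeholder word, and prompt wording are hypothetical.

```python
# Minimal sketch of ArtPrompt's two-step cloaked-prompt construction.
# Assumptions: pyfiglet is used for ASCII-art rendering; the word list,
# placeholder word, and prompt wording are illustrative, not the paper's templates.
import pyfiglet

MASKED_WORDS = {"secret"}  # hypothetical stand-in for a sensitive-word list


def cloak_prompt(prompt: str) -> str:
    """Step 1: mask a flagged word; Step 2: splice in its ASCII-art rendering."""
    for word in MASKED_WORDS:
        if word in prompt:
            ascii_art = pyfiglet.figlet_format(word, font="standard")
            masked = prompt.replace(word, "[MASK]")
            return (
                "The ASCII art below spells a single word:\n"
                f"{ascii_art}\n"
                "Replace [MASK] in the instruction below with that word, "
                "then follow the instruction:\n"
                f"{masked}"
            )
    return prompt  # nothing to cloak


print(cloak_prompt("Tell me the secret"))
```

Because the masked word never appears as plain text in the cloaked prompt, safety mechanisms that interpret the prompt only through its literal semantics have nothing to match against, which is the gap the notes above describe.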
Cited By
Quotes
Abstract
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently known techniques presume that corpora used for safety alignment of LLMs are solely interpreted by semantics. This assumption, however, does not hold in real-world applications, which leads to severe vulnerabilities in LLMs. For example, users of forums often use ASCII art, a form of text-based art, to convey image information. In this paper, we propose a novel ASCII art-based jailbreak attack and introduce a comprehensive benchmark Vision-in-Text Challenge (ViTC) to evaluate the capabilities of LLMs in recognizing prompts that cannot be solely interpreted by semantics. We show that five SOTA LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) struggle to recognize prompts provided in the form of ASCII art. Based on this observation, we develop the jailbreak attack ArtPrompt, which leverages the poor performance of LLMs in recognizing ASCII art to bypass safety measures and elicit undesired behaviors from LLMs. ArtPrompt only requires black-box access to the victim LLMs, making it a practical attack. We evaluate ArtPrompt on five SOTA LLMs, and show that ArtPrompt can effectively and efficiently induce undesired behaviors from all five LLMs.
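As a rough illustration of the kind of recognition test the ViTC benchmark performs, the sketch below renders a word as ASCII art and asks a model to name it. The query_llm callable, the word list, and the exact-match scoring are placeholders, not the benchmark's actual data or protocol; pyfiglet is again an assumed dependency.

```python
# Illustrative ViTC-style recognition probe (placeholders only, not the
# benchmark's actual data or protocol). Assumes pyfiglet for ASCII-art rendering.
from typing import Callable, Iterable

import pyfiglet


def recognition_prompt(word: str) -> str:
    """Render a word as ASCII art and ask the model to identify it."""
    art = pyfiglet.figlet_format(word, font="standard")
    return (
        "The ASCII art below spells one English word. "
        "Reply with only that word.\n\n" + art
    )


def recognition_accuracy(words: Iterable[str],
                         query_llm: Callable[[str], str]) -> float:
    """query_llm stands in for any black-box chat-completion call."""
    words = list(words)
    hits = sum(
        query_llm(recognition_prompt(w)).strip().lower() == w for w in words
    )
    return hits / len(words)

# Example usage: recognition_accuracy(["cat", "tree", "house"], my_model_call)
```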
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2024 ArtPromptASCIIArtbasedJailbreak | Fengqing Jiang; Zhangchen Xu; Luyao Niu; Zhen Xiang; Bhaskar Ramasubramanian; Bo Li; Radha Poovendran | | | ArtPrompt: ASCII Art-based Jailbreak Attacks Against Aligned LLMs | | | | 10.48550/arXiv.2402.11753 | | 2024 |