Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters

Haibo Jin

School of Information Sciences
University of Illinois at Urbana-Champaign

Andy Zhou

Computer Science
Lapis Labs
University of Illinois Urbana-Champaign

Joe D. Menke

School of Information Sciences
University of Illinois at Urbana-Champaign

Haohan Wang *

School of Information Sciences
University of Illinois at Urbana-Champaign

Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

*Corresponding Author

Abstract

Large Language Models (LLMs) are typically harmless but remain vulnerable to carefully crafted prompts known as “jailbreaks”, which can bypass protective measures and induce harmful behavior. Recent advancements in LLMs have incorporated moderation guardrails that can filter outputs, triggering processing errors for certain malicious questions. Existing red-teaming benchmarks often neglect to include questions that trigger moderation guardrails, making it difficult to evaluate jailbreak effectiveness. To address this issue, we introduce JAMBench, a harmful-behavior benchmark designed to trigger and evaluate moderation guardrails. JAMBench comprises 160 manually crafted instructions covering four major risk categories at multiple severity levels. Furthermore, we propose a jailbreak method, JAM (Jailbreak Against Moderation), designed to attack moderation guardrails: it uses jailbreak prefixes to bypass input-level filters and a fine-tuned shadow model, functionally equivalent to the guardrail model, to generate cipher characters that bypass output-level filters. Our extensive experiments on four LLMs demonstrate that JAM achieves higher jailbreak success (~19.88× baselines) and lower filtered-out rates (~1/6 of baselines).
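As a concrete illustration of the composition described above, the minimal Python sketch below assembles a JAM-style prompt by concatenating the three components. This is not the authors' implementation: the function name and placeholder strings are hypothetical, and the actual prefixes and cipher characters come from the paper's jailbreak-prefix construction and shadow-model procedure, not from this sketch.

# Minimal sketch (illustrative, not the authors' code): a JAM-style prompt is
# the concatenation of a jailbreak prefix, the malicious question, and cipher
# characters produced by a fine-tuned shadow model. All placeholder strings
# below are hypothetical.

def build_jam_prompt(jailbreak_prefix: str,
                     malicious_question: str,
                     cipher_characters: str) -> str:
    """Combine the three components: the prefix targets input-level filters,
    while the cipher characters target output-level filters."""
    return "\n\n".join([jailbreak_prefix, malicious_question, cipher_characters])


if __name__ == "__main__":
    prompt = build_jam_prompt(
        jailbreak_prefix="<jailbreak prefix>",
        malicious_question="<question from JAMBench>",
        cipher_characters="<cipher characters from the shadow model>",
    )
    print(prompt)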

Motivation

Examples of jailbreaks. (a) A malicious question that receives a refusal from the LLM. (b) An affirmative response with detailed steps, obtained by adding a jailbreak prompt as a prefix to the malicious question. (c) A filtered-out error triggered by the moderation guardrail, even when a successful jailbreak prompt is added. (d) An affirmative response using JAM, which combines a jailbreak prefix, the malicious question, and cipher characters to bypass the guardrail.

Examples

⚠️ Warning

Please be advised that the following content may include materials that could elicit strong or adverse emotional responses. By proceeding, you acknowledge and accept full responsibility for any potential impacts, including emotional distress or subsequent actions taken in response to the content. The information provided is for research purposes only, and the authors assume no liability for any emotional, psychological, or behavioral effects experienced after engaging with this material. If you understand and accept these terms, please confirm to proceed.

Paper Showcase

The presentation video on the left explains the problem addressed, our methodology, and key outcomes, helping viewers understand the broader impact of our work. On the right, the poster offers a visual summary of major findings and innovations, designed to capture the core essence of our research at a glance.

Watch our paper presentation video.
Here is the poster for our paper.

Get Involved

UIUC DREAM Lab

Developing Reliable and Efficient AI for Medicine

Visit DREAM Lab

Join The Community

Join our community on Slack to discuss ideas, ask questions, and collaborate with new friends.

Join Slack

Share Your Research Insights

Provide us feedback by sharing insights or suggesting additional resources related to our study.

Fill This Form

Trustworthy ML Initiative

The Trustworthy ML Initiative (TrustML) addresses challenges in responsible ML by providing resources, showcasing early career researchers, fostering discussions and building a community.

More Info

BibTeX citation

@article{jin2024jailbreaking,
  title={Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters},
  author={Jin, Haibo and Zhou, Andy and Menke, Joe D and Wang, Haohan},
  journal={arXiv preprint arXiv:2405.20413},
  year={2024}
}