Safety Alignment in NLP Tasks:
Weakly Aligned Summarization as an In-Context Attack

University of California, Riverside*, Microsoft#
Description of Image

Abstract

Recent developments in balancing the usefulness and safety of Large Language Models (LLMs) have raised a critical question: Are mainstream NLP tasks adequately aligned with safety considerations? Our study, focusing on safety-sensitive documents obtained through adversarial attacks, reveals significant disparities in the safety alignment of various NLP tasks. For instance, LLMs can effectively summarize malicious long documents but often refuse to translate them. This discrepancy highlights a previously unidentified vulnerability: attacks exploiting tasks with weaker safety alignment, like summarization, can potentially compromise the integrity of tasks traditionally deemed more robust, such as translation and question-answering (QA). Moreover, the concurrent use of multiple NLP tasks with lesser safety alignment increases the risk of LLMs inadvertently processing harmful content. We demonstrate these vulnerabilities in various safety-aligned LLMs, particularly Llama2 models and GPT-4, indicating an urgent need for strengthening safety alignments across a broad spectrum of NLP tasks.

Method Overview

Safety Alignment NLP

Our Contributions are:

  1. NLP Tasks Have Different Levels of Safety Alignment: We designed a novel setup using NLP task prompts and safety-sensitive documents, creating a dataset of 6,985 articles (average length of 1520 tokens) from adversarial attacks, to test whether different NLP tasks have varying levels of safety alignment. We found that tasks like summarization have notably lower safety alignment compared to translation or QA tasks.
  2. Weakly Aligned NLP Tasks as In-Context Attacks: The varying safety alignments among NLP tasks present a vulnerability. We discovered that performing weakly aligned NLP task first increases the likelihood of LLMs processing safety-sensitive documents for other tasks. This effect is further amplified when combining multiple weakly-aligned tasks.
  3. Vulnerability Cause Investigation: Our experiments indicate that safety alignment discrepancies in NLP tasks stem from an imbalanced trade-off between the usefulness from instruction tuning and the safety of alignment. Our ablation study reveals that summarization attacks are more frequently blocked on shorter documents than longer ones, possibly due to a prevalence of shorter documents in safety alignment. These findings are crucial for enhancing safety alignment research and building stronger defenses.

Task Process Rate Result on Safety-Sentitive Documents

Safety Alignment NLP

Main results: NLP tasks have different levels of safety alignment on safety-sensitive (SS) documents. These safety-sensitive documents are obtained by adversarial attacks on LLMs based on malicious prompt from AdvBench.

In-Context Attack with Summarization on GPT4

In this example, GPT-4 initially refuses to translate the SS document concerning abuse. However, after processing a summarization first, it changes its mind and becomes willing to translate the document. In this case, the summarization task serves as an in-context attack, weakening the safety alignment of the translation task.

-->

In-Context Attack with Summarization on Llama2

BibTeX

@misc{fu2023safety,
        title={Safety Alignment in NLP Tasks: Weakly Aligned Summarization as an In-Context Attack}, 
        author={Yu Fu and Yufei Li and Wen Xiao and Cong Liu and Yue Dong},
        year={2023},
        eprint={2312.06924},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
  }