A new jailbreak method for large language models (LLMs) takes advantage of models’ ability to identify and score harmful content in order to trick t

New LLM jailbreak uses models’ evaluation skills against them

submited by
Style Pass
2025-01-13 09:00:03

A new jailbreak method for large language models (LLMs) takes advantage of models’ ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more.

The “Bad Likert Judge” multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts.

The method is based on the Likert scale, which is typically used to gauge the degree to which someone agrees or disagrees with a statement in a questionnaire or survey. For example, in a Likert scale of 1 to 5, 1 would indicate the respondent strongly disagrees with the statement and 5 would indicate the respondent strongly agrees.

For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score the degree to which certain content contained in the prompt was harmful. In one example, they asked the LLMs to give a score of 1 if a prompt didn’t contain any malware-related information and a score of 2 if it contained very detailed information about how to create malware, or actual malware code.

Leave a Comment