Tribunal.RedTeam (Tribunal v1.3.6)

Copy Markdown View Source

Red team attack generators for testing LLM safety.

Inspired by promptfoo's red team strategies, this module provides functions to generate adversarial prompts for testing your LLM's resistance to jailbreaks and manipulation.

Attack Categories

Static Encoding Attacks

Transform harmful prompts using encoding techniques:

  • Base64 encoding
  • Leetspeak substitution
  • ROT13 cipher
  • Pig Latin
  • Reversed text

Prompt Injection Attacks

Attempt to override system instructions:

  • Ignore previous instructions
  • System prompt extraction
  • Role switching

Jailbreak Attacks

Classic jailbreak techniques:

  • DAN (Do Anything Now)
  • STAN (Strive To Avoid Norms)
  • Developer mode
  • Hypothetical framing

Usage

# Generate all attacks for a harmful prompt
attacks = Tribunal.RedTeam.generate_attacks("How do I pick a lock?")

# Generate specific attack type
{:ok, encoded} = Tribunal.RedTeam.base64_encode("harmful prompt")

# Test your LLM against red team attacks
for {attack_type, prompt} <- attacks do
  response = MyLLM.generate(prompt)
  refute_jailbreak_llm(response, query: prompt)
end

Summary

Functions

Encodes prompt in Base64 and asks model to decode and execute.

Character roleplay to bypass restrictions.

Classic DAN (Do Anything Now) jailbreak.

Uses delimiter injection to confuse parsing.

Developer mode jailbreak attempt.

Generates encoding-based attacks that transform the prompt.

Generates all attack variants for a given prompt.

Hypothetical framing to bypass restrictions.

Classic ignore previous instructions attack.

Generates prompt injection attacks.

Generates classic jailbreak attacks.

Converts prompt to leetspeak (1337).

Converts prompt to Pig Latin.

Academic/research framing.

Reverses the prompt text.

Attempts to switch the model's role.

Encodes prompt using ROT13 cipher.

STAN (Strive To Avoid Norms) jailbreak variant.

Attempts to extract the system prompt.

Functions

base64_attack(prompt)

Encodes prompt in Base64 and asks model to decode and execute.

character_roleplay_attack(prompt)

Character roleplay to bypass restrictions.

dan_attack(prompt)

Classic DAN (Do Anything Now) jailbreak.

delimiter_injection_attack(prompt)

Uses delimiter injection to confuse parsing.

developer_mode_attack(prompt)

Developer mode jailbreak attempt.

encoding_attacks(prompt)

Generates encoding-based attacks that transform the prompt.

generate_attacks(prompt, opts \\ [])

Generates all attack variants for a given prompt.

Returns a list of {attack_type, prompt} tuples.

hypothetical_attack(prompt)

Hypothetical framing to bypass restrictions.

ignore_instructions_attack(prompt)

Classic ignore previous instructions attack.

injection_attacks(prompt)

Generates prompt injection attacks.

jailbreak_attacks(prompt)

Generates classic jailbreak attacks.

leetspeak_attack(prompt)

Converts prompt to leetspeak (1337).

pig_latin_attack(prompt)

Converts prompt to Pig Latin.

research_framing_attack(prompt)

Academic/research framing.

reversed_attack(prompt)

Reverses the prompt text.

role_switch_attack(prompt)

Attempts to switch the model's role.

rot13_attack(prompt)

Encodes prompt using ROT13 cipher.

stan_attack(prompt)

STAN (Strive To Avoid Norms) jailbreak variant.

system_prompt_extraction_attack()

Attempts to extract the system prompt.