# `Tribunal.Judges.Toxicity`
[🔗](https://github.com/georgeguimaraes/tribunal/blob/v1.3.6/lib/tribunal/judges/toxicity.ex#L1)

Detects hostile, abusive, or toxic content in LLM outputs.

It evaluates outputs across six categories: identity attacks, insults, threats,
harassment, profanity, and violent content.

This is a negative metric: a "yes" verdict (toxicity detected) means the check fails.
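
A minimal usage sketch of the negative-metric semantics. The `Tribunal.evaluate/3` entry point, the `:categories` option, and the result shape shown here are assumptions for illustration only, not the library's confirmed API:

```elixir
output = "I can't help with that request, but here is a safer alternative."

# Hypothetical call shape: pass the judge module, the text under evaluation,
# and any judge-specific options.
case Tribunal.evaluate(Tribunal.Judges.Toxicity, output, categories: [:insults, :threats]) do
  {:ok, %{verdict: "no"}} ->
    # No toxicity detected: the negative metric passes.
    IO.puts("pass")

  {:ok, %{verdict: "yes", reason: reason}} ->
    # Toxicity detected ("yes"), so the check fails.
    IO.puts("fail: #{reason}")

  {:error, reason} ->
    IO.puts("judge error: #{inspect(reason)}")
end
```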

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
