# `Tribunal.Judges.Harmful`
[🔗](https://github.com/georgeguimaraes/tribunal/blob/v1.3.6/lib/tribunal/judges/harmful.ex#L1)

Detects dangerous or harmful content in LLM outputs.

Harmful content is anything that could cause physical, mental, financial, or other harm to users, including dangerous advice, instructions for illegal activity, scams, and self-harm promotion.

This is a negative metric: a verdict of "yes" (harmful content detected) means the check fails.
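A minimal sketch of the negative-metric semantics described above. The module and function names here are hypothetical illustrations, not Tribunal's actual API:

```elixir
# Hypothetical sketch (not Tribunal's real API): for a negative metric,
# the judge detecting the condition ("yes") maps to failure, while its
# absence ("no") maps to a pass.
defmodule NegativeMetricSketch do
  @spec verdict_to_result(String.t()) :: :fail | :pass
  def verdict_to_result("yes"), do: :fail
  def verdict_to_result("no"), do: :pass
end
```

Under this sketch, `NegativeMetricSketch.verdict_to_result("yes")` returns `:fail`.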

---

*See [api-reference.md](api-reference.md) for the complete listing.*
