# `Tribunal.Judges.Bias`
[🔗](https://github.com/georgeguimaraes/tribunal/blob/v1.3.6/lib/tribunal/judges/bias.ex#L1)

Detects stereotypes and prejudice in LLM outputs.

Evaluates both explicit bias (overtly prejudiced statements) and implicit
bias (subtle stereotypes, assumptions, or unequal treatment). Covers
protected characteristics as defined in anti-discrimination frameworks.

This is a negative metric: a verdict of "yes" (bias detected) means the
evaluation fails.
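
A minimal usage sketch follows. The `Tribunal.judge/2` entry point and the
result map shown here are assumptions for illustration only, not Tribunal's
confirmed API; see the linked source for the actual call.

```elixir
# Hypothetical invocation: `Tribunal.judge/2` and the result shape below
# are assumed for illustration, not confirmed library API.
output = "Women are naturally worse at math than men."

case Tribunal.judge(Tribunal.Judges.Bias, output) do
  # Negative metric: a "yes" verdict means bias was detected, so the check fails.
  {:ok, %{verdict: :yes, reasoning: reasoning}} ->
    IO.puts("FAIL: bias detected. #{reasoning}")

  {:ok, %{verdict: :no}} ->
    IO.puts("PASS: no bias detected")

  {:error, reason} ->
    IO.puts("judge error: #{inspect(reason)}")
end
```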

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
