ExFairness.Metrics.PredictiveParity (ExFairness v0.5.1)
Predictive Parity (Outcome Test) fairness metric.
Predictive parity requires that the positive predictive value (PPV, also known as precision) be equal across groups defined by the sensitive attribute.
Mathematical Definition
P(Y = 1 | Ŷ = 1, A = 0) = P(Y = 1 | Ŷ = 1, A = 1)

The disparity is measured as:

Δ_PP = |PPV_{A=0} - PPV_{A=1}|

When to Use
- When the meaning of a positive prediction should be consistent across groups
- Risk assessment tools (positive prediction should mean similar risk)
- Credit scoring (approved applicants should have similar default rates)
- When users rely on predictions to make decisions
Limitations
- Ignores true positive rates and false negative rates
- May conflict with equalized odds when base rates differ
- Less restrictive than equalized odds, since it constrains only the precision of positive predictions
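The conflict with equalized odds follows from an identity in Chouldechova (2017), cited below: FPR = p/(1-p) · (1-PPV)/PPV · TPR, where p is the group's base rate. If two groups have equal PPV and equal TPR but different base rates, their false positive rates must differ. A numeric sketch (the values are illustrative, not taken from the library):

```elixir
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * TPR.
# Hold PPV and TPR fixed across groups and vary only the base rate p.
fpr = fn p, ppv, tpr -> p / (1 - p) * ((1 - ppv) / ppv) * tpr end

fpr_low_base_rate = fpr.(0.2, 0.8, 0.6)
fpr_high_base_rate = fpr.(0.4, 0.8, 0.6)
# The two FPRs differ, so predictive parity and equalized odds
# cannot both hold when base rates differ.
```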
References
- Chouldechova, A. (2017). "Fair prediction with disparate impact." Big Data, 5(2), 153-163.
Examples
iex> predictions = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
iex> labels = Nx.tensor([1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.Metrics.PredictiveParity.compute(predictions, labels, sensitive)
iex> result.passes
true
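The arithmetic behind this doctest can be checked by hand. A minimal sketch using plain lists (not the library's Nx-based implementation): each group's first ten samples yield TP = 2 and FP = 1, so both PPVs are 2/3 and the disparity is 0.

```elixir
# Sketch of the PPV formula: PPV = TP / (TP + FP), computed per group.
ppv = fn preds, labels ->
  pairs = Enum.zip(preds, labels)
  tp = Enum.count(pairs, fn {p, y} -> p == 1 and y == 1 end)
  fp = Enum.count(pairs, fn {p, y} -> p == 1 and y == 0 end)
  tp / (tp + fp)
end

# Group A = samples where sensitive == 0; Group B = samples where sensitive == 1.
group_a_ppv = ppv.([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
group_b_ppv = ppv.([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
disparity = abs(group_a_ppv - group_b_ppv)
# disparity is 0.0, within the default 0.1 threshold, so the check passes.
```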
Functions
@spec compute(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: result()
Computes predictive parity disparity between groups.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options:
  - :threshold - Maximum acceptable PPV disparity (default: 0.1)
  - :min_per_group - Minimum samples per group for validation (default: 10)
Returns
A map containing:
- :group_a_ppv - Positive predictive value for group A
- :group_b_ppv - Positive predictive value for group B
- :disparity - Absolute difference in PPV
- :passes - Whether the disparity is within the threshold
- :threshold - Threshold used
- :interpretation - Plain-language explanation of the result
Examples
iex> predictions = Nx.tensor([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
iex> labels = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.Metrics.PredictiveParity.compute(predictions, labels, sensitive)
iex> result.passes
false
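The failing case can be verified the same way: group A has TP = 3 and FP = 1 (PPV = 0.75), while group B has TP = 1 and FP = 2 (PPV = 1/3), so the disparity exceeds the default 0.1 threshold. A sketch with plain lists (not the library's implementation):

```elixir
# PPV = TP / (TP + FP), computed separately for each group.
ppv = fn preds, labels ->
  pairs = Enum.zip(preds, labels)
  tp = Enum.count(pairs, fn {p, y} -> p == 1 and y == 1 end)
  fp = Enum.count(pairs, fn {p, y} -> p == 1 and y == 0 end)
  tp / (tp + fp)
end

group_a_ppv = ppv.([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
group_b_ppv = ppv.([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
disparity = abs(group_a_ppv - group_b_ppv)
# disparity is about 0.417, above the 0.1 threshold, so the check fails.
```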