ExFairness (ExFairness v0.5.1)
ExFairness - Fairness and bias detection library for Elixir AI/ML systems.
ExFairness provides comprehensive fairness metrics, bias detection algorithms, and mitigation techniques to ensure equitable predictions across different demographic groups.
Features
- Fairness Metrics: Demographic parity, equalized odds, equal opportunity, and more
- Bias Detection: Statistical testing, disparate impact analysis, intersectional bias
- Mitigation: Reweighting, resampling, threshold optimization, adversarial debiasing
- Reporting: Comprehensive fairness reports with interpretations
Quick Start
# Compute demographic parity
predictions = Nx.tensor([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
result = ExFairness.demographic_parity(predictions, sensitive)
# => %{disparity: 0.0, passes: true, ...}
Metrics
- demographic_parity/3 - Demographic parity (statistical parity)
- equalized_odds/4 - Equalized odds (equal TPR and FPR)
- equal_opportunity/4 - Equal opportunity (equal TPR)
- predictive_parity/4 - Predictive parity (equal PPV)
- calibration/4 - Calibration fairness (equal ECE/MCE across groups)
- More metrics coming soon...
Summary
Functions
Computes calibration fairness between groups using predicted probabilities.
Computes demographic parity disparity between groups.
Computes equal opportunity disparity between groups.
Computes equalized odds disparity between groups.
Evaluates fairness using a CrucibleIR.Reliability.Fairness configuration.
Generates a comprehensive fairness report across multiple metrics.
Computes predictive parity disparity between groups.
Functions
@spec calibration(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Metrics.Calibration.result()
Computes calibration fairness between groups using predicted probabilities.
Calibration checks whether predicted probabilities align with actual outcomes equally across groups, reporting the expected calibration error (ECE) and maximum calibration error (MCE) per group along with the disparity between groups.
Parameters
- probabilities - Predicted probabilities (0.0 to 1.0)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Metrics.Calibration.compute/4)
Examples
iex> probs = Nx.tensor([0.1, 0.3, 0.6, 0.9, 0.2, 0.4, 0.7, 0.8, 0.5, 0.3,
...> 0.1, 0.3, 0.6, 0.9, 0.2, 0.4, 0.7, 0.8, 0.5, 0.3])
iex> labels = Nx.tensor([0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.calibration(probs, labels, sensitive, n_bins: 5)
iex> result.passes
true
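For intuition, here is a minimal sketch of expected calibration error for a single group using equal-width bins. It mirrors the idea only and is not ExFairness's internal implementation; to get the disparity, split the data by sensitive_attr, compute ECE for each group, and compare.

defmodule CalibrationSketch do
  # Expected Calibration Error (ECE) for one group: bin predictions into
  # equal-width probability bins, then take the weighted average of
  # |observed positive rate - mean predicted probability| per bin.
  def ece(probs, labels, n_bins \\ 5) do
    pairs = Enum.zip(probs, labels)
    n = length(pairs)

    Enum.reduce(0..(n_bins - 1), 0.0, fn i, acc ->
      lo = i / n_bins
      hi = (i + 1) / n_bins
      # The last bin is closed on the right so a probability of 1.0 is not dropped.
      in_bin = fn {p, _label} -> p >= lo and (p < hi or (i == n_bins - 1 and p <= hi)) end

      case Enum.filter(pairs, in_bin) do
        [] ->
          acc

        bin ->
          confidence = Enum.sum(Enum.map(bin, &elem(&1, 0))) / length(bin)
          accuracy = Enum.sum(Enum.map(bin, &elem(&1, 1))) / length(bin)
          acc + length(bin) / n * abs(accuracy - confidence)
      end
    end)
  end
end

# Hypothetical usage with plain lists (Nx.to_flat_list/1 converts tensors):
# group_a_ece = CalibrationSketch.ece([0.1, 0.3, 0.6, 0.9], [0, 0, 1, 1])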
@spec demographic_parity(Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Metrics.DemographicParity.result()
Computes demographic parity disparity between groups.
Demographic parity requires that the probability of a positive prediction is equal across groups defined by the sensitive attribute.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Metrics.DemographicParity.compute/3)
Returns
A map containing fairness metrics. See ExFairness.Metrics.DemographicParity.compute/3
for details.
Examples
iex> predictions = Nx.tensor([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.demographic_parity(predictions, sensitive)
iex> result.passes
true
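The disparity here is the absolute difference in positive-prediction rates between the two groups. A minimal sketch of that computation in plain Nx, for intuition only (not the library's internal code):

# Positive-prediction rate per group, then the absolute gap between them.
group_a = Nx.equal(sensitive, 0)
group_b = Nx.equal(sensitive, 1)

rate_a = Nx.divide(Nx.sum(Nx.multiply(predictions, group_a)), Nx.sum(group_a))
rate_b = Nx.divide(Nx.sum(Nx.multiply(predictions, group_b)), Nx.sum(group_b))

disparity = Nx.abs(Nx.subtract(rate_a, rate_b))
# For the example above the disparity is 0.0: both groups have a 0.5 positive rate.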
@spec equal_opportunity(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Metrics.EqualOpportunity.result()
Computes equal opportunity disparity between groups.
Equal opportunity requires that the true positive rate (TPR) is equal across groups.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Metrics.EqualOpportunity.compute/4)
Returns
A map containing fairness metrics. See ExFairness.Metrics.EqualOpportunity.compute/4
for details.
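A usage sketch with illustrative data (the call mirrors the other metrics; exact result values depend on your data):

predictions = Nx.tensor([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
labels      = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
sensitive   = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

result = ExFairness.equal_opportunity(predictions, labels, sensitive)
# result.passes reflects whether the TPR gap stays within the metric's threshold
# (see ExFairness.Metrics.EqualOpportunity.compute/4 for the available options).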
@spec equalized_odds(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Metrics.EqualizedOdds.result()
Computes equalized odds disparity between groups.
Equalized odds requires that both the true positive rate (TPR) and the false positive rate (FPR) are equal across groups.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Metrics.EqualizedOdds.compute/4)
Returns
A map containing fairness metrics. See ExFairness.Metrics.EqualizedOdds.compute/4
for details.
Examples
iex> predictions = Nx.tensor([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
iex> labels = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.equalized_odds(predictions, labels, sensitive)
iex> result.passes
true
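For reference, the per-group TPR and FPR that this metric compares can be derived from confusion-matrix counts. A plain-Nx sketch for the subset where sensitive == 0 (for intuition, not the library's internals):

mask = Nx.equal(sensitive, 0)

tp  = Nx.sum(Nx.multiply(mask, Nx.multiply(predictions, labels)))
fp  = Nx.sum(Nx.multiply(mask, Nx.multiply(predictions, Nx.subtract(1, labels))))
fn_ = Nx.sum(Nx.multiply(mask, Nx.multiply(Nx.subtract(1, predictions), labels)))
tn  = Nx.sum(Nx.multiply(mask, Nx.multiply(Nx.subtract(1, predictions), Nx.subtract(1, labels))))

tpr = Nx.divide(tp, Nx.add(tp, fn_))  # P(prediction = 1 | label = 1)
fpr = Nx.divide(fp, Nx.add(fp, tn))   # P(prediction = 1 | label = 0)
# Equalized odds compares both rates between the two groups.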
@spec evaluate(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), struct(), Nx.Tensor.t() | nil) :: %{metrics: map(), overall_passes: boolean(), violations: [map()]}
Evaluates fairness using a CrucibleIR.Reliability.Fairness configuration.
This function provides a bridge between the Crucible framework and ExFairness, allowing fairness evaluation to be configured using CrucibleIR's configuration structures.
Note: This function is available when the crucible_ir dependency is loaded.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- config - CrucibleIR.Reliability.Fairness configuration struct
- probabilities - (Optional) Prediction probabilities for calibration metrics
Returns
A map containing:
- :metrics - Map of metric results for each configured metric
- :overall_passes - Boolean indicating if all metrics pass
- :violations - List of metrics that failed to pass
Examples
iex> config = %CrucibleIR.Reliability.Fairness{
...> enabled: true,
...> metrics: [:demographic_parity],
...> threshold: 0.1
...> }
iex> predictions = Nx.tensor([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
iex> labels = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> result = ExFairness.evaluate(predictions, labels, sensitive, config)
iex> result.overall_passes
true
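A short follow-up sketch for acting on the result (hypothetical handling code, not part of the library):

# Gate a pipeline step on the evaluation outcome and surface any failing metrics.
unless result.overall_passes do
  Enum.each(result.violations, fn violation ->
    IO.inspect(violation, label: "fairness violation")
  end)
end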
@spec fairness_report(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Report.report()
Generates a comprehensive fairness report across multiple metrics.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Report.generate/4). To include calibration, pass probabilities: probs.
Returns
A comprehensive fairness report. See ExFairness.Report.generate/4 for details.
Examples
iex> predictions = Nx.tensor([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
iex> labels = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
iex> sensitive = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
iex> report = ExFairness.fairness_report(predictions, labels, sensitive)
iex> report.total_count
4
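As noted under the options above, calibration can be folded into the report by passing predicted probabilities. A usage sketch reusing the tensors from the example above, with probs as an assumed probability tensor aligned with predictions:

probs = Nx.tensor([0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.4, 0.9, 0.8,
                   0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.4, 0.9, 0.8])
report = ExFairness.fairness_report(predictions, labels, sensitive, probabilities: probs)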
@spec predictive_parity(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t(), keyword()) :: ExFairness.Metrics.PredictiveParity.result()
Computes predictive parity disparity between groups.
Predictive parity requires that the positive predictive value (PPV, i.e. precision) is equal across groups.
Parameters
- predictions - Binary predictions tensor (0 or 1)
- labels - Binary labels tensor (0 or 1)
- sensitive_attr - Binary sensitive attribute tensor (0 or 1)
- opts - Options (see ExFairness.Metrics.PredictiveParity.compute/4)
Returns
A map containing fairness metrics. See ExFairness.Metrics.PredictiveParity.compute/4
for details.
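A usage sketch with illustrative data (not a doctest from the library; the result map is assumed to mirror the other metrics, with a disparity value and a passes flag):

predictions = Nx.tensor([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])
labels      = Nx.tensor([1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1])
sensitive   = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

result = ExFairness.predictive_parity(predictions, labels, sensitive)
# PPV per group = TP / (TP + FP), i.e. precision restricted to that group;
# the metric reports the gap between the two groups' PPVs.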