CrucibleXai (CrucibleXAI v0.4.0)
CrucibleXAI - Explainable AI (XAI) Library for Elixir
A comprehensive library for explaining machine learning model predictions using state-of-the-art interpretability techniques. Built on Nx for high-performance numerical computing.
Features
LIME (Local Interpretable Model-agnostic Explanations)
- Explain any black-box model locally with interpretable linear models
- Multiple sampling strategies (Gaussian, Uniform, Categorical)
- Flexible kernel functions for proximity weighting
- Feature selection methods (highest weights, forward selection, Lasso)
- Model-Agnostic: Works with any prediction function
- High Performance: Built on Nx tensors for efficient computation
- Flexible: Extensive configuration options
- Well-Tested: Comprehensive test suite with property-based testing
Quick Start
# Explain a model prediction
predict_fn = fn [x, y] -> 2.0 * x + 3.0 * y + 1.0 end
instance = [1.0, 2.0]
explanation = CrucibleXai.explain(instance, predict_fn)
# View explanation
IO.puts(CrucibleXai.Explanation.to_text(explanation))
Main Modules
- CrucibleXai.LIME - LIME explanations
- CrucibleXai.SHAP - SHAP (Shapley values) explanations
- CrucibleXai.Explanation - Explanation structure and utilities
- CrucibleXai.Validation - Explanation quality metrics and validation (v0.3.0+)
- CrucibleXai.LIME.Sampling - Data perturbation strategies
- CrucibleXai.LIME.Kernels - Proximity weighting functions
- CrucibleXai.LIME.InterpretableModels - Linear regression models
- CrucibleXai.LIME.FeatureSelection - Feature selection methods
References
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD.
Summary
Functions
Compute infidelity score.
Explain a model prediction using LIME.
Explain multiple instances.
Explain using SHAP (Shapley values).
Calculate feature importance using permutation importance.
Measure faithfulness of an explanation.
Quick validation for production use.
Validate an explanation comprehensively.
Functions
@spec compute_infidelity(list(), map(), (any() -> any()), keyword()) :: CrucibleXAI.Validation.Infidelity.result()
Compute infidelity score.
Measures explanation error via perturbation-based testing.
New in v0.3.0.
Parameters
- instance - Instance explained
- attributions - Attribution map
- predict_fn - Prediction function
- opts - Options
Returns
Map with infidelity score (lower is better)
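Examples
A hedged sketch, assuming attributions is a map of feature_index => attribution value (the same shape explain_shap/4 returns); the exact keys of the result map are defined by CrucibleXAI.Validation.Infidelity:
iex> predict_fn = fn [x] -> x * 2.0 end
iex> attributions = %{0 => 2.0}
iex> result = CrucibleXai.compute_infidelity([5.0], attributions, predict_fn)
iex> is_map(result)
true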
@spec explain(list() | Nx.Tensor.t(), (any() -> number() | Nx.Tensor.t()), Keyword.t()) :: CrucibleXAI.Explanation.t()
Explain a model prediction using LIME.
Convenience function that delegates to CrucibleXai.LIME.explain/3.
Parameters
- instance - The instance to explain
- predict_fn - Function that takes input and returns prediction
- opts - Options (see CrucibleXai.LIME for details)
Returns
%Explanation{} struct with feature weights and metadata
Examples
iex> predict_fn = fn [x] -> x * 2.0 end
iex> explanation = CrucibleXai.explain([5.0], predict_fn, num_samples: 100)
iex> explanation.method
:lime
@spec explain_batch(list(), (any() -> number() | Nx.Tensor.t()), Keyword.t()) :: [CrucibleXAI.Explanation.t()]
Explain multiple instances.
Convenience function that delegates to CrucibleXai.LIME.explain_batch/3.
Parameters
- instances - List of instances to explain
- predict_fn - Prediction function
- opts - Options
Returns
List of %Explanation{} structs
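Examples
A minimal sketch, assuming options mirror explain/3 (e.g. num_samples):
iex> predict_fn = fn [x] -> x * 2.0 end
iex> explanations = CrucibleXai.explain_batch([[1.0], [2.0]], predict_fn, num_samples: 100)
iex> length(explanations)
2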
@spec explain_shap(list() | Nx.Tensor.t(), list(), function(), keyword()) :: %{required(integer()) => float()}
Explain using SHAP (Shapley values).
Convenience function that delegates to CrucibleXAI.SHAP.explain/4.
Parameters
- instance - The instance to explain
- background_data - Background dataset for baseline
- predict_fn - Prediction function
- opts - Options (see CrucibleXAI.SHAP for details)
Returns
Map of feature_index => shapley_value
Examples
iex> predict_fn = fn [x] -> x * 2.0 end
iex> shap = CrucibleXai.explain_shap([5.0], [[0.0]], predict_fn, num_samples: 500)
iex> is_map(shap)
true
@spec feature_importance((any() -> any()), [{list(), number()}, ...], Keyword.t()) :: %{required(integer()) => %{importance: float(), std_dev: float()}}
Calculate feature importance using permutation importance.
Convenience function that delegates to CrucibleXAI.FeatureAttribution.permutation_importance/3.
Parameters
- predict_fn - Prediction function
- validation_data - List of {instance, label} tuples
- opts - Options (see CrucibleXAI.FeatureAttribution for details)
Returns
Map of feature_index => %{importance: float, std_dev: float}
Examples
iex> predict_fn = fn [x] -> x * 2.0 end
iex> data = [{[1.0], 2.0}, {[2.0], 4.0}]
iex> imp = CrucibleXai.feature_importance(predict_fn, data, num_repeats: 2)
iex> is_map(imp)
true
@spec measure_faithfulness(list(), CrucibleXAI.Explanation.t(), (any() -> any()), keyword()) :: CrucibleXAI.Validation.Faithfulness.faithfulness_result()
Measure faithfulness of an explanation.
Tests whether removing important features causes proportional prediction changes.
New in v0.3.0.
Parameters
- instance - Instance explained
- explanation - Explanation struct
- predict_fn - Prediction function
- opts - Options
Returns
Map with faithfulness score and details
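Examples
A hedged sketch reusing an explanation from explain/3; the exact keys of the result map are defined by CrucibleXAI.Validation.Faithfulness:
iex> predict_fn = fn [x, y] -> 2.0 * x + 3.0 * y end
iex> explanation = CrucibleXai.explain([5.0, 10.0], predict_fn)
iex> faithfulness = CrucibleXai.measure_faithfulness([5.0, 10.0], explanation, predict_fn)
iex> is_map(faithfulness)
true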
@spec quick_validate(CrucibleXAI.Explanation.t(), list(), (any() -> any()), keyword()) :: CrucibleXAI.Validation.quick_report()
Quick validation for production use.
Fast quality check using faithfulness and infidelity metrics only.
New in v0.3.0.
Parameters
- explanation - Explanation struct to validate
- instance - Instance that was explained
- predict_fn - Prediction function
- opts - Options
Returns
Map with quality scores and pass/fail status
Examples
iex> explanation = CrucibleXai.explain([5.0], fn [x] -> x * 2.0 end)
iex> quick = CrucibleXai.quick_validate(explanation, [5.0], fn [x] -> x * 2.0 end)
iex> is_boolean(quick.passes_quality_gate)
true
@spec validate_explanation(CrucibleXAI.Explanation.t(), list(), (any() -> any()), keyword()) :: CrucibleXAI.Validation.comprehensive_report()
Validate an explanation comprehensively.
Measures explanation quality across multiple dimensions: faithfulness, infidelity, sensitivity, and axiom compliance.
New in v0.3.0.
Parameters
- explanation - Explanation struct to validate
- instance - Instance that was explained
- predict_fn - Prediction function
- opts - Options (see CrucibleXAI.Validation for details)
Returns
Map with validation results and quality score
Examples
iex> explanation = CrucibleXai.explain([5.0, 10.0], fn [x, y] -> 2.0 * x + 3.0 * y end)
iex> validation = CrucibleXai.validate_explanation(explanation, [5.0, 10.0], fn [x, y] -> 2.0 * x + 3.0 * y end)
iex> is_map(validation)
true
iex> Map.has_key?(validation, :quality_score)
true