API Reference Scholar v0.3.1
Modules
Model representing affinity propagation clustering. The first dimension of :clusters_centers is set to the number of samples in the dataset. The artificial centers are filled with :infinity values. To filter them out, use the prune function.
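A minimal usage sketch in Elixir, assuming `fit/2` accepts an Nx PRNG key via a `:key` option and that the remaining options can be left at their defaults (check the module docs for the exact option names):

```elixir
key = Nx.Random.key(42)
x = Nx.tensor([[12.0, 5.0], [9.0, 3.0], [-1.0, 3.0], [1.0, -2.0]])

# Fit the model; :clusters_centers in the returned struct has one row per
# sample, with the artificial (non-center) rows filled with :infinity.
model = Scholar.Cluster.AffinityPropagation.fit(x, key: key)  # the :key option name is assumed

# Drop the artificial :infinity rows, keeping only the real cluster centers.
model = Scholar.Cluster.AffinityPropagation.prune(model)
```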
Performs DBSCAN clustering from a vector array or distance matrix.
Gaussian Mixture Model.
Performs hierarchical, agglomerative clustering on a dataset.
K-Means Algorithm.
Principal Component Analysis (PCA).
Univariate imputer for completing missing values with simple strategies.
Module for numerical integration.
Cubic Bezier Spline interpolation.
Cubic Spline interpolation.
Linear interpolation.
Bayesian ridge regression: A fully probabilistic linear model with parameter regularization.
Isotonic regression is a method of fitting a free-form line to a set of observations by solving a convex optimization problem. It is a form of regression analysis that can be used as an alternative to polynomial regression to fit nonlinear data.
Ordinary least squares linear regression.
Logistic regression in both binary and multinomial variants.
Least squares polynomial regression.
Linear least squares with $L_2$ regularization.
Support Vector Machine linear classifier.
Multidimensional scaling (MDS) seeks a low-dimensional representation of the data in which the distances respect the distances in the original high-dimensional space as well as possible.
t-SNE (t-Distributed Stochastic Neighbor Embedding) is a nonlinear dimensionality reduction technique.
TriMap: Large-scale Dimensionality Reduction Using Triplets.
Classification Metric functions.
Metrics related to clustering algorithms.
Distance metrics between multi-dimensional tensors. They all support distance calculations between any subset of axes.
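For instance, distances can be computed per row by restricting the calculation to the feature axis; a sketch assuming the option is named `:axes`:

```elixir
x = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])
y = Nx.tensor([[5.0, 6.0], [7.0, 8.0]])

# Single distance computed over all axes.
Scholar.Metrics.Distance.euclidean(x, y)

# One distance per row, computed along the feature axis only
# (the :axes option name is an assumption).
Scholar.Metrics.Distance.euclidean(x, y, axes: [1])
```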
Metrics for evaluating the results of approximate k-nearest neighbor search algorithms.
Provides metrics and calculations related to ranking quality.
Regression Metric functions.
Similarity metrics between multi-dimensional tensors.
Module containing cross-validation, splitting functions, and other model selection methods.
The Complement Naive Bayes classifier.
Gaussian Naive Bayes algorithm for classification.
Naive Bayes classifier for multinomial models.
Brute-Force k-Nearest Neighbor Search Algorithm.
Implements a k-d tree, a space-partitioning data structure for organizing points in a k-dimensional space.
K-Nearest Neighbors Classifier.
K-Nearest Neighbors Regressor.
LargeVis algorithm for approximate k-nearest neighbor (k-NN) graph construction.
Nearest Neighbors Descent (NND) is an algorithm that calculates approximate nearest neighbors (ANN) for a given set of points [1].
The Radius Nearest Neighbors algorithm.
Random Projection Forest for k-Nearest Neighbor Search.
Set of functions for preprocessing data.
Scales a tensor by dividing each sample in the batch by the maximum absolute value in the batch.
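A plain Nx sketch of the transformation described above (not the module's API): every entry is divided by the largest absolute value in the batch, so the result lies in [-1, 1].

```elixir
x = Nx.tensor([[1.0, -2.0], [3.0, 4.0]])

# Divide the whole batch by its maximum absolute value.
max_abs = Nx.reduce_max(Nx.abs(x))
Nx.divide(x, max_abs)
```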
Implements functionality for rescaling a tensor to unit norm. It enables applying normalization along any combination of axes.
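A plain Nx sketch of the idea for the row-wise (L2) case; the module itself exposes the norm and the axes as options:

```elixir
x = Nx.tensor([[3.0, 4.0], [6.0, 8.0]])

# Rescale each row to unit Euclidean norm.
norms = Nx.sqrt(Nx.sum(Nx.pow(x, 2), axes: [1], keep_axes: true))
Nx.divide(x, norms)
```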
Implements an encoder that converts integer values (a substitute for categorical data in tensors) into 0-1 vectors. The index of the 1 in each vector is assigned in sorted order. This means that for x < y => one_index(x) < one_index(y).
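A plain Nx sketch of the encoding, assuming the class values are already 0..num_classes - 1 (the encoder module is the supported API):

```elixir
labels = Nx.tensor([3, 0, 2])
num_classes = 4

# Each integer becomes a 0/1 vector whose single 1 sits at the position
# given by the (sorted) class value.
Nx.equal(Nx.new_axis(labels, -1), Nx.iota({num_classes}))
```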
Implements an encoder that converts integer values (a substitute for categorical data in tensors) into other integer values. The assigned values start from 0 and go up to num_classes - 1, and they are maintained in sorted order. This means that for x < y => encoded_value(x) < encoded_value(y).
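A plain Nx sketch of the mapping, with the distinct values written out by hand (the encoder module is the supported API):

```elixir
values = Nx.tensor([10, 3, 10, 7])

# The distinct values of `values`, sorted ascending, written out by hand.
sorted_uniques = Nx.tensor([3, 7, 10])

# Map each value to the index of its match, i.e. codes 0..num_classes - 1
# assigned in sorted order: [2, 0, 2, 1].
Nx.argmax(Nx.equal(Nx.new_axis(values, -1), sorted_uniques), axis: 1)
```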
Standardizes the tensor by removing the mean and scaling to unit variance.
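A plain Nx sketch of per-feature standardization (the module's own options, e.g. which axes are used, may differ):

```elixir
x = Nx.tensor([[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]])

# Subtract the column means and divide by the column standard deviations,
# giving zero mean and unit variance per feature.
mean = Nx.mean(x, axes: [0])
std = Nx.standard_deviation(x, axes: [0])
Nx.divide(Nx.subtract(x, mean), std)
```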
Statistical functions.