Scholar.Cluster.AffinityPropagation (Scholar v0.3.1)
Model representing affinity propagation clustering. The first dimension of :cluster_centers is set to the number of samples in the dataset. The artificial centers are filled with :infinity values. To filter them out, use the prune function.
The algorithm has a time complexity of the order $O(N^2T)$, where $N$ is the number of samples and $T$ is the number of iterations until convergence. Further, the memory complexity is of the order $O(N^2)$.
Summary

Functions

fit(x, opts \\ []) - Cluster the dataset using affinity propagation.

predict(model, x) - Predict the closest cluster each sample in x belongs to.

prune(model) - Optionally prune clusters, indices, and labels to only valid entries.
Functions

fit(x, opts \\ [])

Cluster the dataset using affinity propagation.
Options

:iterations (pos_integer/0) - Number of iterations of the algorithm. The default value is 300.

:damping_factor (float/0) - Damping factor in the range [0.5, 1.0); the extent to which the current value is maintained relative to incoming values (weighted 1 - damping). The default value is 0.5.

:preference - How to compute the preference for each point. Points with larger preference values are more likely to be chosen as exemplars, so this option influences the number of clusters. The preference is either an atom naming an Nx reduction function to apply to the input similarities (such as :reduce_min, :median, :mean, etc.) or a float. The default value is :reduce_min.

:key - Determines random number generation for centroid initialization. If the key is not provided, it is set to Nx.Random.key(System.system_time()).

:learning_loop_unroll (boolean/0) - If true, the learning loop is unrolled. The default value is false.

:converge_after (pos_integer/0) - Number of iterations with no change in the number of estimated clusters that stops the convergence. The default value is 15.
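These options are passed as a keyword list to fit/2. A minimal sketch combining a few of them (the option values are illustrative choices, not recommendations; no output is shown since the resulting clustering depends on the data and the preference chosen):

iex> key = Nx.Random.key(42)
iex> x = Nx.tensor([[12,5,78,2], [9,3,81,-2], [-1,3,6,1], [1,-2,5,2]])
iex> Scholar.Cluster.AffinityPropagation.fit(x, key: key, preference: :median, damping_factor: 0.7, iterations: 500)

Raising :preference toward the median of the similarities typically produces more clusters than the default :reduce_min, which is the usual way to trade off cluster count in affinity propagation.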
Return Values

The function returns a struct with the following parameters:

:cluster_centers - Cluster centers from the initial data.

:cluster_centers_indices - Indices of cluster centers.

:num_clusters - Number of clusters.
Examples
iex> key = Nx.Random.key(42)
iex> x = Nx.tensor([[12,5,78,2], [9,3,81,-2], [-1,3,6,1], [1,-2,5,2]])
iex> Scholar.Cluster.AffinityPropagation.fit(x, key: key)
%Scholar.Cluster.AffinityPropagation{
labels: Nx.tensor([0, 0, 2, 2]),
cluster_centers_indices: Nx.tensor([0, -1, 2, -1]),
cluster_centers: Nx.tensor(
[
[12.0, 5.0, 78.0, 2.0],
[:infinity, :infinity, :infinity, :infinity],
[-1.0, 3.0, 6.0, 1.0],
[:infinity, :infinity, :infinity, :infinity]
]
),
num_clusters: Nx.tensor(2, type: :u64),
iterations: Nx.tensor(22, type: :s64)
}
predict(model, x)

Predict the closest cluster each sample in x belongs to.
Examples
iex> key = Nx.Random.key(42)
iex> x = Nx.tensor([[12,5,78,2], [9,3,81,-2], [-1,3,6,1], [1,-2,5,2]])
iex> model = Scholar.Cluster.AffinityPropagation.fit(x, key: key)
iex> model = Scholar.Cluster.AffinityPropagation.prune(model)
iex> Scholar.Cluster.AffinityPropagation.predict(model, Nx.tensor([[10,3,50,6], [8,3,8,2]]))
#Nx.Tensor<
s64[2]
[0, 1]
>
prune(model)

Optionally prune clusters, indices, and labels to only valid entries. It returns an updated and pruned model.
Examples
iex> key = Nx.Random.key(42)
iex> x = Nx.tensor([[12,5,78,2], [9,3,81,-2], [-1,3,6,1], [1,-2,5,2]])
iex> model = Scholar.Cluster.AffinityPropagation.fit(x, key: key)
iex> Scholar.Cluster.AffinityPropagation.prune(model)
%Scholar.Cluster.AffinityPropagation{
labels: Nx.tensor([0, 0, 1, 1]),
cluster_centers_indices: Nx.tensor([0, 2]),
cluster_centers: Nx.tensor(
[
[12.0, 5.0, 78.0, 2.0],
[-1.0, 3.0, 6.0, 1.0]
]
),
num_clusters: Nx.tensor(2, type: :u64),
iterations: Nx.tensor(22, type: :s64)
}