Manifold learning
Mix.install([
  {:scholar, github: "elixir-nx/scholar"},
  {:explorer, "~> 0.8.2", override: true},
  {:exla, "~> 0.7.2"},
  {:nx, "~> 0.7.2"},
  {:req, "~> 0.4.14"},
  {:kino_vega_lite, "~> 0.1.11"},
  {:kino, "~> 0.12.3"},
  {:kino_explorer, "~> 0.1.18"},
  {:tucan, "~> 0.3.1"}
])
Setup
We will use Explorer in this notebook, so let's define an alias for its main module DataFrame:
require Explorer.DataFrame, as: DF
Explorer.DataFrame
And let's configure EXLA as our default backend (where our tensors are stored) and compiler (which compiles Scholar code) across the notebook and all branched sections:
Nx.global_default_backend(EXLA.Backend)
Nx.Defn.global_default_options(compiler: EXLA)
[]
Testing Manifold Learning Functionalities
In this notebook, we test how manifold learning algorithms work and how to use them for dimensionality reduction.
First, let's fetch the dataset that we will experiment on. The data represents the 3D coordinates of a mammoth. Below we include a figure of the original dataset.
source = "https://raw.githubusercontent.com/MNoichl/UMAP-examples-mammoth-/master/mammoth_a.csv"
data = Req.get!(source).body
df = DF.load_csv!(data)
#Explorer.DataFrame<
Polars[999778 x 3]
x f64 [58.823, 59.197, 58.734, 59.043, 59.223, ...]
y f64 [228.407, 228.642, 228.931, 228.693, 228.667, ...]
z f64 [79.843, 77.478, 78.515, 78.571, 78.611, ...]
>
Now, convert the dataframe into a tensor so we can manipulate the data using Scholar.
tensor_data = Nx.stack(df, axis: 1)
#Nx.Tensor<
f64[999778][3]
EXLA.Backend<host:0, 0.2236801022.581042190.174268>
[
[58.823, 228.407, 79.843],
[59.197, 228.642, 77.478],
[58.734, 228.931, 78.515],
[59.043, 228.693, 78.571],
[59.223, 228.667, 78.611],
[59.103, 228.711, 78.305],
[58.854, 228.786, 78.597],
[59.123, 228.695, 77.371],
[59.002, 228.592, 78.925],
[58.368, 227.879, 81.155],
[59.168, 229.8, 74.95],
[58.798, 229.431, 76.296],
[59.257, 229.227, 76.144],
[58.408, 250.928, 93.15],
[58.575, 250.743, 93.323],
[71.011, 217.179, 62.859],
[70.259, 217.511, ...],
...
]
>
Since there are almost 1 million data points and they are sorted, we shuffle the dataset and then use only a part of it.
Trimap
We start with Trimap. It's a manifold learning algorithm based on nearest neighbors that preserves the global structure of the dataset, which makes it useful for understanding the overall data distribution. Let's see what Trimap produces on the mammoth dataset.
{tensor_data, key} = Nx.Random.shuffle(Nx.Random.key(42), tensor_data)
trimap_res =
  Scholar.Manifold.Trimap.transform(tensor_data[[0..10000, ..]],
    key: Nx.Random.key(55),
    # target dimensionality of the embedding
    num_components: 2,
    # numbers of nearest neighbors and of outliers sampled to build triplets
    num_inliers: 12,
    num_outliers: 4,
    # temperature used when computing triplet weights
    weight_temp: 0.5,
    learning_rate: 0.1,
    metric: :squared_euclidean
  )
#Nx.Tensor<
f64[10001][2]
EXLA.Backend<host:0, 0.2236801022.581042192.174259>
[
[87.5960481212865, 107.83563946728007],
[95.93224095586187, 91.03811962459187],
[76.23538360750037, 101.22665484812988],
[91.28047447374527, 83.89099279844432],
[84.84272736715022, 63.05776975461835],
[77.47594824886791, 70.58007253201106],
[48.1726647167702, 83.59889134210331],
[87.09967163881679, 81.16403134057438],
[78.77111626020424, 89.5381523774902],
[97.90247495971414, 90.30331715233045],
[74.82692937810215, 88.72983366943957],
[84.41794589221618, 100.28068173065604],
[48.9525994243841, 65.31540933930391],
[97.74716685388529, 87.01213386339163],
[55.25583739452793, 78.09500152893669],
[78.8556707294727, 96.20470118706773],
[87.06284966061098, 92.95614239803128],
[98.14345384260274, 77.08288277313781],
[45.61527701121027, 86.90240376974904],
[109.77362315414614, 105.15212699721849],
[77.18580406991187, 90.15798204995377],
[72.62493132684811, 61.503358925355755],
[64.86122257994592, 56.26636625077798],
[107.15238372453034, 86.8502564790939],
[76.74146917610297, 92.78343841441362],
...
]
>
Now, let's plot the results of the Trimap algorithm.
coords = [
  x: trimap_res[[.., 0]] |> Nx.to_flat_list(),
  y: trimap_res[[.., 1]] |> Nx.to_flat_list()
]

Tucan.scatter(coords, "x", "y", point_size: 1)
|> Tucan.set_size(300, 300)
|> Tucan.set_title(
  "Mammoth dataset with reduced dimensionality using Trimap",
  offset: 25
)
We can certainly recognize a mammoth in this picture, so Trimap indeed preserved the global structure of the data. The result is similar to a projection of the 3D mammoth onto the YZ plane. Now, let's plot that projection and compare the two plots.
coords = [
  x: tensor_data[[0..10000, 1]] |> Nx.to_flat_list(),
  y: tensor_data[[0..10000, 2]] |> Nx.to_flat_list()
]

Tucan.scatter(coords, "x", "y", point_size: 1)
|> Tucan.set_size(300, 300)
|> Tucan.set_title(
  "Mammoth dataset projected onto the YZ plane",
  offset: 25
)
These two plots are similar, but there are some important differences. Even if the second figure seems "prettier", it is less informative than the result of Trimap. In the first figure we can spot two tusks, while in the second they overlap and we see only one. Similarly, the legs are spread out and don't intersect with each other in the first plot, while in the second they overlay.
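To make the comparison easier, we can also render the two scatter plots side by side. The following is a minimal sketch that reuses trimap_res and tensor_data from the cells above; it assumes that Tucan.hconcat/2 is available in this version of Tucan:

trimap_plot =
  [
    x: trimap_res[[.., 0]] |> Nx.to_flat_list(),
    y: trimap_res[[.., 1]] |> Nx.to_flat_list()
  ]
  |> Tucan.scatter("x", "y", point_size: 1)
  |> Tucan.set_size(300, 300)
  |> Tucan.set_title("Trimap embedding")

projection_plot =
  [
    x: tensor_data[[0..10000, 1]] |> Nx.to_flat_list(),
    y: tensor_data[[0..10000, 2]] |> Nx.to_flat_list()
  ]
  |> Tucan.scatter("x", "y", point_size: 1)
  |> Tucan.set_size(300, 300)
  |> Tucan.set_title("YZ projection")

# render both plots in a single row for easier visual comparison
Tucan.hconcat([trimap_plot, projection_plot])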
t-SNE
Now, let's try a different algorithm: t-Distributed Stochastic Neighbor Embedding, better known as t-SNE. t-SNE is a nonlinear dimensionality reduction technique that emphasizes local relationships, making it useful for identifying patterns and subgroups within the data.
tsne_res =
  Scholar.Manifold.TSNE.fit(tensor_data[[0..2000, ..]],
    key: Nx.Random.key(55),
    num_components: 2,
    perplexity: 125,
    exaggeration: 10.0,
    learning_rate: 500,
    metric: :squared_euclidean
  )
warning: Range.new/2 has a default step of -1, please call Range.new/3 explicitly passing the step of -1 instead
(nx 0.7.2) lib/nx/shape.ex:1811: Nx.Shape.validate_dot_axes!/6
(nx 0.7.2) lib/nx/shape.ex:1744: Nx.Shape.dot/8
(nx 0.7.2) lib/nx.ex:12525: Nx.dot/6
(nx 0.7.2) lib/nx/lin_alg/qr.ex:102: Nx.LinAlg.QR."__defn:norm__"/1
(nx 0.7.2) lib/nx/lin_alg/qr.ex:109: Nx.LinAlg.QR."__defn:householder_reflector__"/3
#Nx.Tensor<
f64[2001][2]
EXLA.Backend<host:0, 0.2236801022.581042190.174275>
[
[-7.720863452772271, -8.924593239946077],
[-7.627408661169662, -8.493499839300748],
[-7.655294023089491, -8.67975773066019],
[-7.660605743680346, -8.669241639954539],
[-7.6598753013087535, -8.658590299593522],
[-7.650612040457881, -8.62046834395759],
[-7.66115746267903, -8.687692549707776],
[-7.624018688200683, -8.485297780382929],
[-7.6767334531761025, -8.7359601794454],
[-7.810238386734004, -9.244003168397134],
[-7.5452610843798915, -8.215109191115873],
[-7.582641346103063, -8.371180999947315],
[-7.5790580771288925, -8.318735667656417],
[-6.330265165889219, -17.67182425161148],
[-6.350620100516294, -17.664527925102853],
[-8.511254173153981, 3.2484235457551365],
[-8.667916640544206, 3.2325600834700907],
[-8.652664730644178, 3.1774228289244664],
[-8.722131914178286, 3.1262080759539486],
[-8.772817730928294, 3.098786298010697],
[-2.6314317865997636, 3.17224901794698],
[-2.8022476551090123, 3.2279404380435994],
[-2.8261919490936998, 3.2470904773946208],
[-4.9968543194910575, 4.953954052553824],
[-6.961367201446683, -17.431908124735948],
...
]
>
coords = [
  x: tsne_res[[.., 0]] |> Nx.to_flat_list(),
  y: tsne_res[[.., 1]] |> Nx.to_flat_list()
]

Tucan.scatter(coords, "x", "y", point_size: 1)
|> Tucan.set_size(300, 300)
|> Tucan.set_title(
  "Mammoth dataset with reduced dimensionality using t-SNE",
  offset: 25
)
As we can see, t-SNE gives completely different results than Trimap, because the two algorithms have completely different mathematical foundations. t-SNE is also slower, so it cannot be used on datasets as big as the ones Trimap handles. However, t-SNE preserves some features of the mammoth, such as the small tusks, the feet, and the body. You can experiment with the perplexity parameter, which can substantially change the output of the algorithm.
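A quick way to run such an experiment is to fit the embedding for several perplexity values and plot the results next to each other. Here is a minimal sketch reusing tensor_data from above, again assuming Tucan.hconcat/2 is available; the chosen perplexity values are only illustrative:

plots =
  for perplexity <- [30, 75, 125] do
    res =
      Scholar.Manifold.TSNE.fit(tensor_data[[0..2000, ..]],
        key: Nx.Random.key(55),
        num_components: 2,
        # lower perplexity emphasizes finer local structure
        perplexity: perplexity,
        exaggeration: 10.0,
        learning_rate: 500,
        metric: :squared_euclidean
      )

    [
      x: res[[.., 0]] |> Nx.to_flat_list(),
      y: res[[.., 1]] |> Nx.to_flat_list()
    ]
    |> Tucan.scatter("x", "y", point_size: 1)
    |> Tucan.set_size(200, 200)
    |> Tucan.set_title("perplexity = #{perplexity}")
  end

Tucan.hconcat(plots)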