Explorer.Query (Explorer v0.10.0)
High-level query for Explorer.
Explorer.DataFrame vs DF
All examples below assume you have aliased Explorer.DataFrame to DF as shown below:

require Explorer.DataFrame, as: DF
Queries convert regular Elixir code into efficient dataframe operations. Inside a query, only a limited set of Series operations is available, and identifiers such as strs and nums represent dataframe column names:
iex> df = DF.new(strs: ["a", "b", "c"], nums: [1, 2, 3])
iex> DF.filter(df, nums > 2)
#Explorer.DataFrame<
Polars[1 x 2]
strs string ["c"]
nums s64 [3]
>
If a column name has an unusual format, you can either rename it beforehand or use col/1 inside queries:
iex> df = DF.new("unusual nums": [1, 2, 3])
iex> DF.filter(df, col("unusual nums") > 2)
#Explorer.DataFrame<
Polars[1 x 1]
unusual nums s64 [3]
>
All operations from Explorer.Series are imported inside queries. This module also provides operators for use in queries, which are imported as well.
Supported operations
Queries are supported in the following operations (see the sketch after this list):
Explorer.DataFrame.sort_by/2
Explorer.DataFrame.filter/2
Explorer.DataFrame.mutate/2
Explorer.DataFrame.summarise/2
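For instance, queries in sort_by/2 and summarise/2 follow the same shape. A minimal sketch (not taken from these docs; the columns are invented for illustration):

df = DF.new(strs: ["a", "b", "c"], nums: [1, 2, 3])

# sort in descending order of the nums column
DF.sort_by(df, desc: nums)

# group by strs and aggregate nums with a query expression
df
|> DF.group_by(:strs)
|> DF.summarise(total: sum(nums))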
Interpolation
If you want to access variables defined outside of the query or get access to all Elixir constructs, you must use ^:
iex> min = 2
iex> df = DF.new(strs: ["a", "b", "c"], nums: [1, 2, 3])
iex> DF.filter(df, nums > ^min)
#Explorer.DataFrame<
Polars[1 x 2]
strs string ["c"]
nums s64 [3]
>
iex> min = 2
iex> df = DF.new(strs: ["a", "b", "c"], nums: [1, 2, 3])
iex> DF.filter(df, nums < ^if(min > 0, do: 10, else: -10))
#Explorer.DataFrame<
Polars[3 x 2]
strs string ["a", "b", "c"]
nums s64 [1, 2, 3]
>
^ can be used with col to access columns dynamically:
iex> df = DF.new("unusual nums": [1, 2, 3])
iex> name = "unusual nums"
iex> DF.filter(df, col(^name) > 2)
#Explorer.DataFrame<
Polars[1 x 1]
unusual nums s64 [3]
>
Conditionals
Queries support both if/2 and unless/2.
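For instance, a minimal sketch (not a doctest from these docs), assuming if/2 picks one of the two branch values for each row:

df = DF.new(a: [1, 2, 3])
DF.mutate(df, b: if(a > 2, do: "big", else: "small"))
# b becomes ["small", "small", "big"]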
cond/1 can be used to write multi-clause conditions:
iex> df = DF.new(a: [10, 4, 6])
iex> DF.mutate(df,
...> b:
...> cond do
...> a > 9 -> "Exceptional"
...> a > 5 -> "Passed"
...> true -> "Failed"
...> end
...> )
#Explorer.DataFrame<
Polars[3 x 2]
a s64 [10, 4, 6]
b string ["Exceptional", "Failed", "Passed"]
>
Across and comprehensions
Explorer.Query leverages the power behind Elixir for-comprehensions to provide a powerful syntax for traversing several columns in a dataframe at once. For example, imagine you want to standardize the data in the iris dataset; you could write this:
iex> iris = Explorer.Datasets.iris()
iex> DF.mutate(iris,
...> sepal_width: (sepal_width - mean(sepal_width)) / variance(sepal_width),
...> sepal_length: (sepal_length - mean(sepal_length)) / variance(sepal_length),
...> petal_length: (petal_length - mean(petal_length)) / variance(petal_length),
...> petal_width: (petal_width - mean(petal_width)) / variance(petal_width)
...> )
#Explorer.DataFrame<
Polars[150 x 5]
sepal_length f64 [-1.0840606189132322, -1.3757361217598405, -1.66741162460645, -1.8132493760297554, -1.2298983703365363, ...]
sepal_width f64 [2.3722896125315045, -0.28722789030650403, 0.7765791108287005, 0.2446756102610982, 2.9041931130991068, ...]
petal_length f64 [-0.7576391687443839, -0.7576391687443839, -0.7897606710936369, -0.7255176663951307, -0.7576391687443839, ...]
petal_width f64 [-1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, ...]
species string ["Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", ...]
>
While the code above does its job, it is quite repetitive. With across and for-comprehensions, we could instead write:
iex> iris = Explorer.Datasets.iris()
iex> DF.mutate(iris,
...> for col <- across(["sepal_width", "sepal_length", "petal_length", "petal_width"]) do
...> {col.name, (col - mean(col)) / variance(col)}
...> end
...> )
#Explorer.DataFrame<
Polars[150 x 5]
sepal_length f64 [-1.0840606189132322, -1.3757361217598405, -1.66741162460645, -1.8132493760297554, -1.2298983703365363, ...]
sepal_width f64 [2.3722896125315045, -0.28722789030650403, 0.7765791108287005, 0.2446756102610982, 2.9041931130991068, ...]
petal_length f64 [-0.7576391687443839, -0.7576391687443839, -0.7897606710936369, -0.7255176663951307, -0.7576391687443839, ...]
petal_width f64 [-1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, ...]
species string ["Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", ...]
>
This achieves the same result in a more concise and maintainable way.
across/1 may receive any of the following as arguments:
a list of column indexes or names (as atoms and strings)
a range
a regex that keeps only the names matching the regex
For example, since we know the width and length columns are the first four, we could also have written (remember ranges in Elixir are inclusive):
DF.mutate(iris,
for col <- across(0..3) do
{col.name, (col - mean(col)) / variance(col)}
end
)
Or using a regex:
DF.mutate(iris,
for col <- across(~r/(sepal|petal)_(length|width)/) do
{col.name, (col - mean(col)) / variance(col)}
end
)
For those new to Elixir, for-comprehensions have the following format:
for PATTERN <- GENERATOR, FILTER do
EXPR
end
A comprehension filter is a mechanism that allows us to keep only columns based on additional properties, such as their dtype. A for-comprehension can have multiple generators and filters. For instance, if you want to apply standardization to all float columns, you can use across/0 to access all columns and then use a filter to keep only the float ones:
iex> iris = Explorer.Datasets.iris()
iex> DF.mutate(iris,
...> for col <- across(), col.dtype == {:f, 64} do
...> {col.name, (col - mean(col)) / variance(col)}
...> end
...> )
#Explorer.DataFrame<
Polars[150 x 5]
sepal_length f64 [-1.0840606189132322, -1.3757361217598405, -1.66741162460645, -1.8132493760297554, -1.2298983703365363, ...]
sepal_width f64 [2.3722896125315045, -0.28722789030650403, 0.7765791108287005, 0.2446756102610982, 2.9041931130991068, ...]
petal_length f64 [-0.7576391687443839, -0.7576391687443839, -0.7897606710936369, -0.7255176663951307, -0.7576391687443839, ...]
petal_width f64 [-1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, -1.7147014356654708, ...]
species string ["Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", "Iris-setosa", ...]
>
For-comprehensions work with all dataframe verbs. As we have seen above, for mutations we must return tuples pairing the mutation name with its value. summarise works similarly. Note that in both cases the name can also be generated dynamically. For example, to compute the mean per species, you could write:
iex> Explorer.Datasets.iris()
...> |> DF.group_by("species")
...> |> DF.summarise(
...> for col <- across(), col.dtype == {:f, 64} do
...> {"#{col.name}_mean", round(mean(col), 3)}
...> end
...> )
#Explorer.DataFrame<
Polars[3 x 5]
species string ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]
sepal_length_mean f64 [5.006, 5.936, 6.588]
sepal_width_mean f64 [3.418, 2.77, 2.974]
petal_length_mean f64 [1.464, 4.26, 5.552]
petal_width_mean f64 [0.244, 1.326, 2.026]
>
sort_by expects a list of columns to sort by (see the sketch after the next example), while for-comprehensions in filter generate a list of conditions, which are joined using and. For example, to filter all entries that have both sepal and petal length above average, using a filter on the column name, one could write:
iex> iris = Explorer.Datasets.iris()
iex> DF.filter(iris,
...> for col <- across(), String.ends_with?(col.name, "_length") do
...> col > mean(col)
...> end
...> )
#Explorer.DataFrame<
Polars[70 x 5]
sepal_length f64 [7.0, 6.4, 6.9, 6.5, 6.3, ...]
sepal_width f64 [3.2, 3.2, 3.1, 2.8, 3.3, ...]
petal_length f64 [4.7, 4.5, 4.9, 4.6, 4.7, ...]
petal_width f64 [1.4, 1.5, 1.5, 1.5, 1.6, ...]
species string ["Iris-versicolor", "Iris-versicolor", "Iris-versicolor", "Iris-versicolor", "Iris-versicolor", ...]
>
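Similarly, a comprehension inside sort_by yields the list of columns to sort by. A minimal sketch (not from the original docs), assuming a comprehension that returns bare columns sorts by each of them in ascending order:

DF.sort_by(iris,
  for col <- across(), col.dtype == {:f, 64} do
    col
  end
)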
Do not mix comprehensions and queries
The filter inside a for-comprehension works at the meta level: it can only filter columns based on their names and dtypes, but not on their values. For example, this code does not make any sense and it will fail to compile:

|> DF.filter(
  for col <- across(), col > mean(col) do
    col
  end
)

Another way to think about it: the comprehension traverses the columns themselves, while the contents of the comprehension's do-block operate on the values inside the columns.
Implementation details
Queries simply become lazy dataframe operations at runtime. For example, the following query
Explorer.DataFrame.filter(df, nums > 2)
is equivalent to
Explorer.DataFrame.filter_with(df, fn df -> Explorer.Series.greater(df["nums"], 2) end)
This means that, whenever you want to generate queries programmatically, you can fall back to the regular _with APIs.
In the _with APIs, the callbacks receive an Explorer.DataFrame as an input. That dataframe is backed by the special Explorer.Backend.QueryFrame backend.
Explorer.DataFrame.filter_with(df, fn query_backed_frame ->
IO.inspect(query_backed_frame)
...
end)
# #Explorer.DataFrame<
# QueryFrame[??? x 1]
# ...
# >
A "query-backed" dataframe cannot be manipulated. You may only access its series. And when you do, you get back "lazy-backed" versions of those series:
Explorer.DataFrame.filter_with(df, fn query_backed_frame ->
IO.inspect(query_backed_frame["a"])
...
end)
# #Explorer.Series<
# LazySeries[???]
# s64 (column("a"))
# >
"Lazy-backed" series are backed by the special Explorer.Backend.LazySeries
backend. All Explorer.Series
functions work on lazy-backed series too. So
you can write your _with
callbacks without ever referencing the fact that
the backend is the lazy one.
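For instance, here is a minimal sketch (not taken from this documentation) of a mutate_with callback that only uses regular Explorer.Series functions on the lazy-backed series:

df = Explorer.DataFrame.new(nums: [1, 2, 3])

Explorer.DataFrame.mutate_with(df, fn ldf ->
  # ldf["nums"] is a lazy-backed series; multiply/2 works on it as usual
  [nums_squared: Explorer.Series.multiply(ldf["nums"], ldf["nums"])]
end)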
Summary
Functions
Delegate to Explorer.Series.pow/2.
Delegate to Explorer.Series.multiply/2.
Unary plus operator.
Delegate to Explorer.Series.add/2.
Unary minus operator.
Delegate to Explorer.Series.subtract/2.
Delegate to Explorer.Series.divide/2.
Delegate to Explorer.Series.not_equal/2.
Delegate to Explorer.Series.less/2.
Delegate to Explorer.Series.less_equal/2.
String concatenation operator.
Delegate to Explorer.Series.equal/2.
Delegate to Explorer.Series.greater/2.
Delegate to Explorer.Series.greater_equal/2.
Accesses all columns in the dataframe.
Accesses the columns given by selector in the dataframe.
Binary and operator.
Accesses a column by name.
Returns the dataframe scoped by this query.
Provides if/2 conditionals inside queries.
Returns a "query-backed" Explorer.DataFrame for use in queries.
Unary not operator.
Binary or operator.
Builds an anonymous function from a query.
Provides unless/2 conditionals inside queries.
Functions
Delegate to Explorer.Series.pow/2.
Delegate to Explorer.Series.multiply/2.
Unary plus operator.
Works with numbers and series.
Delegate to Explorer.Series.add/2.
Unary minus operator.
Works with numbers and series.
Delegate to Explorer.Series.subtract/2.
Delegate to Explorer.Series.divide/2.
Delegate to Explorer.Series.not_equal/2.
Delegate to Explorer.Series.less/2.
Delegate to Explorer.Series.less_equal/2.
String concatenation operator.
Works with strings and series of strings.
Examples
DF.mutate(df, name: first_name <> " " <> last_name)
If you want to concatenate non-string series, you can explicitly cast them to string first:
DF.mutate(df, name: cast(year, :string) <> "-" <> cast(month, :string))
Or use format:
DF.mutate(df, name: format([year, "-", month]))
Delegate to Explorer.Series.equal/2.
Delegate to Explorer.Series.greater/2.
Delegate to Explorer.Series.greater_equal/2.
Accesses all columns in the dataframe.
This is equivalent to across(..).
See the module docs for more information.
Accesses the columns given by selector in the dataframe.
across/1 is used as the generator inside for-comprehensions.
See the module docs for more information.
Binary and operator.
Works with booleans and series.
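For instance, a minimal sketch (not from the original docs) combining two conditions with and inside a filter query:

df = Explorer.DataFrame.new(nums: [1, 2, 3])
Explorer.DataFrame.filter(df, nums > 1 and nums < 3)
# keeps only the row where nums == 2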
Accesses a column by name.
If your column name contains whitespace or starts with uppercase letters, you can still access it by using this macro:
iex> df = Explorer.DataFrame.new("unusual nums": [1, 2, 3])
iex> Explorer.DataFrame.filter(df, col("unusual nums") > 2)
#Explorer.DataFrame<
Polars[1 x 1]
unusual nums s64 [3]
>
name must be an atom, a string, or an integer. It is equivalent to df[name] but inside a query.
This can also be used if you want to access a column programmatically, for example:
iex> df = Explorer.DataFrame.new(nums: [1, 2, 3])
iex> name = :nums
iex> Explorer.DataFrame.filter(df, col(^name) > 2)
#Explorer.DataFrame<
Polars[1 x 1]
nums s64 [3]
>
For traversing multiple columns programmatically, see across/0 and across/1.
Returns the dataframe scoped by this query.
Provides if/2 conditionals inside queries.
Returns a "query-backed" Explorer.DataFrame
for use in queries.
This function is mostly an implementation detail for the *_with
callbacks.
See the "Implementation details" section of the @moduledoc
for details.
There are some limited instances where it's more convenient to work with query-backed DataFrames. For example, if you want to re-use a lazy series, you can do so like this:
alias Explorer.{DataFrame, Query, Series}
df = DataFrame.new(a: [1, 2, 3])
qf = Query.new(df)
gt_1 = Series.greater(qf["a"], 1)
lt_3 = Series.less(qf["a"], 3)
df
|> DataFrame.filter_with(gt_1)
|> DataFrame.to_columns(atom_keys: true)
#=> %{a: [2, 3]}
df
|> DataFrame.filter_with(lt_3)
|> DataFrame.to_columns(atom_keys: true)
#=> %{a: [1, 2]}
df
|> DataFrame.filter_with(Series.and(gt_1, lt_3))
|> DataFrame.to_columns(atom_keys: true)
#=> %{a: [2]}
However, if you think you need new/1, first check that you can't accomplish the same thing with across/0 inside a macro. The latter is usually easier to work with.
Unary not operator.
Works with booleans and series.
Binary or operator.
Works with booleans and series.
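For instance, a minimal sketch (not from the original docs) showing not and or inside a filter query:

df = Explorer.DataFrame.new(nums: [1, 2, 3])
Explorer.DataFrame.filter(df, not (nums == 2) or nums > 2)
# keeps the rows where nums is 1 or 3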
Builds an anonymous function from a query.
This is the entry point used by Explorer.DataFrame.filter/2 and friends to convert queries into anonymous functions.
See the moduledoc for more information.
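For instance, a minimal sketch (not from the original docs), assuming the anonymous function built by query/1 can be passed directly to the *_with APIs, as the equivalence in the "Implementation details" section suggests:

require Explorer.Query

df = Explorer.DataFrame.new(nums: [1, 2, 3])
greater_than_two = Explorer.Query.query(nums > 2)
Explorer.DataFrame.filter_with(df, greater_than_two)
# keeps only the row where nums == 3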
Provides unless/2 conditionals inside queries.