Ten Minutes to Explorer

Introduction

Explorer is a dataframe library for Elixir. A dataframe is a common data structure used in data analysis. It is a two-dimensional table composed of columns and rows similar to a SQL table or a spreadsheet.

Explorer's aim is to provide a simple and powerful API for manipulating dataframes. It is influenced mainly by the tidyverse, but if you've used other dataframe libraries like pandas you shouldn't have much trouble working with Explorer.

This document is meant to give you a crash course in using Explorer. More in-depth documentation can be found in the relevant sections of the docs.

We strongly recommend you run this livebook locally so you can see the outputs and play with the inputs!

Installation

Connect to a Mix project with Explorer installed or:

Mix.install([
  {:explorer, "~> 0.2.0"},
  {:kino, "~> 0.4.1"}
])

Reading and writing data

Data can be read from delimited files (like CSV), NDJSON, Parquet, and the Arrow IPC (feather) format. You can also load in data from a map or keyword list of columns with Explorer.DataFrame.new/1.

For CSV, your 'usual suspects' of options are available:

  • delimiter - A single character used to separate fields within a record. (default: ",")
  • dtypes - A keyword list of [column_name: dtype]. If a type is not specified for a column, it is inferred from the first 1000 rows. (default: [])
  • header - Does the file have a header of column names as the first row or not? (default: true)
  • max_rows - Maximum number of lines to read. (default: nil)
  • null_character - The string that should be interpreted as a nil value. (default: "NA")
  • skip_rows - The number of lines to skip at the beginning of the file. (default: 0)
  • columns - A list of column names to keep. If present, only these columns are read into the dataframe. (default: nil)
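
Putting a few of these options together — a sketch, assuming DataFrame.from_csv!/2 exists as the reading counterpart of the DataFrame.to_csv call used later, and using a throwaway file path:

```elixir
# Write a small semicolon-delimited file, then read it back with options.
# `from_csv!/2` and the option names are assumptions based on the list above.
File.write!("example.csv", """
name;score
ada;100
grace;NA
""")

csv_df =
  Explorer.DataFrame.from_csv!("example.csv",
    delimiter: ";",
    dtypes: [score: :integer],
    null_character: "NA"
  )
```

The "NA" in the score column comes back as nil, per the null_character option.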

Explorer also has multiple example datasets built in, which you can load from the Explorer.Datasets module like so:

df = Explorer.Datasets.fossil_fuels()

You'll notice that the output looks slightly different than many dataframe libraries. Explorer takes inspiration on this front from glimpse in R. A benefit to this approach is that you will rarely need to elide columns.

If you'd like to see a table with your data, we've got you covered there too.

Explorer.DataFrame.table(df)

Writing files is very similar to reading them. The options are a little more limited:

  • header - Should the column names be written as the first line of the file? (default: true)
  • delimiter - A single character used to separate fields within a record. (default: ",")

First, let's add some useful aliases:

alias Explorer.DataFrame
alias Explorer.Series

And then write to a file of your choosing:

input = Kino.Input.text("Filename")
filename = Kino.Input.read(input)
DataFrame.to_csv(df, filename)
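
The write options above can be passed as a trailing argument — a sketch, with a hypothetical "out.tsv" path:

```elixir
# Tab-delimited, no header row. Option names follow the list above.
DataFrame.to_csv(df, "out.tsv", delimiter: "\t", header: false)
```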

Working with Series

Explorer, like Polars, works up from the concept of a Series. In many ways, you can think of a dataframe as a row-aligned map of Series. These are like vectors in R or series in pandas.

For simplicity, Explorer uses the following Series dtypes:

  • :float - 64-bit floating point number
  • :integer - 64-bit signed integer
  • :boolean - Boolean
  • :string - UTF-8 encoded binary
  • :date - Date type that unwraps to Elixir.Date
  • :datetime - DateTime type that unwraps to Elixir.NaiveDateTime

Series can be constructed from Elixir basic types. For example:

s1 = Series.from_list([1, 2, 3])
s2 = Series.from_list(["a", "b", "c"])
s3 = Series.from_list([~D[2011-01-01], ~D[1965-01-21]])

You'll notice that the dtype and size of the Series are at the top of the printed value. You can get those programmatically as well.

Series.dtype(s3)
Series.size(s3)

And the printed values max out at 50:

1..100 |> Enum.to_list() |> Series.from_list()

Series are also nullable.

s = Series.from_list([1.0, 2.0, nil, nil, 5.0])

And you can fill in those missing values using one of the following strategies:

  • :forward - replace nil with the previous value
  • :backward - replace nil with the next value
  • :max - replace nil with the series maximum
  • :min - replace nil with the series minimum
  • :mean - replace nil with the series mean

Series.fill_missing(s, :forward)
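
For comparison, the other strategies on the same series — a quick sketch (each call returns a new series and leaves s unchanged):

```elixir
# Same series as above: [1.0, 2.0, nil, nil, 5.0]
s = Series.from_list([1.0, 2.0, nil, nil, 5.0])

Series.fill_missing(s, :backward)  # nils take the next value: [1.0, 2.0, 5.0, 5.0, 5.0]
Series.fill_missing(s, :max)       # nils become the series max, 5.0
Series.fill_missing(s, :mean)      # nils become the mean of 1.0, 2.0 and 5.0
```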

In the case of mixed numeric types (i.e. integers and floats), Series will downcast to a float:

Series.from_list([1, 2.0])

In all other cases, Series must all be of the same dtype or else you'll get an ArgumentError.

Series.from_list([1, 2, 3, "a"])

One of the goals of Explorer is useful error messages. If you look at the error above, you get:

Cannot make a series from mismatched types. Type of "a" does not match inferred dtype integer.

Hopefully this makes abundantly clear what's going on.

Series also implements the Access protocol. You can slice and dice in many ways:

s = 1..10 |> Enum.to_list() |> Series.from_list()
s[1]
s[-1]
s[0..4]
s[[0, 4, 4]]

And of course, you can convert back to an Elixir list.

Series.to_list(s)

Explorer supports comparisons.

s = 1..11 |> Enum.to_list() |> Series.from_list()
s1 = 11..1 |> Enum.to_list() |> Series.from_list()
Series.equal(s, s1)
Series.equal(s, 5)
Series.not_equal(s, 10)
Series.greater_equal(s, 4)

And arithmetic.

Series.add(s, s1)
Series.subtract(s, 4)
Series.multiply(s, s1)

Remember those helpful errors? We've tried to add those throughout. So if you try to do arithmetic with mismatching dtypes:

s = Series.from_list([1, 2, 3])
s1 = Series.from_list([1.0, 2.0, 3.0])
Series.add(s, s1)

Just kidding! Integers and floats will downcast to floats. Let's try again:

s = Series.from_list([1, 2, 3])
s1 = Series.from_list(["a", "b", "c"])
Series.add(s, s1)

You can flip them around.

s = Series.from_list([1, 2, 3, 4])
Series.reverse(s)

And sort.

1..100 |> Enum.to_list() |> Enum.shuffle() |> Series.from_list() |> Series.sort()

Or argsort.

s = 1..100 |> Enum.to_list() |> Enum.shuffle() |> Series.from_list()
ids = Series.argsort(s)

Which you can pass to Explorer.Series.take/2 if you want the sorted values.

Series.take(s, ids)

You can calculate cumulative values.

s = 1..100 |> Enum.to_list() |> Series.from_list()
Series.cum_sum(s)

Or rolling ones.

Series.rolling_sum(s, 4)

You can count and list unique values.

s = Series.from_list(["a", "b", "b", "c", "c", "c"])
Series.distinct(s)
Series.n_distinct(s)

And you can even get a dataframe showing the counts for each distinct value.

Series.count(s)

Working with DataFrames

A DataFrame is really just a collection of Series of the same size, which is why you can create a DataFrame from a keyword list.

DataFrame.new(a: [1, 2, 3], b: ["a", "b", "c"])

Similarly to Series, the Inspect implementation prints some info at the top and to the left. At the top we see the shape of the dataframe (rows and columns) and then for each column we see the name, dtype, and first five values. We can see a bit more from that built-in dataset we loaded in earlier.

df

You will also see grouping information there, but we'll get to that later. You can get the info yourself directly:

DataFrame.names(df)
DataFrame.dtypes(df)
DataFrame.shape(df)
{DataFrame.n_rows(df), DataFrame.n_columns(df)}

We can grab the head.

DataFrame.head(df)

Or the tail. Let's get a few more values from the tail.

DataFrame.tail(df, 10)

Select

Let's jump right into it. We can select columns pretty simply.

DataFrame.select(df, ["year", "country"])

But Elixir gives us some superpowers. In R there's tidy-select. I don't think we need that in Elixir. Anywhere in Explorer where you need to pass a list of column names, you can instead pass a filtering callback on the column names. It's just an anonymous function applied via df |> DataFrame.names() |> Enum.filter(callback).

DataFrame.select(df, &String.ends_with?(&1, "fuel"))

Want all but some columns? DataFrame.select/3 takes :keep or :drop as the last argument, defaulting to :keep.

DataFrame.select(df, &String.ends_with?(&1, "fuel"), :drop)

Filter

The next verb we'll look at is filter. You can pass in a boolean mask series of the same size as the dataframe, but I find it handier to use callbacks on the dataframe to generate those masks.

DataFrame.filter(df, &Series.equal(&1["country"], "AFGHANISTAN"))

filtered_df =
  df
  |> DataFrame.filter(&Series.equal(&1["country"], "ALGERIA"))
  |> DataFrame.filter(&Series.greater(&1["year"], 2012))

DataFrame.filter(filtered_df, [true, false])

Remember those helpful error messages?

DataFrame.filter(df, &Series.equal(&1["cuontry"], "AFGHANISTAN"))

Mutate

A common task in data analysis is to add columns or change existing ones. Mutate is a handy verb.

DataFrame.mutate(df, new_column: &Series.add(&1["solid_fuel"], &1["cement"]))

Did you catch that? You can pass in new columns as keyword arguments. It also works to transform existing columns.

DataFrame.mutate(df,
  gas_fuel: &Series.cast(&1["gas_fuel"], :float),
  gas_and_liquid_fuel: &Series.add(&1["gas_fuel"], &1["liquid_fuel"])
)

DataFrame.mutate/2 is flexible though. You may not always want to use keyword arguments. Given that column names are String.t(), it may make more sense to use a map.

DataFrame.mutate(df, %{"gas_fuel" => &Series.subtract(&1["gas_fuel"], 10)})

DataFrame.transmute/2, which is like DataFrame.mutate/2 but retains only the specified columns, is forthcoming.

Arrange

Sorting the dataframe is pretty straightforward.

DataFrame.arrange(df, "year")

But it comes with some tricks up its sleeve.

DataFrame.arrange(df, asc: "total", desc: "year")

Sort operations happen left to right. And keyword list args permit specifying the direction.

Distinct

Okay, as expected here too. Very straightforward.

DataFrame.distinct(df, columns: ["year", "country"])

You can specify whether to keep the other columns as well.

DataFrame.distinct(df, columns: ["country"], keep_all?: true)

Rename

Rename can take either a list of new names or a callback that is passed to Enum.map/2 against the names. You can also use a map or keyword args to rename specific columns.

DataFrame.rename(df, year: "year_test")
DataFrame.rename(df, &(&1 <> "_test"))

Dummies

This is fun! We can get dummy variables for unique values.

DataFrame.dummies(df, ["year"])
DataFrame.dummies(df, ["country"])

Sampling

Random sampling can return either a fraction or a specific number of rows, with or without replacement, and the function is seedable.

DataFrame.sample(df, 10)
DataFrame.sample(df, 0.4)

Trying for those helpful error messages again.

DataFrame.sample(df, 10000)
DataFrame.sample(df, 10000, replacement: true)

Pull/slice/take

Slicing and dicing can be done with the Access protocol or with explicit pull/slice/take functions.

df["year"]
DataFrame.pull(df, "year")
df[["year", "country"]]
DataFrame.take(df, [1, 20, 50])

Negative offsets work for slice!

DataFrame.slice(df, -10, 5)
DataFrame.slice(df, 10, 5)

Pivot

We can pivot_longer/3 and pivot_wider/4. These are inspired by tidyr.

There are some shortcomings in pivot_wider/4 related to Polars. The values_from column must be a numeric type.

DataFrame.pivot_longer(df, ["year", "country"], value_columns: &String.ends_with?(&1, "fuel"))
DataFrame.pivot_wider(df, "country", "total", id_columns: ["year"])

Let's make those names look nicer!

tidy_names = fn name ->
  name
  |> String.downcase()
  |> String.replace(~r/\s/, " ")
  |> String.replace(~r/[^A-Za-z\s]/, "")
  |> String.replace(" ", "_")
end

df |> DataFrame.pivot_wider("country", "total", id_columns: ["year"]) |> DataFrame.rename(tidy_names)

Joins

Joining is fast and easy. You can specify the columns to join on and how to join. Polars even supports cartesian (cross) joins, so Explorer does too.

df1 = DataFrame.select(df, ["year", "country", "total"])
df2 = DataFrame.select(df, ["year", "country", "cement"])
DataFrame.join(df1, df2)
df3 = df |> DataFrame.select(["year", "cement"]) |> DataFrame.slice(0, 500)
DataFrame.join(df1, df3, how: :left)
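
Reusing the frames from above, specifying the join keys explicitly and requesting a cross join might look like this — a sketch; the on: option and the :cross value for how: are assumptions based on the prose above:

```elixir
# Explicit join keys (otherwise inferred from the shared column names):
DataFrame.join(df1, df2, on: ["year", "country"])

# Cartesian product of the two frames:
DataFrame.join(df1, df3, how: :cross)
```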

Grouping

Explorer supports groupby operations. They're limited based on what's possible in Polars, but they do most of what you need to do.

grouped = DataFrame.group_by(df, ["country"])

Notice that the Inspect call now shows groups as well as rows and columns. You can, of course, get them explicitly.

DataFrame.groups(grouped)

And you can ungroup explicitly.

DataFrame.ungroup(grouped)

But what we care about the most is aggregating! Let's see which country has the max per_capita value.

grouped |> DataFrame.summarise(per_capita: [:max]) |> DataFrame.arrange(desc: :per_capita_max)

Qatar it is. Several aggregations are available, including :min and :max; see the DataFrame.summarise/2 docs for the full list.

The API is similar to mutate: you can use keyword args or a map and specify aggregations to use.

grouped |> DataFrame.summarise(per_capita: [:max, :min], total: [:min])
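
The map form mentioned above might look like this — a sketch using string column names:

```elixir
# Same aggregation as the keyword version, keyed by column name strings.
DataFrame.summarise(grouped, %{"per_capita" => [:max, :min], "total" => [:min]})
```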

Speaking of mutate, it's 'group-aware', as are arrange, distinct, and n_rows.

DataFrame.arrange(grouped, desc: :total)

That's it!

Well, not quite: this is certainly not exhaustive, but I hope it gives you a good idea of what can be done and what the 'flavour' of the API is like. I'd love contributions and issues raised where you find them!