Ecto

Most web applications today need some form of data validation and persistence. In the Elixir ecosystem, we have Ecto to enable this. Before we jump into building database-backed web features, we’re going to focus on the finer details of Ecto to give a solid base to build our web features on top of. Let’s get started!

Ecto currently has adapters for the following databases:

  • PostgreSQL
  • MySQL
  • MSSQL
  • SQLite3
  • MongoDB

Newly generated Phoenix projects include Ecto with the PostgreSQL adapter by default (you can pass the --no-ecto flag to exclude this).

For a thorough, general guide to Ecto, check out the Ecto getting started guide. For an overview of all Ecto-specific mix tasks for Phoenix, see the mix tasks guide.

This guide assumes that we have generated our new application with Ecto integration and that we will be using PostgreSQL. For instructions on switching to MySQL, please see the Using MySQL section.

The default Postgres configuration has a superuser account with username ‘postgres’ and password ‘postgres’. If you take a look at config/dev.exs, you’ll see that Phoenix works off this assumption. If you don’t already have this account set up on your machine, you can connect to your Postgres instance by typing psql and then entering the following commands:

CREATE USER postgres;
ALTER USER postgres PASSWORD 'postgres';
ALTER USER postgres WITH SUPERUSER;

Now that we have Ecto and Postgres installed and configured, the easiest way to use Ecto is to generate an Ecto schema through the phx.gen.schema task. Ecto schemas are a way for us to specify how Elixir data types map to and from external sources, such as database tables. Let’s generate a User schema with name, email, bio, and number_of_pets fields.

$ mix phx.gen.schema User users name:string email:string \
bio:string number_of_pets:integer

* creating ./lib/hello/user.ex
* creating priv/repo/migrations/20170523151118_create_user.exs

Remember to update your repository by running migrations:

   $ mix ecto.migrate

A couple of files were generated with this task. First, we have a user.ex file, containing our Ecto schema with the definition of the fields we passed to the task. Next, a migration file was generated inside priv/repo/migrations, which will create the database table that our schema maps to.

With our files in place, let’s follow the instructions and run our migration. If the repo hasn’t been created yet, run the mix ecto.create task. Next we can run:

$ mix ecto.migrate
Compiling 1 file (.ex)
Generated hello app

[info]  == Running Hello.Repo.Migrations.CreateHello.User.change/0 forward

[info]  create table users

[info]  == Migrated in 0.0s

Mix assumes that we are in the development environment unless we tell it otherwise with MIX_ENV=another_environment mix some_task. Our Ecto task will get its environment from Mix, and that’s how we get the correct suffix to our database name.
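For example, to run our migrations against the test database rather than the development one, we could run:

$ MIX_ENV=test mix ecto.migrate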

If we log in to our database server, and connect to our hello_dev database, we should see our users table. Ecto assumes that we want an integer column called id as our primary key, so we should see a sequence generated for that as well.

$ psql -U postgres

Type "help" for help.

postgres=# \connect hello_dev
You are now connected to database "hello_dev" as user "postgres".
hello_dev=# \d
                List of relations
 Schema |       Name        |   Type   |  Owner
--------+-------------------+----------+----------
 public | schema_migrations | table    | postgres
 public | users             | table    | postgres
 public | users_id_seq      | sequence | postgres
(3 rows)
hello_dev=# \q

If we take a look at the migration generated by phx.gen.schema in priv/repo/migrations, we’ll see that it will add the columns we specified. It will also add timestamp columns for inserted_at and updated_at which come from the timestamps/0 function.

defmodule Hello.Repo.Migrations.CreateHello.User do
  use Ecto.Migration

  def change do
    create table(:users) do
      add :name, :string
      add :email, :string
      add :bio, :string
      add :number_of_pets, :integer

      timestamps()
    end
  end
end

And here’s what that translates to in the actual users table.

hello_dev=# \d users
Table "public.users"
Column         |            Type             | Modifiers
---------------+-----------------------------+----------------------------------------------------
id             | integer                     | not null default nextval('users_id_seq'::regclass)
name           | character varying(255)      |
email          | character varying(255)      |
bio            | character varying(255)      |
number_of_pets | integer                     |
inserted_at    | timestamp without time zone | not null
updated_at     | timestamp without time zone | not null
Indexes:
"users_pkey" PRIMARY KEY, btree (id)

Notice that we do get an id column as our primary key by default, even though it isn’t listed as a field in our migration.

The Repo

Our Hello.Repo module is the foundation we need to work with databases in a Phoenix application. Phoenix generated it for us in lib/hello/repo.ex, and this is what it looks like.

defmodule Hello.Repo do
  use Ecto.Repo, otp_app: :hello

  @doc """
  Dynamically loads the repository url from the
  DATABASE_URL environment variable.
  """
  def init(_, opts) do
    {:ok, Keyword.put(opts, :url, System.get_env("DATABASE_URL"))}
  end
end

Our repo has three main tasks - to bring in all the common query functions from Ecto.Repo, to set the otp_app name equal to our application name, and to initialize the options passed to the database adapter in init/2. We’ll talk more about how to use the Repo in a bit.
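One practical consequence of that init/2 clause: when the DATABASE_URL environment variable is set, it overrides the repo options from our config files. Ecto expects the URL in ecto://USER:PASS@HOST/DATABASE form, so we could point our app at a different database like this (the credentials here are purely illustrative):

$ DATABASE_URL=ecto://postgres:postgres@localhost/hello_dev mix phx.server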

When phx.new generated our application, it included some basic repo configuration as well. Let’s look at config/dev.exs.

...
# Configure your database
config :hello, Hello.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "hello_dev",
  hostname: "localhost",
  pool_size: 10
...

It begins by configuring our otp_app name and repo module. Then it sets the adapter – Postgres, in our case. It also sets our login credentials. Of course, you can change these to match your actual credentials if they are different.

We also have similar configuration in config/test.exs and config/prod.secret.exs which can also be changed to match your actual credentials.

The Schema

Ecto schemas are responsible for mapping Elixir values to external data sources, as well as mapping external data back into Elixir data structures. Ecto also handles data validation and type casting with changesets, which we’ll discuss in a moment. Beyond individual fields, we can also define relationships to other schemas in our applications; for example, our User schema might have many Posts, and each Post would belong to a User.
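As a quick illustration of that relationship (our app has no Post schema; this sketch is purely hypothetical), the two sides could be declared with Ecto’s association macros:

defmodule Hello.Post do
  use Ecto.Schema

  schema "posts" do
    field :title, :string
    # belongs_to gives this schema a user_id foreign key pointing at users
    belongs_to :user, Hello.User

    timestamps()
  end
end

The User schema would then declare the inverse with has_many :posts, Hello.Post inside its schema block.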

Here’s the User schema that Phoenix generated for us.

defmodule Hello.User do
  use Ecto.Schema
  import Ecto.Changeset
  alias Hello.User

  schema "users" do
    field :bio, :string
    field :email, :string
    field :name, :string
    field :number_of_pets, :integer

    timestamps()
  end

  @doc false
  def changeset(%User{} = user, attrs) do
    user
    |> cast(attrs, [:name, :email, :bio, :number_of_pets])
    |> validate_required([:name, :email, :bio, :number_of_pets])
  end
end

Ecto schemas at their core are simply Elixir structs. Our schema block is what tells Ecto how to cast our %User{} struct fields to and from the external users table.
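In fact, we can build one of these structs directly in iex; every field simply defaults to nil until we cast data into it (the :built metadata just means the struct hasn’t been loaded from the database):

iex> %Hello.User{}
%Hello.User{__meta__: #Ecto.Schema.Metadata<:built, "users">, bio: nil,
 email: nil, id: nil, inserted_at: nil, name: nil, number_of_pets: nil,
 updated_at: nil}

Often, the ability to simply cast data to and from the database isn’t enough, and extra data validation is required. This is where Ecto changesets come in. Let’s dive in!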

Changesets and Validations

Changesets define a pipeline of transformations our data needs to undergo before it will be ready for our application to use. These transformations might include type-casting, user input validation, and filtering out any extraneous parameters. Often we’ll use changesets to validate user input before writing it to the database. Ecto Repos are also changeset-aware, which allows them not only to refuse invalid data, but also perform the minimal database updates possible by inspecting the changeset to know which fields have changed.

Let’s take a closer look at our default changeset function.

  def changeset(%User{} = user, attrs) do
    user
    |> cast(attrs, [:name, :email, :bio, :number_of_pets])
    |> validate_required([:name, :email, :bio, :number_of_pets])
  end

Right now, we have two transformations in our pipeline. In the first call, we invoke Ecto.Changeset’s cast/3, passing in our external parameters and marking which fields are allowed to be cast. cast/3 takes a struct first, then the parameters (the proposed updates), and finally the list of fields to be cast; it will only accept fields that exist in the schema. Next, validate_required/3 checks that the given list of fields is present in the changeset that cast/3 returns. By default with the generator, all fields are required.

We can verify this functionality in iex. Let’s fire up our application inside iex by running iex -S mix. In order to minimize typing and make this easier to read, let’s alias our Hello.User struct.

$ iex -S mix

iex> alias Hello.User
Hello.User

Next, let’s build a changeset from our schema with an empty User struct, and an empty map of parameters.

iex> changeset = User.changeset(%User{}, %{})
#Ecto.Changeset<action: nil, changes: %{},
 errors: [name: {"can't be blank", [validation: :required]},
  email: {"can't be blank", [validation: :required]},
  bio: {"can't be blank", [validation: :required]},
  number_of_pets: {"can't be blank", [validation: :required]}],
 data: #Hello.User<>, valid?: false>

Once we have a changeset, we can check whether it is valid.

iex> changeset.valid?
false

Since this one is not valid, we can ask it what the errors are.

iex> changeset.errors
[name: {"can't be blank", [validation: :required]},
 email: {"can't be blank", [validation: :required]},
 bio: {"can't be blank", [validation: :required]},
 number_of_pets: {"can't be blank", [validation: :required]}]

Now, let’s make number_of_pets optional. In order to do this, we simply remove it from the list.

    |> validate_required([:name, :email, :bio])

Now our changeset should only tell us that name, email, and bio can’t be blank. We can test that by running recompile() inside iex and then rebuilding our changeset.

iex> recompile()
Compiling 1 file (.ex)
:ok

iex> changeset = User.changeset(%User{}, %{})
#Ecto.Changeset<action: nil, changes: %{},
 errors: [name: {"can't be blank", [validation: :required]},
  email: {"can't be blank", [validation: :required]},
  bio: {"can't be blank", [validation: :required]}],
 data: #Hello.User<>, valid?: false>

iex> changeset.errors
[name: {"can't be blank", [validation: :required]},
 email: {"can't be blank", [validation: :required]},
 bio: {"can't be blank", [validation: :required]}]

What happens if we pass a key/value pair that is neither defined in the schema nor required?

Inside our existing IEx shell, let’s create a params map with valid values plus an extra random_key: "random value".

iex> params = %{name: "Joe Example", email: "joe@example.com", bio: "An example to all", number_of_pets: 5, random_key: "random value"}
%{email: "joe@example.com", name: "Joe Example", bio: "An example to all",
number_of_pets: 5, random_key: "random value"}

Next, let’s use our new params map to create another changeset.

iex> changeset = User.changeset(%User{}, params)
#Ecto.Changeset<action: nil,
 changes: %{bio: "An example to all", email: "joe@example.com",
   name: "Joe Example", number_of_pets: 5}, errors: [],
 data: #Hello.User<>, valid?: true>

Our new changeset is valid.

iex> changeset.valid?
true

We can also check the changeset’s changes - the map we get after all of the transformations are complete.

iex> changeset.changes
%{bio: "An example to all", email: "joe@example.com", name: "Joe Example",
  number_of_pets: 5}

Notice that our random_key and its value have been removed from the final changeset. Changesets allow us to cast external data, such as user input on a web form or data from a CSV file, into valid data for our system. Parameters that aren’t allowed will be stripped, and bad data that cannot be cast according to our schema will be highlighted in the changeset errors.
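We can see that second behavior too. As a quick sketch, if we pass a value that can’t be cast to the field’s type, say a string where number_of_pets expects an integer, cast/3 records a cast error instead of accepting the value:

iex> changeset = User.changeset(%User{}, %{number_of_pets: "none"})
iex> changeset.errors[:number_of_pets]
{"is invalid", [type: :integer, validation: :cast]}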

We can validate more than just whether a field is required or not. Let’s take a look at some finer-grained validations.

What if we had a requirement that all biographies in our system must be at least two characters long? We can do this easily by adding another transformation to the pipeline in our changeset which validates the length of the bio field.

  def changeset(%User{} = user, attrs) do
    user
    |> cast(attrs, [:name, :email, :bio, :number_of_pets])
    |> validate_required([:name, :email, :bio, :number_of_pets])
    |> validate_length(:bio, min: 2)
  end

Now, if we try to cast data containing a value of “A” for our user’s bio, we should see the failed validation in the changeset’s errors.

iex> changeset = User.changeset(%User{}, %{bio: "A"})
iex> changeset.errors[:bio]
{"should be at least %{count} character(s)",
 [count: 2, validation: :length, min: 2]}

If we also have a requirement for the maximum length that a bio can have, we can simply add another validation.

  def changeset(%User{} = user, attrs) do
    user
    |> cast(attrs, [:name, :email, :bio, :number_of_pets])
    |> validate_required([:name, :email, :bio, :number_of_pets])
    |> validate_length(:bio, min: 2)
    |> validate_length(:bio, max: 140)
  end

Let’s say we want to perform at least some rudimentary format validation on the email field. All we want to check for is the presence of the “@”. The validate_format/3 function is just what we need.

  def changeset(%User{} = user, attrs) do
    user
    |> cast(attrs, [:name, :email, :bio, :number_of_pets])
    |> validate_required([:name, :email, :bio, :number_of_pets])
    |> validate_length(:bio, min: 2)
    |> validate_length(:bio, max: 140)
    |> validate_format(:email, ~r/@/)
  end

If we try to cast a user with an email of “example.com”, we should see an error message like the following.

iex> changeset = User.changeset(%User{}, %{email: "example.com"})
iex> changeset.errors[:email]
{"has invalid format", [validation: :format]}

There are many more validations and transformations we can perform in a changeset. Please see the Ecto Changeset documentation for more information.
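For instance, if we wanted to reject a negative number_of_pets, one more call in the same pipeline would handle it; this is a hypothetical rule, purely to illustrate validate_number/3:

    |> validate_number(:number_of_pets, greater_than_or_equal_to: 0)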

Data Persistence

We’ve talked a lot about migrations and data storage, but we haven’t yet persisted any of our schemas or changesets. We briefly looked at our repo module in lib/hello/repo.ex earlier; now it’s time to put it to use. Ecto repos are the interface to a storage system, be it a database like PostgreSQL or an external service like a RESTful API. The Repo module’s purpose is to take care of the finer details of persistence and data querying for us. As the caller, we only care about fetching and persisting data. The Repo takes care of the underlying database adapter communication, connection pooling, and error translation for database constraint violations.

Let’s head back over to IEx with iex -S mix, and insert a couple of users into the database.

iex> alias Hello.{Repo, User}
[Hello.Repo, Hello.User]
iex> Repo.insert(%User{email: "user1@example.com"})
[debug] QUERY OK db=4.6ms
INSERT INTO "users" ("email","inserted_at","updated_at") VALUES ($1,$2,$3) RETURNING "id" ["user1@example.com", {{2017, 5, 23}, {19, 6, 4, 822044}}, {{2017, 5, 23}, {19, 6, 4, 822055}}]
{:ok,
 %Hello.User{__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
  bio: nil, email: "user1@example.com", id: 3,
  inserted_at: ~N[2017-05-23 19:06:04.822044], name: nil, number_of_pets: nil,
  updated_at: ~N[2017-05-23 19:06:04.822055]}}

iex> Repo.insert(%User{email: "user2@example.com"})
[debug] QUERY OK db=5.1ms
INSERT INTO "users" ("email","inserted_at","updated_at") VALUES ($1,$2,$3) RETURNING "id" ["user2@example.com", {{2017, 5, 23}, {19, 6, 8, 452545}}, {{2017, 5, 23}, {19, 6, 8, 452556}}]
{:ok,
 %Hello.User{__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
  bio: nil, email: "user2@example.com", id: 4,
  inserted_at: ~N[2017-05-23 19:06:08.452545], name: nil, number_of_pets: nil,
  updated_at: ~N[2017-05-23 19:06:08.452556]}}

We started by aliasing our User and Repo modules for easy access. Next, we called Repo.insert/1 and passed a user struct. Since we’re in the dev environment, we can see the debug logs for the query our Repo performed when inserting the underlying %User{} data. We received a 2-tuple back with {:ok, %User{}}, which lets us know the insertion was successful.
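Note that inserting bare structs like this skips our changeset validations entirely. In everyday code we would usually hand Repo.insert/1 a changeset instead, because an invalid changeset is rejected with an {:error, changeset} tuple before any query is run. A quick sketch in the same session:

iex> {:error, changeset} = Repo.insert(User.changeset(%User{}, %{}))
iex> changeset.valid?
false

With a couple of users inserted, let’s fetch them back out of the repo.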

iex> Repo.all(User)
[debug] QUERY OK source="users" db=2.7ms
SELECT u0."id", u0."bio", u0."email", u0."name", u0."number_of_pets", u0."inserted_at", u0."updated_at" FROM "users" AS u0 []
[%Hello.User{__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
  bio: nil, email: "user1@example.com", id: 3,
  inserted_at: ~N[2017-05-23 19:06:04.822044], name: nil, number_of_pets: nil,
  updated_at: ~N[2017-05-23 19:06:04.822055]},
 %Hello.User{__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
  bio: nil, email: "user2@example.com", id: 4,
  inserted_at: ~N[2017-05-23 19:06:08.452545], name: nil, number_of_pets: nil,
  updated_at: ~N[2017-05-23 19:06:08.452556]}]

That was easy! Repo.all/1 takes a data source, our User schema in this case, and translates that to an underlying SQL query against our database. After it fetches the data, the Repo uses our User schema to map the database values back into Elixir data structures. We’re not just limited to basic querying; Ecto includes a full-fledged query DSL for advanced SQL generation. In addition to a natural Elixir DSL, Ecto’s query engine gives us great features such as SQL injection protection and compile-time optimization of queries. Let’s try it out.

iex> import Ecto.Query
Ecto.Query

iex> Repo.all(from u in User, select: u.email)
[debug] QUERY OK source="users" db=2.4ms
SELECT u0."email" FROM "users" AS u0 []
["user1@example.com", "user2@example.com"]

First, we imported Ecto.Query, which imports the from macro of Ecto’s Query DSL. Next, we built a query which selects all the email addresses in our users table. Let’s try another example.

iex> Repo.one(from u in User, where: ilike(u.email, "%1%"),
                              select: count(u.id))
[debug] QUERY OK source="users" db=1.6ms SELECT count(u0."id") FROM "users" AS u0 WHERE (u0."email" ILIKE '%1%') []
1

Now we’re starting to get a taste of Ecto’s rich querying capabilities. We used Repo.one/1 to fetch the count of all users with an email address containing “1”, and received the expected count in return. This just scratches the surface of Ecto’s query interface; much more is supported, such as sub-querying, interval queries, and advanced select statements. For example, let’s build a query to fetch a map of all user ids to their email addresses.

iex> Repo.all(from u in User, select: %{u.id => u.email})
[debug] QUERY OK source="users" db=0.9ms
SELECT u0."id", u0."email" FROM "users" AS u0 []
[%{3 => "user1@example.com"}, %{4 => "user2@example.com"}]

That little query packed a big punch. It both fetched all user emails from the database and efficiently built a map of the results in one go. You should browse the Ecto.Query documentation to see the breadth of supported query features.

In addition to inserts, we can also perform updates and deletes with Repo.update/1 and Repo.delete/1 to update or delete a single schema. Ecto also supports bulk persistence with the Repo.insert_all, Repo.update_all, and Repo.delete_all functions.
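Here’s a minimal sketch of the single-row flow, reusing our earlier iex session (the id 3 comes from the insert output above; yours may differ). Repo.update/1 wants a changeset so it can compute a minimal UPDATE, while Repo.delete/1 happily takes the struct itself:

iex> user = Repo.get(User, 3)
iex> changeset = User.changeset(user, %{name: "User One", bio: "Hello there"})
iex> {:ok, user} = Repo.update(changeset)
iex> Repo.delete(user)

And as a bulk example, Repo.update_all(User, set: [bio: "TBD"]) would rewrite every user’s bio in a single query.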

There is quite a bit more that Ecto can do and we’ve only barely scratched the surface. With a solid Ecto foundation in place, we’re now ready to continue building our app and integrate the web facing application with our backend persistence. Along the way, we’ll expand our Ecto knowledge and learn how to properly isolate our web interface from the underlying details of our system. Please take a look at the Ecto documentation for the rest of the story.

In our context guide, we’ll find out how to wrap up our Ecto access and business logic behind modules that group related functionality. We’ll see how Phoenix helps us design maintainable applications, and we’ll find out about other neat Ecto features along the way.

Using MySQL

Phoenix applications are configured to use PostgreSQL by default, but what if we want to use MySQL instead? In this guide, we’ll walk through changing that default whether we are about to create a new application, or whether we have an existing one configured for PostgreSQL.

If we are about to create a new application, configuring our application to use MySQL is easy. We can simply pass the --database mysql flag to phx.new and everything will be configured correctly.

$ mix phx.new hello_phoenix --database mysql

This will set up all the correct dependencies and configuration for us automatically. Once we install those dependencies with mix deps.get, we’ll be ready to begin working with Ecto in our application.

If we have an existing application, all we need to do is switch adapters and make some small configuration changes.

To switch adapters, we need to remove the Postgrex dependency and add a new one for Mariaex instead.

Let’s open up our mix.exs file and do that now.

defmodule HelloPhoenix.Mixfile do
  use Mix.Project

  . . .
  # Specifies your project dependencies.
  #
  # Type `mix help deps` for examples and options.
  defp deps do
    [
      {:phoenix, "~> 1.3.0"},
      {:phoenix_pubsub, "~> 1.0"},
      {:phoenix_ecto, "~> 3.2"},
      {:mariaex, ">= 0.0.0"},
      {:phoenix_html, "~> 2.10"},
      {:phoenix_live_reload, "~> 1.0", only: :dev},
      {:gettext, "~> 0.11"},
      {:cowboy, "~> 1.0"}
    ]
  end
end

Next, we need to configure our new adapter. Let’s open up our config/dev.exs file and do that.

config :hello_phoenix, HelloPhoenix.Repo,
  adapter: Ecto.Adapters.MySQL,
  username: "root",
  password: "",
  database: "hello_phoenix_dev"

If we have an existing configuration block for our HelloPhoenix.Repo, we can simply change the values to match our new ones. The most important thing is to make sure we are using the MySQL adapter: adapter: Ecto.Adapters.MySQL.

We also need to configure the correct values in the config/test.exs and config/prod.secret.exs files as well.
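As a sketch, the repo block in config/test.exs might end up looking like this with MySQL (the database name and credentials are assumptions; match them to your setup, and keep the SQL sandbox pool that Phoenix generates for tests):

config :hello_phoenix, HelloPhoenix.Repo,
  adapter: Ecto.Adapters.MySQL,
  username: "root",
  password: "",
  database: "hello_phoenix_test",
  hostname: "localhost",
  pool: Ecto.Adapters.SQL.Sandbox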

Now all we need to do is fetch our new dependency, and we’ll be ready to go.

$ mix do deps.get, compile

With our new adapter installed and configured, we’re ready to create our database.

$ mix ecto.create

The database for HelloPhoenix.Repo has been created. We’re also ready to run any migrations, or do anything else with Ecto that we might choose.

$ mix ecto.migrate
[info] == Running HelloPhoenix.Repo.Migrations.CreateUser.change/0 forward
[info] create table users
[info] == Migrated in 0.2s