# Gladvent

An Advent of Code runner for Gleam
This library is intended to be imported into your Gleam project and used as a command runner for your Advent of Code solutions.

To add this library to your project, run `gleam add gladvent` and add `import gladvent` to your main Gleam file.

This package works only on Gleam's Erlang target!
Due to changes made in Gleam 1.5, users that were calling gladvent via `gleam run -m` should upgrade to v2 and add a call to `gladvent.run` in their project's `main` function.
## Using the library

- Add gladvent as a dependency via `gleam add gladvent`
- Add `gladvent.run()` to your project's `main` function.
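Putting those two steps together, a minimal `main` module might look like the following sketch (the module name is just an example; use your own project's main module):

```gleam
// src/my_project.gleam (filename is illustrative)
import gladvent

pub fn main() {
  // Hands control to gladvent, which parses the CLI arguments
  // (e.g. `gleam run run 1`) and runs the matching solutions.
  gladvent.run()
}
```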
## Multi-year support

Gladvent now comes with out-of-the-box multi-year support via the `--year` flag.

For convenience it defaults to the current year: passing `--year=YEAR` to the `run`, `run all` or `new` commands will use the year specified, or the current year if the flag was not provided.
## Seeing help messages

- To see available subcommands: `gleam run -- --help`
- To see help for the `run` command: `gleam run run --help`
- To see help for the `run all` command: `gleam run run all --help`
- To see help for the `new` command: `gleam run new --help`
## General Workflow

Where X is the day you'd like to add:

> Note: this method requires all day solutions to be in `src/aoc_<year>/` with filenames `day_X.gleam`, each solution module containing a `fn pt_1(String) -> Int` and a `fn pt_2(String) -> Int`.

1. run `gleam run new X`
2. add your input to `input/<YEAR>/X.txt`
3. add your code to `src/aoc_<YEAR>/day_X.gleam`
4. run `gleam run run X`
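Based on the required signatures above, a day module has this general shape (the year, day, and bodies below are illustrative, not the exact stub `new` generates):

```gleam
// src/aoc_2024/day_1.gleam — year and day are examples
pub fn pt_1(input: String) -> Int {
  // `input` is the raw contents of input/2024/1.txt.
  // Replace this with your part 1 solution.
  todo
}

pub fn pt_2(input: String) -> Int {
  todo
}
```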
## Available commands

This project provides your application with 2 command groups, `new` and `run`:

### New

- `new`: create `src/aoc_<year>/day_<day>.gleam` and `input/<year>/<day>.txt` files that correspond to the specified days
  - format: `gleam run new a b c ...`
### Run

The `run` command expects input files to be in the `input/<year>` directory, and code to be in `src/aoc_<year>/` (corresponding to the files created by the `new` command).

- `run`: run the specified days
  - format: `gleam run run a b c ...`
- `run all`: run all registered days
  - format: `gleam run run all`
Note:

- any triggered `assert`, `panic` or `todo` will be captured and printed, for example:

```
Part 1: error: todo - unimplemented in module aoc_2024/day_1 in function pt_1 at line 2
```
## Fetching problem inputs

When stubbing out a new day's solution with the `new` command, you can use the `--fetch` flag to tell gladvent to fetch your problem input from the Advent of Code website.

Some things to note:

- The `AOC_COOKIE` environment variable must be set with your Advent of Code session cookie.
- Gladvent will only attempt to fetch your input if the input file for the day being requested does not exist. This prevents accidental and redundant calls to the Advent of Code website; there should be no reason to fetch input data for the same day more than once.
## Reusable parse functions

Gladvent supports modules that provide a `pub fn parse(String) -> a`, where the type `a` matches the type of the argument to the runner functions `pt_1` and `pt_2`.

If this `parse` function is present, gladvent will pick it up and run it only once, providing its output to both runner functions.

An example looks like this:
```gleam
import gleam/int

pub fn parse(input: String) -> Int {
  let assert Ok(i) = int.parse(input)
  i
}

pub fn pt_1(input: Int) -> Int {
  input + 1
}

pub fn pt_2(input: Int) -> Int {
  input + 2
}
```
Note: gladvent now leverages Gleam's `export package-interface` functionality to type-check your `parse` and `pt_{1|2}` functions to make sure that they are compatible with each other.
## Defining expectations for easy refactoring

One of the most satisfying aspects of Advent of Code (for me), second only to that sweet feeling of first solving a problem, is iteration and refactoring.

Gladvent makes it easy to define expected outputs for all your solutions in your `gleam.toml`, so that you can refactor as much as you want without constantly comparing against your submissions on the Advent of Code website.
### Expectations in gleam.toml

Defining expectations is as simple as adding sections to your `gleam.toml` in the following format:

```toml
[gladvent.<year as int>]
1 = { pt_1 = <int or string>, pt_2 = <int or string> }
2 = { pt_1 = <int or string>, pt_2 = <int or string> }
3 = { pt_1 = <int or string>, pt_2 = <int or string> }
...
```
For example, to set the expectations for Dec 1st 2024 (2024 day 1) you would add something like:

```toml
[gladvent.2024]
1 = { pt_1 = 1, pt_2 = 2 }
```
When running, gladvent will detect whether a specific day has its expectations set and, if so, will print out the result for you.

Let's say that your computed solution for 2024 day 1 is actually 1 for `pt_1` and 3 for `pt_2`; the output will look like this:

```
Ran 2024 day 1:
Part 1: ✅ met expected value: 1
Part 2: ❌ unmet expectation: got 3, expected 2
```
## Example inputs

Sometimes it's helpful to run Advent of Code solutions against example inputs to verify expectations.

Gladvent now provides an `--example` flag in both the `new` and `run` commands to conveniently support that workflow without needing to modify your actual problem input files.

Example input files will be generated at, and run from, `input/<year>/<day>.example.txt`.

Note: gladvent will not compare your solution output against the expectations defined in `gleam.toml` when running in example mode.
## Display execution time

Use the `--timed` flag when running your solutions to display how long each part took to solve.

For example:

```
Ran 2024 day 1:
Part 1: ✅ met expected value: 1579939 (in 885 µs)
Part 2: ✅ met expected value: 20351745 (in 605 µs)
```

Note: as the output of the `parse` function is reused for both parts, its execution time is not included in the displayed time.
## FAQ

### Why did you make this?

It seemed fun. I like small command line utilities, and I wanted a way to do Advent of Code in Gleam without the overhead of lots of copy-pasting and wiring things together just to get it to run.

### Why run as a command line utility and not just use unit tests?

I thought a lot about that, and I simply prefer the overall interactivity of a CLI, as well as allowing for endless runs or runs with configurable timeouts. Having it run as part of `eunit` doesn't provide as much flexibility as I would like. Other testing frameworks have been popping up, but I leave the decision to use them up to you!
### Why did you change your mind on fetching inputs?

I started to reflect a bit after gladvent's users kept asking for the feature. My initial rationale was twofold:

- to encourage people to use the Advent of Code website, as I felt like fetching inputs somehow took away from that
- to minimise the risk that people would use a tool I made to spam the Advent of Code website with requests

Fetching inputs in a smart way (only ever if your input file does not already exist, so you should only need to do it once per day) still requires users to visit the Advent of Code website for the following (things gladvent will never do):

- fetching the description of the daily problems
- submitting solutions