nibble

Types

A dead end represents the point where a parser that had committed to a path failed. It contains the position of the failure, the Error describing the failure, and the context stack for any parsers that were running at the time.

pub type DeadEnd(tok, ctx) {
  DeadEnd(
    pos: Span,
    problem: Error(tok),
    context: List(#(Span, ctx)),
  )
}

Constructors

  • DeadEnd(
      pos: Span,
      problem: Error(tok),
      context: List(#(Span, ctx)),
    )
pub type Error(tok) {
  BadParser(String)
  Custom(String)
  EndOfInput
  Expected(String, got: tok)
  Unexpected(tok)
}

Constructors

  • BadParser(String)
  • Custom(String)
  • EndOfInput
  • Expected(String, got: tok)
  • Unexpected(tok)
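
As a sketch of how DeadEnd and Error fit together, here is a hypothetical helper (not part of nibble) that renders a dead end as a readable message; the string.inspect calls are just stand-ins for whatever formatting you prefer:

import gleam/list
import gleam/string
import nibble

// A hypothetical helper: render one dead end as a human-readable line
// using its position, problem, and context stack.
pub fn describe(dead_end: nibble.DeadEnd(tok, ctx)) -> String {
  let problem = case dead_end.problem {
    nibble.BadParser(message) -> "bad parser: " <> message
    nibble.Custom(message) -> message
    nibble.EndOfInput -> "unexpected end of input"
    nibble.Expected(expected, got: got) ->
      "expected " <> expected <> ", got " <> string.inspect(got)
    nibble.Unexpected(token) -> "unexpected token " <> string.inspect(token)
  }

  let context =
    dead_end.context
    |> list.map(fn(entry) { string.inspect(entry.1) })
    |> string.join(" > ")

  string.inspect(dead_end.pos) <> ": " <> problem <> " (in " <> context <> ")"
}
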
pub type Loop(a, state) {
  Continue(state)
  Break(a)
}

Constructors

  • Continue(state)
  • Break(a)

The Parser type has three parameters; let’s take a look at each of them (a concrete sketch follows the type declaration below):

Parser(a, tok, ctx)
// (1) ^
// (2)    ^^^
// (3)         ^^^
  1. a is the type of value that the parser knows how to produce. If you were writing a parser for a programming language, this might be your expression type.

  2. tok is the type of tokens that the parser knows how to consume. You can take a look at the Token type for a bit more info, but note that it’s not necessary for the token stream to come from nibble’s lexer.

  3. ctx is used to make error reporting nicer. You can place a parser into a custom context. When the parser runs, the context gets pushed onto a stack. If the parser fails, you can see the context stack in the error message, which can make error reporting and debugging much easier!

pub opaque type Parser(a, tok, ctx)
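
For example, a parser for a small hypothetical expression language might be typed like this. Only Parser itself comes from nibble; the Expr, Tok, and Context types below are made up for illustration:

import nibble

// Hypothetical types, made up for this sketch.
pub type Expr {
  Num(Int)
  Add(Expr, Expr)
}

pub type Tok {
  TNum(Int)
  TPlus
}

pub type Context {
  InExpr
}

// (1) this parser produces an Expr, (2) it consumes Tok tokens, and
// (3) it reports failures against a stack of Context values.
pub type ExprParser =
  nibble.Parser(Expr, Tok, Context)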

Functions

pub fn any() -> Parser(a, a, b)
pub fn backtrackable(parser: Parser(a, b, c)) -> Parser(a, b, c)

By default, parsers will not backtrack if they fail after consuming at least one token. Passing a parser to backtrackable changes this behaviour and allows us to jump back to the state of the parser before it consumed any input and try another one.

This is most useful when you want to quickly try a few different parsers using one_of.

🚨 Backtracking parsers can drastically reduce performance, so you should avoid them where possible. A common reason folks reach for backtracking is when they want to try multiple branches that start with the same token or the same sequence of tokens.

To avoid backtracking in these cases, you can create an intermediate parser that consumes the common tokens and then use one_of to try the different branches.
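
As a sketch, suppose we want to parse either a parenthesised number like (1) or a pair like (1, 2). Everything below apart from nibble itself is made up for this example. Both forms start with the same tokens, so rather than marking one complete branch as backtrackable we consume the shared prefix once and only then use one_of:

import gleam/option
import nibble.{do, return}

pub type Tok {
  LParen
  RParen
  Comma
  TNum(Int)
}

pub type Value {
  Single(Int)
  Pair(Int, Int)
}

fn num_parser() -> nibble.Parser(Int, Tok, ctx) {
  nibble.take_map("a number", fn(tok) {
    case tok {
      TNum(n) -> option.Some(n)
      _ -> option.None
    }
  })
}

fn value_parser() -> nibble.Parser(Value, Tok, ctx) {
  // Both forms start with `(` followed by a number, so consume that
  // shared prefix exactly once...
  use _ <- do(nibble.token(LParen))
  use first <- do(num_parser())

  // ...and only then branch. The remaining branches start with
  // different tokens, so no backtracking is needed to pick one.
  nibble.one_of([
    {
      use _ <- do(nibble.token(Comma))
      use second <- do(num_parser())
      use _ <- do(nibble.token(RParen))
      return(Pair(first, second))
    },
    {
      use _ <- do(nibble.token(RParen))
      return(Single(first))
    },
  ])
}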

pub fn do(
  parser: Parser(a, b, c),
  f: fn(a) -> Parser(d, b, c),
) -> Parser(d, b, c)
pub fn do_in(
  context: a,
  parser: Parser(b, c, a),
  f: fn(b) -> Parser(d, c, a),
) -> Parser(d, c, a)
pub fn eof() -> Parser(Nil, a, b)
pub fn fail(message: String) -> Parser(a, b, c)

Create a parser that consumes no tokens and always fails with the given error message.

pub fn guard(cond: Bool, expecting: String) -> Parser(Nil, a, b)
pub fn in(parser: Parser(a, b, c), context: c) -> Parser(a, b, c)
pub fn inspect(
  parser: Parser(a, b, c),
  message: String,
) -> Parser(a, b, c)

Run the given parser and then inspect its state.

pub fn lazy(parser: fn() -> Parser(a, b, c)) -> Parser(a, b, c)

Defer the creation of a parser until it is needed. This is most often useful for recursive parsers, where the recursive reference would otherwise be evaluated eagerly while the parser is still being constructed.
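
A sketch of where lazy is genuinely needed, parsing nested lists like [1, [2, 3]]. The Tok and Value types and all the parsers below are made up for this example; the point is that list_parser calls value_parser while it is being constructed, so the recursive reference inside one_of has to be deferred with lazy:

import gleam/option
import nibble.{do, return}

pub type Tok {
  TNum(Int)
  LBracket
  RBracket
  Comma
}

pub type Value {
  Num(Int)
  Nested(List(Value))
}

fn value_parser() -> nibble.Parser(Value, Tok, ctx) {
  nibble.one_of([
    num_parser(),
    // `list_parser` calls `value_parser` while it is being built, so
    // referencing it directly here would recurse forever during
    // construction. `lazy` defers that call until this branch is tried.
    nibble.lazy(list_parser),
  ])
}

fn list_parser() -> nibble.Parser(Value, Tok, ctx) {
  nibble.sequence(value_parser(), nibble.token(Comma))
  |> nibble.map(Nested)
  |> bracketed
}

fn bracketed(parser: nibble.Parser(a, Tok, ctx)) -> nibble.Parser(a, Tok, ctx) {
  use _ <- do(nibble.token(LBracket))
  use value <- do(parser)
  use _ <- do(nibble.token(RBracket))

  return(value)
}

fn num_parser() -> nibble.Parser(Value, Tok, ctx) {
  nibble.take_map("a number", fn(tok) {
    case tok {
      TNum(n) -> option.Some(Num(n))
      _ -> option.None
    }
  })
}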

pub fn loop(
  init: a,
  step: fn(a) -> Parser(Loop(b, a), c, d),
) -> Parser(b, c, d)
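
As a sketch of how loop and the Loop type fit together, here is a hypothetical parser (the Tok type and the running-total idea are made up for this example) that adds up numbers until it hits a Stop token:

import gleam/option
import nibble

pub type Tok {
  TNum(Int)
  Stop
}

// The loop state is a running total: each number continues the loop
// with an updated total, and a `Stop` token breaks with the result.
fn total_parser() -> nibble.Parser(Int, Tok, ctx) {
  use total <- nibble.loop(0)

  nibble.one_of([
    nibble.token(Stop) |> nibble.replace(nibble.Break(total)),
    nibble.take_map("a number", fn(tok) {
      case tok {
        TNum(n) -> option.Some(nibble.Continue(total + n))
        _ -> option.None
      }
    }),
  ])
}
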
pub fn many(parser: Parser(a, b, c)) -> Parser(List(a), b, c)
pub fn many1(parser: Parser(a, b, c)) -> Parser(List(a), b, c)
pub fn map(
  parser: Parser(a, b, c),
  f: fn(a) -> d,
) -> Parser(d, b, c)
pub fn one_of(parsers: List(Parser(a, b, c))) -> Parser(a, b, c)
pub fn optional(
  parser: Parser(a, b, c),
) -> Parser(Option(a), b, c)

Try the given parser, but if it fails return None instead of failing.

pub fn or(parser: Parser(a, b, c), default: a) -> Parser(a, b, c)

Try the given parser, but if it fails return the given default value instead of failing.

pub fn replace(
  parser: Parser(a, b, c),
  with b: d,
) -> Parser(d, b, c)
pub fn return(value: a) -> Parser(a, b, c)

The simplest kind of parser. return consumes no tokens and always produces the given value. Sometimes called succeed instead.

This function might seem useless at first, but it is very useful when used in combination with do or then.

import nibble.{do, return, throw}

// `int_parser` is assumed to be defined elsewhere and to produce an Int.
fn uint8_parser() {
  use int <- do(int_parser())

  case int >= 0, int <= 255 {
    True, True ->
      return(int)

    False, _ ->
      throw("Expected an int >= 0")

    _, False ->
      throw("Expected an int <= 255")
  }
}

💡 return and succeed are names for the same thing. We suggest using return unqualified when using do and Gleam’s use syntax, and nibble.succeed in a pipeline with nibble.then.

pub fn run(
  src: List(Token(a)),
  parser: Parser(b, a, c),
) -> Result(b, List(DeadEnd(a, c)))

Parsers don’t do anything until they’re run! The run function takes a Parser and a list of Tokens and runs it, returning either the parsed value or a list of DeadEnds where the parser failed.
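
For example, here is a small hypothetical wrapper (not part of nibble) that runs any parser over a list of tokens and prints each dead end before handing the result back:

import gleam/io
import gleam/list
import nibble

pub fn run_and_report(tokens, parser) {
  case nibble.run(tokens, parser) {
    Ok(value) -> Ok(value)

    Error(dead_ends) -> {
      // Each DeadEnd records where the parser failed, which Error it
      // failed with, and the context stack that was active at the time.
      list.each(dead_ends, fn(dead_end) {
        io.debug(#(dead_end.pos, dead_end.problem, dead_end.context))
      })

      Error(dead_ends)
    }
  }
}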

pub fn sequence(
  parser: Parser(a, b, c),
  separator sep: Parser(d, b, c),
) -> Parser(List(a), b, c)
pub fn span() -> Parser(Span, a, b)

A parser that returns the current token position.

pub fn succeed(value: a) -> Parser(a, b, c)

The simplest kind of parser. succeed consumes no tokens and always produces the given value. Sometimes called return instead.

This function might seem useless at first, but it is very useful when used in combination with do or then.

import nibble

// `int_parser` is assumed to be defined elsewhere and to produce an Int.
fn uint8_parser() {
  int_parser()
  |> nibble.then(fn(int) {
    case int >= 0, int <= 255 {
      True, True -> nibble.succeed(int)
      False, _ -> nibble.fail("Expected an int >= 0")
      _, False -> nibble.fail("Expected an int <= 255")
    }
  })
}

💡 succeed and return are names for the same thing. We suggest using succeed in a pipeline with nibble.then, and return unqualified when using do with Gleam’s use syntax.

pub fn take_at_least(
  parser: Parser(a, b, c),
  count: Int,
) -> Parser(List(a), b, c)
pub fn take_exactly(
  parser: Parser(a, b, c),
  count: Int,
) -> Parser(List(a), b, c)
pub fn take_if(
  expecting: String,
  predicate: fn(a) -> Bool,
) -> Parser(a, a, b)
pub fn take_map(
  expecting: String,
  f: fn(a) -> Option(b),
) -> Parser(b, a, c)

Take the next token and attempt to transform it with the given function. This is useful when creating reusable primitive parsers for your own tokens, such as take_identifier or take_number.
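
For example, a hypothetical take_identifier built on top of take_map might look like this; the Tok type is made up for this sketch:

import gleam/option
import nibble

pub type Tok {
  Identifier(String)
  TNum(Int)
}

// Succeeds with the identifier's name, or fails, reporting that it
// expected "an identifier", if the next token is anything else.
pub fn take_identifier() -> nibble.Parser(String, Tok, ctx) {
  nibble.take_map("an identifier", fn(tok) {
    case tok {
      Identifier(name) -> option.Some(name)
      _ -> option.None
    }
  })
}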

pub fn take_map_while(
  f: fn(a) -> Option(b),
) -> Parser(List(b), a, c)
pub fn take_map_while1(
  expecting: String,
  f: fn(a) -> Option(b),
) -> Parser(List(b), a, c)

💡 If this parser succeeds, the list produced is guaranteed to be non-empty. Feel free to let assert the result!

pub fn take_until(
  predicate: fn(a) -> Bool,
) -> Parser(List(a), a, b)
pub fn take_until1(
  expecting: String,
  predicate: fn(a) -> Bool,
) -> Parser(List(a), a, b)

💡 If this parser succeeds, the list produced is guaranteed to be non-empty. Feel free to let assert the result!

pub fn take_up_to(
  parser: Parser(a, b, c),
  count: Int,
) -> Parser(List(a), b, c)
pub fn take_while(
  predicate: fn(a) -> Bool,
) -> Parser(List(a), a, b)

💡 This parser can succeed without consuming any input (if the predicate returns False for the first token). You can end up with an infinite loop if you’re not careful. Use take_while1 if you want to guarantee you take at least one token.
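
For instance, a whitespace skipper built with take_while succeeds even when there is nothing to skip, which is exactly the situation the warning above describes. A minimal sketch with a made-up token type:

import nibble

pub type Tok {
  Whitespace
  Word(String)
}

// Succeeds even when there is no whitespace at all, consuming nothing.
// Wrapping this in `many` risks looping forever; use `take_while1` when
// at least one whitespace token is required.
fn whitespace_parser() -> nibble.Parser(List(Tok), Tok, ctx) {
  nibble.take_while(fn(tok) { tok == Whitespace })
}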

pub fn take_while1(
  expecting: String,
  predicate: fn(a) -> Bool,
) -> Parser(List(a), a, b)

💡 If this parser succeeds, the list produced is guaranteed to be non-empty. Feel free to let assert the result!

pub fn then(
  parser: Parser(a, b, c),
  f: fn(a) -> Parser(d, b, c),
) -> Parser(d, b, c)
pub fn throw(message: String) -> Parser(a, b, c)

The opposite of return, this parser always fails with the given message. Sometimes called fail instead.

pub fn token(tok: a) -> Parser(Nil, a, b)