nibble
Types
A dead end represents the point where a parser that had committed down a
path failed. It contains the position of the failure, the Error
describing the failure, and the context stack for any parsers that had run.
pub type DeadEnd(tok, ctx) {
DeadEnd(
pos: lexer.Span,
problem: Error(tok),
context: List(#(lexer.Span, ctx)),
)
}
pub type Error(tok) {
BadParser(String)
Custom(String)
EndOfInput
Expected(String, got: tok)
Unexpected(tok)
}
The Parser type has three parameters; let’s take a look at each of them:
Parser(a, tok, ctx)
// (1) ^
// (2) ^^^
// (3) ^^^
- a is the type of value that the parser knows how to produce. If you were
  writing a parser for a programming language, this might be your expression
  type.
- tok is the type of tokens that the parser knows how to consume. You can take
  a look at the Token type for a bit more info, but note that it’s not
  necessary for the token stream to come from nibble’s lexer.
- ctx is used to make error reporting nicer. You can place a parser into a
  custom context. When the parser runs, the context gets pushed onto a stack.
  If the parser fails you can see the context stack in the error message,
  which can make error reporting and debugging much easier!
pub opaque type Parser(a, tok, ctx)
Values
pub fn any() -> Parser(tok, tok, ctx)
Returns the next token in the input stream. Fails if there are no more tokens.
pub fn backtrackable(
parser: Parser(a, tok, ctx),
) -> Parser(a, tok, ctx)
By default, parsers will not backtrack if they fail after consuming at least
one token. Passing a parser to backtrackable changes this behaviour, allowing
us to jump back to the state before any input was consumed and try another
parser.
This is most useful when you want to quickly try a few different parsers using
one_of.
🚨 Backtracking parsers can drastically reduce performance, so you should avoid them where possible. A common reason folks reach for backtracking is when they want to try multiple branches that start with the same token or the same sequence of tokens.
To avoid backtracking in these cases, you can create an intermediate parser
that consumes the common tokens and then use one_of to try
the different branches.
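For example, rather than wrapping each whole branch in backtrackable, you can consume the shared prefix once and then branch. This sketch uses a hypothetical Tok type; the KwLet and KwMut constructors are assumptions for illustration:

```gleam
import nibble.{do, return}

// Hypothetical token type for illustration.
pub type Tok {
  KwLet
  KwMut
}

// Both `let x = ...` and `let mut x = ...` begin with the same token.
// Consume `KwLet` once, then branch with `one_of`: no backtracking needed.
fn binding_kind() {
  use _ <- do(nibble.take_if("the `let` keyword", fn(tok) { tok == KwLet }))

  nibble.one_of([
    // `let mut ...`
    {
      use _ <- do(nibble.take_if("the `mut` keyword", fn(tok) { tok == KwMut }))
      return("mutable")
    },
    // Plain `let ...`
    return("immutable"),
  ])
}
```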
pub fn do_in(
context: ctx,
parser: Parser(a, tok, ctx),
f: fn(a) -> Parser(b, tok, ctx),
) -> Parser(b, tok, ctx)
pub fn eof() -> Parser(Nil, tok, ctx)
Succeeds if the input stream is empty, fails otherwise. This is useful to verify that you’ve consumed all the tokens in the input stream.
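For instance, you can require that a parser consumes the whole token stream by ending with eof. A sketch, where expr_parser is a hypothetical parser for your expression type:

```gleam
import nibble.{do, return}

// `expr_parser` is a hypothetical parser producing your expression type.
fn program_parser() {
  use expr <- do(expr_parser())
  // Fail unless every token has been consumed.
  use _ <- do(nibble.eof())
  return(expr)
}
```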
pub fn fail(message: String) -> Parser(a, tok, ctx)
Create a parser that consumes no tokens and always fails with the given error message.
pub fn guard(
cond: Bool,
expecting: String,
) -> Parser(Nil, tok, ctx)
Fails if the given condition is false, otherwise returns Nil.
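guard pairs nicely with do for validating a value mid-parse. A sketch, assuming a hypothetical int_parser:

```gleam
import nibble.{do, return}

// `int_parser` is a hypothetical parser producing an Int.
fn uint8_parser() {
  use int <- do(int_parser())
  // Fails with the given message if the condition is False.
  use _ <- do(nibble.guard(int >= 0 && int <= 255, "an int between 0 and 255"))
  return(int)
}
```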
pub fn inspect(
parser: Parser(a, tok, ctx),
message: String,
) -> Parser(a, tok, ctx)
Run the given parser and then inspect its state.
pub fn lazy(
parser: fn() -> Parser(a, tok, ctx),
) -> Parser(a, tok, ctx)
Defer the creation of a parser until it is needed. This is often most useful when creating a parser that is recursive and is not a function.
pub fn loop(
init: state,
step: fn(state) -> Parser(Loop(a, state), tok, ctx),
) -> Parser(a, tok, ctx)
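loop threads a state value through repeated steps until the step parser signals it is done. The sketch below assumes nibble’s Loop type exposes Continue and Break constructors, and uses a hypothetical Tok type:

```gleam
import nibble.{do, return, Break, Continue}

// Hypothetical token type for illustration.
pub type Tok {
  Comma
  Other
}

// Count how many consecutive commas appear at the front of the stream.
fn comma_count_parser() {
  nibble.loop(0, fn(count) {
    nibble.one_of([
      // A comma: bump the count and keep looping.
      {
        use _ <- do(nibble.take_if("a comma", fn(tok) { tok == Comma }))
        return(Continue(count + 1))
      },
      // Anything else: stop and produce the final count.
      return(Break(count)),
    ])
  })
}
```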
pub fn many1(
parser: Parser(a, tok, ctx),
) -> Parser(List(a), tok, ctx)
This is the same as many, but is guaranteed to return at least
one value.
pub fn one_of(
parsers: List(Parser(a, tok, ctx)),
) -> Parser(a, tok, ctx)
Try the given parsers in order until one succeeds. If all fail, the parser fails.
pub fn optional(
parser: Parser(a, tok, ctx),
) -> Parser(option.Option(a), tok, ctx)
Try the given parser, but if it fails return None instead of failing.
pub fn or(
parser: Parser(a, tok, ctx),
default: a,
) -> Parser(a, tok, ctx)
Try the given parser, but if it fails return the given default value instead of failing.
pub fn return(value: a) -> Parser(a, tok, ctx)
The simplest kind of parser. return consumes no tokens and always
produces the given value. Sometimes called succeed instead.
This function might seem useless at first, but it is very useful when used in
combination with do or then.
import nibble.{do, return, throw}

fn uint8_parser() {
  use int <- do(int_parser())

  case int >= 0, int <= 255 {
    True, True -> return(int)
    False, _ -> throw("Expected an int >= 0")
    _, False -> throw("Expected an int <= 255")
  }
}
💡 return and succeed are names for the same thing.
We suggest using return unqualified when using do and Gleam’s use
syntax, and nibble.succeed in a pipeline with nibble.then.
pub fn run(
src: List(lexer.Token(tok)),
parser: Parser(a, tok, ctx),
) -> Result(a, List(DeadEnd(tok, ctx)))
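run executes a parser against a list of tokens, producing either the parsed value or every DeadEnd the parser hit. A sketch with a hypothetical Num token, assuming the token list has already been produced by a lexing step:

```gleam
import gleam/option
import nibble.{do, return}

// Hypothetical token type for illustration.
pub type Tok {
  Num(Int)
}

// Collect every number in the stream, then require the end of input.
fn numbers_parser() {
  use nums <- do(
    nibble.take_map_while(fn(tok) {
      case tok {
        Num(n) -> option.Some(n)
      }
    }),
  )
  use _ <- do(nibble.eof())
  return(nums)
}

// Ok with the list of Ints, or Error with a list of DeadEnds.
fn parse_numbers(tokens) {
  nibble.run(tokens, numbers_parser())
}
```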
pub fn sequence(
parser: Parser(a, tok, ctx),
separator sep: Parser(x, tok, ctx),
) -> Parser(List(a), tok, ctx)
Consumes a sequence of tokens using the given parser, separated by the
given separator parser. Returns a list of the parsed values, ignoring
the results of the separator parser.
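For example, parsing a comma-separated list of names (the Tok type and its constructors are hypothetical):

```gleam
import gleam/option
import nibble

// Hypothetical token type for illustration.
pub type Tok {
  Name(String)
  Comma
}

// Parses `Name Comma Name Comma ...`, keeping only the names.
fn names_parser() {
  nibble.sequence(
    nibble.take_map("a name", fn(tok) {
      case tok {
        Name(name) -> option.Some(name)
        _ -> option.None
      }
    }),
    separator: nibble.take_if("a comma", fn(tok) { tok == Comma }),
  )
}
```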
pub fn span() -> Parser(lexer.Span, tok, ctx)
A parser that returns the current token position.
pub fn succeed(value: a) -> Parser(a, tok, ctx)
The simplest kind of parser. succeed consumes no tokens and always
produces the given value. Sometimes called return instead.
This function might seem useless at first, but it is very useful when used in
combination with do or then.
import nibble

fn uint8_parser() {
  int_parser()
  |> nibble.then(fn(int) {
    case int >= 0, int <= 255 {
      True, True -> nibble.succeed(int)
      False, _ -> nibble.fail("Expected an int >= 0")
      _, False -> nibble.fail("Expected an int <= 255")
    }
  })
}
💡 succeed and return are names for the same thing.
We suggest using succeed in a pipeline with nibble.then, and return
unqualified when using do with Gleam’s use syntax.
pub fn take_at_least(
parser: Parser(a, tok, ctx),
count: Int,
) -> Parser(List(a), tok, ctx)
Apply the parser a minimum of count times, returning a list of the results.
pub fn take_exactly(
parser: Parser(a, tok, ctx),
count: Int,
) -> Parser(List(a), tok, ctx)
Apply the parser exactly count times, returning a list of the results.
pub fn take_if(
expecting: String,
predicate: fn(tok) -> Bool,
) -> Parser(tok, tok, ctx)
Takes the next token off the stream if it satisfies the given predicate.
pub fn take_map(
expecting: String,
f: fn(tok) -> option.Option(a),
) -> Parser(a, tok, ctx)
Take the next token and attempt to transform it with the given function. This
is useful when creating reusable primitive parsers for your own tokens such as
take_identifier or take_number.
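For example, a take_identifier in that style might look like this (the Tok type and its constructors are hypothetical):

```gleam
import gleam/option
import nibble

// Hypothetical token type for illustration.
pub type Tok {
  Ident(String)
  LParen
}

// Succeeds with the identifier's name; for any other token it fails,
// reporting that "an identifier" was expected.
fn take_identifier() {
  nibble.take_map("an identifier", fn(tok) {
    case tok {
      Ident(name) -> option.Some(name)
      _ -> option.None
    }
  })
}
```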
pub fn take_map_while(
f: fn(tok) -> option.Option(a),
) -> Parser(List(a), tok, ctx)
Applies a function to consecutive tokens while the given function returns
Some.
💡 This parser can succeed without consuming any input (if the function
immediately returns None). You can end up with an infinite loop if you’re not
careful. Use take_map_while1 if you want to guarantee you
take at least one token.
pub fn take_map_while1(
expecting: String,
f: fn(tok) -> option.Option(a),
) -> Parser(List(a), tok, ctx)
Applies a function to consecutive tokens while the given function returns
Some.
💡 If this parser succeeds, the list produced is guaranteed to be non-empty.
Feel free to let assert the result!
pub fn take_until(
predicate: fn(tok) -> Bool,
) -> Parser(List(tok), tok, ctx)
Take tokens from the stream until the given predicate is satisfied.
💡 This parser can succeed without consuming any input (if the predicate
immediately succeeds). You can end up with an infinite loop if you’re not
careful. Use take_until1 if you want to guarantee you
take at least one token.
pub fn take_until1(
expecting: String,
predicate: fn(tok) -> Bool,
) -> Parser(List(tok), tok, ctx)
Take tokens from the stream until the given predicate is satisfied.
💡 If this parser succeeds, the list produced is guaranteed to be non-empty.
Feel free to let assert the result!
pub fn take_up_to(
parser: Parser(a, tok, ctx),
count: Int,
) -> Parser(List(a), tok, ctx)
Apply the parser up to count times, returning a list of the results.
💡 This parser can succeed without consuming any input (if the parser fails immediately) and return an empty list. You can end up with an infinite loop if you’re not careful.
pub fn take_while(
predicate: fn(tok) -> Bool,
) -> Parser(List(tok), tok, ctx)
Take tokens from the stream while the given predicate is satisfied.
💡 This parser can succeed without consuming any input (if the predicate
immediately fails). You can end up with an infinite loop if you’re not
careful. Use take_while1 if you want to guarantee you
take at least one token.
pub fn take_while1(
expecting: String,
predicate: fn(tok) -> Bool,
) -> Parser(List(tok), tok, ctx)
Take tokens from the stream while the given predicate is satisfied.
💡 If this parser succeeds, the list produced is guaranteed to be non-empty.
Feel free to let assert the result!
pub fn throw(message: String) -> Parser(a, tok, ctx)