BeHOLd.Util.Lexer (behold v1.1.3)
Contains a tokenize/1 function for tokenizing a string representing a
formula in TH0 syntax or a TPTP TH0 problem file using NimbleParsec. This
is mainly used as a preprocessing step for parsing. For information about the
returned structure, see https://hexdocs.pm/nimble_parsec/NimbleParsec.html.
Examples
iex> {:ok, tokens, "", _, _, _} = tokenize("A & B")
iex> tokens
[var: "A", and: "&", var: "B"]
Summary
Types
Tokens are generated by the lexer as a list which contains {type, data}
pairs, where type is a label given as an atom and data is the original
string representation of the token.
Functions
Tokenizes the given binary.
Types
Functions
@spec tokenize(binary(), keyword()) ::
        {:ok, [term()], rest, context, line, byte_offset}
        | {:error, reason, rest, context, line, byte_offset}
      when line: {pos_integer(), byte_offset},
           byte_offset: non_neg_integer(),
           rest: binary(),
           reason: String.t(),
           context: map()
Tokenizes the given binary.
Returns {:ok, [token], rest, context, position, byte_offset} or
{:error, reason, rest, context, line, byte_offset}, where position
describes the location where tokenizing started, as {line, offset_to_start_of_line}.
The column where the error occurred can be inferred from byte_offset - offset_to_start_of_line.
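For instance, the column arithmetic can be checked with plain values (the numbers below are hypothetical, not actual output of tokenize):

```elixir
# Hypothetical position values of the shape tokenize returns:
# the position is {line_number, offset_to_start_of_line}.
{_line_number, offset_to_start_of_line} = {3, 40}
byte_offset = 47

# Column where the error occurred, per the formula in the docs.
column = byte_offset - offset_to_start_of_line
# column == 7
```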
Options
- :byte_offset - the byte offset for the whole binary, defaults to 0
- :line - the line and the byte offset into that line, defaults to {1, byte_offset}
- :context - the initial context value. It will be converted to a map
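A usage sketch, assuming behold v1.1.3 is available as a project dependency (the option values shown are the documented defaults, passed explicitly for illustration):

```elixir
# Sketch: calling tokenize/2 with explicit options and handling both
# documented return shapes. The option values mirror the defaults above.
case BeHOLd.Util.Lexer.tokenize("A & B", byte_offset: 0, line: {1, 0}, context: %{}) do
  {:ok, tokens, _rest, _context, _position, _byte_offset} ->
    # e.g. [var: "A", and: "&", var: "B"] per the example above
    tokens

  {:error, reason, _rest, _context, {line, offset_to_start_of_line}, byte_offset} ->
    # The column of the error is inferred as described in the docs.
    {reason, line, byte_offset - offset_to_start_of_line}
end
```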