Absinthe.Lexer (absinthe v1.7.5)

Functions

do_tokenize(binary, opts \\ [])

@spec do_tokenize(binary(), keyword()) ::
  {:ok, [term()], rest, context, line, byte_offset}
  | {:error, reason, rest, context, line, byte_offset}
when line: {pos_integer(), byte_offset},
     byte_offset: pos_integer(),
     rest: binary(),
     reason: String.t(),
     context: map()

Parses the given binary using the do_tokenize parser.

Returns {:ok, [token], rest, context, position, byte_offset} or {:error, reason, rest, context, line, byte_offset}, where position describes the start position of the parsed input as {line, offset_to_start_of_line}.

The column where the error occurred can be inferred from byte_offset - offset_to_start_of_line.
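
For example, a minimal sketch of deriving a column from the success tuple (assuming a trivially small document that is fully consumed; the token terms themselves are internal to Absinthe):

    {:ok, _tokens, "" = _rest, _context, {_line, line_offset}, byte_offset} =
      Absinthe.Lexer.do_tokenize("{ hello }")

    # Column at the final position, per the formula above:
    column = byte_offset - line_offset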

Options

  • :byte_offset - the byte offset for the whole binary, defaults to 0
  • :line - the line and the byte offset into that line, defaults to {1, byte_offset}
  • :context - the initial context value. It will be converted to a map
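
As a sketch of passing these options (hypothetical values, as when tokenizing a fragment that starts partway through a larger document):

    {:ok, _tokens, _rest, _context, _line, _byte_offset} =
      Absinthe.Lexer.do_tokenize("name", byte_offset: 10, line: {2, 8}, context: %{})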

line_and_column(arg, byte_offset, column_correction)

tokenize(input, options \\ [])

@spec tokenize(binary(), Keyword.t()) ::
  {:ok, [any()]}
  | {:error, binary(), {integer(), non_neg_integer()}}
  | {:error, :exceeded_token_limit}
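
A usage sketch covering each clause of the spec (the token tuples shown in the comments are assumptions about the internal format, not a stable contract):

    document = "{ hello }"

    case Absinthe.Lexer.tokenize(document) do
      {:ok, tokens} ->
        # tokens might look like:
        #   [{:"{", {1, 1}}, {:name, {1, 3}, ~c"hello"}, {:"}", {1, 9}}]
        tokens

      {:error, rest, {line, column}} ->
        # Lexing failed; rest is the unconsumed input at {line, column}.
        raise "lexer error at #{line}:#{column} before #{inspect(rest)}"

      {:error, :exceeded_token_limit} ->
        # Returned when the lexer's token limit is exceeded.
        raise "token limit exceeded"
    end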