TflInterp (tfl_interp v0.1.16)

TensorFlow Lite interpreter for Elixir. A deep-learning inference framework for embedded devices.

Summary

Functions

Adjust the NMS result to the aspect ratio of the input image (letterbox).

Get the name of the backend NN framework.

Ensure that the back-end framework is as expected.

Get the flat binary from the output tensor on the interpreter.

Get a list of flat binaries from the output tensors on the interpreter.

Get the properties of the tflite model.

Invoke prediction.

run(x) deprecated

Put a flat binary to the input tensor on the interpreter.

Put flat binaries to the input tensors on the interpreter.

Stop the tflite interpreter.

Ensure that the model matches the back-end framework.

Functions

adjust2letterbox(nms_result, aspect \\ [1.0, 1.0])

Adjust the NMS result to the aspect ratio of the input image (letterbox).

Parameters:

  • nms_result - NMS result {:ok, result}
  • aspect - [rx, ry] aspect ratio of the input image
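
A minimal usage sketch. The `nms_result` tuple is assumed to come from a prior `non_max_suppression_multi_class/5` call, and the aspect `[1.0, 0.75]` is an illustrative value for a 640x480 image letterboxed into a square model input:

    # `nms_result` is the {:ok, result} tuple produced by NMS;
    # [1.0, 0.75] is an illustrative aspect, not a default.
    adjusted = TflInterp.adjust2letterbox(nms_result, [1.0, 0.75])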

framework()

Get the name of the backend NN framework.

framework?(name)

Ensure that the back-end framework is as expected.

get_memo(mod)

get_output_tensor(mod, index, opts \\ [])

Get the flat binary from the output tensor on the interpreter.

Parameters

  • mod - module name or session.
  • index - index of output tensor in the model
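
A minimal sketch of reading one output tensor. `MyModel` and the little-endian float32 dtype are assumptions about the model, not part of this API:

    # Read output tensor 0 as a flat binary and decode it into a
    # list of floats (dtype float32 is an assumption).
    bin = TflInterp.get_output_tensor(MyModel, 0)
    floats = for <<x::little-float-32 <- bin>>, do: x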

get_output_tensors(mod, range)

Get a list of flat binaries from the output tensors on the interpreter.

Parameters

  • mod - module name or session.
  • range - range of output tensor in the model
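
For example, a model with three output tensors could be read in one call; `MyModel` is an assumption:

    # Fetch output tensors 0..2 as a list of flat binaries.
    [t0, t1, t2] = TflInterp.get_output_tensors(MyModel, 0..2)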

info(mod)

Get the properties of the tflite model.

Parameters

  • mod - module name
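
A quick way to inspect a loaded model; `MyModel` is an assumption:

    # Retrieve the model properties for inspection.
    TflInterp.info(MyModel) |> IO.inspect()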

invoke(mod)

Invoke prediction.

Two modes are toggled depending on the type of input data. One is the stateful mode, in which input/output data are stored as model states. The other mode is stateless, where input/output data is stored in a session structure assigned to the application.

Parameters

  • mod/session - module name (stateful) or session structure (stateless).

Examples

    output_bin = session()  # stateless mode
      |> TflInterp.set_input_tensor(0, input_bin)
      |> TflInterp.invoke()
      |> TflInterp.get_output_tensor(0)
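
The stateful counterpart keeps the tensors in the module's state, so each call takes the module name; `MyModel`, `input_bin`, and tensor index 0 are assumptions:

    TflInterp.set_input_tensor(MyModel, 0, input_bin)  # stateful mode
    TflInterp.invoke(MyModel)
    output_bin = TflInterp.get_output_tensor(MyModel, 0)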

non_max_suppression_multi_class(mod, arg, boxes, scores, opts \\ [])

Execute post-processing: non-maximum suppression (NMS).

Parameters

  • mod - module name
  • num_boxes - number of candidate boxes
  • num_class - number of category class
  • boxes - binary, serialized box tensor[num_boxes][4]; dtype: float32
  • scores - binary, serialized score tensor[num_boxes][num_class]; dtype: float32
  • opts
    • iou_threshold: - IOU threshold
    • score_threshold: - score cutoff threshold
    • sigma: - soft IOU parameter
    • boxrepr: - type of box representation
      • :center - center pos and width/height
      • :topleft - top-left pos and width/height
      • :corner - top-left and bottom-right corner pos
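
A hedged sketch of a call. Passing `num_boxes` and `num_class` as a tuple for the second argument, as well as the module name, box count, class count, and threshold values, are all assumptions for illustration, not documented defaults:

    # Illustrative YOLO-style post-processing: 3549 candidate boxes,
    # 80 classes, boxes in :center representation.
    {:ok, result} =
      TflInterp.non_max_suppression_multi_class(MyDetector, {3549, 80}, boxes, scores,
        iou_threshold: 0.5,
        score_threshold: 0.25,
        boxrepr: :center
      )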

run(x)

This function is deprecated. Use invoke/1 instead.

set_input_tensor(mod, index, bin, opts \\ [])

Put a flat binary to the input tensor on the interpreter.

Parameters

  • mod - module name or session.
  • index - index of input tensor in the model
  • bin - input data as a flat binary (i.e., a serialized tensor)
  • opts - data-conversion options
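
A minimal preprocessing sketch. The normalization of 0..255 pixel values to little-endian float32 and the module name are assumptions about the model, not requirements of this API:

    # Pack pixel values into a float32 flat binary and set it as
    # input tensor 0.
    input_bin =
      pixels
      |> Enum.map(fn p -> <<(p / 255.0)::little-float-32>> end)
      |> IO.iodata_to_binary()

    TflInterp.set_input_tensor(MyModel, 0, input_bin)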

set_input_tensors(mod, from, items)

Put flat binaries to the input tensors on the interpreter.

Parameters

  • mod - module name or session.
  • from - first index of input tensor in the model
  • items - list of input data, each a flat binary (serialized tensor)
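
For a two-input model this could look like the following; the module and binary names are assumptions:

    # Put two flat binaries into consecutive input tensors,
    # starting at index 0.
    TflInterp.set_input_tensors(MyModel, 0, [input_bin0, input_bin1])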

stop(mod)

Stop the tflite interpreter.

Parameters

  • mod - module name

validate_model(model, url)

Ensure that the model matches the back-end framework.

Parameters

  • model - path of model file
  • url - download site
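
A typical call might look like this; the local path and URL are illustrative only:

    # Check the local model file against the back-end framework;
    # `url` is the model's download site.
    TflInterp.validate_model("./model/mobilenet_v2.tflite",
      "https://example.com/mobilenet_v2.tflite")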