C++ inferencing APIs
TensorFlow Lite utilities
The following APIs simplify your code when working with a tflite::Interpreter.
coral/tflite_utils.h
namespace coral

Functions
- bool MatchShape(absl::Span<const int> shape, const std::vector<int> &pattern)
Returns whether the shape matches the pattern. Negative numbers in the pattern indicate that the corresponding shape dimension can be anything; use -1 in the pattern for consistency.
- absl::Span<const int> TensorShape(const TfLiteTensor &tensor)
- int TensorSize(const TfLiteTensor &tensor)
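For example, these helpers can be used together to sanity-check a model's input tensor before inference. The following is a minimal sketch, not from the official docs; the {1, -1, -1, 3} RGB input pattern is an assumption about the model:

#include "absl/types/span.h"
#include "coral/tflite_utils.h"

// Returns true if the interpreter's first input looks like a batched RGB
// image tensor, i.e. shape [1, height, width, 3] for any height/width.
bool HasRgbImageInput(const tflite::Interpreter& interpreter) {
  const TfLiteTensor& input = *interpreter.input_tensor(0);
  // -1 in the pattern matches any extent in that dimension.
  if (!coral::MatchShape(coral::TensorShape(input), {1, -1, -1, 3})) {
    return false;
  }
  // TensorSize is the total element count (the product of all dimensions).
  return coral::TensorSize(input) > 0;
}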
-
template<typename
InputIt
, typenameOutputIt
>
OutputItDequantize
(InputIt first, InputIt last, OutputIt d_first, float scale, int32_t zero_point)¶
-
template<typename
T
, typenameOutputIt
>
OutputItDequantize
(absl::Span<const T> span, OutputIt d_first, float scale, int32_t zero_point)¶
-
template<typename
InputIt
, typenameOutputIt
>
OutputItQuantize
(InputIt first, InputIt last, OutputIt d_first, float scale, int32_t zero_point)¶
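These helpers convert between floating-point and quantized values using TF Lite's affine scheme, real_value = scale * (quantized_value - zero_point). A minimal sketch of dequantizing a uint8 output tensor; the single-output assumption is illustrative:

#include <vector>
#include "coral/tflite_utils.h"

// Assumes a uint8-quantized model with one output tensor. The scale and
// zero_point come from the tensor's own quantization parameters.
std::vector<float> DequantizeOutput(const tflite::Interpreter& interpreter) {
  const TfLiteTensor& out = *interpreter.output_tensor(0);
  std::vector<float> values(coral::TensorSize(out));
  coral::Dequantize(out.data.uint8, out.data.uint8 + values.size(),
                    values.begin(), out.params.scale, out.params.zero_point);
  return values;
}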
- absl::Status MakeEdgeTpuInterpreter(const tflite::FlatBufferModel &model, edgetpu::EdgeTpuContext *tpu_context, tflite::ops::builtin::BuiltinOpResolver *resolver, tflite::StatefulErrorReporter *error_reporter, std::unique_ptr<tflite::Interpreter> *interpreter)
Returns an interpreter that can run Edge TPU models if tpu_context is not null; otherwise returns a regular interpreter. The PoseNet custom op is always supported.
resolver and error_reporter can be null, in which case the default resolver and error reporter objects are used. Note: when error_reporter is null, the TF Lite runtime error message will not be returned.
- std::unique_ptr<tflite::Interpreter> MakeEdgeTpuInterpreterOrDie(const tflite::FlatBufferModel &model, edgetpu::EdgeTpuContext *tpu_context = nullptr, tflite::ops::builtin::BuiltinOpResolver *resolver = nullptr, tflite::StatefulErrorReporter *error_reporter = nullptr)
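Combined with LoadModelOrDie and GetEdgeTpuContextOrDie (documented below), a typical setup looks like this sketch; the model filename is a placeholder:

#include "coral/tflite_utils.h"

int main() {
  // Placeholder path; substitute your Edge TPU-compiled model. The model
  // must outlive the interpreter, which references its buffer.
  auto model = coral::LoadModelOrDie("model_edgetpu.tflite");
  auto tpu_context = coral::GetEdgeTpuContextOrDie();
  auto interpreter =
      coral::MakeEdgeTpuInterpreterOrDie(*model, tpu_context.get());
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
  // ... fill the input tensor, then run inference:
  if (interpreter->Invoke() != kTfLiteOk) return 1;
  return 0;
}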
- absl::Status SetTensorBuffer(tflite::Interpreter *interpreter, int tensor_index, const void *buffer, size_t buffer_size)
Replaces the existing tensor buffer with the provided one. The caller owns the provided buffer. Tensor quantization parameters are preserved. This function is a required "hack" for performance reasons, until this functionality becomes part of the TensorFlow Lite API.
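For example, you might point the input tensor at an existing image buffer to avoid a copy. A sketch, assuming image_data and image_size come from your own capture pipeline:

#include "coral/tflite_utils.h"

// image_data/image_size are assumed inputs. Because the interpreter does
// not copy the buffer, it must outlive all subsequent Invoke() calls.
absl::Status UseZeroCopyInput(tflite::Interpreter* interpreter,
                              const void* image_data, size_t image_size) {
  return coral::SetTensorBuffer(interpreter, interpreter->inputs()[0],
                                image_data, image_size);
}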
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContext(const std::string &device, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
Returns the TPU context, or nullptr if the requested TPU context is not available.
Parameter device:
  ""       any TPU device
  "usb"    any TPU device on the USB bus
  "pci"    any TPU device on the PCIe bus
  ":N"     N-th TPU device, e.g. ":0"
  "usb:N"  N-th TPU device on the USB bus, e.g. "usb:0"
  "pci:N"  N-th TPU device on the PCIe bus, e.g. "pci:0"
Parameter options: See edgetpu.h for details.
All TPUs are enumerated in the same order, assuming the hardware configuration doesn't change (no devices added or removed between enumerations). Under that assumption, the same index N always points to the same device.
For example, consider 2 USB devices and 4 PCIe devices connected to the host. To reference the USB devices specifically: "usb:0", "usb:1". To reference the PCIe devices specifically: "pci:0", "pci:1", "pci:2", "pci:3". To reference any device regardless of type: ":0", ":1", ":2", ":3", ":4", ":5".
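A short sketch of device selection; the USB-first fallback logic is illustrative, not required:

#include "coral/tflite_utils.h"

// Prefer the first USB-attached Edge TPU; fall back to any available device.
std::shared_ptr<edgetpu::EdgeTpuContext> PickTpu() {
  auto context = coral::GetEdgeTpuContext("usb:0");
  if (!context) context = coral::GetEdgeTpuContext("");
  return context;  // May still be nullptr if no TPU is attached.
}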
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContextOrDie(const std::string &device, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
The same as above, but crashes if the requested TPU context is not available.
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContext(absl::optional<edgetpu::DeviceType> device_type = absl::nullopt, absl::optional<int> device_index = absl::nullopt, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
The same as the previously defined GetEdgeTpuContext, except the device parameter is replaced with two separate ones: device_type and device_index. Custom options are only passed when both device_type and device_index are non-empty.
- std::shared_ptr<edgetpu::EdgeTpuContext> GetEdgeTpuContextOrDie(absl::optional<edgetpu::DeviceType> device_type = absl::nullopt, absl::optional<int> device_index = absl::nullopt, const edgetpu::EdgeTpuManager::DeviceOptions &options = {})
The same as above, but crashes if the requested TPU context is not available.
- std::unique_ptr<tflite::FlatBufferModel> LoadModelOrDie(const std::string &path)
- std::unique_ptr<tflite::FlatBufferModel> LoadModelOrDie(const flatbuffers::FlatBufferBuilder &fbb)
- absl::Status InvokeWithMemBuffer(tflite::Interpreter *interpreter, const void *buffer, size_t in_size, tflite::StatefulErrorReporter *reporter = nullptr)
Invokes tflite::Interpreter using the given buffer as the input tensor. We assume there is only one input tensor. The input buffer contains |in_size| elements and may have padding elements at the end. |in_size| may be larger than the input tensor size, denoted by n; only the first n elements of the input buffer are used. |in_size| cannot be smaller than n.
Note: a null reporter is allowed; however, the TF Lite runtime error message will not be returned in that case. To get the TF Lite runtime error message, reporter must be set to the one used to create the interpreter.
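A sketch, assuming frame and frame_bytes come from your own pipeline:

#include "coral/tflite_utils.h"

// frame/frame_bytes are assumed inputs (e.g., one camera frame). frame_bytes
// may include trailing padding but must not be smaller than the input tensor.
absl::Status RunOnFrame(tflite::Interpreter* interpreter,
                        const void* frame, size_t frame_bytes) {
  return coral::InvokeWithMemBuffer(interpreter, frame, frame_bytes);
}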
- absl::Status InvokeWithDmaBuffer(tflite::Interpreter *interpreter, int dma_fd, size_t in_size, tflite::StatefulErrorReporter *reporter = nullptr)
Invokes tflite::Interpreter using the given DMA file descriptor as the input tensor. Works only for Edge TPU models running on PCIe TPU devices.
Note: a null reporter is allowed; however, the TF Lite runtime error message will not be returned in that case. To get the TF Lite runtime error message, reporter must be set to the one used to create the interpreter.
- bool ContainsEdgeTpuCustomOp(const tflite::FlatBufferModel &model)
Returns whether a tflite model contains any Edge TPU custom operator.
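This is useful for deciding at runtime whether a TPU context is needed at all. A sketch, with a placeholder model path:

#include "coral/tflite_utils.h"

int main() {
  // "model.tflite" is a placeholder path.
  auto model = coral::LoadModelOrDie("model.tflite");
  // Only acquire a TPU context when the model was compiled for the Edge TPU;
  // a null context yields a regular CPU interpreter.
  auto tpu_context = coral::ContainsEdgeTpuCustomOp(*model)
                         ? coral::GetEdgeTpuContextOrDie()
                         : nullptr;
  auto interpreter =
      coral::MakeEdgeTpuInterpreterOrDie(*model, tpu_context.get());
  return 0;
}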
Image classification
Use the following APIs with image classification models.
coral/classification/adapter.h
namespace coral

Functions
- std::vector<Class> GetClassificationResults(absl::Span<const float> scores, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
Converts inference output tensors to classification results. Returns at most top_k Class elements ordered by score, with the highest score first.
'scores': absl::Span<const float>, dequantized output tensor scores, one per class.
'threshold': float, minimum confidence threshold for returned classifications. For example, use 0.5 to receive only classifications with a confidence of 0.5 or higher.
'top_k': size_t, the maximum number of classifications to return.
The returned vector of predictions is sorted by <score, label_id> in descending order.
- struct Class
Classification result.
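Putting these together, a minimal sketch of reading classification results from a uint8-quantized model; the 0.5 threshold and top-3 cap are arbitrary example values:

#include <vector>
#include "absl/types/span.h"
#include "coral/classification/adapter.h"
#include "coral/tflite_utils.h"

// Assumes a uint8-quantized classification model with a single output tensor.
std::vector<coral::Class> Classify(const tflite::Interpreter& interpreter) {
  const TfLiteTensor& out = *interpreter.output_tensor(0);
  std::vector<float> scores(coral::TensorSize(out));
  coral::Dequantize(out.data.uint8, out.data.uint8 + scores.size(),
                    scores.begin(), out.params.scale, out.params.zero_point);
  // Keep at most the top 3 classes scoring 0.5 or higher.
  return coral::GetClassificationResults(absl::MakeConstSpan(scores), 0.5f, 3);
}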
Object detection
Use the following APIs with object detection models.
coral/detection/adapter.h
namespace coral

Functions
- std::vector<Object> GetDetectionResults(absl::Span<const float> bboxes, absl::Span<const float> ids, absl::Span<const float> scores, size_t count, float threshold = -std::numeric_limits<float>::infinity(), size_t top_k = std::numeric_limits<size_t>::max())
Converts inference output tensors to SSD detection results. Returns at most top_k Object elements ordered by score, with the highest score first.
'bboxes': bounding boxes of detected objects, four floats per object (box-corner encoding).
'ids': class identifiers of detected objects, one float per object.
'scores': confidence scores of detected objects, one float per object.
'count': the number of detected objects; the tensors above hold valid data only for this many objects.
'threshold': float, minimum confidence threshold for returned predictions. For example, use 0.5 to receive only predictions with a confidence of 0.5 or higher.
'top_k': size_t, the maximum number of predictions to return.
The returned vector of predictions is sorted by <score, label_id> in descending order.
- struct Object (#include <adapter.h>)
Detection result.
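A minimal sketch for a typical SSD model whose four float outputs are ordered boxes, ids, scores, count; that ordering is an assumption, so verify it for your model:

#include <vector>
#include "absl/types/span.h"
#include "coral/detection/adapter.h"
#include "coral/tflite_utils.h"

// Assumes float (dequantized) output tensors in the order:
// 0 = boxes, 1 = ids, 2 = scores, 3 = count.
std::vector<coral::Object> Detect(const tflite::Interpreter& interpreter) {
  auto span = [&](int i) {
    const TfLiteTensor& t = *interpreter.output_tensor(i);
    return absl::MakeConstSpan(t.data.f, coral::TensorSize(t));
  };
  size_t count = static_cast<size_t>(interpreter.output_tensor(3)->data.f[0]);
  // Keep at most 10 detections scoring 0.5 or higher (arbitrary values).
  return coral::GetDetectionResults(span(0), span(1), span(2), count,
                                    /*threshold=*/0.5f, /*top_k=*/10);
}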
coral/bbox.h
namespace coral

Functions
-
template<typename
T
>
structBBox
¶