Image classification

An image classification model can label an image with known objects and provide a confidence score.

With the Coral Edge TPU™, you can run an image classification model directly on your device, processing real-time video at almost 400 frames per second. You can even run additional models concurrently on the same Edge TPU while maintaining a high frame rate.

This page provides several trained models that are compiled for the Edge TPU, example code to run them, plus information about how to train your own model with TensorFlow.
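At its core, the "label plus confidence score" output described above is just the model's raw output scores passed through a softmax and ranked. A minimal sketch in plain Python (the scores and labels below are made up for illustration, not the output of a real model):

```python
import math

def classify(scores, labels, top_k=3):
    """Convert raw output scores to (label, confidence) pairs via softmax."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

# Hypothetical raw scores for three labels
labels = ["macaw", "toucan", "parakeet"]
scores = [8.2, 3.1, 5.6]
for label, prob in classify(scores, labels):
    print(f"{label}: {prob:.2%}")
```

The PyCoral examples referenced on this page do the equivalent with the model's real output tensor.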

Trained models

These models are trained and compiled for the Edge TPU.

Notice: These are not production-quality models; they are for demonstration purposes only.
| Model name | Labels / dataset | Input size | Depth mult. | TF ver. | Latency ¹ | Accuracy (Top-1 / Top-5) | Micro ² | Model size |
|---|---|---|---|---|---|---|---|---|
| EfficientNet-EdgeTpu (L) * | 1,000 objects (ILSVRC2012) | 300x300x3 | N/A | 1 | 21.3 ms | 81.2% / 95.1% | Yes | 12.8 MB |
| EfficientNet-EdgeTpu (M) * | 1,000 objects (ILSVRC2012) | 240x240x3 | N/A | 1 | 7.3 ms | 80.1% / 94.5% | Yes | 8.7 MB |
| EfficientNet-EdgeTpu (S) * | 1,000 objects (ILSVRC2012) | 224x224x3 | N/A | 1 | 5.0 ms | 78.9% / 94.7% | Yes | 6.8 MB |
| Inception V1 | 1,000 objects (ILSVRC2012) | 224x224x3 | N/A | 1 | 3.4 ms | 71.9% / 92.0% | Yes | 7.0 MB |
| Inception V2 | 1,000 objects (ILSVRC2012) | 224x224x3 | N/A | 1 | 13.4 ms | 75.4% / 93.2% | Yes | 12.0 MB |
| Inception V3 | 1,000 objects (ILSVRC2012) | 299x299x3 | N/A | 1 | 42.8 ms | 79.9% / 95.7% | No | 23.9 MB |
| Inception V4 | 1,000 objects (ILSVRC2012) | 299x299x3 | N/A | 1 | 84.7 ms | 80.7% / 95.5% | No | 42.9 MB |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 128x128x3 | 0.25 | 1 | 0.9 ms | 40.8% / 67.2% | Yes | 0.7 MB |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 160x160x3 | 0.5 | 1 | 1.4 ms | 63.7% / 83.4% | Yes | 1.6 MB |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 192x192x3 | 0.75 | 1 | 1.8 ms | 67.2% / 88.1% | Yes | 2.8 MB |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 224x224x3 | 1.0 | 1 | 2.8 ms | 69.5% / 90.6% | Yes | 4.4 MB |
| MobileNet V2 | 900+ birds (iNaturalist 2017) | 224x224x3 | 1.0 | 1 | 2.6 ms | N/A | Yes | 4.1 MB |
| MobileNet V2 | 1,000+ insects (iNaturalist 2017) | 224x224x3 | 1.0 | 1 | 2.7 ms | N/A | Yes | 4.1 MB |
| MobileNet V2 | 2,000+ plants (iNaturalist 2017) | 224x224x3 | 1.0 | 1 | 2.6 ms | N/A | Yes | 5.5 MB |
| MobileNet V2 | 1,000 objects (ILSVRC2012) | 224x224x3 | 1.0 | 1 | 2.9 ms | 73.2% / 90.0% | Yes | 4.0 MB |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 224x224x3 | 1.0 | 2 | 2.8 ms | 69.5% / 89.8% | Yes | 4.5 MB |
| MobileNet V2 | 1,000 objects (ILSVRC2012) | 224x224x3 | 1.0 | 2 | 3.0 ms | 73.2% / 91.8% | Yes | 4.1 MB |
| MobileNet V3 | 1,000 objects (ILSVRC2012) | 224x224x3 | 1.0 | 2 | 3.0 ms | 77.5% / 93.6% | Yes | 4.9 MB |
| ResNet-50 | 1,000 objects (ILSVRC2012) | 224x224x3 | N/A | 2 | 42.2 ms | 73.6% / 93.8% | No | 25.0 MB |
| Popular Products V1 | 100,000 popular US products | 224x224x3 | N/A | 1 | 7.0 ms | N/A | Yes | 9.8 MB |

¹ Latency is the time to perform one inference, as measured with a Coral USB Accelerator on a desktop CPU. Latency varies between systems, so this is primarily intended for comparison between models. For more comparisons, see the Performance Benchmarks.

² Indicates compatibility with the Dev Board Micro. Some models are not compatible because they require a CPU-bound op that is not supported by TensorFlow Lite for Microcontrollers, or they require more memory than is available on the board. (All models are compatible with all other Coral boards.)

* Beware that the EfficientNet family of models has unique input quantization values (scale and zero-point) that you must use when preprocessing your input. For example preprocessing code, see the classify_image.py or classify_image.cc examples.
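The scale and zero-point mentioned above define how real-valued inputs map onto the model's quantized integer range: q = round(x / scale) + zero_point, clipped to the tensor's dtype range. A minimal sketch of that mapping (the scale and zero-point values below are placeholders; read the real ones from your model's input tensor details):

```python
def quantize_input(pixels, scale, zero_point, dtype_min=0, dtype_max=255):
    """Map real-valued inputs to the model's quantized integer range:
    q = round(x / scale) + zero_point, clipped to the dtype's range."""
    return [
        min(dtype_max, max(dtype_min, round(x / scale) + zero_point))
        for x in pixels
    ]

# Placeholder quantization parameters; real models report these in
# their input tensor details.
scale, zero_point = 1 / 128, 128
print(quantize_input([-1.0, 0.0, 0.996], scale, zero_point))  # [0, 128, 255]
```

Using the generic 0–255 mapping instead of the model's own parameters is what degrades EfficientNet accuracy, which is why the warning above exists.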

On-device training models

These models are designed for compatibility with the on-device transfer learning APIs provided with PyCoral and libcoral.

The "backpropagation" models are embedding extractor models, compiled with the last fully-connected layer removed. They do not perform classification on their own, and must be paired with the SoftmaxRegression API, which allows you to perform on-device backpropagation to train the classification layer.
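Conceptually, the layer that gets trained on-device is just a softmax classifier over the extractor's embedding vectors. A toy NumPy sketch of that idea, with random vectors standing in for embeddings (this is an illustration of the math, not the PyCoral SoftmaxRegression API):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "embeddings" for two classes, standing in for extractor outputs.
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.repeat([0, 1], 20)
Y = np.eye(2)[y]                      # one-hot labels

W = np.zeros((8, 2))                  # the only weights trained on-device
b = np.zeros(2)
for _ in range(200):                  # plain gradient descent
    P = softmax(X @ W + b)
    W -= 0.5 * (X.T @ (P - Y)) / len(X)
    b -= 0.5 * (P - Y).mean(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"training accuracy: {acc:.0%}")
```

Because only this small final layer is trained, the backpropagation step is cheap enough to run on-device while the frozen extractor runs on the Edge TPU.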

The "weight imprinting" models are modified to include an L2-normalization layer and other changes to be compatible with the ImprintingEngine API, which performs weight imprinting to retrain classifications.
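Weight imprinting works because, after L2-normalization, the averaged embedding of a few examples can serve directly as the new class's weight vector, and classification reduces to cosine similarity. A toy NumPy sketch of that idea (an illustration of the technique, not the ImprintingEngine API):

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(1)

# A few normalized "embeddings" per class, standing in for extractor outputs.
class_a = l2_normalize(rng.normal([2, 0, 0, 0], 0.1, (5, 4)))
class_b = l2_normalize(rng.normal([0, 2, 0, 0], 0.1, (5, 4)))

# Imprint: each new weight vector is the normalized mean embedding.
weights = np.stack([
    l2_normalize(class_a.mean(axis=0)),
    l2_normalize(class_b.mean(axis=0)),
])

# Classify a new embedding by cosine similarity (dot product of unit vectors).
query = l2_normalize(rng.normal([2, 0, 0, 0], 0.1, 4))
scores = weights @ query
print("predicted class:", int(np.argmax(scores)))  # 0
```

No gradient descent is needed, which is why imprinting can learn a new class from only a handful of examples.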

| Model name | Training style | Base dataset | Input size | TF ver. | Micro ¹ | Model size |
|---|---|---|---|---|---|---|
| EfficientNet-EdgeTpu (L) | Backpropagation | 1,000 objects (ILSVRC2012) | 300x300x3 | 1 | Yes | 11.7 MB |
| EfficientNet-EdgeTpu (M) | Backpropagation | 1,000 objects (ILSVRC2012) | 240x240x3 | 1 | Yes | 7.6 MB |
| EfficientNet-EdgeTpu (S) | Backpropagation | 1,000 objects (ILSVRC2012) | 224x224x3 | 1 | Yes | 5.7 MB |
| MobileNet V1 | Backpropagation | 1,000 objects (ILSVRC2012) | 224x224x3 | 1 | Yes | 3.5 MB |
| MobileNet V1 | Weight imprinting | 1,000 objects (ILSVRC2012) | 224x224x3 | 1 | No | 5.3 MB |

¹ Indicates compatibility with the Dev Board Micro. Some models are not compatible because they require a CPU-bound op that is not supported by TensorFlow Lite for Microcontrollers, or they require more memory than is available on the board. (All models are compatible with all other Coral boards.)

Example code


Image recognition with video

Multiple examples showing how to stream images from a camera and run classification or detection models with the TensorFlow Lite API. Each example uses a different camera library, such as GStreamer, OpenCV, PyGame, or PiCamera.

Languages: Python, C++

Pipelined image classification

An example showing how to pipeline a model across multiple Edge TPUs, allowing you to significantly increase throughput for large models such as Inception.

Languages: Python, C++
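Pipelining raises throughput because each Edge TPU runs its own segment of the model while the other TPUs work on different frames. A minimal thread-and-queue sketch of that pattern in pure Python (the lambdas stand in for model segments; nothing here touches a real Edge TPU):

```python
import queue
import threading

def make_stage(fn, inbox, outbox):
    """Run fn on items from inbox, forward results, stop on None."""
    def worker():
        while (item := inbox.get()) is not None:
            outbox.put(fn(item))
        outbox.put(None)                 # propagate shutdown downstream
    t = threading.Thread(target=worker)
    t.start()
    return t

# Hypothetical two-segment "model": each stage would run on its own TPU.
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    make_stage(lambda x: x * 2, q_in, q_mid),   # segment 1
    make_stage(lambda x: x + 1, q_mid, q_out),  # segment 2
]

for frame in range(5):                   # feed "frames" into the pipeline
    q_in.put(frame)
q_in.put(None)

results = list(iter(q_out.get, None))
for t in stages:
    t.join()
print(results)  # [1, 3, 5, 7, 9]
```

Once the pipeline is full, each stage works concurrently, so steady-state throughput is limited by the slowest segment rather than the whole model's latency.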