Image classification

An image classification model can recognize known objects in an image and report a confidence score for each label.

With the Coral Edge TPU™, you can run an image classification model directly on your device, processing real-time video at almost 400 frames per second. You can even run additional models concurrently on the same Edge TPU while maintaining a high frame rate.

This page provides several trained models that are compiled for the Edge TPU, example code to run them, and information about how to train your own model with TensorFlow.
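
The confidence scores come out of a classifier as a tensor of per-label values; with the quantized models below, these are uint8 scores that must be scaled back to floats before reading them as confidences. Here is a minimal sketch of that post-processing step in plain Python (the labels, scale, and zero point are illustrative, not taken from a real model; in practice the PyCoral library performs this step for you):

```python
def dequantize(scores, scale, zero_point):
    """Map raw quantized uint8 scores back to float confidences."""
    return [scale * (s - zero_point) for s in scores]

def top_k(scores, labels, k=3):
    """Return the k highest-confidence (label, score) pairs."""
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Illustrative values only; a real model supplies its own labels and
# quantization parameters.
labels = ["cat", "dog", "bird", "fish"]
raw = [200, 30, 180, 10]                       # uint8 output tensor
scores = dequantize(raw, scale=1 / 255, zero_point=0)
print(top_k(scores, labels, k=2))              # "cat" first, then "bird"
```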

Trained models

These models are trained and compiled for the Edge TPU.

Notice: These are not production-quality models; they are for demonstration purposes only.
| Model name | Detections/Dataset | Input size | Depth mul. | TF ver. | Latency¹ | Accuracy | Model size | Downloads |
|---|---|---|---|---|---|---|---|---|
| EfficientNet-EdgeTpu (L) | 1,000 objects (ILSVRC2012) | 300x300 | N/A | 1 | 24.5 ms | Top-1: 81.2%, Top-5: 95.1% | 12.8 MB | Edge TPU model, CPU model, Labels file, All model files |
| EfficientNet-EdgeTpu (M) | 1,000 objects (ILSVRC2012) | 240x240 | N/A | 1 | 8.4 ms | Top-1: 80.1%, Top-5: 94.5% | 8.7 MB | Edge TPU model, CPU model, Labels file, All model files |
| EfficientNet-EdgeTpu (S) | 1,000 objects (ILSVRC2012) | 224x224 | N/A | 1 | 4.9 ms | Top-1: 78.9%, Top-5: 94.7% | 6.8 MB | Edge TPU model, CPU model, Labels file, All model files |
| Inception V1 | 1,000 objects (ILSVRC2012) | 224x224 | N/A | 1 | 3.1 ms | Top-1: 71.9%, Top-5: 92.0% | 7.0 MB | Edge TPU model, CPU model, Labels file, All model files |
| Inception V3 | 1,000 objects (ILSVRC2012) | 224x224 | N/A | 1 | 13.0 ms | Top-1: 75.4%, Top-5: 93.2% | 12.0 MB | Edge TPU model, CPU model, Labels file, All model files |
| Inception V3 | 1,000 objects (ILSVRC2012) | 299x299 | N/A | 1 | 42.2 ms | Top-1: 79.9%, Top-5: 95.7% | 24.0 MB | Edge TPU model, CPU model, Labels file, All model files |
| Inception V4 | 1,000 objects (ILSVRC2012) | 299x299 | N/A | 1 | 84.0 ms | Top-1: 80.5%, Top-5: 95.7% | 43.0 MB | Edge TPU model, CPU model, Labels file, All model files |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 128x128 | 0.25 | 1 | 0.6 ms | Top-1: 41.2%, Top-5: 66.6% | 0.7 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 160x160 | 0.5 | 1 | 1.0 ms | Top-1: 63.7%, Top-5: 83.4% | 1.6 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 192x192 | 0.75 | 1 | 1.4 ms | Top-1: 67.2%, Top-5: 88.1% | 2.9 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 224x224 | 1.0 | 1 | 2.3 ms | Top-1: 69.5%, Top-5: 90.6% | 4.5 MB | Edge TPU model, CPU model, Labels file, All model files |
| MobileNet V2 | 900+ birds (iNaturalist 2017) | 224x224 | 1.0 | 1 | 2.5 ms | N/A | 4.1 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V2 | 1000+ insects (iNaturalist 2017) | 224x224 | 1.0 | 1 | 2.5 ms | N/A | 4.1 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V2 | 2000+ plants (iNaturalist 2017) | 224x224 | 1.0 | 1 | 2.6 ms | N/A | 5.5 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V2 | 1,000 objects (ILSVRC2012) | 224x224 | 1.0 | 1 | 2.5 ms | Top-1: 73.2%, Top-5: 90.0% | 4.0 MB | Edge TPU model, CPU model, Labels file, All model files |
| MobileNet V1 | 1,000 objects (ILSVRC2012) | 224x224 | 1.0 | 2 | 2.4 ms | Top-1: 69.5%, Top-5: 89.8% | 4.6 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V2 | 1,000 objects (ILSVRC2012) | 224x224 | 1.0 | 2 | 2.5 ms | Top-1: 73.2%, Top-5: 91.8% | 4.2 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V3 | 1,000 objects (ILSVRC2012) | 224x224 | 1.0 | 2 | 2.7 ms | Top-1: 77.5%, Top-5: 93.6% | 5.0 MB | Edge TPU model, CPU model, Labels file |
| ResNet-50 | 1,000 objects (ILSVRC2012) | 224x224 | N/A | 2 | 42.9 ms | Top-1: 73.6%, Top-5: 93.8% | 25.0 MB | Edge TPU model, CPU model, Labels file |

¹ Latency is the time to perform one inference, as measured with a Coral USB Accelerator on a desktop CPU. Latency varies between systems and is primarily intended for comparison between models. For more comparisons, see the Performance Benchmarks.

On-device training models

These models are designed for compatibility with the on-device transfer learning APIs provided with PyCoral and libcoral.

The "backpropagation" models are embedding extractor models, compiled with the last fully-connected layer removed. They do not perform classification on their own, and must be paired with the SoftmaxRegression API, which allows you to perform on-device backpropagation to train the classification layer.
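
Conceptually, what the SoftmaxRegression API trains is an ordinary softmax (multinomial logistic regression) layer on top of the frozen embeddings, updated by gradient descent. That idea can be sketched in plain Python, using toy 2-D "embeddings" rather than real extractor output:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_last_layer(embeddings, labels, num_classes, lr=0.5, epochs=200):
    """Train only a softmax classification layer on frozen embeddings
    via per-sample gradient descent on the cross-entropy loss."""
    dim = len(embeddings[0])
    w = [[0.0] * dim for _ in range(num_classes)]
    b = [0.0] * num_classes
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            logits = [sum(wi * xi for wi, xi in zip(w[c], x)) + b[c]
                      for c in range(num_classes)]
            p = softmax(logits)
            for c in range(num_classes):
                g = p[c] - (1.0 if c == y else 0.0)   # dL/dlogit_c
                b[c] -= lr * g
                w[c] = [wi - lr * g * xi for wi, xi in zip(w[c], x)]
    return w, b

def predict(w, b, x):
    """Return the class index with the highest logit."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) + bc
              for row, bc in zip(w, b)]
    return max(range(len(logits)), key=lambda c: logits[c])

# Toy 2-D "embeddings" for two classes (illustrative only).
X = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
y = [0, 0, 1, 1]
w, b = train_last_layer(X, y, num_classes=2)
print([predict(w, b, x) for x in X])   # recovers the training labels
```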

The "weight imprinting" models are modified to include an L2-normalization layer and other changes to be compatible with the ImprintingEngine API, which performs weight imprinting to retrain classifications.
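
The idea behind weight imprinting is that a new class's weight vector is set directly from example embeddings, with no gradient descent: average the L2-normalized embeddings of a few examples, normalize the result, and classify by cosine similarity. A sketch of that scheme with toy vectors (not real extractor output):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (leave zero vectors alone)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def imprint(examples):
    """Imprint one class weight vector: the L2-normalized mean of the
    L2-normalized embeddings of a few examples of that class."""
    dim = len(examples[0])
    mean = [0.0] * dim
    for e in examples:
        for i, x in enumerate(l2_normalize(e)):
            mean[i] += x / len(examples)
    return l2_normalize(mean)

def classify(weights, embedding):
    """Score each class by cosine similarity (dot of unit vectors)."""
    x = l2_normalize(embedding)
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    return max(range(len(scores)), key=lambda c: scores[c])

# Toy embeddings (illustrative): two classes, imprinted from two
# examples each.
weights = [imprint([[1.0, 0.0], [0.9, 0.1]]),
           imprint([[0.0, 1.0], [0.1, 0.9]])]
print(classify(weights, [0.8, 0.2]))   # closest to class 0
```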

| Model name | Training style | Base dataset | Input size | TF ver. | Model size | Downloads |
|---|---|---|---|---|---|---|
| EfficientNet-EdgeTpu (L) | Backpropagation | 1,000 objects (ILSVRC2012) | 300x300 | 1 | 11.7 MB | Edge TPU model, CPU model, Labels file |
| EfficientNet-EdgeTpu (M) | Backpropagation | 1,000 objects (ILSVRC2012) | 240x240 | 1 | 7.6 MB | Edge TPU model, CPU model, Labels file |
| EfficientNet-EdgeTpu (S) | Backpropagation | 1,000 objects (ILSVRC2012) | 224x224 | 1 | 5.7 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V1 | Backpropagation | 1,000 objects (ILSVRC2012) | 224x224 | 1 | 3.5 MB | Edge TPU model, CPU model, Labels file |
| MobileNet V1 | Weight imprinting | 1,000 objects (ILSVRC2012) | 224x224 | 1 | 5.4 MB | Edge TPU model, CPU model, Labels file, All model files |

Example code


Image recognition with video

Multiple examples showing how to stream images from a camera and run classification or detection models with the TensorFlow Lite API. Each example uses a different camera library: GStreamer, OpenCV, PyGame, or PiCamera.

Languages: Python, C++

Pipelined image classification

An example showing how to pipeline a model across multiple Edge TPUs, allowing you to significantly increase throughput for large models such as Inception.

Languages: Python, C++
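
Model pipelining works by splitting one model into segments that each run on their own Edge TPU, so an early segment can start on the next frame while later segments are still finishing earlier ones. The producer/consumer structure can be sketched with ordinary Python threads and queues (the two arithmetic stages below are stand-ins for model segments):

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Run one pipeline segment: pull inputs, apply fn, push results."""
    while True:
        item = inbox.get()
        if item is None:               # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(fn(item))

# Two illustrative "segments" standing in for halves of a large model.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)),
           threading.Thread(target=stage, args=(lambda x: x * 10, q1, q2))]
for t in threads:
    t.start()

for i in range(4):                     # feed inputs without waiting
    q0.put(i)
q0.put(None)

results = []
while (r := q2.get()) is not None:
    results.append(r)
for t in threads:
    t.join()
print(results)                         # prints [10, 20, 30, 40]
```

Because each stage keeps its own worker busy, throughput approaches that of the slowest segment rather than the sum of all segments, which is the same reason pipelining across Edge TPUs raises frame rates for large models.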