Models

Built for the Edge TPU

In the lists below, each "Edge TPU model" link provides a .tflite file that is pre-compiled to run on the Edge TPU. You can run these models on your Coral device using our example code. (Remember to download the model's corresponding labels file.)

For many of the models, we've also provided a link for "All model files," which is an archive file that includes the following:

  • Trained model checkpoints
  • Frozen graph for the trained model
  • Eval graph text protos (for easy viewing)
  • Info file containing input and output information
  • Quantized TensorFlow Lite model that runs on CPU (included with classification models only)

If you want to use a model as the basis for transfer learning, download the "All model files" archive to get the checkpoint file you'll need, as shown in the tutorials for retraining a classification model and retraining an object detection model.

If you'd like to download all models at once, you can clone our Git repo https://github.com/google-coral/edgetpu and then find the models in test_data/.

Notice: These are not production-quality models; they are for demonstration purposes only.

To build your own model for the Edge TPU, you must use the Edge TPU Compiler.

All models trained on ImageNet used the ILSVRC2012 dataset.

Image classification


Object detection


MobileNet SSD v2 (Faces)

Detects the location of human faces
Dataset: Open Images v4
Input size: 320x320
(Does not require a labels file)

Semantic segmentation


On-device backpropagation (classification)

These are embedding extractor models: they are compiled with the last fully-connected layer removed, so they do not perform classification on their own. Pair them with the SoftmaxRegression API, which performs on-device backpropagation to train the classification layer.

For details, read Retrain a classification model on-device with backpropagation.
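The idea behind this setup can be sketched in plain NumPy: the extractor's embeddings are fixed, and only a single softmax layer is trained with gradient descent. All names, shapes, and the synthetic data below are illustrative, not the pycoral SoftmaxRegression API:

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes = 3
embed_dim = 8
n = 60

# Fake "embeddings" from the frozen extractor, with separable classes.
labels = rng.integers(0, num_classes, size=n)
centers = rng.normal(size=(num_classes, embed_dim))
x = centers[labels] + 0.1 * rng.normal(size=(n, embed_dim))

# Trainable classification head: weights + bias, zero-initialized.
w = np.zeros((embed_dim, num_classes))
b = np.zeros(num_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

one_hot = np.eye(num_classes)[labels]

# Gradient descent on cross-entropy loss. The "backpropagation" only
# ever touches this last layer, because the extractor is fixed.
lr = 0.5
for _ in range(200):
    probs = softmax(x @ w + b)
    grad = probs - one_hot  # d(loss)/d(logits)
    w -= lr * (x.T @ grad) / n
    b -= lr * grad.mean(axis=0)

accuracy = (softmax(x @ w + b).argmax(axis=1) == labels).mean()
```

Because only one small matrix of weights is updated, this kind of training is cheap enough to run on-device while the embedding extractor continues to execute on the Edge TPU.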


MobileNet v1 embedding extractor

Input size: 224x224
Training: 1,000 objects from ImageNet

On-device weight imprinting (classification)

These models are modified to include an L2-normalization layer and other changes that make them compatible with the ImprintingEngine API, which uses weight imprinting to retrain the classification layer. Each model is compiled for the Edge TPU, but the last fully-connected layer executes on the CPU to enable retraining.

For details, read Retrain a classification model on-device with weight imprinting.
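Weight imprinting can be sketched conceptually in NumPy: rather than running gradient descent, each class's weight vector in the final fully-connected layer is set to the L2-normalized mean of that class's normalized embeddings, and classification becomes a cosine-similarity comparison. Names, shapes, and the synthetic data below are illustrative, not the pycoral ImprintingEngine API:

```python
import numpy as np

rng = np.random.default_rng(1)

num_classes = 3
embed_dim = 8

def l2_normalize(v, axis=-1):
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

# Fake embeddings: a few training examples per class.
centers = rng.normal(size=(num_classes, embed_dim))
examples = {c: centers[c] + 0.1 * rng.normal(size=(5, embed_dim))
            for c in range(num_classes)}

# "Imprint" the weights: one normalized mean embedding per class.
weights = np.stack([
    l2_normalize(l2_normalize(examples[c]).mean(axis=0))
    for c in range(num_classes)
])  # shape: (num_classes, embed_dim)

# Classification: cosine similarity between a normalized query
# embedding and each imprinted weight vector.
query = centers[2] + 0.1 * rng.normal(size=embed_dim)
scores = weights @ l2_normalize(query)
predicted = int(scores.argmax())
```

This is why imprinting needs only a handful of examples per class: computing a normalized mean is a single pass over the data, with no iterative training at all.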