In the lists below, each "Edge TPU model" link provides a .tflite file that is pre-compiled to run on the Edge TPU. You can run these models on your Coral device using our example code.
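For instance, here's a minimal sketch of running one of the classification models below with the PyCoral library; the model, labels, and image file names are placeholders for files you download from the lists on this page.

```python
# A minimal sketch of classifying an image with a pre-compiled Edge TPU
# model using PyCoral. File names are placeholders.
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Load the Edge TPU-compiled model and allocate tensors.
interpreter = make_interpreter('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to the size the model expects (e.g., 224x224).
size = common.input_size(interpreter)
image = Image.open('image.jpg').convert('RGB').resize(size, Image.LANCZOS)
common.set_input(interpreter, image)

# Run inference and print the top result.
interpreter.invoke()
labels = read_label_file('imagenet_labels.txt')
for c in classify.get_classes(interpreter, top_k=1):
    print('%s: %.5f' % (labels.get(c.id, c.id), c.score))
```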
For some models, there's a link for "All model files," which is an archive that includes additional files beyond the compiled model, such as the training checkpoint. Download this "All model files" archive to get the checkpoint file you'll need if you want to use the model as your basis for transfer learning, as shown in the tutorials to retrain a classification model and retrain an object detection model.
If you'd like to download all models at once, you can clone our Git repo at https://github.com/google-coral/test_data.
To build your own model for the Edge TPU, you must use the Edge TPU Compiler.
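The compiler is a command-line tool; for consistency with the other examples here, this sketch invokes it from Python. The model file name is a placeholder for your own quantized TensorFlow Lite model.

```python
# A minimal sketch of invoking the Edge TPU Compiler on a quantized
# TensorFlow Lite model. 'model.tflite' is a placeholder; by default the
# compiler writes 'model_edgetpu.tflite' to the current directory.
import subprocess

subprocess.run(['edgetpu_compiler', 'model.tflite'], check=True)
```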
All models trained on ImageNet used the ILSVRC2012 dataset.
EfficientNet-EdgeTpu (S)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
EfficientNet-EdgeTpu (M)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 240x240
EfficientNet-EdgeTpu (L)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 300x300
MobileNet V1
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V2
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V3
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V1 (TF2)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V2 (TF2)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V3 (TF2)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileNet V2 (iNat insects)
Recognizes 1,000+ types of insects
Dataset: iNaturalist
Input size: 224x224
MobileNet V2 (iNat plants)
Recognizes 2,000+ types of plants
Dataset: iNaturalist
Input size: 224x224
MobileNet V2 (iNat birds)
Recognizes 900+ types of birds
Dataset: iNaturalist
Input size: 224x224
Inception V1
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
Inception V2
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
Inception V3
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299
Inception V4
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299
ResNet-50
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 224x224
MobileDet SSD
Detects the location of 90 types of objects
Dataset: COCO
Input size: 320x320
MobileNet SSD v1 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300
MobileNet SSD v2 (COCO)
Detects the location of 90 types of objects
Dataset: COCO
Input size: 300x300
MobileNet SSD v2 (Faces)
Detects the location of human faces
Dataset: Open Images v4
Input size: 320x320
(Does not require a labels file)
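To run the detection models above, PyCoral's detect adapter converts the model's raw outputs into bounding boxes. A minimal sketch, with placeholder file names:

```python
# A minimal sketch of running an SSD detection model with PyCoral.
# File names are placeholders.
from PIL import Image

from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter(
    'ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's input size, keeping the scale factors
# so the returned boxes can be mapped back to the original image.
image = Image.open('image.jpg')
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

interpreter.invoke()
labels = read_label_file('coco_labels.txt')
for obj in detect.get_objects(interpreter, score_threshold=0.4,
                              image_scale=scale):
    print(labels.get(obj.id, obj.id), obj.score, obj.bbox)
```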
U-Net MobileNet v2 (128)
Recognizes and segments pets using 3 classes: pixels belonging to a pet, pixels bordering a pet, and background pixels (it does not classify the type of pet)
Dataset: Oxford-IIIT Pet Dataset
Input size: 128x128
U-Net MobileNet v2 (256)
Recognizes and segments pets using 3 classes: pixels belonging to a pet, pixels bordering a pet, and background pixels (it does not classify the type of pet)
Dataset: Oxford-IIIT Pet Dataset
Input size: 256x256
MobileNet v2 DeepLab v3 (0.5 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 0.5
MobileNet v2 DeepLab v3 (1.0 depth multiplier)
Recognizes and segments 20 types of objects
Dataset: PASCAL VOC 2012
Input size: 513x513
Depth multiplier: 1.0
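The segmentation models above output a per-pixel mask rather than labels or boxes. A minimal sketch of running one with PyCoral, using placeholder file names:

```python
# A minimal sketch of running a semantic segmentation model with PyCoral.
# File names are placeholders.
import numpy as np
from PIL import Image

from pycoral.adapters import common, segment
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('deeplabv3_mnv2_pascal_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's input size (e.g., 513x513).
width, height = common.input_size(interpreter)
image = Image.open('image.jpg').convert('RGB').resize(
    (width, height), Image.LANCZOS)
common.set_input(interpreter, image)

interpreter.invoke()
result = segment.get_output(interpreter)
# Some models emit per-pixel class scores; reduce those to a class-ID mask.
if len(result.shape) == 3:
    result = np.argmax(result, axis=-1)
print('Mask shape:', result.shape, 'classes present:', np.unique(result))
```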
These are embedding extractor models: they're compiled with the last fully-connected layer removed. They do not perform classification on their own, and must be paired with the SoftmaxRegression API, which allows you to perform on-device backpropagation to train the classification layer.
For details, read Retrain a classification model on-device with backpropagation. A brief training sketch follows the list below.
EfficientNet-EdgeTpu (S) embedding extractor
Input size: 224x224
Training: 1,000 objects from ImageNet
EfficientNet-EdgeTpu (M) embedding extractor
Input size: 240x240
Training: 1,000 objects from ImageNet
EfficientNet-EdgeTpu (L) embedding extractor
Input size: 300x300
Training: 1,000 objects from ImageNet
MobileNet v1 embedding extractor
Input size: 224x224
Training: 1,000 objects from ImageNet
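As a sketch of that workflow (assuming the pycoral.learn.backprop API; the embeddings and labels below are placeholder arrays that you'd normally produce by running your training images through one of the extractors above):

```python
# A minimal sketch of training a classification layer on top of an
# embedding extractor with pycoral's SoftmaxRegression. The embeddings and
# labels are placeholder NumPy arrays; in practice you produce them by
# running training images through the extractor model.
import numpy as np

from pycoral.learn.backprop.softmax_regression import SoftmaxRegression

feature_dim = 1280   # embedding size of the extractor (model-dependent)
num_classes = 3      # number of classes you want to recognize

# Placeholder training/validation data (embedding vectors and labels).
data = {
    'data_train': np.random.rand(90, feature_dim).astype(np.float32),
    'labels_train': np.random.randint(num_classes, size=90),
    'data_val': np.random.rand(10, feature_dim).astype(np.float32),
    'labels_val': np.random.randint(num_classes, size=10),
}

model = SoftmaxRegression(feature_dim, num_classes)
model.train_with_sgd(data, num_iter=500, learning_rate=0.01, batch_size=30)

# Append the trained layer to the extractor to get a deployable model
# (method name per the pycoral.learn API; paths are placeholders).
model.serialize_model('extractor_edgetpu.tflite',
                      'retrained_model_edgetpu.tflite')
```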
These models are modified to include an L2-normalization layer and other changes to be compatible with the ImprintingEngine API, which performs weight imprinting to retrain classifications. The model is built for the Edge TPU, but the last fully-connected layer executes on the CPU to enable retraining.
For details, read Retrain a classification model on-device with weight imprinting. A brief retraining sketch follows the list below.
MobileNet v1 with L2-norm
Input size: 224x224
Training: 1,000 objects from ImageNet
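As a sketch of weight imprinting with the model above (assuming the pycoral.learn.imprinting API; file names and training images are placeholders):

```python
# A minimal sketch of weight imprinting with pycoral's ImprintingEngine.
# File names and the training-image structure are placeholders.
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.learn.imprinting.engine import ImprintingEngine
from pycoral.utils.edgetpu import make_interpreter

engine = ImprintingEngine('mobilenet_v1_1.0_224_l2norm_quant_edgetpu.tflite',
                          keep_classes=False)

# Run each training image through the embedding extractor, then imprint
# its embedding onto that class's weights.
extractor = make_interpreter(engine.serialize_extractor_model())
extractor.allocate_tensors()
size = common.input_size(extractor)

training_images = {0: ['cat1.jpg', 'cat2.jpg'], 1: ['dog1.jpg']}  # placeholders
for class_id, paths in training_images.items():
    for path in paths:
        image = Image.open(path).convert('RGB').resize(size, Image.LANCZOS)
        common.set_input(extractor, image)
        extractor.invoke()
        engine.train(classify.get_scores(extractor), class_id)

# Save the retrained model with the imprinted classification layer.
with open('retrained_imprinting_edgetpu.tflite', 'wb') as f:
    f.write(engine.serialize_model())
```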