Object detection example (deprecated)

Note: This example and the APIs it uses are no longer maintained. We recommend you instead use the equivalent example that uses the TF Lite APIs.

The object_detection.py example performs object detection using DetectionEngine from the Edge TPU API, with the detection model, labels file, and image you provide. If you don't provide a labels file, the script defaults to detecting faces (so you must pass a face detection model).
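
For reference, the core of the script looks roughly like the following. This is a minimal sketch based on the deprecated Edge TPU Python API (edgetpu 2.x); the file paths are only examples, and the exact method name may differ in older releases (for example, DetectWithImage instead of detect_with_image).

from PIL import Image
from edgetpu.detection.engine import DetectionEngine

# Load the detection model onto the Edge TPU (example paths).
engine = DetectionEngine('models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite')

# Parse the labels file, which maps "id name" per line; skip this for a face model.
labels = {}
with open('models/coco_labels.txt') as f:
    for line in f:
        if line.strip():
            key, val = line.split(maxsplit=1)
            labels[int(key)] = val.strip()

# Run inference on a single image; top_k limits how many candidates are returned.
img = Image.open('images/grace_hopper.bmp')
results = engine.detect_with_image(img, threshold=0.05, keep_aspect_ratio=True,
                                   relative_coord=False, top_k=10)

for obj in results:
    print('-----------------------------------------')
    if labels:
        print(labels[obj.label_id])
    print('score = ', obj.score)
    print('box = ', obj.bounding_box.flatten().tolist())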

The examples below use a MobileNet SSD model that's trained to detect either 90 different types of objects (from the COCO dataset) or just human faces.

Before you begin, you must have already set up your Dev Board or USB Accelerator.

Download the Edge TPU API examples

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update

sudo apt-get install edgetpu-examples

The examples are saved at /usr/share/edgetpu/examples/.

Caution: If you first set up your device prior to our v2.12.1 update (Sep 25, 2019), then running the following examples might cause conflicts with your previous version of the Edge TPU library. To resolve, follow the steps to set up our new Debian packages.

Run the example for object detection

cd /usr/share/edgetpu/examples/

python3 object_detection.py \
--model models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
--label models/coco_labels.txt \
--input images/grace_hopper.bmp \
--output ${HOME}/object_detection_results.jpg

Figure 1. grace_hopper.bmp

You should see results like this:

-----------------------------------------
person
score =  0.87890625
box =  [0.36061957478523254, 16.603958666324615, 513.4191654920578, 596.4085128307343]

By default, you'll probably see more objects detected than shown here, but most with very low confidence scores. That's because the object_detection.py script sets top_k to 10, so it prints up to the ten highest-scoring candidates. You can lower that value to receive only one or a few of the top results.

The script also creates a copy of the image with box overlays for the detected objects and saves it at the location specified with the --output parameter.
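
If you want to reproduce that overlay step yourself, here is a rough sketch using Pillow's ImageDraw, continuing from the hypothetical engine and results in the sketch above (with absolute box coordinates, i.e. relative_coord=False):

import os
from PIL import ImageDraw

# Draw each detected bounding box on a copy of the input image and save it,
# similar to what the --output option produces.
overlay = img.copy().convert('RGB')
draw = ImageDraw.Draw(overlay)
for obj in results:
    x0, y0, x1, y1 = obj.bounding_box.flatten().tolist()
    draw.rectangle([x0, y0, x1, y1], outline='red')
overlay.save(os.path.expanduser('~/object_detection_results.jpg'))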

Help! If you're on a Raspberry Pi and you see an error that says No such file or directory: 'feh', run sudo apt-get install feh and then try again.

Run the example for face detection

Execute object_detection.py with a face detection model (and no labels file):

cd /usr/share/edgetpu/examples/

python3 object_detection.py \
--model models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
--input images/grace_hopper.bmp \
--output ${HOME}/face_detection_results.jpg

You should see results like this:

-----------------------------------------
score =  0.99609375
box =  [143.88912090659142, 40.834905445575714, 381.8060402870178, 365.49142384529114]

See the object_detection.py source here.

To create your own object detection model, read the tutorial about how to Retrain an object detection model.

grace_hopper.bmp is available in the public domain from Naval History and Heritage Command.