Image classification example with Pi Camera (deprecated)

Note: This example and the APIs it uses are no longer maintained. We recommend you instead use the newer Pi Camera example based on the TensorFlow Lite APIs.

This example performs live image classification with the Raspberry Pi camera and the USB Accelerator, using the ClassificationEngine API. (If you have a Dev Board, instead see how to connect a Coral Camera or USB camera.)

Before you begin, you must have already set up your USB Accelerator with a Raspberry Pi and Pi Camera.

Download the Edge TPU API examples

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update

sudo apt-get install edgetpu-examples

The examples are saved at /usr/share/edgetpu/examples/.

Caution: If you first set up your device prior to our v2.12.1 update (Sep 25, 2019), then running the following examples might cause conflicts with your previous version of the Edge TPU library. To resolve, follow the steps to set up our new Debian packages.

Download the model

For this demo, you need a model that can recognize the objects you'll hold in front of the camera. We suggest the following MobileNet model, which can recognize over 1,000 kinds of objects:


EXAMPLE_DIR=$HOME/edgetpu_models   # any writable directory works

mkdir -p $EXAMPLE_DIR && cd $EXAMPLE_DIR

curl -O https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite

curl -O https://dl.google.com/coral/canned_models/imagenet_labels.txt

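The labels file maps each numeric class ID to a human-readable name, one entry per line (an ID followed by the label text). The sketch below shows one way to parse such a file in plain Python; the function name and sample lines are illustrative, not part of the Edge TPU API:

```python
def read_label_file(path):
    """Parse a labels file whose lines look like '409  analog clock'."""
    labels = {}
    with open(path) as f:
        for line in f:
            # Split into at most two fields: the numeric ID and the label text.
            pair = line.strip().split(maxsplit=1)
            if len(pair) == 2:
                labels[int(pair[0])] = pair[1]
    return labels

# Demonstrate with a small file written locally.
with open("sample_labels.txt", "w") as f:
    f.write("0  background\n409  analog clock\n")

labels = read_label_file("sample_labels.txt")
print(labels[409])  # -> analog clock
```

Splitting with `maxsplit=1` keeps multi-word labels (such as "analog clock") intact.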
Run the code

cd /usr/share/edgetpu/examples/

python3 classify_capture.py \
--model $EXAMPLE_DIR/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--label $EXAMPLE_DIR/imagenet_labels.txt

Now start holding some objects up to the camera and you'll see the live classification results on your monitor.
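Each classification result the engine reports is essentially a list of (label_id, score) pairs for the classes whose confidence clears a threshold, sorted by score. The following self-contained sketch mimics that post-processing step; the scores list here is mock data standing in for the model's real output, and the function name is illustrative:

```python
import heapq

def top_k_results(scores, threshold=0.1, top_k=3):
    """Return up to top_k (label_id, score) pairs whose score clears
    the threshold, sorted by descending score."""
    candidates = [(i, s) for i, s in enumerate(scores) if s >= threshold]
    return heapq.nlargest(top_k, candidates, key=lambda pair: pair[1])

# Mock softmax-like scores for a 5-class model.
scores = [0.02, 0.65, 0.05, 0.20, 0.08]
results = top_k_results(scores)
print(results)  # [(1, 0.65), (3, 0.2)] -- only two classes clear the threshold
```

You would then map each label ID back to its name with the labels file to produce the text overlaid on the camera preview.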

This sample captures camera images using the picamera API; see the example's source code in /usr/share/edgetpu/examples/ for details.

To create your own classification model, read the tutorial about how to Retrain an image classification model.