Image classification example with Pi Camera (deprecated)
This classify_capture.py example performs live image classification with the Raspberry Pi camera and the USB Accelerator, using the ClassificationEngine API. (If you have a Dev Board, instead see how to connect a Coral Camera or USB camera.)
Before you begin, you must have already set up your USB Accelerator with a Raspberry Pi and Pi Camera.
Download the Edge TPU API examples
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install edgetpu-examples
The examples are saved at /usr/share/edgetpu/examples/.
Download the model
For this demo, you need a model that can recognize the kinds of objects you have on hand to put in front of the camera. We suggest the following MobileNet model, which can recognize over 1,000 kinds of objects:
EXAMPLE_DIR=$HOME/coral-examples
mkdir -p $EXAMPLE_DIR && cd $EXAMPLE_DIR
curl -O https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/imagenet_labels.txt
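Optionally, you can verify that the downloaded model loads on the Edge TPU and check the input size it expects before running the demo. The following is a minimal sketch using the (now deprecated) edgetpu Python API; the model path assumes the EXAMPLE_DIR location used in the commands above:
import os
from edgetpu.classification.engine import ClassificationEngine

# Path assumes the download commands above were used as-is.
model_path = os.path.expanduser(
    '~/coral-examples/mobilenet_v2_1.0_224_quant_edgetpu.tflite')

# Constructing the engine also confirms the Edge TPU runtime can see the USB Accelerator.
engine = ClassificationEngine(model_path)

# Shape is [batch, height, width, channels]; for this model it should be 1 x 224 x 224 x 3.
print('Expected input tensor shape:', engine.get_input_tensor_shape())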
Run the code
cd /usr/share/edgetpu/examples/
python3 classify_capture.py \
--model $EXAMPLE_DIR/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--label $EXAMPLE_DIR/imagenet_labels.txt
Now start holding some objects up to the camera and you'll see the live classification results on your monitor.
This classify_capture.py sample captures camera images using the picamera API (see the source here).
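If you want to see how the pieces fit together without opening the installed file, here is a condensed sketch of the same approach (not the exact source): it streams RGB frames from the Pi Camera, classifies each frame with the ClassificationEngine, and overlays the top result on the camera preview. It assumes the deprecated edgetpu Python library is installed and that the model and label files downloaded earlier are in the current directory.
import io
import numpy as np
import picamera
from edgetpu.classification.engine import ClassificationEngine
from edgetpu.utils import dataset_utils

# Paths assume the model and labels downloaded in the previous step.
engine = ClassificationEngine('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
labels = dataset_utils.read_label_file('imagenet_labels.txt')

# The model dictates the frame size it expects (224 x 224 for this MobileNet).
_, height, width, _ = engine.get_input_tensor_shape()

with picamera.PiCamera(resolution=(640, 480), framerate=30) as camera:
    camera.start_preview()
    try:
        stream = io.BytesIO()
        # capture_continuous yields raw RGB frames resized to the model's input size.
        for _ in camera.capture_continuous(
                stream, format='rgb', use_video_port=True, resize=(width, height)):
            stream.truncate()
            stream.seek(0)
            input_tensor = np.frombuffer(stream.getvalue(), dtype=np.uint8)
            # Returns a list of (label_id, confidence) pairs, best match first.
            results = engine.classify_with_input_tensor(input_tensor, top_k=1)
            if results:
                label_id, score = results[0]
                camera.annotate_text = '%s %.2f' % (labels[label_id], score)
    finally:
        camera.stop_preview()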
To create your own classification model, read the tutorial about how to Retrain an image classification model.