A USB accessory that brings machine learning inferencing to existing systems. Works with Linux, macOS, and Windows.
The Coral USB Accelerator adds an Edge TPU coprocessor to your system, enabling high-speed machine learning inferencing on a wide range of host hardware, simply by connecting it to a USB port.
Performs high-speed ML inferencing
The on-board Edge TPU coprocessor performs 4 trillion operations per second (4 TOPS), drawing 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS in a power-efficient manner. See more performance benchmarks.
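The figures above imply a couple of numbers worth spelling out; a quick back-of-the-envelope sketch (the TOPS, efficiency, and ~400 FPS values are from this page, nothing else is assumed):

```python
# Figures implied by the stated spec.
TOPS = 4.0            # peak int8 throughput, tera-operations/second
TOPS_PER_WATT = 2.0   # stated efficiency (0.5 W per TOPS)
FPS = 400             # approximate MobileNet v2 throughput from the text

power_watts = TOPS / TOPS_PER_WATT  # peak power draw: 4 / 2 = 2.0 W
latency_ms = 1000.0 / FPS           # per-frame budget: ~2.5 ms

print(power_watts)  # 2.0
print(latency_ms)   # 2.5
```

So at full throughput the accelerator draws about 2 W, and the quoted MobileNet v2 rate corresponds to roughly 2.5 ms per frame.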
Supports all major platforms
Connects via USB to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10.
Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
| Specification | Detail |
|---|---|
| ML accelerator | Google Edge TPU coprocessor: 4 TOPS (int8); 2 TOPS per watt |
| Connector | USB 3.0 Type-C* (data/power) |
| Dimensions | 65 mm x 30 mm |
| Availability | Australia, European Union (except France, Czech Republic), Ghana, Hong Kong, India, Indonesia, Israel, Japan, Kenya, Malaysia, New Zealand, Oman, Philippines, Singapore, South Korea, Taiwan, Thailand, United States, Vietnam |
* Compatible with USB 2.0 but inferencing speed is slower.
- Model compatibility on the Edge TPU
- Edge TPU inferencing overview
- Run multiple models with multiple Edge TPUs
- Pipeline a model with multiple Edge TPUs