M.2 Accelerator with Dual Edge TPU
Integrate two Edge TPUs into legacy and new systems using an M.2 E-key interface.
The Coral M.2 Accelerator with Dual Edge TPU is an M.2 module that brings two Edge TPU coprocessors to existing systems and products with a compatible M.2 E-key slot.*
Performs high-speed ML inferencing
Each Edge TPU coprocessor performs 4 trillion operations per second (4 TOPS) while drawing only 2 watts of power. For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS.
With the two Edge TPUs on this module, you can double the inferences per second (8 TOPS total) in several ways, such as running two models in parallel or pipelining a single model across both Edge TPUs.
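As an illustration of the parallel-model case, here is a minimal sketch using the pycoral library. The model filenames and the `run_two_in_parallel` helper are placeholders for this example, and it assumes both Edge TPUs are visible to the driver as devices `:0` and `:1`:

```python
from concurrent.futures import ThreadPoolExecutor


def run_model(model_path, device):
    """Run one inference on a specific Edge TPU (device ':0' or ':1')."""
    import numpy as np
    from pycoral.utils.edgetpu import make_interpreter  # assumes pycoral is installed

    interpreter = make_interpreter(model_path, device=device)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # Dummy input matching the model's expected shape and dtype.
    interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out['index'])


def run_two_in_parallel(model_a, model_b):
    """Dispatch one model to each Edge TPU and wait for both results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(run_model, model_a, ':0')
        b = pool.submit(run_model, model_b, ':1')
        return a.result(), b.result()
```

Calling `run_two_in_parallel('model_a_edgetpu.tflite', 'model_b_edgetpu.tflite')` keeps both coprocessors busy at once; pipelining instead splits one large model's subgraphs across the two devices.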
Works with Debian Linux and Windows
Integrates with Debian-based Linux or Windows 10 systems that have a compatible M.2 E-key slot.
Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
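As a sketch of that workflow: the model is first converted to a fully int8-quantized TensorFlow Lite file, which the Edge TPU Compiler then translates for the accelerator. The function below is a hypothetical helper (it assumes TensorFlow is installed, and the saved-model directory and representative dataset are up to you):

```python
def convert_to_int8_tflite(saved_model_dir, representative_dataset):
    """Full-integer quantization, a prerequisite for the Edge TPU Compiler."""
    import tensorflow as tf  # assumes TensorFlow is installed

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # A small sample of real inputs used to calibrate the quantization ranges.
    converter.representative_dataset = representative_dataset
    # Restrict to int8 ops so every op can map onto the Edge TPU.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()
```

The resulting `.tflite` file is then passed through the Edge TPU Compiler (`edgetpu_compiler model.tflite`) to produce a model the Edge TPU can execute.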
Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
| Spec | Detail |
|---|---|
| ML accelerator | 2x Google Edge TPU coprocessor: 8 TOPS (int8); 2 TOPS per watt |
| Connector | M.2 E-key (with two PCIe Gen2 x1 lanes)* |
| Dimensions | 22 mm x 30 mm (M.2-2230-D3-E) |
| Availability | Australia, European Union (except Czech Republic), Ghana, Hong Kong, India, Indonesia, Israel, Japan, Kenya, Malaysia, New Zealand, Oman, Philippines, Singapore, South Korea, Taiwan, Thailand, United States, Vietnam |
* Although the M.2 Specification (section 5.1.2) declares E-key sockets provide two instances of PCIe x1, most manufacturers provide only one. To use both Edge TPUs, be sure your socket connects both instances to the host.
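One quick way to check this (a sketch, assuming the PCIe driver and the pycoral library are installed; `count_pcie_edge_tpus` is a hypothetical helper name) is to count how many PCIe-attached Edge TPUs the host enumerates:

```python
def count_pcie_edge_tpus():
    """Return how many Edge TPUs are visible over PCIe."""
    from pycoral.utils.edgetpu import list_edge_tpus  # assumes pycoral is installed

    return sum(1 for tpu in list_edge_tpus() if tpu['type'] == 'pci')
```

On a socket that wires up both PCIe x1 instances, this should report 2; on a socket that connects only the first instance, it reports 1.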
- Model compatibility on the Edge TPU
- Edge TPU inferencing overview
- Run multiple models with multiple Edge TPUs
- Pipeline a model with multiple Edge TPUs