Build beneficial and privacy-preserving AI

A local AI platform to strengthen society, improve the environment, and enrich lives

Coral is a complete toolkit for building products with local AI. Its on-device inference capabilities let you build products that are efficient, private, fast, and offline.

Efficient

Balance power and performance with local, embedded applications.

Private

Keep user data private by performing all inferences locally. You decide when data is stored or transferred.

Fast

Run lightning-fast AI at industry-leading inference speeds for embedded devices.

Offline

Deploy in the field where connectivity is limited.

Flexible enough for startups and large-scale enterprises

Solutions for on-device intelligence

Object detection

Draw a bounding box around each recognized object in an image.

Pose estimation

Estimate the poses of people in an image by identifying various body joints.

Image segmentation

Identify various objects in an image and their location on a pixel-by-pixel basis.

Key phrase detection

Listen to audio samples and quickly recognize known words and phrases.
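To make the object-detection use case above concrete, here is a minimal, hedged sketch of the post-processing step behind "draw a box around each recognized object": filtering a detection model's raw outputs (normalized boxes, class IDs, confidence scores) by a score threshold and scaling the surviving boxes to pixel coordinates. The function names and tensor layout are illustrative assumptions, not the Coral API.

```python
# Illustrative post-processing for an object-detection model's raw output.
# Layout assumption: boxes are (ymin, xmin, ymax, xmax) in normalized
# [0, 1] coordinates, parallel to class_ids and scores.

def filter_detections(boxes, class_ids, scores, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [
        {"box": b, "class_id": c, "score": s}
        for b, c, s in zip(boxes, class_ids, scores)
        if s >= threshold
    ]

def to_pixels(box, width, height):
    """Scale a normalized box to pixel coordinates for drawing."""
    ymin, xmin, ymax, xmax = box
    return (int(ymin * height), int(xmin * width),
            int(ymax * height), int(xmax * width))

# Example: two candidate detections; the second falls below the threshold.
detections = filter_detections(
    boxes=[(0.1, 0.2, 0.5, 0.6), (0.0, 0.0, 0.9, 0.9)],
    class_ids=[3, 7],
    scores=[0.82, 0.31],
)
```

A drawing library would then render `to_pixels(d["box"], width, height)` for each surviving detection on the input image.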

Building smarter cities

Manufacturing more efficiently

Helping people get around

Meet Darcy, a Coral-powered platform that streamlines cloud-to-edge AI

Darcy and its user-friendly software make standing up AI applications much easier for city administrators, school districts, and even farmers who want to monitor remote fields and orchards for areas that need attention.

Smart camera features Google AI accelerator and ARM processor

Vision System Design

The VisionAI smart camera features a quad-core Cortex-A53 ARM processor and a Coral Accelerator to support AI, deep learning, and machine learning applications.

Google Releases New Coral APIs for IoT AI

InfoQ

The new release brings parity across the C++ and Python SDKs and includes more efficient memory usage. Other updates include additional pre-trained models and general availability of model pipelining.
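Model pipelining splits a compiled model into segments, each running on its own accelerator, so inputs stream through the segments like an assembly line. The following is a hedged, pure-Python sketch of that idea using threads and queues; the two toy segment functions stand in for halves of a compiled model and are not the Coral pipelining API.

```python
# Sketch of the pipelining idea: each "segment" runs in its own worker,
# pulling from an input queue and pushing to the next stage's queue.
import queue
import threading

def run_segment(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down and pass it along
            outbox.put(None)
            return
        outbox.put(fn(item))

# Two toy "segments" standing in for halves of a partitioned model.
segments = [lambda x: x + 1, lambda x: x * 2]

queues = [queue.Queue() for _ in range(len(segments) + 1)]
threads = [
    threading.Thread(target=run_segment, args=(fn, queues[i], queues[i + 1]))
    for i, fn in enumerate(segments)
]
for t in threads:
    t.start()

for x in range(3):                # feed inputs; results stream out in order
    queues[0].put(x)
queues[0].put(None)

results = []
while (out := queues[-1].get()) is not None:
    results.append(out)
for t in threads:
    t.join()
```

Because each stage processes a different input at the same time, throughput approaches that of the slowest segment rather than the sum of all segments, which is the motivation for pipelining a model across multiple accelerators.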