Audio and Speech for Edge AI

Creating a more connected & intuitive world

Syntiant has built the capabilities to train and deploy production-grade models for always-on audio and speech applications. Whether for a custom wakeword or a common audio event such as a glass break, Syntiant’s deep learning models are optimized for edge applications, dramatically improving power efficiency and performance and ultimately enabling a natural, hands-free interface directly on the edge device.
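An always-on wakeword pipeline of this kind can be sketched as a loop that scores short audio frames and fires when a smoothed score crosses a threshold. Everything below (frame size, thresholds, and the energy-based stand-in for the neural model) is illustrative only, not Syntiant's implementation or API:

```python
from collections import deque

def frame_score(frame):
    """Stand-in for a neural wakeword model: mean absolute amplitude."""
    return sum(abs(s) for s in frame) / len(frame)

def detect_wakeword(frames, threshold=0.5, window=3):
    """Return indices of frames where the smoothed score reaches threshold."""
    recent = deque(maxlen=window)   # rolling window of recent scores
    hits = []
    for i, frame in enumerate(frames):
        recent.append(frame_score(frame))
        # Smoothing over a few frames suppresses one-frame spikes.
        if len(recent) == window and sum(recent) / window >= threshold:
            hits.append(i)
    return hits

# Five quiet frames followed by four loud "wakeword" frames.
frames = [[0.01] * 160] * 5 + [[0.9] * 160] * 4
print(detect_wakeword(frames))  # → [6, 7, 8]
```

On a real device the scoring function would be the deployed neural network and the loop would consume a live microphone stream, but the trigger logic has the same shape.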

Pre-Trained and Customizable Models for Targeted Industries
Smart Home

Personal Devices

Industrial 


Commercial


Automotive

Always-On Voice (AOV)
Wakewords & Commands, Menus


Acoustic Echo Cancellation


Noise Suppression
to Aid with AOV

Acoustic Event Detection (AED)
Glassbreak, Alarms
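Acoustic events such as a glass break are characterized by a sharp energy transient, so a simple detector can flag frames whose energy jumps sharply relative to the frame before. The sketch below is illustrative only (the ratio threshold and energy feature are hypothetical, not Syntiant's model):

```python
def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def detect_transients(frames, ratio=10.0):
    """Flag frames whose energy jumps by more than `ratio` over the previous frame."""
    events = []
    prev = None
    for i, frame in enumerate(frames):
        e = frame_energy(frame)
        if prev is not None and prev > 0 and e / prev >= ratio:
            events.append(i)   # sudden onset, e.g. a glass break or alarm
        prev = e
    return events

# Steady background noise with one loud transient at frame 4.
frames = [[0.05] * 100] * 4 + [[1.0] * 100] + [[0.05] * 100] * 2
print(detect_transients(frames))  # → [4]
```

A production AED model would classify the event type as well (glass break vs. alarm vs. speech), which is where the trained network replaces the energy ratio.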

AI detection, tracking & classification with proven performance

Accelerated Model Building
Unique, adaptive neural network structure with blocks that map exceptionally well to vector units; dynamically scales with the complexity of the input.

Fast
Runs on the existing host core alongside existing software.

Deep Learning Algorithms
More accurate and robust than alternative approaches; generalizes easily to new perception tasks for maximum scalability.

Customizable
Can be tuned to a customer-specific use case.

Hardware Agnostic
Runs seamlessly on the most widely used chipsets on the market; ports to new ISAs 20x faster than other solutions.

Turnkey
Installed as over-the-air firmware updates.
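A network that "dynamically scales with the complexity of the input" is commonly realized as an early-exit cascade: a cheap first stage handles clear-cut inputs, and a costlier stage runs only on ambiguous ones. A minimal sketch of that idea (the two stages and thresholds below are hypothetical stand-ins, not Syntiant's architecture):

```python
def cheap_score(x):
    """Inexpensive first-stage estimate (illustrative: mean of the input)."""
    return sum(x) / len(x)

def expensive_score(x):
    """Costlier second stage, run only on ambiguous inputs (illustrative: median)."""
    return sorted(x)[len(x) // 2]

def adaptive_classify(x, low=0.2, high=0.8):
    """Exit early when the cheap stage is confident; otherwise run the full path."""
    s = cheap_score(x)
    if s <= low or s >= high:          # confident either way: exit early
        return s >= high, "early"
    return expensive_score(x) >= 0.5, "full"

print(adaptive_classify([0.9] * 10))   # clear positive, cheap path suffices
print(adaptive_classify([0.5] * 10))   # ambiguous, full path runs
```

The design point is that average-case compute tracks input difficulty: easy inputs pay only for the first stage, which is what makes always-on operation power-efficient.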