
Audio and Speech for Edge AI
Creating a more connected & intuitive world
Syntiant has built the capabilities to train and deploy production-grade models for always-on audio and speech applications. Whether detecting a custom wakeword or a common audio event such as a glass-break sound, Syntiant’s deep learning models are purpose-built for edge applications, dramatically improving power efficiency and performance and ultimately enabling a natural, hands-free interface directly on the edge device.
Pre-Trained and Customizable Models for Targeted Industries
Always-On Voice (AOV)
Wakewords, Commands & Menus

Acoustic Echo Cancellation

Noise Suppression to Aid with AOV
Acoustic Event Detection (AED)
Glassbreak, Alarms
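
As an illustration of what an always-on pipeline of this kind involves, the sketch below frames an audio stream, extracts simple spectral features, and runs a small classifier whose output is thresholded to raise a wakeword or glass-break event. This is a minimal conceptual example only, not Syntiant's implementation; the frame size, feature choice, `TinyAudioClassifier` model, labels, and threshold are all assumptions made for illustration.

```python
# Minimal always-on audio event detection loop (illustrative sketch only;
# the model, labels, and thresholds are assumptions, not Syntiant's design).
import numpy as np

SAMPLE_RATE = 16_000          # 16 kHz mono input, typical for speech front ends
FRAME_SIZE = 512              # ~32 ms analysis frames
EVENT_LABELS = ["silence", "wakeword", "glass_break", "alarm"]  # hypothetical label set
DETECTION_THRESHOLD = 0.8     # minimum confidence before an event is reported


def spectral_features(frame: np.ndarray) -> np.ndarray:
    """Very small feature vector: log energies of a handful of FFT bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, 16)                  # 16 coarse frequency bands
    return np.log1p(np.array([band.mean() for band in bands]))


class TinyAudioClassifier:
    """Stand-in for a small neural network; returns per-label probabilities."""

    def __init__(self, n_features: int = 16, n_labels: int = len(EVENT_LABELS)):
        rng = np.random.default_rng(0)
        self.weights = rng.normal(scale=0.1, size=(n_features, n_labels))

    def predict(self, features: np.ndarray) -> np.ndarray:
        logits = features @ self.weights
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                             # softmax over event labels


def run_always_on(audio_frames, model: TinyAudioClassifier):
    """Consume an iterable of audio frames and yield detected events."""
    for frame in audio_frames:
        probs = model.predict(spectral_features(frame))
        label_idx = int(np.argmax(probs))
        if EVENT_LABELS[label_idx] != "silence" and probs[label_idx] >= DETECTION_THRESHOLD:
            yield EVENT_LABELS[label_idx], float(probs[label_idx])


if __name__ == "__main__":
    model = TinyAudioClassifier()
    fake_stream = (np.random.randn(FRAME_SIZE).astype(np.float32) for _ in range(100))
    for event, confidence in run_always_on(fake_stream, model):
        print(f"detected {event} with confidence {confidence:.2f}")
```

In a production system the classifier would be a trained, quantized network running on the device's low-power processor, but the frame-by-frame detect-and-threshold loop stays conceptually the same.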
AI detection, tracking & classification with proven performance
Unique, adaptive neural network structure with blocks that map exceptionally well to vector units
Dynamically scales with the complexity of the input
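
The page does not spell out how the network scales its work, but one common way to realize "dynamically scales with the complexity of the input" is an early-exit design: cheap blocks run first, and later blocks are evaluated only while the intermediate prediction remains uncertain. The sketch below shows that idea; the block sizes, exit heads, and threshold are assumptions made for illustration, not a description of Syntiant's architecture.

```python
# Early-exit inference sketch: later (more expensive) blocks run only when the
# confidence of an intermediate prediction is too low. Purely illustrative;
# block widths, the threshold, and the two-class output are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Three stacked blocks, each followed by a small exit head.
BLOCK_WEIGHTS = [rng.normal(scale=0.1, size=(16, 16)) for _ in range(3)]
EXIT_WEIGHTS = [rng.normal(scale=0.1, size=(16, 2)) for _ in range(3)]
CONFIDENCE_THRESHOLD = 0.6    # arbitrary demo threshold for stopping early


def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def early_exit_predict(features: np.ndarray):
    """Run blocks one at a time, returning as soon as the prediction is confident."""
    hidden = features
    for depth, (w_block, w_exit) in enumerate(zip(BLOCK_WEIGHTS, EXIT_WEIGHTS), start=1):
        hidden = np.tanh(hidden @ w_block)        # one block of compute
        probs = softmax(hidden @ w_exit)          # cheap intermediate prediction
        if probs.max() >= CONFIDENCE_THRESHOLD or depth == len(BLOCK_WEIGHTS):
            return int(np.argmax(probs)), float(probs.max()), depth
    # unreachable: the final block always returns


if __name__ == "__main__":
    easy_input = np.ones(16)            # stand-in for a simple input frame
    hard_input = rng.normal(size=16)    # stand-in for an ambiguous input frame
    for name, x in [("easy", easy_input), ("hard", hard_input)]:
        label, confidence, depth = early_exit_predict(x)
        print(f"{name}: class {label}, confidence {confidence:.2f}, blocks used {depth}")
```

When an input is resolved by an early exit, the remaining blocks are skipped entirely, which is one way a fixed network can spend less compute on simple inputs and more on difficult ones.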

Fast
Accelerated model building

Deep Learning Algorithms
Runs on the existing host core alongside existing software
Significantly more accurate and robust than competing approaches
Easily generalizes to new perception tasks for maximum scalability

Customizable
Can be tuned to a customer-specific use case

Hardware Agnostic
Seamlessly runs on the most widely used chipsets in the market
Ports to new ISAs 20x faster than other solutions in the market
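
The page does not say how ports are made fast, but a common way to keep a deep learning stack hardware agnostic is to route all heavy math through a small kernel interface, so supporting a new ISA means supplying a handful of optimized kernels rather than rewriting the model code. The sketch below illustrates that pattern with a hypothetical `Backend` interface and a portable NumPy reference implementation; it is an assumption about the general technique, not a description of Syntiant's codebase.

```python
# Kernel-dispatch sketch: all network math goes through a tiny Backend interface,
# so porting to a new ISA means reimplementing a few kernels, not the model.
# Names (Backend, ReferenceBackend, matvec, relu) are illustrative assumptions.
from typing import Protocol
import numpy as np


class Backend(Protocol):
    """The small surface an ISA-specific port has to implement."""

    def matvec(self, weights: np.ndarray, x: np.ndarray) -> np.ndarray: ...
    def relu(self, x: np.ndarray) -> np.ndarray: ...


class ReferenceBackend:
    """Portable fallback: plain NumPy, runs anywhere Python runs."""

    def matvec(self, weights: np.ndarray, x: np.ndarray) -> np.ndarray:
        return weights @ x

    def relu(self, x: np.ndarray) -> np.ndarray:
        return np.maximum(x, 0.0)


def run_network(layers: list[np.ndarray], x: np.ndarray, backend: Backend) -> np.ndarray:
    """Model code never touches ISA-specific intrinsics, only the backend."""
    for weights in layers:
        x = backend.relu(backend.matvec(weights, x))
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(8, 16)), rng.normal(size=(4, 8))]
    output = run_network(layers, rng.normal(size=16), ReferenceBackend())
    print(output.shape)  # (4,) -- an ISA-tuned backend would produce the same result
```

Because the interface is small, a port to a new ISA mostly consists of rewriting kernels like `matvec` with that architecture's vector instructions, which also fits the blocks-map-to-vector-units structure described above.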

Turnkey
Installed via over-the-air firmware updates