Sensory Unveils Smart Wakewords, Ushering in an Era of Truly Natural, Conversational AI
New AI-powered technology allows devices to understand context, intent, and identity—enabling seamless conversations without rigid 'Hey, device' commands.
"The new generation of Voice LLMs needs a low-power method to call them up, and Smart Wakewords creates the enabling technology that finally shifts the industry toward a 'natural dialogue' paradigm."
— Todd Mozer, CEO of Sensory
SANTA CLARA, CA, UNITED STATES, December 1, 2025 /EINPresswire.com/ -- Sensory, Inc., the pioneer in embedded voice and vision AI, today announced the launch of Sensory Smart Wakewords, a revolutionary leap forward in human-machine communication. This next-generation technology shatters the limitations of traditional, rigid wakewords, enabling consumer electronics to listen, understand, and act with unprecedented flexibility and accuracy. For the first time, interacting with devices will feel less like giving commands and more like having a natural conversation.
For years, the voice experience has been defined by a frustratingly robotic call-and-response. Sensory Smart Wakewords dismantles this barrier by creating a fluid, intelligent, and personalized interface. The technology introduces a suite of approaches that add flexibility while increasing accuracy. One example is the ability to use a wakeword before or after a command: the device listens continuously at low power, keeping a few seconds of audio in a temporary buffer. When it hears the wakeword, it can process the entire utterance, whether the command was spoken before or after the trigger, and understand the user's intent without forcing an unnatural speaking order.
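To illustrate the idea (a conceptual sketch only, not Sensory's SDK or implementation), a device can keep a short rolling audio buffer and, when a low-power detector fires, hand both the buffered audio and the speech that follows to a heavier recognizer. The function names, buffer sizes, and sample rate below are assumptions for illustration.

```python
# Illustrative sketch of a rolling low-power audio buffer that lets a command
# be spoken before or after the wakeword. Not Sensory's API; the callbacks
# (detect_wakeword, capture_followup, interpret) are hypothetical.
from collections import deque

SAMPLE_RATE = 16_000                  # 16 kHz mono audio (assumed)
BUFFER_SECONDS = 4                    # "a few seconds" of history kept in RAM
FRAME = 320                           # 20 ms frames at 16 kHz

history = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS // FRAME)  # rolling frames

def on_audio_frame(frame, detect_wakeword, capture_followup, interpret):
    """Feed each 20 ms frame; process the utterance around a detected wakeword."""
    history.append(frame)
    if detect_wakeword(frame):                      # low-power first-stage trigger
        pre_command = list(history)                 # speech captured before the wakeword
        post_command = capture_followup(seconds=3)  # speech spoken after the wakeword
        # Hand both spans to the heavier recognizer; whichever side contains the
        # command, the user's intent can still be recovered.
        return interpret(pre_command + post_command)
    return None
```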
"We've been trapped in a 'command-and-control' paradigm with voice AI for far too long," said Todd Mozer, CEO of Sensory. "The new generation of Voice LLMs need a low power method to call them up, and Smart Wakewords creates the enabling technology that finally shifts the industry toward a 'natural dialogue' paradigm. It's about making the technology adapt to humans, not the other way around. This is the key to unlocking the true potential of voice interaction in consumer electronics for everything from wearables and vehicles to the next generation of robots, and voice agents."
Intelligence That Adapts to the User and Environment
Sensory Smart Wakewords introduces a suite of AI-driven features that operate primarily on-device for maximum privacy and efficiency. This "edge intelligence" allows for a deeply contextual and adaptive experience:
- Contextual Understanding: An integrated Natural Language Understanding (NLU) engine determines if a phrase is an intended command or simply background conversation between people, dramatically reducing false activations. It can even use simple queries like "How," "Why," or "Tell me" as a trigger, passing the phrase to a Language Model (LM) to determine if a response is appropriate.
- Adaptive Thresholds: The system intelligently and automatically adjusts its sensitivity. By assessing the signal-to-noise (S/N) ratio and background noise levels, and even detecting user accents, it can fine-tune its models to minimize false accepts and false rejects in any environment (see the sketch after this list).
- Conversational Flow: Users can engage in frictionless follow-up conversations. After an initial interaction, the system can enter a temporary mode where no wakeword is needed, allowing for a natural back-and-forth dialogue within a user-defined time window.
- Biometric Security: The wakeword itself can become a user key. With Sensory's industry-leading voice biometrics, the technology can be trained to respond only to a specific user's voice, adding a powerful layer of personalization and security.
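As a rough illustration of the adaptive-threshold idea above (not Sensory's actual algorithm), the acceptance threshold can be raised in noisy conditions to suppress false accepts and relaxed in quiet conditions to reduce false rejects. All constants below are assumed values.

```python
# Conceptual sketch of an adaptive wakeword threshold driven by measured SNR.
def adaptive_threshold(snr_db, base=0.70, low_snr=5.0, high_snr=25.0,
                       max_adjust=0.15):
    """Map the measured signal-to-noise ratio (dB) to a detection threshold."""
    # Normalize SNR into [0, 1]: 0 = very noisy, 1 = very clean.
    clean = min(max((snr_db - low_snr) / (high_snr - low_snr), 0.0), 1.0)
    # Noisy audio -> threshold drifts up toward base + max_adjust;
    # clean audio -> threshold drifts down toward base - max_adjust.
    return base + max_adjust * (1.0 - 2.0 * clean)

def accept(wakeword_score, snr_db):
    """Accept the wakeword only if its score clears the environment-aware bar."""
    return wakeword_score >= adaptive_threshold(snr_db)
```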
Multi-Layered Intelligence for Unmatched Accuracy
The power of Smart Wakewords lies in its multi-layered validation architecture, which coordinates checks across the hardware and software stack to confirm user intent.
1. On-Chip: A low-power hardware block listens for the initial wakeword, allowing the device to remain in a low-power mode until it triggers the OS to handle a conversational AI interaction.
2. On-Device: The device's OS can run more sophisticated acoustic models and NLU to re-validate the wakeword, check for biometric matches, and analyze intent without Internet connectivity.
3. Cloud Confirmation: For the most critical tasks, the system can confer with a large language model (LLM) in the cloud to assess the probability of the user's intent, combining the wakeword's likelihood score with the contextual meaning of the full phrase (a simple fusion sketch follows this list).
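A minimal sketch of that third layer, assuming a hypothetical ask_cloud_llm() call: the device's wakeword likelihood and the cloud model's intent probability are fused into a single score before the device acts. The weighting and thresholds are illustrative assumptions, not Sensory's published values.

```python
# Hedged sketch of fusing an on-device wakeword likelihood with a cloud
# model's intent probability before acting on a critical request.
def confirm_intent(wakeword_likelihood, transcript, ask_cloud_llm,
                   weight=0.6, act_threshold=0.75):
    """Return True when the combined evidence says the user really meant it."""
    # Cheap early exit: a very weak wakeword score never reaches the cloud.
    if wakeword_likelihood < 0.3:
        return False
    # The cloud model estimates how likely the phrase is an intentional command.
    intent_probability = ask_cloud_llm(transcript)
    combined = weight * wakeword_likelihood + (1 - weight) * intent_probability
    return combined >= act_threshold
```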
This multi-level approach allows product developers to create the perfect balance of responsiveness, accuracy, and power consumption. The technology is designed for a vast array of electronics, including wearables, hearables, smartphones, medical devices, automotive systems, and PCs. With the ability to listen for multiple wakewords in parallel, a single device can seamlessly serve multiple functions or users.
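One way to picture parallel wakeword listening (again, a hypothetical sketch rather than Sensory's implementation) is to score each incoming audio frame against several wakeword models and route the interaction to whichever model scores highest above a threshold.

```python
# Illustrative only: several wakeword detectors share the same audio frame so
# one device can answer to different assistants, functions, or enrolled users.
def route_frame(frame, detectors, threshold=0.8):
    """detectors: mapping of wakeword name -> scoring function returning 0..1."""
    scores = {name: score(frame) for name, score in detectors.items()}
    best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_name if best_score >= threshold else None

# Example usage (model objects are placeholders):
# detectors = {"hey_kitchen": model_a.score, "ok_media": model_b.score,
#              "alice_only": biometric_model.score}
# active = route_frame(current_frame, detectors)
```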
Availability
Sensory Smart Wakewords will be available for licensing in January 2026. The technology is expected to be broadly available on leading Edge-AI platforms, including the Snapdragon® S7 Gen 1 Sound Platform, Arm-based SoCs, and Cadence HiFi DSP cores. For more information, please visit www.sensory.com or contact sales@sensory.com.
About Sensory
Sensory Inc. develops fast, accurate, and private on-device AI technologies, powering over 2 billion devices globally from Amazon, Google, Microsoft, Samsung, and many others. With more than 60 patents, Sensory's innovations in speech recognition, emergency vehicle detection, voice assistants, biometrics, and natural language understanding span automotive, consumer electronics, wearables, medical devices, and more.
Snapdragon and Snapdragon Sound are trademarks or registered trademarks of Qualcomm Incorporated. Snapdragon Sound is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.
Press
Sensory, Inc.
press@sensory.com
Visit us on social media:
LinkedIn
Facebook
YouTube
X
Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.