In this interview with Semicon Leaders Asia, Charlene Wan, VP of Corporate Marketing & IR at Ambiq Micro, highlights how compressionKIT™ is extending ultra-low-power AI beyond inference to fundamentally rethink how data is captured, stored, and utilised at the edge. By enabling intelligent AI-based compression for continuous sensor streams, the platform helps developers reduce transmission overhead, optimise storage, and extend battery life across always-on devices. The technology complements Ambiq’s broader edge AI portfolio by improving end-to-end system efficiency while supporting scalable applications in wearables, healthcare, and industrial IoT. Through close collaboration with developers and ecosystem partners, Ambiq aims to accelerate the adoption of data-efficient edge computing for the next generation of intelligent, low-power systems.
1. What does the beta release of compressionKIT™ represent for Ambiq Micro’s vision in advancing ultra-low-power edge AI solutions?
The beta release of compressionKIT™ marks an important milestone in Ambiq’s long-term vision of enabling truly scalable, always-on edge AI. While the industry has made significant progress in optimising compute efficiency for machine learning at the edge, one of the less addressed challenges has been the efficient handling of continuous data streams generated by always-on sensors.
At Ambiq, we see ultra-low power AI not just as a function of efficient processors but as a system-level challenge that encompasses sensing, data movement, storage, and inference. compressionKIT represents a natural evolution of this philosophy. It introduces AI-driven data compression as a foundational capability within the edge AI pipeline, allowing developers to rethink how data is captured and utilised rather than simply processed.
By bringing intelligence into the compression layer, we are enabling devices to operate longer, store more meaningful data, and reduce unnecessary transmission—all within tight energy budgets. This aligns directly with our mission to push the boundaries of what is possible in battery-powered and energy-constrained environments.
As Scott Hanson, CTO of Ambiq, puts it:
“The future of edge AI isn’t just about how efficiently you can run inference—it’s about how intelligently you manage data across the entire system. With compressionKIT, we’re extending our ultra-low-power philosophy beyond compute to fundamentally rethink how data is captured, stored, and utilised at the edge.”
Ultimately, compressionKIT reinforces our commitment to delivering not just low-power chips, but a comprehensive platform for efficient, always-on intelligence.
2. How does compressionKIT help developers address the growing challenges of managing continuous sensor data in always-on devices?
Always-on devices, by definition, generate a constant stream of data—whether it’s audio, motion, physiological signals, or environmental inputs. This creates a set of practical challenges for developers, including limited memory capacity, high energy costs associated with wireless transmission, and the need to maintain real-time responsiveness.
compressionKIT addresses these challenges by enabling intelligent, AI-based edge compression. Instead of relying on traditional, generic compression techniques, it uses models trained to understand the structure and relevance of specific sensor data types. This allows it to significantly reduce data size while preserving the features that matter most for downstream analysis.
From a developer’s perspective, this has several tangible benefits. First, it extends on-device storage capacity, allowing for longer data logging without increasing hardware requirements. Second, it reduces the frequency and volume of data transmission, which is among the most power-intensive operations on edge devices. Third, it enables more flexible data workflows, where compressed data can be stored and later reconstructed or analysed as needed.
Importantly, compressionKIT also helps developers avoid the trade-off between data retention and resource constraints. Instead of discarding potentially valuable data early in the pipeline, they can retain compact, information-rich representations that preserve optionality for future use cases.
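To make the idea of learned, feature-preserving compression concrete, here is a minimal sketch of that kind of pipeline. It uses a tiny PCA-style linear codec fitted to example sensor windows; this is an illustration of the general technique only, not Ambiq's compressionKIT algorithm, and the data, window length, and latent size are all made-up assumptions.

```python
import numpy as np

# Illustrative sketch only: a tiny PCA-based codec for sensor windows,
# standing in for the general class of learned compression described
# above. NOT Ambiq's compressionKIT algorithm.

def fit_codec(windows: np.ndarray, latent_dim: int) -> np.ndarray:
    """Learn a linear basis from example windows of shape
    (n_windows, window_len). The top right singular vectors keep the
    directions carrying the most signal energy -- the features that
    matter most for downstream analysis."""
    _, _, vt = np.linalg.svd(windows, full_matrices=False)
    return vt[:latent_dim]                # (latent_dim, window_len)

def compress(window: np.ndarray, basis: np.ndarray) -> np.ndarray:
    return basis @ window                 # latent_dim coefficients

def reconstruct(latent: np.ndarray, basis: np.ndarray) -> np.ndarray:
    return basis.T @ latent               # approximate original window

# Synthetic "sensor" stream: slow sinusoids plus noise, 64-sample windows.
rng = np.random.default_rng(0)
t = np.arange(64)
train = np.stack([
    np.sin(2 * np.pi * f * t / 64) + 0.05 * rng.standard_normal(64)
    for f in rng.uniform(1, 3, size=200)
])

basis = fit_codec(train, latent_dim=8)    # 64 samples -> 8 coefficients
window = train[0]
latent = compress(window, basis)
restored = reconstruct(latent, basis)

ratio = window.size / latent.size         # 8x fewer values to store/send
error = np.sqrt(np.mean((window - restored) ** 2))
print(f"compression ratio: {ratio:.0f}x, RMS error: {error:.3f}")
```

The point of the sketch is the workflow, not the codec: compact latent representations can be stored or transmitted cheaply, then reconstructed later when the full signal is needed, which is exactly the optionality described above.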
3. Could you share how this AI-based codec complements Ambiq’s existing edge AI portfolio and enhances overall system efficiency?
Ambiq’s edge AI portfolio has been built around enabling efficient inference on ultra-low-power hardware. Our SPOT® platform and neural network acceleration capabilities allow developers to deploy sophisticated models within strict energy budgets.
compressionKIT complements this portfolio by addressing the stages before and after inference—specifically, how data is captured, stored, and prepared for processing. In many always-on applications, the cost of handling raw data can exceed the cost of running the AI model itself. By reducing the data footprint early, compressionKIT improves efficiency across the entire system.
This creates a multiplier effect. Lower data volumes mean reduced memory usage, less frequent access to storage, and fewer wireless transmissions. All of these contribute to lower overall power consumption and improved device longevity. At the same time, because the compression is AI-driven, it maintains the fidelity required for accurate inference and analytics.
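The multiplier effect lends itself to a quick back-of-envelope calculation. The sketch below uses entirely made-up figures (per-byte radio and flash costs, a hypothetical 8x compression ratio, a 100 B/s sensor) rather than Ambiq measurements, but it shows why shrinking the data footprint early compounds across the data path.

```python
# Back-of-envelope sketch of the "multiplier effect". All figures are
# illustrative assumptions, not Ambiq measurements.

ENERGY_PER_BYTE_TX_UJ = 1.0      # assumed radio cost per byte (microjoules)
ENERGY_PER_BYTE_FLASH_UJ = 0.1   # assumed flash-write cost per byte

def daily_energy_mj(bytes_per_day: float) -> float:
    """Energy (millijoules) to store and transmit one day of data."""
    per_byte = ENERGY_PER_BYTE_TX_UJ + ENERGY_PER_BYTE_FLASH_UJ
    return bytes_per_day * per_byte / 1000.0

raw_bytes = 24 * 3600 * 100          # e.g. a 100 B/s sensor stream
compressed_bytes = raw_bytes / 8     # assumed 8x compression ratio

saving = 1 - daily_energy_mj(compressed_bytes) / daily_energy_mj(raw_bytes)
print(f"data-path energy saved: {saving:.1%}")   # 1 - 1/8 = 87.5%
```

Because storage and transmission costs both scale with byte count, the per-byte energy figures cancel out: whatever the radio actually costs, an 8x reduction in data volume cuts the data-path energy by the same factor.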
In this way, compressionKIT transforms data efficiency into a core component of system design, rather than an afterthought. It works in tandem with our hardware and software stack to deliver end-to-end optimisation, enabling developers to build more capable and scalable edge AI solutions.
4. How is Ambiq Micro working with developers and partners during the beta phase to refine and accelerate the adoption of compressionKIT?
The beta phase is a critical period for us, as it allows us to work closely with developers and partners to ensure that compressionKIT meets real-world needs across diverse applications.
We are engaging with a select group of early adopters who are integrating compressionKIT into their workflows and providing direct feedback on performance, usability, and integration. This collaborative approach helps us refine both the underlying models and the developer experience.
In addition, we are focusing on use-case-driven optimisation. Different applications—such as audio sensing, wearable health monitoring, or industrial vibration analysis—have unique data characteristics. By working closely with partners in these domains, we can tailor compression strategies to deliver the best balance of compression ratio, reconstruction quality, and power efficiency.
We are also investing in tooling and documentation to ensure that developers can easily incorporate compressionKIT into their existing pipelines. Seamless integration is essential for adoption, so we are prioritising compatibility with our broader SDK and development ecosystem.
Finally, we are fostering a broader ecosystem of collaboration, including OEMs, system integrators, and software partners, to accelerate deployment across multiple industries. The goal is to make compressionKIT not just a standalone feature, but a widely adopted capability within the edge AI landscape.
5. What opportunities do you see for collaboration across industries such as wearables, healthcare, and industrial IoT as demand for efficient data handling grows?
As always-on sensing becomes more prevalent, the need for efficient data handling is emerging as a common challenge across industries. This creates significant opportunities for cross-sector collaboration.
In wearables, for example, there is a growing demand for continuous health monitoring and personalised insights. compressionKIT enables richer data collection without compromising battery life, opening the door to more advanced applications in fitness, wellness, and medical monitoring.
In healthcare, efficient compression supports long-term data logging and analysis, which is critical for diagnostics and remote patient monitoring. It also aligns with increasing emphasis on data privacy, as more processing and storage can be kept on-device.
In industrial IoT, sensors are often deployed at scale in environments where power and connectivity are limited. compressionKIT allows these systems to operate more autonomously, reducing the need for constant data transmission while still enabling predictive maintenance and anomaly detection.
What is particularly exciting is the potential for shared innovation across these domains. Techniques developed for one type of sensor or application can often be adapted to others, especially when powered by AI-based approaches. This creates a collaborative ecosystem where advancements in data efficiency can benefit multiple industries simultaneously.
6. What message would you like to share with customers and partners about the future of low-power AI and data-efficient edge computing?
The future of edge AI will be defined by efficiency at every level—not just in computation, but in how data is captured, represented, and utilised. As devices become more intelligent and more autonomous, the ability to manage data effectively will be just as important as the ability to process it.
At Ambiq, we believe that data efficiency is a fundamental enabler of this future. compressionKIT is one step in that direction, but it also reflects a broader commitment to rethinking the entire edge AI stack.
For our customers and partners, our message is clear: there is a tremendous opportunity to build the next generation of always-on, intelligent systems by embracing a more holistic approach to design. By combining ultra-low-power hardware with AI-driven data optimisation, we can unlock new use cases, extend device capabilities, and deliver better user experiences.
We are excited to collaborate with the ecosystem to push these boundaries further. Together, we can create solutions that are not only more powerful, but also more efficient, sustainable, and scalable.