MarkTechPost@AI, September 8, 2024
TinyTNAS: A Groundbreaking Hardware-Aware NAS Tool for TinyML Time Series Classification

TinyTNAS is a groundbreaking hardware-aware multi-objective Neural Architecture Search tool designed specifically for TinyML time series classification. Unlike traditional NAS approaches, TinyTNAS runs efficiently on CPUs without requiring extensive GPU resources, and it lets users set constraints on RAM, FLASH, and MAC operations to discover the optimal neural network architecture within those limits. TinyTNAS can also perform time-bound searches, ensuring the best possible model is found within a user-specified duration.

🤔 TinyTNAS is a hardware-aware multi-objective Neural Architecture Search tool designed specifically for TinyML time series classification. It runs efficiently on CPUs without requiring extensive GPU resources, making it more accessible and applicable across a wide range of scenarios.

💪 TinyTNAS lets users set constraints on RAM, FLASH, and MAC operations and discovers the best neural network architecture within those limits, optimizing architectures for resource-constrained TinyML applications.

⏱️ TinyTNAS can perform time-bound searches, ensuring the best possible model is found within a user-specified duration, which matters for resource-constrained devices that need rapid deployment.

📊 TinyTNAS achieves strong results on multiple datasets, including UCIHAR, PAMAP2, and WISDM, demonstrating its versatility in human activity recognition, healthcare, and human-computer interaction.

🚀 On the UCIHAR dataset, TinyTNAS achieves substantial reductions in resource usage, including RAM, MAC operations, and FLASH, while maintaining superior accuracy and reducing latency by 149x.

📈 On the PAMAP2 and WISDM datasets, TinyTNAS achieves a 6x reduction in RAM usage, along with large reductions in other resource usage, without losing accuracy.

⚡ TinyTNAS completes its search within 10 minutes in a CPU environment, far more efficiently than traditional methods.

🌟 TinyTNAS demonstrates its effectiveness in optimizing neural network architectures for resource-constrained TinyML applications, opening new possibilities for AIoT and low-cost, low-power embedded AI.

🥇 TinyTNAS is among the first NAS tools designed specifically for TinyML time series classification, marking a major step in bringing NAS and TinyML together.

Neural Architecture Search (NAS) has emerged as a powerful tool for automating the design of neural network architectures, offering a clear advantage over manual design by significantly reducing the time and expert effort that architecture development requires. However, traditional NAS faces significant challenges because it depends on extensive computational resources, particularly GPUs, to navigate large search spaces and identify optimal architectures. The process involves finding the best combination of layers, operations, and hyperparameters to maximize model performance on a given task. These resource-intensive methods are impractical for resource-constrained devices that need rapid deployment, which limits their widespread adoption.
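
To make the search idea concrete, the sketch below shows the simplest form of a NAS loop: sample candidate architectures from a discrete search space, score each one, and keep the best. This is a generic, hypothetical random-search illustration; the search space, the names, and the stand-in evaluate function are ours, not TinyTNAS's actual algorithm.

```python
import random

# A toy search space: each candidate is a choice of depth, width, and kernel size.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "filters": [8, 16, 32],
    "kernel_size": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training and validating the candidate. A real NAS run
    would train the model and return its validation accuracy."""
    rng = random.Random(str(sorted(arch.items())))
    return rng.uniform(0.5, 0.95)

def random_search(trials=20, seed=0):
    """Evaluate `trials` random candidates and return the best one found."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"best architecture: {arch}, score: {score:.3f}")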

Current approaches include hardware-aware NAS (HW NAS) methods, which address this impracticality by integrating hardware metrics into the search process. However, these methods still rely on GPUs for model optimization, limiting their accessibility. In the TinyML domain, frameworks like MCUNet and MicroNets have become popular for neural architecture optimization on MCUs, but they too require significant GPU resources. Recent research has introduced CPU-based HW NAS methods for tiny CNNs, but they come with limitations, such as relying on standard CNN layers instead of more efficient options.
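
The defining move of HW NAS is to fold hardware metrics into candidate evaluation. As a rough illustration, the sketch below estimates MAC count, FLASH footprint (weights), and peak RAM (activations) for a stack of 1-D convolutions and filters out candidates that exceed a budget. The cost model is our own simplification (int8 weights, valid padding, stride 1), not the estimator used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Conv1DSpec:
    """One 1-D convolution layer in a candidate architecture."""
    out_channels: int
    kernel_size: int

def estimate_cost(layers, input_len, input_channels, bytes_per_weight=1):
    """Roughly estimate MACs, FLASH (weight storage), and peak RAM (activations)
    for stride-1, valid-padding Conv1D layers with int8 weights."""
    macs, flash_bytes = 0, 0
    peak_ram = input_len * input_channels
    length, channels = input_len, input_channels
    for layer in layers:
        out_len = length - layer.kernel_size + 1
        macs += out_len * layer.out_channels * layer.kernel_size * channels
        flash_bytes += (layer.kernel_size * channels + 1) * layer.out_channels * bytes_per_weight
        # Approximate peak RAM as the largest input+output activation pair.
        peak_ram = max(peak_ram, length * channels + out_len * layer.out_channels)
        length, channels = out_len, layer.out_channels
    return {"macs": macs, "flash_bytes": flash_bytes, "ram_bytes": peak_ram}

def within_budget(cost, max_ram, max_flash, max_macs):
    """Hardware-aware filter: discard candidates that violate any constraint."""
    return (cost["ram_bytes"] <= max_ram and
            cost["flash_bytes"] <= max_flash and
            cost["macs"] <= max_macs)
```

A search driven by a filter like within_budget never spends training time on architectures that could not be deployed on the target MCU in the first place.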

A team of researchers from the Indian Institute of Technology Kharagpur, India, has proposed TinyTNAS, a cutting-edge hardware-aware multi-objective Neural Architecture Search tool designed specifically for TinyML time series classification. TinyTNAS operates efficiently on CPUs, making it more accessible and practical for a wider range of applications. It allows users to define constraints on RAM, FLASH, and MAC operations and discovers optimal neural network architectures within these parameters. A unique feature of TinyTNAS is its ability to perform time-bound searches, ensuring the best possible model is found within a user-specified duration.
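
Putting the two ideas together, a time-bound, constraint-aware search could look like the following sketch: keep sampling and evaluating candidates until a wall-clock deadline, retaining the most accurate architecture that fits the user's RAM, FLASH, and MAC budget. The function signature and budget keys are hypothetical; TinyTNAS's actual search strategy is described in the paper.

```python
import random
import time

def time_bound_search(sample_fn, cost_fn, accuracy_fn, budget,
                      time_limit_s=600.0, seed=0):
    """Search until the time limit expires, keeping the most accurate
    candidate that satisfies every hardware constraint."""
    rng = random.Random(seed)
    deadline = time.monotonic() + time_limit_s
    best_arch, best_acc = None, float("-inf")
    while time.monotonic() < deadline:
        arch = sample_fn(rng)
        cost = cost_fn(arch)
        # Skip candidates that violate any user-defined budget.
        if (cost["ram_bytes"] > budget["ram"] or
                cost["flash_bytes"] > budget["flash"] or
                cost["macs"] > budget["macs"]):
            continue
        acc = accuracy_fn(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

# Example budget in the spirit of a small MCU target (values are illustrative):
# budget = {"ram": 32_000, "flash": 128_000, "macs": 1_000_000}
```

The 600-second default mirrors the 10-minute CPU search time reported later in the article.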

TinyTNAS is designed to work across various time-series datasets, demonstrating its versatility in lifestyle, healthcare, and human-computer interaction domains. Five datasets are used: UCIHAR, PAMAP2, and WISDM for human activity recognition, and the MIT-BIH and PTB Diagnostic ECG databases for healthcare applications. UCIHAR provides 3-axial linear acceleration and angular velocity data, PAMAP2 captures data from 18 physical activities using IMU sensors and a heart rate monitor, and WISDM contains accelerometer and gyroscope data. MIT-BIH includes annotated ECG data covering various arrhythmias, while the PTB Diagnostic ECG Database comprises ECG records from subjects with different cardiac conditions.
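
Streams like these are usually converted into fixed-length windows before classification. As a generic illustration (the window and stride values are ours, not the paper's), the sketch below segments a multi-channel sensor stream and labels each window by majority vote.

```python
import numpy as np

def make_windows(signal, labels, window=128, stride=64):
    """Segment a (timesteps, channels) sensor stream into fixed-length windows,
    labeling each window with the majority per-timestep label inside it."""
    xs, ys = [], []
    for start in range(0, len(signal) - window + 1, stride):
        seg_labels = labels[start:start + window]
        values, counts = np.unique(seg_labels, return_counts=True)
        xs.append(signal[start:start + window])
        ys.append(values[np.argmax(counts)])
    return np.stack(xs), np.array(ys)

# Example: a 3-axis accelerometer stream with per-timestep activity labels.
stream = np.random.randn(1000, 3).astype(np.float32)
activities = np.random.randint(0, 6, size=1000)
X, y = make_windows(stream, activities)
print(X.shape, y.shape)  # (14, 128, 3) (14,)
```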

The results demonstrate the outstanding performance of TinyTNAS across all five datasets. On the UCIHAR dataset, it achieves remarkable reductions in resource usage, including RAM, MAC operations, and FLASH memory, while maintaining superior accuracy and reducing latency by 149x. On the PAMAP2 and WISDM datasets, it achieves a 6x reduction in RAM usage and significant reductions in other resource usage without losing accuracy. TinyTNAS is also highly efficient, completing the search process within 10 minutes in a CPU environment. These results confirm TinyTNAS's effectiveness in optimizing neural network architectures for resource-constrained TinyML applications.

In this paper, the researchers introduced TinyTNAS, which represents a significant advance in bridging Neural Architecture Search (NAS) with TinyML for time series classification on resource-constrained devices. It operates efficiently on CPUs without GPUs and allows users to define constraints on RAM, FLASH, and MAC operations while finding optimal neural network architectures. Results on multiple datasets demonstrate significant performance improvements over existing methods. This work raises the bar for optimizing neural network designs for AIoT and low-cost, low-power embedded AI applications, and it is one of the first efforts to create a NAS tool designed specifically for TinyML time series classification.


Check out the Paper. All credit for this research goes to the researchers of this project.


