MCU AI/ML - Bridging the Gap Between Intelligence and Embedded Systems

2024-11-09 SILICON LABS Official Website
MCU, microcontrollers, Wireless MCUs, EFR32xG24

Artificial Intelligence (AI) and Machine Learning (ML) are key technologies that enable systems to learn from data, make inferences, and improve their performance over time. These technologies have traditionally run in large-scale data centers on powerful GPUs, but there is growing demand to deploy them on resource-limited devices such as microcontrollers (MCUs).


In this blog, we will examine the intersection of MCU technology and AI/ML, and how it affects low-power edge devices. We'll discuss the difficulties, innovations, and practical use cases of running AI on battery-operated MCUs.


AI/ML and MCUs: A Brief Overview

AI is about creating computer systems that can perform human-like tasks, such as understanding language, recognizing patterns, and making decisions. Machine Learning, a subset of AI, uses algorithms that let computers learn from data and improve over time. ML models can find patterns, classify objects, and predict outcomes from examples.


MCUs play an important role in making AI and ML practical on edge devices, where power and memory budgets rule out larger processors.


Some use cases for MCU-based AI/ML at the edge include:

  • Keyword spotting: Recognizing specific words or phrases (e.g., voice commands) without the need for cloud connectivity

  • Sensor fusion: Combining data from multiple sensors to make more informed decisions than a single-sensor solution can

  • Anomaly detection: Detecting outliers or abnormal patterns in sensor data that may indicate faults, errors, or threats, for predictive maintenance or quality control (see the sketch after this list)

  • Object detection: Identifying and locating objects of interest (e.g., faces, pedestrians, vehicles) in images or videos captured by cameras or other sensors

  • Gesture recognition: Interpreting human gestures (e.g., hand movements, facial expressions, body poses) in images or videos captured by cameras or other sensors to improve human-computer interaction
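
To make the anomaly-detection item concrete, here is a minimal, generic sketch, not Silicon Labs-specific and far simpler than the learned models typically deployed, that flags outliers in a window of sensor samples with a z-score test:

```python
import numpy as np

def detect_anomalies(window: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Return a boolean mask of samples whose z-score exceeds `threshold`."""
    mean, std = window.mean(), window.std()
    if std == 0.0:
        # Perfectly flat signal: nothing stands out.
        return np.zeros(window.shape, dtype=bool)
    return np.abs((window - mean) / std) > threshold

# Example: a simulated vibration trace with one injected fault spike.
rng = np.random.default_rng(0)
vibration = rng.normal(0.0, 0.5, 256)
vibration[100] = 5.0  # simulated fault
print(np.flatnonzero(detect_anomalies(vibration)))  # the spike at index 100 is flagged
```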


Challenges of AI/ML on MCUs

Deep learning models, particularly deep neural networks (DNNs), have become indispensable for complex tasks like computer vision and natural language processing. However, their computational demands are substantial. Such resource-intensive models are impractical for everyday devices, especially those powered by low-energy MCUs found in edge devices. The growth of deep learning model complexity is undeniable. As DNNs become more sophisticated, their size balloons, making them incompatible with the limited computing resources available on MCUs.


What is TinyML?

TinyML refers to machine learning models and techniques optimized for deployment on resource-constrained devices. These devices operate at the edge, where data is generated and inferencing is performed locally. Typically running on low-power MCUs, TinyML systems perform inferences on data collected at the node. Inferencing is the moment of truth for an AI model, testing how well it can apply the knowledge learned during training. Local inferencing enables MCUs to execute AI models directly, making real-time decisions without relying on external servers or cloud services.
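
A core TinyML technique is shrinking a trained model until it fits the flash and RAM of an MCU. As a minimal sketch (the toy Keras model and random calibration data below are placeholders used only to keep the example self-contained), post-training int8 quantization with the TensorFlow Lite converter looks like this:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a trained TinyML model (e.g., a small keyword-spotting classifier).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_dataset():
    # In practice, yield a few hundred real input windows so the converter can
    # calibrate the int8 quantization ranges; random data keeps this sketch runnable.
    for _ in range(100):
        yield [np.random.rand(1, 49, 40, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # integer-only path for the MCU
converter.inference_output_type = tf.int8

# The resulting flatbuffer is what gets compiled into the firmware image.
with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```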


Local inferencing in the context of AI/ML is crucial for several reasons:

Resource Constraints: Many embedded devices, especially those running on battery power, have limited memory, processing capability, and energy. Traditional general-purpose microcontrollers struggle to perform AI tasks efficiently because of limited processing power and memory, constrained energy budgets, or a lack of on-chip acceleration. Local inferencing allows these resource-constrained devices to execute AI workloads without draining excessive power.

User Experience Enhancement: Consider an AI-enabled electronic cat flap: trained to distinguish cats from other objects, it opens the door only for the authorized cat. Here, local inferencing improves the user experience by ensuring safety and convenience without the need for additional hardware like RFID collars.

Efficiency and Performance: GPUs are commonly used for large-scale AI deployments because they can perform many operations in parallel, which is essential for effective AI training. However, GPUs are costly and exceed the power budgets of small-scale embedded applications. AI-optimized MCUs, with specialized architectures, strike a balance by delivering better performance and power efficiency for AI workloads. Silicon Labs includes a matrix vector processor as part of its AI/ML enablement. This specialized peripheral is designed to accelerate AI/ML algorithms and other vector math operations, shortening inferencing time and performing these critical tasks at lower power.
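
To see why a matrix vector processor helps, note that the inner loop of most neural-network layers is a matrix-vector multiply-accumulate. The generic illustration below (not tied to any particular Silicon Labs API) shows the kind of operation being accelerated:

```python
import numpy as np

# The core operation of a fully connected layer: y = W @ x + b.
# Without acceleration, an MCU executes this as a long series of scalar
# multiply-accumulate instructions; a matrix vector processor performs the
# same arithmetic in parallel, at lower energy per inference.
def dense_layer(weights: np.ndarray, bias: np.ndarray, x: np.ndarray) -> np.ndarray:
    return weights @ x + bias

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 64)).astype(np.float32)  # 16 neurons, 64 inputs
b = np.zeros(16, dtype=np.float32)
x = rng.standard_normal(64).astype(np.float32)        # one input vector
print(dense_layer(W, b, x).shape)  # (16,)
```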


In summary, local inferencing at the edge enables real-time decision-making, reduces latency, enhances security, brings AI capabilities to battery-operated devices, and improves user experiences, all while respecting resource limitations, making it a critical component of modern computing systems.


Silicon Labs Pioneering AI/ML Solutions for the Edge

In the dynamic landscape of technology, Silicon Labs stands out as a trailblazer in bringing Artificial Intelligence (AI) and Machine Learning (ML) to the edge. Our commitment to innovation has led to groundbreaking solutions that empower resource-constrained devices, such as microcontrollers (MCUs), with intelligent capabilities.


Devices Optimized for TinyML

The EFR32xG24, EFR32xG28, and EFR32xG26 families of MCUs and Wireless MCUs combine a 78 MHz ARM Cortex®-M33 processor, high-performance radios, precision analog capabilities, and an AI/ML hardware accelerator, giving developers a flexible platform for deploying edge intelligence. Supporting a broad range of wireless IoT protocols, these SoCs incorporate the highest security with the best RF performance/energy-efficiency ratio in the market.


Today’s developers are often forced to pay steep performance or energy penalties for deploying AI/ML at the edge. The xG24, xG28, and xG26 families alleviate those penalties as the first ultra-low-power devices with dedicated AI/ML accelerators built in, lowering overall design complexity. This specialized hardware is designed to handle complex calculations, delivering up to 8x faster inferencing and up to a 6x improvement in energy efficiency compared to a firmware-only approach, with even greater gains compared to cloud-based solutions. The hardware accelerator offloads the burden of inferencing from the main application core, leaving more clock cycles available to service your application.


Tools for Simplifying AI/ML Development

The tools to build, test, and deploy the algorithms needed for machine learning are just as important as the MCUs running those algorithms. By partnering with leaders in the TinyML space such as TensorFlow, SensiML, and Edge Impulse, Silicon Labs provides options for beginners and experts alike. Using this AI/ML toolchain with Silicon Labs' Simplicity Studio, developers can create applications that draw information from various connected devices to make intelligent, machine learning-driven decisions.


Silicon Labs provides a variety of tools and resources to support machine learning (ML) applications. Here are some of them:

Machine Learning Applications: The development platform supports embedded machine learning (TinyML) model inference, backed by the TensorFlow Lite for Microcontrollers (TFLM) framework. The repository contains a collection of embedded applications that leverage ML.

Machine Learning Toolkit (MLTK): This is a Python package with command-line utilities and scripts to aid the development of machine-learning models for Silicon Labs' embedded platforms. It includes features for executing ML operations from a command-line interface or a Python script, determining how efficiently an ML model will execute on an embedded platform, and training an ML model using Google TensorFlow.
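
As a hedged illustration of that workflow, the snippet below sketches how the MLTK's Python API can be used to estimate on-device performance; the function name (profile_model), the bundled reference model name, and the accelerator identifier are assumptions based on the MLTK documentation and should be verified against the current release.

```python
# Sketch only: the names below (profile_model, "keyword_spotting_on_off", "MVP")
# are assumptions to check against the MLTK documentation.
from mltk.core import profile_model

# Estimate inference time, energy, and memory for an embedded target,
# with the MVP (matrix vector processor) accelerator enabled.
results = profile_model("keyword_spotting_on_off", accelerator="MVP")
print(results)
```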


Silicon Labs provides a TinyML solution as part of the Machine Learning Toolkit (MLTK). The toolkit includes several models that are used by the TinyML benchmark. These models are available on the Silicon Labs GitHub and include anomaly detection, image classification, and keyword spotting.
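
Before compiling a quantized model into firmware, a common sanity check (not specific to the Silicon Labs tools) is to run the same .tflite flatbuffer on a host PC with the standard TensorFlow Lite interpreter and confirm its outputs; the file name and random input below are placeholders.

```python
import numpy as np
import tensorflow as tf

# Load the same .tflite flatbuffer that will be deployed to the device.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Quantize a float input window into the model's int8 input domain.
scale, zero_point = input_details["quantization"]
window = np.random.rand(*input_details["shape"][1:]).astype(np.float32)  # placeholder input
quantized = np.round(window / scale + zero_point).astype(np.int8)

interpreter.set_tensor(input_details["index"], quantized[np.newaxis, ...])
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]
print("predicted class:", int(np.argmax(scores)))
```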


AI/ML-powered edge devices are opening new horizons for how we engage with our surroundings, and they will soon transform our lives in amazing ways. Silicon Labs is at the forefront of TinyML innovation, making it possible to bring these capabilities to low-power, connected edge devices like never before.


Learn more about how our EFR and EFM MCU platform is optimized for AI/ML at the Edge in our recent Wireless Compute Tech Talk session, An Optimized Platform for AI/ML at the Edge.

