GreenWaves Building Occupancy Management Solution Using the TensorFlow Object Detection API

2024-09-12 GREENWAVES Official Website

1 Building occupancy management solution introduction

GREENWAVES has developed a people counting solution for occupancy management in smart building systems, providing real-time insight into how available space is used by employees and customers. The sensor can be used for tasks such as meeting room or cafeteria usage optimization, desk reservations, and usage-based cleaning.


People counting with infrared sensors offers best-in-class accuracy with full compliance with privacy-related regulations for indoor environments. GAP processors provide a combination of AI computing capability and low-energy operation that enables this type of application.


As part of our development process, we needed to train an optimized neural network with a single-shot detector (SSD) backend. The TensorFlow Object Detection API comes with a number of prepackaged backbone models, but we wanted to design something more optimized for our detection task. We aimed to:

    ● reduce memory footprint,

    ● reduce complexity, and

    ● reduce power consumption.


In this document, we will show how we carried this out and how a custom network design can still leverage all of the SSD backend creation offered by the TensorFlow Object Detection API. We hope this will allow you to:

    ● become familiar with an object detection API like the one provided by TensorFlow.

    ● learn how to modify the API with respect to your custom specifications (i.e., model structure).

    ● learn how to employ the API for custom solutions such as occupancy management.

    ● learn how to generate optimized code for running your solution on GreenWaves’ GAP processors.


2 Object detection API 

Constructing, training, and deploying machine learning models for the localization and identification of multiple objects is a challenging task. To make this easier, we leveraged the TensorFlow Object Detection API, an open-source framework for object detection built on top of TensorFlow. The API supports a number of useful object detection methodologies, including:

    ● Single Shot MultiBox Detector (SSD) 

    ● CenterNet 

    ● RCNN 

    ● EfficientDet 

    ● ExtremeNet


Please use these links (TensorFlow 2 Detection Model Zoo, TensorFlow 1 Detection Model Zoo) to view a full list of object detection methodologies supported by the API. These solutions have different network architectures, training procedures, and optimization strategies. If you are interested in finding out more, you can read this article for more details on the different frameworks along with their advantages and disadvantages.

Although these frameworks exhibit different characteristics, all employ deep convolutional neural networks (CNNs), called backbone models, to extract high-level features from the input images. In fact, it has become normal practice to adapt modern state-of-the-art CNNs as feature-extractor backbones. This is achieved by removing the final fully connected classification layers from a CNN, leaving a deep neural network that can be used to extract semantic meaning from the input image without changing its spatial structure.


The following are some useful CNN structures that can be used as backbone models for detectors:

    ● VGGNet 

    ● MobileNet 

    ● ResNet 

    ● GoogLeNet

    ● DenseNet 

    ● Inception


TensorFlow already provides a collection of detection models built on CNN backbones pre-trained on datasets like COCO and ImageNet. These models can be used for out-of-the-box inference if the target categories are already included in those datasets. Otherwise, they can be used to initialize the model when training on new datasets.


In the rest of this article, we will focus on the SSD object detection algorithm and show you how to use the TensorFlow Object Detection API to develop your own detection network. 


2.1 Single Shot MultiBox Detector (SSD) 

An SSD network has two principal components: a backbone model and an SSD head. As explained earlier, the backbone is typically a CNN model that may be inherited from a state-of-the-art deep model trained on datasets like ImageNet and COCO. The SSD head consists of one or more convolutional layers added to the end of the backbone network, where object bounding box classification takes place. The SSD head layers predict the offsets, and the associated confidence scores, relative to a designed set of default bounding boxes of different scales and aspect ratios (Figure 1).
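
For reference, in the original SSD paper the scale of the default boxes grows linearly across the m feature maps used for prediction:

    s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1),   k = 1, ..., m

with, for example, s_min = 0.2 and s_max = 0.9, and the default box width and height for aspect ratio a_r given by w = s_k * sqrt(a_r) and h = s_k / sqrt(a_r). The exact scales and aspect ratios are configuration parameters, as we will see later.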



Figure 1. An SSD model structure that adds several feature layers to the end of a base network that predicts the offsets to default boxes of different scales and aspect ratios and their associated confidences


As can be seen in Table 1, TensorFlow provides various SSD heads attached to a collection of pre-trained backbone models to facilitate the training process.


In the table, speed refers to the running time in ms per input image, including all corresponding pre-processing and post-processing steps. It should be mentioned that the runtime values depend strongly on the hardware configuration; since all values were produced on the same machine, they are useful as a relative measure of latency.


3 SSD solution deployment using the API 

Using pretrained models as an SSD backbone eases the training process but puts constraints on the network structure. To enable efficient inference at the edge, we need to train a custom CNN solution for applications such as infrared human detection, which only require a small backbone model with a relatively restricted number of parameters.


In the next section, we will show how you can modify the TensorFlow Object Detection API in order to construct any custom SSD model. If you are already familiar with the theoretical concept of a Single Shot MultiBox Detector, then this section will provide you with a concrete example that will allow you to develop any custom SSD model using the API.


3.1 API model structure 


All models under the TensorFlow Object Detection API must implement the DetectionModel interface; for more details, you can take a look at the file defining the generic base class for detection models in the API: 


    ● API: object_detection/core/model.py


At a high level, detection models receive input images and predict output tensors. At training time, the output tensors are passed directly to a specified loss function, while at evaluation time they are passed to the post-processing function, which converts the raw outputs into actual bounding boxes. The Object Detection API follows this structure. The model you want to train should implement the five functions below (a skeleton sketch of these methods follows the list):


    ● Preprocess applies any preprocessing operation to the input image tensor. This could include transformations for data augmentation or input normalization. 

    ● Predict produces the model’s raw predictions that are passed to the corresponding loss or post processing functions (e.g., Non-Maximum Suppression). 

    ● Postprocess converts raw prediction tensors into appropriate detection results (e.g., bounding box index and offset, class scores, etc.). 

    ● Loss defines a loss function that calculates scalar loss tensors over the provided ground truth.

    ● Restore loads checkpoints into the TensorFlow graph. 
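
As a minimal sketch of this interface (assuming the TF2 API; the method bodies are placeholders, not a working model), a custom DetectionModel would implement:

    from object_detection.core import model

    class MyDetectionModel(model.DetectionModel):
        """Partial skeleton of the DetectionModel interface."""

        def preprocess(self, inputs):
            # Data augmentation / input normalization on the raw image batch.
            preprocessed_inputs, true_image_shapes = inputs, None
            return preprocessed_inputs, true_image_shapes

        def predict(self, preprocessed_inputs, true_image_shapes):
            # Forward pass; returns a dict of raw prediction tensors.
            return {}

        def postprocess(self, prediction_dict, true_image_shapes):
            # Convert raw tensors into boxes, scores, and class labels
            # (e.g., via non-maximum suppression).
            return {}

        def loss(self, prediction_dict, true_image_shapes):
            # Scalar loss tensors computed against the provided ground truth.
            return {}

        def restore_map(self, fine_tune_checkpoint_type='detection'):
            # Map variables in the graph to names in a checkpoint.
            return {}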


Depending on whether you are training or evaluating the network, a batch of input images passes through a different sequence of steps, as depicted in Figure 2.


Figure 2. The sequence of steps a batch of input images passes through at training and at evaluation time


3.2 API object detection models 

To allow the construction of DetectionModels for various object detection methodologies (i.e., SSD, CenterNet, RCNN, etc.), different meta-architectures are implemented by the TensorFlow Object Detection API. The idea behind meta-architectures is to provide a standard way to create valid DetectionModels for each of the object detection methodologies. All object detection meta-architectures can be found at the following link:

    ● API: object_detection/meta_architectures 


In the case of custom models, you have the option of implementing a complete DetectionModel following a specific meta-architecture. However, instead of defining a model from scratch, it is possible to create only a feature extractor that can be employed by one of the pre-defined meta-architectures to construct a DetectionModel. It should be emphasized that meta-architectures are classes that define entire families of models using the DetectionModel abstraction.


3.3 The SSD meta-architecture API 

Before describing the stages in the development of a custom SSD model, it is important to establish an understanding of the details of the SSD meta-architecture. As you can see in the example SSD model in Figure 3, there are three principal parts to an SSD model:

    ● SSD feature maps 

    ● Prediction layers (i.e., classes and offsets) 

    ● Post processing layers 


When constructing your model, the Object Detection API uses a model configuration file to automatically create the prediction and post-processing layers. The configuration contains the parameter values for the anchor generator (e.g., aspect ratios and scales of the default bounding boxes), the box predictors (e.g., convolution layer hyperparameters), and post-processing (e.g., IoU and score thresholds). However, SSD feature maps are created by employing pre-constructed feature extractor models. The full list of SSD feature extractor models can be found at

    ● API: object_detection/models 


We can choose an appropriate feature extractor model from the pre-constructed models in the configuration file (i.e., feature_extractor). However, this requires us to know the mappings from model names to their pre-defined structures before changing the configuration file. This mapping can be found at 

    ● API: object_detection/builders/model_builder.py


Also, a number of sample configuration files are provided with the API at

    ● API: object_detection/configs/tf2
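
For instance, the blocks that drive the automatically created layers look like this in a pipeline configuration (a sketch; the numeric values are illustrative, not the ones used in our solution):

    model {
      ssd {
        anchor_generator {
          ssd_anchor_generator {
            num_layers: 4        # one per SSD feature map
            min_scale: 0.2
            max_scale: 0.95
            aspect_ratios: 1.0
            aspect_ratios: 2.0
            aspect_ratios: 0.5
          }
        }
        box_predictor {
          convolutional_box_predictor {
            kernel_size: 3
            # conv hyperparameters (regularizer, initializer, ...)
          }
        }
        post_processing {
          batch_non_max_suppression {
            score_threshold: 0.3
            iou_threshold: 0.5
            max_total_detections: 20
          }
          score_converter: SIGMOID
        }
      }
    }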


Figure 3. A typical SSD model constructed by the API


An SSD meta-architecture (SSDMetaArch) requires a feature extractor (SSDFeatureExtractor) to automatically construct appropriate class and offset prediction layers according to the configuration parameters (i.e., number of classes, scales, and aspect ratios). The general framework is summarized in Figure 4.


Figure 4. SSD DetectionModel construction framework


3.4 Custom SSD model design 

Now, let’s look at how we can construct a custom SSD model using the API meta-architectures. While we could implement a custom SSD detection model from scratch, the previous sections have shown that the construction of a custom model can be achieved via the definition of an appropriate feature extractor model. The API will automatically create the prediction and post-processing layers using the configuration parameters. We only need to construct an SSD feature extractor through the SSDFeatureExtractor class. The custom feature extractor can then be added to the feature extractor mapping in

    ● API: object_detection/builders/model_builder.py


A visual representation of all the elements of the SSDFeatureExtractor class necessary to define a valid SSD feature extractor is shown in Figure 5. 

    ● API: object_detection/meta_architectures/ssd_meta_arch.py



Figure 5. A valid SSD feature extractor structure 


In the next section, we explain the principal components of the SSDFeatureExtractor and show how one can construct or adapt a feature extractor for any kind of application.


3.4.1 Preprocess 

This defines the preprocessing operation that normalizes input images for the classification backbone.
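
For example, a common choice in the API's built-in feature extractors is to map 8-bit pixel values into the [-1, 1] range:

    def preprocess(self, resized_inputs):
        # Scale [0, 255] pixel values to [-1, 1] for the backbone.
        return (2.0 / 255.0) * resized_inputs - 1.0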



3.4.2 Classification_backbone 

The classification backbone is the network structure for the extraction of basic feature maps from the preprocessed inputs.
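
As a minimal sketch (the layer widths and input shape below are illustrative; only the named outputs matter for the next steps), a small Keras backbone for low-resolution infrared input could look like this:

    import tensorflow as tf

    def build_backbone(input_shape=(80, 80, 1)):
        # Small CNN whose three named max-pool outputs serve as the
        # basic feature maps consumed by the SSD head.
        inputs = tf.keras.Input(shape=input_shape)
        x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
        mp1 = tf.keras.layers.MaxPool2D(name='mp1')(x)
        x = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')(mp1)
        mp2 = tf.keras.layers.MaxPool2D(name='mp2')(x)
        x = tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu')(mp2)
        mp3 = tf.keras.layers.MaxPool2D(name='mp3')(x)
        return tf.keras.Model(inputs=inputs, outputs=[mp1, mp2, mp3],
                              name='classification_backbone')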



3.4.3 _feature_map_layout 

This is a dictionary that determines which basic feature maps are being used to generate SSD feature maps by feature_map_generator.
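
A sketch of such a layout, consistent with the example discussed next (the exact values in our model may differ):

    # Which backbone outputs feed the SSD feature maps. An empty string
    # asks the API to synthesize an extra map on top of the previous one;
    # a layer_depth of -1 keeps the depth of the named layer.
    self._feature_map_layout = {
        'from_layer': ['mp1', 'mp2', 'mp3', ''],
        'layer_depth': [-1, -1, -1, 256],
        'use_explicit_padding': self._use_explicit_padding,
        'use_depthwise': self._use_depthwise,
    }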



In this example, the SSD model would have four prediction layers; the first three are provided by mp1, mp2, and mp3, and the API will automatically create the fourth (the empty '' entry) with 256 features. Remember that the API will use your last feature map ('mp3') as the input for the fourth one.


It is very important to keep in mind that there should be a correspondence between the layer names provided in the _feature_map_layout and the classification_backbone model, as the feature map generator takes those layers' outputs as its inputs for the construction of the feature maps.


3.4.4 feature_map_generator 

This is used to construct SSD feature maps from the features determined in the feature_map_layout.
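
A sketch of constructing the generator inside the feature extractor's build() method (the argument names follow object_detection/models/feature_map_generators.py in the TF2 API):

    from object_detection.models import feature_map_generators

    self._feature_map_generator = (
        feature_map_generators.KerasMultiResolutionFeatureMaps(
            feature_map_layout=self._feature_map_layout,
            depth_multiplier=self._depth_multiplier,
            min_depth=self._min_depth,
            insert_1x1_conv=True,
            is_training=self._is_training,
            conv_hyperparams=self._conv_hyperparams,
            freeze_batchnorm=self._freeze_batchnorm,
            name='FeatureMaps'))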



The feature map generator provides a function in its API to create the feature maps automatically, so there is no need for further coding.


3.4.5 Feature maps 

SSD feature maps are created for each input image as shown below:
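
A sketch of the corresponding _extract_features method, assuming the backbone and generator defined above:

    def _extract_features(self, preprocessed_inputs):
        # Run the backbone, then hand its named activations to the
        # feature map generator, which returns an ordered dict of SSD
        # feature maps (one per prediction layer).
        mp1, mp2, mp3 = self.classification_backbone(preprocessed_inputs)
        feature_maps = self._feature_map_generator({
            'mp1': mp1, 'mp2': mp2, 'mp3': mp3})
        return list(feature_maps.values())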



3.5 Custom model embedding 

Following the instructions in Section 3.4 and the construction of a custom SSD feature extractor (e.g., ssd_custome_keras_feature_extractor.py), we first have to ensure that the file has been added to the API models folder at:

    ● API: object_detection/models 


Second, this custom feature extractor can be added to the mapping in which all feature extractor definitions are provided, so that we can access them within the pipeline configuration file using their corresponding keys. To do this, open the API model builder at API: object_detection/builders/model_builder.py and add the following lines:
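
A sketch of such a registration in the TF2 class map (the key 'ssd_custom_keras' and the class name SSDCustomKerasFeatureExtractor are illustrative, not the actual names used):

    # In object_detection/builders/model_builder.py
    from object_detection.models import ssd_custome_keras_feature_extractor

    SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP = {
        # ... existing entries ...
        'ssd_custom_keras':
            ssd_custome_keras_feature_extractor.SSDCustomKerasFeatureExtractor,
    }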



3.6 API installation 

After applying all modifications related to the custom model design, follow the normal steps for installing the API as provided on the TensorFlow website. After the installation, the custom SSD feature extractor can be accessed within the configuration file:
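
For example, assuming the key registered above, the pipeline configuration could reference the custom extractor as follows (a sketch, not our full configuration):

    model {
      ssd {
        num_classes: 1
        feature_extractor {
          type: 'ssd_custom_keras'  # key added to model_builder.py
        }
        # anchor generator, box predictor, post processing, ...
      }
    }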



4 Occupancy management data preparation 

Some infrared images used for training the human detection model are shown in Figure 6. Images are acquired by various sensors installed at different locations and heights. It should be noted that some acquisitions are blurry due to bad focus adjustment after sensor installation. It would be better to adjust focus according to the sensor height, but our model will learn to work around this.



4.1 Annotations 

Human annotations are provided as Pandas DataFrames exported to CSV files, which include the image filenames and their corresponding bounding boxes. A sketch of loading and inspecting this information is shown below.
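
As a minimal sketch (the column names filename, xmin, ymin, xmax, and ymax are illustrative, not necessarily the exact CSV schema), the annotations can be loaded and grouped per image as follows:

    import pandas as pd

    # One row per bounding box; adapt the column names to your CSV schema.
    annotations = pd.read_csv('annotations.csv')

    for filename, group in annotations.groupby('filename'):
        boxes = group[['xmin', 'ymin', 'xmax', 'ymax']].values
        print(filename, len(boxes), 'boxes')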



In Figure 7, the corresponding ground truth bounding boxes are drawn for each input image where humans are present in the frame.


Figure 7. Ground truth bounding boxes drawn on the corresponding images


4.2 API data preparation 

Class ids start from 1; the id 0 is reserved for the background class. In the human detection model, the label map is as follows:
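
A minimal label map for this single-class model (the class name 'person' is assumed here) is a label_map.pbtxt file with a single entry:

    item {
      id: 1
      name: 'person'
    }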


In summary, the corresponding tf_example for each image in the dataset is created and stored in the dataset’s TFRecord file (Figure 8).
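
A sketch of building one such tf_example (the field names follow the API's standard TFRecord schema; the helper below and its box format are illustrative):

    import tensorflow as tf
    from object_detection.utils import dataset_util

    def create_tf_example(filename, encoded_image, boxes, width, height):
        # boxes: list of (xmin, ymin, xmax, ymax) in pixels.
        feature = {
            'image/height': dataset_util.int64_feature(height),
            'image/width': dataset_util.int64_feature(width),
            'image/filename': dataset_util.bytes_feature(filename.encode()),
            'image/encoded': dataset_util.bytes_feature(encoded_image),
            'image/format': dataset_util.bytes_feature(b'png'),
            'image/object/bbox/xmin': dataset_util.float_list_feature(
                [b[0] / width for b in boxes]),
            'image/object/bbox/ymin': dataset_util.float_list_feature(
                [b[1] / height for b in boxes]),
            'image/object/bbox/xmax': dataset_util.float_list_feature(
                [b[2] / width for b in boxes]),
            'image/object/bbox/ymax': dataset_util.float_list_feature(
                [b[3] / height for b in boxes]),
            'image/object/class/text': dataset_util.bytes_list_feature(
                [b'person'] * len(boxes)),
            'image/object/class/label': dataset_util.int64_list_feature(
                [1] * len(boxes)),
        }
        return tf.train.Example(features=tf.train.Features(feature=feature))

    # Serialize one example per image into the dataset's TFRecord file:
    # with tf.io.TFRecordWriter('train.record') as writer:
    #     writer.write(create_tf_example(...).SerializeToString())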


Figure 8. TFRecord file generation


5 Summary 

In this document, we provided an overview of how to train and optimize a neural network for occupancy management applications, leveraging all of the SSD backend creation offered by the TensorFlow Object Detection API. To enable easy porting of custom NNs to GAP, we have developed GAPflow, a set of tools released by GreenWaves Technologies that allows users to accelerate the deployment of NNs on GAP while ensuring high performance and low energy consumption on GAP processors. The GAPflow toolset helps programmers achieve a short time-to-prototype for DL-based applications by generating GAP-optimized code from the provided DL model, and it fully supports the import of detection models created using the TensorFlow Object Detection API. Watch our tutorial here.
