The Appeal of IoT Edge Computing: ON Semiconductor's RSL10


This article is compiled from embedded-computing.

Prior to 2019, most IoT systems consisted of ultra-low-power wireless sensor nodes (usually battery powered) that provided sensing capabilities.

Their main purpose was to send telemetry data to the cloud for big data processing. As IoT became the buzzword and market trend of the day, almost every company built a proof of concept (PoC). Cloud service providers offer polished dashboards that display the data in attractive graphs to help support the PoC. The main purpose of a PoC is to convince stakeholders to invest in IoT and demonstrate ROI so that larger projects can be funded.

As the ecosystem scales, it becomes clear that more and more data is being sent back and forth through the cloud. This can clog bandwidth pipes and make it difficult to move data in and out of the cloud quickly enough. It also adds latency and, in extreme cases, can break applications that require guaranteed throughput.

While standards such as 5G and Wi-Fi 6E promise major improvements in bandwidth and transfer speeds, the number of IoT nodes communicating with the cloud is exploding. In addition to the sheer number of devices, costs are also rising. Early IoT infrastructure and platform investments need to pay off, and as the number of nodes increases, the infrastructure needs to both scale and remain profitable.

Around 2019, edge computing emerged as a popular solution. Edge computing performs more advanced processing within the local sensor network, minimizing the amount of data that must travel through the gateway to the cloud and back. This directly reduces cost and frees up bandwidth for other nodes when needed. Transferring less data per node can also reduce the number of gateways required to collect data and relay it to the cloud.
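The data-reduction idea above can be sketched in a few lines. In this illustrative example (not code from any specific platform), an end node batches raw readings locally and uploads only a compact summary, so a hundred raw messages collapse into one:

```python
# Hypothetical sketch: instead of forwarding every raw sample to the
# cloud, an edge node aggregates a batch locally and sends only a
# small summary record upstream.

def summarize(samples):
    """Reduce a batch of raw sensor readings to a small summary dict."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

# 100 raw readings (~100 messages) collapse into one 4-field summary.
readings = [20.0 + 0.1 * i for i in range(100)]
summary = summarize(readings)
```

The cloud still sees the statistics it needs for dashboards and trend analysis, but the per-node traffic through the gateway drops by roughly the batch size.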

Another technology trend that is enhancing edge computing is artificial intelligence (AI). Early AI services were largely cloud-based. As innovation progressed and algorithms became more efficient, AI moved rapidly to the end nodes, and its use there has become standard practice. A well-known example is the Amazon Alexa voice assistant. Detecting the trigger word “Alexa” and waking up is a common use of edge AI. In this case, trigger-word detection is done locally on the system’s microcontroller (MCU). After a successful trigger, the rest of the command is sent over the Wi-Fi network to the cloud, where the most demanding AI processing is done. This minimizes wake-up latency for the best user experience.
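The gating pattern described above can be sketched as follows. This is a hedged illustration, not Amazon's implementation: `detect_wake_word` stands in for a real on-device keyword-spotting model.

```python
# Edge-AI gating sketch: a cheap local detector watches for the wake
# word, and only audio captured *after* a successful trigger is
# forwarded to the cloud for the heavy processing.

def detect_wake_word(frame: str) -> bool:
    # Placeholder for a small on-MCU keyword-spotting model.
    return frame == "alexa"

def route_audio(frames):
    """Drop everything before the wake word; forward the rest upstream."""
    uploaded = []
    triggered = False
    for frame in frames:
        if triggered:
            uploaded.append(frame)       # would be sent over Wi-Fi
        elif detect_wake_word(frame):    # runs locally, low latency
            triggered = True
    return uploaded

stream = ["music", "chatter", "alexa", "what's", "the", "weather"]
command = route_audio(stream)
```

Everything before the trigger stays on the device, which is exactly what keeps both the cloud bill and the privacy exposure small.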

In addition to addressing bandwidth and cost issues, edge AI processing brings additional benefits to applications. In predictive maintenance, for example, small sensors can be added to motors to measure temperature and vibration. A trained AI model can be very effective at predicting when a motor will develop a bad bearing or an overload condition. This early warning makes it possible to schedule repairs before the motor fails completely. Such predictive maintenance significantly reduces production line downtime, because equipment can be proactively repaired before it fails outright. The result is substantial cost savings with minimal loss of efficiency. As Benjamin Franklin put it, “An ounce of prevention is worth a pound of cure.”
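The kind of on-node inference described above can be approximated with a toy example. Here a trained model is stood in for by a simple vibration-RMS threshold; a real deployment would run a small learned model on the end node, but the decision flow is the same.

```python
# Illustrative predictive-maintenance sketch (not vendor code): flag a
# motor for service when its vibration energy climbs above a threshold.

import math

def vibration_rms(samples):
    """Root-mean-square of a window of accelerometer samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def needs_maintenance(samples, threshold=1.5):
    """True if the motor should be serviced before the bearing fails."""
    return vibration_rms(samples) > threshold

healthy = [0.1, -0.2, 0.15, -0.1]   # quiet motor
worn = [2.0, -1.8, 2.2, -2.1]       # failing bearing vibration
```

Only the boolean verdict (or a short alert) needs to leave the node, rather than a continuous stream of raw accelerometer data.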

As more sensors are added, gateways can also be overwhelmed with telemetry data from the local sensor network. In this case, there are two options to alleviate this data and network congestion. More gateways can be added, or more edge processing can be pushed to the end nodes.

The idea of pushing more processing to the end nodes (usually sensors) has been brewing and is quickly gaining momentum. End nodes typically operate in the mW range and sleep in the µW range most of the time. Because of their low power and cost requirements, their processing power is also limited; in other words, they are highly resource-constrained.

For example, a typical sensor node can be controlled by an MCU as simple as an 8-bit processor with 64 kB of flash memory and 8 kB of RAM, with a clock speed of about 20 MHz. Alternatively, the MCU could be as complex as an Arm Cortex-M4F processor with 2 MB of flash and 512 kB of RAM, clocked at around 200 MHz.

Adding edge processing to resource-constrained end-node devices is challenging and requires innovation and optimization at both the hardware and software levels. However, since the end nodes will exist in the system anyway, it is economical to add as much edge processing capability to them as is practical.

To summarize the evolution of edge processing: end nodes will continue to become smarter, but they must also continue to respect tight resource and cost constraints. Edge processing will persist alongside cloud processing rather than replace it. Being able to assign each function to the right location lets the system be optimized for each application, ensuring the best performance at the lowest cost. Efficiently allocating hardware and software resources is key to balancing the competing goals of performance and cost. The right balance minimizes data transfers to the cloud and the number of gateways while adding as much functionality as possible to the sensors or end nodes.

Example of an ultra-low power edge sensor node

Developed by ON Semiconductor, the RSL10 Smart Shot Camera, which can be used out of the box or easily added to applications, addresses these challenges. Built from key components developed by ON Semiconductor and its ecosystem partners, this event-triggered, AI-ready imaging platform gives engineering teams an easy way to access AI-enabled object detection and recognition in a low-power format.

The approach is to capture a single image frame with the tiny but powerful ARX3A0 CMOS image sensor and upload it to a cloud service for processing. Before being sent, the image is processed and compressed by the image sensor processor (ISP). With JPEG compression applied, the image data transfers much faster to a gateway or phone over the Bluetooth Low Energy (BLE) link (a companion app is also available). The ISP is a good example of local (end-node) edge processing: images are compressed locally, less data is sent over the air to the cloud, and the reduced communication time yields power and network cost savings.
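A back-of-envelope calculation shows why on-node compression pays off. The frame size below matches the ARX3A0's 560 x 560 resolution, but the BLE throughput and the 10:1 compression ratio are illustrative assumptions, not measured figures.

```python
# Rough sketch of BLE transfer-time savings from on-node JPEG
# compression. Throughput and compression ratio are assumptions.

def transfer_seconds(num_bytes, throughput_bps):
    """Time to move num_bytes over a link of the given bit rate."""
    return num_bytes * 8 / throughput_bps

BLE_THROUGHPUT_BPS = 200_000          # assumed usable BLE throughput
raw_bytes = 560 * 560                 # one ARX3A0 frame, 1 byte/pixel
jpeg_bytes = raw_bytes // 10          # assumed ~10:1 JPEG compression

raw_time = transfer_seconds(raw_bytes, BLE_THROUGHPUT_BPS)
jpeg_time = transfer_seconds(jpeg_bytes, BLE_THROUGHPUT_BPS)
```

Under these assumptions the radio is active for roughly a tenth as long per image, and since the radio dominates the power budget of a node like this, the battery-life impact is of the same order.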

The ISP is designed for ultra-low-power operation, consuming only 3.2 mW. It can also be configured to perform some on-sensor preprocessing, such as defining a region of interest, to further reduce active power. This allows the sensor to remain in a low-power mode until an object or motion is detected in the region of interest.
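Region-of-interest triggering can be illustrated with a minimal frame-differencing sketch. This is a software analogy only; on the real sensor this logic runs in the ISP hardware, and frames here are plain 2D lists of pixel values.

```python
# Illustrative ROI motion trigger: the node would stay asleep unless
# the pixel change inside the configured window exceeds a threshold.

def roi_motion(prev, curr, roi, threshold=10):
    """True if the summed pixel delta inside `roi` exceeds threshold.

    roi = (row_start, row_end, col_start, col_end), end-exclusive.
    """
    r0, r1, c0, c1 = roi
    delta = sum(
        abs(curr[r][c] - prev[r][c])
        for r in range(r0, r1)
        for c in range(c0, c1)
    )
    return delta > threshold

frame_a = [[0] * 4 for _ in range(4)]
frame_b = [[0] * 4 for _ in range(4)]
frame_b[1][1] = 50                      # object enters the ROI
```

Movement outside the configured window is ignored entirely, which is what lets the rest of the system stay in its low-power mode.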

The fully certified RSL10 System-in-Package (RSL10 SIP), also from ON Semiconductor, provides further processing and the BLE communication. The device offers industry-leading low-power operation and reduces time to market.

(Figure 1: The RSL10 Smart Shot Camera contains all the components needed for a rapidly deployable edge processing node.)

As can be seen in Figure 1, the board contains multiple sensors for triggering activity, including motion sensors, accelerometers, and environmental sensors. Once triggered, the board can send the image over BLE to a smartphone, where the companion app can upload it to a cloud service such as Amazon Rekognition. The cloud service implements deep-learning machine vision algorithms; for the RSL10 Smart Shot Camera, it is configured to perform object detection. Once the images are processed, the smartphone app is updated with the detected objects and their confidence scores. These cloud-based services are highly accurate because they have literally billions of images on which to train their machine vision algorithms.
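Handling the detection results on the phone side might look like the sketch below. The dictionary mirrors the general shape of Amazon Rekognition's DetectLabels output (a list of labels with a name and a confidence score), but the values and the helper function are made up for illustration.

```python
# Sketch of filtering an object-detection response before updating the
# companion-app UI. The response structure and values are illustrative.

def top_labels(response, min_confidence=80.0):
    """Keep only labels the cloud service is reasonably sure about."""
    return [
        (label["Name"], label["Confidence"])
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]

sample_response = {
    "Labels": [
        {"Name": "Person", "Confidence": 98.2},
        {"Name": "Bicycle", "Confidence": 85.0},
        {"Name": "Cat", "Confidence": 41.7},
    ]
}
confident = top_labels(sample_response)
```

Filtering by confidence on the app side keeps low-probability guesses out of the user-facing display without another round trip to the cloud.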

(Figure 2: The image sensor processor of the RSL10 Smart Shot Camera allows images to be sent via Bluetooth Low Energy (BLE) to a smartphone and then to the cloud, where computer vision algorithms can be applied to object detection.)

In conclusion

As discussed, IoT is changing and becoming more optimized for massive, cost-effective scaling. New connectivity technologies continue to be developed to help address power, bandwidth, and capacity issues. AI continues to evolve, becoming more powerful and efficient, which enables it to move to the edge and even to the end nodes. IoT is evolving and adapting to these growing trends.

ON Semiconductor’s RSL10 Smart Shot Camera is a modern example of how to successfully address the main problems of putting AI at the edge: power, bandwidth, cost, and latency.
