How edge computing devices are driving AI solutions

Single-board computers (SBCs) have changed significantly in the last two decades. What used to be simple 8-bit CPUs with limited RAM have been transformed into quad-core data-crunching machines that can be as small as a pack of gum. Now that edge computing is becoming more popular, how can SBCs benefit? In this article, we will look at why artificial intelligence (AI) is being driven away from data centers and how SBCs are being made AI-friendly.

The pros and cons of AI

The use of AI in products is constantly increasing due to the many advantages that it brings to the table. AI helps create products that adapt to individual customers while simultaneously improving the product for everyone.

For example, the popular Google Assistant platform can learn and customize its responses to an individual user, and what it learns can then be used to improve responses for entire groups of users. This, in turn, helps improve the experience for all customers.

Getting AI into a product can be quite difficult. The most common method for AI implementation is cloud-based AI, in which the main AI algorithm runs at a data center. Having customer devices send information to and receive results from a data center works, but it inherently introduces several issues for a robust AI system integration.

Privacy concerns have become a sensitive and primary issue with AI. Sensitive information is sent to an unknown location where it could potentially be accessed by unauthorized individuals. Consider Amazon's consumer-popular Alexa-based products. Alexa has AI capabilities wherein users can ask it questions and get a response. When you really think about it, Alexa is like a telephone in the sense that the user's questions are sent to a data center for AI processing rather than being processed locally on the device. The privacy concern arises from the fear that Alexa could record conversations and store them without customer knowledge or consent, making them accessible to any Amazon employees with access to the AI data or systems.

Latency is the next issue. Products that use a remote data center need to send the data, wait for it to be processed, and then get the result. Because no internet connection is instant, there will be a small delay, and this latency can vary depending on traffic. Furthermore, as the number of internet users increases, so will system latency. This could potentially make products unresponsive.

A related issue is the reliance on internet access itself. An always-on device that depends on a remote data center needs a continuous internet connection. It is not unheard of for website providers and DNS servers to have hiccups that can leave services inaccessible. When that happens, any product that relies on a data center is not going to be fully reliable, and locations with unreliable or limited data connections will not be suitable for internet-reliant devices at all.

Solutions using edge computing devices

Edge computing is a concept that takes the best of both worlds. It can ease privacy concerns, reduce reliance on internet access, and provide on-device AI algorithms that don’t rely on a data center. Simply put, edge computing processes data locally at the edge of the network, where the “edge” is the end device with no further devices downstream of it. This approach solves many of the problems and concerns that cloud-based AI systems typically have.

Edge computing solutions shift AI execution from a data center to the device itself. While training is rarely done locally (it is a complex and computationally expensive task), the neural networks produced by the training process can be executed locally. Tasks that require only the execution of neural nets include handwriting recognition, gesture recognition, and object recognition.
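To illustrate the distinction: training produces a set of weights, and edge inference is just a forward pass through those weights. The following is a minimal NumPy sketch, with illustrative random weights standing in for a model that would have been trained offline:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights):
    """Run a tiny two-layer network locally -- no data center involved."""
    (w1, b1), (w2, b2) = weights
    h = relu(x @ w1 + b1)           # hidden layer
    logits = h @ w2 + b2            # output layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()          # softmax over the output classes

# Illustrative weights; a real product would load weights produced by
# a training run done elsewhere (e.g. in the cloud)
rng = np.random.default_rng(0)
weights = [(rng.normal(size=(4, 8)), np.zeros(8)),
           (rng.normal(size=(8, 3)), np.zeros(3))]

probs = forward(np.array([0.2, -1.0, 0.5, 0.3]), weights)
print(probs)  # three class probabilities that sum to 1
```

The device only ever evaluates the network; the expensive training step happened elsewhere, once.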

Another problem that edge computing solves is latency. AI neural networks are processed as soon as data becomes available, which can dramatically decrease execution time. Instead of having to wait for the device to establish an internet connection, send the data, wait for the data center to process it, and then send the result back for output, edge computing runs this entire process locally, reducing the need for an internet connection and, as a result, latency. Edge computing also keeps potentially private information local to the device, since no data center processes the information or stores it for later training.
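The latency argument reduces to simple arithmetic: the cloud path pays the network round trip plus server processing, while the edge path pays only local inference time. A small sketch with illustrative figures (these are assumptions for the example, not measurements):

```python
def cloud_latency_ms(rtt_ms, server_infer_ms, queue_ms=0.0):
    """Total cloud latency: network round trip, queuing, and server inference."""
    return rtt_ms + queue_ms + server_infer_ms

def edge_latency_ms(local_infer_ms):
    """Total edge latency: on-device inference only, no network leg."""
    return local_infer_ms

# Illustrative numbers: a 60 ms round trip and 10 ms of fast server-side
# inference vs. 25 ms on a slower on-device accelerator
cloud = cloud_latency_ms(rtt_ms=60.0, server_infer_ms=10.0)
edge = edge_latency_ms(local_infer_ms=25.0)
print(cloud, edge)  # prints: 70.0 25.0
```

Even when the data center's hardware is much faster, the network round trip can dominate, which is why a slower local accelerator can still win on total response time.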

However, while edge computing sounds great, it has one major disadvantage: AI neural nets can be incredibly complex and difficult to run! While microcontrollers such as the Arduino can be made to run neural nets, the speed at which they would run practical nets would be incredibly slow — so much so, in fact, that the use of a cloud-based AI system would seem almost instantaneous by comparison.

Fortunately for designers, several silicon vendors have begun producing AI co-processors designed to run neural networks and AI algorithms efficiently. Because these are co-processors, the main processor is freed up to perform other tasks.

So when it comes to single-board computers, what options exist for edge computing?

Solution 1: Google Coral range

The Google Coral range of products is ideal for edge computing thanks to the inclusion of the Edge TPU co-processor. This co-processor is specially designed for mobile and embedded applications and executes TensorFlow Lite models. TensorFlow Lite is a “cut-down” version of TensorFlow and provides a more-than-acceptable compromise for AI execution on small devices.
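As a minimal sketch of what running a model on the Edge TPU looks like in practice, the `tflite_runtime` package loads a TensorFlow Lite model with the Edge TPU delegate attached. The model path here is a placeholder, and the import is deferred so the sketch can be read (and parsed) without the Coral runtime installed:

```python
def make_edgetpu_interpreter(model_path):
    """Build a TensorFlow Lite interpreter backed by the Edge TPU.

    Assumes the Coral runtime (libedgetpu) and the tflite_runtime
    package are installed, and that model_path points to a model
    compiled for the Edge TPU (e.g. "model_edgetpu.tflite").
    """
    # Deferred import: only needed when the function is actually called
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()  # reserve input/output buffers
    return interpreter
```

From there, inference is a matter of writing the input tensor, calling `invoke()`, and reading the output tensor, all without a network connection.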

The Dev Board is a quad-core Arm Cortex-A53 SBC that includes the Edge TPU co-processor, as well as an integrated GC7000 Lite graphics GPU, 1 GB of LPDDR4 RAM, 8 GB of eMMC flash memory, and a Wi-Fi SoC for internet connectivity. With HDMI output, USB ports, and GPIO, the board is an excellent solution for edge computing in AI environments, including object recognition and speech recognition, especially in locations where internet access is limited.

In some situations, changing the fundamental hardware is not possible. In these cases, it is easier to add a co-processor as an add-on; this is where the Coral USB Accelerator comes in. The USB Accelerator is a USB device that provides any connected computer with an Edge TPU co-processor, without the need for a PCB redesign. The advantage of these two Coral products is ease of scaling: the Dev Board's processor module can be removed from the baseboard and inserted into a finalized product, while the USB Accelerator has mounting holes and an incredibly small footprint (65 × 30 mm).

Solution 2: Nvidia Jetson Nano

AI scenarios that require multiple neural networks to run in parallel while keeping energy consumption low will benefit greatly from the Nvidia Jetson Nano. This single-board computer can run simultaneous neural networks for applications such as object detection and speech processing while consuming just 5 W.

This Arm Cortex-A57-based SBC includes 4 GB of LPDDR4 RAM, microSD card storage, 4K video encode at 30 fps, 4K video decode at 60 fps, multiple USB 3.0 ports, GPIO, and other peripheral ports. The AI core of the Jetson Nano is capable of 472 GFLOPS. Jetson products are available as both a developer kit and a module (developer kits are used during development, whereas modules are for end-use applications in products).

The development kit's small dimensions (100 × 80 × 29 mm) make it ideal for low-profile locations, while the Jetson Nano's low energy consumption also makes it suitable for remote locations that require AI capabilities but have limited internet access and power.


Solution 3: Intel Compute Stick

The Intel Compute Stick is arguably one of the smallest SBCs available today. Its major benefit? It can turn any HDMI display into a computer.

Physically speaking, the Intel Compute Stick is no bigger than a regular pack of gum (no longer than 4.5 inches), yet within it, there is either an Intel Atom or Intel Core M Processor, up to 4 GB of RAM, and 64 GB of storage.

Despite its size, the Intel Compute Stick also integrates several forms of connectivity, including Wi-Fi, Bluetooth, and three USB ports. While it does not include an AI co-processor, its powerful core makes it a candidate for edge computing, potentially bringing AI tasks to any HDMI display. This makes the Intel Compute Stick ideal for interactive terminals and home devices that need to be mounted in tight locations.

Solution 4: Raspberry Pi 4

The latest Raspberry Pi computer, the Raspberry Pi 4, is a potential candidate for edge-computing applications for several reasons. First, the core of the Raspberry Pi 4 is a quad-core Arm Cortex-A72 (ARMv8-A) clocked at 1.5 GHz, significantly faster than any of its predecessors. Second, the Raspberry Pi 4 includes 4 GB of RAM, and its wide range of connectivity, including Wi-Fi, Bluetooth, and GPIO, enables it to interact with a wide range of hardware. The Broadcom VideoCore VI enables the Pi to drive multiple displays at up to 4K, and its relatively small size (88 × 58 mm) allows it to be mounted in tight locations.

While all of these features make the Raspberry Pi 4 a potential edge-computing candidate, it does not include an AI co-processor, so all AI algorithms must run on the CPU. However, the Raspberry Pi 4 can be teamed up with the Coral USB Accelerator, which provides it with an Edge TPU co-processor. This combination could produce a very powerful platform for creating AI applications, with dual-screen capability, a camera port for object recognition, and GPIO for hardware interfacing.
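One hedged pattern for such a combination is to try the Edge TPU delegate first and fall back to CPU inference when the accelerator is absent. The sketch below assumes the `tflite_runtime` package is installed on the Pi; the model paths are placeholders, and a CPU fallback needs a regular (non-Edge-TPU-compiled) model:

```python
def make_interpreter(edgetpu_model, cpu_model):
    """Prefer the Coral USB Accelerator; fall back to the Pi's own CPU.

    Assumes tflite_runtime is installed. edgetpu_model should be a model
    compiled for the Edge TPU; cpu_model is the equivalent ordinary
    TensorFlow Lite model for CPU execution.
    """
    # Deferred import: only needed when the function is actually called
    from tflite_runtime.interpreter import Interpreter

    try:
        from tflite_runtime.interpreter import load_delegate
        interpreter = Interpreter(
            model_path=edgetpu_model,
            experimental_delegates=[load_delegate("libedgetpu.so.1")],
        )
    except (ImportError, ValueError, OSError):
        # No accelerator attached (or Coral runtime missing): use the CPU
        interpreter = Interpreter(model_path=cpu_model)

    interpreter.allocate_tensors()
    return interpreter
```

This keeps a single code path on the device while letting the Coral accelerator speed things up whenever it is plugged in.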


Outlook

While not all SBCs come with an AI co-processor, incorporating one into your design (especially in embedded systems) can be hugely beneficial. Even boards that don’t include an AI co-processor can rely on an external one such as the Coral USB Accelerator. Either way, AI in embedded devices will become commonplace in the next decade, and at that point even the simplest devices will have a minimum level of intelligence.

