Understanding the Microsoft Vision AI Developer Kit

Computer vision has come a long way since the days of reading text from typed pages or spotting shorts and opens on an assembly line. Advances in computing power, the enormous amounts of data stored in data centers over the last 20 years, and new and better algorithms have finally made artificial intelligence (AI) more than just a buzzword.

Computers already use imaging technologies and machine-learning (ML) software to analyze images and classify and index them for, say, facial recognition in social networking applications like Facebook, in security and access control, and even in melanoma diagnosis.

This is, however, just the tip of the application iceberg once you bring in broader AI not only to analyze data, deliver insights and improve accuracy through learning, but also to make decisions. Considering that about 80 percent of our understanding of the environment around us comes from visual perception, any AI implementation beyond simple machine learning will have to rely heavily on computer vision.

Analyzing computer-vision data, however, is a bandwidth-intensive business. While processor companies like Qualcomm have developed chips that can handle intensive compute loads, moving all of that image and video data to a far-off cloud for analysis, and back, whenever an application demands it may not always be feasible.

Let’s take an extreme case: self-driving cars, which are still under development. Even at future 5G speeds, the volume of data that must travel to a data center to analyze traffic situations simply breaks the application, because it demands a real-time response. Consider another example: a home security system using facial recognition as part of multi-factor authentication (MFA). Should home internet speeds slow in the evening when more people start streaming movies, consumers may find themselves waiting on a sluggish system. And if you have ever been in a crowded elevator that stops frequently, you have experienced consumer impatience as you watched the equally frequent punching of the “close door” button.

The solution to enabling applications like self-driving cars and ensuring success in the smart home market is the same: Move AI to the network or internet-of-things (IoT) edge.

Moving AI to the edge will open up the market for AI in computer vision, which ResearchAndMarkets.com valued at $3.62 billion in 2018 and projects to grow at a 47.54-percent CAGR to $25.32 billion by 2023. The time to be a part of that market is now.

Kit up with Microsoft Vision AI

If you are ready to make your mark with IoT solutions like home-monitoring cameras, enterprise security cameras and smart-home devices with built-in AI, you can get the hardware, software and cloud components in one kit to get started right away.

Developed by Qualcomm Technologies, Microsoft and eInfochips, the Vision AI Developer Kit offers engineers the following benefits:

- Low latency

- Robustness

- Privacy

- Efficient utilization of network bandwidth

- Efficient utilization of cloud resources

- Cloud storage

- Cloud processing

- Enhanced security

- Enhanced device provisioning and management

- Integrated environment to build, train, validate and deploy AI models on edge devices

- Device telemetry and analytics

The kit is a camera-based device that combines Microsoft’s IoT, edge and AI technologies with the Qualcomm Vision Intelligence 300 Platform and the Qualcomm Neural Processing SDK for AI for on-device edge computing.

The sum of parts

The kit comprises powerful tools to make it easy to develop AI-on-edge products. Let’s take a closer look at them to understand what capabilities you can deliver in your applications. 

Qualcomm QCS603 system-on-chip (SoC): The SoC’s AI engine performs image processing on the edge using the Qualcomm Hexagon 685 digital signal processor (DSP), a vector processor with two Hexagon Vector eXtensions (HVX); the Qualcomm Adreno 615 GPU with OpenGL ES 3.2, Vulkan and OpenCL support; and the Qualcomm Snapdragon Neural Processing Engine, whose programming interface supports TensorFlow, Caffe/Caffe2, ONNX and Android NN. The engine delivers 2.1 TOPS at 1 W.

Azure IoT Edge: This set of software tools enables deployment of Azure services such as Azure Functions, Azure Stream Analytics, Azure Machine Learning and Azure SQL Database from the cloud to edge devices. It routes messages among modules, devices at the edge and the cloud and, in doing so, moves cloud analytics and custom business logic onto devices.

Azure IoT Edge itself consists of three components: IoT Edge modules deployed to devices to locally run Azure services, third-party services or custom code; IoT Edge runtime, which runs on each IoT Edge device and manages the modules; and a cloud-based interface with which you remotely monitor and manage the devices.

Azure IoT Edge supports custom modules that can be written in C#, C, Node.js, Python and Java. It also offers store-and-forward messaging, which keeps modules operational even in unstable network environments.
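
To make the module concept concrete, here is a minimal sketch of a custom module in Python using the azure-iot-device SDK. The input and output names ("input1" and "output1") and the pass-through message handling are illustrative assumptions; the actual routes are defined in your deployment manifest.

    # Minimal Azure IoT Edge module sketch (azure-iot-device SDK).
    # "input1"/"output1" are hypothetical route names set in the
    # deployment manifest; the message handling is a placeholder.
    from azure.iot.device import IoTHubModuleClient, Message

    def main():
        # Pick up connection settings injected by the IoT Edge runtime.
        client = IoTHubModuleClient.create_from_edge_environment()
        client.connect()
        try:
            while True:
                # Block until a message arrives on this module's input.
                msg = client.receive_message_on_input("input1")
                data = msg.data
                text = data.decode("utf-8") if isinstance(data, bytes) else data
                # Forward the (here unmodified) message toward the cloud
                # or the next module in the route.
                client.send_message_to_output(Message(text), "output1")
        finally:
            client.shutdown()

    if __name__ == "__main__":
        main()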

Azure Machine Learning service: Azure Machine Learning (AML) is a cloud service for training, deploying, automating and managing machine-learning models. The service can train a model automatically and auto-tune it. With the service’s SDK for Python, along with open-source Python packages, you can also build and train highly accurate machine-learning and deep-learning models yourself.
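
As a taste of that workflow, the sketch below submits a training run with the AML Python SDK (azureml-core, SDK v1). The workspace configuration file, the experiment name, the train.py script and the compute-target name are all illustrative assumptions.

    # Sketch: submit a training run via the Azure ML Python SDK (v1).
    # The experiment name, script and compute target are assumptions.
    from azureml.core import Workspace, Experiment, ScriptRunConfig

    ws = Workspace.from_config()  # reads config.json downloaded from the portal
    exp = Experiment(workspace=ws, name="vision-kit-train")

    run_config = ScriptRunConfig(
        source_directory=".",          # folder holding train.py
        script="train.py",             # your training script
        compute_target="gpu-cluster",  # an existing AML compute target
    )

    run = exp.submit(run_config)
    run.wait_for_completion(show_output=True)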

Starting your development

Working with the kit is fairly straightforward. You begin by training your models in Microsoft Azure for object detection and classification for applications like flagging defects in a manufacturing scenario or detecting movement in a home for environmental and lighting control.

Then you deploy the models to your new kit. With Qualcomm’s hardware and Azure IoT Edge, you can choose to simply run your models on the device without connecting to the cloud.

When you connect to the cloud, you get the benefit of the integration between the Qualcomm Neural Processing SDK for AI and AML, which allows you to take models pre-trained in TensorFlow, Caffe/Caffe2 or the ONNX standard and convert them to run on the device.

The AML service packages your models into hardware acceleration-ready containers or modules, which can be deployed by Azure IoT Edge to devices with the Qualcomm Vision Intelligence Platform. Dedicated hardware — the CPU, the GPU or the DSP — accelerates the Qualcomm Neural Processing SDK to give you AI inferencing on the edge.
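
As a rough sketch of that packaging step, the AML SDK can register a converted model so it can be built into a container image and pushed out through IoT Edge. The file path and model name below are hypothetical.

    # Sketch: register a converted model with Azure ML so it can be
    # packaged into an IoT Edge module. Names are illustrative.
    from azureml.core import Workspace
    from azureml.core.model import Model

    ws = Workspace.from_config()
    model = Model.register(
        workspace=ws,
        model_path="converted/model.dlc",  # hypothetical converted model file
        model_name="vision-kit-model",
        description="Object-detection model for on-device inference",
    )
    print(model.name, model.version)

From there, a deployment manifest tells the Azure IoT Edge runtime which devices receive the resulting module.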

All you need now is the Microsoft Vision AI Developer Kit from Arrow.com to start the project.
