Human Machine Interface (HMI) is developing in the direction of non-contact technology, and speech and gesture recognition have become its focus. Gesture recognition requires no touching of the device and makes no sound to interfere with the surrounding environment, and has thus become a favorite choice of users. This article presents a development overview of gesture recognition technology and solutions to its current issues.
Gesture Recognition Technology Offers Irreplaceable Advantages
The human machine interface has developed from punched tape, keyboards, mice and touch control in its very early stages to today's non-contact technologies. Among these, the mainstream technologies are speech recognition and gesture recognition, both classified as Touchless User Interfaces (TUI). Speech recognition is the most naturally intuitive; voice-controlled smart speakers like Google Home and Amazon Alexa are typical examples. By speaking a command, you can control these devices without relying on touch. However, speech recognition easily makes erroneous judgments in noisy environments and can disturb others in quiet public spaces. Moreover, given the risk of privacy leakage, it is best used in private settings.
On the other hand, as gesture recognition technology gradually evolves, the number of electronic devices with gesture recognition functions is also increasing. Just as its name implies, gesture recognition aims to recognize human movements, or “gestures”. For example, waving a hand in front of the device in a certain way can instruct it to launch a specific application. This kind of gesture recognition often appears in smartphones and tablet PCs.
Although there are many different types of gesture recognition technology, all of them share the same basic principle: human movement is treated as the input. These devices use one or more sensors (or cameras) to monitor users’ movements. When the device detects a movement corresponding to a command, it responds with the appropriate output, such as unlocking the device, launching an application or adjusting the volume.
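To make this detect-then-respond principle concrete, the following Python sketch maps recognized gestures to device actions through a simple command table. The gesture names and actions here are illustrative placeholders, not any vendor's actual API.

```python
# A minimal sketch of the detect-then-respond loop described above:
# a classified gesture is looked up in a command table and dispatched.
def unlock_device():
    print("device unlocked")

def launch_camera():
    print("camera launched")

def volume_up():
    print("volume increased")

# Command table: map recognized gestures to device actions.
COMMANDS = {
    "swipe_up": unlock_device,
    "circle": launch_camera,
    "swipe_right": volume_up,
}

def handle_gesture(gesture_name):
    """Dispatch a recognized gesture to its action; ignore unknown ones."""
    action = COMMANDS.get(gesture_name)
    if action is not None:
        action()

# Example: feed a stream of classifier outputs through the dispatcher.
for gesture in ["swipe_up", "wave", "circle"]:
    handle_gesture(gesture)
```

In a real device, handle_gesture would be fed by the output of the sensor's gesture classifier rather than a hard-coded list.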
Compared to a touch-based human machine interface, what are the advantages of gesture recognition? Firstly, gesture recognition causes no wear on devices or components. Traditional QWERTY keyboards and touch screen interfaces inevitably wear out with use. In addition, touch screens cannot be used in humid environments, when users wear gloves, or when users have difficulty reaching the control panel. Gesture recognition, however, requires no direct contact with the device, avoiding such wear: you only need to move your hands or fingers in front of the sensor to trigger the corresponding response.
Gesture recognition also opens the door to a whole new world of input methods. Users can not only keep traditional input methods, but also try new ones based on gestures; some devices even let users define their own gestures. Beyond smartphones and tablet PCs, gesture recognition can also be applied to other human machine interfaces, including building and industrial control panels, automotive infotainment systems and video game consoles.
Gesture Recognition Solutions Accelerate Product Development
Touchless gesture recognition has become increasingly important in automotive user interfaces. These interfaces enable drivers to operate secondary controls (such as audio and air conditioning) while driving, improving both safety and comfort.
Over the past ten years, many vision-based dynamic gesture recognition algorithms have been introduced, and a variety of technologies have been used to recognize gestures. Although advances in computer vision algorithms have made it possible to apply color and depth cameras to gesture recognition, it remains challenging to classify gestures reliably across different subjects and under widely varying lighting conditions.
To improve classification accuracy, a gesture recognition method with multi-modal sensors is needed. NVIDIA uses 3D Convolutional Neural Networks (CNNs) to combine RGBD data from the hand area with upper-body skeletal movement data. Together with Histogram of Oriented Gradients (HOG) features and Support Vector Machine (SVM) classifiers for optimum performance, gesture information from depth, color and radar sensors is fused to jointly train the convolutional neural networks and perform gesture recognition under widely varying lighting conditions.
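The sketch below illustrates the general idea of multi-modal fusion with 3D CNNs: one convolutional stream per sensor modality, with the per-stream features concatenated before classification. The layer sizes, number of gesture classes and late-fusion strategy are illustrative assumptions, not NVIDIA's actual architecture.

```python
# A minimal PyTorch sketch of multi-modal gesture classification with 3D CNNs.
import torch
import torch.nn as nn

class ModalityStream(nn.Module):
    """One 3D-CNN stream per sensor modality (e.g. color, depth, radar)."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time and space to one vector
        )

    def forward(self, clip):  # clip: (batch, channels, frames, H, W)
        return self.features(clip).flatten(1)  # (batch, 32)

class MultiModalGestureNet(nn.Module):
    """Late fusion: concatenate per-modality features, then classify."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.color = ModalityStream(in_channels=3)  # RGB video
        self.depth = ModalityStream(in_channels=1)  # depth video
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, rgb, depth):
        fused = torch.cat([self.color(rgb), self.depth(depth)], dim=1)
        return self.classifier(fused)

# Example: one 16-frame, 64x64 clip per modality.
net = MultiModalGestureNet()
rgb = torch.randn(1, 3, 16, 64, 64)
depth = torch.randn(1, 1, 16, 64, 64)
logits = net(rgb, depth)  # (1, 10) class scores
```

A radar or skeletal stream could be added in the same way: another ModalityStream instance whose output is concatenated before the classifier, which is the appeal of this late-fusion design.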
ADI has also introduced an optical sensor for gesture recognition that can measure an object’s position, distance and gestures through a single sensor, with better detection precision and reliability than existing solutions. In the past, competing solutions requiring multiple sensors usually had lower precision, and combining their signals was difficult because each sensor “saw” the object from a different angle. The single-point sensing used in the ADI ADUX1020 optical sensor improves application reliability and requires fewer components, reducing design complexity and cost for system developers.
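As a rough illustration of how a host processor might consume gesture events from such a single-point optical sensor, the Python sketch below polls an event register over I2C. The I2C address, register address and event encoding here are hypothetical placeholders; the real ADUX1020 register map is defined in its datasheet and should be consulted before use with actual hardware.

```python
# A minimal sketch of polling gesture events from a single-point optical
# sensor over I2C. All addresses and the event encoding are hypothetical.
import time
from smbus2 import SMBus

SENSOR_ADDR = 0x64        # hypothetical I2C address
REG_GESTURE_EVENT = 0x45  # hypothetical gesture event register

GESTURES = {0: "none", 1: "swipe_left", 2: "swipe_right",
            3: "swipe_up", 4: "swipe_down", 5: "click"}

def poll_gestures(bus_id=1):
    """Poll the (hypothetical) event register and print decoded gestures."""
    with SMBus(bus_id) as bus:
        while True:
            event = bus.read_word_data(SENSOR_ADDR, REG_GESTURE_EVENT)
            gesture = GESTURES.get(event & 0x07, "unknown")
            if gesture != "none":
                print(f"detected gesture: {gesture}")
            time.sleep(0.05)  # poll at roughly 20 Hz

if __name__ == "__main__":
    poll_gestures()
```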
Aside from single-point sensing, the ADI ADUX1020 optical sensor also offers very strong ambient light rejection. This ensures reliable, accurate operation under harsh lighting conditions, delivering a more dependable experience for end users. Other optical sensors struggle with ambient light from sources such as sunlight, high-frequency LEDs and fluorescent lamps, which can disrupt their ability to read gestures accurately. As the ability to cope with complex environments grows and gesture recognition finds ever wider applications, the market space will grow exponentially, making now the best time to invest in related product development.