The competition and cooperation between humans and artificial intelligence


Artificial Intelligence (AI) has already defeated elite human players at chess, Go, and other board games. This seems to suggest that AI is now smarter than humans and that humanity has lost the contest with machines. But has humanity really lost its edge? Will AI completely replace human thinking and judgment? This article explores the development of AI and the solutions Nordic offers for AI-related applications.

The Battle between Humans and Artificial Intelligence

As early as 1997, then-world chess champion Garry Kasparov lost a six-game match to an IBM supercomputer named "Deep Blue", a result that many saw as a sign that AI, backed by raw computing power, had finally surpassed humans.

Twenty-five years later, the pendulum has swung even further in favor of machines. Today's commercial chess programs not only defeat the strongest human players with ease, but would also comfortably beat the old "Deep Blue" supercomputer. This is hardly surprising: chess grandmasters are estimated to calculate at most 30 moves ahead, while the best chess engines effortlessly handle 80 to 100. It is simply no longer a fair contest.

In 2016, AlphaGo, AI software developed by Google DeepMind, defeated one of the world's best Go players at one of the oldest board games still in existence. Go took machines longer to master because the number of possible positions is vastly larger than in chess, so brute-force calculation alone cannot evaluate them reliably; until 2016, elite humans could still beat machines with their intuition and positional judgment. Deep learning turned the tables. Rather than evaluating every possible continuation, the model learned to narrow the set of candidate moves under consideration and to predict the outcome and winning probability of each sequence. In essence, the computer re-ran humanity's centuries of trial and error from scratch, testing whether there might be better ways to play than the methods we devised over the past few thousand years. Today, it has succeeded.
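To make the contrast with brute force concrete, here is a minimal Python sketch of the idea, not DeepMind's actual algorithm (AlphaGo combined its policy and value networks with Monte Carlo tree search): a hypothetical policy_net prunes the candidate moves, and a hypothetical value_net scores the resulting positions so the search never has to play every game to the end. The board interface is likewise assumed for illustration.

```python
import heapq

def best_move(board, policy_net, value_net, top_k=5, depth=3):
    """Search only the moves a learned policy ranks highly, and score leaf
    positions with a learned value estimate instead of exhaustive play-out.
    board, policy_net, and value_net are hypothetical interfaces."""

    def search(state, d, our_turn):
        if d == 0 or state.is_terminal():
            return value_net(state)  # learned estimate of our winning probability
        # Prune: keep only the top_k moves the policy network suggests
        moves = heapq.nlargest(top_k, state.legal_moves(),
                               key=lambda m: policy_net(state, m))
        scores = [search(state.play(m), d - 1, not our_turn) for m in moves]
        # We pick our best move; the opponent is assumed to pick our worst
        return max(scores) if our_turn else min(scores)

    return max(board.legal_moves(),
               key=lambda m: search(board.play(m), depth - 1, False))
```

Pruning with the policy network shrinks the branching factor from hundreds of legal moves to a handful, which is what makes deep look-ahead in Go tractable at all.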

Although the human-versus-machine story may seem settled, the reality is not so simple. Since that defeat, research has shown that by studying the conclusions of AI models and learning to think in both machine-like and human-like patterns, professional Go players have improved their performance in matches against other humans. Humans have not regained the advantage, but the gap between humans and machines has narrowed considerably, proving that even when AI surpasses humans, it can still drive human progress.

Machines "think" very differently from humans

After a long period of evolution, humans have become adept decision-makers. We draw on our understanding of the past and our conception of the future, then make judgments through trial and error, intuition, and pattern recognition. Humans are exceptionally good at this, but we are far from perfect.

There are two main flaws in human decision-making. First, our capacity to process information accurately is limited: we rely on heuristics (thinking shortcuts, such as rules of thumb), which often lead to inaccurate conclusions. Second, because we are human, we carry our own subjectivity and biases. As a result, our decisions sometimes amount to inexplicable, irrational judgments. We may focus on irrelevant information, rationalize wrong decisions, and, perhaps most worrying of all, succumb to bias. A 2011 study, for example, found that the proportion of favorable rulings handed down by judges dropped to nearly zero just before meal breaks and rebounded to about 65% immediately afterwards, a finding worth reflecting on.

The way machines "think", on the other hand, is different. AI and machine learning (ML) are sometimes used interchangeably to describe how machines process information, but in the IoT industry the terms have more specific meanings: AI is the broad field that aims to achieve human-like intelligence in machines, while ML is the applied mathematical discipline of using data-driven models to solve practical problems.

More and more IoT end devices are equipped with ML capabilities that loosely mimic biological intelligence: they detect occasional deviations in otherwise monotonous data streams, then interpret those patterns to support decisions. What machine solutions lack is human instinct and intuition, but at least they will not make bad decisions just before lunch, and there is much we can learn from that detached logic.
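As a rough illustration of what spotting deviations in a monotonous stream means in practice, here is a small, self-contained Python sketch that flags readings straying far from a rolling statistical baseline. A real ML-capable end device would typically run a trained model instead, but the principle of flagging departures from learned normality is the same.

```python
from collections import deque
import math
import random

class StreamAnomalyDetector:
    """Flag readings that deviate sharply from the recent history of an
    otherwise steady sensor stream (a statistical stand-in for the learned
    models an ML-capable end device might run)."""

    def __init__(self, window=50, threshold=4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold  # deviations beyond this many sigmas are abnormal

    def update(self, reading):
        is_anomaly = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9
            is_anomaly = abs(reading - mean) / std > self.threshold
        self.history.append(reading)
        return is_anomaly

# Simulated sensor: steady noise around 1.0 with one injected fault spike
detector = StreamAnomalyDetector()
for i in range(200):
    value = random.gauss(1.0, 0.05) + (5.0 if i == 150 else 0.0)
    if detector.update(value):
        print(f"sample {i}: anomaly detected (value={value:.2f})")
```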

Combining human brains with silicon brains works even better

Whether humans or machines interpret data better depends on the type of task. Machine learning, natural language processing, data analytics, and other AI applications excel at processing large volumes of data without human judgment, but when it comes to ambiguity, vagueness, and incomplete information, or wherever emotional intelligence and judgment are needed, humans typically outperform machines. In games like chess and Go, machines now win easily, but beyond games, research shows that combining human brains with silicon brains, known as augmented intelligence, yields the best results.

This augmented intelligence has tremendous room to grow in rapidly expanding IoT applications. As of 2021, there were already over 10 billion active IoT devices, and if projections hold, this could rise to 250 billion in the near future. Without ML to detect changes at the device, the data generated by the IoT would be overwhelming; yet letting machines make every decision without a degree of human supervision is risky. Enterprises deploying AI in the IoT face multiple risks, such as privacy intrusion, overly mechanical decision-making, and loss of management control. For example, advanced wearable devices are now widely used in healthcare to let providers remotely monitor patients' heart rate, breathing rate, and blood pressure, but we do not want those devices to bear the responsibility of diagnosis, because hidden signals in the data are hard to find without a physician's oversight. ML can, however, filter the vast amounts of data medical wearables generate, reducing false alarms and screening out spurious readings so that doctors can make better-informed clinical decisions, although this goal has not yet been fully achieved.
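As a toy illustration of that kind of filtering (not a real clinical algorithm; the thresholds below are invented for the example), a wearable could suppress one-off sensor glitches and surface only sustained deviations to the physician:

```python
def filter_heart_rate_alerts(samples_bpm, low=40, high=140, sustain=5):
    """Raise an alert only when heart-rate readings stay out of range for
    `sustain` consecutive samples, suppressing one-off glitches.
    All thresholds are illustrative, not clinical guidance."""
    alerts, run = [], 0
    for i, bpm in enumerate(samples_bpm):
        run = run + 1 if (bpm < low or bpm > high) else 0
        if run == sustain:
            alerts.append(i)  # index where the sustained deviation is confirmed
    return alerts

# A single 180 bpm glitch is filtered out; a sustained run of high
# readings is surfaced for the physician to review.
readings = [72, 75, 180, 74, 73, 150, 152, 155, 149, 151, 76, 74]
print(filter_heart_rate_alerts(readings))  # -> [9]
```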

Machine learning enhances human decision-making abilities

Using ML-equipped IoT devices to enhance human decision-making is nonetheless a daunting challenge. According to IDC's analysis, the amount of data generated by IoT devices is expected to reach 73.1 zettabytes by 2025, and relaying that much data to the cloud is impractical given network limitations, constrained device resources, and high costs.

However, today's IoT chips can support TinyML, a lightweight form of ML that lets edge devices, for example, continuously monitor data from industrial machinery to predict the risk of faults. The "learning" itself is still executed on powerful cloud servers, which re-tune the prediction algorithms and reprogram the IoT devices over the air.
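The cloud-trains/device-runs split described here might look roughly like the following Python sketch, which uses TensorFlow's public TFLite converter. The data, the tiny model, and the file name are illustrative assumptions; production workflows, for example with Edge Impulse, wrap far more tooling around this core step.

```python
import numpy as np
import tensorflow as tf

# Hypothetical vibration features labelled "healthy" (0) vs. "fault risk" (1);
# in practice these would come from real industrial-machinery telemetry.
x = np.random.rand(1000, 16).astype(np.float32)
y = (x.mean(axis=1) > 0.55).astype(np.float32)

# A deliberately tiny model so it fits a microcontroller's memory budget
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)  # the "learning" runs on the server side

# Convert to TensorFlow Lite with default quantization for the edge device;
# the resulting flatbuffer is what an over-the-air update would push out.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("fault_model.tflite", "wb") as f:
    f.write(converter.convert())
```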

The ultimate goal of AI is for ML to directly enhance human decision-making, although there is still a long way to go. Advances in deep learning have enabled machines to achieve superhuman performance on some tasks, but that performance has yet to translate into broad practical payoff. Rather than spending our effort trying to reason exactly as machines do, we should put their strengths to work directly. The data generated by the IoT will feed machines' predictive capabilities, and machines can stand in for humans in many mundane situations; when complex judgment is required, however, machine predictions will supplement rather than replace us. Machines will give us richer information to decide with, but the significant decisions will always be ours.

Helping developers get started quickly with embedded machine learning

Through the collaboration between Nordic and Edge Impulse, Nordic's Thingy:53 multi-sensor IoT prototyping platform and its Thingy:91 and nRF5340 development kits all support TinyML, enabling developers to get started quickly with embedded machine learning.

Nordic Thingy:53™ is an easy-to-use IoT prototyping platform that allows for the creation of prototypes and proofs-of-concept (PoC) without building custom hardware. Built around Nordic's flagship dual-core wireless nRF5340 SoC, the processing power and memory size of its dual Arm Cortex-M33 processors enable it to run embedded ML models directly on the device.

Nordic Thingy:91, on the other hand, is an easy-to-use, battery-powered prototyping platform for cellular IoT using LTE-M, NB-IoT, and GNSS. It is ideal for creating proofs-of-concept, demos, and initial prototypes during the cellular IoT development phase. Thingy:91 is built around the nRF9160 SiP and is certified for LTE bands worldwide, meaning it can be used almost anywhere in the world. Cellular communication can easily be interleaved with GNSS position acquisition, making it well suited to complex asset-tracking product ideas, and the kit comes pre-loaded with a sophisticated asset tracking application.

The Nordic nRF5340 DK is the development kit for the nRF5340, a dual-core Bluetooth 5.3 SoC that supports Bluetooth Low Energy, Bluetooth mesh, NFC, Matter, Thread, and Zigbee. The kit contains everything needed to get started with development on a single board and supports a wide range of wireless protocols. The SoC supports Bluetooth Low Energy with 2 Mbps high throughput, advertising extensions, and long-range capability. Mesh networking protocols such as Bluetooth mesh, Thread, and Zigbee can run concurrently with Bluetooth Low Energy, allowing smartphones to provision, commission, configure, and control mesh network nodes, a prerequisite for Matter applications. It also supports NFC, ANT, 802.15.4, and proprietary 2.4 GHz protocols.

Conclusion

In this era of rapid AI development, using AI to assist human management and decision-making has become a clear trend. Letting AI handle the basic processing and triage of information, then presenting the results to humans as a reference for important decisions, is augmented intelligence in practice and an important direction for future development. Nordic's machine learning solutions can accelerate the development of IoT AI applications and are well worth a closer look.
