Artificial Intelligence in the IIoT is a Matter of Trust
02/07/2019 Marcellus Buchheit
Artificial Intelligence is a hot commodity in the technology world these days. But what does it mean in the context of the Industrial IoT?
An early definition of artificial intelligence described “thinking machines” that could make decisions like humans, a notion that, for some, raised the fear that these thinking machines could actually replace humans in the manufacturing world. Today’s perception of AI, however, is geared more toward machines that exhibit human reasoning as a “guide to provide better services or create better products rather than trying to achieve a perfect replica of the human mind,” as noted in a Forbes article by Bernard Marr. He added that “It’s no longer a primary objective for most to get to AI that operates just like a human brain, but to use its unique capabilities to enhance our world.”
When applied to Industrial Internet of Things (IIoT) systems, AI has been demonstrated to offer business and technology advancements, such as cost reduction and better performance. Examples include predictive maintenance leading to fewer outages, better resource management and scheduling, and enhanced insights into system usage. AI has also been used to design physical structures and electronic components, and to perform quality assurance testing of complex systems.
At the crux of the IIC Journal of Innovation (JOI) article was the notion of trust: trust that systems operate correctly, based on evidence that can be understood. IoT Trustworthiness is defined in the IIC Vocabulary as the “degree of confidence one has that the system performs as expected with characteristics including safety, security, privacy, reliability and resilience in the face of environmental disturbances, human errors, system faults and attacks.”
If the AI system makes it hard or impossible to understand how a decision was made, trust in the system is reduced. The article goes on to describe the various risks and challenges AI can pose to the trustworthiness of an IIoT system.
One example illustrated how AI can be used to probe a system for vulnerabilities by attacking the system itself: an AI system connected to a video game learned to defeat the game in novel, unanticipated ways. A benign example, to be sure, but imagine if the system were not a harmless video game but an air traffic control system, a city traffic light system, or a nuclear power plant. The dire implications of uncontrolled AI are clear.
While AI might be used to expose and exploit vulnerabilities in IoT systems, it can also enhance the trustworthiness of a system. The JOI article points out two categories in particular where AI in IIoT is emerging:
1. The use of AI to improve the efficiency, reliability, and effectiveness of processes and tasks that can be fully automated with little risk. These are processes and tasks that are generally mundane, repeatable, and static with few variations, or tasks that are very specific and/or localized to specific components in a system.
2. The use of AI in processes that are critical, consequential, and non-mundane. When the level of risk is high enough, humans must retain the ultimate decision-making capacity, an approach referred to as “human-in-the-loop” or HIL.
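The human-in-the-loop approach described above can be pictured as a simple decision gate: low-risk AI recommendations are applied automatically, while high-risk ones are escalated to a human operator who keeps the final say. The sketch below is purely illustrative; the names and the risk threshold are assumptions, not part of any real IIoT framework.

```python
from dataclasses import dataclass

# Hypothetical HIL gate. RISK_THRESHOLD, Recommendation, and dispatch are
# illustrative assumptions used to show the pattern, not a real API.
RISK_THRESHOLD = 0.7  # recommendations scoring above this require a human


@dataclass
class Recommendation:
    action: str
    risk_score: float  # 0.0 (benign) .. 1.0 (critical)


def dispatch(rec: Recommendation, human_approves) -> str:
    """Auto-apply low-risk actions; escalate high-risk ones to an operator."""
    if rec.risk_score < RISK_THRESHOLD:
        return f"auto-applied: {rec.action}"
    # High risk: the human retains the ultimate decision-making capacity.
    if human_approves(rec):
        return f"operator approved: {rec.action}"
    return f"operator rejected: {rec.action}"


# Example: a routine recalibration runs unattended, while a plant-critical
# action waits for explicit operator approval.
print(dispatch(Recommendation("recalibrate sensor", 0.2), lambda r: False))
print(dispatch(Recommendation("shut down coolant pump", 0.95), lambda r: True))
```

The design choice here is that the threshold, not the AI, decides when a human is consulted, which mirrors the article's split between fully automatable low-risk tasks and critical, consequential ones.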
The article discusses the challenges, risks, and benefits of AI in IIoT environments in much more detail. You can read the full article here.
Co-founder of WIBU-SYSTEMS AG, President and CEO of WIBU-SYSTEMS USA
Marcellus Buchheit earned his Master of Science degree in computing science at the University of Karlsruhe, Germany in 1989, the same year in which he co-founded Wibu-Systems. He is well known for designing innovative techniques to protect software against reverse-engineering, tampering, and debugging. He speaks frequently at industry events and is an active member of the Industrial Internet Consortium. He currently serves as the President and CEO of Wibu-Systems USA Inc.