Trustworthiness, in the context of an industrial system, is a relatively new term intended to clarify what trust in such a system means and how that trust can be approached by operational users as well as by the planners and designers of the system.
As defined by the IIC in its recently released Industrial Internet of Things Vocabulary v2.1 document: “Trustworthiness is the degree of confidence one has that the system performs as expected. Characteristics include safety, security, privacy, reliability and resilience in the face of environmental disturbances, human errors, system faults and attacks.”
While industrial systems vary greatly in purpose and scope, their stakeholders share an important common element: deep-rooted trust. For example:
The owners, investors and operational users trust that these systems work as specified and remain profitable and flawless throughout their expected lifetime.
Neighbors, customers and employees trust that the systems are safe and do not threaten their health or create environmental hazards.
The government trusts that laws and regulations are fulfilled: e.g. patient privacy standards in a hospital, clean-air directives in a fossil power plant or public safety in an urban transportation system.
With expectations high, it is quite a challenge for system engineers to fulfill all of these principles of trustworthiness in the design and operation of industrial systems.
While most experts agree that the five trustworthiness characteristics and their interaction are an important goal for any industrial system design, there are ongoing discussions about whether a design which fulfills all requirements of trustworthiness can be automatically trusted by all parties.
Let’s take a brief look at why the notion of trustworthiness in industrial systems can be so complex, in relation to the five trustworthiness characteristics as shown in the Trustworthiness Target Model above:
Humans are protected by privacy and safety, while security, reliability and resilience have no direct relationship in this area.
The Environment is exclusively protected by safety without other considerations involved.
The System is protected by security and, to some degree, by reliability, which guards it against damage or loss of components.
Finally, the system in Operation is mainly shielded by security and reliability, and partially protected by resilience.
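The mapping above can be sketched as a simple lookup table. This is an illustrative encoding, not part of the IIC vocabulary; the names and structure are this sketch’s own convention, and the nuances of “to some degree” and “partially” are flattened into plain set membership.

```python
# Illustrative sketch of the Trustworthiness Target Model described above.
# Each protection target maps to the trustworthiness characteristics that
# shield it, following the bullet list in the text. Degrees of protection
# ("to some degree", "partially") are not modeled here.

TARGET_MODEL = {
    "humans":      {"privacy", "safety"},
    "environment": {"safety"},
    "system":      {"security", "reliability"},
    "operation":   {"security", "reliability", "resilience"},
}

def characteristics_protecting(target: str) -> set:
    """Return the characteristics that shield a given protection target."""
    return TARGET_MODEL[target]

def targets_protected_by(characteristic: str) -> set:
    """Invert the model: which targets does one characteristic protect?"""
    return {t for t, chars in TARGET_MODEL.items() if characteristic in chars}
```

For example, `targets_protected_by("safety")` shows that safety is the one characteristic spanning both humans and the environment, which mirrors the asymmetry the list describes.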
One of the key challenges in trustworthiness design is that none of the trustworthiness characteristics can be implemented as a separate technology. Nor can the trustworthiness of an industrial system be achieved by simply combining such technologies, because the characteristics may support or interfere with one another.
One approach to addressing these challenges in industrial design is to employ a new classification of Trustworthiness Methods that are assigned to the system characteristics rather than the trustworthiness characteristics. In my article in the Fall issue of the IIC’s Journal of Innovation, I provide an in-depth look at these Trustworthiness Methods and introduce a new concept, the Trustworthy System Status Model (TSSM), to help designers plan a system that goes beyond the “normal” status and, by using specific Trustworthiness Methods, proactively prevent a system that has reached “disrupted” status from slipping into a “damaged or disastrous” status, or even being permanently lost.
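The status progression named in the TSSM description can be sketched as a small state model. The exact states and transition rules are defined in the Journal of Innovation article; the enumeration and the `degrade` transition below are assumptions made for illustration only.

```python
from enum import Enum

class SystemStatus(Enum):
    """Status levels suggested by the TSSM description in the text.
    The real model is defined in the IIC Journal of Innovation article;
    this ordering is an illustrative assumption."""
    NORMAL = 1
    DISRUPTED = 2
    DAMAGED_OR_DISASTROUS = 3
    LOST = 4

def degrade(status: SystemStatus, mitigated: bool) -> SystemStatus:
    """Hypothetical transition: a disruption caught by a Trustworthiness
    Method returns the system to NORMAL; an unmitigated one slips the
    system one level further down, bottoming out at LOST."""
    if mitigated:
        return SystemStatus.NORMAL
    next_value = min(status.value + 1, SystemStatus.LOST.value)
    return SystemStatus(next_value)
```

The point of the sketch is the asymmetry the text emphasizes: without an applied Trustworthiness Method, a “disrupted” system drifts toward “damaged or disastrous” and ultimately permanent loss, whereas a method applied in time restores normal status.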