As artificial intelligence (AI) has become an important part of decision-making, the question of whether to trust a machine’s algorithm has been at the centre of public debate. The National Institute of Standards and Technology (NIST) has published a document proposing a list of nine factors that contribute to a person’s potential trust in an AI system.
According to NIST, human trust in AI systems is measurable; the challenge is to measure it accurately and appropriately. Many factors feed into human decisions about trust, taking into account users’ thoughts and feelings about the system and how they perceive the risks of using it.
The researchers largely base the document on past research into trust, beginning with the integral role of trust in human history and how it has shaped our cognitive processes. They gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity.
AI systems can be trained to discover patterns in volumes of data too large for the human brain to comprehend. A system might, for example, continuously monitor a very large number of video feeds and spot a child falling into a harbour in one of them. Automation does not merely replace human work; it can do work that humans could not possibly do alone.
The proposed factors are different from the technical requirements of trustworthy AI that NIST is establishing in collaboration with the broader community of AI developers and practitioners. The paper shows how a person may weigh the factors described differently depending on both the task itself and the risk involved in trusting the AI’s decision. The factors include accuracy, reliability, resiliency, objectivity, security, explainability, safety, accountability, and privacy.
For example, a music selection algorithm may not need to be especially accurate, particularly if a person is sometimes curious to step outside their usual tastes and experience novelty. It would be a far different matter to trust an AI that was only 90% accurate in making a cancer diagnosis, a far riskier task.
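To make the idea of task-dependent weighing concrete, here is a minimal sketch of how the nine factors might be combined into a weighted trust score. This is purely illustrative and not from the NIST publication: the factor scores, weights, and scoring function are all invented assumptions for this example.

```python
# Hypothetical sketch: weighing NIST's nine trust factors differently
# per task. All numbers below are invented for illustration; the NIST
# paper does not prescribe a scoring formula.

FACTORS = ["accuracy", "reliability", "resiliency", "objectivity",
           "security", "explainability", "safety", "accountability",
           "privacy"]

def trust_score(scores, weights):
    """Weighted average of per-factor scores, each in [0, 1]."""
    total_weight = sum(weights[f] for f in FACTORS)
    return sum(scores[f] * weights[f] for f in FACTORS) / total_weight

# A music recommender: low-risk task, so accuracy is weighted lightly.
music_weights = {f: 1.0 for f in FACTORS}
music_weights["accuracy"] = 0.2

# A cancer-diagnosis aid: high-risk task, so accuracy and safety dominate.
diagnosis_weights = {f: 1.0 for f in FACTORS}
diagnosis_weights["accuracy"] = 5.0
diagnosis_weights["safety"] = 5.0

# The same system: 90% accurate, strong on the other factors.
scores = {f: 0.95 for f in FACTORS}
scores["accuracy"] = 0.90

print(round(trust_score(scores, music_weights), 3))      # ≈ 0.949
print(round(trust_score(scores, diagnosis_weights), 3))  # ≈ 0.935
```

The identical system earns a lower trust score for the high-risk diagnostic task than for the music task, which is the paper’s central point: trust depends not only on the system but on what is at stake in using it.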
NIST stressed that the ideas in the publication are based on background research and would benefit from public scrutiny. Because the model for AI user trust is built on others’ research and on fundamental principles of cognition, the authors are seeking feedback on work the scientific community might pursue to experimentally validate these ideas.
Researchers have been applying AI in healthcare, for example in identifying brain tumours. As reported by OpenGov Asia, a U.S.-based AI solution provider and National Taiwan University Hospital (NTUH) developed the first AI-powered tumour auto-contouring solution, called VBrain. The AI device can map out the location of a tumour more quickly and accurately than traditional manual contouring.
The CEO of the U.S. AI solution provider said he was thrilled to bring the device to partners across the U.S. and Taiwan. Receiving FDA clearance for the solution allows the company to further its commitment to transforming radiotherapy workflows through full-body auto-contouring. AI promises a second set of eyes and hands to assist clinicians in analysing and segmenting medical scans, further improving cancer care for patients.