Computer engineers and radiologists at Duke University have developed an Artificial Intelligence (AI) platform that analyses potentially cancerous lesions in mammography scans to determine whether an invasive biopsy is necessary. Unlike many of its predecessors, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions.
Rather than allowing the AI to freely develop its own procedures, the researchers trained it to locate and evaluate lesions just as a radiologist would be trained to do. The AI could make for a useful platform for teaching students how to read mammography images. It could also help physicians in sparsely populated regions around the world who do not regularly read mammography scans make better health care decisions.
If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense. We need algorithms that not only work but explain themselves and show examples of what they are basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.
– Joseph Lo, Professor of Radiology, Duke University
Engineering AI that reads medical images is a huge industry. Thousands of independent algorithms already exist, and the FDA has approved more than 100 of them for clinical use. Whether reading MRI, CT or mammogram scans, however, very few of them use validation datasets with more than 1,000 images or contain demographic information. This dearth of information, coupled with the recent failures of several notable examples, has led many physicians to question the use of AI in high-stakes medical decisions.
The researchers’ idea was to build a system that can say, in effect, that this specific part of a potentially cancerous lesion looks a lot like this other one with a known outcome. Without such explicit details, medical practitioners will lose time and faith in the system if there is no way to understand why it sometimes makes mistakes.
The researchers trained the new AI with 1,136 images taken from 484 patients at Duke University Health System. They first taught the AI to find the suspicious lesions in question and ignore all of the healthy tissue and other irrelevant data. They then hired radiologists to carefully label the images, teaching the AI to focus on the edges of the lesions, where potential tumours meet healthy surrounding tissue, and to compare those edges to edges in images with known cancerous and benign outcomes.
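The comparison idea described above can be illustrated with a minimal sketch of prototype-based classification: a feature vector describing a new lesion's edge is compared against stored "prototype" edge features from cases with known outcomes, and the prediction is reported together with the prototype that drove it. All feature vectors, labels, and case descriptions below are invented for illustration; the actual Duke model is a deep neural network, not this toy nearest-prototype scheme.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify_by_prototype(edge_features, prototypes):
    """Return (label, description, score) of the most similar prototype.

    prototypes: list of (feature_vector, label, description) tuples,
    where each description identifies the known case the features came
    from -- this is what makes the prediction explainable.
    """
    scored = [(cosine_similarity(edge_features, vec), label, desc)
              for vec, label, desc in prototypes]
    score, label, desc = max(scored)  # pick the closest known case
    return label, desc, score

# Toy prototype bank: edge features of lesions with known outcomes.
prototypes = [
    ([0.9, 0.1, 0.8], "malignant", "spiculated margin, case A"),
    ([0.1, 0.9, 0.2], "benign", "smooth circumscribed margin, case B"),
]

# A new lesion edge: the output names which known case it resembles,
# so a reader can check whether the comparison makes sense.
label, matched, score = classify_by_prototype([0.85, 0.2, 0.7], prototypes)
print(label, matched, round(score, 3))
```

The key design point is that the output is not just a label but a pointer to a specific known case, which is what lets a physician agree or disagree with the reasoning rather than trusting a black box.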
This is a unique way to train an AI to look at medical imagery. Other AIs do not try to imitate radiologists; they come up with their own methods for answering the question, which are often unhelpful or, in some cases, depend on flawed reasoning processes. After training was complete, the researchers put the AI to the test. While it did not outperform human radiologists, it did just as well as other black-box computer models. When the new AI is wrong, people working with it will be able to recognise that it is wrong and why it made the mistake.
As reported by OpenGov Asia, a new report showed that Artificial Intelligence (AI) has reached a critical turning point in its evolution. Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people’s lives daily—from helping people choose a movie to aiding in medical diagnoses.
In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have leapt in recent years from the academic setting to everyday applications.