Artificial intelligence (AI) has the potential to help doctors accurately diagnose patients and predict the risk of complex diseases. With AI, researchers can build models that health care providers use to predict patients’ risk of heart disease, cancer and various other conditions. However, such models are only accurate if they are trained on data from multiple providers.
While health care generates vast amounts of data year after year, most of it isn’t available for model training because of the need to protect identifiable patient information. With limited data access, AI models often aren’t as reliable in the real world, which limits how they can be used in health care.
To expand AI applications while still protecting patient data, the U.S. Department of Energy (DOE) has committed $1 million toward a one-year collaborative research project. The goal of the project is to create a secure AI framework that enables health care organisations to improve AI models used in biomedicine while keeping sensitive data secure.
Our ultimate hope is to safely expand our ability to use AI models and high-performance computing to further advance the field of biomedicine. AI models can leak data. This means that someone can take a model you’ve developed and reconstruct the data that the model was trained with. This becomes a huge problem when you’re dealing with data that is sensitive and protected.
– Ravi Madduri, Computer Scientist, Argonne
Within biomedicine, data used to train models can include information that is considered protected. To preserve privacy, organisations have avoided sharing their AI models or the data used to train them, instead training their models on the limited data available to them internally. With this approach, organisations risk creating biased models.
The research team will deliver a framework that allows organisations to train AI models on data spread across multiple organisations, all while keeping protected data secure. The framework will be developed as a software package for processing and securing data, along with advanced algorithms for federated learning, a form of machine learning that enables multiple organisations to collaboratively train a single model without pooling their raw data.
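In rough terms, federated learning works like this: each site trains on its own records and shares only model parameters, which a coordinator averages into a single global model. The sketch below is a minimal illustration of that idea (federated averaging over a toy linear model), not the project’s actual software; the function names, learning rate and hospital datasets are all assumptions.

```python
# Minimal federated averaging (FedAvg) sketch: each organisation trains
# locally and shares only model parameters, never raw patient records.
# The linear model y ~ w * x and all data here are purely illustrative.

def local_update(weights, local_data, lr=0.1):
    """One step of gradient descent on a one-parameter linear model,
    computed entirely on-site with the site's own (x, y) records."""
    grad = 0.0
    for x, y in local_data:
        grad += 2 * (weights * x - y) * x
    return weights - lr * grad / len(local_data)

def federated_average(global_weights, site_datasets, rounds=10):
    """Each round: every site refines the global model on its own data,
    then the coordinator averages the resulting parameters."""
    w = global_weights
    for _ in range(rounds):
        local_models = [local_update(w, data) for data in site_datasets]
        w = sum(local_models) / len(local_models)  # only weights leave a site
    return w

# Two hypothetical hospitals, each holding private (x, y) records
# that roughly follow y = 2x:
site_a = [(1.0, 2.1), (2.0, 3.9)]
site_b = [(1.5, 3.0), (3.0, 6.2)]
model = federated_average(0.0, [site_a, site_b])
```

The key property is visible in `federated_average`: the coordinator only ever sees model weights, never either hospital’s records.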
The team will incorporate differential privacy, a state-of-the-art statistical technique that adds carefully calibrated noise to what each institution shares, so that a model trained across multiple institutions reveals nothing about any individual record. The team will develop the secure federated learning algorithms for the framework. The framework will also be integrated with AI and supercomputing resources and expertise at both Argonne and DOE’s Lawrence Livermore National Laboratory (LLNL), which will enable researchers to train models more rapidly.
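The core mechanism of differential privacy can be shown in a few lines: bound how much any one record can influence a shared model update, then add random noise calibrated to that bound. The following is an illustrative sketch only, and not the project’s code; the clipping threshold, noise scale and function names are assumptions.

```python
import math
import random

def privatize_update(update, clip=1.0, scale=0.5):
    """Differentially private release of a scalar model update:
    clip the value to bound any single record's influence (the
    sensitivity), then add zero-mean Laplace noise of the given scale,
    sampled via the inverse-CDF method."""
    clipped = max(-clip, min(clip, update))
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return clipped + noise

# A site would call this on each update before sharing it, e.g.:
shared = privatize_update(0.8)
```

Because the noise is zero-mean, averaging many privatized updates still recovers a useful signal, while any single shared value stays deliberately imprecise.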
The team will then demonstrate the efficacy of their framework by using AI models that predict the severity of COVID-19, leveraging public and private biomedical datasets. Researchers also plan to use the framework to predict the risk of developing cardiovascular diseases.
If successful, this work will make it possible for organisations to develop and confidently share their AI models with scientists and relevant research groups, all without the worry of leaking private information.
As reported by OpenGov Asia, DOE’s Argonne National Laboratory has received nearly $3 million in funding for two interdisciplinary projects that will further develop artificial intelligence (AI) and machine learning technology.
The two grants were awarded by the DOE’s Office of Advanced Scientific Computing Research (ASCR). They will support Argonne scientists and collaborators pursuing AI and machine learning research, from developing approaches to handle enormous data sets to producing better outcomes where minimal data exists.
These two projects are among five the DOE recently awarded for interdisciplinary work using AI to advance the science conducted in the national labs. All five are focused on developing reliable and efficient AI and machine learning methods to address a broad range of science needs.