Artificial Intelligence (AI) has already been used to improve disease treatment and detection, discover promising new drugs, identify links between genes and diseases, and more. Hence, there’s a lot of excitement at the intersection of AI and health care.
By gaining access to the right data to train and test new algorithms, AI researchers can use that information to help patients. However, hospitals are reluctant to share sensitive information with research teams because of patient-privacy concerns and the difficulty of verifying that the data will remain confidential.
An AI company is addressing those problems with a technology that lets AI algorithms run on encrypted datasets that never leave the data owner’s system. Health care organisations can control how their datasets are used, while researchers can protect the confidentiality of their models and search queries. Neither party needs to see the data or the model to collaborate. The platform can also combine data from multiple sources, creating rich insights that fuel more effective algorithms.
You should not have to talk with hospital executives for five years before you can run your machine learning algorithm. Our goal is to help patients, to help machine learning scientists, and to create new therapeutics. We want new algorithms — the best algorithms — to be applied to the biggest possible data set.
– Manolis Kellis, MIT Professor
MIT researchers in the Computer Science and Artificial Intelligence Laboratory (CSAIL) analysed data from clinical trials, gene association studies, hospital intensive care units, and more. They found that hospitals often share data by shipping hard drives, relying on outdated file transfer protocols, or even sending records through the post, methods that are poorly tracked.
Hospitals and other health care organisations make parts of their data available to researchers by setting up a node behind their firewall. The AI company then sends encrypted algorithms to the servers where the datasets reside in a process called federated learning. The algorithms crunch the data locally in each server and transmit the results back to a central model, which updates itself. No one — neither the researchers nor the data owners — has access to the models or the datasets.
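The federated loop described above can be sketched in a few lines. The model (a linear regression), the update rule (federated averaging), and the hospital nodes below are illustrative assumptions for the sketch, not the company's actual implementation; the key property shown is that raw data never leaves a node, only trained weights do.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient steps on one node's private data, behind its firewall."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, nodes):
    """One round of federated averaging: each node trains locally,
    the central server averages the returned weights. The raw (X, y)
    pairs are never transmitted."""
    local_weights = [local_update(global_weights, X, y) for X, y in nodes]
    return np.mean(local_weights, axis=0)

# Three hypothetical hospital nodes, each holding its own private dataset
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    nodes.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, nodes)
print(np.round(w, 2))  # converges toward true_w, despite no node sharing data
```

In a real deployment the transmitted updates would additionally be encrypted, as the article describes, so that the central server learns only the aggregate.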
The researchers then invite machine learning researchers to come and train on last year’s data and predict this year’s data. If there is a new type of algorithm that is performing best in these community-level assessments, people can adopt it locally at many different institutions and level the playing field. So, the only thing that matters is the quality of the algorithm rather than the power of the connections.
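The "train on last year's data, predict this year's data" benchmark amounts to a strict temporal split, so no algorithm can peek at the evaluation period. A minimal sketch, with invented data and a trivial least-squares baseline standing in for a competing algorithm:

```python
import numpy as np

# Hypothetical community benchmark: synthetic records tagged by year,
# models trained on 2022 and scored on held-out 2023 data.
rng = np.random.default_rng(1)
years = rng.choice([2022, 2023], size=200)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic binary outcome

train = years == 2022  # last year's data: training
test = years == 2023   # this year's data: evaluation only

# Baseline "algorithm": linear least squares with an intercept column
Xb_train = np.column_stack([X[train], np.ones(train.sum())])
Xb_test = np.column_stack([X[test], np.ones(test.sum())])
w, *_ = np.linalg.lstsq(Xb_train, y[train], rcond=None)

preds = (Xb_test @ w > 0.5).astype(float)
accuracy = (preds == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

Because every participating institution scores its candidate algorithm against the same held-out year, the comparison reflects algorithm quality rather than who had privileged data access.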
By enabling a large number of datasets to be anonymised into aggregate insights, the technology also allows researchers to study rare diseases, in which small pools of relevant patient data are often spread out among many institutions. That fragmentation has historically made it difficult to apply AI models to the data.
The right place to solve this is not an academic project. The right place to solve this is in industry, where the platform can be available to any researcher. Creating an ecosystem of academia, researchers, pharma, biotech, and hospital partners will make that vision of medicine of the future become a reality.
MIT researchers have developed many technologies in the health care field. As reported by OpenGov Asia, researchers at MIT and the Beth Israel Deaconess Medical Center are combining machine learning and human-computer interaction to create a better electronic health record (EHR). They developed a system that unifies the processes of looking up medical records and documenting patient information into a single, interactive interface.
Driven by Artificial Intelligence (AI), this smart EHR automatically displays customised, patient-specific medical records when a clinician needs them. The system also provides autocomplete for clinical terms and auto-populates fields with patient information to help doctors work more efficiently.
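Autocomplete for clinical terms is typically backed by a prefix index. A minimal sketch using a prefix trie; the term list and class names below are invented for illustration and do not reflect the actual EHR system's implementation.

```python
class TrieNode:
    """One node of a prefix trie over clinical-term characters."""
    def __init__(self):
        self.children = {}
        self.is_term = False

class Autocomplete:
    def __init__(self, terms):
        # Insert each term character by character, marking word ends.
        self.root = TrieNode()
        for term in terms:
            node = self.root
            for ch in term.lower():
                node = node.children.setdefault(ch, TrieNode())
            node.is_term = True

    def complete(self, prefix):
        """Return all stored terms starting with the given prefix."""
        node = self.root
        for ch in prefix.lower():
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Collect every complete term in the subtree under the prefix.
        results, stack = [], [(node, prefix.lower())]
        while stack:
            cur, text = stack.pop()
            if cur.is_term:
                results.append(text)
            for ch, child in cur.children.items():
                stack.append((child, text + ch))
        return sorted(results)

ac = Autocomplete(["hypertension", "hypotension", "hyperlipidemia", "asthma"])
print(ac.complete("hyp"))  # → ['hyperlipidemia', 'hypertension', 'hypotension']
```

A production system would rank completions by context and usage frequency rather than alphabetically, but the prefix lookup itself works the same way.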