Electronic health records have been widely adopted with the hope they would save time and improve the quality of patient care. But due to fragmented interfaces and tedious data entry procedures, physicians often spend more time navigating these systems than they do interacting with patients.
Researchers at MIT and the Beth Israel Deaconess Medical Center are combining machine learning and human-computer interaction to create a better electronic health record (EHR). They developed a system that unifies the processes of looking up medical records and documenting patient information into a single, interactive interface.
Driven by Artificial Intelligence (AI), this smart EHR automatically displays customised, patient-specific medical records when a clinician needs them. The system also provides autocomplete for clinical terms and auto-populates fields with patient information to help doctors work more efficiently.
As one member of the research team puts it: "In the origins of EHRs, there was this tremendous enthusiasm, but few stopped to ask the deep questions around whether they would be of use for the clinician. I think a lot of clinicians feel they have had this burden of EHRs put on them for the benefit of bureaucracies and scientists and accountants. We came into this project asking how EHRs might actually benefit clinicians."
To design an EHR that would benefit doctors, the researchers had to think like doctors. They created a note-taking editor with a side panel that displays relevant information from the patient’s medical history. That historical information appears in the form of cards that are focused on particular problems or concepts.
Most EHRs store historical information on separate pages and list medications or lab values alphabetically or chronologically, forcing the clinician to search through the data to find what they need, Murray says. The new system displays only the information relevant to the particular concept the clinician is writing about.
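The contrast between the two approaches can be sketched in a few lines of Python. This is a minimal illustration, not the deployed system: the `Card` class, the concept names, and the history entries are all hypothetical, standing in for however the real system indexes a patient's record by clinical concept rather than as one long chronological list.

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """A focused view of one clinical concept from the patient's history."""
    concept: str                                  # e.g. "diabetes"
    entries: list = field(default_factory=list)   # relevant history items

# Hypothetical patient history, indexed by concept rather than stored
# as a single alphabetical or chronological list of medications and labs.
history = {
    "diabetes": Card("diabetes", ["HbA1c 7.2% (2023-01-10)", "diagnosed 2019"]),
    "metformin": Card("metformin", ["500 mg twice daily since 2019"]),
}

def cards_for(concepts):
    """Return only the cards relevant to what the clinician is writing."""
    return [history[c] for c in concepts if c in history]

# While the note mentions diabetes and metformin, the side panel shows
# just those two cards; unrelated history stays out of the way.
panel = cards_for(["diabetes", "metformin", "appendectomy"])
```

The key design choice illustrated here is the lookup key: retrieval is driven by the concepts in the note being written, so the clinician never has to leave the editor to hunt through separate pages.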
Pieces of interactive text called chips serve as links to related cards. As a physician types a note, the autocomplete system recognises clinical terms, such as medications, lab values, or conditions, and transforms them into chips. Each chip is displayed as a word or phrase highlighted in a colour that reflects its category (red for a medical condition, green for a medication, yellow for a procedure).
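A rough sketch of that chip mechanism might look like the following. The term dictionary, category names, and matching logic are illustrative assumptions; the real system's clinical vocabulary and autocomplete are far more sophisticated.

```python
# Hypothetical dictionary of clinical terms and their categories.
TERMS = {
    "hypertension": "condition",    # rendered red
    "lisinopril": "medication",     # rendered green
    "appendectomy": "procedure",    # rendered yellow
}
COLOURS = {"condition": "red", "medication": "green", "procedure": "yellow"}

def suggest(prefix):
    """Autocomplete: clinical terms starting with the typed prefix."""
    return sorted(t for t in TERMS if t.startswith(prefix.lower()))

def to_chip(term):
    """Turn a recognised term into a (text, category, colour) chip,
    or return None so unrecognised words stay as plain text."""
    category = TERMS.get(term.lower())
    if category is None:
        return None
    return (term, category, COLOURS[category])
```

For example, `suggest("lis")` offers `["lisinopril"]`, and `to_chip("hypertension")` yields a red condition chip, while an unrecognised word passes through as ordinary text.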
Through the use of autocomplete, structured data on the patient's conditions, symptoms, and medication usage are collected with no additional effort from the physician. The advance could change the paradigm of creating large-scale health datasets for studying disease progression and assessing the real-world effectiveness of treatments.
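The idea that structured data falls out of note-writing for free can be sketched as follows. The chip representation and field names here are hypothetical; the point is only that grouping the note's chips by category yields a structured record with no separate data-entry step.

```python
def structured_record(note_chips):
    """Group a note's chips into structured fields -- a free by-product
    of writing the note, not an extra documentation task."""
    record = {"condition": [], "medication": [], "procedure": []}
    for text, category in note_chips:
        record[category].append(text)
    return record

# Chips produced while the physician typed a (hypothetical) note:
chips = [("hypertension", "condition"), ("lisinopril", "medication")]
record = structured_record(chips)
# record["medication"] == ["lisinopril"]
```

Aggregated across many notes, records built this way are what would feed the large-scale datasets the researchers describe.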
After a year-long iterative design process, the researchers tested the new system by deploying the software in the emergency department at Beth Israel Deaconess Medical Center in Boston. They worked with an emergency physician and four hospital scribes who enter notes into the electronic health record. Deploying the software in an emergency department, where doctors operate in a high-stress environment, involved a delicate balancing act.
The Covid-19 pandemic complicated the deployment. The researchers had been visiting the emergency department to get a sense of the workflow, but the pandemic forced them to end those visits, and they were unable to be in the hospital while the system was being deployed.
Despite those initial challenges, the system proved popular over the one-month deployment. Its users gave it an average usability rating of 83.75 out of 100. Those initial results are promising, but as the researchers consider the feedback and work on future iterations of the system, they plan to proceed with caution.