Machine Learning can automate charting using patient-doctor conversations

Shared by Radhika Narayanan | about 1 month ago

Automating the clerical aspects of medical record keeping, such as symptom recording, through speech recognition during a patient's visit1 could allow physicians to dedicate more time directly to patients, according to a new report published in JAMA Internal Medicine. Researchers considered the feasibility of using machine learning to automatically populate a review of systems (ROS) with all symptoms discussed in an encounter.

Methods

For the report, researchers used 90,000 human-transcribed, de-identified medical encounters described previously2. Of these, 2547 transcripts were randomly selected from primary care and selected medical subspecialties to undergo labelling of 185 symptoms by scribes. The rest were used for unsupervised training of the research model, a recurrent neural network3,4 commonly used for language understanding; model details have been reported previously5.

Because some mentions of symptoms were irrelevant to the ROS (eg, a physician mentioning nausea as a possible adverse effect), scribes assigned each symptom mention a relevance to the ROS, defined as being directly related to the patient's experience. Scribes also indicated whether the symptom was experienced. The 2547 labeled transcripts were randomly split into training (2091; 80%) and test (456; 20%) sets.

From the test set, researchers selected 800 snippets containing at least 1 of 16 common symptoms that would be included in the ROS and asked 2 scribes to independently assess how likely they would be to include the initially labeled symptom in the ROS. When both said "extremely likely," the symptom was defined as clearly mentioned; all other symptom mentions were considered unclear.

The input to the machine learning model was a sliding window of 5 conversation turns (snippets), and its output was each symptom mentioned, its relevance, and whether the patient experienced it. The team then assessed sensitivity and positive predictive value across the entire test set.
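The sliding-window input described above can be sketched as follows. This is an illustrative sketch only: the helper name and the conversation turns are invented for the example and are not drawn from the study's data or its actual model code.

```python
def sliding_snippets(turns, window=5):
    """Return every run of `window` consecutive conversation turns.

    The report's model consumed 5-turn snippets like these and, for each,
    emitted the symptoms mentioned, their relevance to the ROS, and
    whether the patient experienced them.
    """
    return [turns[i:i + window] for i in range(len(turns) - window + 1)]

# Hypothetical encounter: six alternating doctor/patient turns.
turns = [
    "Dr: How have you been feeling?",
    "Pt: Pretty tired lately.",
    "Dr: Any nausea or vomiting?",
    "Pt: Some nausea in the mornings.",
    "Dr: How long has that been going on?",
    "Pt: About a week.",
]
snippets = sliding_snippets(turns)  # six turns yield two overlapping 5-turn snippets
```

Overlapping windows mean a symptom mentioned mid-conversation appears in several snippets, giving the model repeated chances to detect it.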
They additionally calculated the sensitivity of symptom identification and the accuracy of documentation for clearly vs unclearly mentioned symptoms. The study was exempt from institutional review board approval because of the retrospective, de-identified nature of the data set; the snippets presented in the manuscript are synthetic, modelled after real spoken-language patterns, and contain no data derived from actual patients.
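The sensitivity and positive predictive value the team assessed reduce to standard counts of true positives, false negatives, and false positives. The counts below are hypothetical, chosen purely to illustrate the arithmetic; they are not results from the study.

```python
def sensitivity_ppv(tp, fn, fp):
    """Sensitivity (recall) = TP / (TP + FN);
    positive predictive value (precision) = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical per-symptom tallies over a test set: the model found 90 of
# 100 truly mentioned symptoms and raised 30 false alarms.
sens, ppv = sensitivity_ppv(tp=90, fn=10, fp=30)
# sens = 90/100 = 0.9; ppv = 90/120 = 0.75
```

Sensitivity answers "of the symptoms actually discussed, how many did the model catch?", while PPV answers "of the symptoms the model flagged, how many were real?" — both matter when auto-populating an ROS.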

Read More On healthmanagement.org

Categories: Artificial Intelligence & Robotics, Patient Engagement & Portals



