Innovation Roundup
A newsletter about innovation, technology, health economics, regulation and empathy in medicine (9/18/23)
Welcome to the latest issue of our newsletter. Here's what we have in store for you this week:
In our Science Roundup, we dive into the transformative potential of Large Language Models, like ChatGPT, in refining clinical decision support. We explore ChatGPT's ability to optimize alert logic, comparing its recommendations with human-generated ones, and the significance of such AI-driven interventions in today's healthcare landscape.
Our Industry Roundup underscores the momentum in health system consolidation, as seen with major transitions from Cerner to Epic and the broader implications for the EMR market.
Next, the Healthcare Economics and Policy segment introduces CMS's innovative state-based total cost of care model, a promising strategy for seamless care coordination and hospital engagement.
Lastly, in our Regulatory Roundup, we delve into the crucial conversation surrounding the safety and regulatory oversight of medical AI chatbots interacting directly with patients.
Enjoy,
Science Roundup
Using AI-generated suggestions from ChatGPT to optimize clinical decision support
Ever wondered how to accurately assess the role of LLMs in medicine? While there's been a dearth of examples, an emerging body of evidence is now lighting the way, revealing the intricacies of this evaluation process.
Clinical decision support offers recommendations to healthcare professionals at the point of care. However, about 90% of these alerts are disregarded by clinicians due to reasons like irrelevancy or poor timing. Large language models, like ChatGPT by OpenAI, can potentially help improve these suggestions and reduce alert fatigue. This work aimed to see if ChatGPT can offer useful recommendations to enhance clinical decision support and if its suggestions are on par with human-generated ones.
They utilized the ChatGPT chatbot to automatically suggest improvements to clinical alerts. The alert logic was converted into prompts for ChatGPT, formatted to ask for potential additional exclusions.
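As a rough illustration of this step, here is a minimal sketch of how an alert's trigger logic might be formatted into a prompt asking for additional exclusions. The `build_prompt` helper and the prompt wording are illustrative assumptions, not the study's actual template:

```python
# Hypothetical sketch: converting clinical-alert logic into an LLM prompt.
# The helper name and prompt wording are assumptions for illustration,
# not the study's actual prompt template.

def build_prompt(alert_name: str, alert_logic: str) -> str:
    """Format an alert's trigger logic as a request for additional exclusions."""
    return (
        f"The clinical decision support alert '{alert_name}' fires when:\n"
        f"{alert_logic}\n\n"
        "Suggest additional exclusion criteria that would make this alert "
        "more relevant to clinicians and reduce unnecessary firings. "
        "List each suggested exclusion on its own line."
    )

# Example with a made-up alert:
prompt = build_prompt(
    "Renal dosing check",
    "Patient is ordered drug X AND creatinine clearance < 30 mL/min",
)
print(prompt)
```

The key design point is that the alert logic itself, not any patient data or record identifiers, is what gets sent to the model.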
They then combined AI-generated suggestions with those previously made by clinical informaticians and grouped them by alert. The suggestions, both AI and human-derived, were randomized within each alert. If human suggestions had specific alert identifiers, they were reformatted, since ChatGPT didn't use these identifiers due to lack of access to record IDs.
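The pooling-and-blinding step can be sketched like this; the data structure and suggestion texts are toy assumptions for illustration, not the study's code:

```python
import random

# Toy sketch: pool AI- and human-derived suggestions per alert, then shuffle
# within each alert so raters cannot infer the source from ordering.
# The dict structure and example suggestions are assumptions for illustration.

suggestions = {
    "Renal dosing alert": [
        {"text": "Exclude patients already on dialysis", "source": "human"},
        {"text": "Exclude if the drug order is discontinued", "source": "ai"},
    ],
    "Drug interaction alert": [
        {"text": "Exclude historical (inactive) orders", "source": "ai"},
        {"text": "Exclude same-prescriber renewals", "source": "human"},
    ],
}

rng = random.Random(42)  # fixed seed only to make the ordering reproducible
for alert, items in suggestions.items():
    rng.shuffle(items)  # randomize order within each alert, mixing sources

for alert, items in suggestions.items():
    print(alert)
    for s in items:
        print("  -", s["text"])  # source label is hidden from raters
```

Shuffling within each alert (rather than across the whole pool) keeps every suggestion paired with the alert it targets while still blinding raters to its origin.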
Suggestions were rated by participants using a 5-point Likert scale on eight aspects: understanding, relevance, usefulness, acceptance without edits, potential workflow change, redundancy with existing alert logic, inversion of suggestion, and potential bias contribution. Additionally, a text box was provided for participants to give further feedback on each suggestion, addressing issues like the AI's tendency to generate fabricated information, termed as "hallucination."
AI-generated suggestions achieved high scores in understanding and relevance and did not differ significantly from human-generated suggestions. In addition, AI-generated suggestions did not show significant differences in terms of bias, inversion, redundancy, or workflow compared to human-generated suggestions. However, AI-generated suggestions received lower scores for usefulness and acceptance.
The study shows that ChatGPT can effectively analyze alert logic and produce valuable suggestions. While most AI-generated suggestions require modifications, they provide a foundation for experts. This method allows for quick analysis of numerous alerts, facilitating the scaling of CDS optimization. Furthermore, this approach can be incorporated during the alert development phase to offer AI-based suggestions early on. This is what a prototype for this process looks like:
Industry Roundup
Recently, both Intermountain and UPMC have transitioned from Cerner to Epic. These shifts shouldn't be seen merely as a reduction in market share for Cerner; instead, they underscore the trend of health system consolidation. UPMC is streamlining nine EHRs into a single system, while Intermountain has set an internal target to unify under one system by 2025. UPMC aims to be fully integrated with Epic by 2026, cementing Epic's leading position among major health systems and academic medical institutions. On another note, Oracle's future strategy and growth, especially in light of unfavorable VA publicity, are under scrutiny. It looks like we are slowly moving toward (an almost) unified EMR!
Healthcare Economics and Policy Roundup
CMS is rolling out a fresh state-based total cost of care model, drawing inspiration from Vermont’s ACO Model and Maryland’s highly successful Total Cost of Care model. This model is tailored for states, hospitals, and primary care practices, offering a comprehensive capitation for hospitals based on Medicare FFS benchmarks, encompassing both inpatient and outpatient services. CMS is aiming for collaborations with up to eight states under this scheme. Their direction becomes clear with this move – emphasizing the integration of behavioral health, enhanced coordination between primary and specialty care, alignment of payors with states, and motivating hospital engagement coupled with a boost in primary care investments. Let’s see if it gets the desired outcome.
Regulatory Roundup
Medical AI chatbots: are they safe to talk to patients?
According to FDA guidelines, “software functions” that support or provide clinical recommendations to patients or caregivers (as opposed to licensed healthcare providers) meet the definition of medical devices requiring FDA review and approval. These products can sidestep FDA review only if humans fully preside over the software function. Health care providers must “independently review the basis for the recommendations presented by the software” so that they do not rely primarily on AI-based recommendations, but rather on their own judgment, to make clinical decisions.
In short, anytime a medically trained chatbot or other AI-assisted device is intended to operate independently from licensed clinicians, FDA review and approval is necessary.
Miscellaneous
I watched Citizen Kane for the first time this weekend and it is spectacular. My only regret is that I had not watched it until now. It was probably the first movie I rewatched immediately after seeing it for the first time. This is my favorite quote about the movie:
“Citizen Kane remains the ultimate commentary on American culture since the early 20th century. It conveys America’s inherent polarities (individualism vs collective impulses; libertarianism vs puritanism; innocence vs corruption; the underdog mentality to rebel against oppression vs the impulse to rule over the masses through duping strategies) via a deft synergy of form and content. It has never been more rewarding to screen and talk about this film than in our current political moment.” Roy Grundmann
If you have not watched it, do yourself a favor and watch it right away!
The heart, bronchi and bronchial vessels by Leonardo da Vinci
Life, swallowing, tasting, and speaking after a total glossectomy (meaning: I have no tongue)
Jake is one of my favorite internet writers, and this piece is so brave and heartbreaking.
The first days home, I got numerous phone calls from seemingly everyone employed by the Mayo Clinic, and some from people not, like specialty pharmacies. Bess answered the calls and quickly began to sound annoyed: “Do you realize that Jake had a total glossectomy and can’t speak?” Yes, Mayo was aware. Yes, they had a department largely devoted to removing not just patients’ tongues, but their larynx and vocal cords as well. No, no one at Mayo considered how to handle this situation, despite Mayo’s large ENT patient population.
Thank you again for reading my newsletter.
If you enjoy reading this newsletter, please share it with someone else who might enjoy reading it.
Talk soon,
Hamid