Innovation Roundup
A newsletter about innovation, technology, health economics, regulation and empathy in medicine (7/10/23)
Hello,
Welcome to the latest edition of our newsletter. Here's what we have in store for you this week:
Ethical and Social Risks of LLMs: Potential harm from Large Language Models (LLMs), focusing on discrimination, exclusion, and toxicity; information hazards; and misinformation harms.
Wearable Technology and Cardiovascular Health: Future of patient assessment through wearable technology
Industry Roundup: A comprehensive map of different market segmentations in digital health, complemented by a paper providing more background.
Healthcare Economics and Policy Roundup: We feature an excellent guide from the Canadian Drug and Health Technology Agency on reporting real-world evidence. This guide covers everything from research questions to statistical methods and study limitations, and is a must-read for those involved in generating real-world evidence.
Regulatory Roundup: We feature a paper that examines the association between regulatory submission characteristics and recalls of medical devices receiving 510(k) clearance.
I hope you find this issue informative and thought-provoking. As always, I welcome your feedback and look forward to continuing to provide you with the latest insights in science, technology, policy and healthcare.
Best Regards,
Hamid
Science Roundup
Wearable technology and the cardiovascular system: the future of patient assessment
A great review of devices, the underlying sensor technologies, the data acquired, and their applications, providing a perspective on where these tools could sit within cardiovascular health care, the challenges that need to be resolved, and the studies required to confirm their utility. I have been spending a lot of time thinking about incorporating these sensors into our digital health program, and I found this review to be really helpful.
Ethical and social risks of harm from LLMs
A few key points:
Discrimination, exclusion, and toxicity: LLMs reflect unjust, toxic, or oppressive tendencies found in the underlying training data, e.g., characterizing patients of color as "drug-seeking."
Information hazards: LLMs generate utterances that contain private or safety-critical information, e.g., re-identification of patients via prediction of identifying data (genomic, clinical, etc.).
Misinformation harms: LLMs assign high probabilities to false, misleading, or nonsensical information, e.g., medical misinformation for patient and/or clinician users.
Industry Roundup
I came across this map, and I think it is a great guide to the different market segmentations in digital health. I am bookmarking it for future reference.
Complement it with this paper, which provides more background to the above figure.
Healthcare Economics and Policy Roundup
Guidance for Reporting Real-World Evidence from the Canadian Drug and Health Technology Agency
We have been thinking a lot about creating real-world evidence for some of our portfolio companies. This is an excellent outline of the process of generating real-world evidence: it highlights the importance of clear research questions and study design; setting and context; data specifications, data sources, and participant selection; and the need for precise definitions of exposures, comparators, and outcomes, along with the handling of bias, confounding, and effect modifiers. The document also discusses statistical methods, study findings, interpretation and generalizability, and the reporting of study limitations. I am going to be referencing this a lot.
Regulatory Roundup
What happens when a new medical device is cleared based on a predicate device that has been recalled because of a safety problem? The authors found some important predictors of recall: applicant devices citing predicate medical devices with 3 or more ongoing recalls were significantly associated with a 9.31-percentage-point increase (95% CI, 2.84-15.77 percentage points) in recall probability compared with devices citing predicates without ongoing recalls. This raises the concern that:
substantial equivalence may be insufficient in certain circumstances for assessing the safety profile of moderate-risk medical devices, particularly when the safety profiles of predicate medical devices are not well understood or when applicant devices differ in some meaningful way from their predicate medical devices.
This points to a potentially important policy recommendation for the FDA:
It may be valuable to ask manufacturers to disclose to the FDA the recall status of predicate medical devices cited in their 510(k) submissions (manufacturers currently are not required to do this), and to explicitly request that manufacturers describe whether applicant devices are subject to the same concerns that led to the recalls of the predicate medical devices.
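As an aside for readers less familiar with how effect sizes like the one above are reported: a percentage-point difference in recall probability with a 95% confidence interval is, in its simplest (unadjusted) form, a risk difference with a Wald interval. The paper's estimate comes from a regression-adjusted analysis, so the sketch below is only a rough illustration of the concept, and the device counts in it are entirely hypothetical:

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Crude risk difference between two groups with a Wald 95% CI.

    Illustrative only -- not the paper's adjusted estimate. Groups here
    would be, e.g., devices citing predicates with 3+ ongoing recalls
    (group A) vs. devices citing predicates with none (group B).
    """
    p_a = events_a / n_a          # recall proportion, group A
    p_b = events_b / n_b          # recall proportion, group B
    rd = p_a - p_b                # risk difference
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical counts: 30 recalls among 200 devices vs. 10 among 200
rd, (lo, hi) = risk_difference_ci(30, 200, 10, 200)
print(f"risk difference: {rd:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

If the interval excludes zero, as the paper's 2.84-15.77 percentage-point interval does, the difference is statistically significant at the conventional 5% level.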
Thank you again for reading my newsletter.
If you enjoy reading this newsletter, please share it with someone else who might enjoy reading it.
Talk soon,
Hamid