Note: This post is grounded in views and experiences I’ve developed working as a resident physician while interacting with and building clinical AI tools. I do think that many of these thoughts apply broadly across use cases and domains.

The way I think about (and prioritize) recalling facts versus reasoning about them has evolved over the years as I’ve transitioned from pure computer science to medicine. As a resident physician, my daily decision-making is an extensive combination of recall and reasoning: drawing on clinical, physiological, and pharmacological knowledge while trying to figure out what is going on with a patient.

Over the past couple of years, a few LLM-based clinical reference tools have gained adoption among healthcare providers. At a high level, they pull information from medical journals and pass it to an LLM to answer a user’s query (a typical RAG framework). The idea is that grounding the LLM’s output in legitimate clinical sources improves the system’s knowledge and accuracy compared to querying a tool like ChatGPT directly.
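
To make that architecture concrete, here is a minimal sketch of the retrieve-then-generate loop being described. Everything in it is an assumption for illustration: the toy keyword-overlap retriever, the prompt template, and the names `retrieve_passages` and `answer_clinical_query` are hypothetical stand-ins, not the internals of any particular clinical tool.

```python
def retrieve_passages(query: str, corpus: list[dict], k: int = 3) -> list[dict]:
    """Rank corpus passages by naive keyword overlap with the query
    (a stand-in for a real vector-search or BM25 retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(query_terms & set(p["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_clinical_query(query: str, corpus: list[dict], llm) -> str:
    """Ground the model's answer in retrieved sources rather than relying
    only on whatever the LLM recalls from training."""
    passages = retrieve_passages(query, corpus)
    context = "\n\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    prompt = (
        "Answer the clinician's question using only the sources below, "
        "citing them where relevant.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )
    return llm(prompt)  # `llm` is any text-in, text-out model callable
```

Note that the final step hands both the evidence and the question to the model, and that last hop is exactly where synthesis can start to shade into reasoning.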

Relying on reference resources to locate clinical data and studies to help guide decision-making is not new; we want clinical reasoning to at least be informed by evidence-based guidelines and the literature. However, in my view, these LLM-driven tools often subtly cross the line from purely assisting with “information synthesis” into performing “clinical reasoning”.
