Open Source Healthcare

AI Doctor App?

Good afternoon, everyone! I am super excited about the possibilities that AI is opening up in HMIS and healthcare. As a new FNP, I feel the gravity of my assessments, diagnoses, and treatments for all patients, even those with straightforward problems. I still consult my texts and guides for every case, and double- and triple-check my prescriptions. I remember learning that 76% of diagnoses could be obtained from a patient’s subjective data alone, so I spend time carefully collecting as much information as I can during the health history. This is easy for me to do because my FNP practice is a sideline, something I do as a ministry for friends and family, while I rely on my full-time, low-stress job as a home health nurse to pay my mortgage. But in most clinical practices, primary care providers see one patient every 20 minutes and write more than 20 prescriptions per day, with an average visit count falling between 15 and 25 patients per day (Wolters Kluwer, 2020). With numbers that lofty, it is simply not possible to research each patient in depth, and errors are likely in such a heavy-workload environment.

But what if artificial intelligence (AI) did the heavy lifting for us? Office staff, nurse aides, nurses, and phlebotomists already perform ancillary healthcare services, gathering patients’ demographic data, subjective information, vital signs, and lab work. Instead of merely typing that information into a typical software database, they could feed it into an AI platform built as a robust medical diagnostic system. These AI systems, also called large language models (LLMs), are machine learning systems that produce humanlike responses from language. They can solve complex cases, show clinical reasoning abilities, take patient histories, and even display empathetic communication (Savage et al., 2024). The AI platform could then function as the brains of the provider and ‘crunch’ the data into a viable diagnosis. From there, the treatment plan would logically flow, along with the usual prescriptions, further tests, and referrals. It is mind-boggling. But could an AI function as well as a real human medical provider?

According to a provocative 2024 study in JAMA Network Open by Goh et al., AI could–and did–outperform medical providers. The study was a randomized clinical trial conducted over one month in 2023. It investigated whether physicians (with an average of three years’ experience and training in family medicine, internal medicine, or emergency medicine) were more successful diagnosticians when they utilized AI tools (the intervention) versus conventional tools. Surprisingly, they were not. The physicians who used ‘conventional’ research aids such as UpToDate, Google, and textbooks were 74% accurate in their diagnoses, while the physicians who used the AI tool, ChatGPT Plus (GPT-4) from OpenAI, were only 76% accurate, which was not a statistically significant improvement. However, the LLM alone was 92% successful in finding the correct diagnosis (Goh et al., 2024). Because the researchers’ original question was whether the AI intervention improved physicians’ diagnostic reasoning, not how AI performed versus physicians, they did not heavily interpret or extrapolate these incidental findings, though they did state that more research is needed on marrying these two entities: physicians and AI tools.

No one wants to put themselves out of a job, even if AI is showing promise of being a better diagnostician than a physician. Despite the admittedly small size of this study, I envision that AI will be the future answer to our broken healthcare system and that it will bring an open-source mindset into the highly political and powerful arena of medical information that is modern medicine.

References

Goh, E., Gallo, R., Horn, J., Strong, E., Weng, Y., Kerman, H., Cool, J. A., Kanjee, Z., Parsons, A. S., Ahuja, N., Horvitz, E., Yang, D., Milstein, A., Olson, A. P. J., Rodman, A., & Chen, J. H. (2024). Large language model influence on diagnostic reasoning: A randomized clinical trial. JAMA Network Open, 7(10), 1-12. https://doi.org/10.1001/jamanetworkopen.2024.40969

Savage, T., Nayak, A., Gallo, R., Rangan, E., & Chen, J. H. (2024). Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine. NPJ Digital Medicine, 7(1), 20. https://doi.org/10.1038/s41746-024-01010-1

Wolters Kluwer. (2020, July 16). NPs and PAs by the numbers: The data behind America’s frontline healthcare providers.
