Using AI to simplify medical reports
Reading: Use of Artificial Intelligence Chatbots in Interpretation of Pathology Reports, Steimetz and others (2024), JAMA Network Open (via Dr Penguin).
Question: Can artificial intelligence chatbots accurately simplify pathology reports so that patients can easily understand them?
Yes.
Last year, my father was in hospital, and I used ChatGPT to decode letters from his consultant to his GP, and to better understand his hospital discharge notes and pathology (specimen) reports. I even used it to clarify things the medical staff told us—because sometimes you don’t think of the right questions when the person is in front of you. It’s good to be able to return with better informed questions.
I’m a sample size of one, but this paper looked at 1,134 pathology reports and checked the simplification with actual experts:
Three pathologists reviewed the flagged reports and categorized them as medically correct, partially medically correct, or medically incorrect; they also recorded any instances of hallucinations.
It looks pretty good to me, specifically for GPT-4:
- 1,105 (97.44%) reports correct;
- 24 (2.12%) partially correct;
- 5 (0.44%) incorrect; and
- 3 (0.26%) hallucinations.
The trick with all of this is going to be knowing when to trust the results and when to question them. Still, it’s reassuring that this kind of useful simplification is already possible today.