Human/Health-AI communication loop

Reading: Are AI Tools Ready to Answer Patients’ Questions About Their Medical Care? JAMA, 6 March 2026.

OpenAI says that ChatGPT Health is built “to guide patients to health care professionals for diagnosis and treatment.” Unfortunately, it currently under-triages emergency conditions and over-triages non-emergency ones.

That’s the headline you might have seen, but the JAMA news article points to another paper with an equally interesting result: 

The limiting factor [in LLM performance] wasn’t just the model’s medical knowledge […] It was the human-AI communication loop: people providing incomplete information, the model misinterpreting key details, and, importantly, people failing to carry forward a relevant diagnostic suggestion that the model did raise during the exchange.

In other words, turning to an LLM for help is harder than it looks. You might not volunteer the relevant information, the LLM might not ask for it, or you might discard a valid idea. This feels right: you can’t tell a good suggestion from a bad one unless you already have the medical knowledge to start with.

There’s a clear example in Extended Data Table 2. The scenario is a brain bleed: one person is told to take it easy, while the other is correctly told to get to A&E. The difference is that the person told to take it easy hadn’t mentioned their headache came on suddenly.

The situation will improve, I like to think, but for now: “use chatbots only for low-stakes support, such as explaining medical terms, preparing questions for a clinician, and summarizing what they’ve been told.” In my experience, those uses work well and have been incredibly useful.