Software (including AI) deployments in health tech

Reading: The testing of AI in medicine is a mess. Here’s how it should be done, Nature, 21 August 2024.

The article touches on several intertwined issues; notes on each below.

Software approvals currently happen without published trials or analyses:

“Health-care organizations are seeing many approved devices that don’t have clinical validation,” says David Ouyang, a cardiologist at Cedars-Sinai Medical Center in Los Angeles, California. Some hospitals opt to test such equipment themselves.

The incentives for companies making devices (including software/AI devices) need to encourage publications and trials.

How the technology is used is vital to its success:

Implementation depends on how well health-care professionals interact with the algorithms: a perfectly good tool will fail if humans ignore its suggestions.

The article cites a study in which one trial succeeded because of a clear protocol for use, while the same device failed at another site where physicians didn’t follow the instructions. (I’m reminded of a well-known saying: “No matter how it looks at first, it’s always a people problem.”)

AI solutions need to work across populations, or at least know their bounds:

For example, an algorithm developed by Google Health in Palo Alto, California, to detect diabetic retinopathy, a condition that causes vision loss in people with diabetes, was in theory highly accurate. But its performance dropped significantly when the tool was used in clinics in Thailand. An observational study revealed that lighting conditions in the Thai clinics led to low-quality eye images that reduced the tool’s effectiveness.
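The remedy the example suggests is a gate on input quality: if an image doesn’t look like the data the model was trained on, the system should abstain and ask for a retake rather than silently return a degraded prediction. Here’s a minimal sketch in Python of that idea. It is my own illustration, not Google Health’s pipeline, and the brightness/contrast thresholds are invented for the example:

```python
# A minimal sketch of one way a deployed model can "know its bounds":
# gate inference on input-quality checks calibrated to the training data.
# All thresholds and names here are illustrative assumptions, not values
# from the article or from Google Health's system.
import numpy as np

# Assumed bounds derived from the training distribution (hypothetical).
MIN_MEAN_BRIGHTNESS = 60.0   # 8-bit grayscale mean
MAX_MEAN_BRIGHTNESS = 200.0
MIN_CONTRAST_STD = 25.0      # pixel standard deviation

def quality_gate(image: np.ndarray) -> tuple[bool, str]:
    """Return (ok, reason). Reject images unlike those seen in training,
    e.g. underexposed photos from poorly lit clinics."""
    mean, std = float(image.mean()), float(image.std())
    if not (MIN_MEAN_BRIGHTNESS <= mean <= MAX_MEAN_BRIGHTNESS):
        return False, f"brightness {mean:.0f} outside calibrated range"
    if std < MIN_CONTRAST_STD:
        return False, f"contrast {std:.0f} below calibrated minimum"
    return True, "ok"

def screen(image: np.ndarray, model_predict) -> str:
    ok, reason = quality_gate(image)
    if not ok:
        # Abstain rather than silently returning a degraded prediction.
        return f"RETAKE IMAGE: {reason}"
    return model_predict(image)

# Usage with a stand-in model: a dark, low-contrast image is refused.
if __name__ == "__main__":
    dark = np.full((256, 256), 30, dtype=np.uint8)
    print(screen(dark, lambda img: "no retinopathy detected"))
```

The design point is that the abstention happens before the model runs, so a drop in image quality at a new site surfaces as an operational signal (“retake the image”) instead of a silent drop in accuracy.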

And patient consent is a fascinating topic in its own right:

While some tools may be clinician-facing, with no need to disclose their use, others are used directly by patients.

“What this tool is going to do is take emergency triage data, make a prediction and have a parent directly approve — yes or no — if the child can be tested,” [Devin] Singh [of the Hospital for Sick Children in Toronto] says. This alleviates the burden on the clinician and accelerates the whole process. But it also creates many unprecedented issues. If something goes wrong with the patient, who is responsible? […] “We need to, in an automated way, obtain informed consent from the family,” Singh says. And the consent has to be reliable and authentic. “It can’t be like when you sign up for social media and there are 20 pages of small print and you just hit accept,” Singh says.
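Singh’s requirement suggests an auditable, item-by-item consent record rather than a single “accept” click. Here’s a hedged sketch of what such a record might look like; the field names and consent items are my own illustrative assumptions, not anything specified in the article:

```python
# A sketch of automated informed consent for an AI triage flow.
# Not from the article: all field names and consent items are invented.
# The point is item-by-item affirmation with an audit trail, not one
# click over 20 pages of small print.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Each point the guardian must explicitly affirm, in plain language.
CONSENT_ITEMS = (
    "I understand an algorithm, not a clinician, made this recommendation.",
    "I understand which test is proposed, and its risks.",
    "I know I can decline and wait to speak with a clinician instead.",
)

@dataclass
class ConsentRecord:
    guardian_id: str
    patient_id: str
    # item -> (agreed, UTC timestamp), so each affirmation is auditable.
    affirmations: dict = field(default_factory=dict)

    def affirm(self, item: str, agreed: bool) -> None:
        if item not in CONSENT_ITEMS:
            raise ValueError("unknown consent item")
        self.affirmations[item] = (agreed,
                                   datetime.now(timezone.utc).isoformat())

    def is_valid(self) -> bool:
        """Consent holds only if every item was individually affirmed."""
        return all(self.affirmations.get(item, (False, None))[0]
                   for item in CONSENT_ITEMS)

# Usage: a partial or blanket "accept" does not produce valid consent.
record = ConsentRecord(guardian_id="g-001", patient_id="p-042")
record.affirm(CONSENT_ITEMS[0], True)
print(record.is_valid())  # False: two items were never affirmed
for item in CONSENT_ITEMS:
    record.affirm(item, True)
print(record.is_valid())  # True: every item explicitly affirmed
```

The key design choice is that validity requires every item to be affirmed individually, each with its own timestamp, which is roughly the opposite of the social-media small-print pattern Singh warns against.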