The role of AI in medical malpractice tort is still being actively written, but it is reasonable to look at clinical decision-support for a preview. For example, many clinical guidelines allow for an “acceptable miss rate” when clinical judgment is augmented by use of a validated decision instrument. The flip side, of course, is the potentially dim view taken of clinicians who run afoul of guideline-concordant CDS.
However, most of those simple decision instruments are fully transparent and understandable, unlike the AI-augmented decision-making currently infiltrating practice. Among clinical specialties, radiology is at the forefront of machine-learning-augmented image analysis – and, so, this article looks at public perception of errors made in radiology, depending on whether the radiologist and the AI agreed or disagreed.
About 650 participants were recruited from the general public and given two scenarios – a radiologist who missed a brain bleed, and a radiologist who missed a cancer. Within each scenario, participants were randomized to the radiologist having no AI involved, or to one of four AI augmentation interactions.
Effectively, sometimes the AI missed it along with the radiologist, and sometimes the AI caught it – and in two scenarios, information regarding the performance characteristics of the AI was provided, as well.
The likelihood of the public siding against the radiologist in a medical malpractice decision is here:
The cancer figure is almost identical.
Not terribly surprising, overall – and this is only a simulation, not an actual courtroom, with all the twists and turns potentially occurring in arguments and submissions. Still, it is fair to say that if you miss pathology in agreement with the AI, it helps your case, while missing something in disagreement with the AI generally harms it. The “AI Disagree (FDR)” condition is a bit more nuanced: if the AI tool has known accuracy issues, disagreeing with it is probably not as harmful to your case, overall.
Overall, I’d say this is broadly similar to the effects of following guideline-concordant CDS – and a nice window into the sorts of cultural forces we will see shaping clinician behavior in interactions with AI tools.
Fascinating stuff.