
Explaining medical AI is easier said than done

The growing use of artificial intelligence in medicine is paralleled by growing concern among many policymakers, patients, and physicians about the use of black-box algorithms. In a nutshell, it’s this: We don’t know what these algorithms are doing or how they are doing it, and since we aren’t in a position to understand them, they can’t be trusted and shouldn’t be relied upon.

A new field of research, dubbed explainable artificial intelligence (XAI), aims to address these concerns. As we argue in Science magazine, together with our colleagues I. Glenn Cohen and Theodoros Evgeniou, this approach may not help and, in some instances, can hurt.
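To make concrete what XAI methods typically offer, here is a minimal sketch of one common post-hoc technique: a gradient-based saliency score, which ranks input features by how strongly the model’s output responds to small changes in them. The model, its layer sizes, and the features are hypothetical stand-ins, not anything from the Science piece:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the layer sizes are illustrative only.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

# One hypothetical patient record with 10 numeric features.
x = torch.randn(1, 10, requires_grad=True)

score = model(x)      # the model's raw risk score for this input
score.backward()      # gradient of the score with respect to each feature

# Saliency: the magnitude of each input gradient, used as a crude
# measure of how much each feature "mattered" to this prediction.
saliency = x.grad.abs().squeeze()
top3 = saliency.argsort(descending=True)[:3]
print("features ranked most influential:", top3.tolist())
```

Scores like these approximate the model’s behavior after the fact; they are not a faithful account of how the prediction was actually computed, which is close to the gap the article’s argument turns on.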

Artificial intelligence (AI) systems, especially machine learning (ML) algorithms, are increasingly pervasive in health care. They are used for things like evaluating cardiovascular images, identifying eye disease, and detecting bone fractures. Many of these systems, and most of those cleared or approved for use by the Food and Drug Administration, rely on so-called black-box algorithms. While the notion of what constitutes a black-box algorithm is somewhat fluid, we think of it as an algorithm that is exceedingly difficult, or even impossible, for ordinary humans to understand.

Examples of black-box AI models include the class of algorithms ordinarily labeled “deep learning”: neural networks with many layers, convolutions, backpropagation, and the like.
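As a rough illustration of why such models resist human inspection, the toy network below, far smaller than anything used clinically, already contains on the order of ten thousand learned parameters and no human-readable decision rule. All sizes and labels here are hypothetical:

```python
import torch
import torch.nn as nn

# A toy "deep" classifier: stacked linear layers with nonlinearities.
# The input size (128 features) and the two output classes are
# hypothetical, e.g., an image embedding scored as fracture / no fracture.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

x = torch.randn(1, 128)                  # one hypothetical input
probs = torch.softmax(model(x), dim=1)   # a prediction, with no rationale attached
print(probs)

# Even this toy model has ~12,500 learned parameters; clinical-grade
# networks have millions, which is what makes them "black boxes".
print(sum(p.numel() for p in model.parameters()), "parameters")
```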
