An Analysis of the Interpretability of Machine Learning in Medical Diagnosis
Author(s)
S. Shanmugapriya, L. Gnanaprasanambikai
Published Date
September 12, 2024
DOI
your-doi-here
Volume / Issue
Vol. 18 / Issue 5
Abstract
Machine learning has achieved strong performance on many medical tasks, in some cases exceeding that of clinicians. A major obstacle remains, however: deep learning models operate as black boxes, offering little insight into how they reach their decisions. This opacity hinders their deployment in real clinical settings, where trust in and understanding of a model's reasoning are essential. To address this problem, a growing body of research has sought to make deep learning more interpretable. In this paper, we review these efforts and their findings. We survey the methods proposed to improve the interpretability of deep learning in medicine, the applications they target, the metrics used to evaluate their success, and the datasets on which they are studied. We also discuss open challenges and directions for future research.