Abstract: Deep neural network (DNN) classifiers have attained remarkable performance in diagnosing known diseases when trained on large amounts of data from those diseases. However, DNN classifiers trained on known diseases usually fail when confronted with new diseases such as COVID-19. In this paper, we propose a new deep learning framework and pipeline for explainable medical imaging that can classify known diseases and also detect new/unknown diseases, even though the models are trained only on known-disease images. We first provide an in-depth mathematical analysis of the overconfidence phenomenon and present a calibrated confidence measure that mitigates it. Using this calibrated confidence, we design a decision engine that determines whether a medical image belongs to a known disease or a new one. Finally, we introduce a new visual explanation that reveals the suspected region within each image. Using both Skin Lesion and Chest X-Ray datasets, we validate that our framework significantly improves the accuracy of new-disease discovery, i.e., distinguishing COVID-19 from pneumonia without seeing any COVID-19 data during training. We also show qualitatively that our visual explanations are highly consistent with doctors' ground truth. While our work was not designed to target COVID-19, our experimental validation on real-world COVID-19 cases/data demonstrates the general applicability of our pipeline to different diseases based on medical imaging.
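The abstract names its ingredients (a calibrated confidence measure feeding a known-vs-unknown decision engine) without specifying them. As orientation only, the sketch below assumes the common recipe of temperature-scaled softmax confidence plus a rejection threshold; the values of T and tau and the toy logits are hypothetical, not taken from the paper.

    import numpy as np

    def calibrated_confidence(logits, T=2.0):
        # Temperature scaling: T > 1 softens the softmax and damps the
        # overconfidence of raw DNN logits. In practice T is fitted on a
        # held-out validation set; T = 2.0 here is purely illustrative.
        z = logits / T
        z = z - z.max()                      # numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return float(p.max()), int(p.argmax())

    def decide(logits, tau=0.5, T=2.0):
        # Toy decision engine: accept the top known-disease label only if
        # the calibrated confidence clears the threshold tau; otherwise
        # flag the image as a possible new/unknown disease.
        conf, label = calibrated_confidence(logits, T)
        return ("known disease", label) if conf >= tau else ("new/unknown disease", None)

    # Nearly flat logits: calibrated confidence is about 0.36 < tau,
    # so the engine abstains rather than forcing a known-disease label.
    print(decide(np.array([2.1, 1.9, 1.8])))   # -> ('new/unknown disease', None)

The visual explanation is likewise described only as revealing the suspected region in each image. The paper's method is new, but explanations of this kind are typically built on or compared against Grad-CAM, so a bare-bones Grad-CAM pass (not the paper's technique) is sketched here; the backbone and input are stand-ins.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()    # stand-in backbone, random weights
    acts, grads = {}, {}
    model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
    model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    x = torch.randn(1, 3, 224, 224)          # stand-in for a medical image
    model(x).max().backward()                # gradient of the top-class logit

    w = grads["g"].mean(dim=(2, 3), keepdim=True)          # per-channel weights
    cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True)) # weighted activations
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # [0, 1] heatmap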

Entity type: fabio:Abstract (data space: covidontheweb.inria.fr)

Subjects:
  • Image processing
  • Medical physics
  • Deep learning
  • Emerging technologies
  • Nuclear medicine
  • Artificial intelligence
  • Artificial neural networks