Department of Information Technology

Trustworthy AI-based decision support in cancer diagnostics


For successful implementation of AI-based decision support in healthcare, it is of the highest priority to enhance trust in the system outputs. One reason for lack of trust is the limited interpretability of the complex, non-linear decision-making process. A way to build trust is therefore to improve humans' understanding of this process, which drives research in the field of Explainable AI. Another reason for reduced trust is that today's AI systems typically handle new and unseen data poorly. An important path toward increased trust is therefore to enable AI systems to assess their own uncertainty: understanding what a model "knows" and what it "does not know" is a critical part of a machine learning system.

For successful deployment of AI in healthcare and the life sciences, it is imperative to acknowledge the need for cooperation between human experts and AI-based decision-making systems: deep learning methods, and AI systems in general, should not replace, but rather augment, clinicians and researchers. This project aims to facilitate understandable, reliable, and trustworthy use of AI in healthcare, empowering medical professionals to interpret and interact with the AI-based decision support system.
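One common way to let a model assess what it "does not know" is Monte Carlo dropout: dropout is kept active at test time and predictions are averaged over several stochastic forward passes, with the spread (here, predictive entropy) serving as an uncertainty score. The following is a minimal, self-contained NumPy sketch of the idea for a binary (e.g. benign/malignant) linear classifier; the weights and input are illustrative placeholders, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, b, n_samples=200, p_drop=0.5):
    """Monte Carlo dropout: keep dropout active at inference and average
    sigmoid outputs over stochastic forward passes. Returns the predictive
    mean and the predictive entropy (a simple uncertainty score)."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(W.shape[0]) > p_drop       # randomly drop input features
        z = (x * mask) @ W / (1.0 - p_drop) + b      # inverted-dropout scaling
        probs.append(1.0 / (1.0 + np.exp(-z)))       # sigmoid probability
    probs = np.array(probs)
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)
                + (1 - mean) * np.log(1 - mean + 1e-12))
    return mean, entropy

# Hypothetical trained weights for a 4-feature binary classifier
W = np.array([[2.0], [-1.5], [0.5], [3.0]])
b = np.array([0.1])
x = np.array([1.0, 0.0, 1.0, 1.0])
mean, unc = mc_dropout_predict(x, W, b)
print(f"p(positive) = {mean[0]:.2f}, uncertainty = {unc[0]:.2f}")
```

A confident model yields entropy near 0; entropy near ln 2 ≈ 0.69 flags an input the model is unsure about, which could be routed to a human expert.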


  • N. Koriakina, N. Sladoje, V. Bašić, and J. Lindblad. Oral cancer detection and interpretation: Deep multiple instance learning versus conventional deep single instance learning. arXiv preprint: arXiv:2202.01783
  • A. Andersson, N. Koriakina, N. Sladoje, and J. Lindblad. End-to-end Multiple Instance Learning with Gradient Accumulation. IEEE International Conference on Big Data (Big Data), pp. 2742-2746, Osaka, Japan, Dec. 2022.
  • N. Koriakina, J. Lindblad, and N. Sladoje. The Effect of Within-Bag Sampling on End-to-End Multiple Instance Learning. In Proceedings of the 12th IEEE International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE, pp. 183-188, Zagreb, Croatia, Sept. 2021.
  • N. Koriakina, N. Sladoje, E. Wetzer, and J. Lindblad. Uncovering hidden reasoning of convolutional neural networks in biomedical image classification by using attribution methods. 4th NEUBIAS Conference, Bordeaux, France, March 2020.
  • N. Koriakina, N. Sladoje, E. Bengtsson, E. Darai Ramqvist, J-M. Hirsch, C. Runow Stark, J. Lindblad. Visualization of convolutional neural network class activations in automated oral cancer detection for interpretation of malignancy associated changes. 3rd NEUBIAS Conference, Luxembourg, Feb. 2019.
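Several of the publications above concern end-to-end multiple instance learning (MIL), where a whole-slide "bag" of instance features (e.g. cell images) is pooled into a single bag representation for classification. A widely used choice is attention-based pooling, where each instance receives a learned weight and the bag embedding is the attention-weighted sum. The sketch below shows only this forward pooling step in NumPy, with randomly initialized parameters; it is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attention_mil_pool(instance_feats, V, w):
    """Attention-based MIL pooling: score each instance with a small
    tanh network, normalize the scores to attention weights, and return
    the attention-weighted bag embedding plus the weights themselves
    (which also serve as per-instance interpretability scores)."""
    scores = np.tanh(instance_feats @ V) @ w   # one score per instance
    alpha = softmax(scores)                    # weights sum to 1
    bag = alpha @ instance_feats               # weighted sum of features
    return bag, alpha

# Illustrative bag: 5 instances with 8-dimensional features
rng = np.random.default_rng(1)
feats = rng.standard_normal((5, 8))
V = rng.standard_normal((8, 4))
w = rng.standard_normal(4)
bag, alpha = attention_mil_pool(feats, V, w)
print("bag shape:", bag.shape, "attention weights:", np.round(alpha, 3))
```

In end-to-end training on gigapixel slides, a bag may not fit in GPU memory at once; the gradient-accumulation idea studied above amounts to processing instances in chunks and summing their gradient contributions before one optimizer step.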


Project supported by AIDA (VINNOVA through MedTech4Health project 2017-02447).
We thank the Swedish National Infrastructure for Computing (SNIC) for computational support (SNIC2021-7-58, 2021/7-128 on Alvis at C3SE, Chalmers).

Updated  2023-03-15 23:16:55 by Joakim Lindblad.