Designing effective artificial intelligence software

January 18, 2021

Authors: C. Tang, J. C. Y. Seah, Q. Buchlak, C. Jones; Sydney/AU

Poster presented at the European Congress of Radiology (ECR) 2021 | Poster number C-13640

DOI: 10.26044/ecr2021/C-13640 

Learning objectives 

To raise awareness of the importance of usable AI design, to provide examples of model interpretability methods, and to summarise clinician reactions to methods of communicating AI model interpretability in a radiological tool.

Background 

In the past decade, AI-enabled tools, especially deep learning solutions, have exploded onto the radiological scene with the promise of revolutionising healthcare [1]. However, these data-driven models are often treated as numerical exercises and black boxes, offering little insight into the reasons for their behaviour. Trust in a novel technology is often limited by a lack of understanding of the decision-making processes behind it.

Findings and procedure details 

Design Cycle

"It's just aggravating to have to move and shuffle all these windows… shuffle between the list and your [Brand Name] dictation software… [or] Google Chrome or Internet Explorer, to search for something on there. Everything's just opening on top of each other, which is aggravating." – UX interview with an interventional radiologist, USA

The design of the entire user experience of our AI tool has involved radiologists and other clinicians at every step.
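
Among the model interpretability methods this work refers to, saliency maps [18][20][21] are one widely used family. As an illustration only, the Python sketch below computes an occlusion-sensitivity map in the spirit of Zeiler and Fergus [18]; it is a minimal sketch, not the implementation used in the tool described here, and `predict` is a hypothetical callable returning a model's score for a finding on a 2D image.

```python
import numpy as np

def occlusion_saliency(image, predict, patch=16, stride=8, fill=0.0):
    """Occlusion sensitivity [18]: slide a blank patch across the image
    and record how much the model's score drops when each region is
    hidden. Large drops mark regions the model relies on.

    `predict` is a hypothetical function (not part of any real tool)
    mapping a 2D array to a scalar score, e.g. P(pneumothorax).
    """
    h, w = image.shape
    baseline = predict(image)          # score on the unmodified image
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            drop = baseline - predict(occluded)
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1)  # average score drop per pixel
```

Overlaid on a chest radiograph, such a heatmap highlights the regions whose occlusion most reduces the model's score, giving the reading radiologist a visual cue to what drove the prediction.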

Conclusion 

The inclusion of interpretability techniques was well received across multiple rounds of user interviews, reflecting a demand from the broader radiological community to demystify the black box of AI. Future AI work should involve radiologists at every step of the design process to address workflow and UI concerns, especially as regulatory authorities move towards guidelines aimed at ensuring a safer and more interpretable AI future.

References 

All images are used with permission from annalise.ai.

All chest radiographs analysed here are from MIMIC-CXR 2.0.0: Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng, S. (2019). MIMIC-CXR Database (version 2.0.0). PhysioNet. https://doi.org/10.13026/C2JT1Q

[1] Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. Springer Science and Business Media LLC; 2018;18(8):500–510 http://dx.doi.org/10.1038/s41568-018-0016-5. 

[2] Recht MP, Dewey M, Dreyer K, et al. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol. Springer Science and Business Media LLC; 2020;30(6):3576–3584 http://dx.doi.org/10.1007/s00330-020-06672-5. 

[3] Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. Springer Science and Business Media LLC; 2019;17(1) http://dx.doi.org/10.1186/s12916-019-1426-2. 

[4] Holzinger A, Biemann C, Pattichis CS, Kell DB. What do we need to build explainable AI systems for the medical domain? 2017; arXiv:1712.09923v1 [cs.AI] 

[5] Geis JR, Brady A, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging. Springer Science and Business Media LLC; 2019;10(1) http://dx.doi.org/10.1186/s13244-019-0785-8. 

[6] Baselli G, Codari M, Sardanelli F. Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way? Eur Radiol Exp. Springer Science and Business Media LLC; 2020;4(1) http://dx.doi.org/10.1186/s41747-020-00159-0. 

[7] Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. Springer Science and Business Media LLC; 2019;17(1) http://dx.doi.org/10.1186/s12916-019-1426-2. 

[8] Samek W, Wiegand T, Müller K-R. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models 2017; arXiv:1708.08296v1 [cs.AI] 

[9] Elton DC. Self-explaining AI as an Alternative to Interpretable AI. Artificial General Intelligence. Springer International Publishing; 2020. p. 95–106 http://dx.doi.org/10.1007/978-3-030-52152-3_10. 

[10] Zhang Y, Liao QV, Bellamy RKE. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM; 2020. http://dx.doi.org/10.1145/3351095.3372852. 

[11] Google. People and AI Guidebook. https://pair.withgoogle.com/. Accessed Feb 2020. 

[12] Reyes M, Meier R, Pereira S, et al. On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Radiology: Artificial Intelligence. Radiological Society of North America (RSNA); 2020;2(3):e190043 http://dx.doi.org/10.1148/ryai.2020190043. 

[13] Brady AP, Neri E. Artificial Intelligence in Radiology—Ethical Considerations. Diagnostics. MDPI AG; 2020;10(4):231 http://dx.doi.org/10.3390/diagnostics10040231. 

[14] Patel BN, Rosenberg L, Willcox G, et al. Human–machine partnership with artificial intelligence for chest radiograph diagnosis. npj Digit Med. Springer Science and Business Media LLC; 2019;2(1) http://dx.doi.org/10.1038/s41746-019-0189-7. 

[15] Narla A, Kuprel B, Sarin K, Novoa R, Ko J. Automated Classification of Skin Lesions: From Pixels to Practice. Journal of Investigative Dermatology. Elsevier BV; 2018;138(10):2108–2110 http://dx.doi.org/10.1016/j.jid.2018.06.175. 

[16] Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. Springer Science and Business Media LLC; 2017;542(7639):115–118 http://dx.doi.org/10.1038/nature21056. 

[17] Winkler JK, Fink C, Toberer F, et al. Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. JAMA Dermatol. American Medical Association (AMA); 2019;155(10):1135 http://dx.doi.org/10.1001/jamadermatol.2019.1735. 

[18] Zeiler MD, Fergus R. Visualizing and Understanding Convolutional Networks. Computer Vision – ECCV 2014. Springer International Publishing; 2014. p. 818–833 http://dx.doi.org/10.1007/978-3-319-10590-1_53. 

[19] Philbrick KA, Yoshida K, Inoue D, et al. What Does Deep Learning See? Insights From a Classifier Trained to Predict Contrast Enhancement Phase From CT Images. American Journal of Roentgenology. American Roentgen Ray Society; 2018;211(6):1184–1193 http://dx.doi.org/10.2214/AJR.18.20331. 

[20] Singh A, Sengupta S, Lakshminarayanan V. Explainable Deep Learning Models in Medical Image Analysis. J Imaging. MDPI AG; 2020;6(6):52 http://dx.doi.org/10.3390/jimaging6060052. 

[21] Arun N, Gaw N, Singh P, et al. Assessing the (Un)trustworthiness of saliency maps for localizing abnormalities in medical imaging. medRxiv; 2020 https://dx.doi.org/10.1101/2020.07.28.20163899. 

Personal information and conflict of interest

C. Tang: Employee, annalise.ai
J. C. Y. Seah: Employee, annalise.ai
Q. Buchlak: Employee, annalise.ai
C. Jones: Employee, annalise.ai
