Abstract
Objectives: Artificial intelligence (AI) technologies are developing rapidly in the medical field but have yet to be actively used in actual clinical settings. Ensuring reliability is essential to the dissemination of these technologies, which requires a wide range of research and subsequent social consensus on the requirements for trustworthy AI.

Methods: This review divided the requirements for trustworthy medical AI into explainability, fairness, privacy protection, and robustness; investigated research trends in the literature on AI in healthcare; and explored criteria for trustworthy AI in the medical field.

Results: Explainability provides a basis for healthcare providers to decide whether to rely on the output of an AI model, which in turn requires further development of explainable AI technology, evaluation methods, and user interfaces. For AI fairness, the primary task is to identify evaluation metrics optimized for the medical field. Regarding privacy and robustness, further technological development is needed, especially for defending training data and AI algorithms against adversarial attacks.

Conclusions: In the future, detailed standards should be established according to the problems that medical AI is intended to solve and the clinical fields in which it will be used. These criteria should also be reflected in AI-related regulations, such as AI development guidelines and approval processes for medical devices.
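To make the fairness point above concrete, the sketch below computes two common group-fairness metrics for a binary classifier's predictions: the demographic parity difference (gap in positive prediction rates between subgroups) and the equal opportunity difference (gap in true-positive rates). This is a minimal illustration, not a method from the paper; the toy arrays, the subgroup coding, and the function name `group_fairness_report` are all invented for the example.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compare a binary classifier's behavior across two patient subgroups.

    y_true : array of 0/1 ground-truth labels (e.g., disease present)
    y_pred : array of 0/1 model predictions
    group  : array of 0/1 subgroup membership (e.g., two demographic groups)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        rates[g] = {
            # Rate of positive predictions in the subgroup (demographic parity)
            "positive_rate": y_pred[mask].mean(),
            # True-positive rate in the subgroup (equal opportunity)
            "tpr": y_pred[mask & (y_true == 1)].mean(),
        }
    return {
        "demographic_parity_diff": abs(
            rates[0]["positive_rate"] - rates[1]["positive_rate"]
        ),
        "equal_opportunity_diff": abs(rates[0]["tpr"] - rates[1]["tpr"]),
    }

# Hypothetical toy data: labels, predictions, and subgroup membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(group_fairness_report(y_true, y_pred, group))
```

Which of these metrics (if either) is appropriate depends on the clinical task and its error costs, which is precisely the open question the review identifies: evaluation metrics still need to be selected and optimized for the medical field.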
| Original language | English |
| --- | --- |
| Pages (from-to) | 315-322 |
| Number of pages | 8 |
| Journal | Healthcare Informatics Research |
| Volume | 29 |
| Issue number | 4 |
| DOIs | |
| State | Published - Oct 2023 |
Bibliographical note
Publisher Copyright: © 2023 The Korean Society of Medical Informatics.
Keywords
- Artificial Intelligence
- Guideline
- Healthcare Disparity
- Machine Learning
- Trust