Nojomi M, Babaee E, Rampisheh Z, Roohravan Benis M, Soheyli M, Rady Raz N. AI-Powered Clinical Decision Support Systems in Disease Diagnosis, Treatment Planning, and Prognosis: A Systematic Review. Med J Islam Repub Iran 2025;39(1):723-749.
URL: http://mjiri.iums.ac.ir/article-1-9654-en.html
Preventive Medicine and Public Health Research Center, Psychosocial Health Research Institute, Department of Community and Family Medicine, School of Medicine, Iran University of Medical Sciences, babaee.e@iums.ac.ir
Abstract:
Background: Artificial intelligence (AI) is transforming healthcare, with applications that can surpass human performance in prevention, detection, and treatment. This systematic review aimed to collect and assess the impact and success of AI technologies across various healthcare domains.
Methods: A systematic search of major databases (including PubMed, Scopus, and Web of Science [ISI]) was conducted for articles published up to 2023, using keywords related to AI-driven disease detection, classification, and prognosis. Non-English articles and those without accessible full texts were excluded. Data were extracted by two researchers, and the quality of the selected articles was evaluated based on the strengths and limitations reported by their authors.
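For illustration only, a keyword search like the one described could be reproduced programmatically against PubMed via NCBI's E-utilities, here using Biopython's Entrez module. The query string, date range, and email address below are hypothetical stand-ins, not the authors' actual search strategy:

```python
# Hypothetical sketch: reproducing a PubMed keyword search via NCBI E-utilities.
# The query terms and date range are illustrative, not the review's search strategy.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = ('("artificial intelligence"[Title/Abstract]) AND '
         '("diagnosis" OR "classification" OR "prognosis") AND '
         '("2000"[PDAT] : "2023"[PDAT])')

# Retrieve the first 50 matching PubMed IDs for the query.
handle = Entrez.esearch(db="pubmed", term=query, retmax=50)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records matched; first IDs: {record['IdList'][:5]}")
```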
Results: In total, 123 articles were included, and AI contributions were categorized into three areas. For disease detection (n=75), coronavirus disease 2019 (COVID-19) was the most frequent topic (n=18), followed by oncology; chest X-rays were the most common input (n=15). In disease classification (n=23), oncology (especially breast cancer) was the most researched field (n=7), primarily using breast imaging. For prediction and prevention (n=25), oncology was again the most studied category, with clinical and laboratory parameters being the most common input (n=12).
Conclusion: AI-driven clinical decision support systems (CDSS) exhibit strong diagnostic and prognostic accuracy in imaging and laboratory settings. However, many models function as “black boxes,” which limits interpretability and clinician trust. Data bias and challenges in integrating AI tools into practice also persist. The findings suggest that future work should focus on explainable AI and rigorous real-world validation to safely implement these tools in healthcare.
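The call for explainable AI can be made concrete with a small example. Below is a minimal, self-contained sketch of one simple model-agnostic explainability technique, permutation feature importance, applied to a "black box" classifier on synthetic data; it is illustrative only and not a method drawn from the reviewed studies:

```python
# Minimal illustration of one model-agnostic explainability technique:
# permutation feature importance on a "black box" classifier.
# Synthetic data stands in for clinical/laboratory parameters.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Measure the accuracy drop when each feature is shuffled:
# a larger drop indicates the model relies more on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Techniques of this kind attach a per-feature explanation to an otherwise opaque model, which is one practical route toward the interpretability and clinician trust the review identifies as lacking.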