Computer Information Systems, Prairie View A&M University, Prairie View, Texas, United States.
World Journal of Advanced Research and Reviews, 2025, 25(01), 385-413
Article DOI: 10.30574/wjarr.2025.25.1.0066
Received on 28 November 2024; revised on 05 January 2025; accepted on 07 January 2025
Introduction: Early detection of cancer plays a crucial role in improving patient outcomes and survival rates. Traditional diagnostic methods often struggle to accurately identify early-stage cancers, leading to delayed treatment and reduced chances of successful intervention. Recent progress in artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), has significantly enhanced the potential to diagnose and predict cancer. This review analyzes how ML approaches integrate multi-modal imaging data, genomics, and clinical parameters for early cancer diagnosis. Combining machine learning with imaging data from multiple modalities has proven to be a viable method for improving the diagnostic accuracy of early cancer detection. This review examines the current state of machine learning in early-stage cancer diagnosis, emphasizing multi-modal imaging analysis.
Materials and Methods: The literature search covered several databases, including PubMed, Scopus, Web of Science, and Google Scholar. Search keywords targeted areas of interest such as machine learning, multi-modal imaging, early cancer detection, and integration approaches. The search was limited to peer-reviewed journal articles, conference proceedings, and preprints on machine learning integration and multi-modal image analysis for early cancer detection.
Results: The review findings showed that machine learning algorithms, especially deep learning models, can successfully integrate multi-modal imaging data for early cancer diagnosis. These models can harmonize data gathered from MRI, CT, and PET and leverage advanced architectures to improve rates of cancer detection and staging. Many experiments have shown that deep learning models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), can extract multi-modal imaging features and combine them with clinical and genomic data streams. Moreover, incorporating genomic, clinical, and demographic databases alongside images further improves the performance of these models, as illustrated by the sketch below.
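To make the fusion idea concrete, here is a minimal PyTorch-style late-fusion sketch, assuming paired single-channel 2D MRI and CT slices plus a small tabular clinical vector. The class names (ModalityEncoder, LateFusionClassifier), layer sizes, and feature dimensions are illustrative assumptions, not the architecture of any study covered in this review.

```python
# Minimal late-fusion sketch: one small CNN encoder per imaging modality,
# concatenated with clinical features before a shared classifier head.
# All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small CNN that maps one imaging modality to a feature vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 32, 1, 1)
            nn.Flatten(),             # -> (N, 32)
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class LateFusionClassifier(nn.Module):
    """Concatenates per-modality embeddings with clinical features."""
    def __init__(self, n_clinical=8, n_classes=2):
        super().__init__()
        self.mri_enc = ModalityEncoder()
        self.ct_enc = ModalityEncoder()
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, mri, ct, clinical):
        # Late fusion: encode each modality separately, then concatenate.
        z = torch.cat([self.mri_enc(mri), self.ct_enc(ct), clinical], dim=1)
        return self.head(z)

# Toy forward pass with random tensors standing in for real scans.
model = LateFusionClassifier()
mri = torch.randn(4, 1, 128, 128)   # batch of 4 MRI slices
ct = torch.randn(4, 1, 128, 128)    # matching CT slices
clinical = torch.randn(4, 8)        # standardized age, lab values, etc.
logits = model(mri, ct, clinical)   # shape: (4, 2)
```

Late fusion is only one of the integration strategies the literature describes; early fusion (stacking modalities as input channels) and intermediate fusion (merging feature maps) follow the same outline with the concatenation moved earlier in the pipeline.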
Discussion: Combining artificial intelligence with multi-modal imaging offers higher sensitivity and specificity in detecting early cancers and metastases, enables therapies tailored to each patient, and supports biomarker discovery. However, issues such as data quality, standardization, and the interpretability of intricate models must be resolved to promote their use in clinical practice.
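For readers unfamiliar with the two metrics cited above, the following short worked example shows how sensitivity and specificity are computed from a confusion matrix. The counts are invented for demonstration and are not taken from any study in this review.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening set: 90 detected cancers, 10 missed cancers,
# 950 correct negatives, 50 false alarms.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=950, fp=50)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.95
```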
Conclusion: Integrating multiple imaging modalities with the help of ML has been found to provide better results in breast, lung, and prostate cancer. Nevertheless, open problems remain regarding data heterogeneity, the scale of multi-modal datasets, and the interpretability and generalization of the developed ML models. Additionally, technical factors such as data security and possible bias must be addressed before these approaches can be effectively implemented in clinical settings. Ultimately, these approaches exploit the strengths of the different imaging methods and integrate them with other useful information sources to enhance diagnostic information, which will benefit patients.
Keywords: Machine learning; Deep learning; Multi-modal imaging; Early cancer detection; Early diagnosis; Imaging analysis; Biomarkers; Personalized medicine; Artificial intelligence
Toochukwu Juliet Mgbole. Machine learning integration for early-stage cancer detection using multi-modal imaging analysis. World Journal of Advanced Research and Reviews, 2025, 25(01), 385-413. Article DOI: https://doi.org/10.30574/wjarr.2025.25.1.0066.
Copyright © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.