Vetnuus | October 2024
Artificial intelligence feasibility in veterinary medicine (continued)

...a clear explanation (e.g., decision trees) are less accurate [16]. There are concerns that limited human control in the human-AI relationship and AI-produced interpretations might lead to inaccurate decision-making. In this instance, the explainability and causality of AI are crucial and can promote acceptance, with human guidance where needed [17]. The key patterns criticized in the current systematic review should also be applied to "AI doctors" (medical chatbots, augmented doctors, and medical curricula), which may be awaited by the public but not yet by cutting-edge veterinary practitioners. The concept of ambient clinical intelligence is adaptive, sensitive, and responsive to the digital environment, and may be attractive to professionals as a means of lowering the fear of automating veterinary medicine [18, 19]. Finally, in the critical process of delivering a reliable and accurate medical decision, professional veterinary scepticism must lead the process: completeness and accuracy are the key features, not a comparison between humans and AI. Artificial intelligence can drive the process, but veterinarians are required to guide it.

The quality outcomes of this study were assessed from a random sample (n = 385) of the 883 articles obtained in our systematic review. The recommended minimum for review quality is 145 articles at p = 0.05; we reviewed 385 articles to keep systematic error minimal. The risk of bias was assessed binomially against a list of nine criteria established by the review authors. The total risk of bias was low to moderate, which supports the quality of the current approach.

Conclusion

AI has significant implications in the following areas of veterinary medicine: first, diagnostics; second, education, animal production, and epidemiology; third, animal health and welfare, pathology, and microbiology; and fourth, all remaining categories.
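As an aside on the sampling described above: the reported sample of n = 385 drawn from 883 articles is consistent with the standard Cochran formula for sample size at a 95% confidence level, maximal variance, and a 5% margin of error. The sketch below reproduces that arithmetic; it is an illustration under those assumed inputs (z, p, and e are not stated explicitly in the text), not the authors' own procedure.

```python
import math

# Cochran's sample-size formula: n0 = z^2 * p * (1 - p) / e^2
# Assumed inputs (inferred, not stated in the paper):
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # assumed proportion, giving maximal variance
e = 0.05   # margin of error

n0 = (z ** 2 * p * (1 - p)) / e ** 2
print(math.ceil(n0))  # 385, matching the review's random sample size
```

Note that this is the infinite-population form; applying a finite-population correction for N = 883 would yield a smaller minimum, so reviewing the full 385 articles errs on the conservative side.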
Assessment of error generation and AI efficacy led us to conclude that AI-derived answers should be used to enhance veterinary ability, not compared with it. The concept of ambient clinical intelligence is adaptive, sensitive, and responsive to the digital environment and may be attractive to veterinary professionals as a means of lowering the fear of automating veterinary medicine. Future studies should focus on AI models with flexible data input, which can be expanded by clinicians/users to maximize their interaction with good algorithms and reduce any errors generated by the process. We recommend that the use of AI in veterinary medicine be extended, but it should not take over the profession.

Authors' Contributions

ASV: Proposed the concept of the research. BF and AIV: Developed the search approach. DGP, LES, and GAV: Collected data. BF and DGP: Classified and categorized the data. LES and GAV: Drafted and revised the manuscript. All authors participated in the data collection and elaboration process. All authors have read, reviewed, and approved the final manuscript.

Acknowledgements

The authors are thankful to N.I. Bouchemla (MS, Jijel University, Algeria) for his support in this review. The authors did not receive any funds for this study.

Competing Interests

The authors declare that they have no competing interests.

Publisher's Note

Veterinary World remains neutral with regard to jurisdictional claims in published institutional affiliations.

References

1. Chang, A.C. (2020) Artificial intelligence in subspecialties. In: Intelligence-Based Medicine. Ch. 8. Academic Press, Cambridge, p. 267–396.
2. Mintz, Y. and Brodie, R. (2019) Introduction to artificial intelligence in medicine. Minim. Invasive Ther. Allied Tech., 28(2): 73–81.
3. Kottke-Marchant, K. and Davis, B. (2012) Laboratory Haematology Practice. Wiley-Blackwell, Oxford, UK.
4. Hanna, M.G., Parwani, A. and Sirintrapun, S.J.
(2020) Whole slide imaging: Technology and applications. Adv. Anat. Pathol., 27(4): 251–259.
5. El Achi, H. and Khoury, J.D. (2020) Artificial intelligence and digital microscopy applications in diagnostic hematopathology. Cancers (Basel), 12(4): 797.
6. PRISMA. (2023) Available from: https://www.prisma-statement.org. Retrieved on 07-05-2023.
7. Higgins, J. and Thomas, J., editors. (2021) Cochrane Handbook for Systematic Reviews of Interventions, Version 6.2. Available from: https://training.cochrane.org/handbook/current. Retrieved on 10-06-2023.
8. Higgins, J.P.T., Thompson, S.G., Deeks, J.J. and Altman, D.G. (2003) Measuring inconsistency in meta-analyses. BMJ, 327(7414): 557–560.
9. Harris, M., Qi, A., Jeagal, L., Torabi, N., Menzies, D., Korobitsyn, A., Pai, M., Nathavitharana, R.R. and Ahmad Khan, F. (2019) A systematic review of the diagnostic accuracy of artificial intelligence-based computer programs to analyze chest x-rays for pulmonary tuberculosis. PLoS One, 14(9): e0221339.
10. Sollini, M., Antunovic, L., Chiti, A. and Kirienko, M. (2019) Towards clinical application of image mining: A systematic review on artificial intelligence and radiomics. Eur. J. Nucl. Med. Mol. Imaging, 46(13): 2656–2672.
11. Higgins, J.P.T., Savović, J., Page, M.J., Elbers, R.G. and Sterne, J.A.C. (2022) Assessing risk of bias in a randomized trial. In: Higgins, J.P.T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M.J. and Welch, V.A., editors. Cochrane Handbook for Systematic Reviews of Interventions, Version 6.3. Ch. 8. Cochrane, London. Available from: https://www.training.cochrane.org/handbook. Retrieved on 11-06-2023.
12. Briganti, G. and Le Moine, O. (2020) Artificial intelligence in medicine: Today and tomorrow. Front. Med. (Lausanne), 7(2): 27.
13. Panch, T., Mattie, H. and Celi, L.A. (2019) The "inconvenient truth" about AI in healthcare. NPJ Digit. Med., 2(8): 77.
14. Kelly, C.J., Karthikesalingam, A., Suleyman, M., Corrado, G. and King, D.
(2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med., 17(189): 195.
15. Liu, X., Faes, L., Kale, A.U., Wagner, S.K., Fu, D.J., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C., Ledsam, J.R., Schmid, M.K., Balaskas, K., Topol, E.J., Bachmann, L.M., Keane, P.A. and Denniston, A.K. (2019) A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit. Health, 1(6): e271–e297.
16. Bologna, G. and Hayashi, Y. (2017) Characterization of symbolic rules embedded in deep DIMLP networks: A challenge to transparency of deep learning. J. Artif. Intell. Soft Comput. Res., 7(4): 265–286.
17. Holzinger, A., Langs, G., Denk, H., Zatloukal, K. and Müller, H. (2019) Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov., 9(4): e1312.
18. Chaiyachati, K.H., Shea, J.A., Asch, D.A., Liu, M., Bellini, L.M., Dine, C.J., Sternberg, A.L., Gitelman, Y., Yeager, A.M., Asch, J.M. and Desai, S.V. (2019) Assessment of inpatient time allocation among first-year internal medicine residents using time-motion observations. JAMA Intern. Med., 179(6): 760–767.
19. Acampora, G., Cook, D.J., Rashidi, P. and Vasilakos, A.V. (2013) A survey on ambient intelligence in health care. Proc. IEEE Inst. Electr. Electron. Eng., 101(12): 2470–2494.

Veterinary World, EISSN: 2231-0916
Available at: www.veterinaryworld.org/Vol.16/October-2023/17.pdf