VN October 2024

…algorithms approved by the Food and Drug Administration (FDA) for ECG interpretation (54, 55). Nevertheless, advances in veterinary-specific AI tools, such as a deep learning model for canine ECG classification, are on the horizon and may be available soon (56). With the updated image upload function, GPT-4 and GPT-4o can also interpret blood work images. The Supplementary material illustrates a veterinary example in which GPT-4 and GPT-4o analyze a Case of the Month on eClinPath (57) and provide the correct top differential despite their limited ability to interpret the white blood cell dot plot.

Using ChatGPT in Veterinary Education

Recent studies leveraging Large Language Models (LLMs) in medical examinations underscore their utility in educational support. In human medical education, GPT-3's performance, evaluated using 350 questions from the United States Medical Licensing Exam (USMLE) Steps 1, 2CK, and 3, was commendable: it achieved scores near or at the passing threshold across all three levels without specialized training (58). This evaluation modified the exam questions into various formats (open-ended, or multiple-choice with or without a forced justification) to gauge ChatGPT's foundational medical knowledge. The AI-generated responses often included key insights, suggesting that ChatGPT's output could benefit medical students preparing for the USMLE (58). Another investigation in human medical education benchmarked the efficacy of GPT-4, Claude 2, and various open-source LLMs using multiple-choice questions from the Nephrology Self-Assessment Program. Success rates varied widely: open-source LLMs scored between 17.1 and 30.6%, Claude 2 scored 54.4%, and GPT-4 led with 73.7% (59). A comparative analysis of GPT-3.5 and GPT-4 indicates that the newer version improved substantially on the neonatal-perinatal medicine board examination (60).
In the veterinary education context, researchers at the University of Georgia used GPT-3.5 and GPT-4 to answer 495 faculty-generated multiple-choice and true/false questions from 15 courses in the third-year veterinary curriculum (27). The result concurred with the previous study in that GPT-4 (77% correct) performed substantially better than GPT-3.5 (55% correct); however, both performed significantly worse than veterinary students (86%). These studies highlight the variances in LLM knowledge bases, which could affect the quality of medical and veterinary education.

Beyond exam preparation, ChatGPT Plus subscribers can create customized versions of ChatGPT, referred to as GPTs (41), that are freely accessible to other users (61). Veterinarians, for instance, can harness these tools to develop AI tutors that educate clients and boost veterinary students' learning. For client education, the Cornell Feline Health Center recently launched 'CatGPT,' a customized ChatGPT that draws information from the center's website and peer-reviewed scientific publications to answer owners' inquiries (62). An example of a custom GPT is a specialized veterinary clinical pathology virtual tutor named VetClinPathGPT (63). This custom GPT draws from legally available open-access textbooks with Creative Commons licenses (64–66) and the eClinPath website (57), ensuring the information provided is sourced from credible references. Students are encouraged to pose any question pertinent to veterinary clinical pathology and can even request specific references or links to web pages. More information about this GPT is detailed in the Supplementary material.

FIGURE 1
