Vetnuus | October 2024

In veterinary medicine, the absence of regulatory oversight, especially in diagnostic imaging, calls for ethical and legal considerations to ensure patient safety in the United States and Canada (105, 106). LLM tools like ChatGPT pose specific regulatory challenges, such as patient data privacy, medical malpractice liability, and informed consent (107). Continuous monitoring and validation are key, as these models continue to learn and be updated after launch. As of today, the FDA has not authorized any medical device that uses generative AI or an LLM.

Practical learning resources

Resources for learning about ChatGPT and generative AI are abundant, including AI companies' documentation (108–110), online courses from Vanderbilt University and IBM on Coursera (41, 111), Harvard University's tutorial on generative AI (112), and the University of Michigan's guides on using generative AI for scientific research (113). These resources are invaluable for veterinarians seeking to navigate the evolving landscape of AI in their practice. Last but not least, readers are advised to engage ChatGPT with well-structured prompts, such as: 'I'm a veterinarian with no background in programming. I'm interested in learning how to use generative AI tools like ChatGPT. Can you recommend some resources for beginners?' (see Supplementary material).

The ongoing dialog

At the 2023 Responsible AI for Social and Ethical Healthcare (RAISE) Conference, held by the Department of Biomedical Informatics at Harvard Medical School, several principles on the judicious application of AI in human healthcare were highlighted (114). These principles could be effectively adapted to veterinary medicine. Integrating AI into veterinary practices should amplify the benefits to animal welfare, enhance clinical outcomes, broaden access to veterinary services, and enrich the patient and client experience.
AI should support rather than replace veterinarians, preserving the essential human touch in animal care. Transparent and ethical utilization of patient data is paramount, advocating for opt-out mechanisms in data collection processes while safeguarding client confidentiality. AI tools in the veterinary field ought to be envisioned as adjuncts to clinical expertise, with the potential for their role to develop progressively, subject to stringent oversight. The growing demand for direct consumer access to AI in veterinary medicine promises advancements but necessitates meticulous regulation to assure pet owners about data provenance and the application of AI.

This review discussed the transformative potential of ChatGPT across clinical, educational, and research domains within veterinary medicine. Continuous dialog, awareness of limitations, and regulatory oversight are crucial to ensure that generative AI augments clinical care, educational standards, and academic ethics rather than compromising them. The examples provided in the Supplementary material encourage innovative integration of AI tools into veterinary practice. By embracing responsible adoption, veterinary professionals can harness the full potential of ChatGPT to make the next paradigm shift in veterinary medicine.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fvets.2024.1395934/full#supplementary-material

ChatGPT 101: prompts and prompt engineering

Understanding prompts is crucial before engaging with ChatGPT or other generative AI tools. Prompts act as conversation starters, consisting of instructions or queries that elicit responses from the AI. Effective prompts for ChatGPT integrate relevant details and context, enabling the model to deliver precise responses (28). Prompt engineering is the practice of refining inputs to produce optimal outputs.
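As a minimal illustration of this idea (a hypothetical sketch, not from the article), a well-structured prompt can be assembled from role, context, task, and output-format components. The function and parameter names below are illustrative assumptions:

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from its components.

    Hypothetical helper: each part mirrors one element of effective
    prompt design (who the model should act as, relevant background,
    the concrete request, and the expected response format).
    """
    return "\n\n".join([
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ])


prompt = build_prompt(
    role="a veterinary assistant",
    context="A 5-year-old Labrador presents with intermittent lameness.",
    task="List three differential diagnoses to discuss with the clinician.",
    output_format="A numbered list, one diagnosis per line.",
)
print(prompt)
```

The resulting string can be pasted into ChatGPT as-is; separating the components simply makes each element of the prompt easy to refine independently, which is the essence of prompt engineering.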
For instance, researchers instructing ChatGPT to identify body condition scores from clinical records begin prompts by detailing the data structure and desired outcomes: "Each row of the dataset is a different veterinary consultation. In the column 'Narrative' there is clinical text. Your task is to extract Body Condition Score (BCS) of the animal at the moment of the consultation if recorded. BCS can be presented on a 9-point scale, example BCS 6/9, or on a 5-point scale, example BCS 3.5/5. Your output should be presented in a short-text version ONLY, following the rules below: ... (omitted)" (28). Writing effective prompts involves providing contextual details in a clear, specific way and a willingness to refine them as needed. Moreover, incorporating 'cognitive strategy prompts' can direct ChatGPT's reasoning more effectively (refer to Supplementary material for more details). For a comprehensive understanding of prompt engineering, readers are encouraged to refer to specialized literature and open-access online courses dedicated to this subject (41–44). Proper prompt engineering is pivotal for shaping conversations and obtaining the intended results, as illustrated by various examples in this review (Figure 1).

Using ChatGPT in clinical care

ChatGPT has the potential to provide immediate assistance upon the client's arrival at the clinic. In human medicine, the pre-trained GPT-4 model is adept at processing chief complaints, vital signs, and medical histories entered by emergency medicine physicians, subsequently making triage decisions that align closely with established standards (45). Given that healthcare professionals in the United States spend approximately 35% of their time documenting patient information (46) and that note redundancy is on the rise (47), ChatGPT's ability to distill crucial information from extensive clinical histories and generate clinical documents is particularly valuable (48).
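As a deterministic contrast to the LLM approach, the BCS formats named in the extraction prompt quoted earlier (e.g. 'BCS 6/9', 'BCS 3.5/5') can be matched with a simple regular expression. This is an illustrative baseline sketch, not the study's method, and the pattern is an assumption:

```python
import re

# Matches a BCS recorded on a 9-point or 5-point scale,
# e.g. "BCS 6/9" or "BCS 3.5/5" (illustrative pattern, an assumption).
BCS_PATTERN = re.compile(r"BCS\s*(\d+(?:\.\d+)?)\s*/\s*([59])", re.IGNORECASE)


def extract_bcs(narrative: str) -> list[tuple[float, int]]:
    """Return (score, scale) pairs found in a clinical narrative."""
    return [(float(score), int(scale))
            for score, scale in BCS_PATTERN.findall(narrative)]


extract_bcs("Exam: bright, alert. BCS 6/9, mild dental tartar.")
extract_bcs("Owner reports weight gain; BCS 3.5/5 at today's visit.")
```

Because the pattern requires the literal token "BCS", a phrase such as "lameness score 2/5" is not matched, which hints at why the unconstrained LLM misclassified lameness scores: natural-language narratives rarely follow one fixed format, which is exactly the gap the LLM-based extraction is meant to bridge.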
In veterinary medicine, a study utilizing GPT-3.5 Turbo for text mining demonstrated the AI's capability to pinpoint all overweight body condition score (BCS) instances within a dataset with high precision (28). However, some limitations were noted, such as the misclassification of lameness scoring as BCS, an issue that the researchers believe could be addressed through refined prompt engineering (28). For daily clinical documentation in veterinary settings, veterinarians can input signalment, clinical history, and physical examination findings into ChatGPT to generate Subjective-Objective-Assessment-Plan (SOAP) notes (46). An illustrative veterinary case presented in the Supplementary material involved the generation of a SOAP note for a canine albuterol toxicosis incident (49), where ChatGPT efficiently identified the diagnostic tests executed in the case report, demonstrating that it is a promising tool for streamlining veterinarians' workflow.

References available on request.
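As a closing illustration of the SOAP-note workflow described above, the clinician's inputs could be assembled into a single prompt for ChatGPT. This is a hypothetical sketch under assumed field names, not the article's actual prompt:

```python
def soap_prompt(signalment: str, history: str, exam_findings: str) -> str:
    """Build a prompt requesting a SOAP note (illustrative sketch only)."""
    return (
        "Draft a veterinary SOAP (Subjective, Objective, Assessment, Plan) "
        "note from the following information.\n"
        f"Signalment: {signalment}\n"
        f"Clinical history: {history}\n"
        f"Physical examination: {exam_findings}\n"
        "Label each of the four sections clearly."
    )


# Example loosely inspired by the canine albuterol toxicosis case type
# mentioned above; the clinical details are invented for illustration.
prompt = soap_prompt(
    signalment="3-year-old male neutered Border Collie",
    history="Punctured an albuterol inhaler approximately 2 hours ago",
    exam_findings="Tachycardia, muscle tremors, anxious demeanor",
)
print(prompt)
```

The veterinarian remains responsible for verifying the generated note against the record, in keeping with the oversight principles discussed earlier in this review.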