
VET nuus•news
Oktober / October 2024
The Monthly Magazine of the SOUTH AFRICAN VETERINARY ASSOCIATION / Die Maandblad van die SUID-AFRIKAANSE VETERINÊRE VERENIGING
Canine C-reactive protein in the 21st Century: Where are we now and where are we going?
CPD THEME: Artificial Intelligence (AI)
Scan the QR code for this month's CPD article.

Dagboek • Diary

Ongoing / Online 2024
• SAVETCON: Webinars. Info: Corné Engelbrecht, SAVETCON, 071 587 2950, corne@savetcon.co.za / https://app.livestorm.co/svtsos
• Acupuncture – Certified Mixed Species Course. Info: Chi University: https://chiu.edu/courses/cva#about / southafrica@tcvm.com
• SAVA Johannesburg Branch CPD Events: monthly – please visit the website for more info. Venue: Johannesburg Country Club. Info: Vetlink – https://savaevents.co.za/

October 2024
• SAVA Northern Natal and Midlands Branch Congress, 05–06 October. Venue: Lythwood Lodge, Midlands. Info: www.vetlink.co.za
• PARSA Conference, 06–08 October. Venue: Villa Paradiso, Hartbeespoort, Gauteng. Info: corne@savetcon.co.za
• SAAVT 2024 Conference, 09–10 October. Venue: 26 Degrees South, Muldersdrift. Info: conference@savetcon.co.za or https://savetcon.co.za/2024-saavt-biennial-congress/
• 12th IAVRPT Symposium, 30 October – 02 November. Venue: Somerset West, Cape Town, South Africa. Info: https://iavrpt2024.co.za/ or conferences@vetlink.co.za

November 2024
• Poultry Group of SAVA Annual Congress, 06–08 November. Venue: 26 Degrees South, Muldersdrift, Gauteng. Info: conferences@savetcon.co.za / https://savetcon.co.za/poultry2024/

Vetnuus | October 2024

Contents | Inhoud

President: Dr Paul van der Merwe – president@sava.co.za
Managing Director: Mr Gert Steyn – md@sava.co.za / +27 (0)12 346 1150
Editor VetNews: Ms Andriette van der Merwe – vetnews@sava.co.za
Bookkeeper: Ms Susan Heine – accounts@sava.co.za / +27 (0)12 346 1150
Bookkeeper's Assistant: Ms Sonja Ludik – bookkeeper@sava.co.za / +27 (0)12 346 1150
Secretary: Ms Elize Nicholas – elize@sava.co.za / +27 (0)12 346 1150
Reception: Ms Hanlie Swart – reception@sava.co.za / +27 (0)12 346 1150
Marketing & Communications: Ms Sonja van Rooyen – marketing@sava.co.za / +27 (0)12 346 1150
Membership Enquiries: Ms Debbie Breeze – debbie@sava.co.za / +27 (0)12 346 1150
Vaccination booklets: Ms Debbie Breeze – debbie@sava.co.za / +27 (0)12 346 1150
South African Veterinary Foundation: Ms Debbie Breeze – savf@sava.co.za / +27 (0)12 346 1150
Community Veterinary Clinics: Ms Claudia Cloete – cvcmanager@sava.co.za / +27 (0)63 110 7559
SAVETCON: Ms Corné Engelbrecht – corne@savetcon.co.za / +27 (0)71 587 2950

VetNuus is 'n vertroulike publikasie van die SAVV en mag nie sonder spesifieke geskrewe toestemming vooraf in die openbaar aangehaal word nie. Die tydskrif word aan lede verskaf met die verstandhouding dat nóg die redaksie, nóg die SAVV of sy ampsdraers enige regsaanspreeklikheid aanvaar ten opsigte van enige stelling, feit, advertensie of aanbeveling in hierdie tydskrif vervat.

VetNews is a confidential publication for the members of the SAVA and may not be quoted in public or otherwise without prior specific written permission to do so. This magazine is sent to members with the understanding that neither the editorial board nor the SAVA or its office bearers accept any liability whatsoever with regard to any statement, fact, advertisement or recommendation made in this magazine.
VetNews is published by the South African Veterinary Association.

Street address: 47 Gemsbok Avenue, Monument Park, Pretoria, 0181, South Africa
Postal address: P O Box 25033, Monument Park, Pretoria, 0105, South Africa
Telephone: +27 (0)12 346-1150
Fax (general): +27 (0)86 683 1839; Fax (accounts): +27 (0)86 509 2015
Web: www.sava.co.za
Change of address: please notify the SAVA by email (debbie@sava.co.za) or letter (SAVA, P O Box 25033, Monument Park, Pretoria, 0105, South Africa)
Classified advertisements (text to a maximum of 80 words): Sonja van Rooyen, assistant@sava.co.za, +27 (0)12 346 1150
Display advertisements: Sonja van Rooyen, assistant@sava.co.za, +27 (0)12 346 1150
Design and layout: Sonja van Rooyen
Printed by Business Print: +27 (0)12 843 7638

Diary / Dagboek: II Dagboek • Diary
Regulars / Gereeld: 2 From the President; 4 Editor's notes / Redakteursnotas
Articles / Artikels: 6 ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research; 12 Artificial intelligence feasibility in veterinary medicine: A systematic review; 18 Role of Veterinary Extension Advisory and Telehealth Services; 21 Veterinary Practice Hygiene: how well are we doing, could we do better, how easy would it be?; 24 Part 1: Biosecurity in Poultry
Association / Vereniging: 26 In Memoriam; 27 SAVA News; 28 CVC News; 36 Legal Mews
Events / Gebeure: 30 OP Village Traditions; 34 Old Res 100 Year Reunion
Vet's Health / Gesondheid: 41 Life Coaching
Technical / Tegnies: 38 Ophthalmology Column; 42 Royal Canin Column
Relax / Ontspan: 48 Life Plus 25
Marketplace / Markplein: 44 Marketplace
Jobs / Poste: 46 Jobs / Poste; 47 Classifieds / Snuffeladvertensies

Scan the QR code for easy access to this month's CPD article.

From the President

Dear members,

SAVA at a Crossroads

With the challenging veterinary landscape, SAVA is inundated with calls for support and action. All calls are evaluated concerning their impact on the profession. If the impact is only on an individual or a small group of individuals, it is not entertained. Yet the burden of action is growing by the day. In the past month, three events that led to hours and hours of engagement were the shortage of veterinarians, the challenges at OBP and the announcement of a new veterinary faculty at the University of the Free State. A short article in a not-so-well-known media outlet led to a plethora of calls for radio and even a television interview. Between the Managing Director and me, we handled more than fifteen requests for media engagements in one week.

SAVA is often challenged on what we do and whether we do enough for the profession. From SAVA's point of view, there is a lot more we can and would love to do, but constraints prevent us from doing so. The most significant constraint SAVA faces is financial resources. SAVA is a member organisation, and more than 80% of its income is generated from members' fees. Looking at members resigning from SAVA, the two main reasons are emigration and members who can no longer afford the fees. SAVA in itself can do nothing to stop veterinarians from emigrating. To make ends meet, the simplest option is to increase members' fees. Increasing the fees might alleviate the financial shortfall, but it will increase the number of those who cannot afford them. Where does the critical intersection lie between members paying a higher membership fee and those resigning because of increased fees? In my tenure as President, I made it my primary task to recruit new members and did a lot of travelling and interaction to do so, not without an additional financial burden on SAVA. Although SAVA is at a point, even in these trying times, where we have positive membership growth, the income so generated is still not on par with the expenses incurred. Retaining the recruited SAVA members will have a positive effect in the long run.

As it stands, however, SAVA is no longer financially viable. Something drastic will have to be done. Either SAVA must cut expenses to the bone, which will have a major impact on SAVA engagements, with detrimental effects on the veterinary profession, or it must increase its membership fees, recruit more members and look for other sources of income. Although SAVA is doing a lot of excellent work, we can no longer sit and hope for the best. SAVA is at a crossroads, and it is now every member's opportunity and duty to make the most important decision you will ever make for SAVA and the profession. Forget the past. Who is SAVA now, and what should the future SAVA be? It is time to rethink and re-engineer SAVA to serve its members and further the status and image of the veterinarian. We must take this decision consciously and with due care. We must make it powerfully. Any pearls of wisdom on how SAVA could be turned around, or support in the survival efforts, are most welcome.

Kind regards,
Paul van der Merwe


Editor's notes / Redakteursnotas

Few words scare the wits out of people like AI, machine learning, and ChatGPT. It is as if what we always saw in science-fiction movies has caught up with us, and it is borderline scary. When I studied Computer Science back in 1989 to 1991 (yes, pre-millennium we had computers too, and we learned about them and learned to work with them), we already learned about machine learning: firmware that could change or program itself according to input or circumstances. Little did we realise how powerful this would become.

Who has not tried to communicate with the ever-increasing bots? Almost every service website now resorts to so-called robots to handle our queries. It is very frustrating when you cannot seem to find the correct machine vocabulary to get an answer out of these machine brains. I have tried a few: with my bank – no success; with Eskom (I now have a close relationship with a little chatbot called Alfred) – no success. But then I had to get proof of my SARS number. Dreading this, I logged onto my eFiling and was given Lwazi as an option. I simply typed in "tax number required" and a form was dropped in my email box. I was blown out of the water. Success at last: Lwazi definitely had 'knowledge', as the Nguni translation indicates.

It would be interesting to draw the line between what can be done mechanically (because that is all these clever computers can do) and what can only be done by a human. Machines can only do what humans tell them to do, and machine learning is trial and error. Still, the answers to problems have to be provided by a human. Machines do not have a sixth sense. They cannot see the bigger picture; they can only test one scenario at a time, and if it succeeds they may 'remember' the problem and the correct solution. They may do problem-solving a little quicker than we can, but they cannot be creative.

In this issue, I have included some articles on artificial intelligence – something still eyed with some suspicion, but something I think can make life easier. The article on telemedicine during COVID-19 is a good example. In a way, we may always have practised a little telemedicine by advising over the phone. Do we have to reinvent the wheel every time we do something unfamiliar to us, or can we cut to the chase and get to the point quicker? Think of blood analysis machines that can spit out an answer that was previously produced by humans and took a lot longer.

ChatGPT is a bit of a contentious issue for me. I am still a 'cook from scratch' person and like to create my own content, whether it is baking or writing a note like this. I like to do research and let the creative juices flow. But I think there is a time and place for everything. My daughter is doing an MBA, and one of their main assessment criteria is plagiarism. They have tools (built by people) that will assess an assignment and give a score. Sadly, it has come to this, but it is so easy to find, copy and paste the works of others on a topic. Nearly everything exists in one or another electronic form, which enables us to search for specifics but also to copy it.

I hope you enjoy the jam-packed magazine. Look out for the more futuristic articles on AI and how it can assist the veterinary world.

Andriette
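The "trial and error, then remember" picture of machine learning described above can be shown with a toy sketch. Everything here (the secret rule, the learner, the guessing range) is invented purely for illustration and is nothing like a real ML system:

```python
# Toy illustration of "trial and error, then remember": the learner
# guesses, is told right or wrong, and stores the correct answer so the
# next encounter with the same problem is instant. Purely illustrative.

def secret_rule(x: int) -> int:
    """The answer the learner is trying to discover (unknown to it)."""
    return x * 2

memory: dict[int, int] = {}

def learn(x: int) -> int:
    if x in memory:                  # seen before: recall, no new trials
        return memory[x]
    for guess in range(100):         # otherwise: blind trial and error
        if guess == secret_rule(x):
            memory[x] = guess        # remember the working answer
            return guess
    raise ValueError("no guess succeeded")

print(learn(7), learn(7))  # → 14 14 (second call answered from memory)
```

The point of the sketch is the editor's: the machine only "knows" what it has stumbled onto and stored; the rule itself, and the judgement of what counts as success, still come from a human.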


ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research

Candice P. Chu
Department of Veterinary Pathobiology, College of Veterinary Medicine & Biomedical Sciences, Texas A&M University, College Station, TX, United States
Frontiers in Veterinary Science, June 2024. DOI: 10.3389/fvets.2024.1395934

ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide specific guidance and actionable examples of how generative AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize ChatGPT for exam preparation. ChatGPT can aid in academic writing tasks in research, but veterinary publishers have set specific requirements for authors to follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers tangible examples to guide responsible implementation. A table of key takeaways is provided to summarize this review. By highlighting potential benefits and limitations, this review equips veterinarians, educators, and researchers to harness the power of ChatGPT effectively.

Introduction

Artificial intelligence (AI) is a trending topic in veterinary medicine.
A recent survey on AI in veterinary medicine by Digitail and the American Animal Hospital Association, involving 3,968 veterinarians, veterinary technicians/assistants, and students, showed that 83.8% of respondents were familiar with AI and its applications in veterinary medicine, with 69.5% using AI tools daily or weekly (1). Yet 36.9% remain sceptical, citing concerns about the systems' reliability and accuracy (70.3%), data security and privacy (53.9%), and a lack of training (42.9%) (1). Current applications of AI in veterinary medicine cover a wide range of topics, such as dental radiography (2), colic detection (3), and mitosis detection in digital pathology (4). Machine learning (ML), a subset of AI, enables systems to learn from data without being explicitly programmed (5). Generative AI (genAI), in turn, is a field within ML specializing in creating new content. As a subset of genAI, large language models (LLMs) are known for their human-like text generation capabilities. Notable LLMs include ChatGPT (OpenAI) (6), which is utilized by Microsoft Copilot for Microsoft 365 (7), Llama 3 (Meta) (8), Gemini (Google) (9), and Claude 3 (Anthropic) (10). ChatGPT, initially powered by GPT-3.5, was made publicly accessible by OpenAI on November 30, 2022 (11). In less than a year, ChatGPT attracted approximately a hundred million weekly users (12), making it the most popular LLM for newcomers to this technology. Based on PubMed search results, the number of academic articles mentioning 'ChatGPT' in the title or abstract grew from 4 in 2022 to 2,062 in 2023, indicating growing interest in ChatGPT in the medical field (13). Therefore, this review will focus on ChatGPT as the main example of generative AI and discuss its applications in veterinary clinics, education, and research. GPT, or Generative Pre-trained Transformer, excels in generating new text, images, and other content formats rather than solely analyzing existing data.
It is pre-trained by exposure to vast datasets of text and code, enabling it to recognize patterns and generate human-like responses. It employs the transformer neural network architecture, which is particularly adept at processing language and enables coherent and contextually relevant outputs (14). The free version of ChatGPT can answer questions, provide explanations, generate creative content, offer advice, conduct research, engage in conversation, support technical tasks, aid with education, and create summaries. On February 1, 2023, OpenAI released ChatGPT Plus, a subscription-based model later powered by GPT-4, which has capabilities in text, image, and voice analysis and generation (15). OpenAI introduced GPT-4 Turbo with Vision on April 9, 2024 (16). This updated model is accessible to developers through the application programming interface (API). Its ability to take images and answer questions has sparked interest in radiology (17, 18), pathology (19), and cancer detection (20, 21). On May 13, 2024, OpenAI released GPT-4o to the public; the 'o' in its name stands for 'omni', reflecting the model's combined reading, listening, writing, and speaking abilities (22). Despite ChatGPT's widespread use, a comprehensive review of its applications in veterinary medicine is lacking. The breadth of ChatGPT in medicine covers a wide range of areas, from answering patient and professional inquiries and promoting patient engagement (23), to diagnosing complex clinical cases (24) and creating educational material (25). Searching 'ChatGPT AND veterinary'

in PubMed yielded 14 results up to May 2024. After examining the titles and abstracts of all articles, 5 articles were deemed relevant to the subject and were included in the review (26–30). In addition, an online search using the same combination of keywords identified commercial software that integrates ChatGPT to enhance virtual assistance, diagnostic accuracy, communication with pet owners, and optimization of workflows (31–37). While examples of ChatGPT applications are prevalent on social media and in various publications (38–40), the best way to understand its impact is through direct engagement. This article aims to discuss the applications of ChatGPT in veterinary medicine, provide practical implementations, and examine its limitations and ethical considerations. The following content will use 'ChatGPT' as a general term; when information on specific versions of ChatGPT is available, terms such as GPT-3.5 or GPT-4 will be used. Highlights of each section are listed in Table 1 for a quick summary of the review.

TABLE 1 Key takeaways of the review.

Introduction
• Of 3,968 veterinary professionals who participated in a survey, 83.8% of respondents were familiar with AI and its applications in veterinary medicine, with 69.5% using AI tools daily or weekly.
• Machine learning (ML) is a subset of artificial intelligence (AI) that enables systems to learn from data without being explicitly programmed.
• Generative AI, in turn, is a field within ML specializing in creating new content.
• Large language models (LLMs) have human-like text generation capabilities. Examples include ChatGPT (OpenAI), Llama 3 (Meta), Gemini (Google), Gemma (Google), and Claude 3 (Anthropic).
• GPT stands for Generative Pre-trained Transformer, indicating its characteristics of content generation, pre-training on text and code, and the use of a transformer neural network.
• Important milestones of ChatGPT's public release:
o November 30, 2022 – ChatGPT (GPT-3.5)
o February 1, 2023 – ChatGPT Plus (GPT-3.5)
o March 1, 2023 – ChatGPT (upgrade to GPT-3.5 Turbo)
o March 14, 2023 – ChatGPT Plus (upgrade to GPT-4)
o May 13, 2024 – GPT-4o

ChatGPT 101: prompts and prompt engineering
• Prompts act as conversation starters, consisting of instructions or queries that elicit responses from the AI.
• Prompt engineering is the practice of refining inputs to produce optimal outputs. Common strategies include providing relevant context, detailing the data structure, and specifying desired outcomes.
• Cognitive strategy prompts can direct ChatGPT's reasoning more effectively. See Supplementary material.

Using ChatGPT in clinical care
• In human medicine, ChatGPT can make triage decisions, mine text from clinical history, create SOAP notes, diagnose complex cases, and interpret image inputs such as blood work and ECGs.
• A prior publication in veterinary medicine demonstrated ChatGPT's ability in text-mining.
• Examples of applying ChatGPT in writing SOAP notes and interpreting ECG and blood work images are available in Supplementary material.

Using ChatGPT in Veterinary Education
• ChatGPT has the potential to assist medical exam takers, while performance in standardized exams may vary among different LLMs.
• GPTs are customized ChatGPTs that can serve as AI tutors for clients and veterinary students.
o CatGPT: https://chatgpt.com/g/g-NDDXC050T-catgpt
o VetClinPathGPT: https://chatgpt.com/g/g-rfB5cBZ6X-vetclinpathgpt

Using ChatGPT in academic writing
• Most journal publishers agree that ChatGPT cannot be listed as a co-author.
• Several veterinary journals request authors to declare the use of ChatGPT in the methods, acknowledgements, or designated sections of the manuscript. See Supplementary material.
• Reviewers can mistakenly classify human writing as AI-generated content, while ML tools built on specific language features can achieve 99% accuracy in identifying AI-authored texts.
• Official 'ChatGPT detectors' are still under development by OpenAI.

ChatGPT's limitations and ethical issues
• Most veterinary professionals are familiar with AI and its application in veterinary medicine, while some remain sceptical about its reliability and accuracy, data security and privacy, and a lack of training.

Hallucination and inaccuracy
• Hallucination, or artificial hallucination, refers to the generation of implausible but confident responses by ChatGPT, which poses a significant issue. See Supplementary material.
• Inaccuracy is not an uncommon finding when using ChatGPT. These unexpected errors can potentially harm patients.

Intellectual property, cybersecurity, and privacy
• ChatGPT is trained using undisclosed but purportedly accessible online data, and user-generated content is consistently gathered by OpenAI.
• When analyzing clinical data, uploading de-identified datasets is recommended.
• Alternatively, consider local installations of open-source, free-for-research-use LLMs, like Llama 3 or Gemma, for enhanced security.

U.S. FDA regulation
• Most FDA-approved AI- and ML-enabled human medical devices are in the field of radiology, followed by cardiovascular and neurology.
• The FDA has not set premarket requirements for AI tools in veterinary medicine.
• AI- and ML-enabled veterinary products include dictation and note-taking apps, management and communication software, and radiology services, which may or may not have scientific validation.
• The continual learning and updating of LLMs pose a special regulatory challenge for the FDA.

Practical learning resources
• Resources for learning about ChatGPT and generative AI are abundant, including OpenAI's documentation, online courses from Vanderbilt University via Coursera, Harvard University's tutorial for generative AI, and the University of Michigan's guides on using generative AI for scientific research. Links are provided in Supplementary material.
• Readers are encouraged to ask ChatGPT itself for learning resources: https://chat.openai.com

Moreover, recent research has investigated ChatGPT's proficiency in human clinical challenges. One study found that GPT-4 could accurately diagnose 57% of complex medical cases, a success rate that outperformed 72% of human readers of medical journals answering multiple-choice questions (24). Additionally, GPT-4's top diagnosis concurred with the final diagnosis in 39% of cases and included the final diagnosis within the top differential diagnoses in 64% of cases (50). In veterinary medicine, a notable case is a man on the social media platform X (previously known as Twitter) who reported that ChatGPT saved his dog's life by identifying immune-mediated hemolytic anaemia – a diagnosis his veterinarian had missed (51). Veterinarians should recognize that pet owners may consult ChatGPT or similar AI chatbots for advice because of their accessibility (26). While the proliferation of veterinary information online can enhance general knowledge among clients, it also risks spreading misinformation (52). Customizing ChatGPT could address these challenges (refer to 'Using ChatGPT in Veterinary Education' below). In a human medicine study, GPT-4 interpreted ECGs and outperformed other LLM tools, correctly interpreting 63% of ECG images (53). A similar study has yet to be conducted in veterinary medicine. A veterinary example is provided in the Supplementary material, showing that GPT-4 did not identify an atypical atrial flutter with intermittent Ashman phenomenon in a 9-year-old Pug, despite the addition of asterisks in the ECG to indicate the wide and tall aberrant QRS complexes (35). This example emphasizes that while ChatGPT is a powerful tool, it cannot replace specialized AI

algorithms approved by the Food and Drug Administration (FDA) for ECG interpretation (54, 55). Nevertheless, advances in veterinary-specific AI tools, such as a deep learning model for canine ECG classification, are on the horizon, with the potential to be available soon (56). With the updated image upload function, the capability of GPT-4 and GPT-4o extends to the interpretation of blood work images. The Supplementary material illustrates a veterinary example of GPT-4 and GPT-4o analyzing a Case of the Month on eClinPath (57) and providing the correct top differential despite their limited ability to interpret the white blood cell dot plot.

Using ChatGPT in Veterinary Education

Recent studies leveraging Large Language Models (LLMs) in medical examinations underscore their utility in educational support. In human medical education, GPT-3's performance, evaluated using 350 questions from the United States Medical Licensing Exam (USMLE) Steps 1, 2CK, and 3, was commendable, achieving scores near or at the passing threshold across all three levels without specialized training (58). This evaluation involved modifying the exam questions into various formats – open-ended or multiple-choice, with or without a forced justification – to gauge ChatGPT's foundational medical knowledge. The AI-generated responses often included key insights, suggesting that ChatGPT's output could benefit medical students preparing for the USMLE (58). Another investigation in human medical education benchmarked the efficacy of GPT-4, Claude 2, and various open-source LLMs using multiple-choice questions from the Nephrology Self-Assessment Program. Success rates varied widely, with open-source LLMs scoring between 17.1–30.6%, Claude 2 at 54.4%, and GPT-4 leading at 73.7% (59). A comparative analysis of GPT-3.5 and GPT-4 indicates the newer version improved substantially on the neonatal-perinatal medicine board examination (60).
In the veterinary education context, researchers at the University of Georgia used GPT-3.5 and GPT-4 to answer 495 faculty-generated multiple-choice and true/false questions from 15 courses in the third-year veterinary curriculum (27). The result concurred with the previous study in that GPT-4 (77% correct) performed substantially better than GPT-3.5 (55% correct); however, both performed significantly worse than veterinary students (86%). These studies highlight the variances in LLM knowledge bases, which could affect the quality of medical and veterinary education. Beyond exam preparation, ChatGPT Plus subscribers can create customized versions of ChatGPT, referred to as GPTs (41), that are freely accessible to other users (61). Veterinarians, for instance, can harness these tools to develop AI tutors to educate clients and boost veterinary students' learning. For client education, the Cornell Feline Health Center recently launched 'CatGPT', a customized ChatGPT that draws information from its website and peer-reviewed scientific publications to answer owners' inquiries (62). An example of a custom GPT is a specialized veterinary clinical pathology virtual tutor named VetClinPathGPT (63). This custom GPT draws from legally available open-access textbooks with Creative Commons licenses (64–66) and the eClinPath website (57), ensuring the information provided is sourced from credible references. Students are encouraged to pose any question pertinent to veterinary clinical pathology and can even request specific references or links to web pages. More information about this GPT is detailed in the Supplementary material.
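Custom GPTs like the tutors described above are configured mainly through written instructions and reference sources rather than code, and they echo the prompt-engineering strategies summarized in Table 1 (context, data structure, desired outcome). The shape of such a configuration can be sketched as a simple structure; the tutor name, instruction wording, and sources below are invented for illustration and are not the actual CatGPT or VetClinPathGPT setup:

```python
# Hypothetical sketch of the kind of configuration behind a custom GPT
# tutor: instructions that pin answers to credible sources. Illustrative
# only; not the real configuration of any published GPT.

def tutor_config(name: str, sources: list[str]) -> dict:
    """Build a custom-tutor configuration with source-grounded instructions."""
    source_list = "; ".join(sources)
    return {
        "name": name,
        "instructions": (
            "You are a veterinary clinical pathology tutor. "
            f"Answer only from these references: {source_list}. "
            "If the references do not cover a question, say so and "
            "suggest the student consult a clinician."
        ),
        "sources": sources,
    }

cfg = tutor_config(
    name="ExampleClinPathTutor",  # invented name
    sources=["eClinPath website", "open-access CC-licensed textbook"],
)
print(cfg["instructions"])
```

The design point mirrors the article's: grounding the tutor in a fixed, credible source list, and instructing it to admit gaps rather than improvise, is what separates a teaching aid from a hallucination risk.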

Using ChatGPT in academic writing

The incorporation of AI in academic writing, particularly in medical research, is a topic marked by considerably more controversy than those discussed in the previous sections. Ever since the development of GPT-3 in 2020, its text-generating ability has ignited debate within academia (67). Leveraging editing services enhances clarity and minimizes grammatical errors in scientific manuscripts, which can improve their acceptance rate (68). While acknowledgements often credit editorial assistance, the use of spell-checking software is rarely disclosed. Nowadays, AI-powered writing assistants have integrated advanced LLM capabilities to provide nuanced suggestions for tone and context (45), blurring the line between original and AI-generated content. Generative AI, like ChatGPT, extends its utility by proposing titles, structuring papers, crafting abstracts, and summarizing research, raising questions about AI's role in authorship as per the International Committee of Medical Journal Editors' guidelines (69) (Supplementary material). Notably, traditional scientific journals are cautious with AI, yet NEJM AI stands out for its advocacy of LLM use (70). However, these journals still refrain from recognizing ChatGPT as a co-author owing to accountability concerns over accuracy and ethical integrity (70–72). The academic community remains wary of ChatGPT's potential to overshadow faculty contributions (73). Several veterinary journals have updated their guidelines in response to the emergence of generative AI. Among the top 20 veterinary medicine journals as ranked by Google Scholar (74), 14 provide instructions on generative AI usage (Supplementary material). They unanimously advise against listing AI as a co-author, mandating disclosure of AI involvement in the Methods, Acknowledgments, or other designated sections.
These recommendations typically do not apply to basic grammar and editing tools (Supplementary material). AI could enhance writing efficiency and potentially alleviate disparities in productivity – a nuanced proposition suggesting that broader acceptance of AI in academia might benefit less skilful writers and foster a more inclusive scholarly community (40). The detectability of AI-generated content and the associated risk of erroneous academic judgments have become significant concerns. One misjudgment led an ecologist at Cornell University to face publication rejection after a reviewer falsely deemed her work "obviously ChatGPT" (75). However, a study revealed that reviewers could identify only 68% of ChatGPT-produced scientific abstracts, and they also mistakenly tagged 14% of original works as AI-generated (76). In a veterinary study, veterinary neurologists had only a 31–54% success rate in distinguishing AI-crafted abstracts from authentic works (30). To counteract this, a 'ChatGPT detector' has been suggested. An ML tool utilizing distinguishing features such as paragraph complexity, sentence-length variability, punctuation marks, and popular wordings achieved over 99% effectiveness in identifying AI-authored texts (77). A subsequent refined model can further distinguish human writing from GPT-3.5 and GPT-4 writing in chemistry journals with 99% accuracy (78). While these tools are not publicly accessible, OpenAI is developing a classifier to flag AI-generated text (79), emphasizing the importance of academic integrity and responsible AI use.

ChatGPT's limitations and ethical issues

Hallucination and inaccuracy

Hallucination, or artificial hallucination, refers to the generation of implausible but confident responses by ChatGPT, which poses a significant issue (80).
ChatGPT is known to create fabricated references with invalid PubMed IDs (81), a problem somewhat mitigated in GPT-4 (18% error rate) compared to GPT-3.5 (55% error rate) (82). The Supplementary material illustrates an example in which GPT-4 could have provided more accurate references, including PMIDs, underscoring its limitations for literature searches. In the medical field, accuracy is paramount, and ChatGPT's inaccuracy can have serious consequences for patients. A study evaluating GPT-3.5's performance in medical decision-making across 17 specialities found that the model largely generated accurate information but could be surprisingly wrong in multiple instances (83). Another study highlighted that while GPT-3.5 (Dec 15 version) can effectively simplify radiology reports for patients, it could also produce incorrect interpretations, potentially harming patients (84). With the deployment of GPT-4 and GPT-4o, the updated training data should bring improvement; even so, these inaccuracies underscore the necessity of using ChatGPT cautiously and in conjunction with professional medical advice.

Intellectual property, cybersecurity, and privacy

As an LLM, ChatGPT is trained on undisclosed but purportedly publicly accessible online data and is refined continually through user interactions during conversations (85). This raises concerns about copyright infringement and privacy violations, as evidenced by ongoing lawsuits against OpenAI for allegedly using private or public information without permission (86–88). According to the OpenAI website, user-generated content is consistently gathered and used to enhance the service and for research purposes (89). This implies that any identifiable patient information could be at risk. Robust cybersecurity measures are therefore necessary to protect patient privacy and ensure compliance with legal standards in medical settings (90).
When analyzing clinical data with an AI chatbot, uploading only de-identified datasets is suggested. Alternatively, local installations of open-source, free-for-research-use LLMs, such as Llama 3 or Gemma (Google), are recommended for enhanced security (91–94).

US FDA regulation

While the FDA has approved 882 AI- and ML-enabled human medical devices, primarily in radiology (76.1%), followed by cardiology (10.2%) and neurology (3.6%) (95), veterinary medicine lacks specific premarket requirements for AI tools. AI- and ML-enabled veterinary products currently range from dictation and note-taking apps (34, 35), management and communication software (36, 37), and radiology services (31–33) to personalized chemotherapy (96), to name a few. These products may or may not have scientific validation (97–104) and may be used by veterinarians without the client's consent or complete understanding.

Leading Article
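The de-identification step suggested above can be sketched in a few lines of code. This is a minimal illustration only: the case-number, phone, and e-mail patterns are hypothetical assumptions about a practice's record format, not a published standard, and such a script is no substitute for a formal privacy review.

```python
import re

# Hypothetical identifier formats; a real pipeline must be validated
# against the clinic's actual record conventions.
PATTERNS = {
    "CASE_ID": re.compile(r"\b[A-Z]{2}\d{6}\b"),          # e.g. internal case numbers
    "PHONE":   re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(note: str) -> str:
    """Replace obvious identifiers with labelled placeholders before
    the note leaves the practice."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note
```

Clinical content such as scores and findings passes through untouched; only the fields matching the identifier patterns are redacted.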

In veterinary medicine, the absence of regulatory oversight, especially in diagnostic imaging, calls for ethical and legal consideration to ensure patient safety in the United States and Canada (105, 106). LLM tools like ChatGPT pose specific regulatory challenges, such as patient data privacy, medical malpractice liability, and informed consent (107). Continuous monitoring and validation are key, as these models keep learning and updating after launch. As of today, the FDA has not authorized any medical device that uses generative AI or LLMs.

Practical learning resources

Resources for learning about ChatGPT and generative AI are abundant, including AI companies' documentation (108–110), online courses from Vanderbilt University and IBM on Coursera (41, 111), Harvard University's tutorial on generative AI (112), and the University of Michigan's guides on using generative AI for scientific research (113). These resources are invaluable for veterinarians seeking to navigate the evolving landscape of AI in their practice. Last but not least, readers are advised to engage ChatGPT with well-structured prompts, such as: 'I'm a veterinarian with no background in programming. I'm interested in learning how to use generative AI tools like ChatGPT. Can you recommend some resources for beginners?' (see Supplementary material).

The ongoing dialog

At the 2023 Responsible AI for Social and Ethical Healthcare (RAISE) Conference, held by the Department of Biomedical Informatics at Harvard Medical School, several principles for the judicious application of AI in human healthcare were highlighted (114). These principles can be effectively adapted to veterinary medicine. Integrating AI into veterinary practice should amplify the benefits to animal welfare, enhance clinical outcomes, broaden access to veterinary services, and enrich the patient and client experience.
AI should support rather than replace veterinarians, preserving the essential human touch in animal care. Transparent and ethical use of patient data is paramount, with opt-out mechanisms in data-collection processes and safeguards for client confidentiality. AI tools in the veterinary field ought to be envisioned as adjuncts to clinical expertise, with their role developing progressively under stringent oversight. The growing demand for direct consumer access to AI in veterinary medicine promises advancements but requires meticulous regulation to assure pet owners about data provenance and how the AI is applied. This review discussed the transformative potential of ChatGPT across clinical, educational, and research domains within veterinary medicine. Continuous dialog, awareness of limitations, and regulatory oversight are crucial to ensure generative AI augments clinical care, educational standards, and academic ethics rather than compromising them. The examples provided in the Supplementary material encourage innovative integration of AI tools into veterinary practice. By embracing responsible adoption, veterinary professionals can harness the full potential of ChatGPT to drive the next paradigm shift in veterinary medicine.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fvets.2024.1395934/full#supplementary-material

ChatGPT 101: prompts and prompt engineering

Understanding prompts is crucial before engaging with ChatGPT or other generative AI tools. Prompts act as conversation starters: instructions or queries that elicit responses from the AI. Effective prompts for ChatGPT integrate relevant details and context, enabling the model to deliver precise responses (28). Prompt engineering is the practice of refining inputs to produce optimal outputs.
For instance, researchers instructing ChatGPT to identify body condition scores from clinical records began their prompt by detailing the data structure and desired outcome: "Each row of the dataset is a different veterinary consultation. In the column 'Narrative' there is clinical text. Your task is to extract Body Condition Score (BCS) of the animal at the moment of the consultation if recorded. BCS can be presented on a 9-point scale, example BCS 6/9, or on a 5-point scale, example BCS 3.5/5. Your output should be presented in a short-text version ONLY, following the rules below: … (omitted)" (28). Writing effective prompts involves providing contextual details in a clear and specific way, along with a willingness to refine them as needed. Moreover, incorporating 'cognitive strategy prompts' can direct ChatGPT's reasoning more effectively (refer to the Supplementary material for details). For a comprehensive understanding of prompt engineering, readers are encouraged to consult specialized literature and open-access online courses dedicated to the subject (41–44). Proper prompt engineering is pivotal for shaping conversations and obtaining the intended results, as illustrated by various examples in this review (Figure 1).

Using ChatGPT in clinical care

ChatGPT has the potential to provide immediate assistance upon a client's arrival at the clinic. In human medicine, the pre-trained GPT-4 model is adept at processing chief complaints, vital signs, and medical histories entered by emergency medicine physicians, subsequently making triage decisions that align closely with established standards (45). Given that healthcare professionals in the United States spend approximately 35% of their time documenting patient information (46) and that note redundancy is on the rise (47), ChatGPT's ability to distill crucial information from extensive clinical histories and generate clinical documents is particularly valuable (48).
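On the output side of a text-mining task like the BCS extraction quoted earlier, the model's short-text answers still need validation before analysis. A minimal sketch follows, assuming only the two scales named in that prompt; the regex and function are our own illustration, not the study's code.

```python
import re

# Accepts "BCS 6/9" (9-point scale) or "BCS 3.5/5" (5-point scale),
# the two formats described in the quoted prompt.
BCS_PATTERN = re.compile(r"BCS\s*(\d(?:\.\d)?)\s*/\s*([59])\b")

def parse_bcs(model_output: str):
    """Return (score, scale) as floats, or None if no BCS is present."""
    m = BCS_PATTERN.search(model_output)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))
```

A check like this also guards against the misclassification problem the study reported, since a lameness score without the "BCS" label is simply not matched.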
In veterinary medicine, a study using GPT-3.5 Turbo for text mining demonstrated the AI's capability to pinpoint all overweight body condition score (BCS) instances within a dataset with high precision (28). However, some limitations were noted, such as the misclassification of lameness scores as BCS, an issue the researchers believe could be addressed through refined prompt engineering (28). For daily clinical documentation in veterinary settings, veterinarians can input the signalment, clinical history, and physical examination findings into ChatGPT to generate Subjective-Objective-Assessment-Plan (SOAP) notes (46). An illustrative veterinary case presented in the Supplementary material involved generating a SOAP note for a canine albuterol toxicosis incident (49), in which ChatGPT efficiently identified the diagnostic tests performed in the case report, demonstrating that ChatGPT is a promising tool for streamlining veterinarians' workflow.

References available on request.
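The documentation workflow described above amounts to assembling the clinician's inputs into one well-structured prompt. The sketch below is a hypothetical illustration of that idea, not the prompt used in the cited case, and any draft the model returns must be reviewed by the attending veterinarian.

```python
def build_soap_prompt(signalment: str, history: str, exam: str) -> str:
    """Assemble a single prompt asking a chat model to draft a SOAP note
    from the three inputs a clinician would normally provide."""
    return (
        "You are assisting a veterinarian. Using ONLY the information "
        "below, draft a SOAP (Subjective, Objective, Assessment, Plan) "
        "note. Flag any missing information rather than inventing it.\n\n"
        f"Signalment: {signalment}\n"
        f"History: {history}\n"
        f"Physical examination: {exam}\n"
    )
```

Asking the model to flag gaps rather than fill them is one simple prompt-engineering guard against the hallucination problem discussed earlier.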

Abstract

Background and Aim: In recent years, artificial intelligence (AI) has become increasingly necessary in the life sciences, particularly medicine and healthcare. This study aimed to systematically review the literature across multiple databases on the use of AI in veterinary medicine and to critically analyze its challenges. We aim to foster an understanding of its effects so that they can be approached and applied with professional awareness.

Materials and Methods: This study drew on multiple electronic databases containing information on applied AI in veterinary medicine, following the current PRISMA and Cochrane guidelines for systematic reviews. The electronic databases PubMed, Embase, Google Scholar, Cochrane Library, and Elsevier were thoroughly screened through March 22, 2023. The study design was carefully chosen to emphasize evidence quality and population heterogeneity.

Results: A total of 385 of the 883 citations initially obtained were thoroughly reviewed. AI addressed four main areas: first, diagnostic issues; second, education, animal production, and epidemiology; third, animal health and welfare, pathology, and microbiology; and last, all other categories. The quality assessment found that the included studies varied in their relative quality and risk of bias. However, the conclusions generated by some AI algorithms have drawn criticism.

Conclusion: The quality assessment noted areas where AI outperformed, but there was criticism of its performance as well. It is recommended that the use of AI in veterinary medicine be increased, but it should not take over the profession. The concept of ambient clinical intelligence is adaptive, sensitive, and responsive to the digital environment and may be attractive to veterinary professionals as a means of lowering the fear of automating veterinary medicine.
Future studies should focus on AI models with flexible data input, which can be expanded by clinicians/users to maximize their interaction with good algorithms and reduce any errors generated by the process.

Introduction

Medicine has always been, and will likely remain, a profession of averages, wherein the treatments offered correspond to the most effective plan for the average patient. Individual variation may negate this assumption, and as a result, false-positive and false-negative results can arise. The more the treatment process can be digitalized, the more precise outcomes can become. In recent years, artificial intelligence (AI) has become increasingly necessary in the life sciences, particularly in medicine and healthcare. Chang [1] reviewed the main areas of AI focus, which included imaging interpretation using deep machine learning (ML), which can support decision-making; digitalization, which can aid administrative support and natural language processing for communication; and education and training, which can be used for data mining, risk assessment, and prediction. Artificial intelligence has been widely adopted and applied in veterinary medicine to animal healthcare by maximizing predictive indicators and achieving greater accuracy in diagnosis. Machine learning interacts with imaging, pathology slides, and patients' electronic medical records to aid in reaching the correct diagnosis, prescribing appropriate therapy, and augmenting professionals' capabilities [2]. Several areas have attempted to improve diagnosis and disease control through the application of AI. Laboratory haematology analyzers and imaging machines include AI expert systems, and mathematical algorithms use raw input data to provide clinical interpretation [3]. At present, there are growing concerns regarding the comparison of clinicians to AI algorithms and the extent to which AI outcomes support an accurate clinical decision.
For instance, based on slide scanning, digital pathology is more accurate than humans evaluating high-resolution slides. However, veterinarians link these AI findings to the patient's clinical background before making further decisions. Artificial intelligence tools developed for this field have a diagnostic accuracy of up to 95% and are almost 100 times faster in providing results [4, 5]. In this study, we intend to qualitatively and quantitatively describe the current state of applied AI in veterinary medicine, elucidate future trends, and critically interpret the outcomes of the fields that have applied these high-tech methods. To the best of our knowledge, no systematic literature review on the use of AI in veterinary medicine has been published. Therefore, this study aimed to review and critically analyze the literature in different databases and offer a qualitative assessment of these findings with a descriptive analysis of applied AI in veterinary medicine.

Artificial intelligence feasibility in veterinary medicine: A systematic review

Fayssal Bouchemla1, Sergey Vladimirovich Akchurin2, Irina Vladimirovna Akchurina2, Georgiy Petrovitch Dyulger2, Evgenia Sergeevna Latynina2, and Anastasia Vladimirovna Grecheneva3

Corresponding author: Fayssal Bouchemla, e-mail: faysselj18@yahoo.com
Co-authors: ASV: sakchurin@rgau-msha.ru, AIV: sakchurin@rgau-msha.ru, GPD: dulger@rgau-msha.ru, LES: evgenialatynina@rgau-msha.ru, GAV: a.grecheneva@rgau-msha.ru

1. Department of Animal Disease, Veterinarian and Sanitarian Expertise, Faculty of Veterinary Medicine, Vavilov Saratov State University of Genetics, Biotechnology and Engineering, Saratov, Russia; 2. Department of Veterinary Medicine, Russian State Agrarian University – Moscow Agricultural Academy named after K.A. Timiryazev, 49, str. Timiryazevskaya, Moscow, 127550, Russia; 3. Department of Applied Informatics, Russian State Agrarian University – Moscow Agricultural Academy named after K.A. Timiryazev, 49, str. Timiryazevskaya, Moscow, 127550, Russia.

Materials and Methods

Ethical approval
Ethical Committee approval was not required because this was a systematic review.

Study period and location
The electronic databases PubMed, Embase, Google Scholar, and Scopus were thoroughly screened up to March 22, 2023, for the use of AI in veterinary medicine. The data were extracted at the Department of Veterinary Medicine, Russian State Agrarian University, Moscow.

Search strategy, selection criteria, and study selection
The search strategy was developed based on our previous studies and was modified based on the co-authors' views. Article screening was conducted according to the most up-to-date guidelines for systematic reviews and meta-analyses, as outlined in PRISMA [6] and Cochrane [7]. The keywords were terms relevant to animal species, veterinary medicine, and AI. All screenings were performed based on the publication title, or the abstract if the full text was unavailable. Identified citations were imported into an EndNote file. The process of identifying the reviewed articles is illustrated in the flow diagram (Figure 1). A population, intervention, control, and outcomes (PICO) note on external validity was attached to each citation. The study design was carefully chosen to provide quality evidence, as randomized trials without significant limitations provide high-quality, stronger evidence. No automatic filtration was applied in this search, and other references used for the search are listed in the appropriate part of the study. All aspects of AI, including ML, convolutional neural networks (CNN), and deep learning (DL), were accepted as part of the search results. A total of 79 relevant studies were retrieved using the search criteria and included in this study.
Extracted/included data
For a study to be included, it had to be an original research publication in a peer-reviewed journal, conference proceedings, or book accessible to the reviewers. There were no limitations regarding the country or language of origin of the study. The publication had to describe the use of AI in veterinary medicine. There were no restrictions on study design: randomized or non-randomized controlled trials and interventional, observational, or case studies were all included. Failure to comply with the required criteria resulted in the exclusion of the publication. In addition, any duplicated publications were excluded from the study. The category to which each study was assigned, as noted in Table 1, was discussed and agreed on by all co-authors. The authors developed a table containing 19 criteria specifically for extracting data from papers on the use of AI per individual speciality in the veterinary profession. It was agreed that a minimum of 30 reports was necessary to define a specific category; any criterion with fewer than 30 studies was classified as part of the diverse category. This threshold significantly decreased the number of different criteria and maximized the study's ability to focus on critical AI orientations. The reviews and references obtained through the search were equally and randomly distributed to all co-authors, who individually screened them based on the study design criteria. A total of 19 articles could have been attributed to more than one criterion; in these cases, all reviewers discussed their views until a consensus was reached.

Exclusion criteria
Publications using AI but not related to veterinary medicine were excluded.

Figure-1: Flowchart of the systematic review process.
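The duplicate-exclusion step described above can be sketched as a simple title-normalization pass over the citation list. This is our own illustration of the idea, not the authors' actual screening pipeline, and real deduplication would also compare DOIs and author lists.

```python
import re

def normalize_title(title: str) -> str:
    """Lower-case and collapse punctuation/whitespace so trivially
    different copies of the same record compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(citations: list[dict]) -> list[dict]:
    """Keep the first occurrence of each normalized title and drop
    later duplicates, as the exclusion criteria require."""
    seen, unique = set(), []
    for citation in citations:
        key = normalize_title(citation["title"])
        if key not in seen:
            seen.add(key)
            unique.append(citation)
    return unique
```

Normalizing before comparing is what catches the common case of the same paper imported twice from different databases with slightly different capitalization or punctuation.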
