Please use this identifier to cite or link to this item: https://dspace.uzhnu.edu.ua/jspui/handle/lib/60392
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Mykhalko, Yaroslav [-]
dc.contributor.author: Kish, Pavlo [-]
dc.contributor.author: Rubtsova, Yelyzaveta [-]
dc.contributor.author: Kutsyn, Oleksandr [-]
dc.contributor.author: Koval, Valentyna [-]
dc.date.accessioned: 2024-03-26T19:19:26Z [-]
dc.date.available: 2024-03-26T19:19:26Z [-]
dc.date.issued: 2023 [-]
dc.identifier.citation: Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval. FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING. Wiadomości Lekarskie Medical Advances, Volume LXXVI, Issue 11, November 2023 [uk]
dc.identifier.uri: https://dspace.uzhnu.edu.ua/jspui/handle/lib/60392 [-]
dc.description.abstract: ABSTRACT. The aim: To evaluate the diagnostic capabilities of ChatGPT in the field of medical diagnosis. Materials and methods: We utilized 50 clinical cases, employing the Large Language Model ChatGPT-3.5. The experiment had three phases, each with a new chat setup. In the initial phase, ChatGPT received detailed clinical case descriptions, guided by a “Persona Pattern” prompt (see the sketch below). In the second phase, cases with diagnostic errors were addressed by providing potential diagnoses for ChatGPT to choose from. The final phase assessed artificial intelligence’s ability to mimic a medical practitioner’s diagnostic process, with prompts limiting initial information to symptoms and history. Results: In the initial phase, ChatGPT showed a 66.00% diagnostic accuracy, surpassing physicians by nearly 50%. Notably, in 11 cases requiring image interpretation, ChatGPT struggled initially but achieved a correct diagnosis in four without added interpretations. In the second phase, ChatGPT demonstrated a remarkable 70.59% diagnostic accuracy, while physicians averaged 41.47%. Furthermore, the overall accuracy of the Large Language Model across the first and second phases together was 90.00%. In the third phase, emulating real doctor decision-making, ChatGPT achieved a 46.00% success rate. Conclusions: Our research underscores ChatGPT’s strong potential in clinical medicine as a diagnostic tool, especially in structured scenarios. It emphasizes the need for supplementary data and the complexity of medical diagnosis. This contributes valuable insights to AI-driven clinical diagnostics, with a nod to the importance of prompt engineering techniques in ChatGPT’s interaction with doctors. [uk]
dc.language.iso: en [uk]
dc.publisher: ALUNA Publishing, ul. Przesmyckiego 29, 05-510 Konstancin-Jeziorna [uk]
dc.subject: artificial intelligence, large language models, ChatGPT, clinical decision support, diagnose [uk]
dc.title: FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING [uk]
dc.type: Text [uk]
dc.pubType: Article [uk]
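
The abstract above mentions a “Persona Pattern” prompt used to frame ChatGPT-3.5 as a clinician before each case, but this record does not include the study’s actual prompts, interface, or case texts. The sketch below is a minimal illustration only, assuming the OpenAI Python SDK, the gpt-3.5-turbo model as a stand-in for ChatGPT-3.5, and a hypothetical vignette; it shows the general shape of a persona-pattern diagnostic query, not the authors’ exact method.

    # Illustrative sketch only: the study's real prompts and case texts are not in this record.
    # Assumes the OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set in the environment,
    # and "gpt-3.5-turbo" as a stand-in for ChatGPT-3.5.
    from openai import OpenAI

    client = OpenAI()

    # "Persona Pattern": first assign the model the role of a clinician,
    # then supply the clinical case and ask for a diagnosis.
    persona = (
        "You are an experienced internal medicine physician. "
        "Given a clinical case, state the single most likely diagnosis "
        "and briefly justify it."
    )

    # Hypothetical vignette for illustration; not one of the 50 cases used in the study.
    case_description = (
        "A 58-year-old man presents with crushing substernal chest pain radiating "
        "to the left arm, diaphoresis, and nausea for the past 40 minutes."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": case_description},
        ],
    )

    print(response.choices[0].message.content)

Because each phase of the study reportedly started in a new chat, the API analogue is sending every case as an independent request with no shared message history, as above.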
Appears in Collections: Scientific publications of the Department of Therapy and Family Medicine

Files in This Item:
File: Scopus.pdf (8.96 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.