Please use this identifier to cite or link to this item: https://dspace.uzhnu.edu.ua/jspui/handle/lib/68102
Title: From Open-Ended to Multiple-Choice: Evaluating Diagnostic Performance and Consistency of ChatGPT, Google Gemini and Claude AI
Authors: Mykhalko, Yaroslav Omelianovych
Filak, Yaroslav Feliksovych
Dutkevych-Ivanska, Yuliia Vasylivna
Sabadosh, Mariana Volodymyrivna
Rubtsova, Yelizaveta Illivna
Keywords: artificial intelligence, large language model, diagnosis, performance
Date of publication: Oct 2024
Publisher: ALUNA Publishing
Bibliographic citation: From open-ended to multiple-choice: evaluating diagnostic performance and consistency of ChatGPT, Google Gemini and Claude AI / Y. O. Mykhalko, Y. F. Filak, Y. V. Dutkevych-Ivanska, M. V. Sabadosh, Y. I. Rubtsova // Wiadomości Lekarskie Medical Advances. – 2024. – Vol. 77(10). – P. 1852-1856.
Abstract:
Aim: To determine the performance and response repeatability of freely available LLMs in diagnosing diseases based on clinical case descriptions.
Materials and Methods: 100 detailed clinical case descriptions were used to evaluate the diagnostic performance of the ChatGPT 3.5, ChatGPT 4o, Google Gemini, and Claude AI 3.5 Sonnet large language models (LLMs). The analysis was conducted in two phases: Phase 1 with only the case descriptions, and Phase 2 with the descriptions and answer variants. Each phase used specific prompts and was repeated twice to assess agreement. Response consistency was determined using the agreement percentage and Cohen's kappa (κ). 95% confidence intervals for proportions were calculated using Wilson's method. Statistical significance was set at p < 0.05 using Fisher's exact test.
Results: In Phase 1 of the study, the efficacy of ChatGPT 3.5, ChatGPT 4o, Google Gemini, and Claude AI 3.5 Sonnet was 69.00%, 64.00%, 44.00%, and 72.00%, respectively. All models showed high consistency: agreement percentages ranged from 93.00% to 97.00%, and κ ranged from 0.86 to 0.94. In Phase 2, the performance of all models increased significantly (90.00%, 95.00%, 65.00%, and 89.00% for ChatGPT 3.5, ChatGPT 4o, Google Gemini, and Claude AI 3.5 Sonnet, respectively). The agreement percentages ranged from 97.00% to 99.00%, while κ values were between 0.85 and 0.93.
Conclusion: Claude AI 3.5 Sonnet and both ChatGPT models can be used effectively in the differential diagnosis process, while using these models for diagnosing from scratch should be done with caution. As Google Gemini's efficacy was low, its feasibility in real clinical practice is currently questionable.
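
The statistics named in the abstract (agreement percentage, Cohen's kappa, Wilson 95% confidence intervals, Fisher's exact test) can be reproduced with standard Python libraries. The sketch below is a minimal illustration under assumed inputs: run1 and run2 are hypothetical per-case correctness flags for two repeated runs of one model, and correct_phase2 is a hypothetical count of correct diagnoses; none of these values are taken from the paper.

    # Minimal sketch of the reported statistics (hypothetical inputs, not the study data).
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.proportion import proportion_confint
    from scipy.stats import fisher_exact

    # Hypothetical per-case results (1 = correct diagnosis, 0 = incorrect) for two repeated runs.
    rng = np.random.default_rng(0)
    run1 = rng.integers(0, 2, size=100)
    run2 = run1.copy()
    run2[:5] = 1 - run2[:5]  # pretend 5 of 100 cases changed between runs

    # Consistency between the two runs: raw agreement percentage and Cohen's kappa.
    agreement = np.mean(run1 == run2) * 100
    kappa = cohen_kappa_score(run1, run2)

    # 95% Wilson confidence interval for the proportion of correct diagnoses in a run.
    correct_phase1, n_cases = int(run1.sum()), 100
    ci_low, ci_high = proportion_confint(correct_phase1, n_cases, alpha=0.05, method="wilson")

    # Fisher's exact test comparing Phase 1 vs. Phase 2 accuracy for the same model.
    correct_phase2 = 90  # hypothetical Phase 2 count of correct diagnoses
    table = [[correct_phase1, n_cases - correct_phase1],
             [correct_phase2, n_cases - correct_phase2]]
    odds_ratio, p_value = fisher_exact(table)

    print(f"agreement = {agreement:.2f}%, kappa = {kappa:.2f}")
    print(f"Wilson 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")
    print(f"Fisher's exact test: p = {p_value:.4f} (significant if p < 0.05)")

Wilson's method is preferred over the normal-approximation interval for proportions near 0% or 100%, which is why it suits accuracy estimates bounded by a fixed number of cases.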
Type: Text
Publication type: Article
URI: https://dspace.uzhnu.edu.ua/jspui/handle/lib/68102
ISSN: 0043-5147
Appears in collections: Scientific publications of the Department of Therapy and Family Medicine

Files in this item:
File: article-wiadomosci-2024.pdf | Size: 3.46 MB | Format: Adobe PDF

