Please use this identifier to cite or link to this item:
https://dspace.uzhnu.edu.ua/jspui/handle/lib/76107
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Melnyk, Y. B. | - |
dc.date.accessioned | 2025-08-03T11:27:21Z | - |
dc.date.available | 2025-08-03T11:27:21Z | - |
dc.date.issued | 2025-06-30 | - |
dc.identifier.citation | Melnyk, Y. B. (2025). Should we expect ethics from artificial intelligence: The case of ChatGPT text generation. International Journal of Science Annals, 8(1), 5–11. https://doi.org/10.26697/ijsa.2025.1.5 | uk |
dc.identifier.other | https://doi.org/10.26697/ijsa.2025.1.5 | - |
dc.identifier.uri | https://culturehealth.org/ijsa_archive/ijsa.2025.1.5.pdf | - |
dc.identifier.uri | https://dspace.uzhnu.edu.ua/jspui/handle/lib/76107 | - |
dc.description | Melnyk, Y. B. (2025). Should we expect ethics from artificial intelligence: The case of ChatGPT text generation. International Journal of Science Annals, 8(1), 5–11. https://doi.org/10.26697/ijsa.2025.1.5 | uk |
dc.description.abstract | Background and Aim of Study: Implementing artificial intelligence (AI) in various areas of human activity is an avalanche-like process. This situation has raised questions about the feasibility and regulation of AI use that require justification, particularly in the context of scientific research. The aim of the study: to identify the extent to which AI-based chatbots can meet ethical standards when analysing academic publications, given their current level of development. Material and Methods: The present study employed various theoretical methods, including analysis, synthesis, comparison, and generalisation of experimental studies and published data, to evaluate ChatGPT’s capacity to adhere to fundamental ethical principles when analysing academic publications. Results: The present study characterised the possibilities of using AI for academic research and publication preparation. The paper analysed a case of text generation by ChatGPT and found that the information generated by the chatbot was falsified. This fact and other similar data described in publications indicate that ChatGPT has a policy to generate information on request at any cost. This completely disregards the reliability of such information, the copyright of its owners and the basic ethical standards for analysing academic publications established within the scientific community. Conclusions: It is becoming increasingly clear that AI and the various tools based on it will evolve rapidly and have qualities more and more similar to human intelligence. We believe the main danger lies in losing control of this AI development process. The rapid development of negative qualities in AI, such as selfishness, deceitfulness and aggressiveness, which were previously thought to be unique to humans, may in the future generate in AI the idea of achieving superiority over humans. In this context, lying and violating ethical standards when analysing academic publications seem like innocent, childish pranks at the early stages of AI development. The results are important in drawing the attention of developers, scientists, and the general public to the problems of AI and developing specific standards, norms, and rules for its use in various fields | uk |
dc.language.iso | en | uk |
dc.publisher | International Journal of Science Annals | uk |
dc.relation.ispartofseries | 8;1 | - |
dc.subject | ethical standards | uk |
dc.subject | artificial intelligence | uk |
dc.subject | AI-based chatbots | uk |
dc.subject | ChatGPT | uk |
dc.subject | machine learning systems | uk |
dc.subject | falsification of research and publications | uk |
dc.title | Should We Expect Ethics from Artificial Intelligence: The Case of ChatGPT Text Generation | uk |
dc.type | Text | uk |
dc.pubType | Article | uk |
Appears in collections: | Scientific publications of the Department of Psychology
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
ijsa.2025.1.5.pdf | | 2.53 MB | Adobe PDF | View/Open |
All items in the digital repository are protected by copyright; all rights reserved.