Angiología

Online ISSN 1695-2987. Print ISSN 0003-3170.

Angiología vol. 75, no. 5, Madrid, Sep./Oct. 2023. Epub Dec 11, 2023.

https://dx.doi.org/10.20960/angiologia.00512 

Editorial

Artificial intelligence in medical writing and the authorship of scientific papers

Rafael Fernández-Samos-Gutiérrez2 

2Angiology, Vascular and Endovascular Surgery Unit. Complejo Asistencial Universitario de León. León, Spain

ChatGPT (Chat Generative Pre-Trained Transformer) is an open-access artificial intelligence (AI) chatbot developed by OpenAI (San Francisco, CA, United States) and released on November 30, 2022 (1).

ChatGPT is a model with over 175 billion parameters trained to perform language-related tasks, from translation to text generation. ChatGPT improves and grows through supervised learning and reinforcement learning.

The most remarkable aspect of ChatGPT is its ability to provide accurate and complete responses and to express itself naturally and with very precise information, making it difficult to distinguish whether a text has been generated by an AI system. Because these programs use advanced techniques and natural language learning processes, they are set to revolutionize the entire editorial process (2).

In a very short period of time, medical journal editors and investigators have had to consider the role AI systems will play in the writing of scientific literature and whether it is appropriate to cite these systems in the authorship section of publications (3,4), because there is a real threat of a deluge of machine-generated fake articles that could drag the scientific process into a “sea of garbage” (5).

Journals have reason to be concerned (6), because the very existence of the peer-review process, a fundamental mechanism governing how we do science, may be compromised. The ease of producing attractive yet unsubstantial articles shows how thin the barrier between actual science and absurdity really is. It is urgent to determine whether convincing scientific work can be written through AI systems.

For the time being, AI cannot come up with new ideas, but it can organize and develop those provided to it, serving as a starting point for “human-like” texts that, in the not-so-distant future, could potentially replace knowledge, creativity, and scientific thinking (7). AI can write drafts and abstracts, translate, collect and analyze data, run bibliography searches, trim texts to required lengths, and reformat or rewrite language to make it more understandable. It also offers quick and easy suggestions on manuscript structure, ultimately speeding the completion of the work (8). Such a tool could also bridge the language gap by facilitating the publication of research conducted and written in other languages.

Works written by ChatGPT can be “scientific enough” to deceive, and articles co-authored by AI are already making their way into the scientific literature of our time. Yet an AI system cannot be an author, and violating that principle could constitute scientific fraud comparable to image manipulation or plagiarism of existing works, with ethical boundaries yet to be determined (9).

Until now, the process of writing a scientific paper has required the guidance and supervision of expert “human” researchers in the field, ensuring the accuracy, coherence, and credibility of the content before submission for publication. Although chatbots can help, they still need to be “fed” by researchers; if the input is incorrect, they will generate erroneous results. For this reason, neither chatbots nor other types of AI can replace, for the time being, the expertise, judgment, personality, and responsibility of a researcher.

How can one recognize whether a text has been generated by AI? These texts often lack nuance, style, and originality. AI detectors and expert reviewers are also available. Unfortunately, though, many similar flaws can be found in texts written by “humans” (“copy-paste” from previous works, translation errors in texts written in languages other than the author's native language), leading plagiarism detection programs to make mistakes (10). For this reason, to protect themselves, publishers should add AI detectors to the editorial process.

Looking ahead, AI could be trained to automatically extract and “understand” all relevant information from electronic health records and patient data (vital signs, lab test results, medical histories, etc.) to assist professionals in decision-making or to draft discharge reports (11). Electronic health records have already been implemented in all hospitals, and health care systems tend toward automation, especially in documentation. Chatbots can also be used in other aspects of health care, minimizing the chances of error, for instance in emergency triage areas where rapid action is needed, both in person and remotely (12).

Is it appropriate to include ChatGPT in the authorship section of a manuscript? This question, still unanswered, could have unpredictable consequences. The International Committee of Medical Journal Editors (13) recommends evaluating authorship based on four different criteria. To be listed as an author, one must have:

  1. Contributed substantially to the idea or design of the manuscript or the acquisition, analysis, and interpretation of data.

  2. Drafted the manuscript or critically revised it by adding remarkable intellectual content.

  3. Approved the final version to be published.

  4. Agreed to be accountable for all aspects of the manuscript, while making sure that issues related to the accuracy or integrity of any part of the manuscript are investigated and resolved appropriately.

For these reasons, sections of articles created with AI should be appropriately specified, and the methodology used to generate them should be explained in the article itself, including the name and version of the software used, in the best interest of transparency.

Presenting works entirely generated by AI is strongly ill-advised, especially systematic literature reviews, among other reasons because of the system's immaturity and its tendency to perpetuate the statistical and selection biases present in its creators' system instructions, unless the study in question specifically aims to evaluate the reliability of such systems (an objective that should obviously be stated explicitly in the work) (14). Generating images and using them in scientific articles is also ill-advised, because it goes against the ethics standards of scientific publications, unless these images are themselves the topic of the research.

The process of information verification is what gives peer-reviewed journals their value. However, since doing this verification properly takes a great deal of work, the quality of peer review almost inevitably suffers.

There is an objective problem of content overproduction in science, making it nearly impossible for experts to keep up with advancements made in their own disciplinary field. It is hard to understand why the scientific community should facilitate or promote AI systems that increase the speed and number of articles published, when the best approach would be to publish higher-quality scientific manuscripts with greater statistical significance. Perfecting these tools could transform the ability to write a scientific paper from a prerequisite to an ancillary skill.

What is our responsibility in all this? This technology is only at the starting line of what it can contribute to medical research and patient care. Medicine must consider the use of chatbots and AI, make sure that they are suitable for this purpose, adhere to the standards of our specialties, and recognize the errors that can occur with their use (15).

There are many ethical questions that the scientific community will still have to reflect upon, because AI will only continue to improve over time: technology is here to stay, so let us all learn how to live with it.

PS: This article was not drafted using ChatGPT.

REFERENCES

1. OpenAI. Available at: https://openai.com/blog/chatGPT [accessed March 14, 2023].

2. King MR. The future of AI in medicine: a perspective from a Chatbot. Ann Biomed Eng 2022;51:291-5. DOI: 10.1007/s10439-022-03121-w

3. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379(6630):312-3. DOI: 10.1126/science.adg7879

4. Stokel-Walker C. ChatGPT listed as author on research papers. Nature 2023;613(7945):620-1. DOI: 10.1038/d41586-023-00107-z

5. Seife C. La inteligencia artificial evidencia una enfermedad en el proceso científico. Letras Libres, February 10, 2023 [accessed March 14, 2023]. Available at: https://letraslibres.com/ciencia-tecnologia/future-tense-inteligencia-artificial-chatgpt-revistas-cientificas-arbitraje

6. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health 2023;5(3):e102. DOI: 10.1016/S2589-7500(23)00023-7

7. Gordijn B, Have HT. ChatGPT: evolution or revolution? Med Health Care Philos 2023;26(1):1-2. DOI: 10.1007/s11019-023-10136-0

8. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care 2023;27(1):75. DOI: 10.1186/s13054-023-04380-2

9. Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature 2022. DOI: 10.1038/d41586-022-04397-7. Online ahead of print.

10. Gao CA, Howard FM, Markov NS, et al. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv preprint. 2022. DOI: 10.1101/2022.12.23.521610

11. Patel SB, Lam K. ChatGPT: the future of discharge summaries? Lancet Digit Health 2023;5(3):e107-8. DOI: 10.1016/S2589-7500(23)00021-3

12. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2(2):e0000198. DOI: 10.1101/2022.12.19.22283643

13. International Committee of Medical Journal Editors (ICMJE). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated May 2022 [accessed March 14, 2023]. Available at: https://www.icmje.org/icmje-recommendations.pdf

14. Ovadia D. ChatGPT como coautor de artículos científicos: ¿es posible? [accessed March 14, 2023]. Available at: https://www.univadis.es/viewarticle/chatgpt-como-coautor-de-art%25C3%25ADculos-cient%25C3%25ADficos-es-2023a10002b0

15. D'Amico RS, White TG, Shah HA, et al. I Asked a ChatGPT to Write an Editorial About How We Can Incorporate Chatbots into Neurosurgical Research and Patient Care... Neurosurgery 2023;92(4):663-4. DOI: 10.1227/neu.0000000000002414

This is an open-access article distributed under the terms of the Creative Commons Attribution License.