How to Report the Use of Artificial Intelligence in Scientific Articles: A Scoping Review and Taxonomy of Editorial Policies

Authors

DOI:

https://doi.org/10.37226/rcp.v10i1.17451

Keywords:

artificial intelligence, transparency, editorial policies, scientific integrity, disclosure

Abstract

Objective. This study maps and synthesizes, for the period 2023–2025, the policies and guidelines that require or recommend disclosure of artificial intelligence (AI) use in scientific manuscripts. It also introduces a practical taxonomy of disclosure elements, with bilingual (EN/ES) templates ready for use by authors and editors. Method. A scoping review was conducted in accordance with PRISMA-ScR and PRISMA-S standards. A multi-source search was performed across organizations such as ICMJE, COPE, and WAME; major publishers including AAAS/Science, Nature/Springer Nature, Elsevier, IEEE/ACM, Taylor & Francis, and Wiley; journals and portals such as PLOS; and sector-wide resources including STM and EQUATOR. The analysis covered materials published between January 1, 2023, and October 21, 2025. Official editorial policies, position statements, and public guidelines were included; individual opinions without institutional endorsement were excluded. Results. A core consensus was identified: AI cannot be listed as an author; its use must be transparently disclosed, with non-delegable human responsibility; and confidentiality prohibits uploading manuscripts or data to non-approved AI services, particularly during peer review. However, operational differences remain regarding where to place the disclosure, the required level of detail (such as the tool and version used, prompts, and verification), and the treatment of images or code, which face strict restrictions at several publishers. To harmonize these criteria, the AI Use Disclosure for Research Articles (AI-Use-12) is proposed as a standardized 12-item reporting framework. Conclusions. Journals are encouraged to adopt a formal “AI Use Disclosure” section and to design editorial workflows in which automated detectors of variable accuracy complement, rather than replace, human judgment. Comparative tables, a timeline, a checklist, and declaration templates aligned with current editorial policies are provided.
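To illustrate how a submission system might capture the kind of disclosure the abstract describes, here is a minimal sketch of a structured disclosure record. The field names (tool, version, purpose, prompts, human verification) are drawn from the elements mentioned in the abstract, but they are illustrative assumptions only, not the actual twelve AI-Use-12 items, which are defined in the full article.

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    """Illustrative AI-use disclosure record for a manuscript.

    Field names are assumptions for the sketch, not the AI-Use-12 items.
    """
    tool_name: str           # AI system used, e.g. a large language model
    tool_version: str        # version or model identifier
    purpose: str             # what the tool was used for in the manuscript
    prompts_available: bool  # whether prompts can be shared on request
    human_verified: bool     # non-delegable human responsibility for output

    def statement(self) -> str:
        """Render a short 'AI Use Disclosure' paragraph for the manuscript."""
        verified = ("verified by the authors" if self.human_verified
                    else "NOT verified by the authors")
        prompts = ("available on request" if self.prompts_available
                   else "not retained")
        return (f"The authors used {self.tool_name} (version {self.tool_version}) "
                f"for {self.purpose}. Prompts are {prompts}; all output was "
                f"{verified}, and the authors take full responsibility for the "
                f"content of the manuscript.")

# Hypothetical example values, for illustration only
disclosure = AIUseDisclosure(
    tool_name="ExampleLLM",
    tool_version="2025-06",
    purpose="language editing of the Methods section",
    prompts_available=True,
    human_verified=True,
)
print(disclosure.statement())
```

A record like this could feed both a formal "AI Use Disclosure" section in the published article and an editorial checklist during screening.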

Author Biography

Juan Aníbal González-Rivera, Ponce Health Sciences University, Ponce, Puerto Rico

Dr. Juan A. González Rivera (clinical psychologist) is a faculty member of the Clinical Psychology Program at Ponce Health Sciences University, San Juan University Center. He has taught courses in psychology, research methods, philosophy, humanities, and world religions. His university education began at the Universidad Central de Bayamón, where he earned a Bachelor of Arts, Summa Cum Laude, in Religious Studies and Philosophy. He was then accepted into the Graduate School of Theology at the same university, where he earned a Master of Arts in Theology and Biblical Studies. He was later admitted to the clinical psychology program at Universidad Carlos Albizu, San Juan Campus, where he earned a Master of Science in Clinical Psychology and a Doctorate in Clinical Psychology, both with Summa Distinction.

He has numerous publications, including articles, book chapters, and books; has received several national awards; and serves on the editorial boards of several journals. He was recently recognized as Researcher of the Year at the Annual Psychology Convention of Puerto Rico. His research interests include the psychology of religion and spirituality, couple relationships, technology and social media, contextual therapies, positive psychology and well-being, and the development of measurement instruments.

References

Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Chemaya, N., & Martin, D. (2024). Perceptions and detection of AI use in manuscript preparation for academic journals. PLOS ONE, 19(7), e0304807. https://doi.org/10.1371/journal.pone.0304807

Collins, G. S., Moons, K. G. M., Dhiman, P., Riley, R. D., Beam, A. L., Van Calster, B., Ghassemi, M., Liu, X., Reitsma, J. B., van Smeden, M., Boulesteix, A. L., Camaradou, J. C., Celi, L. A., Denaxas, S., Denniston, A. K., Glocker, B., Golub, R. M., Harvey, H., Heinze, G., Hoffman, M. M., … Logullo, P. (2024). TRIPOD+AI statement: Updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ, 385, e078378. https://doi.org/10.1136/bmj-2023-078378

Committee on Publication Ethics (COPE). (2023). Authorship and AI tools. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Elsevier. (2025a). Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

Elsevier. (2025b). The use of generative AI and AI assisted technologies in writing for Elsevier. https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier

Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023a). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637–639. https://doi.org/10.1001/jama.2023.1344

Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023b). Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA, 330(8), 702–703. https://doi.org/10.1001/jama.2023.12500

Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687

IEEE. (2024). Author guidelines for AI-generated content. https://open.ieee.org/author-guidelines-for-artificial-intelligence-ai-generated-text/

International Committee of Medical Journal Editors (ICMJE). (2023, May). Updated recommendations (May 2023). https://www.icmje.org/news-and-editorials/updated_recommendations_may2023.html

JBI. (2024). JBI Manual for Evidence Synthesis — Scoping reviews (Chapter 10). https://jbi-global-wiki.refined.site/space/MANUAL/355862497

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69. https://doi.org/10.1186/1748-5908-5-69

Liu, X., Cruz Rivera, S., Moher, D., Calvert, M. J., Denniston, A. K., & SPIRIT-AI and CONSORT-AI Working Group (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. The Lancet Digital Health, 2(10), e537–e548. https://doi.org/10.1016/S2589-7500(20)30218-1

Májovský, M., Černý, M., Kasal, M., Komarc, M., & Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. Journal of Medical Internet Research, 25, e46924. https://doi.org/10.2196/46924

McGowan, J., Sampson, M., Salzwedel, D. M., Cogo, E., Foerster, V., & Lefebvre, C. (2016). PRESS 2015 Guideline for Peer Review of Electronic Search Strategies. Journal of Clinical Epidemiology, 75, 40–46. https://doi.org/10.1016/j.jclinepi.2015.10.021

McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282. https://doi.org/10.11613/BM.2012.031

National Institutes of Health (NIH). (2023). NOT-OD-23-149: The use of generative AI is prohibited for the NIH peer review process. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html

National Science Foundation (NSF). (2023, December 14). Notice to the research community: Use of generative AI in the merit review process. https://www.nsf.gov/news/notice-to-the-research-community-on-ai

Nature Editorial. (2023, January 24). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613, 612. https://doi.org/10.1038/d41586-023-00191-1

OpenAI. (2023). New AI classifier for indicating AI-written text. https://openai.com/index/new-ai-classifier-for-indicating-ai-written-text/

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71

Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12(4), 241776. https://doi.org/10.1098/rsos.241776

PLOS. (2023–2024). Research integrity and ethical publishing. https://plos.org/research-integrity-and-ethics/

Rethlefsen, M. L., Kirtley, S., Waffenschmidt, S., Ayala, A. P., Moher, D., Page, M. J., Koffel, J. B., & PRISMA-S Group (2021). PRISMA-S: An extension to the PRISMA statement for reporting literature searches in systematic reviews. Systematic Reviews, 10(1), 39. https://doi.org/10.1186/s13643-020-01542-z

Rivera, S. C., Liu, X., Chan, A. W., Denniston, A. K., Calvert, M. J., & SPIRIT-AI and CONSORT-AI Working Group (2020). Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. BMJ, 370, m3210. https://doi.org/10.1136/bmj.m3210

Springer Nature. (2025). Editorial policies: AI (incl. generative images). https://www.springernature.com/gp/policies/editorial-policies

STM Association. (2023, December). Generative AI in scholarly communications: Ethical and practical guidelines. https://stm-assoc.org/document/stm-generative-ai-paper-2023/

Taylor & Francis. (2025). AI Policy; Images and figures. https://taylorandfrancis.com/our-policies/ai-policy/ ; https://authorservices.taylorandfrancis.com/editorial-policies/images-and-figures/

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., Lewin, S., … Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. https://doi.org/10.7326/M18-0850

Tyndall, J. (2010). AACODS checklist. Flinders University. https://researchnow.flinders.edu.au/en/publications/aacods-checklist

Wiley. (2025). Artificial intelligence in research publishing. https://editors.wiley.com/page/artificial-intelligence-in-research-publishing


Published

2026-02-20

How to Cite

González-Rivera, J. A. (2026). How to report the use of artificial intelligence in scientific articles: A scoping review and taxonomy of editorial policies. Revista Caribeña de Psicología, 10(1), e17451. https://doi.org/10.37226/rcp.v10i1.17451

Issue

Section

Review Articles
