Generative AI Policies

This policy is based on and refers to Elsevier's Generative AI Policies for Journals, which can be accessed at https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals.

To ensure transparency, integrity, and accountability in scholarly publishing, the following policies on the use of generative AI tools apply to authors, reviewers, and editors involved in the journal's publication process:

Responsible Use of Generative AI by Authors

Authors may use generative AI tools (e.g., ChatGPT, GPT models) for specific tasks, such as improving the grammar, language, and readability of their manuscripts. However:

  • Disclosure Requirement: If AI tools are used, this must be explicitly disclosed in the manuscript, either in the Methods section or Acknowledgments. Authors must specify the tool used, its version, and how it contributed to the work.
  • No Authorship Credit: AI tools cannot be listed as authors since they lack accountability, decision-making abilities, and the capacity to take responsibility for the published work.
  • Prohibited Uses: Generative AI must not be used for creating or falsifying data, conducting analyses, or interpreting results. Authors must maintain full responsibility for the accuracy, validity, and originality of their work.

Use of AI Tools by Reviewers and Editors

  • Confidentiality: Reviewers and editors must not use AI tools to process or evaluate unpublished manuscripts; this protects the confidentiality and intellectual property of authors.
  • Human Oversight: Editors may use AI tools for operational or administrative purposes, such as plagiarism detection or grammar checks, but decisions regarding acceptance or rejection of manuscripts must remain a human responsibility.

Limitations and Risks

  • Accuracy of AI Outputs: AI-generated content may contain inaccuracies or fabricated information. Authors must thoroughly validate and verify any AI-assisted outputs used in their manuscripts.
  • Plagiarism and Ethical Compliance: The use of AI must comply with existing plagiarism and ethical guidelines. Generative AI tools should not contribute to the replication or appropriation of other works without proper attribution.

Review and Enforcement

  • Editors and publishers reserve the right to request details about the use of generative AI during the manuscript preparation process.
  • Manuscripts found to contain undisclosed or inappropriate use of generative AI may be rejected, and appropriate ethical action will be taken.

By adhering to these policies, journals aim to uphold the integrity and quality of scholarly communication in an era of rapidly advancing AI technologies.