AI Policy

General aspects:

AG Editor acknowledges the increasing impact of artificial intelligence (AI) technologies, including large language models (LLMs) and generative tools, on research and publishing. This policy is based on recommendations from ICMJE, WAME, COPE, and follows the guidelines of major international publishers such as Elsevier, Nature, IEEE, and PLOS. Our aim is to promote ethical, transparent, and responsible use of AI at all stages of the editorial process.

The journal does not prohibit the use of large language models (LLMs), such as ChatGPT, and aligns with the recommendations proposed by WAME on this issue (WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications). Note that LLMs do not meet the authorship criteria proposed by the ICMJE; if an LLM tool is used, its use must be declared in the Methods section. The use of LLMs does not exempt authors from responsibility for the accuracy of the content.

For authors:

  • AI tools cannot be listed as authors under any circumstances.
  • If AI was used to draft text, analyze data, or generate tables/figures, this must be explicitly disclosed in the “Methods” or “Acknowledgments” section, including the tool’s name, version, date of use, and prompts employed.
  • Authors are fully responsible for the accuracy, originality, and proper attribution of any content created or assisted by AI.
  • The use of AI to fabricate results or references is strictly prohibited and may be considered scientific misconduct. In such cases, the journal will apply its guidelines for the retraction of papers.
  • Misuse of AI may lead to rejection, retraction, or editorial sanction.

For reviewers:

  • Reviewers must not input manuscript content into public AI tools, such as ChatGPT, that may retain user data, as this breaches confidentiality.
  • If AI tools are used to assist in writing the review report, this must be disclosed to the editors, and no confidential manuscript content may be shared with such tools.
  • Reviewers are accountable for any AI-generated content they include in their reports, including its accuracy and appropriateness.

For editors:

  • Editors must clearly inform authors and reviewers about this policy.
  • Any AI use by the editorial team for correspondence, decision-making, or other purposes must be transparent, documented, and must not compromise manuscript integrity.
  • The editorial office must have access to appropriate tools to detect AI-generated content and potential cases of plagiarism or manipulation.
  • Editorial confidentiality and integrity must be preserved at all times.

Final remarks:

This journal explicitly aligns with Elsevier’s guidance on the ethical, transparent, and responsible use of AI technologies in scientific publishing.

These guidelines can be reviewed here: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

In case of disputes or uncertainty regarding AI use, this editorial office will apply the standards established by the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME).
For further reference: