Policy on the Use of Artificial Intelligence (AI)

The journal acknowledges the growing integration of artificial intelligence (AI) and automated technologies in scholarly communication, including generative systems such as large language models (LLMs). While these tools can assist in various stages of manuscript preparation and editorial processes, their use must not undermine the principles of research integrity, transparency, or confidentiality.

This policy defines the appropriate and responsible use of AI and automated tools by authors, reviewers, editors, and the journal.

Use of AI by Authors

Authors may use AI-based tools for tasks such as language editing, grammar correction, formatting, and improving clarity. If generative AI is used beyond such basic linguistic assistance, however, authors must disclose this use within the manuscript, specifying the name of the tool and the nature of its application.

Authors retain full responsibility for the content of their submissions, including the accuracy, originality, and reliability of any material generated or assisted by AI tools. AI systems cannot be credited as authors: they cannot be held accountable for the work, approve the final version, or otherwise meet the criteria for authorship in academic publishing. Furthermore, outputs generated by AI tools should not be cited as primary scholarly sources.

Use of AI by Peer Reviewers

Peer reviewers are obligated to preserve the confidentiality of all submitted manuscripts. The use of generative AI tools to process, analyze, or generate review reports based on manuscript content is strictly prohibited. Uploading or sharing manuscript materials with such systems poses risks to confidentiality and may lead to unreliable or biased evaluations.

The use of AI to draft or refine the language of a review may be acceptable only if no confidential content is shared and the use is transparently communicated to the journal. Reviewers remain responsible for ensuring that their assessments reflect their own independent, critical judgment.

Use of AI by Editors

Editorial decisions must be grounded in expert judgment and the peer review process. Editors should not rely on generative AI systems to evaluate the scholarly merit of submissions or to make decisions regarding acceptance, revision, or rejection.

AI tools may be used to assist with technical and administrative functions, such as plagiarism detection, similarity checks, workflow organization, and identifying potential reviewers. However, all such tools must be evaluated in advance and used with caution to ensure their reliability and appropriateness.

Use of AI Tools by the Journal

The journal may implement automated systems to support routine editorial operations, including manuscript screening, similarity analysis, and identification of potential ethical concerns. These processes are always supervised by human editors or editorial staff.

Outputs generated by such tools are subject to verification before any editorial action is taken. Final decisions regarding manuscripts and ethical matters remain solely the responsibility of human decision-makers.

Oversight and Responsibility

All uses of AI and automated technologies must remain under human supervision. Authors, reviewers, and editors are responsible for ensuring that these tools are applied ethically, transparently, and in compliance with the journal’s publication standards.

Non-compliance with this policy may result in appropriate actions, including rejection of submissions, publication of corrections, retraction of articles, or other measures in accordance with the journal’s ethical guidelines.