Gen AI Policy
GENERATIVE ARTIFICIAL INTELLIGENCE (AI) POLICIES FOR JOURNALS
These policies are derived from Elsevier's generative AI policies for journals. They were initially prompted by the rise of generative AI and AI-assisted technologies, which researchers were expected to use increasingly, and have since been updated to reflect evolving ethical practice. These policies aim to provide greater transparency and guidance to authors, reviewers, editors, readers, and contributors. Applied Informatics for Sustainability and Artificial Intelligence Journal (AISAI Journal) will continue to monitor developments in this area and will adjust or refine policies as appropriate.
FOR AUTHORS
The use of generative AI and AI-assisted technologies in manuscript preparation
Applied Informatics for Sustainability and Artificial Intelligence Journal (AISAI Journal) recognizes the potential of generative AI and AI-assisted technologies (“AI Tools”), when used responsibly, to help researchers work efficiently, gain critical insights fast, and achieve better outcomes. Increasingly, these tools, including AI agents and deep research tools, are helping researchers to synthesize complex literature, provide an overview of a field or research question, identify research gaps, generate ideas, and provide tailored support for tasks such as content organization and improving language and readability.
Authors preparing a manuscript for Applied Informatics for Sustainability and Artificial Intelligence Journal (AISAI Journal) can use AI tools to support them. However, these tools must never be used as a replacement for human critical thinking, expertise, and evaluation. AI tools should always be applied with human oversight and control.
Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:
- Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output, including checking sources, since AI-generated references can be incorrect or fabricated.
- Editing and adapting all material thoroughly to ensure the manuscript represents the author's authentic and original contribution and reflects their analysis, interpretation, insights, and ideas.
- Providing a disclosure statement upon submission to ensure that the use of any tools or sources, whether AI-based or not, is clear and transparent to readers.
- Ensuring the manuscript safeguards data privacy, intellectual property, and other rights by checking the terms and conditions of any AI tool that is used.
Responsible use of AI Tools
Authors must verify the terms and conditions of any AI tool they use to ensure that privacy and confidentiality of their data and inputs, including unpublished manuscripts, are maintained. Extra caution should be taken when handling personally identifiable data. Authors must not generate images that duplicate or refer to existing copyrighted images, real people, identifiable products or brands, or replicate an individual’s voice. All outputs should be checked for factual errors and potential bias.
Authors must also ensure that AI tools are not granted rights beyond what is necessary to provide the service, including rights to train on submitted materials. AI tools must not impose restrictions that could limit subsequent publication.
Disclosure
Authors must disclose their use of AI tools in manuscript preparation by including a separate AI declaration statement during submission, which will also appear in the published article. The declaration should specify:
- The name of the AI tool
- The purpose of its use
- The extent of human oversight
Basic grammar, spelling, and punctuation checks do not require disclosure. If AI tools are used in the research process itself, they must be fully described in the methods section.
Authorship
AI tools must not be listed as authors or co-authors, nor cited as authors. Authorship implies responsibilities and tasks performed solely by humans. Each author is accountable for the integrity and accuracy of the work, approval of the final version, and agreement to submission.
Authors must ensure:
- The work is original and not previously published.
- All listed authors qualify for authorship.
- The manuscript does not infringe third-party rights.
- The manuscript complies with the Applied Informatics for Sustainability and Artificial Intelligence Journal (AISAI Journal) publishing ethics policy.
The use of generative AI and AI-assisted tools in figures, images, and artwork
Applied Informatics for Sustainability and Artificial Intelligence Journal (AISAI Journal) does not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or introducing features in images or figures. Adjustments to brightness, contrast, or color balance are permitted only if the original data are not obscured.
The only exception applies when AI use is part of the research design or methodology (e.g., AI-assisted biomedical imaging). In such cases, detailed, reproducible descriptions must be provided in the methods section, including tool name, version, manufacturer, and usage explanation.
Use of generative AI for graphical abstracts or artwork production is not permitted. Limited exceptions for cover art may apply with prior approval and proper rights clearance.
FOR REVIEWERS
The use of generative AI and AI-assisted technologies in the peer review process
Manuscripts under review must be treated as confidential documents. Reviewers must not upload submitted manuscripts or parts thereof into generative AI tools, as this may violate confidentiality, proprietary rights, and data privacy regulations.
This confidentiality extends to peer review reports. Reviewers must not upload reports into AI tools, even for language improvement.
Peer review requires human judgment. Generative AI must not be used to assist in scientific evaluation, as critical thinking and independent assessment fall outside the scope of such tools. Reviewers are fully responsible for their review content.
FOR EDITORS
The use of generative AI and AI-assisted technologies in the editorial process
Submitted manuscripts must be treated as confidential. Editors must not upload manuscripts, decision letters, or related correspondence into generative AI tools.
Editorial decision-making requires human responsibility and accountability. AI tools must not be used to evaluate manuscripts or assist in editorial decisions, due to risks of bias, inaccuracy, and incomplete analysis.
Editors remain fully responsible for the editorial process, decisions made, and communication with authors.
