Generative AI Policies

PrimaryEdu: Journal of Primary Education
(Aligned with Elsevier journal generative AI policies)

  1. Purpose and principles
    PrimaryEdu supports responsible use of generative AI and AI-assisted technologies to help authors work more efficiently, while safeguarding research integrity, transparency, and human accountability. AI tools must not replace human critical thinking, scholarly judgment, or author responsibility. Authors remain fully accountable for the accuracy, originality, and integrity of all content submitted.

  2. Scope
    This policy applies to all contributors: authors, reviewers, editors, and editorial staff.

  3. Definitions
    Generative AI / AI-assisted technologies: tools that can generate or transform content (e.g., text, code, images, tables, figures) and may include “AI agents” or “deep research” tools.

  4. Acceptable use by authors (assistance, not authorship)
    Authors may use generative AI or AI-assisted tools to support manuscript preparation only with human oversight, careful verification, and substantive author-led editing. Examples of acceptable support include:
    a. Improving language and readability (when authors fully review and edit the output).
    b. Helping organize content or generate ideas as a starting point, provided the manuscript ultimately reflects the authors’ own analysis, interpretation, and scholarly contribution.
    c. Code assistance (e.g., debugging), provided authors verify correctness, document what was done when relevant, and ensure reproducibility.

Important requirement: authors must carefully check AI outputs for factual accuracy, completeness, impartiality, and source validity (including verifying citations, because AI-generated references can be incorrect or fabricated).

  5. Uses that do not require disclosure
    Routine spelling and grammar checkers, and reference managers (e.g., Mendeley, Zotero, or similar tools), may be used without disclosure under this policy.

  6. Prohibited uses by authors
    a. Authorship attribution to AI: AI tools must not be listed as authors or co-authors.
    b. Direct use of AI-agent/deep-research output as manuscript text: authors must not paste AI output directly into the manuscript as the final scholarly text. AI may only inform drafting; the submitted manuscript must represent the authors’ original contribution and thinking.
    c. Fabrication or manipulation: authors must not use AI to invent, falsify, manipulate, or “hallucinate” data, results, quotations, or references. (Any such behavior will be treated as a serious breach of publication ethics.)

  7. Figures, images, and artwork
    a. Not permitted in submitted manuscripts: the use of generative AI or AI-assisted tools to create or alter images/figures in submitted manuscripts is not allowed (including enhancing, obscuring, moving, removing, or introducing features).
    b. Limited acceptable adjustments: brightness/contrast/color balance adjustments are acceptable only if they do not hide or remove information present in the original.
    c. Exception (methods-based use): the only exception is when AI-assisted image generation or interpretation is part of the research design or methods. In that case, authors must describe the use in the Methods section in a reproducible manner, including how the tool was used and sufficient details to identify the tool and model (e.g., name and version).
    d. Graphical abstracts: AI-generated artwork for graphical abstracts is not permitted.
    e. Cover art: AI-generated cover art may be considered only with prior permission from the journal and publisher, and with rights clearance and correct attribution.

  8. Mandatory disclosure (when disclosure is required)
    If generative AI or AI-assisted tools are used beyond routine spelling/grammar checking, authors must include a statement placed at the end of the manuscript, immediately above the References, with the section title:
    Declaration of Generative AI and AI-assisted technologies in the writing process

The statement must include:
a. The name of the tool/service used (and version/model where available).
b. The purpose of use (e.g., language polishing, restructuring, code support).
c. A clear confirmation that the authors reviewed/edited the output and take full responsibility for the published content.

Suggested statement template (PrimaryEdu version, aligned in meaning with the Elsevier template):
“During the preparation of this work, the authors used [TOOL/SERVICE] to [PURPOSE]. The authors reviewed and edited all AI-assisted output as necessary and accept full responsibility for the content of the publication.”

If no generative AI was used (beyond routine spelling/grammar checkers), authors may state:
“No generative AI or AI-assisted tools were used in the writing process of this manuscript.”

  9. Peer review: rules for reviewers
    a. Confidentiality: reviewers must treat manuscripts as confidential and must not upload any part of a submitted manuscript into any generative AI tool.
    b. No AI-assisted reviewing: reviewers must not use generative AI or AI-assisted technologies to assist in the scientific assessment or to draft peer review reports (including using AI merely to “polish” the review report). Peer review requires human critical judgment and accountability.

  10. Editorial process: rules for editors
    a. Confidentiality: editors must not upload submitted manuscripts or any part of them into generative AI tools, and this prohibition extends to editorial correspondence (including decision letters), even for language improvement.
    b. No AI for editorial decision-making: editors must not use generative AI to support manuscript evaluation or decision-making, because editorial judgments require human responsibility and can be distorted by incorrect, incomplete, or biased outputs.

  11. Compliance, checks, and handling concerns
    PrimaryEdu may perform checks (manual and/or tool-supported) to identify potential policy violations, including suspicious text patterns, inconsistent citations, or image irregularities. If undisclosed or irresponsible AI use is suspected, the journal may request clarification, require revision, reject the submission, or take post-publication actions consistent with publication-ethics guidance (e.g., practices aligned with COPE).

  12. Journal use of AI-assisted technologies
    PrimaryEdu may use in-house or licensed tools to support administrative screening (e.g., completeness and plagiarism checks) under appropriate confidentiality and data protection safeguards, aligned with responsible AI principles (e.g., the RELX Responsible AI Principles referenced by Elsevier).

  13. Policy review
    This policy will be reviewed periodically to reflect evolving standards and best practices in scholarly communication.