GENERATIVE AI POLICY
This policy outlines Diwan: Jurnal Bahasa dan Sastra Arab's stance on the ethical and responsible use of Artificial Intelligence (AI) and AI-assisted technologies in the preparation of manuscripts submitted for publication. This policy aims to ensure transparency, accountability, and the integrity of the scientific record.
1. Key Definitions
a. Generative AI (GenAI) is an AI system that produces text, images, data, or other content from prompts. Includes: ChatGPT, Claude, Gemini, Copilot, Perplexity, Midjourney, DALL·E, and similar tools.
b. AI-Assistive Tools are software tools that improve or format existing human-generated content without creating new intellectual substance. Includes: Grammarly, DeepL, Mendeley, Zotero, iThenticate, Turnitin, spell-checkers.
c. AI Copy Editing is an AI-assisted improvement to language, grammar, spelling, punctuation, sentence structure, and stylistic clarity of text already written by the human author, without generating new intellectual content. (See Section 3.)
d. AI-Generated Content is text, images, figures, or data autonomously produced by a GenAI tool, regardless of subsequent human editing.
2. Authorship and Responsibility
a. AI cannot be an Author: AI tools and AI-assisted technologies (e.g., Large Language Models, Generative AI) do not meet the criteria for authorship as they cannot take responsibility for the content, integrity, or originality of the work. Therefore, AI tools or software cannot be listed as authors on any submitted manuscript.
b. Authors' Full Responsibility: Authors remain fully responsible and accountable for the entire content of their submitted manuscript, including any parts generated, edited, or enhanced by AI tools. This includes the accuracy, integrity, originality, and ethical soundness of the work. Authors must verify the factual correctness of any statements, citations, data, or figures generated by AI.
c. Human Oversight Required: The use of AI tools must be under direct human supervision. Authors must critically evaluate, edit, and revise any material generated by AI to ensure it aligns with scientific standards, accuracy, and ethical guidelines.
3. Classification of AI Use
Diwan adopts the following three-tier classification of AI use, with corresponding disclosure requirements and exemptions.
a. Tier 1: AI Copy Editing (No Disclosure Required)
Definition: Using an AI tool exclusively to improve the language, grammar, spelling, punctuation, sentence structure, and stylistic clarity of text that has already been written by the human author, without generating new intellectual content.
Exemption clause: Consistent with the policies of Springer Nature, Nature Portfolio, and Elsevier, the use of AI tools solely for copy editing purposes does not require disclosure in Diwan. This exemption recognizes that improving the linguistic expression of pre-existing human ideas is functionally equivalent to professional language editing and does not affect the intellectual integrity of the work.
Examples: Using Claude, ChatGPT, or Copilot to refine word choice, correct grammar, improve sentence flow, or adjust register in text that the author has already drafted. Using Grammarly, DeepL, or similar tools for any language-level improvement.
Boundary condition: If the AI introduces ideas, arguments, framings, or intellectual structures not present in the author's original draft — even in the process of 'editing' — the use has crossed into Tier 2, and disclosure is required.
b. Tier 2: Generative Use (Mandatory Disclosure)
Definition: Using a GenAI tool to generate, draft, restructure, paraphrase, or substantially contribute to any portion of the manuscript's intellectual content — text, arguments, summaries, interpretations, figures, or data.
Examples: Drafting a paragraph or section; generating a literature summary; producing an outline or argument structure subsequently incorporated into the manuscript; generating figures, tables, or visual content; using AI to suggest analytical frameworks or interpretive conclusions that appear in the paper.
Requirement: A clear and specific Disclosure Statement must be included in the manuscript. See Section 4 for placement and content requirements.
c. Tier 3: Prohibited Use (Strictly Forbidden)
The following uses are prohibited regardless of disclosure. Disclosure does not render a prohibited use permissible.
1) Generating fictitious data, fabricated research results, or non-existent references.
2) Using AI to formulate research hypotheses, design methodology, or draw original scholarly conclusions in place of the human author.
3) Misrepresenting methods, results, or conclusions using AI-generated content.
4) Producing plagiarized content — including unattributed reproduction or close paraphrase of others' work.
5) Submitting a manuscript that is substantially or entirely AI-generated.
4. Disclosure Requirements
Authors are required to disclose the use of AI and AI-assisted technologies in the preparation of their manuscript. This disclosure must be explicit, specific, and transparent.
a. What to Disclose: The disclosure should include:
1) The name of the AI tool(s) used: e.g., ChatGPT, Perplexity, GPT-4, Midjourney, etc.
2) The specific purpose(s) for which the AI tool was used: e.g., language refinement, grammar check, drafting of particular sections (specify which sections), brainstorming, data analysis assistance, image generation, etc.
3) The extent of AI involvement: A brief description of how the AI tool contributed to the manuscript.
b. Where to Disclose: This disclosure should typically be included in one of the following sections:
1) Acknowledgments section (preferred for general writing assistance).
2) Methods section (if AI was used for specific methodological steps, e.g., data analysis or coding assistance).
3) Figure or table captions (for AI-assisted visual content such as figures, tables, graphics, and diagrams).
4) A dedicated "Declaration of AI Use" statement just before the References section or in a footnote on the title page.
Example Disclosure Statement: "Portions of this manuscript were drafted/edited/enhanced using [Name of AI tool, e.g., ChatGPT-4 (OpenAI)]. The authors used this tool for [specific purpose, e.g., improving grammar and clarity/drafting an initial version of the Introduction section]. All content generated by the AI was thoroughly reviewed, edited, and validated by the authors, who take full responsibility for the final content." Or for image generation: "Figure X was generated with the assistance of [Name of AI tool, e.g., Midjourney v5]. The authors provided the prompts and edited the output to ensure accuracy and relevance."
5. Peer Reviewers
Reviewers may use Tier 1 AI-assistive tools to improve the language of their own review text, provided no manuscript content is shared with any external AI platform.
If a reviewer uses a GenAI tool for any purpose related to the review, including improving the language of their own commentary, this must be disclosed to the handling editor.
Reviewers are prohibited from using AI tools for the following tasks.
1) Uploading any portion of a confidential manuscript to any publicly accessible AI platform. This constitutes a serious breach of peer-review confidentiality.
2) Using AI to generate the substantive evaluation, critical assessment, or recommendation.
3) Any use that compromises the independence, confidentiality, or integrity of the review process.
6. Editorial Board
Editors may use Tier 1 tools for administrative correspondence, formatting, and non-decision-making tasks. No confidential manuscript content may be shared with public AI platforms.
Editors are prohibited from using AI to make or substantively influence editorial decisions, including desk rejection, revision recommendations, or acceptance/rejection. All editorial decisions must reflect the independent scholarly judgment of a qualified human editor.
7. Submission Declaration
All submitting authors must complete a Declaration of AI Use as a condition of submission. The declaration requires authors to confirm:
a. Whether any GenAI tool was used in the preparation of the manuscript (Tier 2).
b. If yes: the tool(s), section(s), and purpose(s) of use, and that a Disclosure Statement is included.
c. If no: that no GenAI tools beyond Tier 1 assistive tools or standard AI copy editing were used.
d. That all authors have read and agree to comply with this policy.
The editorial team conducts disclosure verification as part of initial screening. AI detection tools, when consulted, are treated as one indicator among several and are never used as the sole grounds for rejection. Authors who have concerns about the output of AI detection tools applied to their manuscript may submit a written explanation to the Editor-in-Chief.
8. Consequences of Violation
a. A first-time failure to disclose Tier 2 use is treated as a minor violation. Authors are required to submit a revised Disclosure Statement, and formal notification will be provided to all authors by the editor.
b. Repeated or substantial failure to disclose Tier 2 use will result in rejection of the manuscript, and the authors will be embargoed from further submissions to the journal.
c. Tier 3 prohibited uses (e.g., fabrication, entirely AI-generated manuscripts) will result in immediate rejection or withdrawal and a permanent ban on submission. The editor will notify the authors' institution and relevant ethics bodies.
d. Breach of reviewer confidentiality (e.g., uploading a manuscript to an AI platform) will result in the reviewer's removal from the journal's reviewer database. The editor will also notify the reviewer's institution if necessary.
e. An intentionally false Declaration of AI Use is considered research misconduct. The editor will initiate retraction and issue an institutional notification.
9. References of Generative AI Policy
Committee on Publication Ethics (COPE). (2024). COPE Position Statement: Authorship and AI Tools. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
STM Association. (2025). Recommendations for a Classification of AI Use in Academic Manuscript Preparation. https://stm-assoc.org
STM Association. (2024). Generative AI in Scholarly Communications: Ethical and Practical Guidelines. https://stm-assoc.org
Nature Portfolio / Springer Nature. (2025). Editorial Policies: Artificial Intelligence. https://www.nature.com/nature-portfolio/editorial-policies/ai
Elsevier. (2025). Generative AI Policies for Journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
Wiley. (2025). AI Guidelines for Researchers. https://www.wiley.com/en-us/publish/article/ai-guidelines/
ICMJE. (2024). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. https://www.icmje.org/recommendations/