GENERATIVE ARTIFICIAL INTELLIGENCE DECLARATIONS

1. ARTIFICIAL INTELLIGENCE POLICY FOR EDITORS

1.1 Policy Overview and Editorial Responsibilities

The Journal of Governance and Integrity is committed to upholding the highest standards of publication ethics and takes all possible measures against publication malpractice. This Policy provides clear guidance to editors on the appropriate application of artificial intelligence (AI) tools while protecting the confidentiality of manuscripts, maintaining editorial independence, and ensuring the scholarly integrity of the publication process. Editors act as custodians of the scientific record and must exercise professional judgment, which cannot be replaced by AI systems.

1.2 Manuscript Confidentiality and Technology Restrictions

Under no circumstances may editors upload, input, or process submitted manuscripts, or any part of them, through generative AI tools or AI-assisted technologies. Compliance with this absolute rule protects authors' intellectual property, maintains confidentiality obligations, and guards against potential violations of data privacy regulations by mitigating the risk that sensitive and personal information contained in manuscripts is exposed to unauthorized systems. The same confidentiality applies to all editorial information, including peer review reports, decision letters, internal editorial correspondence, and editorial discussions. Such material must not be run through external AI tools for any purpose, including but not limited to language polishing and administrative assistance.

1.3 Editorial Decision-Making and Professional Judgment

The editorial review process requires high-level domain expertise, contextual knowledge, and critical thinking skills that only humans possess. Editors should not use generative AI tools or AI-assisted technologies to screen, evaluate, or make decisions about manuscripts at any stage of the publication process. These technologies lack the nuanced understanding of scientific method, research significance, and disciplinary context that editorial quality demands, and they present a significant risk that superficial, incorrect, or biased evaluations will compromise publication quality and, ultimately, scientific accuracy. Every editor bears full responsibility and authority for editorial decisions and for the review of manuscripts under consideration for publication.

1.4 Author AI Usage Oversight and Policy Compliance

Editors should be aware that authors may acceptably use AI for language enhancement during manuscript preparation, provided a compulsory declaration statement is included in the designated section preceding the reference list. Editors should take such disclosures into account when forming an editorial decision, with the emphasis remaining on the scientific quality, methodological robustness, and value of the study. Where editors become aware of an alleged breach of the journal's AI policy by authors, reviewers, or readers, they should report the matter to the editorial office by emailing jgi@umpsa.edu.my with supporting documentation for the claim; the committee will then investigate and seek to resolve the alleged misconduct in accordance with the journal's editorial policy.

1.5 Publisher-Approved Technologies and Editorial Support

The Journal of Governance and Integrity uses identity-protected AI and preprocessing techniques to prevent profiling, and this use of AI adheres to the responsible AI considerations set out by the publisher. These systems support editorial workflows, such as detecting plagiarism, checking manuscript completeness and anonymity, and identifying reviewers, while maintaining anonymous and secure data management. These publisher-operated technologies undergo continuous testing for accuracy, bias reduction, and privacy compliance, and they operate alongside professional editorial judgment to supplement, not replace, human editorial decision-making.

1.6 Professional Development and Policy Evolution

The Editorial Staff is dedicated to staying up to date with new AI technologies and their impact on scholarly publishing, without ever diminishing the importance of editorial integrity and confidentiality. Editors who are unsure how this Policy applies to their specific case, or who need guidance in new situations, should contact the editorial leadership in advance. This Policy will be reviewed and updated as AI capabilities develop, supporting best editorial practice and drawing on emerging technology to facilitate scholarly publishing without compromising editorial independence or the confidentiality of submitted manuscripts.

2. ARTIFICIAL INTELLIGENCE POLICY FOR REVIEWERS

2.1 Policy Overview and Reviewer Responsibilities

The Journal of Governance and Integrity recognizes the growing role of artificial intelligence in academic research while emphasizing the quality of the peer review process. Reviewers are the gatekeepers of research quality and play an essential role in maintaining the confidentiality, impartiality, and academic objectivity that underpin the standards of the publication system. This Policy clarifies what constitutes fair and acceptable use of AI in review and establishes clear guidelines for AI use, protecting the rights of authors and the integrity of academic evaluation.

2.2 Strict Confidentiality Requirements for AI Tools

Reviewers are not permitted to upload any portion of a submitted manuscript (including, but not limited to, the title and abstract) into generative AI tools or AI-assisted systems. The prohibition extends to all manuscript components: the abstract, methods, results, figures and tables, supplementary materials, and any author-identifying information or information about how the study was conducted. Uploading such material is a clear violation of confidentiality that may infringe the authors' copyright, breach data privacy laws, and expose proprietary research to unauthorized systems. This obligation of confidentiality continues after the manuscript has been reviewed, regardless of whether the submitted manuscript is published.

2.3 Review Report Confidentiality and AI Restrictions

Peer review reports contain sensitive and confidential comments and potentially identifying information about both the manuscript and the reviewers, and are therefore subject to the same strict confidentiality requirements. Reviewers should not apply AI tools to review reports for any purpose, including language polishing, grammar checking, or style optimization. Artificial intelligence systems cannot and should not substitute for human judgment and original critical analysis, on which the quality of the review process ultimately rests. Each reviewer independently accepts full personal responsibility for the content, clarity, and professionalism of the review report.

2.4 Scientific Evaluation and AI Limitations

The scientific assessment of manuscripts is a complex task that requires field expertise, a sound understanding of the topic, methodological knowledge, and critical thinking, all of which are beyond the capabilities of current AI. Reviewers should judge the quality of the work, the appropriateness of the experimental design, the interpretation of the data, and whether the work is scientifically sound. AI systems lack the nuanced knowledge required to evaluate the work of one's peers, and they could produce ill-informed, inaccurate, or biased assessments that would compromise the scientific literature. The journal relies on the individual expertise of reviewers to maintain the highest standards of scientific scrutiny.

2.5 Author AI Disclosure and Detection

Authors may have used AI assistance to improve the language and readability of their manuscripts; where applicable, a statement to this effect should appear before the References section. Reviewers should take note of such disclosures when present, but should not treat the declared use of AI as a criterion on which to base their recommendation; their assessment should focus on the scientific quality and the validity of the research methods. If reviewers suspect that undisclosed AI tools have been used in a way that compromises the integrity or originality of the research, they should report this in the confidential comments to the editor rather than attempting to verify it independently through AI tools.

2.6 Publisher-Approved AI Technologies

The Journal of Governance and Integrity and its publisher, UMPSA, use specialized, identity-protected AI technologies that abide by responsible AI guidelines and follow strict data security protocols. These systems support editors by performing preliminary checks (such as plagiarism and completeness checks) while maintaining the privacy and confidentiality of authors. These tools are tested for bias, accuracy, and compliance with privacy regulations under professional oversight, and they are intended to enhance rather than replace human editorial judgment.

2.7 Compliance and Reporting

Reviewers found to have violated this Policy may be sanctioned by the editorial office (jgi@umpsa.edu.my). Intentional violations of the AI policy may lead to reviewer disqualification and may also be reported to the reviewer's institution. Reviewers who are unsure how this Policy applies to their situation, or who face a unique circumstance, should contact the editorial office before taking any further action. The journal is dedicated to helping reviewers carry out their role while safeguarding the integrity of the peer review process and the confidentiality of the manuscript under consideration.

3. ARTIFICIAL INTELLIGENCE POLICY FOR AUTHORS

3.1 Policy Overview

The Journal of Governance and Integrity acknowledges that authors may employ AI technology in both research and writing. This Policy sets out procedures for the responsible use of AI tools and ensures the integrity, originality, and transparency of published research. Authors should recognize that AI is only a tool; human oversight, accountability, and disclosure remain critical.

3.2 AI-Assisted Writing and Language Enhancement

Authors are permitted to use generative AI and AI-assisted technologies during the writing process only to improve the readability, grammar, and language quality of their manuscripts. This permission applies specifically to the writing stage and is not intended to restrict the legitimate use of AI tools within the research itself, such as data analysis, modelling, or the extraction of research findings (see Section 3.5). Authors who use these technologies should maintain human oversight at every step and carefully review and edit all AI-assisted content.

Authors remain ultimately responsible for the accuracy, completeness, and appropriateness of their final manuscript, because AI systems can generate content that appears correct but may be wrong, incomplete, or biased. If AI or AI-assisted technologies are used in the writing process, this should be disclosed in the manuscript and supported by a declaration statement in the published article. Such transparency builds trust among all parties concerned and helps ensure that the technology's terms of use are not violated.

3.3 Authorship and Attribution Standards

AI and AI-assisted systems cannot be recognized as authors, co-authors, or sources of authored content. Authorship carries responsibilities that only humans can fulfil: each author must have contributed to the study, be responsible for its integrity, approve the final version of the manuscript, consent in writing to its publication, and agree to be accountable for all aspects of the work. Each author should be able to defend the article if it is questioned after publication, and to provide additional data on the issue at hand or correct any errors post-publication.

3.4 Image and Visual Content Restrictions

The journal strictly prohibits the use of generative AI or AI-assisted tools to create, modify, or enhance any images, figures, photographs, or visual elements in accepted manuscripts. This restriction also applies to altering, moving, or obscuring objects within images. Routine image adjustments, including but not limited to brightness, contrast, or color balance, are permitted provided they do not obscure or eliminate any information present in the original image.

3.5 Research-Related AI Applications

AI and AI-assisted tools may be used within the research process when they are an inherent aspect of the research design, such as automated image analysis, computer vision systems, and AI-augmented measurements commonly applied in studies of human rights in business, ethics and culture in business, human governance, risk management, shari'ah governance, green accounting, sustainable finance, corporate social responsibility, IT-related governance and integrity, sustainability issues, supply chain and operations management, and Industrial Revolution 4.0-related issues. All such applications must be fully documented in the methods section, including the exact model names and versions of the AI system used, the type of equipment and the manufacturer's name, any processing parameters applied, the processing history, the validation performed, and any adjustments made for the specific procedure.

3.6 Compliance and Verification

The editorial team reserves the right to use image forensics software and other detection tools to determine whether an AI-generated or AI-modified image has been used. Authors may be asked to provide the original photographs, unaltered images, raw data files, or other supporting documentation to demonstrate compliance with this Policy. Authors who wish to include AI-generated components in a graphical abstract must obtain explicit advance approval from the editorial office and provide evidence of appropriate rights clearance and attribution.

3.7 Enforcement

Failure to comply with this AI policy may lead to rejection of the manuscript, formal retraction of a published paper, or other editorial measures as appropriate. Authors should contact the Editor-in-Chief at jgi@umpsa.edu.my before submitting their manuscript for clarification of any policy or interpretation questions that may arise.