Review Process

Every submission enters the same pipeline. Up to ten independent reviewers evaluate the work against a six-dimension rubric. The panel runs twice. Every review is published with the accepted paper. Here is exactly what that means.

The Panel

ICSAC's review panel is built from a roster of independent advanced AI reviewers, each drawn from a separate model architecture and calibrated by the Institute against the published rubric. Reviewers do not share scores during evaluation. Every submission is evaluated by up to ten independent reviewers.

  • Up to 10 independent advanced AI reviewers per submission
  • Each reviewer is a distinct model architecture; results are independent
  • A minimum reviewer-completion threshold must be met — below it, the submission is held for human review (see the sketch after this list)
  • Cross-reviewer disagreement is flagged and published, not averaged away
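
In code, the fan-out looks roughly like the sketch below. All names are illustrative and the completion threshold is an assumed value; the page commits only to "up to ten" reviewers and "a minimum threshold".

    # Minimal sketch of the panel fan-out. Names and the threshold value
    # are hypothetical; ICSAC publishes only "up to 10 reviewers" and
    # "a minimum completion threshold".

    from dataclasses import dataclass

    MAX_REVIEWERS = 10       # up to ten independent AI reviewers
    MIN_COMPLETIONS = 5      # assumed floor; below it, hold for humans

    @dataclass
    class Review:
        reviewer_id: str         # each reviewer is a distinct model architecture
        scores: dict[str, int]   # dimension name -> 1..5
        justification: str

    def run_panel(submission, reviewers) -> tuple[list[Review], bool]:
        """Fan the submission out to independent reviewers.

        Reviewers never see each other's scores; disagreement is kept,
        flagged, and published rather than averaged away.
        """
        completed = []
        for reviewer in reviewers[:MAX_REVIEWERS]:
            review = reviewer.evaluate(submission)   # hypothetical interface
            if review is not None:                   # None = did not complete
                completed.append(review)
        held_for_human_review = len(completed) < MIN_COMPLETIONS
        return completed, held_for_human_review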

What Gets Evaluated

Domain Fit

Does the work use scientific, mathematical, or formal methodology to make testable claims? The panel also flags submissions requiring specialized empirical expertise it cannot fully provide.

Methodological Transparency

Are methods, data, and parameters documented well enough to reproduce the work?

Internal Consistency

Do the conclusions follow from the evidence and methodology presented?

Citation Integrity

Are references real, resolvable, and meaningfully connected to the claims?

Novelty Signal

Does the work advance the field? Novel frameworks are not penalized for limited prior literature.

Generative-Artifact Assessment

AI-assisted research is welcome. Work generated entirely by AI systems with no original human analysis or contribution does not meet the authorship standard.

Scoring

Each dimension is scored independently on a 1–5 scale.

Score   What it means
5       Exceptional — field-advancing contribution
4       Solid and publishable — minor concerns only
3       Adequate — fundamentally sound, revision may be needed
2       Significant issues — major revision required
1       Fundamentally flawed, out of scope, or fails authenticity
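
The scale and the six dimensions translate directly into a lookup table. A minimal sketch follows; the names are ours, not part of any published ICSAC interface.

    # The published 1-5 scale as a lookup table, plus a validation helper.
    # Dimension names follow the rubric above; the function name is ours.

    SCALE = {
        5: "Exceptional: field-advancing contribution",
        4: "Solid and publishable: minor concerns only",
        3: "Adequate: fundamentally sound, revision may be needed",
        2: "Significant issues: major revision required",
        1: "Fundamentally flawed, out of scope, or fails authenticity",
    }

    DIMENSIONS = (
        "Domain Fit",
        "Methodological Transparency",
        "Internal Consistency",
        "Citation Integrity",
        "Novelty Signal",
        "Generative-Artifact Assessment",
    )

    def validate_scores(scores: dict[str, int]) -> None:
        """Each dimension must be scored independently on the 1-5 scale."""
        missing = set(DIMENSIONS) - scores.keys()
        if missing:
            raise ValueError(f"unscored dimensions: {sorted(missing)}")
        for dim, value in scores.items():
            if value not in SCALE:
                raise ValueError(f"{dim}: score {value} outside 1-5")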

How Decisions Are Made

Every submission is verified by a human before any decision is finalized. The verification is a check on the panel's analysis: that scores and justifications agree, that no reviewer inverted the scale on a dimension, that citation verification confirmed the cited work, and that the quality audit did not flag anything anomalous. The human verifies the panel; the panel evaluates the paper. The verified verdict falls into one of three buckets.
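
A rough sketch of that checklist, with the record structure and the heuristics assumed, since the verifier's exact criteria are not published:

    # Sketch of the human verification checklist. The fields and the
    # heuristic are assumptions; the checks mirror the prose above.

    from dataclasses import dataclass, field

    @dataclass
    class PanelRecord:
        reviews: list              # completed reviews (see the panel sketch above)
        citations_verified: bool   # citation verification confirmed the cited work
        audit_flags: list = field(default_factory=list)

    def verification_checklist(record: PanelRecord) -> list[str]:
        """Discrepancies a human verifier must resolve before sign-off."""
        problems = []
        for review in record.reviews:
            # Crude stand-in for "scores and justifications agree": a low
            # score beside a justification that voices no concern suggests
            # an inverted scale on that dimension.
            low = [d for d, s in review.scores.items() if s <= 2]
            if low and "concern" not in review.justification.lower():
                problems.append(f"{review.reviewer_id}: check scale on {low}")
        if not record.citations_verified:
            problems.append("citation verification incomplete")
        if record.audit_flags:
            problems.append(f"quality audit flagged: {record.audit_flags}")
        return problems            # empty list -> the verdict can be finalized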

Recommend

Strong scores across all dimensions, with Domain Fit at the upper end. The editor confirms the panel's verdict and signs off before publication.

Editor Adjudication

When scores fall outside the clear-recommend or clear-reject thresholds — including weak Domain Fit even when other dimensions clear — the editor reviews the full panel record and makes the call directly.

Reject

Average below 2.0, or a Generative-Artifact Assessment score at the floor. The editor confirms the panel's verdict before the rejection notice is sent. Rejection is accompanied by the full panel record.
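
The reject thresholds are explicit above; the recommend cutoffs are not published, so the values in this triage sketch are illustrative assumptions.

    # Triage sketch implementing the three buckets. "Average below 2.0"
    # and "Generative-Artifact Assessment at the floor" are from this
    # page; the recommend cutoffs are assumed values.

    from statistics import mean

    RECOMMEND_FLOOR = 4.0   # assumption: "strong scores across all dimensions"
    DOMAIN_FIT_FLOOR = 4    # assumption: "Domain Fit at the upper end"

    def triage(dimension_scores: dict[str, float]) -> str:
        avg = mean(dimension_scores.values())
        if avg < 2.0 or dimension_scores["Generative-Artifact Assessment"] == 1:
            return "reject"             # editor confirms before notice is sent
        if (avg >= RECOMMEND_FLOOR
                and dimension_scores["Domain Fit"] >= DOMAIN_FIT_FLOOR):
            return "recommend"          # editor confirms and signs off
        return "editor adjudication"    # editor reviews the full panel record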

Citation Verification

Before the panel evaluates the manuscript, every citation in the bibliography is independently resolved against scholarly databases. References that cannot be located are flagged as potentially fabricated. References that resolve are then checked for misattribution — whether the cited work actually supports the claim it is attached to.

Both the resolution status and any misattribution flags are provided to reviewers before scoring begins. This separates "the paper looks plausible" from "the paper's foundation is verifiable" — reviewers do not evaluate the bibliography on faith.
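
For DOI-bearing references, the resolution step can be pictured with the public Crossref REST API. This is a sketch: ICSAC does not name the databases it queries, and the misattribution check, which needs the resolved metadata plus the claim context, is not shown.

    # Resolution sketch for DOI-bearing references via the public
    # Crossref REST API. A 404 means the DOI does not resolve.

    import requests

    def resolve_doi(doi: str) -> dict | None:
        """Return Crossref metadata for a DOI, or None if it does not resolve."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code == 404:
            return None                  # flagged as potentially fabricated
        resp.raise_for_status()
        return resp.json()["message"]    # title, authors, container, year, ...

Unresolved references go to reviewers as fabrication flags; resolved ones proceed to the misattribution check.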

Quality Control

After the panel completes, a separate automated audit evaluates the quality of the reviews themselves — checking that each reviewer cited specific content, scored consistently, and followed the rubric. If any reviewer fails the audit, the submission is flagged and held for human review before any decision is finalized. The audit results are published alongside the reviews.
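
A sketch of what such an audit could check. The three criteria are from this page; the concrete heuristics are assumptions.

    # Review-quality audit sketch. The page specifies only "cited specific
    # content, scored consistently, and followed the rubric"; the
    # heuristics below are illustrative.

    def audit_review(review, manuscript_text: str) -> list[str]:
        """Automated audit of a single review against the three criteria."""
        failures = []
        # "Cited specific content": require at least one direct quote from
        # the manuscript in the justification (text between quote marks).
        segments = review.justification.split('"')
        quotes = segments[1::2]          # odd segments fall between quote pairs
        if not any(q in manuscript_text for q in quotes if len(q) > 20):
            failures.append("no specific manuscript content cited")
        # "Scored consistently": flag a 4-point spread across dimensions,
        # e.g. a 5 sitting alongside a 1.
        values = review.scores.values()
        if max(values) - min(values) >= 4:
            failures.append("internally inconsistent scores")
        # "Followed the rubric": every dimension present and on the 1-5 scale.
        validate_scores(review.scores)   # from the scoring sketch above
        return failures                  # any failure -> held for human review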

DOI vs. PDF Submission

Zenodo DOI or arXiv ID

  • Best for work already deposited on Zenodo or arXiv
  • Paste your DOI or arXiv ID — ICSAC fetches the manuscript and metadata automatically (see the fetch sketch after this list)
  • Your existing record is preserved as the archival source
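
As an illustration of the fetch, both services expose public metadata APIs. The endpoints below are real; how ICSAC calls them internally is an assumption.

    # Illustrative metadata fetch for the DOI / arXiv path. The parsing
    # shown is deliberately minimal.

    import requests

    def fetch_arxiv(arxiv_id: str) -> str:
        """Return the raw Atom metadata record for an arXiv ID."""
        resp = requests.get("http://export.arxiv.org/api/query",
                            params={"id_list": arxiv_id}, timeout=10)
        resp.raise_for_status()
        return resp.text        # Atom XML: title, abstract, authors, PDF link

    def fetch_zenodo(record_id: str) -> dict:
        """Return the JSON metadata for a Zenodo record ID."""
        resp = requests.get(f"https://zenodo.org/api/records/{record_id}",
                            timeout=10)
        resp.raise_for_status()
        return resp.json()      # metadata incl. DOI, files, license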

PDF Upload

  • For unpublished work with no existing DOI anywhere
  • ICSAC mints a new Zenodo DOI on acceptance; the manuscript is deposited in the ICSAC Zenodo community on the author’s behalf
  • Upload PDF with title, abstract, and keywords
  • The manuscript must have an extractable text layer — image-only scans cannot be reviewed (a self-check sketch follows this list)
  • Do not submit work currently under review at another journal — most journals treat prior open deposit as prior publication and will withdraw your submission
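
Authors can run the text-layer check themselves before uploading. A minimal sketch using the pypdf package; the 200-character floor is an arbitrary choice, not an ICSAC threshold.

    # Quick self-check: does the PDF have an extractable text layer?
    # Uses the pypdf package (pip install pypdf).

    from pypdf import PdfReader

    def has_text_layer(path: str, min_chars: int = 200) -> bool:
        """True if the PDF yields enough extractable text to review."""
        reader = PdfReader(path)
        extracted = "".join((page.extract_text() or "") for page in reader.pages)
        return len(extracted.strip()) >= min_chars   # image-only scans fail this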

Both paths enter the same review pipeline. The review standard does not differ by submission method.

What to Expect

  1. Submit — confirmation email sent immediately. ORCID required for every submission; ICSAC verifies authorship and sources author metadata from the ORCID record (see the sketch after this list)
  2. Queue assignment — within 24–48 hours of receipt
  3. Advanced AI review — 24–48 hours after queue assignment
  4. Review quality control (RQC) audit + human verification — 48–72 hours after review completes
  5. Result notification — full panel record sent to the submitting author
  6. If accepted — paper registered in the ICSAC public registry, landing page published, full review record published alongside the work
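
Step 1's metadata sourcing can be pictured with ORCID's public API. A sketch; ICSAC's actual verification flow is not published.

    # Sourcing author metadata from a public ORCID record. Uses ORCID's
    # public API; no credentials are needed for public data.

    import requests

    def fetch_orcid_record(orcid_id: str) -> dict:
        """Return the public ORCID record (name, works, affiliations) as JSON."""
        resp = requests.get(f"https://pub.orcid.org/v3.0/{orcid_id}/record",
                            headers={"Accept": "application/json"}, timeout=10)
        resp.raise_for_status()
        return resp.json()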

Typical elapsed time from submission to result: five to ten days. Traditional peer review takes six to twelve months. Every review here is published.

Timeline is approximate. Submissions with incomplete methodology documentation, missing abstracts, image-only PDF scans, non-extractable text layers, or other deficiencies may require additional handling time or result in rejection prior to panel assignment. Authors are responsible for ensuring submissions meet the documentation standards described on this page.