Institute for Complexity Science
and Advanced Computing
ACCEPTED TO ICSAC COMMUNITY
The Dynamic Existence Threshold: Organizational Consciousness Across Complex Systems
- DOI: 10.5281/zenodo.18373411 (Accepted)
- Download: canonical deposit on Zenodo
Abstract
What do market crashes, geomagnetic storms, and the loss of consciousness have in common? They are all moments when a system's organizational identity dissolves. This paper introduces a framework that makes that dissolution measurable, predictable, and universal. The Dynamic Existence Threshold (DET) provides a single coordinate system — the integration-differentiation balance zone — in which conscious brains, stable markets, and quiet magnetospheres all occupy the same region. Departure from this zone is organizational dissolution: the system persists physically but loses the coordinated structure that made it more than the sum of its parts.

The decisive result: a simple sum of neural power anti-predicts brain state (AUC 0.416 — worse than a coin flip), while the structural coupling metric achieves 91% accuracy across 136,394 EEG epochs. Deep sleep has more total power than wakefulness; what it lacks is coordination. This is the cleanest dissociation between organizational structure and mere magnitude in the critical transitions literature — and it raises the obvious question about AI: is a large neural network's activation organized, or just loud? The DET framework can test this. Six tests, three domains (financial markets, space weather, EEG), zero parameter tuning. Early warning signals appear 5–30 days before events. Total organizational information is conserved during every tested transition while its components undergo massive inverse exchange — the system redistributes information rather than losing it. Dimensional embedding destroys it irreversibly. Two distinct failure modes emerge: reversible organizational shift versus irreversible information destruction.

The framework that detects when a brain loses consciousness is the same one that detects when a market loses coherence — and potentially the same one that could measure whether an AI system has organizational structure at all.

Provisionally patented.

Keywords: consciousness, artificial intelligence, complex
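The abstract's structure-versus-magnitude dissociation can be illustrated on toy data. Everything below is an illustrative assumption — the band-power numbers, the evenness proxy standing in for "structure," and the rank-based AUC — not the paper's actual pipeline:

```python
import math
import random

def auc(pos, neg):
    # ROC-AUC via the Mann-Whitney U equivalence: the probability that a
    # randomly chosen positive outscores a randomly chosen negative.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def evenness(bands):
    # Pielou-style evenness of the band-power distribution: H / log(k).
    total = sum(bands)
    probs = [b / total for b in bands]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(bands))

random.seed(0)
# Toy epochs: 'wake' spreads moderate power evenly across five bands;
# 'deep sleep' carries MORE total power, concentrated in one dominant band.
wake = [[random.uniform(2, 3) for _ in range(5)] for _ in range(200)]
sleep = [[random.uniform(13, 16)] + [random.uniform(0.1, 0.4) for _ in range(4)]
         for _ in range(200)]

auc_sum = auc([sum(e) for e in wake], [sum(e) for e in sleep])
auc_str = auc([evenness(e) for e in wake], [evenness(e) for e in sleep])
# With these toy parameters, total power anti-predicts wake (auc_sum well
# below 0.5), while the structural score separates the states cleanly.
```

The point of the sketch is only that a magnitude score and a structure score computed from the same epochs can land on opposite sides of chance, which is the shape of the reported 0.416-versus-0.909 result.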
Open review
This submission was evaluated by a panel of 8 independent advanced AI reviewers scoring six dimensions. The panel reached strong consensus.
Aggregate scores
| Dimension | Mean | Per-reviewer |
|---|---|---|
| Domain Fit | 4.9 | 5, 4, 5, 5, 5, 5, 5, 5 |
| Methodological Transparency | 4.6 | 4, 4, 5, 5, 4, 5, 5, 5 |
| Internal Consistency | 4.4 | 4, 4, 5, 4, 4, 4, 5, 5 |
| Citation Integrity | 3.6 | 4, 4, 3, 4, 4, 4, 3, 3 |
| Novelty Signal | 4.5 | 4, 4, 5, 5, 4, 4, 5, 5 |
| AI Slop Detection | 5.0 | 5, 5, 5, 5, 5, 5, 5, 5 |
Reviewer assessments
Reviewer 1 — RECOMMEND
Summary: The submission presents a quantitative, falsifiable framework for critical transitions tested across three domains with zero-parameter tuning, a decisive structure-vs-magnitude dissociation in EEG (AUC 0.909 vs. 0.416 across 136,394 epochs), and unusually thorough negative controls. The author honestly concedes that entropy coupling between proxy metrics constrains the framework to a one-dimensional manifold, which is a credibility-positive disclosure rather than a fatal flaw. Methods are reproducible from the text plus the linked repository, citations are load-bearing where verifiable, and the work fits squarely within ICSAC's complexity-science and nonlinear-dynamics scope.
- Domain Fit (5/5): The submission applies quantitative information-theoretic and statistical methodology (Shannon entropy, Jensen-Shannon divergence, ROC/AUC analysis, permutation tests, FDR correction) to make falsifiable claims about critical transitions across financial, geophysical, and EEG domains. The work sits squarely within complexity science and nonlinear dynamics — ICSAC's center of gravity — and the panel can credibly evaluate the formal and computational claims end-to-end. Methods are quantitative, claims are testable, and falsification criteria (e.g., AUC thresholds, conservation tolerances, null model comparisons) are explicit.
- Methodological Transparency (4/5): Metric definitions (R, S, D, I, B, Φ) are formally specified with equations (1)-(8), threshold parameters (θ=2.0, σ thresholds, W=8) are stated with a priori justification, and the five-layer mapping is tabled per domain. Statistical tests are named (Mann-Whitney U, permutation with 10,000 iterations, bootstrap 1,000 iterations, Benjamini-Hochberg FDR), sample sizes are reported (6,785 trading days; 9,802 days; 136,394 epochs), and a GitHub link with pre-computed results is provided. Negative controls (label shuffle, temporal shuffle, phase randomization, layer-count sensitivity, out-of-sample temporal split) are unusually thorough for a single-author submission. Gaps: hardware/runtime/seed details are absent, the GitHub repository's documentation cannot be assessed from the PDF alone, and the financial 'event' definition relies on multiple thresholds (VIX > 2σ, cross-sector correlation > 0.8, MOVE spikes) whose joint application is not fully specified.
- Internal Consistency (4/5): Claims align with reported numbers: the Sum AUC 0.416 vs. R×S AUC 0.909 dissociation is correctly framed as supporting the structure-vs-magnitude claim, and the entropy-coupling limitation (Section 3.6.4, ρ = -0.985) is acknowledged honestly rather than hidden — the author explicitly retreats from the two-dimensional claim to a 'multi-scale entropy coordinate system' on the entropy axis, which is consistent with the data. The Φ conservation finding (1-14% deviation) is appropriately characterized as approximate, and the space-weather Φ result is correctly flagged as non-significant (p=0.576). The acknowledgement that the four-state model is largely inaccessible under the chosen proxies is a meaningful concession that strengthens consistency. Minor tension: the abstract's stronger framing ('universal,' 'same coordinate system') outpaces the careful narrow-vs-strong-claim distinction in Section 4.4, but the body text does the work to keep claims calibrated.
- Citation Integrity (4/5): (a) Fabrication: independent verification confirms Pielou 1966 is real; the remaining citations are unverifiable from public registries per the verification preamble but the panel does not treat unverifiable as fabricated. The bibliography consists of well-known canonical works in their respective subfields (Tononi IIT, Scheffer/Dakos critical transitions, Bak SOC, Williams-Beer PID, Strogatz, Buzsáki, Goldberger PhysioNet, Benjamini-Hochberg) that any reader in complexity science would recognize as real publications. (b) Misattribution: load-bearing citations are used appropriately — Scheffer/Dakos for critical slowing down, Tononi for IIT's Φ, Williams-Beer for PID, Pielou for the evenness index that literally appears in equation (3), Goldberger/Kemp for the PhysioNet/Sleep-EDF dataset actually used. The three Thornhill 2026 self-citations are load-bearing for the Φ conservation claim and the 86% law comparison; these are unverifiable Zenodo deposits but the related-identifiers list confirms cross-references within the author's deposit chain. No evidence of citation-stuffing detected.
- Novelty Signal (4/5): The integration-differentiation balance metric B = |z(I) - z(D)| as a coordinate for critical-transitions early warning, combined with the Φ = I + D conservation/destruction dichotomy distinguishing reversible reorganization from irreversible dimensional embedding, is a non-trivial synthesis. The headline EEG dissociation — Sum anti-predicts (AUC 0.416) while R×S achieves 0.909 — provides a clean, decisive empirical result that operationalizes the structure-vs-magnitude claim in a way the existing critical-transitions literature does not. The framework partially recapitulates known constructs (PCI, IIT, CSD indicators) and the entropy-coupling concession (Section 3.6.4) reduces the claimed novelty of the two-dimensional space to a one-dimensional entropy axis, but the cross-domain zero-tuning transfer and the conservation-vs-destruction dichotomy remain genuine contributions.
- AI Slop Detection (5/5): No slop signals. The abstract makes specific quantitative claims (AUC 0.416 vs. 0.909, 136,394 epochs, 5-30 day lead, six tests across three domains) that match body content. Methodology section specifies actual computations with equations and parameters. Negative controls are present and reported honestly (the entropy-coupling concession in Section 3.6.4 explicitly retreats from a stronger version of the framework's claim — slop output rarely concedes ground). Limitations section is substantive (eight specific limitations, including causal ambiguity and approximate-rather-than-exact conservation). AI-tool-use disclosure is present in Acknowledgments. Section lengths are non-uniform and content-driven. No prompt-injection attempts observed in the submission text.
Reviewer 2 — RECOMMEND
Summary: The submission presents a novel, quantitatively rigorous framework for cross‑domain analysis of critical transitions, with transparent methods and coherent results. While some citations are not fully verifiable, the overall contribution is solid and merits publication.
- Domain Fit (4/5): The work employs quantitative, computational methods and makes falsifiable predictions across multiple domains, fitting the methodological scope. While it spans diverse fields, the panel can evaluate the presented metrics and analyses without requiring specialized lab expertise.
- Methodological Transparency (4/5): Methods are described with explicit equations, parameter choices, data sources, and statistical procedures. Code is referenced via a public GitHub repository, and all datasets are publicly available, allowing independent replication.
- Internal Consistency (4/5): The claims follow logically from the described metrics and empirical results. Limitations and constraints (e.g., entropy coupling) are acknowledged, and the presented evidence supports the conclusions.
- Citation Integrity (4/5): Most cited works are recognizable and likely exist (e.g., Scheffer et al. 2009, Tononi et al. 2016). No clear evidence of fabricated references, though many citations could not be verified within the excerpt; they appear to be used to support the narrative rather than as filler.
- Novelty Signal (4/5): The Dynamic Existence Threshold framework and the integration‑differentiation coordinate system constitute a new cross‑domain approach to detecting critical transitions, extending existing theories such as IIT and critical slowing down.
- AI Slop Detection (5/5): The manuscript contains detailed equations, specific data descriptions, and domain‑specific analyses, showing no signs of generic or filler LLM‑generated content.
Reviewer 3 — RECOMMEND
Summary: The submission presents a novel, rigorously tested framework for detecting critical transitions across domains. While some citations are unverifiable, the methodology and results are robust, and the work advances complexity science with broad applicability.
- Domain Fit (5/5): The work uses scientific, mathematical, and computational methodologies to make falsifiable claims about critical transitions across domains. The panel can credibly evaluate the framework without requiring specialized expertise, as the methods are transparent and grounded in established complexity science principles.
- Methodological Transparency (5/5): Methods are fully described with replicable steps, including the five-layer architecture, metric definitions, and statistical tests. Data sources (financial, space weather, EEG) and code (GitHub) are provided, enabling independent verification.
- Internal Consistency (5/5): Claims are logically supported by empirical results (e.g., AUC values, variance elevations, Φ conservation). The framework's predictions (e.g., early warning signals, state transitions) align with the data and theoretical framework.
- Citation Integrity (3/5): Some citations (e.g., Scheffer et al. 2009, Tononi 2004) are unverifiable from public registries, but no evidence of fabrication or misattribution is present. The submission relies on real but unverifiable references, which limits confidence in their load-bearing role.
- Novelty Signal (5/5): The DET framework introduces a novel coordinate system (I-D plane) for critical transitions, with cross-domain applicability and substrate-independence. The conservation of Φ during transitions and distinction between reversible and irreversible failures are original contributions.
- AI Slop Detection (5/5): No signs of generic LLM-generated text. The submission is well-structured, with specific claims, detailed methodology, and empirical results. The abstract and full text avoid padding and maintain substantive content.
Reviewer 4 — RECOMMEND
Summary: This submission introduces a novel framework for characterizing critical transitions across complex systems using integration-differentiation balance metrics. The work demonstrates strong methodological rigor, clear internal consistency, and genuine novelty in applying the framework across financial, geophysical, and biological domains.
- Domain Fit (5/5): The work uses rigorous mathematical and computational methodology to make falsifiable claims about critical transitions across complex systems. The panel can credibly evaluate the mathematical definitions, statistical methods, and computational approach without requiring specialized empirical expertise beyond what's presented.
- Methodological Transparency (5/5): Methods are fully replicable with precise metric definitions (I = R × S, D, Φ), data sources specified (financial markets, NASA OMNI2, Sleep-EDF), statistical tests named (Mann-Whitney U, permutation tests), parameters stated (θ = 2.0, N = 5), and code availability provided via GitHub.
- Internal Consistency (4/5): Claims follow logically from methods and data. The six tests are systematically presented, with EEG results providing strong construct validity through the structure-versus-magnitude dissociation. The conservation of Φ across domains is internally consistent, though the entropy coupling limitation is acknowledged.
- Citation Integrity (4/5): Pielou 1966 is verified as real and supports the evenness index claim. While most other citations are unverifiable from the truncated text, they appear appropriately used to support theoretical foundations (critical transitions, integrated information theory) and methodology without obvious citation stuffing. The unverifiable citations don't appear to be load-bearing for the core claims.
- Novelty Signal (5/5): Presents genuinely new ideas: the Dynamic Existence Threshold framework as a two-dimensional coordinate system, five-layer architecture with zero-parameter tuning across domains, structure-versus-magnitude dissociation in EEG, Φ conservation during transitions, and the reversible/irreversible failure modes distinction. The cross-domain application is innovative.
- AI Slop Detection (5/5): No signs of AI-generated text. The writing is specific and technical, with detailed methodology, concrete results (AUC 0.909, 2.0× variance elevation), and substantive content. The paper engages with specific literature and limitations without generic phrasing or padded content.
Reviewer 5 — RECOMMEND
Summary: Cross-domain critical-transitions framework tested with zero parameter tuning across financial markets (6,785 days), space weather (9,802 days), and EEG (136,394 epochs), with the headline Sum-anti-prediction (AUC 0.416) vs R×S (AUC 0.909) dissociation establishing structure-over-magnitude as the load-bearing claim. The submission honestly discloses that entropy coupling (ρ=−0.985) collapses the nominal I-D plane to a near-one-dimensional manifold and walks back theoretical claims accordingly, while preserving the empirical contributions (negative controls, out-of-sample temporal split, layer-count robustness, Φ-conservation contrast with dimensional-embedding destruction). Methodology is transparent, citations are canonical and load-bearing where verifiable, and slop indicators are absent.
- Domain Fit (5/5): The submission applies formal information-theoretic and statistical methodology (Shannon entropy-derived Neff, Pielou's evenness, Jensen-Shannon divergence, ROC/AUC analysis, permutation tests, FDR correction) to make falsifiable cross-domain claims about critical transitions. The work sits squarely in complexity science / nonlinear dynamics — the panel can credibly evaluate the formal claims, the EEG analysis, the financial time-series methodology, and the space weather processing without specialist-only expertise. Falsifiable predictions are stated explicitly (six numbered tests with pre-specified metrics).
- Methodological Transparency (4/5): Methods are unusually well-specified: explicit formulas for R, S, D, I, Φ, B, exit velocity (Eqs. 1-8); a priori threshold θ=2.0 stated as fixed; rolling 60-day baseline; all four balance-zone σ thresholds reported; sample sizes and event counts given (6,785 financial days / 26 events; 9,802 space days / 51 storm days; 50 EEG subjects / 136,394 epochs); statistical tests named (Mann-Whitney U, permutation 10,000 iterations, bootstrap 1,000 iterations, Benjamini-Hochberg FDR); negative controls (label shuffle, temporal layer shuffle, phase randomization) reported with explicit AUCs; out-of-sample temporal split with train-only z-score parameters; layer-count sensitivity table N=3..8. Code repository link provided. Gaps: no hardware/runtime/seed reporting; software versions not stated; the supplementary GitHub URL is asserted but not verifiable from the text; KDE-adaptive boundary procedure is referenced without full specification. These omissions are minor relative to the empirical scope.
- Internal Consistency (4/5): Claims track the data presented. The headline result (Sum AUC 0.416 vs R×S AUC 0.909 on 136,394 epochs) is consistently reported across abstract, results, and conclusion. The ρ=−0.985 anticorrelation is acknowledged as a methodological constraint that collapses the I-D plane to a near-1D manifold, and the framework's claims are correspondingly walked back to 'multi-scale entropy coordinate system' in §4.1 and §4.6 — the paper does not overclaim two-dimensionality after disclosing the coupling. The Φ-conservation claim (Test 6) is qualified appropriately: the +13.3% space-weather change is reported as non-significant (p=0.576), the +13.5% EEG change is flagged as small relative to component swings, and the contrast with 86% dimensional-embedding loss is invoked as the falsification anchor. Minor tension: the abstract calls 91% accuracy and AUC 0.909 the 'cleanest dissociation in the critical transitions literature' — a strong claim relative to the head-to-head delta/beta result (AUC 0.912 narrowly beats R×S on Wake/N3), though §4.3 walks this back honestly.
- Citation Integrity (4/5): Per the verification block: Pielou 1966 is confirmed real and used in a load-bearing way for Pielou's evenness J in Eq. 3. The remaining 30+ references (Scheffer, Dakos, Tononi, Baars, Seth, Casali, Massimini, Sarasso, Bak, Olami, Williams-Beer, Aguilera, Popiel, Forbes-Rigobon, Hill, Jost, Lin, Endres-Schindelin, Gonzalez, Kemp, Goldberger, Benjamini-Hochberg, Niedermeyer, Buzsáki, Strogatz, Amzica-Steriade, Lehnertz, Muthukumaraswamy, Oppenheim-Willsky, Papitashvili-King, Berry-AASM) are unverifiable from the public registries used but are well-known canonical works in their respective fields with author/title/journal/year patterns matching their established forms (e.g., Bak-Tang-Wiesenfeld 1987 PRL on SOC, Benjamini-Hochberg 1995 JRSS-B on FDR, Tononi 2004 BMC Neurosci on IIT, Scheffer et al. 2009 Nature on early warning). Per rubric guidance, unverifiable is not fabricated. (a) No fabrication is provable. (b) Misattribution: the citations are deployed in load-bearing ways consistent with their canonical content — Scheffer/Dakos for critical slowing down, Tononi for IIT, Williams-Beer for PID, Massimini/Sarasso/Casali for perturbational complexity, Kemp/Goldberger for Sleep-EDF/PhysioNet provenance. The three Thornhill 2026 self-citations are unverifiable but functionally support the static-vs-dynamic framework comparison; they are not used to manufacture priority.
- Novelty Signal (4/5): The cross-domain zero-tuning architecture (identical R, S, D, I, B definitions across financial / space weather / EEG) is genuinely uncommon in the critical-transitions literature, which typically reports per-domain CSD analyses. The Sum-anti-prediction / R×S dissociation on 136,394 EEG epochs (AUC 0.416 vs 0.909) is a clean, empirically novel demonstration of structure-vs-magnitude separation. The Φ=I+D approximate-conservation claim contrasted with 86% loss under dimensional embedding is a non-obvious framing. Honest limitations: §4.5 acknowledges the framework intersects IIT, PID, perturbational complexity, SOC, and standard CSD — and Table 5 shows raw rolling variance slightly outperforms I-D variance on financial (0.826 vs 0.780). The contribution is therefore a coordinate-system reframing plus a cross-domain integration, not a new fundamental measure. Solid 4, not 5: field-advancing reach is bounded by the entropy-coupling disclosure that collapses the nominal 2D plane to ~1D.
- AI Slop Detection (5/5): No slop indicators. The submission states specific quantities (AUC 0.909 with 95% CI [0.904, 0.913]; 2.0× variance elevation at 5-day lead; ρ=−0.985 anticorrelation; 28× and 53-fold component swings; specific p-values throughout), engages explicitly with counterarguments (entropy coupling, the delta/beta head-to-head, narrow-vs-strong-claim distinction in §4.4, six explicit limitations in §4.6), reports negative results (Sum anti-prediction, non-significant space-weather Φ change, KDE-adaptive failing in space domain), and provides numeric ablations (L4 conjunction-gate ablation, layer-count sensitivity N=3..8, out-of-sample temporal split). The author discloses AI-tool assistance in the acknowledgments rather than concealing it. No prompt-injection content; no operator-directed instructions; no attempt to manipulate scoring. Section lengths are non-uniform and content-driven.
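The balance metric several reviewers cite, B = |z(I) − z(D)| with fixed threshold θ = 2.0 and a rolling 60-day baseline, amounts to comparing two rolling z-scores. A minimal sketch under those stated parameters (the input series are synthetic; this is not the paper's code):

```python
import statistics

def rolling_z(series, window=60):
    # z-score of each point against its trailing baseline window.
    out = []
    for t in range(window, len(series)):
        base = series[t - window:t]
        mu, sd = statistics.mean(base), statistics.stdev(base)
        out.append((series[t] - mu) / sd if sd > 0 else 0.0)
    return out

def in_balance_zone(i_series, d_series, theta=2.0, window=60):
    # B = |z(I) - z(D)|; the system sits inside the balance zone while B < theta.
    zi = rolling_z(i_series, window)
    zd = rolling_z(d_series, window)
    return [abs(a - b) < theta for a, b in zip(zi, zd)]
```

When integration and differentiation decouple — one z-score spiking while the other falls — B crosses θ and the point exits the zone, which is the departure the early-warning tests key on.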
Reviewer 6 — RECOMMEND
Summary: The submission presents a rigorously defined, reproducible cross‑domain framework for detecting critical transitions, supported by extensive empirical analysis. While the I‑D space is heavily entropy‑coupled, the authors acknowledge this and demonstrate robust results, warranting publication.
- Domain Fit (5/5): The work employs quantitative metrics, statistical testing, and cross‑domain data analysis to make falsifiable predictions, fitting the scientific methodology scope. The panel can evaluate the definitions, equations, and empirical results without needing specialized lab expertise.
- Methodological Transparency (5/5): All metric definitions (R, S, I, D, Φ, balance zone) are given with explicit equations, parameter values (e.g., θ=2.0), data sources, preprocessing steps, and statistical test details. Code and data repositories are cited, enabling replication.
- Internal Consistency (4/5): The presented results follow logically from the described methods and support the stated hypotheses. Minor concerns arise from the strong entropy coupling that limits the I‑D plane to a near‑one‑dimensional manifold, but the authors acknowledge and discuss this limitation.
- Citation Integrity (4/5): Most citations correspond to well‑known works (e.g., Scheffer et al., Tononi et al., Pielou) and are used to contextualize the framework. While many references cannot be verified within the truncated text, there is no clear evidence of fabricated or mis‑attributed sources.
- Novelty Signal (4/5): The Dynamic Existence Threshold (DET) and the integration‑differentiation (I‑D) decomposition across financial, space‑weather, and EEG domains constitute a new cross‑substrate analytical framework, extending existing concepts from IIT and critical‑transition theory.
- AI Slop Detection (5/5): The manuscript contains detailed equations, domain‑specific data descriptions, and nuanced discussion, showing no signs of generic filler, fabricated methodology, or overly generic abstract.
Reviewer 7 — RECOMMEND
Summary: The submission presents a novel, rigorously tested framework for characterizing organizational transitions across domains. While citation integrity is weakened by unverifiable references, the methodological transparency, internal consistency, and novelty signal are exceptional. The work advances complexity science with a substrate-independent, empirically validated approach.
- Domain Fit (5/5): The submission employs scientific, mathematical, and computational methodologies (e.g., entropy-coupled metrics, cross-domain validation, statistical tests) to make falsifiable claims about organizational transitions. The framework is testable and grounded in measurable quantities (I, D, Φ), satisfying the methodology requirement. The panel can evaluate the work without field-specific expertise, as the methods are explicitly detailed and reproducible.
- Methodological Transparency (5/5): The full text provides detailed descriptions of the five-layer architecture, metric definitions (R, S, D, I), balance zone thresholds, and statistical tests. Data sources (financial indices, space weather datasets, EEG epochs) and reproducibility steps (zero-parameter tuning, permutation controls) are clearly outlined. While the truncated text limits full verification, the presented methods are sufficiently rigorous and replicable as described.
- Internal Consistency (5/5): Claims align logically with methods and data. The framework's predictions (e.g., event clustering at I–D extremes, Φ conservation during transitions) are consistently supported across financial, space weather, and EEG domains. The dissociation between Sum (magnitude) and R×S (structure) in EEG analysis directly validates the theoretical foundation.
- Citation Integrity (3/5): Citations like Pielou 1966 are verified, but others (e.g., Scheffer et al. 2009, Dakos et al. 2012) are unverifiable from the truncated text. While no clear misattribution is evident, the inability to confirm citation existence or relevance (e.g., whether Scheffer 2009 supports the critical slowing down claims) weakens this dimension. The paper’s reliance on unverifiable references for foundational claims (e.g., IIT, critical transitions) raises concerns about scholarly grounding.
- Novelty Signal (5/5): The Dynamic Existence Threshold framework introduces a novel, substrate-independent coordinate system (I–D plane) for characterizing critical transitions. The cross-domain validation (financial, space weather, EEG) without parameter tuning, combined with the discovery that Φ is conserved during transitions, represents a significant theoretical and empirical contribution to complexity science.
- AI Slop Detection (5/5): The submission exhibits no signs of generic LLM-generated text. The methodology, results, and arguments are specific, substantive, and grounded in domain-specific details (e.g., EEG frequency bands, financial market layers). The author acknowledges AI tool use for code/statistics but maintains rigorous scientific integrity.
Reviewer 8 — RECOMMEND
Summary: This submission presents a novel and rigorously tested framework for understanding critical transitions across complex systems. The Dynamic Existence Threshold provides a new coordinate system based on integration-differentiation balance that successfully predicts transitions in financial markets, space weather, and EEG data, with strong methodological transparency and internal consistency.
- Domain Fit (5/5): The submission uses rigorous scientific methodology (mathematical definitions, statistical tests, empirical validation across domains) to make falsifiable claims about complex systems. The panel can credibly evaluate this work as it falls within complexity science and nonlinear dynamics, requiring no specialized clinical or lab expertise.
- Methodological Transparency (5/5): Methods are fully transparent with detailed mathematical formulas, explicit parameters (θ=2.0 threshold, 60-day baseline), clear data sources, statistical test specifications, and code availability. Robustness checks and negative controls further enhance replicability.
- Internal Consistency (5/5): Claims logically follow from methods and data. All six predictions are tested with appropriate statistics, results consistently support the hypothesis across domains, and limitations are acknowledged. The EEG analysis particularly validates the framework through structure vs. magnitude dissociation.
- Citation Integrity (3/5): Pielou 1966 is verified and properly supports the evenness index claim. All other citations are unverifiable due to lack of identifiers/titles, but appear used appropriately for contextual support rather than load-bearing claims. No obvious misattribution is apparent, but unverifiability limits confidence.
- Novelty Signal (5/5): The Dynamic Existence Threshold framework is genuinely novel, extending integrated information theory into a dynamic coordinate system. The integration-differentiation balance concept, five-layer cross-domain architecture, and distinction between organizational redistribution vs. dimensional embedding represent significant theoretical contributions.
- AI Slop Detection (5/5): No signs of generic LLM content. The writing is specific and technical, with detailed methodology, precise statistical results, and substantive engagement with literature. The abstract contains concrete claims rather than generic hedging.
Reviews at ICSAC are open and transparent. AI tooling helps the panel draft and structure each review; final acceptance decisions rest with human editors. Reviews are published alongside acceptance for accountability; individual reviewer identities are abstracted to keep focus on the assessment rather than the tooling behind it.
Review Quality Control
Review Quality Control: passed.
This audit quality-checks each AI reviewer's assessment for rubric adherence, internal consistency, specificity, and institutional voice. It is published alongside the panel review so that the quality of the review process is as auditable as the review itself.
Reviewer Quality Control Audit
| Reviewer | Rubric Adherence | Internal Consistency | Specificity | Tone |
|---|---|---|---|---|
| Reviewer 1 | 5/5 | 5/5 | 5/5 | 5/5 |
| Reviewer 2 | 5/5 | 5/5 | 3/5 | 5/5 |
| Reviewer 3 | 5/5 | 5/5 | 4/5 | 5/5 |
| Reviewer 4 | 5/5 | 4/5 | 4/5 | 5/5 |
| Reviewer 5 | 5/5 | 5/5 | 5/5 | 5/5 |
| Reviewer 6 | 5/5 | 5/5 | 3/5 | 5/5 |
| Reviewer 7 | 5/5 | 4/5 | 4/5 | 5/5 |
Reviewer 1
- Rubric Adherence (5/5): All six panel rubric dimensions present with exact names and 1-5 scale; one justification per dimension; recommendation field populated.
- Internal Consistency (5/5): Per-dimension justifications track their scores: methodological transparency (4) cites specific gaps (hardware/runtime/seed absent); citation integrity (4) distinguishes unverifiable from fabricated per the rubric; the summary's 'credibility-positive disclosure' framing aligns with the entropy-coupling concession argument and the RECOMMEND verdict.
- Specificity (5/5): Every dimension cites identifiable submission content: equations (1)-(8), threshold parameters (θ=2.0, σ, W=8), sample sizes (6,785 / 9,802 / 136,394), AUC pair (0.416 vs 0.909), Section 3.6.4, ρ=-0.985, p=0.576 space-weather Φ, and named negative controls.
- Tone (5/5): Institutional third person sustained throughout ('the panel,' 'the submission,' 'the author'); no first person, no emojis, no pleasantries; findings stated plainly with hedges only where warranted.
Reviewer 2
- Rubric Adherence (5/5): All six dimensions scored with exact rubric names on the 1-5 scale; recommendation present; one justification per dimension.
- Internal Consistency (5/5): Justifications cohere with scores and the RECOMMEND verdict; the summary's acknowledgment of 'some citations are not fully verifiable' matches the citation_integrity (4) framing; no fatal-flaw narratives paired with high scores.
- Specificity (3/5): Mixed: cites Scheffer et al. 2009, Tononi et al. 2016, GitHub code reference, and entropy-coupling limitation by name, but several dimension justifications use generic phrasing ('explicit equations,' 'parameter choices,' 'logical conclusions') that could survive transposition to other quantitative submissions.
- Tone (5/5): Institutional third person; no first person or emojis; pleasantry-free; findings stated directly without praise cushioning.
Reviewer 3
- Rubric Adherence (5/5): All six dimensions present, correct names, 1-5 scale, single justification each, recommendation populated.
- Internal Consistency (5/5): The citation_integrity (3) justification correctly notes that unverifiable references limit confidence, aligning with the summary's 'some citations are unverifiable' caveat; remaining 5s are tied to substantive justification anchors (AUC values, Φ conservation, I-D plane); RECOMMEND verdict consistent with aggregate.
- Specificity (4/5): Cites the five-layer architecture, R/S/D/I/Φ metrics, financial/space-weather/EEG domains, AUC and variance results, Pielou 1966, Scheffer et al. 2009, Tononi 2004, and the Sum vs R×S dissociation; some phrasing ('robust,' 'rigorous,' 'transparent') is generic, but concrete content dominates.
- Tone (5/5): Institutional third person throughout; no first person, emojis, or pleasantries; direct statement of findings.
Reviewer 4
- Rubric Adherence (5/5): All six dimensions named exactly with 1-5 scores and one justification each; recommendation present.
- Internal Consistency (4/5): Justifications generally support scores. One mild tension: internal_consistency is scored 4 with the entropy-coupling acknowledgment cited as the qualifier, though that pairing is itself coherent; novelty (5) and the strong summary claims align with the RECOMMEND verdict. No score/justification contradiction rises to the level of a defect.
- Specificity (4/5): Cites I = R × S, D, Φ, θ = 2.0, N = 5, NASA OMNI2, Sleep-EDF, Mann-Whitney U, permutation tests, GitHub availability, and the AUC 0.909 / 2.0× variance figures; some phrasing ('rigorous,' 'genuinely new') is generic but anchored to specific content elsewhere.
- Tone (5/5): Institutional third person; direct findings; no first person, emojis, or pleasantries.
Reviewer 5
- Rubric Adherence (5/5): All six dimensions present with correct names and 1-5 scale; one justification each; recommendation populated.
- Internal Consistency (5/5): Justifications support scores: methodological_transparency (4) lists named gaps (hardware/runtime/seed, software versions, KDE specification); novelty_signal (4) is anchored to the entropy-coupling collapse to ~1D; the abstract-overclaim tension is flagged but not allowed to contradict the RECOMMEND verdict; aggregate matches summary.
- Specificity (5/5): Dense citation of submission content across every dimension: Eqs. 1-8, θ=2.0, 60-day baseline, sample/event counts (6,785/26, 9,802/51, 50/136,394), 95% CI [0.904, 0.913], ρ=-0.985, p=0.576, +13.3% / +13.5% Φ changes, 86% loss comparator, §4.1/§4.3/§4.4/§4.5/§4.6, Table 5 (0.826 vs 0.780), specific named references.
- Tone (5/5): Institutional voice maintained; no first person, no emojis, no pleasantries; hedges used only where warranted.
Reviewer 6
- Rubric Adherence (5/5): All six dimensions present with correct names and 1-5 scale; recommendation populated; one justification per dimension.
- Internal Consistency (5/5): Justifications support scores; the summary's I-D-coupling caveat aligns with internal_consistency (4); citation_integrity (4) framing matches the unverifiable-not-fabricated rubric guidance; verdict consistent.
- Specificity (3/5): Mixed: cites R, S, I, D, Φ, balance zone, θ=2.0, Scheffer et al., Tononi et al., Pielou, IIT, and the I-D / DET framing, but multiple dimension justifications use generic constructs ('explicit equations,' 'preprocessing steps') that would transfer to other quantitative submissions without modification.
- Tone (5/5): Institutional third person; direct findings; no first person, emojis, or pleasantries.
Reviewer 7
- Rubric Adherence (5/5): All six dimensions named correctly on the 1-5 scale; one justification each; recommendation field present.
- Internal Consistency (4/5): Justifications mostly support scores; the citation_integrity (3) reasoning ('inability to confirm citation existence or relevance ... weakens this dimension') applies the unverifiable-not-fabricated rule more strictly than the other reviewers do, but is internally coherent with its score; the remaining dimensions align with the RECOMMEND verdict.
- Specificity (4/5): Cites the five-layer architecture, R/S/D/I metrics, balance-zone thresholds, financial / space weather / EEG epochs, zero-parameter tuning, permutation controls, Φ conservation, Sum-vs-R×S dissociation, Pielou 1966, Scheffer et al. 2009, Dakos et al. 2012; some phrasing is generic but submission-specific anchors dominate.
- Tone (5/5): Institutional voice; no first person, emojis, or pleasantries; findings stated directly.
Review Quality Control is an internal ICSAC audit of the panel review itself. The four dimensions above are published as part of ICSAC's open review commitment.