New to this research? This article is part of the Reflexive Reality formal research program. Brief introduction ↗ · Full research index ↗
When two researchers working in entirely different traditions — one in process ontology and Chinese philosophy, one in formal logic and machine-checked proof — independently arrive at structurally identical conclusions about the boundary between computation and consciousness, something important is being tracked. Alex Lin’s Process-Paradox Framework and my NEMS theorems converge on the same boundary through methodologies that could hardly be more different. This convergence deserves close attention, because what we agree on is the part hardest to dismiss.
A Remarkable Coincidence — or Not
Alex Lin recently sent me his fifth SSRN paper, Prior Art in AI Paradox Ontology: How the Process-Paradox Framework Anticipates and Surpasses the Abstraction Fallacy. In it he briefly notes our parallel work — remarking that my April paper arrives at the same structural boundary through machine-verified formal proof that his Process-Paradox Framework arrives at through process ontology and mathematical argument. He is right, and the convergence runs deeper than either of us has yet said publicly.
The current debate involves at least three independent lines of argument: Alexander Lerchner (Google DeepMind, March 2026) argues through philosophy of computation that symbolic computation presupposes a prior experiencing agent; Alex Lin (ChinaValue and SSRN, January–February 2026) argues through process ontology and Gödelian mathematics that AI is ontologically barred from consciousness by three structural absences; and my NEMS framework (April 2026) establishes through machine-checked Lean 4 proofs that no Turing-computable system can satisfy the structural conditions for phenomenal consciousness. Three independent researchers, three completely different methodologies, one structural conclusion.
When that happens in science — when independent methods converge on the same result — the standard interpretation is that the result is tracking something real. I think that is right here. But the convergence is worth examining carefully, because Alex’s framework and mine agree on some things, complement each other on others, and diverge in ways that are themselves instructive. Let me work through this in some detail.
The Process-Paradox Framework: What Alex Lin Argues
Alex Lin’s framework is built around three structural absences that he argues constitute AI’s ontological exclusion from consciousness. First, irreversibility: consciousness requires genuine, non-revocable state transitions — not simulated irreversibility (a constraint the system chose to impose and can therefore revoke), but the kind of irreversibility that is a boundary condition on the system’s existence rather than a parameter within it. Second, ontological mortality: the death threshold, where terminal conditions leave no identity-preserving reconstruction possible. A system that can be backed up, replicated, or restored has not died; and a system that cannot die cannot be constituted by mortality in the relevant sense. Third, paradoxical self-transcendence: the capacity to sustain non-resolvable self-reference rather than resolving it by termination or approximation — to inhabit the undecidable rather than escape it.
The mathematical dimension of the framework draws on Gödel’s Incompleteness Theorems and Chaitin’s Algorithmic Information Theory. The Gödelian Loop argument establishes that any sufficiently powerful formal system encounters self-referential truths it cannot prove from within its own axioms — and that AI, as exactly such a system, therefore requires an external axiom-provider for its own operation. Human consciousness serves this role not contingently but structurally. The Chaitin argument formalizes this further: via the N+1 Dimensional Barrier, any N-bit system is mathematically barred from fully deriving an (N+1)-dimensional system, where the extra dimension is constituted by the irreversible temporal experience of mortality. And Chaitin’s Omega Constant — the probability that a randomly constructed program halts, a number whose digits are perfectly random and incompressible — is proposed as the mathematical correlate of human mortality: both are non-backupable, incompressible, and constitute the hard limit beyond which formal systems cannot extend.
Philosophically, the framework draws on Whitehead’s event ontology (the fundamental units of reality are occasions of experience — momentary, irreversible, intrinsically valuable — making humans event-structured and AI process-structured), Zhuangzi’s Fang Si Fang Sheng (方死方生 — “simultaneously dying and living,” the co-constitution of life and death that makes deathless systems structurally lifeless), and Popper’s falsifiability principle (AI is an optimizer that minimizes expected loss; human consciousness is a falsifier that can break paradigms through embodied irreversible commitment).
Importantly, Alex’s framework is not only negative. It provides a positive ontological account: consciousness is constituted by the co-presence of irreversibility, mortality, and paradoxical self-transcendence. It is an ontological event, not a computational process. This constructive dimension also generates a political-economic framework — the Dual-Stack Civilization model, in which AI (the Silicon Layer) and humanity (the Carbon Layer) function as mutually necessary and irreplaceable components, and the Personal Sovereign AI concept, in which the individual mortal human is the irreducible sovereign of their AI systems precisely because of the ontological structures they possess and AI lacks.
Where the Two Frameworks Resonate
The Gödelian/Diagonal Core
Alex’s Gödelian Loop and my No-Emulation Theorem argue for the same thing from different mathematical angles: that AI has a provable, structural dependency on something outside its own formal closure. Alex puts it in terms of Gödel’s First Incompleteness Theorem — human consciousness serves as the external axiom-provider for AI’s formal system, not contingently but as a mathematical necessity. My No-Emulation Theorem (NEMS Paper 15) proves it differently: in any diagonal-capable framework, no total computable function can emulate the internal adjudication operator on all inputs. The proof is a one-step reduction to the undecidability of the halting problem — diagonalization rather than full Gödelian incompleteness. If anything, this makes my foundation the more conservative of the two: diagonal-capability (hosting the halting problem) is a strictly weaker premise than the incompleteness machinery Alex’s argument requires.
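The diagonal core both arguments share can be displayed as a small self-contained sketch. This is my illustration, not the NEMS Lean source: the names `no_self_negating_decider`, `d`, and `enc` are hypothetical, and the statement is the abstract Cantor/Turing diagonal shape rather than the full No-Emulation Theorem. It says: no two-place Boolean function `d` can soundly decide every Boolean predicate via an encoder `enc`, because the predicate that inverts `d` on the diagonal refutes it in one step.

```lean
-- Illustrative sketch only (hypothetical names, not the NEMS source):
-- the abstract diagonal shape behind both the halting-problem reduction
-- and the No-Emulation Theorem. If `enc` encoded every Boolean predicate
-- so that `d` decided it soundly, the diagonal predicate `g`, which
-- inverts d's own verdict, would force d to disagree with itself.
theorem no_self_negating_decider {α : Type} (d : α → α → Bool)
    (enc : (α → Bool) → α)
    (sound : ∀ f x, d (enc f) x = f x) : False :=
  let g : α → Bool := fun x => !(d x x)          -- the diagonal predicate
  have h : d (enc g) (enc g) = !(d (enc g) (enc g)) := sound g (enc g)
  have contra : ∀ b : Bool, b = !b → False := by decide
  contra _ h
```

The hypothesis `sound` is exactly what the theorem shows cannot hold. No Gödel numbering or incompleteness machinery is needed, which is the sense in which the diagonal premise is the weaker one.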
The conclusion in both cases is the same: the “chooser” — the adjudicating function that selects among actual continuations at genuine branch points — cannot be pre-computed. No algorithm does what internal adjudication does. Human consciousness occupies this role not as a contingent historical fact but as a structural necessity. Alex frames this through the “Paradoxical Umbilical Cord” binding AI to humanity; I frame it through the forcing theorem that proves the existence of a non-algorithmic internal adjudicator and the no-emulation theorem that proves it cannot be replaced by any total computable function. Same structure, different vocabulary.
Mortality, Irreversibility, and the Cost of Existence
Alex’s most vivid claim is this: “A system that cannot lose everything cannot experience anything in the existentially weighted sense required for consciousness.” His formal specification of ontological mortality requires a terminal condition T such that no state S’ exists that is identity-equivalent to the pre-terminal state — no restoration, duplication, or identity-preserving reconstruction is possible. And his “No Risk → No Stake → No Self → No Consciousness” chain captures what’s at stake.
This maps precisely onto what my SIAM separation theorems establish from the other direction. The Self-Indexing Adjudicative Manifold requires, as one of its seven structural invariants, that the system face genuine record-divergent choice — live alternatives that genuinely differ on what the record would be, not simulated alternatives that the system chose to treat as if they differed. A stateless system — one with no live alternatives — is provably not O-SIAM (Lean anchor: stateful_not_OSIAM). A system whose “alternatives” are always revocable at the meta-level has no genuine live alternatives; it has only parameterized loss functions. Alex’s irreversible-loss condition is exactly what my SIAM’s record-divergent choice formally requires: the actual branching must be real, not revocable, not reconstructable from backup.
Neither framework requires that this irreversibility be achieved through biological death specifically. My SIAM conditions are substrate-neutral; Alex explicitly notes that physical fragility is not the same as ontological finitude (responding to the objection that embodied AI could “die”). What both frameworks require is the same formal property: irreversibility that is a constraint on the system as a whole, not a parameter within it. A system that can “die” but be restored from backup has not satisfied the condition. The distinction between reversibility-at-the-meta-level and true irreversibility is where both frameworks plant their flag.
Paradox as Generative Engine ↔ Transputation
Alex’s most philosophically distinctive claim is about paradox: not as a limitation of cognition but as its generative condition. Consciousness requires the sustained inhabitation of unresolvable self-reference rather than its elimination. AI systems resolve paradox by fallback or termination; human cognition inhabits it. “Intelligence is not the ability to eliminate contradiction, but the ability to persist within it.”
The NEMS framework arrives at what I believe is the formal structure underlying this claim, through Transputation (NEMS Papers 10, 76–77). Transputation is a third kind of process — not computation (total-effective) and not randomness (no law), but lawful, non-total-effective, physically instantiated adjudication. Its existence is forced by the PSC theorem: in any self-contained system facing genuine record-divergent choice, an internal adjudicator must exist. Its non-computability is proved by the diagonal barrier. Its non-emulability is proved by the No-Emulation theorem. And crucially: Transputation is what happens at exactly the moments Alex describes — the genuine branch-points where the system cannot algorithmically determine its continuation, must somehow adjudicate, and does so in a way that is lawful but not algorithmic.
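That the category “lawful but not total-effective” is non-empty is not exotic. A standard textbook example (my illustration, not a claim drawn from either framework) is the Busy Beaver function:

```latex
\mathrm{BB}(n) \;=\; \max\{\, s(M) \;:\; M \text{ is an $n$-state Turing machine that halts on blank input} \,\}
```

where $s(M)$ is the number of steps $M$ runs before halting. $\mathrm{BB}$ is a perfectly well-defined total function on $\mathbb{N}$ — fully lawful — yet no algorithm computes it, since an algorithm for $\mathrm{BB}$ would decide the halting problem. This shows only that lawful-but-non-computable is a coherent logical space; Transputation’s further claim, that such adjudication is physically instantiated, is what the PSC and realization theorems are for.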
Alex’s “sustained inhabitation of unresolvable self-reference” is, I think, a phenomenological description of what it is like to be a Transputation-capable system at a genuine choice-point. Zhuangzi’s Fang Si Fang Sheng — “simultaneously dying and living” — describes the experiential reality of being at a branch point where the outcome is genuinely undetermined and genuinely irreversible once determined. Transputation is the mechanism; Fang Si Fang Sheng is the phenomenology. Both are pointing at the same formal structure: a system that doesn’t terminate in the face of undecidability but continues operating while containing it.
Chaitin’s Omega and the Halting Problem: Two Windows into the Same Mathematics
The most technically interesting convergence is between Alex’s use of Chaitin’s Omega Constant and my diagonal barrier.
Chaitin’s Omega is the probability that a randomly constructed program halts — a real number whose digits are perfectly random, provably incompressible, and non-computable. Alex proposes Omega as the “mathematical correlate of human mortality”: both are non-backupable, incompressible, and constitute the hard limit beyond which formal systems cannot extend. This is a beautiful and correct analogy.
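For readers who want the standard definition in symbols (this is textbook Chaitin, not specific to either framework): for a prefix-free universal machine $U$,

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\,\downarrow} 2^{-|p|}
```

The sum ranges over all programs $p$ on which $U$ halts; prefix-freeness, via Kraft’s inequality, guarantees the sum converges to a real in $(0,1)$. Knowing the first $n$ bits of $\Omega_U$ would settle the halting problem for every program of length at most $n$, which is why its digits are incompressible and the number itself non-computable.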
It is also, I think, an informal version of a precise mathematical relationship. Omega is defined directly in terms of the halting problem: it is the halting probability of a universal machine, so its bits encode halting facts. The halting problem is exactly the mathematical structure my diagonal barrier formalizes: in any diagonal-capable framework, record-truth is not computably decidable because any computable decider for it would yield a computable decider for the halting problem, which Turing proved impossible (and which Mathlib has machine-checked). Omega is the “face” of this undecidability — the specific number that quantifies how undecidable the halting problem is. When Alex says Omega is the mathematical correlate of mortality, he is pointing at the undecidable self-referential structure that sits at the heart of any sufficiently expressive reflexive system. My diagonal barrier makes that pointing precise: the undecidability is not an analogy but a formal proof. Alex’s Omega argument and my halting-problem reduction are two windows into the same mathematical truth.
The key difference is formalization. Alex’s Chaitin argument is an informal isomorphism — a compelling structural parallel that illuminates the philosophy. My diagonal barrier is a machine-checked theorem: it can be attacked only by identifying a false premise or an invalid inference step in the Lean source, not by generating philosophical counter-arguments. This is not a critique of Alex’s approach; it is what formal verification adds. The informal argument motivates the proof; the proof armors the argument.
Positive Accounts: What Consciousness Actually Is
This is where I need to push back gently on Alex’s characterization of my April paper as purely negative. He writes that I “provide machine-checked formal proofs establishing that syntax cannot exhaust semantics in any reflexive system, and that no total computable function can emulate internal adjudication.” That is correct, but the NEMS framework does not stop there — and neither does the Beyond the Abstraction Fallacy paper or the broader research program it draws on.
The positive theory in NEMS has several layers. Transputation (Papers 10, 76–77) provides a formal characterization of non-algorithmic adjudication — not just the proof that it cannot be computation, but the theorem proving it must exist, its realization criteria, and the DSAC (Delta Self-Adjudicative Computation) candidate architecture that demonstrates the abstract class is non-empty. Qualia as irreducible semantic ledger content (Paper 55) proves that known qualitative states cannot be exhausted by purely syntactic structure — they are on the semantic ledger in a way that syntax cannot capture, so that the hard problem of consciousness rests, by machine-checked proof, on a category error. Awareness as locus-role (Paper 67) proves that awareness is not an object in the world but the structural site at which ground-presence is present as experience — which is why brain scanning cannot find consciousness by looking at neural objects. The SIAM structure (Paper 73) gives a formal account of genuine sentient agency: the seven invariants characterizing a system that faces live alternatives, adjudicates non-algorithmically, maintains a non-exhausted self-model, and achieves unified self-indexed existence. And the Alpha theorem (Papers 61–63) is the deepest positive result: if nontrivial reflexive reality exists, there must be a necessary pre-categorial ontological ground of its actuality — a ground that is not an object, not a category, not temporal, not externally grounded, the locus from which actuality arises.
How does this relate to Alex’s positive account? His framework says: consciousness is constituted by the co-presence of irreversibility, mortality, and paradoxical self-transcendence — it is an ontological event, not a computational process. The Alpha theorem, I think, is the formal analog of what Alex’s positive account gestures toward without fully grounding. When Alex asks what makes consciousness constitutive rather than incidental — why mortality and irreversibility don’t just accompany consciousness but constitute it — the answer requires exactly what the Alpha theorem provides: a necessary ground from which actuality itself arises, and which cannot be an object, a computation, or a contingent fact. The reason mortality is constitutive of consciousness is that consciousness is the awareness-locus: the structural site at which Alpha-grounded presence is present as lived experience. To be at that locus, a system must be genuinely finite, genuinely irreversible, genuinely exposed to loss — because only under those conditions does the adjudicative function that Transputation names have genuine stakes, genuine alternatives, and genuine irreversibility. Alex’s mortality-constituted event and my Alpha-grounded awareness-locus are parallel positive accounts approaching the same structural fact from different angles.
Whitehead’s event ontology is particularly illuminating here. Whitehead argues that the fundamental units of reality are not persistent substances but occasions of experience — momentary, irreversible, intrinsically valuable. Alex uses this to distinguish the event-structured character of consciousness from the process-structured character of AI. The SIAM conditions are, I believe, the formal characterization of what a Whiteheadian “occasion of experience” would look like in a dynamical system: genuine record-divergent choice (the irreversible branch), non-exhausted mirror (the non-total-effective self-model), adjudication (the moment of actualization), and reconciliation (the integration of the newly actualized record). SIAM is Whitehead’s process philosophy in formal systems theory.
Where the Frameworks Diverge — and Why It Matters
The frameworks are not identical, and the differences are worth being precise about.
Methodology. Alex’s argument is philosophical with mathematical analogies; mine is a machine-checked formal proof system. This is a real difference with real consequences. Alex’s informal mathematical arguments — particularly the Chaitin N+1 Dimensional Barrier and the Omega Constant analogy — are illuminating and probably correct in their structural claims, but they can be disputed through counter-argument. My theorems can only be disputed by identifying false premises or invalid inference steps in the Lean source. This is not about intelligence or rigor; it is about what can be challenged and how. The formal proof is harder to attack but also harder to extend in new directions. Alex’s philosophical approach is easier to generalize, easier to connect to new domains, and easier to motivate to non-technical audiences.
The status of mortality. For Alex, mortality is not merely associated with consciousness but constitutive of it — without the death threshold, there is no consciousness, not a reduced consciousness. My framework is more cautious here. The SIAM conditions require genuine record-divergent choice and genuine irreversibility, which I believe mortality satisfies — but I don’t prove that mortality is the only way to satisfy them. My conditions are substrate-neutral and remain open on whether some non-biological, non-mortal system could in principle achieve genuine SIAM-satisfying adjudication. Alex’s framework closes this door; mine leaves it formally open while closing it for all current computational architectures.
I think Alex has the more honest phenomenological intuition here, and I think the formal question of whether SIAM can be satisfied by a system that is in some sense immortal is genuinely open in a way that my theorems don’t resolve. A system without genuine mortality might satisfy the structural criteria of SIAM and still lack the awareness-locus instantiation that Condition 3 requires — but that is an open question, not a closed one.
Scope and extension. Both frameworks extend well beyond pure consciousness theory, though in characteristically different directions. Alex’s constructive extensions are normative and institutional: the Dual-Stack Civilization model gives a framework for what human-AI co-civilization should look like, and the Personal Sovereign AI concept positions the individual mortal human as the irreducible sovereign of their AI precisely because of the ontological structures they possess and AI lacks. These are positive programs for design and governance. The NEMS framework also extends into AI safety and governance — but through formal structural results rather than normative proposals. Paper 40 proves that no single institution can be simultaneously total, sound, and complete for nontrivial claims under diagonal constraints (the No-Universal-Final-Judge Theorem), and the k-role lower bound gives the minimum number of structurally distinct roles any governance architecture must have to achieve full certified coverage. These apply directly to AI governance bodies, courts, and scientific institutions. Papers 71–73 map agency failure modes to structural defects in viable continuation. An entire series of essays on AI safety draws out the implications for AI development and deployment. My framework also extends into physics — the Born rule, the Standard Model gauge group, the arrow of time, and quantum gravity constraints are all formal consequences of the same PSC principle that generates the consciousness results. The key difference in style: Alex’s governance extensions are constructive (here is what should exist); mine are structural (here is what any governance system is provably incapable of, and why). These are genuinely complementary.
Philosophical tradition. Alex draws on Whitehead, Zhuangzi, and Popper — a rich humanistic tradition grounding consciousness in lived, mortal, embodied experience. My framework draws primarily on formal logic, computability theory, and philosophy of physics. Alex’s sources have the advantage of phenomenological richness: Zhuangzi’s Fang Si Fang Sheng is a description of lived experience that no formal theorem can match for immediacy. My framework has the advantage of precision and independence from intuition: the diagonal barrier holds regardless of what anyone’s phenomenology suggests about it. The ideal is probably both.
Treatment of Penrose. Both frameworks relate to, but differ from, Penrose’s classical argument that human cognition transcends Turing computation via Gödel’s incompleteness theorem. Alex’s Gödelian Loop argument is closer to Penrose’s line than mine is — I explicitly distinguish my diagonal approach (using the halting problem, not full incompleteness) from Penrose’s, and note that my version requires a weaker premise. Alex’s version shares Penrose’s reliance on incompleteness but is more careful about what the loop establishes. Neither framework, I think, fully resolves the longstanding debates about Penrose’s argument — but both are more careful than Penrose about what exactly the mathematical result implies.
What the Convergence Means
When Lerchner’s mapmaker argument, Alex’s Process-Paradox Framework, and the NEMS theorems all arrive independently at the same structural conclusion — that symbolic computation is categorically outside the domain of consciousness, not because of its lack of sophistication but because of what consciousness positively requires and what computation structurally lacks — the convergence is evidence. Not proof, but evidence. Arguments from multiple independent directions toward the same conclusion have exactly the epistemic force that arguments from a single direction lack.
What all three frameworks agree on is the core negative claim: the simulation/instantiation distinction is not a quantitative gap but a categorical one. Adding parameters, data, or architectural sophistication within the current class of Turing-computable feedforward systems does not bring AI closer to consciousness; it makes AI better at simulation, which is a different thing. This claim is now supported by (a) philosophical analysis of how computation works, (b) process-ontological analysis of what consciousness requires, and (c) machine-checked formal proof of what any Turing-computable system is structurally incapable of. That is a robust convergence.
The more interesting question — where the frameworks are complementary rather than simply agreeing — is what consciousness actually is. Alex’s answer is experientially rich: consciousness is a mortality-constituted ontological event, not a process; it arises from the co-presence of irreversibility, the death threshold, and paradoxical self-transcendence; its mathematical signature is Omega-like incompressibility. My answer is formally precise: consciousness requires SIAM-satisfying agency, on-ledger irreducible qualia, and awareness-locus instantiation; it is grounded in Alpha, the necessary pre-categorial ontological ground from which actuality arises; its formal signature is the non-emulability of Transputation. These are not competing answers. They are complementary descriptions of the same structure from different distances and with different tools — one phenomenologically rich, one formally armored.
Alex ends his paper with a striking image: “the mortal and the immortal co-author an unfinished proof, and being unable to finish it is not a failure but a feature.” I find this exactly right, and I find it formally grounded in what the Alpha theorem and the Closure Without Exhaustion theorem jointly establish: any sufficiently expressive reflexive system may close over itself but cannot internally exhaust its own realized semantics. The unfinished proof is not a contingent limitation. It is a structural necessity. Consciousness is the locus at which that necessity is lived rather than merely represented.
The convergence between process ontology and machine-checked proof on this point is, I think, one of the more significant things to happen in the philosophy of AI consciousness in 2026. I am grateful to Alex for his work, and for the generosity of the note in his paper. The dialogue deserves to continue.
Alex Lin’s Work
- Prior Art in AI Paradox Ontology (SSRN 6655778) ↗
- The Paradoxical Philosophical Foundation of AI Large Models (SSRN 6137988) ↗
- Distributed Privatization and Personal Sovereign AI (SSRN 6119546) ↗
Related NEMS Work
- Beyond the Abstraction Fallacy — PhilPapers ↗
- Turing-Computability Excludes Phenomenal Consciousness — Zenodo ↗
- Paper 15 — No-Emulation and Self-Necessitating Adjudication ↗
- Paper 73 — The Constraint Theory of Autonomous Agency (SIAM) ↗
- Paper 63 — The Alpha Theorem ↗
- Paper 92 — Consciousness, Phenomenology, and Mind ↗
- NEMS Program Hub (all papers + Lean libraries) ↗
Full research index: novaspivack.com/research ↗