Every formal research program has a vocabulary. The words matter — not as jargon, but because each one names a distinction that the theory cannot function without. This article is a reader’s lexicon for the Reflexive Reality program: what each key concept means, why it is needed, and how the pieces fit together.
Reference article — no prerequisite reading required. This lexicon is designed to be read before or alongside any article in the Reflexive Reality research program. Each concept is explained from first principles. Program introduction ↗ · Full research index ↗
How to Use This Article
The Reflexive Reality research program uses a set of interlocking concepts that recur across all its papers and essays. Some of these concepts — syntax, semantics, fixed point — come from logic and mathematics. Others — record, closure, realization, locus — are specific to this framework. A reader who doesn’t have these concepts clearly in hand will find the central results hard to track, even if each individual sentence is clear.
This article defines all of them. It is organized in eight layers, each building on the previous. You do not need to read it straight through — use the section headings to find what you need. But if you are new to the program, the order matters: each layer uses the vocabulary of the layers before it.
Layer 1 — The Basics of Formal Systems
A formal system is a set of symbols, rules for combining them (syntax), and rules for deriving new combinations from existing ones (proofs). Examples: arithmetic, propositional logic, a programming language. The key feature is that everything happens by symbol manipulation — no appeal to meaning is required for the derivation to proceed.
The distinction between syntax and semantics is the most important distinction in all of formal logic, and it is the pivot on which the entire Reflexive Reality program turns.
Syntax is the formal structure: the symbols, the rules, the derivations. It is purely mechanical. A computer can check whether a proof is syntactically valid without understanding a single word of it.
Semantics is the meaning: what the symbols refer to, what makes a sentence true or false. A sentence can be syntactically valid (well-formed, provable) while being semantically false — or semantically true while being syntactically unprovable. This gap between syntax and semantics is the source of every incompleteness theorem.
A model (in the logical sense) is a mathematical structure that makes a set of sentences true. Given a formal language with its syntax, a model is an interpretation — an assignment of meanings to the symbols — under which the sentences come out true. The same set of axioms can have many different models.
Provability is a syntactic property: sentence S is provable if there exists a derivation from the axioms using the inference rules. Truth is a semantic property: S is true if it holds in the model. In a complete system, every true sentence is provable. In an incomplete system — which is the generic case for sufficiently expressive systems — there are true sentences that cannot be proved. This is Gödel’s discovery.
An effective procedure (equivalently: an algorithm, a total computable function) is a mechanical procedure that halts on every input and produces an output. “Total” means it always terminates. This is the formal definition of what we ordinarily mean by “rule-following” or “deterministic computation.” Decidability is the question of whether there is an effective procedure that correctly answers yes or no to every question in some domain.
Encoding (or Gödel numbering) is the move that makes self-reference possible. The idea: assign a number to every formula, every proof, every computation. Once formulas are numbers, a formal system can write sentences about its own formulas — sentences that talk about what is provable, what is computable, what is true about the system itself. This is the technical move that unlocks all the barrier theorems, and it is the key idea behind the NEMS concept of the record language.
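As an illustration, here is the classic prime-exponent encoding in miniature. The six-symbol table is a toy assumption for this sketch, not the coding Gödel actually used, but the mechanism — formula in, single number out, and back again — is the real one:

```python
# Toy Gödel numbering: encode a formula over a small symbol table
# as a single natural number via prime-exponent coding, and decode
# it again by reading off the exponents.
SYMBOLS = {'0': 1, 'S': 2, '+': 3, '=': 4, '(': 5, ')': 6}

def primes():
    """Yield the primes 2, 3, 5, ... by trial division."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    """Encode formula s_1 s_2 ... s_k as prod p_i ** code(s_i)."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by factoring out each prime's exponent."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return ''.join(out)

n = godel_number("S0=0")   # 2**2 * 3**1 * 5**4 * 7**1 = 52500
print(n, decode(n))        # prints: 52500 S0=0
```

Once formulas are numbers in this way, statements about formulas become statements about numbers, which arithmetic itself can express.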
A fixed point is an object that maps to itself under some transformation. If f is a function and f(x) = x, then x is a fixed point of f. In formal logic and computation theory, fixed-point constructions are ubiquitous: the Gödel sentence is a fixed point of the provability predicate. The Kleene recursion theorem says every computable transformation of program codes has a fixed-point program — a program whose behavior is identical to that of the program obtained by applying the transformation to its code. Fixed points are the engine behind every self-referential construction in the program.
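The flavor of the recursion theorem can be seen in miniature in a quine — a program whose output is its own source code, i.e. a fixed point of the identity transformation on program texts. The two-line core below is the quine; the comments are not part of it:

```python
# A two-line quine: running the core prints exactly its own source.
# The string s is a template for the whole program; applying the
# template to itself (s % s) reconstructs the source from within.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Executing the printed output reproduces the same output again, which is exactly the fixed-point property.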
Diagonalization is the technique for constructing fixed points and impossibility results. The basic idea: build an object that “refers to itself” by using encoding to say something about its own code. The diagonal construction produces a sentence that says “I am not provable” (Gödel), a function that says “I halt if and only if I don’t” (Turing), a predicate that says “I am true if and only if I am false” (Tarski). Every barrier theorem in classical logic is a diagonal argument. In NEMS, the master fixed-point theorem captures the common structure of all these diagonal arguments in one unified form.
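The construction is easy to exhibit concretely in Cantor's original form. In this sketch the enumerated sequences and their number are invented for the example; the point is only the diagonal step itself:

```python
# Cantor-style diagonalization: given an enumeration of binary
# sequences (each a function n -> bit), build a sequence that
# differs from the i-th enumerated sequence at position i.
def diagonal(enumeration, n_terms):
    """Flip the i-th bit of the i-th sequence, for each i."""
    return [1 - enumeration[i](i) for i in range(n_terms)]

# Toy enumeration of three sequences.
listed = [
    lambda n: 0,      # all zeros
    lambda n: 1,      # all ones
    lambda n: n % 2,  # alternating 0, 1, 0, 1, ...
]

d = diagonal(listed, len(listed))
print(d)  # prints: [1, 0, 1]
# d disagrees with every listed sequence somewhere, so it is not
# in the list -- no enumeration can be exhaustive.
```

The barrier theorems all run this same move with "sequence" replaced by "sentence," "program," or "predicate."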
Layer 2 — The Classical Barrier Theorems
Five classical results form the background landscape for the NEMS program. They are not merely analogies or inspiration — they are proved to be special cases of the master fixed-point theorem at the heart of the research program.
- Gödel incompleteness: No consistent, sufficiently expressive formal system can prove all its own arithmetic truths. There is always a true sentence the system cannot prove — and this sentence, when decoded, says of itself “I am not provable in this system.”
- Turing halting undecidability: No algorithm can decide, for every possible program-input pair, whether that program halts. The proof constructs a self-referential program that halts if and only if it does not.
- Tarski truth undefinability: No sufficiently expressive formal language can contain its own truth predicate. A formula that says “I am true” generates a Liar paradox. Syntax cannot absorb semantics from within.
- Kleene recursion theorem: For any computable transformation of program codes, there is a fixed-point program — a program that behaves exactly like the program obtained by applying the transformation to its own code. This is the constructive (positive) face of diagonalization: not just “you cannot decide X” but “there is always a self-referential program with property X.”
- Löb’s theorem: A formal system can prove “if P is provable, then P is true” only for sentences P that it can already prove outright. This tightly constrains what a system can assert about its own provability — in particular, it blocks naive self-certification.
These five results are typically taught in isolation across separate courses. The master fixed-point theorem (Papers 26 and 51) unifies them: each is a special case of instantiating a single abstract self-reference interface with different formal settings. MFP-1 (the fixed-point half) gives you the Kleene theorem, the Gödel sentence, the Löb argument. MFP-2 (the no-total-decider half) gives you incompleteness, halting undecidability, and truth undefinability. The program’s flagship theorem — Closure Without Exhaustion — is a further lifting of the same machinery to a strictly broader target.
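Of the five, the halting construction is the easiest to model in a few lines. In this sketch an infinite loop is replaced by a sentinel value so the demonstration itself terminates, and the candidate deciders are toy stand-ins (no real one can exist, which is the theorem's point):

```python
LOOPS = "loops forever"   # sentinel standing in for nontermination

def run_diagonal(prog, halts):
    """Simulate Turing's diagonal program on `prog`, given a
    candidate decider halts(program, input) -> bool: do the
    opposite of whatever the decider predicts for prog on itself."""
    if halts(prog, prog):
        return LOOPS      # decider said "halts", so loop forever
    return "halts"        # decider said "loops", so halt at once

# Feeding the diagonal program to itself refutes any candidate:
# the actual behavior is always the opposite of the verdict.
always_yes = lambda p, x: True
always_no  = lambda p, x: False
print(run_diagonal(run_diagonal, always_yes))  # prints: loops forever
print(run_diagonal(run_diagonal, always_no))   # prints: halts
```

Whatever the decider answers about the diagonal program run on itself, the diagonal program does the reverse, so no total decider can be correct everywhere.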
Layer 3 — The NEMS Structural Primitives
Closure and Perfect Self-Containment
Closure is the property of having no outside. A closed system is one in which every relevant operation, selection, and determination happens from within. The complement is openness: an open system can import answers, criteria, or inputs from an environment it does not contain.
Perfect Self-Containment (PSC) is the formal statement of closure as a constraint on universes. A universe is perfectly self-contained if it does not rely on anything outside itself to determine its own structure, select among candidate realizations of its laws, or execute the consequences of those laws. No external oracle, no external selector, no external model. PSC is the single premise from which the entire NEMS program derives its results.
PSC sounds like it just says “the universe is all there is” — which seems obvious. But the moment you make it precise and take it seriously as a formal constraint, it becomes enormously productive. It acts as a sieve: most candidate physical theories, most candidate probability rules, most candidate ontologies fail the PSC test because they quietly import something external. NEMS makes this visible and derives what survives.
Records, Record Language, and the Semantic Ledger
A record is a stable, non-erasable inscription of a fact about the universe. Records are what the universe “writes down” as it evolves — the actual events, states, interactions that have occurred and remain part of the permanent inventory of what has happened. The term is chosen precisely: a record is not a belief, not a representation, not a model. It is a durable actual inscription.
The record state is the total collection of all records at a moment — the universe’s full inventory of inscribed facts.
The record language is the formal language in which records are expressed — the syntax by which the universe describes events to itself. Like any formal language, it has a syntax (what counts as a well-formed record) and a semantics (what the records mean, what makes them true).
The record fragment is the portion of the record language that contains self-referential records — records that make claims about other records, or about the record-making process itself. This is the diagonal-capable fragment: the part of the language rich enough that all the barrier theorems apply to it. A universe that contains universal computers necessarily contains a rich record fragment.
No-overwrite is the requirement that records are stable — the past cannot be erased. This is not merely a physical contingency; it is a structural consequence of PSC. If records could be overwritten, the universe’s self-description would have no fixed point to diagonalize against, and the entire architecture of closure collapses. No-overwrite is also what gives time its arrow: the past is the direction of permanently inscribed records; the future is the direction of not-yet-inscribed records.
Erasure is therefore what is structurally forbidden — not just physically prevented, but formally ruled out by the closure constraints. A universe that could erase its own records could in principle write a complete self-description (since it could keep revising it), but such a universe would violate PSC.
The semantic ledger is the totality of what is actually inscribed in the universe’s records — the full inventory of real semantic content. On-ledger content is content that exists within the semantic ledger: actual events, actual facts, actual experiences. Off-ledger content is content that would need to exist outside the ledger — in some external realm, some background structure, some supplementary reality. PSC forbids off-ledger content from doing any explanatory work: if something is off-ledger, it makes no difference to any record, and is therefore semantically null.
Diagonal Capability and Record-Divergent Choice
Diagonal capability is the property of being expressive enough to form records about records — to contain, within the record language, sentences that make claims about what is and is not in the record state. Any system that can run a universal computer is diagonal-capable: it can encode descriptions of its own computations and ask questions about them. Our universe is diagonal-capable. Any sufficiently powerful AI system is diagonal-capable. This is the formal precondition for all the barrier theorems to apply.
Record-divergent choice is a choice event in which the current record state does not uniquely determine what happens next — multiple continuations are open, and the universe must select among them. This is where simple determinism breaks down. It is not the same as randomness (which is selection by external noise injection). It is the condition under which the universe’s internal adjudication is genuinely doing work.
Admissible and Viable Continuation
An admissible continuation is a next state that respects all the closure constraints — a state the universe is structurally permitted to transition into. Not all logically possible next states are admissible: some would violate PSC, some would require external selection, some would require erasing records. Admissibility is the formal notion of “what the universe is allowed to do.”
Viable continuation is a stronger property: a system has viable continuation if it can keep operating within its constraints over time — not just take one admissible step, but sustain the capacity for admissible steps indefinitely. The four failure modes of viable continuation (Paper 71) are the four ways a system can lose this capacity: insufficient resources, insufficient self-model, insufficient reconciliation, and regime-boundary failure.
Layer 4 — The Selection Problem and Its Resolution
Model Selection — The Problem NEMS Names
Model selection is the act of choosing which model — which interpretation, which realization — a formal system is instantiated in. In ordinary physics, this happens quietly and externally: the physicist writing down a theory selects the gauge group, the coupling constants, the number of dimensions. Nothing in the theory itself performs this selection. An external agent (the physicist, the universe-instantiation process) does it.
NEMS asks: in a closed universe with no outside, who performs model selection? The answer cannot be “an external agent” — there is none. The answer cannot be “nothing” — something must determine why this universe has these laws rather than others. The answer must therefore come from within the universe itself. NEMS is the formal development of what this internal selection requirement implies.
Internal vs. external is therefore the core distinction NEMS enforces. An internal selection criterion is one the universe generates from within its own records and laws. An external selection criterion is one imported from outside — a background Platonic realm, a meta-law, a multiverse measure, a God-from-outside, a simulator. PSC forbids external selection criteria. Any theory that quietly relies on one — even in the form of an unexamined default — is not a theory of a closed universe.
The No-Free-Bits Principle
The No-Free-Bits principle is the formal prohibition on hidden external determinacy. A “free bit” is an externally supplied answer to a question the universe’s internal structure leaves open — a covert external input that does work without being acknowledged as such. The No-Free-Bits principle says: in a PSC universe, there are no free bits. Every determination is either made by internal structure or is genuinely open (record-divergent). Nothing sneaks in from outside.
This rules out two apparent solutions to record-divergent choice. The first is randomness: simply inject stochastic noise at underdetermined choice points. But stochastic noise is a free bit — it is external entropy injection. PSC forbids it as a fundamental account. The second is an oracle: appeal to an external source that supplies the right answer. An oracle is the most explicit form of external selection, and is equally forbidden.
Transputation — The Third Mode
Once randomness and oracles are ruled out, a third mode is forced. Transputation (Paper 76) names this mode precisely. It is the class of processes that are:
- Internal — the resolution comes from within the system, not from any external selector.
- Lawfully admissible — the choices respect the admissible continuation constraints. Transputation is not arbitrariness or noise.
- Non-algorithmically total on the diagonal fragment — the process cannot be replaced by a total computable function on the self-referential record fragment. This is forced by the Determinism No-Go (Paper 12).
- Genuinely executing — the process actually runs in real time, producing determinations. It is not a static pre-scripted assignment.
Adjudication is the word the program uses for what transputation does: it genuinely resolves among open alternatives, not by following a rule and not by flipping a coin, but by an internal lawful process that cannot be fully algorithmized on the diagonal-capable fragment.
Relaxation to coherence is the mechanism of DSAC (Delta Self-Adjudicative Computation), the candidate concrete realization of transputation in Paper 77. Rather than minimizing an external objective or traversing a search tree, the system drives a reflexive constraint graph toward a self-consistent fixed point — a state where all constraints simultaneously cohere with the evolving record state. The landscape itself changes as constraints evolve, so this is not gradient descent and not annealing.
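To convey the shape of relaxation to coherence — and only the shape: the generic fixed-point loop below is not the DSAC algorithm, and the two constraints are invented for the example — one can iterate constraint repairs until the state stops changing:

```python
# Illustrative relaxation loop: repeatedly let each constraint
# propose a repair to the state, stopping at a state that every
# constraint leaves unchanged (a self-consistent fixed point).
def relax(state, constraints, max_rounds=1000):
    """Drive `state` (a dict) to a point all constraints accept."""
    for _ in range(max_rounds):
        new = dict(state)
        for c in constraints:
            new = c(new)          # each constraint proposes a repair
        if new == state:          # nothing changed: coherence reached
            return state
        state = new
    raise RuntimeError("no coherent fixed point found")

# Hypothetical constraints: y must track x, and x must be >= 3.
c1 = lambda s: {**s, 'y': s['x']}
c2 = lambda s: {**s, 'x': max(s['x'], 3)}

print(relax({'x': 1, 'y': 0}, [c1, c2]))  # prints: {'x': 3, 'y': 3}
```

The loop has no external objective function being minimized; the stopping condition is mutual consistency of the constraints with the state, which is the structural feature the "relaxation to coherence" label points at.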
Layer 5 — Exhaustion, Remainder, and the Flagship Theorem
These concepts address what happens when a system tries to fully account for itself — to produce a complete internal self-description that captures everything about what it is.
Exhaustion and the Inexhaustible Remainder
Exhaustion is the (impossible) condition of a system having fully absorbed itself into its own self-description — every fact about the system captured by some internal representation, no remainder, total self-coincidence. This is what a Theory of Everything with a complete internal truth predicate would achieve; it is what a mind with total self-transparency would achieve; it is what a formal system that could prove all its own truths would achieve. All three are proved impossible for sufficiently expressive systems.
The inexhaustible remainder is what is always left over. No matter how rich a self-representation a system builds, there is always content that is realized in the system but not captured by that representation. This is not a practical limitation of current representations — it is a structural impossibility. The remainder is permanent and inexhaustible: you cannot get rid of it by adding more representation, because adding more representation creates new self-referential facts that are themselves not yet captured.
Closure Without Exhaustion is the flagship theorem (Paper 51, machine-checked in Lean 4): a system can be closed — perfectly self-contained, fully internal, without any outside — without being exhausted. Reflexive closure and total self-description are compatible with one another only in impoverished systems that lack the expressive power to generate self-referential facts. For any sufficiently expressive system, closure forces inexhaustibility. The universe is closed; therefore it is inexhaustible. Lean anchor: closure_forces_inexhaustible_remainder.
Residual, Adequacy, and Certification
The residual is the leftover content after any internal self-representation: what is realized in the system but not captured by the representation. In the group-extension formalism used in the program, the residual has a precise algebraic meaning — it is the obstruction to the fiber being trivial (to the realization collapsing to its certification).
Adequacy is the condition that a formal representation faithfully captures what it is meant to represent. An adequate representation of a system property is one that is true of the system if and only if the property holds. NEMS uses adequacy conditions to constrain which self-models count as genuine — a representation is not a self-model unless it is adequate to the relevant aspects of the system.
A certificate is a formal record of a verified property — a proof that a system has some property P, expressible within the system’s own language. Certification is the process of producing such records. The barrier theorems imply sharp limits on certification: no system can produce a total, sound, complete certificate for all nontrivial extensional properties of itself. Löb’s theorem is the most precise statement of what self-certification can and cannot achieve.
Self-Model, Mirror, and Parametric Self-Model
A self-model is any internal representation a system maintains of itself — its states, its capacities, its behavior, its structure. Self-models are possible and useful; they are not forbidden. What is forbidden is a complete self-model that achieves exhaustion.
The mirror is the SIAM term for a self-model that satisfies three specific conditions: (1) coverage — it covers a sufficient range of the system’s behavior; (2) freshness — it remains current relative to the system’s actual state; (3) non-exhaustion — it does not claim to be complete. A mirror that claimed to be complete would violate the Closure Without Exhaustion theorem.
A parametric self-model is a self-model that represents the system via a finite set of parameters — the kind of self-model that a neural network or statistical learning system builds. Parametric self-models always hit the diagonal blind spot: they can represent a finite description of the system, but the self-referential facts generated by the system’s relationship to its own parameterization lie outside what any fixed parameterization can capture.
Layer 6 — Fiber, Realization, and Formalization
These concepts come from the mathematical structure used to make the relationship between certified properties and their full realizations precise.
A realization is the concrete instantiation of an abstract structure — the actual thing that satisfies a set of formal conditions. A model is a realization of a formal system; a physical universe is a realization of a theory; DSAC is a candidate realization of the transputation role. Realization is always richer than the abstract description: the realization carries content not captured by the description alone.
A fiber is the structure that sits “above” a certified base — all the additional content in the full realization that cannot be recovered from the certification alone. Think of a certified claim as a map at a certain scale: the fiber is everything the map leaves out. In the algebraic framework of the program, obstructions to collapsing the fiber (making the realization equivalent to its certification) are calculable — which means we can formally measure the gap between what is certified and what is realized.
Realization non-collapse is the result that the fibers are generically non-trivial: full realizations of certified structures carry irreducible additional content. This is the formal version of the inexhaustible remainder at the level of abstract structure-to-realization mappings.
Formalization is the act of encoding informal reasoning — concepts, proofs, constructions — in a machine-checkable proof language. All core results in the Reflexive Reality program are formalized in Lean 4, a modern interactive theorem prover. Formalization serves as the highest standard of rigor: a formalized proof cannot contain hidden assumptions, informal hand-waving, or gaps that only work if the reader is charitable. A zero-sorry policy means no step is deferred to future work.
Layer 7 — Consciousness, Awareness, and Ground
These concepts enter the program when the formal machinery is applied to questions about mind, experience, and ontology. They are not imported from philosophy and given a formal veneer; they are formal concepts in their own right, with precise definitions and proved theorems.
Locus, Awareness, and Self-Illumination
A locus is the structural site at which awareness happens — the formal role of being “where” manifestation occurs. A locus is not an object in the world. It is the precondition for objects appearing at all. This distinction matters: neuroscience and philosophy of mind have spent considerable effort looking for consciousness as an object — a brain state, a neural correlate, a functional property of a system. NEMS proves (Paper 67) that this search is misguided because it is a category error: the locus of manifestation is not the kind of thing that can be found by scanning objects.
Awareness is manifestation at a locus: the actual presence of content at a structural site where something appears. Awareness is irreducible to syntactic description — no complete formal description of a system’s syntax determines that manifestation occurs. This is not a mystical claim; it is a consequence of the syntax/semantics gap applied to the case of phenomenal content.
Self-illumination is the property of awareness being present to itself — the locus is not observed from outside, it is the site of observation. This is what makes consciousness structurally different from any other kind of record: a record of a physical event is a third-person inscription; awareness is first-person presence. Self-illumination is why you cannot find consciousness by looking outward at an object, because the thing you would need to find is the looking itself.
Manifestation and Qualia
Manifestation is the process by which content becomes present at an awareness-locus. It is irreducible to articulation: you can articulate (formally describe, syntactically represent) every feature of a content without manifestation occurring. The gap between articulation and manifestation is the formal version of the “hard problem of consciousness” — but NEMS treats it as a structural feature with a formal explanation, not as a philosophical puzzle.
Qualia are on-ledger phenomenal content — the actual phenomenal character of experience as it is inscribed in the semantic ledger. They are not off-ledger extras added to physical description; they are what is actually present at the awareness-locus. The NEMS result is that qualia are on-ledger (Paper 65): they are real, they are causally efficacious (they are part of what is inscribed), and they cannot be explained away as either off-ledger fictions or purely syntactic functional states.
Sentience
Sentience is the formal capacity for awareness at a locus — the combination of structural properties that make genuine manifestation possible. NEMS provides necessary conditions (the SIAM invariants of Paper 74) for sentience: self-indexing, adjudicative execution, a live self-model satisfying the mirror conditions, real-time reconciliation, recursive self-update, and non-exhaustion. These are necessary, not sufficient; the program is careful not to claim more than is proved.
Ontological Ground and Alpha
The ontological ground is what makes actual things actual — the condition of possibility for anything being real rather than merely logically possible. Every account of reality faces the question of what grounds actuality. Typical answers appeal to a God, a Platonic realm, a necessary physical law, brute fact. NEMS argues that each of these either fails the PSC sieve or regresses to the same question at a deeper level.
Alpha (Paper 63) is the name for the necessary ontological ground whose existence the Alpha Theorem proves: if nontrivial reflexive reality exists, a necessary, non-null, pre-categorial ground must exist. Alpha is:
- Necessary — cannot be absent if reflexive reality is nontrivial.
- Non-null — not nothing; Paper 68 proves Alpha ≠ ∅. The ground is active, not sterile.
- Pre-categorial — not an object, not a process, not a property. Every object, category, and process has its actuality grounded in Alpha, but Alpha itself is not one of these.
- Internal — not external to the system. Alpha is the ground from within which actuality arises, not a being outside the universe acting on it.
- Not personal — the theorem proves the existence of a ground, not of intentions, beliefs, or a capacity for relationship.
Layer 8 — Novelty Theory Supplement
Novelty Theory is a separate research program that proves a complementary result: not about what a closed universe cannot outsource, but about what even a fully deterministic, perfectly lawful, closed universe cannot finalize.
A phase tower is the infinite hierarchy of explanatory regimes generated by a fixed lawful generator — the successive levels of organizational complexity, explanatory framework, and structural novelty that emerge from the same underlying laws over time. Each level is generated by the laws of the level below; each level has truths that are not explainable from the level below alone.
Explanatory anti-closure is the central result: no fixed admissible explanatory reducer can cover the full phase tower of a sufficiently expressive generator. There is always a new regime whose truths require going up a level. This is not Gödelian incompleteness (which is about provability within a fixed system) and not Kuhnian paradigm shift (which is historical sociology). It is a structural mathematical result about the relationship between generators and the explanatory frameworks required for them.
A regime shift is a qualitative architectural transition between levels of the phase tower — a change in organizational type, not a refinement within a type. Regime shifts cannot be reached by accumulation of more of the same kind of activity. They require genuine architectural change. The Reflexive Development Law (proved across the paper series) characterizes when and how such transitions occur — and why periods of “bookkeeping reconfiguration” that do not achieve regime shift are structurally distinct from genuine transformation.
The Architecture of These Concepts
These eight layers are not independent modules — they form a single conceptual architecture in which each layer depends on the previous ones. Here is the dependency structure in plain terms:
- Layers 1–2 (formal systems and classical barrier theorems) provide the mathematical tools. Without these, the NEMS results are just analogies.
- Layer 3 (structural primitives: closure, records, ledger, diagonal capability) provides the NEMS-specific vocabulary for applying those tools to a universe. Without Layer 3, you cannot state the main theorems.
- Layer 4 (the selection problem and transputation) uses Layers 1–3 to characterize what a closed universe must do at points of genuine choice.
- Layer 5 (exhaustion and remainder) uses all previous layers to state the flagship result: closed systems cannot exhaust themselves.
- Layer 6 (fiber and realization) provides the algebraic precision for measuring the gap between what can be certified and what is actually realized.
- Layer 7 (consciousness and ground) applies all previous layers to questions about mind and ontology — showing that the formal machinery has specific, non-trivial implications for what awareness is and what grounds reality.
- Layer 8 (Novelty Theory) is largely independent, but shares the core concern: the structural impossibility of final closure, applied now to explanatory frameworks rather than to self-description.
With this vocabulary in hand, the main results of the program become readable as what they are: precise theorems about the structure of any self-contained reality, proved with the same rigor as any other result in formal mathematics.
Key Terms at a Glance
Layer 1 — Formal systems: syntax · semantics · formal system · model · interpretation · provability · truth · effective procedure · algorithm · total function · decidability · encoding / Gödel numbering · fixed point · diagonalization · self-referential sentence
Layer 2 — Classical barriers: Gödel incompleteness · Turing halting undecidability · Tarski truth undefinability · Kleene recursion theorem · Löb’s theorem · master fixed-point theorem (MFP-1, MFP-2)
Layer 3 — NEMS primitives: closure · Perfect Self-Containment (PSC) · record · record state · record language · record fragment · no-overwrite · erasure · semantic ledger · on-ledger / off-ledger · diagonal capability · record-divergent choice · admissible continuation · viable continuation
Layer 4 — Selection and resolution: model selection · internal vs. external · No-Free-Bits principle · randomness · oracle · transputation · adjudication · relaxation to coherence · DSAC (Delta Self-Adjudicative Computation)
Layer 5 — Exhaustion and remainder: exhaustion · inexhaustible remainder · Closure Without Exhaustion · residual · adequacy · certificate / certification · self-model · mirror · parametric self-model
Layer 6 — Fiber and realization: realization · fiber · realization non-collapse · formalization
Layer 7 — Consciousness and ground: locus · awareness · self-illumination · manifestation · qualia · sentience · ontological ground · Alpha · pre-categorial · articulation
Layer 8 — Novelty Theory: phase tower · explanatory anti-closure · regime shift
Where to Go Next
- What Would a Universe With No Outside Look Like? The NEMS Answer — the program introduction
- One Theorem Behind Gödel, Turing, Kleene, Tarski, and Löb — the master fixed-point theorem explained
- Closure Without Exhaustion — the flagship theorem
- What Is Transputation? The Formal Theory and DSAC — the third mode of resolution
- The Necessity of an Ontological Ground: The Alpha Theorem — the ground of actuality
- Full research index — all 93 papers and 17 Lean libraries
The Papers and Proofs
The formal definitions and machine-checked proofs for every concept in this article can be found in the NEMS paper suite, archived on Zenodo. Key papers: Paper 26 (master fixed-point theorem) · Paper 51 (Closure Without Exhaustion) · Paper 63 (Alpha Theorem) · Paper 65 (qualia) · Paper 67 (locus and awareness) · Paper 71 (viable continuation) · Paper 74 (SIAM) · Paper 76 (transputation) · Paper 77 (DSAC).
Full abstracts: novaspivack.github.io/research/abstracts ↗ · Full research program: novaspivack.com/research ↗