E. Cognitive Computing
"Cognition is All You Need — The Next Layer of AI Above Large Language Models" argues that complex knowledge work requires a cognitive layer above LLMs, proposes a dual-layer neuro-symbolic Cognitive AI architecture, and contends that AGI cannot be achieved by probabilistic approaches alone. Co-authored with Sam Douglas, Michelle Crames, and Tim Connors.
arXiv preprint (arXiv:2403.02164) · DOI: 10.48550/arXiv.2403.02164
Related NEMS Papers on Agency, AI, and Intelligence
The following published papers from the NEMS formal program bear directly on the questions of agency, intelligence, AI safety, and machine consciousness addressed in the Cognitive Computing program.
Paper 17 — Necessary Adjudicators and RSMC
Observer-like systems are structurally necessary in any PSC universe
Paper 22 — Irreducible Agency
Non-algorithmic adjudication is a physical requirement; agency is irreducible
Paper 30 — No Total Self-Certifier
No diagonal-capable AI can certify all nontrivial properties of itself
Paper 31 — Epistemic Agency Under Diagonal Constraints
Diversity necessary for strict improvement; no universal self-verifier
Paper 32 — Self-Improvement Under Diagonal Constraints
No universal self-upgrade certifier; evolution as an attractor architecture
Paper 40 — Institutions Under Diagonal Constraints
k-role lower bound; no universal final judge; governance under closure
Paper 58 — Necessary Reflexive Intelligence
Why nontrivial reflexive worlds are neither random nor robotic
Paper 59 — A Calculus of Intelligence
Five levels: frontier, adjudication, self-modeling; no intelligence without frontier
Paper 73 — The Constraint Theory of Autonomous Agency (SIAM)
First formal definition of genuine agency; feedforward systems excluded by theorem
Paper 89 — Survey for AI, Agents, and AGI Researchers
Theorem-grade constraints on intelligence, agency, safety, and machine consciousness
Related Explanatory Essays
- No AI Can Fully Verify Itself: The Formal Proof
- Scaling Doesn’t Fix the Self-Model Problem
- What Makes Something a Genuine Agent? The SIAM Theorem
- Why AI Cannot Simulate Its Way to Consciousness
- No Institution Can Be the Final Judge
- Why Diversity Is Not Just Good — It Is Structurally Necessary
- A Formal Theory of Intelligence
- Can Machines Become Conscious? The NEMS Answer
- Actual vs. Artificial Intelligence
- How to Build a Sentient Machine
Related Blog Posts
The following posts explore AI cognition, ethics, and safety in depth, complementing the formal Cognitive Computing research program.
- Why AI Systems Can’t Catch Their Own Mistakes — And What to Do About It
- Epistemology and Metacognition in Artificial Intelligence: Defining, Classifying, and Governing the Limits of AI Knowledge
- A Hierarchical Framework for Metacognitive Capability in AI: Eleven Tiers of Epistemic Self-Awareness
- Logical Foundations for Ethical AI: Natural Laws for Artificial Minds
- The Core Principles of AI for Good (AI4G): A Constitutional Framework for Beneficial AGI
- Metacognitive Vulnerabilities in Large Language Models: Logical Override Attacks and Defense Strategies
- The Technopolitical Age: AI, Power, and the Collapse of the Old World