Category Archives: Mindcorp

Why AI Systems Can’t Catch Their Own Mistakes – And What to Do About It

Abstract

Large language models exhibit a critical limitation: they cannot reliably evaluate their own outputs within the same conversational context where those outputs were generated. Recent research demonstrates that when AI systems attempt to check their own reasoning, they confirm their initial responses over 90% of the time regardless of correctness—a phenomenon researchers term “intrinsic self-correction failure.”

Epistemology and Metacognition in Artificial Intelligence: Defining, Classifying, and Governing the Limits of AI Knowledge

Nova Spivack, Mindcorp

Gillis Jonk, Kearney

June 3, 2025

Abstract

As artificial intelligence, especially large language models (LLMs), becomes increasingly embedded within critical societal functions, understanding and managing their epistemic capabilities and limitations becomes paramount. This paper provides a rigorous and comprehensive epistemological framework for analyzing AI-generated knowledge, explicitly defining and categorizing structural, operational, and emergent knowledge limitations inherent in contemporary AI models.

Cognition is All You Need – The Next Layer of AI Above Large Language Models

My arXiv article on the future of AI is live.

Recent studies applying conversational AI tools, such as chatbots powered by large language models, to complex real-world knowledge work have revealed limitations in reasoning and multi-step problem solving.
