Let’s Put the Wikipedia in Space: The Arch Project

In this article, I propose an achievable project to seed the solar system and eventually the universe with digital copies of humanity’s most important knowledge — stored in digital archives that I call “Archs.”

There are many reasons to attempt a project like this — for one thing, it’s an inspirational idea if nothing else — but beyond that it could benefit future generations on Earth.

Did Apple Buy Topsy for Contextual Awareness?

The stunning news that Apple bought the social search engine Topsy for more than $200M has many scratching their heads. Why would Apple want social data, and why would they pay so much for it?

There has been a lot of speculation about the reasons for this acquisition — ranging from making Siri better, to making the App Store smarter, to acquiring big data expertise to develop insights on the Apple firehose.

But I think the reason may be something else altogether: Personalization.

Consciousness is Not a Computation

In the previous article in this series, Is The Universe a Computer? New Evidence Emerges, I wrote about some new evidence that appears to suggest that the universe may be like a computer, or at least that it contains computer codes of a sort.

But while this evidence is fascinating, I don’t believe that ultimately the universe is in fact a computer. In this article, I explain why.

My primary argument is that consciousness is not computable. Since consciousness is an undeniable phenomenon that we directly experience, and since a computer cannot create or simulate consciousness, the universe has to be more than a mere computer. No universe that is merely a computation can generate or account for consciousness. Below I explain this in more detail.

Consciousness is More Fundamental Than Computation

If the universe is a computer, it would have to be a very different kind of computer than what we think of as a computer today. It would have to be capable of a kind of computation that transcends what our computers can presently do. It would have to be capable of generating all the strangeness of general relativity and quantum mechanics. Perhaps one might posit that it is a quantum computer of some sort.

However, it’s not that simple. If the universe is any kind of computer, it would actually have to be able to create every phenomenon that exists, and that includes consciousness.

The problem is that consciousness is notoriously elusive, and may not even be something a computer could ever generate. After decades of thinking about this question from many angles, I seriously doubt that consciousness is computable.

In fact, I don’t think consciousness is an information process, or a material thing, at all. It seems, from my investigations, that consciousness is not a “thing” that exists “in” the universe, but rather it is in the category of fundamentals, just like space and time. For example, space and time are not “in” the universe; rather, the universe is “in” space and time. I think the same can be said about consciousness. In fact, I would go so far as to say consciousness is probably more fundamental than space and time: they are “in” it rather than it being “in” them.

There are numerous arguments for why consciousness may be fundamental. Here I will summarize a few of my favorites:

  • Physics and Cosmology. First of all there is evidence in physics, such as the double slit experiment, that indicates there may be a fundamental causal connection between the act of consciously observing something and what is actually observed. Observation seems to be intimately connected to what the universe does, to what is actually measured. It is as if the act of observation — of measurement — actually causes the universe to make choices that collapse possibilities into specific outcomes. This implies that consciousness may be connected to the fundamental physical laws and the very nature of the universe. Taken to the extreme there are even physical theories, such as the anthropic principle, that postulate that the whole point of the universe, and all the physical laws, is consciousness.
  • Simulation. Another approach to analyzing consciousness is to attempt to simulate or synthesize consciousness with software, where one quickly ends up in either an infinite regress or a system that is not conscious of its own consciousness. Trying to build a conscious machine, even in principle, is very instructive, and everyone who is seriously interested in this subject should attempt it until they are convinced it is not possible. In particular, self-awareness, the consciousness of consciousness, is hard to model. Nobody has succeeded in designing a conscious machine so far. Nobody has even succeeded in designing a non-conscious machine that can fool a conscious being into thinking it is a conscious being. Try it. I dare you. I tried many times, and in the end I came to the conclusion that consciousness, and in particular self-consciousness, leads to infinite regresses that computers are not capable of resolving in finite time.
  • Neuroscience. Another approach is to try to locate consciousness in the physical brain, the body, or anywhere in the physical world – nobody has yet found it. Consciousness may have correlates in the brain, but they are not equivalent to consciousness. John Searle and others have written extensively about this issue. Why do we even have brains then? Are they the source of consciousness, or are they more like electrical circuits that merely channel it without originating it, or are brains the source of memory and cognition, but not consciousness itself? There are many possibilities and we’re only at the beginning of understanding the mind-brain connection. However so far, after centuries of dissecting the brain, and mapping it, and measuring it in all kinds of ways, no consciousness has been found inside it.
  • Direct Introspection. One approach is through direct experience: search for an origin of knowing by observing your own consciousness directly, with your own consciousness. No origin is found. There is no homunculus in the back of our minds that we can identify. In fact, when you search, even mere consciousness is not found, let alone its source. The more we look, the more it dissolves. Consciousness is a word we use, but when we look for it we can’t find what it refers to. But that doesn’t mean consciousness isn’t a real phenomenon, or that it is an illusion. It is undeniable that we are aware of things, including the experience of being conscious. It is unfindable, yet it is not a mere nothingness either – there is definitely some kind of awareness or consciousness taking place that is in fact the very essence of our minds. The nature of consciousness exemplifies the Buddhist concept of “emptiness” in a manner that we can easily and directly experience for ourselves. But note that “empty” in this sense doesn’t mean nothingness, or non-existence; it means that it exists in a manner that transcends being either something or nothing. From the Buddhist perspective, although consciousness cannot be found, it is in fact the ultimate nature of reality, from which everything else appears.
  • Logic. Another approach is logical: Recognize that all experience is mediated by consciousness — all measurements, all science, all our own personal experience, all our collective experiences. Nothing ever happens or is known by us without first being mediated by consciousness. Thus consciousness is more fundamental than anything we know of; it is the most fundamental experience, even more fundamental than the experience of space and time, or our measurements thereof. From this perspective we cannot honestly say that anything can ever exist apart from consciousness, from someone or something knowing it. In fact, it would appear that everything depends on consciousness to be known, and possibly to exist, because we have no way to establish that anything exists apart from consciousness. Based on the evidence we have, consciousness is therefore fundamental. The universe appears to be in consciousness, not vice-versa: this is in fact a more logical and more scientific conclusion than the standard belief that consciousness is an emergent property of the brain, or that it is a separate phenomenon from appearances. In the extreme, this investigation leads to a philosophical view called solipsism. However, note that the Buddhist view (above) transcends solipsism because, in fact, there is no self in consciousness – anything you can label as “self” or “I” is actually just an appearance in consciousness, not consciousness in pure form. Since there is no self, you cannot claim that you own consciousness, or that everything exists in “your” consciousness – because there is no way to assert a self that owns or is consciousness that contains everything else, nor can any “other” be asserted either. Since consciousness is more fundamental than self, or the self-other dichotomy, the view of solipsism is defeated. Instead consciousness transcends self and other, one and many.
  • Unusual experiences. Yet another approach is to observe consciousness under unusual or extreme conditions such as during dreaming, lucid dreaming, religious experiences, peak experiences, when under the influence of mind-altering drugs, or in numerous well-documented cases of apparent reincarnation, and well-documented near-death experiences. In such cases there is a wealth of both direct and anecdotal evidence suggestive of the idea that consciousness is able to transcend the limits of the body, as well as space and time. Whether you believe such evidence is valid is up to you, however there is an increasing body of careful studies on these topics that are indicative that there is a lot more to consciousness than our day-to-day waking state.

Beyond Computation

Because of the above lines of reasoning and observation I have come to the conclusion that consciousness transcends the physical, material world. It is something different, something special. And it does not seem to be computable, because it has no specific form, substance or even content that can be represented as information or as an information process.

For example, in order to be the product of a computation, consciousness would need to be comprised of information — there would need to be some way to completely describe and implement it with information, or an information process — that is, with bits in a computer system. Information processes cannot operate without information – they require bits, 1’s and 0’s, and some kind of program for doing things with them.

So the question is: can any set or process of 1’s and 0’s perfectly simulate or synthesize what it is to be conscious? I don’t think so. Because consciousness, when examined, is found to be literally formless and unfindable, it has no content or form that can be represented with 1’s and 0’s. Furthermore, because consciousness, when examined, is essentially changeless, it is not a process – for a process requires some kind of change. Therefore it is not information or an information process.

Some people counter the above argument by saying that consciousness is an illusion, a side-effect, or what is called an “epiphenomenon” of the brain. They claim that there is no such thing as actual consciousness, and that there is nothing more to cognition than the machinery of the brain. They are completely missing the fundamental point.

But let’s assume they are right for a moment – if there is no consciousness, then what is taking place when a being knows something, or when they know their own knowing capacity? How could that be modeled in a computer program? Simply creating a data structure and process that represents its own state recursively is not sufficient – because it is static, it is just data – there is no actual qualia of knowing taking place in that system.
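To make this concrete, here is a toy sketch (my own illustration, not a serious cognitive model) of the kind of recursive self-representing data structure described above: it can report on its own state, and even report on its reporting, to any depth, yet at every level it is only producing more data.

```python
# A toy self-referential "self-model" -- illustrative only, not a serious
# cognitive architecture. It can describe its own state, and describe its
# describing, to any depth, yet every level is just nested symbols: nothing
# in the system experiences the qualia of knowing.

class SelfModel:
    def __init__(self):
        self.state = {"temperature": 21.5}

    def report(self, depth=0):
        """Return a description of the state, nested `depth` levels deep."""
        if depth == 0:
            return {"state": self.state}
        # Each extra level is a report about the previous report -- a regress
        # that can be extended indefinitely without ever producing anything
        # other than more data.
        return {"report_about": self.report(depth - 1)}

m = SelfModel()
print(m.report(2))
# {'report_about': {'report_about': {'state': {'temperature': 21.5}}}}
```

However deep the nesting goes, the structure remains static data plus a mechanical rule for wrapping it, which is the point of the argument above.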

Try as one might, there is no way to design a machine or program that manifests the ability to know or experience the actual qualia of experiences. John Searle’s Chinese Room thought experiment is a famous logical argument that illustrates this. The simple act of following instructions – which is all a computer can do – never results in actually knowing what those instructions mean, or what it is doing. The knowing aspect of mind – the consciousness – is not computable.

Not only can consciousness not be simulated or synthesized by a computer, it cannot be found in a computer or computer program. It cannot magically emerge in a computer of sufficient complexity.

For example, suppose we build up a computer or computer program by gradually adding tiny bits of additional complexity — at what point does it suddenly transition from being not-conscious to being conscious? There is no such sudden transition. I call that kind of thinking “magical complexity,” and many people today are guilty of it. It’s an intellectual cop-out: there is nothing special about complexity that suddenly and magically causes consciousness to appear out of nowhere.

Consciousness is not an emergent property of anything, nor is it dependent on anything. It does not come from the brain, and it does not depend on the brain. It is not part of the brain either. Instead, it would be more correct to say that the brain is perhaps an instrument of consciousness, or a projection that occurs within consciousness.

One analogy is that the brain channels consciousness, like an electrical circuit channels electricity. In a circuit the electricity does not come from the circuitry, it’s fundamentally the energy of the universe – the circuit is just a conduit for it.

A better analogy, however, is that the brain is actually a projection of consciousness, just as a character in a dream is a projection of the dreaming mind. Within a dream there can be fully functional, free-standing characters that have bodies, personalities, and that seem to have minds of their own, but in fact they are all just projections of the dreaming mind. Similarly, the brain appears to be a machine that functions a certain way, but it is less fundamental than the consciousness that projects it.

How could this be the case? It sounds so strange! However, if I phrase it differently, all of a sudden it sounds perfectly normal. Instead of “consciousness,” let’s say “space-time.” The brain is a projection of space-time; space-time does not emerge from the brain. That sounds perfectly reasonable.

The key is that we have to think of consciousness as the same level of phenomena as space-time, as a fundamental aspect of the universe. The brain is a space-time-consciousness machine, and the conceptual mind is what that machine is experiencing and doing. However, space-time-consciousness is more fundamental than the machinery of the brain, and even when the brain dies, space-time-consciousness continues.

For the above reasons, I think that consciousness proves that the universe is not a computer — at least not on the ultimate, final level of analysis. Even if the universe contains computers, or contains processes that compute, the ultimate level of reality is probably not a computer.

But let’s, for the purpose of being thorough, suppose that we take the opposite view, that the universe IS a computer and everything in it is a computation. This view leads to all sorts of problems.

If we say that the universe is a computation, it would imply that everything — all energy, space, time and consciousness — is taking place within the computation. But then where is the computation coming from, and where is it happening? A computation requires a computer to compute it — some substrate that does the computation. Where is this substrate? What is it made of? It cannot also be made of energy, space, time or consciousness — those are all “inside” the computation; they are not the substrate, the computer.

Where is the computer that generates this universal computation? Is it generating itself? That is a circular reference that doesn’t make sense. For example, you can’t make a computer program that generates the computer that runs it. The computer has to be there before the program, it can’t come from the program. A computation requires a computer to compute it, and that computer cannot be the same thing as the computation it generates.

If we posit a computer that exists beyond everything – beyond energy, space and time — how could it compute anything? Computation requires energy, space and time — without energy there is no information, and without space and time there is no change, and thus no computation. A computer that exists beyond everything could not actually do any computation.

One might try to answer this by saying that the universal computation takes place on a computer that exists in a meta-level space-time beyond ours — in other words it exists in a meta-universe beyond our universe. But that answer contradicts the claim that our universe is a computer – because it means that what appears to be a universe computer is really not the final level of reality. The final level of reality in this case is the meta-universe that contains the computer that is computing our universe. That just pushes the problem down a level.

Alternatively, one could claim that the meta-universe beyond our universe is also a computer – so our universe computer exists inside a meta-level universe computer. In this case it’s “computers all the way down” – an infinite regress of meta-computers containing meta-computers containing meta-computers. But to claim that is a bit of a logical cop-out, because then there is no final computer behind it all – thus there is no source or end of computation. If such infinite chains of computations could exist, it would be difficult to say they actually compute anything, since they could never start or complete, and thus this claim is not unlike claiming that the universe is NOT a computer.

In the end we face the same metaphysical problems we’ve always faced – either there is a fundamental level of reality that we cannot ever really understand, or we fall into paradoxes and infinite regress. Digital physics may have some explanatory power, but it has its limits.

But then what does it mean that we find error correcting codes in the equations of supersymmetry? If the fundamental laws of our universe contain computer codes in them, how can we say the universe is not a computer? Perhaps the universe IS a computer, but it’s a computer that is appearing within something that fundamentally is not computable, something like consciousness perhaps. But can something that is not computable generate or contain computations? That’s an interesting question.

Consciousness is certainly capable of containing computations, even if it is not a computation. A simple example of this would be a dream about a computer that is computing something. In such a dream there is an actual computer doing computations, but the computer and the computations depend on something (consciousness) that is not coming from a computer and is not a computation.

In the end I think it’s more likely that ultimate reality is not a computer – that it is a field of consciousness that is beyond computation. But that doesn’t mean that universes that appear to be computations can’t appear within it.

“Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man.” — Chuang Chou

Further Reading

If you are interested in exploring the nature of consciousness more directly, the next article in this series, Recognizing The Significance of Consciousness, explains what consciousness is actually like, in its pure form, and how to develop a better recognition of it for yourself.

Is the Universe a Computer? New Evidence Emerges.

I haven’t posted in a while, but this is blog-worthy material. I’ve recently become familiar with the thinking of University of Maryland physicist James Gates Jr. Dr. Gates is working on a branch of physics called supersymmetry. In the process of his work he’s discovered what appears to be a form of computer code, called error correcting codes, embedded within, or resulting from, the equations of supersymmetry that describe fundamental particles.

You can read a non-technical description of what Dr. Gates has discovered in this article, which I highly recommend.

In the article, Gates asks, “How could we discover whether we live inside a Matrix? One answer might be ‘Try to detect the presence of codes in the laws that describe physics.'” And this is precisely what he has done. Specifically, within the equations of supersymmetry he has found, quite unexpectedly, what are called “doubly-even self-dual linear binary error-correcting block codes.” That’s a long-winded label for codes that are commonly used to remove errors in computer transmissions, for example to correct errors in a sequence of bits representing text that has been sent across a wire.
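For readers unfamiliar with error-correcting codes, a minimal sketch may help. The following is the classic Hamming(7,4) code — a simpler textbook example, not the doubly-even self-dual family Gates found — which encodes 4 data bits as 7 bits so that any single flipped bit can be located and corrected (function names are my own):

```python
# A minimal Hamming(7,4) sketch: 4 data bits become a 7-bit codeword
# whose parity bits let the receiver locate and fix one flipped bit.
# (Illustrative only -- the codes Gates found are a more sophisticated
# "doubly-even self-dual" family, but the error-correcting idea is the same.)

def hamming_encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit; 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming_encode(data)
codeword[4] ^= 1                      # simulate a one-bit error "on the wire"
assert hamming_decode(codeword) == data
```

The striking claim, then, is that structures of exactly this mathematical kind show up inside the equations of supersymmetry.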

Gates explains, “This unsuspected connection suggests that these codes may be ubiquitous in nature, and could even be embedded in the essence of reality. If this is the case, we might have something in common with the Matrix science-fiction films, which depict a world where everything human beings experience is the product of a virtual-reality-generating computer network.”

Why are these codes hidden in the laws of fundamental particles? “Could it be that codes, in some deep and fundamental way, control the structure of our reality?” he asks. It’s a good question.

If you want to explore further, here is a YouTube video by someone who is interested in popularizing Dr. Gates’ work, containing an audio interview that is worth hearing. Here you can hear Gates describe the potential significance of his discovery in layman’s terms. The video then goes on to explain how all of this might be further evidence for Bostrom’s Simulation Hypothesis (which suggests that the universe is a computer simulation). (Note: the video is a bit annoying – in particular the melodramatic soundtrack – but it’s still worth watching to get a quick, high-level overview of what this is all about, and some of the wild implications.)

Now why does this discovery matter? Well, it is more than strange and intriguing that fundamental physics equations describing the universe would contain these error correcting codes. Could it mean that the universe itself is built with error correcting codes in it, codes just like those used in computers and computer networks? Did they emerge naturally, or are they artifacts of some kind of intelligent design? Or do they indicate that the universe literally IS a computer? For example, maybe the universe is a cellular automata machine, or perhaps a loop quantum gravity computer.

Digital Physics – A New Kind of Science

The view that the universe is some kind of computer is called digital physics – it’s a relatively new niche field within physics that may be destined for major importance in the future. But these are still early days.

I’ve been fascinated by the possibility that the universe is a computer since college, when I first found out about the work of Ed Fredkin and his theory that the universe is a cellular automaton — like, for example, John Conway’s Game of Life algorithm (particularly this article, excerpted from the book Three Scientists and their Gods).
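For readers who haven’t encountered it, Conway’s Game of Life is a cellular automaton: a grid of cells, each alive or dead, evolving by one simple local rule, from which surprisingly complex structures emerge. A minimal sketch of one update step (my own implementation of the standard rule):

```python
# A minimal sketch of one step of Conway's Game of Life. Each cell lives
# or dies based only on its eight neighbors, yet the rule supports gliders,
# oscillators, and even universal computation.

def life_step(alive):
    """alive: set of (x, y) live cells -> the next generation."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in alive:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    n = (x + dx, y + dy)
                    counts[n] = counts.get(n, 0) + 1
    # Birth: a dead cell with exactly 3 neighbors.
    # Survival: a live cell with 2 or 3 neighbors.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in alive)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert life_step(life_step(blinker)) == blinker
```

Fredkin’s conjecture, roughly, is that the physical universe is an automaton of this general kind, only vastly richer.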

Following this interest, I ended up interning in a supercomputing lab at MIT that was working on testing these possibilities, with the authors of this book on “Cellular Automata Machines.”

Later I had the opportunity to become friends with Stephen Wolfram, whose magnum opus, “A New Kind of Science,” is the ultimate, and also heaviest, book on this topic.

I asked Stephen what he thinks about this idea, and he said it is “a bit like saying ‘there’s a Fibonacci sequence there; this must be a phenomenon based on rabbits.’ Error-correcting codes have a certain mathematical structure, associated e.g. with sphere packing. You don’t have to use them to correct errors. But it’s definitely an amusing thought that one could detect the Matrix by looking for robustification features of code. Of course, today’s technology/code rarely has these … because our computers are already incredibly reliable (and probably getting more so).”

The work of Dr. Gates is, at the very least, an interesting new development for this field. At best it might turn out to be a very important clue about the nature of the universe, although it’s very early and purely theoretical at this point. It will be interesting to see how this develops.

However, I personally don’t believe the universe will turn out to be a computer or a computation. Read the next article in this series to find out why I think Consciousness is Not a Computation.


  • Seth Lloyd, professor of quantum mechanical engineering at MIT, has written a book that describes his theory that the universe is a quantum computer.
  • Here’s a good article that explores various views related to the idea the universe is a computation in some more detail.

My Best Interview: About Global Brain, Consciousness and AI

I was recently interviewed by Stephen Ibaraki and Alex Lin (CEO of ChinaValue) in what turned out to be the most interesting, far-reaching, and multi-disciplinary (and long) interview I’ve ever given. I was very pleased with the depth of their questions and the topics we covered. You can listen to the MP3 version here, or read a full-text transcript here.

Topics covered:

  • My work over the last few decades
  • Big life lessons I’ve had
  • My recent “Venture Production Studio” concept
  • Stealth ventures I’m working on (realtime web, wireless power, etc.)
  • Intelligent assistants
  • Predictions for the future
  • Augmented reality
  • The Singularity
  • Do we have free will? Will that change as Global Mind emerges?
  • The changing nature of individuality
  • The Psychological Singularity
  • The Global Brain – history and implications
  • The WebOS – Which cloud will win?
  • The Semantic Web – what it’s really for, is it being adopted?
  • What level does the brain compute at? Neural vs. quantum.
  • Nature of consciousness (Buddhist view vs. Western Scientific view) – “I think, therefore I am” vs. “I am, therefore I think”
  • The nature of self & possibility of artificial selves
  • John Searle’s Chinese Room thought experiment
  • Digital physics & cellular automata; Ed Fredkin & Stephen Wolfram
  • Bostrom’s Simulation Hypothesis
  • Buddhist views on ultimate nature of reality
  • My relationship with Peter Drucker (my grandfather) and his influence (management, knowledge workers, social sector etc.)
  • The shift to a now-centric civilization
  • The fragmentation of the Semantic Web
  • Freeing intelligence from human brains (like we did with knowledge)
  • Symbiosis; Part vs. Whole – When does the Global Brain change to a new level of order?
  • Beyond Homo Sapiens – What’s next? Cyborgs, collective beings, etc.
  • Technological ethics – what kind of future are we building?
  • Combining the best of Asian and Western intellectual approaches
  • IBM-Jeopardy Challenge

The Digital Generation Gap

We exist in an epoch of great technological change. Within the space of just a few generations we have gone from horse-drawn carriages to exploring the outer reaches of our solar system, from building with wood, stone and metals to nanoscale construction with individual atoms, and from manual printing presses and physical libraries to desktop publishing and the World Wide Web. The increasing pace of technological evolution brings with it many gifts, but also poses challenges never before faced by humanity. One of these challenges is the digital generation gap.

The digital generation gap is the result of the extremely rapid rise of personal computing, the Internet, mobile applications, and coming next, biotechnology. Never before in the history of our species have we been faced with a situation where each living generation is focused around a different technology platform.

The tools and practices that the elders of our civilization use are still based on the pre-digital analog era. Their children — the Baby Boomers — use entirely different tools and practices based around the PC. And the youth of today — the Boomers’ children — exist in yet another domain: the world of mobile devices.

The digital generation gap presents a major challenge to our civilization, in particular because of its effect on education — both the informal education that takes place at home and in communities, and the formal education that takes place in school settings. The tools that teachers grew up with and now teach with (PCs) are not the same tools that the students of today use to learn and communicate (mobile devices).

Baby Boomers grew up before the advent of any of these technologies — they lived in an analog world in which daily life took place primarily on the physical, face-to-face human scale, with physical materials and physical information media like printed books and newspapers. This world was similar to the world of their parents and grandparents — even though it was increasingly automated and industrialized during their lives. As children and during their young adult years the Boomers grew up amidst the fruition of the industrial revolution: mass-produced physical and synthetic goods of all kinds. Among the defining shifts of this period was the transition from a world of manual labor to one of increasing automation. The pinnacle of this transition was the adoption of the first generations of computers.

The Boomers’ children — people in their 30’s and 40’s today — arrived to usher in the transition from an automated analog world to the new digital world. They were born into a civilization where monolithic computers had already taken hold in government and industry, and they witnessed the birth of waves of increasingly powerful, inexpensive and portable personal computers, the Internet, and the Web. This generation built the bridges from the industrial world of the Boomers to the digital world we live in today. They integrated systems, connected devices, and brought the whole world together as one global social and economic network.

Now their children – the children and youth of today — are growing up in a world that is primarily focused around mobile devices and mobile applications. They have always lived with ubiquitous mobile access and social media. No longer concerned with building bridges to the legacy industrial world of their parents and grandparents, they are plunging headlong into an increasingly digital culture. One in which dating, shopping, business, education — almost everything we do as humans — is taking place online, and via mobile devices.

Each generation is out of touch with the means of production and consumption of the other generations. The result is an increasing communications gap between the generations: They use different platforms. And not surprisingly the inter-generational transmission of knowledge, traditions, cultural norms and standards is not operating like it used to. In fact it may be breaking down entirely.

Many of the cultural and social stresses making headline news are related to the digital generation gap. For example, the increasing growth of cyberbullying is the result of parents and teachers being totally out of touch with the mobile world that kids live in today.

Parents and teachers are so out of the loop technologically, compared to kids today, that they are literally unable to see what is going on between them, let alone do anything about it.

It’s no wonder that kids are running wild online: “sexting,” cyberbullying, and cheating in school. There are few adults, and little to no adult supervision, keeping order in the online spaces where they spend their time.

There is no period in recent history when this has been the case. It used to be that schoolkids took recess breaks in the schoolyard under the watchful eyes of their teachers. There was a certain level of adult supervision in school, and also at home. Not today. Teachers and parents can’t see what their kids are up to online and have no control over what they do with their mobile devices. We have a generation of kids growing up with less adult oversight and supervision than ever before.

And the newest generation — the babies of today — what will their experience be? Will the pace of technological progress finally start to plateau for them? Will their world be more like the world of their parents?

Instead of a sudden shift to yet another, smaller level of scale or a more powerful technology platform, will they, and many generations to come, live on a more stable and shared technology platform? If the pace does slow down for a while, we may see inter-generational gaps decrease. Perhaps this will serve to standardize and solidify our emerging global digital culture. A new set of digital norms and traditions will have time to form and be handed down across generations.

Alternatively, what if in fact the pace of change continues to quicken instead? What if the babies of today grow up in a world of augmented reality and industrial-scale genetic engineering? And what if their children (the grandchildren of people in their 40’s today) grow up in a world of direct brain-machine interfaces and personal genetic engineering? Those of us today who think of ourselves as being on the cutting edge will be the elders of tomorrow, and we will be hopelessly out of touch.

A New Layer of the Brain is Evolving: The Metacortex

The human brain is like an archaeological record. Different layers and functional areas have evolved outwards over time. And now a new layer is evolving. I propose we call this new layer of the brain “the metacortex.” (Note: Metacortex also happens to be the name of the company Neo worked for in the movie The Matrix.)

The metacortex is the Web — our growing global network of information, people, sensors, and computing devices.

The Web is literally a new layer of the human brain that transcends any individual brain. It is a global brain that connects all our brains together. It is intelligent. It is perhaps humanity’s greatest invention.  It collectively senses, reacts, interprets, learns, thinks, and acts in ways that we as individuals can barely comprehend or predict, and this activity comprises an emerging global mind.

Paul Buchheit (creator of Gmail and Friendfeed) calls this “the social brain” — with emphasis on the social networks and collective social interactions that are taking place. I think that while the metacortex includes the social Web, it transcends it — its collective knowledge and cognition include all of the activity taking place on the Internet.

Does the metacortex mirror the structure and process of the neocortex? What can we learn about the neocortex from the metacortex and vice versa? What are the functional areas or lobes of the metacortex? I look forward to your comments.

The Global Brain is About to Wake Up

The emerging realtime Web is not only going to speed up the Web and our lives, it is going to bring about a kind of awakening of our collective Global Brain. It’s going to change how many things happen online, but it’s also going to change how we see and understand what the Web is doing. By speeding up the Web, it will cause processes that used to take weeks or months to unfold online to happen in days or even minutes. And this will bring these processes to human scale — to the scale of our human “now” — making it possible for us to be aware of larger collective processes than before. Until now we have been watching the Web in slow motion. As it speeds up, we will begin to see and understand what’s taking place on the Web in a whole new way.

This process of quickening is part of a larger trend which I and others call “Nowism.” You can read more of my thoughts about Nowism here. Nowism is an orientation that is gaining momentum and will help to shape this decade, and in particular how the Web unfolds. It is the idea that the present timeframe (“the now”) is getting more important, shorter, and more information-rich. As this happens our civilization is becoming more focused on the now, and less focused on the past or the future. Simply keeping up with the present is becoming an all-consuming challenge: both a threat and an opportunity.

The realtime Web — what I call “The Stream” (see “Welcome to the Stream”) — is changing the unit of now. It’s making it shorter. The now is the span of time we have to be aware of to be effective in our work and lives, and it is getting shorter. On a personal level the now is getting shorter and denser — more information and change is packed into shorter spans of time; a single minute on Twitter is overflowing with potentially relevant messages and links. In business as well, the now is getting shorter and denser — it used to be about the size of a fiscal quarter, then it became a month, then a week, then a day, and now it is probably about half a day in span. Soon it will be just a few hours.

To keep up with what is going on, we have to check in with the world in at least half-day chunks. Important news breaks about once or twice a day, and trends on Twitter take about a day to develop, so for now you can afford to check the news and the real-time Web once or twice a day and still get by. But that’s going to change. As the now gets shorter, we’ll have to check in more frequently to keep abreast of change. As the Stream picks up speed in the middle of this decade, remaining competitive will require near-constant monitoring — we will have to be always connected to, and watching, the real-time Web and our personal streams. Being offline at all will risk missing big trends, threats and opportunities that emerge and develop within minutes or hours. But nobody is capable of tracking the Stream 24/7 — we must at least take breaks to eat and sleep. And this is a problem.

Big Changes to the Web Coming Soon…

With Nowism comes a faster Web, and this will lead to big changes in how we do various activities on the Web:

  • We will spend less time searching. Nowism pushes us to find better alternatives to search, or to eliminate search entirely, because people don’t have time to search anymore. We need tools that do the searching for us and that help with decision support so we don’t have to spend so much of our scarce time doing that. See my article on “Eliminating the Need for Search — Help Engines” for more about that.
  • Monitoring (not searching) the real-time stream becomes more important. We need to stay constantly vigilant about what’s happening and what’s trending. We need to be alerted to the important stuff (important to us), and we need a way to filter out what’s not. A filter based on the influence of people and tweets, and/or the time dynamics of memes, will probably be necessary. Monitoring the real-time stream effectively is different from searching it. I see more value in real-time monitoring than realtime search — I haven’t yet seen any monitoring tools for Twitter smart enough to give me just the content I want. There’s a real business opportunity there.
  • The return of agents. Intelligent agents are going to come back. To monitor the realtime Web effectively each of us will need online intelligent agents that can help us — because we don’t have time, and even if we did, there’s just too much information to sift through.
  • Influence becomes more important than relevance. Advertisers and marketers will look for the most influential parties (individuals or groups) on Twitter and other social media to connect with and work through. But to do this there has to be an effective way to measure influence. One service that’s providing a solution for this (which I’ve angel invested in and advise) is Klout.com – they measure influence per person per topic. I think that’s a good start.
  • Filtering content by influence. We will also need a way to find the most influential content — for example, the content most RT’d, or most RT’d by the most influential people. It would be much less noisy to see only the more influential tweets of the people I follow. If a tweet gets RT’d a lot, or is RT’d by really influential people, then I want to see it; if not, then only if it’s really important (based on some rule). This will be the only way to cope with the information overload of the real-time Web and keep up with it effectively. I don’t know of anyone providing a service for this yet. It’s a business opportunity.
  • Nowness as a measure of the value of content. We will need a new way of ranking results by “nowness” — how timely they are right now. In real-time search engines, for example, we shouldn’t rank results merely by how recent they are, but also by how timely, influential, and “hot” they are now. See my article from years ago on “A Physics of Ideas” for more about that. Real-time search companies should think of themselves as real-time monitoring companies — that’s what they are really going to be used for in the end. Only the real-time search ventures that think of themselves this way will survive the conceptual paradigm shift the realtime Web is bringing about. In a realtime context, search is actually too late — once something has happened, it is not that important anymore; what matters is current awareness: discovering the trends NOW. To do that, one has to analyze the present and the very recent past much more than search the longer-term past. The focus has to be on real-time or near-real-time analytics, statistical analysis, topic and trend detection, prediction, filtering and alerting. Not search.
  • New ways to understand and navigate the now. We will need a way to visualize and navigate the now. I’m helping to incubate a stealth startup venture, Live Matrix, that is working on that. It hasn’t launched yet. It’s cool stuff. More on that in the future when they launch.
  • New tools for browsing the Stream. New tools will emerge for making the realtime Web more compelling and smarter. I’m working on incubating some new stealth startups in this area as well. They’re very early-stage so can’t say more about them yet.
  • The merger of semantics with the realtime Web. We need to make the realtime Web — as well as the rest of the Web — semantic, in order to make it easier for software to make sense of it for us. This is the best approach to increasing the signal-to-noise ratio of the content we have to look at, whether we are searching or monitoring. The Semantic Web standards of the W3C are key to this. I’ve written a long manifesto on this in “Minding The Planet: The Meaning and Future of the Semantic Web” if you’re really interested in the topic.
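To make the “filtering by influence” and “nowness” ideas above concrete, here is a minimal sketch of how a stream monitor might score items by an influence-weighted, time-decayed “nowness” and filter out everything below a threshold. Everything here — the field names, the log-scaled influence measure, and the half-life — is an illustrative assumption, not a description of any real service’s ranking algorithm:

```python
import math
import time

def nowness_score(item, now=None, half_life_hours=6.0):
    """Toy 'nowness' score: influence-weighted reshares, decayed by age.

    `item` is assumed to be a dict with:
      - 'timestamp': seconds since epoch when the item was posted
      - 'retweets': follower counts of the accounts that reshared it
    """
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - item["timestamp"]) / 3600.0)
    # Influence: log-scaled sum of resharers' audiences, so one huge
    # account doesn't dominate and many small accounts still add up.
    influence = sum(math.log1p(followers) for followers in item["retweets"])
    # Exponential time decay: an item loses half its score per half-life.
    decay = 0.5 ** (age_hours / half_life_hours)
    return influence * decay

def filter_stream(items, threshold=5.0, now=None):
    """Keep only items whose current 'nowness' clears the threshold,
    most timely first."""
    scored = [(nowness_score(i, now=now), i) for i in items]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored if score >= threshold]
```

The same structure works for alerting: instead of returning the filtered list, a monitor would fire a notification whenever a new item’s score crosses the threshold.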

Faster Leads to Smarter

As the realtime Web unfolds and speeds up, I think it will also have a big impact on what some people call “The Global Brain.” The Global Brain has always existed, but in recent times it has been undergoing a series of major upgrades — particularly in how connected, affordable, accessible and fast it is. First we got phones and faxes, then the Internet, the PC and the Web, and now the real-time Web and the Semantic Web. All of these changes are making the Global Brain faster and more richly interconnected. And this makes it smarter. For more about my thoughts on the Global Brain, see these two talks:

What’s most interesting to me is that as the rate of communication and messaging on the Web approaches real time, we may see a kind of phase change take place — a much smarter Global Brain will begin to appear out of the chaos. In other words, the speed of collective thinking is as important as the complexity or sophistication of collective thinking in making the Global Brain significantly more intelligent. I’m proposing that there is a critical speed of collective thinking: below it, the Global Brain seems like just a crowd of actors chaotically flocking around memes; above it, the Global Brain makes big leaps — instead of seeming like a chaotic crowd, it starts to look like an organized group around certain activities. It is able to respond to change faster, and to optimize and even act collectively more productively than a random crowd could.

This is kind of like film, or animation. When you watch a movie you are really watching a rapid series of frames. This gives the illusion of cohesive, continuous characters, things and worlds in the movie — but really they aren’t there at all; our brains put the scenes together and start to recognize and follow higher-order patterns. A certain shape appears to maintain itself and move around relative to other shapes, and we name it with a label — but there isn’t really something there, let alone something moving or interacting — there are just frames flicking by rapidly. It turns out that above a critical frame rate (around 20 to 60 frames per second) the human brain stops seeing individual frames and starts seeing a continuous movie: things appear to be “moving within the sequence” of frames. In the same way, as the speed of the real-time Web increases — as its unit of time shrinks — its behavior will start to seem more continuous and smarter. We won’t see separate chunks of time or messages; we’ll see intelligent, continuous collective thinking and adaptation processes.

In other words, as the Web gets faster, we’ll start to see processes emerge within it that appear to be cohesive, intelligent collective entities in their own right. There won’t really be any actual entities there that we can isolate, but when we watch the patterns on the Web it will appear as if such entities are there. This is basically what happens at every level of scale — even in the real world. There really isn’t anything there that we can find — everything is divisible down to the quantum level and probably beyond — but over time our brains recognize and label patterns as discrete “things.” This will happen across the Web as well. For example, a certain meme (such as a fad or a movement) may become a “thing” in its own right, a kind of entity that seemingly takes on a life of its own and seems to be doing something. Similarly, certain groups or social networks, or the activities they engage in, may seem to be intelligent entities in their own right.

This is an illusion in that there really are no entities there; they are just collections of parts that can themselves be broken down into more parts, and no final entities can be found. Nonetheless, they will seem like intelligent entities when not analyzed in detail. In addition, the behavior of these chaotic systems may resist reduction — they may not be understandable, and their behavior may not be predictable, through a purely reductionist approach. It may be that they react to their own internal state and their environments virtually in real time, making it difficult to take a top-down or bottom-up view of what they are doing. In a realtime world, change happens in every direction.

As the Web gets faster, the patterns taking place across it will become more animated. Big processes that used to take months or years will happen in minutes or hours. As this comes about we will begin to see larger patterns than before, and they will start to make more sense to us — they will emerge out of the mists of time, so to speak, and become visible to us on our human timescale — the timescale of our human-level “now.” As a result, we will become more aware of higher-order dynamics taking place on the real-time Web, and we will begin to participate in and adapt to those dynamics, making them in turn even smarter. (For more on my thoughts about how the Global Brain gets smarter, see: “How to Build the Global Mind.”)

See Part II: “Will The Web Become Conscious?” if you want to dig further into the thorny philosophical and scientific issues that this brings up…

Eliminating the Need for Search – Help Engines

We are so focused on how to improve present-day search engines. But that is a kind of mental myopia. In fact, a more interesting and fruitful question is why do people search at all? What are they trying to accomplish? And is there a better way to help them accomplish that than search?

Instead of finding more ways to get people to search, or ways to make existing search experiences better, I am starting to think about how to reduce or  eliminate the need to search — by replacing it with something better.

People don’t search because they like to. They search because there is something else they are trying to accomplish. Search is really just an inconvenience — a means to an end we have to struggle through in order to get to what we actually want to accomplish. Search is “in the way” between intention and action. It’s an intermediate stepping stone. And perhaps there’s a better way to get where we want to go than searching.

Searching is a boring and menial activity. Think about it. We have to cleverly invent and try pseudo-natural-language queries that don’t really express what we mean. We try many different queries until we get results that approximate what we’re looking for. We click on a bunch of results and check them out. Then we search some more. Then more clicking, then more searching. And we never know whether we’ve been comprehensive, entered the best query, or looked at everything we should have looked at to be thorough. It’s extremely hit or miss, and it takes up a lot of time and energy. There must be a better way! And there is.

Instead of making search more bloated and more of a focus, the goal should really be to get search out of the way: to minimize the need to search, and to make any search that is necessary as productive as possible. The goal should be to get consumers to what they really want with the least amount of searching and the least amount of effort, with the greatest confidence that the results are accurate and comprehensive. To satisfy these constraints one must NOT simply build a slightly better search engine!

Instead, I think there’s something else we need to be building entirely. I don’t know what to call it yet. It’s not a search engine. So what is it?

Bing’s term “decision engine” is pretty good, pretty close to it. But what they’ve actually released so far still looks and feels a lot like a search engine. But at least it’s pushing the envelope beyond what Google has done with search. And this is good for competition and for consumers. Bing is heading in the right direction by leveraging natural language, semantics, and structured data. But there’s still a long way to go to really move the needle significantly beyond Google to be able to win dominant market share.

For the last decade the search wars have been fought in battles over index size, keyword search relevancy, and ad targeting. But I think the next battle is going to be fought over semantic understanding, intelligent answers, personal assistance, and commerce affiliate fees. What’s coming after search engines are things that function more like assistants and brokers.

Wolfram Alpha is an example of one approach to this trend. The folks at Wolfram Alpha call their system a “computational knowledge engine” because they use a knowledge base to compute and synthesize answers to various questions. It does a lot of the heavy lifting for you, going through various data, computing and comparing, and then synthesizes a concise answer.

There are also other approaches to getting or generating answers for people — for example, by doing what Aardvark does: referring people to experts who can answer their questions or help them. Expert referral, or expertise search, helps reduce the need for networking and makes networking more efficient. It also reduces the need for searching online — instead of searching for an answer, just ask an expert.

There’s also the semantic search approach — perhaps exemplified by my own Twine “T2” project — which basically aims to improve the precision of search by helping you get to the right results faster, with less irrelevant noise. Other consumer facing semantic search projects of interest are Goby and Powerset (now part of Bing).

Still another approach is that of Siri, which is making an intelligent “task completion assistant” that helps you search for and accomplish things like “book a romantic dinner and a movie tonight.” In some ways Siri is a “do engine” not a “search engine.” Siri uses artificial intelligence to help you do things more productively. This is quite needed and will potentially be quite useful, especially on mobile devices.

All of these approaches and projects are promising. But I think the next frontier — the thing that is beyond search and removes the need for search is still a bit different — it is going to combine elements of all of the above approaches, with something new.

For lack of a better term, I call this a “help engine.” A help engine proactively helps you with various kinds of needs, decisions, tasks, or goals you want to accomplish. And it does this by addressing an increasingly common and vexing problem: choice overload.

The biggest problem is that we have too many choices, and the number of choices keeps increasing exponentially. The Web and globalization have increased the number of choices that are within range for all of us, but the result has been overload. To make a good, well-researched, confident choice now requires a lot of investigation, comparisons, and thinking. It’s just becoming too much work.

For example, choosing a location for an event, or planning a trip itinerary, or choosing what medicine to take, deciding what product to buy, who to hire, what company to work for, what stock to invest in, what website to read about some topic. These kinds of activities require a lot of research, evaluations of choices, comparisons, testing, and thinking. A lot of clicking. And they also happen to be some of the most monetizable activities for search engines. Existing search engines like Google that make money from getting you to click on their pages as much as possible have no financial incentive to solve this problem — if they actually worked so well that consumers clicked less they would make less money.
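As a toy illustration of how software might help with one of these choice-heavy activities, consider ranking a handful of options by a weighted sum of criteria. The options, criteria, scores and weights below are invented for the example; a real help engine would gather and normalize such scores from the Web automatically:

```python
def rank_choices(options, weights):
    """Rank options by a weighted sum of normalized criterion scores.

    `options`: dict of name -> {criterion: score in [0, 1], higher is better}
    `weights`: dict of criterion -> importance weight
    """
    def total(scores):
        return sum(weights[c] * scores.get(c, 0.0) for c in weights)
    return sorted(options, key=lambda name: total(options[name]), reverse=True)

# Hypothetical hotel choices for a trip, scored on three criteria.
hotels = {
    "Hotel A": {"location": 0.9, "price": 0.4, "reviews": 0.8},
    "Hotel B": {"location": 0.6, "price": 0.9, "reviews": 0.7},
    "Hotel C": {"location": 0.3, "price": 1.0, "reviews": 0.5},
}

# A traveler who cares mostly about location:
ranked = rank_choices(hotels, {"location": 0.6, "price": 0.2, "reviews": 0.2})
```

Even this trivial sketch shows the shift in interaction: the user states what matters to them once, and the system does the comparing, instead of the user clicking through dozens of search results.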

I think the solution to what’s after search — the “next Google” so to speak — will come from outside the traditional search engine companies. Or at least it will be an upstart project within one of them that surprises everyone and doesn’t come from the main search teams within them. It’s really such a new direction from traditional search and will require some real thinking outside of the box.

I’ve been thinking about this a lot over the last month or two. It’s fascinating. What if there was a better way to help consumers with the activities they are trying to accomplish than search? If it existed it could actually replace search. It’s a Google-sized opportunity, and one which I don’t think Google is going to solve.

Search engines cause choice overload. That wasn’t the goal, but it is what has happened over time due to the growth of the Web and the explosion of choices that are visible, available, and accessible to us via the Web.

What we need now is not a search engine — it’s something that solves the problem created by search engines. For this reason, the next Google probably won’t be Google or a search engine at all.

I’m not advocating for artificial intelligence or anything that tries to replicate human reasoning, human understanding, or human knowledge. I’m actually thinking about something simpler. I think that it’s possible to use computers to provide consumers with extremely good, automated decision-support over the Web and the kinds of activities they engage in. Search engines are almost the most primitive form of decision support imaginable. I think we can do a lot better. And we have to.

People use search engines as a form of decision-support, because they don’t have a better alternative. And there are many places where decision support and help are needed: Shopping, travel, health, careers, personal finance, home improvement, and even across entertainment and lifestyle categories.

What if there was a way to provide this kind of personal decision-support — this kind of help — with an entirely different user experience than search engines provide today? I think there is. And I’ve got some specific thoughts about this, but it’s too early to explain them; they’re still forming.

I keep finding myself thinking about this topic, and arriving at big insights in the process. All of the different things I’ve worked on in the past seem to connect to this idea in interesting ways. Perhaps it’s going to be one of the main themes I’ll be working on and thinking about for this coming decade.

What's After the Real Time Web?

In typical Web-industry style we’re all minutely focused on the leading trend of the year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself; it’s a means to an end. So what will it enable, where is it headed, and what will it look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream are going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention not just their attention. It’s not just difficult to get people to notice something, it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending some energy or effort to do something. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web, aka The Stream, is really about messaging in essence. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave and PubSubHubbub, to broadcast-style messaging or multicasting via screencasts, conferencing, media streaming and events in virtual worlds. The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOSes. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly against data and APIs in dozens of places, developers will write to a single WebOS aggregator that acts as middleware between their app and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather, it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast and complex and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools, that can spread around the Web as they work, will be able to keep up with the Web.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.
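The “Semantics” strand above is the easiest to illustrate: once content carries structured, typed fields instead of bare keywords, a query becomes a set of constraints rather than a string match. Here is a minimal sketch; the records, field names and constraints are all invented for the example, not any real Semantic Web vocabulary:

```python
# Toy records with explicit structure. A semantic search can filter on
# typed fields ("camera under $300 with at least 20x zoom") instead of
# hoping the right keywords happen to co-occur on a page.
products = [
    {"name": "ZoomMaster 20", "type": "camera", "price": 279, "zoom_x": 20},
    {"name": "ZoomMaster 30", "type": "camera", "price": 449, "zoom_x": 30},
    {"name": "PocketShot",    "type": "camera", "price": 199, "zoom_x": 5},
    {"name": "TravelBag",     "type": "bag",    "price": 89,  "zoom_x": None},
]

def semantic_query(records, **constraints):
    """Return records satisfying every constraint.

    Each constraint value is either a literal (tested for equality) or a
    predicate function applied to the field's value.
    """
    def matches(record):
        for field, want in constraints.items():
            have = record.get(field)
            if callable(want):
                if have is None or not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [r for r in records if matches(r)]

cheap_superzooms = semantic_query(
    products,
    type="camera",
    price=lambda p: p < 300,
    zoom_x=lambda z: z >= 20,
)
```

A keyword engine can only hope the words “camera,” “cheap,” and “zoom” appear near each other; the structured query provably returns exactly the items that meet the constraints, which is what “higher-resolution Web” means in practice.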

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begin to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get worse at things like memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded into our surroundings, and into augmented views of our environments. This may cause our bodies to get into better health and shape since we will be less sedentary, less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude of things to come.

The Future of the Web: BBC Interview

The BBC World Service’s Business Daily show interviewed the CTO of Xerox and me, about the future of the Web, printing, newspapers, search, personalization, the real-time Web. Listen to the audio stream here. I hear this will only be online at this location for 6 more days. If anyone finds it again after that let me know and I’ll update the link here.

Nowism — A Theme for the New Era?

DRAFT 1 — A Work in Progress


Here’s an idea I’ve been thinking about: it’s a concept for a new philosophy, or perhaps just a name for a grassroots philosophy that seems to be emerging on its own. It’s called “Nowism.” The view that now is what’s most important, because now is where one’s life actually happens.

Certainly we have all heard Ram Dass’s famous “Be here now,” and we may be familiar with the writings of Eckhart Tolle, his “The Power of Now,” and others. In addition there was the “Me generation” and the more recent idea of “living in the now.” On the Web there is also now a growing shift towards real-time, what I call the Stream.

These are all examples of the emergence of this trend. But I think these are just the beginnings of this movement — a movement towards a subtle but major shift in the orientation of our civilization’s collective attention. This is a shift towards the now, in every dimension of our lives. Our personal lives, professional lives, in business, in government, in technology, and even in religion and spirituality.

I have a hypothesis that this philosophy — this worldview that the “now” is more important than the past or the future, may come to characterize this new century we are embarking on. If this is true, then it will have profound effects on the direction we go in as a civilization.

It does appear that the world is becoming increasingly now-oriented; more real-time, high-resolution, high-bandwidth. The present moment, the now, is getting increasingly flooded with fast-moving and information-rich streams of content and communication.

As this happens we are increasingly focusing our energy on keeping up with, managing, and making sense of, the now. The now is also effectively getting shorter — in that more happens in less time, making the basic clock rate of the now effectively faster. I’ve written about this elsewhere.

Given that the shift to a civilization that is obsessively focused on the now is occurring, it is not unreasonable to wonder whether this will gradually penetrate into the underlying metaphors and worldviews of coming generations, and how it might manifest as differences from our present-day mindsets.

How might people who live more in the now differ from those who paid more attention to the past, or the future? For example, I would assert that the world in and before the 19th century was focused more on the past than the now or the future. The 20th century was characterized by a shift to focus more on the future than the past or the now. The 21st century will be characterized by a shift in focus onto the now, and away from the past and the future.

How might people who live more in the now think about themselves and the world in coming decades? What are the implications for consumers, marketers, strategists, policymakers, educators?

With this in mind, I’ve attempted to write up what I believe might be the start of a summary of what this emerging worldview of “Nowism” might be like.

It has implications on several levels: social, economic, political, and spiritual.

Nowism Defined

Like Buddhism, Taoism, and other “isms,” Nowism is a view on the nature of reality, with implications for how to live one’s life and how to interpret and relate to the world and other people.

Simply put: Nowism is the philosophy that the span of experience called “now” is fundamental. In other words there is nothing other than now. Life happens in the now. The now is what matters most.

Nowism does not claim to be mutually exclusive with any other religion. It merely claims that all other religions are contained within its scope — they, like everything else, take place exclusively within the now, not outside it. In that respect the now, in its actual nature, is fundamentally greater than any other conceivable philosophical or religious system, including even Nowism itself.

Risks of Unawakened Nowism

Nowism is in some ways potentially short-sighted in that there is less emphasis on planning for the future and correspondingly more emphasis on living the present as fully as possible. Instead of making decisions with their effects in the future foremost in mind, the focus is on making the optimal immediate decisions in the context of the present. However, what is optimal in the present may not be optimal over longer spans of time and space.

What may be optimal in the now of a particular individual may not at all be optimal in the nows of other individuals. Nowism can therefore lead to extremely selfish behavior that actually harms others, or it can lead to extremely generous behavior on a scale that far transcends the individual, if one strives to widen one’s own experience of the now sufficiently.

Very few individuals will ever do the work necessary to develop themselves to the point where their actual experience of now is dramatically wider than average. It is possible, though quite rare. Such individuals are capable of living exclusively in the now while still always acting with the long-term benefit of both themselves and all other beings in mind.

The vast majority of people however will tend towards a more limited and destructive form of Nowism, in which they get lost in deeper forms of consumerism, content and media immersion, hedonism, and conceptualization. Rather than being freed by the now, they will be increasingly imprisoned by it.

This lower form of Nowism — what might be called unawakened Nowism — is characterized by an intense focus on immediate self-gratification, without concern or a sense of responsibility for the consequences of one’s actions on oneself or others in the future. This kind of living in the moment, while potentially extremely fun, tends to end badly for most people. Fortunately most people outgrow this tendency towards extremely unawakened Nowism after graduating college and/or entering the workforce.

Abandoning extremely unawakened Nowist lifestyles doesn’t necessarily result in one realizing any form of awakened Nowism. One might simply remain in a kind of dormant state, sleepwalking through life, not really living fully in the present, not fully experiencing the present in all its potential. To reach this level of higher Nowism, or advanced Nowism, one must either have a direct spontaneous experience of awakening to the deeper qualities of the now, or one must study, practice, and work with teachers and friends who can help one reach such a direct experience of the now.

Benefits of Awakened Nowism: Spiritual and Metaphysical Implications of Nowist Philosophy

In the 21st Century, I believe Nowism may actually become an emerging movement. With it there will come a new conception of the self, and of the divine. The self will be realized to be simultaneously more empty and much vaster than was previously thought. The divine will be understood more directly and with less conceptualization. More people will have spiritual realization this way, because in this more direct approach there is less conceptual material to get caught up in. The experience of now is simply left as it is — as direct and unmediated, unfettered, and unadulterated as possible.

This is a new kind of spirituality perhaps. One in which there is less personification of the divine, and less use of the concept of a personified deity as an excuse or justification for various worldly actions (like wars and laws, for example).

Concepts about the nature of divinity have been used by humans for millennia as tools for various good and bad purposes. But in Nowism, these concepts are completely abandoned. This also means abandoning the notion that there is or is not a divine nature at the core of reality, and each one of us. Nowists do not get caught up in such unresolvable debates. However, at the same time, Nowists do strive for a direct realization of the now — one that is as unmediated and nonconceptual as possible — and that direct realization is considered to BE the divine nature itself.

Nowism does not assert that nothing exists or that nothing matters. Such views are nihilism, not Nowism. Nowism does not assert that what happens is caused or uncaused — such views are those of the materialists and the idealists, not Nowism. Instead Nowism asserts the principle of dependent origination, in which cause-and-effect appears to take place, even though it is an illusory process and does not truly exist. On the basis of a relative-level cause-effect process, an ethical system can be founded which seeks to optimize happiness and minimize unhappiness for the greatest number of beings, by adjusting one’s actions so as to create causes that lead to increasingly happy effects for oneself and others, increasingly often. Thus the view of Nowism does not lead to hedonism — in fact, anyone who makes a careful study of the now will reach the conclusion that cause and effect operates unfailingly and therefore is a key tool for optimizing happiness in the now.

Advanced Nowists don’t ignore cause-and-effect, in fact quite the contrary: they pay increasingly close attention to cause-and-effect and their particular actions. The natural result is that they begin to live a life that is both happier and that leads to more happiness for all other beings — at least this is the goal and the best-case example. The fact that cause-and-effect is in operation, even though it is not fundamentally real, is the root of Nowist ethics. It is precisely the same as the Buddhist conception of the identity of emptiness and dependent origination.

Numerous principles follow from the core beliefs of Nowism. They include practical guidance for living one’s life with a minimum of unnecessary suffering (of oneself as well as others), further principles concerning the nature of reality and the mind, and advanced techniques and principles for reaching greater realizations of the now.

As to the nature of what is taking place right now: from the Nowist perspective, it is beyond concepts, for all concepts, like everything else, appear and disappear like visions or mirages, without ever truly-existing. This corresponds precisely to the Buddhist conception of emptiness.

The scope of the now is unlimited, however for the uninitiated the now is usually considered to be limited to the personal present experience of the individual. Nowist adepts, on the other hand, assert that the scope of the now may be modified (narrowed or widened) through various exercises including meditation, prayer, intense physical activity, art, dance and ritual, drugs, chanting, fasting, etc.

Narrowing the scope of the now is akin to reducing the resolution of present experience. Widening the scope is akin to increasing the resolution. A narrower now is a smaller experience, with less information content. A wider now is a larger experience, with more information content.

Within the context of realizing that now is all there is, one explores carefully and discovers that now does not contain anything findable (such as a self, other, or any entity or fundamental basis for any objective or subjective phenomenon, let alone any nature that could be called “nowness” or the now itself).

In short the now is totally devoid of anything findable whatsoever, although sensory phenomena do continue to appear to arise within it unceasingly. Such phenomena, and the sensory apparatus, body, brain, mind and any conception of self that arises in reaction to them, are all merely illusion-like appearances with no objectively-findable ultimate, fundamental, or independent existence.

This state is not unlike the analogy of a dream in which oneself and all the other places and characters are all equally illusory, or of a completely immersive virtual reality experience that is so convincing one forgets it isn’t real.

Nowism does not assert a divine being or deity, although it also is not mutually exclusive with the existence of one or more such beings. However all such beings are considered to be no more real than any other illusory appearance, such as the appearances of sentient beings, planets, stars, fundamental particles, etc. Any phenomena — whether natural or supernatural — are equally empty of any independent true existence. They are all illusory in nature.

However, Nowists do assert that the nature of the now itself, while completely empty, is in fact the nature of consciousness and what we call life. It cannot be computed, simulated or modeled in an information system, program, machine, or representation of any kind. Any such attempts to represent the now are merely phenomena appearing within the now, not the now itself. The now is fundamentally transcendental in this respect.

The now is not limited to any particular region in space or time, let alone to any individual being’s mind. There is no way to assert there is a single now, or many nows, for no nows are actually findable.

The now is the gap between the past and the future; however, when searched for, it cannot really be found, nor can the past or future be found. The past is gone, the future hasn’t happened yet, and the now is infinite, constantly changing, and ungraspable. The entire space-time continuum is in fact within a total all-embracing now, the cosmically extended now that is beyond the limited personalized scope of now we presently think we have. Through practice this can be gradually glimpsed and experienced to greater degrees.

As the now is explored to greater depths, one begins to find that it has astonishing implications. Simultaneously much of the Zen literature — especially the koans — starts to make sense at last.

While Nowism could be said to be a branch of Buddhism, I would actually say it might be the other way around. Nowism is really the most fundamental, pure philosophy — stripped of all cultural baggage and historical concepts, and retaining only what is absolutely essential.

Can We Design Better Communities?

(DRAFT 2. A Work-In-Progress)

The Problem: Our Communities are Failing

I’ve been thinking about community lately. There is a great need for a new and better model for communities in the world today.

Our present communities are not working, and most are breaking down or stagnating. Cities are straining under urbanization and a host of ensuing social and economic challenges. Meanwhile the movement towards cities has drained people — particularly young professionals — away from rural communities, causing them to stagnate and decline.

Local economies have been challenged by national and global economic integration — from jobs being outsourced to other places, to giant retail chains such as Walmart swooping in and driving out local businesses.

From giant megacities and multi-city urban sprawls, to inner city neighborhoods, to suburban bedroom communities, and rural towns and villages, the pain is being felt everywhere and at all levels.

Our current models for community don’t scale, they don’t work anymore, and they don’t fit the kind of world we are living in today. And why should they? After all, they were designed a long time ago for a very different world.

At the same time there are increasing numbers of singles or couples without children, and even families and neighborhoods that are breaking down as cities get larger.

The need for community is growing not declining — especially as existing communities fail and no other alternatives take their place. Loneliness, social isolation, and social fragmentation are huge and growing problems — they lead to crime, suicide, mental illness, lack of productivity, moral decay, civil unrest, and just about every other social and economic problem there is.

The need for an updated and redesigned model for community is increasingly important to all of us.

Intentional Communities

In particular, I am thinking about intentional communities — communities in which people live geographically near one another, and participate in community together, by choice. They may live together or not, dine together or not, work together or not, worship together or not — but at least they need to live within some limit of proximity to one another and participate in community together. These are the minimum requirements.

But is there a model that works? Or is it time to design a new model that better fits the time and place in which we live?

Is this simply a design problem that we can solve by adopting the right model, or is there something about human nature that makes it impossible to succeed no matter what model we apply?

I am an optimist and I don’t think human nature prevents healthy communities from forming and being sustainable. I think it’s a design problem. I think this problem can (and must) be solved with a set of design principles that work better than the ones we’ve come up with so far. This would be a great problem to solve. It could even potentially improve the lives of billions of people.

Models of Intentional Community

Community is extremely valuable and important. We are social beings. And communities enable levels of support and collaboration, economic growth, resilience, and perhaps personal growth, that individuals or families cannot achieve on their own.

However, do intentional communities work? What examples can we look at and what can we glean from them about what worked and what didn’t?

All of the cities and towns in the world started as intentional communities but today many seem to have lost their way as they got larger or were absorbed into larger communities.

As for smaller intentional communities — recent decades are littered with all kinds of spectacular failures.

The communes and experimental communities of the 1960s and 1970s have mostly fallen apart.

Spiritual communities seem either to become personality cults that are highly prone to tyranny and corruption, or to fall apart eventually as well.

There have been so many communities built around various gurus, philosophers, or cult figures, but almost universally they have become cults or broken apart.

Human nature is hard to wrangle without strong leadership, yet strong leadership and the power it entails leads inevitably to ego and corruption.

At least some ashrams in India seem to be working well, although their internal dynamics are usually centered around a single guru or leadership group — and while there may be a strong social agreement within these communities, this is not a model of community that will work for everyone. And in fact only in extremely rare cases are there gurus who are actually selfless enough to hold that position without abusing it.

Other kinds of religious communities are equally prone to problems — though perhaps some, such as the Quakers, Shakers, and Amish, have solved this; I am not sure. If they were so successful, why are there so few of them?

Temporary communities are another type of intentional community. Burning Man, for example, seems to work quite well, but only for a limited period of time — it would face the same problems as all other communities if it became institutionalized or tried not to be temporary.

Educational communities, such as university towns and campuses, do appear to work in many cases. They combine both an ongoing community (tenured faculty, staff and townspeople) and temporary communities (seasonal student and faculty residents).

Economic communes — such as the communes of Soviet-era Russia — were prone to corruption, and failed as economic experiments. In Soviet Russia “some were more equal than others,” and that ultimately led to corruption and tyranny.

Political-economic communities such as the neighborhood groups in Maoist China only worked because they were firmly, even brutally, controlled from the central government. They were not exactly voluntary intentional communities.

I don’t know enough about the Israeli Kibbutzim to judge; they at least seem to be continuing, although I am not sure how well they function.

One type of intentional community that does seem to work is the caregiving community — assisted living communities, nursing homes, halfway houses, etc. — but perhaps they seem to work only because their members don’t remain very long.

Why Aren’t There More Intentional Communities?

So here is my question: Do intentional communities work? And if they work so well, why aren’t there more of them? Or are they flourishing and multiplying under the radar?

Is there a model (or are there models) for intentional community with proven long-term success? Where are the examples?

Is the fact that there are not more intentional communities emerging and thriving, evidence that intentional communities just don’t work or have stopped replicating or evolving? Or is it evidence that the communities we already live in work well enough, even though they are no longer intentional for most of us?

I don’t think our present-day communities work well enough, nor are they very healthy or rewarding to their participants. I do believe there is the possibility, and even the opportunity, to come up with a better model — one which works so well that it attracts people, grows and self-replicates around the world rapidly. But I don’t yet know what that new model is.

Design Principles

To design the next-evolution of intentional community, perhaps we can start with a set of design principles gleaned from what we have learned from existing communities?

This set of design principles should be selected to be practical for the world we live in today — a world of rapid transit, economic and social mobility, urban sprawls, cultural and ethnic diversity, cheap air travel, declining birth rates, the 24-7 work week, the Internet, and the globally interdependent economy.

In thinking about this further there are a few key “design principles” which seem to be necessary to make a successful, sustainable, healthy community.

This is not an exhaustive list, but it is what I have thought of so far:

Shared intention. There has to be a common reason for the group of people to be together. The participants each have to share a common intention to form and participate in a community around common themes and purposes together.

Shared contribution. The participants each have to contribute in various ways to the community as part of their membership.

Shared governance. The participants each have a role to play in the process of decision making, policy formation, dispute resolution, and operations of the community.

Shared boundaries. There are shared, mutually agreed upon and mutually enforced rules.

Freedom to leave. Anyone can leave the community at any time without pressure to remain.

Freedom of choice. While in the community, people are free to make choices about their roles and participation, within the community’s boundaries and governance process. This freedom of choice also includes the freedom to opt out of any role or rule, though that might have the consequence of voluntarily recusing oneself from further participation in the community.

Freedom of expression. The ability of community members to freely and fearlessly express their opinions within the community is an essential element of healthy communities. Systems need to be designed to support and channel this activity. If expression is restrained it seeks out other channels anyway (subversion, revolution, etc.). By not restraining expression, but instead designing a community process that authentically engages members in conversation with one another, the community can be more self-aware, and creativity and innovation can flow more freely.

Representative democratic leadership. The leadership is either by consensus and includes everyone equally, or there is a democratic representative process of electing leaders and making decisions.

Community mobility. This is an interesting topic. In the world today, each person may have different sets of interests and purposes, and they are not all compatible. It may be necessary or desirable to be a member of different communities in different places, at different times of the year, or in different periods of one’s life. It should be possible to be in more than one community, to rotate through communities, or to change communities as one’s interests, goals, needs and priorities shift over time — so long as one participates fully in each community while there. The concept of timesharing in various communities, or what one friend calls “colonies,” is interesting. One might be a member of different colonies — one for religious interests, one for social kinship, one for a hobby, one for recreation and vacation, etc. These might be in different places and have different members, and one’s role and level of participation might be different in each. Rather than living in only one particular community, perhaps we need a model with more mobility.

Size limitations. One thing I would suggest is that communities work better when they are smaller. The reason is that once a community reaches a size at which each member can no longer maintain a personal relationship with every other member, it stops working and begins to fragment into subgroups. So perhaps limiting the size of a community is a good idea. Alternatively, when a community reaches a certain size it could spawn a new, separate community where further growth can happen and all new members go. In fact, you could even see two communities spawning a new “child” community together to absorb their growth.
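
The spawn-on-overflow idea above can be sketched as a toy model. The cap of 150 is my own illustrative choice (a nod to Dunbar’s number), not something the essay specifies, and all the names here are hypothetical.

```python
# Toy model of a size-capped community: once the cap is reached, new
# members overflow into a freshly spawned "child" community.

DUNBAR_CAP = 150  # illustrative cap; roughly Dunbar's number

class Community:
    def __init__(self, name):
        self.name = name
        self.members = []
        self.children = []

    def add_member(self, person):
        """Add a member here if there is room, else route to a child."""
        if len(self.members) < DUNBAR_CAP:
            self.members.append(person)
            return self
        # At capacity: spawn a new child community if the latest one is full.
        if not self.children or len(self.children[-1].members) >= DUNBAR_CAP:
            self.children.append(
                Community(f"{self.name}-child{len(self.children) + 1}")
            )
        return self.children[-1].add_member(person)

# Usage: 200 joiners fill the parent, and the overflow forms one child.
root = Community("root")
for i in range(200):
    root.add_member(f"person{i}")
print(len(root.members), len(root.children[0].members))  # prints: 150 50
```

A fuller model might also merge shrinking communities or let two parents co-found a child, as the essay suggests, but the overflow rule is the core mechanism.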

Proximity. Communities don’t require that people live near each other — they can function non-locally, for example online. However, the kind of intentional communities I am interested in here are ones where people do live together or near one another, at least part of the time. For this kind of community, people need to live and/or dine and/or work together on a periodic, if not frequent, basis. An eating co-op in a metropolitan area is an example — at least if everyone has to live within a certain distance, eat together a few times a week, and work a few hours in the co-op per month. A food co-op, such as a co-op grocery store, is another example.

Shared Economic Participation. For communities to function there needs to be a form of common currency (either created by the community or from a larger economy the community is situated within), and there should be a form of equitable sharing of collective costs and profits among the community members. There are different ways to distribute the wealth — everyone can receive an equal share no matter what, or reward can be proportional to role, or proportional to level of contribution, etc. What economic model works best in the long term, for creating sustainability and growth, for maintaining social order and social justice, and for preventing corruption?
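
Two of the sharing schemes mentioned above — an equal split versus a split proportional to contribution — can be made concrete with a small sketch. The numbers and names are made up for illustration.

```python
# Two ways to divide a community surplus among members.

def equal_split(surplus, members):
    """Everyone receives the same share regardless of contribution."""
    share = surplus / len(members)
    return {m: share for m in members}

def proportional_split(surplus, contributions):
    """Each member's share is proportional to their contribution."""
    total = sum(contributions.values())
    return {m: surplus * c / total for m, c in contributions.items()}

surplus = 900.0
contributions = {"ana": 10, "ben": 20, "cho": 60}  # e.g. hours worked

print(equal_split(surplus, list(contributions)))   # 300.0 each
print(proportional_split(surplus, contributions))  # 100.0, 200.0, 600.0
```

The essay’s open question is which rule (or blend of rules) best sustains both growth and social justice; the sketch only shows how sharply the outcomes can differ for the same surplus.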

Agility. Communities must be designed to change in order to adapt to new environmental, economic and social realities. Communities that are too rigid in structure or process, or even location, are like species of animals that are unable to continue evolving — and that usually leads to extinction. Part of being agile is being open to new ideas and opportunities. Agility is not just the ability to recognize and react to emerging threats, it is the ability to recognize and react to emerging opportunities as well.

Resilience. Communities must be designed to be resilient — challenges, and even damages and setbacks, are inevitable. They can be minimized and mitigated, but they will still happen to various degrees. Therefore the design should not assume they can be prevented entirely, but rather should plan for the ability to heal and eventually restore the community as effectively as possible when they do occur.

Diversity. There are many types of diversity: diversity of opinion, ethnic diversity, age group diversity, religious diversity. Not all communities need to support all kinds of diversity, but it is probably safe to say that for a community to be healthy it must at least support diversity of beliefs and opinions among the membership. No matter what selection criteria are used, there must still be freedom of thought, belief, and expression within that group. Communities must be designed to support this diversity, and even encourage it. They also must be designed to manage and process the conversations, conflicts, and changes that diversity brings about. Diversity is a key ingredient that powers growth, agility, and resilience. In biology, diversity is essential to species survival — mutations are key to evolution. Communities must be designed to mutate, and to intelligently filter in or out those mutations that help or harm the community. Processes that encourage and channel diversity are essential for this to happen.

Video: My Talk on The Future of Libraries — "Library 3.0"

If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event centered on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have an audience of about 500 very vocal library directors, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!

Video: My Talk on the Evolution of the Global Brain at the Singularity Summit

If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.

(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).

How to Build the Global Mind

Kevin Kelly recently wrote another fascinating article about evidence of a global superorganism. It’s another useful contribution to the ongoing evolution of this meme.

I tend to agree that we are at what Kevin calls Stage III. However, an important distinction in my own thinking is that the superorganism is not composed just of machines; it is also composed of people.

(Note: I propose that we abbreviate the One Machine, as “the OM.” It’s easier to write and it sounds cool.)

Today, humans still make up the majority of processors in the OM. Each human nervous system comprises billions of processors, and there are billions of humans. That’s a lot of processors.

However, Ray Kurzweil posits that the balance of processors is rapidly moving towards favoring machines — and that sometime in the latter half of this century, machine processors will outnumber or at least outcompute all the human processors combined, perhaps many times over.

While I agree with Ray’s point that machine intelligence will soon outnumber human intelligence, I’m skeptical of Kurzweil’s timeline, especially in light of recent research that shows evidence of quantum-level computation within microtubules inside neurons. If in fact the brain computes at the tubulin level, then it may have many orders of magnitude more processors than currently estimated. This remains to be determined. Those who argue against this claim that the brain can be modelled on a classical level and that quantum computing need not be invoked. To be clear, I am not claiming that the brain is a quantum computer; I am claiming that there seems to be evidence that computation in the brain takes place at, or near, the quantum level. Whether quantum effects have any measurable effect on what the brain does is not the question; the question is simply whether microtubules are the lowest-level processing elements of the brain. If they are, then there are a whole lot more processors in the brain than previously thought.

Another point worth considering is that much of the brain’s computation takes place not within the neurons but in the synaptic gaps between them, and this computation happens chemically rather than electrically. There are vastly more synapses than neurons, and computation within the synapses happens at a much faster and more granular level than neuronal firings. It is definitely the case that chemical-level computations take place with elements that are many orders of magnitude smaller than neurons. This is another case for the brain computing at a much lower level than is currently thought.

In other words the resolution of computation in the human brain is still unknown. We have several competing approximations but no final answer on this. I do think however that evidence points to computation being much more granular than we currently think.

In any case, I do agree with Kurzweil that artificial computers will eventually outnumber naturally occurring human computers on this planet — it’s just a question of when. In my view it will take a little longer than he thinks: perhaps 100 to 200 years at the most.

There is another aspect of my thinking on this subject which I think may throw a wrench in the works. I don’t think that what we call “consciousness” is something that can be synthesized. Humans appear to be conscious, but we have no idea what that means yet. It is undeniable that we all have an experience of being conscious, and this experience is mysterious. It is also the case that, at least so far, nobody has built a software program or hardware device that seems to be having this experience. In fact, we don’t even know how to test for consciousness. For example, the much-touted Turing Test does not test consciousness; it tests humanlike intelligence. There really isn’t a test for consciousness yet. Devising one is an interesting and important goal that we should perhaps be working on.

In my own view, consciousness is probably fundamental to the substrate of the universe, like space, time and energy. We don’t know what space, time and energy actually are. We cannot measure them directly either. All our measurements of space, time and energy are indirect — we measure other things that imply that space, time and energy exist. Space, time and energy are inferred from effects we observe on material things that we can measure. I think the same may be true of consciousness. So the question is: what are the measurable effects of consciousness? One candidate seems to be the double-slit experiment, which shows that the act of observation causes the quantum wave function to collapse. Are there other effects we can cite as evidence of consciousness?

I have recently been wondering how connected consciousness is to the substrate of the universe we are in. If consciousness is a property of the substrate, then it may be impossible to synthesize. For example, we never synthesize space, time or energy — no matter what we do, we are simply using the space, time and energy of the substrate that is this universe.

If this is the case, then creating consciousness is impossible. The best we can do is somehow channel the consciousness that is already there in the substrate of the universe. In fact, that may be what the human nervous system does: it channels consciousness, much in the way that an electrical circuit channels electricity. The reason that software programs will probably not become conscious is that they are too many levels removed from the substrate. There is little or no feedback between the high-level representations of cognition in AI programs and the quantum-level computation (and possibly consciousness) of the physical substrate of the universe. That is not the case in the human nervous system — there, the basic computing elements and all the cognitive activity are directly tied to the physical substrate of the universe. There is at least the potential for two-way feedback between the human mind (the software), the human brain (a sort of virtual machine), and the quantum field (the actual hardware).

So the question I have been asking myself lately is: how connected is consciousness to the physical substrate? And furthermore, how important is consciousness to what we consider intelligence to be? If consciousness is important to intelligence, then artificial intelligence may not be achievable through software alone — it may require consciousness, which may in turn require a different kind of computing system, one which is more connected (through bidirectional feedback) to the physical quantum substrate of the universe.

What all this means to me is that human beings may form an important and potentially irreplaceable part of the OM — the One Machine — the emerging global superorganism. In particular, today humans are still the most intelligent parts. But in the future, when machine intelligence may exceed human intelligence a billionfold, humans may still be the only, or at least the most, conscious parts of the system. Because of the human capacity for consciousness (animals and insects are conscious too), I think we have an important role to play in the emerging superorganism. We are its awareness. We are the ones who watch, feel, and ultimately know what it is thinking and doing.

Because humans are the actual witnesses and knowers of what the OM does and thinks, the function of the OM will very likely be to serve and amplify humans, rather than to replace them. It will be a system that is comprised of humans and machines working together, for human benefit, not for machine benefit. This is a very different future outlook than that of people who predict a kind of “Terminator-esque” future in which machines get smart enough to exterminate the human race. It won’t happen that way. Machines will very likely not get that smart for a long time, if ever, because they are not going to be conscious. I think we should be much more afraid of humans exterminating humanity than of machines doing it.

So to get to Kevin Kelly’s Level IV, what he calls “An Intelligent Conscious Superorganism,” we simply have to include humans in the system. Machines alone are not, and will not ever be, enough to get us there. I don’t believe consciousness can be synthesized or that it will suddenly appear in a suitably complex computer program. I think it is a property of the substrate, and computer programs are just too many levels removed from the substrate. Now, it is possible that we might devise a new kind of computer architecture — one which is much more connected to the quantum field. Perhaps in such a system consciousness, like electricity, could be embodied. That’s a possibility. It is likely that such a system would be more biological in nature, but that’s just a guess. It’s an interesting direction for research.

In any case, if we are willing to include humans in the global superorganism — the OM, the One Machine — then we are already at Kevin Kelly’s Level IV. If we are not willing to include them, then I don’t think we will reach Level IV anytime soon, or perhaps ever.

It is also important to note that consciousness has many levels, just like intelligence. There is basic raw consciousness, which simply perceives the qualia of what takes place. But there are also forms of consciousness which are more powerful — for example, consciousness that is aware of itself, consciousness which is so highly tuned that it has much higher resolution, and consciousness which is aware of the physical substrate and its qualities of being spacelike and empty of any kind of fundamental existence. These are in fact the qualities of the quantum substrate we live in. Interestingly, they are also the qualities of reality that Buddhist masters point out to be the ultimate nature of reality and of the mind (they do not consider reality and mind to be two different things ultimately). Consciousness may or may not be aware of these qualities of consciousness and of reality itself — consciousness can be dull, or low-grade, or simply not awake. The level to which consciousness is aware of the substrate is a way to measure the grade of consciousness taking place. We might call this dimension of consciousness “resolution.” The higher the resolution of consciousness, the more acutely aware it is of the actual nature of phenomena, the substrate. At the highest resolution it can directly perceive the space-like, mind-like, quantum nature of what it observes. At that level there is no perception of duality between observer and observed — consciousness perceives everything to be essentially consciousness appearing in different forms and behaving in a quantum fashion.

Another dimension of consciousness that is important to consider is what we could call “unity.” At the lowest level of the unity scale there is no sense of unity, but rather a sense of extreme isolation or individuality. At the highest level of the scale there is a sense of total unification of everything within one field of consciousness. That highest level corresponds to what we could call “omniscience.” The Buddhist concept of spiritual enlightenment is essentially consciousness that has evolved to BOTH the highest level of resolution and the highest level of unity.

The global superorganism is already conscious, in my opinion, but it has not achieved very high resolution or unity. This is because most humans, and most human groups and organizations, have only been able to achieve the most basic levels of consciousness themselves. Since humans, and groups of humans, comprise the consciousness of the global superorganism, our individual and collective conscious evolution is directly related to the conscious evolution of the superorganism as a whole. This is why it is important for individuals and groups to work on their own consciousness. Consciousness is “there” as a basic property of the physical substrate, but like mass or energy, it can be channelled and accumulated and shaped. Currently the consciousness that is present in us as individuals, and in groups of us, is at best nascent and underdeveloped.

In our young, dualistic, materialistic, and externally obsessed civilization, we have made very little progress on working with consciousness. Instead we have focused most or all of our energy on working with certain other, more material-seeming aspects of the substrate — space, time and energy. In my opinion a civilization becomes fully mature when it spends equal if not more time on the consciousness dimension of the substrate. That is something we are just beginning to work on, thanks to the strangeness of quantum mechanics breaking our classical physical paradigms and forcing us to admit that consciousness might play a role in our reality.

But there are ways to speed up the evolution of individual and collective consciousness, and in doing so we can advance our civilization as a whole. I have lately been writing and speaking about this in more detail.

On an individual level, one way to rapidly develop our own consciousness is the path of meditation and spirituality — this is the most important and effective. There may also be technological aids, such as augmented reality or sensory augmentation, that can improve how we perceive, and what we perceive. In the not too distant future we will probably have the opportunity to dramatically improve the range and resolution of our sense organs using computers or biological means. We may even develop new senses that we cannot imagine yet. In addition, using the Internet for example, we will be able to be aware of more things at once than ever before. But ultimately, the scope of our individual consciousness has to develop on an internal level in order to truly reach higher levels of resolution and unity. Machine augmentation can help perhaps, but it is not a substitute for actually increasing the capacity of our consciousness. For example, if we use machines to get access to vastly more data, but our consciousness remains at a relatively low-capacity level, we may not be able to integrate or make use of all that new data anyway.

It is a well-known fact that the brain filters out most of the information we actually perceive. Furthermore, when taking a hallucinogenic drug, the filter opens up a little wider, and people become aware of things which were there all along but which they previously filtered out. Widening the scope of consciousness — increasing the resolution and unity of consciousness — is akin to what happens when taking such a drug, except that it is not a temporary effect and it is more controllable and functional on a day-to-day basis. Many great Tibetan lamas I know seem to have accomplished this — the scope of their consciousness is quite vast, and the resolution is quite precise. They literally can and do see every detail of even the smallest things, and at the same time they have very little or no sense of individuality. The lack of individuality seems to remove certain barriers, which in turn enables them to perceive things that happen beyond the scope of what would normally be considered their own minds — for example, they may be able to perceive the thoughts of others, or see what is happening in other places or times. This seems to take place because they have increased the resolution and unity of their consciousness.

On a collective level, there are also things we can do to make groups, organizations and communities more conscious. In particular, we can build systems that do for groups what the “self construct” does for individuals.

The self is an illusion. And that’s good news. If it weren’t an illusion we could never see through it, and so, for one thing, spiritual enlightenment would not be possible to achieve. Furthermore, if it weren’t an illusion we could never hope to synthesize it for machines, or for large collectives. The fact that “self” is an illusion is something that Buddhists, neuroscientists, and cognitive scientists all seem to agree on. The self is an illusion, a mere mental construct. But it’s a very useful one, when applied in the right way. Without some concept of self, we humans would find it difficult to communicate or even navigate down the street. Similarly, without some concept of self, groups, organizations and communities also cannot function very productively.

The self construct provides an entity with a model of itself and its environment. This model includes what is taking place “inside” and what is taking place “outside” what is considered to be self, or “me.” By creating this artificial boundary, and modelling what is taking place on both sides of it, the self construct is able to measure and plan behavior, and to enable a system to adjust and adapt to “itself” and the external environment. Entities that have a self construct are able to behave far more intelligently than those which do not. For example, consider the difference between the intelligence of a dog and that of a human. Much of this is really a difference in the sophistication of the self-constructs of these two species. Human selves are far more self-aware, introspective, and sophisticated than those of dogs. The two species are equally conscious, but humans have more developed self-constructs. This applies to simple AI programs as well, and to collective intelligences such as workgroups, enterprises, and online communities. The more sophisticated the self-construct, the smarter the system can be.

The key to appropriate and effective application of the self-construct is to develop a healthy self, rather than to eliminate the self entirely. Eradication of the self is a form of nihilism that leads to an inability to function in the world. That is not something that Buddhists or neuroscientists advocate. So what is a healthy self? In an individual, a healthy self is a construct that accurately represents past, present and projected future internal and external state, and that is highly self-aware, rational but not overly so, adaptable, respectful of external systems and other beings, and open to learning and changing to fit new situations. The same is true for a healthy collective self. However, most individuals today do not have healthy selves — they have highly deluded, unhealthy self-constructs. This in turn is reflected in the higher-order self-constructs of the groups, organizations and communities we build.

One of the most important things we can work on now is creating systems that provide collectives — groups, organizations and communities — with sophisticated, healthy, virtual selves. These virtual selves provide collectives with a mirror of themselves. Having a mirror enables the members of those systems to see the whole, and how they fit in. Once they can see this, they can begin to adjust their own behavior to fit what the whole is trying to do. This simple mirroring function can catalyze dramatic new levels of self-organization and synchrony in what would otherwise be a totally chaotic “crowd” of individual entities.

In fact, I think that collectives move through three levels of development:

  • Level 1: Crowds. Crowds are collectives in which the individuals are not aware of the whole and in which there is no unified sense of identity or purpose. Nevertheless, crowds do intelligent things. Consider, for example, schools of fish or flocks of birds. There is no single leader, yet the individuals, by adapting to what their nearby neighbors are doing, behave collectively as a single entity of sorts. Crowds are amoebic entities that ooze around in a bloblike fashion. They are not that different from physical models of gases.
  • Level 2: Groups. Groups are the next step up from crowds. Groups have some form of structure, which usually includes a system for command and control. They are more organized. Groups are capable of much more directed and intelligent behaviors. Families, cities, workgroups, sports teams, armies, universities, corporations, and nations are examples of groups. Most groups have intelligences that are roughly similar to those of simple animals. They may have a primitive sense of identity and self, and on the basis of that, they are capable of planning and acting in a more coordinated fashion.
  • Level 3: Meta-Individuals. The highest level of collective intelligence is the meta-individual. This emerges when what was once a crowd of separate individuals evolves to become a new individual in its own right, and it is facilitated by the formation of a sophisticated meta-level self-construct for the collective. This evolutionary leap is called a metasystem transition — the parts join together to form a new higher-order whole that is made of the parts themselves. This new whole resembles the parts, but transcends their abilities. To evolve a collective to the level of being a true individual, it has to have a well-designed nervous system, it has to have a collective brain and mind, and most importantly it has to achieve a high level of collective consciousness. High-level collective consciousness requires a sophisticated collective self construct to serve as a catalyst. Fortunately, this is something we can actually build, because, as asserted previously, self is an illusion, a construct, and therefore selves can be built, even for large collectives comprised of millions or billions of members.
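
The Level 1 "crowd" dynamic — leaderless individuals adapting only to their nearby neighbors — can be sketched in a few lines of Python. This is a toy one-dimensional model invented for illustration, not a claim about any particular species:

```python
import random

def step(positions, radius=2.0, rate=0.1):
    """One update: each agent drifts toward the mean position of its neighbors."""
    new = []
    for x in positions:
        neighbors = [y for y in positions if abs(y - x) <= radius]
        center = sum(neighbors) / len(neighbors)  # never empty: includes the agent itself
        new.append(x + rate * (center - x))
    return new

random.seed(0)
crowd = [random.uniform(0, 10) for _ in range(20)]
initial_spread = max(crowd) - min(crowd)
for _ in range(200):
    crowd = step(crowd)
final_spread = max(crowd) - min(crowd)
# With no leader at all, purely local adaptation still pulls the crowd
# together: the outermost agents always drift inward, so the spread never grows.
print(final_spread <= initial_spread)  # True
```

The point of the sketch is the one made in the list above: coherent collective behavior emerges from local rules alone, with no command-and-control structure of the kind that defines Level 2.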

The global superorganism has been called The Global Brain for over a century by a stream of forward-looking thinkers. Today we may start calling it the One Machine, or the OM, or something else. But in any event, I think the most important work we can do to make it smarter is to provide it with a more developed and accurate sense of collective self. To do this we might start by working on ways to provide smaller collectives with better selves — for example, groups, teams, enterprises and online communities. Can we provide them with dashboards and systems which catalyze greater collective awareness and self-organization? I really believe this is possible, and I am certain there are technological advances that can support this goal. That is what I’m working on with my own project, Twine.com. But this is just the beginning.

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

Life in Perpetual Beta: The Film

Melissa Pierce is a filmmaker who is making a film about "Life in Perpetual Beta." It’s about people who are adapting and reinventing themselves in the moment, and about a new philosophy or approach to life. She’s interviewed a number of interesting people, and while I was in Chicago recently, she spoke with me as well. Here is a clip about how I view the philosophy of living in beta. Her film is also in perpetual beta, and you can see the clips from her interviews on her blog as the film evolves. Eventually it will be released through the indie film circuit, and it looks like it will be a cool film. By the way, she is open to getting sponsors, so if you like this idea and want your brand on the opening credits, drop her a line!

The Wikipedia, Knowledge Preservation and DNA

I had an interesting thought today about the long-term preservation and transmission of human knowledge.

The Wikipedia may be on its way to becoming one of the best places in which to preserve knowledge for future generations. But this is just the beginning. What if we could encode the Wikipedia into the "junk DNA" portion of our own genome? It appears that something like this may actually be possible — at least according to some recent studies of the non-coding regions of the human genome.

If we could actually encode knowledge, like the Wikipedia for example, into our genome, the next logical step would be to find a way to access it directly.

At first we might only be able to access and read the knowledge stored in our DNA through a computationally intensive genetic analysis of an individual’s DNA. In order to correct any errors in the data caused by mutation, we would also need to cross-reference this individual data with similar analyses of the DNA of other people who also carry it. But this is just the beginning. There are ways to store data with enough redundancy to protect against degradation. If we could do this, we might be able to eliminate the need for cross-referencing as a form of error correction — the data itself would be self-correcting, so to speak. If we could accomplish this, then the next step would be to find a way for an individual to access the knowledge stored in their DNA in real time, directly. That’s a long way off, but there may be a way to do it using some future nano-scale genomic-brain interface. This opens up some fascinating areas of speculation, to say the least.
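
The simplest way to make stored data self-correcting in the sense described above is a repetition code with majority voting: store each bit several times, and let the copies outvote an isolated mutation on readout. The sketch below is only a toy illustration of the idea; serious DNA data-storage work uses much stronger error-correcting codes (e.g., Reed–Solomon):

```python
def encode(bits, copies=3):
    """Store each bit several times in a row (a simple repetition code)."""
    return [b for bit in bits for b in [bit] * copies]

def decode(stored, copies=3):
    """Recover each bit by majority vote over its copies."""
    out = []
    for i in range(0, len(stored), copies):
        group = stored[i:i + copies]
        out.append(1 if sum(group) > copies // 2 else 0)
    return out

data = [1, 0, 1, 1]
stored = encode(data)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
stored[4] = 1                  # simulate a point mutation in one copy
print(decode(stored) == data)  # True: the vote outvotes the single mutation
```

Cross-referencing many individuals' genomes, as described in the paragraph above, is essentially the same voting idea applied across people rather than within one strand.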

Continue reading

A Few Predictions for the Near Future

This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.

Learning from the Future with Nova Spivack from Maarten on Vimeo.

A Universal Classification of Intelligence

I’ve been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system — whether naturally occurring or synthetic — can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.

One approach to such evaluation is to use a standardized test, such as an IQ test. However, this test is far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ test, but that doesn’t mean much, because the test itself is geared towards humans. What is needed is a way to evaluate and compare intelligence across different species — one that is much more granular and basic.

What we need is a system that focuses on basic building blocks of intelligence, starting by measuring the presence or ability to work with fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).
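
As a sketch of what such a rating system might look like in data terms, the profile below scores a system on a list of fundamental capabilities drawn from the constructs named above and collapses it into one normalized rating. Both the capability names and the dolphin's scores are invented placeholders, not an established taxonomy or real measurements:

```python
# Hypothetical capability list; each entry would need a species-neutral test.
CAPABILITIES = [
    "object_constancy", "quantity", "arithmetic", "self_construct",
    "temporal_reasoning", "spatial_reasoning", "metaphor", "language",
    "induction", "logical_reasoning",
]

def normalized_score(profile):
    """Collapse a capability profile (each score in [0, 1]) into one rating."""
    return sum(profile.get(c, 0.0) for c in CAPABILITIES) / len(CAPABILITIES)

# Entirely made-up scores, for illustration only.
dolphin = {"object_constancy": 1.0, "quantity": 0.7, "self_construct": 0.8,
           "temporal_reasoning": 0.5, "spatial_reasoning": 0.9}
print(round(normalized_score(dolphin), 2))  # 0.39
```

Because every system is scored against the same capability list, a dolphin, a software agent, and a termite colony would at least be rated on a common scale — which is the normalization the paragraph above calls for, however crude this flat average is as a first cut.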

What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system — whether insect, animal, human, software, or alien — in a normalized manner.

Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: It will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence — an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map — and the rating scale I am calling for would provide us with one, for it maps the range of possible capabilities that intelligent systems are capable of.

I’m not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra for example, or how well it can recall memories, etc.) — but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are — and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks from which all cognitive behavior, and therefore all minds, are comprised.

So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question — although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.

Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?

I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth — including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well — what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?

It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some more direct means of contact, we will reap important rewards. A system such as what I am proposing will provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with the alien civilizations we may inevitably encounter as we spread throughout the galaxy. While some skeptics may claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but eventually it is inevitable that we will cross paths — if they exist at all. Not to be prepared would be irresponsible.

Artificial Stupidity: The Next Big Thing

There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don’t need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I’m skeptical to say the least. I don’t need or want artificial intelligence.

No, what I really need is artificial stupidity.

I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks — like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
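A tiny sketch of what I mean (the rules, folder names, and addresses here are all made up, of course): a rule-based filer that handles the rote filing so I don't have to.

```python
# A toy rule-based message filer: the kind of rote, "stupid" task
# software should handle for us. Illustrative only — the rules and
# categories are hypothetical.

RULES = [
    (lambda m: "invoice" in m["subject"].lower(), "Finance"),
    (lambda m: m["sender"].endswith("@newsletter.example.com"), "Newsletters"),
    (lambda m: "meeting" in m["subject"].lower(), "Scheduling"),
]

def file_message(message):
    """Return the folder a message should be filed under."""
    for matches, folder in RULES:
        if matches(message):
            return folder
    return "Inbox"  # no rule matched: leave it for a human

msg = {"sender": "billing@vendor.example.com", "subject": "Invoice #1234"}
print(file_message(msg))  # "Finance"
```

Nothing here requires intelligence at all, which is exactly the point: it is dumb, tireless automation of the tasks we are bad at.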

The human brain is the result of millions of years of evolution. It’s already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don’t require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it’s going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.

The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don’t mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren’t good at." In fact humans are really bad at doing relatively simple, "stupid" things — tasks that don’t require much intelligence at all.

For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and we can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That’s what computers are for – or should be for at least.

Humans are really good at higher-level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving — but we are just terrible at managing email, or making sense of the Web. Let’s play to our strengths and use computers to compensate for our weaknesses.

I think it’s time we stop talking about artificial intelligence — which nobody really needs, and few will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.

Radar Networks Announces Twine.com

My company, Radar Networks, has just come out of stealth. We’ve announced what we’ve been working on all these years: It’s called Twine.com. We’re going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There’s lots of press coming out where you can read about what we’re doing in more detail. The team is extremely psyched and we’re all working really hard right now, so I’ll be brief for now. I’ll write a lot more about this later.

Continue reading

Scientist Says "Never in Our Imagination Could This Happen." Famous Last Words?

Whenever a scientist says something like, don’t worry our new experiment could never get out of the lab, or don’t worry the miniature black hole we are going to generate couldn’t possibly swallow up the entire planet, I tend to get a little worried. The problem is that just about every time a scientist has said something is patently absurd, totally impossible or could never ever happen, it usually turns out that in fact it isn’t as impossible as they thought. Now here’s a new article about scientists creating new artificial lifeforms, based on new genetic building blocks — and once again there’s one of those statements. I’m guessing that this means that in about 10 years some synthetic life form is going to be found to have done the impossible and escaped from the lab — perhaps into our food supply, or maybe into our environment. Don’t get me wrong — I’m in favor of this kind of research into new frontiers. I just don’t think anyone can guarantee it won’t escape from the lab.

Knowledge Networking

I’ve been thinking for several years about Knowledge Networking. It’s not a term I invented, it’s been floating around as a meme for at least a decade or two. But recently it has started to resurface in my own work.

So what is a knowledge network? I define a knowledge network as a form of collective intelligence in which a network of people (two or more people connected by social-communication relationships) creates, organizes, and uses a collective body of knowledge. The key here is that a knowledge network is not merely a site where a group of people work on a body of information together (such as Wikipedia); it’s also a social network — there is an explicit representation of social relationships within it. So it’s more like a social network than, for example, a discussion forum or a wiki.

I would go so far as to say that knowledge networks are the third generation of social software. (Note: this is based in part on ideas that emerged in conversations I have had with Peter Rip, so this is also his idea):

  • First-generation social apps were about communication (e.g. messaging such as email, discussion boards, chat rooms, and IM)
  • Second-generation social apps were about people and content (e.g. social networks, social media sharing, user-generated content)
  • Third-generation social apps are about relationships and knowledge (e.g. wikis, referral networks, question-and-answer systems, social recommendation systems, vertical knowledge and expertise portals, social mashup apps, and coming soon, what we’re building at Radar Networks)

Just some thoughts on a Saturday morning…

Enriching the Connections of the Web — Making the Web Smarter

Web 3.0 — aka The Semantic Web — is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.

I believe that collective intelligence primarily comes from connections — this is certainly the case in the brain, where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain’s connections than in the neurons alone. There are several kinds of connections on the Web:

  1. Connections between information (such as links)
  2. Connections between people (such as opt-in social relationships, buddy lists, etc.)
  3. Connections between applications (web services, mashups, client server sessions, etc.)
  4. Connections between information and people (personal data collections, blogs, social bookmarking, search results, etc.)
  5. Connections between information and applications (databases and data sets stored or accessible by particular apps)
  6. Connections between people and applications (user accounts, preferences, cookies, etc.)

If there are other kinds of connections that I haven’t listed, please let me know!

I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.

In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of statements, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It’s a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
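As a rough sketch (the URIs and field names here are hypothetical, and a real system would use RDF rather than plain tuples), such extended statements can be modeled as the core triple plus metadata fields:

```python
from collections import namedtuple

# A Semantic Web-style statement: the core (subject, predicate, object)
# triple, extended with provenance and a timestamp as described above.
# The URIs are invented for this example.
Statement = namedtuple("Statement",
                       "subject predicate object provenance timestamp")

stmt = Statement(
    subject="http://example.org/people/alice",
    predicate="http://example.org/ontology#employeeOf",
    object="http://example.org/companies/acme",
    provenance="http://example.org/sources/hr-database",
    timestamp="2007-10-01",
)

# The bare triple is just the first three elements:
triple = (stmt.subject, stmt.predicate, stmt.object)
print(triple)
```

The extra fields show why "triple" undersells the model: any attribute of the statement itself can ride along without changing the basic shape.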

The important point for this article, however, is that in this data model, rather than there being just a single type of connection (as is the case on the present Web, which basically just provides the HREF hotlink, meaning only "A and B are linked," and which may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.

For example, one might define a type of connection called "friend of" or a type of connection called "employee of" — these have very different meanings (different semantics), which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of," which is defined in some ontology and might, for example, specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meanings and logical implications.
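Here is a toy illustration of the idea (the ontology and names are invented for this example, and a real application would consume RDF/OWL via a proper library, not a Python dict): given a definition of what a link type means, an application can interpret a typed link and derive its implications.

```python
# A minimal, hypothetical "ontology": each link type declares what kind
# of thing it connects (domain/range) and its inverse relation.
ONTOLOGY = {
    "employeeOf": {"domain": "Person", "range": "Organization",
                   "inverse": "employerOf"},
    "friendOf":  {"domain": "Person", "range": "Person",
                  "inverse": "friendOf"},
}

def interpret(subject, link_type, obj):
    """Return what a typed link states and implies, per the ontology."""
    defn = ONTOLOGY[link_type]
    return {
        "statement": (subject, link_type, obj),
        "implies": (obj, defn["inverse"], subject),  # the inverse relation
        "subject_type": defn["domain"],
        "object_type": defn["range"],
    }

facts = interpret("Alice", "employeeOf", "Acme Corp")
print(facts["implies"])  # ("Acme Corp", "employerOf", "Alice")
```

The point is that the link itself carries machine-readable meaning: any application that understands the shared definitions can draw the same conclusions, without custom code per data source.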

This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It’s a new place to put meaning in fact — you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole — the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).

Even if the granularity of this improvement in connection technology is relatively gross-level, it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined, let alone understood — it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.

It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.

Now where will all these rich semantic connections come from? That’s the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people — for example, mining the Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" — far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.

These are subtle points that are very hard for non-specialists to see — without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!

Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I’m saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.

Breaking the Collective IQ Barrier — Making Groups Smarter

I’ve been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call “The Collective IQ Barrier.” Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.

In a nutshell, here is how I define this barrier:

The Collective IQ Barrier: The potential collective intelligence of a human group grows exponentially with group size; in practice, however, the actual collective intelligence achieved by a group is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
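To make the definition concrete, here is a toy rendering of the barrier (the constants and functional forms are purely illustrative, not measurements):

```python
# A toy model of the Collective IQ Barrier as defined above:
# potential collective IQ grows exponentially with group size n,
# while actual collective IQ in practice shrinks inversely with n.
# All numbers are illustrative only.

def potential_iq(n):
    return 2.0 ** n        # exponential in group size

def actual_iq(n):
    return 100.0 / n       # inversely proportional to group size

for n in (5, 20, 50):
    delta = potential_iq(n) - actual_iq(n)
    print(f"n={n:3d}  potential={potential_iq(n):16.0f}  "
          f"actual={actual_iq(n):5.1f}  delta={delta:16.0f}")
```

The delta between the two curves widens explosively with group size, which is exactly the gap the rest of this article is about.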

Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?

I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.

The Effective Size of Groups

For millions of years — in fact since the dawn of humanity — human social organizations have been limited in effective size. Groups are most effective when they are small, but small groups have less collective knowledge at their disposal. Slightly larger groups optimize both effectiveness and access to resources such as knowledge and expertise. In my own experience working on many different kinds of teams, I think the sweet spot is between 20 and 50 people. Above this size, groups rapidly become inefficient and unproductive.

The Invention of Hierarchy

The solution that humans have used to get around this limitation in the effective size of groups is hierarchy. When organizations grow beyond 50 people, we start to break them into sub-organizations of fewer than 50 people. As a result, if you look at any large organization, such as a Fortune 100 corporation, you find a huge, complex hierarchy of nested organizations and cross-functional organizations. This hierarchy enables the organization to create specialized “cells” or “organs” of collective cognition around particular domains (like sales, marketing, engineering, HR, strategy, etc.) that remain effective despite the overall size of the organization.

By leveraging hierarchy, an organization of even hundreds of thousands of members can still achieve some level of collective IQ as a whole. The problem, however, is that the collective IQ of the whole organization is still quite a bit lower than the combined collective IQs of the sub-organizations that comprise it. Even in well-structured, well-managed hierarchies, the hierarchy is still less than the sum of its parts. Hierarchy also has limits — the collective IQ of an organization is also inversely proportional to the number of groups it contains, and the average number of levels of hierarchy between those groups. (Perhaps this could be defined more elegantly as an inverse function of the average network distance between groups in an organization.)

The reason that organizations today still have to make such extensive use of hierarchy is that our technologies for managing collaboration, community, knowledge and intelligence on a collective scale are still extremely primitive. Hierarchy is still one of the best solutions we have at our disposal. But we’re getting better fast.

Modern organizations are larger and far more complex than ever would have been practical in the Middle Ages, for example. They contain more people, distributed more widely around the globe, with more collaboration and specialization, and more information, making more rapid decisions, than was possible even 100 years ago. This is progress.

Enabling Technologies

There have been several key technologies that made modern organizations possible: the printing press, telegraph, telephone, automobile, airplane, typewriter, radio, television, fax machine, and personal computer. These technologies have enabled information and materials to flow more rapidly, at less cost, across ever more widely distributed organizations. So we can see that technology does make a big difference in organizational productivity. The question is, can technology get us beyond the Collective IQ Barrier?

The advent of the Internet, and in particular the World Wide Web, enabled a big leap forward in collective intelligence. These technologies have further reduced the cost of distributing and accessing information and information products (and even “machines” in the form of software code and Web services). They have made it possible for collective intelligence to function more rapidly, more dynamically, on a wider scale, and at less cost than any previous generation of technology.

As a result of the evolution of the Web we have seen new organizational structures begin to emerge that are less hierarchical, more distributed, and often more fluid. For example, virtual teams can instantly form, collaborate across boundaries, and then dissolve back into the Webs they came from when their job is finished. This process is now much easier than it ever was. Numerous hosted Web-based tools exist to facilitate this: email, groupware, wikis, message boards, listservers, weblogs, hosted databases, social networks, search portals, enterprise portals, etc.

But this is still just the cusp of this trend. Even today, with the current generation of Web-based tools available to us, we are still not able to effectively tap much of the potential Collective IQ of our groups, teams and communities. How do we get from where we are today (the whole is dumber than the sum of its parts) to where we want to be in the future (the whole is smarter than the sum of its parts)?

The Future of Productivity

The diagram below illustrates how I think about the past, present and future of productivity. In my view, from the advent of PCs onwards we have seen rapid growth in individual and group productivity, enabling people to work with larger sets of information, in larger groups. But this will not last — as soon as we reach a critical level of information and groups of ever larger size, productivity will start to decline again, unless new technologies and tools emerge to enable us to cope with these increases in scale and complexity. You can read more about this diagram here.


In the last 20 years the amount of information that knowledge workers (and even consumers) have to deal with on a daily basis has mushroomed by almost 10 orders of magnitude, and it will continue like this for several more decades. But our information tools — and in particular our tools for communication, collaboration, community, commerce and knowledge management — have not advanced nearly as quickly. As a result, the tools that we are using today to manage our information and interactions are grossly inadequate for the task at hand: they were simply not designed to handle the tremendous volumes of distributed information, and the rate of change of information, that we are witnessing today.

Case in point: Email. Email was never designed for what it is being used for today. Email was a simple interpersonal notification and messaging tool, and essentially that is what it is good for. But today most of us use our email as a kind of database, search engine, collaboration tool, knowledge management tool, project management tool, community tool, commerce tool, content distribution tool, etc. Email wasn’t designed for these functions, and it really isn’t very productive when applied to them.

For groups the email problem is even worse than it is for individuals: not only is everyone’s individual email productivity declining, but collectively, as group size increases (and thus group information size increases as well), there is a multiplier effect that further reduces everyone’s email productivity in inverse proportion to the size of the group. Email becomes increasingly unproductive as group size and information size increase.

This is not just true of email, however; it’s true of almost all the information tools we use today: search engines, wikis, groupware, social networks, etc. They all suffer from this fundamental problem: productivity breaks down with scale — and the problem is exponentially worse in groups and organizations than it is for individuals. But scale is increasing incessantly — that is a fact — and it will continue to do so for decades at least. Unless something is done about this we will simply be completely buried in our own information within about a decade.

The Semantic Web

I think the Semantic Web is a critical enabling technology that will help us get through this transition. It will enable the next big leap in productivity and collective intelligence. It may even be the technology that enables humans to flip the ratio, so that for the first time in human history, larger groups of people can function more productively and intelligently than smaller groups. It all comes down to enabling individuals and groups to maintain (and ultimately improve) their productivity in the face of the continuing explosion in information and social complexity that they are experiencing.

The Semantic Web provides a richer underlying fabric for expressing, sharing, and connecting information. Essentially it provides a better way to transform information into useful knowledge, and to share and collaborate with it. It upgrades the medium — in this case the Web and any other data that is connected to the Web — that we use for our information today.

By enriching the medium we can in turn enable new leaps in how applications, people, groups and organizations can function. This has happened many times before in the history of technology. The printing press is one example. The Web is a more recent one. The Web enriched the medium (documents) with HTML, and a new transport mechanism, HTTP, for sharing it. This brought about one of the largest leaps in human collective cognition and productivity in history. But HTML really only describes formatting and links. XML came next, to start to provide a way to enrich the medium with information about structure — the parts of documents. The Semantic Web takes this one step further: it provides a way to enrich the medium with information about the meaning of that structure — what are those parts, and what do various links actually mean?

Essentially the Semantic Web provides a means to abstract and externalize human knowledge about information — previously the meaning of information lived only in our heads, and perhaps in certain specially-written software applications that were coded to understand certain types of data. The Semantic Web will disrupt this situation by providing open standards for encoding this meaning right into the medium itself. Any application that can speak the open standards of the Semantic Web can then begin to correctly interpret the meaning of information, and treat it accordingly, without having to be specifically coded to understand each type of data it might encounter.

This is analogous to the benefit of HTML. Before HTML, every application had to be specifically coded for each different document format in order to display it. After HTML, applications could all just standardize on a single way to define the formats of different documents. Suddenly a huge new landscape of information became accessible, both to applications and to the people who used them. The Semantic Web does something similar: it provides a way to make the data itself “smarter” so that applications don’t have to know so much to correctly interpret it. Any data structure — a document or a data record of any kind — that can be marked up with HTML to define its formatting can also be marked up with RDF and OWL (the languages of the Semantic Web) to define its meaning.

Once semantic metadata is added, the document can not only be displayed properly by any application (thanks to HTML and XML), it can also be correctly understood by that application. For example, the application can understand what kind of document it is, what it is about, what the parts are, how the document relates to other things, and what particular data fields and values mean and how they map to data fields and values in other data records around the Web.

The Semantic Web enriches information with knowledge about what that information means, what it is for, and how it relates to other things. With this in hand, applications can go far beyond the limitations of keyword search, text processing, and brittle tabular data structures. Applications can start to do a much better job of finding, organizing, filtering, integrating, and making sense of ever larger and more complex distributed data sets around the Web.

Another great benefit of the Semantic Web is that this additional metadata can be added in a totally distributed fashion. The publisher of a document can add their own metadata, and other parties can then annotate that with their own metadata. Even HTML doesn’t enable that level of cooperative markup (except perhaps in wikis). It takes a distributed solution to keep up with a highly distributed problem (the Web). The Semantic Web is just such a distributed solution.

The Semantic Web will enrich information and this in turn will enable people, groups and applications to work with information more productively. In particular groups and organizations will benefit the most because that is where the problems of information overload and complexity are the worst. Individuals at least know how they organize their own information so they can do a reasonably good job of managing their own data. But groups are another story — because people don’t necessarily know how others in their group organize their information. Finding what you need in other people’s information is much harder than finding it in your own.

Where the Semantic Web can help with this is by providing a richer fabric for knowledge management. Information can be connected to an underlying ontology that defines not only the types of information available, but also the meaning and relationships between different tags or subject categories, and even the concepts that occur in the information itself. This makes organizing and finding group knowledge easier. In fact, eventually the hope is that people and groups will not have to organize their information manually anymore — it will happen in an almost fully-automatic fashion. The Semantic Web provides the necessary frameworks for making this possible.
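As a simple sketch of how an ontology can reduce manual organizing (the concept hierarchy here is invented for illustration): if tags are connected to broader concepts, items tagged narrowly can automatically be found under broader categories, with no human re-filing.

```python
# A tiny, hypothetical concept hierarchy: each concept points to its
# broader concept, the way an ontology relates subject categories.
BROADER = {
    "golden retriever": "dog",
    "dog": "mammal",
    "mammal": "animal",
}

def expand_tags(tags):
    """Expand a set of tags with all broader concepts from the ontology."""
    expanded = set(tags)
    for tag in tags:
        while tag in BROADER:          # walk up the hierarchy
            tag = BROADER[tag]
            expanded.add(tag)
    return expanded

print(sorted(expand_tags({"golden retriever"})))
# ['animal', 'dog', 'golden retriever', 'mammal']
```

A person tags an item once, narrowly; the ontology does the rest of the organizing, which is the "almost fully-automatic" organization described above, in miniature.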

But even with the Semantic Web in place and widely adopted, more innovation on top of it will be necessary before we can truly break past the Collective IQ Barrier such that organizations can in practice achieve exponential increases in Collective IQ. Human beings are only able to cope with a few chunks of information at a given moment, and our memories and ability to process complex data sets are limited. When group size and data size grow beyond certain limits, we simply cannot cope; we become overloaded and jammed, even with rich Semantic Web content at our disposal.

Social Filtering and Social Networking — Collective Cognition

Ultimately, to remain productive in the face of such complexity we will need help. Humans in roles that require them to cope with large scales of information, relationships and complexity often hire assistants, but not all of us can afford to do that, and in some cases even assistants are not able to keep up with the complexity that has to be managed.

Social networking and social filtering are two ways to expand the number of “assistants” we each have access to, while also reducing the price of harnessing the collective intelligence of those assistants to just about nothing. Essentially these methodologies enable people to leverage the combined intelligence and attention of large communities of like-minded people who contribute their knowledge and expertise for free. It’s a collective, tit-for-tat form of altruism.

For example, Digg is a community that discovers the most interesting news articles. It does this by enabling thousands of people to submit articles and vote on them. What Digg adds are a few clever algorithms on top of this for ranking articles, such that the most active ones bubble up to the top. It’s not unlike a stock market trader’s terminal, but for a completely different class of data. This is a great example of social filtering.
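The ranking step can be sketched with a simple time-decayed vote score (Digg’s actual algorithm is not public; this formula and its constants are purely illustrative):

```python
def score(votes, age_hours, gravity=1.8):
    """Time-decayed popularity: newer, heavily-voted items rank higher.
    Illustrative only — not Digg's actual algorithm."""
    return votes / ((age_hours + 2) ** gravity)

articles = [
    {"title": "Old but popular", "votes": 500, "age_hours": 48},
    {"title": "New and rising",  "votes": 80,  "age_hours": 2},
]
ranked = sorted(articles,
                key=lambda a: score(a["votes"], a["age_hours"]),
                reverse=True)
print(ranked[0]["title"])  # "New and rising"
```

The decay term is what makes "active" beat merely "popular": a fresh article with modest votes outranks a stale one with many, so the front page keeps moving.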

Another good example is prediction markets, where groups of people vote on which stock or movie or politician is likely to win — in some cases by buying virtual stock in them — as a means to predict the future. It has been shown that prediction markets in fact do a pretty good job of making accurate predictions. In addition, expertise referral services help people get answers to questions from communities of experts. These services have been around in one form or another for decades and have recently come back into vogue with services like Yahoo Answers. Amazon has also taken a stab at this with their Amazon Mechanical Turk, which enables “programs” to be constructed in which people perform the work.

I think social networking, social filtering, prediction markets, expertise referral networks, and collective collaboration are extremely valuable. By leveraging other people, individuals and groups can stay ahead of complexity and can also get the benefit of wide-area collective cognition. These approaches to collective cognition are beginning to filter into the processes of organizations and other communities. For example, there is recent interest in applying social networking to niche communities and even enterprises.

The Semantic Web will enrich all of these activities — making social networks and social filtering more productive. It’s not an either/or choice — these technologies are in fact extremely compatible. By leveraging a community to tag, classify and organize content, for example, the meaning of that content can be collectively enriched. This is already happening in a primitive way in many social media services. The Semantic Web will simply provide a richer framework for doing this.

The combination of the Semantic Web with emerging social networking and social filtering will enable something greater than either on its own. Together, these two technologies will enable much smarter groups, social networks, communities and organizations. But this still will not get us all the way past the Collective IQ Barrier. It may get us close to the threshold, though. To cross the threshold we will need to enable an even more powerful form of collective cognition.

The Agent Web

To cope with the enormous future scale and complexity of the Web, desktop and enterprise, each individual and group will really need not just a single assistant, or even a community of human assistants working on common information (a social filtering community, for example); they will need thousands or millions of assistants working specifically for them. This really only becomes affordable and feasible if we can virtualize what an “assistant” is.

Human assistants are at the top of the intelligence pyramid — they are extremely smart and powerful, and they are expensive. They should not be used for simple tasks like sorting content; that’s just a waste of their capabilities. It would be like using a supercomputer array to spellcheck a document. Instead, we need to free humans up to do the really high-value information tasks, and find a way to farm out the low-value, rote tasks to software. Software is cheap or even free, and it can be replicated as much as needed in order to parallelize. A virtual army of intelligent agents is less expensive than a single human assistant, and much better suited to sifting through millions of Web pages every day.

But where will these future intelligent agents get their intelligence? In past attempts at artificial intelligence, researchers tried to build gigantic expert systems that could reason as well as a small child, for example. These attempts met with varying degrees of success, but they all had one thing in common: they were monolithic applications.

I believe that future intelligent agents should be simple. They should not be advanced AI programs or expert systems. They should be capable of a few simple behaviors, the most important of which is to reason against sets of rules and semantic data. The basic logic necessary for reasoning is not enormous and does not require any AI — it’s just the ability to follow logical rules and perhaps do set operations. They should be lightweight and highly mobile. Instead of vast monolithic AI, I am talking about vast numbers of very simple agents that, working together, can do emergent, intelligent operations en masse.
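To make this concrete, here is a minimal sketch of the kind of lightweight reasoning I mean: just forward chaining over (subject, predicate, object) facts, with no AI anywhere. The rule and the facts are purely illustrative, not drawn from any real system.

```python
# A tiny forward-chaining "agent": it repeatedly applies rules to a
# set of semantic triples until no new facts can be derived. This is
# pattern matching plus set operations, nothing more.

def apply_rules(facts, rules):
    """Apply each rule to the fact set until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new_facts = set(rule(facts)) - facts
            if new_facts:
                facts |= new_facts
                changed = True
    return facts

# One illustrative rule: transitivity of a "located-in" relation.
def located_in_transitive(facts):
    for (a, p1, b) in facts:
        for (b2, p2, c) in facts:
            if p1 == p2 == "located-in" and b == b2:
                yield (a, "located-in", c)

facts = {("Rome", "located-in", "Italy"),
         ("Italy", "located-in", "Europe")}
derived = apply_rules(facts, [located_in_transitive])
assert ("Rome", "located-in", "Europe") in derived
```

An agent like this knows nothing about geography; all of its "expertise" lives in the rule and the data it is handed, which is exactly the division of labor described above.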

Take search, for example: you might deploy a thousand agents to search all the sites about Italy for recipes and then assemble those results into a database instantaneously. Or you might dispatch a thousand or more agents to watch for a job that matches your skills and goals across hundreds of thousands or millions of Web sites. They could watch and wait until jobs that matched your criteria appeared, and then negotiate amongst themselves to determine which of the possible jobs they found were good enough to show you. Another scenario might be commerce — you could dispatch agents to find you the best deal on a vacation package, and they could even negotiate an optimal itinerary and price for you. All you would have to do is choose between a few finalist vacation packages and make the payment. This could be a big timesaver.

The above examples illustrate how agents might help an individual, but how might they help a group or organization? Well, for one thing, agents could continuously organize and re-organize information for a group. They could also broker social interactions — for example, by connecting people to other people with matching needs or interests, or by helping people find experts who could answer their questions. One of the biggest obstacles to getting past the Collective IQ Barrier is simply that people cannot keep track of more than a few social relationships and information sources at any given time — but with an army of agents helping them, individuals might be able to cope with more relationships and data sources at once; the agents would act as their filters, deciding what to let through and how much priority to give it. Agents can also help to make recommendations, and to learn to facilitate and even automate various processes such as finding a time to meet, or polling to make a decision, or escalating an issue up or down the chain of command until it is resolved.

To be useful, intelligent agents will need access to domain expertise. But the agents themselves will not contain any knowledge or intelligence of their own. The knowledge will exist out on the Semantic Web, and so will the intelligence. Their intelligence, like their knowledge, will be externalized and virtualized in the form of axioms or rules that live out on the Web just like Web pages.

For example, a set of axioms about travel could be published to the Web in the form of a document that formally defined them. Any agent that needed to process travel-related content could reference these axioms in order to reason intelligently about travel, in the same way that it might reference an ontology about travel in order to interpret travel data structures. The application would not have to be specifically coded to know about travel — it could be a simple generic agent — but whenever it encountered travel-related content it could call up the axioms about travel from the location on the Web where they were hosted, and suddenly it could reason like an expert travel agent. What’s great about this is that simple generic agents would be able to call up domain expertise on an as-needed basis for just about any domain they might encounter. Intelligence (the heuristics, algorithms and axioms that comprise expertise) would be as accessible as knowledge (the data and connections between ideas and information on the Web).
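As a sketch of how that might look in code: everything here is hypothetical — the registry, the "travel" domain key, and the rule are in-memory stand-ins for axiom sets that, in the scenario above, would really be fetched from the Web.

```python
# Hypothetical sketch of a generic agent that holds no domain
# expertise itself, but looks up a rule-set by domain when needed.

AXIOM_REGISTRY = {
    # In the article's scenario this mapping would live out on the
    # Web; here it is just an illustrative in-memory stand-in.
    "travel": [
        lambda item: ("book-early", item)
                     if item.get("season") == "summer" else None,
    ],
}

def generic_agent(item, domain):
    """Fetch the axioms for a domain and apply them to one item."""
    advice = []
    for rule in AXIOM_REGISTRY.get(domain, []):
        result = rule(item)
        if result is not None:
            advice.append(result)
    return advice

print(generic_agent({"season": "summer"}, "travel"))
# -> [('book-early', {'season': 'summer'})]
```

The agent's code never mentions travel; point the same function at a different domain key and it would "reason" with whatever axioms it finds there.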

The axioms themselves would be created by human experts in various domains, and in some cases they might even be created or modified by agents as they learned from experience. These axioms might be provided for free as a public service, or as fee-based web services via APIs that only paying agents could access.

The key is that this model is extremely scalable — millions or billions of axioms could be created, maintained, hosted, accessed, and evolved in a totally decentralized and parallel manner by thousands or even hundreds of thousands of experts all around the Web. Instead of a few monolithic expert systems, the Web as a whole would become a giant distributed system of experts. There might be varying degrees of quality among competing axiom-sets available for any particular domain, and perhaps a ratings system could help to filter them over time. Perhaps a sort of natural selection of axioms might take place as humans and applications rated the end results of reasoning with particular sets of axioms, and then fed these ratings back to the sources of this expertise, causing them to get more or less attention from other agents in the future. This process would be quite similar to the intellectual natural selection at work in fields of study where peer review and competition help to filter and rank ideas and their proponents.

Virtualizing Intelligence

What I have been describing is the virtualization of intelligence — making intelligence and expertise something that can be “published” to the Web and shared just like knowledge, just like an ontology, a document, a database, or a Web page. This is one of the long-term goals of the Semantic Web, and it is already starting now via new languages, such as SWRL, that are being proposed for defining and publishing axioms or rules to the Web. For example, “a non-biological parent of a person is their step-parent” is a simple axiom. Another axiom might be, “a child of a sibling of your parent is your cousin.” Using such axioms, an agent could make inferences and do simple reasoning about social relationships, for example.
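The cousin axiom, for instance, can be captured in a few lines of ordinary code applied to (subject, relation, object) facts. This is only an illustration of the logic, not actual SWRL, and the family names are made up.

```python
# The "cousin" axiom from the text as executable logic over
# (subject, relation, object) facts. All names are illustrative.

facts = {
    ("Ann", "parent-of", "Bob"),
    ("Ann", "parent-of", "Dana"),   # Bob and Dana are siblings
    ("Bob", "parent-of", "Kim"),
    ("Dana", "parent-of", "Lee"),
}

def siblings(facts):
    # Two distinct people who share a parent are siblings.
    return {(a, b) for (g1, r1, a) in facts for (g2, r2, b) in facts
            if r1 == r2 == "parent-of" and g1 == g2 and a != b}

def cousins(facts):
    # "A child of a sibling of your parent is your cousin."
    sibs = siblings(facts)
    return {(x, y)
            for (p, r1, x) in facts if r1 == "parent-of"
            for (p2, q) in sibs if p2 == p
            for (q2, r2, y) in facts
            if r2 == "parent-of" and q2 == q}

assert ("Kim", "Lee") in cousins(facts)
```

A rules language like SWRL aims to express exactly this kind of inference declaratively, so that any reasoner on the Web can apply it without the logic being hard-coded into an application.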

SWRL and other proposed rules languages provide potential open standards for defining rules and publishing them to the Web so that other applications can use them. By combining these rules with rich semantic data, applications can start to do intelligent things without actually containing any of the intelligence themselves. The intelligence — the rules and data — can live “out there” on the Web, outside the code of various applications.

All the applications have to know how to do is find relevant rules, interpret them, and apply them. Even the reasoning that may be necessary can be virtualized into remotely accessible Web services, so applications don’t even have to do that part themselves (although many may simply include open-source reasoners, in the same way that they include open-source databases or search engines today).

In other words, just as HTML enables any app to process and format any document on the Web, SWRL + RDF/OWL may someday enable any application to reason about what the document discusses. Reasoning is the last frontier. By virtualizing reasoning — the axioms that experts use to reason about domains — we can really begin to store the building blocks of human intelligence and expertise on the Web in a universally accessible format. This to me is when the actual “Intelligent Web” (what I call Web 4.0) will emerge.

The value of this for groups and organizations is that they can start to distill their intelligence from the individuals who comprise them into a more permanent and openly accessible form — axioms that live on the Web and can be accessed by everyone. For example, a technical support team for a product learns many facts and procedures related to their product over time. Currently this learning is stored as knowledge in some kind of tech support knowledgebase. But the expertise for how to find and apply this knowledge still resides mainly in the brains of the people who comprise the team itself.

The Semantic Web provides ways to enrich the knowledgebase as well as to start representing and saving the expertise that the people themselves hold in their heads, in the form of sets of axioms and procedures. By storing not just the knowledge but also the expertise about the product, the humans on the team don’t have to work as hard to solve problems — agents can actually start to reason about problems and suggest solutions based on past learning embodied in the common set of axioms. Of course this is easier said than done — but the technology at least exists in nascent form today. In a decade or more it will start to be practical to apply it.

Group Minds

Someday in the not-too-distant future, groups will be able to leverage hundreds or thousands of simple intelligent agents. These agents will work for them 24/7 to scour the Web, the desktop, the enterprise, and other services and social networks they are related to. They will help both the individuals and the collectives as a whole. They will be our virtual digital assistants, always alert and looking for things that matter to us, finding patterns, learning on our behalf, reasoning intelligently, organizing our information, and then filtering it, visualizing it, summarizing it, and making recommendations to us so that we can see the Big Picture, drill in wherever we wish, and make decisions more productively.

Essentially these agents will give groups something like their own brains. Today the only brains in a group reside in the skulls of the people themselves. But in the future perhaps we will see these technologies enable groups to evolve their own meta-level intelligences: systems of agents reasoning on group expertise and knowledge.

This will be a fundamental leap to a new order of collective intelligence. For the first time, groups will literally have minds of their own, minds that transcend the mere sum of the individual human minds that comprise them. I call these systems “Group Minds” and I think they are definitely coming. In fact, there has been quite a bit of research on the subject of facilitating group collaboration with agents, for example in government agencies such as DARPA and in the military, where finding ways to help groups think more intelligently is often a matter of life and death.

The big win from a future in which individuals and groups can leverage large communities of intelligent agents is that they will be better able to keep up with the explosive growth of information complexity and social complexity. As the saying goes, “it takes a village.” There is just too much information, and too many relationships, changing too fast, and this is only going to get more intense in years to come. The only way to cope with such a distributed problem is a distributed solution.

Perhaps by 2030 it will not be uncommon for individuals and groups to maintain large numbers of virtual assistants — agents that will help them keep abreast of the massively distributed, always growing and shifting information and social landscapes. When you really think about it, how else could we ever solve this? This is really the only practical long-term solution. But today it is still a bit of a pipe dream; we’re not there yet. The key, however, is that we are closer than we’ve ever been before.


The Semantic Web provides the key enabling technology for all of this to happen someday in the future. By enriching the content of the Web, it first paves the way to a generation of smarter applications and more productive individuals, groups and organizations.

The next major leap will be when we begin to virtualize reasoning in the form of axioms that become part of the Semantic Web. This will enable a new generation of applications that can reason across information and services. It will ultimately lead to intelligent agents that can assist individuals, groups, social networks, communities, organizations and marketplaces so that they can remain productive in the face of the astonishing information and social-network complexity in our future.

By adding more knowledge into our information, the Semantic Web makes it possible for applications (and people) to use information more productively. By adding more intelligence between people, information, and applications, the Semantic Web will also enable people and applications to become smarter. In the future, these more intelligent apps will facilitate higher levels of individual and collective cognition by functioning as virtual intelligent assistants for individuals and groups (as well as for online services).

Once we begin to virtualize not just knowledge (semantics) but also intelligence (axioms), we will start to build Group Minds — groups that have primitive minds of their own. When we reach this point we will finally enable organizations to break past the Collective IQ Barrier: organizations will start to become smarter than the sum of their parts. The intelligence of an organization will not just come from its people; it will also come from its applications. The number of intelligent applications in an organization may outnumber the people by 1000 to 1, effectively amplifying each individual’s intelligence as well as the collective intelligence of the group.

Because software agents work all the time, can self-replicate when necessary, and are extremely fast and precise, they are ideally suited to sifting in parallel through the millions or billions of data records on the Web, day in and day out. Humans, and even groups of humans, will never be able to do this as well. And that’s not what they should be doing! They are far too intelligent for that kind of work. Humans should be at the top of the pyramid, making the decisions, innovating, learning, and navigating.

When we finally reach this stage, where networks of humans and smart applications are able to work together intelligently for common goals, I believe we will witness a real change in the way organizations are structured. In Group Minds, hierarchy will not be as necessary — the maximum effective size of a human Group Mind will perhaps be in the thousands or even the millions, instead of around 50 people. As a result, the shape of organizations in the future will be extremely fluid, and most organizations will be flat or continually shifting networks. For more on this kind of organization, read about virtual teams and networking, such as these books (by friends of mine who taught me everything I know about network-organization paradigms).

I would also like to note that I am not proposing “strong AI” — a vision in which we someday make artificial intelligences that are as or more intelligent than individual humans. I don’t think intelligent agents will individually be very intelligent. It will only be in vast communities of agents that intelligence will start to emerge. Agents are analogous to the neurons in the human brain — they really aren’t very powerful on their own.

I’m also not proposing that Group Minds will be as or more intelligent than the individual humans in groups anytime soon. I don’t think that is likely in our lifetimes. The cognitive capabilities of an adult human are the product of millions of years of evolution. Even in the accelerated medium of the Web, where evolution can take place much faster in silico, it may still take decades or even centuries to evolve AI that rivals the human mind (and I doubt such AI will ever be truly conscious, which means that humans, with their inborn natural consciousness, may always play a special and exclusive role in the world to come, but that is the subject of a different essay). But even if they will not be as intelligent as individual humans, I do think that Group Minds, facilitated by masses of slightly intelligent agents and humans working in concert, can go a long way in helping individuals and groups become more productive.

It’s important to note that the future I am describing is not science fiction, but it also will not happen overnight. It will take at least several decades, if not longer. But with the seemingly exponential rate of innovation, we may make very large steps in this direction very soon. It is going to be an exciting lifetime for all of us.