What's After the Real-Time Web?

In typical Web-industry style we’re all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself; it’s a means to an end. So what will it enable, where is it headed, and what’s it going to look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream are going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention, not just their attention. It’s not just difficult to get people to notice something; it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending some energy or effort to do something. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web, aka The Stream, is really about messaging in essence. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave and PubSubHubbub, to broadcast-style messaging or multicasting via screencasts, conferencing, media streaming and events in virtual worlds. The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOSs. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly to various data sources and APIs in dozens of places, developers will just write to a single WebOS aggregator that acts as middleware between their apps and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast and complex and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools, that can spread around the Web as they work, will be able to keep up with the Web.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get less good at things like memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded into our surroundings, and into augmented views of our environments. This may leave our bodies in better health and shape, since we will be less sedentary — less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude of things to come.

Video: My Talk on the Evolution of the Global Brain at the Singularity Summit

If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.

(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).

Fast Company Interview — "Connective Intelligence"

In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

How about Web 3G?

I’m here at the BlogTalk conference in Cork, Ireland with a range of bloggers and technologists discussing the emerging social Web. Besides myself and Ian Davis and Paul Miller from Talis, there are also a number of other Semantic Web folks here, including Dan Brickley and a group from DERI Galway.

Over dinner a few of us were discussing the terms “Semantic Web” versus “Web 3.0” and we all felt a better term was needed. After some thinking, Ian Davis suggested “Web 3G.” I like this term better than Web 3.0 because it loses the “version number” aspect that so many objected to. It has a familiar ring to it as well, reminding me of the 3G wireless phone initiative. It also suggests Tim Berners-Lee’s “Giant Global Graph” or GGG — a synonym for the Semantic Web. Ian stayed up late and put together a nice blog post about the term, echoing many of my own sentiments about how this term should apply to a decade (the third decade of the Web), rather than to a particular technology.

Enriching the Connections of the Web — Making the Web Smarter

Web 3.0 — aka The Semantic Web — is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.

I believe that collective intelligence primarily comes from connections — this is certainly the case in the brain, where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain’s connections than in the neurons alone. There are several kinds of connections on the Web:

  1. Connections between information (such as links)
  2. Connections between people (such as opt-in social relationships, buddy lists, etc.)
  3. Connections between applications (web services, mashups, client server sessions, etc.)
  4. Connections between information and people (personal data collections, blogs, social bookmarking, search results, etc.)
  5. Connections between information and applications (databases and data sets stored or accessible by particular apps)
  6. Connections between people and applications (user accounts, preferences, cookies, etc.)

Are there other kinds of connections that I haven’t listed? Please let me know!

I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.

In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of statements, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It’s a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
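To make this concrete, here is a minimal sketch of this data model in Python, using the open-source rdflib library (an illustrative choice on my part; the URIs and the provenance vocabulary are hypothetical). It asserts one simple triple, then uses RDF reification to attach provenance and a timestamp to the statement itself, approximating the richer tuples described above:

```python
# A minimal sketch, assuming the rdflib library; all example URIs are
# hypothetical and for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

# A plain (subject, predicate, object) triple: "Sue knows Bob."
g.add((EX.Sue, FOAF.knows, EX.Bob))

# RDF reification: describe the statement itself, so we can attach
# provenance and a timestamp to it (a simple way to get "tuples").
stmt = EX.statement1
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.Sue))
g.add((stmt, RDF.predicate, FOAF.knows))
g.add((stmt, RDF.object, EX.Bob))
g.add((stmt, EX.source, URIRef("http://example.org/some-crawler")))
g.add((stmt, EX.timestamp, Literal("2009-10-01")))

print(g.serialize(format="turtle"))
```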

The important point for this article, however, is that in this data model, rather than there being just a single type of connection (as is the case on the present Web, which basically just provides the HREF hotlink — a link that simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.

For example, one might define a type of connection called "friend of" or a type of connection called "employee of" — these have very different meanings (different semantics), which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of," which is defined in some ontology and might, for example, specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
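Here is a rough sketch of what that might look like in practice, with the ontology written in Turtle and loaded via Python's rdflib (again my own illustrative example; the URIs are hypothetical and the ontology is deliberately tiny):

```python
# A hedged sketch, not a real published ontology: the "employee of"
# link type is declared with explicit, machine-readable semantics.
from rdflib import Graph

ontology = """
@prefix ex:   <http://example.org/ontology#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The link type itself, with a defined domain and range:
ex:employeeOf a owl:ObjectProperty ;
    rdfs:domain ex:Person ;
    rdfs:range  ex:Employer ;    # a person or organization
    rdfs:label  "employee of" .

# A specific, unambiguous connection between two resources:
ex:Alice ex:employeeOf ex:BobCorp .
"""

g = Graph()
g.parse(data=ontology, format="turtle")

# Any OWL-aware application can now look up what this link means.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```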

This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It’s a new place to put meaning in fact — you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole — the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).

Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood — it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.

It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.

Now where will all these rich semantic connections come from? That’s the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people — for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" — far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.

These are subtle points that are very hard for non-specialists to see — without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!

Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I’m saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.

Intelligence is in the Connections

Google’s Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry’s idea is that intelligence is a function of massive computation, not of “fancy whiteboard algorithms.” In other words, in his conception the brain doesn’t do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively “dumb” but from the combined power of all of them working together “intelligent” behaviors emerge.

Larry’s view is, in my opinion, an oversimplification that will not lead to actual AI. It’s certainly correct that some activities that we call “intelligent” can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible — they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today — which is still a long way short of true AI!

Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don’t think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software — the higher level cognitive algorithms and heuristics that the brain “runs” — also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).

Larry’s view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It’s a highly sophisticated system comprised of simple parts — and actually, the jury is still out on exactly how simple the parts really are — much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much, much more complex system than we think.

Perhaps the Web as a whole is the closest analogue we have today for the brain — with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We’re not talking about a few hundred thousand Linux boxes — we’re talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.

One reader who commented on Larry’s talk made an excellent point on what this missing piece may be: “Intelligence is in the connections, not the bits.” The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that, for example, Roger Penrose and others have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations and nonlocality. In other words, the connections in the brain may exist on a quantum level, not just a neurological level.

Whether quantum computation is the key or not still remains to be determined. But regardless, essentially, Larry’s approach is equivalent to just aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions for example that if all knowledge exists on the Web you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale. All answers exist on the Web, and the task is just to match questions to the right answers. But wait: is that all that intelligence does? Is Larry’s view too much of an oversimplification? Intelligence is not just about learning and recall, it’s also about reasoning and creativity. Reasoning is not just search. It’s unclear how Larry’s approach would address that.

In my own opinion, for global-scale AI to really emerge the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections — rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it’s only the first step. More will need to happen on top of it for this vision to really materialize.

My view is more of an “agent metaphor” for intelligence — perhaps it is similar to Marvin Minsky’s Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds, for example, we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes — they are more like agents — they are actually each somewhat “intelligent” on their own, they seem to be somewhat autonomous, and they interact in intelligent, almost social ways.

Ideas seem to be actors, not just passive data points — they are competing for resources and survival in a complex ecology that exists both within our individual minds and between them in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or “I” that controls the process (that is just another agent in the community in fact, perhaps one with a kind of reporting and selection role).

I’m not sure the complex social dynamics of these communities of intelligence can really be modeled by a search engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels, from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections — computation flows back down from the top, it doesn’t only flow up from the bottom. Any computing system that doesn’t include this kind of feedback in its basic architecture will not be able to implement true AI.

In short, Google is not the right architecture to truly build a global brain on. But it could be a useful tool for search and questions-and-answers in the future, if they can somehow keep up with the growth and complexity of the Web.

Must-Know Terms for the 21st Century Intellectual

Read this fun article that lists and defines some of the key concepts that every post-singularity transhumanist meta-intellectual should know! (via Kurzweil)

Minding The Planet — The Meaning and Future of the Semantic Web

NOTES

Prelude

Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, “Minding the Planet” about how the Internet would enable the evolution of higher forms of collective intelligence.

My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, “One thing is certain: Someday, you will write this book.” We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.

A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.

But ever since that day on the porch with my grandfather, I remembered what he said: “Someday, you will write this book.” I’ve tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I’ve continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it’s the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.

This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?

I’ve often joked that it is ironic that a term that contains the word “semantic” has such an ambiguous meaning for most people. Most people just have no idea what this means, they have no context for it, it is not connected to their experience and knowledge. This is a problem that people who are deeply immersed in the trenches of the Semantic Web have not been able to solve adequately — they have not found the words to communicate what they can clearly see, what they are working on, and why it matters for everyone. In this article I have tried, and hopefully succeeded, in providing a detailed introduction and context for the Semantic Web for non-technical people. But even technical people working in the field may find something of interest here, as I piece together the fragments into a Big Picture and a vision for what might be called “Semantic Web 2.0.”

I hope the reader will bear with me as I bounce across different scales of technology and time, and from the extremes of core technology to wild speculation, in order to tell this story. If you are looking for the cold hard science of it all, this article will provide an understanding but will not satisfy your need for seeing the actual code; there are other places where you can find that level of detail and rigor. But if you want to understand what it all really means and what the opportunity and future look like — this may be what you are looking for.

I should also note that all of this is my personal view of what I’ve been working on, and what it really means to me. It is not necessarily the official view of the mainstream academic Semantic Web community — although there are certainly many places where we all agree. But I’m sure that some readers will disagree or raise objections to some of my assertions, and certainly to my many far-flung speculations about the future. I welcome those different perspectives; we’re all trying to make sense of this and the more of us who do that together, the more we can collectively start to really understand it. So please feel free to write your own vision or response, and please let me know so I can link to it!

So with this Prelude in mind, let’s get started…

The Semantic Web Vision

The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web – a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent – not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today.

The dream of making software more intelligent has been around almost as long as software itself. And although it is taking longer to materialize than past experts had predicted, progress towards this goal is being steadily made. At the same time, the shape of this dream is changing. It is becoming more realistic and pragmatic. The original dream of artificial intelligence was that we would all have personal robot assistants doing all the work we don’t want to do for us. That is not the dream of the Semantic Web. Instead, today’s Semantic Web is about facilitating what humans do – it is about helping humans do things more intelligently. It’s not a vision in which humans do nothing and software does everything.

The Semantic Web vision is not just about helping software become smarter – it is about providing new technologies that enable people, groups, organizations and communities to be smarter.

For example, by providing individuals with tools that learn about what they know, and what they want, search can be much more accurate and productive.

Using software that is able to understand and automatically organize large collections of knowledge, groups, organizations and communities can reach higher levels of collective intelligence, and they can cope with volumes of information that are just too great for individuals or even groups to comprehend on their own.

Another example: more efficient marketplaces can be enabled by software that learns about products, services, vendors, transactions and market trends and understands how to connect them together in optimal ways.

In short, the Semantic Web aims to make software smarter, not just for its own sake, but in order to help make people, and groups of people, smarter. In the original Semantic Web vision this fact was under-emphasized, leading to the impression that the Semantic Web was only about automating the world. In fact, it is really about facilitating the world.

The Semantic Web Opportunity

The Semantic Web is one of the most significant things to happen since the Web itself. But it will not appear overnight. It will take decades. It will grow in a bottom-up, grassroots, emergent, community-driven manner just like the Web itself. Many things have to converge for this trend to really take off.

The core open standards already exist, but the necessary development tools have to mature, the ontologies that define human knowledge have to come into being and mature, and most importantly we need a few real “killer apps” to prove the value and drive adoption of the Semantic Web paradigm. The first generation of the Web had its Mozilla, Netscape, Internet Explorer, and Apache – and it also had HTML, HTTP, a bunch of good development tools, and a few killer apps and services such as Yahoo! and thousands of popular Web sites. The same things are necessary for the Semantic Web to take off.

And this is where we are today – this is all just about to start emerging. There are several companies racing to get this technology, or applications of it, to market in various forms. Within a year or two you will see mass-consumer Semantic Web products and services hit the market, and within 5 years there will be at least a few “killer apps” of the Semantic Web. Ten years from now the Semantic Web will have spread into many of the most popular sites and applications on the Web. Within 20 years all content and applications on the Internet will be integrated with the Semantic Web. This is a sea-change. A big evolutionary step for the Web.

The Semantic Web is an opportunity to redefine, or perhaps to better define, all the content and applications on the Web. That’s a big opportunity. And within it there are many business opportunities and a lot of money to be made. It’s not unlike the opportunity of the first generation of the Web. There are platform opportunities, content opportunities, commerce opportunities, search opportunities, community and social networking opportunities, and collaboration opportunities in this space. There is room for a lot of players to compete and at this point the field is wide open.

The Semantic Web is a blue ocean waiting to be explored. And like any unexplored ocean it also has its share of reefs, pirate islands, hidden treasure, shoals, whirlpools, sea monsters and typhoons. But there are new worlds out there to be discovered, and they exert an irresistible pull on the imagination. This is an exciting frontier – and also one fraught with hard technical and social challenges that have yet to be solved. For early ventures in the Semantic Web arena, it’s not going to be easy, but the intellectual and technological challenges, and the potential financial rewards, glory, and benefit to society, are worth the effort and risk. And this is what all great technological revolutions are made of.

Semantic Web 2.0

Some people who have heard the term “Semantic Web” thrown around too much may think it is a buzzword, and they are right. But it is not just a buzzword – it actually has some substance behind it. That substance hasn’t emerged yet, but it will. Early critiques of the Semantic Web were right – the early vision did not leverage concepts such as folksonomy and user-contributed content at all. But that is largely because when the Semantic Web was originally conceived of, Web 2.0 hadn’t happened yet. The early experiments that came out of research labs were geeky, to put it lightly, and impractical, but they are already being followed up by more pragmatic, user-friendly approaches.

Today’s Semantic Web – what we might call “Semantic Web 2.0” – is a kinder, gentler, more social Semantic Web. It combines the best of the original vision with what we have all learned about social software and community in the last 10 years. Although much of this is still in the lab, it is already starting to trickle out. For example, recently Yahoo! started a pilot of the Semantic Web behind their food vertical. Other organizations are experimenting with using Semantic Web technology in parts of their applications, or to store or map data. But that’s just the beginning.

The Google Factor

Entrepreneurs, venture capitalists and technologists are increasingly starting to see these opportunities. Who will be the “Google of the Semantic Web?” – will it be Google itself? That’s doubtful. Like any entrenched incumbent, Google is heavily tied to a particular technology and worldview. And in Google’s case it is anything but semantic today. It would be easier for an upstart to take this position than for Google to port their entire infrastructure and worldview to a Semantic Web way of thinking.

If it is going to be Google it will most likely be by acquisition rather than by internal origination. And this makes more sense anyway – for Google is in a position where they can just wait and buy the winner, at almost any price, rather than competing in the playing field. One thing to note however is that Google has at least one product offering that shows some potential for becoming a key part of the Semantic Web. I am speaking of Google Base, Google’s open database which is meant to be a registry for structured data so that it can be found in Google search. But Google Base does not conform to or make use of the many open standards of the Semantic Web community. That may or may not be a good thing, depending on your perspective.

Of course the downside of Google waiting to join the mainstream Semantic Web community until after the winner is announced is very large – once there is a winner it may be too late for Google to beat them. The winner of the Semantic Web race could very well unseat Google. The strategists at Google are probably not yet aware of this, but as soon as they see significant traction around a major Semantic Web play it will become of interest to them.

In any case, I think there won’t be just one winner; there will be several major Semantic Web companies in the future, focusing on different parts of the opportunity. And you can be sure that if Google gets into the game, every major portal will need to get into this space at some point or risk becoming irrelevant. There will be demand and many acquisitions. In many ways the Semantic Web will not be controlled by just one company — it will be more like a fabric that connects them all together.

Context is King — The Nature of Knowledge

It should be clear by now that the Semantic Web is all about enabling software (and people) to work with knowledge more intelligently. But what is knowledge? Knowledge is not just information. It is meaningful information – it is information plus context. For example, if I simply say the word “sem” to you, it is just raw information, it is not knowledge. It probably has no meaning to you other than a particular set of letters that you recognize and a sound you can pronounce, and the mere fact that this information was stated by me.

But if I tell you that “sem” is the Tibetan word for “mind,” then suddenly “sem means mind in Tibetan” to you. If I further tell you that Tibetans have about as many words for “mind” as Eskimos have for “snow,” this is further meaning. This is context, in other words, knowledge, about the sound “sem.” The sound is raw information. When it is given context it becomes a word, a word that has meaning, a word that is connected to concepts in your mind – it becomes knowledge. By connecting raw information to context, knowledge is formed.

Once you have acquired a piece of knowledge such as “sem means mind in Tibetan,” you may then also form further knowledge about it. For example, you may form the memory, “Nova said that ‘sem means mind in Tibetan.’” You might also connect the word “sem” to networks of further concepts you have about Tibet and your understanding of what the word “mind” means.

The mind is the organ of meaning – mind is where meaning is stored, interpreted and created. Meaning is not “out there” in the world, it is purely subjective, it is purely mental. Meaning is almost equivalent to mind in fact. For the two never occur separately. Each of our individual minds has some way of internally representing meaning — when we read or hear a word that we know, our minds connect that to a network of concepts about it and at that moment it means something to us.

Digging deeper, if you are really curious, or if you happen to know Greek, you may also find that a similar sound occurs in the Greek word sēmantikós – which means “having meaning” and in turn is the root of the English word “semantic,” which means “pertaining to or arising from meaning.” That’s an odd coincidence! “Sem” occurs in the Tibetan word for mind, and in the English and Greek words that relate to the concepts of “meaning” and “mind.” Even stranger is that not only do these words have a similar sound, they have a similar meaning.

With all this knowledge at your disposal, when you then see the term “Semantic Web” you may be able to infer that it has something to do with adding “meaning” to the Web. However, if you were a Tibetan, perhaps you might instead think the term had something to do with adding “mind” to the Web. In either case you would be right!

Discovering New Connections

We’ve discovered a new connection — namely that there is an implicit connection between “sem” in Greek, English and Tibetan: they all relate to meaning and mind. It’s not a direct, explicit connection – it’s not evident unless you dig for it. But it’s a useful tidbit of knowledge once it’s found. Unlike the direct migration of the sound “sem” from Greek to English, there may not have ever been a direct transfer of this sound from Greek to Sanskrit to Tibetan. But in a strange and unexpected way, they are all connected. This connection wasn’t necessarily explicitly stated by anyone before, but was uncovered by exploring our network of concepts and making inferences.

The sequence of thought about “sem” above is quite similar to the kind of intellectual reasoning and discovery that the actual Semantic Web seeks to enable software to do automatically. How is this kind of reasoning and discovery enabled? The Semantic Web provides a set of technologies for formally defining the context of information. Just as the Web relies on a standard formal specification for “marking up” information with formatting codes that enable any applications that understand those codes to format the information in the same way, the Semantic Web relies on new standards for “marking up” information with statements about its context – its meaning – that enable any applications to understand, and reason about, the meaning of those statements in the same way.

By applying semantic reasoning agents to large collections of semantically enhanced content, all sorts of new connections may be inferred, leading to new knowledge, unexpected discoveries and useful additional context around content. This kind of reasoning and discovery is already taking place in fields from drug discovery and medical research, to homeland security and intelligence. The Semantic Web is not the only way to do this — but it certainly will improve the process dramatically. And of course, with this improvement will come new questions about how to assess and explain how various inferences were made, and how to protect privacy as our inferencing capabilities begin to extend across ever more sources of public and private data. I don’t have the answers to these questions, but others are working on them and I have confidence that solutions will be arrived at over time.

Smart Data

By marking up information with metadata that formally codifies its context, we can make the data itself “smarter.” The data becomes self-describing. When you get a piece of data you also get the necessary metadata for understanding it. For example, if I sent you a document containing the word “sem” in it, I could add markup around that word indicating that it is the word for “mind” in the Tibetan language.

Similarly, a document containing mentions of “Radar Networks” could contain metadata indicating that “Radar Networks” is an Internet company, not a product or a type of radar technology. A document about a person could contain semantic markup indicating that they are residents of a certain city, experts on Italian cooking, and members of a certain profession. All of this could be encoded as metadata in a form that software could easily understand. The data carries more information about its own meaning.
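To sketch what such self-describing data might look like (an illustration of mine, with made-up URIs and vocabulary, not an established standard), here are the examples above expressed as Turtle statements and loaded with Python's rdflib:

```python
# A minimal sketch of "smart data": the metadata travels with the data.
# The ex: vocabulary is hypothetical, used only for illustration.
from rdflib import Graph

doc = """
@prefix ex: <http://example.org/vocab#> .

<http://example.org/doc1> ex:mentions ex:RadarNetworks .
ex:RadarNetworks a ex:InternetCompany .   # not a radar technology

<http://example.org/doc2> ex:mentions ex:sem .
ex:sem a ex:Word ;
    ex:language "Tibetan" ;
    ex:meaning  "mind" .
"""

g = Graph()
g.parse(data=doc, format="turtle")
print(len(g), "self-describing statements loaded")
```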

The alternative to smart data would be for software to actually read and understand natural language as well as humans do. But that’s really hard. To correctly interpret raw natural language, software would have to be developed that knew as much as a human being. But think about how much teaching and learning is required to raise a human being to the point where they can read at an adult level. It is likely that similar training would be necessary to build software that could do that. So far that goal has not been achieved, although some attempts have been made. While decent progress in natural language understanding has been made, most software that can do this is limited to particular vertical domains, and it’s brittle — it doesn’t do a good job of making sense of terms and forms of speech that it wasn’t trained to parse.

Instead of trying to make software a million times smarter than it is today, it is much easier to just encode more metadata about what our information means. That turns out to be less work in the end. And there’s an added benefit to this approach — the meaning exists with the data and travels with it. It is independent of any one software program — all software can access it. And because the meaning of information is stored with the information itself, rather than in the software, the software doesn’t have to be enormous to be smart. It just has to know the basic language for interpreting the semantic metadata it finds on the information it works with.

Smart data enables relatively dumb software to be smarter with less work. That’s an immediate benefit. And in the long term, as software actually gets smarter, smart data will make it easier for it to start learning and exploring on its own. So it’s a win-win approach. Start by adding semantic metadata to data, and end up with smarter software.

Making Statements About the World

Metadata comes down to making statements about the world in a manner that machines, and perhaps even humans, can understand unambiguously. The same piece of metadata should be interpreted in the same way by different applications and readers.

There are many kinds of statements that can be made about information to provide it with context. For example, you can state a definition such as “person” means “a human being or a legal entity.” You can state an assertion such as “Sue is a human being.” You can state a rule such that “if x is a human being, then x is a person.”

From these statements it can then be inferred that “Sue is a person.” This inference is so obvious to you and me that it seems trivial, but most software today cannot do this. It doesn’t know what a person is, let alone what a name is. But if software could do this, then it could, for example, automatically organize documents by the people they are related to, or discover connections between people who were mentioned in a set of documents, or find documents about people who were related to particular topics, or give you a list of all the people mentioned in a set of documents, or all the documents related to a person.
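The Sue example is simple enough to sketch as a toy forward-chaining rule in a few lines of plain Python (purely illustrative; a real Semantic Web reasoner would work over RDF and OWL, but the logical step is the same):

```python
# A toy inference sketch in plain Python; names are illustrative.
facts = {("Sue", "is_a", "human being")}

# Rule: if x is a human being, then x is a person.
def apply_rule(facts):
    derived = set()
    for (x, relation, category) in facts:
        if relation == "is_a" and category == "human being":
            derived.add((x, "is_a", "person"))
    return derived

facts |= apply_rule(facts)
print(("Sue", "is_a", "person") in facts)  # True
```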

Of course this is a very basic example. But imagine if your software didn’t just know about people – it knew about most of the common concepts that occur in your life. Your software would then be able to help you work with your documents just about as intelligently as you are able to do by yourself, or perhaps even more intelligently, because you are just one person and you have limited time and energy but your software could work all the time, and in parallel, to help you.

Examples and Benefits

How could the existence of the Semantic Web and all the semantic metadata that defines it be really useful to everyone in the near term?

Well, for example, the problem of email spam would finally be cured: your software would be able to look at a message and know whether it was meaningful and/or relevant to you or not.

Similarly, you would never have to file anything by hand again. Your software could automate all filing and information organization tasks for you, because it would understand your information and your interests. It would be able to figure out when to file something in a single folder, multiple folders, or new ones. It would organize everything — documents, photos, contacts, bookmarks, notes, products, music, video, data records — and it would do it even better and more consistently than you could on your own. Your software wouldn’t just organize stuff, it would turn it into knowledge by connecting it to more context. It could do this not just for individuals, but for groups, organizations and entire communities.

Another example: search would be vastly better: you could search conversationally by typing in everyday natural language and you would get precisely what you asked for, or even what you needed but didn’t know how to ask for correctly, and nothing else. Your search engine could even ask you questions to help you narrow what you want. You would finally be able to converse with software in ordinary speech and it would understand you.

The process of discovery would be easier too. You could have a software agent that worked as your personal recommendation agent. It would constantly be looking in all the places you read or participate in for things that are relevant to your past, present and potential future interests and needs. It could then alert you in a contextually sensitive way, knowing how to reach you and how urgently to mark things. As you gave it feedback it could learn and do a better job over time.

Going even further with this, semantically-aware software – software that is aware of context, software that understands knowledge – isn’t just for helping you with your information; it can also help to enrich and facilitate, and even partially automate, your communication and commerce (when you want it to). So for example, your software could help you with your email. It would be able to recommend responses to messages for you, or automate the process. It would be able to enrich your messaging and discussions by automatically cross-linking what you are speaking about with related messages, discussions, documents, Web sites, subject categories, people, organizations, places, events, etc.

Shopping and marketplaces would also become better – you could search precisely for any kind of product, with any specific attributes, and find it anywhere on the Web, in any store. You could post classified ads and automatically get relevant matches according to your priorities, from all over the Web, or only from specific places and parties that match your criteria for who you trust. You could also easily invent a new custom data structure for posting classified ads for a new kind of product or service, and publish it to the Web in a format that other Web services and applications could immediately mine and index without necessarily having to integrate with your software or data schema directly.
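As a hedged sketch of what precise, attribute-level product search could look like with today's building blocks (rdflib plus SPARQL; the shop vocabulary and data are invented for illustration):

```python
# A sketch of attribute-level product search using SPARQL via rdflib.
# The ex: shop vocabulary and the items are hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/shop#> .
ex:item1 a ex:Laptop ; ex:priceUSD 899  ; ex:vendor ex:StoreA .
ex:item2 a ex:Laptop ; ex:priceUSD 1499 ; ex:vendor ex:StoreB .
""", format="turtle")

# "Find laptops under $1000, from any store that published its data."
results = g.query("""
    PREFIX ex: <http://example.org/shop#>
    SELECT ?item ?price WHERE {
        ?item a ex:Laptop ; ex:priceUSD ?price .
        FILTER(?price < 1000)
    }
""")
for row in results:
    print(row.item, row.price)
```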

You could publish an entire database to the Web, and other applications and services could immediately start to integrate your data with their data, without having to migrate your schema or their own. You could merge data from different data sources together to create new data sources without having to ever touch or look at an actual database schema.
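The schema-free merge described here is, at its core, just a set union of statements. A tiny sketch, again with rdflib and hypothetical URIs:

```python
# Two independently published datasets combine by simple graph union;
# no schema migration is needed. URIs are hypothetical.
from rdflib import Graph

g1 = Graph().parse(data="""
@prefix ex: <http://example.org/data#> .
ex:Sue ex:worksAt ex:RadarNetworks .
""", format="turtle")

g2 = Graph().parse(data="""
@prefix ex: <http://example.org/data#> .
ex:RadarNetworks ex:locatedIn ex:SanFrancisco .
""", format="turtle")

merged = g1 + g2    # set union of statements from both sources
print(len(merged))  # 2
```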

Bumps on the Road

The above examples illustrate the potential of the Semantic Web today, but the reality on the ground is that the technology is still in the early phases of evolution. Even for experienced software engineers and Web developers, it is difficult to apply in practice. The main obstacles are twofold:

(1) The Tools Problem:

There are very few commercial-grade tools for doing anything with the Semantic Web today. Most of the tools for building semantically-aware applications, or for adding semantics to information, are still in the research phase and were designed for expert computer scientists who specialize in knowledge representation, artificial intelligence, and machine learning.

These tools have a steep learning curve, and they don’t generally support large-scale applications – they were designed mainly to test theories and frameworks, not to actually apply them. But if the Semantic Web is ever going to become mainstream, it has to be made easier to apply – it has to be made more productive and accessible for ordinary software and content developers.

Fortunately, the tools problem is already on the verge of being solved. Companies such as my own venture, Radar Networks, are developing the next generation of tools for building Semantic Web applications and Semantic Web sites. These tools will hide most of the complexity, enabling ordinary mortals to build applications and content that leverage the power of semantics without needing PhDs in knowledge representation.

(2) The Ontology Problem:

The Semantic Web provides frameworks for defining systems of formally defined concepts called “ontologies,” that can then be used to connect information to context in an unambiguous way. Without ontologies, there really can be no semantics. The ontologies ARE the semantics; they define the meanings that are so essential for connecting information to context.

But there are still few widely used or standardized ontologies. And getting people to agree on common ontologies is not generally easy. Everyone has their own way of describing things, their own worldview, and let’s face it, nobody wants to use somebody else’s worldview instead of their own. Furthermore, the world is very complex, and to adequately describe all the knowledge that comprises what is thought of as “common sense” would require a very large ontology (and in fact, such an ontology exists – it’s called Cyc and it is so large and complex that only experts can really use it today).

Even describing the knowledge of just a single vertical domain, such as medicine, is extremely challenging. To make matters worse, the tools for authoring ontologies are still very hard to use – one has to understand the OWL language and wrestle with difficult, buggy authoring tools. Domain experts who are non-technical and not trained in formal reasoning or knowledge representation may find the process of designing ontologies with current tools frustrating. What is needed are commercial-quality tools for building ontologies that hide the underlying complexity so that people can just pour their knowledge into them as easily as they speak. That’s still a ways off, but not far off. Perhaps ten years at the most.

Of course, the difficulty of defining ontologies would be irrelevant if the necessary ontologies already existed. Perhaps experts could define them and then everyone else could just use them? There are numerous ontologies already in existence, both on the general level and for specific verticals. However, in my own opinion, having looked at many of them, I still haven’t found one that has the right balance between coverage of the concepts most applications need, and accessibility and ease-of-use for non-experts. That kind of balance is a requirement for any ontology to really go mainstream.

Furthermore, regarding the present crop of ontologies, what is still lacking is standardization. Ontologists have not agreed on which ontologies to use. As a result it’s anybody’s guess which ontology to use when writing a semantic application, and thus there is a high degree of ontology diversity today. Diversity is good, but too much diversity is chaos.

Applications that use different ontologies about the same things don’t automatically interoperate unless their ontologies have been integrated. This is similar to the problem of database integration in the enterprise. In order to interoperate, different applications that use different data schemas for records about the same things have to be mapped to each other somehow – either at the application level or the data level. This mapping can be direct or through some form of middleware.

Ontologies can be used as a form of semantic middleware, enabling applications to be mapped at the data level instead of the application level. Ontologies can also be used to map applications at the application level, by building ontologies of Web services and their capabilities. This is an area in which a lot of research is presently taking place.

The OWL language can express mappings between concepts in different ontologies. But if there are many ontologies, and many of them partially overlap, it is a non-trivial task to actually make the mappings between their concepts.

Even though concept A in ontology one and concept B in ontology two may have the same names, and even some of the same properties, in the context of the rest of the concepts in their respective ontologies they may imply very different meanings. So simply mapping them as equivalent on the basis of their names is not adequate; their connections to all the other concepts in their respective ontologies have to be considered as well. It quickly becomes complex. There are some potential ways to automate the construction of mappings between ontologies, but they are still experimental. Today, integrating ontologies requires the help of expert ontologists, and to be honest, I’m not sure even the experts have it figured out. It’s more of an art than a science at this point.
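As a toy illustration of why name-matching alone is not enough, consider a crude mapping heuristic in Python that also weighs shared properties. Everything here is invented for illustration; real ontology-alignment research uses far more sophisticated structural and statistical methods:

    # A deliberately naive concept-mapping score: same name is not enough,
    # so we also compare the properties attached to each concept.
    def mapping_score(name_a: str, props_a: set, name_b: str, props_b: set) -> float:
        """Blend of name equality and property overlap (Jaccard)."""
        name_match = 1.0 if name_a.lower() == name_b.lower() else 0.0
        overlap = len(props_a & props_b) / max(len(props_a | props_b), 1)
        return 0.5 * name_match + 0.5 * overlap

    # "Bank" in a finance ontology vs. "Bank" in a geography ontology:
    # identical names, but the surrounding structure gives the game away.
    score = mapping_score("Bank", {"interestRate", "accountHolder"},
                          "Bank", {"riverSide", "erosionRate"})
    print(score)  # 0.5 - a weak match despite the identical names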

Darwinian Selection of Ontologies

All that is needed for mainstream adoption to begin is for a large body of mainstream content to become semantically tagged and accessible. This will cause whatever ontology is behind that content to become popular.

When developers see that there is significant content and traction around a particular ontology, they will use that ontology for their own applications about similar concepts, or at least they will do the work of mapping their own ontology to it, and in this way the world will converge in a Darwinian fashion around a few main ontologies over time.

These main ontologies will then be worth the time and effort necessary to integrate them on a semantic level, resulting in a cohesive Semantic Web. We may in fact see Darwinian natural selection take place not just at the ontology level, but at the level of pieces of ontologies.

A certain ontology may do a good job of defining what a person is, while another may do a good job of defining what a company is. These definitions may be used for a lot of content, and gradually they will become common parts of an emergent meta-ontology comprised of the most popular pieces from thousands of ontologies. This could be great or it could be a total mess. Nobody knows yet. It’s a subject for further research.

Making Sense of Ontologies

Since ontologies are so important, it is helpful to understand what an ontology really is, and what it looks like. An ontology is a system of formally defined, related concepts. For example, the following set of statements is a simple ontology:

A human is a living thing.

A person is a human.

A person may have a first name.

A person may have a last name.

A person must have one and only one date of birth.

A person must have a gender.

A person may be socially related to another person.

A friendship is a kind of social relationship.

A romantic relationship is a kind of friendship.

A marriage is a kind of romantic relationship.

A person may be in a marriage with only one other person at a time.

A person may be employed by an employer.

An employer may be a person or an organization.

An organization is a group of people.

An organization may have a product or a service.

A company is a type of organization.

We’ve just built a simple ontology about a few concepts: humans, living things, persons, names, social relationships, marriages, employment, employers, organizations, groups, products and services. Within this system of concepts there is a particular logic, some constraints, and some structure. It may or may not correspond to your worldview, but it is a worldview that is unambiguously defined, can be communicated, and is internally logically consistent, and that is what is important.
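For readers who want to see what this looks like in machine-readable form, here is a minimal sketch of a few of the statements above expressed with Python’s rdflib library. The namespace URI and the class and property names are hypothetical choices, not a standard vocabulary:

    # A few of the ontology statements above, encoded as OWL triples with
    # rdflib. The http://example.org/ namespace is a hypothetical placeholder.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/ontology#")
    g = Graph()
    g.bind("ex", EX)

    # Classes: a human is a living thing; a person is a human; and so on.
    for cls in (EX.LivingThing, EX.Human, EX.Person, EX.Organization, EX.Company):
        g.add((cls, RDF.type, OWL.Class))
    g.add((EX.Human, RDFS.subClassOf, EX.LivingThing))
    g.add((EX.Person, RDFS.subClassOf, EX.Human))
    g.add((EX.Company, RDFS.subClassOf, EX.Organization))

    # Properties: names and dates are datatype properties; relationships
    # between people and organizations are object properties.
    for prop in (EX.firstName, EX.lastName, EX.dateOfBirth):
        g.add((prop, RDF.type, OWL.DatatypeProperty))
    for prop in (EX.sociallyRelatedTo, EX.marriedTo, EX.employedBy):
        g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((EX.marriedTo, RDFS.subPropertyOf, EX.sociallyRelatedTo))

    print(g.serialize(format="turtle"))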

The Semantic Web approach provides an open-standard language, OWL, for defining ontologies. OWL also provides a way to define instances of ontologies. Instances are assertions within the worldview that a given ontology provides. In other words, OWL provides a means to make statements that connect information to the ontology so that software can understand its meaning unambiguously. For example, below is a set of statements based on the above ontology:

There exists a person x.

Person x has a first name “Sue”.

Person x has a last name “Smith”.

Person x has a full name “Sue Smith”.

Sue Smith was born on June 1, 2005.

Sue Smith has a gender: female.

Sue Smith has a friend: Jane, who is another person.

Sue Smith is married to: Bob, another person.

Sue Smith is employed by Acme Inc., a company.

Acme Inc. has a product, Widget 2.0.

The set of statements above, plus the ontology they are connected to, collectively comprise a knowledge base that, if represented formally in the OWL markup language, could be understood by any application that speaks OWL in the precise manner in which it was intended to be understood.
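Continuing the earlier rdflib sketch (reusing the graph g and namespace EX from it, with identifiers that are again hypothetical), the instance statements might be encoded like this:

    # Instance data ("Sue Smith") expressed against the ontology sketched
    # earlier. Reuses g and EX from that sketch; identifiers are invented.
    from rdflib import Literal
    from rdflib.namespace import RDF, XSD

    sue, bob, acme = EX.SueSmith, EX.Bob, EX.AcmeInc

    g.add((sue, RDF.type, EX.Person))
    g.add((sue, EX.firstName, Literal("Sue")))
    g.add((sue, EX.lastName, Literal("Smith")))
    g.add((sue, EX.dateOfBirth, Literal("2005-06-01", datatype=XSD.date)))
    g.add((sue, EX.marriedTo, bob))

    g.add((acme, RDF.type, EX.Company))
    g.add((sue, EX.employedBy, acme))
    g.add((acme, EX.hasProduct, EX.Widget20))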

Making Metadata

The OWL language provides a way to mark up any information, such as a data record, an email message or a Web page, with metadata in the form of statements that link particular words or phrases to concepts in the ontology. When software applications that understand OWL encounter the information they can then reference the ontology and figure out exactly what the information means – or at least what the ontology says that it means.

But something has to add these semantic metadata statements to the information – and if it doesn’t add them, or adds the wrong ones, then software applications that look at the information will get the wrong idea. And this is another challenge – how will all this metadata get created and added to content? People certainly aren’t going to add it all by hand!

Fortunately there are many ways to make this easier. The best approach is to automate it using special software that goes through information, analyzes the meaning and adds semantic metadata automatically. This works today, but the software has to be trained or provided with rules, and that takes some time. It also doesn’t scale cost-effectively to vast data-sets.

Alternatively, individuals can be provided with ways to add semantics themselves as they author information. When you post your resume on a semantically-aware job board, you could fill out a form about each of your past jobs, and the job board would connect that data to appropriate semantic concepts in an underlying employment ontology. As an end-user you would just fill out a form like you are used to doing; under the hood the job board would add the semantics for you.

Another approach is to leverage communities to get the semantics. We already see communities that are adding basic metadata “tags” to photos, news articles and maps. Already a few simple types of tags are being used pseudo-semantically: subject tags and geographical tags. These are primitive forms of semantic metadata. Although they are not expressed in OWL or connected to formal ontologies, they are at least semantically typed with prefixes, or by being entered into fields or specific namespaces that define their types.

Tagging by Example

There may also be another solution to the problem of how to add semantics to content in the not-too-distant future. Once a suitable amount of content has been marked up with semantic metadata, it may be possible, through purely statistical forms of machine learning, for software to begin to learn how to do a pretty good job of marking up new content with semantic metadata.

For example, if the string “Nova Spivack” is often marked up with semantic metadata stating that it indicates a person – and not just any person but a specific person who is abstractly represented in a knowledge base somewhere – then when software applications encounter a new non-semantically-enhanced document containing strings such as “Nova Spivack” or “Spivack, Nova” they can make a reasonably good guess that this indicates that same specific person, and they can add the necessary semantic metadata to that effect automatically.

As more and more semantic metadata is added to the Web and made accessible, it constitutes a statistical training set that can be learned and generalized from. Although humans may need to jump-start the process with some manual semantic tagging, it might not be long before software could assist them and eventually do all the tagging for them. Only in special cases would software need to ask a human for assistance – for example when totally new terms or expressions were encountered for the first several times.

The technology for doing this learning already exists — and actually it’s not very different from how search engines like Google measure the community sentiment around web pages. Each time something is semantically tagged with a certain meaning, that constitutes a “vote” for it having that meaning. The meaning that gets the most votes wins. It’s an elegant, Darwinian, emergent approach to learning how to automatically tag the Web.
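Here is a toy sketch of that voting mechanism in Python; the data structure and function names are made up for illustration, not drawn from any real tagging service:

    # Toy vote-counting tagger: every community tagging event is a vote for
    # a meaning, and the most-voted meaning wins. All names are illustrative.
    from collections import Counter, defaultdict

    votes: dict = defaultdict(Counter)

    def record_tag(surface_text: str, concept_uri: str) -> None:
        """Count one tagging event as a vote for a meaning."""
        votes[surface_text][concept_uri] += 1

    def best_meaning(surface_text: str):
        """Return the most-voted concept for a string, or None."""
        tally = votes.get(surface_text)
        return tally.most_common(1)[0][0] if tally else None

    record_tag("Nova Spivack", "http://example.org/id/NovaSpivack")
    record_tag("Nova Spivack", "http://example.org/id/NovaSpivack")
    record_tag("Nova Spivack", "http://example.org/id/SomeOtherNova")
    print(best_meaning("Nova Spivack"))  # -> http://example.org/id/NovaSpivack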

One thing is certain: if communities were able to tag things with more types of tags, and these tags were connected to ontologies and knowledge bases, that would result in a lot of semantic metadata being added to content in a completely bottom-up, grassroots manner, and this in turn would enable the process to start to become automated, or at least machine-augmented.

Getting the Process Started

But making the user experience of semantic tagging easy (and immediately beneficial) enough that regular people will do it is a challenge that has yet to be solved. However, it will be solved shortly – it has to be, because solving it is what will jump-start the Semantic Web – and many companies and researchers know this and are working on it right now.

I believe that the Tools Problem – the lack of commercial-grade tools for building semantic applications – is essentially solved already (although the products have not hit the market yet; they will within a few years at most). The Ontology Problem is further from being solved. I think this problem will be solved through a few “killer apps” that result in the building up of a large amount of content around particular ontologies within particular online services.

Where might we see this content initially arising? In my opinion it will most likely be within vertical communities of interest, communities of practice, and communities of purpose. Within such communities there is a need to create a common body of knowledge and to make that knowledge more accessible, connected and useful.

The Semantic Web can really improve the quality of knowledge and user-experience within these domains. Because they are communities, not just static content services, these organizations are driven by user-contributed content — users play a key role in building content and tagging it. We already see this process starting to take place in communities such as Flickr, del.icio.us, the Wikipedia and Digg. We know that communities of people do tag content, and consume tagged content, if it is easy and beneficial enough for them to do so.

In the near future we may see miniature Semantic Webs arising around particular places, topics and subject areas, projects, and other organizations. Or perhaps, like almost every form of new media in recent times, we may see early adoption of the Semantic Web around online porn — what might be called “the sementic web.”

Whether you like it or not, it is a fact that pornography was one of the biggest drivers of early mainstream adoption of personal video technology, CD-ROMs, and also of the Internet and the Web.

But I think that probably is not necessary this time around. While I’m sure the so-called “sementic web” could benefit from the Semantic Web, it isn’t going to be the primary driver of adoption of the Semantic Web. That’s probably a good thing — the world can just skip over that phase of development and benefit from this technology with both hands, so to speak.

The World Wide Database

In some ways one could think of the Semantic Web as “the world wide database” – it does for the meaning of data records what the Web did for the formatting of documents. But that’s just the beginning. It actually turns documents into richer data records. It turns unstructured data into structured data. All data becomes structured data, in fact. And the structure is not merely defined structurally – it is defined semantically.

In other words, it’s not merely that a data record or document, for example, can be defined in such a way as to specify that it contains a certain field of data with a certain label at a certain location – it defines what that field of data actually means, in an unambiguous, machine-understandable way. If all you want is a Web of data, XML is good enough. But if you want to make that data interoperable and machine-understandable, then you need RDF and OWL – the Semantic Web.

Like any database, the Semantic Web – or rather the myriad mini-semantic-webs that will comprise it – has to overcome the challenge of data integration. Ontologies provide a better way to describe and map data, but the data still has to be described and mapped, and this does take some work. It’s not a magic bullet.

The Semantic Web makes it easier to integrate data, but it doesn’t remove the data integration problem altogether. I think the eventual solution to this problem will combine technological and community-driven, folksonomy-oriented approaches.

The Semantic Web in Historical Context

Let’s transition now and zoom out to see the bigger picture. The Semantic Web provides technologies for representing and sharing knowledge in new ways. In particular, it makes knowledge more accessible to software, and thus to other people. Another way of saying this is that it liberates knowledge from particular human minds and organizations – it provides a way to make knowledge explicit, in a standardized format that any application can understand. This is quite significant. Let’s put this in historical perspective.

Before the invention of the printing press, there were two ways to spread knowledge – one was orally, the other was in some symbolic form such as art or written manuscripts. The oral transmission of knowledge had limited range and a high error-rate, and the only way to learn something was to meet someone who knew it and get them to tell you. The other option, symbolic communication through art and writing, provided a means to communicate knowledge independently of particular people – but it was only feasible to produce a few copies of any given artwork or manuscript because they had to be copied by hand. So the transmission of knowledge was limited to small groups or at least small audiences. Basically, the only way to get access to this knowledge was to be one of the lucky few who could acquire one of its rare physical copies.

The invention of the printing press changed this – for the first time knowledge could be rapidly and cost-effectively mass-produced and mass-distributed. Printing made it possible to share knowledge with ever-larger audiences. This enabled a huge transformation for human knowledge, society, government, technology – really every area of human life was transformed by this innovation.

The World Wide Web made the replication and distribution of knowledge even easier – with the Web you don’t even have to physically print or distribute knowledge anymore; the cost of distribution is effectively zero, and everyone has instant access to everything from anywhere, anytime. That’s a lot better than having to lug around a stack of physical books. Everyone potentially has whatever knowledge they need, with no physical barriers. This has been another huge transformation for humanity – and it has affected every area of human life. Like the printing press, the Web fundamentally changed the economics of knowledge.

The Semantic Web is the next big step in this process – it will make all the knowledge of the human race accessible to software. For the first time, non-human things (software applications) will be able to start working with human knowledge to do things (for humans) on their own. This is a big leap – a leap like the emergence of a new species, or the symbiosis of two existing species into a new form of life.

The printing press and the Web changed the economics of replicating, distributing and accessing knowledge. The Semantic Web changes the economics of processing knowledge. Unlike the printing press and the Web, the Semantic Web enables knowledge to be processed by non-human things.

In other words, humans don’t have to do all the thinking on their own; they can be assisted by software. Of course, we humans have to first create the software (until we someday learn to create software that is smart enough to create software too), and we have to create the ontologies necessary for the software to actually understand anything (until we learn to create software that is smart enough to create ontologies too), and we have to add the semantic metadata to our content in various ways (until our software is smart enough to do this for us, which it almost is already). But once we do the initial work of making the ontologies and software, and adding semantic metadata, the system starts to pick up speed on its own, and over time the amount of work we humans have to do to make it all function decreases. Eventually, once the system has encoded enough knowledge and intelligence, it starts to function without needing much help, and when it does need our help, it will simply ask us and learn from our answers.

This may sound like science fiction today, but in fact a lot of this is already built and working in the lab. The big hurdle is figuring out how to get this technology to mass-market. That is probably as hard as inventing the technology in the first place. But I’m confident that someone will solve it eventually.

Once this happens, the economics of processing knowledge will truly be different than it is today. Instead of needing an actual real-live expert, the knowledge of that expert will be accessible to software that can act as their proxy – and anyone will be able to access this virtual expert, anywhere, anytime. It will be like the Web – but instead of just information being accessible, the combined knowledge and expertise of all of humanity will also be accessible, and not just to people but also to software applications.

The Question of Consciousness

The Semantic Web literally enables humans to share their knowledge with each other and with machines. It enables the virtualization of human knowledge and intelligence. In doing this, it will lend machines “minds” in a certain sense – namely in that they will at least be able to correctly interpret the meaning of information and replicate the expertise of experts.

But will these machine-minds be conscious? Will they be aware of the meanings they interpret, or will they just be automatons that simply follow instructions without any awareness of the meanings they are processing? I doubt that software will ever be conscious, because from what I can tell consciousness — or what might be called the sentient awareness of awareness itself, as well as of other things that are sensed — is an immaterial phenomenon that is as fundamental as space, time and energy — or perhaps even more fundamental. But this is just my personal opinion after having searched for consciousness through every means possible for decades. It just cannot be found to be a thing, yet it is definitely and undeniably taking place.

Consciousness can be exemplified through the analogy of space (but unlike space, consciousness has this property of being aware; it’s not a mere lifeless void). We all agree space is there, but nobody can actually point to it somewhere, and nobody can synthesize space. Space is immaterial and fundamental. It is primordial. So is electricity. Nobody really knows what electricity ultimately is, but if you build the right kind of circuit you can channel it, and we’ve learned a lot about how to do that.

Perhaps we may figure out how to channel consciousness the way we channel electricity, with some sort of synthetic device someday, but I think that is highly unlikely. I think if you really want to create consciousness it’s much easier and more effective to just have children. That’s something ordinary mortals can do today with the technology they were born with. Of course, when you have children you don’t really “create” their consciousness; it seems to be there on its own. We don’t really know what it is or where it comes from, or when it arises. We know very little about consciousness today. Considering that it is the most fundamental human experience of all, it is actually surprising how little we know about it!

In any case, until we truly delve far more deeply into the nature of the mind, consciousness will be barely understood or recognized, let alone explained or synthesized by anyone. In many eastern civilizations there are multi-thousand-year traditions that focus quite precisely on the nature of consciousness. The major religions have universally concluded that consciousness is beyond the reach of science, beyond the reach of concepts, beyond the mind entirely. All those smart people analyzing consciousness for so long, with such precision and so many methods of inquiry, may have a point worth listening to.

Whether or not machines will ever actually “know” or be capable of being conscious of that meaning or expertise is a big debate, but at least we can all agree that they will be able to interpret the meaning of information and rules if given the right instructions. Without having to be conscious, software will be able to process semantics quite well — this has already been proven. It’s working today.

While consciousness is, and may always be, a mystery that we cannot synthesize, the ability of software to follow instructions is an established fact. In its most reduced form, the Semantic Web just makes it possible to provide richer kinds of instructions. There’s no magic to it. Just a lot of details. In fact, to play on a famous line, “it’s semantics all the way down.”

The Semantic Web does not require that we make conscious software. It just provides a way to make slightly more intelligent software. There’s a big difference. Intelligence is, for the most part, simply a form of information processing. It does not require consciousness — the actual awareness of what is going on — which is something else altogether.

While highly intelligent software may need to sense its environment and its own internal state, and reason about these, it does not actually have to be conscious to do this. These operations are for the most part simple procedures applied vast numbers of times and in complex patterns. Nowhere in them is there any consciousness, nor does consciousness suddenly emerge when suitable levels of complexity are reached.

Consciousness is something quite special and mysterious. And fortunately for humans, it is not necessary for the creation of more intelligent software, nor is it a byproduct of the creation of more intelligent software, in my opinion.

The Intelligence of the Web

So the real point of the Semantic Web is that it enables the Web to become more intelligent. At first this may seem like a rather outlandish statement, but in fact the Web is already becoming intelligent, even without the Semantic Web.

Although the intelligence of the Web is not very evident at first glance, it can nonetheless be found if you look for it. This intelligence doesn’t exist across the entire Web yet; it only exists in islands that are few and far between compared to the vast amount of information on the Web as a whole. But these islands are growing, and more are appearing every year, and they are starting to connect together. And as this happens the collective intelligence of the Web is increasing.

Perhaps the premier example of an “island of intelligence” is the Wikipedia, but there are many others: the Open Directory, portals such as Yahoo and Google, vertical content providers such as CNET and WebMD, commerce communities such as Craigslist and Amazon, content-oriented communities such as LiveJournal, Slashdot, Flickr and Digg, the millions of discussion boards scattered around the Web, and social communities such as MySpace and Facebook. There are also large numbers of private islands of intelligence on the Web within enterprises — for example, the many online knowledge and collaboration portals that exist within businesses, non-profits, and governments.

What makes these islands “intelligent” is that they are places where people (and sometimes applications as well) are able to interact with each other to help grow and evolve collections of knowledge. When you look at them close-up they appear to be just like any other Web site, but when you look at what they are doing as a whole, these services are thinking. They are learning, self-organizing, sensing their environments, interpreting, reasoning, understanding, introspecting, and building knowledge. These are the activities of minds, of intelligent systems.

The intelligence of a system such as the Wikipedia exists on several levels – the individuals who author and edit it are intelligent, the groups that help to manage it are intelligent, and the community as a whole – which is constantly growing, changing, and learning – is intelligent.

Flickr and Digg also exhibit intelligence. Flickr’s growing system of tags is the beginning of something resembling a collective visual sense organ on the Web. Images are perceived, stored, interpreted, and connected to concepts and other images. This is what the human visual system does. Similarly, Digg is a community that collectively detects, focuses attention on, and interprets current news. It’s not unlike a primitive collective analogue to the human faculty for situational awareness.

There are many other examples of collective intelligence emerging on the Web. The Semantic Web will add one more form of intelligent actor to the mix – intelligent applications. In the future, after the Wikipedia is connected to the Semantic Web, it will be authored and edited not only by humans but also by smart applications that constantly look for new information, new connections, and new inferences to add to it.

Although the knowledge on the Web today is still mostly organized within different islands of intelligence, these islands are starting to reach out and connect together. They are forming trade routes, connecting their economies, and learning each other’s languages and cultures. The next step will be for these islands of knowledge to begin to share not just content and services, but also their knowledge — what they know about their content and services. The Semantic Web will make this possible, by providing an open format for the representation and exchange of knowledge and expertise.

When applications integrate their content using the Semantic Web, they will also be able to integrate their context, their knowledge – this will make the content much more useful and the integration much deeper. For example, when an application imports photos from another application it will also be able to import semantic metadata about the meaning and connections of those photos. Everything that the community and application know about the photos in the service that provides the content (the photos) can be shared with the service that receives the content. Better yet, there will be no need for custom application integration in order for this to happen: as long as both services conform to the open standards of the Semantic Web, the knowledge is instantly portable and reusable.

Freeing Intelligence from Silos

Today much of the real value of the Web (and in the world) is still locked away in the minds of individuals, the cultures of groups and organizations, and application-specific data silos. The emerging Semantic Web will begin to unlock the intelligence in these silos by making the knowledge and expertise they represent more accessible and understandable.

It will free knowledge and expertise from the narrow confines of individual minds, groups, organizations, and applications, and make them not only more interoperable but more portable. It will be possible, for example, for a person or an application to share everything they know about a subject of interest as easily as we share documents today. In essence the Semantic Web provides a common language (or at least a common set of languages) for sharing knowledge and intelligence as easily as we share content today.

The Semantic Web also provides standards for searching and reasoning more intelligently. The SPARQL query language enables any application to ask for knowledge from any other application that speaks SPARQL. Instead of mere keyword search, this enables semantic search. Applications can search for specific types of things that have particular attributes and relationships to other things.
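As a small illustration, here is what a semantic search might look like as a SPARQL query, run here with rdflib against the graph built in the earlier sketches (the ex: vocabulary remains a hypothetical one):

    # Semantic search with SPARQL: find every person and their employer,
    # but only where the employer is a company. Runs against the graph g
    # from the earlier sketches; the ex: namespace is hypothetical.
    results = g.query("""
        PREFIX ex: <http://example.org/ontology#>
        SELECT ?person ?employer WHERE {
            ?person a ex:Person ;
                    ex:employedBy ?employer .
            ?employer a ex:Company .
        }
    """)
    for row in results:
        print(row.person, "works for", row.employer)

Notice that the query matches types and relationships, not keywords: a page mentioning the word “company” would not match, but any resource typed as a company would.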

In addition, standards such as SWRL provide formalisms for representing and sharing axioms, or rules, as well. Rules are a particular kind of knowledge – and there is a lot of it to represent and share: for example, procedural knowledge, and logical structures about the world. An ontology provides a means to describe the basic entities, their attributes and relations, but rules enable you to also make logical assertions and inferences about them. Without going into a lot of detail about rules and how they work here, the important point to realize is that they are also included in the framework. All forms of knowledge can be represented by the Semantic Web.
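To give a feel for what a rule does, here is a hand-rolled sketch of a single forward-chaining inference step in Python, standing in for what a language like SWRL would let you state declaratively; the rule and the CompanyEmployee class are invented for illustration:

    # A toy rule: if ?p is employed by ?o and ?o is a company, then infer
    # that ?p is a "company employee". SWRL would express this declaratively;
    # here we just apply it by hand to the rdflib graph g from earlier.
    from rdflib.namespace import RDF

    def apply_employment_rule(graph) -> int:
        """Apply one forward-chaining step; return how many facts were inferred."""
        inferred = []
        for person, _, employer in graph.triples((None, EX.employedBy, None)):
            if (employer, RDF.type, EX.Company) in graph:
                inferred.append((person, RDF.type, EX.CompanyEmployee))
        for triple in inferred:
            graph.add(triple)
        return len(inferred)

    print(apply_employment_rule(g), "new facts inferred")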

Zooming Way, Waaaay Out

So far in this article, I’ve spent a lot of time talking about plumbing – the pipes, fluids, valves, fixtures, specifications and tools of the Semantic Web. I’ve also spent some time on illustrations of how it might be useful in the very near future to individuals, groups and organizations. But where is it heading after this? What is the long-term potential, and what might it mean for the human race on a historical time-scale?

For those of you who would prefer not to speculate, stop reading here. For the rest of you, I believe that the true significance of the Semantic Web, on a long-term timescale, is that it provides an infrastructure that will enable the evolution of increasingly sophisticated forms of collective intelligence. Ultimately this will result in the Web itself becoming more and more intelligent, until one day the entire human species, together with all of its software and knowledge, will function as something like a single worldwide distributed mind – a global mind.

Just like the mind of a single human individual, the global mind will be very chaotic, yet out of that chaos will emerge cohesive patterns of thought and decision. Just as in an individual human mind, there will be feedback between different levels of order – from individuals to groups to systems of groups, and back down from systems of groups to groups to individuals. Because of these feedback loops the system will adapt to its environment, and to its own internal state.

The coming global mind will collectively exhibit forms of cognition and behavior that are the signs of higher forms of intelligence. It will form and react to concepts about its “self” – just like an individual human mind. It will learn and introspect and explore the universe. The thoughts it thinks may sometimes be too big for any one person to understand or even recognize – they will be comprised of shifting patterns of millions of pieces of knowledge.

The Role of Humanity

Every person on the Internet will be a part of the global mind. And collectively they will function as its consciousness. I do not believe some new form of consciousness will suddenly emerge when the Web passes some threshold of complexity. I believe that humanity IS the consciousness of the Web, and until and unless we ever find a way to connect other lifeforms to the Web, or we build conscious machines, humans will be the only form of consciousness of the Web.

When I say that humans will function as the consciousness of the Web, I mean that we will be the things in the system that know. The knowledge of the Semantic Web is what is known, but what knows that knowledge has to be something other than knowledge. A thought is knowledge, but what knows that thought is not knowledge; it is consciousness, whatever that is. We can figure out how to enable machines to represent and use knowledge, but we don’t know how to make them conscious, and we don’t have to. Because we are already conscious.

As we’ve discussed earlier in this article, we don’t need conscious machines, we just need more intelligent machines. Intelligence – at least in its basic forms – does not require consciousness. It may be the case that the very highest forms of intelligence require, or are capable of, consciousness. This may mean that software will never achieve the highest levels of intelligence, and it probably guarantees that humans (and other conscious things) will always play a special role in the world – a role that no computer system will be able to compete with. We provide the consciousness to the system. There may be all sorts of other intelligent, non-conscious software applications and communities on the Web; in fact there already are, with varying degrees of intelligence. But individual humans, and groups of humans, will be the only consciousness on the Web.

The Collective Self

Although the software of the Semantic Web will not be conscious, we can say that the system as a whole contains or is conscious to the extent that human consciousnesses are part of it. And like most conscious entities, it may also start to be self-conscious.

If the Web ever becomes a global mind, as I am predicting, will it have a “self?” Will there be a part of the Web that functions as its central self-representation? Perhaps someone will build something like that someday, or perhaps it will evolve. Perhaps it will function by collecting reports from applications and people in real-time – a giant collective zeitgeist.

In the early days of the Web, portals such as Yahoo! provided this function — they were almost real-time maps of the Web and what was happening. Today making such a map is nearly impossible, but services such as Google Zeitgeist at least attempt to provide approximations of it. Perhaps through random sampling it can be done on a broader scale.

My guess is that the global mind will need a self-representation at some point. All forms of higher intelligence seem to have one. It’s necessary for understanding, learning and planning. It may evolve at first as a bunch of competing self-representations within particular services or subsystems within the collective. Eventually they will converge, or at least narrow down to just a few major perspectives. There may also be millions of minor perspectives that can be drilled down into for particular viewpoints from these top-level “portals.”

The collective self will function much like the individual self – as a mirror of sorts. Its function is simply to reflect. As soon as it exists, the entire system will make a shift to a greater form of intelligence – because for the first time it will be able to see itself, to measure itself, as a whole. It is at this phase transition, when the first truly global collective self-mirroring function evolves, that we can say the transition from a bunch of cooperating intelligent parts to a new intelligent whole in its own right has taken place.

I think that the collective self, even if it converges on a few major perspectives that group and summarize millions of minor perspectives, will be community-driven and highly decentralized. At least I hope so – because the self-concept is the most important part of any mind, and it should be designed in a way that protects it from being manipulated for nefarious ends. At least I hope that is how it will be designed.

Programming the Global Mind

On the other hand, there are times when a little bit of adjustment or guidance is warranted – just as in the case of an individual mind, the collective self doesn’t merely reflect; it effectively guides the interpretation of the past and present, and planning for the future.

One way to change the direction of the collective mind is to change what is appearing in the mirror of the collective self. This is a form of programming on a vast scale – when this programming is dishonest or used for negative purposes it is called “propaganda,” but there are cases where it can be done for beneficial purposes as well. Examples of this today are public service advertising and educational public television programming. All forms of mass-media today are in fact collective social programming. When you realize this, it is not surprising that our present culture is violent and messed up – just look at our mass-media!

In terms of the global mind, ideally one would hope that it would be able to learn and improve over time. One would hope that it would not have the collective equivalent of psycho-social disorders. To facilitate this, just like any form of higher intelligence, it may need to be taught, and even parented a bit. It also may need a form of therapy now and then. These functions could be provided by the people who participate in it. Again, I believe that humans serve a vital and irreplaceable role in this process.

How It All Might Unfold

Now how is this all going to unfold? I believe that there are a number of key evolutionary steps that the Semantic Web will go through as the Web evolves towards a true global mind:

1. Representing individual knowledge. The first step is to make individuals’ knowledge accessible to themselves. As individuals become inundated with increasing amounts of information, they will need better ways of managing it, keeping track of it, and re-using it. They will (or already do) need “personal knowledge management.”

2. Connecting individual knowledge. Next, once individual knowledge is represented, it becomes possible to start connecting it and sharing it across individuals. This stage could be called “interpersonal knowledge management.”

3. Representing group knowledge. Groups of individuals also need ways of collectively representing their knowledge, making sense of it, and growing it over time. Wikis and community portals are just the beginning. The Semantic Web will take these “group minds” to the next level — it will make the collective knowledge of groups far richer and more re-usable.

4. Connecting group knowledge. This step is analogous to connecting individual knowledge. Here, groups become able to connect their knowledge together to form larger collectives, and it becomes possible to more easily access and share knowledge between different groups in very different areas of interest.

5. Representing the knowledge of the entire Web. This stage — what might be called “the global mind” — is still in the distant future, but at this point we will begin to be able to view, search, and navigate the knowledge of the entire Web as a whole. The distinction here is that instead of a collection of interoperating but separate intelligent applications, individuals and groups, the entire Web itself will begin to function as one cohesive intelligent system. The crucial step that enables this to happen is the formation of a collective self-representation. This enables the system to see itself as a whole for the first time.

How It May Be Organized

I believe the global mind will be organized mainly in the form of bottom-up and lateral, distributed, emergent computation and community — but it will be facilitated by certain key top-down services that help to organize and make sense of it as a whole. I think this future Web will be highly distributed, but will have certain large services within it as well – much like the human brain itself, which is organized into functional sub-systems for processes like vision, hearing, language, planning, memory, learning, etc.

As the Web gets more complex there will come a day when nobody understands it anymore – after that point we will probably learn more about how the Web is organized by learning about the human mind and brain – they will be quite similar, in my opinion. Likewise, we will probably learn a tremendous amount about the functioning of the human brain and mind by observing how the Web functions, grows and evolves over time, because they really are quite similar, in at least an abstract sense.

The Internet and its software and content are like a brain, and the state of its software and content is like its mind. The people on the Internet are like its consciousness. Although these are just analogies, they are actually useful, at least in helping us to envision and understand this complex system. As the field of general systems theory has shown us in the past, systems at very different levels of scale tend to share the same basic characteristics and obey the same basic laws of behavior. Not only that, but evolution tends to converge on similar solutions for similar problems. So these analogies may be more than just rough approximations; they may in fact be quite accurate.

The future global brain will require tremendous computing and storage resources — far beyond even what Google provides today. Fortunately, as Moore’s Law advances, the cost of computing and storage will eventually be low enough to do this cost-effectively. However, even with much cheaper and more powerful computing resources it will still have to be a distributed system. I doubt that there will be any central node, quite simply because no central solution will be able to keep up with all the distributed change taking place. Highly distributed problems require distributed solutions, and that is probably what will eventually emerge on the future Web.

Someday perhaps it will be more like a peer-to-peer network, comprised of applications and people who function somewhat like the neurons in the human brain. Perhaps they will be connected and organized by higher-level super-peers or super-nodes which bring things together, make sense of what is going on, and coordinate mass collective activities. But even these higher-level services will probably have to be highly distributed as well. It really will be difficult to draw boundaries between parts of this system; they will all be connected as an integral whole.

In fact it may look very much like a grid computing architecture – in which all the services are dynamically distributed across all the nodes, such that at any one time any node might be working on a variety of tasks for different services. My guess is that because this is the simplest, most fault-tolerant, and most efficient way to do mass computation, it is probably what will evolve here on Earth.

The Ecology of Mind

Where we are today in this evolutionary process is perhaps equivalent to the rise of early forms of hominids. Perhaps Australopithecus or Cro-Magnon, or maybe the first Homo sapiens. Compared to early man, the global mind is like the rise of 21st-century mega-cities. A lot of evolution has to happen to get there. But it probably will happen, unless humanity self-destructs first, which I sincerely hope we somehow manage to avoid. And this brings me to a final point. This vision of the future global mind is highly technological; however, I don’t think we’ll ever accomplish it without a new focus on ecology.

For most people, ecology probably conjures up images of hippies and biologists, or maybe hippies who are biologists, or at least organic farmers, but in fact it is really the science of living systems and how they work. And any system that includes living things is a living system. This means that the Web is a living system, and the global mind will be a living system too. As a living system, the Web is an ecosystem and is also connected to other ecosystems. In short, ecology is absolutely essential to making sense of the Web, let alone helping to grow and evolve it.

In many ways the Semantic Web, and the collective minds and global mind that it enables, can be seen as an ecosystem of people, applications, information and knowledge. This ecosystem is very complex, much like natural ecosystems in the physical world. An ecosystem isn’t built; it’s grown, and evolved. And similarly the Semantic Web, and the coming global mind, will not really be built; they will be grown and evolved. The people and organizations that end up playing a leading role in this process will be the ones that understand and adapt to this ecology most effectively.

In my opinion ecology is going to be the most important science and discipline of the 21st century – it is the science of healthy systems. What nature teaches us about complex systems can be applied to every kind of system – and especially the systems we are evolving on the Web. In order to ever have a hope of evolving a global mind, and all the wonderful levels of species-level collective intelligence that it will enable, we have to avoid destroying the planet before we get there. Ecology is the science that can save us, not the Semantic Web (although perhaps, by improving collective intelligence, it can help).

Ecology is essentially the science of community – whether biological, technological or social. And community is a key part of the Semantic Web at every level: communities of software, communities of people, and communities of groups. In the end the global mind is the ultimate human community. It is the reward we get for finally learning how to live together in peace and in balance with our environment.

The Necessity of Sustainability

The point of this discussion of the relevance of ecology to the future of the Web, and my vision for the global mind, is that if the global mind ever emerges, it will not be in a world that is anything like what we might imagine. It won’t be like the Borg in Star Trek; it won’t be like living inside of a machine. Humans won’t be relegated to the roles of slaves or drones. Robots won’t be doing all the work. The entire world won’t be coated with silicon. We won’t all live in a virtual reality. It won’t be one of these technological dystopias.

In fact, I think the global mind can only come to pass in a much greener, more organic, healthier, more balanced and sustainable world. Because it will take a long time for the global mind to emerge, if humanity doesn’t figure out how to create that sort of a world, it will wipe itself out sooner or later – and certainly long before the global mind really happens. Not only that, but the global mind will be smart by definition, and hopefully this intelligence will extend to helping humanity manage its resources, civilizations and relationships to the natural environment.

The Smart Environment

The global mind also needs a global body, so to speak. It’s not going to be an isolated homunculus floating in a vat of liquid that replaces the physical world! It will be a smart environment that ubiquitously integrates with our physical world. We won’t have to sit in front of computers or deliberately log on to the network to interact with the global mind. It will be everywhere.

The global mind will be physically integrated into furniture, houses, vehicles, devices, artworks, and even the natural environment. It will sense the state of the world and different ecosystems in real-time and alert humans and applications to emerging threats. It will also be able to allocate resources intelligently to compensate for natural disasters, storms, and environmental damage – much in the way that the air traffic control system allocates and manages airplane traffic. It won’t do it all on its own; humans and organizations will be a key part of the process.

Someday the global mind may even be physically integrated into our bodies and brains, even down to the level of our DNA. It may in fact learn how to cure diseases and improve the design of the human body, extending our lives, sensory capabilities, and cognitive abilities. We may be able to interact with it by thought alone. At that point it will become indistinguishable from a limited form of omniscience, and everyone may have access to it. Although it will only extend to wherever humanity has a presence in the universe, within that boundary it will know everything there is to know, and everyone will be able to know any of it they are interested in.

Enabling a Better World

By enabling greater forms of collective intelligence to emerge we really are helping to make a better world, a world that learns and hopefully understands itself well enough to find a way to survive. We’re building something that someday will be wonderful – far greater than any of us can imagine. We’re helping to make the species and the whole planet more intelligent. We’re building the tools for the future of human community. And that future community, if it ever arrives, will be better, more self-aware, and more sustainable than the one we live in today.

I should also mention that knowledge is power, and power can be used for good or evil. The Semantic Web makes knowledge more accessible. This puts more power in the hands of the many, not just the few. As long as we stick to this vision — making knowledge open and accessible, using open standards, in as distributed a fashion as we can devise — the potential power of the Semantic Web will be protected against being co-opted or controlled by the few at the expense of the many. This is where technologists really have to be socially responsible when making development decisions. It’s important that we build a more open world, not a less open one. It’s important that we build a world where knowledge, integration and unification are balanced with respect for privacy, individuality, diversity and freedom of opinion.

But I am not particularly worried that the Semantic Web and the future global mind will be the ultimate evil – I don’t think it is likely that we will end up with a system of total control dominated by evil masterminds with powerful Semantic Web computer systems to do their dirty work. Statistically speaking, criminal empires don’t last very long, because they are run by criminals who tend to be very short-sighted and who also surround themselves with other criminals who eventually unseat them, or they self-destruct. It’s possible that the Semantic Web, like any other technology, may be used by the bad guys to spy on citizens, manipulate the world, and do evil things. But only in the short-term.

In the long-term, either our civilization will get tired of endless successions of criminal empires and realize that the only way to actually survive as a species is to invent a form of government that is immune to being taken over by evil people and organizations, or it will self-destruct. Either way, that is a hurdle we have to cross before the global mind that I envision can ever come about. Many civilizations came before ours, and it is likely that ours will not be the last one on this planet. It may in fact be the case that a different form of civilization is necessary for the global mind to emerge, and is the natural byproduct of the emergence of the global mind.

We know that the global mind cannot emerge anytime soon, and therefore, if it ever emerges, then by definition it must be in the context of a civilization that has learned to become sustainable. A long-term sustainable civilization is a non-evil civilization. And that is why I think it is a safe bet to be so optimistic about the long-term future of this trend.

The Sell-Off of America

This article by Germany’s best-known economics writer provides a fast and high-level overview of how the American empire is losing (has lost?) its economic power. While the dollar is still the world’s currency of choice, the USA no longer controls it. Furthermore, with increasing trade deficits, the outsourcing of labor, and spiraling debt, the US economy is poised on the edge of collapse. And the American middle-class has and will bear the brunt of this shift as it plays out over the next few decades.

Is There Room for The Soul? – Good Article on Cognitive Science

This is a surprisingly good article on the nature of consciousness — providing a survey of the current state-of-the-art in cognitive science research. It covers the question from a number of perspectives and interviews many of the leading current researchers.

Why Machines Will Never be Conscious

Below is the text of my bet on Long Bets. Go there to vote.

“By 2050 no synthetic computer nor machine intelligence will have become truly self-aware (ie. will become conscious).”

Spivack’s Argument:

(This summary includes my argument, a method for judging the outcome of this bet and some other thoughts on how to measure awareness…)

A. MY PERSPECTIVE…

Even if a computer passes the Turing Test it will not really be aware that it has passed the Turing Test. Even if a computer seems to be intelligent and can answer most questions as well as an intelligent, self-aware human being, it will not really have a continuum of awareness, it will not really be aware of what it seems to “think” or “know,” it will not have any experience of its own reality or being. It will be nothing more than a fancy inanimate object, a clever machine; it will not be a truly sentient being.

Self-awareness is not the same thing as merely answering questions intelligently. Therefore, even if you ask a computer if it is self-aware and it answers that it is self-aware and that it has passed the Turing Test, it will not really be self-aware or really know that it has passed the Turing Test.

As John Searle and others have pointed out, the Turing Test does not actually measure awareness, it just measures information processing—particularly the ability to follow rules or at least imitate a particular style of communication. In particular it measures the ability of a computer program to imitate humanlike dialogue, which is different than measuring awareness itself. Thus even if we succeed in creating good AI, we won’t necessarily succeed in creating AA (“Artificial Awareness”).

But why does this matter? Because ultimately, real awareness may be necessary to making an AI that is as intelligent as a human sentient being. However, since AA is theoretically impossible in my opinion, truly self-aware AI will never be created, and thus no AI will ever be as intelligent as a human sentient being, even if it manages to fool someone into thinking it is (thus passing the Turing Test).

In my opinion, awareness is not an information process at all and will never be simulated or synthesized by any information process. Awareness cannot be measured by an information processing system; it can only be measured by awareness itself—something no formal information processing system can ever simulate or synthesize.

One might ask how it is that a human has awareness then? My answer is that awareness does not arise from the body or the brain, nor does it arise from any physical cause. Awareness is not in the body or the brain; rather, the body and the brain are in awareness. The situation is analogous to a dream, a simulation or virtual reality, such as that portrayed in the popular film “The Matrix.”

We exist in the ultimate virtual reality. The medium of this virtual reality is awareness. That is to say that whatever appears to be happening “out there” or “within the mind” is happening within a unified, nondualistic field of awareness: both the “subject” and the “object” exist equally within this field and neither is the source of awareness.

This is similar to the case where we project ourselves as dream protagonists in our own dreams — even though our dream bodies appear to be different than other dream-images, they are really equally dream appearances, no more fundamental than dream-objects. We identify with our dream-bodies out of habit, and because it is practical: the stories that take place appear from the perspective of particular bodies. But just because this virtual reality is structured as if awareness is coming from within our heads, it does not mean that is actually the case. In fact, quite the opposite is taking place.

Awareness is not actually “in” the VR; the VR is “in” awareness. Things are exactly the opposite of how they appear. Of course this is just an analogy — for example, unlike the Matrix, the virtual reality we live in is not running on some giant computer somewhere and there is no other hidden force controlling it from behind the scenes. Awareness is the fabric of reality and there is nothing deeper, nothing creating it, it is not running on some cosmic computer, it comes out of nowhere yet everything else comes out of it.

If we look for awareness we can’t find anything to grasp; it is empty yet not a mere nothingness, it is an emptiness that is awake, creative, alert, radiant, self-realizing.

Awareness is empty and fundamental like space, but it goes beyond space for it is also lucid. If we look for space we don’t find anything there. Nobody has ever touched or grasped space directly! But unlike space, awareness can at least be measured directly — it can measure itself, it knows its own nature.

Awareness is simply fundamental, a given, the underlying meta-reality in which everything appears. How did it come to be? That is unanswerable. What is it? That is unanswerable as well. But there is no doubt that awareness is taking place. Each sentient being has a direct and intimate experience of their own self-awareness.

Each of us experiences a virtual reality in which we and our world are projections. That which both projects these projections and experiences them is awareness. This is like saying that the VR inherently knows its own content. But in my opinion this knowing comes from outside the system, not from some construct that we can create inside it. So any awareness that arises comes from the transcendental nature of reality itself, not from our bodies, minds, or any physical system within a particular reality.

So is there one cosmic awareness out there that we are all a part of? Not exactly; there is not one awareness nor are there many awarenesses, because awareness is not a physical thing and cannot be limited by such logical materialist extremes. After all, if it is not graspable, how can we say it is one or many or any other logical combination of one or many? All we can say is that we are it, whatever it is, and that we cannot explain it further. In being awareness, we are all equal, but we are clearly not the same. We are different projections, and on a relative level we are each unique, even though on an ultimate level perhaps we are also unified by being projections within the same underlying continuum. Yet this continuum is fundamentally empty, impossible to locate or limit, and infinitely beyond the confines of any formal system or universe, so it cannot really be called a “thing”; thus we are not “many” or “one” in actuality — what we really are is totally beyond such dualistic distinctions.

Awareness is like space or reality, something so fundamental, so axiomatic, that it is impossible to prove, grasp or describe from “inside” the system using the formal logical tools of the system. Since nothing is beyond awareness, there is no outside, no way to ever gain a perspective on awareness that is not mediated by awareness itself.

Therefore there is no way to reduce awareness to anything deeper; there is no way to find anything more fundamental than awareness. But despite this, awareness can be directly experienced, at least by itself.

That which is aware is self-aware. Self-awareness is the very nature of awareness. The self-awareness of awareness does not come from something else; it is inherent to awareness itself. Only awareness is capable of awareness. Nothing that is not aware can ever become aware.

This means awareness is truly fundamental; it has always been present everywhere. Awareness is inherent in the universe as the very basis of everything. It is not something anyone can synthesize, and we cannot build a machine that can suddenly experience awareness.

Only beings who are aware already can ever experience awareness. The fact that we are aware now means that we were always aware, even before we were born! Otherwise we never could have become aware in the first place!

Each of us “is” awareness. The experience of being aware is unique and undeniable. It has its own particular nature, but this cannot be expressed; it can only be known directly. There is no sentient being that is not aware. Furthermore, it would be a logical contradiction to claim “I am not aware that I am aware” or “I am aware that I am not aware,” and thus if anyone claims they are not aware, or claims to have ever experienced, or even to be able to imagine, there not being awareness, they are lying. There is nobody who does not experience their own awareness, even if they don’t recognize or admit that they experience it.

The experience of being self-aware is the unique experience of “being” — an experience so basic that it is indescribable in terms of anything else — something that no synthetic computer will ever have.

Eventually, it will be proved that no formal information processing system is capable of self-awareness, and thus that formal computers cannot be self-aware in principle. This proof will use the abstract self-referential structure of self-awareness to establish that no formal computer can ever be self-aware.

Simply put, computers and computer programs cannot be truly self-referential: they always must refer to something else — there must at least be a set of fixed meta-rules that are not self-referential for a computer or program to work. Awareness is not like this, however: awareness is perfectly self-referential without referring to anything else.

The question will then arise as to what self-awareness is and how it is possible. We will eventually conclude that systems that are self-aware are not formal systems, and that awareness must be at least as fundamental as, or more fundamental than, space, time and energy.

Currently most scientists and non-scientists consider the physical world to be outside of awareness and independent of it. But considering that nobody has ever experienced, or will ever experience, anything without awareness, it is illogical to assume that anything is really outside of awareness. It is actually far more rational to assume that whatever arises or is experienced is inside awareness, and that nothing is outside of awareness. This assumption that everything is within awareness would actually be a more scientific, observation-based conclusion than the opposite assumption, which is not founded on anything we have ever observed or will ever be able to observe. After all, we have never observed anything apart from awareness, have we? Thus, contrary to current beliefs, the onus is on scientists to prove that anything is outside of awareness, not the other way around!

Awareness is quite simply the ultimate primordial basic nature of reality itself — without awareness there could be no “objective reality” at all and no “subjective beings” to experience it. Awareness is completely transcendental, beyond all limitations and boundaries, outside of all possible systems. What hubris to think we can simply manufacture, or evolve, awareness with a pile of electrified silicon hardware and some software rules.

No matter how powerful the computer, no matter what it is made of, and no matter how sophisticated or emergent the software is, it will still never be aware or evolve awareness. No computer or machine intelligence will ever be aware. Even a quantum computer — if it is equivalent to a finite non-quantum computer at least — will not be capable of awareness, and even if it is a transinfinite computer I still have my doubts that it could ever be aware. Awareness is simply not an information process.

B. METHOD OF JUDGING THIS BET…

So the question ultimately is, how do we measure awareness, or at least determine whether a computer is or is not aware? How can we judge the outcome of this bet?

I propose a method here: we let the bettors mutually agree on a judge. If the judge is a computer, fine. If the judge is a human, fine. But both bettors must agree on the judge. If both bettors accept that party as the judge, then the result will be deemed final and reliable. If a computer is chosen by both parties to judge this, then I will concede defeat — but it would take a lot for any computer to convince me that it is aware and thus qualified to judge this competition. On the other hand, my opponent in this debate may accept a human judge — but since they believe that computers can be aware, accepting a human judge would contradict their own assertion: if a computer is really intelligent and aware, why would they choose a human judge over a computer judge?

This “recursive” judge-selection approach appeals to our inherent direct human experience of awareness, and to the fact that we trust another aware sentient being more than an inanimate machine to judge whether or not something is aware. This may be the only practical solution to this problem: If both parties agree that a computer can judge and the computer says the other computer is aware, then so be it! If both parties agree that a human can judge and the human says that the computer is not aware, so be it! May the best judge win!

Now, as long as we’re on the subject, how do we know that other humans, such as our potential human judge(s), are actually aware? I actually believe that self-awareness is detectable by other beings that are also aware, but not detectable by computers that are not aware.

C. A REVERSE TURING TEST FOR DETECTING AWARENESS IN A COMPUTER…

I propose a reversal of the Turing test for determining whether a computer is aware (and forgive me in advance if anyone else has already proposed this somewhere; I would be happy to give them credit).

Here is the test: something is aware if, whenever it is presented with a case where a human being and a synthetic machine intelligence are equally intelligent and capable of expression and interaction BUT not equally aware (the human is aware and the machine is not actually aware), it can reliably and accurately figure out that the human being is really aware and the machine is not really aware.

I believe that only systems that are actually aware can correctly differentiate between two equally intelligent entities where one is sentient and the other just a simulation of sentience, given enough time and experience with those systems.

How can such a differentiation be made? Assuming the human and computer candidates are equally intelligent and interactive, what is the signature of awareness or lack of awareness? What difference is there that can be measured? In my opinion there is a particular, yet indescribable, mutual recognition that takes place when I encounter another sentient being. I recognize their self-awareness with my own self-awareness. Think of it as the equivalent of a “network handshake” that occurs at a fundamental level between entities that are actually aware.

How is this recognition possible? Perhaps it is due to the fact that awareness, being inherently self-aware, is also inherently capable of recognizing awareness when it encounters it.

On another front, I actually have my doubts that any AI will ever be as intelligent and interactive as a human sentient being. In particular, I think this is not merely a matter of the difficulty of building such a complex computer; rather, there is a fundamental difference between machine cognition and the cognition of a sentient being.

A human sentient being’s mind transcends computation. Sentient cognition transcends the limits of formal computation; it is not equivalent to a Turing Machine, it is much more powerful than that. We humans are not formal systems, we are not Turing Machines. Humans can think in a way that no computer will ever be able to match, let alone imitate convincingly. We are able to transcend our own logics, our own belief systems, our own programs; we are able to enter and break out of loops at will, we are able to know infinities, to do completely irrational, spontaneous and creative things. We are much closer to infinity than any finite state automaton can ever be. We are simply not computers: although we can sometimes think like them, they cannot really think like us.

In any case, this may be “faith,” but for now at least I am quite certain that I am aware and that other humans and animals are also aware, but that machines, plants and other inanimate objects are not aware. I am certain that my awareness vastly transcends any machine intelligence that exists or ever will exist. I am certain that your awareness is just as transcendent as mine. Although I cannot prove to you that I am aware or that you are aware, I am willing to state such on the basis of my own direct experience, and I know that if you take a moment to meditate on your own self-awareness you will agree.

After all, we cannot prove the existence of space or time either — these are just ideas, and even physics has not explained their origins, nor can anyone even detect them directly; yet we both believe they exist, don’t we?

Now if I claimed that a suitably complex computer simulation would someday suddenly contain real physical space and time that was indistinguishable in any way from the physical space and time outside the simulation — you would probably disagree. You would say that the only “real” space-time is actually not in the computer but containing the computer, and any space-time that appears within the computer simulation is but a mere lower-order imitation and nothing like the real space-time that contains the computer.

No simulation can ever be exactly the same as what it simulates, even if it is functionally similar or equivalent, for several reasons. On a purely information basis, it should be obvious that if simulation B is within something else called A, then for B to be exactly the same as A it must contain A and B, and so on infinitely. At least if there is a finite amount of space and time to work with, we simply cannot build anything like this; we cannot build a simulation that contains an exact simulation of itself without getting into an infinite regression. Beyond this, there is a difference in medium: in the case of machine intelligence the medium is physical space, time and energy — that is what machine intelligence is made of. In the case of human awareness the medium is awareness itself, something at least as fundamental as space-time-energy if not more fundamental. Although human sentience can perform intelligent cognition, using a brain for example, it is not a computer and it is not made of space-time-energy. Human sentience goes beyond the limits of space-time-energy and therefore beyond computers.

If someone builds a Turing Machine that simulates a Turing Machine simulating a Turing Machine, and so on ad infinitum, the simulation will never even start, let alone be usable! As the saying goes, it’s Turtles All The Way Down! If you have finite space and time, but an infinite initial condition, it takes forever to simply set up the simulation, let alone to compute it.
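The regress is easy to exhibit in code. Here is a minimal sketch: a simulation that is required to contain an exact copy of itself never even finishes being constructed.

```python
# Minimal illustration of the regress described above: a simulation that
# must contain an exact copy of itself can never finish setting itself up.
def build_simulation():
    # No base case is possible: the inner simulation must be exact too.
    return {"world": "A", "inner_simulation": build_simulation()}

try:
    build_simulation()
except RecursionError:
    print("Setup never completes -- turtles all the way down.")
```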

This is the case with self-awareness as well: it is truly self-referential. No finite formal system can complete an infinitely self-referential process in finite time. We sentient beings can do this, however. Whenever we realize our own awareness directly — that is, whenever we ARE aware (as opposed to just representing this fact as a thought) — we are being infinitely self-referential in finite time. That must mean we are either able to do an infinite amount of computing in a finite amount of time, or we are not computing at all. Perhaps self-awareness just happens instantly and inherently rather than iteratively.

On a practical level as well, we can see that there is a difference between a simulated experience within a simulation and the actual reality it attempts to simulate, which exists outside the simulation. For example, suppose I make a computer simulation of chocolate and a simulated person who can eat the chocolate. Even though that simulated person tastes the simulated chocolate, they do not really taste chocolate at all — they have no actual experience of what chocolate really tastes like to beings in reality (beings outside the simulation).

Even if there are an infinite number of levels of simulation above the virtual reality we are in now, awareness is always ultimately beyond them all — it is the ultimate, highest level of reality; there is nothing beyond it.

Thus even an infinitely high-end computer simulation of awareness will be nothing like actual awareness and will not convince a truly aware being that it is actually aware.

Breakthrough in Finding Patterns in Complex Data Such as Sound

A new mathematical technique provides a dramatically better way to analyze data, such as audio data, radar, sonar, or any other form of time-frequency data.

Humans have 200 million light receptors in their eyes, 10 to 20 million receptors devoted to smell, but only 8,000 dedicated to sound. Yet despite this minuscule number, the auditory system is the fastest of the five senses. Researchers credit this discrepancy to a series of lightning-fast calculations in the brain that translate minimal input into maximal understanding. And whatever those calculations are, they’re far more precise than any sound-analysis program that exists today.
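For readers who want to experiment, conventional time-frequency analysis (the baseline any new technique competes with) is easy to sketch: the short-time Fourier transform slices a signal into overlapping windows and measures the spectrum of each. The window and hop sizes below are illustrative choices.

```python
# Minimal sketch of conventional time-frequency analysis: the
# short-time Fourier transform (STFT). This is the textbook baseline,
# not the new technique described above.
import numpy as np

def stft(signal, window_size=256, hop=128):
    """Return magnitudes: rows are frequency bins, columns are time steps."""
    window = np.hanning(window_size)
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # magnitude spectrum
    return np.array(frames).T

# Example: a chirp whose pitch rises over one second at 8 kHz sampling.
t = np.linspace(0, 1, 8000)
chirp = np.sin(2 * np.pi * (200 + 400 * t) * t)
print(stft(chirp).shape)  # (frequency bins, time steps)
```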

Managing Different Thinking Styles in Organizations

My father, Mayer Spivack, has written an interesting piece on managing thinking styles in organizations. He points out the difference between the thinking styles in early and later stage companies, and the challenge of managing and integrating these two aspects of the organization’s cognitive process. I think that the syncretic-associative mode (curious, inventive, exploratory, enthusiastic, adventurous) tends to be more externally focused, whereas the linear-logical mode tends to be more internally focused (careful, reductivist, analytical, skeptical, habitual). You could say this is the difference between an organization being extraverted or introverted, and the challenge is for organizations to evolve balanced personalities. This could be an interesting way to approach management consulting — and I wouldn’t be surprised if there are others thinking about organizations in these terms.

The Syncretic Management Process

Understanding The Value Of Associative Thinking In A World Of Linear Decision-Making

Decisions and communications among individuals in organizations are frequently initiated, managed and concluded almost entirely from within a framework of linear-logical thinking. Paradoxically, the products and services comprising the intellectual property and creative capital upon which most businesses are founded owe their existence to a non-linear, but nonetheless logical, syncretic process of associative thinking. Syncretic thinking is a mental process that makes non-linear, and therefore often unexpected, connections among seemingly divergent phenomena or data on the basis of common qualities.

By understanding this paradox between the creative syncretic process characterizing the founding stage of an organization, and the conservative linear processes that characterize later stages, we can generate a new mix of creative thinking that effectively includes both elements. These two modes of thought highlight several differences between the mind-sets that typify the young innovative start-up phase of a business compared with that same business at a later, more mature phase, settled into its niche. The associative and inventive thinking that generated a novel product or service and founded an organization or industry may, at maturity, have yielded to a more rigorous calculus of risk, competition and strategic analysis. In this later phase of organization, linear frameworks of thinking that tend to conserve capital and to advance the organization’s goals incrementally within an established niche are strongly reinforced and rewarded. Thus linearity, alone, is widely believed to be essential for survival. However, neither framework in isolation is likely to encourage the growth of new ideas that may form the future of the organization. Nor could an organization develop far beyond the initial concept stage without the benefit of both modalities operating together.

Harnessing The Collective Mind

Today I read an interesting article in the New York Times about a company called Rite-Solutions which is using a home-grown stock market for ideas to catalyze bottom-up innovation across all levels of personnel in their organization. This is a way to very effectively harness and focus the collective creativity and energy in an organization around the best ideas that the organization generates.

Using virtual stock market systems to measure community sentiment is not a new concept but it is a new frontier. I don’t think we’ve even scratched the surface of what this paradigm can accomplish. For lots of detailed links to resources on this topic see the wikipedia entry on prediction markets. This prediction markets portal also has collected interesting links on the topic. Here is an informative blog post about recent prediction market attempts. Here is a scathing critique of some prediction markets.

There are many interesting examples of prediction markets on the Web:

  • Google uses a similar kind of system — their own version of a prediction market — to enable staff members to collaboratively predict the likelihood that various internal projects and events will occur on-schedule.
  • Yahoo also has a prediction market called BuzzGame that enables visitors to help predict technology trends. 
  • Newsfutures Exchange is a prediction market about the news, which is powered by a commercial prediction market engine sold by a company called Newsfutures.
  • BlogShares, a fantasy stock market for Weblogs in which players invest virtual money in the blogs they think will gain the most audience share.
  • Intrade is another exchange for trading on idea futures.
  • The Iowa Political Futures Exchange is a prediction market that focuses on political change.
  • Tradesports is a prediction market around sports topics.
  • The Hollywood Stock Exchange is a prediction market around movies.
  • The Foresight Exchange is another prediction market for predicting future events.

Here are some interesting, more detailed discussions of prediction market ideas and potential features.
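None of these sites publishes its full matching engine, but one well-documented mechanism for automated prediction markets is Robin Hanson's logarithmic market scoring rule (LMSR), in which a market maker prices shares so that each outcome's price always behaves like a probability. Here is a minimal sketch of the two-outcome case; the liquidity parameter b is a free design choice.

```python
# Minimal sketch of Hanson's logarithmic market scoring rule (LMSR),
# an automated market maker used by some prediction markets.
import math

class LMSRMarket:
    def __init__(self, b=100.0):
        self.b = b                        # liquidity: higher = slower price moves
        self.q = {"yes": 0.0, "no": 0.0}  # shares sold per outcome

    def cost(self):
        """Cost function C(q) = b * ln(sum of exp(q_i / b))."""
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        """Instantaneous price = the probability the market assigns."""
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """A trade costs the change in C(q); this is what moves the price."""
        before = self.cost()
        self.q[outcome] += shares
        return self.cost() - before

market = LMSRMarket()
print(market.price("yes"))                            # 0.5: no information yet
paid = market.buy("yes", 50.0)                        # someone bets on "yes"
print(round(paid, 2), round(market.price("yes"), 3))  # price rises above 0.5
```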

Another area that is related, but highly underleveraged today, is enabling communities to help establish whether various ideas are correct using argumentation. By enabling masses of people to provide reasons to agree or disagree with ideas, and with those reasons as well, we can automatically rate which ideas are most agreed or disagreed with. One very interesting example of this is TruthMapping.com. Some further concepts related to this approach are discussed in this thread. A minimal sketch of this kind of recursive scoring appears below.
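TruthMapping's actual scoring method isn't described here, but the underlying idea can be sketched in a few lines: rate a claim by recursively combining the direct votes on it with the scores of the reasons offered for and against it. The class name and weighting scheme below are hypothetical.

```python
# Hypothetical sketch of recursive argument scoring: a claim's score blends
# its direct votes with the scores of its pro and con arguments, each of
# which can itself be argued. (Not TruthMapping's actual algorithm.)
class Claim:
    def __init__(self, text, agree=0, disagree=0):
        self.text = text
        self.agree = agree
        self.disagree = disagree
        self.pro = []   # claims offered as reasons to agree
        self.con = []   # claims offered as reasons to disagree

    def score(self):
        """Return a value in [-1, 1] combining votes and sub-arguments."""
        direct = self.agree - self.disagree
        argued = sum(c.score() for c in self.pro) - sum(c.score() for c in self.con)
        total = self.agree + self.disagree + len(self.pro) + len(self.con)
        return (direct + argued) / total if total else 0.0

idea = Claim("Prediction markets beat pundits", agree=12, disagree=5)
idea.pro.append(Claim("Traders risk real money", agree=8, disagree=1))
idea.con.append(Claim("Thin markets are noisy", agree=4, disagree=2))
print(round(idea.score(), 3))  # roughly 0.39: agreed with, but contested
```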

Quantum Evolution — A Radical Theory

The theory of quantum evolution is a radical new take on how mutations in DNA occur. Basically, the theory postulates that DNA molecules are in fact macroscopic quantum objects that undergo quantum interference. It is spearheaded by Johnjoe McFadden, a professor in the UK, and makes for an interesting read. Here is a brief overview of the main ideas of the theory. He also has some interesting ideas about a possible interaction between electromagnetic fields and consciousness. It’s way too early to tell whether he is correct in his hypotheses, but I give him high marks for original thinking! Very interesting stuff.

Collective Intelligence 2.0

Introduction:

This article proposes the creation of a new open, nonprofit service on the Web that will provide something akin to “collective self-awareness” back to the Web. This service is like a “Google Zeitgeist” on steroids, but with a lot more real-time, interactive, participatory data, technology and features in it. The goal is to measure and visualize the state of the collective mind of humanity, and provide this back to humanity in as close to real-time as is possible, from as many data sources as we can handle — as a web service.

By providing this service, we will enable higher levels of collective intelligence to emerge and self-organize on the Web. The key to collective intelligence (or any intelligence in fact) is self-awareness. Self-awareness is, in essence, a feedback loop in which a system measures its own internal state and the state of its environment, then builds a representation of that state, and then reasons about and reacts to that representation in order to generate future behavior. This feedback loop can be provided to any intelligent system — even the Web, even humanity as-a-whole. If we can provide the Web with such a service, then the Web can begin to “see itself” and react to its own state for the first time. And this is the first step to enabling the Web, and humanity as-a-whole, to become more collectively intelligent.

It should be noted that by “self-awareness” I don’t mean consciousness or sentience — I think that consciousness comes from humans at this point and we are not trying to synthesize it (we don’t need to; it’s already there). Instead, by “self-awareness” I mean a specific type of feedback loop — a specific Web service — that provides a mirror of the state of the whole back to its parts. The parts are the conscious elements of the system — whether humans and/or machines — and can then look at this meta-mirror to understand the whole as well as their place in it. By simply providing this meta-level mirror, along with ways that the individual parts of the system can report their state to it, and get the state of the whole back from it, we can enable a richer feedback loop between the parts and the whole. And as soon as this loop exists the entire system suddenly can and will become much more collectively intelligent.

What I am proposing is something quite common in artificial intelligence — for example, in the field of robotics, when building an autonomous robot. Until a robot is provided with a means by which it can sense its own internal state and the state of its nearby environment, it cannot behave intelligently or very autonomously. But once this self-representation and feedback loop is provided, it can then react to its own state and environment and suddenly can behave far more intelligently. All cybernetic systems rely on this basic design pattern. I’m simply proposing we implement something like this for the entire Web and the mass of humanity that is connected to it. It’s just a larger application of an existing pattern. Currently people get their views of “the whole” from the news media and the government — but these views suffer from bias, narrowness, lack of granularity, lack of real-time data, and the fact that they are one-way, top-down services with no feedback loop capabilities. Our global collective self-awareness — in order to be truly useful and legitimate — really must be two-way, inclusive, comprehensive, real-time and democratic. In the global collective awareness, unlike traditional media, the view of the whole is created in a bottom-up, emergent fashion from the sum of the reports from all the parts (instead of just a small pool of reporters or publishers, etc.).
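To make the design pattern concrete, here is a minimal sketch of such a loop in its robotics form: sense the internal and external state, fold the readings into a self-representation, and react to that representation. The sensor names and thresholds are purely illustrative.

```python
# Minimal sketch of the cybernetic feedback loop described above:
# sense -> represent -> react. All names and thresholds are illustrative.
import random

def sense():
    """Measure internal state and the nearby environment (stubbed with noise)."""
    return {"battery": random.uniform(0.0, 1.0),
            "obstacle_distance": random.uniform(0.0, 5.0)}

def represent(readings, model):
    """Fold new readings into the system's self-representation."""
    model.update(readings)
    model["low_power"] = model["battery"] < 0.2
    model["blocked"] = model["obstacle_distance"] < 0.5
    return model

def react(model):
    """Choose behavior by reasoning over the self-representation."""
    if model["low_power"]:
        return "return to charger"
    if model["blocked"]:
        return "turn away from obstacle"
    return "continue exploring"

model = {}
for step in range(3):   # closing this loop is what enables intelligent behavior
    model = represent(sense(), model)
    print(step, react(model))
```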

The system I envision would visualize the state of the global mind on a number of key dimensions, in real-time, based on what the people, software and organizations that comprise its “neurons” and “regions” report to it (or what it can figure out by mining artifacts they create). For example, this system would discover and rank the most timely and active topics, current events, people, places, organizations, products, articles, and websites in the world right now. From these topics it would link to related resources, discussions, opinions, etc. It would also provide a real-time mass opinion polling system, where people could start polls, vote on them, and see the results in real-time. And it would provide real-time statistics about the Web, the economy, the environment, and other key indicators.

The idea is to try to visualize the global mind — to make it concrete and real for people, to enable them to see what it is thinking, what is going on, and where they fit in it — and to enable them to start adapting and guiding their own behavior to it. By giving the parts of the system more visibility into the state of the whole, they can begin to self-organize collectively, which in turn makes the whole system function more intelligently.

Essentially I am proposing the creation of the largest and most sophisticated mirror ever built — a mirror that can reflect the state of the collective mind of humanity back to itself. This will enable an evolutionary process which eventually will result in humanity becoming more collectively self-aware and intelligent as-a-whole (instead of what it is today — just a set of separate interacting intelligent parts). By providing such a service, we can catalyze the evolution of higher-order meta-intelligence on this planet — the next step in human evolution. Creating this system is a grand cultural project of profound social value to all people on earth, now and in the future.

This proposal calls for creating a nonprofit organization to build and host this service as a major open-source initiative on the Web, like the Wikipedia, but with a very different user-experience and focus. It also calls for implementing the system with a hybrid central and distributed architecture. Although this vision is big, the specific technologies, design patterns, and features that are necessary to implement it are quite specific and already exist. They just have to be integrated, wrapped and rolled out. This will require an extraordinary and multidisciplinary team. If you’re interested in getting involved and think you can contribute resources that this project will need, let me know (see below for details).

Further Thoughts

Today I re-read this beautiful, visionary article by Kevin Kelly, about the birth of the global mind, in which he states:

The planet-sized “Web” computer is already more complex than a human brain and has surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. In 10 years, it will be ubiquitous. So will superintelligence emerge on the Web, not a supercomputer?

Kevin’s article got me thinking once again about an idea that has been on my mind for over a decade. I have often thought that the Web is growing into the collective nervous system of our species. This will in turn enable the human species to function increasingly as an intelligent superorganism, for example, like a beehive, or an ant colony — but perhaps even more intelligent. But the key to bringing this process about is self-awareness. In short, the planetary supermind cannot become truly intelligent until it evolves a form of collective self-awareness. Self-awareness is the most critical component of human intelligence — the sophistication of human self-awareness is what makes humans different from dumb machines, and from less intelligent species.

The Big Idea that I have been thinking about for over a decade is that if we can build something that functions like a collective self-awareness, then this could catalyze a huge leap in collective intelligence that would essentially “wake up” the global supermind and usher in a massive evolution in its intelligence and behavior. As the planetary supermind becomes more aware of its environment, its own state, and its own actions and plans, it will then naturally evolve higher levels of collective intelligence around this core. This evolutionary leap is of unimaginable importance to the future of our species.

In order for the collective mind to think and act more intelligently it must be able to sense itself and its world, and reason about them, with more precision — it must have a form of self-awareness. The essence of self-awareness is self-representation — the ability to sense, map, reason about, and react to one’s own internal state and the state of one’s nearby environment. In other words, self-awareness is a feedback loop by which a system measures and reacts to its own self-representations. Just as is the case with the evolution of individual human intelligence, the evolution of more sophisticated collective human intelligence will depend on the emergence of better collective feedback loops and self-representations. By enabling a feedback loop in which information can flow in both directions between the self-representations of individuals and a meta-level self-representation for the set of all individuals, the dynamics of the parts and the whole become more closely coupled. And when this happens, the system can truly start to adapt to itself intelligently, as a single collective intelligence instead of a collection of single intelligences.

In summary, in order to achieve higher levels of collective intelligence and behavior, the global mind will first need something that functions as its collective self-awareness — something that enables the parts to better sense and react to the state of the whole, and the whole to better sense and react to the state of its parts. What is needed essentially is something that functions as a collective analogue to a self — a global collective self.

Think of the global self as a vast mirror, reflecting the state of the global supermind back to itself. Mirrors are interesting things. At first they merely reflect, but soon they begin to guide decision-making. By simply providing humanity with a giant virtual mirror of what is going on across the minds of billions of individuals, and millions of groups and organizations, the collective mind will crystallize, see itself for the first time, and then it will begin to react to its own image. And this is the beginning of true collective cognition. When the parts can see themselves as a whole and react in real-time, then they begin to function as a whole instead of just a collection of separate parts. As this shift transpires, the state of the whole begins to feed back into the behavior of the parts, and the state of the parts in turn feeds back to the state of the whole. This cycle of bidirectional feedback between the parts and whole is the essence of cognition in all intelligent systems, whether individual brains, artificial intelligences, or entire worlds.

I believe that the time has come for this collective self to emerge on our planet. Like a vast virtual mirror, it will function as the planetary analogue to our own individual self-representations — that capacity of our individual minds which represents us back to ourselves. It will be comprised of maps that combine real-time periodic data updates and historical data from perhaps trillions of data sources (one for each person, group, organization and software agent on the grid). The resulting visualizations will be something like a vast fluid flow, or a many-particle simulation. It will require a massive computing capability to render it — perhaps a distributed supercomputer comprised of the nodes on the Web themselves, each hosting a part of the process. It will require new thinking about how to visualize trends in such vast amounts of data and dimensions. This is a great unexplored frontier in data visualization and knowledge discovery.

How It Might Work

I envision the planetary self functioning as a sort of portal — a Web service that aggregates and distributes all kinds of current real-time and historical data about the state of the whole, as well as its past states and future projected states. This portal would collect opinions, trends, and statistics about the human global mind, the environment, the economy, society, geopolitical events, and other indicators, and would map them graphically in time, geography, demography, and subject space — enabling everyone to see and explore the state of the global mind from different perspectives, with various overlays, and at arbitrary levels of magnification.

I think this system should provide an open data model and open API for adding and growing data sets, querying, remixing, visualizing, and subscribing to the data. All services that provide data sets, analysis or visualizations (or other interpretations) of potential value to understanding the state of the whole would be able to post data into our service for anyone to find and use. Search engines could post in the top search query terms. Sites that create tag clouds could post in tags and tag statistics. Sites that analyze the blogosphere could post in statistics about blogs, bloggers, and blog posts. Organizations that do public opinion polling, market and industry research, trend analysis, social research, or economic research could post in statistics they are generating. Academic researchers could post in statistics generated by projects they are doing to analyze trends on the Web, or within our data-set itself.

As data is pushed to us, or pulled by us, we would grow the largest central data repository about the state of the whole. Others could then write programs to analyze and remix our data, and then post their results back into the system for others to use as well. We would make use of our data for our own analysis, but anyone else could also do research and share their analysis through our system. End users and others could also subscribe to particular data, reports, or visualizations from our service, and could post in their own individual opinions, attention data feeds, or other inputs. We would serve as a central hub for search, analysis, and distribution of collective self-awareness.
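As a thought experiment, pushing a data point into such a service might look something like the sketch below. The endpoint, field names, and topic scheme are all invented for illustration; no such API exists yet.

```python
# Hypothetical sketch of pushing one report into the proposed service.
# Endpoint and fields are invented; nothing like this exists yet.
import json
import urllib.request

def post_report(topic, value, source):
    report = {"topic": topic, "value": value, "source": source}
    request = urllib.request.Request(
        "https://globalmind.example.org/api/report",   # invented endpoint
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# A search engine might push in its current top query terms:
# post_report("search/top-queries", ["mars rover", "bird flu"], "example-engine")
```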

The collective self would provide a sense of collective identity: who are we, how do we appear, what are we thinking about, what do we think about what we are thinking about, what are we doing, how well are we doing it, where are we now, where have we been, where are we going next. Perhaps it could be segmented by nation, or by age group, or by other dimensions as well to view various perspectives on these questions within it. It could gather its data by mining for it, as well as through direct push contributions from various data-sources. Individuals could even report on their own opinions, state, and activities to it if they wanted to, and these votes and data points would be reflected back in the whole in real time. Think of it as a giant emergent conversation comprised of trillions of participants, all helping to make sense of the same subject — our global self identity — together. It could even have real-time views that are animated and alive — like a functional brain image scan — so that people could see the virtual neurons and pathways in the global brain firing as they watch.

If this global self-representation existed, I would want to subscribe to it as a data feed on my desktop. I would want to run it in a dashboard in the upper right corner of my monitor — that I could expand at any time to explore further. It would provide me with alerts when events transpired that matched my particular interests, causes, or relationships. It would solicit my opinions and votes on issues of importance and interest to me. It would simultaneously function as my window to the world, and the world’s window to me. It would be my way of participating in the meta-level whole, whenever I wanted to. I could tell it my opinions about key issues, current events, problems, people, organizations, or even legislative proposals. I could tell it about the quality of life from my perspective, where I am living, in my industry and demographic niche. I could tell it about my hopes and fears for the future. I could tell it what I think is cool, or not cool, interesting or not interesting, good or bad, etc. I could tell it what news I was reading and what I think is noteworthy or important. And it would listen and learn, and take my contributions into account democratically along with those of billions of other people just like me all around the world. From this would emerge global visualizations and reports about what we are all thinking and doing, in aggregate, that I could track and respond to. Linked from these flows I could then find relevant news, conversations, organizations, people, products, services, events, and knowledge. And from all of this would emerge something greater than anything I can yet imagine — a thought process too big for any one human mind to contain.

I want to build this. I want to build the planetary Self. I am not suggesting that we build the entire global mind, I am just suggesting that we build the part of the system that functions as its collective self-awareness. The rest of the global mind is already there, as raw potential at least, and doesn’t have to be built. The Web, human minds, software agents, and organizations already exist. Their collective state just needs to be reflected in a single virtual mirror. As soon as this mirror exists they can begin to collectively self-organize and behave more intelligently, simply because they will have, for the first time, a way of measuring their collective state and behavior. Once there is a central collective self-awareness loop, the intelligence of the global mind will emerge and self-organize naturally over time. This collective self-awareness infrastructure is the central enabling technology that has to be there first for the next-leap in intelligence of the global mind to evolve.

Project Structure

I think this should be created as a non-profit open-source project. In fact, that is the only way that it can have legitimacy — it must be independent of any government, cultural or commercial perspective. It must be by and for the people, as purely and cleanly as possible. My guess is that to build this properly we would need to create a distributed grid computing system to collect, compute, visualize and distribute the data — it could be similar to SETI@Home; everyone could help host it. At the center of this grid, or perhaps in a set of supernodes, would be a vast supercomputing array that would manage the grid, do focused computations and data fusion operations. There would also need to be some serious money behind this project as well — perhaps from major foundations and donors. This system would be a global resource of potential incalculable value to the future of human evolution. It would be a project worth funding.

My Past Writing On This Topic

A Physics of Ideas: Measuring the Physical Properties of Memes

Towards a Worldwide Database

The Metaweb: A Graph of the Future

From Semantic Web to Global Mind

The Birth of the Metaweb

Are Organizations Organisms?

From Application-Centric to Data-Centric Computing

The Human Menome Project

Other Noteworthy Projects

Principia Cybernetica — the Global Mind Group

The Global Consciousness Project

W3C – The Semantic Web Working Group

Amazon’s Mechanical Turk

CHI — Harnessing Networks of Humans

Big Thinkers' Most Dangerous Ideas

The Edge has published mini-essays by 119 "big thinkers" on their "most dangerous ideas" — fun reading.

The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?

A Possible Future of Physics

Today I read this nice article which provides a short consumer-friendly overview of the history of the Digital Physics paradigm. Digital Physics is not mainstream physics — but it is growing and someday could become huge. It brings together computer scientists and physicists in an interdisciplinary approach to physics. While many advocates simply take the position that some physical processes resemble computations, the most extreme would go so far as to posit that the universe is actually a giant computation taking place on some sort of primordial computing fabric.

I’ve been involved with this field since the 1980s when, as a college student at Oberlin, I got interested in cellular automata as a tool for modeling both the brain and the universe. This led to summer research on cellular automata simulations of physical systems on the CAM-6 parallel processor at the lab of Tommaso Toffoli and Norman Margolus at MIT. They were among the first experimentalists in the digital physics field — running massive cellular automata simulations of fluid dynamics, population biology, optics, and spin glasses, among other things. Since then I’ve had the opportunity to spend some time with both Ed Fredkin and Stephen Wolfram, discussing the future of digital physics and the quest for a Theory of Everything.
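For readers who have never seen one, here is a minimal sketch of the kind of object this work is built on: an elementary one-dimensional cellular automaton, in which a row of cells evolves by a fixed local rule. Rule 110 is shown, a rule famously capable of universal computation.

```python
# Minimal sketch of an elementary 1-D cellular automaton (Rule 110):
# each cell's next state depends only on itself and its two neighbors.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the three-cell neighborhood as a number from 0 to 7
        # (edges wrap around), then look up the matching bit of RULE.
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> pattern) & 1)
    return out

cells = [0] * 40 + [1] + [0] * 40   # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```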

I think that Digital Physics is the Next Revolution in physics, though it may still be another 50 to 100 years before it really takes root. And it is just the beginning of what I think may be an ongoing succession of physical models. Below, I speculate about where this trend will lead us (disclaimer: Wild Speculation ahead: Read at your own risk!)

Humans have always used their most advanced technologies as metaphors for the physical universe — a process which can be seen in the history of physics itself. For example, Newtonian physics and Einstein’s Relativity were largely based on the metaphor of clocks — which were for a long time the most sophisticated “computers” on earth. Whereas Newton’s vision was mainly influenced by the mechanics of clocks, Einstein’s vision was more influenced by the dynamics of clocks. Thanks to clocks we have been able to develop sophisticated computers. And now, with the advent of the computer era, we have begun to model our universe using computers, and even to think of it as a computer.

This emerging Computer Model of physics (aka “Digital Physics”) takes the computer metaphor to its fullest extent by viewing the universe itself as a computer program running on a vast cosmic computer of some sort. The next step in this emerging model will be enabled as quantum computing and the theory of quantum computation begin to be applied to physics. Quantum computers will revolutionize both theoretical and experimental physics by enabling the simulation and testing of infinitely complex physical systems in finite time, using finite computing resources. This will naturally evolve the digital physics paradigm such that the universe is conceived of as a quantum computer.

After the Computer Model of the universe, the next model will come when we start to view our universe not as a single computer, but rather as a vast network of computers. This shift mirrors the evolution of computers and networks, which has led us to the Internet. We will start to view our universe as something like a vast computer network in which countless computations interact, move around, compete, reproduce and evolve higher levels of fitness in an almost Darwinian manner — instead of a single isolated Perfect Computer In Space.

Out of this Network Model of the universe, we will begin to view the cosmos as something that resembles a nervous system. The human nervous system is a computing network that is conditioned by countless internal and external factors at every level of scale, all at once. Feedback is essential to how it functions. It is neither a bottom-up nor a top-down system — rather it is an all-directions system. The universe is also like this — we cannot adequately explain it with a reductionist model — ultimately, the only way to understand it is from all directions at once, as a network. The Network Model will have three phases of development — the first will focus on the inner-workings of the nodes, the second on the functioning of the links, and the third on the interactions that take place via nodes and links.

What’s beyond the Network Model? My hunch is that it will have something to do with a realization that the universal computer network is capable of self-modification, such that the output of the programs that run on it affects the very structure of the computing nodes and links that comprise it. When we cross that bridge we will realize that it is not precisely correct to conceive of a division between the computing fabric that comprises the network, the software programs that run on that network, and the output of those programs — instead we will see this as a single self-modifying system. Rather than a static primordial computing fabric on which various programs run like so many experiments in a lab, we will view the entire system as a recursive loop in which each output is taken as an input for the next step in time. In the earlier Computer Model and Network Model, the physical laws were conceived of as being somehow “hard coded” into the computer, but in the model beyond these — what might be called the Evolutionary Model — we will see that the physical laws themselves are evolving. In other words, we will see that there is a feedback loop between the output of the universal computation, the structure of the underlying computing fabric, and the definition of the programs that run on it. These three layers will come to be seen as a single evolving, self-reproducing, self-modifying system. The activity that takes place in the universe will be understood to directly affect the underlying physical laws themselves, and vice-versa.

Next we will begin a phase which could be called the Organic Model of the universe, in which we will begin to view the cosmos as a kind of meta-organism — a creative, evolving, living thing. Our knowledge of the Network Model will enable us to map what takes place on different levels of scale to familiar physical processes that take place within the human organism. Our understanding of the universe will start to take on a distinctly biological character. We will begin to look at computational pathways and the equivalent of organs on a cosmic scale, just as we do within the chemical and biological systems of the human body. We will begin to view the functioning of the cosmos as intelligent and creative, and even capable of rudimentary adaptive learning on both the smallest and vastest scales. We will begin to become open to the possibility that there are forms of intelligence and life that are vastly smaller as well as vastly larger than what we experience from our human-centric measurement perspective.

Beyond this phase, we will begin to look at our evolving cosmos as a kind of meta-organism in a community of other similar meta-organisms. We might call this the Social Model. Beyond just focusing on our individual universe in isolation, we will begin to look at it as a member of a community of similarly evolving universes — an infinite array of interacting generations of universes that are subject to a process of cosmic natural selection of universes.  These different universes will be understood to be capable of communicating, and even reproducing to form new universes. We might call this the Metaverse. Physicists have already glimpsed this level of reality from a theoretical perspective, but we lack the tools to really describe its mechanics, let alone dynamics. But we’ll get there eventually, if our species doesn’t destroy itself first.

Simulated Universes and the Nature of Consciousness

Researchers in Europe have completed the first phase of what may be the largest computational physics experiment in history: They built and ran a simulated universe through 14 billion years of development. The experiment used up 25 million megabytes of memory, and the biggest supercomputer in Europe for a month. The result was a “Cube of Creation” of 20 billion light years per side, containing 20 million simulated galaxies. Now they’re studying it to see what evolved. They hope to gain insights into the function of black holes, and other cosmological principles. This is an amazing piece of work — definitely the future of cosmology research.

In previous articles, I’ve speculated that our own universe might also be such a simulation, perhaps run by a much more advanced civilization in a meta-universe outside ours. But in fact, I think our universe is probably quite different from a mere computer simulation (despite how cool it would be if it were a computer simulation!) — because I don’t believe we can explain everything there is in terms of information and computation: I think consciousness doesn’t fit in that model. After exploring this issue for more than 20 years from the perspectives of computer science and physics, philosophy and religion, I’ve come to believe that consciousness cannot be reduced to, or emerge from, information or computation. As far as I can tell, it’s something at least as fundamental as space, time, matter and energy, if not more fundamental. I would even go so far as to say that we won’t ever really understand what the universe is or how it develops or functions without first understanding consciousness much more deeply.

There are basically two fundamental, mutually exclusive camps on the issue of consciousness that have been sparring for millennia. Either you are in the camp that believes consciousness is something that emerges from the physical universe, or you are in the camp that believes that the physical universe is something that emerges from consciousness. (Note: even the Buddhist theory of interdependent origination, which says that physical phenomena and consciousness arise in co-dependence at the same time, rather than one from the other, can still be reduced to a version of Camp #2, because in that theory interdependent events take place by virtue of a primordial unification of mind and phenomena that is equivalent to what I mean when I say “everything emerges from consciousness” — in other words, nothing is truly separate from consciousness.)

I am a Camp 2 person. I believe that consciousness is more fundamental than anything else. The example of a dream can be used to illustrate my view on consciousness: although everything in a dream is a projection of consciousness, nothing in a dream is conscious. For example, if you dream of Sue, that is not really Sue: that dream-image of Sue is not really a conscious person, it’s just a projection of your consciousness. Similarly, in a dream, if you find yourself interacting with a dream-image of Sue, your dream body in that dream is also not conscious; it is equally just a projection of your consciousness.

Even if you experience your dream from the perspective of being a particular character, looking through their eyes, thinking their thoughts, etc., that which is actually having that experience — the consciousness that is dreaming the dream — is outside the dream. It doesn’t appear anywhere within it, it cannot be measured within it, and it has no form or location. But still, as the one having the dream, it is undeniable that there is a dream appearing and an experience of that appearance. Furthermore, the nature of consciousness itself is self-aware — it can realize its own capacity of cognizance — the fact that it is aware, even though nothing to grasp as “consciousness” can actually be located anywhere. This self-awareness, in my view, is not a function of the brain or the body, or any physical system; rather it is completely beyond material phenomena — beyond all possible universes in fact.

So who or what is projecting the universe in the manner of a dream? Is the universe nothing more than a dream in fact? This question cannot be answered by physics — it can neither be disproved nor proved. Even various religions disagree about how to answer it — some label consciousness as soul, or universal or eternal Self, or as God, while other systems, such as Buddhism, instead argue that it is in fact so completely transcendental that it is entirely empty of self-nature and therefore cannot be reified as one or many, something or nothing, self or other, or truly-existent or non-existent.

Please note that when claiming that everything comes from consciousness, and using the example of a dream to illustrate that, I am not suggesting the philosophical view of solipsism, which posits that everything is just in my own mind, or perhaps in some cosmic mind. Nor am I proposing an eternalistic argument that claims that “all is one,” or that there is an ultimate, truly-existing soul, or that there are or are not really other beings. From my perspective, which comes largely from my studies of the Buddhist theory of dependent-arising and emptiness, what I am calling “consciousness” cannot really be conceived of — because it is literally beyond thoughts, and even beyond the universe; it is not a thing. Therefore, there is no way within this universe to frame or express the nature of consciousness. All we can do is use analogies, which are just shadows of the real thing, not the real thing itself. However, although we cannot describe consciousness, we can directly experience it as it really is, without using concepts or analogies, because we are it.

There are a number of difficult subtleties that have to be carefully sorted out when you really go deeply into this view of consciousness — in particular, the question of whether other beings exist, or whether there is really a universe “out there” apart from your own mind (whether there is actually a sound when a tree falls and there is nobody there to hear it, for example). My opinion is that it is certainly possible for there to be multiple beings with their own experiences — and furthermore, that is certainly what appears to be taking place. Yet to be precise, we cannot prove that what appear to be other beings truly exist “out there,” nor can we prove that there are no other beings apart from ourselves — in fact, we really cannot decide one way or the other about this question, at least if we want to be hairsplittingly precise. Therefore, from a philosophical perspective, the best thing to do is simply not to take a position on that question.

There is no way to prove that “everything is a dream” or that “everything is not a dream,” and so we simply have to avoid forgetting that we really don’t know which position is correct. Most people err on the side of thinking “everything is not a dream,” and so they get totally absorbed in the intricacies of daily life and the material world — they become mindless materialists. On the other hand, those who err on the opposite side, thinking that “everything is just a dream,” tend to fall into the extreme of being spaced-out spiritualists. So our task, as rational observers of reality, is to be as true as possible to what we can actually observe for ourselves — meaning we have to avoid becoming either mindless materialists or spaced-out spiritualists. To be most true to what we can observe, we have to take the “middle road” and avoid falling into any extreme philosophical viewpoints. This means, in particular, that we should not fall into an overly materialistic view, thinking that everything that seems to be “out there” really truly exists apart from our own minds, nor should we fall into a nihilistic view, thinking that there is only ourselves or that there is nothing at all.

From the philosophical view of Tibetan Buddhism (which happens to be my favorite), the most accurate way to portray consciousness might be to say that it, and in fact anything else we can label, is neither nothing, one, nor many — and therefore we avoid falling into the extremes of eternalism and nihilism. Eternalism is materialism — the belief that phenomena truly exist on their own, that they can be decomposed into irreducible particles. Western science is basically materialism. Nihilism is the extreme opposite of materialism — it posits that nothing exists at all. Nihilism leads to all sorts of delusions and bad behavior, but it is fortunately quite easy to refute: indeed, the very fact that anyone is able to hold the belief that they are a nihilist refutes their belief in nothingness.

I should also mention that while I am definitely a Camp 2 person, I don’t discount the utility of science for explaining how the material world appears to function; I just don’t think it can explain what the material world really is, nor what consciousness is. I think that science is ideally suited to explaining the dynamics of matter and energy in time and space — the various physical patterns that we observe. But at the same time, I think that to really explain everything, a theory also has to explain consciousness, and I don’t think there is a material, scientific explanation for that, because consciousness is not simply a pattern in the physical world — it is completely transcendental.

Some scientists take the fact that consciousness cannot be located or measured like physical phenomena as proof that it doesn’t exist at all, but that argument is fallacious. Just because no scientist has ever isolated or measured consciousness doesn’t mean it’s not there; it just means it’s beyond the scope of their measurement tools. For example, if our only measurement tool is a microscope, we cannot conclude that galaxies don’t exist simply because we cannot see them through it; with a telescope, we certainly can see galaxies. In the case of consciousness the situation is even simpler: no material measurement tool can measure consciousness, but that doesn’t mean it doesn’t exist, because everyone, including every scientist, directly knows that they are conscious. So in a sense we could say that the only “measurement tool” that can detect consciousness is consciousness itself. There’s another interesting fact worth mentioning here: no scientist has ever directly seen or measured space, energy, or time (all measurements of these are in fact indirect and inferential) — but for some reason they are willing to believe in those phenomena. Why are scientists willing to believe in their inferences about the nature of space, time and energy, but not consciousness? I find this puzzling.

Science began as what was called “natural philosophy” — it wasn’t confined to the material domain but was actually a much broader undertaking, an attempt to explain everything. Natural philosophers, such as Sir Isaac Newton, were interested in all the dimensions of experience, including the mind, the soul and even the possibility of God. They truly wanted to understand the world, and they considered anything observable to be within the scope of science. Gradually, however, this open-mindedness was lost and science became increasingly limited in focus. Today science is incredibly myopic and closed-minded — it has in fact become institutionalized to the point where, to succeed and be respected by their peers, scientists must specialize and conform to the point of losing almost all originality and intellectual freedom. A side-effect of this is that scientists have become so focused on trying to observe what everyone else observes that they no longer notice what they themselves observe — they no longer consider their own minds, consciousness, or experiences to be valid subjects of observation, nor do they consider themselves qualified observers of their own minds, consciousness and experience.

This belief comes from the mistaken idea that it is impossible to objectively observe one’s own experience. Modern science is built on the notion that only observations which can be demonstrated to, and repeated by, other scientists are valid. The problem is that all observations, whether of one’s own mind or of some external experimental result, are ultimately subjective. So although scientists like to think their methods are not subjective, to be precise, they are. And because ultimately all observations are subjective, observing one’s own mind directly is really no more subjective than observing any external physical experiment or phenomenon: we cannot really demonstrate anything objectively, whether internal or external, to anyone, because ultimately we all sense things subjectively.

If this isn’t enough to hammer home the point that self-observation of the mind is a valid pursuit of science, there is another argument: If consciousness is truly fundamental, then it is not conditioned by anything it observes, and so it is perfectly objective in nature. Of course, here we have to be very careful to make a clear distinction between consciousness itself and the many layers of thoughts that may obscure it (thoughts are not consciousness). Using consciousness (without thoughts) to look directly at consciousness is perhaps the most objective scientific experiment possible!

Therefore, just because nobody can demonstrate their consciousness to anyone else doesn’t mean that consciousness doesn’t exist or that it is unscientific to study it by direct self-observation. In fact, the only way to directly study consciousness is by direct self-observation — that is the best measuring device for the job, so to speak. Furthermore, it is indeed possible to “demonstrate” one’s own observations of consciousness to others in a repeatable manner: if others follow the same steps and end up observing the same things about their own consciousness, then the experiment has been repeated successfully. So in fact the direct study of consciousness is valid, objective and repeatable. In short, it is and should be within the scope of scientific study.

Until scientists discover this fact and look inwards at their own minds, they are never going to make real progress in the scientific study of consciousness, because this is the only way to actually study consciousness. (Note for the brain scientists in the audience: merely studying the physical brain is not really studying consciousness itself — consciousness is not a brainstate but is rather that which is capable of knowing or being. In fact, consciousness itself has no content — it is not a set of thoughts or sensations. Brainstates may represent the content of consciousness at a given moment just as a frame on a movie film print represents the content of that particular moment of the movie, but in this analogy consciousness is the light in the movie projector, not the film or the patterns on the film. Don’t mistake the content of a given frame for the light of the movie projector!).

So, I hope I’ve made the point by now that from the perspective of what we can directly observe for ourselves, it actually makes more sense to start with the hypothesis that consciousness is fundamental — since nobody has ever directly experienced any phenomena outside the scope of their own consciousness. As far as anyone can directly observe, wherever phenomena are found, there is also an observer of those phenomena at that very moment. Furthermore, as far as anyone can measure, there is no way to establish that phenomena actually exist “out there” when they are not observed. So from a truly rational, scientific point of view, consciousness appears to be fundamental: it is ever-present in our experience of the universe, and at least as necessary to having that experience as space, time and energy are.

It is in fact more rational and scientific to hold that consciousness is fundamental until proven otherwise than to hold the reverse hypothesis. After all, as far as we ourselves can observe, our experience of the universe is mediated by consciousness, and there is no way to establish that the universe we perceive is separate from our consciousness. All the evidence seems to indicate the contrary: that the universe is not separate from our consciousness. Many scientists who pride themselves on their rationality in all other areas seem to overlook this fact (are they literally “blinded by science”?). They think of consciousness as some kind of process within the physical brain. Some even attempt to “explain away” consciousness as some sort of epiphenomenon (e.g. an illusion that can be reduced to something physical), or worse, as a mathemagical result of “complex enough” computation (the absurd but oft-cited “someday the computer just gets sooooooo complex that it suddenly wakes up” argument). But none of these approaches can account for the actual experience (what the philosopher John Searle calls “qualia”) of being conscious — an experience to which each of us has direct and undeniable access.

I am skeptical that any computer will ever be able to simulate, let alone embody, the actual experience of consciousness. Since our universe and everything material is, in my opinion, emergent from consciousness, not vice-versa, it is not possible to cause consciousness to emerge from physical things: I don’t think you can build a machine that will become conscious. I don’t think we can synthesize consciousness — it’s already there, and we don’t create it. We might be able to build very smart machines, but they still won’t be conscious in the way that truly conscious beings are. In fact, I think the best and fastest way to make something conscious, if that’s what you want to do, is to just have a baby.

Consciousness is not a material thing, nor is it the result of a material process. It can neither be created nor destroyed, and it never actually “inhabits” physical matter, which is why we cannot find consciousness anywhere in the brain or body when we try to measure it (i.e. the brain and body are within consciousness — they can be found within consciousness, but consciousness cannot be found within them). And if that’s the case, then no computer simulation will ever really contain actual consciousness — at best it will be merely a projection in the consciousness of whoever makes the simulation. Why does this matter? For one thing, it means that we will never succeed in creating artificial intelligences that are conscious, and furthermore, that no simulation of any kind will be conscious. It follows, then, that no simulated universe will truly be like our universe — because there won’t be any real conscious beings in the simulation.

My point here is that to really simulate our universe completely, we would have to be able to make a simulation that contains conscious beings, and we can’t do that because we cannot make consciousness. And this is important because consciousness is not just some minor force in our universe — in fact it may have a vastly larger role in shaping our universe than we can presently see or understand. Some physicists even go so far as to postulate that if it weren’t for consciousness our universe wouldn’t exist, or alternatively, that our universe has evolved specifically to support conscious life (what is called the anthropic principle). Although we cannot prove or disprove such views at present, we can certainly see the effect that conscious life has had on our home planet: if consciousness can transform our planet from a jungle to a teeming metropolis in a matter of a few million years, then by extension it could do the same thing to entire solar systems, and perhaps, over billions of years, interstellar civilizations of conscious beings could literally transform galaxies. No simulated universe will be able to truly model or account for such effects.

Research into quantum mechanics also suggests that consciousness plays an important, but not yet understood, role in shaping physical reality. The act of observation clearly affects the outcome of certain types of experiments, for example. Whether or not you observe a particle determines how it seems to behave. Whether you observe a system determines whether or not it is in one of various possible states. The act of observation seems to be the catalyst that collapses a range of possibilities into a particular event. This can actually be measured experimentally at very small scales, and there is speculation that it operates similarly at larger scales too, in some circumstances. But even if, merely at the very smallest scales, consciousness — “the act of observation” — is built into how physics works, then it follows that it has an emergent effect at the largest scale — the whole universe. But who knows, maybe the effects of consciousness on the whole are direct, not merely emergent? We don’t know yet.
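For readers who want the textbook formalism behind “observation collapses possibility into a particular event,” the standard description of quantum measurement looks like this (generic quantum mechanics with standard notation; nothing in it settles, one way or the other, whether the observer must be conscious):

\[
|\psi\rangle = \sum_i c_i \, |i\rangle, \qquad \sum_i |c_i|^2 = 1
\]
\[
P(i) = |c_i|^2, \qquad |\psi\rangle \;\longrightarrow\; |i\rangle \ \text{ upon observing outcome } i
\]

The Born rule P(i) gives the probability of each outcome, and the “collapse” is the replacement of the superposition by the single observed state.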

There’s another reason that consciousness may throw a wrench in computer simulations of the mind and universes alike: Free will. Given that consciousness is totally transcendental, it is not conditioned by anything material. Yet, since everything is a projection of consciousness, consciousness can affect the world. To understand this, we can go back to the dream analogy again: For example, a dreaming consciousness can sense its dream projections, and it is even possible to have a lucid dream in which the dreamer controls the content of the dream, but at no time does the content of the dream ever have the ability to limit or condition the dreaming consciousness. In other words, it’s a one-way interaction. Consciousness can condition what it projects, but projections cannot condition consciousness.

Note here that consciousness is at an entirely different level from thoughts and sensory experiences — they are mere appearances, not consciousness itself. This means that ultimately conscious beings have free will: they can affect what appears to their consciousness, but what appears to them cannot ultimately affect their consciousness in return — consciousness remains basically free, empty, pure, unconditioned, and untarnished at all times, regardless of what projections currently appear to be taking place. And if consciousness has free will, then no computer simulation will be able to model it. The reason is simple: computers are logic machines that follow instructions. They don’t have free will; they just follow sequences of logical operations. Nowhere in a computer or computer program is there anything that is truly free. At best we might be able to simulate computer intelligences that act as if they are free, but in fact their seemingly free behavior is still caused by an underlying computer program at some level. Even non-deterministic, “emergent computations” are still reducible to underlying programs. But real free will is irreducible — it is not the result of any programming and cannot be conditioned by any external forces. In other words, consciousness is not a computer program; it is inherently unconditioned and free. No computer program can replicate that freedom.
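To make the claim that “seemingly free behavior is still caused by an underlying computer program” concrete, here is a minimal sketch (my own toy illustration; the agent, its actions, and the seed are all invented for the example): a program whose “choices” look spontaneous, yet replay identically whenever its hidden internal state is the same.

    import random

    def seemingly_free_agent(seed, steps=5):
        # The agent appears to "choose" among actions spontaneously...
        rng = random.Random(seed)
        actions = ["explore", "rest", "signal", "build"]
        return [rng.choice(actions) for _ in range(steps)]

    # ...but identical hidden state yields identical "choices" every time,
    # because the apparent freedom reduces to a deterministic program.
    assert seemingly_free_agent(42) == seemingly_free_agent(42)
    print(seemingly_free_agent(42))

Whatever unpredictability the agent shows is entirely a function of its inputs; nothing in it is unconditioned in the sense described above.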

In conclusion, I think our present civilization is at least several thousand years from really understanding much about consciousness and how it fits into physics, or vice-versa — but if we keep going the way we’re going, our civilization probably won’t last that long. So to save time, we could look more deeply into the cosmologies of earlier civilizations that were much more advanced when it comes to consciousness than we are (for example, the Buddhist cosmology as represented in the Kalachakra system, or the Mayan cosmology, both of which are far more inclusive of consciousness in their explanations of the universe). I’m not suggesting that those cosmologies are going to help us understand black holes — for that, our modern cosmology is probably better — but they certainly could help us understand how consciousness fits in.

There’s a lot more that could be said about this view of consciousness, but I’m not enough of an expert on Buddhist philosophy to explain it all in detail. I should also add that it is possible that my view is not exactly equivalent to the Buddhist view, and if that is the case, then any differences or mistakes herein are my own. If you’re interested in going directly to the source (which is far superior to anything I could write), I would suggest that you start by reading up on the philosophy of dependent-arising and emptiness, for example the work of the ancient Indian philosopher Nagarjuna, and then perhaps move on to the Buddhist conception of mind. There are lots of good books available on these subjects, although some are quite scholarly and difficult for beginners. Another, more accessible approach is to discuss these issues with a qualified scholar and teacher of Buddhist philosophy.

New Ice Age Coming Much Sooner than Expected?

Significant new research findings indicate that a new ice age may be starting sooner than anyone expected…

CLIMATE change researchers have detected the first signs of a slowdown in the Gulf Stream — the mighty ocean current that keeps Britain and Europe from freezing.

They have found that one of the “engines” driving the Gulf Stream — the sinking of supercooled water in the Greenland Sea — has weakened to less than a quarter of its former strength.

The weakening, apparently caused by global warming, could herald big changes in the current over the next few years or decades. Paradoxically, it could lead to Britain and northwestern Europe undergoing a sharp drop in temperatures.

Click here for the full article

Scale-Free Networks and Mobile Services

Here is an interesting article about an analysis of SMS messaging versus e-mail messaging on mobile networks. The conclusion is that e-mail messaging is more efficient for mobile consumers because e-mail networks are scale-free networks. The article predicts that services based on scale-free topologies will ultimately win out over less optimal alternatives. Thanks to Murli.
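As a rough illustration of what “scale-free” means in this context (a generic sketch using the networkx library in Python; the parameters are illustrative and not taken from the article): in a scale-free network the number of links per node follows a power law, so a small number of hubs carry most of the connections.

    import networkx as nx

    # Preferential attachment (Barabasi-Albert) yields a scale-free network:
    # each new node links to m existing nodes, favoring already-popular hubs.
    G = nx.barabasi_albert_graph(n=10000, m=2)

    degrees = [d for _, d in G.degree()]
    # Power-law signature: the typical node has few links,
    # while the best-connected hubs have a great many.
    print("median degree:", sorted(degrees)[len(degrees) // 2])
    print("max degree:", max(degrees))

Hubs like these are part of why routing messages over a scale-free topology can be so efficient: most nodes are only a few hops away from a well-connected hub.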

Creator of Sim City Previews Amazing New Game

Many years ago I spoke with Will Wright — one of the most interesting visionaries I’ve met, and the creator of Sim City — about his dream of a universe game: one in which the player could evolve life from the simple cellular level all the way up through galactic-scale civilizations. Well, it seems he has been busy working on this dream, and it sounds fascinating. He previewed it recently at a meeting of game designers, where he discussed the emergent, unpredictable and open-ended nature of the game, which is called Spore. When I spoke to Will about this years ago, I remember that he spoke of wanting to create a game that would enable players to experience the wonder and creative potential of the universe at all levels of scale. It sounds amazing; I can’t wait to try it.

Confabulation: New Theory of Cognition Announced

After 30 years of research, a very interesting new theory of cognition has been announced. The theory posits that all human cognition and behavior are based on just one simple, non-algorithmic procedure, which has been named confabulation. If the theory is correct it could offer a radical new approach to artificial intelligence, knowledge discovery, and knowledge management.

Brain Study Reveals Differences Between Semantic and Episodic Memory

This interesting new brain study reveals processing differences between Semantic Memory and Episodic Memory in human brains. Nature performs these functions differently, and there is probably a good reason why that is so. On the Web we don’t really have an equivalent of Episodic Memory or Semantic Memory yet… but we’re working on it!

If the Universe is a Simulation, then What?

Here’s an interesting speculation. Assume for the moment that our universe is in fact a simulation running on a vast computing system created by a race of beings far more advanced than we can presently imagine. The next logical question would be, “Why would an advanced civilization want or need to undertake such a project?”

Without debating whether or not such a project is possible, let’s simply address this second question. I think one reason it might be valuable to simulate an entire universe is, in fact, to understand the universe that one is already in. It may turn out that cosmology research in “super advanced civilizations” takes place via such universe simulations rather than via observations of their own universes.

Why might this be the case? Well, one reason is that Gödel proved that in any formal system powerful enough to express ordinary arithmetic, either there are truths that cannot be proved within that system, or the system produces contradictions. It is not possible to design a formal system equivalent to mathematics as commonly defined that is both logically complete and consistent. Perhaps because of this fundamental limitation on knowledge, at a certain level of sophistication physics will hit a similar wall: either some truths about the universe cannot be proved using the existing physics, or the existing physics will result in contradictions.
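For reference, the first incompleteness theorem is usually stated along the following lines (a standard formulation from mathematical logic; the symbols F and G_F are just conventional names):

\[
\text{If } F \text{ is a consistent, effectively axiomatized theory extending basic arithmetic,}
\]
\[
\text{then there is a sentence } G_F \text{ such that } F \nvdash G_F \text{ and } F \nvdash \neg G_F .
\]

That is, G_F can neither be proved nor refuted within F, even though, viewed from outside F, it is true. This is the flavor of limitation being extrapolated to physics here.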

But there may be a “workaround” to this problem — a way to discover unprovable truths about the universe, without having to derive them from a particular formal physical system — namely, simulating lots of potential universes, each with different physics, to see what the results are. Perhaps by doing a meta-level study of the behavior of different sets of physical laws on different sets of initial conditions, meta-laws can be discovered that apply not only to particular universes, but to all possible universes.
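To make the idea of such a meta-level study slightly more concrete, here is a minimal sketch in Python (entirely my own toy construction; run_toy_universe and its “gravity” and “coupling” parameters are invented placeholders, not real physics): sweep over candidate laws, evolve each toy universe forward, and look for regularities that hold across the whole sweep.

    import itertools

    def run_toy_universe(gravity, coupling, steps=1000):
        # Placeholder "physics": evolve a single scalar state forward in time.
        # A real universe simulation would go here.
        state = 1.0
        for _ in range(steps):
            state = state + gravity * state - coupling * state ** 2
        return state

    # Sweep over candidate "laws" (parameter values) and record each outcome.
    outcomes = {}
    for g, c in itertools.product([0.001, 0.01, 0.1], [0.001, 0.01, 0.1]):
        outcomes[(g, c)] = run_toy_universe(g, c)

    # A "meta-law" is any regularity that holds across the entire sweep,
    # e.g. which regions of parameter space yield stable (bounded) universes.
    stable = {params for params, s in outcomes.items() if abs(s) < 1e6}
    print(f"{len(stable)} of {len(outcomes)} toy universes remained bounded")

Any property that holds for every point in the sweep, rather than for a single parameter setting, is a candidate meta-law in the sense described above.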

Perhaps these meta-laws can only be discovered and understood outside the context of one particular physics, or at least outside the context of one particular universe. Perhaps the only way to see beyond the “Gödel Horizon” is via simulation. By simulating myriad potential universes (on a hypothetical quantum computer capable of exploring vast numbers of possibilities in parallel, for example), meta-theorems could be derived that transcend the Gödel Horizon of any one particular physics or universe. This could be one explanation for why our universe is a simulation, assuming that it is a simulation at all (which I actually don’t believe, by the way — but I used to, which is why I am still interested in this question).

There may also be other reasons for simulating universes besides physics and cosmology research. In particular, one major motivation might be social research, genetic research, or perhaps research into time-travel and the complexity of changes in the continuum of causes and effects. These are wild speculations, I know, but worth pondering as long as we are on the subject. Another interesting possibility is that it may be easier to generate lots of universes in which various races evolve and work to solve the riddle of the universe in parallel, than to try to solve it oneself in one’s own universe using only one’s own resources. But this would only be practical if it were in fact possible to run such simulations at the same clockspeed as one’s own present universe (the speed of light), or better yet an even faster one.

Now, an interesting follow-on idea that stems from this concept is that perhaps there is a way to detect whether or not our universe is a simulation. We simply need to look for some phenomenon that no formal system can fully describe — something that cannot be simulated perfectly, even on a suitably complex computer. If we can find such a phenomenon, then our universe cannot be a simulation or formal system, at least not one based on our concept of what a formal system is. I propose that consciousness is an example of such an unsimulatable thing. If we find that consciousness cannot be simulated by a computer, then I would conclude that our universe (which contains consciousness, seemingly) cannot be a computer simulation. It might still be a simulation, however — but not a simulation running on anything equivalent to a Turing Machine. For example, perhaps in really advanced civilizations there is another way of simulating things that does not rely on Turing Machines — for example, a simulation technology that relies instead on the application of dreaming as a means to generate and test various possible worlds. But that is an extreme fringe-speculation that I would be the first to admit is even farther out in the realm of science fiction than the rest of this article.

If it turns out that we cannot find anything in our own universe that cannot be simulated perfectly (such that it is effectively synthesized, in principle at least), then that does not prove that our universe is a simulation, only that it is not impossible for it to be one. To prove that our universe IS a simulation, we would have to locate facts about our universe that are inconsistent with what we would expect if it were not a simulation. For example, perhaps there are certain non-random patterns in space-time, or our number system, or the physical constants that are extremely unlikely to have happened by accident. In fact, such patterns have been found. But even this is not sufficient evidence to convince me, or most scientists, that our universe is intelligently designed and just a simulation. So that won’t suffice as proof.

What might suffice? Well, for one thing, assuming there exists a civilization advanced enough to simulate universes, perhaps they are also clever enough to find a way to embed clues about their existence, and about the simulated nature of their universes, into those simulated universes, so that the clues can be found by intelligent beings within them. But why would they bother leaving such clues, even if they could? Perhaps in order to generate recursive computations. For example, they might be able to find their Big Answer faster if the intelligent beings in their simulations could eventually evolve to run their own universe simulations. And to help them along in that process, really smart universe-simulators might insert into their simulated universes the clues and knowledge necessary for their simulated civilizations to develop the technology needed to start running their own universe simulations!

Now let’s assume for the moment that our universe is such a simulation, and that the simulators are clever enough to leave us clues to discover this fact — where might they leave them? It probably wouldn’t be in our DNA — that is far too high-level and emergent. It would more likely be in the underlying structure of space-time and the physical laws and constants themselves, for that is the level at which our simulation would most likely have been coded. Perhaps there is a message hidden for us to discover in the fabric of mathematics, space, time and physics. It’s worth a look, if nothing else to rule out the possibility that it is there.

Addendum

After thinking about this further for a while, I came up with a few additional interesting follow-on ideas:

  • If in fact our universe is a simulation being run on an advanced simulation system by some ultra-advanced race of beings, then it would increase the probability that THEIR UNIVERSE (our meta-universe) is also just a simulation being run on a computer system by an even more advanced race of beings! So perhaps one reason an advanced race of beings might want to attempt to simulate a universe is to determine whether their own universe could itself be a simulation. Furthermore, if their own universe is a simulation, and if that simulation is a formal system, then their knowledge of their universe is certainly limited by Gödel’s theorem, and simulating further universes is the only way for them to see beyond those limitations.
  • Another interesting thought is that if a given universe U is a simulation running within another universe U’, then the question arises: how might communication take place between the beings in those two universes? Consider our own case. Life on Earth has only been around for a tiny blip on the cosmic timescale of this universe, and our solar system is a minuscule backwater of our galaxy, let alone our entire universe. Furthermore, we may not be that unique or that intelligent — there could be billions of other species that are equally or more interesting than us. In order to establish communication with our creators, we would have to somehow get their attention first, and this is a cosmic signal-to-noise problem. What could we do to get their attention? I think there are a few options:
    • Do something that affects a large region of space. For example, create a clearly non-random, non-natural arrangement of stars, assuming we could do that. Create a bunch of black holes or pulsars, or make a pulsar emit energy in a noticeable way. (Interesting side-thought — maybe pulsars are beacons created by advanced races within our simulation to signal their presence to one another and to the creators of our simulation — that would be one way to get noticed).
    • Do something that affects a large region of time. We would probably need time-travel technology to do this — but if we had it we could potentially go back to a time just after the Big Bang and make a few simple changes that would result in a vastly different universe today. That would certainly send a big signal, if we could do it.
    • Hack their simulation and try to create a bug or error. This is risky though — it might result in our own accidental destruction (lost or corrupted data, or a bad computer virus running rampant through the cosmos, etc.), or in the entire simulation (our universe) being shut down by an annoyed cosmic bug-fixer.
    • Do something that affects the fundamental properties of our universe. For example, could we do something that would change the physical constants somehow? If looking at these properties is a logical place to search for a hidden message from our creators, then these properties might also be a logical place to send messages back to them. I have no idea how we could modify the fundamental physical constants of our universe.
