What's After the Real-Time Web?

In typical Web-industry style we’re all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself; it’s a means to an end. So what will it enable, where is it headed, and what’s it going to look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream are going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention, not just their attention. It’s not just difficult to get people to notice something; it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending energy or effort to do it. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web, aka The Stream, is in essence about messaging. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave and PubSubHubbub, to broadcast-style messaging or multicasting via screencasts, conferencing, media streaming, and events in virtual worlds. The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOSes. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly against various data sources and APIs in dozens of places, you could write to a single WebOS aggregator that acts as middleware between your app and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast and complex and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools, that can spread around the Web as they work, will be able to keep up with the Web.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.
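The publish-and-subscribe pattern behind the Messaging strand above can be sketched in a few lines. This is a toy in-memory illustration (the `Hub` class and topic name are invented for this example), not how Twitter or PubSubHubbub are actually implemented:

```python
from collections import defaultdict

class Hub:
    """A minimal in-memory publish-subscribe hub."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan the message out to every subscriber of the topic.
        for callback in self.subscribers[topic]:
            callback(message)

hub = Hub()
received = []
hub.subscribe("stream/updates", received.append)
hub.publish("stream/updates", "a new post")
print(received)  # → ['a new post']
```

Real-world systems add persistence, delivery guarantees, and network transport on top, but the core idea is the same: publishers never address subscribers directly; the hub decouples them.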

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is composed not just of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: a system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do on our own things that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get worse at memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded into our surroundings, and into augmented views of our environments. This may leave us in better health and shape, since we will be less sedentary, less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall, as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude of things to come.

Video: My Talk on the Evolution of the Global Brain at the Singularity Summit

If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.

(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

Virtual Out of Body Experiences

A very cool experiment in virtual reality has shown it is possible to trick the mind into identifying with a virtual body:

Through these goggles, the volunteers could see a camera view of their own back – a three-dimensional "virtual own body" that appeared to be standing in front of them.

When the researchers stroked the back of the volunteer with a pen, the volunteer could see their virtual back being stroked either simultaneously or with a time lag.

The volunteers reported that the sensation seemed to be caused by the pen on their virtual back, rather than their real back, making them feel as if the virtual body was their own rather than a hologram.


Even when the camera was switched to film the back of a mannequin being stroked rather than their own back, the volunteers still reported feeling as if the virtual mannequin body was their own.

And when the researchers switched off the goggles, guided the volunteers back a few paces, and then asked them to walk back to where they had been standing, the volunteers overshot the target, returning nearer to the position of their "virtual self".

This has implications for next-generation video games and virtual reality. It also has interesting implications for consciousness studies in general.


Scientists Encode Message into Bacterial DNA

Japanese scientists have developed a technique that can encode 100-bit messages into the DNA of common bacteria. The bacteria replicate and pass the message down from generation to generation, for at least thousands of years. Because there are millions or more copies of the message, it can survive gradual degradation or mutations (so they claim). Perhaps by comparing samples of the message across a large number of descendant bacteria, any errors or mutations can be detected and corrected. The message that was encoded was "e=mc2 1905".
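To make the idea concrete, here is a purely illustrative sketch of how such a scheme could work in principle: pack 2 bits into each DNA base, replicate the strand with random mutations, and recover the message by majority vote across the copies. The paper's actual encoding is not described here, so every detail below is an assumption:

```python
import random
from collections import Counter

BASES = "ACGT"  # each base carries 2 bits (an assumed encoding, not the paper's)

def encode(text):
    bits = "".join(f"{ord(c):08b}" for c in text)
    return "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))

def decode(dna):
    bits = "".join(f"{BASES.index(b):02b}" for b in dna)
    return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

def majority_vote(copies):
    # Correct mutations by taking the most common base at each position.
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

message = "e=mc2 1905"
dna = encode(message)

# Simulate many descendant copies, each with a few random point mutations.
random.seed(1)
copies = []
for _ in range(101):
    copy = list(dna)
    for _ in range(3):
        i = random.randrange(len(copy))
        copy[i] = random.choice(BASES)
    copies.append("".join(copy))

print(decode(majority_vote(copies)))  # → e=mc2 1905
```

With 101 copies and only a few mutations each, the original base dominates every position, so the vote recovers the message; this is the same redundancy argument the article makes about millions of bacterial copies.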

I’ve written about the potential of storing messages in DNA in the past here, and here.

What’s interesting, of course, is that since this is possible, it raises the question of whether there are already messages encoded into the DNA of various living things on Earth. We might want to look at E. coli, or other common organisms, or perhaps human, dolphin, and whale DNA. We might also want to look at birds and lizards, since they descend more directly from dinosaurs. Who knows — maybe a long, long time ago someone left us messages there, or their signature at least.

There are two places that I think it is most likely that we will first receive messages from aliens, if we ever do:

  1. Our own DNA (or that of other living species on Earth)
  2. The Internet. It’s the logical place to establish communication with us. Perhaps via a Myspace page…

Capturing Your Digital Life

Nice article in Scientific American about Gordon Bell’s work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web — additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.

Cryotherapy — Freeze Yourself, For Health

And now for some other science news. A new technique called cryotherapy is emerging in which people subject themselves to short bursts of extreme cold, in order to rejuvenate the body:

It’s minus 120 degrees and all I’m wearing is a hat and socks. Cryotherapy is the latest treatment for a range of illnesses including arthritis, osteoporosis, and even MS. New Age madness or a genuine medical breakthrough?

Minding The Planet — The Meaning and Future of the Semantic Web

NOTES

Prelude

Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, “Minding the Planet” about how the Internet would enable the evolution of higher forms of collective intelligence.

My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, “One thing is certain: Someday, you will write this book.” We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.

A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.

But ever since that day on the porch with my grandfather, I remembered what he said: “Someday, you will write this book.” I’ve tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I’ve continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it’s the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.

This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term, far future going to be like? Is the global mind just science fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?

I’ve often joked that it is ironic that a term that contains the word “semantic” has such an ambiguous meaning for most people. Most people just have no idea what this means; they have no context for it; it is not connected to their experience and knowledge. This is a problem that people who are deeply immersed in the trenches of the Semantic Web have not been able to solve adequately — they have not found the words to communicate what they can clearly see, what they are working on, and why it matters for everyone. In this article I have tried, and hopefully succeeded, in providing a detailed introduction and context for the Semantic Web for non-technical people. But even technical people working in the field may find something of interest here as I piece together the fragments into a Big Picture and a vision for what might be called “Semantic Web 2.0.”

I hope the reader will bear with me as I bounce around across different scales of technology and time, and from the extremes of core technology to wild speculation, in order to tell this story. If you are looking for the cold hard science of it all, this article will provide an understanding but will not satisfy your need for seeing the actual code; there are other places where you can find that level of detail and rigor. But if you want to understand what it all really means and what the opportunity and future look like – this may be what you are looking for.

I should also note that all of this is my personal view of what I’ve been working on, and what it really means to me. It is not necessarily the official view of the mainstream academic Semantic Web community — although there are certainly many places where we all agree. But I’m sure that some readers will disagree or raise objections to some of my assertions, and certainly to my many far-flung speculations about the future. I welcome those different perspectives; we’re all trying to make sense of this, and the more of us who do that together, the more we can collectively start to really understand it. So please feel free to write your own vision or response, and please let me know so I can link to it!

So with this Prelude in mind, let’s get started…

The Semantic Web Vision

The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web – a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent – not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today.

The dream of making software more intelligent has been around almost as long as software itself. And although it is taking longer to materialize than past experts had predicted, progress towards this goal is being steadily made. At the same time, the shape of this dream is changing. It is becoming more realistic and pragmatic. The original dream of artificial intelligence was that we would all have personal robot assistants doing all the work we don’t want to do for us. That is not the dream of the Semantic Web. Instead, today’s Semantic Web is about facilitating what humans do – it is about helping humans do things more intelligently. It’s not a vision in which humans do nothing and software does everything.

The Semantic Web vision is not just about helping software become smarter – it is about providing new technologies that enable people, groups, organizations and communities to be smarter.

For example, by providing individuals with tools that learn about what they know, and what they want, search can be much more accurate and productive.

Using software that is able to understand and automatically organize large collections of knowledge, groups, organizations and communities can reach higher levels of collective intelligence, and they can cope with volumes of information that are just too great for individuals or even groups to comprehend on their own.

Another example: more efficient marketplaces can be enabled by software that learns about products, services, vendors, transactions and market trends, and understands how to connect them together in optimal ways.

In short, the Semantic Web aims to make software smarter, not just for its own sake, but in order to help make people, and groups of people, smarter. In the original Semantic Web vision this fact was under-emphasized, leading to the impression that the Semantic Web was only about automating the world. In fact, it is really about facilitating the world.

The Semantic Web Opportunity

The Semantic Web is one of the most significant things to happen since the Web itself. But it will not appear overnight. It will take decades. It will grow in a bottom-up, grassroots, emergent, community-driven manner, just like the Web itself. Many things have to converge for this trend to really take off.

The core open standards already exist, but the necessary development tools have to mature, the ontologies that define human knowledge have to come into being and mature, and most importantly we need a few real “killer apps” to prove the value and drive adoption of the Semantic Web paradigm. The first generation of the Web had its Mozilla, Netscape, Internet Explorer, and Apache – and it also had HTML, HTTP, a bunch of good development tools, and a few killer apps and services such as Yahoo! and thousands of popular Web sites. The same things are necessary for the Semantic Web to take off.

And this is where we are today – this is all just about to start emerging. There are several companies racing to get this technology, or applications of it, to market in various forms. Within a year or two you will see mass-consumer Semantic Web products and services hit the market, and within 5 years there will be at least a few “killer apps” of the Semantic Web. Ten years from now the Semantic Web will have spread into many of the most popular sites and applications on the Web. Within 20 years all content and applications on the Internet will be integrated with the Semantic Web. This is a sea change, a big evolutionary step for the Web.

The Semantic Web is an opportunity to redefine, or perhaps to better define, all the content and applications on the Web. That’s a big opportunity. And within it there are many business opportunities and a lot of money to be made. It’s not unlike the opportunity of the first generation of the Web. There are platform opportunities, content opportunities, commerce opportunities, search opportunities, community and social networking opportunities, and collaboration opportunities in this space. There is room for a lot of players to compete, and at this point the field is wide open.

The Semantic Web is a blue ocean waiting to be explored. And like any unexplored ocean, it also has its share of reefs, pirate islands, hidden treasure, shoals, whirlpools, sea monsters and typhoons. But there are new worlds out there to be discovered, and they exert an irresistible pull on the imagination. This is an exciting frontier – and also one fraught with hard technical and social challenges that have yet to be solved. For early ventures in the Semantic Web arena, it’s not going to be easy, but the intellectual and technological challenges, and the potential financial rewards, glory, and benefit to society, are worth the effort and risk. And this is what all great technological revolutions are made of.

Semantic Web 2.0

Some people who have heard the term “Semantic Web” thrown around too much may think it is a buzzword, and they are right. But it is not just a buzzword – it actually has some substance behind it. That substance hasn’t fully emerged yet, but it will. Early critiques of the Semantic Web were right – the early vision did not leverage concepts such as folksonomy and user-contributed content at all. But that is largely because when the Semantic Web was originally conceived, Web 2.0 hadn’t happened yet. The early experiments that came out of research labs were geeky, to put it lightly, and impractical, but they are already being followed up by more pragmatic, user-friendly approaches.

Today’s Semantic Web – what we might call “Semantic Web 2.0” – is a kinder, gentler, more social Semantic Web. It combines the best of the original vision with what we have all learned about social software and community in the last 10 years. Although much of this is still in the lab, it is already starting to trickle out. For example, recently Yahoo! started a pilot of the Semantic Web behind their food vertical. Other organizations are experimenting with using Semantic Web technology in parts of their applications, or to store or map data. But that’s just the beginning.

The Google Factor

Entrepreneurs, venture capitalists and technologists are increasingly starting to see these opportunities. Who will be the “Google of the Semantic Web”? Will it be Google itself? That’s doubtful. Like any entrenched incumbent, Google is heavily tied to a particular technology and worldview. And in Google’s case it is anything but semantic today. It would be easier for an upstart to take this position than for Google to port their entire infrastructure and worldview to a Semantic Web way of thinking.

If it is going to be Google, it will most likely be by acquisition rather than by internal origination. And this makes more sense anyway – for Google is in a position where they can just wait and buy the winner, at almost any price, rather than competing on the playing field. One thing to note, however, is that Google has at least one product offering that shows some potential for becoming a key part of the Semantic Web. I am speaking of Google Base, Google’s open database, which is meant to be a registry for structured data so that it can be found in Google search. But Google Base does not conform to or make use of the many open standards of the Semantic Web community. That may or may not be a good thing, depending on your perspective.

Of course, the downside of Google waiting to join the mainstream Semantic Web community until after the winner is announced is very large – once there is a winner it may be too late for Google to beat them. The winner of the Semantic Web race could very well unseat Google. The strategists at Google are probably not yet aware of this, but as soon as they see significant traction around a major Semantic Web play it will become of interest to them.

In any case, I think there won’t be just one winner; there will be several major Semantic Web companies in the future, focusing on different parts of the opportunity. And you can be sure that if Google gets into the game, every major portal will need to get into this space at some point or risk becoming irrelevant. There will be demand and many acquisitions. In many ways the Semantic Web will not be controlled by just one company — it will be more like a fabric that connects them all together.

Context is King — The Nature of Knowledge

It should be clear by now that the Semantic Web is all about enabling software (and people) to work with knowledge more intelligently. But what is knowledge? Knowledge is not just information. It is meaningful information – it is information plus context. For example, if I simply say the word “sem” to you, it is just raw information, it is not knowledge. It probably has no meaning to you other than a particular set of letters that you recognize and a sound you can pronounce, and the mere fact that this information was stated by me.

But if I tell you that “sem” is the Tibetan word for “mind,” then suddenly “sem means mind in Tibetan” to you. If I further tell you that Tibetans have about as many words for “mind” as Eskimos have for “snow,” this is further meaning. This is context, in other words knowledge, about the sound “sem.” The sound is raw information. When it is given context it becomes a word, a word that has meaning, a word that is connected to concepts in your mind – it becomes knowledge. By connecting raw information to context, knowledge is formed.

Once you have acquired a piece of knowledge such as “sem means mind in Tibetan,” you may then also form further knowledge about it. For example, you may form the memory, “Nova said that ‘sem means mind in Tibetan.’” You might also connect the word “sem” to networks of further concepts you have about Tibet and your understanding of what the word “mind” means.

The mind is the organ of meaning – mind is where meaning is stored, interpreted and created. Meaning is not “out there” in the world; it is purely subjective, it is purely mental. Meaning is almost equivalent to mind, in fact, for the two never occur separately. Each of our individual minds has some way of internally representing meaning — when we read or hear a word that we know, our minds connect it to a network of concepts, and at that moment it means something to us.

Digging deeper, if you are really curious, or if you happen to know Greek, you may also find that a similar sound occurs in the Greek word sēmantikós – which means “having meaning” and in turn is the root of the English word “semantic,” which means “pertaining to or arising from meaning.” That’s an odd coincidence! “Sem” occurs in the Tibetan word for mind, and in the English and Greek words that all relate to the concepts of “meaning” and “mind.” Even stranger is that not only do these words have a similar sound, they have a similar meaning.

With all this knowledge at your disposal, when you then see the term “Semantic Web” you may be able to infer that it has something to do with adding “meaning” to the Web. However, if you were a Tibetan, perhaps you might instead think the term had something to do with adding “mind” to the Web. In either case you would be right!

Discovering New Connections

We’ve discovered a new connection — namely, that there is an implicit connection between “sem” in Greek, English and Tibetan: they all relate to meaning and mind. It’s not a direct, explicit connection – it’s not evident unless you dig for it. But it’s a useful tidbit of knowledge once it’s found. Unlike the direct migration of the sound “sem” from Greek to English, there may not have ever been a direct transfer of this sound from Greek to Sanskrit to Tibetan. But in a strange and unexpected way, they are all connected. This connection wasn’t necessarily explicitly stated by anyone before, but was uncovered by exploring our network of concepts and making inferences.

The sequence of thought about “sem” above is quite similar to the kind of intellectual reasoning and discovery that the actual Semantic Web seeks to enable software to do automatically. How is this kind of reasoning and discovery enabled? The Semantic Web provides a set of technologies for formally defining the context of information. Just as the Web relies on a standard formal specification for “marking up” information with formatting codes that enable any applications that understand those codes to format the information in the same way, the Semantic Web relies on new standards for “marking up” information with statements about its context – its meaning – that enable any applications to understand, and reason about, the meaning of those statements in the same way.

By applying semantic reasoning agents to large collections of semantically enhanced content, all sorts of new connections may be inferred, leading to new knowledge, unexpected discoveries and useful additional context around content. This kind of reasoning and discovery is already taking place in fields from drug discovery and medical research, to homeland security and intelligence. The Semantic Web is not the only way to do this — but it certainly will improve the process dramatically. And of course, with this improvement will come new questions about how to assess and explain how various inferences were made, and how to protect privacy as our inferencing capabilities begin to extend across ever more sources of public and private data. I don’t have the answers to these questions, but others are working on them and I have confidence that solutions will be arrived at over time.

Smart Data

By marking up information with metadata that formally codifies its context, we can make the data itself “smarter.” The data becomes self-describing. When you get a piece of data you also get the necessary metadata for understanding it. For example, if I sent you a document containing the word “sem” in it, I could add markup around that word indicating that it is the word for “mind” in the Tibetan language.

Similarly, a document containing mentions of “Radar Networks” could contain metadata indicating that “Radar Networks” is an Internet company, not a product or a type of radar technology. A document about a person could contain semantic markup indicating that they are residents of a certain city, experts on Italian cooking, and members of a certain profession. All of this could be encoded as metadata in a form that software could easily understand. The data carries more information about its own meaning.
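As a minimal sketch of what “self-describing” data could look like, here is a hypothetical annotation scheme in Python. The field names (“span,” “type,” “industry”) are invented for illustration and are not any real metadata standard:

```python
# A minimal sketch of "smart data": a snippet of text that carries
# machine-readable metadata describing what its terms mean.
# The schema and property names are hypothetical illustrations.

document = {
    "text": "Radar Networks is hiring engineers.",
    "annotations": [
        {
            "span": "Radar Networks",   # the phrase being described
            "type": "Company",          # what kind of thing it denotes
            "industry": "Internet",     # extra context carried with the data
        }
    ],
}

def describe(doc, phrase):
    """Return the metadata attached to a phrase, if any."""
    for ann in doc["annotations"]:
        if ann["span"] == phrase:
            return ann
    return None

meaning = describe(document, "Radar Networks")
```

Any program that understands this simple annotation convention can now tell that “Radar Networks” names a company, without any natural-language understanding at all.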

The alternative to smart data would be for software to actually read and understand natural language as well as humans do. But that’s really hard. To correctly interpret raw natural language, software would have to know as much as a human being. Think about how much teaching and learning is required to raise a human being to the point where they can read at an adult level; it is likely that similar training would be necessary to build software that could do the same. So far that goal has not been achieved, although some attempts have been made. While decent progress in natural language understanding has been made, most software that can do this is limited to particular vertical domains, and it’s brittle — it doesn’t do a good job of making sense of terms and forms of speech that it wasn’t trained to parse.

Instead of trying to make software a million times smarter than it is today, it is much easier to just encode more metadata about what our information means. That turns out to be less work in the end. And there’s an added benefit to this approach — the meaning exists with the data and travels with it. It is independent of any one software program — all software can access it. And because the meaning of information is stored with the information itself, rather than in the software, the software doesn’t have to be enormous to be smart. It just has to know the basic language for interpreting the semantic metadata it finds on the information it works with.

Smart data enables relatively dumb software to be smarter with less work. That’s an immediate benefit. And in the long term, as software actually gets smarter, smart data will make it easier for it to start learning and exploring on its own. So it’s a win-win approach: start by adding semantic metadata to data, end up with smarter software.

Making Statements About the World

Metadata comes down to making statements about the world in a manner that machines, and perhaps even humans, can understand unambiguously. The same piece of metadata should be interpreted in the same way by different applications and readers.

There are many kinds of statements that can be made about information to provide it with context. For example, you can state a definition, such as: “person” means “a human being or a legal entity.” You can state an assertion, such as: “Sue is a human being.” You can state a rule, such as: “if x is a human being, then x is a person.”

From these statements it can then be inferred that “Sue is a person.” This inference is so obvious to you and me that it seems trivial, but most software today cannot do this. It doesn’t know what a person is, let alone what a name is. But if software could do this, then it could, for example, automatically organize documents by the people they are related to, discover connections between people mentioned in a set of documents, find documents about people related to particular topics, or give you a list of all the people mentioned in a set of documents, or all the documents related to a person.
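The inference above can be sketched as a toy forward-chaining reasoner in Python. This is purely illustrative; real OWL reasoners are vastly more sophisticated, and the fact and rule encodings here are invented for the example:

```python
# A toy forward-chaining reasoner: given the assertion "Sue is a
# human being" and the rule "every human being is a person," it
# derives "Sue is a person." Illustrative sketch only.

facts = {("Sue", "is_a", "human_being")}

# Each rule says: if x is_a <premise>, then x is_a <conclusion>.
rules = [("human_being", "person")]

def infer(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                      # repeat until no new facts appear
        changed = False
        for subject, rel, obj in list(derived):
            for premise, conclusion in rules:
                if rel == "is_a" and obj == premise:
                    new_fact = (subject, "is_a", conclusion)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

result = infer(facts, rules)
```

Even this tiny loop captures the essential move: new statements are derived mechanically from existing statements plus rules, with no understanding required.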

Of course this is a very basic example. But imagine if your software didn’t just know about people – it knew about most of the common concepts that occur in your life. Your software would then be able to help you work with your documents just about as intelligently as you are able to do by yourself, or perhaps even more intelligently, because you are just one person and you have limited time and energy, but your software could work all the time, and in parallel, to help you.

Examples and Benefits

How could the existence of the Semantic Web, and all the semantic metadata that defines it, be really useful to everyone in the near term?

Well, for example, the problem of email spam would finally be cured: your software would be able to look at a message and know whether it was meaningful and/or relevant to you or not.

Similarly, you would never have to file anything by hand again. Your software could automate all filing and information organization tasks for you because it would understand your information and your interests. It would be able to figure out when to file something in a single folder, multiple folders, or new ones. It would organize everything — documents, photos, contacts, bookmarks, notes, products, music, video, data records — and it would do it even better and more consistently than you could on your own. Your software wouldn’t just organize stuff, it would turn it into knowledge by connecting it to more context. It could do this not just for individuals, but for groups, organizations and entire communities.

Another example: search would be vastly better: you could search conversationally by typing in everyday natural language and you would get precisely what you asked for, or even what you needed but didn’t know how to ask for correctly, and nothing else. Your search engine could even ask you questions to help you narrow what you want. You would finally be able to converse with software in ordinary speech and it would understand you.

The process of discovery would be easier too. You could have a software agent that worked as your personal recommender. It would constantly look in all the places you read or participate in for things that are relevant to your past, present and potential future interests and needs. It could then alert you in a contextually sensitive way, knowing how to reach you and how urgently to mark things. As you gave it feedback it could learn and do a better job over time.

Going even further with this, semantically-aware software – software that is aware of context, software that understands knowledge – isn’t just for helping you with your information; it can also help to enrich, facilitate, and even partially automate your communication and commerce (when you want it to). So, for example, your software could help you with your email. It would be able to recommend responses to messages for you, or automate the process. It would be able to enrich your messaging and discussions by automatically cross-linking what you are speaking about with related messages, discussions, documents, Web sites, subject categories, people, organizations, places, events, etc.

Shopping and marketplaces would also become better – you could search precisely for any kind of product, with any specific attributes, and find it anywhere on the Web, in any store. You could post classified ads and automatically get relevant matches according to your priorities, from all over the Web, or only from specific places and parties that match your criteria for who you trust. You could also easily invent a new custom data structure for posting classified ads for a new kind of product or service and publish it to the Web in a format that other Web services and applications could immediately mine and index without having to necessarily integrate with your software or data schema directly.

You could publish an entire database to the Web, and other applications and services could immediately start to integrate your data with their data, without having to migrate your schema or theirs. You could merge data from different data sources together to create new data sources without ever having to touch or look at an actual database schema.

Bumps on the Road

The above examples illustrate the potential of the Semantic Web today, but the reality on the ground is that the technology is still in the early phases of evolution. Even for experienced software engineers and Web developers, it is difficult to apply in practice. The main obstacles are twofold:

(1) The Tools Problem:

There are very few commercial-grade tools for doing anything with the Semantic Web today. Most of the tools for building semantically-aware applications, or for adding semantics to information, are still in the research phase and were designed for expert computer scientists who specialize in knowledge representation, artificial intelligence, and machine learning.

These tools have a steep learning curve and don’t generally support large-scale applications – they were designed mainly to test theories and frameworks, not to actually apply them. But if the Semantic Web is ever going to become mainstream, it has to be made easier to apply – it has to be made more productive and accessible for ordinary software and content developers.

Fortunately, the tools problem is already on the verge of being solved. Companies such as my own venture, Radar Networks, are developing the next generation of tools for building Semantic Web applications and Semantic Web sites. These tools will hide most of the complexity, enabling ordinary mortals to build applications and content that leverage the power of semantics without needing PhDs in knowledge representation.

(2) The Ontology Problem:

The Semantic Web provides frameworks for defining systems of formally defined concepts called “ontologies,” which can then be used to connect information to context in an unambiguous way. Without ontologies, there really can be no semantics. The ontologies ARE the semantics; they define the meanings that are so essential for connecting information to context.

But there are still few widely used or standardized ontologies. And getting people to agree on common ontologies is not generally easy. Everyone has their own way of describing things, their own worldview, and let’s face it, nobody wants to use somebody else’s worldview instead of their own. Furthermore, the world is very complex, and to adequately describe all the knowledge that comprises what is thought of as “common sense” would require a very large ontology (and in fact, such an ontology exists – it’s called Cyc, and it is so large and complex that only experts can really use it today).

Even describing the knowledge of just a single vertical domain, such as medicine, is extremely challenging. To make matters worse, the tools for authoring ontologies are still very hard to use – one has to understand the OWL language and wrestle with difficult, buggy ontology-authoring tools. Domain experts who are non-technical and not trained in formal reasoning or knowledge representation may find the process of designing ontologies with current tools frustrating. What is needed are commercial-quality tools for building ontologies that hide the underlying complexity so that people can just pour their knowledge into them as easily as they speak. That’s still a ways off, but not far off. Perhaps ten years at the most.

Of course the difficulty of defining ontologies would be irrelevant if the necessary ontologies already existed. Perhaps experts could define them and then everyone else could just use them? There are numerous ontologies already in existence, both at the general level and for specific verticals. However, in my own opinion, having looked at many of them, I still haven’t found one that strikes the right balance between coverage of the concepts most applications need and accessibility and ease-of-use for non-experts. That kind of balance is a requirement for any ontology to really go mainstream.

Furthermore, regarding the present crop of ontologies, what is still lacking is standardization. Ontologists have not agreed on which ontologies to use. As a result it’s anybody’s guess which ontology to use when writing a semantic application, and thus there is a high degree of ontology diversity today. Diversity is good, but too much diversity is chaos.

Applications that use different ontologies about the same things don’t automatically interoperate unless their ontologies have been integrated. This is similar to the problem of database integration in the enterprise. In order to interoperate, different applications that use different data schemas for records about the same things have to be mapped to each other somehow – either at the application level or the data level. This mapping can be direct or through some form of middleware.

Ontologies can be used as a form of semantic middleware, enabling applications to be mapped at the data level instead of the application level. Ontologies can also be used to map applications at the application level, by making ontologies of Web services and capabilities. This is an area in which a lot of research is presently taking place.

The OWL language can express mappings between concepts in different ontologies. But if there are many ontologies, and many of them partially overlap, it is a non-trivial task to actually make the mappings between their concepts.

Even though concept A in ontology one and concept B in ontology two may have the same names, and even some of the same properties, in the context of the rest of the concepts in their respective ontologies they may imply very different meanings. So simply mapping them as equivalent on the basis of their names is not adequate; their connections to all the other concepts in their respective ontologies have to be considered as well. It quickly becomes complex. There are some potential ways to automate the construction of mappings between ontologies, but they are still experimental. Today, integrating ontologies requires the help of expert ontologists, and to be honest, I’m not sure even the experts have it figured out. It’s more of an art than a science at this point.
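At its very simplest, an ontology mapping is just a translation table between concept names. The sketch below, with two invented ontologies, shows that naive name-level approach in Python; as noted above, it is exactly the kind of mapping that is inadequate on its own, because it ignores each concept’s connections to the rest of its ontology:

```python
# A naive sketch of ontology mapping: a name-level translation table
# between two hypothetical ontologies. Real mappings must also weigh
# each concept's relations to other concepts, which this ignores.

mapping = {
    # ontology-one concept -> ontology-two concept
    "Human": "Person",
    "Firm": "Company",
}

def translate(triple, mapping):
    """Rewrite a statement from ontology one into ontology two's terms."""
    subject, predicate, obj = triple
    return (mapping.get(subject, subject),
            predicate,
            mapping.get(obj, obj))

translated = translate(("Sue", "is_a", "Human"), mapping)
```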

Darwinian Selection of Ontologies

All that is needed for mainstream adoption to begin is for a large body of mainstream content to become semantically tagged and accessible. This will cause whatever ontology is behind that content to become popular.

When developers see that there is significant content and traction around a particular ontology, they will use that ontology for their own applications about similar concepts, or at least they will do the work of mapping their own ontology to it, and in this way the world will converge in a Darwinian fashion around a few main ontologies over time.

These main ontologies will then be worth the time and effort necessary to integrate them on a semantic level, resulting in a cohesive Semantic Web. We may in fact see Darwinian natural selection take place not just at the level of whole ontologies, but at the level of pieces of ontologies.

A certain ontology may do a good job of defining what a person is, while another may do a good job of defining what a company is. These definitions may be used for a lot of content, and gradually they will become common parts of an emergent meta-ontology comprised of the most-popular pieces from thousands of ontologies. This could be great or it could be a total mess. Nobody knows yet. It’s a subject for further research.

Making Sense of Ontologies

Since ontologies are so important, it is helpful to understand what an ontology really is, and what it looks like. An ontology is a system of formally defined, related concepts. For example, here is a simple ontology expressed as a set of statements:

A human is a living thing.

A person is a human.

A person may have a first name.

A person may have a last name.

A person must have one and only one date of birth.

A person must have a gender.

A person may be socially related to another person.

A friendship is a kind of social relationship.

A romantic relationship is a kind of friendship.

A marriage is a kind of romantic relationship.

A person may be in a marriage with only one other person at a time.

A person may be employed by an employer.

An employer may be a person or an organization.

An organization is a group of people.

An organization may have a product or a service.

A company is a type of organization.

We’ve just built a simple ontology about a few concepts: humans, living things, persons, names, social relationships, marriages, employment, employers, organizations, groups, products and services. Within this system of concepts there is a particular logic, some constraints, and some structure. It may or may not correspond to your worldview, but it is a worldview that is unambiguously defined, can be communicated, and is internally logically consistent, and that is what is important.

The Semantic Web approach provides an open-standard language, OWL, for defining ontologies. OWL also provides a way to define instances of ontologies. Instances are assertions within the worldview that a given ontology provides. In other words, OWL provides a means to make statements that connect information to the ontology so that software can understand its meaning unambiguously. For example, below is a set of statements based on the above ontology:

There exists a person x.

Person x has a first name “Sue”

Person x has a last name “Smith”

Person x has a full name “Sue Smith”

Sue Smith was born on June 1, 2005

Sue Smith has a gender: female

Sue Smith has a friend: Jane, who is another person.

Sue Smith is married to: Bob, another person.

Sue Smith is employed by Acme Inc., a company

Acme Inc. has a product, Widget 2.0.

The set of statements above, plus the ontology they are connected to, collectively comprise a knowledge base that, if represented formally in the OWL markup language, could be understood by any application that speaks OWL in the precise manner in which it was intended to be understood.
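The statements above can be sketched as subject–predicate–object triples, the basic data model of RDF, on which OWL is built. The Python below uses informal names as stand-ins for the URIs a real RDF store would use, with a simple pattern-matching query over the knowledge base:

```python
# The Sue Smith statements sketched as subject-predicate-object
# triples. Names are informal stand-ins for real RDF URIs.

kb = [
    ("SueSmith", "type", "Person"),
    ("SueSmith", "firstName", "Sue"),
    ("SueSmith", "lastName", "Smith"),
    ("SueSmith", "gender", "female"),
    ("SueSmith", "friendOf", "Jane"),
    ("SueSmith", "marriedTo", "Bob"),
    ("SueSmith", "employedBy", "AcmeInc"),
    ("AcmeInc", "type", "Company"),
    ("AcmeInc", "hasProduct", "Widget2.0"),
]

def query(kb, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in kb
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

employers = query(kb, subject="SueSmith", predicate="employedBy")
```

Once knowledge is in this uniform shape, a single generic query function can answer questions about people, companies and products alike, which is the point of a shared data model.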

Making Metadata

The OWL language provides a way to mark up any information, such as a data record, an email message or a Web page, with metadata in the form of statements that link particular words or phrases to concepts in the ontology. When software applications that understand OWL encounter the information, they can then reference the ontology and figure out exactly what the information means – or at least what the ontology says it means.

But something has to add these semantic metadata statements to the information – and if it doesn’t add them, or adds the wrong ones, then software applications that look at the information will get the wrong idea. And this is another challenge – how will all this metadata get created and added to content? People certainly aren’t going to add it all by hand!

Fortunately there are many ways to make this easier. The best approach is to automate it using special software that goes through information, analyzes the meaning and adds semantic metadata automatically. This works today, but the software has to be trained or provided with rules, and that takes some time. It also doesn’t scale cost-effectively to vast data-sets.
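A bare-bones sketch of such rule-based tagging, in Python: hand-written patterns map strings in raw text to concepts in an assumed ontology. Real annotation pipelines are far more sophisticated; this just illustrates why the software “has to be trained or provided with rules”:

```python
import re

# A minimal rule-based semantic tagger. The patterns and concept
# names are invented for illustration.

rules = {
    r"\bRadar Networks\b": "Company",
    r"\bSue Smith\b": "Person",
}

def tag(text):
    """Return (phrase, concept) pairs found in the text."""
    found = []
    for pattern, concept in rules.items():
        for match in re.finditer(pattern, text):
            found.append((match.group(0), concept))
    return found

tags = tag("Sue Smith works at Radar Networks.")
```

The limitation is apparent even at this scale: every new phrase the tagger should recognize requires someone to write another rule, which is exactly the cost-scaling problem described above.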

Alternatively, individuals can be provided with ways to add semantics themselves as they author information. When you post your resume on a semantically-aware job board, you could fill out a form about each of your past jobs, and the job board would connect that data to appropriate semantic concepts in an underlying employment ontology. As an end-user you would just fill out a form like you are used to doing; under the hood the job board would add the semantics for you.

Another approach is to leverage communities to get the semantics. We already see communities that are adding basic metadata “tags” to photos, news articles and maps. Already a few simple types of tags are being used pseudo-semantically: subject tags and geographical tags. These are primitive forms of semantic metadata. Although they are not expressed in OWL or connected to formal ontologies, they are at least semantically typed with prefixes, or by being entered into fields or specific namespaces that define their types.

Tagging by Example

There may also be another solution to the problem of how to add semantics to content in the not-too-distant future. Once a suitable amount of content has been marked up with semantic metadata, it may be possible, through purely statistical forms of machine learning, for software to learn to do a pretty good job of marking up new content with semantic metadata.

For example, if the string “Nova Spivack” is often marked up with semantic metadata stating that it indicates a person – and not just any person but a specific person that is abstractly represented in a knowledge base somewhere – then when software applications encounter a new, non-semantically enhanced document containing strings such as “Nova Spivack” or “Spivack, Nova” they can make a reasonably good guess that this indicates that same specific person, and they can add the necessary semantic metadata to that effect automatically.

As more and more semantic metadata is added to the Web and made accessible, it constitutes a statistical training set that can be learned and generalized from. Although humans may need to jump-start the process with some manual semantic tagging, it might not be long before software could assist them and eventually do all the tagging for them. Only in special cases would software need to ask a human for assistance — for example, when totally new terms or expressions were encountered for the first several times.

The technology for doing this learning already exists — and actually it’s not very different from how search engines like Google measure the community sentiment around web pages. Each time something is semantically tagged with a certain meaning that constitutes a “vote” for it having that meaning. The meaning that gets the most votes wins. It’s an elegant, Darwinian, emergent approach to learning how to automatically tag the Web.
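The voting idea can be sketched in a few lines of Python: each community tag for a phrase counts as one vote for a meaning, and the most common meaning wins. The tag data here is invented purely for illustration:

```python
from collections import Counter

# Each observed (phrase, meaning) tag counts as one vote for that
# phrase having that meaning; the most common meaning wins.
# The tag data is invented for illustration.

observed_tags = [
    ("Nova Spivack", "Person"),
    ("Nova Spivack", "Person"),
    ("Nova Spivack", "Company"),   # an occasional mistaken tag
    ("Nova Spivack", "Person"),
]

def consensus_meaning(tags, phrase):
    """Return the most-voted meaning for a phrase."""
    votes = Counter(meaning for p, meaning in tags if p == phrase)
    meaning, _count = votes.most_common(1)[0]
    return meaning

best = consensus_meaning(observed_tags, "Nova Spivack")
```

Note how the occasional mistaken tag is simply outvoted, which is what makes this emergent approach robust without any central editorial control.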

One thing is certain: if communities were able to tag things with more types of tags, and these tags were connected to ontologies and knowledge bases, the result would be a lot of semantic metadata being added to content in a completely bottom-up, grassroots manner, and this in turn would enable the process to start to become automated, or at least machine-augmented.

Getting the Process Started

But making the user experience of semantic tagging easy (and immediately beneficial) enough that regular people will do it is a challenge that has yet to be solved. It does have to be solved to jump-start the Semantic Web, and many companies and researchers know this and are working on it right now.

I believe that the Tools Problem – the lack of commercial-grade tools for building semantic applications – is essentially solved already (although the products have not hit the market yet; they will within a few years at most). The Ontology Problem is further from being solved. I think it will be solved through a few “killer apps” that result in the building up of a large amount of content around particular ontologies within particular online services.

Where might we see this content initially arising? In my opinion it will most likely be within vertical communities of interest, communities of practice, and communities of purpose. Within such communities there is a need to create a common body of knowledge and to make that knowledge more accessible, connected and useful.

The Semantic Web can really improve the quality of knowledge and user-experience within these domains. Because they are communities, not just static content services, these organizations are driven by user-contributed content — users play a key role in building content and tagging it. We already see this process starting to take place in communities such as Flickr, del.icio.us, the Wikipedia and Digg. We know that communities of people do tag content, and consume tagged content, if it is easy and beneficial enough for them to do so.

In the near future we may see miniature Semantic Webs arising around particular places, topics and subject areas, projects, and other organizations. Or perhaps, like almost every form of new media in recent times, we may see early adoption of the Semantic Web around online porn — what might be called “the sementic web.”

Whether you like it or not, it is a fact that pornography was one of the biggest drivers of early mainstream adoption of personal video technology, CD-ROMs, and also of the Internet and the Web.

But I think it probably is not necessary this time around. While I’m sure that the so-called “sementic web” could benefit from the Semantic Web, it isn’t going to be the primary driver of adoption of the Semantic Web. That’s probably a good thing — the world can just skip over that phase of development and embrace this technology with both hands, so to speak.

The World Wide Database

In some ways one could think of the Semantic Web as “the world wide database” – it does for the meaning of data records what the Web did for the formatting of documents. But that’s just the beginning. It actually turns documents into richer data records. It turns unstructured data into structured data. All data becomes structured data, in fact. The structure is defined not merely structurally, but semantically.

In other words, it’s not merely that, for example, a data record or document can be defined in such a way as to specify that it contains a certain field of data with a certain label at a certain location – it defines what that field of data actually means, in an unambiguous, machine-understandable way. If all you want is a Web of data, XML is good enough. But if you want to make that data interoperable and machine-understandable, then you need RDF and OWL – the Semantic Web.

Like any database, the Semantic Web – or rather the myriad mini-semantic-webs that will comprise it – has to overcome the challenge of data integration. Ontologies provide a better way to describe and map data, but the data still has to be described and mapped, and this does take some work. It’s not a magic bullet.

The Semantic Web makes it easier to integrate data, but it doesn’t remove the data-integration problem altogether. I think the eventual solution to this problem will combine technological approaches with community-driven, folksonomy-oriented ones.

The Semantic Web in Historical Context

Let’s transition now and zoom out to see the bigger picture. The Semantic Web provides technologies for representing and sharing knowledge in new ways. In particular, it makes knowledge more accessible to software, and thus to other people. Another way of saying this is that it liberates knowledge from particular human minds and organizations – it provides a way to make knowledge explicit, in a standardized format that any application can understand. This is quite significant. Let’s put this in historical perspective.

Before the invention of the printing press, there were two ways to spread knowledge – one was orally, the other was in some symbolic form such as art or written manuscripts. The oral transmission of knowledge had limited range and a high error rate, and the only way to learn something was to meet someone who knew it and get them to tell you. The other option, symbolic communication through art and writing, provided a means to communicate knowledge independently of particular people – but it was only feasible to produce a few copies of any given artwork or manuscript because they had to be copied by hand. So the transmission of knowledge was limited to small groups, or at least small audiences. Basically, the only way to get access to this knowledge was to be one of the lucky few who could acquire one of its rare physical copies.

The invention of the printing press changed this – for the first time knowledge could be rapidly and cost-effectively mass-produced and mass-distributed. Printing made it possible to share knowledge with ever-larger audiences. This enabled a huge transformation for human knowledge, society, government, technology – really every area of human life was transformed by this innovation.

The World Wide Web made the replication and distribution of knowledge even easier – with the Web you don’t even have to physically print or distribute knowledge anymore; the cost of distribution is effectively zero, and everyone has instant access to everything from anywhere, anytime. That’s a lot better than having to lug around a stack of physical books. Everyone potentially has whatever knowledge they need, with no physical barriers. This has been another huge transformation for humanity – and it has affected every area of human life. Like the printing press, the Web fundamentally changed the economics of knowledge.

The Semantic Web is the next big step in this process – it will make all the knowledge of the human race accessible to software. For the first time, non-human things (software applications) will be able to start working with human knowledge to do things (for humans) on their own. This is a big leap – a leap like the emergence of a new species, or the symbiosis of two existing species into a new form of life.

The printing press and the Web changed the economics of replicating, distributing and accessing knowledge. The Semantic Web changes the economics of processing knowledge. Unlike the printing press and the Web, the Semantic Web enables knowledge to be processed by non-human things.

In other words, humans don’t have to do all the thinking on their own; they can be assisted by software. Of course we humans have to at least first create the software (until we someday learn to create software that is smart enough to create software too), and we have to create the ontologies necessary for the software to actually understand anything (until we learn to create software that is smart enough to create ontologies too), and we have to add the semantic metadata to our content in various ways (until our software is smart enough to do this for us, which it almost is already). But once we do the initial work of making the ontologies and software, and adding semantic metadata, the system starts to pick up speed on its own, and over time the amount of work we humans have to do to make it all function decreases. Eventually, once the system has encoded enough knowledge and intelligence, it starts to function without needing much help, and when it does need our help, it will simply ask us and learn from our answers.

This may sound like science fiction today, but in fact a lot of this is already built and working in the lab. The big hurdle is figuring out how to get this technology to mass-market. That is probably as hard as inventing the technology in the first place. But I’m confident that someone will solve it eventually.

Once this happens the economics of processing knowledge will truly be different from what it is today. Instead of needing an actual real live expert, the knowledge of that expert will be accessible to software that can act as their proxy – and anyone will be able to access this virtual expert, anywhere, anytime. It will be like the Web – but instead of just information being accessible, the combined knowledge and expertise of all of humanity will also be accessible, and not just to people but also to software applications.

The Question of Consciousness

The Semantic Web literally enables humans to share their knowledge with each other and with machines. It enables the virtualization of human knowledge and intelligence. With respect to machines, in doing this, it will lend machines “minds” in a certain sense – namely in that they will at least be able to correctly interpret the meaning of information and replicate the expertise of experts.

But will these machine-minds be conscious? Will they be aware of the meanings they interpret, or will they just be automatons that are simply following instructions without any awareness of the meanings they are processing? I doubt that software will ever be conscious, because from what I can tell consciousness — or what might be called the sentient awareness of awareness itself, as well as of other things that are sensed — is an immaterial phenomenon that is as fundamental as space, time and energy — or perhaps even more fundamental. But this is just my personal opinion after having searched for consciousness through every means possible for decades. It just cannot be found to be a thing, yet it is definitely and undeniably taking place.

Consciousness can be exemplified through the analogy of space (but unlike space, consciousness has this property of being aware; it’s not a mere lifeless void). We all agree space is there, but nobody can actually point to it somewhere, and nobody can synthesize space. Space is immaterial and fundamental. It is primordial. So is electricity. Nobody really knows what electricity ultimately is, but if you build the right kind of circuit you can channel it, and we’ve learned a lot about how to do that.

Perhaps we may figure out how to channel consciousness the way we channel electricity, with some sort of synthetic device someday, but I think that is highly unlikely. I think if you really want to create consciousness it’s much easier and more effective to just have children. That’s something ordinary mortals can do today with the technology they were born with. Of course when you have children you don’t really “create” their consciousness; it seems to be there on its own. We don’t really know what it is, where it comes from, or when it arises. We know very little about consciousness today. Considering that it is the most fundamental human experience of all, it is actually surprising how little we know about it!

In any case, until we delve far more deeply into the nature of the mind, consciousness will be barely understood or recognized, let alone explained or synthesized by anyone. In many eastern civilizations there are multi-thousand-year traditions that focus quite precisely on the nature of consciousness. The major religions have universally concluded that consciousness is beyond the reach of science, beyond the reach of concepts, beyond the mind entirely. All those smart people analyzing consciousness for so long, with such precision and so many methods of inquiry, may have a point worth listening to.

Whether or not machines will ever actually “know” or be capable of being conscious of that meaning or expertise is a big debate, but at least we can all agree that they will be able to interpret the meaning of information and rules if given the right instructions. Without having to be conscious, software will be able to process semantics quite well — this has already been proven. It’s working today.

While consciousness is, and may always be, a mystery that we cannot synthesize, the ability of software to follow instructions is an established fact. In its most reduced form, the Semantic Web just makes it possible to provide richer kinds of instructions. There’s no magic to it. Just a lot of details. In fact, to play on a famous line, “it’s semantics all the way down.”

The Semantic Web does not require that we make conscious software. It just provides a way to make slightly more intelligent software. There’s a big difference. Intelligence is simply a form of information processing, for the most part. It does not require consciousness — the actual awareness of what is going on — which is something else altogether.

While highly intelligent software may need to sense its environment and its own internal state and reason about these, it does not actually have to be conscious to do this. These operations are for the most part simple procedures applied vast numbers of times and in complex patterns. Nowhere in them is there any consciousness, nor does consciousness suddenly emerge when suitable levels of complexity are reached.

Consciousness is something quite special and mysterious. And fortunately for humans, it is not necessary for the creation of more intelligent software, nor is it a byproduct of the creation of more intelligent software, in my opinion.

The Intelligence of the Web

So the real point of the Semantic Web is that it enables the Web to become more intelligent. At first this may seem like a rather outlandish statement, but in fact the Web is already becoming intelligent, even without the Semantic Web.

Although the intelligence of the Web is not very evident at first glance, it can nonetheless be found if you look for it. This intelligence doesn’t exist across the entire Web yet; it exists only in islands that are few and far between compared to the vast amount of information on the Web as a whole. But these islands are growing, more are appearing every year, and they are starting to connect together. And as this happens the collective intelligence of the Web is increasing.

Perhaps the premier example of an “island of intelligence” is the Wikipedia, but there are many others: the Open Directory; portals such as Yahoo and Google; vertical content providers such as CNET and WebMD; commerce communities such as Craigslist and Amazon; content-oriented communities such as LiveJournal, Slashdot, Flickr and Digg; of course the millions of discussion boards scattered around the Web; and social communities such as MySpace and Facebook. There are also large numbers of private islands of intelligence on the Web within enterprises — for example the many online knowledge and collaboration portals that exist within businesses, non-profits, and governments.

What makes these islands “intelligent” is that they are places where people (and sometimes applications as well) are able to interact with each other to help grow and evolve collections of knowledge. When you look at them close up they appear to be just like any other Web site, but when you look at what they are doing as a whole – these services are thinking. They are learning, self-organizing, sensing their environments, interpreting, reasoning, understanding, introspecting, and building knowledge. These are the activities of minds, of intelligent systems.

The intelligence of a system such as the Wikipedia exists on several levels – the individuals who author and edit it are intelligent, the groups that help to manage it are intelligent, and the community as a whole – which is constantly growing, changing, and learning – is intelligent.

Flickr and Digg also exhibit intelligence. Flickr’s growing system of tags is the beginning of something resembling a collective visual sense organ on the Web. Images are perceived, stored, interpreted, and connected to concepts and other images. This is what the human visual system does. Similarly, Digg is a community that collectively detects, focuses attention on, and interprets current news. It’s not unlike a primitive collective analogue to the human faculty for situational awareness.

There are many other examples of collective intelligence emerging on the Web. The Semantic Web will add one more form of intelligent actor to the mix – intelligent applications. In the future, once the Wikipedia is connected to the Semantic Web, it will be authored and edited not only by humans but also by smart applications that constantly look for new information, new connections, and new inferences to add to it.

Although the knowledge on the Web today is still mostly organized within different islands of intelligence, these islands are starting to reach out and connect together. They are forming trade routes, connecting their economies, and learning each other’s languages and cultures. The next step will be for these islands of knowledge to begin to share not just content and services, but also their knowledge — what they know about their content and services. The Semantic Web will make this possible by providing an open format for the representation and exchange of knowledge and expertise.

When applications integrate their content using the Semantic Web they will also be able to integrate their context, their knowledge – this will make the content much more useful and the integration much deeper. For example, when an application imports photos from another application it will also be able to import semantic metadata about the meaning and connections of those photos. Everything that the community and application know about the photos in the service that provides the content (the photos) can be shared with the service that receives the content. Better yet, there will be no need for custom application integration in order for this to happen: as long as both services conform to the open standards of the Semantic Web, the knowledge is instantly portable and reusable.
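As a toy sketch of this photo-import idea — with all service names, identifiers, and property names invented for illustration, not drawn from any real vocabulary — importing content along with its knowledge can be pictured as merging sets of RDF-style subject–predicate–object triples:

```python
# Toy illustration: two hypothetical photo services each describe their
# content as (subject, predicate, object) triples, RDF-style.
# Because both use the same open triple format, importing a photo also
# imports everything the source service knows about it -- no custom
# integration code needed. All names below are made up.

photo_service_a = {
    ("photo:42", "dc:title", "Sunset over the bay"),
    ("photo:42", "foaf:depicts", "place:SanFrancisco"),
    ("photo:42", "dc:creator", "person:alice"),
}

photo_service_b = {
    ("photo:42", "tag:keyword", "sunset"),
    ("person:alice", "foaf:name", "Alice"),
}

def import_content(local, remote, subject):
    """Copy every triple about `subject` (and the entities it links to)
    from the remote service into the local one."""
    merged = set(local)
    queue, seen = [subject], set()
    while queue:
        node = queue.pop()
        if node in seen:
            continue
        seen.add(node)
        for s, p, o in remote:
            if s == node:
                merged.add((s, p, o))
                queue.append(o)  # follow links to connected entities
    return merged

combined = import_content(photo_service_b, photo_service_a, "photo:42")
# The importing service now also knows the photo's title, creator, and
# what it depicts -- the knowledge travels with the content.
```

The point of the sketch is only the design idea: because both sides speak one open triple format, "integration" reduces to a set union plus link-following, not bespoke glue code.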

Freeing Intelligence from Silos

Today much of the real value of the Web (and in the world) is still locked away in the minds of individuals, the cultures of groups and organizations, and application-specific data silos. The emerging Semantic Web will begin to unlock the intelligence in these silos by making the knowledge and expertise they represent more accessible and understandable.

It will free knowledge and expertise from the narrow confines of individual minds, groups and organizations, and applications, and make them not only more interoperable, but more portable. It will be possible, for example, for a person or an application to share everything they know about a subject of interest as easily as we share documents today. In essence the Semantic Web provides a common language (or at least a common set of languages) for sharing knowledge and intelligence as easily as we share content today.

The Semantic Web also provides standards for searching and reasoning more intelligently. The SPARQL query language enables any application to ask for knowledge from any other application that speaks SPARQL. Instead of mere keyword search, this enables semantic search. Applications can search for specific types of things that have particular attributes and relationships to other things.
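To make the keyword-search vs. semantic-search distinction concrete, here is a minimal hand-rolled sketch of matching graph patterns over triples — a toy stand-in for what SPARQL does against a real RDF store, with all data and relation names invented for illustration:

```python
# Toy semantic search over (subject, predicate, object) triples.
# A keyword search for "paris" cannot distinguish the two Parises below,
# but a typed graph-pattern query (like a SPARQL basic graph pattern) can.

triples = [
    ("paris",    "is_a",       "city"),
    ("paris",    "capital_of", "france"),
    ("texas",    "is_a",       "state"),
    ("paris_tx", "is_a",       "city"),
    ("paris_tx", "located_in", "texas"),
]

def query(patterns):
    """Find consistent bindings for ?variables that satisfy every
    triple pattern in `patterns`."""
    results = [{}]
    for s, p, o in patterns:
        next_results = []
        for binding in results:
            for ts, tp, to in triples:
                b = dict(binding)
                ok = True
                for pat, val in ((s, ts), (p, tp), (o, to)):
                    if pat.startswith("?"):
                        # variable: bind it, or check it matches its binding
                        if b.setdefault(pat, val) != val:
                            ok = False
                    elif pat != val:
                        ok = False  # constant mismatch
                if ok:
                    next_results.append(b)
        results = next_results
    return results

# Ask specifically for the city that is the capital of France:
answers = query([("?x", "is_a", "city"), ("?x", "capital_of", "france")])
# -> [{"?x": "paris"}]
```

The query language here is trivial, but the shape is the point: the application asks for *things with particular attributes and relationships*, not for documents containing a string.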

In addition, standards such as SWRL provide formalisms for representing and sharing axioms, or rules, as well. Rules are a particular kind of knowledge – and there is a lot of it to represent and share, for example procedural knowledge and logical structures about the world. An ontology provides a means to describe the basic entities, their attributes and relations, but rules enable you to also make logical assertions and inferences about them. Without going into a lot of detail about rules and how they work here, the important point is that they are also included in the framework. All forms of knowledge can be represented by the Semantic Web.
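As a rough sketch of what a rule layer adds on top of an ontology — with the rule and all the facts invented for illustration, not taken from SWRL itself — consider a transitivity rule ("if A is located in B, and B is located in C, then A is located in C") applied by simple forward chaining:

```python
# Toy forward-chaining inference: repeatedly apply one transitivity-style
# rule to a fact base until nothing new can be derived. This mimics, in
# miniature, what a SWRL-style rule layer does. Names are illustrative.

facts = {
    ("louvre", "located_in", "paris"),
    ("paris",  "located_in", "france"),
    ("france", "located_in", "europe"),
}

def apply_transitivity(facts, relation="located_in"):
    """Rule: (a, r, b) and (b, r, c) implies (a, r, c)."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for a, r1, b in list(inferred):
            for b2, r2, c in list(inferred):
                if r1 == r2 == relation and b == b2:
                    new = (a, relation, c)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

closed = apply_transitivity(facts)
# ("louvre", "located_in", "europe") is now in `closed`, even though it
# was never stated explicitly -- the rule derived it.
```

This is the practical payoff of sharing rules and not just entities: knowledge a service never wrote down can still be derived, by any application that understands the shared rule.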

Zooming Way, Waaaay Out

So far in this article, I’ve spent a lot of time talking about plumbing – the pipes, fluids, valves, fixtures, specifications and tools of the Semantic Web. I’ve also spent some time on illustrations of how it might be useful in the very near future to individuals, groups and organizations. But where is it heading after this? What is the long-term potential, and what might it mean for the human race on a historical time-scale?

For those of you who would prefer not to speculate, stop reading here. For the rest of you: I believe that the true significance of the Semantic Web, on a long-term timescale, is that it provides an infrastructure that will enable the evolution of increasingly sophisticated forms of collective intelligence. Ultimately this will result in the Web itself becoming more and more intelligent, until one day the entire human species, together with all of its software and knowledge, will function as something like a single worldwide distributed mind – a global mind.

Just like the mind of a single human individual, the global mind will be very chaotic, yet out of that chaos will emerge cohesive patterns of thought and decision. Just as in an individual human mind, there will be feedback between different levels of order – from individuals to groups to systems of groups, and back down from systems of groups to groups to individuals. Because of these feedback loops the system will adapt to its environment, and to its own internal state.

The coming global mind will collectively exhibit forms of cognition and behavior that are the signs of higher forms of intelligence. It will form and react to concepts about its “self” – just like an individual human mind. It will learn and introspect and explore the universe. The thoughts it thinks may sometimes be too big for any one person to understand or even recognize – they will be composed of shifting patterns of millions of pieces of knowledge.

The Role of Humanity

Every person on the Internet will be a part of the global mind. And collectively they will function as its consciousness. I do not believe some new form of consciousness will suddenly emerge when the Web passes some threshold of complexity. I believe that humanity IS the consciousness of the Web, and until and unless we ever find a way to connect other lifeforms to the Web, or we build conscious machines, humans will be the only form of consciousness of the Web.

When I say that humans will function as the consciousness of the Web I mean that we will be the things in the system that know. The knowledge of the Semantic Web is what is known, but what knows that knowledge has to be something other than knowledge. A thought is knowledge, but what knows that thought is not knowledge; it is consciousness, whatever that is. We can figure out how to enable machines to represent and use knowledge, but we don’t know how to make them conscious, and we don’t have to. Because we are already conscious.

As we’ve discussed earlier in this article, we don’t need conscious machines, we just need more intelligent machines. Intelligence – at least basic forms of it – does not require consciousness. It may be the case that the very highest forms of intelligence require, or are capable of, consciousness. This may mean that software will never achieve the highest levels of intelligence, and probably guarantees that humans (and other conscious things) will always play a special role in the world; a role that no computer system will be able to compete with. We provide the consciousness to the system. There may be all sorts of other intelligent, non-conscious software applications and communities on the Web; in fact there already are, with varying degrees of intelligence. But individual humans, and groups of humans, will be the only consciousness on the Web.

The Collective Self

Although the software of the Semantic Web will not be conscious, we can say that the system as a whole contains or is conscious to the extent that human consciousnesses are part of it. And like most conscious entities, it may also start to be self-conscious.

If the Web ever becomes a global mind as I am predicting, will it have a “self”? Will there be a part of the Web that functions as its central self-representation? Perhaps someone will build something like that someday, or perhaps it will evolve. Perhaps it will function by collecting reports from applications and people in real-time – a giant collective zeitgeist.

In the early days of the Web, portals such as Yahoo! provided this function — they were almost real-time maps of the Web and what was happening. Today making such a map is nearly impossible, but services such as Google Zeitgeist at least attempt to provide approximations of it. Perhaps through random sampling it can be done on a broader scale.

My guess is that the global mind will need a self-representation at some point. All forms of higher intelligence seem to have one. It’s necessary for understanding, learning and planning. It may evolve at first as a bunch of competing self-representations within particular services or subsystems within the collective. Eventually these will converge, or at least narrow down to just a few major perspectives. There may also be millions of minor perspectives that can be drilled down into for particular viewpoints from these top-level “portals.”

The collective self will function much like the individual self – as a mirror of sorts. Its function is simply to reflect. As soon as it exists the entire system will make a shift to a greater form of intelligence – because for the first time it will be able to see itself, to measure itself, as a whole. It is at this phase transition, when the first truly global collective self-mirroring function evolves, that we can say the transition from a bunch of cooperating intelligent parts to a new intelligent whole in its own right has taken place.

I think that the collective self, even if it converges on a few major perspectives that group and summarize millions of minor perspectives, will be community-driven and highly decentralized. At least I hope so – because the self-concept is the most important part of any mind, and it should be designed in a way that protects it from being manipulated for nefarious ends.

Programming the Global Mind

On the other hand, there are times when a little bit of adjustment or guidance is warranted – just as in the case of an individual mind, the collective self doesn’t merely reflect; it effectively guides the interpretation of the past and present, and planning for the future.

One way to change the direction of the collective mind is to change what is appearing in the mirror of the collective self. This is a form of programming on a vast scale – when this programming is dishonest or used for negative purposes it is called “propaganda,” but there are cases where it can be done for beneficial purposes as well. Examples of this today are public service advertising and educational public television programming. All forms of mass-media today are in fact collective social programming. When you realize this, it is not surprising that our present culture is violent and messed up – just look at our mass-media!

In terms of the global mind, ideally one would hope that it would be able to learn and improve over time. One would hope that it would not have the collective equivalent of psycho-social disorders. To facilitate this, just like any form of higher intelligence, it may need to be taught, and even parented a bit. It also may need a form of therapy now and then. These functions could be provided by the people who participate in it. Again, I believe that humans serve a vital and irreplaceable role in this process.

How It All Might Unfold

Now how is this all going to unfold? I believe that there are a number of key evolutionary steps that the Semantic Web will go through as the Web evolves towards a true global mind:

1. Representing individual knowledge. The first step is to make individuals’ knowledge accessible to themselves. As individuals become inundated with increasing amounts of information, they will need better ways of managing it, keeping track of it, and re-using it. They will (or already do) need “personal knowledge management.”

2. Connecting individual knowledge. Next, once individual knowledge is represented, it becomes possible to start connecting it and sharing it across individuals. This stage could be called “interpersonal knowledge management.”

3. Representing group knowledge. Groups of individuals also need ways of collectively representing their knowledge, making sense of it, and growing it over time. Wikis and community portals are just the beginning. The Semantic Web will take these “group minds” to the next level — it will make the collective knowledge of groups far richer and more re-usable.

4. Connecting group knowledge. This step is analogous to connecting individual knowledge. Here, groups become able to connect their knowledge together to form larger collectives, and it becomes possible to more easily access and share knowledge between different groups in very different areas of interest.

5. Representing the knowledge of the entire Web. This stage — what might be called “the global mind” — is still in the distant future, but at this point we will begin to be able to view, search, and navigate the knowledge of the entire Web as a whole. The distinction here is that instead of a collection of interoperating but separate intelligent applications, individuals and groups, the entire Web itself will begin to function as one cohesive intelligent system. The crucial step that enables this to happen is the formation of a collective self-representation. This enables the system to see itself as a whole for the first time.

How it May be Organized

I believe the global mind will be organized mainly in the form of bottom-up and lateral, distributed emergent computation and community — but it will be facilitated by certain key top-down services that help to organize and make sense of it as a whole. I think this future Web will be highly distributed, but will have certain large services within it as well – much like the human brain itself, which is organized into functional sub-systems for processes like vision, hearing, language, planning, memory, learning, etc.

As the Web gets more complex there will come a day when nobody understands it anymore – after that point we will probably learn more about how the Web is organized by learning about the human mind and brain – they will be quite similar in my opinion. Likewise we will probably learn a tremendous amount about the functioning of the human brain and mind by observing how the Web functions, grows and evolves over time, because they really are quite similar in at least an abstract sense.

The Internet and its software and content are like a brain, and the state of its software and content is like its mind. The people on the Internet are like its consciousness. Although these are just analogies, they are actually useful, at least in helping us to envision and understand this complex system. As the field of general systems theory has shown us in the past, systems at very different levels of scale tend to share the same basic characteristics and obey the same basic laws of behavior. Not only that, but evolution tends to converge on similar solutions for similar problems. So these analogies may be more than just rough approximations; they may in fact be quite accurate.

The future global brain will require tremendous computing and storage resources — far beyond even what Google provides today. Fortunately, as Moore’s Law advances, the cost of computing and storage will eventually be low enough to do this cost-effectively. However even with much cheaper and more powerful computing resources it will still have to be a distributed system. I doubt that there will be any central node, because quite simply no central solution will be able to keep up with all the distributed change taking place. Highly distributed problems require distributed solutions, and that is probably what will eventually emerge on the future Web.

Someday perhaps it will be more like a peer-to-peer network, comprised of applications and people who function somewhat like the neurons in the human brain. Perhaps they will be connected and organized by higher-level super-peers or super-nodes which bring things together, make sense of what is going on, and coordinate mass collective activities. But even these higher-level services will probably have to be highly distributed as well. It really will be difficult to draw boundaries between parts of this system; they will all be connected as an integral whole.

In fact it may look very much like a grid computing architecture – in which all the services are dynamically distributed across all the nodes, such that at any one time any node might be working on a variety of tasks for different services. My guess is that because this is the simplest, most fault-tolerant, and most efficient way to do mass computation, it is probably what will evolve here on Earth.

The Ecology of Mind

Where we are today in this evolutionary process is perhaps equivalent to the rise of early forms of hominids – perhaps Australopithecus or Cro-Magnon, or maybe the first Homo sapiens. Compared to early man, the global mind is like the rise of 21st-century mega-cities. A lot of evolution has to happen to get there. But it probably will happen, unless humanity self-destructs first, which I sincerely hope we somehow manage to avoid. And this brings me to a final point. This vision of the future global mind is highly technological; however, I don’t think we’ll ever accomplish it without a new focus on ecology.

For most people, ecology probably conjures up images of hippies and biologists, or maybe hippies who are biologists, or at least organic farmers, but in fact it is really the science of living systems and how they work. And any system that includes living things is a living system. This means that the Web is a living system, and the global mind will be a living system too. As a living system, the Web is an ecosystem and is also connected to other ecosystems. In short, ecology is absolutely essential to making sense of the Web, let alone helping to grow and evolve it.

In many ways the Semantic Web, and the collective minds and global mind that it enables, can be seen as an ecosystem of people, applications, information and knowledge. This ecosystem is very complex, much like natural ecosystems in the physical world. An ecosystem isn’t built; it’s grown and evolved. And similarly the Semantic Web, and the coming global mind, will not really be built; they will be grown and evolved. The people and organizations that end up playing a leading role in this process will be the ones that understand and adapt to the ecology most effectively.

In my opinion ecology is going to be the most important science and discipline of the 21st century – it is the science of healthy systems. What nature teaches us about complex systems can be applied to every kind of system – and especially the systems we are evolving on the Web. In order to ever have a hope of evolving a global mind, and all the wonderful levels of species-level collective intelligence that it will enable, we have to not destroy the planet before we get there. Ecology is the science that can save us, not the Semantic Web (although perhaps by improving collective intelligence, it can help).

Ecology is essentially the science of community – whether biological, technological or social. And community is a key part of the Semantic Web at every level: communities of software, communities of people, and communities of groups. In the end the global mind is the ultimate human community. It is the reward we get for finally learning how to live together in peace and balance with our environment.

The Necessity of Sustainability

The point of this discussion of the relevance of ecology to the future of the Web, and my vision for the global mind, is that if the global mind ever emerges, it will not be in a world that is anything like what we might imagine. It won’t be like the Borg in Star Trek; it won’t be like living inside of a machine. Humans won’t be relegated to the roles of slaves or drones. Robots won’t be doing all the work. The entire world won’t be coated with silicon. We won’t all live in a virtual reality. It won’t be one of these technological dystopias.

In fact, I think the global mind can only come to pass in a much greener, more organic, healthier, more balanced and sustainable world. Because it will take a long time for the global mind to emerge, if humanity doesn’t figure out how to create that sort of a world, it will wipe itself out sooner or later — and certainly long before the global mind really happens. Not only that, but the global mind will be smart by definition, and hopefully this intelligence will extend to helping humanity manage its resources, civilizations and relationships to the natural environment.

The Smart Environment

The global mind also needs a global body, so to speak. It’s not going to be an isolated homunculus floating in a vat of liquid that replaces the physical world! It will be a smart environment that ubiquitously integrates with our physical world. We won’t have to sit in front of computers or deliberately log on to the network to interact with the global mind. It will be everywhere.

The global mind will be physically integrated into furniture, houses, vehicles, devices, artworks, and even the natural environment. It will sense the state of the world and different ecosystems in real-time and alert humans and applications to emerging threats. It will also be able to allocate resources intelligently to compensate for natural disasters, storms, and environmental damage – much in the way that the air traffic control system allocates and manages airplane traffic. It won’t do it all on its own; humans and organizations will be a key part of the process.

Someday the global mind may even be physically integrated into our bodies and brains, even down to the level of our DNA. It may in fact learn how to cure diseases and improve the design of the human body, extending our lives, sensory capabilities, and cognitive abilities. We may be able to interact with it by thought alone. At that point it will become indistinguishable from a limited form of omniscience, and everyone may have access to it. Although it will only extend to wherever humanity has a presence in the universe, within that boundary it will know everything there is to know, and everyone will be able to know any of it they are interested in.

Enabling a Better World

By enabling greater forms of collective intelligence to emerge we really are helping to make a better world, a world that learns and hopefully understands itself well enough to find a way to survive. We’re building something that someday will be wonderful – far greater than any of us can imagine. We’re helping to make the species and the whole planet more intelligent. We’re building the tools for the future of human community. And that future community, if it ever arrives, will be better, more self-aware, and more sustainable than the one we live in today.

I should also mention that knowledge is power, and power can be used for good or evil. The Semantic Web makes knowledge more accessible. This puts more power in the hands of the many, not just the few. As long as we stick to this vision — making knowledge open and accessible, using open standards, in as distributed a fashion as we can devise — the potential power of the Semantic Web will be protected against being co-opted or controlled by the few at the expense of the many. This is where technologists really have to be socially responsible when making development decisions. It’s important that we build a more open world, not a less open world. It’s important that we build a world where knowledge, integration and unification are balanced with respect for privacy, individuality, diversity and freedom of opinion.

But I am not particularly worried that the Semantic Web and the future global mind will be the ultimate evil – I don’t think it is likely that we will end up with a system of total control dominated by evil masterminds with powerful Semantic Web computer systems to do their dirty work. Statistically speaking, criminal empires don’t last very long because they are run by criminals who tend to be very short-sighted and who also surround themselves with other criminals who eventually unseat them, or they self-destruct. It’s possible that the Semantic Web, like any other technology, may be used by the bad guys to spy on citizens, manipulate the world, and do evil things. But only in the short-term.

In the long-term either our civilization will get tired of endless successions of criminal empires and realize that the only way to actually survive as a species is to invent a form of government that is immune to being taken over by evil people and organizations, or it will self-destruct. Either way, that is a hurdle we have to cross before the global mind that I envision can ever come about. Many civilizations came before ours, and it is likely that ours will not be the last one on this planet. It may in fact be the case that a different form of civilization is necessary for the global mind to emerge, and is the natural byproduct of the emergence of the global mind.

We know that the global mind cannot emerge anytime soon, and therefore, if it ever emerges then by definition it must be in the context of a civilization that has learned to become sustainable. A long-term sustainable civilization is a non-evil civilization. And that is why I think it is a safe bet to be so optimistic about the long-term future of this trend.

A Village Where Aging is Sped Up

Here’s an interesting video about a village in India where men have been stricken for over a decade with a disease that causes them to age much faster. Nobody knows what is causing this. Men in their 30’s appear to be 80. It’s strange. Watch the video. Perhaps if someone were to collect some DNA and compare it to the DNA of people without this syndrome, a cure or at least an explanation could be found. This might also reveal what is different, if anything, about the DNA of people in this village that causes them to age so rapidly — and if a specific gene or set of genes is involved, this could perhaps provide a key to slowing down aging in healthy people.

Scientist Raises Possibility of Silicon-Based Life

Just read an interesting article on the possibility of "intraterrestrial" silicon-based life on Earth:

SETI spends enormous amounts of money and resources looking for life outside of Earth’s realm, but life forms so alien that scientists may simply not have recognized evidence of their existence could inhabit the Earth, according to a leading scientist.

Dr Tom Gold, emeritus professor of astronomy at Cornell University in America, believes that organisms based on silicon – completely unrelated to all the carbon-based life man has encountered so far – may live at great depths.

In a forthcoming book he will suggest that scientists should take the possibility more seriously. Gold, who is a member of the Royal Society, previously predicted that vast amounts of more conventional bacteria live miles down within the Earth’s crust. Scientists initially dismissed the idea, but many now agree with him.

Silicon Lifeform

"So long as nobody suspects there could be silicon-based life, we may just not be clever enough to identify it," he said last week.

Rocks bearing signs of silicon-based organisms may already be sitting in laboratories, he believes, with their significance overlooked.

Every known living organism, from bacteria to mankind, is based on the chemistry of carbon, which forms the complex molecules such as DNA that are central to our existence. Scientists believe that if extraterrestrial life is found, the chances are that it, too, will be carbon-based.

Editor’s Note: While the prospect of silicon-based life is an interesting subject for further research, what the above scientists failed to note is that there is already a large population of Silicone-based life, particularly in Hollywood. Of course they probably can’t get government funding to research THAT subject!

Neuro-Chips

Researchers continue to make progress in fusing living neurons with computer chips:

The line between living organisms and machines has just become a whole lot blurrier. European researchers have developed "neuro-chips" in which living brain cells and silicon circuits are coupled together.

The achievement could one day enable the creation of sophisticated neural prostheses to treat neurological disorders or the development of organic computers that crunch numbers using living neurons.

To create the neuro-chip, researchers squeezed more than 16,000 electronic transistors and hundreds of capacitors onto a silicon chip just 1 millimeter square in size.

They used special proteins found in the brain to glue brain cells, called neurons, onto the chip. However, the proteins acted as more than just a simple adhesive.

"They also provided the link between ionic channels of the neurons and semiconductor material in a way that neural electrical signals could be passed to the silicon chip," said study team member Stefano Vassanelli from the University of Padua in Italy.

The proteins allowed the neuro-chip’s electronic components and its living cells to communicate with each other. Electrical signals from neurons were recorded using the chip’s transistors, while the chip’s capacitors were used to stimulate the neurons.

From: http://www.livescience.com/humanbiology/060327_neuro_chips.html

Big Thinkers' Most Dangerous Ideas

The Edge has published mini-essays by 119 "big thinkers" on their "most dangerous ideas" — fun reading.

The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?

Obsessed Tourist Marries Dolphin

Here’s a happy story about true love. An Israeli millionaire tourist recently married a captive dolphin in a formal wedding ceremony. There’s something fishy about this wedding though — I mean did the dolphin really love her for HER, or did he just want her for her money? When asked to comment, the woman repeated that she is "not a pervert" — PHEW! I’m glad she emphasized that point because otherwise I might have had my doubts. Anyway, what I really want to know is: was it a traditional Jewish wedding ceremony or not, and if it was, how did the dolphin crush the wine glass? Do they even let people bring glassware into the pool nowadays? I thought you could only bring plastic cups to the pool. What kind of operation are they running down there anyway? Well, if you ask me this marriage probably won’t last: I think couples should live together for a while before getting married.

A New Kind of Memory Aid

I recently read a report of new neuroscience research in which researchers are able to predict what a person will recall by analyzing their brainstate. You can read a summary here.

This reminds me of an idea I had a while back for using biofeedback to guide brainstates, in order to improve memory. Here’s a hypothetical experiment that illustrates the idea. Show a person a set of photographs, and while they are observing each photo use functional brain imaging to record their brainstate. Later, show them the same photos several more times and make additional recordings of their brainstate, in order to generate a database of brainstates that correspond to their perception of each photo. Next, select a photo secretly (without telling the human subject) and look up its corresponding recorded brainstates in the database. Then, guide the human subject to generate a brainstate that corresponds to the secretly chosen photo using biofeedback that is tied to their real-time brainstate. For example, provide the human subject with a sound or a computer image that corresponds to their real-time brainstate, and which provides them with positive or negative feedback based on the "distance" from their present brainstate to the desired target brainstate, enabling them to guide their brainstate to the correct configuration. After the subject becomes accustomed to using the biofeedback system, apply it to guide them to generate a brainstate that matches or is closely within range of the desired brainstates for the selected photo. Then ask the subject to report which photo they are thinking of. We can measure how well the method works by the accuracy with which the subject reports thinking of the photo we selected originally.
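The feedback loop in this thought experiment can be sketched in a few lines. Everything here is an assumption for illustration: brainstates are represented as plain feature vectors, "distance" is simple Euclidean distance, and the feedback signal is reduced to three labels — real functional-imaging data would be far richer.

```python
import random

def distance(a, b):
    """Euclidean distance between two (hypothetical) brainstate vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def feedback(current, target, threshold=0.5):
    """Map the distance to the target brainstate onto a simple signal
    (standing in for the sound or image shown to the subject)."""
    d = distance(current, target)
    if d < threshold:
        return "match"
    return "warmer" if d < 2.0 else "colder"

# Toy database of recorded brainstates, one entry per photo.
random.seed(0)
database = {photo: [random.random() for _ in range(8)] for photo in "ABC"}

target = database["B"]           # photo chosen secretly by the experimenter
print(feedback(target, target))  # a perfect match prints "match"
```

In the real experiment the "current" vector would stream from the imaging device in real time, and the subject would steer it toward the target using the feedback signal alone.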

If this process works it could be used someday as a new kind of memory aid. For example, suppose that someday functional brain imaging gets small and portable, or even wearable or implantable, so that everyone has access to their real-time brainstate data. When they want to "remember" something they simply hit the "record" button on their personal brainstate recorder and it measures their brainstate while they are thinking of and/or perceiving what they want to recall. Then they simply give this dataset a label or filename in their personal memory database. Later when they want to recall a specific thing, they just select the label and the system uses biofeedback to guide them back to generating that brainstate, at which point they can then recall whatever it is they were trying to remember.

New Study: Human Hands, Feet and Foreheads Emit Light

Now this is really interesting! New research has found that certain parts of the body emit measurable numbers of photons. This may open up new diagnostic techniques. But that’s just the beginning. Spiritual healers from many different faiths have long said that they experience light coming from their hands, and can feel (and even see) energy from the hands, feet and heads of other people. And of course there’s the classical image of halos around the heads of saints, which can be taken metaphorically, or perhaps literally, in light of this new research. I wonder if the levels of light coming from different people indicate not only health, but perhaps alertness, stress levels, or state of mind. There are many interesting possibilities for this research…

Sept. 6, 2005 — Human hands glow, but fingernails release the most light, according to a recent study that found all parts of the hand emit detectable levels of light.

The findings support prior research that suggested most living things, including plants, release light. Since disease and illness appear to affect the strength and pattern of the glow, the discovery might lead to less-invasive ways of diagnosing patients.

Mitsuo Hiramatsu, a scientist at the Central Research Laboratory at Hamamatsu Photonics in Japan, who led the research, told Discovery News that the hands are not the only parts of the body that shine light by releasing photons, or tiny, energized increments of light.

"Not only the hands, but also the forehead and bottoms of our feet emit photons," Hiramatsu said, and added that in terms of hands "the presence of photons means that our hands are producing light all of the time."

The light is invisible to the naked eye, so Hiramatsu and his team used a powerful photon counter to "see" it.

The detector found that fingernails release 60 photons, fingers release 40 and the palms are the dimmest of all, with 20 photons measured.

The findings are published in the current issue of the Journal of Photochemistry and Photobiology B: Biology.


Amazon Launches new Service that Harnesses Networks of Human Minds to Do Tasks

Amazon has launched a new service that seeks to create a marketplace for human intelligence on the Net. The idea is to utilize humans like one might utilize intelligent agents, to help complete tasks that humans do better than computers — for example, image adjustments, formatting, tagging and marking up content, adding metadata to documents, filing and filtering, etc. The idea is that people can sign up to do these tasks and make money. People who need tasks done can farm them out to the marketplace. It’s like a big army of "human agents" who can use "human intelligence" to do stuff for you.

The name of the service is "Amazon Mechanical Turk" — quite bizarre. But OK. It’s a cool idea. I think the combination of human and machine intelligence is ultimately going to be smarter than either form of intelligence on its own. This system is at least a start — it harnesses groups of human intelligence to help do things.

But think about where this could go: For example, the system could actually be built right into applications —  for example, imagine if in Photoshop there was a new menu command for "fix this image" that charged you a dollar and farmed the image out to 2 or 3 humans who each attempted to improve the image. It would function just like a filter, but instead of software doing the work it would be humans. For you, the end-user, it would be functionally equivalent. You would get 3 versions of your adjusted image back in a few minutes and could choose the best one or use them all.

The idea of building menu options into software and services that actually trigger behaviors among networks of humans is very interesting.

But to do this well you really need an API that all applications can use to harness "human intelligence" and "human functions" in their apps. One of the best proposals for how to do this is here. And an update about that is here.
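The "human filter" pattern described above can be sketched as a tiny API. Everything in this sketch is hypothetical — `farm_out`, the worker functions, and the result-picking step illustrate the pattern only, and are not the actual Mechanical Turk API:

```python
from concurrent.futures import ThreadPoolExecutor

def farm_out(task, workers, n=3):
    """Send the same task to n workers in parallel and return every result,
    so the requester can pick the best one (or use them all)."""
    chosen = workers[:n]
    with ThreadPoolExecutor(max_workers=len(chosen)) as pool:
        futures = [pool.submit(worker, task) for worker in chosen]
        return [f.result() for f in futures]

# Stand-ins for human workers on the marketplace; each "improves" the
# task in its own way, like the three image adjustments in the example.
def worker_a(text): return text.upper()
def worker_b(text): return text.title()
def worker_c(text): return text + "!"

results = farm_out("fix this caption", [worker_a, worker_b, worker_c])
# results holds three candidate versions for the requester to choose from
```

From the application's point of view this is functionally identical to a software filter: one call goes out, several candidate results come back a little later.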

Using DNA to Send Messages into the Distant Future

This article discusses recent research into encoding short 100-word messages into the DNA of living organisms. The error-correcting characteristics of DNA enable such messages to be passed down without degrading across generations. By embedding short messages into hardy organisms such as particular strains of bacteria, it may be possible to preserve information over longer timeframes than by using any other known storage media. This in turn can be used to intentionally send messages into the far future. I blogged about this over a year ago, here, where I suggested that because this is possible, we might want to look to see whether any such messages are already there in our own DNA or that of particularly hardy organisms. Perhaps someone put their signature there for us to see a long long time ago? Perhaps the best way to create a time capsule that can last for thousands or millions of years would be to embed messages across the DNA of a bunch of different organisms in different ecological niches, to ensure that at least some would get through to the future. Certainly a few strains of bacteria should be included, as well as perhaps cockroaches, some types of fish, some plants, and perhaps even some volunteer humans. Since the message has to be pretty short, I would suggest that we use it to indicate the location of one or more hidden storage locations on the planet (or on the moon?) where larger volumes of information, technology, DNA libraries, etc., could be located. I view this as a kind of global "backup strategy" not unlike backing up a hard-disk. I once had some thoughts about doing this using special satellites as well, which you can read about here.
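As a toy illustration of how text can survive in DNA's four-letter alphabet, here is a sketch that packs two bits into each nucleotide, four bases per byte. This mapping is my own simplification; the actual scheme in the research is more elaborate and relies on DNA's error correction rather than a bare encoding:

```python
BASES = "ACGT"  # each base carries two bits, so four bases encode one byte

def encode(text):
    """Encode an ASCII message as a DNA base sequence."""
    seq = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):           # high bits first
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq):
    """Recover the message: fold each run of four bases back into a byte."""
    data = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        data.append(byte)
    return data.decode("ascii")

message = encode("MEET AT THE CAPSULE")
print(decode(message))  # round-trips back to "MEET AT THE CAPSULE"
```

At four bases per character, a 100-word message fits comfortably in a couple of thousand nucleotides — tiny compared to even a bacterial genome.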

20% of Your Genes Belong to Them

From Boing Boing today:

Xeni Jardin: A report in this week’s issue of Science says 20 percent of human genes have been patented in the United States:

The study (…) is the first time that a detailed map has been created to match patents to specific physical locations on the human genome. Researchers can patent genes because they are potentially valuable research tools, useful in diagnostic tests or to discover and produce new drugs.

"It might come as a surprise to many people that in the U.S. patent system human DNA is treated like other natural chemical products," said Fiona Murray, a business and science professor at the Massachusetts Institute of Technology in Cambridge, and a co-author of the study.

I have long felt that patents should not be granted on naturally occurring phenomena — such as the DNA of any species. It is simply absurd to grant a patent on something that has existed in the public domain for millions or even billions of years! It’s an abuse of the legal system for the benefit of corporate greed, in my opinion. I do believe that patents should be granted for new inventions (although I think all patent rights should expire much faster than they presently do — which would solve many of the problems in the patent world) — but it is simply wrong to allow patents on naturally occurring physical phenomena. Discoveries are not inventions.

Storing Data in Human Fingernails — One of my Past Proposals now Under Development

I just read that a Japanese team is actually developing technology to store data in human fingernails. I proposed this concept on this blog last year in this post. That may qualify as prior art. I wonder if they are going to try to patent this? Not that I mind; I think it’s a great idea, obviously.

Human-Brained Monkeys Pose Ethical Challenge

A cutting-edge research program is injecting human brain cells into monkey brains, to investigate whether this causes their brains to become more "human." This poses a potential ethical challenge: If the monkeys do become more human, would they be considered "human subjects" and be protected by the ethical guidelines governing research on humans? At what point does a monkey qualify as a legal person? Could a more humanlike monkey, or a state attorney on its behalf, file suit against scientists who harmed it or deprived it of its rights? If a monkey becomes more humanlike, do we have the right to hold it against its will? Super-intelligent monkeys used to be the stuff of the Wizard of Oz and Flash Gordon science fiction — now we’re actually making them — what a world we live in! Hey, I have an idea: instead of trying to make monkeys more like men, could we figure out a way to make men LESS like monkeys??? Let’s start with the politicians!

Extracting Video from Cat Brains

Fascinating article about research that has successfully extracted video from cat brains by monitoring their neurons. The researchers actually reconstructed what the cat saw from its neural signals. This opens the door to recording our day-to-day perceptions (lifelogs) and perhaps even to recording our dreams. And of course there might be options for playback as well. This is cool stuff.

Hackers Crack Junk DNA?

A group of researchers working at the Human Genome Project will be announcing soon that they made an astonishing scientific discovery: They believe so-called non-coding sequences (97%) in human DNA is no less than genetic code of an unknown extraterrestrial life form.

The above excerpt is from an article that has to be one of the most interesting things I’ve read all year — IF IT CAN BE CONFIRMED (note: I am not sure yet whether this information is reliable — it could be a hoax or it could be the real thing — so please read the source article for yourself and help me research the validity of its claims: I have not yet located supporting materials, have you? Link to this post and I’ll see your trackback).

The article is an account of an alleged project to decipher the "junk DNA" portion of the human genome, by treating it as if it were an encrypted message. Essentially they attacked the problem in the same way that one might try to crack an encrypted message.  It turns out, according to their report, that the junk DNA is not junk at all, but actually appears to be DNA that has been "commented out" in much the same way as portions of unused computer code are commented out.

As a result of their study, the researchers found that among other things, they may have uncovered a new approach to curing cancer. Beyond this however, they have come up with some interesting speculations about who wrote this code and then commented it out (yes, it is even suggested that this code could have been written by extra-terrestrials) and how we might take advantage of it to generate new lines of "debugged" humans. While I’m not sure the code they have found is due to aliens (it might be a side-effect of some yet-unknown evolutionary process for example), it is certainly one of the most interesting new frontiers for further research.

I hope that some geneticists will take a look at this research in more detail. Please link to this article so that those with a deeper knowledge of the relevant science can comment and critique it. I have not been able to substantiate the claims that are made, but if it turns out to be correct, this is the kind of research that has the potential to really change the world.

Notes:
– This article — if it turns out to be correct — is possible experimental confirmation of my earlier hypothesis that there could be a hidden message in the junk region of human DNA. This is an idea that has also been proposed by various science-fiction writers in the past.

New Study Finds Stress Causes Aging

A recent study by the University of California, San Francisco, has found that stress causes the same changes in cells that are typically caused by aging:

The study involved 39 women ages 20 to 50 who had experienced grinding stress for years because they were caring for a child with a serious chronic illness, and 19 other women with healthy children.

The researchers examined structures inside cells called telomeres – the caps at the ends of chromosomes. Every time a cell divides, telomeres get shorter. In the natural aging process, the telomeres eventually get so short that cells can no longer divide, and they die.

The researchers also measured levels of an enzyme called telomerase, which helps rebuild telomeres to stave off this process. Telomerase levels naturally decline with age.

The researchers found that the longer a woman had been caring for a sick child, the shorter her telomeres, the lower her levels of telomerase, and the higher her levels of "oxidative stress," in which so-called free radicals in the body damage DNA, including telomeres.

Compared to women with the lowest levels of perceived stress, women with the highest perceived stress had telomeres equivalent to someone 10 years older, the researchers found.

This is very interesting — it would also help to explain why meditation, yoga, and other activities that reduce stress may have an effect on slowing the aging process.

New Anti-Aging Pill To Be Released

In February of 2005, a controversial new anti-aging pill called Protandim is slated for release. This drug is claimed to increase the body’s natural production of anti-oxidants, which in turn is believed to combat damage from free-radicals. Preliminary studies on mice demonstrated "reduction of lipid peroxidation by 60% to 75% in both plasma and liver, as well as a decrease of more than 90% in brain tissue. Lipid peroxidation refers to the oxidation of lipids, a process that can destroy cell membranes." Human trials have not been completed yet, however. While these results are incredibly promising, there is still debate about whether damage from free-radicals is the primary cause of aging and age-related illnesses. The fact is, very little is really understood about aging at present. However, it is known that anti-oxidants are beneficial to health and Protandim may be the most effective way to introduce antioxidants into the body. Whether it extends human lifespan to 120 years or more, as some claim it will, remains to be seen. It is certainly an interesting development to track and I think that anti-aging medicine will be a major new market in the next few decades.

If the Universe is a Simulation, then What?

Here’s an interesting speculation. Assume for the moment that our universe is in fact a simulation running on a vast computing system created by a race of beings that is far more advanced than we can presently imagine. The next logical question would be, “Why would an advanced civilization want or need to undertake such a project?”

Without debating whether or not such a project is possible, let’s simply address this second question. I think that one reason why it might be of value to simulate an entire universe is in fact to understand the universe that one is already in. It may turn out that cosmology research in “super advanced civilizations” takes place via such universe simulations rather than via observations of their own universes.

Why might this be the case? Well one reason is that Godel proved that in any formal system either there are truths that cannot be proved to be true using that formal system, or the formal system will result in contradictions. It is not possible to design a formal system (that is equivalent to mathematics as commonly defined) that is both logically complete and consistent. Perhaps because of this fundamental limitation on knowledge, at a certain level of physics sophistication, there will be a similar limitation to knowledge: either some truths about the universe cannot be proved using existing physics, or existing physics will result in contradictions.

But there may be a “workaround” to this problem — a way to discover unprovable truths about the universe, without having to derive them from a particular formal physical system — namely, simulating lots of potential universes, each with different physics, to see what the results are. Perhaps by doing a meta-level study of the behavior of different sets of physical laws on different sets of initial conditions, meta-laws can be discovered that apply not only to particular universes, but to all possible universes.
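The meta-level study described above can be caricatured in a few lines of code, using 1-D cellular automata as stand-ins for "universes": each Wolfram rule number plays the role of a different set of physical laws, each seed a different initial condition, and the sweep records a crude outcome statistic across all of them. The rules, seeds, and statistic here are arbitrary choices of mine, purely for illustration:

```python
from itertools import product

def step(state, rule):
    """One update of a 1-D binary cellular automaton under a Wolfram rule."""
    n = len(state)
    return tuple(
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def run(rule, initial, steps=50):
    """Evolve one toy 'universe' and report the final fraction of live cells."""
    state = initial
    for _ in range(steps):
        state = step(state, rule)
    return sum(state) / len(state)

# Sweep over 'laws of physics' (rules) and initial conditions, collecting
# outcomes in which one could then look for cross-universe regularities.
initials = [tuple(int(b) for b in f"{seed:016b}") for seed in (1, 73, 65535)]
outcomes = {(rule, ic): run(rule, ic)
            for rule, ic in product((30, 90, 110), initials)}
```

A "meta-law" in this toy setting would be any property of `outcomes` that holds regardless of which rule and seed produced it — the analogue of a truth visible only from outside any single physics.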

Perhaps these meta-laws can only be discovered and understood outside the context of one particular physics, or at least outside the context of one particular universe. Perhaps the only way to see beyond the “Godel Horizon” is via simulation. By simulating myriad potential universes (on a hypothetical quantum computer, which could run infinite simulations on infinite data sets in finite time, for example), meta-theorems could be derived that transcend the Godel Horizon of any one particular physics or universe. This could be one explanation for why our universe is a simulation, assuming that it is a simulation at all (which I actually don’t believe, by the way — but I used to, which is why I am still interested in this question).

There may also be other reasons for simulating universes, besides just physics and cosmology research. In particular, one major motivation might be social research, genetic research, or perhaps research into time-travel and the complexity of changes in the continuum of causes and effects. These are wild speculations, I know, but worth pondering as long as we are on the subject. Another interesting possibility is that it may be easier to generate lots of universes in which various races evolve and work to solve the riddle of the universe in parallel, than to try to solve it oneself in one’s own universe using only one’s own resources. But this would only be practical if in fact it were possible to run such simulations at the same clockspeed, or better yet an even faster one, than the clockspeed (the speed of light) of one’s own present universe.

Now an interesting follow-on idea that stems from this concept is that perhaps there is a way to detect whether or not our universe is a simulation. We simply need to look for some phenomenon that no formal system can fully describe — something that cannot be simulated perfectly, even on a suitably complex computer. If we can find such a phenomenon then our universe cannot be a simulation or formal system, at least not one based on our concept of what a formal system is. I propose that consciousness is an example of such an unsimulatable thing. If we find that consciousness cannot be simulated by a computer, then I would conclude that our universe (which contains consciousness, seemingly) cannot be a computer simulation. It might still be a simulation however — but not a simulation running on anything equivalent to a Turing Machine. For example, perhaps in really advanced civilizations there is another way of simulating things that does not rely on Turing Machines — for example, a simulation technology that relies instead on the application of dreaming as a means to generate and test various possible worlds. But that is an extreme fringe-speculation that I would be the first to admit is even farther out in the realm of science-fiction than the rest of this article.

If it turns out that we cannot find anything in our own universe that cannot be simulated perfectly such that it is effectively synthesized, in principle at least, then that does not prove that our universe is a simulation, only that it is not impossible for it to be a simulation. To prove that our universe IS a simulation, we would have to locate certain facts about our universe that are inconsistent with what we would expect if it were not a simulation. For example, perhaps there are certain non-random patterns in space-time, or our number system, or the physical constants that are extremely unlikely to have happened by accident. In fact, such patterns have been found. But even this is not sufficient evidence to convince me, or most scientists, that our universe is intelligently designed and just a simulation. So that won’t suffice as proof.

What might suffice? Well for one thing, assuming that there exists a civilization advanced enough to simulate universes, perhaps they are also clever enough to find a way to add clues about their existence and the simulated nature of their universes, into their simulated universes so that they can be found by intelligent beings within those universes. But why would they bother leaving such clues, even if they could? Perhaps they might do so in order to generate recursive computations. For example, they might be able to find their Big Answer faster if the intelligent beings in their simulations could eventually evolve to run their own universe simulations. And in order to help them along in that process, really smart universe-simulators might insert clues and knowledge into their simulated universes necessary to help their simulated civilizations to evolve the technology and knowledge necessary to start running their own simulations of universes!

Now let’s assume for the moment that our universe is such a simulation, and that the simulators are clever enough to leave us clues to discover this fact — where might they leave them? Well, it wouldn’t be in our DNA — that is far too high-level and emergent. It would probably have to be in the underlying structure of space-time and the physical laws and constants themselves — for that is the level at which our simulation would most likely have been coded. Perhaps there is a message hidden for us to discover in the fabric of mathematics, space, time and physics. It’s worth a look, if nothing else to rule out the possibility that it is there.

Addendum

After thinking about this further for a while, a few additional interesting follow-on ideas emerged:

  • If in fact our universe is a simulation being run on an advanced simulation system by some ultra-advanced race of beings, then it would increase the probability that THEIR UNIVERSE (our meta-universe) is also just a simulation being run on a computer system by an even more advanced race of beings! So perhaps one reason why an advanced race of beings might want to attempt to simulate a universe is in order to determine whether it is possible that their own universe is a simulation. Furthermore, if their own universe is a simulation and if that simulation is a formal system then their knowledge of their universe is certainly limited by Godel’s theorem, and therefore simulating further universes is the only way for them to see beyond those limitations.
  • Another interesting thought: if a given universe U is a simulation running within another universe U’, then the question arises of how communication might take place between the beings in those two universes. Consider our own case. Life on Earth has been around for only a tiny blip on the cosmic timescale of this universe, and our solar system is a minuscule backwater of our galaxy, let alone our entire universe. Furthermore, we may not be that unique or that intelligent; there could be billions of other species that are equally or more interesting than us. To establish communication with our creators, we would have to somehow get their attention first, and this is a cosmic signal-to-noise problem. What could we do to get their attention? I think there are a few options:
    • Do something that affects a large region of space. For example, create a clearly non-random, non-natural arrangement of stars, assuming we could do that. Create a bunch of black holes or pulsars, or make a pulsar emit energy in a noticeable way. (Interesting side-thought — maybe pulsars are beacons created by advanced races within our simulation to signal their presence to one another and to the creators of our simulation — that would be one way to get noticed).
    • Do something that affects a large region of time. We would probably need time-travel technology to do this — but if we had it we could potentially go back to a time just after the Big Bang and make a few simple changes that would result in a vastly different universe today. That would certainly send a big signal, if we could do it.
    • Hack their simulation and try to create a bug or error. This is risky though: it might result in our own accidental destruction (lost or corrupted data, or a bad computer virus running rampant through the cosmos), or in the entire simulation (our universe) being shut down by an annoyed cosmic bug-fixer.
    • Do something that affects the fundamental properties of our universe. For example, could we do something that would change the physical constants somehow? If looking at these properties is a logical place to search for a hidden message from our creators, then these properties might also be a logical place to send messages back to them. I have no idea how we could modify the fundamental physical constants of our universe.


Flying by Brain

This is pretty cool stuff — growing brains using live tissue and then teaching them to control software:

From an article on Slashdot: “Scientists at the University of Florida made a living ‘brain’ by extracting 25,000 neurons from a rat’s brain and culturing them inside a glass dish. Then, the neurons began to extend lines to each other, creating a living neural network between them. The dish had a grid of 60 electrodes connected to a computer running a flight simulator. The scientists were able to train the ‘brain’ to control the plane in the simulator and to react to conditions of the plane. Are we getting closer to create an artificially made conscious being, or perhaps, a living computer?” AlphaJoe was one of several readers to add a link to Wired’s article on the experiment.

Humans will live to 150

A leading researcher claims he is certain that some humans alive today will live to be 150, thanks to ongoing increases in the human lifespan. He has even bet money on it. Meanwhile, another study has found that certain mutations in our DNA may be causing shorter lifespans. I guess if you combine the enhancements with the mutations, our lifespans will balance out to about current levels.

New Technique Turns Animals into Drones; Humans Next?

Scientists have discovered that by blocking the effect of a gene called D2 in a particular part of the brain, they can transform normal monkeys into “drones” that will work as hard as they can, continuously, on repetitive tasks, without needing any expectation of reward to keep going. In other words, they can turn regular monkeys into the primate equivalent of worker bees. Normally monkeys (and most humans) tend to work hardest only when they expect a reward, and in particular they don’t usually sustain high productivity on endless repetitive tasks without some sort of “light at the end of the tunnel.” But not anymore. It’s a Brave New World, folks: this same drone gene is expected to function the same way in humans, raising the spectre of Aldous Huxley’s Epsilon (human drone worker) caste becoming a reality. (New product idea: Dronicine, the little grey pill that makes you a drone!) It’s a scary thought, but then again, maybe a daily dose of this stuff would have made junior high school more bearable. Come to think of it, for those of us (like me) who find ourselves working 12 hours a day for no reason (and without any big reward in sight, for that matter), maybe we’re already drones?