My Father and Me. A Memoir. For Mayer Spivack (1936 – 2011)

My father, Mayer Spivack, passed away on February 12, 2011, in the Kaplan Family House, a beautiful hospice outside of Boston. He died at the young age of 74, after a difficult year-and-a-half battle with colon cancer. During his illness he never lost his spirit of childlike curiosity, his enormous compassion, or his dedication to innovation.

His passing was at times difficult, but ultimately peaceful, and took place over five days, during which he was surrounded by love from close family and friends. His presence and spirit, and the intense experiences we all shared over those last days with him are unforgettable: the most incredible experience of love and spiritual connection I have ever had. He was as great in death as he was in life.

This is the story of my relationship with my father: the things I appreciated most about him, what I learned from him, and what he gave to me at the end of his life. By sharing this, I hope to amplify and share his gifts with others.

My father was a truly unique person, and a Boston legend. He was multi-talented and worked in many fields at once, mastering them all (you can read more about his actual work here). He had a vast intelligence, a palpably original approach, and an even greater heart. He was a true Renaissance Man, a great intellectual and artist, and often an unintentionally entertaining and eccentric genius. He had a profound influence on all who knew him well.

As a father, he was a large, warm, loving, fuzzy bear of a man who never really lost his childlike innocence. He was the kind of father everyone wanted to have and when they met him they instantly wanted to hug him. His greatest accomplishment was his compassionate heart: Everyone could feel it.

But despite his brilliance, or perhaps because of it, my father never really fit in. There was no box that could contain him. He was an only child, a loner, and an outsider with little interest in conformity. He had a disdain for formality and social conventions, which always manifested, much to our embarrassment, in the most formal and conventional of settings. He described himself as an iconoclast. Despite his unconventional ways, he was loved and appreciated for his humor, his quirkiness, his unselfconscious originality, and his always out-of-the-box thinking, even (and sometimes especially) by those in the mainstream.

One funny story we recently remembered illustrates his irrepressible spirit: He was invited with his wife to a major European conference of art restorers in Italy. There was a formal reception at the home of an Italian Duke. My father, never comfortable with any kind of formality, playfully took one of the candles from the reception and wore it on his head for the entire night. Throughout the five-course formal dinner and the reception, he was introduced to various members of the Venetian nobility and the European art world, all the while balancing this burning little candle on his head, yet acting completely as if it wasn’t there and not acknowledging it at all. Everyone thought that, because of his first name, “Mayer,” he was actually the eccentric “mayor” of some city in the USA, and so, despite their horror, they were too afraid to point out that there was a candle on his head.

In another infamous incident, my father sat on the Arts Council for the city of Newton, Massachusetts. One day a photo was taken of the Council members, none of whom were actual artists, aside from my father — they were prominent upstanding Newton business leaders and socialites. In the photo they are all wearing three piece suits and looking very formal and proud. My father is also wearing a three piece suit, except that, much to the dismay of the other Council members, his suit pants are tucked into gigantic calf-height silver moon boots (to him it was winter and it was perfectly logical to wear snow boots).

In a similar vein, whenever my father was invited to a black tie event, he would reluctantly attend, dressed appropriately, except with a black dress sock tied around his neck instead of a bow tie. Of course he would never acknowledge this to anyone, and they were all too shocked to point it out to him.

One more example of my father’s individuality: when we were children in the 1970’s in Boston, my father got a great deal on a World War One field ambulance. That was our family “car.” He also had a longstanding love affair with army surplus, to which he had special access through his position on the faculty of Harvard Medical School. From some special warehouse, he acquired a full Coast Guard extreme-weather helicopter rescue snowsuit — a bright orange practically bulletproof insulated monstrosity. To him it was extremely practical – warm, waterproof, and visible even in the worst white-out snowstorm conditions. He was entirely unselfconscious about the fact that he looked like he had just descended from a rescue helicopter when he wore it. And so this was what he wore, along with his usual silver moon boots, all winter, every winter, through my early childhood.

My poor brother and I would have to be dropped off every morning at elementary school this way: We would pull up in an antique white ambulance — a big man in an orange emergency jumpsuit, sunglasses, and silver moon boots would get out, tromp through the snow, and open the rear doors (where the stretcher would normally be), and then my younger brother and I would pop out, much to the shock and awe of our fellow schoolmates. Thus were the origins of my own life as an alien and outsider. While these experiences were a source of horror and embarrassment for us growing up, today we laugh hysterically when we remember them — they are what we are made of and I wouldn’t trade them back for anything.

My father was a huge influence for me as an innovator. He was a prolific, constant professional inventor and my childhood was filled with his inventions, in various stages of development. He was such a good inventor that corporations like Polaroid, Otis Elevator and others, would hire him to come up with inventions. I remember him once telling me that he made 100 inventions for Polaroid in 100 days. There was another time when my father was hired to invent new uses for Silly Putty — he received a giant vat of the stuff from the Silly Putty people. With the attention of my father, two kids, and all our friends, the Silly Putty gradually dispersed throughout our house, until little blobs of Silly Putty could be found in every corner, crevice, crack, cranny and nook.

My brother and I grew up inventing things with our father. In fact, we were not allowed to have or watch a TV as children – instead we had three rooms dedicated to making things, in which we spent most of our time: one for building things with wood, one for drawing and painting, and another was my father’s studio. These rooms were stocked with all kinds of tools and art supplies.

As an inventor, my father always had tools and various devices hanging off of him, clipped onto his belt, in fanny packs, in holsters, backpacks, special cases, and in holders of his own making. Our nickname for him at times was “Inspector Gadget.”  He was always infatuated with some new tool or device.

I remember, for example, what we refer to as his “Hot Glue Phase,” when I was in junior high school. Hot glue is a plastic that you melt through a device called a hot glue gun. It creates a white plastic goo that hardens as it cools and is unfortunately able to fasten just about anything together, much to my father’s delight, and our misfortune. I remember going to junior high school with a rip in my pants repaired visibly with hot glue, my sneakers repaired with hot glue, my book bag repaired with hot glue. There was nothing that hot glue couldn’t be used on, we discovered. Clothes. Plates. Furniture. Our house was at one time filled with little spider web strands of hot glue residue, stringing together our possessions, our home, our clothes, us.

One of my father’s most memorable inventions was “The Body Sail” – a precursor to the Windsurfer, on which the sail was not attached to the board but rather was held by hand using a special boom. He once won the Charles River Boat Festival sailing that contraption – of course, wearing a full body scuba suit. My brother and I used to use his Body Sail on ice skates in the winter, on frozen ponds. My father, of course, preferred to sail it on roller skates, in full bodysuit, helmet and gloves, right through parting waves of startled lunchtime crowds in Harvard Square.

No story about my father would be complete without mentioning his love of sailing. It encompassed not only his Body Sail invention, but a series of boats, particularly multi-hulled boats such as catamarans and eventually trimarans. In his later years he moved to Marblehead outside of Boston, a worldwide center of sailing, where he became an avid fan of high-speed sailing, eventually designing and starting to build his own trimaran out of aerospace composite materials, which, had it ever been finished, would have been among the fastest, and certainly the most computerized and advanced, trimarans on Earth.

My father was also a classically trained artist, particularly a widely shown sculptor. His artworks included photos, drawings, and sculptures made from found objects, industrial artifacts, and natural materials. As an artist, too, he was truly unique: an early pioneer of the use of “found objects,” he made works from rusty pieces of industrial machinery, wooden molds for casting pieces of ships, old rusty farm tools, and pieces of found wood and materials from nature. I grew up surrounded by these artworks (there were hundreds of them, and he had numerous exhibitions) and played in his studios, amid tools for making things, prototyping, and inventing.

One series of works, which he called “Foundiron,” consisted of pieces taken from the intestines of large industrial boilers and furnaces. Another series, made from the wooden molds used to cast brass fittings for ships, appeared like a set of primitive human figures – perhaps from Easter Island. Later works included a two-ton angelic shape made from the massive steel blades of a snowplow for train tracks, and gossamer drawings in air, made from the unwound springs of massive clocks, that reminded one of Picasso’s drawings. His Shrine Series included animal bones, bird wings, industrial spindles, parts from clocks, early computers, and metronomes, and melted industrial alloys. One of his larger installations is made from three giant steel train car hitches that he cut apart and welded back together like hands grasping each other; it now stands permanently in Boston’s new South Station.

He was also a photographer, and some of his images — for example, macro images of honeycombs and turtles — still remain in my mind as if I saw them yesterday. At one point his entire office was rigged up with a complicated system of prisms, blackout shades, lenses, reflective materials, and rear projection screens so that he could take photos of shapes made of pure light that he called Lumia – which he then blew up to massive size and animated with a bank of slide projectors — some of these images can be seen on his weblog.

Another area of life that my father dove into deeply was music. He had a profound connection with music. His music collection included many of the greatest works of classical music, but also Jazz and folk music, and even Indian classical music. Our childhood was filled with music, and also with musical instruments of all kinds – particularly unusual instruments: aboriginal instruments, vibraphones, banjos, harpsichords, flutes, guitars, percussion instruments. My own broad taste in music came from this. My brother, Marin Spivack, took it even further, becoming a masterful Jazz saxophone player, as well as learning to compose for and play guitar, drums, piano, and bass.

My father’s fascination with science and his massive appetite for knowledge translated into a home filled with books about science, scientific journals, and discussions about physics, biology, chemistry, brain science, psychology, architecture, engineering, and anthropology. We spent countless hours discussing science, the future, the brain, and technology, and coming up with new theories and inventions.

In my own life as an innovator, my father was my biggest fan and supporter. He taught me to invent – it was his passion. He wrote about it, and refined his theories and methods for innovating and enhancing creativity over the course of his life, and as children my brother and I were his very fortunate experimental guinea pigs.

I can remember being brought by him as a child to MIT, the Massachusetts Institute of Technology, where my father had done his graduate studies — there my brother and I were subjects in early experiments on children and computers: his colleagues observed us as we played the early computer game, “Wumpus,” and learned how to use computers. I still remember my father’s love for MIT — how he took my little brother and me on nighttime expeditions into the hidden catacombs under the campus, and the many times we met with his friends, colleagues and relatives from various MIT departments. My father wore his MIT ring proudly right until his last breath: It was the only club he ever wanted to belong to.

As I got older my father shared with me his work with architects and designers, and his “Design Log” methodology for documenting and improving any kind of design process. Later, when I was an adult, he shared his new theories about human intelligence, learning disabilities, dyslexia, and what he called “syncretic associative thinking.” His theory of syncretic cognition proposes that there are two fundamentally different, yet complementary, forms of human intelligence — linear and syncretic. According to my father’s thinking, syncretic thought is associative and seemingly chaotic, yet out of it great creative leaps and innovations are born.

Dyslexics, of whom my father was one, are an extreme case of syncretic thinking: despite difficulties with linear logic, dyslexics are often brilliantly creative; in fact many great geniuses – especially artists, but also scientists — have been dyslexic. My father believed that instead of viewing dyslexics as “learning disabled” they should be viewed as “creativity enabled” and trained and taught differently, to leverage their unique cognitive abilities.

Instead of being viewed as bad at math or slow at reading, dyslexics might instead be viewed as unusually talented at associative thinking, brilliant in the arts and inventing. It was all a matter of perspective. My father advocated passionately for the often-overlooked talents hidden within dyslexia in his own writing, and also in his parallel career as a trained psychotherapist working with hundreds of people, especially learning disabled people, engineers and artists.

My father’s interest in the many flavors of intelligence extended not just to humans but also to animals: He had a long fascination with animal intelligence. His homes were always filled with animals – particularly highly intelligent parrots of various breeds, with whom he would speak, whistle, sing, and explore his theories about learning and cognition. When I was just a newborn, he had a pet crow — which he said was one of the most intelligent of birds.

My father painstakingly studied crows and eventually learned how to mimic their various kinds of calls. I can distinctly remember how, throughout our entire life together, he would suddenly start embarrassingly screeching, “Caaah  caaahhh Caaaaaaahhhh,” whenever he encountered a crow in some random tree.

In another famous story from my father’s MIT days, he became fascinated with echolocation — the form of navigation through sound used by animals such as bats and dolphins. Bats in particular became a bit of an obsession for my father. Bats navigate with high-frequency clicks. These clicks bounce off of surfaces like walls, buildings, plants, insects, and other bats, and the reflections are turned into images in the bat brain.

My father decided that bat echolocation would be a great way to help the blind navigate through cities. So he invented a bat clicker device you could wear on your head. It would emit rapid loud clicks that were within the range of human hearing. He spent a week blindfolded, wearing this device, walking around the MIT and Harvard campuses, and apparently he was able to navigate successfully with it.

He recounted that after many days of using this contraption, blindfolded the whole time, his brain adapted and he was able to discern the different types of materials, objects and surfaces from the subtle differences in sound reflections. He was able to cross streets, navigate around buildings and obstacles, and could even find his way through crowds (although we all suspected the crowds were probably parting of their own volition around this strange blindfolded man with the clicking machine on his head). The astonished people of Cambridge who encountered him must have thought he was some kind of alien exploring a strange new world. And one can only wonder what the bats themselves must have thought.

At various times in my childhood my father also had pet frogs, lizards, turtles, fish, snakes, squirrels, cats, and later, his beloved pug. We grew up with enormous aquariums, terrariums, and aviaries — as kids these were wonderlands. This love of all kinds of living things would eventually guide him to his second wife: Boston artist, Louise Freedman. We knew they were made for each other when, for their first date, they chose to go to a local cemetery pond to collect pond water and frogs together.

As their lives merged, so did their always increasing menagerie of animals. And gradually there was less and less room, or time, for humans in their house. During my college years, my father and his wife had started raising African Grey parrots, and had also become close friends with Harvard/MIT animal cognition researcher, Irene Pepperberg, and her famous parrot, Alex.

When I would visit their home on school breaks, the parrots were as much a part of the family as my brother and I, and occupied a central location in the family room. A typical mealtime conversation in our family was a combination of English words, chirps, clicks and whistles, spoken by humans and parrots alike. My father and Louise eventually moved into a home that was literally like a tree — surrounded by trees on many levels, on the edge of a huge nature sanctuary on Marblehead Neck. There amongst the branches, they could almost live as birds. My brother and I joked — half-seriously — that for an upcoming wedding anniversary, we would throw out their couch and replace it with matching human-sized perches for them.

But my father’s fascination with animals wasn’t just about intelligence, it was also about love. I remember one day as a child, while frantically evacuating from Cape Cod ahead of a fast oncoming hurricane, my father suddenly backed up miles of panicked traffic when he stopped the car in the pouring rain and lightning to scramble around on his hands and knees, risking his own life, to rescue a turtle that had strayed onto the freeway. This deep love of animals, and people, that he manifested throughout his life, was at times a source of embarrassment for me, but later became what I admired most about him. For my father, this simple love of all living things was his religion. But for most of my life, I didn’t realize what an accomplishment that was.

Although my father influenced me in so many ways, the most important facet of life that we shared — and struggled over — was spirituality.

He was a dedicated scientific materialist and rejected superstition, which to him included all institutionalized forms of religion. He even sometimes referred to himself as an atheist, although I think he was more accurately an agnostic. I, on the other hand, while also deeply interested in the sciences, had come to the conclusion that science alone could never fully explain reality or consciousness — I felt that there was a common underlying truth in all the great religions which science had so far completely missed, a truth that was essential for a complete and accurate understanding of reality. This debate between science and religion became the fulcrum on which we wrestled endlessly and in many different ways.

I had always known, even as a child, that there is something more than meets the eye about reality that is extremely subtle, yet at once vividly evident. Growing up, I had a number of spontaneous mystical experiences that I could not explain, and later I witnessed highly unusual phenomena taking place in monasteries in Nepal and India that convinced me that there must be more to the mind, and to reality, than our western scientific worldview could presently measure or explain. I was perplexed by the apparent incompatibility of these experiences, and the Western scientific framework that my father and I both lived and worked in.

In my attempts to reconcile these two worlds, I became obsessed with physics, computer science and artificial intelligence. I began searching for a grand unified theory. I sought to create software that could simulate physics, the brain, and the mind. With some of the world’s most cutting-edge physicists and computer scientists, as well as at some of the top artificial intelligence companies, I worked on several major initiatives in computational physics, parallel supercomputing, and artificial intelligence, as well as my own software projects and theories.

All of these attempts failed to achieve their goals so thoroughly and so repeatedly that eventually I began to question if it was even possible to do. I reached a point where I began to doubt the assumptions behind these projects — I began to question my own questions. This led me to a deeper exploration of the mind and the foundations of reality – a journey from cognitive science and physics to philosophy, and finally to spirituality. Paradoxically, I ended up back where I began, looking inwards rather than outwards, for the answers.

My quest for spiritual meaning took me through a survey of all the major Western and Eastern religions, and while traveling in Asia for a year after college, I landed in Tibetan Buddhism, with its intense focus on the nature of mind and consciousness. I was home. For me, Tibetan Buddhism had the perfect combination of rational and objective logical analysis (my father’s influence), and the mystical direct experience of the union of consciousness with divinity that I had tasted in my own experience.

In Tibetan Buddhism I finally found a rational yet holistic framework that could account for all the dimensions of observed experience: both the outer physical world and the inner dimensions of consciousness. From the Buddhist perspective, we humans are manifestations or projections of a deeper ultimate nature of reality, as are all sentient beings, and in fact all animate and inanimate things. This deeper level of reality is the origin of both the subjective and objective poles of experience, and its nature is transcendental, empty, yet aware.

The direct proof and experience of this can be found many ways: through logical reasoning, through prayer, through love, through nature, through art, through meditation, and perhaps most easily, by searching for the source of one’s own consciousness. Consciousness is a unique phenomenon that we all have direct, equal, and immediate access to, yet which science cannot measure, let alone explain. By persistently searching for the source of our own consciousness, and discovering that we can’t find it yet it is not non-existent, we are inevitably brought to a direct realization of the ultimate nature of reality.

Over decades of searching for consciousness, first through science, then through Buddhism, I had come to the conclusion that rather than consciousness emerging from the brain, it had to be the other way around: All experience, and indeed the body, brain and even the physical universe, emerge from consciousness. I had discovered that consciousness is a gateway to a sourceless, deep and endless wellspring of mysteries. And more importantly, I had found what I thought would be conclusive evidence that would finally convince my father that I was right.

But when I tried to relate these realizations to my father, he was entirely unconvinced. He argued that my experiences were not really objective, and that consciousness is an epiphenomenon of the brain; a wonderful side-effect, a remarkable illusion that nonetheless could be reduced to neurochemistry and atoms. I countered that in the special case of consciousness, subjective observations could in fact be objective, under the right circumstances. I claimed that it was possible to scientifically and objectively observe consciousness by looking at it under the microscope of carefully trained meditation. But he cast doubts on these claims, citing numerous examples from psychology and neuroscience.

So I tried many other arguments. I cited the work of philosophers like John Searle who provided many illustrations of how conscious experiences could not be reduced to the brain or any kind of machine. I used lines of reasoning from Buddhist logic. I even cited recent findings in quantum theory that seem to imply that the act of conscious observation interacts with experimental results. But all of these arguments failed to convince my father that consciousness was fundamental or irreducible. He remained a skeptic and I felt invalidated. And so I strived even harder to find a way to map my experiences to his worldview, so I could finally prove the scientific foundations for my experience and belief in divinity to him.

This ongoing debate between my father and me — between science and religion — was not unique to us; it had been going on for millennia, and had yielded many great works of both science and art. Our conversations were often frustrating and ended in exhaustion and exasperation, but we also sensed that somehow we were getting somewhere, if not mutually, then at least as individuals. We were foils to one another, worthy opponents. Like many who had come before us, the dialectical process of trying to convince one another of our conflicting views of reality caused us to generate volumes of new writing, theories, inventions, and ideas we could not have arrived at on our own.

Nevertheless, despite my father’s strong rebukes of superstitious belief systems, and his skepticism towards my Buddhist beliefs, he was in fact a deeply spiritual man, in a very human, unembellished way. His spirituality was not tied to any system or institution — it was natural and basic: it was how he lived and the ideals he lived by: Love, Science, and Art. His spirituality was not about words, it was about actions. He expressed it in his art, his good deeds, his compassion, his joyful creativity, and his ability to love and be loved.

What I failed to see was that my father’s spirituality was immensely humble. So humble that he would not even claim to be spiritual, and certainly wouldn’t go so far as to conceptualize it. Instead, he was simply a truly good man, a mensch. While I continued to try new tactics in my campaign to convince him, and as I judged him as closed-minded and non-spiritual, he was in fact actually living my spiritual ideals better than I could understand at the time. But, not realizing this, I was certain he was missing out on something of vital importance, something that I had to convince him of before he died. And so our debate continued.

Then, in the last few months of my father’s life, we were finally able to bridge this divide. As his illness progressed, his wife called me and urged me to visit before it was too late. “He’s really getting worse, and I want you to have a chance to be together while he’s still strong enough,” she said. And so I flew to Boston and we resumed the debate.

Perhaps it was our mutual sense that time was running out, or perhaps it was that we had both exhausted all our prior arguments, but this time we reached a level of discourse that was essentially mathematical in nature: pure logic, pure set theory. Without imposing the assumptions of either science or religion, we started anew from first principles, and through pure reason and observation we derived a new common language, on neutral ground. And with this in hand, we arrived at a single nondual phenomenology — at last we had arrived at the basic nature of reality.

When we finally reached the point of agreement and mutual understanding, after decades of debate, and we both witnessed the simultaneous unification and transcendence of our prior belief systems — we saw that we had always actually agreed on a deeper level. And on that December afternoon, as we sketched out the full picture together, in a way that neither of us had done before on our own, we both breathed a sigh of relief. It was an incredibly cathartic moment for both of us.

At the conclusion of our decades-long debate, we sat quietly together, just being in that understanding — a meditation on awareness and knowledge, on physics, time and space — on our mutual respect for the immensity and majesty of the universe. I will always treasure that time.

The day after that experience, before I left to return to California, I sat by my father’s bed. He was almost unable to walk at this point. As I said goodbye, thinking I might never see him again, I said, “Don’t forget what we discovered together, it is the highest realization.” He replied, “There is still one more realization that is higher.” Surprised, I asked him, “What?” He answered, “To live it!”

About a month later his wife called again. “He’s dying,” she said, “come back as soon as you can.” The cancer had advanced unexpectedly fast, and so I flew back to be with him one last time.

I stayed by his side, looking into his eyes, talking to him, even though he had lost the ability to move or speak. His eyes smiled back. My brother and I kept telling him, as he labored to breathe for the final two days, “It’s ok to go now, you can let go, we love you, we’ll be ok, we’ll take care of each other.” But his drive to love and protect us all was so strong. He wasn’t ready to go. Even while in the depths of his own suffering, he was still filled with compassion, he was worried about what would happen to all of us. It was noble and beautiful to witness.

We played him the music he loved, the music he played for us as we grew up. We laughed and told him our memories and stories of him. We stroked his hair and his beard and tried to make him as comfortable as possible as he lay there, struggling, and probably frustrated that he couldn’t communicate, and at times in terrible pain. Yet through great effort he still found ways to let us know he heard us, loved us, and was still conscious.

As his breathing changed and we saw the signs of death advancing further through his body, he maintained his clarity and brilliance and even got brighter — we could feel his heart, and see his kind and intelligent spirit in his eyes. He tried to speak to us by making what little sound he could and moving his eyebrows in response to us. “Remember what we talked about, what we realized,” I said to him over and over, and I could see he was living it.

Finally, on the evening of February 12, 2011, he let go and died peacefully in his wife’s arms as she sang to him gently. All of us felt at that moment an incredible, all-embracing, boundless love and bliss, even as we grieved. It was him. My father, Mayer Spivack. Our Buddha. He went into Love.

Video: My Talk on the Evolution of the Global Brain at the Singularity Summit

If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.

(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).

Fast Company Interview — "Connective Intelligence"

In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.

A Universal Classification of Intelligence

I’ve been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system — whether naturally occurring or synthetic — can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.

One approach to such evaluation is to use a standardized test, such as an IQ test. However, IQ tests are far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ tests, but that doesn’t mean much, because the tests themselves are geared towards humans. What is needed is a way to evaluate and compare intelligence across different species — one that is much more granular and basic.

What we need is a system that focuses on basic building blocks of intelligence, starting by measuring the presence or ability to work with fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).

What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system — whether insect, animal, human, software, or alien — in a normalized manner.

Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: It will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence — an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map — and the rating scale I am calling for would provide us with one, for it maps the range of capabilities that intelligent systems can possess.

I’m not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra, for example, or how well it can recall memories) — but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are — and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks of which all cognitive behavior, and therefore all minds, are composed.
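
To make this concrete, here is a toy sketch in Python of what a capability-based rating system might look like. The capability names and the two profiles below are hypothetical placeholders of my own invention, not an established taxonomy; a real system would need far more granular constructs.

```python
# Illustrative sketch only: a made-up "capability map" and a profile
# comparison helper. Capability names are hypothetical placeholders.

# Ordered roughly from fundamental constructs to higher abstractions.
CAPABILITY_MAP = [
    "object_constancy",
    "quantity",
    "basic_arithmetic",
    "self_construct",
    "spatial_reasoning",
    "temporal_reasoning",
    "metaphor",
    "language",
    "induction",
    "logical_reasoning",
]

def profile(system_capabilities):
    """Return a normalized 0/1 profile over the shared capability map."""
    return {cap: int(cap in system_capabilities) for cap in CAPABILITY_MAP}

def compare(a, b):
    """List capabilities present in system a but not b, and vice versa."""
    pa, pb = profile(a), profile(b)
    only_a = [c for c in CAPABILITY_MAP if pa[c] and not pb[c]]
    only_b = [c for c in CAPABILITY_MAP if pb[c] and not pa[c]]
    return only_a, only_b

# Hypothetical example profiles (not empirical claims about either species).
human = set(CAPABILITY_MAP)
honeybee = {"object_constancy", "quantity", "spatial_reasoning"}

only_human, only_bee = compare(human, honeybee)
print(only_human)  # capabilities in this human profile but not the bee's
```

Comparing profiles this way describes what a system can and cannot do, rather than collapsing everything into a single human-biased score.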

So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question — although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.

Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?

I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth — including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well — what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?

It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some other means of more direct contact, we will reap important rewards. A system such as what I am proposing would provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with the alien civilizations we may encounter as we spread throughout the galaxy. While some skeptics may claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but if they exist, it is inevitable that we will eventually cross paths. Not to be prepared would be irresponsible.

Artificial Stupidity: The Next Big Thing

There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don’t need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I’m skeptical to say the least. I don’t need or want artificial intelligence.

No, what I really need is artificial stupidity.

I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks — like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
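
As a sketch of what such "artificial stupidity" might look like in code, here is a minimal rule-based mail filer in Python. The rules, folder names, and message format are all invented for illustration; a real system would learn its rules rather than hard-code them.

```python
# A minimal sketch of "artificial stupidity": a dumb, rule-based mail
# filer. Rules and folder names are hypothetical examples.

RULES = [
    # (predicate over a message, destination folder)
    (lambda m: "unsubscribe" in m["body"].lower(), "Newsletters"),
    (lambda m: "invoice" in m["subject"].lower(), "Receipts"),
    (lambda m: m["sender"].endswith("@calendar.example.com"), "Scheduling"),
]

def file_message(message, default_folder="Inbox"):
    """Return the folder for a message, using the first matching rule."""
    for matches, folder in RULES:
        if matches(message):
            return folder
    return default_folder

msg = {"sender": "billing@vendor.example.com",
       "subject": "Your invoice for March",
       "body": "Please find your invoice attached."}
print(file_message(msg))  # "Receipts"
```

Nothing here is intelligent, and that is the point: it is a rote decision a person should never have to make by hand.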

The human brain is the result of millions of years of evolution. It’s already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don’t require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it’s going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.

The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don’t mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren’t good at." In fact humans are really bad at doing relatively simple, "stupid" things — tasks that don’t require much intelligence at all.

For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That’s what computers are for, or at least should be for.

Humans are really good at higher level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving — but we are just terrible at managing email, or making sense of the Web. Let’s play to our strengths and use computers to compensate for our weaknesses.

I think it’s time we stop talking about artificial intelligence — which nobody really needs, and few will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.

Radar Networks Announces Twine.com

My company, Radar Networks, has just come out of stealth. We’ve announced what we’ve been working on all these years: It’s called Twine.com. We’re going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There’s lots of press coming out where you can read about what we’re doing in more detail. The team is extremely psyched and we’re all working really hard right now, so I’ll be brief. I’ll write a lot more about this later.


Virtual Out of Body Experiences

A very cool experiment in virtual reality has shown it is possible to trick the mind into identifying with a virtual body:

Through these goggles, the volunteers could see a camera view of their own back – a three-dimensional "virtual own body" that appeared to be standing in front of them.

When the researchers stroked the back of the volunteer with a pen, the volunteer could see their virtual back being stroked either simultaneously or with a time lag.

The volunteers reported that the sensation seemed to be caused by the pen on their virtual back, rather than their real back, making them feel as if the virtual body was their own rather than a hologram.

Even when the camera was switched to film the back of a mannequin being stroked rather than their own back, the volunteers still reported feeling as if the virtual mannequin body was their own.

And when the researchers switched off the goggles, guided the volunteers back a few paces, and then asked them to walk back to where they had been standing, the volunteers overshot the target, returning nearer to the position of their "virtual self".

This has implications for next-generation video games and virtual reality. It also has interesting implications for consciousness studies in general.


Axons Process Information

I just heard about a very interesting new discovery in neuroscience. The basic gist is that it appears that axons process information. Until now it has been thought that only the cell body of the neuron processed information. Our present understanding of the brain, and also of psychopharmacology, is based completely on the dendrites and main body of the neuron. If it turns out that axons — the "wires" that connect neurons — are actually major contributors to how the brain computes, then it may point to both a new understanding of cognition and a new frontier in treating mental and neurological disorders. (Thanks to Bram for letting me know.)

Knowledge Networking

I’ve been thinking for several years about Knowledge Networking. It’s not a term I invented, it’s been floating around as a meme for at least a decade or two. But recently it has started to resurface in my own work.

So what is a knowledge network? I define a knowledge network as a form of collective intelligence in which a network of people (two or more people connected by social-communication relationships) creates, organizes, and uses a collective body of knowledge. The key here is that a knowledge network is not merely a site where a group of people work on a body of information together (such as Wikipedia); it is also a social network — there is an explicit representation of social relationships within it. So it’s more like a social network than, for example, a discussion forum or a wiki.

I would go so far as to say that knowledge networks are the third generation of social software. (Note: this is based in part on ideas that emerged in conversations I have had with Peter Rip, so this is also his idea):

  • First-generation social apps were about communication (e.g., messaging such as email, discussion boards, chat rooms, and IM)
  • Second-generation social apps were about people and content (e.g., social networks, social media sharing, user-generated content)
  • Third-generation social apps are about relationships and knowledge (e.g., wikis, referral networks, question and answer systems, social recommendation systems, vertical knowledge and expertise portals, social mashup apps, and coming soon, what we’re building at Radar Networks)
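
A minimal sketch of this structure in Python, assuming a toy in-memory graph (all names hypothetical): a knowledge network couples an explicit social graph with a shared body of knowledge, which is what distinguishes it from a plain wiki or forum.

```python
# Toy sketch of a knowledge network: people linked by explicit social
# relationships, plus a collective body of knowledge they contribute to.
from collections import defaultdict

class KnowledgeNetwork:
    def __init__(self):
        self.friends = defaultdict(set)        # person -> people they know
        self.contributions = defaultdict(set)  # person -> knowledge items

    def connect(self, a, b):
        """Record an explicit social relationship (the part a plain wiki lacks)."""
        self.friends[a].add(b)
        self.friends[b].add(a)

    def contribute(self, person, item):
        """Add an item to the collective body of knowledge, attributed to a person."""
        self.contributions[person].add(item)

    def knowledge_from_network(self, person):
        """Knowledge contributed by a person's direct social connections."""
        items = set()
        for friend in self.friends[person]:
            items |= self.contributions[friend]
        return items

kn = KnowledgeNetwork()
kn.connect("alice", "bob")
kn.contribute("bob", "notes-on-rdf")
print(kn.knowledge_from_network("alice"))  # {'notes-on-rdf'}
```

Because the social relationships are explicit, the knowledge can be filtered and routed through them, rather than sitting in an undifferentiated pile.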

Just some thoughts on a Saturday morning…

Enriching the Connections of the Web — Making the Web Smarter

Web 3.0 — aka The Semantic Web — is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.

I believe that collective intelligence primarily comes from connections — this is certainly the case in the brain, where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain’s connections than in the neurons alone. There are several kinds of connections on the Web:

  1. Connections between information (such as links)
  2. Connections between people (such as opt-in social relationships, buddy lists, etc.)
  3. Connections between applications (web services, mashups, client server sessions, etc.)
  4. Connections between information and people (personal data collections, blogs, social bookmarking, search results, etc.)
  5. Connections between information and applications (databases and data sets stored or accessible by particular apps)
  6. Connections between people and applications (user accounts, preferences, cookies, etc.)

Are there other kinds of connections that I haven’t listed? Please let me know!

I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.

In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of a statement, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It’s a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
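
As a rough illustration, here is how such an extended statement might be modeled in plain Python. The URIs are invented examples, and a real system would use an RDF store rather than dictionaries; this only shows the shape of the data.

```python
# Sketch of the statement model described above: (subject, predicate,
# object) extended with provenance and timestamp metadata.
# All URIs are hypothetical examples.
from datetime import datetime, timezone

def make_statement(subject, predicate, obj, provenance, timestamp=None):
    """Build an extended statement ("tuple") with provenance metadata."""
    return {
        "subject": subject,
        "predicate": predicate,
        "object": obj,
        "provenance": provenance,  # where the data for the statement came from
        "timestamp": timestamp or datetime.now(timezone.utc).isoformat(),
    }

stmt = make_statement(
    "http://example.org/people/sue",
    "http://example.org/ontology#employeeOf",
    "http://example.org/companies/acme",
    provenance="http://example.org/sources/hr-database",
)
print(stmt["predicate"])
```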

The important point for this article, however, is the range of connection types this data model supports. The present Web basically provides just a single type of connection: the HREF hotlink, which simply means "A and B are linked" and may carry minimal metadata in some cases. The Semantic Web, by contrast, enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.

For example one might define a type of connection called "friend of" or a type of connection called "employee of" — these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
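
The difference between a generic hotlink and a typed link can be sketched in a few lines of Python. The predicate URIs below are hypothetical ontology terms; the point is that an application which understands the "employee of" predicate can answer a precise question, whereas a generic link only says that two pages are related somehow.

```python
# Toy contrast between a generic hotlink and a typed, machine-
# interpretable link. Predicate URIs are hypothetical ontology terms.

EMPLOYEE_OF = "http://example.org/ontology#employeeOf"
KNOWS = "http://example.org/ontology#knows"

statements = [
    ("http://example.org/people/sue", EMPLOYEE_OF, "http://example.org/companies/acme"),
    ("http://example.org/people/sue", KNOWS, "http://example.org/people/bob"),
]

def employers_of(person, stmts):
    """Any app that understands EMPLOYEE_OF can answer this precisely;
    with generic hotlinks it could only say the pages are 'linked'."""
    return [o for s, p, o in stmts if s == person and p == EMPLOYEE_OF]

print(employers_of("http://example.org/people/sue", statements))
```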

This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It’s a new place to put meaning in fact — you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole — the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).

Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood — it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.

It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web and Web 3.0 are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.

Now where will all these rich semantic connections come from? That’s the billion dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications that mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people — for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" — far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.

These are subtle points that are very hard for non-specialists to see — without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!

Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I’m saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.

Listen to this Discussion on the Future of the Web

If you are interested in the future of the Web, you might enjoy listening to this interview with me, moderated by Dr. Paul Miller of Talis. We discuss, in-depth: the Semantic Web, Web 3.0, SPARQL, collective intelligence, knowledge management, the future of search, triplestores, and Radar Networks.

A Bunch of New Press About Radar Networks

We had a bunch of press hits today for my startup, Radar Networks:

PC World article on Web 3.0 and Radar Networks

Entrepreneur Magazine interview

We’re also proud to announce that Jim Hendler, one of the founding gurus of the Semantic Web, has joined our technical advisory board.

Breaking the Collective IQ Barrier — Making Groups Smarter

I’ve been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call “The Collective IQ Barrier.” Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.

In a nutshell, here is how I define this barrier:

The Collective IQ Barrier: The potential collective intelligence of a human group is exponentially proportional to group size; however, in practice the actual collective intelligence that is achieved by a group is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
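
One way to visualize the barrier is with a toy model. The formulas below are purely illustrative assumptions, not measurements: potential collective intelligence is modeled via the 2^n possible knowledge-combining subgroups of n members, while actual collective intelligence is modeled as inversely proportional to group size, per the definition above.

```python
# Purely illustrative model of the Collective IQ Barrier.
# Both formulas are hypothetical, chosen only to make the gap visible.

def potential_collective_iq(n, individual_iq=1.0):
    """'Exponentially proportional to group size': here, proxied by the
    2^n possible subgroups that could combine their knowledge."""
    return individual_iq * (2 ** n)

def actual_collective_iq(n, individual_iq=1.0):
    """'Inversely proportional to group size' in practice (toy assumption)."""
    return individual_iq / n

for n in (5, 20, 50):
    # The delta between potential and actual explodes as the group grows.
    print(n, potential_collective_iq(n), round(actual_collective_iq(n), 3))
```

Whatever the true functional forms are, the qualitative picture is the same: the gap between what a group could achieve and what it does achieve widens rapidly with size.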

Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?

I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.

The Effective Size of Groups

For millions of years — in fact since the dawn of humanity — human social organizations have been limited in effective size. Groups are most effective when they are small, but they have less collective knowledge at their disposal. Slightly larger groups optimize both effectiveness and access to resources such as knowledge and expertise. In my own experience working on many different kinds of teams, I think that the sweet spot is between 20 and 50 people. Above this size groups rapidly become inefficient and unproductive.

The Invention of Hierarchy

The solution that humans have used to get around this limitation in the effective size of groups is hierarchy. When organizations grow beyond 50 people we start to break them into sub-organizations of less than 50 people. As a result, if you look at any large organization, such as a Fortune 100 corporation, you find a huge complex hierarchy of nested organizations and cross-functional organizations. This hierarchy enables the organization to create specialized "cells" or "organs" of collective cognition around particular domains (like sales, marketing, engineering, HR, strategy, etc.) that remain effective despite the overall size of the organization.

By leveraging hierarchy, an organization of even hundreds of thousands of members can still achieve some level of collective IQ as a whole. The problem, however, is that the collective IQ of the whole organization is still quite a bit lower than the combined collective IQs of the sub-organizations that comprise it. Even in well-structured, well-managed hierarchies, the hierarchy is still less than the sum of its parts. Hierarchy also has limits — the collective IQ of an organization is also inversely proportional to the number of groups it contains, and the average number of levels of hierarchy between those groups. (Perhaps this could be defined more elegantly as an inverse function of the average network distance between groups in an organization.)

The reason that organizations today still have to make such extensive use of hierarchy is that our technologies for managing collaboration, community, knowledge and intelligence on a collective scale are still extremely primitive. Hierarchy is still one of the only, and best, solutions we have at our disposal. But we’re getting better fast.

Modern organizations are larger and far more complex than ever would have been practical in the Middle Ages, for example. They contain more people, distributed more widely around the globe, with more collaboration and specialization, and more information, making more rapid decisions, than was possible even 100 years ago. This is progress.

Enabling Technologies

There have been several key technologies that made modern organizations possible: the printing press, telegraph, telephone, automobile, airplane, typewriter, radio, television, fax machine, and personal computer. These technologies have enabled information and materials to flow more rapidly, at less cost, across ever more widely distributed organizations. So we can see that technology does make a big difference in organizational productivity. The question is, can technology get us beyond the Collective IQ Barrier?

The advent of the Internet, and in particular the World Wide Web, enabled a big leap forward in collective intelligence. These technologies have further reduced the cost of distributing and accessing information and information products (and even "machines" in the form of software code and Web services). They have made it possible for collective intelligence to function more rapidly, more dynamically, on a wider scale, and at less cost, than any previous generation of technology.

As a result of the evolution of the Web we have seen new organizational structures begin to emerge that are less hierarchical, more distributed, and often more fluid. For example, virtual teams that can instantly form, collaborate across boundaries, and then dissolve back into the Webs they come from when their job is finished. This process is now much easier than it ever was. Numerous hosted Web-based tools exist to facilitate this: email, groupware, wikis, message boards, list servers, weblogs, hosted databases, social networks, search portals, enterprise portals, etc.

But this is still just the cusp of this trend. Even today, with the current generation of Web-based tools available to us, we are still not able to effectively tap much more of the potential Collective IQ of our groups, teams and communities. How do we get from where we are today (the whole is dumber than the sum of its parts) to where we want to be in the future (the whole is smarter than the sum of its parts)?

The Future of Productivity

The diagram below illustrates how I think about the past, present and future of productivity. In my view, from the advent of PCs onwards we have seen a rapid growth in individual and group productivity, enabling people to work with larger sets of information, in larger groups. But this will not last — as soon as we reach a critical level of information volume and group size, productivity will start to decline again, unless new technologies and tools emerge that enable us to cope with these increases in scale and complexity. You can read more about this diagram here.

http://novaspivack.typepad.com/nova_spivacks_weblog/2007/02/steps_towards_a.html

In the last 20 years the amount of information that knowledge workers (and even consumers) have to deal with on a daily basis has mushroomed by almost 10 orders of magnitude, and it will continue like this for several more decades. But our information tools — and in particular our tools for communication, collaboration, community, commerce and knowledge management — have not advanced nearly as quickly. As a result, the tools that we are using today to manage our information and interactions are grossly inadequate for the task at hand: They were simply not designed to handle the tremendous volumes of distributed information, and the rate of change of information, that we are witnessing today.

Case in point: Email. Email was never designed for what it is being used for today. Email was a simple interpersonal notification and messaging tool, and essentially that is what it is good for. But today most of us use our email as a kind of database, search engine, collaboration tool, knowledge management tool, project management tool, community tool, commerce tool, content distribution tool, etc. Email wasn’t designed for these functions, and it really isn’t very productive when applied to them.

For groups the email problem is even worse than it is for individuals: not only is each individual’s email productivity declining, but as group size increases (and thus group information size increases as well), there is a multiplier effect that further reduces everyone’s email productivity in inverse proportion to the size of the group. Email becomes increasingly unproductive as group size and information size increase.
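
A toy model makes the multiplier effect concrete. Assuming, purely hypothetically, that each member sends a fixed number of group-wide messages per day, the per-person reading load grows with group size until it consumes the whole workday:

```python
# Toy model of the group email multiplier effect. All numbers are
# invented assumptions for illustration, not measurements.

def daily_inbox_load(group_size, messages_per_member=5):
    """Messages each member must process per day if every other member
    sends messages_per_member group-wide messages."""
    return (group_size - 1) * messages_per_member

def productive_fraction(group_size, minutes_per_message=2, workday_minutes=480):
    """Fraction of the workday left after processing group email (floored at 0)."""
    email_minutes = daily_inbox_load(group_size) * minutes_per_message
    return max(0.0, (workday_minutes - email_minutes) / workday_minutes)

for n in (5, 20, 50):
    # Load rises linearly with group size; productive time collapses.
    print(n, daily_inbox_load(n), round(productive_fraction(n), 2))
```

Under these assumptions a 50-person group spends the entire workday on group email alone, which is the breakdown the paragraph above describes.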

This is not just true of email, however; it’s true of almost all the information tools we use today: search engines, wikis, groupware, social networks, etc. They all suffer from this fundamental problem: productivity breaks down with scale, and for groups and organizations the problem is exponentially worse than it is for individuals. But scale is increasing incessantly — that is a fact — and it will continue to do so for decades at least. Unless something is done about this, we will simply be buried in our own information within about a decade.

The Semantic Web

I think the Semantic Web is a critical enabling technology that will help us get through this transition. It will enable the next big leap in productivity and collective intelligence. It may even be the technology that enables humans to flip the ratio so that, for the first time in human history, larger groups of people can function more productively and intelligently than smaller groups. It all comes down to enabling individuals and groups to maintain (and ultimately improve) their productivity in the face of the continuing explosion in information and social complexity that they are experiencing.

The Semantic Web provides a richer underlying fabric for expressing, sharing, and connecting information. Essentially it provides a better way to transform information into useful knowledge, and to share and collaborate with it. In effect it upgrades the medium — in this case the Web and any other data that is connected to the Web — that we use for our information today.

By enriching the medium we can in turn enable new leaps in how applications, people, groups and organizations can function. This has happened many times before in the history of technology. The printing press is one example. The Web is a more recent one. The Web enriched the medium (documents) with HTML and a new transport mechanism, HTTP, for sharing it. This brought about one of the largest leaps in human collective cognition and productivity in history. But HTML really only describes formatting and links. XML came next, to start to provide a way to enrich the medium with information about structure — the parts of documents. The Semantic Web takes this one step further: it provides a way to enrich the medium with information about the meaning of the structure — what are those parts, and what do various links actually mean?

Essentially the Semantic Web provides a means to abstract and externalize human knowledge about information. Previously, the meaning of information lived only in our heads, and perhaps in certain specially written software applications that were coded to understand certain types of data. The Semantic Web will disrupt this situation by providing open standards for encoding this meaning right into the medium itself. Any application that can speak the open standards of the Semantic Web can then begin to correctly interpret the meaning of information, and treat it accordingly, without having to be specifically coded to understand each type of data it might encounter.

This is analogous to the benefit of HTML. Before HTML, every application had to be specifically coded for each different document format in order to display it. After HTML, applications could all just standardize on a single way to define the formats of different documents. Suddenly a huge new landscape of information became accessible, both to applications and to the people who used them. The Semantic Web does something similar: it provides a way to make the data itself “smarter,” so that applications don’t have to know so much to correctly interpret it. Any data structure — a document or a data record of any kind — that can be marked up with HTML to define its formatting can also be marked up with RDF and OWL (the languages of the Semantic Web) to define its meaning.
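To make the idea concrete, here is a toy sketch in plain Python (with hypothetical names standing in for a real RDF vocabulary) of how meaning can be attached to a data record as subject-predicate-object triples, so that a generic program can interpret the record without being coded for that type of data:

```python
# Toy triple store: meaning attached to a record as
# subject-predicate-object triples. All names (ex:Recipe, etc.)
# are hypothetical stand-ins for a real RDF vocabulary.
triples = {
    ("doc42", "rdf:type",  "ex:Recipe"),       # what kind of document it is
    ("doc42", "ex:about",  "ex:ItalianFood"),  # what it is about
    ("doc42", "ex:author", "M. Rossi"),        # a plain data field
}

def objects(subject, predicate):
    """Generic lookup: knows nothing about recipes in particular."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("doc42", "rdf:type"))  # {'ex:Recipe'}
```

In practice this markup would be expressed in RDF and OWL rather than Python tuples; the point is that the lookup logic contains nothing specific to recipes, yet can still answer typed questions about the record.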

Once semantic metadata is added, the document can not only be displayed properly by any application (thanks to HTML and XML), it can also be correctly understood by that application. For example, the application can understand what kind of document it is, what it is about, what its parts are, how the document relates to other things, and what particular data fields and values mean and how they map to data fields and values in other data records around the Web.

The Semantic Web enriches information with knowledge about what that information means, what it is for, and how it relates to other things. With this in hand, applications can go far beyond the limitations of keyword search, text processing, and brittle tabular data structures. Applications can start to do a much better job of finding, organizing, filtering, integrating, and making sense of ever larger and more complex distributed data sets around the Web.

Another great benefit of the Semantic Web is that this additional metadata can be added in a totally distributed fashion. The publisher of a document can add their own metadata, and other parties can then annotate that with their own metadata. Even HTML doesn’t enable that level of cooperative markup (except perhaps in wikis). It takes a distributed solution to keep up with a highly distributed problem (the Web), and the Semantic Web is just such a distributed solution.

The Semantic Web will enrich information and this in turn will enable people, groups and applications to work with information more productively. In particular groups and organizations will benefit the most because that is where the problems of information overload and complexity are the worst. Individuals at least know how they organize their own information so they can do a reasonably good job of managing their own data. But groups are another story — because people don’t necessarily know how others in their group organize their information. Finding what you need in other people’s information is much harder than finding it in your own.

Where the Semantic Web can help with this is by providing a richer fabric for knowledge management. Information can be connected to an underlying ontology that defines not only the types of information available, but also the meaning and relationships between different tags or subject categories, and even the concepts that occur in the information itself. This makes organizing and finding group knowledge easier. In fact, eventually the hope is that people and groups will not have to organize their information manually anymore — it will happen in an almost fully-automatic fashion. The Semantic Web provides the necessary frameworks for making this possible.

But even with the Semantic Web in place and widely adopted, more innovation on top of it will be necessary before we can truly break past the Collective IQ Barrier such that organizations can in practice achieve exponential increases in Collective IQ. Human beings are only able to cope with a few chunks of information at a given moment, and our memories and ability to process complex data sets are limited. When group size and data size grow beyond certain limits, we simply cannot cope; we become overloaded and jammed, even with rich Semantic Web content at our disposal.

Social Filtering and Social Networking — Collective Cognition

Ultimately, to remain productive in the face of such complexity we will need help. Humans in roles that require them to cope with large amounts of information, relationships and complexity often hire assistants, but not all of us can afford to do that, and in some cases even assistants are not able to keep up with the complexity that has to be managed.

Social networking and social filtering are two ways to expand the number of “assistants” we each have access to, while also reducing the price of harnessing the collective intelligence of those assistants to just about nothing. Essentially these methodologies enable people to leverage the combined intelligence and attention of large communities of like-minded people who contribute their knowledge and expertise for free. It’s a collective, tit-for-tat form of altruism.

For example, Digg is a community that discovers the most interesting news articles. It does this by enabling thousands of people to submit articles and vote on them. What Digg adds is a few clever algorithms on top of this for ranking articles, so that the most active ones bubble up to the top. It’s not unlike a stock market trader’s terminal, but for a completely different class of data. This is a great example of social filtering.
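As a rough illustration (this is not Digg’s actual algorithm, just a sketch of the general idea), a social filter of this kind can be modeled as vote counts discounted by age, so that fresh, active items bubble to the top. The half-life constant here is an arbitrary assumption:

```python
import math

def score(votes, age_hours, half_life=12.0):
    """Vote count decayed exponentially with age (the half-life
    value is an arbitrary assumption for illustration)."""
    return votes * math.exp(-math.log(2) * age_hours / half_life)

articles = [
    ("old favorite", 120, 48),   # many votes, but two days old
    ("breaking story", 40, 2),   # fewer votes, very fresh
]
ranked = sorted(articles, key=lambda a: score(a[1], a[2]), reverse=True)
print([title for title, _, _ in ranked])  # ['breaking story', 'old favorite']
```

Tuning the half-life trades freshness against accumulated community judgment, which is exactly the kind of knob such a service would adjust over time.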

Another good example is prediction markets, where groups of people vote on which stock or movie or politician is likely to win — in some cases by buying virtual stock in them — as a means of predicting the future. It has in fact been shown that prediction markets do a pretty good job of making accurate predictions. In addition, expertise-referral services help people get answers to questions from communities of experts. These services have been around in one form or another for decades and have recently come back into vogue with services like Yahoo Answers. Amazon has also taken a stab at this with its Amazon Mechanical Turk, which enables “programs” to be constructed in which people perform the work.

I think social networking, social filtering, prediction markets, expertise-referral networks, and collective collaboration are extremely valuable. By leveraging other people, individuals and groups can stay ahead of complexity and can also get the benefit of wide-area collective cognition. These approaches to collective cognition are beginning to filter into the processes of organizations and other communities. For example, there is recent interest in applying social networking to niche communities and even enterprises.

The Semantic Web will enrich all of these activities, making social networks and social filtering more productive. It’s not an either/or choice — these technologies are in fact extremely compatible. By leveraging a community to tag, classify and organize content, for example, the meaning of that content can be collectively enriched. This is already happening in a primitive way in many social media services. The Semantic Web will simply provide a richer framework for doing it.

The combination of the Semantic Web with emerging social networking and social filtering will enable something greater than either on its own. Together, these two technologies will enable much smarter groups, social networks, communities and organizations. But this still will not get us all the way past the Collective IQ Barrier. It may get us close to the threshold, though. To cross the threshold we will need to enable an even more powerful form of collective cognition.

The Agent Web

To cope with the enormous future scale and complexity of the Web, the desktop and the enterprise, each individual and group will really need not just a single assistant, or even a community of human assistants working on common information (a social filtering community, for example); they will need thousands or millions of assistants working specifically for them. This really only becomes affordable and feasible if we can virtualize what an “assistant” is.

Human assistants are at the top of the intelligence pyramid — they are extremely smart, powerful, and expensive — and they should not be used for simple tasks like sorting content; that’s just a waste of their capabilities. It would be like using a supercomputer array to spellcheck a document. Instead, we need to free humans up to do the really high-value information tasks, and find a way to farm out the low-value, rote tasks to software. Software is cheap or even free, and it can be replicated as much as needed in order to parallelize. A virtual army of intelligent agents is less expensive than a single human assistant, and much better suited to sifting through millions of Web pages every day.

But where will these future intelligent agents get their intelligence? In past attempts at artificial intelligence, researchers tried to build gigantic expert systems that could, for example, reason as well as a small child. These attempts met with varying degrees of success, but they all had one thing in common: they were monolithic applications.

I believe that future intelligent agents should be simple. They should not be advanced AI programs or expert systems. They should be capable of a few simple behaviors, the most important of which is reasoning against sets of rules and semantic data. The basic logic necessary for reasoning is not enormous and does not require any AI — it’s just the ability to follow logical rules and perhaps do set operations. Agents should be lightweight and highly mobile. Instead of vast, monolithic AI, I am talking about vast numbers of very simple agents that, working together, can perform emergent, intelligent operations en masse.
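A minimal sketch of such a simple behavior, assuming a toy rule format of my own invention (a set of premises implying a conclusion), is a few lines of forward-chaining logic. The facts and rules live in data, not in the agent’s code:

```python
def forward_chain(facts, rules):
    """Apply rules of the form (set_of_premises, conclusion)
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules an agent might fetch from the Web as data:
rules = [
    ({"site:about-italy", "page:lists-ingredients"}, "page:is-recipe"),
    ({"page:is-recipe", "user:wants-recipes"}, "action:collect-page"),
]
derived = forward_chain(
    {"site:about-italy", "page:lists-ingredients", "user:wants-recipes"},
    rules)
print("action:collect-page" in derived)  # True
```

Everything domain-specific here is replaceable data; the agent itself is nothing more than the dozen-line loop at the top.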

Take search, for example: you might deploy a thousand agents to search all the sites about Italy for recipes and then assemble those results into a database almost instantaneously. Or you might dispatch a thousand or more agents to watch for a job that matches your skills and goals across hundreds of thousands or millions of websites. They could watch and wait until jobs that matched your criteria appeared, and then negotiate amongst themselves to determine which of the possible jobs they found were good enough to show you. Another scenario might be commerce: you could dispatch agents to find you the best deal on a vacation package, and they could even negotiate an optimal itinerary and price for you. All you would have to do is choose between a few finalist vacation packages and make the payment. This could be a big timesaver.

The above examples illustrate how agents might help an individual, but how might they help a group or organization? Well, for one thing agents could continuously organize and re-organize information for a group. They could also broker social interactions — for example, by connecting people to other people with matching needs or interests, or by helping people find experts who could answer their questions. One of the biggest obstacles to getting past the Collective IQ Barrier is simply that people cannot keep track of more than a few social relationships and information sources at any given time — but with an army of agents helping them, individuals might be able to cope with more relationships and data sources at once; the agents would act as their filters, deciding what to let through and how much priority to give it. Agents could also help to make recommendations, and learn to facilitate and even automate various processes such as finding a time to meet, polling to make a decision, or escalating an issue up or down the chain of command until it is resolved.

To make intelligent agents useful, they will need access to domain expertise. But the agents themselves will not contain any knowledge or intelligence of their own. The knowledge will exist outside on the Semantic Web, and so will the intelligence. Their intelligence, like their knowledge, will be externalized and virtualized in the form of axioms or rules that will exist out on the Web just like web pages.

For example, a set of axioms about travel could be published to the Web in the form of a document that formally defined them. Any agent that needed to process travel-related content could reference these axioms in order to reason intelligently about travel, in the same way that it might reference an ontology about travel in order to interpret travel data structures. The application would not have to be specifically coded to know about travel — it could be a simple, generic agent — but whenever it encountered travel-related content it could call up the axioms about travel from the location on the Web where they were hosted, and suddenly reason like an expert travel agent. What’s great about this is that simple generic agents would be able to call up domain expertise on an as-needed basis for just about any domain they might encounter. Intelligence (the heuristics, algorithms and axioms that comprise expertise) would be as accessible as knowledge (the data and connections between ideas and information on the Web).

The axioms themselves would be created by human experts in various domains, and in some cases they might even be created or modified by agents as they learned from experience. These axioms might be provided for free as a public service, or as fee-based Web services via APIs that only paying agents could access.

The key is that this model is extremely scalable — millions or billions of axioms could be created, maintained, hosted, accessed, and evolved in a totally decentralized and parallel manner by thousands or even hundreds of thousands of experts all around the Web. Instead of a few monolithic expert systems, the Web as a whole would become a giant distributed system of experts. There might be varying degrees of quality among competing axiom sets available for any particular domain, and perhaps a ratings system could help to filter them over time. A sort of natural selection of axioms might take place as humans and applications rated the end results of reasoning with particular sets of axioms, and then fed these ratings back to the sources of this expertise, causing them to get more or less attention from other agents in the future. This process would be quite similar to the human-level forces of intellectual natural selection at work in fields of study where peer review and competition help to filter and rank ideas and their proponents.

Virtualizing Intelligence

What I have been describing is the virtualization of intelligence — making intelligence and expertise something that can be “published” to the Web and shared just like knowledge: just like an ontology, a document, a database, or a Web page. This is one of the long-term goals of the Semantic Web, and it is already starting now via new languages, such as SWRL, that are being proposed for defining and publishing axioms or rules to the Web. For example, “a non-biological parent of a person is their step-parent” is a simple axiom. Another axiom might be, “a child of a sibling of your parent is your cousin.” Using such axioms, an agent could make inferences and do simple reasoning about social relationships, for example.
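As an illustration, the cousin axiom above can be executed mechanically over a toy set of hypothetical family facts. A real agent would read the axiom from a SWRL document on the Web rather than have it hand-coded:

```python
# Hypothetical family facts: child -> set of parents, person -> siblings.
parents = {"bob": {"ann"}, "dave": {"carol"}}
siblings = {"ann": {"carol"}, "carol": {"ann"}}

def cousins(person):
    """A child of a sibling of your parent is your cousin."""
    result = set()
    for parent in parents.get(person, set()):
        for aunt_or_uncle in siblings.get(parent, set()):
            for child, their_parents in parents.items():
                if aunt_or_uncle in their_parents:
                    result.add(child)
    return result

print(cousins("bob"))  # {'dave'}
```

The function is a direct transcription of the axiom; swap in different facts and the same rule keeps working, which is the whole point of separating the rules from the data they run against.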

SWRL and other proposed rule languages provide potential open standards for defining rules and publishing them to the Web so that other applications can use them. By combining these rules with rich semantic data, applications can start to do intelligent things without actually containing any of the intelligence themselves. The intelligence — the rules and data — can live “out there” on the Web, outside the code of the various applications.

All the applications have to know how to do is find relevant rules, interpret them, and apply them. Even any reasoning that may be necessary can be virtualized into remotely accessible Web services, so applications don’t have to do that part themselves (although many may simply include open-source reasoners, in the same way that they include open-source databases or search engines today).

In other words, just as HTML enables any application to process and format any document on the Web, SWRL plus RDF/OWL may someday enable any application to reason about what a document discusses. Reasoning is the last frontier. By virtualizing reasoning — the axioms that experts use to reason about domains — we can really begin to store the building blocks of human intelligence and expertise on the Web in a universally accessible format. This, to me, is when the actual “Intelligent Web” (what I call Web 4.0) will emerge.

The value of this for groups and organizations is that they can start to distill their intelligence from individuals that comprise them into a more permanent and openly accessible form — axioms that live on the Web and can be accessed by everyone. For example, a technical support team for a product learns many facts and procedures related to their product over time. Currently this learning is stored as knowledge in some kind of tech support knowledgebase. But the expertise for how to find and apply this knowledge still resides mainly in the brains of the people who comprise the team itself.

The Semantic Web provides ways to enrich the knowledgebase as well as to start representing and saving the expertise that the people themselves hold in their heads, in the form of sets of axioms and procedures. By storing not just the knowledge but also the expertise about the product, the humans on the team don’t have to work as hard to solve problems — agents can actually start to reason about problems and suggest solutions based on past learning embodied in the common set of axioms. Of course this is easier said than done — but the technology at least exists in nascent form today. In a decade or more it will start to be practical to apply it.

Group Minds

Someday in the not-too-distant future, groups will be able to leverage hundreds or thousands of simple intelligent agents. These agents will work for them 24/7 to scour the Web, the desktop, the enterprise, and the other services and social networks they are related to. They will help both the individuals and the collectives as a whole. They will be our virtual digital assistants, always alert and looking for things that matter to us, finding patterns, learning on our behalf, reasoning intelligently, organizing our information, and then filtering it, visualizing it, summarizing it, and making recommendations to us so that we can see the Big Picture, drill in wherever we wish, and make decisions more productively.

Essentially these agents will give groups something like their own brains. Today the only brains in a group reside in the skulls of the people themselves. But in the future perhaps we will see these technologies enable groups to evolve their own meta-level intelligences: systems of agents reasoning on group expertise and knowledge.

This will be a fundamental leap to a new order of collective intelligence. For the first time, groups will literally have minds of their own — minds that transcend the mere sum of the individual human minds that comprise them. I call these systems “Group Minds,” and I think they are definitely coming. In fact there has been quite a bit of research on facilitating group collaboration with agents — for example in government agencies such as DARPA and in the military, where finding ways to help groups think more intelligently is often a matter of life and death.

The big win in a future where individuals and groups can leverage large communities of intelligent agents is that they will be better able to keep up with the explosive growth of information complexity and social complexity. As the saying goes, “it takes a village.” There is just too much information, and too many relationships, changing too fast — and this is only going to get more intense in the years to come. The only way to cope with such a distributed problem is a distributed solution.

Perhaps by 2030 it will not be uncommon for individuals and groups to maintain large numbers of virtual assistants — agents that help them keep abreast of the massively distributed, always growing and shifting information and social landscapes. When you really think about it, how else could we ever solve this? This is really the only practical long-term solution. Today it is still a bit of a pipe dream; we’re not there yet. The key, however, is that we are closer than we’ve ever been before.

Conclusions

The Semantic Web provides the key enabling technology for all of this to happen someday in the future. By enriching the content of the Web, it first paves the way to a generation of smarter applications and more productive individuals, groups and organizations.

The next major leap will come when we begin to virtualize reasoning, in the form of axioms that become part of the Semantic Web. This will enable a new generation of applications that can reason across information and services. It will ultimately lead to intelligent agents able to assist individuals, groups, social networks, communities, organizations and marketplaces so that they can remain productive in the face of the astonishing information and social-network complexity in our future.

By adding more knowledge to our information, the Semantic Web makes it possible for applications (and people) to use information more productively. By adding more intelligence between people, information, and applications, the Semantic Web will also enable people and applications to become smarter. In the future, these more intelligent apps will facilitate higher levels of individual and collective cognition by functioning as virtual intelligent assistants for individuals and groups (as well as for online services).

Once we begin to virtualize not just knowledge (semantics) but also intelligence (axioms), we will start to build Group Minds — groups that have primitive minds of their own. When we reach this point we will finally enable organizations to break past the Collective IQ Barrier: organizations will start to become smarter than the sum of their parts. The intelligence of an organization will not come just from its people; it will also come from its applications. The number of intelligent applications in an organization may outnumber the people by 1000 to 1, effectively amplifying each individual’s intelligence as well as the collective intelligence of the group.

Because software agents work all the time, can self-replicate when necessary, and are extremely fast and precise, they are ideally suited to sifting in parallel through the millions or billions of data records on the Web, day in and day out. Humans, and even groups of humans, will never be able to do this as well — and that’s not what they should be doing! They are far too intelligent for that kind of work. Humans should be at the top of the pyramid, making the decisions, innovating, learning, and navigating.

When we finally reach the stage where networks of humans and smart applications are able to work together intelligently toward common goals, I believe we will witness a real change in the way organizations are structured. In Group Minds, hierarchy will not be as necessary — the maximum effective size of a human Group Mind will perhaps be in the thousands or even the millions, instead of around 50 people. As a result, the shape of organizations in the future will be extremely fluid, and most organizations will be flat or continually shifting networks. For more on this kind of organization, read about virtual teams and networking, such as these books (by friends of mine who taught me everything I know about network-organization paradigms).

I would also like to note that I am not proposing “strong AI” — a vision in which we someday make artificial intelligences that are as intelligent as, or more intelligent than, individual humans. I don’t think intelligent agents will individually be very intelligent. It is only in vast communities of agents that intelligence will start to emerge. Agents are analogous to the neurons in the human brain — they really aren’t very powerful on their own.

I’m also not proposing that Group Minds will be as intelligent as, or more intelligent than, the individual humans in groups anytime soon. I don’t think that is likely in our lifetimes. The cognitive capabilities of an adult human are the product of millions of years of evolution. Even in the accelerated medium of the Web, where evolution can take place much faster in silico, it may still take decades or even centuries to evolve AI that rivals the human mind (and I doubt such AI will ever be truly conscious, which means that humans, with their inborn natural consciousness, may always play a special and exclusive role in the world to come — but that is the subject of a different essay). Even if they will not be as intelligent as individual humans, I do think that Group Minds, facilitated by masses of slightly intelligent agents and humans working in concert, can go a long way toward helping individuals and groups become more productive.

It’s important to note that the future I am describing is not science fiction, but it also will not happen overnight. It will take at least several decades, if not longer. But with the seemingly exponential rate of innovation, we may make very large steps in this direction quite soon. It is going to be an exciting lifetime for all of us.

Diagram: Beyond Keyword (and Natural Language) Search

Here at Radar Networks we are working on practical ways to bring the Semantic Web to end users. One of the interesting themes that has come up a lot, both internally and in discussions with VCs, is the coming plateau in the productivity of keyword search. As the Web gets increasingly large and complex, keyword search becomes less effective as a means of making sense of it; in fact, its productivity will actually decline in the future. Natural language search will be a bit better than keyword search, but ultimately it won’t solve the problem either, because like keyword search it cannot really see or make use of the structure of information.

I’ve put together a new diagram showing how the Semantic Web will enable the next step-function in productivity on the Web. It’s still a work in progress and may change frequently for a bit, so if you want to blog it, please link to this post, or at least the .JPG image behind the thumbnail below so that people get the latest image. As always your comments are appreciated. (Click the thumbnail below for a larger version).

[Diagram thumbnail: Futureofproductivity_2]

Today a typical Google search returns hundreds of thousands or even millions of results — but we only really look at the first page or two of them. What about all the results we never look at? There is a lot of room to improve the productivity of search, and to help people deal with increasingly large collections of information.

Keyword search doesn’t understand the meaning of information, let alone its structure. Natural language search is a little better at understanding the meaning of information — but it still won’t help with the structure of information. To really improve productivity significantly as the Web scales, we will need forms of search that are data-structure-aware — that are able to search within and across data structures, not just unstructured text or semistructured HTML. This is one of the key benefits of the coming Semantic Web: it will enable the Web to be navigated and searched just like a database.

Starting with the "data web" enabled by RDF, OWL, ontologies and SPARQL, structured data is becoming increasingly accessible, searchable and mashable. This in turn sets the stage for a better form of search: semantic search. Semantic search combines the best of keyword, natural language, database and associative search capabilities together.
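As a rough sketch of what this kind of structured search involves (using a toy in-memory triple list rather than a real SPARQL engine), a query is a set of triple patterns containing variables, matched against structured data rather than flat text:

```python
# Toy SPARQL-style matching: patterns with "?"-prefixed variables are
# matched against triples. Data and vocabulary here are hypothetical.
data = [
    ("recipe1", "type", "Recipe"), ("recipe1", "cuisine", "Italian"),
    ("recipe2", "type", "Recipe"), ("recipe2", "cuisine", "Thai"),
]

def match(patterns):
    """Return all variable bindings satisfying every triple pattern."""
    bindings = [{}]
    for pat in patterns:
        new = []
        for env in bindings:
            for triple in data:
                candidate = dict(env)
                ok = True
                for p, v in zip(pat, triple):
                    if p.startswith("?"):
                        if candidate.get(p, v) != v:  # conflicting binding
                            ok = False
                            break
                        candidate[p] = v
                    elif p != v:                      # constant mismatch
                        ok = False
                        break
                if ok:
                    new.append(candidate)
        bindings = new
    return bindings

# Roughly: SELECT ?r WHERE { ?r type Recipe . ?r cuisine Italian }
hits = match([("?r", "type", "Recipe"), ("?r", "cuisine", "Italian")])
print([b["?r"] for b in hits])  # ['recipe1']
```

A keyword engine sees only undifferentiated text; a query like this can constrain *which field* must hold *which value*, which is what makes the Web navigable like a database.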

Without the Semantic Web, productivity will plateau and then gradually decline as the Web, desktop and enterprise continue to grow in size and complexity. I believe that with the appropriate combination of technology and user-experience we can flip this around so that productivity actually increases as the size and complexity of the Web increase.

See Also: A Visual Timeline of the Past, Present and Future of the Web

New Findings Overturn our Understanding of How Neurons Communicate

Thanks to Bram for pointing me to this article about how new research indicates that communication in the brain is quite different than we thought. Essentially neurons may release neurotransmitters all along axons, not just within synapses. This may enable new forms of global communication or state changes within the brain, beyond the "circuit model" of neuronal signaling that has been the received view for the last 100 years. It also may open up a wide range of new drugs and discoveries in brain science.

Envisioning the Whole Digital Person

Another article of note on the subject of our evolving digital lives and what user-experience designers should be thinking about:

Our lives are becoming increasingly digitized—from the ways we communicate, to our entertainment media, to our e-commerce transactions, to our online research. As storage becomes cheaper and data pipes become faster, we are doing more and more online—and in the process, saving a record of our digital lives, whether we like it or not.

(snip…)

In the coming years, our ability to interact with the information we’re so rapidly generating will determine how successfully we can manage our digital lives. There is a great challenge at our doorsteps—a shift in the way we live with each other.

As designers of user experiences for digital products and services, we can make people’s digital lives more meaningful and less confusing. It is our responsibility to envision not only techniques for sorting, ordering, and navigating these digital information spaces, but also to devise methods of helping people feel comfortable with such interactions. To better understand and ultimately solve this information management problem, we should take a holistic view of the digital person. While our data might be scattered, people need to feel whole.

Capturing Your Digital Life

Nice article in Scientific American about Gordon Bell’s work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web — additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.

Intelligence is in the Connections

Google’s Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry’s idea is that intelligence is a function of massive computation, not of “fancy whiteboard algorithms.” In other words, in his conception the brain doesn’t do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively “dumb” but from the combined power of all of them working together “intelligent” behaviors emerge.

Larry’s view is, in my opinion, an oversimplification that will not lead to actual AI. It’s certainly correct that some activities that we call “intelligent” can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible — they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today — which is still a long way short of true AI!

Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don’t think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software — the higher level cognitive algorithms and heuristics that the brain “runs” — also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).

Larry’s view is attractive but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It’s a highly sophisticated system composed of simple parts — and actually, the jury is still out on exactly how simple the parts really are — much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much, much more complex system than we think.

Perhaps the Web as a whole is the closest analogue we have today for the brain — with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized and it is doubtful that any centralized service could truly match its capabilities. We’re not talking about a few hundred thousand linux boxes — we’re talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.

One reader who commented on Larry’s talk made an excellent point on what this missing piece may be: “Intelligence is in the connections, not the bits.” The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that for example Roger Penrose and others have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations and nonlocality. In other words the connections in the brain may exist on a quantum level, not just a neurological level.

Whether quantum computation is the key or not still remains to be determined. But regardless, essentially, Larry’s approach is equivalent to just aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions for example that if all knowledge exists on the Web you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale. All answers exist on the Web, and the task is just to match questions to the right answers. But wait: is that all that intelligence does? Is Larry’s view too much of an oversimplification? Intelligence is not just about learning and recall, it’s also about reasoning and creativity. Reasoning is not just search. It’s unclear how Larry’s approach would address that.

In my own opinion, for global-scale AI to really emerge the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections — rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it’s only the first step. More will need to happen on top of it for this vision to really materialize.

My view is more of an “agent metaphor” for intelligence — perhaps it is similar to Marvin Minsky’s Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds for example we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes — they are more like agents — they are actually each somewhat “intelligent” on their own, they seem to be somewhat autonomous, and they interact in intelligent, almost social ways.

Ideas seem to be actors, not just passive data points — they are competing for resources and survival in a complex ecology that exists both within our individual minds and between them in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or “I” that controls the process (that is just another agent in the community in fact, perhaps one with a kind of reporting and selection role).

I’m not sure the complex social dynamics of these communities of intelligence can really be modeled by a search engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels, from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections — computation flows back down from the top, it doesn’t only flow up from the bottom. Any computing system that doesn’t include this kind of feedback in its basic architecture will not be able to implement true AI.

In short, Google is not the right architecture to truly build a global brain on. But it could be a useful tool for search and questions-and-answers in the future, if they can somehow keep up with the growth and complexity of the Web.

Must-Know Terms for the 21st Century Intellectual

Read this fun article that lists and defines some of the key concepts that every post-singularity transhumanist meta-intellectual should know! (via Kurzweil)

Minding The Planet — The Meaning and Future of the Semantic Web

NOTES

Prelude

Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, “Minding the Planet” about how the Internet would enable the evolution of higher forms of collective intelligence.

My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, “One thing is certain: Someday, you will write this book.” We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.

A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.

But ever since that day on the porch with my grandfather, I remembered what he said: “Someday, you will write this book.” I’ve tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I’ve continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it’s the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.

This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science-fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?

I’ve often joked that it is ironic that a term that contains the word “semantic” has such an ambiguous meaning for most people. Most people simply have no idea what it means; they have no context for it, and it is not connected to their experience and knowledge. This is a problem that people who are deeply immersed in the trenches of the Semantic Web have not been able to solve adequately — they have not found the words to communicate what they can clearly see, what they are working on, and why it matters for everyone. In this article I have tried, and hopefully succeeded, in providing a detailed introduction and context for the Semantic Web for non-technical people. But even technical people working in the field may find something of interest here as I piece together the fragments into a Big Picture and a vision for what might be called “Semantic Web 2.0.”

I hope the reader will bear with me as I bounce around across different scales of technology and time, and from the extremes of core technology to wild speculation, in order to tell this story. If you are looking for the cold hard science of it all, this article will provide an understanding but will not satisfy your need for seeing the actual code; there are other places where you can find that level of detail and rigor. But if you want to understand what it all really means, and what the opportunity and future look like — this may be what you are looking for.

I should also note that all of this is my personal view of what I’ve been working on, and what it really means to me. It is not necessarily the official view of the mainstream academic Semantic Web community — although there are certainly many places where we all agree. But I’m sure that some readers will disagree or raise objections to some of my assertions, and certainly to my many far-flung speculations about the future. I welcome those different perspectives; we’re all trying to make sense of this and the more of us who do that together, the more we can collectively start to really understand it. So please feel free to write your own vision or response, and please let me know so I can link to it!

So with this Prelude in mind, let’s get started…

The Semantic Web Vision

The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web — a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent — not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today.

The dream of making software more intelligent has been around almost as long as software itself. And although it is taking longer to materialize than past experts had predicted, progress towards this goal is being steadily made. At the same time, the shape of this dream is changing. It is becoming more realistic and pragmatic. The original dream of artificial intelligence was that we would all have personal robot assistants doing all the work we don’t want to do for us. That is not the dream of the Semantic Web. Instead, today’s Semantic Web is about facilitating what humans do — it is about helping humans do things more intelligently. It’s not a vision in which humans do nothing and software does everything.

The Semantic Web vision is not just about helping software become smarter — it is about providing new technologies that enable people, groups, organizations and communities to be smarter.

For example, by providing individuals with tools that learn about what they know, and what they want, search can be much more accurate and productive.

Using software that is able to understand and automatically organize large collections of knowledge, groups, organizations and communities can reach higher levels of collective intelligence and they can cope with volumes of information that are just too great for individuals or even groups to comprehend on their own.

Another example: more efficient marketplaces can be enabled by software that learns about products, services, vendors, transactions and market trends and understands how to connect them together in optimal ways.

In short, the Semantic Web aims to make software smarter, not just for its own sake, but in order to help make people, and groups of people, smarter. In the original Semantic Web vision this fact was under-emphasized, leading to the impression that the Semantic Web was only about automating the world. In fact, it is really about facilitating the world.

The Semantic Web Opportunity

The Semantic Web is one of the most significant things to happen since the Web itself. But it will not appear overnight. It will take decades. It will grow in a bottom-up, grassroots, emergent, community-driven manner just like the Web itself. Many things have to converge for this trend to really take off.

The core open standards already exist, but the necessary development tools have to mature, the ontologies that define human knowledge have to come into being and mature, and most importantly we need a few real “killer apps” to prove the value and drive adoption of the Semantic Web paradigm. The first generation of the Web had its Mozilla, Netscape, Internet Explorer, and Apache — and it also had HTML, HTTP, a bunch of good development tools, and a few killer apps and services such as Yahoo! and thousands of popular Web sites. The same things are necessary for the Semantic Web to take off.

And this is where we are today — this is all just about to start emerging. There are several companies racing to get this technology, or applications of it, to market in various forms. Within a year or two you will see mass-consumer Semantic Web products and services hit the market, and within 5 years there will be at least a few “killer apps” of the Semantic Web. Ten years from now the Semantic Web will have spread into many of the most popular sites and applications on the Web. Within 20 years all content and applications on the Internet will be integrated with the Semantic Web. This is a sea-change. A big evolutionary step for the Web.

The Semantic Web is an opportunity to redefine, or perhaps to better define, all the content and applications on the Web. That’s a big opportunity. And within it there are many business opportunities and a lot of money to be made. It’s not unlike the opportunity of the first generation of the Web. There are platform opportunities, content opportunities, commerce opportunities, search opportunities, community and social networking opportunities, and collaboration opportunities in this space. There is room for a lot of players to compete and at this point the field is wide open.

The Semantic Web is a blue ocean waiting to be explored. And like any unexplored ocean it also has its share of reefs, pirate islands, hidden treasure, shoals, whirlpools, sea monsters and typhoons. But there are new worlds out there to be discovered, and they exert an irresistible pull on the imagination. This is an exciting frontier — and also one fraught with hard technical and social challenges that have yet to be solved. For early ventures in the Semantic Web arena, it’s not going to be easy, but the intellectual and technological challenges, and the potential financial rewards, glory, and benefit to society, are worth the effort and risk. And this is what all great technological revolutions are made of.

Semantic Web 2.0

Some people who have heard the term “Semantic Web” thrown around too much may think it is a buzzword, and they are right. But it is not just a buzzword — it actually has some substance behind it. That substance hasn’t emerged yet, but it will. Early critiques of the Semantic Web were right — the early vision did not leverage concepts such as folksonomy and user-contributed content at all. But that is largely because when the Semantic Web was originally conceived of, Web 2.0 hadn’t happened yet. The early experiments that came out of research labs were geeky, to put it lightly, and impractical, but they are already being followed up by more pragmatic, user-friendly approaches.

Today’s Semantic Web — what we might call “Semantic Web 2.0” — is a kinder, gentler, more social Semantic Web. It combines the best of the original vision with what we have all learned about social software and community in the last 10 years. Although much of this is still in the lab, it is already starting to trickle out. For example, recently Yahoo! started a pilot of the Semantic Web behind their food vertical. Other organizations are experimenting with using Semantic Web technology in parts of their applications, or to store or map data. But that’s just the beginning.

The Google Factor

Entrepreneurs, venture capitalists and technologists are increasingly starting to see these opportunities. Who will be the “Google of the Semantic Web”? Will it be Google itself? That’s doubtful. Like any entrenched incumbent, Google is heavily tied to a particular technology and worldview. And in Google’s case it is anything but semantic today. It would be easier for an upstart to take this position than for Google to port their entire infrastructure and worldview to a Semantic Web way of thinking.

If it is going to be Google it will most likely be by acquisition rather than by internal origination. And this makes more sense anyway — for Google is in a position where they can just wait and buy the winner, at almost any price, rather than competing in the playing field. One thing to note however is that Google has at least one product offering that shows some potential for becoming a key part of the Semantic Web. I am speaking of Google Base, Google’s open database which is meant to be a registry for structured data so that it can be found in Google search. But Google Base does not conform to or make use of the many open standards of the Semantic Web community. That may or may not be a good thing, depending on your perspective.

Of course the downside of Google waiting to join the mainstream Semantic Web community until after the winner is announced is very large — once there is a winner it may be too late for Google to beat them. The winner of the Semantic Web race could very well unseat Google. The strategists at Google are probably not yet aware of this but as soon as they see significant traction around a major Semantic Web play it will become of interest to them.

In any case, I think there won’t be just one winner; there will be several major Semantic Web companies in the future, focusing on different parts of the opportunity. And you can be sure that if Google gets into the game, every major portal will need to get into this space at some point or risk becoming irrelevant. There will be demand and many acquisitions. In many ways the Semantic Web will not be controlled by just one company — it will be more like a fabric that connects them all together.

Context is King — The Nature of Knowledge

It should be clear by now that the Semantic Web is all about enabling software (and people) to work with knowledge more intelligently. But what is knowledge? Knowledge is not just information. It is meaningful information — it is information plus context. For example, if I simply say the word “sem” to you, it is just raw information, it is not knowledge. It probably has no meaning to you other than a particular set of letters that you recognize and a sound you can pronounce, and the mere fact that this information was stated by me.

But if I tell you that “sem” is the Tibetan word for “mind,” then suddenly, “sem means mind in Tibetan” to you. If I further tell you that Tibetans have about as many words for “mind” as Eskimos have for “snow,” this is further meaning. This is context — in other words, knowledge about the sound “sem.” The sound is raw information. When it is given context it becomes a word, a word that has meaning, a word that is connected to concepts in your mind — it becomes knowledge. By connecting raw information to context, knowledge is formed.

Once you have acquired a piece of knowledge such as “sem means mind in Tibetan,” you may then also form further knowledge about it. For example, you may form the memory, “Nova said that ‘sem means mind in Tibetan.’” You might also connect the word “sem” to networks of further concepts you have about Tibet and your understanding of what the word “mind” means.

The mind is the organ of meaning — mind is where meaning is stored, interpreted and created. Meaning is not “out there” in the world, it is purely subjective, it is purely mental. Meaning is almost equivalent to mind in fact, for the two never occur separately. Each of our individual minds has some way of internally representing meaning — when we read or hear a word that we know, our minds connect that to a network of concepts about it and at that moment it means something to us.

Digging deeper, if you are really curious, or you happen to know Greek, you may also find that a similar sound occurs in the Greek word sēmantikós — which means “having meaning” and in turn is the root of the English word “semantic,” which means “pertaining to or arising from meaning.” That’s an odd coincidence! “Sem” occurs in the Tibetan word for mind, and in the English and Greek words that all relate to the concepts of “meaning” and “mind.” Even stranger is that not only do these words have a similar sound, they have a similar meaning.

With all this knowledge at your disposal, when you then see the term “Semantic Web” you may be able to infer that it has something to do with adding “meaning” to the Web. However, if you were a Tibetan, perhaps you might instead think the term had something to do with adding “mind” to the Web. In either case you would be right!

Discovering New Connections

We’ve discovered a new connection — namely that there is an implicit connection between “sem” in Greek, English and Tibetan: they all relate to meaning and mind. It’s not a direct, explicit connection — it’s not evident unless you dig for it. But it’s a useful tidbit of knowledge once it’s found. Unlike the direct migration of the sound “sem” from Greek to English, there may not have ever been a direct transfer of this sound from Greek to Sanskrit to Tibetan. But in a strange and unexpected way, they are all connected. This connection wasn’t necessarily explicitly stated by anyone before, but was uncovered by exploring our network of concepts and making inferences.
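
This style of discovery, following links between concepts until an implicit connection surfaces, can be sketched as a breadth-first walk over a small concept network. The links below are illustrative stand-ins for the richer semantic relations the actual Semantic Web would provide.

```python
# Finding an implicit connection by exploring a network of concepts:
# a breadth-first walk discovers a path from the Tibetan "sem" to the
# English "semantic" even though no single statement links them directly.
# The concept links here are illustrative, not a real ontology.

from collections import deque

links = {
    "sem (Tibetan)": ["mind"],
    "mind": ["meaning"],
    "meaning": ["semantikos (Greek)"],
    "semantikos (Greek)": ["semantic (English)"],
}

def find_path(start, goal):
    """Breadth-first search over the concept links."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in links.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

path = find_path("sem (Tibetan)", "semantic (English)")
```

The point is that the connecting path, like the one traced in the prose above, is derived rather than stated anywhere as a single fact.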

The sequence of thought about “sem” above is quite similar to the kind of intellectual reasoning and discovery that the actual Semantic Web seeks to enable software to do automatically. How is this kind of reasoning and discovery enabled? The Semantic Web provides a set of technologies for formally defining the context of information. Just as the Web relies on a standard formal specification for “marking up” information with formatting codes that enable any applications that understand those codes to format the information in the same way, the Semantic Web relies on new standards for “marking up” information with statements about its context — its meaning — that enable any applications to understand, and reason about, the meaning of those statements in the same way.

By applying semantic reasoning agents to large collections of semantically enhanced content, all sorts of new connections may be inferred, leading to new knowledge, unexpected discoveries and useful additional context around content. This kind of reasoning and discovery is already taking place in fields from drug discovery and medical research, to homeland security and intelligence. The Semantic Web is not the only way to do this — but it certainly will improve the process dramatically. And of course, with this improvement will come new questions about how to assess and explain how various inferences were made, and how to protect privacy as our inferencing capabilities begin to extend across ever more sources of public and private data. I don’t have the answers to these questions, but others are working on them and I have confidence that solutions will be arrived at over time.

Smart Data

By marking up information with metadata that formally codifies its context, we can make the data itself “smarter.” The data becomes self-describing. When you get a piece of data you also get the necessary metadata for understanding it. For example, if I sent you a document containing the word “sem” in it, I could add markup around that word indicating that it is the word for “mind” in the Tibetan language.

Similarly, a document containing mentions of “Radar Networks” could contain metadata indicating that “Radar Networks” is an Internet company, not a product or a type of radar technology. A document about a person could contain semantic markup indicating that they are residents of a certain city, experts on Italian cooking, and members of a certain profession. All of this could be encoded as metadata in a form that software could easily understand. The data carries more information about its own meaning.
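
As a sketch of what such self-describing data might look like, here are statements attached to a document as simple subject-predicate-object triples. The Semantic Web encodes these in RDF; the plain-Python representation and all identifiers below are illustrative assumptions, not a real format.

```python
# "Smart data": the data travels with machine-readable statements about
# its own meaning. The Semantic Web expresses such statements as
# subject-predicate-object triples (in RDF); here we sketch the same
# idea with plain Python tuples. All identifiers are illustrative.

document = {
    "text": "Radar Networks is building tools for the Semantic Web.",
    "metadata": [
        # Each triple is one unambiguous statement about the content.
        ("Radar Networks", "is_a", "Internet company"),
        ("Radar Networks", "works_on", "Semantic Web"),
        ("sem", "means", "mind"),
        ("sem", "in_language", "Tibetan"),
    ],
}

def facts_about(doc, subject):
    """Any application can read the embedded meaning, not just the text."""
    return [(p, o) for (s, p, o) in doc["metadata"] if s == subject]
```

Because the meaning rides along with the data, a program that has never seen this document before can still determine that “Radar Networks” names a company rather than a radar technology.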

The alternative to smart data would be for software to actually read and understand natural language as well as humans. But that’s really hard. To correctly interpret raw natural language, software would have to be developed that knew as much as a human being. But think about how much teaching and learning is required to raise a human being to the point where they can read at an adult level. It is likely that similar training would be necessary to build software that could do that. So far that goal has not been achieved, although some attempts have been made. While decent progress in natural language understanding has been made, most software that can do this is limited to particular vertical domains, and it’s brittle — it doesn’t do a good job of making sense of terms and forms of speech that it wasn’t trained to parse and make sense of.

Instead of trying to make software a million times smarter than it is today, it is much easier to just encode more metadata about what our information means. That turns out to be less work in the end. And there’s an added benefit to this approach — the meaning exists with the data and travels with it. It is independent of any one software program — all software can access it. And because the meaning of information is stored with the information itself, rather than in the software, the software doesn’t have to be enormous to be smart. It just has to know the basic language for interpreting the semantic metadata it finds on the information it works with.

Smart data enables relatively dumb software to be smarter with less work. That’s an immediate benefit. And in the long-term, as software actually gets smarter, smart data will make it easier for it to start learning and exploring on its own. So it’s a win-win approach: start by adding semantic metadata to data, and end up with smarter software.

Making Statements About the World

Metadata comes down to making statements about the world in a manner that machines, and perhaps even humans, can understand unambiguously. The same piece of metadata should be interpreted in the same way by different applications and readers.

There are many kinds of statements that can be made about information to provide it with context. For example, you can state a definition such as “person” means “a human being or a legal entity.” You can state an assertion such as “Sue is a human being.” You can state a rule such as “if x is a human being, then x is a person.”

From these statements it can then be inferred that “Sue is a person.” This inference is so obvious to you and me that it seems trivial, but most software today cannot do this. It doesn’t know what a person is, let alone what a name is. But if software could do this, then it could, for example, automatically organize documents by the people they are related to, or discover connections between people who were mentioned in a set of documents, or it could find documents about people who were related to particular topics, or it could give you a list of all the people mentioned in a set of documents, or all the documents related to a person.
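
The derivation of “Sue is a person” can be sketched as a tiny forward-chaining rule engine. This is a hedged illustration in plain Python tuples; real Semantic Web reasoners operate over RDF and OWL, and all identifiers below are made up for the example.

```python
# A tiny forward-chaining inference sketch: from an assertion and a
# rule, derive a fact that no one stated explicitly. Illustrative only.

facts = {("Sue", "is_a", "human being")}

# Rule: if ?x is a human being, then ?x is a person.
# Each rule is a (condition, conclusion) pair of triple patterns.
rules = [(("?x", "is_a", "human being"), ("?x", "is_a", "person"))]

def infer(facts, rules):
    """Apply rules repeatedly until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (cs, cp, co), (hs, hp, ho) in rules:
            for (s, p, o) in list(derived):
                # The condition matches any fact with this predicate/object.
                if cs == "?x" and p == cp and o == co:
                    new = (s if hs == "?x" else hs, hp, ho)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

all_facts = infer(facts, rules)
```

Running this yields the derived fact (“Sue”, “is_a”, “person”) alongside the original assertion, which is exactly the trivial-for-humans, hard-for-software step described above.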

Of course this is a very basic example. But imagine if your software didn’t just know about people — it knew about most of the common concepts that occur in your life. Your software would then be able to help you work with your documents just about as intelligently as you are able to do by yourself, or perhaps even more intelligently, because you are just one person and you have limited time and energy but your software could work all the time, and in parallel, to help you.

Examples and Benefits

How could the existence of the Semantic Web, and all the semantic metadata that defines it, be really useful to everyone in the near-term?

Well, for example, the problem of email spam would finally be cured: your software would be able to look at a message and know whether it was meaningful and/or relevant to you or not.

Similarly, you would never have to file anything by hand again. Your software could automate all filing and information organization tasks for you because it would understand your information and your interests. It would be able to figure out when to file something in a single folder, multiple folders, or new ones. It would organize everything — documents, photos, contacts, bookmarks, notes, products, music, video, data records — and it would do it even better and more consistently than you could on your own. Your software wouldn’t just organize stuff, it would turn it into knowledge by connecting it to more context. It could do this not just for individuals, but for groups, organizations and entire communities.

Another example: search would be vastly better. You could search conversationally by typing in everyday natural language and you would get precisely what you asked for, or even what you needed but didn’t know how to ask for correctly, and nothing else. Your search engine could even ask you questions to help you narrow what you want. You would finally be able to converse with software in ordinary speech and it would understand you.

The process of discovery would be easier too. You could have a software agent that works as your personal recommendation agent. It would constantly be looking in all the places you read or participate in for things that are relevant to your past, present and potential future interests and needs. It could then alert you in a contextually sensitive way, knowing how to reach you and how urgently to mark things. As you gave it feedback it could learn and do a better job over time.

Going even further with this, semantically-aware software – software that is aware of context, software that understands knowledge – isn’t just for helping you with your information; it can also help to enrich, facilitate, and even partially automate your communication and commerce (when you want it to). So for example, your software could help you with your email. It would be able to recommend responses to messages for you, or automate the process. It would be able to enrich your messaging and discussions by automatically cross-linking what you are speaking about with related messages, discussions, documents, Web sites, subject categories, people, organizations, places, events, etc.

Shopping and marketplaces would also become better – you could search precisely for any kind of product, with any specific attributes, and find it anywhere on the Web, in any store. You could post classified ads and automatically get relevant matches according to your priorities, from all over the Web, or only from specific places and parties that match your criteria for who you trust. You could also easily invent a new custom data structure for posting classified ads for a new kind of product or service and publish it to the Web in a format that other Web services and applications could immediately mine and index, without necessarily having to integrate with your software or data schema directly.

You could publish an entire database to the Web, and other applications and services could immediately start to integrate your data with their data, without migrating your schema or their own. You could merge data from different data sources together to create new data sources without ever having to touch or look at an actual database schema.

Bumps on the Road

The above examples illustrate the potential of the Semantic Web today, but the reality on the ground is that the technology is still in the early phases of evolution. Even for experienced software engineers and Web developers, it is difficult to apply in practice. The main obstacles are twofold:

(1) The Tools Problem:

There are very few commercial-grade tools for doing anything with the Semantic Web today. Most of the tools for building semantically-aware applications, or for adding semantics to information, are still in the research phase and were designed for expert computer scientists who specialize in knowledge representation, artificial intelligence, and machine learning.

These tools have a steep learning curve and don’t generally support large-scale applications – they were designed mainly to test theories and frameworks, not to actually apply them. But if the Semantic Web is ever going to become mainstream, it has to be made easier to apply – it has to be made more productive and accessible for ordinary software and content developers.

Fortunately, the tools problem is already on the verge of being solved. Companies such as my own venture, Radar Networks, are developing the next generation of tools for building Semantic Web applications and Semantic Web sites. These tools will hide most of the complexity, enabling ordinary mortals to build applications and content that leverage the power of semantics without needing PhDs in knowledge representation.

(2) The Ontology Problem:

The Semantic Web provides frameworks for defining systems of formally defined concepts called “ontologies,” which can then be used to connect information to context in an unambiguous way. Without ontologies, there really can be no semantics. The ontologies ARE the semantics; they define the meanings that are so essential for connecting information to context.

But there are still few widely used or standardized ontologies. And getting people to agree on common ontologies is not generally easy. Everyone has their own way of describing things, their own worldview, and, let’s face it, nobody wants to use somebody else’s worldview instead of their own. Furthermore, the world is very complex, and to adequately describe all the knowledge that comprises what is thought of as “common sense” would require a very large ontology. (In fact, such an ontology exists – it’s called Cyc, and it is so large and complex that only experts can really use it today.)

Even describing the knowledge of just a single vertical domain, such as medicine, is extremely challenging. To make matters worse, the tools for authoring ontologies are still very hard to use – one has to understand the OWL language and wrestle with difficult, buggy authoring tools. Domain experts who are non-technical and not trained in formal reasoning or knowledge representation may find the process of designing ontologies with current tools frustrating. What is needed are commercial-quality tools for building ontologies that hide the underlying complexity so that people can just pour their knowledge into them as easily as they speak. That’s still a ways off, but not far off. Perhaps ten years at the most.

Of course the difficulty of defining ontologies would be irrelevant if the necessary ontologies already existed. Perhaps experts could define them and then everyone else could just use them? There are numerous ontologies already in existence, both at the general level and for specific verticals. However, in my own opinion, having looked at many of them, I still haven’t found one that strikes the right balance between coverage of the concepts most applications need, and accessibility and ease-of-use for non-experts. That kind of balance is a requirement for any ontology to really go mainstream.

Furthermore, regarding the present crop of ontologies, what is still lacking is standardization. Ontologists have not agreed on which ontologies to use. As a result it’s anybody’s guess which ontology to use when writing a semantic application, and thus there is a high degree of ontology diversity today. Diversity is good, but too much diversity is chaos.

Applications that use different ontologies about the same things don’t automatically interoperate unless their ontologies have been integrated. This is similar to the problem of database integration in the enterprise. In order to interoperate, different applications that use different data schemas for records about the same things have to be mapped to each other somehow – either at the application level or the data level. This mapping can be direct or through some form of middleware.

Ontologies can be used as a form of semantic middleware, enabling applications to be mapped at the data level instead of the application level. Ontologies can also be used to map applications at the application level, by making ontologies of Web services and capabilities. This is an area in which a lot of research is presently taking place.

The OWL language can express mappings between concepts in different ontologies. But if there are many ontologies, and many of them partially overlap, it is a non-trivial task to actually make the mappings between their concepts.

Even though concept A in ontology one and concept B in ontology two may have the same names, and even some of the same properties, in the context of the rest of the concepts in their respective ontologies they may imply very different meanings. So simply mapping them as equivalent on the basis of their names is not adequate; their connections to all the other concepts in their respective ontologies have to be considered as well. It quickly becomes complex. There are some potential ways to automate the construction of mappings between ontologies, but they are still experimental. Today, integrating ontologies requires the help of expert ontologists, and to be honest, I’m not sure even the experts have it figured out. It’s more of an art than a science at this point.
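The name-collision problem above can be made concrete with a toy sketch. The two hypothetical ontologies below both define a concept called “Bank”, but the neighboring concepts reveal entirely different meanings; comparing those neighborhoods (here with a simple Jaccard overlap) is one crude way to catch the mismatch. All the data and names are made up for illustration.

```python
# Two hypothetical ontologies, each reduced to a concept's set of related concepts.
ONTOLOGY_A = {"Bank": {"Account", "Loan", "Customer"}}   # a finance worldview
ONTOLOGY_B = {"Bank": {"River", "Erosion", "Sediment"}}  # a geography worldview

def same_name(a, b, concept):
    """Naive mapping: treat concepts as equivalent if their names match."""
    return concept in a and concept in b

def context_similarity(a, b, concept):
    """Jaccard overlap of each concept's related-concept neighborhood."""
    ra, rb = a[concept], b[concept]
    return len(ra & rb) / len(ra | rb)

print(same_name(ONTOLOGY_A, ONTOLOGY_B, "Bank"))           # True, but misleading
print(context_similarity(ONTOLOGY_A, ONTOLOGY_B, "Bank"))  # 0.0, no shared context
```

A name match says the concepts are equivalent; the zero context overlap says they are not. Real ontology-alignment research uses far richer structural and logical comparisons, but the underlying tension is the same.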

Darwinian Selection of Ontologies

All that is needed for mainstream adoption to begin is for a large body of mainstream content to become semantically tagged and accessible. This will cause whatever ontology is behind that content to become popular.

When developers see that there is significant content and traction around a particular ontology, they will use that ontology for their own applications about similar concepts, or at least they will do the work of mapping their own ontology to it, and in this way the world will converge in a Darwinian fashion around a few main ontologies over time.

These main ontologies will then be worth the time and effort necessary to integrate them on a semantic level, resulting in a cohesive Semantic Web. We may in fact see Darwinian natural selection take place not just at the level of whole ontologies, but at the level of pieces of ontologies.

A certain ontology may do a good job of defining what a person is, while another may do a good job of defining what a company is. These definitions may be used for a lot of content, and gradually they will become common parts of an emergent meta-ontology comprised of the most-popular pieces from thousands of ontologies. This could be great or it could be a total mess. Nobody knows yet. It’s a subject for further research.

Making Sense of Ontologies

Since ontologies are so important, it is helpful to understand what an ontology actually is, and what it looks like. An ontology is a system of formally defined, related concepts. For example, the following set of statements is a simple ontology:

A human is a living thing.

A person is a human.

A person may have a first name.

A person may have a last name.

A person must have one and only one date of birth.

A person must have a gender.

A person may be socially related to another person.

A friendship is a kind of social relationship.

A romantic relationship is a kind of friendship.

A marriage is a kind of romantic relationship.

A person may be in a marriage with only one other person at a time.

A person may be employed by an employer.

An employer may be a person or an organization.

An organization is a group of people.

An organization may have a product or a service.

A company is a type of organization.

We’ve just built a simple ontology about a few concepts: humans, living things, persons, names, social relationships, marriages, employment, employers, organizations, groups, products and services. Within this system of concepts there is a particular logic, some constraints, and some structure. It may or may not correspond to your worldview, but it is a worldview that is unambiguously defined, can be communicated, and is internally logically consistent, and that is what is important.
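To make the structure of this toy ontology explicit, here is a sketch of it as plain Python data. In practice it would be written in OWL; this version just exposes the subclass chains and property constraints so they can be inspected programmatically.

```python
# Each class maps to its direct superclass (None marks a root class).
CLASSES = {
    "LivingThing":          None,
    "Human":                "LivingThing",
    "Person":               "Human",
    "Organization":         None,
    "Company":              "Organization",
    "SocialRelationship":   None,
    "Friendship":           "SocialRelationship",
    "RomanticRelationship": "Friendship",
    "Marriage":             "RomanticRelationship",
}

# Property constraints from the statements above ("0..1" = optional, "1" = required).
PROPERTIES = {
    "firstName":   {"domain": "Person", "cardinality": "0..1"},
    "lastName":    {"domain": "Person", "cardinality": "0..1"},
    "dateOfBirth": {"domain": "Person", "cardinality": "1"},
    "gender":      {"domain": "Person", "cardinality": "1"},
    "marriedTo":   {"domain": "Person", "cardinality": "0..1"},
    "employedBy":  {"domain": "Person"},  # range: a person or an organization
}

def ancestors(cls):
    """Walk the subclass chain from a class up to the top of the hierarchy."""
    chain = []
    while cls is not None:
        chain.append(cls)
        cls = CLASSES[cls]
    return chain

print(ancestors("Marriage"))
# ['Marriage', 'RomanticRelationship', 'Friendship', 'SocialRelationship']
```

The `ancestors` walk is what lets software conclude, for instance, that every marriage is a social relationship without that fact being stated anywhere directly.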

The Semantic Web approach provides an open-standard language, OWL, for defining ontologies. OWL also provides a way to define instances of ontologies. Instances are assertions within the worldview that a given ontology provides. In other words, OWL provides a means to make statements that connect information to the ontology so that software can understand its meaning unambiguously. For example, below is a set of statements based on the above ontology:

There exists a person x.

Person x has a first name “Sue”.

Person x has a last name “Smith”.

Person x has a full name “Sue Smith”.

Sue Smith was born on June 1, 2005.

Sue Smith has a gender: female.

Sue Smith has a friend: Jane, who is another person.

Sue Smith is married to: Bob, another person.

Sue Smith is employed by Acme Inc., a company.

Acme Inc. has a product, Widget 2.0.

The set of statements above, plus the ontology they are connected to, collectively comprise a knowledge base that, if represented formally in the OWL markup language, could be understood by any application that speaks OWL in the precise manner in which it was intended to be understood.
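The same statements can be sketched as subject-predicate-object triples, the basic shape of RDF data, together with a query that honors the ontology’s subclass chain. The identifiers are illustrative; a real knowledge base would use OWL/RDF and a proper triple store.

```python
# Subclass relationships carried over from the toy ontology.
SUBCLASS_OF = {"Person": "Human", "Human": "LivingThing", "Company": "Organization"}

# The instance statements as (subject, predicate, object) triples.
TRIPLES = [
    ("SueSmith", "type",       "Person"),
    ("SueSmith", "firstName",  "Sue"),
    ("SueSmith", "lastName",   "Smith"),
    ("SueSmith", "gender",     "female"),
    ("SueSmith", "friendOf",   "Jane"),
    ("SueSmith", "marriedTo",  "Bob"),
    ("SueSmith", "employedBy", "AcmeInc"),
    ("AcmeInc",  "type",       "Company"),
    ("AcmeInc",  "hasProduct", "Widget2.0"),
]

def instances_of(cls):
    """Subjects whose asserted type is cls or any subclass of it."""
    found = set()
    for subject, predicate, obj in TRIPLES:
        if predicate != "type":
            continue
        current = obj
        while current is not None:
            if current == cls:
                found.add(subject)
                break
            current = SUBCLASS_OF.get(current)
    return found

print(instances_of("LivingThing"))   # {'SueSmith'}
print(instances_of("Organization"))  # {'AcmeInc'}
```

Notice that neither “SueSmith is a living thing” nor “AcmeInc is an organization” appears anywhere in the triples; both answers fall out of combining the instance data with the ontology.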

Making Metadata

The OWL language provides a way to mark up any information – such as a data record, an email message or a Web page – with metadata in the form of statements that link particular words or phrases to concepts in the ontology. When software applications that understand OWL encounter the information, they can then reference the ontology and figure out exactly what the information means – or at least what the ontology says it means.

But something has to add these semantic metadata statements to the information – and if it doesn’t add them, or adds the wrong ones, then software applications that look at the information will get the wrong idea. And this is another challenge – how will all this metadata get created and added to content? People certainly aren’t going to add it all by hand!

Fortunately there are many ways to make this easier. The best approach is to automate it using special software that goes through information, analyzes its meaning and adds semantic metadata automatically. This works today, but the software has to be trained or provided with rules, and that takes some time. It also doesn’t scale cost-effectively to vast data sets.
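A rule-driven tagger of the kind described here can be sketched in miniature: patterns scan text and emit metadata statements automatically. The two regex rules below are toy illustrations, not a real trained NLP pipeline, and the entity names are made up.

```python
import re

# Each rule pairs a pattern with the concept it signals.
RULES = [
    (re.compile(r"([A-Z][a-z]+ [A-Z][a-z]+) was born"), "Person"),
    (re.compile(r"([A-Z][a-zA-Z]+,? Inc\.?)"), "Company"),
]

def extract_metadata(text):
    """Return (entity, 'type', concept) statements found by the rules."""
    statements = []
    for pattern, concept in RULES:
        for match in pattern.finditer(text):
            statements.append((match.group(1), "type", concept))
    return statements

text = "Sue Smith was born in 2005 and now works at Acme, Inc."
for statement in extract_metadata(text):
    print(statement)
# ('Sue Smith', 'type', 'Person')
# ('Acme, Inc.', 'type', 'Company')
```

The brittleness is visible even at this scale: every new phrasing needs a new rule, which is exactly why this approach takes training effort and scales poorly to vast data sets.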

Alternatively, individuals can be provided with ways to add semantics themselves as they author information. When you post your resume on a semantically-aware job board, you could fill out a form about each of your past jobs, and the job board would connect that data to appropriate semantic concepts in an underlying employment ontology. As an end-user you would just fill out a form like you are used to doing; under the hood the job board would add the semantics for you.

Another approach is to leverage communities to get the semantics. We already see communities adding basic metadata “tags” to photos, news articles and maps. Already a few simple types of tags are being used pseudo-semantically: subject tags and geographical tags. These are primitive forms of semantic metadata. Although they are not expressed in OWL or connected to formal ontologies, they are at least semantically typed – with prefixes, or by being entered into fields or specific namespaces that define their types.

Tagging by Example

There may also be another solution to the problem of how to add semantics to content in the not-too-distant future. Once a suitable amount of content has been marked up with semantic metadata, it may become possible, through purely statistical forms of machine learning, for software to learn how to do a pretty good job of marking up new content with semantic metadata itself.

For example, if the string “Nova Spivack” is often marked up with semantic metadata stating that it indicates a person – and not just any person but a specific person that is abstractly represented in a knowledge base somewhere – then when software applications encounter a new, non-semantically-enhanced document containing strings such as “Nova Spivack” or “Spivack, Nova”, they can make a reasonably good guess that this indicates that same specific person, and they can add the necessary semantic metadata to that effect automatically.

As more and more semantic metadata is added to the Web and made accessible, it constitutes a statistical training set that can be learned and generalized from. Although humans may need to jump-start the process with some manual semantic tagging, it might not be long before software could assist them and eventually do all the tagging for them. Only in special cases would software need to ask a human for assistance – for example, when totally new terms or expressions are encountered for the first several times.

The technology for doing this learning already exists — and actually it’s not very different from how search engines like Google measure the community sentiment around web pages. Each time something is semantically tagged with a certain meaning, that tag constitutes a “vote” for it having that meaning. The meaning that gets the most votes wins. It’s an elegant, Darwinian, emergent approach to learning how to automatically tag the Web.
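The voting idea above can be sketched in a few lines. The observed taggings below are made-up example data, and the simple name normalization is just an illustration of how “Nova Spivack” and “Spivack, Nova” might be treated as the same surface string.

```python
from collections import Counter

# Observed taggings: (surface string, concept it was tagged with).
OBSERVED_TAGS = [
    ("Nova Spivack",  "Person:nova_spivack"),
    ("Nova Spivack",  "Person:nova_spivack"),
    ("Nova Spivack",  "Car:chevy_nova"),      # an occasional mis-tag
    ("Spivack, Nova", "Person:nova_spivack"),
]

def normalize(surface):
    """Treat 'Spivack, Nova' and 'Nova Spivack' as the same name string."""
    return " ".join(sorted(word.strip(",") for word in surface.split()))

def best_meaning(surface):
    """Pick the meaning with the most votes for this (normalized) string."""
    votes = Counter(
        concept for s, concept in OBSERVED_TAGS
        if normalize(s) == normalize(surface)
    )
    return votes.most_common(1)[0][0] if votes else None

print(best_meaning("Spivack, Nova"))  # Person:nova_spivack (3 votes to 1)
```

The occasional mis-tag is simply outvoted, which is the emergent, Darwinian quality the text describes: no single tagger has to be right, only the community in aggregate.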

One thing is certain: if communities were able to tag things with more types of tags, and these tags were connected to ontologies and knowledge bases, the result would be a lot of semantic metadata being added to content in a completely bottom-up, grassroots manner, and this in turn would enable the process to start to become automated, or at least machine-augmented.

Getting the Process Started

But making the user experience of semantic tagging easy enough, and immediately beneficial enough, that regular people will do it is a challenge that has yet to be solved. However, it will be solved shortly. It has to be: jump-starting the Semantic Web depends on it. And many companies and researchers know this and are working on it right now.

I believe that the Tools Problem – the lack of commercial-grade tools for building semantic applications – is essentially solved already (although the products have not hit the market yet; they will within a few years at most). The Ontology Problem is further from being solved. I think it will be solved through a few “killer apps” that result in the building up of a large amount of content around particular ontologies within particular online services.

Where might we see this content initially arising? In my opinion it will most likely be within vertical communities of interest, communities of practice, and communities of purpose. Within such communities there is a need to create a common body of knowledge and to make that knowledge more accessible, connected and useful.

The Semantic Web can really improve the quality of knowledge and user experience within these domains. Because they are communities, not just static content services, these organizations are driven by user-contributed content – users play a key role in building content and tagging it. We already see this process starting to take place in communities such as Flickr, del.icio.us, the Wikipedia and Digg. We know that communities of people do tag content, and consume tagged content, if it is easy and beneficial enough for them to do so.

In the near future we may see miniature Semantic Webs arising around particular places, topics and subject areas, projects, and other organizations. Or perhaps, like almost every form of new media in recent times, we may see early adoption of the Semantic Web around online porn — what might be called “the sementic web.”

Whether you like it or not, it is a fact that pornography was one of the biggest drivers of early mainstream adoption of personal video technology, CD-ROMs, and also of the Internet and the Web.

But I think that probably isn’t necessary this time around. While I’m sure the so-called “sementic web” could benefit from the Semantic Web, it isn’t going to be the primary driver of adoption. That’s probably a good thing – the world can just skip over that phase of development and embrace this technology with both hands, so to speak.

The World Wide Database

In some ways one could think of the Semantic Web as “the world wide database” – it does for the meaning of data records what the Web did for the formatting of documents. But that’s just the beginning. It actually turns documents into richer data records. It turns unstructured data into structured data. All data becomes structured data, in fact. The structure is not merely defined structurally, it is defined semantically.

In other words, it’s not merely that, for example, a data record or document can be defined in such a way as to specify that it contains a certain field of data with a certain label at a certain location – it defines what that field of data actually means, in an unambiguous, machine-understandable way. If all you want is a Web of data, XML is good enough. But if you want to make that data interoperable and machine-understandable then you need RDF and OWL – the Semantic Web.

Like any database, the Semantic Web – or rather the myriad mini-semantic-webs that will comprise it – has to overcome the challenge of data integration. Ontologies provide a better way to describe and map data, but the data still has to be described and mapped, and this does take some work. It’s not a magic bullet.

The Semantic Web makes it easier to integrate data, but it doesn’t remove the data integration problem altogether. I think the eventual solution to this problem will combine technological and community (folksonomy-oriented) approaches.

The Semantic Web in Historical Context

Let’s transition now and zoom out to see the bigger picture. The Semantic Web provides technologies for representing and sharing knowledge in new ways. In particular, it makes knowledge more accessible to software, and thus to other people. Another way of saying this is that it liberates knowledge from particular human minds and organizations – it provides a way to make knowledge explicit, in a standardized format that any application can understand. This is quite significant. Let’s put it in historical perspective.

Before the invention of the printing press, there were two ways to spread knowledge – one was orally, the other was in some symbolic form such as art or written manuscripts. The oral transmission of knowledge had limited range and a high error rate, and the only way to learn something was to meet someone who knew it and get them to tell you. The other option, symbolic communication through art and writing, provided a means to communicate knowledge independently of particular people – but it was only feasible to produce a few copies of any given artwork or manuscript because they had to be copied by hand. So the transmission of knowledge was limited to small groups, or at least small audiences. Basically, the only way to get access to this knowledge was to be one of the lucky few who could acquire one of its rare physical copies.

The invention of the printing press changed this – for the first time knowledge could be rapidly and cost-effectively mass-produced and mass-distributed. Printing made it possible to share knowledge with ever-larger audiences. This enabled a huge transformation for human knowledge, society, government, technology – really every area of human life was transformed by this innovation.

The World Wide Web made the replication and distribution of knowledge even easier. With the Web you don’t even have to physically print or distribute knowledge anymore; the cost of distribution is effectively zero, and everyone has instant access to everything from anywhere, anytime. That’s a lot better than having to lug around a stack of physical books. Everyone potentially has whatever knowledge they need with no physical barriers. This has been another huge transformation for humanity – and it has affected every area of human life. Like the printing press, the Web fundamentally changed the economics of knowledge.

The Semantic Web is the next big step in this process – it will make all the knowledge of the human race accessible to software. For the first time, non-human things (software applications) will be able to start working with human knowledge to do things (for humans) on their own. This is a big leap – a leap like the emergence of a new species, or the symbiosis of two existing species into a new form of life.

The printing press and the Web changed the economics of replicating, distributing and accessing knowledge. The Semantic Web changes the economics of processing knowledge. Unlike the printing press and the Web, the Semantic Web enables knowledge to be processed by non-human things.

In other words, humans don’t have to do all the thinking on their own; they can be assisted by software. Of course we humans have to at least first create the software (until we someday learn to create software that is smart enough to create software too), and we have to create the ontologies necessary for the software to actually understand anything (until we learn to create software that is smart enough to create ontologies too), and we have to add the semantic metadata to our content in various ways (until our software is smart enough to do this for us, which it almost is already). But once we do the initial work of making the ontologies and software, and adding semantic metadata, the system starts to pick up speed on its own, and over time the amount of work we humans have to do to make it all function decreases. Eventually, once the system has encoded enough knowledge and intelligence, it starts to function without needing much help, and when it does need our help, it will simply ask us and learn from our answers.

This may sound like science fiction today, but in fact a lot of this is already built and working in the lab. The big hurdle is figuring out how to get this technology to mass market. That is probably as hard as inventing the technology in the first place. But I’m confident that someone will solve it eventually.

Once this happens, the economics of processing knowledge will truly be different than it is today. Instead of needing an actual real-live expert, the knowledge of that expert will be accessible to software that can act as their proxy – and anyone will be able to access this virtual expert, anywhere, anytime. It will be like the Web – but instead of just information being accessible, the combined knowledge and expertise of all of humanity will also be accessible, and not just to people but also to software applications.

The Question of Consciousness

The Semantic Web literally enables humans to share their knowledge with each other and with machines. It enables the virtualization of human knowledge and intelligence. In doing this, it will lend machines “minds” in a certain sense – namely in that they will at least be able to correctly interpret the meaning of information and replicate the expertise of experts.

But will these machine-minds be conscious? Will they be aware of the meanings they interpret, or will they just be automatons that are simply following instructions without any awareness of the meanings they are processing? I doubt that software will ever be conscious, because from what I can tell consciousness — or what might be called the sentient awareness of awareness itself, as well as of other things that are sensed — is an immaterial phenomenon that is as fundamental as space, time and energy — or perhaps even more fundamental. But this is just my personal opinion after having searched for consciousness through every means possible for decades. It just cannot be found to be a thing, yet it is definitely and undeniably taking place.

Consciousness can be exemplified through the analogy of space (though unlike space, consciousness has this property of being aware; it’s not a mere lifeless void). We all agree space is there, but nobody can actually point to it somewhere, and nobody can synthesize space. Space is immaterial and fundamental. It is primordial. So is electricity. Nobody really knows what electricity ultimately is, but if you build the right kind of circuit you can channel it, and we’ve learned a lot about how to do that.

Perhaps we may figure out how to channel consciousness, as we channel electricity, with some sort of synthetic device someday, but I think that is highly unlikely. I think if you really want to create consciousness it’s much easier and more effective to just have children. That’s something ordinary mortals can do today with the technology they were born with. Of course when you have children you don’t really “create” their consciousness; it seems to be there on its own. We don’t really know what it is or where it comes from, or when it arises. We know very little about consciousness today. Considering that it is the most fundamental human experience of all, it is actually surprising how little we know about it!

In any case, until we delve far more deeply into the nature of the mind, consciousness will be barely understood or recognized, let alone explained or synthesized by anyone. In many eastern civilizations there are multi-thousand-year traditions that focus quite precisely on the nature of consciousness. The major religions have universally concluded that consciousness is beyond the reach of science, beyond the reach of concepts, beyond the mind entirely. All those smart people analyzing consciousness for so long, with such precision and so many methods of inquiry, may have a point worth listening to.

Whether or not machines will ever actually “know” meaning, or be capable of being conscious of that meaning or expertise, is a big debate. But at least we can all agree that they will be able to interpret the meaning of information and rules if given the right instructions. Without having to be conscious, software will be able to process semantics quite well — this has already been proven. It’s working today.

While consciousness is, and may always be, a mystery that we cannot synthesize, the ability of software to follow instructions is an established fact. In its most reduced form, the Semantic Web just makes it possible to provide richer kinds of instructions. There’s no magic to it. Just a lot of details. In fact, to play on a famous line, “it’s semantics all the way down.”

The Semantic Web does not require that we make conscious software. It just provides a way to make slightly more intelligent software. There’s a big difference. Intelligence is, for the most part, simply a form of information processing. It does not require consciousness — the actual awareness of what is going on — which is something else altogether.

While highly intelligent software may need to sense its environment and its own internal state, and reason about these, it does not actually have to be conscious to do this. These operations are for the most part simple procedures applied vast numbers of times and in complex patterns. Nowhere in them is there any consciousness, nor does consciousness suddenly emerge when suitable levels of complexity are reached.

Consciousness is something quite special and mysterious. And fortunately for humans, it is not necessary for the creation of more intelligent software, nor is it a byproduct of the creation of more intelligent software, in my opinion.

The Intelligence of the Web

So the real point of the Semantic Web is that it enables the Web to become more intelligent. At first this may seem like a rather outlandish statement, but in fact the Web is already becoming intelligent, even without the Semantic Web.

Although the intelligence of the Web is not very evident at first glance, it can be found if you look for it. This intelligence doesn’t exist across the entire Web yet; it only exists in islands that are few and far between compared to the vast amount of information on the Web as a whole. But these islands are growing, more are appearing every year, and they are starting to connect together. And as this happens, the collective intelligence of the Web is increasing.

Perhaps the premier example of an “island of intelligence” is the Wikipedia, but there are many others: the Open Directory, portals such as Yahoo and Google, vertical content providers such as CNET and WebMD, commerce communities such as Craigslist and Amazon, content-oriented communities such as LiveJournal, Slashdot, Flickr and Digg, and of course the millions of discussion boards scattered around the Web, and social communities such as MySpace and Facebook. There are also large numbers of private islands of intelligence on the Web within enterprises — for example the many online knowledge and collaboration portals that exist within businesses, non-profits, and governments.

What makes these islands “intelligent” is that they are places where people (and sometimes applications as well) are able to interact with each other to help grow and evolve collections of knowledge. When you look at them close up, they appear to be just like any other Web site. But when you look at what they are doing as a whole, these services are thinking. They are learning, self-organizing, sensing their environments, interpreting, reasoning, understanding, introspecting, and building knowledge. These are the activities of minds, of intelligent systems.

The intelligence of a system such as the Wikipedia exists on several levels: the individuals who author and edit it are intelligent, the groups that help to manage it are intelligent, and the community as a whole — which is constantly growing, changing, and learning — is intelligent.

Flickr and Digg also exhibit intelligence. Flickr's growing system of tags is the beginning of something resembling a collective visual sense organ on the Web. Images are perceived, stored, interpreted, and connected to concepts and other images. This is what the human visual system does. Similarly, Digg is a community that collectively detects, focuses attention on, and interprets current news. It's not unlike a primitive collective analogue to the human faculty for situational awareness.

There are many other examples of collective intelligence emerging on the Web. The Semantic Web will add one more form of intelligent actor to the mix: intelligent applications. In the future, once the Wikipedia is connected to the Semantic Web, it will be authored and edited not only by humans but also by smart applications that constantly look for new information, new connections, and new inferences to add to it.

Although the knowledge on the Web today is still mostly organized within different islands of intelligence, these islands are starting to reach out and connect together. They are forming trade routes, connecting their economies, and learning each other's languages and cultures. The next step will be for these islands of knowledge to begin to share not just content and services, but also their knowledge — what they know about their content and services. The Semantic Web will make this possible by providing an open format for the representation and exchange of knowledge and expertise.

When applications integrate their content using the Semantic Web they will also be able to integrate their context, their knowledge — this will make the content much more useful and the integration much deeper. For example, when an application imports photos from another application it will also be able to import semantic metadata about the meaning and connections of those photos. Everything that the community and application know about the photos in the service that provides the content (the photos) can be shared with the service that receives the content. Better yet, there will be no need for custom application integration in order for this to happen: as long as both services conform to the open standards of the Semantic Web, the knowledge is instantly portable and reusable.
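
To make this concrete, here is a minimal sketch in Python of the difference between sharing bare content and sharing content together with its knowledge. The service names, properties, and data are all hypothetical; a real implementation would exchange RDF using the open Semantic Web standards rather than Python data structures.

```python
# Hypothetical sketch: two services exchanging a photo together with
# its semantic metadata as (subject, predicate, object) triples,
# rather than as bare content. All names and fields are invented.

def export_photo(photo_id):
    """Service A: return the photo reference plus everything known about it."""
    return {
        "content": f"https://service-a.example/photos/{photo_id}.jpg",
        "metadata": [
            (photo_id, "depicts", "golden-gate-bridge"),
            (photo_id, "takenBy", "alice"),
            ("golden-gate-bridge", "locatedIn", "san-francisco"),
        ],
    }

def import_photo(record, knowledge_base):
    """Service B: ingest the content link and merge the metadata.
    No custom integration code specific to service A is needed."""
    knowledge_base.update(record["metadata"])
    return record["content"]

kb = set()
url = import_photo(export_photo("p1"), kb)
# Service B now "knows" the photo depicts a bridge located in San Francisco,
# not just that it has a JPEG at some URL.
print(("golden-gate-bridge", "locatedIn", "san-francisco") in kb)
```

The point of the sketch is that the knowledge travels with the content in a shared, machine-readable form, so the receiving service can reason over it immediately.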

Freeing Intelligence from Silos

Today much of the real value of the Web (and in the world) is still locked away in the minds of individuals, the cultures of groups and organizations, and application-specific data silos. The emerging Semantic Web will begin to unlock the intelligence in these silos by making the knowledge and expertise they represent more accessible and understandable.

It will free knowledge and expertise from the narrow confines of individual minds, groups and organizations, and applications, and make them not only more interoperable, but more portable. It will be possible, for example, for a person or an application to share everything they know about a subject of interest as easily as we share documents today. In essence, the Semantic Web provides a common language (or at least a common set of languages) for sharing knowledge and intelligence as easily as we share content today.

The Semantic Web also provides standards for searching and reasoning more intelligently. The SPARQL query language enables any application to ask for knowledge from any other application that speaks SPARQL. Instead of mere keyword search, this enables semantic search: applications can search for specific types of things that have particular attributes and relationships to other things.
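
As a rough illustration of the difference, here is a toy semantic search in Python. It is not a real SPARQL engine, just a hand-rolled triple-pattern matcher over invented data, but it shows how a query can ask for things by their attributes and relationships rather than by keywords.

```python
# Toy sketch (not a real SPARQL engine): a tiny triple store with
# pattern matching. All data and property names are hypothetical.

# Knowledge represented as (subject, predicate, object) triples.
triples = [
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
    ("alice", "knows", "bob"),
    ("acme", "locatedIn", "boston"),
]

def query(pattern, bindings=None):
    """Match one (s, p, o) pattern; terms starting with '?' are variables."""
    bindings = bindings or {}
    for s, p, o in triples:
        b = dict(bindings)
        ok = True
        for term, value in zip(pattern, (s, p, o)):
            if term.startswith("?"):
                if term in b and b[term] != value:
                    ok = False
                    break
                b[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            yield b

# "Find people who work for an organization located in Boston" -- roughly
# SPARQL's: SELECT ?who WHERE { ?who :worksFor ?org . ?org :locatedIn :boston }
results = [
    b2["?who"]
    for b1 in query(("?who", "worksFor", "?org"))
    for b2 in query(("?org", "locatedIn", "boston"), b1)
]
print(sorted(results))  # ['alice', 'bob']
```

A keyword search for "Boston" would miss Alice and Bob entirely; the semantic query finds them through the chain of relationships.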

In addition, standards such as SWRL provide formalisms for representing and sharing axioms, or rules, as well. Rules are a particular kind of knowledge, and there is a lot of it to represent and share — procedural knowledge, for example, and logical structures about the world. An ontology provides a means to describe the basic entities, their attributes and relations, but rules enable you to also make logical assertions and inferences about them. Without going into a lot of detail about how rules work here, the important point is that they too are included in the framework. All forms of knowledge can be represented by the Semantic Web.
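
Here is a minimal sketch of the idea behind rule-based inference, written in plain Python rather than SWRL. The rule, the facts, and all names are invented for illustration: if two distinct people work for the same organization, infer that they are colleagues.

```python
# Illustrative sketch of rule-based inference (the idea behind
# SWRL-style rules), over hypothetical facts. The derived property
# "colleagueOf" never appears in the input; it is inferred.

facts = {
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
    ("carol", "worksFor", "initech"),
}

def apply_colleague_rule(kb_in):
    """One forward-chaining pass:
    if X worksFor Z and Y worksFor Z and X != Y, derive X colleagueOf Y."""
    workers = [(s, o) for s, p, o in kb_in if p == "worksFor"]
    derived = {
        (x, "colleagueOf", y)
        for x, org_x in workers
        for y, org_y in workers
        if x != y and org_x == org_y
    }
    return kb_in | derived

kb = apply_colleague_rule(facts)
print(("alice", "colleagueOf", "bob") in kb)    # True
print(("alice", "colleagueOf", "carol") in kb)  # False
```

A real SWRL engine would express the same rule declaratively and share it between applications; the sketch only shows the shape of the inference step.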

Zooming Way, Waaaay Out

So far in this article I've spent a lot of time talking about plumbing — the pipes, fluids, valves, fixtures, specifications and tools of the Semantic Web. I've also spent some time on illustrations of how it might be useful in the very near future to individuals, groups and organizations. But where is it heading after this? What is its long-term potential, and what might it mean for the human race on a historical time-scale?

For those of you who would prefer not to speculate, stop reading here. For the rest of you: I believe that the true significance of the Semantic Web, on a long-term timescale, is that it provides an infrastructure that will enable the evolution of increasingly sophisticated forms of collective intelligence. Ultimately this will result in the Web itself becoming more and more intelligent, until one day the entire human species, together with all of its software and knowledge, will function as something like a single worldwide distributed mind — a global mind.

Just like the mind of a single human individual, the global mind will be very chaotic, yet out of that chaos will emerge cohesive patterns of thought and decision. Just as in an individual human mind, there will be feedback between different levels of order — from individuals to groups to systems of groups, and back down from systems of groups to groups to individuals. Because of these feedback loops the system will adapt to its environment, and to its own internal state.

The coming global mind will collectively exhibit forms of cognition and behavior that are the signs of higher forms of intelligence. It will form and react to concepts about its "self" — just like an individual human mind. It will learn and introspect and explore the universe. The thoughts it thinks may sometimes be too big for any one person to understand or even recognize — they will be composed of shifting patterns of millions of pieces of knowledge.

The Role of Humanity

Every person on the Internet will be a part of the global mind. And collectively they will function as its consciousness. I do not believe some new form of consciousness will suddenly emerge when the Web passes some threshold of complexity. I believe that humanity IS the consciousness of the Web, and until and unless we ever find a way to connect other lifeforms to the Web, or we build conscious machines, humans will be the only form of consciousness of the Web.

When I say that humans will function as the consciousness of the Web, I mean that we will be the things in the system that know. The knowledge of the Semantic Web is what is known, but what knows that knowledge has to be something other than knowledge. A thought is knowledge, but what knows that thought is not knowledge; it is consciousness, whatever that is. We can figure out how to enable machines to represent and use knowledge, but we don't know how to make them conscious — and we don't have to, because we are already conscious.

As we've discussed earlier in this article, we don't need conscious machines, we just need more intelligent machines. Intelligence — at least basic forms of it — does not require consciousness. It may be the case that the very highest forms of intelligence require, or are only possible with, consciousness. This may mean that software will never achieve the highest levels of intelligence, and it probably guarantees that humans (and other conscious things) will always play a special role in the world; a role that no computer system will be able to compete with. We provide the consciousness to the system. There may be all sorts of other intelligent, non-conscious software applications and communities on the Web; in fact there already are, with varying degrees of intelligence. But individual humans, and groups of humans, will be the only consciousness on the Web.

The Collective Self

Although the software of the Semantic Web will not be conscious, we can say that the system as a whole contains, or is, conscious to the extent that human consciousnesses are part of it. And like most conscious entities, it may also start to be self-conscious.

If the Web ever becomes a global mind, as I am predicting, will it have a "self"? Will there be a part of the Web that functions as its central self-representation? Perhaps someone will build something like that someday, or perhaps it will evolve. Perhaps it will function by collecting reports from applications and people in real-time — a giant collective zeitgeist.

In the early days of the Web, portals such as Yahoo! provided this function — they were almost real-time maps of the Web and what was happening. Today making such a map is nearly impossible, but services such as Google Zeitgeist at least attempt to provide approximations of it. Perhaps through random sampling it can be done on a broader scale.

My guess is that the global mind will need a self-representation at some point. All forms of higher intelligence seem to have one. It's necessary for understanding, learning and planning. It may evolve at first as a bunch of competing self-representations within particular services or subsystems within the collective. Eventually they will converge, or at least narrow down to just a few major perspectives. There may also be millions of minor perspectives that can be drilled down into for particular viewpoints from these top-level "portals."

The collective self will function much like the individual self — as a mirror of sorts. Its function is simply to reflect. As soon as it exists, the entire system will make a shift to a greater form of intelligence, because for the first time it will be able to see itself, to measure itself, as a whole. It is at this phase transition, when the first truly global collective self-mirroring function evolves, that we can say the transition from a bunch of cooperating intelligent parts to a new intelligent whole in its own right has taken place.

I think that the collective self, even if it converges on a few major perspectives that group and summarize millions of minor perspectives, will be community-driven and highly decentralized. At least I hope so, because the self-concept is the most important part of any mind, and it should be designed in a way that protects it from being manipulated for nefarious ends.

Programming the Global Mind

On the other hand, there are times when a little bit of adjustment or guidance is warranted — just as in the case of an individual mind, the collective self doesn't merely reflect; it effectively guides the interpretation of the past and present, and planning for the future.

One way to change the direction of the collective mind is to change what is appearing in the mirror of the collective self. This is a form of programming on a vast scale. When this programming is dishonest or used for negative purposes it is called "propaganda," but there are cases where it can be done for beneficial purposes as well. Examples of this today are public service advertising and educational public television programming. All forms of mass media today are in fact collective social programming. When you realize this, it is not surprising that our present culture is violent and messed up — just look at our mass media!

In terms of the global mind, ideally one would hope that it would be able to learn and improve over time. One would hope that it would not have the collective equivalent of psycho-social disorders. To facilitate this, just like any form of higher intelligence, it may need to be taught, and even parented a bit. It also may need a form of therapy now and then. These functions could be provided by the people who participate in it. Again, I believe that humans serve a vital and irreplaceable role in this process.

How It All Might Unfold

Now how is this all going to unfold? I believe that there are a number of key evolutionary steps that the Semantic Web will go through as the Web evolves towards a true global mind:

1. Representing individual knowledge. The first step is to make individuals' knowledge accessible to themselves. As individuals become inundated with increasing amounts of information, they will need better ways of managing it, keeping track of it, and re-using it. They will (or already do) need "personal knowledge management."

2. Connecting individual knowledge. Next, once individual knowledge is represented, it becomes possible to start connecting it and sharing it across individuals. This stage could be called "interpersonal knowledge management."

3. Representing group knowledge. Groups of individuals also need ways of collectively representing their knowledge, making sense of it, and growing it over time. Wikis and community portals are just the beginning. The Semantic Web will take these "group minds" to the next level — it will make the collective knowledge of groups far richer and more re-usable.

4. Connecting group knowledge. This step is analogous to connecting individual knowledge. Here, groups become able to connect their knowledge together to form larger collectives, and it becomes possible to more easily access and share knowledge between different groups in very different areas of interest.

5. Representing the knowledge of the entire Web. This stage — what might be called "the global mind" — is still in the distant future, but at this point we will begin to be able to view, search, and navigate the knowledge of the entire Web as a whole. The distinction here is that instead of a collection of interoperating but separate intelligent applications, individuals and groups, the entire Web itself will begin to function as one cohesive intelligent system. The crucial step that enables this to happen is the formation of a collective self-representation. This enables the system to see itself as a whole for the first time.

How It May Be Organized

I believe the global mind will be organized mainly in the form of bottom-up and lateral, distributed, emergent computation and community — but it will be facilitated by certain key top-down services that help to organize and make sense of it as a whole. I think this future Web will be highly distributed, but will have certain large services within it as well — much like the human brain itself, which is organized into functional subsystems for processes like vision, hearing, language, planning, memory, learning, etc.

As the Web gets more complex there will come a day when nobody understands it anymore; after that point we will probably learn more about how the Web is organized by learning about the human mind and brain — they will be quite similar, in my opinion. Likewise, we will probably learn a tremendous amount about the functioning of the human brain and mind by observing how the Web functions, grows and evolves over time, because they really are quite similar, in at least an abstract sense.

The Internet and its software and content are like a brain, and the state of its software and content is like its mind. The people on the Internet are like its consciousness. Although these are just analogies, they are actually useful, at least in helping us to envision and understand this complex system. As the field of general systems theory has shown us in the past, systems at very different levels of scale tend to share the same basic characteristics and obey the same basic laws of behavior. Not only that, but evolution tends to converge on similar solutions for similar problems. So these analogies may be more than just rough approximations; they may in fact be quite accurate.

The future global brain will require tremendous computing and storage resources — far beyond even what Google provides today. Fortunately, as Moore's Law advances, the cost of computing and storage will eventually be low enough to do this cost-effectively. However, even with much cheaper and more powerful computing resources it will still have to be a distributed system. I doubt that there will be any central node, because quite simply no central solution will be able to keep up with all the distributed change taking place. Highly distributed problems require distributed solutions, and that is probably what will eventually emerge on the future Web.

Someday perhaps it will be more like a peer-to-peer network, composed of applications and people who function somewhat like the neurons in the human brain. Perhaps they will be connected and organized by higher-level super-peers or super-nodes which bring things together, make sense of what is going on, and coordinate mass collective activities. But even these higher-level services will probably have to be highly distributed as well. It really will be difficult to draw boundaries between parts of this system; they will all be connected as an integral whole.

In fact it may look very much like a grid computing architecture — one in which all the services are dynamically distributed across all the nodes, such that at any one time any node might be working on a variety of tasks for different services. My guess is that because this is the simplest, most fault-tolerant, and most efficient way to do mass computation, it is probably what will evolve here on Earth.

The Ecology of Mind

Where we are today in this evolutionary process is perhaps equivalent to the rise of early forms of hominids — perhaps Australopithecus or Cro-Magnon, or maybe the first Homo sapiens. Compared to early man, the global mind is like the rise of 21st-century mega-cities. A lot of evolution has to happen to get there. But it probably will happen, unless humanity self-destructs first, which I sincerely hope we somehow manage to avoid. And this brings me to a final point. This vision of the future global mind is highly technological; however, I don't think we'll ever accomplish it without a new focus on ecology.

For most people, ecology probably conjures up images of hippies and biologists, or maybe hippies who are biologists, or at least organic farmers — but in fact it is really the science of living systems and how they work. And any system that includes living things is a living system. This means that the Web is a living system, and the global mind will be a living system too. As a living system, the Web is an ecosystem and is also connected to other ecosystems. In short, ecology is absolutely essential to making sense of the Web, let alone helping to grow and evolve it.

In many ways the Semantic Web — and the collective minds, and the global mind, that it enables — can be seen as an ecosystem of people, applications, information and knowledge. This ecosystem is very complex, much like natural ecosystems in the physical world. An ecosystem isn't built; it's grown, and evolved. And similarly the Semantic Web, and the coming global mind, will not really be built; they will be grown and evolved. The people and organizations that end up playing a leading role in this process will be the ones that understand and adapt to the ecology most effectively.

In my opinion, ecology is going to be the most important science and discipline of the 21st century — it is the science of healthy systems. What nature teaches us about complex systems can be applied to every kind of system, and especially the systems we are evolving on the Web. In order to have any hope of evolving a global mind, and all the wonderful levels of species-level collective intelligence that it will enable, we have to not destroy the planet before we get there. Ecology is the science that can save us, not the Semantic Web (although perhaps by improving collective intelligence, it can help).

Ecology is essentially the science of community — whether biological, technological or social. And community is a key part of the Semantic Web at every level: communities of software, communities of people, and communities of groups. In the end the global mind is the ultimate human community. It is the reward we get for finally learning how to live together in peace and balance with our environment.

The Necessity of Sustainability

The point of this discussion of the relevance of ecology to the future of the Web, and of my vision for the global mind, is that if the global mind ever emerges it will not be in a world that is anything like what we might imagine. It won't be like the Borg in Star Trek; it won't be like living inside of a machine. Humans won't be relegated to the roles of slaves or drones. Robots won't be doing all the work. The entire world won't be coated with silicon. We won't all live in a virtual reality. It won't be one of these technological dystopias.

In fact, I think the global mind can only come to pass in a much greener, more organic, healthier, more balanced and sustainable world. Because it will take a long time for the global mind to emerge, if humanity doesn't figure out how to create that sort of world it will wipe itself out sooner or later — and certainly long before the global mind really happens. Not only that, but the global mind will be smart by definition, and hopefully this intelligence will extend to helping humanity manage its resources, civilizations and relationships to the natural environment.

The Smart Environment

The global mind also needs a global body, so to speak. It's not going to be an isolated homunculus floating in a vat of liquid that replaces the physical world! It will be a smart environment that ubiquitously integrates with our physical world. We won't have to sit in front of computers or deliberately log on to the network to interact with the global mind. It will be everywhere.

The global mind will be physically integrated into furniture, houses, vehicles, devices, artworks, and even the natural environment. It will sense the state of the world and different ecosystems in real-time and alert humans and applications to emerging threats. It will also be able to allocate resources intelligently to compensate for natural disasters, storms, and environmental damage — much in the way that the air traffic control system allocates and manages airplane traffic. It won't do it all on its own; humans and organizations will be a key part of the process.

Someday the global mind may even be physically integrated into our bodies and brains, even down to the level of our DNA. It may in fact learn how to cure diseases and improve the design of the human body, extending our lives, sensory capabilities, and cognitive abilities. We may be able to interact with it by thought alone. At that point it will become indistinguishable from a limited form of omniscience, and everyone may have access to it. Although it will only extend to wherever humanity has a presence in the universe, within that boundary it will know everything there is to know, and everyone will be able to know any of it they are interested in.

Enabling a Better World

By enabling greater forms of collective intelligence to emerge we really are helping to make a better world — a world that learns and hopefully understands itself well enough to find a way to survive. We're building something that someday will be wonderful, far greater than any of us can imagine. We're helping to make the species and the whole planet more intelligent. We're building the tools for the future of human community. And that future community, if it ever arrives, will be better, more self-aware, and more sustainable than the one we live in today.

I should also mention that knowledge is power, and power can be used for good or evil. The Semantic Web makes knowledge more accessible. This puts more power in the hands of the many, not just the few. As long as we stick to this vision — making knowledge open and accessible, using open standards, in as distributed a fashion as we can devise — the potential power of the Semantic Web will be protected against being co-opted or controlled by the few at the expense of the many. This is where technologists really have to be socially responsible when making development decisions. It's important that we build a more open world, not a less open world. It's important that we build a world where knowledge, integration and unification are balanced with respect for privacy, individuality, diversity and freedom of opinion.

But I am not particularly worried that the Semantic Web and the future global mind will be the ultimate evil — I don't think it is likely that we will end up with a system of total control dominated by evil masterminds with powerful Semantic Web computer systems to do their dirty work. Statistically speaking, criminal empires don't last very long, because they are run by criminals, who tend to be very short-sighted and who also surround themselves with other criminals who eventually unseat them — or they self-destruct. It's possible that the Semantic Web, like any other technology, may be used by the bad guys to spy on citizens, manipulate the world, and do evil things. But only in the short-term.

In the long-term, either our civilization will get tired of endless successions of criminal empires and realize that the only way to actually survive as a species is to invent a form of government that is immune to being taken over by evil people and organizations, or it will self-destruct. Either way, that is a hurdle we have to cross before the global mind that I envision can ever come about. Many civilizations came before ours, and it is likely that ours will not be the last one on this planet. It may in fact be the case that a different form of civilization is necessary for the global mind to emerge — and is itself the natural byproduct of the global mind's emergence.

We know that the global mind cannot emerge anytime soon, and therefore, if it ever emerges, then by definition it must be in the context of a civilization that has learned to become sustainable. A long-term sustainable civilization is a non-evil civilization. And that is why I think it is a safe bet to be so optimistic about the long-term future of this trend.

Is Moral Judgement Hard-Wired Into the Brain?

A Harvard University researcher believes that moral judgement is hard-wired into the brain:

The moral grammar now universal among people presumably evolved to its final shape during the hunter-gatherer phase of the human past, before the dispersal from the ancestral homeland in northeast Africa some 50,000 years ago. This may be why events before our eyes carry far greater moral weight than happenings far away, Dr. Hauser believes, since in those days one never had to care about people remote from one's environment.

Dr. Hauser believes that the moral grammar may have evolved through the evolutionary mechanism known as group selection. A group bound by altruism toward its members and rigorous discouragement of cheaters would be more likely to prevail over a less cohesive society, so genes for moral grammar would become more common.

Playing Proteins as Songs Helps Researchers Hear Patterns

All living things are made up of proteins. Each protein is a string of amino acids. There are 20 different amino acids, and each protein can consist of dozens to thousands of them.

Scientists write down these amino acid sequences as series of text letters. Clark and her colleagues assign musical notes to the different values of the amino acids in each sequence. The result is music in the form of "protein songs."
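
The idea can be sketched in a few lines of Python. The particular note assignment below is invented for illustration; the article does not say which notes Clark's team assigns to which amino acids.

```python
# Hypothetical sketch of the technique described above: assign a musical
# note to each amino-acid letter so a protein sequence becomes a "song".
# The mapping here is invented, not the one used by Clark's group.

# The 20 standard amino acids, as one-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Cycle the 12 chromatic note names over the 20 amino acids.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
NOTE_FOR = {aa: NOTES[i % len(NOTES)] for i, aa in enumerate(AMINO_ACIDS)}

def protein_song(sequence):
    """Translate an amino-acid sequence into a list of note names."""
    return [NOTE_FOR[aa] for aa in sequence if aa in NOTE_FOR]

# A short, made-up peptide fragment:
print(protein_song("ACDG"))  # ['C', 'C#', 'D', 'F']
```

Played as a melody, repeated motifs in the sequence become repeated musical phrases, which is what makes the larger patterns audible.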

By listening to the songs, scientists and students alike can hear the structure of a protein. And when the songs of the same protein from different species are played together, their similarities and differences are apparent to the ear.

"It's an illustration transferred into a medium people will find more accessible than just [text] sequences," Clark said. "If you look at protein sequences, if you just read those as they are written down, recorded in a database, it's hard to get a sense for the pattern."

When people look at a page full of text corresponding to protein sequences, Clark explained, they tend to spot clusters of letters but fail to see the larger pattern.

"If you play [the protein song for that sequence] you get that sense of the pattern much more strongly," she said. "That's my feeling at least. You hear stuff you can't see."

From National Geographic

Is There Room for The Soul? – Good Article on Cognitive Science

This is a surprisingly good article on the nature of consciousness — providing a survey of the current state-of-the-art in cognitive science research. It covers the question from a number of perspectives and interviews many of the leading current researchers.

Why Machines Will Never be Conscious

Below is the text of my bet on Long Bets. Go there to vote.

“By 2050 no synthetic computer nor machine intelligence will have become truly self-aware (i.e., will have become conscious).”

Spivack’s Argument:

(This summary includes my argument, a method for judging the outcome of this bet, and some other thoughts on how to measure awareness…)

A. MY PERSPECTIVE…

Even if a computer passes the Turing Test it will not really be aware that it has passed the Turing Test. Even if a computer seems to be intelligent and can answer most questions as well as an intelligent, self-aware human being, it will not really have a continuum of awareness; it will not really be aware of what it seems to "think" or "know"; it will not have any experience of its own reality or being. It will be nothing more than a fancy inanimate object, a clever machine. It will not be a truly sentient being.

Self-awareness is not the same thing as merely answering questions intelligently. Therefore even if you ask a computer if it is self-aware, and it answers that it is self-aware and that it has passed the Turing Test, it will not really be self-aware or really know that it has passed the Turing Test.

As John Searle and others have pointed out, the Turing Test does not actually measure awareness, it just measures information processing — particularly the ability to follow rules, or at least imitate a particular style of communication. In particular it measures the ability of a computer program to imitate humanlike dialogue, which is different than measuring awareness itself. Thus even if we succeed in creating good AI, we won't necessarily succeed in creating AA ("Artificial Awareness").

But why does this matter? Because ultimately, real awareness may be necessary to making an AI that is as intelligent as a human sentient being. However, since AA is theoretically impossible in my opinion, truly self-aware AI will never be created, and thus no AI will ever be as intelligent as a human sentient being, even if it manages to fool someone into thinking it is (thus passing the Turing Test).

In my opinion, awareness is not an information process at all and will never be simulated or synthesized by any information process. Awareness cannot be measured by an information processing system; it can only be measured by awareness itself — something no formal information processing system can ever simulate or synthesize.

One might ask how it is that a human has awareness, then? My answer is that awareness does not arise from the body or the brain, nor does it arise from any physical cause. Awareness is not in the body or the brain; rather, the body and the brain are in awareness. The situation is analogous to a dream, a simulation or virtual reality, such as that portrayed in the popular film "The Matrix."

We exist in the ultimate virtual reality. The medium of this virtual reality is awareness. That is to say that whatever appears to be happening "out there" or "within the mind" is happening within a unified, nondualistic field of awareness: both the "subject" and the "object" exist equally within this field, and neither is the source of awareness.

This is similar to the way we project ourselves as protagonists in our own dreams: even though our dream bodies appear to be different from other dream-images, they are really equally dream appearances, no more fundamental than dream-objects. We identify with our dream-bodies out of habit, and because it is practical: the stories that take place appear from the perspective of particular bodies. But just because this virtual reality is structured as if awareness were coming from within our heads, it does not mean that is actually the case. In fact, quite the opposite is taking place.

Awareness is not actually “in” the VR; the VR is “in” awareness. Things are exactly the opposite of how they appear. Of course this is just an analogy. For example, unlike the Matrix, the virtual reality we live in is not running on some giant computer somewhere, and there is no hidden force controlling it from behind the scenes. Awareness is the fabric of reality and there is nothing deeper, nothing creating it; it is not running on some cosmic computer. It comes out of nowhere, yet everything else comes out of it.

If we look for awareness we can’t find anything to grasp. It is empty, yet not a mere nothingness; it is an emptiness that is awake, creative, alert, radiant, self-realizing.

Awareness is empty and fundamental like space, but it goes beyond space, for it is also lucid. If we look for space we don’t find anything there; nobody has ever touched or grasped space directly! But unlike space, awareness can at least be measured directly: it can measure itself, it knows its own nature.

Awareness is simply fundamental, a given, the underlying meta-reality in which everything appears. How did it come to be? That is unanswerable. What is it? That is unanswerable as well. But there is no doubt that awareness is taking place. Each sentient being has a direct and intimate experience of their own self-awareness.

Each of us experiences a virtual reality in which we and our world are projections. That which both projects these projections and experiences them is awareness. This is like saying that the VR inherently knows its own content. But in my opinion this knowing comes from outside the system, not from some construct that we can create inside it. So any awareness that arises comes from the transcendental nature of reality itself, not from our bodies, minds, or any physical system within a particular reality.

So is there one cosmic awareness out there that we are all a part of? Not exactly. There is not one awareness, nor are there many awarenesses, because awareness is not a physical thing and cannot be limited by such materialist logical extremes. After all, if it is not graspable, how can we say it is one or many, or any other logical combination of one or many? All we can say is that we are it, whatever it is, and that we cannot explain it further. In being awareness, we are all equal, but we are clearly not the same. We are different projections, and on a relative level each of us is unique, even though on an ultimate level we are perhaps unified by being projections within the same underlying continuum. Yet this continuum is fundamentally empty, impossible to locate or limit, and infinitely beyond the confines of any formal system or universe. It cannot really be called a “thing,” and thus we are not “many” or “one” in actuality; what we really are is totally beyond such dualistic distinctions.

Awareness is like space or reality: something so fundamental, so axiomatic, that it is impossible to prove, grasp, or describe from “inside” the system using the formal logical tools of the system. Since nothing is beyond awareness, there is no outside, no way to ever gain a perspective on awareness that is not mediated by awareness itself.

Therefore there is no way to reduce awareness to anything deeper; there is no way to find anything more fundamental than awareness. But despite this, awareness can be directly experienced, at least by itself.

That which is aware is self-aware. Self-awareness is the very nature of awareness. The self-awareness of awareness does not come from something else; it is inherent to awareness itself. Only awareness is capable of awareness. Nothing that is not aware can ever become aware.

This means awareness is truly fundamental; it has always been present everywhere. Awareness is inherent in the universe as the very basis of everything. It is not something anyone can synthesize, and we cannot build a machine that can suddenly experience awareness.

Only beings who are already aware can ever experience awareness. The fact that we are aware now means that we were always aware, even before we were born! Otherwise we never could have become aware in the first place!

Each of us “is” awareness. The experience of being aware is unique and undeniable. It has its own particular nature, but this cannot be expressed; it can only be known directly. There is no sentient being that is not aware. Furthermore, it would be a logical contradiction to claim that “I am not aware that I am aware” or that “I am aware that I am not aware,” and thus if anyone claims that they are not aware, or that they have ever experienced, or can even imagine, there not being awareness, they are lying. There is nobody who does not experience their own awareness, even if they don’t recognize or admit that they experience it.

The experience of being self-aware is the unique experience of “being,” an experience so basic that it is indescribable in terms of anything else, something that no synthetic computer will ever have.

Eventually, it will be proved that no formal information-processing system is capable of self-awareness, and thus that formal computers cannot be self-aware in principle. This proof will use the abstract self-referential structure of self-awareness to establish that no formal computer can ever be self-aware.

Simply put, computers and computer programs cannot be truly self-referential: they must always refer to something else. There must at least be a set of fixed meta-rules that are not self-referential for a computer or program to work. Awareness is not like this, however; awareness is perfectly self-referential without referring to anything else.

The question will then arise as to what self-awareness is and how it is possible. We will eventually conclude that systems that are self-aware are not formal systems, and that awareness must be at least as fundamental as, or more fundamental than, space, time, and energy.

Currently most scientists and non-scientists consider the physical world to be outside of awareness and independent of it. But considering that nobody has ever experienced, or will ever experience, anything without awareness, it is illogical to assume that anything is really outside of awareness. It is actually far more rational to assume that whatever arises or is experienced is inside awareness, and that nothing is outside of awareness. This assumption that everything is within awareness would actually be a more scientific, observation-based conclusion than the opposite assumption, which is unfounded on anything we have ever observed or will ever be able to observe. After all, we have never observed anything apart from awareness, have we? Thus, contrary to current beliefs, the onus is on scientists to prove that anything is outside of awareness, not the other way around!

Awareness is quite simply the ultimate, primordial, basic nature of reality itself; without awareness there could be no “objective reality” at all and no “subjective beings” to experience it. Awareness is completely transcendental, beyond all limitations and boundaries, outside of all possible systems. What hubris to think we can simply manufacture, or evolve, awareness with a pile of electrified silicon hardware and some software rules.

No matter how powerful the computer, no matter what it is made of, and no matter how sophisticated or emergent the software, it will never be aware or evolve awareness. No computer or machine intelligence will ever be aware. Even a quantum computer (at least one equivalent to a finite non-quantum computer) will not be capable of awareness, and even if it is a transinfinite computer I still have my doubts that it could ever be aware. Awareness is simply not an information process.

B. METHOD OF JUDGING THIS BET…

So the question ultimately is: how do we measure awareness, or at least determine whether a computer is or is not aware? How can we judge the outcome of this bet?

I propose a method here: we let the bettors mutually agree on a judge. If the judge is a computer, fine. If the judge is a human, fine. But both bettors must agree on the judge. If both bettors accept that party as the judge, then the result will be deemed final and reliable. If a computer is chosen by both parties to judge this, then I will concede defeat, though it would take a lot for any computer to convince me that it is aware and thus qualified to judge this competition. On the other hand, my opponent in this debate may accept a human judge, but since they believe that computers can be aware, by accepting a human judge they would be contradicting their own assertion: if a computer is really intelligent and aware, why would they choose a human judge over a computer judge?

This “recursive” judge-selection approach appeals to our inherent, direct human experience of awareness, and to the fact that we trust another aware sentient being more than an inanimate machine to judge whether or not something is aware. This may be the only practical solution to this problem: if both parties agree that a computer can judge, and the computer says the other computer is aware, then so be it! If both parties agree that a human can judge, and the human says that the computer is not aware, so be it! May the best judge win!
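Purely as an illustrative sketch, the judge-selection procedure above can be written out as a small function. Everything here (the names `settle_bet`, `propose_judge`, `accepts`, `is_aware`) is my own hypothetical interface, not an existing protocol or API:

```python
def settle_bet(bettor_a, bettor_b, candidate):
    """Settle the awareness bet via a mutually agreed judge.

    bettor_a, bettor_b: the parties to the bet; each is assumed to
        expose propose_judge() and accepts(judge) -> bool.
    candidate: the machine whose awareness is in question.

    The judge may be human or computer; the verdict is deemed final
    only once BOTH bettors have accepted the judge.
    """
    for judge in (bettor_a.propose_judge(), bettor_b.propose_judge()):
        if bettor_a.accepts(judge) and bettor_b.accepts(judge):
            # Both parties accept this judge, so its verdict is binding.
            return judge.is_aware(candidate)
    # No mutually acceptable judge was found: the bet stays unresolved.
    return None
```

The key design point matches the essay: the procedure itself is neutral about whether the judge is a human or a computer; it only requires mutual agreement, at which point the verdict stands.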

Now, as long as we’re on the subject, how do we know that other humans, such as our potential human judge(s), are actually aware? I believe that self-awareness is detectable by other beings that are also aware, but not detectable by computers that are not aware.

C. A REVERSE TURING TEST FOR DETECTING AWARENESS IN A COMPUTER…

I propose a reversal of the Turing Test for determining whether a computer is aware (and forgive me in advance if anyone else has already proposed this somewhere; I would be happy to give them credit).

Here is the test: something is aware if, whenever it is presented with a case where a human being and a synthetic machine intelligence are equally intelligent and equally capable of expression and interaction but not equally aware (the human is aware and the machine is not), it can reliably and accurately determine that the human being is really aware and the machine is not.

I believe that only systems that are actually aware can correctly differentiate between two equally intelligent entities where one is sentient and the other is just a simulation of sentience, given enough time and experience with those systems.
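The structure of this reverse test can be sketched in code. This is a hypothetical harness only: the detector interface `pick_aware` and the candidate objects are my own illustrative assumptions, and nothing here claims to detect actual awareness:

```python
import random

def reverse_turing_test(detector, human, machine, trials=100):
    """Hypothetical 'reverse Turing test' harness.

    detector: the entity being tested for awareness; assumed to expose
        pick_aware(a, b), which returns whichever candidate it judges
        to be genuinely aware.
    human, machine: two candidates assumed equally intelligent and
        interactive, but only the human is actually aware.

    Returns the fraction of trials in which the detector correctly
    picked the human; 1.0 means perfectly reliable detection.
    """
    correct = 0
    for _ in range(trials):
        pair = [human, machine]
        random.shuffle(pair)  # hide which candidate is presented first
        if detector.pick_aware(pair[0], pair[1]) is human:
            correct += 1
    return correct / trials
```

On the essay’s thesis, only an aware detector could ever score reliably above chance here; a non-aware detector would have no signal to go on, since both candidates are stipulated to be equally intelligent.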

How can such a differentiation be made? Assuming the human and computer candidates are equally intelligent and interactive, what is the signature of awareness, or of its lack? What difference is there that can be measured? In my opinion there is a particular, yet indescribable, mutual recognition that takes place when I encounter another sentient being. I recognize their self-awareness with my own self-awareness. Think of it as the equivalent of a “network handshake” that occurs at a fundamental level between entities that are actually aware.

How is this recognition possible? Perhaps it is because awareness, being inherently self-aware, is also inherently capable of recognizing awareness when it encounters it.

On another front, I actually have my doubts that any AI will ever be as intelligent and interactive as a human sentient being. In particular, I think this is not merely a matter of the difficulty of building such a complex computer, but rather a fundamental difference between machine cognition and the cognition of a sentient being.

A human sentient being’s mind transcends computation. Sentient cognition transcends the limits of formal computation; it is not equivalent to a Turing Machine, it is much more powerful than that. We humans are not formal systems; we are not Turing Machines. Humans can think in a way that no computer will ever be able to match, let alone imitate convincingly. We are able to transcend our own logics, our own belief systems, our own programs; we are able to enter and break out of loops at will, to know infinities, to do completely irrational, spontaneous, and creative things. We are much closer to infinity than any finite-state automaton can ever be. We are simply not computers; although we can sometimes think like them, they cannot really think like us.

In any case, this may be “faith,” but for now at least I am quite certain that I am aware, and that other humans and animals are also aware, but that machines, plants, and other inanimate objects are not aware. I am certain that my awareness vastly transcends any machine intelligence that exists or ever will exist. I am certain that your awareness is just as transcendent as mine. Although I cannot prove to you that I am aware, or even that you are aware, I am willing to state as much on the basis of my own direct experience, and I know that if you take a moment to meditate on your own self-awareness, you will agree.

After all, we cannot prove the existence of space or time either; these are just ideas, and even physics has not explained their origins, nor can anyone detect them directly. Yet we both believe they exist, don’t we?

Now, if I claimed that a suitably complex computer simulation would someday suddenly contain real physical space and time, indistinguishable in any way from the physical space and time outside the simulation, you would probably disagree. You would say that the only “real” space-time is not in the computer but containing the computer, and that any space-time that appears within the computer simulation is but a lower-order imitation, nothing like the real space-time that contains the computer.

No simulation can ever be exactly the same as what it simulates, even if it is functionally similar or equivalent, for several reasons. On a purely informational basis, it should be obvious that if simulation B is within something else called A, then for B to be exactly the same as A it must contain A and B, and so on infinitely. Given a finite amount of space and time to work with, we simply cannot build anything like this; we cannot build a simulation that contains an exact simulation of itself without getting into an infinite regression. Beyond this, there is a difference in medium. In the case of machine intelligence the medium is physical space, time, and energy; that is what machine intelligence is made of. In the case of human awareness the medium is awareness itself, something at least as fundamental as space-time-energy, if not more so. Although human sentience can perform intelligent cognition, using a brain for example, it is not a computer and it is not made of space-time-energy. Human sentience goes beyond the limits of space-time-energy, and therefore beyond computers.

If someone builds a Turing Machine that simulates a Turing Machine simulating a Turing Machine, ad infinitum, the simulation will never even start, let alone be usable! As the saying goes, it’s Turtles All The Way Down! If you have finite space and time but an infinite initial condition, it takes forever simply to set up the simulation, let alone to compute it.
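The regress can be made concrete with a toy sketch: a simulator whose first setup step is to run a complete simulation of itself never finishes setting up. This is only a minimal illustration of the infinite-regress point, not a formal proof; in Python the attempt simply exhausts the call stack and raises a `RecursionError`:

```python
def simulate():
    """A simulator whose setup step requires simulating itself first.

    Each level of setup demands another complete level beneath it,
    so the top-level simulation never actually starts running:
    "Turtles All The Way Down."
    """
    return simulate()  # infinite regress: setup never completes

try:
    simulate()
except RecursionError:
    print("setup never completed: infinite regress")
```

In a real machine the regress is cut short by a finite stack; in principle, with unbounded memory, the setup phase would simply run forever, which is the essay’s point about finite systems and infinite initial conditions.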

This is the case with self-awareness as well: it is truly self-referential. No finite formal system can complete an infinitely self-referential process in finite time. We sentient beings can do this, however. Whenever we realize our own awareness directly, that is, whenever we ARE aware (as opposed to just representing this fact as a thought), we are being infinitely self-referential in finite time. That must mean we are either able to do an infinite amount of computing in a finite amount of time, or we are not computing at all. Perhaps self-awareness just happens instantly and inherently, rather than iteratively.

On a practical level as well, we can see that there is a difference between a simulated experience within a simulation and the actual reality it attempts to simulate, which exists outside the simulation. For example, suppose I make a computer simulation of chocolate and a simulated person who can eat the chocolate. Even though that simulated person tastes the simulated chocolate, they do not really taste chocolate at all; they have no actual experience of what chocolate really tastes like to beings in reality (beings outside the simulation).

Even if there are an infinite number of levels of simulation above the virtual reality we are in now, awareness is always ultimately beyond them all. It is the ultimate, highest level of reality; there is nothing beyond it.

Thus even an infinitely high-end computer simulation of awareness will be nothing like actual awareness, and will not convince a truly aware being that it is actually aware.

New Study: TV May Cause Autism

This study is strange. But plausible.

Today, Cornell University researchers are reporting what appears to be a statistically significant relationship between autism rates and television watching by children under the age of 3. The researchers studied autism incidence in California, Oregon, Pennsylvania, and Washington state. They found that as cable television became common in California and Pennsylvania beginning around 1980, childhood autism rose more in the counties that had cable than in the counties that did not. They further found that in all the Western states, the more time toddlers spent in front of the television, the more likely they were to exhibit symptoms of autism disorders.

From: Slate

Study: Woman in Coma Able to Respond With Thoughts

Wow…

A severely brain-damaged woman in an unresponsive, vegetative state showed clear signs of conscious awareness on brain imaging tests, researchers are reporting today, in a finding that could have far-reaching consequences for how unconscious patients are cared for and diagnosed.

In response to commands, the patient’s brain flared with activity, lighting the same language and planning regions that are active when healthy people hear the commands. Previous studies had found similar activity in partly conscious patients, who occasionally respond to commands, but never before in someone who was totally unresponsive.

This opens up a whole new range of possibilities. For example, what if a comatose patient could be fitted with a brain-activity sensor that let them think of certain things in order to trigger actions in their environment? Suppose the woman above could think of playing tennis, and that would turn the radio in her hospital room on or off. Similarly, if she thought about moving around her house, that could alert a nurse that she needed pain medication or needed to be repositioned. This could provide a way for comatose people to communicate with their caregivers and have some control over their environments. It might even be possible to teach them things to think about in order to indicate "yes" and "no" answers to questions, so someone could ask them about what they experience and they could answer. It might even be possible to teach them to communicate letters so they could spell out messages.
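As a purely illustrative sketch (no real brain-computer-interface hardware or API is assumed, and the thought labels and actions are hypothetical), the thought-to-action mapping imagined above might look like:

```python
# Hypothetical mapping from imagined tasks (assumed detectable as
# distinct brain-activity patterns, per the study) to environmental
# actions and to yes/no answers. All names here are illustrative.
THOUGHT_ACTIONS = {
    "imagine_playing_tennis": "toggle_radio",
    "imagine_moving_through_house": "call_nurse",
}

THOUGHT_ANSWERS = {
    "imagine_playing_tennis": "yes",
    "imagine_moving_through_house": "no",
}

def interpret(thought, mode="action"):
    """Translate a detected thought into an action or a yes/no answer.

    Returns None when the detected pattern is not one of the
    patient's trained thoughts.
    """
    table = THOUGHT_ACTIONS if mode == "action" else THOUGHT_ANSWERS
    return table.get(thought)
```

The design point is that only two reliably distinguishable thought patterns are needed to bootstrap both environmental control and a yes/no channel, and a yes/no channel is in turn enough to spell out arbitrary messages, letter by letter.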

This may overturn the whole premise that a comatose person has no conscious awareness or sensation. Perhaps they are much more aware than we thought, but simply unable to control their bodies in order to speak or move. If that is the case, they must be desperate for a way to communicate, and this could be the answer.