Bottlenose Announces Free Live Visualization of Global Social Trends

Bottlenose has just launched something very cool: a free version of its live visualization of trends in the Twitter firehose. Check it out at http://sonar.bottlenose.com and get your own embed for any topic. This is the future of real-time marketing. And by the way, it’s also an awesome visualization of the global mind as it thinks collective thoughts.

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence

This article is part of a series of articles about the Bottlenose Public Beta launch.

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)

A New Window Into the Collective Consciousness

Bottlenose offers a new window into what the world is paying attention to right now, globally and locally.

We show you a live streaming view of what the crowd is thinking, sharing and talking about. We bring you trends, as they happen. That means the photos, videos and messages that matter most. That means suggested reading, and visualizations that cut through the clutter.

The center of online attention and gravity has shifted from the Web to social networks like Twitter, Facebook and Google+. Bottlenose operates across all of them, in one place, and provides an integrated view of what’s happening.

The media also attempts to provide a reflection of what’s happening in the world, but the media is slow, and it’s not always objective. Bottlenose doesn’t replace the media — at least not the role of the writer. But it might do a better job of editing or curating in some cases, because it objectively measures the crowd — we don’t decide what to feature, we don’t decide what leads. The crowd does.

Other services in the past, like Digg for example, have helped pioneer this approach. But we’ve taken it further — in Digg people had to manually vote. In Bottlenose we simply measure what people say, and what they share, on public social networks.

Bottlenose is the best tool for people who want to be in the know, and the first to know. Bottlenose brings a new awareness of what’s trending online, and in the world, and how those trends impact us all.

We’ve made the Bottlenose home page into a simple Google-like query field, and nothing more. Results pages drop you into the app itself for further exploration and filtering. Except you don’t just get a long list of results, the way you do on Google.

Instead, you get an at-a-glance start page, a full-fledged newspaper, a beautiful photo gallery, a lean-back home theater, a visual map of the surrounding terrain, a police scanner, and Sonar — an off-road vehicle so that you can drive around and see what’s trending in networks as you please. We’ve made the conversation visual.

Each of these individual experiences is an app on top of the Bottlenose StreamOS platform, and each is a unique way of looking at sets and subsets of streams. You can switch between views effortlessly, and you can save anything for persistent use.

Discovery, we’ve found from user behavior, has been the entry point and the connective tissue for the rest of the Bottlenose experience all along. Our users have been asking for a better discovery experience, just as Twitter users have been asking for the same.

The new stuff you’ll see today was one of the most difficult pieces for us to build, computer-science-wise. It is a true technical achievement by our engineering team.

In many ways it’s also what we’ve been working towards all along. We’re really close now to the vision we held for Bottlenose at the very beginning, and the product we knew we’d achieve over time.

The Theory Behind It: How to Build a Smarter Global Brain

If Twitter, Facebook, Google+ and other social networks are the conduits for what the planet is thinking, then Bottlenose is a map of what the planet is actually paying attention to right now. Our mission is to “organize the world’s attention.” And ultimately I think by doing this we can help make the world a smarter place. At the end of the day, that’s what gets me excited in life.

After many years of thinking about this, I’ve come to the conclusion that the key to higher levels of collective intelligence is not making each person smarter, and it’s not some kind of Queen Bee machine up in the sky that tells us all what to do and runs the human hive. It’s not some fancy kind of groupware either. And it’s not the total loss of individuality into a Borg-like collective either.

I think that better collective intelligence really comes down to enabling better collective consciousness. The more conscious we can be of who we are collectively, and what we think, and what we are doing, the smarter we can actually be together, of our own free will, as individuals. This is a bottom-up approach to collective consciousness.

So how might we make this happen?

For the moment, let’s not try to figure out what consciousness really is. We don’t know, and we probably never will, but for this adventure we don’t need to. We don’t even need to synthesize it.

Collective consciousness is not a new form of consciousness, rather, it’s a new way to channel the consciousness that’s already there — in us. All we need to do is find a better way to organize it… or rather, to enable it to self-organize emergently.

What does consciousness actually do anyway?

Consciousness senses the internal and external world, and maintains a model of what it finds — a model of the state of the internal and external world that also contains a very rich model of “self” within it.

This self construct has an identity, thoughts, beliefs, emotions, feelings, goals, priorities, and a focus of attention.

If you look for it, it turns out there isn’t actually anything there you can find except information — the “self” is really just a complex information construct.

This “self” is not really who we are, it’s just a construct, a thought really — and it’s not consciousness either. Whatever is aware is aware of the self, so the self is just a construct like any other object of thought.

So given that this “self” is a conceptual object, not some mystical thing that we can’t ever understand, we should be able to model it, and make something that simulates it. And in fact we can.

In fact, we can already do this in a primitive way for artificially intelligent computer programs and robots.

But what’s really interesting to me is that we can also do it for large groups of people. This is a big paradigm shift – a leap. Something revolutionary, really. If we can do it.

But how could we provide something like a self for groups, or for the planet as a whole? What would it be like?

Actually, there is already a pretty good proxy for this and it’s been around for a long time. It’s the media.

The Media is a Mirror

The media senses who we are and what we’re doing and it builds a representation — a mirror – in the form of reports, photos, articles, and stats about the state of the world. The media reflects who we are back to us. Or at least it reflects who it thinks we are…

It turns out it’s not a very accurate mirror. But since we don’t have anything better, most of us believe what we see in the media and internalize it as truth.

Even if we try not to, it’s just impossible to avoid the media that bombards us from everywhere all the time. Nobody is really separate from this; we’re all kind of stewing in a media soup, whether we like it or not.

And when we look at the media and we see stories – stories about the world, about people we know, people we don’t know, places we live in, and other places, and events – we can’t help but absorb them. We don’t have first-hand knowledge of those things, and so we take on faith what the media shows us.

We form our own internal stories that correspond to the stories we see in the media. And then, based on all these stories, we form beliefs about the world, ourselves and other people – and then those beliefs shape our behavior.

And there’s the rub. If the media gives us an inaccurate picture of reality, or a partially accurate one, and then we internalize it, it then conditions our actions. And so our actions are based on incomplete or incorrect information. How can we make good decisions if we don’t have good information to base them on?

The media used to be about objective reporting, and there are still those in the business who continue that tradition. But real journalists — the kind who would literally give their lives for the truth — are fewer and fewer. The noble art of journalism is falling prey, like everything else, to commercial interests.

There are still lots of great journalists and editors, but there are fewer and fewer great media companies. And fewer rules and standards too. To compete in today’s media mix it seems they have to stoop to the level of the lowest common denominator and there’s always a new low to achieve when you take that path.

Because the media is driven by profit, stories that get eyeballs get prioritized, and the less sensational but often more statistically representative stories don’t get written, or don’t make it onto the front page. There is even a saying in the TV news biz that “If it bleeds, it leads.”

Look at the news — it’s just filled with horrors. But that’s not an accurate depiction of the world. Crimes, for example, don’t happen all the time, everywhere, to everyone – statistically they are quite unlikely and rare – yet so much news is devoted to crime. It’s not an accurate portrayal of what’s really happening for most people, most of the time.

I’m not saying the news shouldn’t report crime, or show scary bad things. I’m just pointing out that the news is increasingly about sensationalism, fear, doubt, uncertainty, violence, hatred, crime, and that is not the whole truth. But it sells.

The problem is not that these things are reported — I am not advocating for censorship in any way. The problem is the media game, and the profit motives that drive it. Media companies have to compete to survive, and that means they have to play hardball and get dirty.

Unfortunately the result is that the media shows us stories that do not really reflect the world we live in, or who we are, or what we think, accurately – these stories increasingly reflect the extremes, not the enormous middle of the bell curve.

But since the media functions as our de facto collective consciousness, and it’s filled with these images and stories, we cannot help but absorb them and believe them, and become like them.

But what if we could provide a new form of media, a more accurate reflection of the world, of who we are and what we are doing and thinking? A more democratic process, where anyone could participate and report on what they see.

What if in this new form of media ALL the stories are there, not just some of them, and they compete for attention on a level playing field?

And what if all the stories can compete and spread on their merits, not because some professional editor, or publisher, or advertiser says they should or should not be published?

Yes this is possible.

It’s happening now.

It’s social media in fact.

But for social media to really do a better job than the mainstream media, we need a way to organize and reflect it back to people at a higher level.

That’s where curation comes in. But manual curation is just not scalable to the vast number of messages flowing through social networks. It has to be automated, yet not lose its human element.

That’s what Bottlenose is doing, essentially.

Making a Better Mirror

To provide a better form of collective consciousness, you need a measurement system that can measure and reflect what people are REALLY thinking about and paying attention to in real-time.

It has to take a big data approach – it has to be about measurement. Let the opinions come from the people, not editors.

This new media has to be as free of bias as possible. It should simply measure and reflect collective attention. It should report the sentiment that is actually there, in people’s messages and posts.

Before the Internet and social networks, this was just not possible. But today we can actually attempt it. And that is what we’re doing with Bottlenose.

But this is just a first step. We’re dipping our toe in the water here. What we’re doing with Bottlenose today is only the beginning of this process. And I think it will look primitive compared to what we may evolve in years to come. Still it’s a start.

You can call this approach mass-scale social media listening and analytics, or trend detection, or social search and discovery. But it’s also a new form of media, or rather a new form of curating the media and reflecting the world back to people.

Bottlenose measures what the crowd is thinking, reading, looking at, feeling and doing in real-time, and coalesces what’s happening across social networks into a living map of the collective consciousness that anyone can understand. It’s a living map of the global brain.

Bottlenose wants to be the closest you can get to the Now, to being in the zone, in the moment. The Now is where everything actually happens. It’s the most important time period in fact. And our civilization is increasingly now-centric, for better or for worse.

Web search feels too much like research. It’s about the past, not the present. You’re looking for something lost, or old, or already finished — fleeting. Web search only finds Web pages, and the Web is slow… it takes time to make pages, and time for them to be found by search engines.

On the other hand, discovery in Bottlenose is about the present — it’s not research, it’s discovery. It’s not about memory, it’s about consciousness.

It’s more like media — a live, flowing view of what the world is actually paying attention to now, around any topic.

Collective intelligence is theoretically made more possible by real-time protocols like Twitter. But in practice, keeping up with existing social networks has become a chore, and not drowning is a real concern. Raw data is not consciousness. It’s noise. And that’s why we so often feel overwhelmed by social media, instead of emboldened by it.

But what if you could flip the signal-to-noise ratio? What if social media could be more like actual media … meaning it would be more digestible, curated, organized, consumable?

What if you could have an experience that is built on following your intuition, and living this large-scale world to the fullest?

What if this could make groups smarter as they get larger, instead of dumber?

Why does group IQ so often seem inversely proportional to group size? The larger groups get, the dumber and more dysfunctional they become. This has been a fundamental obstacle for humanity for millennia.

Why can’t groups (including communities, enterprises, even whole societies) get smarter as they get larger instead of dumber? Isn’t it time we evolve past this problem? Isn’t this really what the promise of the Internet and social media is all about? I think so.

And what if there was a form of media that could help you react faster, and smarter, to what is going on around you as it happens, just like in real life?

And what if it could even deliver on the compelling original vision of cyberspace as a place you could see and travel through?

What about getting back to the visceral, the physical?

Consciousness is interpretive, dynamic, and self-reflective. Social media should be too.

This is the fundamental idea I have been working on in various ways for almost a decade. As I have written many times, the global brain is about to wake up and I want to help.

By giving the world a better self-representation of what it is paying attention to right now, we are trying to increase the clock rate and resolution of collective consciousness.

By making this reflection more accurate, richer, and faster, and then making it available to everyone, we may help catalyze the evolution of higher levels of collective intelligence.

All you really need is a better mirror. A mirror big enough for large groups of people to look into and see, together, what they are collectively paying attention to. Given a clearer picture of their own state and activity, groups can adapt to themselves more intelligently.

Everyone looks in the collective mirror and adjusts their own behavior independently — there is no top-down control — but you get emergent self-organizing intelligent collective behavior as a result. The system as a whole gets smarter. So the better the mirror, the smarter we become, individually and collectively.

If the mirror is really fast, really good, really high res, and really accurate and objective – it can give groups an extremely important, missing piece: Collective consciousness that everyone can share.

We need collective consciousness that exists outside of any one person, outside of any one perspective or organization’s agenda, and not merely in the parts (the individuals) either. Instead, this new level of collective consciousness should be coalesced into a new place, a new layer, where it exists independently of the parts.

It’s not merely the sum of the parts, it’s actually greater than the sum – it’s a new level, a new layer, with new information in it. It’s a new whole that transcends just the parts on their own. That’s the big missing piece that will make this planet smarter, I think.

We need this yesterday. Why? Because in fact collectives — groups, communities, organizations, nations — are the units of change on this planet. Not individuals.

Collectives make decisions, and usually these decisions are sub-optimal. That’s dangerous. Most of the problems we’ve faced and continue to face as a species come down to large groups doing stupid things, mainly due to not having accurate information about the world or themselves. This is, ultimately, an engineering problem.

We should fix this, if we can.

I believe that the Internet is an evolving planetary nervous system, and it’s here to make us smarter. But it’s going to take time. Today it’s not very smart. But it’s evolving fast.

Higher layers of knowledge and intelligence are emerging in this medium, like higher layers of the cerebral cortex, connecting everything together ever more intelligently.

And we want to help make it even smarter, even faster, by providing something that functions like self-consciousness to it.

Now I don’t claim that what we’re making with Bottlenose is the same as actual consciousness — real consciousness is, in my opinion, a cosmic mystery like the origin of space and time. We’ll probably never understand it. I hope we never do, because I want there to be mystery and wonder in life. I’m confident there always will be.

But I think we can enable something on a collective scale, that is at least similar, functionally, to the role of self-consciousness in the brain — something that reflects our own state back to us as a whole all the time.

After all, the brain is a massive collective of tens of billions of neurons and trillions of connections that are not themselves conscious or even intelligent – and yet it forms a collective self and reacts to itself intelligently.

And this feedback loop – and the quality of the reflection it is based on – is really the key to collective intelligence, in the brain, and for organizations and the planet.

Collective intelligence is an emergent phenomenon; it’s not something to program or control. All you need to do to enable it and make it smarter is give groups and communities better quality feedback about themselves. Then they get smarter on their own, simply by reacting to that feedback.

Collective intelligence and collective consciousness are, at the end of the day, a feedback loop. And we’re trying to make that feedback loop better.

Bottlenose is a new way to curate the media, a new form of media in which anyone can participate but the crowd is the editor. It’s truly social media.

This is an exciting idea to me. It’s what I think social media is for and how it could really help us.

Until now people have had only the mainstream, top-down, profit-driven media to look to. But by simply measuring everything that flows through social networks in real time, and reflecting a high-level view of that back to everyone, it’s possible to evolve a better form of media.

It’s time for a bottom-up, collectively written and curated form of media that more accurately and inclusively reflects us to ourselves.

Concluding Thoughts

I think Bottlenose has the potential to become the giant cultural mirror we need.

Instead of editors and media empires sourcing and deciding what leads, the crowd is the editor, the crowd is the camera crew, and the crowd decides what’s important. Bottlenose simply measures the crowd and reflects it back to itself.

When you look into this real-time cultural mirror that is Bottlenose, you can see what the community around any topic is actually paying attention to right now. And I believe that as we improve it, and if it becomes widely used, it could facilitate smarter collective intelligence on a broader scale.

The world now operates at a ferocious pace and search engines are not keeping up. We’re proud to be launching a truly present-tense experience. Social messages are the best indicators today of what’s actually important, on the Web, and in the world.

We hope to show you an endlessly interesting, live train of global thought. The first evolution of the Stream has run its course and now it’s time to start making sense of it on a higher level. It’s time to start making it smart.

With the new Bottlenose, you can see, and be a part of, the world’s collective mind in a new and smarter way. That is ultimately why Bottlenose is worth participating in.

Keep Reading

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)


A New Approach to Artificial Intelligence: Non-Computational AI

I was recently contacted by a computer scientist, Sergey Bulanov, who has been working quietly for 20 years on a new approach to artificial intelligence. It’s a pretty interesting and novel approach, and I would like to see what others think about it.

From what I understand, the essence of Sergey’s approach is a new form of computer reasoning that implements “non-computational” networks of logical operations to solve problems.

It is “non-computational” in the sense that it is not an expert system or traditional computer program — rather it is a network of simple operators that compute locally and interact with one another, to emergently arrive at results, reflected by an overall state of the system at the end of the process. This approach reminds me of “connectionist” approaches to AI, such as neural networks and cellular automata.

Sergey believes that his approach could be an important step towards making truly humanlike artificial intelligence in the future. His point is that the brain is a non-computational system, and might in fact use some of these principles.

Sergey calls his approach “Artificial Consciousness,” but I don’t think the word “consciousness” adds value here – and it may even distract from the core idea. But, for the moment, let’s not argue about terminology — his theory is very interesting.

Sergey states that he has used this approach to solve every logic problem in Raymond Smullyan’s book, The Lady or the Tiger?. For more info, read Sergey’s overview of his theory. You can read more of his writings on this theory here.

You can also view a working simulation of the system in operation here.

I can’t explain it very well, so here is Sergey’s explanation to me, from our correspondence (please note, he is not a native English speaker, so I have added some corrections to his letter to improve readability):

1.

I consider the present version of the system, which only solves logical tasks, to not be a truly “intelligent” system. This system is only a starting point for my investigations. It only looks intelligent because it is solving tasks that are hard for people. The idea of how to solve logical problems in this way came to me accidentally while thinking about the book The Lady or the Tiger? by Raymond Smullyan. In my classification of AI, a system for solving logical puzzles appears to be a kind of low-complexity system (according to my theory). This present version of the system is just a step along the way towards more sophisticated AI.

2.

Despite my low valuation of systems for logical solving, such systems can at least be amusing for people in practical use. And such a system can be the starting point for thinking about more sophisticated “non-computational” systems. The theory of such systems is well developed for the computational case, where such a system is called a SAT solver (for the Boolean satisfiability problem).

The essence of the problem is as follows. Suppose we have a logical expression (in our case, the logical expression reflects the statement of a puzzle), and we consider that this expression has the value “TRUE” (in our case, the formulation of the puzzle is true). Then we must find the arguments of this expression which satisfy it (which make the expression “TRUE”). This problem is NP-complete; in the worst case, it requires full enumeration of all possible arguments. The SAT approach aims to reduce the necessary enumeration. The methods of SAT are well developed, but I did not know about this at the beginning of my work. Moreover, from the beginning I set out to create a non-computational approach.

3.

My idea was very simple. Assume we have a logical function, “AND,” with two arguments. This function has the output value “TRUE” only when both of its arguments are “TRUE”. So if we know the value of the function’s output, we can sometimes (though not in every case) infer the values of its inputs.

The formulation of the puzzle is expressed as a logical expression. The expression is represented in the form of a (mathematical) tree. You can see this tree in the video on my website. The nodes of the tree are logical functions (AND, OR and some other types). These nodes are represented as balls in the video. Each ball has one output link and several input links. The state of a function can be TRUE (red ball), FALSE (blue ball) or UNKNOWN (grey ball). From the beginning, the logical tree has some nodes with pre-determined initial values (according to the formulation of the puzzle). These values are assigned not only at the top or the bottom of the tree, but also in the middle of it.

After the system starts, each ball (each logical function, i.e. each node) can set the states of the adjacent nodes, and each ball begins to continuously correct its state depending on the states of the nearby balls. For example, if one of the balls carries an AND function with three inputs (three arguments) and the ball above it sends it the value “TRUE”, then this ball will assign the value “TRUE” to each of its three inputs. In this way, different kinds of information propagate through the tree until a steady state is reached.

This information can keep changing until a steady state is reached, asynchronously and even without clocking (though I have not proved this). According to the theory of NP-completeness, a solution cannot be reached unconditionally (as it can with linear or differential equations). After some time, the system reaches an unresolvable state, and further steps are needed to reach the complete solution. The system can be knocked out of each of these unresolvable states by assuming a hypothesis on one of the unresolved balls. The system can then reach a global contradiction state, or it can reach a global solution. If the system reaches neither a global solution nor a global contradiction, we must add a further hypothesis on one of the other balls. In case of a contradiction, we must change one of the hypotheses (typically the last one).

So the system can reach the solution (or set of solutions) through the iterations between assignments of hypotheses. This solving can be achieved without an explicit algorithm, and it can be achieved on a non-computational structure, thousands or millions of times faster than on computational devices.

4.

These results appear to be unusual and promising for the AI domain. The importance of these results is in demonstrating the possibility of non-computational solving of complicated tasks. I hope this system can attract people’s attention to developing non-computational cognitive systems millions of times more powerful than the human brain.

But unfortunately this kind of system is not yet a true AI system. Below are some explanations of why.

5.

A full AI system can’t be based on a traditional (simple) logical basis. The system presented on our website can solve some kinds of logical tasks. But it can’t discuss these tasks with humans. It can’t explain how it solved them. It can’t (and never could in the future) understand natural written text. And it couldn’t perform most of the human brain’s functions. One of the most fundamental reasons is that a network of logical functions (as I represent it) can only solve logical tasks; it can’t grow by its own reasoning. These are strong reasons to construct a completely different kind of AI system based on different principles. But creating a more complicated system would be hard without understanding the principles and problems of a simpler one. Logical systems such as mine can be a starting point on the way to more powerful systems that apply my non-computational approach.

6.

I came to the idea that a really powerful system must be based on the idea of mathematical sets. I found a way to create a network based on sets that can grow, and a way for such a network to solve different tasks. The range of these tasks is much greater than just solving mathematical puzzles. I am working on this presently.

7.

My idea of a chain of model tasks is not an engine of the system, but a method of research. This idea is very close to a statement by the philosopher Bertrand Russell:

“The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it”.

That is my approach. For example, I made an expression of the idea of logical functions without logical notions, and in this way I found unusual ideas for my novel system.

Here is another example of my principle. Assume we take the simplest possible question, so simple that its solution is almost inevitable. Then, if that solution is of high quality, its principles can be applied to the next, more complicated question. By moving from simple tasks to more complicated ones, we can develop our theory.
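To make the mechanism Sergey describes in point 3 concrete, here is a minimal sketch of local constraint propagation on a gate tree (my own toy illustration in TypeScript, not Sergey’s code; his system also adds hypotheses and backtracking when propagation stalls, which this sketch omits):

```typescript
// Toy constraint propagation on a tree of logic gates (illustration only).
type Val = boolean | undefined; // undefined = UNKNOWN (the grey ball)

interface Gate {
  kind: "AND" | "OR" | "VAR";
  value: Val;
  inputs: Gate[];
}

const leaf = (): Gate => ({ kind: "VAR", value: undefined, inputs: [] });
const and = (...inputs: Gate[]): Gate => ({ kind: "AND", value: undefined, inputs });
const or = (...inputs: Gate[]): Gate => ({ kind: "OR", value: undefined, inputs });

// One local correction step for a single gate; returns true if anything changed.
function step(g: Gate): boolean {
  let changed = false;
  const assign = (n: Gate, v: boolean) => {
    if (n.value === undefined) { n.value = v; changed = true; }
  };
  const unknown = g.inputs.filter(i => i.value === undefined);
  const known = (v: boolean) => g.inputs.filter(i => i.value === v).length;
  if (g.kind === "AND") {
    if (g.inputs.length > 0 && known(true) === g.inputs.length) assign(g, true);
    if (known(false) > 0) assign(g, false);
    if (g.value === true) g.inputs.forEach(i => assign(i, true)); // downward
    if (g.value === false && unknown.length === 1 && known(true) === g.inputs.length - 1)
      assign(unknown[0], false); // last unexplained input must be FALSE
  } else if (g.kind === "OR") {
    if (known(true) > 0) assign(g, true);
    if (g.inputs.length > 0 && known(false) === g.inputs.length) assign(g, false);
    if (g.value === false) g.inputs.forEach(i => assign(i, false)); // downward
    if (g.value === true && unknown.length === 1 && known(false) === g.inputs.length - 1)
      assign(unknown[0], true); // last unexplained input must be TRUE
  }
  return changed;
}

// Keep applying local steps until the network reaches a steady state.
function propagate(gates: Gate[]): void {
  while (gates.map(step).some(Boolean)) { /* iterate until no change */ }
}

// Example: assert that (a AND b) is TRUE; propagation forces a = b = TRUE.
const a = leaf(), b = leaf();
const root = and(a, b);
root.value = true; // the puzzle statement is asserted to be TRUE
propagate([root, a, b]);
console.log(a.value, b.value); // -> true true
```

Note that each gate updates using only the states of its immediate neighbors; there is no global algorithm walking the tree, which is the local, “non-computational” character Sergey emphasizes.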

I hope Sergey’s 20 years of thinking in this direction will prove interesting, and perhaps even fruitful, for the field of artificial intelligence. It does appear to me to be a novel and potentially promising vein of innovation.

Best of luck to Sergey and his collaborators. I’m always happy to see really original thinking in the field of AI.

Keeping Up With the Stream — New Problems and Solutions

This is Part III of a series of articles on the new era of the Stream, a new phase of the Web.

In Part I, The Message is the Medium, I explored the shift in focus on the Web from documents to messages.

In Part II, Drowning in the Stream, we dove deep into some of the key challenges the Stream brings with it.

Here in Part III, we will discuss new challenges and solutions for keeping up with streams as they become increasingly noisy and fast-moving.


Getting Attention in Streams

Today if you post a message to Twitter, you have a very small chance of that message getting attention. What’s the solution?

You can do social SEO and try to come up with better, more attention-grabbing, search-engine-attracting headlines. You can try to schedule your posts to appear at optimal times of day. You can even try posting the same thing many times a day to increase the chances of it being seen.

This last tactic is called “Repeat Posting” and it’s soon going to be clogging up all our streams with duplicate messages. Why am I so sure this is going to happen? Because we are in an arms race for attention. In a room where everyone is talking, everyone starts talking louder, and soon everyone is shouting.

Today when you post a message to Twitter, the chances of getting anyone’s attention are low, and they are getting lower. If you have a lot of followers, the chances are a little better that at least some of them may be looking at their stream at precisely the time you post. But even with a lot of followers, the odds are that most of your followers won’t be online at that precise moment, and so they’ll miss it.

Scheduled Posting

But it turns out there are optimal times of day to post, when more of your followers are likely to be looking at their streams. A new category of apps, typified by Buffer, has emerged to help you schedule your Tweets to post at such optimal times.

Using apps like Buffer, you can get more attention for your Tweets, but this is only a temporary solution, because the exponential growth of the Stream means that soon even posting a message at an optimal time will not be enough to get it in front of everyone who should see it.
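As a minimal sketch of the idea behind such schedulers (my illustration, not Buffer’s actual algorithm): bucket your followers’ recent engagement by hour of day, then queue posts into the busiest slots.

```typescript
// Sketch of optimal-time scheduling (illustrative, not Buffer's actual code):
// bucket follower activity by hour of day, then queue posts into peak hours.

// eventTimes: timestamps (ms) of recent engagement events from your followers.
function peakHours(eventTimes: number[], slots = 3): number[] {
  const byHour = new Array(24).fill(0);
  for (const t of eventTimes) byHour[new Date(t).getUTCHours()]++;
  return byHour
    .map((count, hour) => ({ hour, count }))
    .sort((x, y) => y.count - x.count)
    .slice(0, slots)
    .map(s => s.hour);
}

// Assign queued posts round-robin across the best posting hours
// (assumes some follower activity data is available).
function schedule(posts: string[], eventTimes: number[]): { post: string; hour: number }[] {
  const hours = peakHours(eventTimes);
  return posts.map((post, i) => ({ post, hour: hours[i % hours.length] }));
}
```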

Repeat Posting

To really get noticed above the noise, you need your message to be available at more than one optimal time, for example many times a day, or even every hour.

To achieve this, instead of posting a message once at the optimal time per day, we may soon see utilities that automatically post the same message many times a day – maybe every hour – perhaps with slightly different wording of headlines, to increase the chances that people will see them. I call this “repeat posting” or “message rotation.”

Repeat posting tools may get so sophisticated that they will A/B test different headlines and wordings and times of day to see what gets the best clickthroughs and then optimize for those. These apps may even intelligently rotate a set of messages over several days, repeating them optimally until they squeeze out every drop of potential attention and traffic, much like ad servers and ad networks rotate ads today.
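To illustrate (a sketch of the kind of tool described above, not any actual product), such a rotation utility could treat headline variants as a multi-armed bandit: mostly repost the best-performing wording, and occasionally try the others.

```typescript
// Epsilon-greedy headline rotation (illustrative sketch only).
interface Variant { text: string; posts: number; clicks: number; }

function pickVariant(variants: Variant[], epsilon = 0.1): Variant {
  if (Math.random() < epsilon) {
    // Explore: occasionally try a random wording.
    return variants[Math.floor(Math.random() * variants.length)];
  }
  // Exploit: otherwise repost the wording with the best clickthrough so far.
  const rate = (v: Variant) => v.clicks / Math.max(v.posts, 1);
  return variants.reduce((best, v) => (rate(v) > rate(best) ? v : best));
}

// At each scheduled slot: const v = pickVariant(variants); post v.text,
// increment v.posts, and later add the observed clicks to v.clicks.
```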

But here’s the thing — as soon as anyone starts manually or automatically using repeat posting tactics, it will create an arms race – others will notice it, and compete for attention by doing the same thing. Soon everyone will have to post repeatedly to simply get noticed above the noise of all the other repeat posts.

This is exactly what happens when you are speaking in a crowded room. In a room full of people who are talking at once, some people start talking louder. Soon everyone is shouting and losing their voice at the same time.

This problem of everyone shouting at once is what is soon going to happen on Twitter and Facebook and other social networks. It’s already happening in some cases – more people are posting the same message more than once a day to get it noticed.

It’s inevitable that repeat posting behavior will increase, and when everyone starts doing it, our channels will become totally clogged with redundancy and noise. They will become unusable.

What’s the solution to this problem?

What to Do About Repeat Posting

One thing that is not the solution is to somehow create rules against repeat posting. That won’t work.

Another solution that won’t work is to attempt to detect and de-dupe repeats that occur. It’s hard to do this, and easy to create repeat posts that have different text and different links, to evade detection.

Another solution might be to recognize that repeat posting is inevitable, but to make the process smarter: whenever a repeat post happens, delete the previous copy, so that at any given time the message appears only once in the stream. At least this prevents people from seeing the same thing many times at once in a stream. But it still doesn’t solve the problem of people seeing messages come by that they’ve seen already.
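A minimal sketch of that smarter-repeat idea (my illustration; `post` and `deletePost` stand in for a hypothetical network API):

```typescript
// Keep only the latest copy of each rotated message in the stream.
const lastPostId = new Map<string, string>();

async function repost(
  messageKey: string,                        // stable key for the underlying message
  text: string,                              // the (possibly reworded) text to post
  post: (text: string) => Promise<string>,   // posts and returns the new post's id
  deletePost: (id: string) => Promise<void>, // deletes an earlier post by id
): Promise<void> {
  const previous = lastPostId.get(messageKey);
  if (previous !== undefined) await deletePost(previous); // remove the earlier copy
  lastPostId.set(messageKey, await post(text));
}
```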

A better solution is to create a new consumption experience for keeping up with streams, where relevant messages are actually surfaced to users, instead of simply falling below the fold and getting buried forever. This would help to ensure that people would see the messages that were intended for them, and that they really wanted to see.

If this worked well enough, there would be less reason to do scheduled posting, let alone repeat posting. You could post a message once, and there would be a much better chance of it being seen by your audience.

At Bottlenose, we’re working on exactly this issue in a number of ways. First of all, the app computes rich semantic metadata for messages in streams automatically, which makes it possible to filter them in many ways.

Bottlenose also computes the relevance of every message to every user, which enables ranking and sorting by relevancy, and the app provides smart automated assistants that can help to find and suggest relevant messages to users.

We’re only at the beginning of this and these features are still in early beta, but already we’re seeing significant productivity gains.

Fast-Moving Streams

As message volume increases exponentially, our streams are not just going to be noisier, they are going to move faster. When we look at any stream there will be more updates per minute – more new messages scrolling in – and this will further reduce the chances of any message getting noticed.

Streams will begin to update so often they will literally move all the time. But how do you read, let alone keep up with, something that’s always moving?

Today, if you follow a Twitter stream for a breaking news story, such as a natural disaster like the tsunami in Japan, or the death of Steve Jobs, you can see messages scrolling in, in real time, every second.

In fact, when Steve Jobs died, Twitter hit a record peak of around 50,000 Tweets per minute. If you were following that topic on Twitter at that time, the number of new messages pouring in was impossible to keep up with.

Twitter has put together a nice infographic showing the highest Tweets Per Second events of 2011.

During such breaking news events, if you are looking at a stream for the topic, there is not even time to read a message before it has scrolled below the fold and been replaced by a bunch of more recent messages. The stream moves too fast to even read it.

But this doesn’t just happen during breaking news events. If you simply follow a lot of people and news sources, you will see that you start getting a lot of new messages every few minutes.

In fact, the more people and news sources, saved searches, and lists that you follow, the higher the chances are that at any given moment there are going to be many new messages for you.

Even if you just follow a few hundred people, the chances are pretty high that you are getting a number of new messages in Twitter and Facebook every minute. That’s way more messages than you get in email.

And even if you don’t follow a lot of people and news sources – even if you diligently prune your network, unfollow people, and screen out streams you don’t want, the mere exponential growth of message volume in coming years is soon going to catch up with you. Your streams are going to start moving faster.

But are there any ways to make it easier to keep up with these “whitewater streams?”

Scrolling is Not the Answer

One option is to just make people scroll. Since the 1990s, UX designers have been debating the issue of scrolling. Scrolling works, but it doesn’t work well when the scrolling is endless, or nearly endless. The longer the page, the lower the percentage of users who will scroll all the way down.

This becomes especially problematic if users are asked to scroll in long pages – for example infinite streams of messages going back from the present to the past (like Twitter, above). The more messages in the stream, the less attention those messages that are lower in the stream, below the fold, will get.

But that’s just the beginning of the problem. When a stream is not only long, but it’s also moving and changing all the time, it becomes much less productive to scroll. As you scroll down new stuff is coming in above you, so then you have to scroll up again, and then down again. It’s very confusing.

In long streams that are also changing constantly it is likely that engagement statistics will be very different than for scrolling down static pages. I think it’s likely engagement will be much lower, the farther down such dynamic streams one goes.

Pausing the Scroll is Not the Answer

Some apps handle this problem of streams moving out from under you by pausing auto-scrolling as you read – they simply notify you that there are new messages above whatever you are looking at. You can then click to expand the stream above and see the new messages. Effectively they make dynamic streams behave as if they are not dynamic, until you are ready to see the updates.

This at least enables you to read without the stream moving out from under you. It’s less disorienting that way. But in fast moving streams where there are constantly new updates coming in, you have to click on the “new posts above” notification frequently, and it gets tedious.

For example, consider Twitter, on a search for Instagram, a while after the news of their acquisition by Facebook. After waiting only a few seconds, there are 20 new tweets already. If you click the bar that says “20 new Tweets” they expand. But by the time you’ve done that and started reading them, there are 20 more.


Simply clicking to read “20 new tweets” again and again is tedious. And furthermore, it doesn’t really help users cope with the overwhelming number of messages and change in busy streams.

The problem here is that streams are starting to move faster than we can read, even faster than we can click. How do you keep up with this kind of change?

Tickers and Slideshows Are Helpful

Another possible solution to the problem of keeping up with moving streams is to make the streams become like news tickers, constantly updating and crawling by as new stuff comes in. Instead of trying to hide the movement of the stream, make it into a feature.

Some friends and I have tested this idea out in an iPad app we built for this purpose called StreamGlider. You can download StreamGlider and try it out for yourself.

StreamGlider shows streams in several different ways — including a ticker mode and a slideshow mode where streams advance on their own as new messages arrive.


The Power of Visualization

Another approach to keeping up with fast moving streams is to use visualization, like we’re doing in Bottlenose, with our Sonar feature. By visualizing what is going on in a stream you can provide a user with instant understanding of what is in the stream and what is important and potentially interesting to them, without requiring them to scroll, skim or read everything first.

Sonar reads all the messages in any stream, applies natural language and semantic analysis to them, detects and measures emerging topics, and then visualizes them in realtime as the stream changes.

It shows you what is going on in the stream – in that pile of messages you don’t have time to scroll through and read. As more messages come in, Sonar updates in realtime to show you what’s new.
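To give a feel for what detecting emerging topics involves (a toy sketch, not Bottlenose’s actual algorithm), one simple approach is to compare each term’s frequency in the most recent window against its longer-term baseline, and surface the terms that spike:

```typescript
// Toy emerging-topic detector: a term "trends" when its rate in the recent
// window far exceeds its baseline rate in the older part of the stream.
interface Message { text: string; time: number; } // time in ms

function termRates(messages: Message[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const m of messages) {
    for (const term of m.text.toLowerCase().match(/[#@]?[a-z]\w+/g) ?? []) {
      counts.set(term, (counts.get(term) ?? 0) + 1);
    }
  }
  const total = Math.max(messages.length, 1);
  return new Map([...counts].map(([term, c]) => [term, c / total] as [string, number]));
}

function emergingTopics(stream: Message[], now: number, windowMs = 60_000): string[] {
  const recent = termRates(stream.filter(m => now - m.time <= windowMs));
  const baseline = termRates(stream.filter(m => now - m.time > windowMs));
  return [...recent]
    .filter(([term, rate]) => rate > 3 * (baseline.get(term) ?? 0.001))
    .sort((x, y) => y[1] - x[1])
    .map(([term]) => term);
}
```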

You can click on any trend in Sonar that interests you, to quickly zoom into just the messages that relate to it.

The beauty of this approach is that it avoids scrolling until you absolutely want to. Instead of scrolling, or even skimming the messages in a stream, you just look at Sonar and see if there are any trends you care about. If there are, you click to zoom in and see only those messages. It’s extremely effective and productive.

Sonar is just one of many visualizations that could help with keeping up with change in huge streams. But it’s also only one piece of the solution. Another key piece of the solution is finding things in streams.

Finding Things in Streams

Above, we discussed problems and solutions related to keeping up with streams that are full of noise and constantly changing. Now let’s discuss another set of problems and solutions related to finding things in streams.

Filtering the Stream

For a visualization like Sonar to be effective, you need the ability to filter the stream for the sources and messages you want, so there isn’t too much noise in the visualization. The ability to filter the stream for just those subsets of messages you actually care about is going to be absolutely essential in coming years.

Streams are going to become increasingly filled with noise. But another way to think about noisy streams is that they are really just lots of less-noisy streams multiplexed together.

What we need is a way to intelligently and automatically de-multiplex them back into their component sub-streams.

For example, take the stream of all the messages you receive from Twitter and Facebook combined. That’s probably a pretty noisy stream. It’s hard to read, hard to keep up with, and quickly becomes a drag.

In Bottlenose you can automatically de-multiplex your streams into a bunch of sub-streams that are easier to manage. You can then read these, or view them via Sonar, to see what’s going on at a glance.

For example, you can instantly create sub-streams – which are really just filters on your stream of everything. You might make one for just messages by people you like, another for messages by influencers, another for news articles related to your interests, another for messages that are trending, another for photos and videos posted by your friends, etc.
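A minimal sketch of this de-multiplexing idea (illustrative; the field names here are made up): each sub-stream is just a named predicate applied to the merged stream.

```typescript
// Sub-streams as named predicates over one merged stream (field names invented).
interface Msg {
  author: string;
  text: string;
  isInfluencer: boolean;
  hasMedia: boolean;
  shareCount: number;
}

type SubStream = { name: string; test: (m: Msg) => boolean };

const subStreams: SubStream[] = [
  { name: "Influencers",     test: m => m.isInfluencer },
  { name: "Trending",        test: m => m.shareCount > 100 },
  { name: "Photos & Videos", test: m => m.hasMedia },
];

// De-multiplex the merged stream back into its component sub-streams.
function demux(merged: Msg[]): Map<string, Msg[]> {
  return new Map<string, Msg[]>(
    subStreams.map(s => [s.name, merged.filter(s.test)] as [string, Msg[]])
  );
}
```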

The ability to filter streams – to mix them and then unmix them – is going to be an essential tool for working with streams.

Searching the Stream

In the first article in this series we saw how online attention and traffic is shifting from search to social. Social streams are quickly becoming key drivers for how content on the Web is found. But how are things found in social streams? It turns out existing search engines, like Google, are not well-suited for searching in streams.

Existing algorithms for Web search do not work well for Streams. For example, consider Google’s PageRank algorithm.

In order to rank the relevancy of Web pages, PageRank needs a very rich link structure. It needs a Web of pages with lots of links between the documents. The link structure is used to determine which pages are the best for various topics. Effectively links are like votes – when pages about a topic link to other pages about that topic, they are effectively voting for or endorsing those pages.
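For reference, here is the standard PageRank recurrence, which makes that dependence on links explicit (d is the damping factor, typically 0.85; N is the total number of pages; B(p) is the set of pages linking to p; L(q) is the number of outbound links on page q):

```latex
PR(p) = \frac{1 - d}{N} + d \sum_{q \in B(p)} \frac{PR(q)}{L(q)}
```

When an item has no inbound links, the sum is empty and everything collapses toward the same baseline score, which is exactly why this approach tells you so little about messages.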

While PageRank may be ideal for figuring out what Web pages are best, it doesn’t help much for searching messages, because messages may have no links at all, or may be only very sparsely linked together. There isn’t enough data in individual messages to figure out much about them.

So how do you know if a given message is important? How do you figure out what messages in a stream actually matter?

When searching the stream, instead of finding everything, we need to NOT find the stuff we don’t want. We need to filter out the noise. And that requires new approaches to search. We’ve already discussed filtering above, and the ability to filter streams is a prerequisite for searching them intelligently. Beyond that, you need to be able to measure what is going on within streams, in order to detect emerging trends and influence.

The approach we’re taking in Bottlenose to solve this is a set of algorithms we call “StreamRank.” In StreamRank we analyze the series of messages in a stream to figure out what topics, people, links and messages are trending over time.

We also analyze the reputations or influence of message authors, and the amount of response (such as retweets or replies or likes) that messages receive.

In addition, we also measure the relevance of messages and their authors to the user, based on what we know of the user’s interest graph and social graph.

This knowledge enables us to rank messages in a number of ways: by date, by popularity, by relevance, by influence, and by activity.
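The paragraphs above name the signals but not the formula, so here is a purely illustrative sketch (not the actual StreamRank algorithm) of how such signals might combine into one score: a weighted sum with recency decay and dampened popularity.

```typescript
// Illustrative composite ranking (not the actual StreamRank algorithm).
interface ScoredMessage {
  ageHours: number;        // hours since the message was posted
  responses: number;       // retweets + replies + likes
  authorInfluence: number; // 0..1, from the author's reputation
  userRelevance: number;   // 0..1, match against the user's interest/social graph
}

function streamScore(m: ScoredMessage): number {
  const recency = Math.exp(-m.ageHours / 24);      // decays over roughly a day
  const popularity = Math.log1p(m.responses) / 10; // dampen runaway counts
  return 0.3 * recency + 0.2 * popularity
       + 0.2 * m.authorInfluence + 0.3 * m.userRelevance;
}

// Sorting on a single field gives the other orderings mentioned above
// (by date, popularity, influence); the weighted sum blends them all.
const ranked = (msgs: ScoredMessage[]) =>
  [...msgs].sort((x, y) => streamScore(y) - streamScore(x));
```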

Another issue that comes up when searching the Stream is that many messages in streams are quite strange looking – they don’t look like properly formed sentences or paragraphs. They don’t look like English, for example. They contain all sorts of abbreviations, hashtags, @replies, and short URLs, and they often lack punctuation and are scrunched to fit in 140-character Twitter messages.

Search algorithms that use any kind of linguistics, disambiguation, natural language processing, or semantics, don’t work well out of the box on these messy messages.

To apply such techniques you need to rewrite them so that they work on short, messy, strange looking messages. This is also something we’ve built in Bottlenose — we’ve built a new natural language processing and topic detection engine in Javascript that is designed specifically to handle these types of streams and messages.
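As a small illustration of the kind of preprocessing this requires (a sketch, not Bottlenose’s actual engine), a stream-aware tokenizer has to treat hashtags, @mentions, and short URLs as first-class tokens rather than discarding them as noise:

```typescript
// Sketch of stream-aware tokenization (not Bottlenose's actual engine).
interface Token { type: "hashtag" | "mention" | "url" | "word"; value: string; }

function tokenizeMessage(text: string): Token[] {
  const pattern = /(#\w+)|(@\w+)|(https?:\/\/\S+)|([A-Za-z']+)/g;
  const tokens: Token[] = [];
  for (const m of text.matchAll(pattern)) {
    if (m[1]) tokens.push({ type: "hashtag", value: m[1].slice(1) });
    else if (m[2]) tokens.push({ type: "mention", value: m[2].slice(1) });
    else if (m[3]) tokens.push({ type: "url", value: m[3] });
    else if (m[4]) tokens.push({ type: "word", value: m[4].toLowerCase() });
  }
  return tokens;
}

// tokenizeMessage("RT @bottlenoseapp: #StreamOS beta is live http://t.co/x1")
// -> word:rt, mention:bottlenoseapp, hashtag:StreamOS, word:beta, ...
```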

These are some of the new challenges and solutions we’re applying in Bottlenose to make working with streams more productive. They are components of what we call our “StreamOS,” a new high-level Javascript and HTML5 operating system for applications that need to do smart things with streams. We’ll be writing a lot more about this in future articles.


Drowning in the Stream — New Challenges for a New Web

This is Part II of a three-part series of articles on how the Stream is changing the Web.

In Part I of this series, The Message is the Medium, I wrote about some of the shifts that are taking place as the center of online attention shifts from documents to messages.

Here in Part II, we will explore some of the deeper problems that this shift is bringing about.

New Challenges in the Era of the Stream

Today the Stream has truly arrived. The Stream is becoming primary and the Web is becoming secondary. And with this shift, we face tremendous new challenges, particularly around overload. I wrote about some of these problems for Mashable in an article called, “Sharepocalypse Now.”

The Sharepocalypse is here. It’s just too easy to share, there is too much stuff being shared, there are more people sharing, and more redundant ways to share the same things. The result is that we are overloaded with messages coming at us from all sides.

For example, I receive around 13,000 messages/day via various channels, and I’m probably a pretty typical case. You can see a more detailed analysis here.

As the barrier to messaging has become lower and people have started sending more messages than ever before, messaging behavior has changed. What used to be considered spam is now considered to be quite acceptable.

Noise is Increasing

In the 1990’s emailing out a photo of the interesting taco you are having for lunch to everyone you know would have been considered highly spammy behavior. But today we call that “foodspotting” and we happily send out pictures of our latest culinary adventure on multiple different social networks at once.

Spam is the New Normal

It’s not just foodspotting – the same thing is happening with check-ins, and with the new behavior of “pinning” things (the new social bookmarking) that is taking place in Pinterest. Activities that used to be considered noise have somehow started to be thought of as signal. But in fact, for most people, they are still really noise.

The reason this is happening is that the barrier to sharing is much lower than it once was. Email messages took some thought to compose – they were at least a few paragraphs long. But today you can share things that are 140 characters or less, or just a photo without even any comments. It’s instant and requires no investment or thought.

Likewise, in the days of email you had to at least think, “is it appropriate to send this or will it be viewed as spam?” Today people don’t even have that thought anymore. Send everything to everyone all the time. Spam is the new normal.

Sharing is a good thing, but like any good thing, too much of it becomes a problem.

The solution is not to get people to think before sharing, or to share less, or to unfollow people, or to join social networks where you can only follow a few people (like Path or Pair). It’s to find a smarter way to deal with the overload that is being created.

Notifications Overload

Sharing is not the only problem we’re facing. There are many other activities that generate messages as well. For example, we’re getting increasing numbers of notification messages from apps. These notifications are not the result of a person sharing something; they are the result of an app wanting to get our attention.

We’re getting many types of notifications, for example:

  • When people follow us
  • When we’re tagged in photos
  • When people want to be friends with us
  • When there are news articles that match our interests
  • When friends check in to various places
  • When people are near us
  • When our flights are delayed
  • When our credit scores change
  • When things we ordered are shipped
  • When there are new features in apps we use
  • When issue tickets are filed or changed
  • When files are shared with us
  • When people mention or reply to us
  • When we have meeting invites, acceptances, cancellations, or meetings are about to start
  • When we have unread messages waiting for us in a social network

The last bullet bears extra mention. LinkedIn, for example, sends me messages telling me that I have unread messages waiting. Yes, we are even getting notifications about notifications!

When you get messages telling you that you have messages, that’s when you really know the problem is getting out of hand.

Fragmented Attention

Another major problem that the Stream is bringing about is the fragmentation of attention.

Today email is not enough. As if it weren’t enough work that we each have several email inboxes to manage, we are now also getting increasing volumes of messages outside of email, in entirely different inboxes for specialized apps. We have too many inboxes.

It used to be that to keep up with your messages all you needed was an email client.

Then the pendulum swung to the Web and it started to become a challenge to keep up with all the Web sites we needed to track every day.

So RSS was invented and for a brief period it seemed that the RSS reader would be adopted widely and solve the problem of keeping up with the Web.

But then social networks came out and they circumvented RSS, forcing users to keep up in social-network specific apps and inboxes.

So a new class of “social dashboard” apps (like Tweetdeck) was created to keep up with social networks, but these didn’t include email or RSS, or all the other Web apps and silos.

This trend towards fragmentation has continued – an increasing array of social apps and web apps can really only be adequately monitored in those same apps. You can’t really effectively keep up with them in email, in RSS, or via social networks. You have to login to those apps to get high-fidelity information about what is going on.

We’re juggling many different inboxes. These include email, SMS, voicemail, Twitter, Facebook, LinkedIn, Pinterest, Tumblr, Google+, YouTube, Yammer, Dropbox, Chatter, Google Reader, Flipboard, Pulse, Zite, as well as inboxes in specialized tools like Github, Uservoice, Salesforce, and many other apps and services.

Alan Lepofsky, at Constellation Research, created a somewhat sarcastic graph to illustrate this problem, in his article, “Are We Really Better Off Without Email?” The graph is qualitative – it’s not based on direct numbers – but in my opinion it is probably very close to the truth.

What this graph shows is that email usage peaked around 2005/2006, after which several new forms of messaging began to get traction. As these new apps grew, they displaced email for some kinds of messaging activities, but more importantly, they fragmented our messaging and thus our attention.

The takeaway from this graph is that we will all soon be wishing for the good old days of email overload. Email overload was nothing compared to what we’re facing now.

The Message Volume Explosion

As well as increasing noise and the fragmentation of the inbox, we’re also seeing huge increases in message volume.

Message volume per day, in all messaging channels, is growing. In some of these channels, such as social messaging, it is growing exponentially. For example, look at this graph of Twitter’s growth in message volume per day since 2009, from the Bottlenose blog:

Twitter now transmits 340 million messages per day, which is more than double its volume in March of 2011.

If this trend continues, then a year from now there will be between 500 million and 800 million messages per day flowing through Twitter.
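A back-of-the-envelope sketch shows where that range comes from. The doubling period below is an assumption inferred from the figures above, not an official Twitter statistic:

```python
# Rough projection using only the figures cited above: ~340M tweets/day now,
# up from less than half that in March 2011 -- call it a doubling roughly
# every 15 months (an assumption, not Twitter's own data).
CURRENT_VOLUME = 340e6       # messages per day
DOUBLING_MONTHS = 15         # assumed doubling period

def projected_volume(months_ahead: float) -> float:
    """Daily message volume, assuming steady exponential growth."""
    return CURRENT_VOLUME * 2 ** (months_ahead / DOUBLING_MONTHS)

# Twelve months out this gives ~590M/day; faster growth pushes the figure
# toward 800M, slower growth toward 500M -- hence the range in the text.
print(f"~{projected_volume(12) / 1e6:.0f}M messages/day in a year")
```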

And that’s just Twitter – Facebook, Pinterest, LinkedIn, Google+, Tumblr, and many other streams are also growing. Email volume is increasing as well, thanks to all the notifications that apps now send to email.

Message volume is growing across all channels. This is going to have several repercussions for all of us.

Engagement is Threatened

First of all, the signal-to-noise ratio of social media, and of other messaging channels, will worsen as volume increases. There will be less signal and more noise. It will get harder to find the needles we want in the haystack, because there will be so much more hay.

Today, on services like Twitter and Facebook, signal-to-noise is already barely tolerable. As this situation worsens over the next two years, we are going to become increasingly frustrated. And when this happens we are going to stop engaging.

When signal-to-noise in a channel gets too far out of hand, it becomes unproductive and inefficient to use that channel. In the case of social media, we are right on the cusp of this happening. When it does, people will simply stop engaging, and when engagement falls the entire premise of social media will start to fail.

This is already starting to happen. A recent article by George Colony, CEO of the analyst firm Forrester Research, cites a study that found that 56% of time spent on social media is wasted.

When you start hearing numbers like this, it means that consumers are not getting the signal they need most of the time, and this will inevitably result in a decrease in satisfaction and engagement.

What’s Next?

We have seen some of the issues that are coming about, or may soon come about, as the Stream continues to grow. But what’s going to happen next? How is the Stream, and our tools for interacting with it, going to adapt?

Click here to read Part III of this series, Keeping Up With the Stream, where we’ll explore various approaches to solving these problems.

The Message is the Medium – Attention is Shifting from the Web to the Stream

Shift Happens

A major shift has taken place on the Web. Web pages and Web search are no longer the center of online activity and attention. Instead, the new center of attention is messaging and streams. We have moved from the era of the Web to the era of the Stream. This changes everything.

Back in 2009, I wrote an article called “Welcome to the Stream – Next Phase of the Web” which discussed the early signs of this shift. Around the same time, Erick Schonfeld, at TechCrunch, also used the term in his article, “Jump Into the Stream.” Many others undoubtedly were thinking the same thing: The Stream would be the next evolution of the Web.

What we predicted has come to pass, and now we’re in this new landscape of the Stream, facing new challenges and opportunities that we’re only beginning to understand.

In this series of articles I’m going to explore some of the implications of this shift to the Stream, and where I think this trend is going. Along the way we’re going to dive deep into some major sea changes, emerging problems, and new solutions.

From Documents to Messages

The shift to the Stream is the latest step in a cycle that seems to repeat. Online attention appears to swing like a pendulum from documents to messages and back every few decades.

Before the advent of the Web, the pendulum was swinging towards messaging. The center of online attention was messaging via email, chat and threaded discussions. People spent most of their online time doing things with messages. Secondarily, they spent time in documents, for example doing word-processing.

Then the Web was born and the pendulum swung rapidly from messages to documents. All of a sudden Web pages – documents – became more important than messages. During this period the Web browser became more important than the email client.

But with the growth of social media, the pendulum is swinging back from documents to messaging again.

Today, our online attention is increasingly directed toward messages, not Web pages. We are getting more messages, and more types of messages, from more apps and relationships than ever before.

We’re not only getting social messages, we’re also getting notification messages. And they are coming to us from more places – especially from social networks, content providers, and social apps of all kinds.

More importantly, messages are now our starting points for the Web — we discover things on the Web from messages. When we visit Web pages, it is more often because we found a link in a message that was sent to us or shared with us. Messages are where we begin; they are primary, and Web pages are secondary.

From Search to Social

Another sign of the shift from the Web to the Stream is that consumers are spending more time in social sites like Facebook, Pinterest and Twitter than on search engines or content sites.

In December of 2011, comScore reported that social networking ranked as the most popular content category in online engagement, accounting for 19% of all consumer time spent online.

These trends have led some, such as VC Fred Wilson, to ask, “how long until social drives more traffic than search?” Fred’s observation was that his own blog was getting more traffic from social media sites than from Google.

Ben Elowitz, the CEO of Wetpaint, followed up on this by pointing out that according to several sources of metrics, the shift to social supplanting search as the primary traffic driver on the Web was well underway.

According to Ben’s analysis, the top 50 sites were getting almost as much traffic from Facebook as from Google by December of 2011. Seven of these top 50 sites were already getting 12% more visits from Facebook than from Google, up from 5 of these top sites just a month earlier.

The shift from search to social is just one of many signs that the era of the Stream has arrived and we are now in a different landscape than before.

The Web has changed: the focus is now on messages, not documents. This leads to many new challenges and opportunities. It’s almost as if we are in a new Web, starting from scratch – it’s 1994 all over again.

Click here to continue on to Part II of this series, Drowning in the Stream, where we’ll dig more deeply into some of the unique challenges of the Stream.

Consciousness is Not a Computation

In the previous article in this series, Is The Universe a Computer? New Evidence Emerges, I wrote about new evidence suggesting that the universe may be like a computer, or at least that it contains computer codes of a sort.

But while this evidence is fascinating, I don’t believe that ultimately the universe is in fact a computer. In this article, I explain why.

My primary argument is that consciousness is not computable. Since consciousness is an undeniable phenomenon that we directly experience, and since no mere computer or computation can generate or account for it, the universe has to be more than a computer. Below I explain this in more detail.

Consciousness is More Fundamental Than Computation

If the universe is a computer, it would have to be a very different kind of computer than what we think of as a computer today. It would have to be capable of a kind of computation that transcends what our computers can presently do. It would have to be capable of generating all the strangeness of general relativity and quantum mechanics. Perhaps one might posit that it is a quantum computer of some sort.

However, it’s not that simple. IF the universe is any kind of computer, it would actually have to be able to create every phenomenon that exists, and that includes consciousness.

The problem is that consciousness is notoriously elusive, and may not even be something a computer could ever generate. After decades of thinking about this question from many angles, I seriously doubt that consciousness is computable.

In fact, I don’t think consciousness is an information process, or a material thing, at all. It seems, from my investigations, that consciousness is not a “thing” that exists “in” the universe; rather, it is in the category of fundamentals, just like space and time. For example, space and time are not “in” the universe; rather, the universe is “in” space and time. I think the same can be said about consciousness. In fact, I would go so far as to say consciousness is probably more fundamental than space and time: they are “in” it rather than it being “in” them.

There are numerous arguments for why consciousness may be fundamental. Here I will summarize a few of my favorites:

  • Physics and Cosmology. First of all, there is evidence in physics, such as the double-slit experiment, that indicates there may be a fundamental causal connection between the act of consciously observing something and what is actually observed. Observation seems to be intimately connected to what the universe does, to what is actually measured. It is as if the act of observation — of measurement — actually causes the universe to make choices that collapse possibilities into specific outcomes. This implies that consciousness may be connected to the fundamental physical laws and the very nature of the universe. Taken to the extreme, there are even physical theories, such as the anthropic principle, that postulate that the whole point of the universe, and all the physical laws, is consciousness.
  • Simulation. Another approach to analyzing consciousness is to attempt to simulate or synthesize consciousness with software, where one quickly ends up in either an infinite regress or a system that is not conscious of its own consciousness. Trying to build a conscious machine, even in principle, is very instructive, and everyone who is seriously interested in this subject should attempt it until they are convinced it is not possible. In particular, self-awareness, the consciousness of consciousness, is hard to model. Nobody has succeeded in designing a conscious machine so far. Nobody has even succeeded in designing a non-conscious machine that can fool a conscious being into thinking it is a conscious being. Try it. I dare you. I tried many times, and in the end I came to the conclusion that consciousness, and in particular self-consciousness, leads to infinite regresses that computers are not capable of resolving in finite time.
  • Neuroscience. Another approach is to try to locate consciousness in the physical brain, the body, or anywhere in the physical world – nobody has yet found it. Consciousness may have correlates in the brain, but they are not equivalent to consciousness. John Searle and others have written extensively about this issue. Why do we even have brains then? Are they the source of consciousness, or are they more like electrical circuits that merely channel it without originating it, or are brains the source of memory and cognition, but not consciousness itself? There are many possibilities and we’re only at the beginning of understanding the mind-brain connection. However, so far, after centuries of dissecting the brain, mapping it, and measuring it in all kinds of ways, no consciousness has been found inside it.
  • Direct Introspection. One approach is through direct experience: search for an origin of knowing by observing your own consciousness directly, with your own consciousness. No origin is found. There is no homunculus in the back of our minds that we can identify. In fact, when you search, even mere consciousness is not found, let alone its source. The more we look the more it dissolves. Consciousness is a word we use, but when we look for it we can’t find what it refers to. But that doesn’t mean consciousness isn’t a real phenomenon, or that it is an illusion. It is undeniable that we are aware of things, including of the experience of being conscious. It is unfindable, yet it is not a mere nothingness either – there is definitely some kind of awareness or consciousness taking place that is in fact the very essence of our minds. The nature of consciousness exemplifies the Buddhist concept of “emptiness” in a manner that we can easily and directly experience for ourselves. But note that “empty” in this sense doesn’t mean nothingness, or non-existence; it means that it exists in a manner that transcends being either something or nothing. From the Buddhist perspective, although consciousness cannot be found, it is in fact the ultimate nature of reality, from which everything else appears.
  • Logic. Another approach is logical: Recognize that all experience is mediated by consciousness — all measurements, all science, all our own personal experience, all our collective experiences. Nothing ever happens or is known by us without first being mediated by consciousness. Thus consciousness is more fundamental than anything we know of; it is the most fundamental experience, even more fundamental than the experience of space and time, or our measurements thereof. From this perspective we cannot honestly say that anything can ever exist apart from consciousness, from someone or something knowing it. In fact, it would appear that everything depends on consciousness to be known, and possibly to exist, because we have no way to establish that anything exists apart from consciousness. Based on the evidence we have, consciousness is therefore fundamental. The universe appears to be in consciousness, not vice-versa: This is in fact a more logical and more scientific conclusion than the standard belief that consciousness is an emergent property of the brain, or that it is a separate phenomenon from appearances. In the extreme, this investigation leads to a philosophical view called solipsism. However, note that the Buddhist view (above) transcends solipsism because, in fact, there is no self in consciousness – anything you can label as “self” or “I” is actually just an appearance in consciousness, not consciousness in pure form. Since there is no self, you cannot claim that you own consciousness, or that everything exists in “your” consciousness – because there is no way to assert a self that owns or is consciousness that contains everything else, nor can any “other” be asserted either. Since consciousness is more fundamental than self, or the self-other dichotomy, the view of solipsism is defeated. Instead consciousness transcends self and other, one and many.
  • Unusual experiences. Yet another approach is to observe consciousness under unusual or extreme conditions such as during dreaming, lucid dreaming, religious experiences, peak experiences, when under the influence of mind-altering drugs, or in numerous well-documented cases of apparent reincarnation and near-death experiences. In such cases there is a wealth of both direct and anecdotal evidence suggesting that consciousness is able to transcend the limits of the body, as well as space and time. Whether you believe such evidence is valid is up to you; however, there is a growing body of careful studies on these topics indicating that there is a lot more to consciousness than our day-to-day waking state.

Beyond Computation

Because of the above lines of reasoning and observation, I have come to the conclusion that consciousness transcends the physical, material world. It is something different, something special. And it does not seem to be computable, because it has no specific form, substance or even content that can be represented as information or as an information process.

For example, in order to be the product of a computation, consciousness would need to be composed of information — there would need to be some way to completely describe and implement it with information, or an information process — that is, with bits in a computer system. Information processes cannot operate without information – they require bits, 1s and 0s, and some kind of program for doing things with them.

So the question is: can any set or process of 1s and 0s perfectly simulate or synthesize what it is to be conscious? I don’t think so. Because consciousness, when examined, is found to be literally formless and unfindable, it has no content or form that can be represented with 1s and 0s. Furthermore, because consciousness, when examined, is essentially changeless, it is not a process – for a process requires some kind of change. Therefore it is neither information nor an information process.

Some people counter the above argument by saying that consciousness is an illusion, a side-effect, or what is called an “epiphenomenon” of the brain. They claim that there is no such thing as actual consciousness, and that there is nothing more to cognition than the machinery of the brain. They are completely missing the fundamental point.

But let’s assume they are right for a moment – if there is no consciousness, then what is taking place when a being knows something, or when they know their own knowing capacity? How could that be modeled in a computer program? Simply creating a data structure and process that represents its own state recursively is not sufficient – because it is static, it is just data – no actual qualia of knowing arise in that system.
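To make that point concrete, here is a toy sketch (entirely hypothetical, invented for illustration) of a program that “represents its own state recursively.” However deep the recursion goes, inspection only ever turns up more inert data:

```python
# A toy self-model, purely to illustrate the argument above: a structure
# that "represents its own state recursively" is still just inert data.
class SelfModel:
    def __init__(self, depth=0):
        self.depth = depth
        self.beliefs = {"i_am_aware": True}  # a stored claim, not awareness

    def model_of_self(self):
        # Each call just builds another representation, one level deeper.
        # The recursion never bottoms out in anything that *knows*; it
        # either regresses forever or gets cut off at an arbitrary depth.
        return SelfModel(depth=self.depth + 1)

m = SelfModel()
meta = m.model_of_self()            # a model of the model...
meta_meta = meta.model_of_self()    # ...and so on, without end
print(meta_meta.depth, meta_meta.beliefs)  # -> 2 {'i_am_aware': True}
```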

Try as one might, there is no way to design a machine or program that manifests the ability to know or experience the actual qualia of experiences. John Searle’s Chinese Room thought experiment is a famous logical argument that illustrates this. The simple act of following instructions – which is all a computer can do – never results in actually knowing what those instructions mean, or what it is doing. The knowing aspect of mind – the consciousness – is not computable.

Not only can consciousness not be simulated or synthesized by a computer, it cannot be found in a computer or computer program. It cannot magically emerge in a computer of sufficient complexity.

For example, suppose we build up a computer or computer program by gradually adding tiny bits of additional complexity — at what point does it suddenly transition from being not-conscious to being conscious? There is no such sudden transition. I call that kind of thinking “magical complexity,” and many people today are guilty of it. It is an intellectual cop-out: there is nothing special about complexity that suddenly and magically causes consciousness to appear out of nowhere.

Consciousness is not an emergent property of anything, nor is it dependent on anything. It does not come from the brain, and it does not depend on the brain. It is not part of the brain either. Instead, it would be more correct to say that the brain is perhaps an instrument of consciousness, or a projection that occurs within consciousness.

One analogy is that the brain channels consciousness, like an electrical circuit channels electricity. In a circuit the electricity does not come from the circuitry; it is fundamentally the energy of the universe – the circuit is just a conduit for it.

A better analogy, however, is that the brain is actually a projection of consciousness, just as a character in a dream is a projection of the dreaming mind. Within a dream there can be fully functional, free-standing characters that have bodies, personalities and that seem to have minds of their own, but in fact they are all just projections of the dreaming mind. Similarly, the brain appears to be a machine that functions a certain way, but it is less fundamental than the consciousness that projects it.

How could this be the case? It sounds so strange! However, if I phrase it differently, all of a sudden it sounds perfectly normal. Instead of “consciousness” let’s say “space-time.” The brain is a projection of space-time; space-time does not emerge from the brain. That sounds perfectly reasonable.

The key is that we have to think of consciousness as the same level of phenomena as space-time, as a fundamental aspect of the universe. The brain is a space-time-consciousness machine, and the conceptual mind is what that machine is experiencing and doing. However, space-time-consciousness is more fundamental than the machinery of the brain, and even when the brain dies, space-time-consciousness continues.

For the above reasons, I think that consciousness proves that the universe is not a computer — at least not on the ultimate, final level of analysis. Even if the universe contains computers, or contains processes that compute, the ultimate level of reality is probably not a computer.

But let’s, for the purpose of being thorough, suppose that we take the opposite view, that the universe IS a computer and everything in it is a computation. This view leads to all sorts of problems.

If we say that the universe is a computation, it would imply that everything — all energy, space, time and consciousness — is taking place within the computation. But then where is the computation coming from, and where is it happening? A computation requires a computer to compute it — some substrate that does the computing. Where is this substrate? What is it made of? It cannot also be made of energy, space, time or consciousness — those are all “inside” the computation; they are not the substrate, the computer.

Where is the computer that generates this universal computation? Is it generating itself? That is a circular reference that doesn’t make sense. For example, you can’t make a computer program that generates the computer that runs it. The computer has to be there before the program, it can’t come from the program. A computation requires a computer to compute it, and that computer cannot be the same thing as the computation it generates.

If we posit a computer that exists beyond everything – beyond energy, space and time — how could it compute anything? Computation requires energy, space and time — without energy there is no information, and without space and time there is no change, and thus no computation. A computer that exists beyond everything could not actually do any computation.

One might try to answer this by saying that the universal computation takes place on a computer that exists in a meta-level space-time beyond ours — in other words it exists in a meta-universe beyond our universe. But that answer contradicts the claim that our universe is a computer – because it means that what appears to be a universe computer is really not the final level of reality. The final level of reality in this case is the meta-universe that contains the computer that is computing our universe. That just pushes the problem down a level.

Alternatively, one could claim that the meta-universe beyond our universe is also a computer — so our universe computer exists inside a meta-level universe computer. In this case it’s “computers all the way down” – an infinite regress of meta-computers containing meta-computers containing meta-computers. But that claim is a logical cop-out, because then there is no final computer behind it all – no source or end of computation. If such an infinite chain of computations could exist, it would be difficult to say it actually computes anything, since it could never start or complete; the claim ends up not unlike claiming that the universe is NOT a computer.

In the end we face the same metaphysical problems we’ve always faced – either there is a fundamental level of reality that we cannot ever really understand, or we fall into paradoxes and infinite regress. Digital physics may have some explanatory power, but it has its limits.

But then what does it mean that we find error correcting codes in the equations of supersymmetry? If the fundamental laws of our universe contain computer codes in them, how can we say the universe is not a computer? Perhaps the universe IS a computer, but it’s a computer that is appearing within something that fundamentally is not computable, something like consciousness perhaps. But can something that is not computable generate or contain computations? That’s an interesting question.

Consciousness is certainly capable of containing computations, even if it is not a computation. A simple example of this would be a dream about a computer that is computing something. In such a dream there is an actual computer doing computations, but the computer and the computations depend on something (consciousness) that is not coming from a computer and is not a computation.

In the end I think it’s more likely that ultimate reality is not a computer – that it is a field of consciousness that is beyond computation. But that doesn’t mean that universes that appear to be computations can’t appear within it.

“Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man.” — Chuang Chou

Further Reading

If you are interested in exploring the nature of consciousness more directly, the next article in this series, Recognizing The Significance of Consciousness, explains what consciousness is actually like, in its pure form, and how to develop a better recognition of it for yourself.

Is the Universe a Computer? New Evidence Emerges.

I haven’t posted in a while, but this is blog-worthy material. I’ve recently become familiar with the thinking of University of Maryland physicist James Gates Jr. Dr. Gates works on a branch of physics called supersymmetry. In the course of his work he has discovered what appears to be a form of computer code, called error-correcting codes, embedded within, or resulting from, the equations of supersymmetry that describe fundamental particles.

You can read a non-technical description of what Dr. Gates has discovered in this article, which I highly recommend.

In the article, Gates asks, “How could we discover whether we live inside a Matrix? One answer might be ‘Try to detect the presence of codes in the laws that describe physics.'” And this is precisely what he has done. Specifically, within the equations of supersymmetry he has found, quite unexpectedly, what are called “doubly-even self-dual linear binary error-correcting block codes.” That’s a long-winded label for codes that are commonly used to remove errors in computer transmissions, for example to correct errors in a sequence of bits representing text that has been sent across a wire.
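To give a concrete feel for what an error-correcting block code is, here is a minimal sketch of the extended Hamming [8,4] code, the smallest doubly-even self-dual binary code. This is offered only as an illustration of the general class of codes named above, not as the specific construction Gates found:

```python
import numpy as np

# The extended Hamming [8,4] code: the smallest doubly-even self-dual
# binary code (every codeword's weight is divisible by 4).
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
G = np.hstack([np.eye(4, dtype=int), A])  # generator: codeword = msg @ G mod 2
H = np.hstack([A, np.eye(4, dtype=int)])  # parity-check matrix (A is symmetric)

def encode(msg):
    """Encode 4 message bits into an 8-bit codeword."""
    return msg @ G % 2

def correct(word):
    """Repair a single flipped bit using syndrome decoding."""
    syndrome = H @ word % 2
    if syndrome.any():
        # A single error at position p yields a syndrome equal to
        # column p of H, so match the syndrome against H's columns.
        p = next(i for i in range(8) if np.array_equal(H[:, i], syndrome))
        word = word.copy()
        word[p] ^= 1
    return word

codeword = encode(np.array([1, 0, 1, 1]))
noisy = codeword.copy()
noisy[2] ^= 1                                    # flip one bit "in transit"
assert np.array_equal(correct(noisy), codeword)  # the error is removed
```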

Gates explains, “This unsuspected connection suggests that these codes may be ubiquitous in nature, and could even be embedded in the essence of reality. If this is the case, we might have something in common with the Matrix science-fiction films, which depict a world where everything human beings experience is the product of a virtual-reality-generating computer network.”

Why are these codes hidden in the laws of fundamental particles? “Could it be that codes, in some deep and fundamental way, control the structure of our reality?” he asks. It’s a good question.

If you want to explore further, here is a YouTube video by someone interested in popularizing Dr. Gates’ work, containing an audio interview that is worth hearing. Here you can hear Gates describe the potential significance of his discovery in layman’s terms. The video then goes on to explain how all of this might be further evidence for Bostrom’s Simulation Hypothesis (which suggests that the universe is a computer simulation). (Note: the video is a bit annoying – in particular the melodramatic soundtrack – but it’s still worth watching to get a quick, high-level overview of what this is all about, and some of the wild implications.)

Now why does this discovery matter? Well, it is more than strange and intriguing that fundamental physics equations describing the universe would contain these error-correcting codes. Could it mean that the universe itself is built with error-correcting codes in it, codes just like those used in computers and computer networks? Did they emerge naturally, or are they artifacts of some kind of intelligent design? Or do they indicate that the universe literally IS a computer? For example, maybe the universe is a cellular automata machine, or perhaps a loop quantum gravity computer.

Digital Physics – A New Kind of Science

The view that the universe is some kind of computer is called digital physics – it’s a relatively new niche field within physics that may be destined for major importance in the future. But these are still early days.

I’ve been fascinated by the possibility that the universe is a computer since college, when I first found out about the work of Ed Fredkin on his theory that the universe is a cellular automaton — for example, like John Conway’s Game of Life algorithm (particularly this article, excerpted from the book Three Scientists and their Gods).
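Since the Game of Life comes up here, a minimal sketch may help show just how little machinery such a universe needs (a toy Python version on a small wrap-around grid; the glider pattern is the standard one):

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell lives next step if it has exactly 3 neighbors,
    # or if it is alive now and has exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider": five cells that travel diagonally forever -- rich, lifelike
# behavior emerging from trivially simple local rules.
grid = np.zeros((8, 8), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1
for _ in range(4):  # after 4 steps the glider reappears shifted by (1, 1)
    grid = life_step(grid)
```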

Following this interest, I ended up interning in a supercomputing lab at MIT that was working on testing these possibilities, with the authors of this book on “Cellular Automata Machines.”

Later I had the opportunity to become friends with Stephen Wolfram, whose magnum opus, “A New Kind of Science,” is the ultimate, and also heaviest, book on this topic.

I asked Stephen what he thinks about this idea, and he said it is “a bit like saying ‘there’s a Fibonacci sequence there; this must be a phenomenon based on rabbits.’ Error-correcting codes have a certain mathematical structure, associated e.g. with sphere packing. You don’t have to use them to correct errors. But it’s definitely an amusing thought that one could detect the Matrix by looking for robustification features of code. Of course, today’s technology/code rarely has these … because our computers are already incredibly reliable (and probably getting more so).”

The work of Dr. Gates is, at the very least, an interesting new development for this field. At best it might turn out to be a very important clue about the nature of the universe, although it’s very early and purely theoretical at this point. It will be interesting to see how this develops.

However, I personally don’t believe the universe will turn out to be a computer or a computation. Read the next article in this series to find out why I think Consciousness is Not a Computation.

Notes:

  • Seth Lloyd, professor of quantum mechanical engineering at MIT, has written a book that describes his theory that the universe is a quantum computer.
  • Here’s a good article that explores, in some more detail, various views related to the idea that the universe is a computation.

Bottlenose has Launched!

Today, after almost two years of work in stealth, I am proud to announce the launch of Bottlenose.

While I have co-founded and serve on the boards of several other ventures (The Daily Dot, Live Matrix, StreamGlider, and others), Bottlenose is different from all my other projects in that I am also in a full-time day-to-day role as the CEO. In short, Bottlenose is what I’m putting the bulk of my time into going forward, although I will continue to angel invest and advise other startups.

The story of Bottlenose began when my good friend and advisor, Josh Jones-Dilworth, introduced me to Dominiek ter Heide after I sold my last company, Twine.com in 2010.

Dominiek was at the time working on a new kind of personalization technology for social media. Meanwhile, I had been thinking about how to filter the Stream, the emerging problem of the Sharepocalypse, and what I have been calling “the Stream 3.0 Problem.”

Josh knew both of us and had a hunch that we were really thinking about the same problem from different angles. Dominiek and I started speaking via Skype and soon we teamed up. Bottlenose was officially born in 2010.

Working with Dominiek has been a true pleasure. He’s one of the most productive, talented software engineers I’ve ever met. It’s been an amazing ride so far. Soon, thanks to Dominiek, we were joined by an A-team of killer engineers with expertise in natural language processing, Node.js, Javascript, HTML 5, machine learning, cloud computing, NoSQL, and more.

Our little band of hotshots has produced an amazingly robust and powerful app — something that even large companies with huge engineering teams would be hard-pressed to develop. I’m honored to be working with these guys, and very proud of the team and of what we’ve built.

We have also been fortunate to be joined by some terrific angel investors, including Andy Jenks, of Stage One Capital, and several others (see the About page on Bottlenose for the complete list).

So what is Bottlenose anyway? Well, one way to find out is to visit the site and check out the Tour there. But I’ll summarize here as well:

Bottlenose is the smartest social media dashboard ever built. It’s designed for busy people who make heavy use of social media: prosumers, influencers, professionals.

Bottlenose uses next-generation “stream intelligence” technology to understand the messages that are flowing through Twitter, Facebook and other social networks. It also learns about your interests.

On the basis of this knowledge, Bottlenose helps you filter your streams to find what matters to you, what’s relevant, and what’s most important. Bottlenose also includes many new features, like Sonar, which visualizes what’s going on in any stream, and powerful rules and automation capabilities to help you become more productive.
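Bottlenose’s actual stream-intelligence algorithms are proprietary and far more sophisticated, but purely as an illustration of the general idea of interest-based filtering, a toy scorer might look like this (the interest profile, weights, and messages are all invented for the example):

```python
# Purely illustrative -- NOT Bottlenose's actual algorithm. A toy relevance
# scorer that ranks stream messages against a hypothetical interest profile.
# Real stream intelligence involves NLP, trend detection, and learning,
# but the filtering idea is the same in spirit.
INTERESTS = {"machine learning": 3.0, "startups": 2.0, "sailing": 1.0}

def relevance(message: str) -> float:
    """Score a message by the total weight of interest terms it mentions."""
    text = message.lower()
    return sum(w for term, w in INTERESTS.items() if term in text)

stream = [
    "New machine learning framework released today",
    "What I had for lunch",
    "Sailing startups raising seed rounds",
]
for msg in sorted(stream, key=relevance, reverse=True):
    print(f"{relevance(msg):.1f}  {msg}")  # most relevant messages first
```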

This is just the beginning of this adventure. Our roadmap for Bottlenose is very ambitious, and it’s going to be a lot of fun, and hopefully will really make a difference too. We’re super excited about this product and we hope you will be as well.

Check back here for more posts and observations about Bottlenose and where I think social media is headed.

Make sure to follow us on Twitter.

And come check out Bottlenose! The app is still in invite-only beta, so you need either a high enough Klout score or an invite code to get in.

The first 500 readers of my blog who want to try it out, can get into Bottlenose using the invite code: novafriends

I look forward to seeing you in Bottlenose!

For more about the thinking behind Bottlenose, read The Problem of Stream 3.0


My Best Interview: About Global Brain, Consciousness and AI

I was recently interviewed by Stephen Ibaraki and Alex Lin (CEO of ChinaValue) in what turned out to be the most interesting, far-reaching, and multi-disciplinary (and long) interview I’ve ever given. I was very pleased with the depth of their questions and the topics we covered. You can listen to the MP3 version here, or read a full-text transcript here.

Topics covered:

  • My work over the last few decades
  • Big life lessons I’ve had
  • My recent “Venture Production Studio” concept
  • Stealth ventures I’m working on (realtime web, wireless power, etc.)
  • Intelligent assistants
  • Predictions for the future
  • Augmented reality
  • The Singularity
  • Do we have free will? Will that change as Global Mind emerges?
  • The changing nature of individuality
  • The Psychological Singularity
  • The Global Brain – history and implications
  • The WebOS – Which cloud will win?
  • The Semantic Web – what it’s really for, is it being adopted?
  • What level does the brain compute at? Neural vs. quantum.
  • Nature of consciousness (Buddhist view vs. Western Scientific view) – “I think, therefore I am” vs. “I am, therefore I think”
  • The nature of self & possibility of artificial selves
  • John Searle’s Chinese Room thought experiment
  • Digital physics & cellular automata; Ed Fredkin & Stephen Wolfram
  • Bostrom’s Simulation Hypothesis
  • Buddhist views on ultimate nature of reality
  • My relationship with Peter Drucker (my grandfather) and his influence (management, knowledge workers, social sector etc.)
  • The shift to a now-centric civilization
  • The fragmentation of the Semantic Web
  • Freeing intelligence from human brains (like we did with knowledge)
  • Symbiosis; Part vs. Whole – When does the Global Brain change to a new level of order?
  • Beyond Homo Sapiens – What’s next? Cyborgs, collective beings, etc.
  • Technological ethics – what kind of future are we building?
  • Combining the best of Asian and Western intellectual approaches
  • IBM-Jeopardy Challenge

My Father and Me. A Memoir. For Mayer Spivack (1936 – 2011)

My father, Mayer Spivack, passed away on February 12, 2011, in the Kaplan Family House, a beautiful hospice outside of Boston. He was only 74, and had fought a difficult year-and-a-half battle with colon cancer. During his illness he never lost his spirit of childlike curiosity, his enormous compassion, or his dedication to innovation.

His passing was at times difficult, but ultimately peaceful, and took place over five days, during which he was surrounded by love from close family and friends. His presence and spirit, and the intense experiences we all shared over those last days with him are unforgettable: the most incredible experience of love and spiritual connection I have ever had. He was as great in death as he was in life.

This is the story of my relationship with my father: the things I appreciated most about him, what I learned from him, and what he gave to me at the end of his life. By sharing this, I hope to amplify and share his gifts with others.

My father was a truly unique person, and a Boston legend. He was multi-talented and worked in many fields at once, mastering them all (you can read more about his actual work here). He had a vast intelligence, a palpably original approach, and an even greater heart. He was a true Renaissance Man, a great intellectual and artist, and often an unintentionally entertaining and eccentric genius. He had a profound influence on all who knew him well.

As a father, he was a large, warm, loving, fuzzy bear of a man who never really lost his childlike innocence. He was the kind of father everyone wanted to have and when they met him they instantly wanted to hug him. His greatest accomplishment was his compassionate heart: Everyone could feel it.

But despite his brilliance, or perhaps because of it, my father never really fit in. There was no box that could contain him. He was an only child, a loner, and an outsider with little interest in conformity. He had a disdain for formality and social conventions, which always manifested, much to our embarrassment, in the most formal and conventional of settings. He described himself as an iconoclast. Despite his unconventional ways, he was loved and appreciated for his humor, his quirkiness, his unselfconscious originality, and his always out-of-the-box thinking, even (and sometimes especially) by those in the mainstream.

One funny story we recently remembered illustrates his irrepressible spirit: He was invited with his wife to a major European conference of art restorers in Italy. There was a formal reception at the home of an Italian Duke. My father, never comfortable with any kind of formality, playfully took one of the candles from the reception and wore it on his head for the entire night. Throughout the five-course formal dinner and the reception, he was introduced to various members of the Venetian nobility and the European art world, all the while balancing this burning little candle on his head, yet acting completely as if it wasn’t there and never acknowledging it. Everyone thought that, because of his first name, “Mayer,” he was actually the eccentric “mayor” of some city in the USA, and so despite their horror they were too afraid to point out that there was a candle on his head.

In another infamous incident, my father sat on the Arts Council for the city of Newton, Massachusetts. One day a photo was taken of the Council members, none of whom were actual artists, aside from my father — they were prominent upstanding Newton business leaders and socialites. In the photo they are all wearing three piece suits and looking very formal and proud. My father is also wearing a three piece suit, except that, much to the dismay of the other Council members, his suit pants are tucked into gigantic calf-height silver moon boots (to him it was winter and it was perfectly logical to wear snow boots).

In a similar vein, whenever my father was invited to a black tie event, he would reluctantly attend, dressed appropriately, except with a black dress sock tied around his neck instead of a bow tie. Of course he would never acknowledge this to anyone, and they were all too shocked to point it out to him.

One more example of my father’s individuality: when we were children in the 1970s in Boston, my father got a great deal on a World War One field ambulance. That was our family “car.” He also had a longstanding love affair with army surplus, to which he had special access through his position on the faculty of Harvard Medical School. From some special warehouse, he acquired a full Coast Guard extreme-weather helicopter rescue snowsuit — a bright orange, practically bulletproof, insulated monstrosity. To him it was extremely practical: warm, waterproof, and visible even in the worst white-out snowstorm conditions. He was entirely unselfconscious of the fact that he looked like he had just descended from a rescue helicopter when he wore it. And so this was what he wore, along with his usual silver moon boots, all winter, every winter, through my early childhood.

My poor brother and I would have to be dropped off every morning at elementary school this way: We would pull up in an antique white ambulance — a big man in an orange emergency jumpsuit, sunglasses, and silver moon boots would get out, tromp through the snow, and open the rear doors (where the stretcher would normally be), and then my younger brother and I would pop out, much to the shock and awe of our fellow schoolmates. Thus were the origins of my own life as an alien and outsider. While these experiences were a source of horror and embarrassment for us growing up, today we laugh hysterically when we remember them — they are what we are made of and I wouldn’t trade them back for anything.

My father was a huge influence on me as an innovator. He was a prolific, constant professional inventor, and my childhood was filled with his inventions in various stages of development. He was such a good inventor that corporations like Polaroid, Otis Elevator, and others would hire him to come up with inventions. I remember him once telling me that he made 100 inventions for Polaroid in 100 days. There was another time when my father was hired to invent new uses for Silly Putty — he received a giant vat of the stuff from the Silly Putty people. With the attention of my father, two kids, and all our friends, the Silly Putty gradually dispersed throughout our house, until little blobs of it could be found in every corner, crevice, crack, cranny and nook.

My brother and I grew up inventing things with our father. In fact, we were not allowed to have or watch a TV as children – instead we had three rooms dedicated to making things, in which we spent most of our time: one for building things with wood, one for drawing and painting, and another was my father’s studio. These rooms were stocked with all kinds of tools and art supplies.

As an inventor, my father always had tools and various devices hanging off of him, clipped onto his belt, in fanny packs, in holsters, backpacks, special cases, and in holders of his own making. Our nickname for him at times was “Inspector Gadget.”  He was always infatuated with some new tool or device.

I remember, for example, what we refer to as his “Hot Glue Phase,” when I was in junior high school. Hot glue is a plastic that you melt through a device called a hot glue gun. It creates a white plastic goo that hardens as it cools and is unfortunately able to fasten just about anything together, much to my father’s delight, and our misfortune. I remember going to junior high school with a rip in my pants repaired visibly with hot glue, my sneakers repaired with hot glue, my book bag repaired with hot glue. There was nothing that hot glue couldn’t be used on, we discovered. Clothes. Plates. Furniture. Our house was at one time filled with little spider web strands of hot glue residue, stringing together our possessions, our home, our clothes, us.

One of my father’s most memorable inventions was “The Body Sail” – a precursor to the Windsurfer, on which the sail was not attached to the board  but rather was held by hand using a special boom. He once won the Charles River Boat Festival sailing that contraption – of course, wearing a full body scuba suit. My brother and I used to use his Body Sail on ice skates in the winter, on frozen ponds. My father, of course, preferred to sail it on roller skates, in full bodysuit, helmet and gloves, right through parting waves of startled lunchtime crowds in Harvard Square.

No story about my father would be complete without mentioning his love of sailing. It encompassed not only his Body Sail invention, but a series of boats, particularly multi-hulled boats such as catamarans and eventually trimarans. In his later years he moved to Marblehead outside of Boston, a worldwide center of sailing, where he became an avid fan of high-speed sailing, eventually designing and starting to build his own trimaran out of aerospace composite materials, which, had it ever been finished, would have been among the fastest, and certainly the most computerized and advanced, trimarans on Earth.

My father was also a classically trained artist, particularly a widely shown sculptor. I grew up surrounded by his artworks — photos, drawings, and sculptures made from found objects, industrial artifacts, and natural materials — and I played in his studios, amid tools for making things, prototyping, and inventing. As an artist he was truly unique: an early pioneer of the use of “found objects,” he made works from rusty pieces of industrial machinery, wooden molds for casting pieces of ships, old farm tools, and found wood and materials from nature. There were hundreds of these artworks, and he had numerous exhibitions.

One series of works he called “Foundiron” consisted of pieces taken from the intestines of large industrial boilers and furnaces. Another series, made from wooden molds for casting brass ship fittings, appeared like a set of primitive human figures – perhaps from Easter Island. Later works included a two-ton angelic shape made from the massive steel blades of a snowplow for train tracks, and gossamer drawings in air made from the unwound springs of massive clocks that reminded one of Picasso’s drawings. His Shrine Series included animal bones, bird wings, industrial spindles, parts from clocks, early computers, and metronomes, and melted industrial alloys. One of his larger installations is made from three giant steel train car hitches that he cut apart and welded back together like hands grasping each other, and now stands permanently in Boston’s new South Station.

He was also a photographer, and some of his images — for example, macro images of honeycombs and turtles — still remain in my mind as if I saw them yesterday. At one point his entire office was rigged up with a complicated system of prisms, blackout shades, lenses, reflective materials, and rear-projection screens so that he could take photos of shapes made of pure light that he called Lumia – which he then blew up to massive size and animated with a bank of slide projectors. Some of these images can be seen on his weblog.

Another area of life that my father dove into deeply was music. He had a profound connection with music. His music collection included many of the greatest works of classical music, but also Jazz and folk music, and even Indian classical music. Our childhood was filled with music, and also with musical instruments of all kinds – particularly unusual instruments: aboriginal instruments, vibraphones, banjos, harpsichords, flutes, guitars, percussion instruments. My own broad taste in music came from this. My brother, Marin Spivack, took it even further, becoming a masterful Jazz saxophone player, as well as learning to compose for and play guitar, drums, piano, bass.

My father’s fascination with science and his massive appetite for knowledge translated into a home filled with books about science, scientific journals, and discussions about physics, biology, chemistry, brain science, psychology, architecture, engineering, and anthropology. We spent countless hours discussing science, the future, the brain, and technology, and coming up with new theories and inventions.

In my own life as an innovator, my father was my biggest fan and supporter. He taught me to invent – it was his passion. He wrote about it, and refined his theories and methods for innovating and enhancing creativity over the course of his life, and as children my brother and I were his very fortunate experimental guinea pigs.

I can remember being brought by him as a child to MIT, the Massachusetts Institute of Technology, where my father had done his graduate studies — there my brother and I were subjects in early experiments on children and computers: his colleagues observed us as we played the early computer game “Wumpus” and learned how to use computers. I still remember my father’s love for MIT — how he took my little brother and me on nighttime expeditions into the hidden catacombs under the campus, and the many times we met with his friends, colleagues and relatives from various MIT departments. My father wore his MIT ring proudly right until his last breath: It was the only club he ever wanted to belong to.

As I got older my father shared with me his work with architects and designers, and his “Design Log” methodology for documenting and improving any kind of design process. Later, as an adult he shared his new theories about human intelligence, learning disabilities, dyslexia, and what he called “syncretic associative thinking.” His theory of syncretic cognition proposes that there are two fundamentally different, yet complementary, forms of human intelligence — linear and syncretic. According to my father’s thinking, syncretic thought is associative and seemingly chaotic, yet out of it great creative leaps and innovations are born.

Dyslexics, of whom my father was one, are the extreme case of syncretic thinking: despite difficulties with linear logic, dyslexics are often brilliantly creative; in fact many great geniuses – especially artists, but also scientists – have been dyslexic. My father believed that instead of viewing dyslexics as “learning disabled,” we should view them as “creativity enabled,” and train and teach them differently, to leverage their unique cognitive abilities.

Instead of being viewed as bad at math or slow at reading, dyslexics might instead be viewed as unusually talented at associative thinking, brilliant in the arts and inventing. It was all a matter of perspective. My father advocated passionately for the often-overlooked talents hidden within dyslexia in his own writing, and also in his parallel career as a trained psychotherapist working with hundreds of people, especially learning disabled people, engineers and artists.

My father’s interest in the many flavors of intelligence extended not just to humans but also to animals: He had a long fascination with animal intelligence. His homes were always filled with animals – particularly highly intelligent parrots of various breeds, with whom he would speak, whistle, sing, and explore his theories about learning and cognition. When I was just a newborn, he had a pet crow — which he said was one of the most intelligent of birds.

My father painstakingly studied crows and eventually learned how to mimic their various kinds of calls. I can distinctly remember how, throughout our entire life together, he would suddenly start embarrassingly screeching, “Caaah  caaahhh Caaaaaaahhhh,” whenever he encountered a crow in some random tree.

In another famous story from my father’s MIT days, he became fascinated with echolocation — the form of navigation through sound used by animals such as bats and dolphins. Bats in particular became a bit of an obsession for my father. Bats navigate with high-frequency clicks. These clicks bounce off surfaces — walls, buildings, plants, insects, other bats — and the reflections are turned into images in the bat’s brain.

My father decided that bat echolocation would be a great way to help the blind navigate through cities. So he invented a bat clicker device you could wear on your head. It would emit rapid loud clicks that were within the range of human hearing. He spent a week blindfolded, wearing this device, walking around the MIT and Harvard campuses, and apparently he was able to navigate successfully with it.

He recounted that after many days of using this contraption, blindfolded the whole time, his brain adapted and he was able to discern the different types of materials, objects and surfaces from the subtle differences in sound reflections. He was able to cross streets, navigate around buildings and obstacles, and could even find his way through crowds (although we all suspected the crowds were probably parting of their own volition around this strange blindfolded man with the clicking machine on his head). The astonished people of Cambridge who encountered him must have thought he was some kind of alien exploring a strange new world. And one can only wonder what the bats themselves must have thought.

At various times in my childhood my father also had pet frogs, lizards, turtles, fish, snakes, squirrels, cats, and later, his beloved pug. We grew up with enormous aquariums, terrariums, and aviaries — as kids these were wonderlands. This love of all kinds of living things would eventually guide him to his second wife: Boston artist, Louise Freedman. We knew they were made for each other when, for their first date, they chose to go to a local cemetery pond to collect pond water and frogs together.

As their lives merged, so did their always increasing menagerie of animals. And gradually there was less and less room, or time, for humans in their house. During my college years, my father and his wife had started raising African Grey parrots, and had also become close friends with Harvard/MIT animal cognition researcher, Irene Pepperberg, and her famous parrot, Alex.

When I would visit their home on school breaks, the parrots were as much a part of the family as my brother and I, and occupied a central location in the family room. A typical mealtime conversation in our family was a combination of English words, chirps, clicks and whistles, spoken by humans and parrots alike. My father and Louise eventually moved into a home that literally was like a tree — surrounded by trees on many levels, on the edge of a huge nature sanctuary on Marblehead Neck. There amongst the branches, they could almost live as birds. My brother and I joked — half-seriously — that for an upcoming wedding anniversary, we would throw out their couch and replace it with matching human-sized perches for them.

But my father’s fascination with animals wasn’t just about intelligence, it was also about love. I remember one day as a child, while frantically evacuating from Cape Cod ahead of a fast oncoming hurricane, my father suddenly backed up miles of panicked traffic when he stopped the car in the pouring rain and lightning to scramble around on his hands and knees, risking his own life, to rescue a turtle that had strayed onto the freeway. This deep love of animals, and people, that he manifested throughout his life, was at times a source of embarrassment for me, but later became what I admired most about him. For my father, this simple love of all living things was his religion. But for most of my life, I didn’t realize what an accomplishment that was.

Although my father influenced me in so many ways, the most important facet of life that we shared — and struggled over — was spirituality.

He was a dedicated scientific materialist and rejected superstition, which to him included all institutionalized forms of religion. He sometimes even referred to himself as an atheist, although I think that, more accurately, he was an agnostic. I, on the other hand, while also deeply interested in the sciences, had come to the conclusion that science alone could never fully explain reality or consciousness — I felt that there was a common underlying truth in all the great religions which science had so far completely missed, a truth that was essential for a complete and accurate understanding of reality. This debate between science and religion became the fulcrum on which we wrestled endlessly and in many different ways.

I had always known, even as a child, that there is something more than meets the eye about reality that is extremely subtle, yet at once vividly evident. Growing up, I had a number of spontaneous mystical experiences that I could not explain, and later I witnessed highly unusual phenomena taking place in monasteries in Nepal and India that convinced me that there must be more to the mind, and to reality, than our western scientific worldview could presently measure or explain. I was perplexed by the apparent incompatibility of these experiences, and the Western scientific framework that my father and I both lived and worked in.

In my attempts to reconcile these two worlds, I became obsessed with physics, computer science and artificial intelligence. I began searching for a grand unified theory. I sought to create software that could simulate physics, the brain, and the mind. With some of the world’s most cutting-edge physicists and computer scientists, as well as at some of the top artificial intelligence companies, I worked on several major initiatives in computational physics, parallel supercomputing, and artificial intelligence, as well as my own software projects and theories.

All of these attempts failed to achieve their goals so thoroughly and so repeatedly that eventually I began to question if it was even possible to do. I reached a point where I began to doubt the assumptions behind these projects — I began to question my own questions. This led me to a deeper exploration of the mind and the foundations of reality – a journey from cognitive science and physics to philosophy, and finally to spirituality. Paradoxically, I ended up back where I began, looking inwards rather than outwards, for the answers.

My quest for spiritual meaning took me through a survey of all the major Western and Eastern religions, and while traveling in Asia for a year after college, I landed in Tibetan Buddhism, with its intense focus on the nature of mind and consciousness. I was home. For me, Tibetan Buddhism had the perfect combination of rational and objective logical analysis (my father’s influence), and the mystical direct experience of the union of consciousness with divinity that I had tasted in my own experience.

In Tibetan Buddhism I finally found a rational yet holistic framework that could account for all the dimensions of observed experience: both the outer physical world and the inner dimensions of consciousness. From the Buddhist perspective, we humans are manifestations or projections of a deeper ultimate nature of reality, as are all sentient beings, and in fact all animate and inanimate things. This deeper level of reality is the origin of both the subjective and objective poles of experience, and its nature is transcendental, empty, yet aware.

The direct proof and experience of this can be found many ways: through logical reasoning, through prayer, through love, through nature, through art, through meditation, and perhaps most easily, by searching for the source of one’s own consciousness. Consciousness is a unique phenomenon that we all have direct, equal, and immediate access to, yet which science cannot measure, let alone explain. By persistently searching for the source of our own consciousness, and discovering that we can’t find it yet it is not non-existent, we are inevitably brought to a direct realization of the ultimate nature of reality.

Over decades of searching for consciousness, first through science, then through Buddhism, I had come to the conclusion that rather than consciousness emerging from the brain, it had to be the other way around: All experience, and indeed the body, brain and even the physical universe, emerge from consciousness. I had discovered that consciousness is a gateway to a sourceless, deep and endless wellspring of mysteries. And more importantly, I had found what I thought would be conclusive evidence that would finally convince my father that I was right.

But when I tried to relate these realizations to my father, he was entirely unconvinced. He argued that my experiences were not really objective, and that consciousness is an epiphenomenon of the brain; a wonderful side-effect, a remarkable illusion that nonetheless could be reduced to neurochemistry and atoms. I countered that in the special case of consciousness, subjective observations could in fact be objective, under the right circumstances. I claimed that it was possible to scientifically and objectively observe consciousness by looking at it under the microscope of carefully trained meditation. But he cast doubts on these claims, citing numerous examples from psychology and neuroscience.

So I tried many other arguments. I cited the work of philosophers like John Searle who provided many illustrations of how conscious experiences could not be reduced to the brain or any kind of machine. I used lines of reasoning from Buddhist logic. I even cited recent findings in quantum theory that seem to imply that the act of conscious observation interacts with experimental results. But all of these arguments failed to convince my father that consciousness was fundamental or irreducible. He remained a skeptic and I felt invalidated. And so I strived even harder to find a way to map my experiences to his worldview, so I could finally prove the scientific foundations for my experience and belief in divinity to him.

This ongoing debate between my father and me — between science and religion — was not unique to us; it had been going on for millennia, and had yielded many great works of both science and art. Our conversations were often frustrating and ended in exhaustion and exasperation, but we also sensed that somehow we were getting somewhere, if not mutually, then at least as individuals. We were foils to one another, worthy opponents. Like many who had come before us, the dialectical process of trying to convince one another of our conflicting views of reality caused us to generate volumes of new writing, theories, inventions, and ideas we could not have arrived at on our own.

Nevertheless, despite my father’s strong rebukes of superstitious belief systems, and his skepticism towards my Buddhist beliefs, he was in fact a deeply spiritual man, in a very human, unembellished way. His spirituality was not tied to any system or institution — it was natural and basic: it was how he lived and the ideals he lived by: Love, Science, and Art. His spirituality was not about words, it was about actions. He expressed it in his art, his good deeds, his compassion, his joyful creativity, and his ability to love and be loved.

What I failed to see was that my father’s spirituality was immensely humble. So humble that he would not even claim to be spiritual, and certainly wouldn’t go so far as to conceptualize it. Instead, he was simply a truly good man, a mensch. While I continued to try new tactics in my campaign to convince him, and as I judged him as closed-minded and non-spiritual, he was in fact actually living my spiritual ideals better than I could understand at the time. But, not realizing this, I was certain he was missing out on something of vital importance, something that I had to convince him of before he died. And so our debate continued.

Then, in the last few months of my father’s life, we were finally able to bridge this divide. As his illness progressed, his wife called me and urged me to visit before it was too late. “He’s really getting worse, and I want you to have a chance to be together while he’s still strong enough,” she said. And so I flew to Boston and we resumed the debate.

Perhaps it was our mutual sense that time was running out, or perhaps it was that we had both exhausted all our prior arguments, but this time we reached a level of discourse that was essentially mathematical in nature: pure logic, pure set theory. Without imposing the assumptions of either science or religion, we started anew from first principles, and through pure reason and observation we derived a new common language, on neutral ground. And with this in hand, we arrived at a single nondual phenomenology — at last we had arrived at the basic nature of reality.

When we finally reached the point of agreement and mutual understanding, after decades of debate, and we both witnessed the simultaneous unification and transcendence of our prior belief systems — we saw that we had always actually agreed on a deeper level. And on that December afternoon, as we sketched out the full picture together, in a way that neither of us had done before on our own, we both breathed a sigh of relief. It was an incredibly cathartic moment for both of us.

At the conclusion of our decades-long debate, we sat quietly together, just being in that understanding — a meditation on awareness and knowledge, on physics, time and space — on our mutual respect for the immensity and majesty of the universe. I will always treasure that time.

The day after that experience, before I left to return to California, I sat by my father’s bed. He was almost unable to walk at this point. As I said goodbye, thinking I might never see him again, I said, “Don’t forget what we discovered together, it is the highest realization.” He replied, “There is still one more realization that is higher.” Surprised, I asked him, “What?” He answered, “To live it!”

About a month later his wife called again. “He’s dying,” she said, “come back as soon as you can.” The cancer had advanced unexpectedly fast, and so I flew back to be with him one last time.

I stayed by his side, looking into his eyes, talking to him, even though he had lost the ability to move or speak. His eyes smiled back. My brother and I kept telling him, as he labored to breathe for the final two days, “It’s ok to go now, you can let go, we love you, we’ll be ok, we’ll take care of each other.” But his drive to love and protect us all was so strong. He wasn’t ready to go. Even while in the depths of his own suffering, he was still filled with compassion, he was worried about what would happen to all of us. It was noble and beautiful to witness.

We played him the music he loved, the music he played for us as we grew up. We laughed and told him our memories and stories of him. We stroked his hair and his beard and tried to make him as comfortable as possible as he lay there, struggling, and probably frustrated that he couldn’t communicate, and at times in terrible pain. Yet through great effort he still found ways to let us know he heard us, loved us, and was still conscious.

As his breathing changed and we saw the signs of death advancing further through his body, he maintained his clarity and brilliance and even got brighter — we could feel his heart, and see his kind and intelligent spirit in his eyes. He tried to speak to us by making what little sound he could and moving his eyebrows in response to us. “Remember what we talked about, what we realized,” I said to him over and over, and I could see he was living it.

Finally, on the evening of February 12, 2011, he let go and died peacefully in his wife’s arms as she sang to him gently. All of us felt at that moment an incredible, all-embracing, boundless love and bliss, even as we grieved. It was him. My father, Mayer Spivack. Our Buddha. He went into Love.

Web 3.0 Documentary by Kate Ray – I'm interviewed

Kate Ray has done a terrific job illustrating and explaining Web 3.0 and the Semantic Web in her new documentary. She interviews Tim Berners-Lee, Clay Shirky, me, and many others. If you’re interested in where the Web is headed, and the challenges and opportunities ahead, then you should watch this — and share it too!

The Digital Generation Gap

We exist in an epoch of great technological change. Within the space of just a few generations we have gone from horse-drawn carriages to exploring the outer reaches of our solar system, from building with wood, stone and metals to nanoscale construction with individual atoms, and from manual printing presses and physical libraries to desktop publishing and the World Wide Web. The increasing pace of technological evolution brings with it many gifts, but also poses challenges never before faced by humanity. One of these challenges is the digital generation gap.

The digital generation gap is the result of the extremely rapid rise of personal computing, the Internet, mobile applications, and coming next, biotechnology. Never before in the history of our species have we been faced with a situation where each living generation is focused around a different technology platform.

The tools and practices that the elders of our civilization use are still based on the pre-digital analog era. Their children — the Baby Boomers — use entirely different tools and practices based around the PC. And the youth of today — the Boomers’ children — exist in yet another domain: the world of mobile devices.

The digital generation gap presents a major challenge to our civilization, particularly because of its effect on education — both the informal education that takes place at home and in communities, and the formal education that takes place in school settings. The tools that teachers grew up with and now teach with (PCs) are not the same tools that the students of today use to learn and communicate (mobile devices).

Baby Boomers grew up before the advent of any of these technologies — they lived in an analog world in which daily life took place primarily on the physical, face-to-face human scale, with physical materials and physical information media like printed books and newspapers. This world was similar to the world of their parents and grandparents — even though it was increasingly automated and industrialized during their lives. As children and during their young adult years the Boomers grew up amidst the fruition of the industrial revolution: mass-produced physical and synthetic goods of all kinds. Among the defining shifts of this period was the transition from a world of manual labor to one of increasing automation. The pinnacle of this transition was the adoption of the first generations of computers.

The Boomers’ children — people in their 30s and 40s today — arrived to usher in the transition from an automated analog world to the new digital world. They were born into a civilization where monolithic computers had already taken hold in government and industry, and they witnessed the birth of waves of increasingly powerful, inexpensive and portable personal computers, the Internet, and the Web. This generation built the bridges from the industrial world of the Boomers to the digital world we live in today. They integrated systems, connected devices, and brought the whole world together as one global social and economic network.

Now their children – the children and youth of today — are growing up in a world that is primarily focused around mobile devices and mobile applications. They have always lived with ubiquitous mobile access and social media. No longer concerned with building bridges to the legacy industrial world of their parents and grandparents, they are plunging headlong into an increasingly digital culture. One in which dating, shopping, business, education — almost everything we do as humans — is taking place online, and via mobile devices.

Each generation is out of touch with the means of production and consumption of the other generations. The result is an increasing communications gap between the generations: They use different platforms. And not surprisingly the inter-generational transmission of knowledge, traditions, cultural norms and standards is not operating like it used to. In fact it may be breaking down entirely.

Many of the cultural and social stresses making headline news are related to the digital generation gap. For example, the increasing growth of cyberbullying is the result of parents and teachers being totally out of touch with the mobile world that kids live in today.

Parents and teachers are so out of the loop technologically, compared to kids today, that they are literally unable to see what is going on between them, let alone do anything about it.

It’s no wonder that kids are running wild online — “sexting,” cyberbullying, and cheating in school. There are few adults, and little to no adult supervision, keeping order in the places where they spend their time online.

There is no period in recent history when this has been the case. It used to be that schoolkids took recess breaks in the schoolyard under the watchful eyes of their teachers. There was a certain level of adult supervision in school, and also at home. Not today. Teachers and parents can’t see what their kids are up to online and have no control over what they do with their mobile devices. We have a generation of kids growing up with less adult oversight and supervision than ever before.

And the newest generation — the babies of today — what will their experience be? Will the pace of technological progress finally start to plateau for them? Will their world be more like the world of their parents?

Instead of a sudden shift to yet a smaller level of scale or a more powerful technology platform, will they and many generations to come, live on a more stable and shared technology platform? If the pace does slow down for a while, we may see inter-generational gaps decrease. Perhaps this will serve to standardize and solidify our emerging global digital culture. A new set of digital norms and traditions will have time to form and be handed down across generations.

Alternatively, what if in fact the pace of change continues to quicken instead? What if the babies of today grow up in a world of augmented reality and industrial-scale genetic engineering? And what if their children (the grandchildren of people in their 40’s today) grow up in a world of direct brain-machine interfaces and personal genetic engineering? Those of us today who think of ourselves as being on the cutting edge will be the elders of tomorrow, and we will be hopelessly out of touch.

The Global Brain is About to Wake Up

The emerging realtime Web is not only going to speed up the Web and our lives, it is going to bring about a kind of awakening of our collective Global Brain. It’s going to change how many things happen online, but it’s also going to change how we see and understand what the Web is doing. By speeding up the Web, it will cause processes that used to take weeks or months to unfold online to happen in days or even minutes. And this will bring these processes to the human scale — to the scale of our human “now” — making it possible for us to be aware of larger collective processes than ever before. Until now we have been watching the Web in slow motion. As it speeds up, we will begin to see and understand what’s taking place on the Web in a whole new way.

This process of quickening is part of a larger trend which I and others call “Nowism.” You can read more of my thoughts about Nowism here. Nowism is an orientation that is gaining momentum and will help to shape this decade, and in particular, how the Web unfolds. It is the idea that the present timeframe (“the now”) is becoming more important, shorter, and more information-rich. As this happens our civilization is becoming more focused on the now, and less focused on the past or the future. Simply keeping up with the present is becoming an all-consuming challenge: both a threat and an opportunity.

The realtime Web — what I call “The Stream” (see “Welcome to the Stream”) — is changing the unit of now. It’s making it shorter. The now is the span of time we have to be aware of to be effective in our work and lives, and it is getting shorter. On a personal level the now is getting shorter and denser — more information and change is packed into shorter spans of time; a single minute on Twitter is overflowing with potentially relevant messages and links. In business as well, the now is getting shorter and denser — it used to be about the size of a fiscal quarter, then it became a month, then a week, then a day, and now it is probably about half a day in span. Soon it will be just a few hours.

To keep up with what is going on we have to check in with the world in at least half-day chunks. Important news breaks about once or twice a day. Trends on Twitter take about a day to develop too. So basically, you can afford to check the news and the real-time Web just once or twice a day and still get by. But that’s going to change. As the now gets shorter, we’ll have to check in more frequently to keep abreast of change. As the Stream picks up speed in the middle of this decade, remaining competitive will require near-constant monitoring — we will have to always be connected to, and watching, the real-time Web and our personal streams. Being offline at all will risk missing big, important trends, threats and opportunities that emerge and develop within minutes or hours. But nobody is capable of tracking the Stream 24/7 — we must at least take breaks to eat and sleep. And this is a problem.

Big Changes to the Web Coming Soon…

With Nowism comes a faster Web, and this will lead to big changes in how we do various activities on the Web:

  • We will spend less time searching. Nowism pushes us to find better alternatives to search, or to eliminate search entirely, because people don’t have time to search anymore. We need tools that do the searching for us and that help with decision support so we don’t have to spend so much of our scarce time doing that. See my article on “Eliminating the Need for Search — Help Engines” for more about that.
  • Monitoring (not searching) the real-time stream becomes more important. We need to stay constantly vigilant about what’s happening and what’s trending. We need to be alerted to the stuff that’s important to us, and we need a way to filter out what’s not. Probably a filter based on the influence of people and tweets, and/or the time dynamics of memes, will be necessary. Monitoring the real-time stream effectively is different from searching it. I see more value in real-time monitoring than realtime search — I haven’t yet seen any monitoring tools for Twitter that are smart enough to give me just the content I want. There’s a real business opportunity there.
  • The return of agents. Intelligent agents are going to come back. To monitor the realtime Web effectively each of us will need online intelligent agents that can help us — because we don’t have time, and even if we did, there’s just too much information to sift through.
  • Influence becomes more important than relevance. Advertisers and marketers will look for the most influential parties (individuals or groups) on Twitter and other social media to connect with and work through. But to do this there has to be an effective way to measure influence. One service that’s providing a solution for this (which I’ve angel invested in and advise) is Klout.com – they measure influence per person per topic. I think that’s a good start.
  • Filtering content by influence. We also will need a way to find the most influential content. Influential content could be the content that is most retweeted, or that is retweeted by the most influential people. It would be much less noisy to see only the more influential tweets of the people I follow. If a tweet gets RT’d a lot, or is RT’d by really influential people, then I want to see it. If not, then I only want to see it if it’s really important to me (based on some rule). This will be the only way to cope with the information overload of the real-time Web and keep up with it effectively. I don’t know of anyone providing a service for this yet. It’s a business opportunity. (A rough sketch of what such an influence-and-recency filter might look like appears after this list.)
  • Nowness as a measure of the value of content. We will need a new form of ranking of results by “nowness” — how timely they are now. For example, in real-time search engines we shouldn’t rank results merely by how recent they are, but also by how timely, influential, and “hot” they are now. See my article from years ago on “A Physics of Ideas” for more about that. Real-time search companies should think of themselves as real-time monitoring companies — that’s what they are really going to be used for in the end. Only the real-time search ventures that think of themselves this way are going to survive the conceptual paradigm shift that the realtime Web is bringing about. In a realtime context, search is actually too late — once something has happened in the past it is not that important anymore — what matters is current awareness: discovering the trends NOW. To do that one has to analyze the present, and the very recent past, much more than search the longer-term past. The focus has to be on real-time or near-real-time analytics, statistical analysis, topic and trend detection, prediction, filtering and alerting. Not search.
  • New ways to understand and navigate the now. We will need a way to visualize and navigate the now. I’m helping to incubate a stealth startup venture, Live Matrix, that is working on that. It hasn’t launched yet. It’s cool stuff. More on that in the future when they launch.
  • New tools for browsing the Stream. New tools will emerge for making the realtime Web more compelling and smarter. I’m working on incubating some new stealth startups in this area as well. They’re very early-stage, so I can’t say more about them yet.
  • The merger of semantics with the realtime Web. We need to make the realtime Web semantic — along with the rest of the Web — in order to make it easier for software to make sense of it for us. This is the best approach to increasing the signal-to-noise ratio of the content we have to look at, whether we are searching or monitoring. The Semantic Web standards of the W3C are key to this. I’ve written a long manifesto on this in “Minding The Planet: The Meaning and Future of the Semantic Web” if you’re really interested in that topic.
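
To make the filtering and “nowness” ideas above concrete, here is a minimal sketch in Python of what such a filter might look like. Everything in it — the Message fields, the influence weighting, the half-life, and the threshold — is a hypothetical illustration of the concept, not a description of any existing product or API:

    # A toy influence-and-recency filter for a real-time stream.
    # All field names, weights, and thresholds are invented for illustration.

    import time
    from dataclasses import dataclass

    @dataclass
    class Message:
        text: str
        author_influence: float  # hypothetical 0-100 influence score for the author
        reshares: int            # how many times the message was reshared (RT'd)
        timestamp: float         # UNIX time the message was posted

    def nowness(msg: Message, half_life_secs: float = 3600.0) -> float:
        """Decays as a message ages: 1.0 when brand new, 0.5 after one half-life."""
        age_secs = max(0.0, time.time() - msg.timestamp)
        return 0.5 ** (age_secs / half_life_secs)

    def score(msg: Message) -> float:
        """Combine author influence, resharing momentum, and recency.
        The weights here are arbitrary; tuning them well is the hard part."""
        return (msg.author_influence + 10.0 * msg.reshares) * nowness(msg)

    def filter_stream(stream: list, threshold: float = 50.0) -> list:
        """Keep only messages whose combined score clears the threshold,
        most timely and influential first."""
        return sorted((m for m in stream if score(m) >= threshold),
                      key=score, reverse=True)

The point of the sketch is just the shape of the solution: a scoring function that blends influence with time-decay, and a threshold that attenuates the firehose down to what is worth a person’s attention right now.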

Faster Leads to Smarter

As the realtime Web unfolds and speeds up, I think it will also have a big impact on what some people call “The Global Brain.” The Global Brain has always existed, but in recent times it has been experiencing a series of major upgrades — particularly around how connected, affordable, accessible and fast it is. First we got phones and faxes, then the Internet, the PC and the Web, and now the real-time Web and the Semantic Web. All of these recent changes are making the Global Brain faster and more richly interconnected. And this makes it smarter. For more about my thoughts on the Global Brain, see these two talks:

What’s most interesting to me is that as the rate of communication and messaging on the Web approaches real time, we may see a kind of phase change take place — a much smarter Global Brain will begin to appear out of the chaos. In other words, the speed of collective thinking is as important as the complexity or sophistication of collective thinking in making the Global Brain significantly more intelligent. I’m proposing that there is a sort of critical speed of collective thinking, below which the Global Brain seems like just a crowd of actors chaotically flocking around memes, and above which the Global Brain makes big leaps — instead of seeming like a chaotic crowd, it starts to look like an organized group coordinated around certain activities. It is able to respond to change faster, to optimize, and even to do things collectively more productively than a random crowd could.

This is kind of like film, or animation. When you watch a movie or animation you are really watching a rapid series of frames. This gives the illusion of there being cohesive, continuous characters, things and worlds in the movie — but really they aren’t there at all; it’s just an illusion. Our brains put these scenes together and start to recognize and follow higher-order patterns. A certain shape appears to maintain itself and move around relative to other shapes, and we name it with a certain label — but there isn’t really something there, let alone something moving or interacting — there are just frames flicking by rapidly. It turns out that above a critical frame rate (around 20 to 60 frames per second) the human brain stops seeing individual frames and starts seeing a continuous movie: flip the pages fast enough and we start seeing things “moving within the sequence” of frames. In the same way, as the unit of time of the real-time Web shrinks — as its speed increases — its behavior will start to seem more continuous and smarter. We won’t see separate chunks of time or messages; we’ll see intelligent, continuous, collective thinking and adaptation processes.

In other words, as the Web gets faster, we’ll start to see processes emerge within it that appear to be cohesive intelligent collective entities in their own right. There won’t really be any actual entities there that we can isolate, but when we watch the patterns on the Web it will appear as if such entities are there. This is basically what is happening at every level of scale — even in the real world. There really isn’t anything there that we can find — everything is divisible down to the quantum level and probably beyond — but over time our brains recognize and label patterns as discrete “things.” This is what will happen across the Web as well. For example, a certain meme (such as a fad or a movement) may become a “thing” in its own right, a kind of entity that seemingly takes on a life of its own and seems to be doing something. Similarly, certain groups or social networks, or the activities they engage in, may seem to be intelligent entities in their own right.

This is an illusion in that there really are no entities there; they are just collections of parts that themselves can be broken down into more parts, and no final entities can be found. Nonetheless, they will seem like intelligent entities when not analyzed in detail. In addition, the behavior of these chaotic systems may resist reduction — they may not be understandable, and their behavior may not be predictable, through a purely reductionist approach. It may be that they react to their own internal state and their environments virtually in real time, making it difficult to take a top-down or bottom-up view of what they are doing. In a realtime world, change happens in every direction.

As the Web gets faster, the patterns taking place across it will start to become more animated. Big processes that used to take months or years will happen in minutes or hours. As this comes about we will begin to see larger patterns than before, and they will start to make more sense to us — they will emerge out of the mists of time, so to speak, and become visible to us on our human timescale — the timescale of our human-level “now.” As a result, we will become more aware of higher-order dynamics taking place on the real-time Web, and we will begin to participate in and adapt to those dynamics, making those dynamics in turn even smarter. (For more on my thoughts about how the Global Brain gets smarter, see: “How to Build the Global Mind.”)

See Part II: “Will The Web Become Conscious?” if you want to dig further into the thorny philosophical and scientific issues that this brings up…

What's After the Real Time Web?

In typical Web-industry style we’re all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself, it’s a means to an end. So what will it enable, where is it headed, and what’s it going to look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream are going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention not just their attention. It’s not just difficult to get people to notice something, it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending some energy or effort to do something. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web, aka The Stream, is really about messaging in essence. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave and PubSubHubbub, to broadcast-style messaging or multicasting via screencasts, conferencing, media streaming, and events in virtual worlds. The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOS’s. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly to various data sources and APIs in dozens of places, developers will just write to a single WebOS aggregator that acts as middleware between their app and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast and complex and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools, that can spread around the Web as they work, will be able to keep up with the Web.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get worse at things like memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded in our surroundings, and in augmented views of our environments. This may cause our bodies to get into better health and shape, since we will be less sedentary — less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall, as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude of things to come.

Wolfram Alpha is Coming — And It Could be as Important as Google

Notes:

– This article was last updated on March 11, 2009.

– For follow-up, connect with me about this on Twitter here.

– See also: for more details, be sure to read the new review by Doug Lenat, creator of Cyc. He just saw the Wolfram Alpha demo and has added many useful insights.

——————————————————————–

Introducing Wolfram Alpha

Stephen Wolfram is building something new — and it is really impressive and significant. In fact it may be as important for the Web (and the world) as Google, but for a different purpose. It’s not a “Google killer” — it does something different. It’s an “answer engine” rather than a search engine.

Stephen was kind enough to spend two hours with me last week to demo his new online service — Wolfram Alpha (scheduled to open in May). In the course of our conversation we took a close look at Wolfram Alpha’s capabilities, discussed where it might go, and what it means for the Web, and even the Semantic Web.

Stephen has not released many details of his project publicly yet, so I will respect that and not give a visual description of exactly what I saw. However, he has revealed it a bit in a recent article, and so below I will give my reactions to what I saw and what I think it means. And from that you should be able to get at least some idea of the power of this new system.

A Computational Knowledge Engine for the Web

In a nutshell, Wolfram and his team have built what he calls a “computational knowledge engine” for the Web. OK, so what does that really mean? Basically it means that you can ask it factual questions and it computes answers for you.

It doesn’t simply return documents that (might) contain the answers, like Google does, and it isn’t just a giant database of knowledge, like the Wikipedia. It doesn’t simply parse natural language and then use that to retrieve documents, like Powerset, for example.

Instead, Wolfram Alpha actually computes the answers to a wide range of questions — questions that have factual answers, such as “What is the location of Timbuktu?”, “How many protons are in a hydrogen atom?”, “What was the average rainfall in Boston last year?”, “What is the 307th digit of Pi?”, or “What would 80/20 vision look like?”

Think about that for a minute. It computes the answers. Wolfram Alpha doesn’t simply contain huge amounts of manually entered pairs of questions and answers, nor does it search for answers in a database of facts. Instead, it understands and then computes answers to certain kinds of questions.

(Update: in fact, Wolfram Alpha doesn’t merely answer questions; it also helps users to explore knowledge, data and relationships between things. It can even open up new questions — the “answers” it provides include computed data or facts, plus relevant diagrams, graphs, and links to other related questions and sources. It can also be used to ask questions that are new explorations of the relationships between data sets or systems of knowledge. It does not just provide textual answers to questions — it helps you explore ideas and create new knowledge as well.)

How Does it Work?

Wolfram Alpha is a system for computing the answers to questions. To accomplish this it uses built-in models of fields of knowledge, complete with data and algorithms, that represent real-world knowledge.

For example, it contains formal models of much of what we know about science — massive amounts of data about various physical laws and properties, as well as data about the physical world.

Based on this you can ask it scientific questions and it can compute the answers for you — even if it has not been explicitly programmed to answer each question you might ask.
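
To illustrate what “data plus algorithms” means here, consider a tiny sketch in Python of a model of a sliver of a field of knowledge. This is my own toy illustration of the concept, not a description of how Wolfram Alpha is actually implemented:

    # A toy "computational knowledge engine" for a sliver of chemistry and
    # physics: curated facts plus formulas. Purely illustrative -- not how
    # Wolfram Alpha actually works internally.

    ATOMIC_NUMBERS = {"hydrogen": 1, "helium": 2, "carbon": 6}  # curated data

    def protons_in(element: str) -> int:
        # An atom's proton count is its atomic number, so this is pure lookup.
        return ATOMIC_NUMBERS[element]

    def kinetic_energy_joules(mass_kg: float, velocity_ms: float) -> float:
        # An algorithm: KE = 1/2 * m * v^2. With this one stored rule, the
        # engine can answer infinitely many questions it has never seen.
        return 0.5 * mass_kg * velocity_ms ** 2

    print(protons_in("hydrogen"))            # -> 1
    print(kinetic_energy_joules(2.0, 3.0))   # -> 9.0

The lookup table can only answer the three questions it stores; the formula answers a question-space that could never be enumerated in advance. Scale that pattern up across thousands of domains and you have the general idea.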

But science is just one of the domains it knows about — it also knows about technology, geography, weather, cooking, business, travel, people, music, and more.

Alpha does not answer natural language queries — you have to ask questions in a particular syntax, or various forms of abbreviated notation. This requires a little bit of learning, but it’s quite intuitive and in some cases even resembles natural language or the keywordese we’re used to in Google.

The vision seems to be to create a system which can do for formal knowledge (all the formally definable systems, heuristics, algorithms, rules, methods, theorems, and facts in the world) what search engines have done for informal knowledge (all the text and documents in various forms of media).

How Does it Differ from Google?

Wolfram Alpha and Google are very different animals. Google is designed to help people find Web pages. It’s a big lookup system, basically — a librarian for the Web. Wolfram Alpha, on the other hand, is not oriented towards finding Web pages at all; it’s for computing factual answers. It’s much more like a giant calculator for computing all sorts of answers to questions that involve or require numbers. Alpha is for calculating, not for finding. So it doesn’t compete with Google’s core business at all. In fact, it is much more competitive with the Wikipedia than with Google.

On the other hand, while Alpha doesn’t compete with Google, Google may compete with Alpha. Google is increasingly trying to answer factual questions directly — for example unit conversions, questions about the time, the weather, the stock market, geography, etc. But in this area, Alpha has a powerful advantage: it’s built on top of Wolfram’s Mathematica engine, which represents decades of work and is perhaps the most powerful calculation engine ever built.

How Smart is it and Will it Take Over the World?

Wolfram Alpha is like plugging into a vast electronic brain. It provides extremely impressive and thorough answers to a wide range of questions asked in many different ways — and it computes those answers; it doesn’t merely look them up in a big database.

In this respect it is vastly smarter than (and different from) Google. Google simply retrieves documents based on keyword searches. Google doesn’t understand the question or the answer, and doesn’t compute answers based on models of various fields of human knowledge.

But as intelligent as it seems, Wolfram Alpha is not HAL 9000, and it wasn’t intended to be. It doesn’t have a sense of self or opinions or feelings. It’s not artificial intelligence in the sense of being a simulation of a human mind. Instead, it is a system that has been engineered to provide really rich knowledge about human knowledge — it’s a very powerful calculator that doesn’t just work for math problems — it works for many other kinds of questions that have unambiguous (computable) answers.

There is no risk of Wolfram Alpha becoming too smart, or taking over the world. It’s good at answering factual questions; it’s a computing machine, a tool — not a mind.

One of the most surprising aspects of this project is that Wolfram has been able to keep it secret for so long. I say this because it is a monumental effort (and achievement) and almost absurdly ambitious. The project involves more than a hundred people working in stealth to create a vast system of reusable, computable knowledge, from terabytes of raw data, statistics, algorithms, data feeds, and expertise. But he appears to have done it, and kept it quiet for a long time while it was being developed.

Computation Versus Lookup

For those who are more scientifically inclined, Stephen showed me many interesting examples — for example, Wolfram Alpha was able to solve novel numeric sequencing problems, calculus problems, and could answer questions about the human genome too. It was also able to compute answers to questions about many other kinds of topics (cooking, people, economics, etc.). Some commenters on this article have mentioned that in some cases Google appears to be able to answer questions, or at least the answers appear at the top of Google’s results. So what is the Big Deal? The Big Deal is that Wolfram Alpha doesn’t merely look up the answers like Google does, it computes them using at least some level of domain understanding and reasoning, plus vast amounts of data about the topic being asked about.

Computation is in many cases a better alternative to lookup. For example, you could solve math problems using lookup — that is what a multiplication table is after all. For a small multiplication table, lookup might even be almost as computationally inexpensive as computing the answers. But imagine trying to create a lookup table of all answers to all possible multiplication problems — an infinite multiplication table. That is a clear case where lookup is no longer a better option compared to computation.

The ability to compute the answer on a case by case basis, only when asked, is clearly more efficient than trying to enumerate and store an infinitely large multiplication table. The computation approach only requires a finite amount of data storage — just enough to store the algorithms for solving general multiplication problems — whereas the lookup table approach requires an infinite amount of storage — it requires actually storing, in advance, the products of all pairs of numbers.

(Note: If we really want to store the products of ALL pairs of numbers, it turns out this is impossible to accomplish, because there are an infinite number of numbers. It would require an infinite amount of time simply to generate the data, and an infinite amount of storage to store it. In fact, just to enumerate and store all the multiplication products of the numbers between 0 and 1 would require an infinite amount of time and storage. This is because the real numbers are uncountable — there are in fact more real numbers than integers (see the work of Georg Cantor on this). However, the same problem holds even if we are speaking of integers: it would require an infinite amount of storage to store all their multiplication products, although they at least could be enumerated, given infinite time.)
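
The multiplication-table tradeoff is easy to demonstrate in code. Here is a short sketch in Python contrasting the two strategies; the table size is an arbitrary choice for illustration:

    # Lookup versus computation. A lookup table answers only what was stored
    # in advance (O(n^2) storage for an n-by-n table); a computed function
    # answers any instance from a constant-size rule.

    N = 10
    TABLE = {(a, b): a * b for a in range(N) for b in range(N)}  # precomputed

    def multiply_lookup(a: int, b: int) -> int:
        return TABLE[(a, b)]  # raises KeyError outside the stored range

    def multiply_compute(a: int, b: int) -> int:
        return a * b          # constant storage, unbounded question-space

    print(multiply_lookup(3, 4))            # 12 -- this pair was precomputed
    print(multiply_compute(12345, 67890))   # 838102050 -- lookup would fail here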

Using the above analogy, we can see why a computational system like Wolfram Alpha is ultimately a more efficient way to compute the answers to many kinds of factual questions than a lookup system like Google. Even though Google is becoming increasingly comprehensive as more information comes online and gets indexed, it will never know EVERYTHING. Google is effectively just a lookup table of everything that has been written and published on the Web that Google has found. But not everything has been published yet, and furthermore Google’s index is incomplete, and always will be.

Therefore Google does and always will contain gaps. It cannot possibly index the answer to every question that matters or will matter in the future — it doesn’t contain all the questions or all the answers. If nobody has ever published a particular question-answer pair onto some Web page, then Google will not be able to index it, and won’t be able to help you find the answer to that question — UNLESS Google also is able to compute the answer like Wolfram Alpha does (an area that Google is probably working on, but most likely not to as sophisticated a level as Wolfram’s Mathematica engine enables).

While Google can only provide answers that are found on some Web page (or at least in some data set it indexes), a computational knowledge engine like Wolfram Alpha can provide answers to questions it has never seen before — provided that it knows the necessary algorithms for answering such questions, and that it has sufficient data to compute the answers using those algorithms. This is a “big if,” of course.

Wolfram Alpha substitutes computation for storage. It is simply more compact to store general algorithms for computing the answers to various types of potential factual questions than to store all possible answers to all possible factual questions. In the end, making this tradeoff in favor of computation wins, at least for subject domains where the space of possible factual questions and answers is large. A computational engine is simply more compact and extensible than a database of all questions and answers.

This tradeoff, as Mills Davis points out in the comments to this article, is also known as the time-space tradeoff in computation. For very difficult computations, it may take a long time to compute the answer; if the answer were already stored in a database, lookup would be faster and more efficient. Therefore, a hybrid approach would be for a system like Wolfram Alpha to store the answers to any questions that have already been asked of it, so that in the future they can be provided by simple lookup rather than recalculated each time. There may also already be databases of precomputed answers to very hard problems — finding very large prime numbers, for example. These should also be stored in the system for simple lookup, rather than having to be recomputed. I think Wolfram Alpha is probably taking this approach. For many questions it doesn’t make sense to store all the answers in advance, but certainly for some questions it is more efficient to store the answers you already know, and just look them up.
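
This compute-then-cache hybrid is classic memoization, and it is worth seeing how little code it takes. A minimal sketch in Python, where the primality test is just a stand-in for any expensive computation:

    # Compute an answer the first time it is asked, then cache it so repeat
    # questions become cheap lookups -- the hybrid strategy described above.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def is_prime(n: int) -> bool:
        """Slow trial-division primality test; the decorator stores each
        answer so asking again is a dictionary lookup, not a recompute."""
        if n < 2:
            return False
        f = 2
        while f * f <= n:
            if n % f == 0:
                return False
            f += 1
        return True

    print(is_prime(1_000_003))  # computed the slow way on first request
    print(is_prime(1_000_003))  # answered instantly from the cache

The tradeoff dials are the same ones discussed above: cache size is storage, and cache misses are time.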

Other Competition

Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for COMPUTING answers to questions about what we as a civilization collectively know. It’s the next step in the distribution of knowledge and intelligence around the world — a new leap in the intelligence of our collective “Global Brain.” And like any big next step, Wolfram Alpha works in a new way — it computes answers instead of just looking them up.

Wolfram Alpha, at its heart, is quite different from a brute-force statistical search engine like Google. And it is not going to replace Google — it is not a general search engine: you would probably not use Wolfram Alpha to shop for a new car, find blog posts about a topic, or choose a resort for your honeymoon. It is not a system that will understand the nuances of what you consider to be the perfect romantic getaway, for example — there is still no substitute for manual, human-guided search for that. Where it appears to excel is when you want facts about something, or when you need to compute a factual answer to some set of questions about factual data.

I think the folks at Google will be surprised by Wolfram Alpha, and they will probably want to own it, but not because it risks cutting into their core search engine traffic. Instead, it will be because it opens up an entirely new field of potential traffic around questions, answers and computations that you can’t do on Google today.

The services that are probably going to be most threatened by a service like Wolfram Alpha are the Wikipedia, Cyc, Metaweb’s Freebase, True Knowledge, the START Project, natural language search engines (such as Microsoft’s upcoming search engine, based perhaps in part on Powerset’s technology), and other services that are trying to build comprehensive factual knowledge bases.

As a side note, my own service, Twine.com, is NOT trying to do what Wolfram Alpha is trying to do, fortunately. Instead, Twine uses the Semantic Web to help people filter the Web, organize knowledge, and track their interests. It's a very different goal. And I'm glad, because I would not want to be competing with Wolfram Alpha. It's a force to be reckoned with.

Relationship to the Semantic Web

During our discussion, after I tried and failed to poke holes in his natural language parser for a while, we turned to the question of just what this thing is, and how it relates to other approaches like the Semantic Web.

The first question was whether Wolfram Alpha could (or even should) be built using the Semantic Web in some manner, rather than (or as well as) the Mathematica engine it is currently built on. Is anything missed by not building it with the Semantic Web's languages (RDF, OWL, SPARQL, etc.)?

The answer is that there is no reason one MUST use the Semantic Web stack to build something like Wolfram Alpha. In fact, in my opinion it would be far too difficult to try to explicitly represent everything Wolfram Alpha knows and can compute using OWL ontologies and the reasoning they enable. The range of human knowledge is just too wide, and giant OWL ontologies are too difficult to build and curate.

It would of course at some point be beneficial to integrate with the Semantic Web, so that the knowledge in Wolfram Alpha could be accessed, linked with, and reasoned with by other semantic applications on the Web, and perhaps to make it easier to pull knowledge in from outside as well. Wolfram Alpha could probably play better with other Web services in the future by providing RDF and OWL representations of its knowledge via a SPARQL query interface — the basic open standards of the Semantic Web. However, for the internal knowledge representation and reasoning that takes place in Wolfram Alpha, OWL and RDF are not required, and it appears Wolfram has found a more pragmatic and efficient representation of his own.

I don't think he needs the Semantic Web INSIDE his engine, at least; it seems to be doing just fine without it. This view is in fact consistent with the current mainstream approach to the Semantic Web — as one commenter on this article pointed out, "what you do in your database is your business." The power of the Semantic Web is really for knowledge linking and exchange — for linking data and reasoning across different databases. As it connects with the rest of the "linked data Web," Wolfram Alpha could benefit from providing access to its knowledge via OWL, RDF and SPARQL. But that's off in the future.
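
As an illustration of what that future integration might look like, here is a small sketch using the Python rdflib library. The namespace and predicate names are invented for this example (they are not an actual Wolfram Alpha vocabulary), but the pattern of publishing facts as RDF triples and querying them via SPARQL is the kind of open interface described above.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical namespace; not a real Wolfram Alpha vocabulary.
EX = Namespace("http://example.org/facts/")

g = Graph()
g.add((EX.Jupiter, EX.meanRadiusKm, Literal(69911)))
g.add((EX.Earth, EX.meanRadiusKm, Literal(6371)))

# Other semantic applications could link to and reason over this
# knowledge through a standard SPARQL query interface.
results = g.query("""
    PREFIX ex: <http://example.org/facts/>
    SELECT ?planet ?radius
    WHERE { ?planet ex:meanRadiusKm ?radius . }
""")
for planet, radius in results:
    print(planet, radius)
```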

It is important to note that, just like OpenCyc (which has taken decades to build up a very broad knowledge base of common-sense knowledge and reasoning heuristics), Wolfram Alpha is a centrally hand-curated system. Somehow, perhaps quietly over a long period of time, or perhaps thanks to some new methodology for rapid knowledge entry, Wolfram and his team have figured out how to make the process of building up a broad knowledge base about the world practical, where everyone else who has tried has found it takes far longer than expected. The task is gargantuan — there is just so much diverse knowledge in the world. Representing even a small area of it formally turns out to be extremely difficult and time-consuming.

It has generally not been considered feasible for any one group to hand-curate all knowledge about every subject. The centralized hand-curation of Wolfram Alpha is certainly more controllable, manageable and efficient for a project of this scale and complexity, and it avoids problems of data quality and data consistency. But it is also a potential bottleneck, and most certainly a cost center. Yet it appears to be a tradeoff that Wolfram can afford to make, and one worth making, from what I could see. I don't yet know how Wolfram has managed to assemble his knowledge base so quickly, or even how much knowledge he and his team have really added, but at first glance it seems to be a large amount. I look forward to learning more about this aspect of the project.

Building Blocks for Knowledge Computing

Wolfram Alpha is almost more of an engineering accomplishment than a scientific one — Wolfram has broken down the set of factual questions we might ask, and the computational models and data necessary for answering them, into basic building blocks — a kind of basic language for knowledge computing, if you will. Then, with these building blocks in hand, his system is able to decompose questions into the building blocks and computations necessary to answer them, and then to actually assemble those computations and compute the answers on the fly.

Wolfram’s team manually entered, and in some cases automatically pulled in, masses of raw factual data about various fields of knowledge, plus models and algorithms for doing computations with the data. By building all of this in a modular fashion on top of the Mathematica engine, they have built a system that is able to actually do computations over vast data sets representing real-world knowledge. More importantly, it enables anyone to easily construct their own computations — simply by asking questions.
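
To give a feel for this architecture, here is a toy sketch. Wolfram has not published his internals, so this is only an illustration of the idea: a tiny "answer engine" in Python with a few building blocks, each pairing a class of question with a computation or a piece of curated data, assembled into an answer on the fly. The patterns, data, and figures are all invented.

```python
import re

# Illustrative curated data (the population figure is made up for this toy).
CURATED_DATA = {"population of france": 68_000_000}

# Building blocks: a pattern for a class of questions, plus the
# computation (or lookup) that answers questions of that class.
BUILDING_BLOCKS = [
    (re.compile(r"what is (\d+) plus (\d+)"),
     lambda m: int(m.group(1)) + int(m.group(2))),
    (re.compile(r"population of (\w+)"),
     lambda m: CURATED_DATA.get(f"population of {m.group(1)}")),
]

def answer(question: str):
    q = question.lower().rstrip("?")
    for pattern, compute in BUILDING_BLOCKS:
        match = pattern.search(q)
        if match:
            return compute(match)  # assemble and run the computation
    return "I don't know how to compute that."

print(answer("What is 17 plus 25?"))    # -> 42
print(answer("Population of France?"))  # -> 68000000
```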

The scientific and philosophical underpinnings of Wolfram Alpha are similar to those of the cellular automata systems he describes in his book, “A New Kind of Science” (NKS). Just as with cellular automata (such as the famous “Game of Life” algorithm that many have seen on screensavers), a set of simple rules and data can be used to generate surprisingly diverse, even lifelike patterns. One of the observations of NKS is that incredibly rich, even unpredictable patterns, can be generated from tiny sets of simple rules and data, when they are applied to their own output over and over again.

In fact, cellular automata, using just a few simple repetitive rules, can compute anything any computer or computer program can compute — in theory at least. But actually using such systems to build real computers or useful programs (such as Web browsers) has never been practical, because they are so low-level that it would not be efficient (it would be like trying to build a giant computer starting from the atomic level).

The simplicity and elegance of cellular automata proves that anything that may be computed — and potentially anything that may exist in nature — can be generated from very simple building blocks and rules that interact locally with one another. There is no top-down control, there is no overarching model. Instead, from a bunch of low-level parts that interact only with other nearby parts, complex global behaviors emerge that, for example, can simulate physical systems such as fluid flow, optics, population dynamics in nature, voting behaviors, and perhaps even the very nature of space-time. This is the main point of the NKS book in fact, and Wolfram draws numerous examples from nature and cellular automata to make his case.
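
Rule 30, one of the elementary cellular automata Wolfram studies in NKS, makes this point in a few lines of code. Each cell in a one-dimensional row is updated from its own state and its two neighbors' states using a single 8-entry rule table, yet the triangle of output it prints is famously complex:

```python
RULE = 30            # Wolfram's numbering: bit i of 30 answers neighborhood i
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    # Each new cell is looked up from its (left, self, right) neighborhood.
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```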

But with all its focus on recombining simple bits of information according to simple rules, the study of cellular automata is not a reductionist approach to science — in fact, it is much more focused on synthesizing complex emergent behaviors from simple elements than on reducing complexity back to simple units. The highly synthetic philosophy behind NKS is the paradigm shift at the basis of Wolfram Alpha's approach too. It is a system that is very much "bottom-up" in orientation. This is not to say that Wolfram Alpha IS a cellular automaton itself — rather, it is similarly based on fundamental rules and data that are recombined to form highly sophisticated structures.

Wolfram has created a set of building blocks for working with formal knowledge to generate useful computations, and in turn, by putting these computations together, you can answer even more sophisticated questions, and so on. It's a system for synthesizing sophisticated computations from simple computations. Of course, anyone who understands computer programming will recognize this as the very essence of good software design. But the key is that instead of forcing users to write programs to do this in Mathematica, Wolfram Alpha enables them to simply ask questions in natural language, and then automatically assembles the programs to compute the answers they need.

Wolfram Alpha perhaps represents a new approach to creating an "intelligent machine" — one that does away with much of the manual labor of explicitly building top-down expert systems about fields of knowledge (the traditional AI approach, such as that taken by the Cyc project), while simultaneously avoiding the complexities of trying to do anything reasonable with the messy distributed knowledge on the Web (the open-standards Semantic Web approach). It's simpler than top-down AI and easier than the original vision of the Semantic Web.

Generally if someone had proposed doing this to me, I would have said it was not practical. But Wolfram seems to have figured out a way to do it. The proof is that he’s done it. It works. I’ve seen it myself.

Questions Abound

Of course, questions abound. It remains to be seen just how smart Wolfram Alpha really is, or can be. How easily extensible is it? Will it get increasingly hard to add and maintain knowledge as more is added to it? Will it ever make mistakes? What forms of knowledge will it be able to handle in the future?

I think Wolfram would agree that it is probably never going to be able to give relationship or career advice, for example, because that is “fuzzy” — there is often no single right answer to such questions. And I don’t know how comprehensive it is, or how it will be able to keep up with all the new knowledge in the world (the knowledge in the system is exclusively added by Wolfram’s team right now, which is a labor intensive process). But Wolfram is an ambitious guy. He seems confident that he has figured out how to add new knowledge to the system at a fairly rapid pace, and he seems to be planning to make the system extremely broad.

And there is the question of bias, which we addressed as well. Is there any risk of bias in the answers the system gives because all the knowledge is entered by Wolfram's team? Those who enter the knowledge and design the formal models in the system are in a position to define the way the system thinks — both the questions and the answers it can handle. Wolfram believes that by focusing on factual knowledge — things like you might find in the Wikipedia or in textbooks or reports — the bias problem can be avoided. At the least, he is focusing the system on questions that have only one answer — not questions for which there might be many different opinions. Everyone generally agrees, for example, that the closing price of GOOG on a certain date is a particular dollar amount. It is not debatable. These are the kinds of questions the system addresses.

But even for some supposedly factual questions, there are potential biases in the answers one might come up with, depending on the data sources and paradigms used to compute them. Thus the choice of data sources has to be made carefully, to reflect as unbiased a view as possible. Wolfram's strategy is to rely on widely accepted data sources: well-known scientific models, and public data about factual things like the weather, geography and the stock market published by reputable organizations and government agencies. But of course even this is a particular worldview, and reflects certain implicit or explicit assumptions about which data sources are authoritative.

This is a system that reflects one perspective — that of Wolfram and his team — which is probably a close approximation of the mainstream consensus scientific worldview of our modern civilization. It is a tool — a tool for answering questions about the world today, based on what we generally agree that we know about it. Still, this is potentially murky philosophical territory, at least for some kinds of questions. Consider global warming — not all scientists even agree it is taking place, let alone what it signifies or where the trends are headed. Similarly in economics: based on certain assumptions and measurements, we are either experiencing only mild inflation right now, or significant inflation. There is not necessarily one right answer — there are valid alternative perspectives.

I agree with Wolfram that bias in the data choices will not be a problem, at least for a while. But even scientists don't always agree on the answers to factual questions, or on what models to use to describe the world — and this disagreement is in fact essential to progress in science. If there were only one "right" answer to any question, there could never be progress, or even different points of view. Fortunately, Wolfram is designing his system to link at least to alternative questions and answers, and even to sources for more information about the answers (such as the Wikipedia, for example). In this way he can provide unambiguous factual answers, yet also connect to more information and points of view about them at the same time. This is important.

It is ironic that a system like Wolfram Alpha, which is designed to answer questions factually, will probably bring up a broad range of questions that don’t themselves have unambiguous factual answers — questions about philosophy, perspective, and even public policy in the future (if it becomes very widely used). It is a system that has the potential to touch our lives as deeply as Google. Yet how widely it will be used is an open question too.

The system is beautiful, and the user interface is already quite simple and clean. In addition, answers include computationally generated diagrams and graphs — not just text. It looks really cool. But it is also designed by and for people with IQs somewhere in the altitude of Wolfram's — some work will need to be done dumbing it down a few hundred IQ points, so as not to overwhelm the average consumer with answers so comprehensive that they require a graduate degree to fully understand.

It also remains to be seen how much the average consumer thirsts for answers to factual questions. I do think all consumers have a need for this kind of intelligence once in a while, but perhaps not as often as they need something like Google. But I am sure that academics, researchers, students, government employees, journalists and a broad range of professionals in all fields definitely need a tool like this, and will use it every day.

Future Potential

I think there is more potential to this system than Stephen has revealed so far. I think he has bigger ambitions for it in the long-term future. I believe it has the potential to be THE online service for computing factual answers, THE system for factual knowledge on the Web. More than that, it may eventually have the potential to learn and even to make new discoveries. We'll have to wait and see where Wolfram takes it.

Maybe Wolfram Alpha could even do a better job of retrieving documents than Google, for certain kinds of questions — by first understanding what you really want, then computing the answer, and then giving you links to documents that relate to the answer. But even if it is never applied to document retrieval, I think it has the potential to play a leading role in all our daily lives — it could function like a kind of expert assistant, with all the facts and computational power in the world at our fingertips.

I would expect that Wolfram Alpha will open up various APIs in the future, and then we'll begin to see interesting new intelligent applications emerge, based on its underlying capabilities and what it already knows.

In May, Wolfram plans to open up what I believe will be a first version of Wolfram Alpha. Anyone interested in a smarter Web will find it quite interesting, I think. Meanwhile, I look forward to learning more about this project as Stephen reveals more in months to come.

One thing is certain, Wolfram Alpha is quite impressive and Stephen Wolfram deserves all the congratulations he is soon going to get.

Appendix: Answer Engines vs. Search Engines

The above article about Wolfram Alpha has created quite a stir in the blogosphere. (Note: for those who haven't used Techmeme before, just move your mouse over the "discussion" links under the Techmeme headline and expand to see references to related responses.)

But while the response from most was quite positive and hopeful, some writers jumped to conclusions, went snarky, or entirely missed the point.

For example, some articles, such as this one by Jon Stokes at Ars Technica, quickly veered into refuting points that I never made (perhaps Stokes did not read my article in full before blogging his reply, or perhaps he read it but simply missed my point).

Other articles, such as this one by Saul Hansell of the New York Times' Bits blog, focused on the business questions — again, a topic that I did not address in my article. My article was about the technology, not the company or the business opportunity.

The most common misconception in the articles that missed the point concerns whether Wolfram Alpha is a "Google killer."

In fact, I was careful in both the title and the content of my article to make the distinction between Wolfram Alpha and Google, and I tried to make it clear that Wolfram Alpha is not designed to be a "Google killer." It has a very different purpose: it doesn't compete with Google for general document retrieval; instead, it answers factual questions.

Wolfram Alpha is an “answer engine” not a search engine.

Answer engines are a different category of tool from search engines. They understand and answer questions — they don't simply retrieve documents. (Note: in fact, Wolfram Alpha doesn't merely answer questions; it also helps users to explore knowledge and data visually, and can even open up new questions.)

Of course Wolfram Alpha is not alone in making a system that can answer questions. This has been a longstanding dream of computer scientists, artificial intelligence theorists, and even a few brave entrepreneurs in the past.

Google has also been working on answering questions that are typed directly into its search box. For example, type a geography question, or even "what time is it in Italy," into the Google search box and you will get a direct answer. But the reasoning and computational capabilities of Google's "answer engine" features are primitive compared to what Wolfram Alpha does.

For example, the Google search box does not compute answers to calculus problems, or tell you what phase the moon will be in on a certain future date, or tell you the distance from San Francisco to Ulan Bator, Mongolia.
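
That last example is easy to make concrete. An answer engine handles such a question not by finding a stored answer but by computing one, in this case a great-circle distance via the haversine formula (the city coordinates below are approximate):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# San Francisco to Ulan Bator (Ulaanbaatar), roughly 9,300 km.
print(round(haversine_km(37.77, -122.42, 47.89, 106.91)))
```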

Many questions can or might be answered by Google, using simple database lookup, provided that Google already has the answers in its index or databases. But there are many questions that Google does not yet find or store the answers to efficiently. And there always will be.

Google's search box provides some answers to common computational questions (perhaps by looking them up in a big database in some cases, or by computing the answers in others). But so far it has limited range. Of course the folks at Google could work more on this; they have the resources if they want to. But they are far behind Wolfram Alpha and others (for example, the START project, which I learned about just today, True Knowledge, and the Cyc project, among many others).

The approach taken by Wolfram Alpha — and others working on "answer engines" — is not to build the world's largest database of answers, but rather to build a system that can compute answers to unanticipated questions. Google has built a system that can retrieve any document on the Web. Wolfram Alpha is designed to be a system that can answer any factual question in the world.

Of course, if the Wolfram Alpha people are clever (and they are), they will probably design their system to leverage databases of known answers whenever they can, and to store any new answers they compute, to save the trouble of re-computing them if asked again in the future. But fundamentally they are not making a database-lookup-oriented service. They are making a computation-oriented service.

Answer engines do not compete with search engines, but some search engines (such as Google) may compete with answer engines. Time will tell if search engine leaders like Google will put enough resources into this area of functionality to dominate it, or whether they will simply team up with the likes of Wolfram and/or others who have put a lot more time into this problem already.

In any case, Wolfram Alpha is not a “Google killer.” It wasn’t designed to be one. It does however answer useful questions — and everyone has questions. There is an opportunity to get a lot of traffic, depending on things that still need some thought (such as branding, for starters). The opportunity is there, although we don’t yet know whether Wolfram Alpha will win it. I think it certainly has all the hallmarks of a strong contender at least.

Video: My Talk on the Evolution of the Global Brain at the Singularity Summit

If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.

(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).

How to Build the Global Mind

Kevin Kelly recently wrote another fascinating article about evidence of a global superorganism. It's a useful contribution to the ongoing evolution of this meme.

I tend to agree that we are at what Kevin calls Stage III. However, an important distinction in my own thinking is that the superorganism is not comprised just of machines; it is also comprised of people.

(Note: I propose that we abbreviate the One Machine as "the OM." It's easier to write and it sounds cool.)

Today, humans still make up the majority of processors in the OM. Each human nervous system comprises billions of processors, and there are billions of humans. That’s a lot of processors.

However, Ray Kurzweil posits that the balance of processors is rapidly moving towards favoring machines — and that sometime in the latter half of this century, machine processors will outnumber, or at least outcompute, all the human processors combined, perhaps many times over.

While I agree with Ray's point that machine intelligence will soon outnumber human intelligence, I'm skeptical of Kurzweil's timeline, especially in light of recent research that shows evidence of quantum-level computation within microtubules inside neurons. If the brain in fact computes at the tubulin level, then it may have many orders of magnitude more processors than currently estimated. This remains to be determined. Those who argue against this claim that the brain can be modelled on a classical level and that quantum computing need not be invoked. To be clear, I am not claiming that the brain is a quantum computer; I am claiming that there seems to be evidence that computation in the brain takes place at, or near, the quantum level. Whether quantum effects have any measurable effect on what the brain does is not the question; the question is simply whether microtubules are the lowest-level processing elements of the brain. If they are, then there are a whole lot more processors in the brain than previously thought.

Another point worth considering is that much of the brain's computation takes place not within the neurons themselves but in the synapses between them, and this computation happens chemically rather than electrically. There are vastly more synapses than neurons, and computation within the synapses happens at a much faster and more granular level than neuronal firings. It is definitely the case that chemical-level computations take place with elements that are many orders of magnitude smaller than neurons. This is another case for the brain computing at a much lower level than is currently thought.

In other words, the resolution of computation in the human brain is still unknown. We have several competing approximations, but no final answer. I do think, however, that the evidence points to computation being much more granular than we currently think.
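
Some rough arithmetic shows why the answer matters. The figures below are order-of-magnitude estimates only, and the tubulin count in particular is speculative, but they illustrate how much the "processor count" of humanity swings with the assumed resolution of computation:

```python
# All figures are rough order-of-magnitude estimates; the tubulin
# number assumes the speculative microtubule-level computation above.
NEURONS_PER_BRAIN   = 1e11  # ~100 billion neurons
SYNAPSES_PER_BRAIN  = 1e15  # vastly more synapses than neurons
TUBULINS_PER_NEURON = 1e9   # speculative, if microtubules compute
HUMANS              = 7e9   # roughly, at the time of writing

for label, per_brain in [("neuron-level", NEURONS_PER_BRAIN),
                         ("synapse-level", SYNAPSES_PER_BRAIN),
                         ("tubulin-level", NEURONS_PER_BRAIN * TUBULINS_PER_NEURON)]:
    print(f"{label}: {per_brain * HUMANS:.0e} processing elements worldwide")
```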

In any case, I do agree with Kurzweil that artificial computers will eventually outnumber naturally occurring human computers on this planet — it's just a question of when. In my view it will take a little longer than he thinks: perhaps 100 to 200 years.

There is another aspect of my thinking on this subject which may throw a wrench in the works. I don't think that what we call "consciousness" is something that can be synthesized. Humans appear to be conscious, but we have no idea what that means yet. It is undeniable that we all have an experience of being conscious, and this experience is mysterious. It is also the case that, at least so far, nobody has built a software program or hardware device that seems to be having this experience. In fact, we don't even know how to test for consciousness. The much-touted Turing Test, for example, does not test consciousness; it tests humanlike intelligence. There really isn't a test for consciousness yet. Devising one is an interesting and important goal that we should perhaps be working on.

In my own view, consciousness is probably fundamental to the substrate of the universe, like space, time and energy. We don't know what space, time and energy actually are, and we cannot measure them directly either. All our measurements of space, time and energy are indirect — we measure other things that imply that space, time and energy exist. Space, time and energy are inferred from effects we observe on material things that we can measure. I think the same may be true of consciousness. So the question is: what are the measurable effects of consciousness? One candidate seems to be the double-slit experiment, which shows that the act of observation causes the quantum wave function to collapse. Are there other effects we can cite as evidence of consciousness?

I have recently been wondering how connected consciousness is to the substrate of the universe we are in. If consciousness is a property of the substrate, then it may be impossible to synthesize. For example, we never synthesize space, time or energy — no matter what we do, we are simply using the space, time and energy of the substrate that is this universe.

If this is the case, then creating consciousness is impossible. The best we can do is somehow channel the consciousness that is already there in the substrate of the universe. In fact, that may be what the human nervous system does: it channels consciousness, much in the way that an electrical circuit channels electricity. The reason that software programs will probably not become conscious is that they are too many levels removed from the substrate. There is little or no feedback between the high-level representations of cognition in AI programs and the quantum-level computation (and possibly consciousness) of the physical substrate of the universe. That is not the case in the human nervous system — there, the basic computing elements and all the cognitive activity are directly tied to the physical substrate of the universe. There is at least the potential for two-way feedback between the human mind (the software), the human brain (a sort of virtual machine), and the quantum field (the actual hardware).

So the question I have been asking myself lately is: how connected is consciousness to the physical substrate? And furthermore, how important is consciousness to what we consider intelligence to be? If consciousness is important to intelligence, then artificial intelligence may not be achievable through software alone — it may require consciousness, which may in turn require a different kind of computing system, one that is more connected (through bidirectional feedback) to the physical quantum substrate of the universe.

What all this means to me is that human beings may form an important and potentially irreplaceable part of the OM — the One Machine — the emerging global superorganism. In particular, today the humans are still the most intelligent parts. But in the future, when machine intelligence may exceed human intelligence a billionfold, humans may still be the only, or at least the most, conscious parts of the system. Because of the human capacity for consciousness (actually, animals and insects are conscious too), I think we have an important role to play in the emerging superorganism. We are its awareness. We are who watches, feels, and knows what it is thinking and doing, ultimately.

Because humans are the actual witnesses and knowers of what the OM does and thinks, the function of the OM will very likely be to serve and amplify humans, rather than to replace them. It will be a system that is comprised of humans and machines working together, for human benefit, not for machine benefit. This is a very different future outlook than that of people who predict a kind of “Terminator-esque” future in which machines get smart enough to exterminate the human race. It won’t happen that way. Machines will very likely not get that smart for a long time, if ever, because they are not going to be conscious. I think we should be much more afraid of humans exterminating humanity than of machines doing it.

So to get to Kevin Kelly's Level IV — what he calls "An Intelligent Conscious Superorganism" — we simply have to include humans in the system. Machines alone are not, and will not ever be, enough to get us there. I don't believe consciousness can be synthesized, or that it will suddenly appear in a suitably complex computer program. I think it is a property of the substrate, and computer programs are just too many levels removed from the substrate. Now, it is possible that we might devise a new kind of computer architecture — one that is much more connected to the quantum field. Perhaps in such a system consciousness, like electricity, could be embodied. That's a possibility. It is likely that such a system would be more biological in nature, but that's just a guess. It's an interesting direction for research.

In any case, if we are willing to include humans in the global superorganism — the OM, the One Machine — then we are already at Kevin Kelly's Level IV. If we are not willing to include them, then I don't think we will reach Level IV anytime soon, or perhaps ever.

It is also important to note that consciousness has many levels, just like intelligence. There is basic raw consciousness, which simply perceives the qualia of what takes place. But there are also forms of consciousness which are more powerful — for example, consciousness that is aware of itself; consciousness that is so highly tuned that it has much higher resolution; and consciousness that is aware of the physical substrate and its qualities of being spacelike and empty of any kind of fundamental existence. These are in fact the qualities of the quantum substrate we live in. Interestingly, they are also the qualities of reality that Buddhist masters point out to be the ultimate nature of reality and of the mind (they do not consider reality and mind to be two different things, ultimately). Consciousness may or may not be aware of these qualities of consciousness and of reality itself — consciousness can be dull, or low-grade, or simply not awake. The level to which consciousness is aware of the substrate is a way to measure the grade of consciousness taking place. We might call this dimension of consciousness "resolution." The higher the resolution of consciousness, the more acutely aware it is of the actual nature of phenomena, the substrate. At the highest resolution, it can directly perceive the space-like, mind-like, quantum nature of what it observes, and there is no perception of duality between observer and observed — consciousness perceives everything to be essentially consciousness appearing in different forms and behaving in a quantum fashion.

Another dimension of consciousness that is important to consider is what we could call "unity." At the lowest level of the unity scale there is no sense of unity, but rather a sense of extreme isolation or individuality. At the highest level of the scale there is a sense of total unification of everything within one field of consciousness. That highest level corresponds to what we could call "omniscience." The Buddhist concept of spiritual enlightenment is essentially consciousness that has evolved to BOTH the highest level of resolution and the highest level of unity.

The global superorganism is already conscious, in my opinion, but it has not achieved very high resolution or unity. This is because most humans, and most human groups and organizations, have only been able to achieve the most basic levels of consciousness themselves. Since humans, and groups of humans, comprise the consciousness of the global superorganism, our individual and collective conscious evolution is directly related to the conscious evolution of the superorganism as a whole. This is why it is important for individuals and groups to work on their own consciousnesses. Consciousness is "there" as a basic property of the physical substrate, but like mass or energy, it can be channelled and accumulated and shaped. Currently the consciousness that is present in us as individuals, and in groups of us, is at best nascent and underdeveloped.

In our young, dualistic, materialistic, and externally obsessed civilization, we have made very little progress on working with consciousness. Instead we have focused most or all of our energy on working with certain other, more material-seeming aspects of the substrate — space, time and energy. In my opinion, a civilization becomes fully mature when it spends equal if not more time on the consciousness dimension of the substrate. That is something we are just beginning to work on, thanks to the strangeness of quantum mechanics breaking our classical physical paradigms and forcing us to admit that consciousness might play a role in our reality.

But there are ways to speed up the evolution of individual and collective consciousness, and in doing so we can advance our civilization as a whole. I have lately been writing and speaking about this in more detail.

On an individual level, one way to rapidly develop our own consciousness is the path of meditation and spirituality — this is the most important and effective. There may also be technological aids, such as augmented reality or sensory augmentation, that can improve how we perceive, and what we perceive. In the not-too-distant future we will probably have the opportunity to dramatically improve the range and resolution of our sense organs using computers or biological means. We may even develop new senses that we cannot imagine yet. In addition, using the Internet, for example, we will be able to be aware of more things at once than ever before. But ultimately, the scope of our individual consciousness has to develop on an internal level in order to truly reach higher levels of resolution and unity. Machine augmentation can help, perhaps, but it is not a substitute for actually increasing the capacity of our consciousnesses. For example, if we use machines to get access to vastly more data, but our consciousnesses remain at a relatively low capacity, we may not be able to integrate or make use of all that new data anyway.

It is well known that the brain filters out most of the information it receives. Furthermore, when taking a hallucinogenic drug, the filter opens up a little wider, and people become aware of things which were there all along but which they previously filtered out. Widening the scope of consciousness — increasing the resolution and unity of consciousness — is akin to what happens when taking such a drug, except that it is not a temporary effect, and it is more controllable and functional on a day-to-day basis. Many great Tibetan lamas I know seem to have accomplished this — the scope of their consciousness is quite vast, and the resolution quite precise. They literally can and do see every detail of even the smallest things, and at the same time they have very little or no sense of individuality. The lack of individuality seems to remove certain barriers, which in turn enables them to perceive things that happen beyond the scope of what would normally be considered their own minds — for example, they may be able to perceive the thoughts of others, or see what is happening in other places or times. This seems to take place because they have increased the resolution and unity of their consciousnesses.

On a collective level, there are also things we can do to make groups, organizations and communities more conscious. In particular, we can build systems that do for groups what the “self construct” does for individuals.

The self is an illusion. And that's good news. If it were not an illusion, we could never see through it, and spiritual enlightenment, for one thing, would not be possible to achieve. Furthermore, if it were not an illusion, we could never hope to synthesize it for machines, or for large collectives. That the "self" is an illusion is something Buddhists, neuroscientists, and cognitive scientists all seem to agree on. The self is an illusion, a mere mental construct. But it's a very useful one, when applied in the right way. Without some concept of self, we humans would find it difficult to communicate, or even to navigate down the street. Similarly, without some concept of self, groups, organizations and communities also cannot function very productively.

The self construct provides an entity with a model of itself and its environment. This model includes what is taking place "inside" and "outside" of what is considered to be the self, or "me." By creating this artificial boundary, and modelling what is taking place on both sides of it, the self construct is able to measure and plan behavior, and to enable a system to adjust and adapt to "itself" and the external environment. Entities that have a self construct are able to behave far more intelligently than those that do not. For example, consider the difference between the intelligence of a dog and that of a human. Much of this is really a difference in the sophistication of the self-constructs of these two species. Human selves are far more self-aware, introspective, and sophisticated than those of dogs. They are equally conscious, but humans have more developed self-constructs. This applies to simple AI programs as well, and to collective intelligences such as workgroups, enterprises, and online communities. The more sophisticated the self-construct, the smarter the system can be.

The key to appropriate and effective application of the self-construct is to develop a healthy self, rather than to eliminate the self entirely. Eradication of the self is a form of nihilism that leads to an inability to function in the world — not something that Buddhists or neuroscientists advocate. So what is a healthy self? In an individual, a healthy self is a construct that accurately represents past, present and projected future internal and external state, and that is highly self-aware, rational but not overly so, adaptable, respectful of external systems and other beings, and open to learning and changing to fit new situations. The same is true of a healthy collective self. However, most individuals today do not have healthy selves — they have highly deluded, unhealthy self-constructs. This in turn is reflected in the higher-order self-constructs of the groups, organizations and communities we build.

One of the most important things we can work on now is creating systems that provide collectives — groups, organizations and communities — with sophisticated, healthy, virtual selves. These virtual selves provide collectives with a mirror of themselves. Having a mirror enables the members of those systems to see the whole, and how they fit in. Once they can see this, they can begin to adjust their own behavior to fit what the whole is trying to do. This simple mirroring function can catalyze dramatic new levels of self-organization and synchrony in what would otherwise be a totally chaotic "crowd" of individual entities.

In fact, I think that collectives move through three levels of development:

  • Level 1: Crowds. Crowds are collectives in which the individuals are not aware of the whole and in which there is no unified sense of identity or purpose. Nevertheless, crowds do intelligent things. Consider, for example, schools of fish or flocks of birds. There is no single leader, yet the individuals, by adapting to what their nearby neighbors are doing, behave collectively as a single entity of sorts (see the flocking sketch after this list). Crowds are amoebic entities that ooze around in a bloblike fashion. They are not that different from physical models of gases.
  • Level 2: Groups. Groups are the next step up from crowds. Groups have some form of structure, which usually includes a system for command and control. They are more organized. Groups are capable of much more directed and intelligent behaviors. Families, cities, workgroups, sports teams, armies, universities, corporations, and nations are examples of groups. Most groups have intelligences roughly similar to those of simple animals. They may have a primitive sense of identity and self, and on that basis they are capable of planning and acting in a more coordinated fashion.
  • Level 3: Meta-Individuals. The highest level of collective intelligence is the meta-individual. This emerges when what was once a crowd of separate individuals evolves to become a new individual in its own right, and it is facilitated by the formation of a sophisticated meta-level self-construct for the collective. This evolutionary leap is called a metasystem transition — the parts join together to form a new higher-order whole made of the parts themselves. This new whole resembles the parts, but transcends their abilities. To evolve a collective to the level of being a true individual, it has to have a well-designed nervous system, a collective brain and mind, and most importantly it has to achieve a high level of collective consciousness. High-level collective consciousness requires a sophisticated collective self-construct to serve as a catalyst. Fortunately, this is something we can actually build, because, as asserted previously, the self is an illusion, a construct, and therefore selves can be built, even for large collectives comprised of millions or billions of members.
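
Here is a minimal sketch of Level 1 behavior, a stripped-down version of Reynolds' classic "boids" flocking rules: no leader, no overarching model, just individuals adjusting to their nearby neighbors. The constants are arbitrary; the point is only that coherent collective motion emerges from purely local rules.

```python
import random

N, RADIUS, STEPS = 30, 10.0, 50

birds = [{"x": random.uniform(0, 50), "y": random.uniform(0, 50),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(N)]

def step():
    for b in birds:
        near = [o for o in birds if o is not b and
                (o["x"] - b["x"]) ** 2 + (o["y"] - b["y"]) ** 2 < RADIUS ** 2]
        if near:
            # Alignment: steer toward neighbors' average heading.
            b["vx"] += 0.05 * (sum(o["vx"] for o in near) / len(near) - b["vx"])
            b["vy"] += 0.05 * (sum(o["vy"] for o in near) / len(near) - b["vy"])
            # Cohesion: drift toward neighbors' average position.
            b["vx"] += 0.01 * (sum(o["x"] for o in near) / len(near) - b["x"])
            b["vy"] += 0.01 * (sum(o["y"] for o in near) / len(near) - b["y"])
    for b in birds:
        b["x"] += b["vx"]
        b["y"] += b["vy"]

for _ in range(STEPS):
    step()
# With no leader, headings converge: the "crowd" moves as one blob.
print(sum(b["vx"] for b in birds) / N, sum(b["vy"] for b in birds) / N)
```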

The global superorganism has been called the Global Brain for over a century by a stream of forward-looking thinkers. Today we may start calling it the One Machine, or the OM, or something else. But in any event, I think the most important work we can do to make it smarter is to provide it with a more developed and accurate sense of collective self. To do this, we might start by working on ways to provide smaller collectives with better selves — for example, groups, teams, enterprises and online communities. Can we provide them with dashboards and systems that catalyze greater collective awareness and self-organization? I really believe this is possible, and I am certain there are technological advances that can support this goal. That is what I'm working on with my own project, Twine.com. But this is just the beginning.

Watch My Best Talk: The Global Brain is Coming

I've posted a link to a video of my best talk — given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

New Video: Leading Minds from Google, Yahoo, and Microsoft talk about their Visions for Future of The Web

Video from my panel at DEMO Fall ’08 on the Future of the Web is now available.

I moderated the panel, and our panelists were:

Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century

Peter Norvig, Director of Research, Google Inc.

Jon Udell, Evangelist, Microsoft Corporation

Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.

The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.

Many new and revealing insights were provided by our excellent panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on. They covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft's longer-term views as well.

Enjoy!!!

Peace in the Middle East: Could Alternative Energy Be the Solution?

I have been thinking about the situation in the Middle East and also the rise of oil prices, peak oil, and the problem of a world economy based on energy scarcity rather than abundance. There is, I believe, a way to solve the problems in the Middle East, and the energy problems facing the world, at the same time. But it requires thinking “outside the box.”

Middle Eastern nations must take the lead in freeing the world from dependence on their oil. This is not only their best strategy for the future of their nations and their people, but also it is what will ultimately be best for the region and the whole world.

It is inevitable that someone is going to invent a new technology that frees the world from dependence on fossil fuels. When that happens all oil empires will suddenly collapse. Far-sighted, visionary leaders in oil-producing nations must ensure that their nations are in position to lead the coming non-fossil-fuel energy revolution. This is the wisdom of “cannibalize yourself before someone else does.”

Middle Eastern nations should invest more heavily than any other nations in inventing and supplying new alternative energy technologies. For example: hydrogen, solar, biofuels, zero point energy, magnetic power, and the many new emerging alternatives to fossil fuels. This is a huge opportunity for the Middle East not only for economic reasons, but also because it may just be the key to bringing about long-term sustainable peace in the region.

There is a finite supply of oil in the Middle East — the game will and must eventually end. Are Middle Eastern nations thinking far enough ahead about this or not? There is a tremendous opportunity for them if they can take the initiative on this front and there is an equally tremendous risk if they do not. If they do not have a major stake in whatever comes after fossil fuels, they will be left with nothing when whatever is next inevitably happens (which might be very soon).

Any Middle Eastern leader who is not thinking very seriously about this issue right now is selling their people short. I sincerely advise them to make this a major focus going forward. Not only will this help them to improve quality of life for their people now and in the future, but it is the best way to help bring about world peace. The Middle East has the potential to lead a huge and lucrative global energy Renaissance. All it takes is vision and courage to push the frontier and to think outside of the box.

Continue reading

Great Collective Intelligence Book; Includes a Chapter I Wrote

I highly recommend this new book on Collective Intelligence. It features chapters by a Who’s Who of thinkers on Collective Intelligence, including a chapter by me about “Harnessing the Collective Intelligence of the World Wide Web.”

Here is the full-text of my chapter, minus illustrations (the rest of the book is great and I suggest you buy it to have on your shelf. It’s a big volume and worth the read):

Continue reading

My Visit to DERI — World's Premier Semantic Web Research Institute

Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.

DERI has become the world’s premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what’s happening there.

DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:

  • Semantic Web Search Engine (SWSE) and YARS, a massively scalable triplestore.  These projects are concerned with crawling and indexing the information on the Semantic Web so that end-users can find it. They have done good work on consolidating data and also on building a highly scalable triplestore architecture.
  • Sindice — An API and search infrastructure for the Semantic Web. This project is focused on providing a rapid indexing API that apps can use to get their semantic content indexed, and that can also be used by apps to do semantic searches and retrieve semantic content from the rest of the Semantic Web. Sindice provides Web-scale semantic search capabilities to any semantic application or service.
  • SIOC — Semantically Interlinked Online Communities. This is an ontology for linking and sharing data across online communities in an open manner, that is getting a lot of traction. SIOC is on its way to becoming a standard and may play a big role in enabling portability and interoperability of social Web data.
  • JeromeDL is developing technology for semantically enabled digital libraries. I was impressed with the powerful faceted navigation and search capabilities they demonstrated.
  • notitio.us is a project for personal knowledge management of bookmarks and unstructured data.
  • SCOT, OpenTagging and Int.ere.st.  These projects are focused on making tags more interoperable, and for generating social networks and communities from tags. They provide a richer tag ontology and framework for representing, connecting and sharing tags across applications.
  • Semantic Web Services.  One of the big opportunities for the Semantic Web that is often overlooked by the media is Web services. Semantics can be used to describe Web services so they can find one another and connect, and even to compose and orchestrate transactions and other solutions across networks of Web services, using rules and reasoning capabilities. Think of this as dynamic semantic middleware, with reasoning built-in.
  • eLite. I was introduced to the eLite project, a large e-learning initiative that is applying the Semantic Web.
  • Nepomuk.  Nepomuk is a large effort supported by many big industry players. They are making a social semantic desktop and a set of developer tools and libraries for semantic applications that are being shipped in the Linux KDE distribution. This is a big step for the Semantic Web!
  • Semantic Reality. Last but not least, and perhaps one of the most eye-opening demos I saw at DERI, is the Semantic Reality project. They are using semantics to integrate sensors with the real world. They are creating an infrastructure that can scale to handle trillions of sensors eventually. Among other things I saw, you can ask things like "where are my keys?" and the system will search a network of sensors and show you a live image of your keys on the desk where you left them, and even give you a map showing the exact location. The service can also email you or phone you when things happen in the real world that you care about — for example, if someone opens the door to your office, or a file cabinet, or your car, etc. Very groundbreaking research that could seed an entire new industry.

In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI — they are clearly the leader in the space.

A Universal Classification of Intelligence

I’ve been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system — whether naturally occurring or synthetic — can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.

One approach to such evaluation is to use a standardized test, such as an IQ test. However, this test is far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ test, but that doesn’t mean much, because the test itself is geared towards humans. What is needed is a way to evaluate and compare intelligence across different species — one that is much more granular and basic.

What we need is a system that focuses on basic building blocks of intelligence, starting by measuring the presence or ability to work with fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).

What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system — whether insect, animal, human, software, or alien — in a normalized manner.

Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: It will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence — an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map — and the rating scale I am calling for would provide us with one, for it maps the range of possible capabilities that intelligent systems are capable of.

I’m not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra for example, or how well it can recall memories, etc.) — but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are — and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks from which all cognitive behavior, and therefore all minds, are comprised.
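
As a starting point (purely hypothetical, since no such standard exists), the map might be represented as a fixed vocabulary of basic cognitive capabilities, with any intelligent system scored against each one. The capability names and scores below are illustrative only:

```python
# A hypothetical capability vocabulary; real work would refine this list.
CAPABILITIES = [
    "object_constancy", "quantity", "basic_arithmetic", "self_construct",
    "temporal_reasoning", "spatial_reasoning", "language", "metaphor",
    "induction", "logical_reasoning", "self_awareness",
]

def profile(scores: dict) -> dict:
    """Expand a sparse {capability: 0.0-1.0} dict into a full profile."""
    return {c: float(scores.get(c, 0.0)) for c in CAPABILITIES}

# Illustrative scores, not measurements.
honeybee_colony = profile({"quantity": 0.3, "spatial_reasoning": 0.7})
adult_human     = profile({c: 1.0 for c in CAPABILITIES})

# One crude normalized comparison: coverage of the mapped capability space.
for name, p in [("honeybee colony", honeybee_colony),
                ("adult human", adult_human)]:
    print(f"{name}: {sum(p.values()) / len(CAPABILITIES):.2f}")
```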

So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question — although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.

Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?

I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth — including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well — what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?

It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some other means of more direct contact, we will reap important rewards. A system such as the one I am proposing would provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with the alien civilizations we may inevitably encounter as humanity spreads throughout the galaxy. While some skeptics claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but eventually it is inevitable that we will cross paths — if they exist at all. Not to be prepared would be irresponsible.

A Bottle That Purifies Enough Water for a Year

This is a really great invention — a hand held water bottle that can purify a year’s worth of water. It removes not only parasites and bacteria, but also viruses. It was just announced recently at a defense industry tradeshow and was a big hit among military commanders who need a better way to get water to their troops. Beyond that it could be a lifesaver in disaster areas and in developing countries where finding clean water is a daily struggle.

New Photon Thruster: Get to Mars in 1 Week!

An interesting new patent-pending design for a photon thruster appears to be the real deal. Check out the article and who is behind it (a fellow SRI alumnus!). Getting to Mars in a week means that getting to the Moon, as well as to other nearby planets, would also be quite fast. This could be quite revolutionary.

TUSTIN, Calif., Sept. 7, 2007 — An amplified photon thruster that could potentially shorten the trip to Mars from six months to a week has reportedly attracted the attention of aerospace agencies and contractors.

Young Bae, founder of the Bae Institute in Tustin, Calif., first demonstrated his photonic laser thruster (PLT), which he built with off-the-shelf components, in December.