How Twitter Could be 10X Bigger, 100X More Profitable, and 1000X More Awesome

Read my new article about how to evolve Twitter, on VentureBeat

 

I’ve spent many years studying, writing about, building, and funding companies (such as Bottlenose, Klout, and The Daily Dot) in Twitter’s ecosystem.

Despite the media chatter, I am still bullish on Twitter – as should any investor who understands the social network’s fundamentals and true potential. Twitter has the highest revenue growth rate of any tech firm with over $2 billion in sales over the last year. And at today’s market cap, Twitter is an incredible bargain.

The company has enormous untapped potential to impact the world and create value for investors and partners — far more than short-term investors probably realize. But to unlock that hidden potential, some significant product and business model evolution may also be necessary.

I truly want the Twitter ecosystem to succeed. And it is in that spirit of support and optimism that I’m offering a number of ideas below that could help Twitter not only regain its former growth curve but surpass it. I’m breaking down my detailed playbook for the company into three sections:

1: Improving the signal-to-noise ratio on tweets
2: Enabling better search and collection of tweets
3: Focusing on being a network not a destination

 

Making Sense of Streams

This is a talk I’ve been giving on how we filter the Stream at Bottlenose.

You can view the slides below, or click here to replay the webinar with my talk.

Note: I recommend the webinar if you have time, as I go into a lot more detail than is in the slides – in particular some thoughts about the Global Brain, mapping collective consciousness, and what the future of social media is really all about.  My talk starts at 05:38:00 in the recording.

 

Bottlenose Beat Bit.ly to the First Attention Engine – But It’s Going to Get Interesting

Bottlenose (disclosure: my startup) just launched the first attention engine this week.

But it appears that Bit.ly is launching one soon as well.

It’s going to get interesting to watch this category develop. Clearly there is new interest in building a good real-time picture of what’s happening, and what’s trending, and providing search, discovery, and insights around that.

I believe Bottlenose has the most sophisticated map of attention today, with deep intellectual property across 8 pending patents and a very advanced technology stack behind it. And we have some pretty compelling user experiences on top of it all. So in short, we have a lead here on many levels. (Read more about that here.)

But that might not even matter, because I think ultimately Bit.ly will be a potential partner for Bottlenose rather than a long-term competitor — at least if they stay true to their roots and DNA as a data provider rather than a user-experience provider. I doubt that Bit.ly will succeed in making a search destination that consumers will use, and I’m guessing that is not really their goal.

In testing their Realtime service, my impression is that it feels more like a Web 1.0 search engine: static search results for advanced-search-style queries. I don’t see that as a consumer experience.

Bottlenose, on the other hand, goes much further into consumer UX, with live photos, newspapers, topic portals, a dashboard, etc. It is also a more dynamic, always-changing, real-time content consumption destination. Bottlenose feels like media, not merely search (in fact I think search, news and analytics are actually converging in the social network era).

Bottlenose has a huge emphasis on discovery, analytics, and other further actions on the content that go beyond just search.

I think in the end Bit.ly’s Realtime site will really demonstrate the power of their data, which will still mainly be consumed via their API rather than in their own destination. I’m hopeful that Bit.ly will take that path. It would be useful to everyone, including Bottlenose.

The Threat to Third-Party URL Shorteners

If I were Bit.ly, my primary fear today would be Twitter with their t.co shortener. That is a big threat to Bit.ly and will probably result in Bit.ly losing a lot of their data input over time as more Tweets have t.co links on them than Bit.ly links.

Perhaps Bit.ly is attempting to pivot their business to the user-experience side in advance of such a threat reducing their data set and thus the value of their API. But without that data set, I don’t see where they would get the data to measure the present, so as a pivot it would not work.

In other words, if people are not using as many Bit.ly links in the future, Bit.ly will see less attention. And trends point to this happening in fact — Twitter has their own shortener. So does Facebook. So does Google. Third-party shorteners will probably represent a decreasing share of messages and attention over time.

I think the core challenge for Bit.ly is to find a reason for their short URLs to be used instead of native app short URLs. Can they add more value to them somehow? Could they perhaps build in monetization opportunities for parties who use their shortener? Or could they provide better analytics on short-URL uptake than Twitter or Facebook or Google will (as Bit.ly arguably does today)?
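
To make that analytics point concrete, here is a minimal sketch in TypeScript of why a shortener is more than a convenience: every redirect it serves is also a measurement event, and the accumulated click log is the real data asset. All the names here are purely illustrative; this is not Bit.ly’s actual design.

```typescript
// Hypothetical sketch of a URL shortener as an attention sensor.
// All names here are illustrative; this is not Bit.ly's actual API.

interface ClickEvent {
  slug: string;
  timestamp: number;
  referrer?: string; // where the click came from, when known
}

class Shortener {
  private urls = new Map<string, string>(); // slug -> long URL
  private clicks: ClickEvent[] = [];        // the analytics by-product
  private counter = 0;

  shorten(longUrl: string): string {
    const slug = (this.counter++).toString(36); // compact base-36 slug
    this.urls.set(slug, longUrl);
    return slug;
  }

  // Every resolution is also a measurement: this is the data asset.
  resolve(slug: string, referrer?: string): string | undefined {
    const url = this.urls.get(slug);
    if (url) this.clicks.push({ slug, timestamp: Date.now(), referrer });
    return url;
  }

  clicksFor(slug: string): number {
    return this.clicks.filter(c => c.slug === slug).length;
  }
}
```

Whoever controls the redirect controls the measurement, which is why native shorteners like t.co are such a structural threat to third parties.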

Bottlenose and Bit.ly Realtime: Compared and Contrasted

In any case there are a few similarities between what Bit.ly may be launching and what Bottlenose provides today.

But there are far more differences.

These products only partially intersect. Most of what Bottlenose does has no equivalent in Bit.ly Realtime. Similarly much of what Bit.ly actually does (outside of their Realtime experiment) is different from what Bottlenose does.

It is also worth mentioning that Bit.ly’s “Realtime” app is a Bit.ly “labs” project and is not their central focus, whereas at Bottlenose it is 100% of what we do. Mapping the present is our core focus.

There is also a big difference in business model. Bottlenose does map the present in high fidelity, but it currently has no plans to provide a competing shortening API, or an API about short URLs, like Bit.ly presently does. So currently we are not competitors.

Also, where Bit.ly currently has a larger data set, Bottlenose has created a more cutting-edge and compelling user experience and has spent more time on a new kind of computing architecture as well.

The Bottlenose StreamOS engine is worth mentioning here: Bottlenose has a new engine for real-time big data analytics that uses a massively distributed, patent-pending “crowd computing” architecture.

We have built what I think is the most advanced engine and architecture on the planet for mapping attention in real-time today.

The deep semantics and analytics we compute in real-time are very expensive to compute centrally. Rather than compute everything in the center, we compute everywhere; everyone who uses Bottlenose helps us to map the present.

Our StreamOS engine is in fact a small (just a few megabytes) Javascript and HTML5 app (the size of a photo) that runs in the browser or device of each user. Almost all the computing and analytics that Bottlenose does happens in the browser at the edge.

We have very low centralized costs. This approach scales better, faster, and more cheaply than any centralized approach can. The crowd literally IS our computer. It’s the Holy Grail of distributed real-time indexing.
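
Here is a minimal sketch of that edge pattern, in TypeScript with hypothetical function and endpoint names. It illustrates the general idea only, not Bottlenose’s actual StreamOS code: each client analyzes the messages it has already fetched and ships only a tiny aggregate to the server, which merely merges aggregates.

```typescript
// Sketch of edge analytics: clients do the heavy lifting, the server merges.
// Function names and the endpoint URL are hypothetical.

type TermCounts = Record<string, number>;

// Runs in the user's browser: tokenize locally, count locally.
function analyzeLocally(messages: string[]): TermCounts {
  const counts: TermCounts = {};
  for (const msg of messages) {
    for (const term of msg.toLowerCase().split(/\W+/).filter(Boolean)) {
      counts[term] = (counts[term] ?? 0) + 1;
    }
  }
  return counts;
}

// Ship the small summary (kilobytes, not the raw stream) to the server.
async function reportToServer(counts: TermCounts): Promise<void> {
  await fetch("https://example.com/api/aggregate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(counts),
  });
}

// Server side: merging per-client summaries is cheap, because the expensive
// tokenizing and counting already happened at the edge.
function merge(global: TermCounts, client: TermCounts): TermCounts {
  for (const [term, n] of Object.entries(client)) {
    global[term] = (global[term] ?? 0) + n;
  }
  return global;
}
```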

We also see a broader set of data than Bit.ly does. We don’t only see content that has a Bit.ly URL on it. We see all kinds of messages moving through social media — with other short URLs, and even without URLs.

We see Bit.ly URLs, but we also see data that is outside of the Bit.ly universe. I think ultimately it’s more valuable to see the trends across all data sources, including content that contains no URLs at all; Bottlenose analyzes all kinds of messages, not just messages that contain URLs, let alone just Bit.ly URLs.

Finally, the use-cases for Bottlenose go far beyond just search, or just news reading and news discovery.

We have all kinds of  brands and enterprises actually using our Bottlenose Dashboard product, for example, for social listening, analytics and discovery. I don’t see Bit.ly going as deeply into that as us.

For these reasons I’m optimistic that Bottlenose (and everyone else) will benefit from what Bit.ly may be launching — particularly via their API, if they make their attention data available as an additional signal.

This space is going to get interesting fast.

(To learn more about what Bottlenose does, read this)

 

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence

This article is part of a series of articles about the Bottlenose Public Beta launch.

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)

A New Window Into the Collective Consciousness

Bottlenose offers a new window into what the world is paying attention to right now, globally and locally.

We show you a live streaming view of what the crowd is thinking, sharing and talking about. We bring you trends, as they happen. That means the photos, videos and messages that matter most. That means suggested reading, and visualizations that cut through the clutter.

The center of online attention and gravity has shifted from the Web to social networks like Twitter, Facebook and Google+. Bottlenose operates across all of them, in one place, and provides an integrated view of what’s happening.

The media also attempts to provide a reflection of what’s happening in the world, but the media is slow, and it’s not always objective. Bottlenose doesn’t replace the media — at least not the role of the writer. But it might do a better job of editing or curating in some cases, because it objectively measures the crowd — we don’t decide what to feature, we don’t decide what leads. The crowd does.

Other services in the past, like Digg, helped pioneer this approach. But we’ve taken it further — in Digg, people had to manually vote. In Bottlenose we simply measure what people say, and what they share, on public social networks.

Bottlenose is the best tool for people who want to be in the know, and the first to know. Bottlenose brings a new awareness of what’s trending online, and in the world, and how those trends impact us all.

We’ve made the Bottlenose home page into a simple Google-like query field, and nothing more. Results pages drop you into the app itself for further exploration and filtering. Except you don’t just get a long list of results, the way you do on Google.

Instead, you get an at-a-glance start page, a full-fledged newspaper, a beautiful photo gallery, a lean-back home theater, a visual map of the surrounding terrain, a police scanner, and Sonar — an off-road vehicle so that you can drive around and see what’s trending in networks as you please. We’ve made the conversation visual.

Each of these individual experiences is an app on top of the Bottlenose StreamOS platform, and each is a unique way of looking at sets and subsets of streams. You can switch between views effortlessly, and you can save anything for persistent use.

Discovery, we’ve found from user behavior, has been the entry point and the connective tissue for the rest of the Bottlenose experience all along. Our users have been asking for a better discovery experience, just as Twitter users have been asking for the same.

The new stuff you’ll see today has been one of the most difficult pieces for us to build from a computer-science standpoint. It is a true technical achievement by our engineering team.

In many ways it’s also what we’ve been working towards all along. We’re really close now to the vision we held for Bottlenose at the very beginning, and the product we knew we’d achieve over time.

The Theory Behind It: How to Build a Smarter Global Brain

If Twitter, Facebook, Google+ and other social networks are the conduits for what the planet is thinking, then Bottlenose is a map of what the planet is actually paying attention to right now. Our mission is to “organize the world’s attention.” And ultimately I think by doing this we can help make the world a smarter place. At the end of the day, that’s what gets me excited in life.

After many years of thinking about this, I’ve come to the conclusion that the key to higher levels of collective intelligence is not making each person smarter, and it’s not some kind of Queen Bee machine up in the sky that tells us all what to do and runs the human hive. It’s not some fancy kind of groupware, and it’s not the total loss of individuality into a Borg-like collective either.

I think that better collective intelligence really comes down to enabling better collective consciousness. The more conscious we can be of who we are collectively, and what we think, and what we are doing, the smarter we can actually be together, of our own free will, as individuals. This is a bottom-up approach to collective consciousness.

So how might we make this happen?

For the moment, let’s not try to figure out what consciousness really is. We don’t know, and we probably never will. But for this adventure, we don’t need to. And we don’t even need to synthesize it either.

Collective consciousness is not a new form of consciousness, rather, it’s a new way to channel the consciousness that’s already there — in us. All we need to do is find a better way to organize it… or rather, to enable it to self-organize emergently.

What does consciousness actually do anyway?

Consciousness senses the internal and external world, and maintains a model of what it finds — a model of the state of the internal and external world that also contains a very rich model of “self” within it.

This self construct has an identity, thoughts, beliefs, emotions, feelings, goals, priorities, and a focus of attention.

If you look for it, it turns out there isn’t actually anything there you can find except information — the “self” is really just a complex information construct.

This “self” is not really who we are, it’s just a construct, a thought really — and it’s not consciousness either. Whatever is aware is aware of the self, so the self is just a construct like any other object of thought.

So given that this “self” is a conceptual object, not some mystical thing that we can’t ever understand, we should be able to model it, and make something that simulates it. And in fact we can.

We can already do this for artificially intelligent computer programs and robots in a primitive way in fact.

But what’s really interesting to me is that we can also do it for large groups of people too. This is a big paradigm shift – a leap. Something revolutionary really. If we can do it.

But how could we provide something like a self for groups, or for the planet as a whole? What would it be like?

Actually, there is already a pretty good proxy for this and it’s been around for a long time. It’s the media.

The Media is a Mirror

The media senses who we are and what we’re doing and it builds a representation — a mirror – in the form of reports, photos, articles, and stats about the state of the world. The media reflects who we are back to us. Or at least it reflects who it thinks we are…

It turns out it’s not a very accurate mirror. But since we don’t have anything better, most of us believe what we see in the media and internalize it as truth.

Even if we try not to, it’s just impossible to avoid the media that bombards us from everywhere all the time. Nobody is really separate from this; we’re all kind of stewing in a media soup, whether we like it or not.

And when we look at the media and we see stories – stories about the world, about people we know, people we don’t know, places we live in, and other places, and events — we can’t help but absorb them. We don’t have first hand knowledge of those things, and so we take on faith what the media shows us.

We form our own internal stories that correspond to the stories we see in the media. And then, based on all these stories, we form beliefs about the world, ourselves and other people – and then those beliefs shape our behavior.

And there’s the rub. If the media gives us an inaccurate picture of reality, or a partially accurate one, and then we internalize it, it then conditions our actions. And so our actions are based on incomplete or incorrect information. How can we make good decisions if we don’t have good information to base them on?

The media used to be about objective reporting, and there are still those in the business who continue that tradition. But real journalists — the kind who would literally give their lives for the truth — are fewer and fewer. The noble art of journalism is falling prey, like everything else, to commercial interests.

There are still lots of great journalists and editors, but there are fewer and fewer great media companies. And fewer rules and standards too. To compete in today’s media mix it seems they have to stoop to the level of the lowest common denominator and there’s always a new low to achieve when you take that path.

Because the media is driven by profit, stories that get eyeballs get prioritized, and the less sensational but often more statistically representative stories don’t get written, or don’t make it onto the front page. There is even a saying in the TV news biz that “If it bleeds, it leads.”

Look at the news — it’s just filled with horrors. But that’s not an accurate depiction of the world. Crimes, for example, don’t happen all the time, everywhere, to everyone – they are statistically quite unlikely and rare — yet so much news is devoted to them. It’s not an accurate portrayal of what’s really happening for most people, most of the time.

I’m not saying the news shouldn’t report crime, or show scary bad things. I’m just pointing out that the news is increasingly about sensationalism, fear, doubt, uncertainty, violence, hatred, crime, and that is not the whole truth. But it sells.

The problem is not that these things are reported — I am not advocating for censorship in any way. The problem is about the media game, and the profit motives that drive it. Media companies just have to compete to survive, and that means they have to play hard ball and get dirty.

Unfortunately the result is that the media shows us stories that do not really reflect the world we live in, or who we are, or what we think, accurately – these stories increasingly reflect the extremes, not the enormous middle of the bell curve.

But since the media functions as our de facto collective consciousness, and it’s filled with these images and stories, we cannot help but absorb them and believe them, and become like them.

But what if we could provide a new form of media, a more accurate reflection of the world, of who we are and what we are doing and thinking? A more democratic process, where anyone could participate and report on what they see.

What if in this new form of media ALL the stories are there, not just some of them, and they compete for attention on a level playing field?

And what if all the stories can compete and spread on their merits, not because some professional editor, or publisher, or advertiser says they should or should not be published?

Yes this is possible.

It’s happening now.

It’s social media in fact.

But for social media to really do a better job than the mainstream media, we need a way to organize and reflect it back to people at a higher level.

That’s where curation comes in. But manual curation is just not scalable to the vast number of messages flowing through social networks. It has to be automated, yet not lose its human element.

That’s what Bottlenose is doing, essentially.

Making a Better Mirror

To provide a better form of collective consciousness, you need a measurement system that can measure and reflect what people are REALLY thinking about and paying attention to in real-time.

It has to take a big data approach – it has to be about measurement. Let the opinions come from the people, not editors.

This new media has to be as free of bias as possible. It should simply measure and reflect collective attention. It should report the sentiment that is actually there, in people’s messages and posts.

Before the Internet and social networks, this was just not possible. But today we can actually attempt it. And that is what we’re doing with Bottlenose.
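
As a generic illustration of what “measuring the crowd” can mean (this is a textbook technique, not Bottlenose’s proprietary algorithm), one simple approach is to compare a term’s current frequency against its historical baseline and surface terms whose spike is statistically unusual:

```typescript
// A term is "trending" when its current frequency is far above its
// historical baseline, measured in standard deviations (a z-score).
// Assumes a non-empty history of past counts for the same time window.

function trendScore(currentCount: number, history: number[]): number {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance) || 1; // guard against zero deviation
  return (currentCount - mean) / std;   // how unusual is "now"?
}

// Example: a term that usually appears about 10 times per hour suddenly
// appears 80 times. trendScore(80, [9, 11, 10, 12, 8]) is roughly 49,
// so the term surfaces on its own merits. No editor required.
```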

But this is just a first step. We’re dipping our toe in the water here. What we’re doing with Bottlenose today is only the beginning of this process. And I think it will look primitive compared to what we may evolve in years to come. Still it’s a start.

You can call this approach mass-scale social media listening and analytics, or trend detection, or social search and discovery. But it’s also a new form of media, or rather a new form of curating the media and reflecting the world back to people.

Bottlenose measures what the crowd is thinking, reading, looking at, feeling and doing in real-time, and coalesces what’s happening across social networks into a living map of the collective consciousness that anyone can understand. It’s a living map of the global brain.

Bottlenose wants to be the closest you can get to the Now, to being in the zone, in the moment. The Now is where everything actually happens. It’s the most important time period in fact. And our civilization is increasingly now-centric, for better or for worse.

Web search feels too much like research. It’s about the past, not the present. You’re looking for something lost, or old, or already finished — fleeting.  Web search only finds Web pages, and the Web is slow… it takes time to make pages, and time for them to be found by search engines.

On the other hand, discovery in Bottlenose is about the present — it’s not research, it’s discovery. It’s not about memory, it’s about consciousness.

It’s more like media — a live, flowing view of what the world is actually paying attention to now, around any topic.

Collective intelligence is theoretically made more possible by real-time protocols like Twitter. But in practice, keeping up with existing social networks has become a chore, and not drowning is a real concern. Raw data is not consciousness. It’s noise. And that’s why we so often feel overwhelmed by social media, instead of emboldened by it.

But what if you could flip the signal-to-noise ratio? What if social media could be more like actual media … meaning it would be more digestible, curated, organized, consumable?

What if you could have an experience that is built on following your intuition, and living this large-scale world to the fullest?

What if this could make groups smarter as they get larger, instead of dumber?

Why does group IQ so often seem inversely proportional to group size? The larger groups get, the dumber and more dysfunctional they become. This has been a fundamental obstacle for humanity for millennia.

Why can’t groups (including communities, enterprises, even whole societies) get smarter as they get larger instead of dumber? Isn’t it time we evolve past this problem? Isn’t this really what the promise of the Internet and social media is all about? I think so.

And what if there was a form of media that could help you react faster, and smarter, to what is going on around you as it happens, just like in real life?

And what if it could even deliver on the compelling original vision of cyberspace as a place you could see and travel through?

What about getting back to the visceral, the physical?

Consciousness is interpretive, dynamic, and self-reflective. Social media should be too.

This is the fundamental idea I have been working on in various ways for almost a decade. As I have written many times, the global brain is about to wake up and I want to help.

By giving the world a better self-representation of what it is paying attention to right now, we are trying to increase the clock rate and resolution of collective consciousness.

By making this reflection more accurate, richer, and faster, and then making it available to everyone, we may help catalyze the evolution of higher levels of collective intelligence.

All you really need is a better mirror. A mirror big enough for large groups of people to look into and see, together, what they are collectively paying attention to. Give groups a clearer picture of their own state and activity, and they can adapt to themselves more intelligently.

Everyone looks in the collective mirror and adjusts their own behavior independently — there is no top-down control — but you get emergent self-organizing intelligent collective behavior as a result. The system as a whole gets smarter. So the better the mirror, the smarter we become, individually and collectively.

If the mirror is really fast, really good, really high res, and really accurate and objective – it can give groups an extremely important, missing piece: Collective consciousness that everyone can share.

We need collective consciousness that exists outside of any one person, and outside of any one perspective or organization’s agenda, and is not merely in the parts (the individuals) either. Instead, this new level of collective consciousness should be something that is coalesced into a new place, a new layer, where it exists independently of the parts.

It’s not merely the sum of the parts, it’s actually greater than the sum – it’s a new level, a new layer, with new information in it. It’s a new whole that transcends just the parts on their own.  That’s the big missing piece that will make this planet smarter, I think.

We need this yesterday. Why? Because in fact collectives — groups, communities, organizations, nations — are the units of change on this planet. Not individuals.

Collectives make decisions, and usually these decisions are sub-optimal. That’s dangerous. Most of the problems we’ve faced and continue to face as a species come down to large groups doing stupid things, mainly due to not having accurate information about the world or themselves. This is, ultimately, an engineering problem.

We should fix this, if we can.

I believe that the Internet is an evolving planetary nervous system, and it’s here to make us smarter. But it’s going to take time. Today it’s not very smart. But it’s evolving fast.

Higher layers of knowledge and intelligence are emerging in this medium, like higher layers of the cerebral cortex, connecting everything together ever more intelligently.

And we want to help make it even smarter, even faster, by providing something that functions like self-consciousness to it.

Now I don’t claim that what we’re making with Bottlenose is the same as actual consciousness — real consciousness is, in my opinion, a cosmic mystery like the origin of space and time. We’ll probably never understand it. I hope we never do, because I want there to be mystery and wonder in life. I’m confident there always will be.

But I think we can enable something on a collective scale, that is at least similar, functionally, to the role of self-consciousness in the brain — something that reflects our own state back to us as a whole all the time.

After all, the brain is a massive collective of roughly a hundred billion neurons and trillions of connections that are not themselves conscious or even intelligent – and yet it forms a collective self and reacts to itself intelligently.

And this feedback loop – and the quality of the reflection it is based on – is really the key to collective intelligence, in the brain, and for organizations and the planet.

Collective intelligence is an emergent phenomenon; it’s not something to program or control. All you need to do to enable it and make it smarter is give groups and communities better-quality feedback about themselves. Then they get smarter on their own, simply by reacting to that feedback.

Collective intelligence and collective consciousness are, at the end of the day, a feedback loop. And we’re trying to make that feedback loop better.

Bottlenose is a new way to curate the media, a new form of media in which anyone can participate but the crowd is the editor. It’s truly social media.

This is an exciting idea to me. It’s what I think social media is for and how it could really help us.

Until now people have had only the mainstream, top-down, profit-driven media to look to. But by simply measuring everything that flows through social networks in real time, and reflecting a high-level view of that back to everyone, it’s possible to evolve a better form of media.

It’s time for a bottom-up, collectively written and curated form of media that more accurately and inclusively reflects us to ourselves.

Concluding Thoughts

I think Bottlenose has the potential to become the giant cultural mirror we need.

Instead of editors and media empires sourcing and deciding what leads, the crowd is the editor, the crowd is the camera crew, and the crowd decides what’s important. Bottlenose simply measures the crowd and reflects it back to itself.

When you look into this real-time cultural mirror that is Bottlenose, you can see what the community around any topic is actually paying attention to right now. And I believe that as we improve it, and if it becomes widely used, it could facilitate smarter collective intelligence on a broader scale.

The world now operates at a ferocious pace and search engines are not keeping up. We’re proud to be launching a truly present-tense experience. Social messages are the best indicators today of what’s actually important, on the Web, and in the world.

We hope to show you an endlessly interesting, live train of global thought. The first evolution of the Stream has run its course and now it’s time to start making sense of it on a higher level. It’s time to start making it smart.

With the new Bottlenose, you can see, and be a part of, the world’s collective mind in a new and smarter way. That is ultimately why Bottlenose is worth participating in.

Keep Reading

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)

 

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

Recently, one of Twitter’s top search engineers tweeted that Twitter was set to “change search forever.” This proclamation sparked a hearty round of speculation and excitement about what was coming down the pipe for Twitter search.

The actual announcement featured the introduction of autocomplete and the ability to search within the subset of people on Twitter that you follow — both long-anticipated features.

However, while certainly a technical accomplishment (Twitter operates at huge scale, and building these features must have been very difficult), this was an iterative improvement to search… an evolution, not a revolution.

Today I’m proud to announce something that I think could actually be revolutionary.

 

And here’s the video….

 

My CTO/Co-founder, Dominiek ter Heide, and I have been working for 2 years on an engine for making sense of social media. It’s called Bottlenose, and we started with a smart social dashboard.

Now we’re launching the second stage of our mission “to organize the world’s attention” — a new layer of Bottlenose that provides a live discovery portal for the social web.

This new service measures the collective consciousness in real-time and shows you what the crowd is actually paying attention to now, about any topic, person, brand, place, event… anything.

If the crowd is thinking about it, we see it. It’s a new way to see what’s important in the world, right now.

This discovery engine, combined with our existing dashboard, provides a comprehensive solution for discovering what’s happening, and then keeping up with it over time.

Together, these two tools not only help you stay current, they provide compelling and deep insights about real-time trends, influencers, and emerging conversations.

All of this goes into public beta today.

An Amazing Team

I am very proud of what we are launching today. In many ways — while still just a step on a longer journey — it is the culmination of an idea I’ve been working on, thinking about, and dreaming of for decades… and I’d love you to give it a spin.

And I’m proud of my amazing technical team — they are the most talented technical team I’ve ever worked with in my more than 20 years in this field.

I have never seen such a small team deliver so much, so well. And Bottlenose is them – it is their creation and their brilliance that has made this possible. I am really so thankful to be working with this crew.

Welcome to the Bottlenose Public Beta

So what is Bottlenose anyway?

It is a real-time view of what’s actually important across all the major social networks — the first of its kind — what you might call a “now engine.”

This new service is not about information retrieval. It’s about information awareness. It’s not search, it’s discovery.

We don’t index the past, we map the present. That’s why I think it’s better to call it a discovery engine than a search engine. Search implies research towards a specific desired answer, whereas discovery implies exploration and curiosity.

We measure what the crowd is paying attention to now, and we build a living, constantly learning and evolving, map of the present.

Twitter has always encouraged innovation around their data, and that innovation is really what has fueled their rapid growth and adoption. We’ve taken them at their word and innovated.

We think that what we have built adds tremendous value to the ecosystem and to Twitter.

But while Twitter data is certainly very important and high volume, Bottlenose is not just about Twitter… we integrate the other leading social networks too: Facebook, LinkedIn, Google+, YouTube, Flickr, and even networks whose data comes through them, like Pinterest and Instagram. We also see RSS.
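
One plausible way to handle such heterogeneous feeds (a sketch under assumed names, not Bottlenose’s actual schema) is to normalize every network’s payload into a single common message shape, so that everything downstream, from trend detection to display, operates on one type:

```typescript
// A common shape for messages from any network. Illustrative only.

interface Message {
  network: "twitter" | "facebook" | "googleplus" | "rss"; // source feed
  author: string;
  text: string;
  postedAt: Date;
  urls: string[]; // links carried by the message, if any
}

// Example adapter for a tweet-like payload (field names are hypothetical).
function fromTweet(tweet: {
  user: string;
  text: string;
  created_at: string;
}): Message {
  return {
    network: "twitter",
    author: tweet.user,
    text: tweet.text,
    postedAt: new Date(tweet.created_at),
    urls: tweet.text.match(/https?:\/\/\S+/g) ?? [],
  };
}
```

Each additional network then only costs one more adapter, and the analytics layer never has to know where a message came from.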

We provide a very broad view of what’s happening across the social web — a view that is not available anywhere else.

Bottlenose is what you’d build if you got the chance to start over and work on the problem from scratch — a new and comprehensive vision for how to make sense of what’s happening across and within social networks.

We think it could be for the social web what Google was for the Web. OK, that’s a bold statement – and perhaps it’s wishful thinking – but we’re at least off to a good start here and we’re pushing the envelope farther than it has ever been pushed. Try it!

Oh, and one more thing: why the name? We chose it because dolphins are smart, they’re social, they hunt in pods, and they have sonar. The name is an homage to their bright and optimistic social intelligence. We felt it was a good metaphor for how we want to help people surf the Stream.

Thanks for reading this post, and thanks for your support. If you have a few moments to spare today, we’d love it if you gave Bottlenose a try. And remember, it’s still a beta.

Note: It’s Still a Beta!

Before I get too deep into the tech and all the possibilities and potential I see in Bottlenose, I first want to make it very clear that this is a BETA.

We’re still testing, tuning, adding stuff, fixing bugs, and most of all learning from our users.

There will be bugs and things to improve. We know. We’re listening. We’re on it. And we really appreciate your help and feedback as we continue to work on this.

Want to Know More?

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence

 

Creator of Delicious Wants to Meet Your Needs With Jig

Joshua Schachter, the creator of Delicious, has launched his newest creation, Jig.

At first glance the site seems a bit like Twitter, but it has a different focus. Instead of posting about what you are doing, you post about what you need. Then other people reply with suggestions, ideas, answers, help, or presumably commercial products and services that can meet your need.

This is not a new idea. It’s been done before, at least in print, quite successfully, in the form of “the want ads.” Want ads are classified ads where, instead of offering something, you ask for something. They are basically inverse classified ads, just as a reverse auction is an inverse auction.

But although it’s not groundbreakingly new, it’s beautifully executed and quite simple and elegant. It’s elegant enough in fact that it might catch on. And if it does, it could be quite useful.

The site has some similarities to Quora, but it’s broader. It’s not just about questions and answers – it’s about getting help with any kind of need.

Looking through the initial needs posted by early users, there are requests for restaurant suggestions, a guy asking what gift he should buy for his minimalist girlfriend, a request to understand how UFO propulsion works, requests to hire people, and even a request for affordable health insurance.

There also seems to be quite a bit of spam, or at least unhelpful questions and comments, including some harmless but irrelevant banter. Jig will need to provide a way to rank needs, comments, and authors so that noise is filtered out. This is a problem that Schachter should be able to solve in his sleep, so I’m not worried about it being a barrier to adoption. It will be resolved soon, I’m betting.

There’s a lot of potential here if people actively start sharing their tips and advice for getting needs met. One challenge will be making it easy for people to find needs they can help with. A categorization system, based on hashtags perhaps, would help match needs to your offers or areas of expertise.
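
As a purely hypothetical sketch (Jig’s actual design is not public), matching posted needs to potential helpers could start with simple hashtag overlap:

```typescript
// Extract hashtags from a need's text and score how well it matches a
// helper's expertise tags. Entirely illustrative; not Jig's actual code.

function hashtags(text: string): Set<string> {
  return new Set((text.match(/#\w+/g) ?? []).map(t => t.toLowerCase()));
}

// Jaccard overlap between a need's tags and a helper's expertise tags.
function matchScore(needText: string, expertise: Set<string>): number {
  const tags = hashtags(needText);
  if (tags.size === 0 || expertise.size === 0) return 0;
  const shared = [...tags].filter(t => expertise.has(t)).length;
  const union = new Set([...tags, ...expertise]).size;
  return shared / union;
}

// matchScore("Need #health #insurance advice", new Set(["#insurance"]))
// returns 0.5, which is enough signal to route the need to helpers.
```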

All the product-level issues are pretty easy to solve. This is not rocket science. But a harder problem is how Jig is going to make money. Who is going to have to pay for what? There’s always a catch somewhere — at least if the goal is to build a revenue business.

Will users eventually be charged to post certain kinds of needs? Or is the idea to charge companies, for example, as they are asked to do when posting job ads in Craigslist? Or will there be some kind of reverse auction or group buying angle to this – when enough people have the same need they can pool together and negotiate for a group deal?

Time will tell. But since it’s Joshua Schachter, Jig is bound to get a lot of attention. Check it out for yourself and see if it meets your needs.

By the way, if you’re reading this, tell our reporters at The Daily Dot (@dailydot) what you think of Jig, and whether it’s helped you in any interesting ways. We’re curious to hear your perspective.

The Daily Dot – Our Newest Venture Production – Launches Today!

Today I’m pleased to announce that The Daily Dot, our newest “venture production,” has launched into public beta.

The Daily Dot is the first of its kind – it’s the Web’s newspaper — the first community newspaper about the Web. We cover the Web like a town paper covers its community. Here’s a video overview of the site.

This venture began with the insight that each of us is spending an increasing amount of our lives online, in various online communities, yet we have very little insight into what’s going on in this new landscape. These communities are literally places, and some of them are quite large. This is beautifully illustrated in this “map” of the Web as a geography.

I believe that it’s time for the Web community to have its own newspaper. The launch of the Daily Dot — the Web community’s first actual newspaper of record — is a turning point, a coming-of-age, for the Web as a medium, as a place, and as a community.

Our editorial focus is different from that of other publications that cover the Web. Instead of covering the Web as an industry, a technology or a phenomenon, we cover it as a community. We tell the stories of the people, culture, content, events and issues that are making waves in communities around the Web. And to find and report on these stories, we have embedded reporters in those communities: Facebook, YouTube, Reddit, Twitter, Tumblr, with more communities coming soon.

Just like our physical cities and towns, our online communities are constantly moving and developing, and they are full of interesting people doing newsworthy and important things. The Daily Dot’s mission is to cover these communities just like physical community newspapers cover cities and towns.

Where a town newspaper covers the latest high school sports game, the town meeting, the local crime report, we cover the story behind the hottest viral video sweeping the planet, the latest social movement in Facebook, and important issues (like cybercrime or online bullying) that are happening in our online neighborhoods.

When a major event happens in the physical world – like the revolutions in the Arab world, for example — we don’t cover the events themselves, we cover their online footprint — what’s happening online that relates to the story.

The Daily Dot will also cover what’s happening around the Web in time: just as physical community newspapers have calendar sections, The Daily Dot has an online events section, provided in partnership with Live Matrix, one of our other venture productions, that aggregates the schedule of the Web. These two companies are highly synergistic and form the beginnings of our online media network.

While those of us in the Web industry have our fingers slightly more on the pulse of the Web, the vast majority of people who use the Web do not read industry blogs and have little or no visibility into what’s going on in the online world or where it’s headed. Other than a few articles a week published by mainstream media, they are not being informed.

It’s time for that to change. The Daily Dot will be publishing dozens of articles each day about what’s happening online. We’re writing for the mainstream, not for elites or geeks. The Daily Dot is for the people who use the Web — who live in it — not just the people who are building it.

Our content is designed to be entertaining, interesting, informative — and sometimes edgy and controversial – kind of like People Magazine meets USA Today, with a little bit of TMZ thrown in.

If you want to know what’s happening online, or you’re looking to find the hottest emerging entertainment, personalities, viral videos, issues, etc — and the stories behind them — The Daily Dot is your newspaper.

But The Daily Dot is not just a newspaper, it’s also a very interesting business venture. It’s a chance to build what could become one of the largest circulation newspapers in the world someday – a global newspaper about the one community that we all share in common, no matter where we actually live.

I also want to congratulate and thank the amazing editorial and development team at the Daily Dot, who made this possible. And most importantly, I want to acknowledge Nicholas White (Daily Dot CEO), Owen Thomas (Daily Dot founding editor), and Josh Jones-Dilworth (marketing guru), my co-founders in this venture.

Nick and Owen are leading business and editorial and running the operations, while Josh and I are on the board, advising in our respective areas of expertise. Nick and Owen deserve all the credit here — they have done the heavy lifting to bring this vision to market, and I’m very proud to be working with them.

Please join us in helping to spread the word about The Daily Dot — it’s your newspaper — and we need your help to make it great (and we look forward to your feedback and participation in the comments).

This is going to be a fun ride and I can’t wait to see how it evolves.

Sharepocalypse Now

The social media landscape is changing quickly, but this change won’t be immediate, or for that matter, efficient. And that’s going to be a big problem for all of us.

I believe that Twitter, Facebook, Google+ and LinkedIn are fundamentally different, and thus, should not be in competition. However, I’m not sure the companies themselves see it this way. It’s likely they will continue dedicating resources to competition instead of differentiation.

And while the social media gods fight it out in the clouds above us, what will happen down here on Earth? What about all of us, the little people — the users?

We’re entering a new era of social network chaos, and this, in turn, is going to create new needs and opportunities for startups.

The Sharepocalypse

Welcome to the “Sharepocalypse,” a new era of social network insanity.

READ THE REST OF THE ARTICLE HERE

[Excerpt From My TechCrunch post] Why Twitter Should Adopt a Freemium API Model Immediately

TechCrunch kindly ran my most recent article today — the full version is available here.

Here is an excerpt:

I’ve been puzzling over Twitter’s recent tactical moves around their API, Ubermedia and Tweetdeck, for a few months now, and it just doesn’t add up. In fact I think Twitter’s current strategy may take them in a direction where they end up missing out on their biggest potential win.

If Twitter continues to go down the media company path, without incorporating their API into the plan, that could not only force a large part of their ecosystem to go elsewhere, but it could deprive them of a much larger potential infrastructure revenue opportunity, and could even end up costing them the company.

After all, Silicon Valley is littered with the burned-out wreckage of once-great media companies that failed to create and keep third-party app ecosystems: AOL, Friendster, MySpace, Yahoo – to name a few. It’s very hard to maintain leadership as an online media company without an ecosystem of outside apps increasing reach, innovation, and stickiness.

In light of this, I’ve been exploring an alternate path for Twitter that leverages their API in a much bigger way, and this path appears to be a better strategy. According to my own experimental revenue projections for Twitter, this alternative path is not only a good tactical move, but a good business move, because it increases Twitter’s reach, number of active users, and revenues massively.

….. Read the rest here.

Announcing my newest production, The Daily Dot

I’m pleased to announce that my newest venture production is beginning to unstealth. It’s called The Daily Dot and it promises to be “the hometown newspaper of the Web” — the community newspaper for the Web.

The story of The Daily Dot began several years ago when I was thinking about where the Web was headed. At the time I was thinking a lot about the future of emerging online communities such as Digg, Facebook, the early days of Twitter, and even my own Twine.com — as well as about the growth of fully immersive games and virtual worlds like World of Warcraft and Second Life. I realized that these communities really were virtual places, and some of them literally even contained the equivalent of towns, leagues, guilds, nations.

But when I looked at how the media was covering the Web at the time, I saw a huge gap. Coverage broke out into two areas: stories aimed at an industry audience (TechCrunch, ReadWriteWeb, GigaOm, Venturebeat, Techmeme) and coverage aimed at an early-adopter tech audience (Wired, Engadget, Slashdot, Boing Boing). But collectively, these audiences made up only a small slice of the overall Web audience pie.

Even more notable was that nobody was covering online communities like places — the way that newspapers cover nations, cities and towns. There were no local reporters, no embedded reporters, no stringers or correspondents in various online communities. In short, traditional media was covering the Web like a technology, not like a place.

Where was the coverage for the majority of the audience? The mainstream consumers who spent the better part of every day in this new place we call the Web? Where was the coverage of what was happening in the communities on the Web? The stories about the people on the Web? The stories for the people who used the Web?

Curiously, when I dug into this, I found that the mainstream was receiving scattered attention, in the form of only a few human interest or business articles per week in the major national media outlets (New York Times, Wall Street Journal, CBS, NBC, ABC). Mostly these articles were either curiosities or they were about big financial deals around hot companies. They too failed to address the Web as a place.

While the Internet industry audience was being deluged with thousands of geeky articles and blog posts every day, the mainstream audience was for the most part being ignored by the media. My friends in that mainstream audience confirmed this – they had no clue at all about what was really going on online, even around topics they cared about: brands, celebrities, music, major privacy issues that would affect them, the birth and death of major online services, new social trends and memes, new legislation, cybercrime. The small amount of this news they were aware of reached them weeks after it was fresh, when the major outlets finally covered it.

I realized that here was an opportunity – in fact a need – for a newspaper that covered news about the Web for the people who use the Web — mainstream people. A newspaper by and for the people of the Web (in other words, all of us). A newspaper that would cover the Web like a place and as a community. In further discussions about this concept, my wife, Kimberly Rubin, came up with the perfect name, “The Daily Dot” and I went about buying the domain name.

The idea gestated and grew. I couldn’t stop thinking about it. Finally, I decided this was a good enough idea to actively produce it in my new “venture production studio” — and so, with that in mind, I began looking for the right CEO and co-founders to produce the venture around.

As fate would have it, I had been introduced to Nicholas White through my longtime friend and PR guru, Josh Jones-Dilworth. Nick and I had been circling for a while. He had this “young Richard Branson” vibe — which everyone comments on after they meet him. I knew he was going to be someone important but I wasn’t sure exactly in what way.  Then it hit me.

Nick grew up in a newspaper family, working in the print newspaper biz. For over a century his family has been running community newspapers; today they own 22 newspapers and radio stations. Like me and Josh, Nick had been thinking about the same problem — how to evolve community news reporting for the new millennium — but from the perspective of the newspaper business.

Nick was thinking about how to save the newspaper business — thoughts he elaborated on this week in a new article about how he hopes to save the newspaper business by leaving it. As we spoke about the Daily Dot and his own ideas, I realized Nick had both the pedigree and the passion to build what I had envisioned. Nick was the perfect CEO for the Daily Dot. And so we invited him to co-found the venture.

With Nick’s experience at the helm, we are already making great progress. An example of this is last week’s announcement that the widely followed editor Owen Thomas has left his position as executive editor at VentureBeat (a terrific publication that I read every day) to join Nick and the team as founding editor of the Daily Dot. And around Nick and Owen we are already growing a team of really awesome editors, writers, designers, coders, marketers and investors. It’s really starting to take shape, rapidly.

Owen in particular brings a strong editorial background, and is already helping to focus our strategy. We were all impressed by Owen’s incredible network of connections to the movers and shakers of the Web, as well as to the users of the Web — and also with his knowledge of all things media, pop culture, gossip, fashion, design, entertainment and more. He really understands what people use the Web for, and he’s got a great nose for news. In short he’s got exactly the right mix to head up the Daily Dot’s editorial strategy.

As Owen explains it, The Daily Dot is going to cover the Web in a new way: it’s about people. We’re going to cover the Web not just as a technology or an industry, but as a community — actually a community of communities — spread across a virtual landscape of online places. Some of these communities, like Facebook, are even larger than physical nations, and contain communities within them that are larger than many cities. Others, like World of Warcraft, are complex parallel worlds complete with warring factions and their own economies.

And there are many other vibrant communities on the Web: YouTube, Etsy, Second Life, 4chan, the WordPress blogging community, Tumblr, Reddit, and literally millions of micro-communities around vertical interests. These communities have people in them — yes, actual people, not just technologies and venture capitalists! And these people have stories, stories we want to know about. And so do the people who participate in them. But who is telling those stories?

Imagine a nation or city without its own daily newspaper — how would people know what’s going on, what would hold it together, would it even feel like one nation or city at all? A newspaper is a critical enabling catalyst that transforms a crowd into a community. It gives people news, but also a sense of place, a sense of belonging, a sense of community. It tells the story of the place, it holds the record of the place. On a deeper level, a newspaper provides a mirror of the whole back to the parts, enabling an essential feedback loop. In short, newspapers are the lifeblood of communities.

The Web today is like that nation or city without a newspaper. It’s missing something essential – the one key catalyst that will transform it from a crowd into an actual community. By providing the Web with its own newspaper, The Daily Dot will make the Web feel more human, more connected, and more cohesive. And this is really important.

The Daily Dot aims to be the community newspaper for the Web as a whole, as well as for each of the communities within it. And by doing this, we may just end up playing a key role in the life of the Web. But that’s just the beginning: we may also become the first truly global newspaper; the newspaper with the largest daily readership on the planet. After all, what newspaper today has 6 billion daily readers?

There is no geographic print newspaper audience that large. But The Daily Dot is not limited by geography; it has a real chance at achieving a truly global readership by covering the one community that everyone on this planet has in common: The Web. It’s the first newspaper that everyone may actually read every day.

The Daily Dot is still young – in fact we haven’t really even launched it yet. And as we launch it’s going to be a work in progress: We’ll be starting with a series of experiments, a newsletter, and some explorations of new approaches for involving the community in making its own news, and then we’ll be launching a major new site — currently in private beta.

Meanwhile, we’re hiring writers and editors, so if you share our passion for this mission (and you’re awesome) definitely apply. We look forward to hearing your stories.

 

What I’ve Been Up To: The Venture Production Studio Model

UPDATE NOTE: January 19, 2015:

The original post below was written in 2011. In that article I discussed the concept of venture production studios as a model — what are now being called “startup studios.”

I actively incubated 7 “ventures” starting in 2011 (not including a number of angel investments that I made but did not actively incubate to this degree), using my venture studio model.

Since that time there have been several developments of note:

2 Exits:

  • My Klout exit was definitely a giant home run from an ROI perspective for me as an investor. Very pleased with the outcome there. The founding team did an amazing job getting that to an exit.
  • Live Matrix was acquired by OVGuide, the leading independent video portal. The company is growing revenues and users steadily under (my Live Matrix co-founder) Sanjay Reddy’s guidance — and I’m optimistic about the future there.

2 Companies with Multiple Up Venture Rounds + Growth:

  • Bottlenose has raised several significant funding rounds, most recently from KPMG Capital, and continues to grow into a potentially important company in the big data and analytics space. Because Bottlenose was directly in my wheelhouse I ended up taking an active role as CEO there.
  • The Daily Dot, under the guidance of my co-founder and CEO, Nicholas White, raised significant venture funding rounds, and now reaches over 20M monthly readers: it is on track to hit 40M readers in 2015 – making it among the fastest growing online publications in history (beating historical 36-month audience growth of many top media brands like the Huffington Post and others).

1 Setback:

  • StreamGlider didn’t make it, in part due to the tragic loss of Bill McDaniel, the CEO/CTO of that project, who passed away in 2014 after a battle with cancer. It’s a sad loss for us, but Bill’s work and vision were groundbreaking and I’m proud of what the team built under his guidance. (Note: StreamGlider is looking for a buyer for the (still quite awesome) platform and app; if anyone is interested in that, please reach out to me).

1 Long-Term R&D Science Project:

  • “Project Nikola” (Energy Magnification Corporation) continues to do R&D on its potentially world-changing new electricity platform technology, but is not yet ready for outside venture funding.

1 Pro-Bono Data Project:

  • The Earth Dashboard continues as a non-profit.

So out of 7 projects that I seed-funded and helped to start, there have been 2 exits so far, 2 successful ventures that continue to grow and raise further funding rounds, 1 that didn’t make it, 1 slow R&D effort, and 1 that continues as a non-profit. That’s not a bad ratio, in fact; it’s still much better than the 1-in-10 hit rate of big VC funds.

Bottlenose became so all-consuming for me that, other than angel investing in a number of interesting companies and advising several others, I haven’t originated or co-founded anything new during this period.

Ultimately I think it would certainly be possible to raise additional funding to grow my “startup studio,” and it’s been a dream of mine to do that eventually. But for the time being I think there’s still more work to do to bring the current portfolio to fruition, so that remains my current focus.

ORIGINAL POST IS BELOW THIS LINE

I’m writing this post since many of my friends and colleagues have gotten wind of some news and asked me what I’m up to. This is just the first in a series of articles I’ll be writing on this topic.

In a nutshell, I’ve been working behind the scenes for the last year to co-found and angel invest in a number of exciting new ventures. Several of these ventures will be launching soon, and so it’s time to begin telling the story of what they do, and the big idea behind them: a new approach to building startups that borrows from how Hollywood produces movies.

And in keeping with this, I’ve moved with my wife Kimberly Rubin (a TV producer with 11 movies to her credit), from San Francisco to Los Angeles, where Hollywood production studios began. I believe LA is a great place to build this concept out.

I call this new model of venture incubation the “production studio model” and in this approach I work as a producer of ventures, not merely a founder or angel investor.

As well as being a better fit for the needs of early-stage startups than the typical angel investor or VC approach, the production studio model has enabled me to start a number of really excellent ventures, at less cost and in less time than I ever thought possible.

Production Portfolio

But before I go into more detail about the model, here is the current portfolio of companies that I am actively producing. With the exception of Live Matrix and Klout (which were started earlier), all of these companies were started in the last year and will be launching soon:

  • Live Matrix — The schedule of the Web. Live Matrix is the only guide to what’s happening, when, online – across all media types (video, audio, chat, gaming, shopping, and more). I co-founded and seed-funded this venture with CEO Sanjay Reddy, and I continue to incubate it and actively participate, as I have since we began, by serving on the board and helping with product strategy, marketing, and technology. Live Matrix is launched and busy making deals and launching new features. More news coming soon. (UPDATE: Sold to OVGuide)
  • Klout — Klout is the standard for measuring influence. I discovered the company when I was a judge at the SXSW Accelerator in 2009. I was really impressed with the founders and soon became the company’s first outside investor. Since then I have served actively as an advisor to the company. They recently raised a terrific venture round with Kleiner Perkins and are off to the races. (UPDATE: Sold to Lithium)
  • Bottlenose — You’ll be hearing a lot about this venture soon. I co-founded this company with Dominiek ter Heide. As well as seed-funding the company, I’m taking an increasingly active role in helping to build this company. (UPDATE: See company site for more info on funding and growth)
  • The Daily Dot — The Daily Dot is a new online newspaper about the Web, for consumers. Most coverage of the Web today is targeted at the tech industry, a tiny fraction of the audience, but the Daily Dot will cover the Web for the majority of the audience: consumers who spend much of their day, every day, online. The Daily Dot hasn’t launched yet, but it’s going to be an exciting company. I co-founded it with newspaper-industry CEO Nicholas White and co-founder Josh Jones-Dilworth. More news will be coming out soon! (UPDATE: As of January 2015, The Daily Dot has more than 20 million monthly readers and is growing fast!)
  • StreamGlider — StreamGlider is a new visual real-time dashboard for tracking interests across various types of devices, starting with the iPad. It’s got a gorgeous user interface and some novel features that are especially suited to keeping up with streams of rich media. I co-founded this venture with two leading technologists in the information filtering and semantics space, Bill McDaniel and John Breslin. This will be launching soon as well. More news will be available at launch. (UPDATE: On hold, see notes above)
  • “Project Nikola.” This new venture has a breakthrough new energy technology and is totally in stealth. This isn’t even its real name; even the name is a secret, for now. What I can say currently is that it really works, it’s mind-blowingly cool, and it just may disrupt the entire power grid someday. But there’s still a lot of R&D to do before we release it. (UPDATE: As of 2015, R&D continues actively; funded internally by the original investors and me)
  • The Earth Dashboard. This is a not-for-profit initiative (yes, I sometimes help produce game-changing nonprofits too) that is working to create an interactive live dashboard about the state of the planet, bringing together and visualizing all the key global indicators in one place for the first time. This project is led by Medard Gabel, who worked with Buckminster Fuller, with creative direction by Mia Hanak and her accomplished museum exhibit design team. The Dashboard will be available online as well as in major physical public locations around the globe. (UPDATE: Continues as a non-profit; site is online)

Several of these companies have a common thread, and a common passion for me: they are focused on helping people filter the Web and big data in potentially disruptive ways. Some are using “Big Data” analytics, data mining and extraction, natural language processing, machine learning, and semantics to understand the Web. These are areas I am deeply familiar with from my many years working in information filtering, AI, and search. The Nikola project is an exception: it is outside of the Internet space, but springs from a multi-decade interest I’ve had in radical alternative energy technologies.

A History of Incubation

Since 1994, I’ve been involved in starting companies as an entrepreneur, and since 2000 I’ve also been an angel investor. Through incubating numerous ventures (my own and those I’ve angel invested in), I’ve gained some insight into the art of incubating startups.

But one of the best experiences I had was starting one of the more successful incubators, nVention, at SRI, which I conceived of and co-founded with Norman Winarsky (now head of ventures at SRI) in 1999. nVention is now global and has launched more than 40 ventures.

Unlike many incubators, nVention acts in a very hands-on way. First of all, most of nVention’s work focuses on creating ventures to commercialize intellectual property that originated at SRI. Secondly, nVention brings teams of internal and external experts together to help incubate its companies from concept stage through commercialization. In effect, nVention acts like a production studio, and the people who work there function like producers. It’s a model I’m emulating, albeit in a more grassroots and distributed way.

The Venture Production Model

As a producer, I work actively to develop new original intellectual property, or to source it from great innovators, and then I angel invest and/or bring funding to the deal, shape products and strategies, build teams, invent and develop products, and actively grow companies and take them to market. In many cases, the ventures I’m producing are originated by me, but I also have several ventures in my portfolio that were originated by others, or in partnership with others.

The key to this approach is that I usually get involved at or even before the concept stage, before there is a real team, and I actively work to shape the idea into a venture, from concept through commercialization.

To accomplish this across more than one venture at a time, I partner with excellent people to co-produce these companies. In some cases I act as the startup CEO; in other cases my partners do, or we find and partner with the right person to be CEO. Often I find myself partnering with entrepreneurs and helping to coach them to be CEOs. But in all of these cases, we focus on producing ventures together, as a team. And we’re all in it for the long term. Because I’ve found excellent partners to work with, including excellent CEOs where needed, I’m personally able to focus more intensively on helping each venture. This has worked very well so far.

The production approach to venture creation is quite different from the “fire and forget” or “spray and pray” (or “pay and pray”) model that many VCs and angel investors are engaging in. Instead of spreading lots of fairly hands-off bets across dozens of companies, in the production model I really focus and get deeply hands-on with a pipeline of projects at various stages.

This is the opposite of the index fund or hedge fund approach that some funds are taking in the Valley. And I think it is a much better fit for the needs of early-stage companies.

The production approach is also different from what many incubators and start-up accelerators are doing these days. Incubators and accelerators play an important role in the startup ecosystem, but the key difference is that in the Hollywood-inspired production model I’m testing, I often start earlier in the process – before there is a concept, company, product, CEO or even a team. Many incubators and accelerators start later in the process.

Here’s how it works. My associates and I source candidate ideas from our own stream of inventions, from people in our network, and from other innovators we find or who find us. Next we “option” the best ideas with joint R&D agreements and/or with initial prototype funding to test them out. We then filter these prototypes and choose the best ones to produce.

We then form companies, build teams, develop business plans, branding, and strategy, and bring additional funding together to develop the commercial offerings, launch, market, and grow them into full ventures. It’s a very hands-on process, and just like movie production, we usually have a number of projects going at each stage of our pipeline at once.

The process is similar to producing a film. With a film, first you have to create or find the story, then hire writers and a director, recruit talent and build a production team, get the financing and early distribution deals in place, shoot the film, do post-production, get broader distribution, and market and release the film. In the early stages of companies, we all wear many hats, and as they grow, we specialize. In my own case, I assume different roles and levels of involvement according to the needs of the ventures as they grow.

The New Role of the Producer in Tech

A key to this process is understanding the new role of a producer in the venture world. It’s not exactly the same as the role of an angel or VC, or an EIR, or even a typical “superhero CEO.” It’s a new role that connects them all together.

Another key is having a model that is designed to attract excellent producers and talent to team up with. To do that, I’m working with a structure that gives my partners a better opportunity than they can find anywhere else. Whether it is an externally or internally originated venture, the model I’m working with is, by far, the most entrepreneur-friendly deal in the entire industry.

In most of my ventures today I take a minority position, or at most an equal-partner position, with my co-founders – even in the ventures I originated and funded. In some of the ventures I have angel funded, I have continued to maintain the original equity split with my co-founders, even as the amount of funding I have contributed has increased over time. Where other angels and VCs take the approach that money is everything, I take the exact opposite view: talent is everything. Talent is rare, and it’s the lifeblood of ventures. I don’t believe it is healthy for any company to have the investors take control away from founders; too often that results in disaster. My model is all about cultivating and facilitating the founders.

Why do I do this? Because I believe that you get the best out of people when they really feel they own their venture, and when they feel respected and valued for their contributions. Part of this is because, as an entrepreneur myself, I have experienced life on the other side. I know intimately what it is to be an entrepreneur, and I know how some VCs, and even some angels, take advantage of entrepreneurs and founders, suck the life out of companies, and destroy businesses by over-controlling them. I vowed NOT to be like that.

The terms I offer are the same terms I always wanted for myself. No bullshit and no games. We all succeed or fail together. It’s a true partnership. I’m not betting the odds across dozens of companies and expecting only 1 or 2 to survive; I’m NOT doing shotgun investing like most angels out there. I’m making extremely careful, deeply involved, long-term commitments to build companies side-by-side with my partners. And I don’t just talk about this, I do it: my model reflects this.

At the same time, I also spend money in a different way than angels or VCs. For example, in many of my projects I’ll start on spec with a developer and an idea. No money is initially invested. They work on the idea and prove they have the goods. Then, if that works, there is a small grant to “option” it and we develop it further, much like a production studio options a story. I then work with the technical or product teams to see what they can deliver with this initial grant – usually a prototype. This is a test.

If the test goes well and progress is good, we make a production deal in which money and time are invested in stages, by myself and others in my network, as needed, rather than all at once. We’re not talking about huge dollar investments early on; it’s frugal and careful, but it’s enough, and it’s extremely value-added. Later in the process, when the time is right, more money can be brought to work.

Building out the Production Studio

Another way I help my portfolio companies is by bringing pre-negotiated deals with handpicked best-of-breed vendors for many of the services they need. I bring the top law firms and patent teams, the best PR and marketing teams in the business, and accounting and HR services, from partners I know and trust. This saves ventures valuable time and money, and protects them from making early mistakes.

Once the ventures are aimed at a clear target and have something to show, we go together to other angels and venture funds in my network to raise the roll-out money. There’s no reason to give up equity until we need to. The result is that entrepreneurs who work with me end up owning bigger stakes of their ventures than they would if they worked with traditional angels and VC’s.

On the investment side, I’ve been meeting with interesting angel investors – some are pros and some are new to this – and we’re teaming up to jointly fund these ventures together. By working with a production team such as mine, angel investors can put their money to work with less hands-on effort on their parts – because we’re doing the production work for them. That doesn’t mean they aren’t involved – we consult with them as much as they want and we actively solicit their feedback and ideas at every step — but it means they can have confidence that an experienced team is doing the groundwork day-to-day. By working with producers like myself and my team, angels and funds can spread their bets without being spread too thin. If you’re an angel investor, or even a VC, and you’re curious about working with our network, drop me a line.

On my own side, of course, I have to be very picky about which ventures I get involved with, so that my team and I are not spread too thin as well. For that reason, we only allow about 4 ventures at any stage (from concept stage, through R&D, to beta and commercialization) of our pipeline at once. But this is a model that I think can scale as we add more producers and team members to the mix. Scaling this is an area that I am actively thinking about right now. If you’re an exceptional venture producer and you would like to be involved, get in touch. This is not about anyone being a superhero; it’s about creating an awesome and highly collaborative team, supported by an incredible network.

There’s so much more I could say about all this, and in time, I’ll write more about it. You will be hearing a lot more about this model in the coming months, along with news about all of the ventures we’re producing, including some new ventures not listed above that are in the pipeline and will become visible in the future.

In the meantime, I’ll be posting here and on my Twitter feed about my thoughts and what we’re learning as this goes forward. I welcome your thoughts too; this is one of the reasons I’ve written this. So please don’t hesitate to ask questions, offer advice, or share observations. Stay tuned, it’s going to be an adventure!

— Nova

The Schedule of the Web: Live Matrix – Launched Tonight

Tonight I am pleased to announce that my next Big Idea has launched. It’s called Live Matrix and I invite you to come check it out.

Live Matrix is the schedule of the Web. We help you find out “What’s When on the Web”: the hottest live online events, including concerts, interviews, live chat sessions, game tournaments, sales, popular Web shows, tech conferences, live streaming sports coverage, and much more.

It’s like what TV Guide was for TV, but it’s not for TV, it’s for the Web. There are all kinds of things happening online, and while Live Matrix includes a lot of live streaming video events, there is much more than just video in our guide. Live Matrix includes any type of scheduled online event, but we don’t include offline events: to be in Live Matrix, an event must enable people to participate online.

The site combines elements of a guide, a search engine, and a DVR, to help you discover events and then get reminded to attend them, or catch them later if you missed them.

The insight that led to Live Matrix was that the time dimension of the Web is perhaps its last big greenfield opportunity. It’s an entire dimension of the Web that nobody has made a search engine for, and that nobody is providing any guidance for. Nobody owns it yet; it’s a whole new frontier of the Web.

There are millions of scheduled events taking place online every day. Some of these events are very cool, some are very relevant — but there is no easy way to find out about them. To find out what’s happening when on TV for example, we have TV Guide, but there is no equivalent for finding out what’s happening when on the Web.

In my own case I kept finding out about cool online events that I would have participated in — concerts, conference streams, webinars, online debates and interviews, and sales —  if only I had known they were happening. I think many Internet users have experienced this.

Google, Yahoo and Bing all focus on what I call the “space dimension” of the Web — they help you find what’s where — where is the best page about topic x? — But they don’t help you find out what’s when — what’s happening now, what’s coming next. They only help you find out what’s already finished and done with. How do you find out what’s happening now? How do you know what’s upcoming?

It was an “aha moment” when this all became clear — there is a new opportunity to be the Google or Yahoo for the time dimension of the Web. Or at least to be the equivalent of a TV Guide for the Web.

Furthermore, all trends point to this being a big opportunity. The continued growth of the realtime Web (Twitter, etc.) and the emerging Live Web (video and audio streaming) has been discussed extensively in the media; most recently, comScore reported nearly a 650% increase in time spent viewing live video online.

So with this opportunity clearly in mind I set about looking for a co-founder who would be the right person to team up with, someone who would be the CEO.

That person was Sanjay Reddy. Soon after I met Sanjay it was clear to me that he was exactly the right guy to partner with: his background in media and technology was what impressed me (for example, he was head of corporate development, strategy, and M&A at Gemstar-TV Guide, where he led the $2.3 billion sale of the company to Macrovision, and he had also worked at other Silicon Valley startups and investment banks).

Sanjay and I spent quite a bit of time just talking about ideas and eventually decided to join forces. My Lucid Ventures incubator, along with Sanjay, seed-funded the new venture and named it Live Matrix, to go after our mutual vision.

Soon after Sanjay joined we were fortunate to be joined by our two highly experienced colleagues, Edgar Fereira (formerly VP of data for TV Guide Data and TV Guide Online) and Tobias Batton (serial entrepreneur, product manager, game designer). Then others joined around us.

Eventually we formed a small (but awesome) startup team and began working on a prototype, and eventually an alpha. We debuted a closed beta preview at TechCrunch Disrupt last spring and received enthusiastic reviews. Today, we are releasing our public beta.

Read the full press release here.

I hope you like what we’ve created so far. But please note it is still a BETA. We are interested in your feedback, and we have already received a lot of it from our private beta. Here are some of the ideas we are working on for our next few releases:

  • The Number One request we have received so far is to make it easier and faster for people to find events that would interest them. So for the remainder of the year one of our big priorities will be to add in more personalization and recommendations.
  • We’re also working on new UI concepts, including some more ways to view the schedule of the Web.
  • And we’re going to make it easier and faster for you to add events to Live Matrix — we’ll be launching improvements to our publisher tools section, as well as more ways for people to suggest events for us to list.
  • And we also plan to add new categories of events — for example, Business, Technology, Games, and more.

So stay tuned! Live Matrix is just getting started. But this could be the start of something big.

ps. Here’s a screencast with a quick tour of Live Matrix

Live Matrix Demo from Doug Freeman on Vimeo.

Web 3.0 Documentary by Kate Ray – I'm interviewed

Kate Ray has done a terrific job illustrating and explaining Web 3.0 and the Semantic Web in her new documentary. She interviews Tim Berners-Lee, Clay Shirky, me, and many others. If you’re interested in where the Web is headed, and the challenges and opportunities ahead, then you should watch this, and share it too!

Is Live Content More Valuable than On-Demand Content?

I have started blogging about a new concept that I call The Scheduled Web. The Scheduled Web is the next evolution of the Real-Time Web, in which it will become possible to actually navigate the time dimension of the Web more productively.

There is a popular misconception that on-demand content, such as archived video, is more valuable than live content. But in fact, this may not be the case.

Live content has built-in perishability that makes it potentially more valuable than on-demand content – if relevant audiences can find it while it is live. If a piece of high-demand content is only live for a short period of time it can attract more traffic in less time, provided that people who would want to participate interactively (or even transactively) in it are notified beforehand.

More demand in less time translates to higher advertising revenues, or higher prices in time-based sales like auctions. A series of high-demand live events could actually earn more revenues than a series of on-demand content releases in any given unit of time.
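
To make the arithmetic concrete, here is a toy back-of-the-envelope comparison. All of the numbers (CPM, audience size, time windows) are made up purely for illustration:

```python
# Toy comparison (made-up numbers): revenue *density* of live vs. on-demand.
# Both pieces of content reach the same audience at the same hypothetical CPM;
# the live event simply concentrates that audience into a short window.

CPM = 10.0  # dollars of ad revenue per 1,000 views (hypothetical)

def ad_revenue(views: int, cpm: float = CPM) -> float:
    return views / 1000 * cpm

live_views, live_hours = 500_000, 2      # audience packed into a 2-hour live window
vod_views, vod_hours = 500_000, 30 * 24  # same audience spread over a month on-demand

print(ad_revenue(live_views) / live_hours)  # 2500.0 dollars per hour while live
print(ad_revenue(vod_views) / vod_hours)    # ~6.94 dollars per hour on-demand
```

Total revenue is identical in this toy case; the point is that the live event earns it hundreds of times faster, which is exactly what time-based pricing (auctions, flash sales) and premium live advertising can capitalize on.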

A live event is only live for some limited period of time, after which, even though it may later be available in archived form, the event is finished; it is no longer a live event. If you want to get the live experience and be able to actually participate in a live event, you have to be there. It isn’t the same to watch it after the fact. And in some cases, for example auctions, sales, games, contests, and chats, if you miss the event you can’t participate and may not even be able to access an archived version (if you even wanted to).

Live events are the best of both worlds for several reasons:

1. They have extra perishability because they are live, giving people a stronger incentive to participate synchronously when they are actually happening. Furthermore, if a live event is also interactive in some way, it is even more valuable to those who are present. A good example of this is American Idol, where, for instance, the audience can participate in the voting process that selects finalists. Interactivity makes the show more engaging and gives viewers a sense of ownership and personal investment in the content.

2. Live events can also be archived and made available on-demand as well. The key to getting this double layer of value out of live events is to schedule them so that they can be found before or while they are actually live. This amplifies the initial demand for and attendance of the event, and also gives any archived version that follows added social virality.

At Live Matrix we believe it is incorrect to assume that the television model carries over directly to the Web. The Web is an entirely different medium because it is two-way, interactive, both synchronous and asynchronous, and distribution is open to anyone and portable across any device. Television over the Web is going to be different than TV on cable and satellite networks. The fact that consumers can consume Web video content asynchronously is a plus, but it doesn’t obviate the need or opportunity for live synchronous content on the Web. In fact, for any event that requires or even wants to leverage interactivity, live synchronous attendance by audience members is a key part of the experience.

There are many use-cases where live synchronous content consumption cannot be replaced by asynchronous content consumption — for example a live chat, or a time-limited sale or auction, or a multiplayer live game. Even in the case of video and audio there are many cases where live synchronous content is more valuable than asynchronous on-demand content. For example, who wants to watch the Super Bowl months after the game is over? Who really wants to watch a major presidential address or a press conference weeks later? Who wants to watch video of election coverage months after it’s decided? These kinds of “timely” events are live by their nature, and part of the value of consuming the content is the act of doing it in a timely manner.

The value of live interactive content becomes even clearer when you consider that content which is originally streamed live can generate more revenues over its lifetime than simply recorded, on-demand content alone. The Scheduled Web will thus even improve traffic and revenues for on-demand content, if that content can be initiated as live events, or at least paired with them in some way.

The value of the Scheduled Web will be realized as not simply a schedule of video content, but of all scheduled events of any type that take place on the Internet. While much of this content is valuable both when it initially goes live and on an ongoing basis as on-demand content after the fact, there is also a lot of content in Live Matrix that will be inherently and necessarily more valuable when it is live, such as sales and auctions or games.

In addition, there is a new category of “exclusively live” online events that we may see emerge in 2011. These events will be one-time events, with no archived copies after they finish. They may be high-profile events where attendance requires paid admission, for example. They will be marketed as special experiences – where not only do you have to be there to experience them, but where being there has special advantages, like being able to interact with others who are there, and perhaps with the performers or celebrities involved as well. Some events may also offer backstage passes or special break-out sessions.

For events like these — where the only value created is during the event’s live run — discovery must happen prior to or during the event for participation to take place. For these, the Scheduled Web is absolutely essential.

The Birth of the Scheduled Web

If 2010 was the year of the Real-Time Web, then 2011 is going to be the year that it evolves into the Scheduled Web.

The Real-Time Web happens in the now: it is spontaneous, overwhelming, and disorganized. Things just happen unpredictably and nobody really knows what to expect or what will happen when.

The Real-Time Web is something of a misnomer, however, because usually it’s not real-time at all –  it’s after-the-fact. Most people find out about things that happened on the Real-Time Web after they happen, or, if they are lucky, when they happen. There is no way to know what is going to happen before it happens; there is no way to prepare or ensure that you will be online when something happens on the Real-Time Web. It’s entirely hit-or-miss.

If we are going to truly realize the Real-Time Web vision, then “time” needs to be the primary focus. So far, the Real-Time Web has mainly just been about simultaneity and speed – for example how quickly people on Twitter can respond to an event in the real world such as the Haiti Earthquake or the Oscars.

This obsession with the present is a sign of the times, but it is also a form of collective myopia — the Real-Time Web really doesn’t include the past or the future – it exists in a kind of perpetual now. To put the “time” into Real-Time, we need to  provide a way to see the past, present and the future Real-Time Web at once.  For example, we need a way to search and browse the past, present, and the future of a stream – what happened, what is happening, and what is scheduled to happen in the future. And this is where what I am calling The Scheduled Web comes in. It’s the next step for the Real-Time Web.

Defining the Scheduled Web

With the Scheduled Web things will start to make sense again. There will be a return of some semblance of order thanks to schedule metadata that enables people (and software) to find out about upcoming things on the Web that matter to them, before they happen, and to find out about past things that matter, after they happen.

The Scheduled Web is a Web that has a schedule, or many schedules, which exist in some commonly accessible, open format. These schedules should be searchable, linkable, shareable, interactive, collaborative, and discoverable. And they should be able to apply to anything — not just video, but any kind of content or activity online.
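
As a rough sketch of what one entry in such an open schedule format might look like, consider the following. The field names here are hypothetical, invented for illustration; they are not an actual Live Matrix or standards-body format:

```python
# A hypothetical Scheduled Web entry (illustrative only). Each event carries
# enough open metadata to be searched, linked, shared, and discovered before,
# during, and after it happens.

event = {
    "title": "U2 live concert stream",
    "type": "video/concert",             # could equally be an auction, chat, or game
    "url": "https://example.com/watch",  # where to participate online
    "start": "2011-03-01T20:00:00Z",     # ISO 8601 timestamps make schedules sortable
    "end": "2011-03-01T22:00:00Z",
    "live_only": False,                  # True for exclusively live events with no archive
    "tags": ["music", "live", "concert"],
}

def is_upcoming(evt: dict, now: str) -> bool:
    """Past/present/future queries reduce to simple comparisons on the timestamps."""
    return evt["start"] > now

print(is_upcoming(event, "2011-02-15T00:00:00Z"))  # True: discoverable in advance
```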

Why is this needed? Well, consider this example. Imagine if there were no TV Guide on digital television. How would you navigate the constantly changing programming of more than 1,000 digital TV channels without an interactive program guide (IPG)? It would be extremely difficult to find shows in a timely manner. According to clickstream data from television set-top boxes, about 10% of all time spent watching TV is spent in the IPG environment. And that is not even counting additional time spent in on-demand guidance interfaces on DVRs. The point here is that guidance is key when you have lots of streams of content happening over time.

Now extend this same problem to the Web where there are literally millions of things happening every minute. These streams of content are not just limited to video. There are myriad types of real-time streams, everything from sales, auctions, and chats, to product launches, games, and audio, to streams of RSS feeds, Web pages appearing on Web sites, photos appearing on photo sites, software releases, announcements, etc.

Without some kind of guidance it is simply impossible to navigate the firehose of live online content streams on the Web efficiently. This firehose is too much to cope with in the present moment, let alone the past, or the future. This is what the Scheduled Web will solve.

By giving people a way to see into the past, present and future of the Real-Time Web, the Scheduled Web will enable the REAL Real-Time Web to be truly actualized. People will be able to know and plan in advance to actually be online when live events they care about take place.

Instead of missing that cool live Web concert or that auction for your favorite brand of shoes, simply because you didn’t know about it beforehand, you will be able to discover it in advance, RSVP, and get reminded before it starts — so you can be there and participate in the experience, right as it happens.

We are just beginning to see the emergence of the Scheduled Web. Two new examples of startups that are at work in the space are Clicker and Live Matrix.

  • Clicker, a site that mainly provides on-demand video clips of past TV episodes, this week launched a schedule for live video streams on the Web.
  • Live Matrix (my new startup), is soon to launch a schedule for all types of online events, not just video streams.

Some people have compared Live Matrix to Clicker; however, this is not a wholly accurate comparison. We have very different, although intersecting, goals.

While Clicker is an interesting play to compete with TV Guide and companies like Hulu, Live Matrix is creating a broader index of all the events taking place across the Scheduled Web, not just video/TV content events.

The insight behind Live Matrix is that there is much more to the Scheduled Web than video and TV content. The Web is not just about TV or video – it is about many different kinds of content.

Applying a TV metaphor to the Web is like trying to apply a print metaphor to tablet computing. While print has many positive qualities, tablet devices should not be limited just to text, should they? Likewise, while the TV metaphor has advantages, it doesn’t make sense to limit the experience of time or scheduled content on the Web just to video.

With this in mind, while Live Matrix includes scheduled live video streams, we view video and TV type content as just one of many different types of scheduled Web content that matter.

For example, Live Matrix also includes online shopping events like sales and auctions, which comprise an enormous segment of the Scheduled Web. As an illustration, eBay alone lists around 10 million scheduled auctions and sales each day! Live Matrix also includes scheduling metadata for many other kinds of content — online games, online chats, online audio, and more.

Live Matrix is building something quite a bit broader than current narrow conceptions of the Real-Time Web, or the narrow metaphor of TV on the Web. We are creating a way to navigate and search the full time dimension of the Web: we are building the schedule of the Web.

This will become a valuable, even essential, layer of metadata that just about every application, service and Internet surfer will make use of every day. Because after all, life happens in time and so does the Web. By adding metadata about time to the Web, Live Matrix will help make the Web – and particularly the Real-Time Web – easier to navigate.

Online vs. Offline Events

One of the key rules of Live Matrix is that, to be included in our schedule, an event must be consumable online. This means that it must be possible to access and participate in an event on an Internet-connected device.

Live Matrix is not a schedule of offline events or events that cannot be consumed or participated in using Internet-connected devices.

We made this rule because we believe that in the near-future almost everything interesting will, in fact, be consumable online, even if it has an offline component to it. We want to focus attention on those events which can be consumed on Internet-connected devices, so that if you have a connected device you can know that everything in Live Matrix can be accessed directly on your device. You don’t have to get in your car and drive to some physical venue, you don’t have to leave the Internet and go to some other device and network (like a TV and cable network).

Note the shift in emphasis here: We believe that the center of an increasing number of events is going to be online, and the offline world is going to increasingly become more peripheral.

For example, if a retail sale generates more revenues from online purchases than physical in-store purchases, the center of the sale is really online and the physical store becomes peripheral. Similarly, if a live concert has 30,000 audience members in a physical stadium but 10,000,000 people attending it online, the bulk of the concert is in fact online. This is already starting to happen.

For example, the recent YouTube concert featuring U2 had 10 million live streams – that’s up to 10 million live people in the audience at one time, making it possibly the largest online concert in history; it’s certainly a lot more people than any physical stadium could accommodate. Similarly, online venues like Second Life and World of Warcraft can accommodate thousands of players interacting in the same virtual spaces – not only do these spaces have no physical analogue (they exist only in virtual space), but there are no physical spaces that could accommodate such large games. These are examples of how online events may start to eclipse offline events.

I’m not saying this trend is good or bad; I’m simply stating a fact of our changing participatory culture. The world is going increasingly online, and with this shift the center of our lives is going increasingly online as well. It is this insight that gave my co-founder, Sanjay Reddy, and me the inspiration to start Live Matrix, and to begin building what we hope will be the backbone of the Scheduled Web.

The Global Brain is About to Wake Up

The emerging realtime Web is not only going to speed up the Web and our lives, it is going to bring about a kind of awakening of our collective Global Brain. It’s going to change how many things happen online, but it’s also going to change how we see and understand what the Web is doing. By speeding up the Web, it will cause processes that used to take weeks or months to unfold online to happen in days or even minutes. And this will bring these processes to the human scale — to the scale of our human “now” — making it possible for us to be aware of larger collective processes than before. We have until now been watching the Web in slow motion. As it speeds up, we will begin to see and understand what’s taking place on the Web in a whole new way.

This process of quickening is part of a larger trend which I and others call “Nowism.” You can read more of my thoughts about Nowism here. Nowism is an orientation that is gaining momentum and will help to shape this decade, and in particular how the Web unfolds. It is the idea that the present timeframe (“the now”) is getting more important, shorter, and also more information-rich. As this happens our civilization is becoming more focused on the now, and less focused on the past or the future. Simply keeping up with the present is becoming an all-consuming challenge: both a threat and an opportunity.

The realtime Web — what I call “The Stream” (see “Welcome to the Stream”) — is changing the unit of now. It’s making it shorter. The now is the span of time we have to be aware of to be effective in our work and lives, and it is getting shorter. On a personal level the now is getting shorter and denser — more information and change is packed into shorter spans of time; a single minute on Twitter is overflowing with potentially relevant messages and links. In business as well, the now is getting shorter and denser — it used to be about the size of a fiscal quarter, then it became a month, then a week, then a day, and now it is probably about half a day in span. Soon it will be just a few hours.

To keep up with what is going on, we have to check in with the world in at least half-day chunks. Important news breaks about once or twice a day. Trends on Twitter take about a day to develop too. So basically, you can afford to just check the news and the real-time Web once or twice a day and still get by. But that’s going to change. As the now gets shorter, we’ll have to check in more frequently to keep abreast of change. As the Stream picks up speed in the middle of this decade, remaining competitive will require near-constant monitoring — we will have to always be connected to, and watching, the real-time Web and our personal streams. Being offline at all will risk missing big, important trends, threats, and opportunities that emerge and develop within minutes or hours. But nobody is capable of tracking the Stream 24/7 — we must at least take breaks to eat and sleep. And this is a problem.

Big Changes to the Web Coming Soon…

With Nowism comes a faster Web, and this will lead to big changes in how we do various activities on the Web:

  • We will spend less time searching. Nowism pushes us to find better alternatives to search, or to eliminate search entirely, because people don’t have time to search anymore. We need tools that do the searching for us and that help with decision support so we don’t have to spend so much of our scarce time doing that. See my article on “Eliminating the Need for Search — Help Engines” for more about that.
  • Monitoring (not searching) the real-time stream becomes more important. We need to stay constantly vigilant about what’s happening, what’s trending. We need to be alerted of the important stuff (to us), and we need a way to filter out what’s not important to us. Probably a filter based on influence of people and tweets, and/or time dynamics of memes will be necessary. Monitoring the real-time stream effectively is different from searching it. I see more value in real-time monitoring than realtime search — I haven’t seen any monitoring tools for Twitter that are smart enough to give me just the content I want yet. There’s a real business opportunity there.
  • The return of agents. Intelligent agents are going to come back. To monitor the realtime Web effectively each of us will need online intelligent agents that can help us — because we don’t have time, and even if we did, there’s just too much information to sift through.
  • Influence becomes more important than relevance. Advertisers and marketers will look for the most influential parties (individuals or groups) on Twitter and other social media to connect with and work through. But to do this there has to be an effective way to measure influence. One service that’s providing a solution for this (which I’ve angel invested in and advise) is Klout.com – they measure influence per person per topic. I think that’s a good start.
  • Filtering content by influence. We also will need a way to find the most influential content. Influential content could be the content most RT’d, or most RT’d by the most influential people. It would be much less noisy to be able to see only the more influential tweets of the people I follow. If a tweet gets RT’d a lot, or is RT’d by really influential people, then I want to see it. If not, then only if it’s really important (based on some rule). This will be the only way to cope with the information overload of the real-time Web and keep up with it effectively. I don’t know of anyone providing a service for this yet. It’s a business opportunity. (A rough sketch of what such a filter might look like appears after this list.)
  • Nowness as a measure of value of content. We will need a new form of ranking of results by “nowness” – how timely they are now. So for example, in real-time search engines we shouldn’t rank results merely by how recent they are, but also by how timely, influential, and “hot” they are now. See my article from years ago on “A Physics of Ideas” for more about that. Real-time search companies should think of themselves as real-time monitoring companies — that’s what they are really going to be used for in the end. Only the real-time search ventures that think of themselves this way are going to survive the conceptual paradigm shift that the realtime Web is bringing about. In a realtime context, search is actually too late — once something has happened in the past, it is not that important anymore; what matters is current awareness: discovering the trends NOW. To do that, one has to analyze the present, and the very recent past, much more than search the longer-term past. The focus has to be on real-time or near-real-time analytics, statistical analysis, topic and trend detection, prediction, filtering, and alerting. Not search.
  • New ways to understand and navigate the now. We will need a way to visualize and navigate the now. I’m helping to incubate a stealth startup venture, Live Matrix, that is working on that. It hasn’t launched yet. It’s cool stuff. More on that in the future when they launch.
  • New tools for browsing the Stream. New tools will emerge for making the realtime Web more compelling and smarter. I’m working on incubating some new stealth startups in this area as well. They’re very early-stage so can’t say more about them yet.
  • The merger of semantics with the realtime Web. We need to make the realtime Web semantic — as well as the rest of the Web — in order to make it easier for software to make sense of it for us. This is the best approach to increasing the signal-to-noise ratio of the content we have to look at, whether we are searching or monitoring. The Semantic Web standards of the W3C are key to this. I’ve written a long manifesto on this in “Minding The Planet: The Meaning and Future of the Semantic Web” if you’re really interested in that topic.
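
Pulling a few of these threads together (monitoring rather than searching, filtering by influence, and ranking by nowness), here is a rough sketch of what such a stream filter might look like. The half-life, weights, and message fields are invented for illustration, not drawn from any existing product:

```python
import math

# A minimal influence + "nowness" filter for a realtime stream, along the
# lines described in the list above. A real system would tune the half-life,
# weights, and threshold against actual data.

HALF_LIFE_SECONDS = 3600.0  # assume a message loses half its timeliness every hour

def nowness(age_seconds: float) -> float:
    return 0.5 ** (age_seconds / HALF_LIFE_SECONDS)

def score(msg: dict, now: float) -> float:
    influence = msg["author_influence"]          # e.g., a Klout-style 0-100 score
    amplification = math.log1p(msg["retweets"])  # diminishing returns on retweets
    return influence * (1.0 + amplification) * nowness(now - msg["timestamp"])

def filter_stream(messages: list, now: float, threshold: float = 50.0) -> list:
    """Keep and rank only the messages important enough to alert the user to."""
    keep = [m for m in messages if score(m, now) >= threshold]
    return sorted(keep, key=lambda m: score(m, now), reverse=True)
```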

Faster Leads to Smarter

As the realtime Web unfolds and speeds up, I think it will also have a big impact on what some people call “The Global Brain.” The Global Brain has always existed, but in recent times it has been experiencing a series of major upgrades — particularly around how connected, affordable, accessible, and fast it is. First we got phones and faxes, then the Internet, the PC, and the Web, and now the real-time Web and the Semantic Web. All of these recent changes are making the Global Brain faster and more richly interconnected. And this makes it smarter. For more about my thoughts on the Global Brain, see my two talks on the subject.

What’s most interesting to me is that as the rate of communication and messaging on the Web approaches near-real-time, we may see a kind of phase change take place – a much smarter Global Brain will begin to appear out of the chaos. In other words, the speed of collective thinking is as important as the complexity or sophistication of collective thinking in making the Global Brain significantly more intelligent. I’m proposing that there is a critical speed of collective thinking: below it, the Global Brain seems like just a crowd of actors chaotically flocking around memes; above it, the Global Brain makes big leaps — instead of seeming like a chaotic crowd, it starts to look like an organized group around certain activities, able to respond to change faster, and to optimize and even do things collectively more productively than a random crowd could.

This is kind of like film, or animation. When you watch a movie or animation you are really watching a rapid series of frames. This gives the illusion of there being cohesive, continuous characters, things, and worlds in the movie — but really they aren’t there at all; it’s just an illusion — our brains put these scenes together and start to recognize and follow higher-order patterns. A certain shape appears to maintain itself and move around relative to other shapes, and we name it with a certain label — but there isn’t really something there, let alone something moving or interacting — there are just frames flicking by rapidly. It turns out that above a critical frame rate (around 20 to 60 frames per second) the human brain stops seeing individual frames and starts seeing a continuous movie. When you start flipping pages fast enough, it appears to be a coherent animation, and we start seeing things “moving within the sequence” of frames. In the same way, as the unit of time of the real-time Web shrinks (that is, as its speed increases), its behavior will start to seem more continuous and smarter — we won’t see separate chunks of time or messages, we’ll see intelligent, continuous, collective thinking and adaptation processes.

In other words, as the Web gets faster, we’ll start to see processes emerge within it that appear to be cohesive, intelligent, collective entities in their own right. There won’t really be any actual entities there that we can isolate, but when we watch the patterns on the Web it will appear as if such entities are there. This is basically what is happening at every level of scale — even in the real world. There really isn’t anything there that we can find — everything is divisible down to the quantum level and probably beyond — but over time our brains seem to recognize and label patterns as discrete “things.” This is what will happen across the Web as well. For example, a certain meme (such as a fad or a movement) may become a “thing” in its own right, a kind of entity that seemingly takes on a life of its own and seems to be doing something. Similarly, certain groups or social networks, or the activities they engage in, may seem to be intelligent entities in their own right.

This is an illusion in that there really are no entities there; they are just collections of parts that themselves can be broken down into more parts, and no final entities can be found. Nonetheless, they will seem like intelligent entities when not analyzed in detail. In addition, the behavior of these chaotic systems may resist reduction — they may not even be understandable, and their behavior may not be predictable, through a purely reductionist approach — it may be that they react to their own internal state and their environments virtually in real-time, making it difficult to take a top-down or bottom-up view of what they are doing. In a realtime world, change happens in every direction.

As the Web gets faster, the patterns that are taking place across it will start to become more animated. Big processes that used to take months or years to happen will happen in minutes or hours. As this comes about we will begin to see larger patterns than before, and they will start to make more sense to us — they will emerge out of the mists of time, so to speak, and become visible to us on our human timescale — the timescale of our human-level “now.” As a result, we will become more aware of higher-order dynamics taking place on the real-time Web, and we will begin to participate in and adapt to those dynamics, making those dynamics in turn even smarter. (For more on my thoughts about how the Global Brain gets smarter, see: “How to Build the Global Mind.”)

See Part II: “Will The Web Become Conscious?” if you want to dig further into the thorny philosophical and scientific issues that this brings up…

The Road to Semantic Search — The Twine.com Story

This is the story of Twine.com — our early research (with never before seen screenshots of our early semantic desktop work), and our evolution from Twine 1.0 towards Twine 2.0 (“T2”) which is focused on semantic search.

The Next Generation of Web Search — Search 3.0

The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.

Web 1.0, the first decade of the Web (1989 – 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.

Web 2.0, the second decade of the Web (1999 – 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive “web of trust” to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level and most relevant in the social graph (closer and/or more popular in the graph) were considered to be more relevant. Thus results could be ranked according to their social value — how many people in the community liked them and their current activity level — as well as by semantic relevancy measures.
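
As a simple illustration of that two-factor ranking, here is a sketch combining a crude keyword-relevancy score with an invented social-value signal. Both functions are stand-ins; real engines of this era use far more sophisticated models:

```python
# Sketch of Web 2.0-style ranking: keyword relevancy weighted by social value.
# The relevancy measure and the social weights are invented for illustration.

def keyword_relevance(text: str, query: str) -> float:
    words = text.lower().split()
    hits = sum(words.count(term) for term in query.lower().split())
    return hits / (len(words) or 1)

def social_value(doc: dict) -> float:
    # Invented weights: shares and inbound links count for more than likes.
    return doc["likes"] + 2 * doc["shares"] + 3 * doc["links"]

def rank(docs: list, query: str) -> list:
    return sorted(
        docs,
        key=lambda d: keyword_relevance(d["text"], query) * (1 + social_value(d)),
        reverse=True,
    )
```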

In the coming third decade of the Web, Web 3.0 (2009 – 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.

Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.

Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past — the more timely something is the more relevant it may be as well.

These two themes — present and personal — will define the next great search experience.

To accomplish this, we need to make progress on a number of fronts.

First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.

Metadata reduces the need for computation in order to determine what content is about — it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.

This applies especially to the area of the real-time Web, where, for example, short “tweets” of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. And of course, metadata makes a dramatic difference in search of the larger non-real-time Web as well.
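
Here is a small illustration of why a little metadata goes a long way on short content. The vocabulary below is invented for the example; in practice one would use the open metadata standards of the semantic web:

```python
# A short message plus a little explicit, machine-understandable metadata.

tweet = "Heading to the game tonight! #giants"

# Without metadata, software must guess: which Giants? Baseball or football?
metadata = {
    "about": "San Francisco Giants",  # disambiguated entity
    "entity_type": "BaseballTeam",
    "date": "2010-09-12",
}

# With metadata attached, finding every post about the team becomes a precise
# lookup instead of a guess from a few words of text:
def is_about(meta: dict, entity: str) -> bool:
    return meta.get("about") == entity

print(is_about(metadata, "San Francisco Giants"))  # True
```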

In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a “one-size-fits-all” ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
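
As a toy sketch of what that could look like (my own illustration with made-up documents and profiles, not any engine's actual algorithm), imagine blending a base keyword score with an affinity score derived from each user's interest profile:

    # Toy personalized ranking: the same query yields different orderings for
    # different users. Documents, scores, and profiles are all hypothetical.
    results = [
        {"title": "Python for Beginners", "base": 0.70, "topics": {"tutorial"}},
        {"title": "CPython Internals",    "base": 0.72, "topics": {"systems", "expert"}},
        {"title": "Ball Pythons as Pets", "base": 0.65, "topics": {"animals"}},
    ]

    def personalized_rank(results, profile, alpha=0.5):
        """Blend keyword relevance with affinity to the user's interest profile."""
        def affinity(doc):
            return len(doc["topics"] & profile) / len(doc["topics"])
        return sorted(results,
                      key=lambda d: (1 - alpha) * d["base"] + alpha * affinity(d),
                      reverse=True)

    novice = {"tutorial"}                 # hypothetical interest profiles
    expert = {"systems", "expert"}
    print([d["title"] for d in personalized_rank(results, novice)])
    print([d["title"] for d in personalized_rank(results, expert)])

The same query now returns the beginner tutorial first for the novice and the internals piece first for the expert, which is exactly what a one-size-fits-all ranking cannot do.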

Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened, but what happened recently and is trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but powerful statistical analysis and new visualizations may also be required to make a compelling experience.
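
One simple, hypothetical way to do the statistical part (my sketch, with made-up counts) is to rank topics not by raw recency or volume but by how far the current window deviates from each topic's own baseline:

    # Toy trending score: standard score of the current window against history.
    from statistics import mean, stdev

    def trend_score(history, current):
        """How unusual is the current count for this topic, given its baseline?"""
        mu, sigma = mean(history), stdev(history)
        return (current - mu) / sigma if sigma > 0 else 0.0

    # Hypothetical mention counts per topic: (previous hourly counts, count now).
    topics = {
        "earthquake": ([2, 3, 2, 4, 3], 40),       # quiet topic, sudden spike
        "weather":    ([50, 48, 52, 49, 51], 53),  # chatty topic, flat baseline
    }

    for name in sorted(topics, key=lambda t: trend_score(*topics[t]), reverse=True):
        print(name, round(trend_score(*topics[name]), 1))

A chatty but flat topic stays down the list, while a quiet topic that suddenly spikes jumps to the top; that is much closer to “what's of note” than a purely chronological feed.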

Sneak Peek – Siri — Interview with Tom Gruber

Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff

In Part One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:

Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?

Tom Gruber: A virtual personal assistant is a software system that

  • Helps the user find or do something (focus on tasks, rather than information)
  • Understands the user’s intent (interpreting language) and context (location, schedule, history)
  • Works on the user’s behalf, orchestrating multiple services and information sources to help complete the task

In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don’t do things for me – I have to use them as tools to do something, and I have to adapt to their ways of taking input.

Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?

Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time.  Apple’s famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT’s Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book “The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us”.  These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results.  These are hallmarks of the Siri assistant.  Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator, or self-awareness à la the Singularity.  But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.

Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)

Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things.  It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.

Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant.  Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.

Nova Spivack: What are the primary aspects of Siri that you would say are “novel”?

Tom Gruber: The demands of the consumer internet focus — instant usability and robust interaction with the evolving web — have driven us to come up with some new innovations:

  • A conversational interface that combines the best of speech and semantic language understanding with an interactive dialog that helps guide people toward saying what they want to do and getting it done. The conversational interface allows for much more interactivity than one-shot search-style interfaces, which aids usability and improves intent understanding.  For example, if Siri didn’t quite hear what you said, or isn’t sure what you meant, it can ask for clarifying information. It can prompt on ambiguity: did you mean pizza restaurants in Chicago or Chicago-style pizza places near you? It can also make reasonable guesses based on context: walking around with the phone at lunchtime, if the speech interpretation comes back with something garbled about food, you probably meant “places to eat near my current location”. If this assumption isn’t right, it is easy to correct in a conversation.
  • Semantic auto-complete – a combination of the familiar “autocomplete” interface of search boxes with a semantic and linguistic model of what might be worth saying. The so-called “semantic completion” makes it possible to rapidly state complex requests (Italian restaurants in the SOMA neighborhood of San Francisco that have tables available tonight) with just a few clicks. It’s sort of like the power of faceted search a la Kayak, but packaged in a clever command line style interface that works in small form factor and low bandwidth environments.
  • Service delegation – Siri is particularly deep in technology for operationalizing a user’s intent into computational form, dispatching to multiple, heterogeneous services, gathering and integrating results, and presenting them back to the user as a set of solutions to their request.  In a restaurant selection task, for instance, Siri combines information from many different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and the user’s own favorites) to show a set of candidates that meet the intent expressed in the user’s natural language request. (A toy sketch of this dispatch-and-merge pattern follows this list.)
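
To make the delegation idea concrete, here is a deliberately tiny sketch of that dispatch-and-merge pattern (my own toy code with mock services and invented fields; it is not Siri's actual architecture or API):

    # Toy dispatch-and-merge: send one structured intent to several mock
    # services in parallel, then join their partial answers on entity name.
    from concurrent.futures import ThreadPoolExecutor

    def directory_service(intent):
        return [{"name": "Luigi's", "address": "123 4th St, SOMA"}]

    def review_service(intent):
        return [{"name": "Luigi's", "rating": 4.5}]

    def reservation_service(intent):
        return [{"name": "Luigi's", "table_tonight": True}]

    SERVICES = [directory_service, review_service, reservation_service]

    def delegate(intent):
        """Fan the intent out to every service and merge results by name."""
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(lambda svc: svc(intent), SERVICES))
        merged = {}
        for results in result_lists:
            for result in results:
                merged.setdefault(result["name"], {}).update(result)
        return list(merged.values())

    intent = {"task": "find_restaurant", "cuisine": "italian", "near": "SOMA"}
    print(delegate(intent))  # one candidate combining address, rating, table

Each service answers only the part of the request it knows about; the assistant, not the user, does the joining.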

Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?

Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:

  • Task focus. Siri is very focused on a bounded set of specific human tasks, like finding something to do, going out with friends, and getting around town.  This task focus allows it to have a very rich model of its domain of competence, which makes everything more tractable, from language understanding to reasoning to service invocation and results presentation.
  • Structured data focus. The kinds of tasks that Siri is particularly good at involve semistructured data, usually on tasks involving multiple criteria and drawing from multiple sources.  For example, to help find a place to eat, user preferences for cuisine, price range, location, or even specific food items come into play.  Combining results from multiple sources requires reasoning about domain entity identity and the relative capabilities of different information providers.  These are hard problems of semantic information processing and integration that are difficult but feasible today using the latest AI technologies.
  • Architecture focus. Siri is built from deep experience in integrating multiple advanced technologies into a platform designed expressly for virtual assistants. Siri co-founder Adam Cheyer was chief architect of the CALO project, and has applied a career of experience to design the platform of the Siri product. Leading the CALO project taught him a lot about what works and doesn’t when applying AI to build a virtual assistant. Adam and I also have rather unique experience in combining AI with intelligent interfaces and web-scale knowledge integration. The result is a “pure play” dedicated architecture for virtual assistants, integrating all the components of intent understanding, service delegation, and dialog flow management. We have avoided the need to solve general AI problems by concentrating on only what is needed for a virtual assistant, and have chosen to begin with a finite set of vertical domains serving mobile use cases.

Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?

Tom Gruber: Rather than trying to be like a search engine to all the world’s information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface.  The smaller the form factor, the more mobile the context, the more limited the bandwidth: the more important it is that the interface make intelligent use of the user’s attention and the resources at hand.  In other words, “smaller needs to be smarter.”  And the benefits of being offered just the right level of detail, or being prompted with just the right questions, can make the difference between task completion and failure.  When you are on the go, you just don’t have time to wade through pages of links and disjoint interfaces, many of which are not suitable to mobile at all.

Nova Spivack: What language and platform is Siri written in?

Tom Gruber: Java, Javascript, and Objective C (for the iPhone)

Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?

Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards.  A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier.  For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with.  The more the API declares its data model, the more automated we can make our coupling to it.

Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?

Tom Gruber: Siri’s knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models.  As much as possible we represent things declaratively (i.e., as data in models, not lines of code).  This is a tried and true best practice for complex AI systems.  This makes the whole system more robust and scalable, and the development process more agile.  It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
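
As a rough illustration of what “declarative” buys you (my own toy model, not Siri's actual representation), the domain knowledge below lives entirely in data; one generic routine interprets it, and the system can also inspect it to generalize:

    # Toy declarative domain model: knowledge as data, not lines of code.
    # The concepts, properties, and trigger phrases are all hypothetical.
    DOMAIN_MODEL = {
        "restaurant": {
            "is_a": "local_business",
            "properties": ["cuisine", "price_range", "location"],
            "triggers": ["eat", "dine", "book a table"],
        },
        "movie_showing": {
            "is_a": "event",
            "properties": ["genre", "start_time", "theater"],
            "triggers": ["watch", "see a movie"],
        },
    }

    def concepts_for(utterance):
        """Generic interpreter: match declared trigger phrases, not hard-coded rules."""
        text = utterance.lower()
        return [name for name, model in DOMAIN_MODEL.items()
                if any(phrase in text for phrase in model["triggers"])]

    print(concepts_for("I want to book a table for two tonight"))
    # Because the model is data, the system can also reason *about* it,
    # e.g. find everything that generalizes to a local business:
    print([n for n, m in DOMAIN_MODEL.items() if m["is_a"] == "local_business"])

Adding a new domain then means adding data rather than rewriting the interpreter, which is much of why the declarative approach scales.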


Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs available, sharing linked data as RDF, etc.)?

Tom Gruber: Siri isn’t a source of data, so it doesn’t expose data using Semantic Web standards.  In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop – an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates.  The original Semantic Web article in Scientific American included use cases that an assistant would handle (check calendars, look for things based on multiple structured criteria, route planning, etc.).  The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data.  For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants; it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies.  That is the original purpose of ontologies-as-specification that I promoted in the 1990s – to help specify how to interact with these agents via knowledge-level APIs.

Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication.  As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.

All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text.  So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.

Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?

Tom Gruber: Siri’s top line measure of success is task completion (not relevance).  A subtask is intent recognition, and a subtask of that is NLP.  Speech is another element, which couples to NLP and adds its own issues.  In this context, Siri’s NLP is “pretty darn good” — if the user is talking about something in Siri’s domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single-finger typing, and bad habits from too much keywordese.  All NLP is tuned for some class of natural language, and Siri’s is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don’t know how it would compare to the standard message and news corpora used by the NLP research community.


Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?

Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.

Nova Spivack: Will Siri be able to talk back to users at any point?

Tom Gruber: It could use speech synthesis for output, for the appropriate contexts.  I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone.  For the blind, embedded appliances, and other applications it would make sense to give Siri voice output.

Nova Spivack: Can you give me more examples of how the NLP in Siri works?

Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)

Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?

Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time.  As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live.  Siri doesn’t forget things from request to request, as stateless systems like search engines do. It always considers the user model along with the domain and task models when coming up with results.  The evolution in learning comes as users build a history with Siri, which gives it a chance to make some generalizations about preferences.  There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.

Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?

Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes.  Siri knows about the data because we (humans) explicitly model what is in those sources.  With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request.  For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
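
A toy version of that run-time reasoning might look like the following (my sketch, with invented sources and fields): each source declares, as data, which fields it can supply, and a planner picks sources until the request is covered:

    # Toy capability-based source selection; sources and fields are hypothetical.
    SOURCES = {
        "business_directory": {"name", "address", "phone"},
        "review_site":        {"name", "rating", "price_range"},
        "reservation_api":    {"name", "availability"},
    }

    def plan_sources(requested_fields):
        """Greedily choose sources until every requested field is covered."""
        remaining, plan = set(requested_fields), []
        while remaining:
            best = max(SOURCES, key=lambda s: len(SOURCES[s] & remaining))
            gained = SOURCES[best] & remaining
            if not gained:
                break  # no source can supply what's left
            plan.append(best)
            remaining -= gained
        return plan, remaining

    plan, unmet = plan_sources({"address", "rating", "availability"})
    print(plan)   # ['business_directory', 'review_site', 'reservation_api']
    print(unmet)  # set(), meaning every requested field is covered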


Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.

Tom Gruber: Thank you, Nova, it’s a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It’s easy to project intelligence onto an assistant, but Siri isn’t going to pass the Turing Test. It’s just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.

Interest Networks are at a Tipping Point

UPDATE: There’s already a lot of good discussion going on around this post in my public twine.

I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.

In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.

At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem – they deliver substance.

So, what is an interest network?

In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.

Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.

I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally important, if not more important, relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.

This is where interest networks come in. It’s still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.

We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:

What all of these interest networks have in common is some sort of a bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question about how we propose to index the entire Web (the answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead).

Most interest networks exhibit the following characteristics as well:

  • They have some sort of bookmarking/submission/markup function to store and map data (often using existing metaphors, even if what’s under the hood is new)
  • They also have some sort of social sharing function to provide the network benefit (this isn’t exclusive to interest networks, obviously, but it is characteristic)
  • And in most cases, interest networks look to add some sort of “smarts” or “recommendations” capability to the mix (that is, you get more out than you put in)

This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.

To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.

At first, Twine enriches your data with semantic tags and links to related content via our recommendations engine, which learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine, and further fleshing out the semantic graph of your interests, which in turn results in even more relevant recommendations.

The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.

Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.

Six months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.

I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts — Carla, Jeremiah, and others, are you listening?

Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.

Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”

Now that anyone can join, it will be fun and gratifying to watch Twine grow.

Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.

Stay tuned!

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

New Video: Leading Minds from Google, Yahoo, and Microsoft talk about their Visions for the Future of the Web

Video from my panel at DEMO Fall ’08 on the Future of the Web is now available.

I moderated the panel, and our panelists were:

Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century

Peter Norvig, Director of Research, Google Inc.

Jon Udell, Evangelist, Microsoft Corporation

Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.

The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.

Many new and revealing insights were provided by our excellent panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on. They covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft’s longer-term views as well.

Enjoy!!!

If Social Networks Were Like Cars…

I have been thinking a lot about social networks lately, and why there are so many of them, and what will happen in that space.

Today I had what I think is a "big realization" about this.

Everyone, including myself, seems to think that there is only room for one big social network, and it looks like Facebook is winning that race. But what if that assumption is simply wrong from the start?

What if social networks are more like automobile brands? In other words, what if there can, will, and should be many competing brands in the space?

Social networks no longer compete in terms of who has which members. All my friends are in pretty much every major social network.

I also don’t need more than one social network, for the same reason — my friends are all in all of them. How many different ways do I need to reach the same set of people? I only need one.

But the Big Realization is that no social network satisfies all types of users. Some people are more at home in a place like LinkedIn than they are in Facebook, for example. Others prefer MySpace.  There are always going to be different social networks catering to the common types of people (different age groups, different personalities, different industries, different lifestyles, etc.).

The Big Realization implies that all the social networks are going to be able to interoperate eventually, just like almost all email clients and servers do today. Email didn’t begin this way. There were different networks, different servers and different clients, and they didn’t all speak to each other. To communicate with certain people you had to use a certain email network, and/or a certain email program. Today almost all email systems interoperate directly or at least indirectly. The same thing is going to happen in the social networking space.

Today we see the first signs of this interoperability emerging as social networks open their APIs and enable increasing integration. Currently there is a competition going on to see which "open" social network can get the most people and sites to use it. But this is an illusion. It doesn’t matter who is dominant, there are always going to be alternative social networks, and the pressure to interoperate will grow until it happens. It is only a matter of time before they connect together.

I think this should be the greatest fear at companies like Facebook. For when it inevitably happens they will be on a level playing field competing for members with a lot of other companies large and small. Today Facebook and Google’s scale are advantages, but in a world of interoperability they may actually be disadvantages — they cannot adapt, change or innovate as fast as smaller, nimbler startups.

Thinking of social networks as if they were automotive brands also reveals interesting business opportunities. There are still several unowned opportunities in the space.

MySpace is like the car you have in high school. Probably not very expensive, probably used, probably a bit clunky. It’s fine if you are a kid driving around your hometown.

Facebook is more like the car you have in college. It has a lot of your junk in it, it is probably still not cutting edge, but it’s cooler and more powerful.

LinkedIn kind of feels like a commuter car to me. It’s just for business, not for pleasure or entertainment.

So who owns the "adult luxury sedan" category? Which one is the BMW of social networks?

Who owns the sportscar category? Which one is the Ferrari of social networks?

Who owns the entry-level commuter car category?

Who owns the equivalent of the "family stationwagon or minivan" category?

Who owns the SUV and offroad category?

You see my point. There are a number of big segments that are not owned yet, and it is really unlikely that any one company can win them all.

If all social networks are converging on the same set of features, then eventually they will be close to equal in function. The only way to differentiate them will be in terms of the brands they build and the audience segments they focus on. These in turn will cause them to emphasize certain features more than others.

In the future the question for consumers will be "Which social network is most like me? Which social network is the place for me to base my online presence?"

Sue may connect to Bob, whose account is hosted in a different social network. Sue will not be a member of Bob’s service, and Bob will not be a member of Sue’s, yet they will be able to form a social relationship and communication channel. This is like email. I may use Outlook and you may use Gmail, but we can still send messages to each other.

Although all social networks will interoperate eventually, depending on each person’s unique identity they may choose to be based in — to live and surf in — a particular social network that expresses their identity, and caters to it. For example, I would probably want to be surfing in the luxury SUV of social networks at this point in my life, not in the luxury sedan, not the racecar, not in the family car, not the dune-buggy. Someone else might much prefer an open source, home-built social network account running on a server they host. It shouldn’t matter — we should still be able to connect, share stuff, get notified of each other’s posts, etc. It should feel like we are in a unified social networking fabric, even though our accounts live in different services with different brands, different interfaces, and different features.

I think this is where social networks are heading. If it’s true then there are still many big business opportunities in this space.

Associative Search and the Semantic Web: The Next Step Beyond Natural Language Search

Our present day search engines are a poor match for the way that our brains actually think and search for answers. Our brains search associatively along networks of relationships. We search for things that are related to things we know, and things that are related to those things. Our brains not only search along these networks, they sense when networks intersect, and that is how we find things. I call this associative search, because we search along networks of associations between things.

Human memory — in other words, human search — is associative. It works by “homing in” on what we are looking for, rather than finding exact matches. Compare this to the keyword search that is so popular on the Web today, and there are obvious differences. Keyword searching provides a very weak form of “homing in” — by choosing our keywords carefully we can limit the set of things which match. But the problem is we can only find things which contain those literal keywords.

There is no actual use of associations in keyword search; it is just literal matching to keywords. Our brains, on the other hand, use a much more sophisticated form of “homing in” on answers. Instead of literal matches, our brains look for things which are associatively connected to things we remember, in order to find what we are ultimately looking for.

For example, consider the case where you cannot remember someone’s name. How do you remember it? Usually we start by trying to remember various facts about that person. Our brains then network from those facts to other facts, and finally to other memories where they intersect.  Ultimately, through this process of “free association” or “associative memory,” we home in on something that triggers a memory of the person’s name.

Both forms of search make use of the intersections of sets, but the associative search model is exponentially more powerful, because for every additional search term in your query, an entire network of concepts, and relationships between them, is implied. One additional term can result in an entire network of related queries, and when you begin to intersect the different networks that result from multiple terms in the query, you quickly home in on only those results that make sense. In keyword search, on the other hand, each additional search term only provides a linear benefit — there is no exponential amplification using networks.
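
Here is a toy sketch of that amplification (my own illustration on a made-up concept graph; no real engine works exactly this way): expand each query term a couple of hops through an association graph, then rank concepts by how many of the resulting networks intersect there:

    # Toy associative search over a hypothetical concept graph.
    from collections import deque

    GRAPH = {
        "actor":        {"film", "screenwriter"},
        "screenwriter": {"actor", "film"},
        "film":         {"actor", "director", "paris"},
        "director":     {"film"},
        "paris":        {"film", "france", "cafe"},
        "france":       {"paris", "cafe"},
        "cafe":         {"paris", "france"},
    }

    def network(term, depth=2):
        """Breadth-first expansion: every concept within `depth` hops of the term."""
        seen, frontier = {term}, deque([(term, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if dist == depth:
                continue
            for neighbor in GRAPH.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, dist + 1))
        return seen

    def associative_search(*terms):
        """Concepts where the most term networks intersect rank highest."""
        nets = [network(t) for t in terms]
        scores = {c: sum(c in net for net in nets) for c in set().union(*nets)}
        return sorted(scores, key=scores.get, reverse=True)

    # Two vague cues intersect at the film-related concepts between them.
    print(associative_search("actor", "france")[:2])  # ['film', 'paris'] (either order)

Each extra cue does not merely filter; it contributes a whole network, and the answer emerges where the networks overlap.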

Keyword search is a very weak approximation of associative search because there really is no concept of a relationship at all. By entering keywords into a search engine like Google we are simulating an associative search, but without the real power of actual relationships between things to help us. Google does not know how various concepts are related and it doesn’t take that into account when helping us find things. Instead, Google just looks for documents that contain exact matches to the terms we are looking for and weights them statistically. It makes some use of relationships between Web pages to rank the results, but it does not actually search along relationships to find new results.

Basically the problem today is that Google does not work the way our brains think. This difference creates an inefficiency for searchers: We have to do the work of translating our associative way of thinking into “keywordese” that is likely to return results we want. Often this requires a bit of trial and error and reiteration of our searches before we get result sets that match our needs.

A recently proposed solution to the problem of “keywordese” is natural language search (or NLP search), such as what is being proposed by companies like Powerset and Hakia. Natural language search engines are slightly closer to the way we actually think because they at least attempt to understand ordinary language instead of requiring keywords. You can ask a question and get answers to that question that make sense.

Natural language search engines are able to understand the language of a query and the language in the result documents in order to make a better match between the question and potential answers. But this is still not true associative search. Although these systems bear a closer resemblance to the way we think, they still do not actually leverage the power of networks — they are still not as powerful as associative search.


A Few Predictions for the Near Future

This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.


Learning from the Future with Nova Spivack from Maarten on Vimeo.

Insightful Article About Twine

Carla Thompson, an analyst for Guidewire Group, has written what I think is a very insightful article about her experience participating in the early-access wave of the Twine beta.

We are now starting to let the press in, and next week we will begin admitting people from our waitlist of over 30,000 users. We will be letting people into the beta in waves every week going forward.

As Carla notes, Twine is a work in progress and we are mainly focused on learning from our users now. We have lots more to do, but we’re very excited about the direction Twine is headed in, and it’s really great to see Twine getting so much active use.
