The Present IS the Future: Real-Time Marketing In the Era of the Stream – Part Two

In Part I of this article series, we looked at how the real-time Web has precipitated Nowism as a fundamental shift in how we understand and engage with information. Nowism is a cultural shift to a focus on the present, instead of the past or future.

One example of Nowism in action is Nowcasting, which attempts to make sense of the present in real-time, before all the data has been analyzed, in order to project trends sooner or even continuously. Nowcasting is quickly becoming a necessary and powerful function in our media, culture and society.

These ideas are being harnessed by savvy brands and companies not only in how they operate, but also in how they conceive of themselves.

Next we will look at how these ideas impact social marketing, and why brands must learn to act like media companies in this new environment.

The Three Stages of Real-Time Marketing Evolution

The Stream is more real-time than the Web. And it’s even more real-time than blogging and the early days of social networking. But it’s not only faster, it’s also orders of magnitude bigger. Instead of millions of Web pages every month, we’re dealing with billions of messages every day.

There’s vastly more activity, more change, more noise, and when trends happen they are more contagious and spread more quickly. It’s therefore even more important to sense and respond to change in the present, right when it happens.

Unlike the Web, the Stream is constantly changing, everywhere, on the second timescale: It is a massively parallel real-time medium. And instead of a few channels there are billions of channels — at least one (if not many) for each person, brand, organization and media outlet on the Net — and they are all flowing with messages and data.

Keeping up with the deluge of real-time conversation across so many channels at once is a huge challenge, but making sense of so much change in real-time is even harder. Harder still is intelligently engaging with the Stream in real-time.

These three objectives represent three levels of maturity and mastery for real-time marketers.



Stage One: Week Marketing.

Today, most brands and agencies are still stuck trying to accomplish Stage One, if they are even that far along.

Stage One social marketers are focused on simply monitoring the Stream and trying to keep up with the conversations about their brand.

Some Stage One marketers are also actively trying to drive perception and optimize engagement through social media. But their ability to measure the effects of their actions and optimize their engagement is primitive at best.

The timescale of their measurement and engagement with the Stream ranges from hours to days, or even weeks.

Stage Two: Day Marketing.

A smaller set of organizations have learned how to make sense of the Stream in real-time and are operating on the hourly to daily timescale. They have graduated to Stage Two.

Stage Two marketers don’t merely monitor and respond, they digest and interpret. They measure and engage in sense-making and trend discovery. They generate live insights from millions of messages and incorporate these insights into their thinking and behavior on an hourly to daily basis.

Stage Two social marketers have evolved past the stage of simple reflexive response to the stage where they can interpret and reason about the Stream intelligently in near-real-time. They leverage social analytics, data mining, and visualization tools to facilitate insight, and this leads to smarter behavior, more optimal engagement and better results.

But Stage Two organizations’ response times are still not real-time. Instead of seconds or minutes, their responses often take hours or even days, and that’s no longer fast enough.

Stage Three: Now Marketing.

Stage Three marketing organizations are incredibly rare today. But there will be more of them soon.

Stage Three organizations are able to monitor, make sense of, and engage intelligently with the Stream, all in real-time, not after the fact.

In other words, they are consistently able to detect, measure, analyze, reason, and respond to signals in the Stream within seconds or minutes, or at most within the hour.

Stage Three organizations continuously run a real-time marketing feedback loop that works like this:

  1. Sense. First a signal is sensed: it might be a breaking story or rumor, a complaint by an influencer, a change in customer perception or audience sentiment, a shift in engagement levels, a crisis, or a sudden new trend or opportunity. Sensing signals, and differentiating important signals from noise, in real-time, requires new approaches to determining relevance, timeliness, and importance that don’t rely on analyzing historical data. There is no time for that in the present. Sensing has to intelligently filter signal from noise by recognizing the signs of potentially interesting trends, regardless of their content. Organizations that can do this well are able to detect emerging trends early in their life cycles, giving them powerful time advantages.

  2. Analyze. Next, the signal is analyzed in real-time to understand what drives it, and what it drives. Its underlying causes, influencers, effects, demographics, time dynamics and relationships to other entities and signals are mined, measured, visualized and interpreted. This requires new live social discovery and analytics capabilities – what’s new here is that this isn’t merely classical social analytics (measuring follower count or message volumes, or charting historical volume), it’s massive “big data” mining and discovery in real-time. It’s live prospecting across all the potentially relevant signals connected to each signal, to establish context.

  3. Respond. Then an intelligent response (itself a signal) is generated across one or more channels within seconds to minutes. For example, a reply or an offer may be sent to a customer, an alert may be sent to a team, a new piece of content may be synthesized and published, the targeting of an ad campaign may be adjusted, the tone of social messaging may be modified, prices may be adjusted, and even policies may be changed – all in real-time. Organizations that get good at this process are able to respond so quickly they cross the threshold from being reactive to being proactive. They are able to drive the direction of trends by being the first to detect, analyze and respond to them.
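To make the loop concrete, here is a minimal, self-contained Python sketch of the sense-analyze-respond cycle. It is purely illustrative: a real system would read from a live firehose and respond through real channels, and the thresholds and message fields here are invented stand-ins, not any vendor’s API.

```python
import statistics
from collections import defaultdict, deque

WINDOW = 60        # rolling per-topic history length (arbitrary stand-in)
Z_THRESHOLD = 3.0  # how unusual a burst must be to count as a signal

class Sensor:
    """Detects bursts per topic from a short rolling baseline,
    without querying historical archives (there's no time for that)."""
    def __init__(self):
        self.history = defaultdict(lambda: deque(maxlen=WINDOW))

    def is_signal(self, topic, count_this_tick):
        h = self.history[topic]
        h.append(count_this_tick)
        if len(h) < 10:
            return False  # not enough of a baseline yet
        mean, stdev = statistics.mean(h), statistics.pstdev(h) or 1.0
        return (count_this_tick - mean) / stdev > Z_THRESHOLD

def analyze(messages):
    """Summarize what is driving the burst: overall tone, loudest voice."""
    sentiment = statistics.mean(m["sentiment"] for m in messages)
    top = max(messages, key=lambda m: m["followers"])
    return {"sentiment": sentiment, "top_author": top["author"]}

def respond(topic, insight):
    """Emit a response signal in the same iteration (here, just prints)."""
    if insight["sentiment"] < 0:
        print(f"ALERT team: negative burst on {topic!r}, "
              f"led by {insight['top_author']}")
    else:
        print(f"ENGAGE: amplify {topic!r} while it is still rising")

def run(ticks):
    """`ticks` yields, each second, a dict of topic -> messages seen."""
    sensor = Sensor()
    for batch in ticks:
        for topic, msgs in batch.items():
            if sensor.is_signal(topic, len(msgs)):
                respond(topic, analyze(msgs))
```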

This feedback loop is exemplified by real-time social ad targeting and campaign optimization. But as organizations mature to Stage Three they must learn to apply this same methodology to ALL their interactions with the Stream, not just advertising. They must apply this feedback loop across ALL their engagement with customers, the media and the marketplace.

Within a decade, all leading brands will be Stage Three marketing organizations.

Mapping The Ripple Effect

“Stage Three” agencies and brands need to master ripple effects to thrive in the next generation social environment.

Ripple effects are the key forces in the emerging real-time social Web. Information propagates through ripple effects along social relationships, across channels, communities, and media. Ripple effects are how trends emerge and rise, how rumors spread, and how ads and content are distributed. But we’re currently almost completely blind to ripple effects: we have almost no way to detect, measure or predict them.

The average Facebook user has 190 friends. The average Twitter user has 208 followers. Each group contains a number of influencers. Within each influencer’s social graph there is another set of even more powerful influencers. And so on and so on. When you seed a branded message on Facebook, for example, it’s not a straight trajectory. A ripple starts with the above numbers, but each new impression creates a new set of ripples.
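As a rough back-of-envelope, you can treat this as a branching process. The Python sketch below uses the 190-friend average quoted above plus an assumed 2% re-share rate — a made-up number for illustration — to show how expected reach compounds with each ripple.

```python
def expected_reach(audience, reshare_prob, hops):
    """Expected impressions after `hops` ripples, modeling propagation
    as a simple branching process. Ignores audience overlap, so this
    is an upper bound."""
    reach, frontier = 0.0, float(audience)
    for _ in range(hops):
        reach += frontier
        # Each impression re-shares with probability `reshare_prob`,
        # and each re-share reaches a fresh audience of the same size.
        frontier *= reshare_prob * audience
    return reach

# 190 = average Facebook friends (above); the 2% re-share rate is assumed.
for hops in (1, 2, 3):
    print(hops, round(expected_reach(190, 0.02, hops)))
# -> 1 ripple: 190, 2 ripples: 912, 3 ripples: 3656 expected impressions
```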

Suddenly, your branded message is being seeded across a number of platforms — even spreading to platforms you never intended — and 99% of brands and agencies have no way of mapping this ripple effect, let alone controlling it.

But what if you could track your ripple effects? What if you could guide them? What if you could measure their effectiveness, or even predict where they are going? Suddenly there would be a wealth of new insights to pull and learn from. And this is precisely what is now possible, using emerging tools.

Problems with Existing Tools

Stage Three marketing organizations have to keep up with ripple effects in real-time, and they have to anticipate where those ripple effects are headed, in order to react immediately.

But most social analytics and engagement tools fail to show ripple effects. They provide loads of raw data — lists of messages from various social accounts and searches. But they expect humans to do most of the work of actually figuring out what’s important in those lists of messages. That is no longer realistic. Humans can’t cope with the data — it’s overwhelming.

Furthermore, most existing social analytics tools focus on simply measuring engagement via follower counts, mentions, likes, Retweets, favorites, impressions, click-throughs, and basic sentiment. But those metrics are no longer sufficient: They aren’t the trends; they are just signals that may or may not be relevant to actual trends. Not all signals are trends. The art is in pulling the actual trends out of the mass of signals that are not, in fact, trends of any value.

What existing tools fail to do is actually make sense of what’s going on for you — they show you either too little or too much information, but they fail to show you what’s actually important; they’re not smart enough to figure that out for you.

Existing tools are good at finding known topics and trends (“known unknowns”) – things you explicitly ask to know about in advance – but what we need in the era of the Stream are tools that show you what you don’t even know to ask for (“unknown unknowns”). They have to detect novelty, outliers, anomalies, the unexpected — and they have to do this automatically, without being instructed on how to find these nuggets.

Existing social media analytics tools are too retrospective in nature – they show how a brand performed on social channels from the past up to the moment a question is asked. But these reports are static. They don’t show change happening, they don’t say anything about what’s next. The minute they are generated they become obsolete. It’s interesting to look at past performance, but what is really needed is more predictive analytics.

Trendcasting

We need a new generation of tools that are designed for identifying real-time ripple effects and filtering them to figure out which ones are noise and which are actual trends we should pay attention to. Better yet, we need tools that can not only identify the trends, but that can project where they are headed in real-time. Think of this as the next evolution of Nowcasting.

We might call this Trendcasting. Where Nowcasting figures out what’s happening now in real-time, Trendcasting figures out what’s happening next in real-time. 
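As a toy illustration of the difference: where a Nowcast estimates the current level of a signal from the freshest partial data, a Trendcast also extrapolates its trajectory a short step ahead. The sketch below fits a line to the last fifteen minutes of message volume and projects it forward; the window, horizon and data are all invented, and a real system would use far more robust models.

```python
import numpy as np

def trendcast(volumes, horizon=5):
    """Project per-minute message volume `horizon` minutes ahead by
    fitting a line to the recent window — a deliberately simple
    stand-in for real trend models."""
    window = np.asarray(volumes[-15:], dtype=float)
    t = np.arange(len(window))
    slope, intercept = np.polyfit(t, window, 1)
    projected = intercept + slope * (len(window) - 1 + horizon)
    return slope, max(projected, 0.0)

# A topic whose volume is accelerating (all values invented):
volumes = [3, 4, 4, 6, 7, 9, 12, 14, 19, 24, 30, 38, 47, 59, 73]
slope, future = trendcast(volumes)
print(f"rising ~{slope:.1f} msgs/min per min; ~{future:.0f} msgs/min in 5 min")
```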

No human can Trendcast in real-time anymore without help from software: the Stream has too much volume and velocity for the human mind to comprehend or process on its own. This is a problem that can only be solved by cloud computing against big data in real-time.

In today’s real-time Stream, marketers cannot afford to be hours or days behind the curve. They need to know and understand the present in the present, while it is still unfolding. They need tools for Trendcasting – for finding and predicting trends. Trends are not merely raw data, they are particularly meaningful and noteworthy trajectories in the data.

Trendcasting is going to become absolutely key. The next-generation real-time marketing platforms will provide automated Trendcasting as a key feature. They will sift through the noise, find the signal, and then measure it to see if it actually matters. Trendcasting is about filtering for the trends that actually matter, because not all signal is important and not all trends are equal.

Finding and forecasting trends has traditionally been thought of as an exclusively human skill — but today we’re starting to automate this function. I believe Trendcasting can be fully automated, or at least dramatically improved, using massively parallel big data analytics approaches. This is where the cutting edge of innovation for real-time marketing will focus for the next decade. (Disclosure: My own company, Bottlenose, is focused on exactly this goal, for Fortune 500 brands.)

Trendcasting tools are the next leap in a long line of measurement innovations that includes telescopes, microscopes, X-rays, weather satellites, MRIs, and search engines. In a sense, Trendcasting engines could be thought of as automated cultural measurement tools — the social equivalent of weather satellites — social satellites. They help us to visualize, understand and project the weather of markets, cultures, industries, communities, brands and their audiences, just as satellites have helped us understand and map the weather patterns of our planet.

Every Brand is a Media Company

The shift to real-time and the advent of the Stream changes how brands must think of themselves.

Whether they are ready or not, all brands have to learn to function more like media companies – and in particular like news networks – in order to remain competitive in the era of live social media.

For the first decade of social media the emphasis was clearly on social, but now it is shifting to media. Leading brands have learned how to be social for the most part. Now they have to learn how to act like media companies.

Consider a network like CNN: They have reporters all over the world, constantly feeding them text, images, video, opinion, insights and leads. They have viewers all over the world, across many platforms and channels, some of whom also contribute news tips, stories, and opinions.

CNN’s bread and butter is finding breaking stories first, getting the best information about them, covering them most comprehensively, and creating original news content and analysis for their audience.

CNN is a good model for what every Stage Three brand has to learn to do in order to master Now Marketing.

Brands that want to lead in the Stream era have to gather intelligence constantly, using social media. They have to create content, share it, and engage. They have to keep their fingers on the pulse of their markets and culture in general in order to remain relevant and timely. They have to respond to a huge influx of questions, opinions, complaints, suggestions, and leads. And they have to do this across many platforms and channels at once, in real-time.

The distinction between content provider and audience is dissolving. It’s now a two-way live conversation with the market, a conversation among equals. Brands have to learn to share, interact, make friends, and socialize just like people do. They have to not only create content for their audiences, they have to use their audiences as the content. And they have to do it on a massive scale.

Some brands – like Nike and Red Bull – have gone very far down this path and even think of themselves as media companies to some degree. But for most brands thinking like a media company is still a completely new orientation and set of skills.

Brands need new tools in order to think and operate like media companies. They can’t work on the weekly or monthly timescale anymore. Even daily timescales are too slow: they have to go live.

They can’t just market to their customers, they have to engage them in marketing the brand and creating media, together. They can’t just analyze key metrics anymore, they have to understand the trends that are emerging, and what’s driving change.

The Stream is here, and it’s happening in real-time. Marketers who adapt to this shift early will be the leaders of tomorrow; brands that are late in adopting these practices risk becoming nothing but historical data points.

 

The Present IS the Future: Real-Time Marketing In the Era of the Stream – Part One

Introduction

The pulse of the Net has gotten faster. It’s not a static Web of documents anymore, it’s a new real-time messaging medium we call the Stream.

The Stream is unlike any form of live media before it: It is a completely real-time, globally distributed, two-way conversation. And it’s already changing everything we know about marketing, advertising, branding and PR.

For marketers – and particularly for brands and agencies – mastering the Stream requires a new set of approaches, new tools and new practices. Like the shift from traditional media to digital media over two decades ago, this shift is both an existential challenge and a potentially destiny-changing opportunity.

Some organizations are learning to master the Stream faster than others, and they will be the leaders of tomorrow. But even those that lag will have to adapt soon, or they will become irrelevant by 2016. It’s evolve or die, all over again.

The Clock Rate of the Net is Speeding Up

One of the things that makes the Stream different from the Web is clock rate. The clock rate of the Net is increasing.

For the past 30 years we have been trending towards immediacy – the world has been getting faster, and nowhere has this been more apparent than online.

Now we have arrived at real-time and the Net has become a live medium. This changes everything.

Before blogging, the clock rate of the Web was slow. Most Web sites were updated less than once per day. News and media sites were updated daily or perhaps a few times a day. Blogging eventually increased the rate of change to the hours timescale. RSS then made it possible to keep up with this change more efficiently – the Web became a gigantic news ticker. But it was still a relatively slow one compared to today.

Fourth Era

Starting around 2000, instant messaging and text messaging both began to gain mass adoption. These shifted marketing from the hourly to the intra-hour timescale.

Facebook was launched in 2004, followed by YouTube in 2005, Twitter in 2006 and Instagram in 2010. Due to the rapid message-based brand conversations they enabled, social networks sped up the timescale of digital marketing from hours to minutes, and even to seconds.

Since 2000 we have also seen a steady transition from stationary and sporadic Internet access to continuous mobile access, complemented by simultaneous increases in bandwidth and reductions of bandwidth cost.

Everyone is now connected all the time, both as a content consumer and as a content provider. These trends have democratized the Internet from a nearly static and one-way textual medium to a fully live two-way multimedia medium – the Stream of today.

Social Media Beats Mainstream Media

Social media is the new media – it is a new form of media, not just a media distribution pipe, and it is much faster than traditional media in every way.

The Stream is more live and real-time than TV and radio ever were. For example, social networks consistently beat TV and radio to the story. They sense and distribute breaking news and trends ahead of mainstream TV networks and media outlets, often by tens of minutes and sometimes by hours.

There are numerous examples of major stories that broke on Twitter before mainstream outlets. For example, news of Whitney Houston’s death broke on Twitter 27 minutes ahead of mainstream media. Earthquakes are another example: Twitter consistently breaks news of earthquakes minutes ahead of mainstream media.

Likewise, ambient social apps that take advantage of geofencing and silent, constant communication between devices are driving real-time content and engagement. 

SoLoMo (Social + Local + Mobile) tech is still in its infancy, but it opens myriad new doors for brands and agencies to begin mining live data. As SoLoMo becomes widely adopted, tracking consumer behavior in real-time paints an incredibly vivid picture of your fans and followers.

The digital native demographic has developed an expectation that their devices seamlessly integrate with the world around them. As such, Millennials have begun hyper-tasking: operating multiple devices and consuming data from multiple sources at once.

Brands can no longer rely on a blanket broadcast strategy with their messaging. You need to know the behaviors of your target demo on each device – what they’re saying, when they’re saying it, and who they’re saying it to. And you need to know this in real-time.

Attention has Shifted to the Stream

Much of the growth of the SoLoMo movement has been driven by increased bandwidth and faster adoption of smartphones and tablets, but also by broader demographics embracing social media.

This produces a phenomenal amount of data – and it’s a challenge to manage and make sense of it all – but it’s imperative that brands and agencies begin sifting through it to execute the right kind of campaigns.

The pace and volume of social messages on Twitter and Facebook have been growing exponentially, year over year, and this trend shows no signs of slowing down. Attention is shifting from search to social.

Meanwhile, attention to the top sites on the Web is increasingly being driven by this social messaging activity and “dark social”, rather than by Web navigation, Web search and SEO.

Among the top 50 sites on the Web, most now get at least as much of their traffic from social as from search, if not more. In other words, the primary driver of digital consumer attention, brand perception, and engagement has shifted to social.


 

The Age of Nowism

The shift to faster timescales is altering the landscape of marketing, sales and even customer service. Consumers live in the Now, and they are demanding that brands live there too.

This new and growing obsession with now even has its own marketing buzzword called Nowism:

NOWISM | “Consumers’ ingrained lust for instant gratification is being satisfied by a host of novel, important (offline and online) real-time products, services and experiences. Consumers are also feverishly contributing to the real-time content avalanche that’s building as we speak. As a result, expect your brand and company to have no choice but to finally mirror and join the ‘now’, in all its splendid chaos, realness and excitement.”

Nowism is a cultural shift to a focus on the present, instead of the past or future. It’s new and unprecedented: never before has a civilization on this planet lived so exclusively in the now.



In the information age, thanks to the double-edged blessing of information technology and communication networks, the present has become bigger, faster, and more consuming. 

Today we are focused on immediacy and instant gratification. And with these come an expectation of instant response, instant customer service, and instant solutions. This is an era of shoot-first-ask-questions-later, where trends and rumors flare up and go global in minutes.

Now, the risk of being late is greater than the risk of being wrong. Because of this, even experienced media outlets and brands are under pressure to publish or respond as fast as possible, without even time to think or fact-check. It’s better to issue a correction than to be perceived as slow.

This is a world in which there will be more error, more confusion, more threats, more crises, but they will start and end more quickly as well. It is also a world in which there will be more leads, more opportunities, and more transactions, and they too will start and end more rapidly. 

To survive and prosper in these faster cycles of activity, organizations have to learn to think and respond in real-time, even if it means making mistakes and corrections more frequently.

The present contains more data, and more change, than what used to occur in months or even years of activity. And the present is therefore more difficult to understand today than it was before.

Nowcasting: Predicting the Present

Because there is so much to measure in the present, and we can measure everything in higher resolution, there is vastly more that we must pay attention to at any time. And this means we simply don’t have as much time or resources to focus on the past or the future.

Instead of predicting the future, there is a new option in the age of Nowism: predict the present, while it is still unfolding, using an approach called Nowcasting, which Google helped pioneer on the Web.

Nowcasting attempts to make sense of the present before all the data has been analyzed, in order to project trends sooner, or even continuously. For example, using nowcasting techniques Google has been able to estimate monthly sales, economic indicators, and disease spread in near real-time, before official end-of-month results are available.
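A minimal sketch of the idea, in the spirit of that work: regress a slow official indicator on a fast proxy signal (e.g. related query or message volume), so the current month can be estimated before the official number is published. All data and numbers below are fabricated for illustration.

```python
import numpy as np

# Monthly official indicator (published with a lag) and a fast proxy
# signal, e.g. normalized query/message volume. All values invented.
indicator = np.array([102.0, 104.5, 103.8, 107.2, 109.0, 111.3])
proxy = np.array([0.55, 0.61, 0.58, 0.70, 0.74, 0.80])

# Fit indicator ~ a * proxy + b on the months we already know.
a, b = np.polyfit(proxy, indicator, 1)

# This month's proxy is observable immediately, so we can "nowcast"
# the indicator weeks before the official figure appears.
proxy_now = 0.86
print(f"nowcast for the current month: {a * proxy_now + b:.1f}")
```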

Nowcasting has been applied by hedge funds, economists, and epidemiologists, and soon it will be a standard tool for marketers.

 

Applying nowcasting effectively is more than simply measuring and predicting events. The opportunity is to use those measurements to then act proactively to respond, create new content, adjust campaigns, and stay one step ahead of emerging trends. It is not reactive; it is proactive, seizing opportunities as they develop in real-time.

This, of course, requires enough sophistication and understanding of the present to sense, analyze and act in a way that is substantive and moves engagements forward.

In Part II of this series, we will look at how marketing has evolved to its present state in the real-time Web, and how Nowism is necessarily changing the way that marketers work. We will also explore how leading businesses must learn to consider themselves as creators of media rather than simply forces moving within it.

Notes:

The author thanks Adam Blumenfeld and Phil Ressler for contributions, edits and suggestions for this article.

Twitter is No Longer a Village

I’ve noticed a distinct change in how people use Twitter in the last year:

1. People are increasingly not using Twitter for actual two-way conversations or interactions. Instead it’s being used more for one-way “fire and forget” posting. People just post into the aether, without knowing or even caring if anyone actually reads their posts.

2. People are spending less time reading Twitter messages; they are paying less attention to what other people say. This is because it’s too difficult to keep up with what your friends are up to on Twitter: we all follow too many people now, and there are just too many messages flowing by all the time.

These two shifts are going to fundamentally change what Twitter is for, and how it is used. It is gradually becoming less of a social network where people interact, and more of a place to simply express opinions.

Maybe in a way this is a return to the original intent of Twitter — a place where you could post what you were doing. That was originally a one-way activity. However, soon after those early days a community formed, and Twitter became conversational and highly interactive for a while — until it got so big that it lost that village feeling.

Twitter used to be a village — it was in fact the epicenter of the global village for a while. But now it has become a gigantic industrialized urban sprawl. A megacity. It’s lost that feeling of intimacy and community it once had.

Today Twitter is a mass market backchannel for consumers to express themselves to businesses and media providers, and for businesses to market to their audiences. It is also a place where people express themselves around live events like sports games, television shows and breaking news.

But while people and businesses are increasingly expressing themselves on Twitter, they are actually doing less listening to each other there.

Listening is on the decline because message volumes on Twitter are now so high that it is just impossible to keep up. There are too many messages flowing by all the time. It’s information overload. There’s no point in even trying to pay attention to everything the people you follow are saying.

Of course people still pay attention to replies, mentions and Retweets directed at them — at least if they are not famous. Famous people get far too many mentions from strangers and so they usually just ignore those as well.

I’m willing to bet that you aren’t paying attention to Twitter. Your friends aren’t either. At least not like in years past.

So who is listening to Twitter if it’s not all of us? Businesses. They are listening, analyzing, and using this data to gauge perception, market and advertise. This is where the real value of Twitter seems to be headed: It’s a channel for people to express themselves around products, brands, events and content. And it’s a tool for businesses to learn about their audiences and market to them in real-time. Twitter is becoming our global backchannel.

As a side-effect of these shifts, Twitter is feeling less social every day. It’s no longer a place where people listen or pay attention to one another anymore. It’s certainly not a place where people have conversations beyond the occasional reply. Instead, it’s more like a giant stadium where everyone is shouting at the same time.

This probably means that as a publishing and messaging channel Twitter will become less effective over time.

As message volumes keep growing, what are the chances that your audience will be looking at the exact second that your message is actually visible above the fold, before it is buried by 1000 new Tweets? The chances are getting lower every day. And nobody scrolls down to look at older messages anymore. Why look back through the past when there are so many new Tweets arriving in the present?

This means that the likelihood of your intended audience seeing anything you post to Twitter is headed towards zero.

Unless, of course, you’re famous. If you’re famous you can post once and get thousands of Retweets, and that might get your post noticed. But for most of us, and even most brands, most posts are going to be missed. They are like shots in the dark.

If you’re not famous you can still get noticed, however — if you are willing to pay. You can buy visibility for your Tweets by making them into Promoted Tweets. But ads are different from conversation. And a network where people have to advertise to each other to be heard would not feel social at all.

Should this be fixed? I’m willing to bet that Twitter will probably not put much effort into reducing noise, or adding really good personalization, precisely because such measures would compete with Promoted Tweets. Promoted Tweets make money precisely because there is increasing noise on Twitter, just like Google Ads make money because Google is not as relevant as it could be.

These trends throw into question the value of posting anything to Twitter today, at least if your goal is to reach your followers organically and get attention. That is just increasingly unlikely.

If you really want to reach people on Twitter, the best bet will be to advertise there.

But advertise to whom? If attention to Twitter is declining because people are posting more but reading less, that would reduce attention to Twitter ads as well.

Ironically it’s the noise on Twitter that creates a need for Twitter ads, but it’s that same noise that will ultimately cause people to not pay attention to Twitter anymore. And if people pay less attention to Twitter’s content, there will be less of an audience for Twitter’s ads. It’s just too much work to find the needles you care about in all that hay.

The noise problem on Twitter is a side-effect of mass adoption. But it’s also a side-effect of a growing mismatch between how Twitter was designed as a product and the audience size, and message volumes, it now supports. Twitter was not designed for this level of audience or activity, and it shows. Twitter was designed to be a village, but it’s now a megacity.

It will be interesting to see how Twitter evolves to meet this challenge. Can they restore the balance by creating ways for consumers to filter the noise? Can they attract more attention and content consumption?

My theory is that Twitter may inevitably focus more on advertising outside of Twitter than inside, perhaps by using a retargeting approach on sites that use Twitter OAuth to register their users. Here’s how this could work:

  1. Twitter can potentially see the interests of anyone who posts content to Twitter.
  2. When any member of Twitter uses their Twitter credentials to log in to any site that uses Twitter OAuth as a login (including Twitter.com), Twitter can place a cookie in their browser.
  3. Then any site that uses Twitter OAuth can detect that user and associate them with their interest profile from Twitter.
  4. With this knowledge any site in the Twitter network can target ads to Twitter users’ personalized interests when they get visits from those users.
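A minimal Flask sketch of how steps 2–4 could be wired up on a participating site. Everything here is hypothetical: `fetch_interest_profile` and `pick_ad` are invented stand-ins, a real implementation would verify OAuth tokens and call Twitter’s actual API, and this shows only the general cookie-plus-interest-profile pattern, not any actual Twitter product.

```python
from flask import Flask, make_response, request

app = Flask(__name__)

def fetch_interest_profile(twitter_uid):
    """Hypothetical: look up what this user talks about on Twitter.
    A real implementation would mine the user's public posts via
    Twitter's API; here we fake it."""
    return {"interests": ["running", "coffee"]}

def pick_ad(interests):
    """Hypothetical ad selection keyed to the interest profile."""
    return f"ad targeted at {', '.join(interests)}"

@app.route("/oauth/callback")
def oauth_callback():
    # Step 2: after a successful "Sign in with Twitter" (OAuth) flow,
    # drop a cookie tying this browser to the Twitter account.
    twitter_uid = request.args["uid"]  # simplified; real OAuth verifies tokens
    resp = make_response("logged in")
    resp.set_cookie("tw_uid", twitter_uid)
    return resp

@app.route("/page")
def page():
    # Steps 3-4: any site in the network reads the cookie, pulls the
    # interest profile, and targets the ad impression accordingly.
    uid = request.cookies.get("tw_uid")
    if uid is None:
        return "generic ad"
    profile = fetch_interest_profile(uid)
    return pick_ad(profile["interests"])
```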

This technique is already being applied by one company, LocalResponse. I wonder when Twitter will start doing it themselves. If they do this, Twitter can become an ad network that uses what people talk about inside of Twitter, to target ads to them outside of Twitter.

Ultimately this may solve the attention problem in Twitter. Don’t even bother getting people to pay attention to content inside of Twitter. Just get them to talk about their interests and then target ads to them when they pay attention to content outside of Twitter. This “retargeting” approach is working well for Facebook and it’s only a matter of time until Twitter does it. Of course I’m sure Facebook has applied for a patent on this idea by now and that will also add a wrinkle to how this plays out in the future.


Making Sense of Streams

This is a talk I’ve been giving on how we filter the Stream at Bottlenose.

You can view the slides below, or click here to replay the webinar with my talk.

Note: I recommend the webinar if you have time, as I go into a lot more detail than is in the slides – in particular some thoughts about the Global Brain, mapping collective consciousness, and what the future of social media is really all about.  My talk starts at 05:38:00 in the recording.

 

Bottlenose Beat Bit.ly to the First Attention Engine – But It’s Going to Get Interesting

Bottlenose (disclosure: my startup) just launched the first attention engine this week.

But it appears that Bit.ly is launching one soon as well.

It’s going to get interesting to watch this category develop. Clearly there is new interest in building a good real-time picture of what’s happening, and what’s trending, and providing search, discovery, and insights around that.

I believe Bottlenose has the most sophisticated map of attention today, and we have very deep intellectual property across 8 pending patents and a very advanced technology stack behind it as well. And we have some pretty compelling user-experiences on top of it all. So in short, we have a lead here on many levels. (Read more about that here)

But that might not even matter because I think ultimately Bit.ly will be a potential partner for Bottlenose, rather than a long-term competitor — at least if they stay true to their roots and DNA as a data provider rather than a user-experience provider. I doubt that Bit.ly will succeed in making a search destination that consumers will use, and I’m guessing that is not really their goal.

In testing their Realtime service, my impression is that it feels more like a Web 1.0 search engine. Static search results for advanced search style queries. I don’t see that as a consumer experience.

Bottlenose, on the other hand, goes much further into consumer UX, with live photos, newspapers, topic portals, a dashboard, etc. It is also a more dynamic, always changing, realtime content consumption destination. Bottlenose feels like media, not merely search (in fact I think search, news and analytics are converging in the social network era).

Bottlenose puts a huge emphasis on discovery, analytics, and further actions on content that go beyond just search.

I think in the end Bit.ly’s Realtime site will really demonstrate the power of their data, which will still mainly be consumed via their API rather than in their own destination. I’m hopeful that Bit.ly will do just that. It would be useful to everyone, including Bottlenose.

The Threat to Third-Party URL Shorteners

If I were Bit.ly, my primary fear today would be Twitter with their t.co shortener. That is a big threat to Bit.ly and will probably result in Bit.ly losing a lot of their data input over time as more Tweets have t.co links on them than Bit.ly links.

Perhaps Bit.ly is attempting to pivot their business to the user experience side in advance of such a threat potentially reducing their data set and thus the value of their API. But without their data set I don’t see where they can get the data to measure the present. So as a pivot it would not work – where would they get the data?

In other words, if people are not using as many Bit.ly links in the future, Bit.ly will see less attention. And trends point to this happening in fact — Twitter has their own shortener. So does Facebook. So does Google. Third-party shorteners will probably represent a decreasing share of messages and attention over time.

I think the core challenge for Bit.ly is to find a reason for their short URLs to be used instead of native app short URLs. Can they add more value to them somehow? Could they perhaps build in monetization opportunities for parties who use their shortener, for example? Or could they provide better analytics than Twitter or Facebook or Google on short URL uptake? (Bit.ly arguably does, today.)

Bottlenose and Bit.ly Realtime: Compared and Contrasted

In any case there are a few similarities between what Bit.ly may be launching and what Bottlenose provides today.

But there are far more differences.

These products only partially intersect. Most of what Bottlenose does has no equivalent in Bit.ly Realtime. Similarly much of what Bit.ly actually does (outside of their Realtime experiment) is different from what Bottlenose does.

It is also worth mentioning that Bit.ly’s “Realtime” app is a Bit.ly “labs” project and is not their central focus, whereas at Bottlenose it is 100% of what we do. Mapping the present is our core focus.

There is also a big difference in business model. Bottlenose does map the present in high fidelity, but has no plans currently to provide a competing shortening API, or an API about short URLs, like Bit.ly presently does. So currently we are not competitors.

Also, where Bit.ly currently has a broader and larger data set, Bottlenose has created a more cutting-edge and compelling user-experience and has spent more time on a new kind of computing architecture as well.

The Bottlenose StreamOS engine is worth mentioning here: Bottlenose has a new engine for real-time big data analytics that uses a massively distributed, patent-pending “crowd computing” architecture.

We have actually built what I think is the most advanced engine and architecture on the planet for mapping attention in real-time today.

The deep semantics and analytics we compute in realtime are very expensive to compute centrally. Rather than compute everything in the center we compute everywhere; everyone who uses Bottlenose helps us to map the present.

Our StreamOS engine is in fact a small (just a few megabytes) JavaScript and HTML5 app (the size of a photo) that runs in the browser or device of each user. Almost all the computing and analytics that Bottlenose does happens in the browser, at the edge.

We have very low centralized costs. This approach scales better, faster, and more cheaply than any centralized approach can. The crowd literally IS our computer. It’s the Holy Grail of distributed real-time indexing.
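The general pattern — push the per-message analysis out to each user’s client and merge only small summaries centrally — can be shown with a toy simulation. This is my own simplified illustration of edge aggregation, not the actual StreamOS code (which runs as JavaScript in the browser).

```python
from collections import Counter

def edge_worker(messages):
    """Runs at the edge ("in the browser"): each client analyzes only
    the messages it is already viewing and returns a tiny summary,
    not the raw data."""
    counts = Counter()
    for msg in messages:
        for term in msg.lower().split():
            counts[term] += 1
    return counts

def coordinator(summaries):
    """Runs centrally: merging Counters is cheap, so central cost stays
    low no matter how many clients participate."""
    total = Counter()
    for s in summaries:
        total.update(s)
    return total.most_common(3)

# Toy simulation: three "browsers", each seeing a slice of the Stream.
slices = [
    ["earthquake in LA", "big earthquake felt"],
    ["earthquake news", "coffee break"],
    ["felt the earthquake", "LA traffic"],
]
print(coordinator(edge_worker(s) for s in slices))
# -> [('earthquake', 4), ('felt', 2), ('la', 2)]
```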

We also see a broader set of data than Bit.ly does. We don’t only see content that has a Bit.ly URL on it. We see all kinds of messages moving through social media — with other short URLs, and even without URLs.

We see Bit.ly URLs, but we also see data that is outside of the Bit.ly universe. I think ultimately it’s more valuable to see all the trends across all data sources, and even content that contains no URLs at all (Bottlenose analyzes all kinds of messages for example, not just messages that contain URLs, let alone just Bit.ly URLs).

Finally, the use-cases for Bottlenose go far beyond just search, or just news reading and news discovery.

We have all kinds of brands and enterprises actually using our Bottlenose Dashboard product, for example, for social listening, analytics and discovery. I don’t see Bit.ly going as deeply into that as us.

For these reasons I’m optimistic that Bottlenose (and everyone else) will benefit from what Bit.ly may be launching — particularly via their API, if they make their attention data available as an additional signal.

This space is going to get interesting fast.

(To learn more about what Bottlenose does, read this)

 

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence


This article is part of a series of articles about the Bottlenose Public Beta launch.

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)

A New Window Into the Collective Consciousness

Bottlenose offers a new window into what the world is paying attention to right now, globally and locally.

We show you a live streaming view of what the crowd is thinking, sharing and talking about. We bring you trends, as they happen. That means the photos, videos and messages that matter most. That means suggested reading, and visualizations that cut through the clutter.

The center of online attention and gravity has shifted from the Web to social networks like Twitter, Facebook and Google+. Bottlenose operates across all of them, in one place, and provides an integrated view of what’s happening.

The media also attempts to provide a reflection of what’s happening in the world, but the media is slow, and it’s not always objective. Bottlenose doesn’t replace the media — at least not the role of the writer. But it might do a better job of editing or curating in some cases, because it objectively measures the crowd — we don’t decide what to feature, we don’t decide what leads. The crowd does.

Other services in the past, like Digg for example, have helped pioneer this approach. But we’ve taken it further — in Digg people had to manually vote. In Bottlenose we simply measure what people say, and what they share, on public social networks.

Bottlenose is the best tool for people who want to be in the know, and the first to know. Bottlenose brings a new awareness of what’s trending online, and in the world, and how those trends impact us all.

We’ve made the Bottlenose home page into a simple Google-like query field, and nothing more. Results pages drop you into the app itself for further exploration and filtering. Except you don’t just get a long list of results, the way you do on Google.

Instead, you get an at-a-glance start page, a full-fledged newspaper, a beautiful photo gallery, a lean-back home theater, a visual map of the surrounding terrain, a police scanner, and Sonar — an off-road vehicle so that you can drive around and see what’s trending in networks as you please. We’ve made the conversation visual.

Each of these individual experiences is an app on top of the Bottlenose StreamOS platform, and each is a unique way of looking at sets and subsets of streams. You can switch between views effortlessly, and you can save anything for persistent use.

Discovery, we’ve found from user behavior, has been the entry point and the connective tissue for the rest of the Bottlenose experience all along. Our users have been asking for a better discovery experience, just as Twitter users have been asking for the same.

The new stuff you’ll see today has been one of the most difficult pieces for us to build computer-science-wise. It is a true technical achievement by our engineering team.

In many ways it’s also what we’ve been working towards all along. We’re really close now to the vision we held for Bottlenose at the very beginning, and the product we knew we’d achieve over time.

The Theory Behind It: How to Build a Smarter Global Brain

If Twitter, Facebook, Google+ and other social networks are the conduits for what the planet is thinking, then Bottlenose is a map of what the planet is actually paying attention to right now. Our mission is to “organize the world’s attention.” And ultimately I think by doing this we can help make the world a smarter place. At the end of the day, that’s what gets me excited in life.

After many years of thinking about this, I’ve come to the conclusion that the key to higher levels of collective intelligence is not making each person smarter, and it’s not some kind of Queen Bee machine up in the sky that tells us all what to do and runs the human hive. It’s not some fancy kind of groupware, and it’s not the total loss of individuality into a Borg-like collective either.

I think that better collective intelligence really comes down to enabling better collective consciousness. The more conscious we can be of who we are collectively, and what we think, and what we are doing, the smarter we can actually be together, of our own free will, as individuals. This is a bottom-up approach to collective consciousness.

So how might we make this happen?

For the moment, let’s not try to figure out what consciousness really is, because we don’t know, and we probably never will, but regardless, for this adventure, we don’t need to. And we don’t even need to synthesize it either.

Collective consciousness is not a new form of consciousness, rather, it’s a new way to channel the consciousness that’s already there — in us. All we need to do is find a better way to organize it… or rather, to enable it to self-organize emergently.

What does consciousness actually do anyway?

Consciousness senses the internal and external world, and maintains a model of what it finds — a model of the state of the internal and external world that also contains a very rich model of “self” within it.

This self construct has an identity, thoughts, beliefs, emotions, feelings, goals, priorities, and a focus of attention.

If you look for it, it turns out there isn’t actually anything there you can find except information — the “self” is really just a complex information construct.

This “self” is not really who we are, it’s just a construct, a thought really — and it’s not consciousness either. Whatever is aware is aware of the self, so the self is just a construct like any other object of thought.

So given that this “self” is a conceptual object, not some mystical thing that we can’t ever understand, we should be able to model it, and make something that simulates it. And in fact we can.

In fact, we can already do this, in a primitive way, for artificially intelligent computer programs and robots.

But what’s really interesting to me is that we can also do it for large groups of people too. This is a big paradigm shift – a leap. Something revolutionary really. If we can do it.

But how could we provide something like a self for groups, or for the planet as a whole? What would it be like?

Actually, there is already a pretty good proxy for this and it’s been around for a long time. It’s the media.

The Media is a Mirror

The media senses who we are and what we’re doing and it builds a representation — a mirror — in the form of reports, photos, articles, and stats about the state of the world. The media reflects who we are back to us. Or at least it reflects who it thinks we are…

It turns out it’s not a very accurate mirror. But since we don’t have anything better, most of us believe what we see in the media and internalize it as truth.

Even if we try not to, it’s just impossible to avoid the media that bombards us from everywhere all the time. Nobody is really separate from this; we’re all kind of stewing in a media soup, whether we like it or not.

And when we look at the media and we see stories – stories about the world, about people we know, people we don’t know, places we live in, and other places, and events — we can’t help but absorb them. We don’t have first hand knowledge of those things, and so we take on faith what the media shows us.

We form our own internal stories that correspond to the stories we see in the media. And then, based on all these stories, we form beliefs about the world, ourselves and other people – and then those beliefs shape our behavior.

And there’s the rub. If the media gives us an inaccurate picture of reality, or a partially accurate one, and then we internalize it, it then conditions our actions. And so our actions are based on incomplete or incorrect information. How can we make good decisions if we don’t have good information to base them on?

The media used to be about objective reporting, and there are still those in the business who continue that tradition. But real journalists — the kind who would literally give their lives for the truth — are fewer and fewer. The noble art of journalism is falling prey, like everything else, to commercial interests.

There are still lots of great journalists and editors, but there are fewer and fewer great media companies. And fewer rules and standards too. To compete in today’s media mix it seems they have to stoop to the level of the lowest common denominator and there’s always a new low to achieve when you take that path.

Because the media is driven by profit, stories that get eyeballs get prioritized, and the less sensational but often more statistically representative stories don’t get written, or don’t make it onto the front page. There is even a saying in the TV news biz that “If it bleeds, it leads.”

Look at the news — it’s just filled with horrors. But that’s not an accurate depiction of the world. Crimes, for example, don’t happen all the time, everywhere, to everyone — they are statistically quite unlikely and rare — yet so much news is devoted to them. It’s not an accurate portrayal of what’s really happening for most people, most of the time.

I’m not saying the news shouldn’t report crime, or show scary bad things. I’m just pointing out that the news is increasingly about sensationalism, fear, doubt, uncertainty, violence, hatred, crime, and that is not the whole truth. But it sells.

The problem is not that these things are reported — I am not advocating for censorship in any way. The problem is about the media game, and the profit motives that drive it. Media companies just have to compete to survive, and that means they have to play hard ball and get dirty.

Unfortunately the result is that the media shows us stories that do not really reflect the world we live in, or who we are, or what we think, accurately – these stories increasingly reflect the extremes, not the enormous middle of the bell curve.

But since the media functions as our de facto collective consciousness, and it’s filled with these images and stories, we cannot help but absorb them and believe them, and become like them.

But what if we could provide a new form of media, a more accurate reflection of the world, of who we are and what we are doing and thinking? A more democratic process, where anyone could participate and report on what they see.

What if in this new form of media ALL the stories are there, not just some of them, and they compete for attention on a level playing field?

And what if all the stories can compete and spread on their merits, not because some professional editor, or publisher, or advertiser says they should or should not be published?

Yes this is possible.

It’s happening now.

It’s social media in fact.

But for social media to really do a better job than the mainstream media, we need a way to organize and reflect it back to people at a higher level.

That’s where curation comes in. But manual curation is just not scalable to the vast number of messages flowing through social networks. It has to be automated, yet not lose its human element.

That’s what Bottlenose is doing, essentially.

Making a Better Mirror

To provide a better form of collective consciousness, you need a measurement system that can measure and reflect what people are REALLY thinking about and paying attention to in real-time.

It has to take a big data approach – it has to be about measurement. Let the opinions come from the people, not editors.

This new media has to be as free of bias as possible. It should simply measure and reflect collective attention. It should report the sentiment that is actually there, in people’s messages and posts.

Before the Internet and social networks, this was just not possible. But today we can actually attempt it. And that is what we’re doing with Bottlenose.

But this is just a first step. We’re dipping our toe in the water here. What we’re doing with Bottlenose today is only the beginning of this process. And I think it will look primitive compared to what we may evolve in years to come. Still it’s a start.

You can call this approach mass-scale social media listening and analytics, or trend detection, or social search and discovery. But it’s also a new form of media, or rather a new form of curating the media and reflecting the world back to people.

Bottlenose measures what the crowd is thinking, reading, looking at, feeling and doing in real-time, and coalesces what’s happening across social networks into a living map of the collective consciousness that anyone can understand. It’s a living map of the global brain.

Bottlenose wants to be the closest you can get to the Now, to being in the zone, in the moment. The Now is where everything actually happens. It’s the most important time period in fact. And our civilization is increasingly now-centric, for better or for worse.

Web search feels too much like research. It’s about the past, not the present. You’re looking for something lost, or old, or already finished — fleeting. Web search only finds Web pages, and the Web is slow… it takes time to make pages, and time for them to be found by search engines.

On the other hand, discovery in Bottlenose is about the present — it’s not research, it’s discovery. It’s not about memory, it’s about consciousness.

It’s more like media — a live, flowing view of what the world is actually paying attention to now, around any topic.

Collective intelligence is theoretically made more possible by real-time protocols like Twitter. But in practice, keeping up with existing social networks has become a chore, and not drowning is a real concern. Raw data is not consciousness. It’s noise. And that’s why we so often feel overwhelmed by social media, instead of emboldened by it.

But what if you could flip the signal-to-noise ratio? What if social media could be more like actual media … meaning it would be more digestible, curated, organized, consumable?

What if you could have an experience that is built on following your intuition, and living this large-scale world to the fullest?

What if this could make groups smarter as they get larger, instead of dumber?

Why does group IQ so often seem inversely proportional to group size? The larger groups get, the dumber and more dysfunctional they become. This has been a fundamental obstacle for humanity for millennia.

Why can’t groups (including communities, enterprises, even whole societies) get smarter as they get larger instead of dumber? Isn’t it time we evolve past this problem? Isn’t this really what the promise of the Internet and social media is all about? I think so.

And what if there was a form of media that could help you react faster, and smarter, to what is going on around you as it happens, just like in real life?

And what if it could even deliver on the compelling original vision of cyberspace as a place you could see and travel through?

What about getting back to the visceral, the physical?

Consciousness is interpretive, dynamic, and self-reflective. Social media should be too.

This is the fundamental idea I have been working on in various ways for almost a decade. As I have written many times, the global brain is about to wake up and I want to help.

By giving the world a better self-representation of what it is paying attention to right now, we are trying to increase the clock rate and resolution of collective consciousness.

By making this reflection more accurate, richer, and faster, and then making it available to everyone, we may help catalyze the evolution of higher levels of collective intelligence.

All you really need is a better mirror. A mirror big enough for large groups of people to look into together, and see what they are collectively paying attention to. Given a clearer picture of their own state and activity, groups can adapt to themselves more intelligently.

Everyone looks in the collective mirror and adjusts their own behavior independently — there is no top-down control — but you get emergent self-organizing intelligent collective behavior as a result. The system as a whole gets smarter. So the better the mirror, the smarter we become, individually and collectively.

If the mirror is really fast, really good, really high-res, really accurate and objective, it can give groups an extremely important missing piece: collective consciousness that everyone can share.

We need collective consciousness that exists outside of any one person, and outside of any one perspective or organization’s agenda, and is not merely in the parts (the individuals) either. Instead, this new level of collective consciousness should be something that coalesces into a new place, a new layer, where it exists independently of the parts.

It’s not merely the sum of the parts, it’s actually greater than the sum – it’s a new level, a new layer, with new information in it. It’s a new whole that transcends just the parts on their own. That’s the big missing piece that will make this planet smarter, I think.

We need this yesterday. Why? Because in fact collectives — groups, communities, organizations, nations — are the units of change on this planet. Not individuals.

Collectives make decisions, and usually these decisions are sub-optimal. That’s dangerous. Most of the problems we’ve faced and continue to face as a species come down to large groups doing stupid things, mainly due to not having accurate information about the world or themselves. This is, ultimately, an engineering problem.

We should fix this, if we can.

I believe that the Internet is an evolving planetary nervous system, and it’s here to make us smarter. But it’s going to take time. Today it’s not very smart. But it’s evolving fast.

Higher layers of knowledge and intelligence are emerging in this medium, like higher layers of the cerebral cortex, connecting everything together ever more intelligently.

And we want to help make it even smarter, even faster, by providing something that functions like self-consciousness to it.

Now I don’t claim that what we’re making with Bottlenose is the same as actual consciousness — real consciousness is, in my opinion, a cosmic mystery like the origin of space and time. We’ll probably never understand it. I hope we never do. Because I want there to be mystery and wonder in life. I’m confident there always will be.

But I think we can enable something on a collective scale, that is at least similar, functionally, to the role of self-consciousness in the brain — something that reflects our own state back to us as a whole all the time.

After all, the brain is a massive collective of roughly a hundred billion neurons and trillions of connections that are not themselves conscious or even intelligent, and yet it forms a collective self and reacts to itself intelligently.

And this feedback loop – and the quality of the reflection it is based on – is really the key to collective intelligence, in the brain, and for organizations and the planet.

Collective intelligence is an emergent phenomenon; it’s not something to program or control. All you need to do to enable it and make it smarter is give groups and communities better quality feedback about themselves. Then they get smarter on their own, simply by reacting to that feedback.

Collective intelligence and collective consciousness are, at the end of the day, a feedback loop. And we’re trying to make that feedback loop better.

Bottlenose is a new way to curate the media, a new form of media in which anyone can participate but the crowd is the editor. It’s truly social media.

This is an exciting idea to me. It’s what I think social media is for and how it could really help us.

Until now people have had only the mainstream, top-down, profit-driven media to look to. But by simply measuring everything that flows through social networks in real time, and reflecting a high-level view of that back to everyone, it’s possible to evolve a better form of media.

It’s time for a bottom-up, collectively written and curated form of media that more accurately and inclusively reflects us to ourselves.

Concluding Thoughts

I think Bottlenose has the potential to become the giant cultural mirror we need.

Instead of editors and media empires sourcing and deciding what leads, the crowd is the editor, the crowd is the camera crew, and the crowd decides what’s important. Bottlenose simply measures the crowd and reflects it back to itself.

When you look into this real-time cultural mirror that is Bottlenose, you can see what the community around any topic is actually paying attention to right now. And I believe that as we improve it, and if it becomes widely used, it could facilitate smarter collective intelligence on a broader scale.

The world now operates at a ferocious pace and search engines are not keeping up. We’re proud to be launching a truly present-tense experience. Social messages are the best indicators today of what’s actually important, on the Web, and in the world.

We hope to show you an endlessly interesting, live train of global thought. The first evolution of the Stream has run its course and now it’s time to start making sense of it on a higher level. It’s time to start making it smart.

With the new Bottlenose, you can see, and be a part of, the world’s collective mind in a new and smarter way. That is ultimately why Bottlenose is worth participating in.

Keep Reading

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence (you are here)

 

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter

Recently, one of Twitter’s top search engineers tweeted that Twitter was set to “change search forever.” This proclamation sparked a hearty round of speculation and excitement about what was coming down the pipe for Twitter search.

The actual announcement featured the introduction of autocomplete and the ability to search within the subset of people on Twitter that you follow — both long-anticipated features.

However, while certainly a technical accomplishment (Twitter operates at a huge scale, and building these features must have been very difficult), this was an iterative improvement to search… an evolution, not a revolution.

Today I’m proud to announce something that I think could actually be revolutionary.

 


 

My CTO/Co-founder, Dominiek ter Heide, and I have been working for 2 years on an engine for making sense of social media. It’s called Bottlenose, and we started with a smart social dashboard.

Now we’re launching the second stage of our mission “to organize the world’s attention” — a new layer of Bottlenose that provides a live discovery portal for the social web.

This new service measures the collective consciousness in real-time and shows you what the crowd is actually paying attention to now, about any topic, person, brand, place, event… anything.

If the crowd is thinking about it, we see it. It’s a new way to see what’s important in the world, right now.

This discovery engine, combined with our existing dashboard, provides a comprehensive solution for discovering what’s happening, and then keeping up with it over time.

Together, these two tools not only help you stay current, they provide compelling and deep insights about real-time trends, influencers, and emerging conversations.

All of this goes into public beta today.

An Amazing Team

I am very proud of what we are launching today. While it is still just a step on a longer journey, it is the culmination of an idea I’ve been working on, thinking about, and dreaming of for decades, and I’d love you to give it a spin.

And I’m proud of my amazing technical team — they are the most talented technical team I’ve ever worked with in my more than 20 years in this field.

I have never seen such a small team deliver so much, so well. And Bottlenose is them – it is their creation and their brilliance that has made this possible. I am really so thankful to be working with this crew.

Welcome to the Bottlenose Public Beta

So what is Bottlenose anyway?

It is a real-time view of what’s actually important across all the major social networks — the first of its kind — what you might call a “now engine.”

This new service is not about information retrieval. It’s about information awareness. It’s not search, it’s discovery.

We don’t index the past, we map the present. That’s why I think it’s better to call it a discovery engine than a search engine. Search implies research towards a specific desired answer, whereas discovery implies exploration and curiosity.

We measure what the crowd is paying attention to now, and we build a living, constantly learning and evolving, map of the present.

Twitter has always encouraged innovation around their data, and that innovation is really what has fueled their rapid growth and adoption. We’ve taken them at their word and innovated.

We think that what we have built adds tremendous value to the ecosystem and to Twitter.

But while Twitter data is certainly very important and high volume, Bottlenose is not just about Twitter… we integrate the other leading social networks too: Facebook, LinkedIn, Google+, YouTube, Flickr, and even networks whose data flows through them, like Pinterest and Instagram. We also pull in RSS.

We provide a very broad view of what’s happening across the social web — a view that is not available anywhere else.

Bottlenose is what you’d build if you got the chance to start over and work on the problem from scratch — a new and comprehensive vision for how to make sense of what’s happening across and within social networks.

We think it could be for the social web what Google was for the Web. OK, that’s a bold statement, and perhaps it’s wishful thinking, but we’re at least off to a good start here and we’re pushing the envelope farther than it has ever been pushed. Try it!

Oh, and one more thing: why the name? Dolphins are smart, they’re social, they hunt in pods, and they have sonar. We chose the name as an homage to their bright and optimistic social intelligence, and because it felt like a good metaphor for how we want to help people surf the Stream.

Thanks for reading this post, and thanks for your support. If you have a few moments to spare today, we’d love it if you gave Bottlenose a try. And remember, it’s still a beta.

Note: It’s Still a Beta!

Before I get too deep into the tech and all the possibilities and potential I see in Bottlenose, I first want to make it very clear that this is a BETA.

We’re still testing, tuning, adding stuff, fixing bugs, and most of all learning from our users.

There will be bugs and things to improve. We know. We’re listening. We’re on it. And we really appreciate your help and feedback as we continue to work on this.

Want to Know More?

How Bottlenose Could Improve the Media and Enable Smarter Collective Intelligence

 

 

A New Approach to Artificial Intelligence: Non-Computational AI

I was recently contacted by a computer scientist, Sergey Bulanov, who has been working quietly for 20 years on a new approach to artificial intelligence. It’s a pretty interesting and novel approach, and I would like to see what others think about it.

From what I understand, the essence of Sergey’s approach is a new form of computer reasoning that implements “non-computational” networks of logical operations to solve problems.

It is “non-computational” in the sense that it is not an expert system or traditional computer program — rather it is a network of simple operators that compute locally and interact with one another, to emergently arrive at results, reflected by an overall state of the system at the end of the process. This approach reminds me of “connectionist” approaches to AI, such as neural networks and cellular automata.

Sergey believes that his approach could be an important step towards making truly humanlike artificial intelligence in the future. His point is that the brain is a non-computational system, and might in fact use some of these principles.

Sergey calls his approach “Artificial Consciousness,” but I don’t think the word “consciousness” adds value here – and it may even distract from the core idea. But, for the moment, let’s not argue about terminology — his theory is very interesting.

Sergey states that he has used this approach to solve every logic problem in Raymond Smullyan’s book, The Lady or the Tiger?. For more info, read Sergey’s overview of his theory. You can read some more of his writings on this theory here.

You can also view a working simulation of the system in operation here.

I can’t explain it very well, so here is Sergey’s explanation to me, from our correspondence (please note, he is not a native English speaker, so I have added some corrections to his letter to improve readability):

1.

I consider the present version of the system, which only solves logical tasks, to not be a truly “intelligent” system. This system is only a starting point for my investigations. It only looks intelligent because it is solving tasks that are hard for people. The idea for how to solve logical problems in this way came to me accidentally while thinking about the book, The Lady or the Tiger?, by Raymond Smullyan. In my classification of AI, a system for solving logical puzzles appears to be a kind of low-complexity system (according to my theory). This present version of the system is just a step along the way towards more sophisticated AI.

2.

Despite my low valuation of systems for logical solving, for practical use at least, such systems can be amusing for people. And such a system can be a starting point for thinking about more sophisticated “non-computational” systems. The theory of such systems is well developed for the computational case, where such a system is called a SAT solver (for the Boolean satisfiability problem).

The essence of the problem is as follows. Suppose we have a logical expression. (In our case the logical expression reflects the statement of a puzzle.) And we consider that the logical expression has the value “TRUE” (in our case, the formulation of the puzzle is true). Then we must find the logical arguments which satisfy this expression (which make the expression “TRUE”). This problem is NP-complete. In the worst case, it requires full enumeration of all possible arguments. The SAT approach aims to reduce the probable enumerations. The methods of SAT are well developed, but I did not know about them at the beginning of my work. Moreover, from the beginning I set out to create a non-computational approach.

3.

My idea was very simple. Assume we have a logical function, “AND,” with two arguments. This function will have the output value “TRUE” only in the case where both of its arguments are “TRUE”. So if we know the value of the function’s output, we can predict (though not in every case) the values of its inputs.

The formulation of the puzzle is expressed as a logical expression. The expression is represented in the form of a tree (a mathematical tree). You can see this tree in the video on my website. The nodes of the tree are logical functions (AND, OR and some other types). These nodes are represented as balls in the video. Each ball has one output link and several input links. The state of a function can be TRUE (red ball), FALSE (blue ball) or UNKNOWN (grey ball). From the beginning, the logical tree has some nodes with pre-determined initial values (according to the formulation of the puzzle). These values are assigned not only at the top or the bottom of the tree, but also in the middle of it.

After the start of the system, each ball (each of the logical functions, i.e. each node) can sense the states of the adjacent nodes. And each of the balls begins to continuously correct its state depending on the states of the nearby balls. For example, if one of the balls bears the function AND with three inputs (three arguments), and the upper ball sends this ball the information that it must be “TRUE”, then this ball will assign the value “TRUE” to each of its three inputs. In this way, different kinds of information are propagated through the tree until a steady state is reached.

This information can change until a steady state is reached, asynchronously and even without clocking (though I have not proved this). According to the theory of NP-completeness, a solution can’t be reached unconditionally (unlike solving linear or differential equations). After some time, the system reaches an unresolvable state, and it needs more iterations to reach the complete solution. The system can be knocked out of each of these unresolvable states by assuming a hypothesis at one of the unresolved balls. The system can then reach a global contradiction state, or it can reach a global solution. If the system reaches neither a global solution nor a global contradiction state, we must add the next hypothesis at one of the next balls. In the case of a contradiction state, we must change one of the hypotheses (typically the last hypothesis).

So the system can reach the solution (or the set of solutions) through iterations between the assignments of hypotheses. This solving can be achieved without an explicit algorithm, and on a non-computational structure it could be achieved thousands or millions of times faster than on computational devices.

4.

These results appear to be unusual and promising for the AI domain. The importance of these results is in the demonstration of the possibility of non-computational solving of complicated tasks. I hope this system can attract people’s attention to developing non-computational cognitive systems millions of times more powerful than the human brain.

But unfortunately this kind of system is not yet a true AI system. Below are some explanations of why.

5.

A full AI system can’t be based on a traditional (simple) logical basis. The system presented on our website can solve some kinds of logical tasks. But it can’t discuss these tasks with humans. It can’t explain how it solved them. It can’t (and never could in the future) understand natural written text. And it can’t perform most of the human brain’s functions. One of the most fundamental reasons is that a network of logical functions (as I represent it) can only solve logical tasks; it can’t grow by its own reasoning. There are many reasons to construct a completely different kind of AI system based on different principles. But creating a more complicated system would be hard without understanding the principles and problems of a simpler one. Logical systems such as mine can be a starting point on the way to more powerful systems that apply my non-computational approach.

6.

I came to the idea that a really powerful system must be based on the idea of mathematical sets. I found a way to create a network based on sets that can grow, and a way such a network can solve different tasks. The range of these tasks is much greater than the solving of mathematical puzzles alone. I am working on this presently.

7.

My idea of a chain of model tasks is not an engine of the system but a method of research. This idea is very close to a statement of the philosopher Bertrand Russell:

“The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it”.

That is my approach. For example, I found an expression of the idea of logical functions without logical notions. And I found unusual ideas for my novel system in this way.

There is another example of my principle. Assume we take the simplest question, so simple that its solution is almost inevitable. Then, if the solution has high quality, the principles of that solution can be applied to the next, more complicated question. So, moving from simple tasks to more complicated ones, we can develop our theory.
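
To make the propagation idea in point 3 concrete, here is a minimal sketch, in JavaScript, of local state propagation over a tree of logical gates. This is my own illustration of the general technique Sergey describes, not his code; the node structure and relaxation loop are assumptions.

    // States: true, false, or null (UNKNOWN). Each node is a "ball" in the
    // tree: a logical gate with input links to child nodes.
    class Node {
      constructor(op, inputs = []) {
        this.op = op;          // 'AND', 'OR', or 'INPUT' (a leaf)
        this.inputs = inputs;  // child nodes
        this.state = null;     // null = UNKNOWN
      }
    }

    // One local update pass: every node adjusts itself using only the
    // states of its neighbors, as in the description above.
    function step(node) {
      let changed = false;
      for (const child of node.inputs) changed = step(child) || changed;
      if (node.op === 'AND') {
        // Downward: an AND known to be TRUE forces all its inputs to TRUE.
        if (node.state === true) {
          for (const c of node.inputs) {
            if (c.state === null) { c.state = true; changed = true; }
          }
        }
        // Upward: derive the output from the inputs when possible.
        if (node.state === null && node.inputs.some(c => c.state === false)) {
          node.state = false; changed = true;
        } else if (node.state === null && node.inputs.every(c => c.state === true)) {
          node.state = true; changed = true;
        }
      }
      // (An OR node would propagate the dual rules; omitted for brevity.)
      return changed;
    }

    // Relax until a steady state: propagation stops when no ball changes.
    function propagate(root) {
      while (step(root)) { /* keep relaxing */ }
    }

    // Example: a root AND known to be TRUE forces its unknown inputs TRUE.
    const a = new Node('INPUT'), b = new Node('INPUT');
    const root = new Node('AND', [a, b]);
    root.state = true;
    propagate(root);
    console.log(a.state, b.state); // true true

A hypothesis-and-backtrack loop of the kind Sergey describes in point 3 would sit on top of this relaxation, injecting a guess whenever propagation stalls.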

I hope Sergey’s 20 years of thinking in this direction will prove interesting, and perhaps even fruitful, for the field of artificial intelligence. It does appear to me to be a novel and potentially promising vein of innovation.

Best of luck to Sergey and his collaborators. I’m always happy to see really original thinking in the field of AI.

How I Got Into College (by Doing the Opposite of What I Should Have Done). An Essay.

Today I had an interesting phone call with an alumnus of my alma mater, Oberlin College. He called me for an informational interview, asking for some career advice. It was a good conversation. At one point, on a tangent, he asked me why I went to Oberlin. It’s a funny story, actually.

In fact, I didn’t want to attend Oberlin. It was my absolute last choice; I was forced to apply by my mother. She went to Oberlin and loved it. She said she knew me better than anyone and knew for sure that Oberlin was where I belonged.

But from my perspective, there was no way I was going from Boston to some tiny school in the midwest with no city, no ocean, no tech community, no anything! No frikkin way. I wanted to go to Brown, or NYU, or somewhere “cool” or at least “big.”

Never mind the fact that Oberlin was one of the most intellectually intense, creative, free-thinking liberal arts colleges in the country. Never mind that Oberlin was the first college to admit women and to not discriminate against people of color, and never mind that it had one of the top conservatories of music in the world, or that it has long had one of the highest percentages of graduates who go on to get PhDs.

Never mind all of that. My mother went there, and it was in Ohio. And it wasn’t Brown University. Those three facts were enough to convince me I didn’t belong there.

I procrastinated until I had sent out all my other applications. But my mother would not leave me alone. So, at the last minute, one evening, in a very rebellious mood, I filled out my Oberlin application in a way that I thought would GUARANTEE that they would not admit me.

Here is the essay:

Nova Spivack – Oberlin Essay

I wasn’t going to take my mother’s advice, no matter what. I did my best to write an essay that was the very opposite of what a college application essay should be. It was not serious, well reasoned, carefully written, or intellectually brilliant, and certainly did not demonstrate my desire or qualifications to attend Oberlin. In fact, if anything, I was hoping that Oberlin’s admission staff would read it and cross me right off their list.

But fate or destiny had other plans for me.

Brown University lost my application (I received a belated apology from their admissions department months later).

And to make matters worse, much to my dismay, Oberlin loved my essay.

They called me and told me it was one of the most creative essays they had ever received. They were convinced I really wanted to attend and that my essay was actually a serious attempt to get admitted.

They didn’t believe me when I said that no, in fact, I really didn’t want to go there and that it was my last choice and that I only applied because my mother forced me.

Nothing I said would convince them otherwise. They were sure I was playing an elaborate game with them. They were sure I really wanted to attend, and the more I denied it, the more they thought I was playing with them.

Their admissions director said I was exactly the kind of out-of-the-box thinker they look for. They called again. I said no. So they wrote, they spoke to my mother, and they even offered me a very generous scholarship. It was by far the best offer I got from any college. Ironically, in the end, I just could not say no.

It just goes to show you: everyone wants whoever doesn’t want them. Even colleges.

But in hindsight, it turned out that my mother was right about me (as mothers usually are when it comes to their children). Oberlin was the best college I could possibly have gone to. It was the perfect petri dish for an interdisciplinary, intensely curious, anti-authoritarian, free-thinking creative person like myself.

And the fact that there was no city to speak of and nothing at all to do off-campus (you could barely even find coffee off-campus when I attended) contributed to the most active, vibrant, non-boring on-campus community imaginable.

It was an absolute hotbed of thinking, activism, creativity, music, literature, art, science, philosophy, and basically just about everything but sports.

I tried my best to avoid it, and when I applied I tried to disqualify myself, but there was no escaping it. And it turned out that it really was the best place for me in the end; it was where I belonged.

I loved it. Every quirky idealistic isolated ivory tower dreamy minute of it.

Sometimes life works that way. What’s best for you is sometimes the opposite of what you think or want. And sometimes, when you are stubbornly certain that you know what’s best for you — just don’t listen to yourself, listen to your mother.

 

 

I Get 13,000 Messages/Day via Different Streams – Here’s the Analysis

Continuing with the theme I’ve been writing about lately, focused on the growth of the next phase of the Web, what I call “The Stream,” I’ve started to analyze the messages I get on a typical day.

First of all, through all the different channels I use, I now receive approximately 13,000 messages a day. I don’t think I am an extreme case. In fact, anyone who uses Twitter, Facebook, LinkedIn, Google+, email, RSS, and a few Web apps, is probably in the same boat.

Of these, email is no longer the largest stream, but it’s still the most important. However, of 112 email messages received on that day, 46 (41%) were “notifications” from Web apps and Web sites, and these were a lot less important than the remaining messages that were actual communication of one form or another.

The largest streams are Twitter, Facebook and LinkedIn. These streams are composed of public messages posted by people and sources I follow. In these streams, based on a cursory analysis, messages spool in at a rate that varies on the low side from 1 message every 2 minutes on average (LinkedIn) up to 5 messages per minute on average (Twitter for my personal account, where I follow 525 people).

The volume of messages pouring through my social streams is impossible to keep up with. It’s becoming a personal firehose. So, like most people, I have no choice but to ignore 99.99% of them.

However, there are some needles in the haystack that I really would like to find. To solve for that, I use Bottlenose as my dashboard – it surfaces the social messages I really need to pay attention to. It helps me extract more value from my social streams. Fewer calories, more protein.  (Disclosure: I’m the CEO of Bottlenose).

I really need something to make sense of all the messages I’m getting. You probably do too. And with the exponential growth of message volume across all streams, everyone’s going to need this within a year or two. Not just professionals and power-users, but even everyday consumers. In fact, many regular people are already overwhelmed.

Today, while we’re in beta testing, Bottlenose only pulls in Twitter and Facebook. However, where we’re heading is to include ALL of the types of streams listed in the table below, making Bottlenose a truly “universal dashboard” for the era of the Stream.

I would be very curious to hear what your messaging looks like and if you’re seeing similar levels of overload.

Here’s the raw data for a typical day:

Stream                          Messages/day   Percentage   Notes
LinkedIn Direct Messages                   1        0.01%
LinkedIn Connect Requests                  2        0.01%
Facebook Private Messages                  3        0.02%
Facebook Events Notifications              6        0.04%
Facebook Notifications                     7        0.05%
Novaspivack Twitter Mentions              10        0.07%
Facebook Suggested Events                 15        0.11%
GitHub Notifications                      42        0.31%
Yammer Messages                           44        0.32%
Facebook Groups                           58        0.42%
Email Messages                           112        0.82%
Bottlenose Twitter Mentions              200        1.46%
RSS News Articles                        350        2.55%
Google+                                  720        5.24%   0.5/min*
LinkedIn Updates                         720        5.24%   0.5/min*
Facebook News Feed                     1,440       10.49%   1/min*
Twitter @bottlenoseapp                 2,800       20.39%   2/min**
Twitter @novaspivack                   7,200       52.44%   5/min**
Total                                 13,730

Notes:
 *  Estimated average based on counting; does not include comments on messages
 ** Estimated average based on counting
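
As a sanity check on the starred estimates, the arithmetic is simple: a per-minute rate times 1,440 minutes in a day gives the daily figure, and dividing by the total gives the share. A quick JavaScript sketch:

    // Back-of-the-envelope check for the starred rows in the table above.
    const total = 13730; // total messages/day from the table
    const streams = [
      { name: 'Google+', perMin: 0.5 },
      { name: 'LinkedIn Updates', perMin: 0.5 },
      { name: 'Facebook News Feed', perMin: 1 },
      { name: 'Twitter @novaspivack', perMin: 5 },
    ];
    for (const s of streams) {
      const perDay = s.perMin * 1440; // minutes per day
      const share = (100 * perDay / total).toFixed(2);
      console.log(`${s.name}: ${perDay}/day (${share}%)`);
    }
    // Twitter @novaspivack: 7200/day (52.44%), matching the table.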

Keeping Up With the Stream — New Problems and Solutions

This is Part III of a series of articles on the new era of the Stream, a new phase of the Web.

In Part I, The Message is the Medium, I explored the shift in focus on the Web from documents to messages.

In Part II, Drowning in the Stream, we dove deep into some of the key challenges the Stream brings with it.

Here in Part III, we will discuss new challenges and solutions for keeping up with streams as they become increasingly noisy and fast-moving.

 

Getting Attention in Streams

Today if you post a message to Twitter, you have a very small chance of that message getting attention. What’s the solution?

You can do social SEO and try to come up with better, more attention-grabbing, search-engine-attracting headlines. You can try to schedule your posts to appear at optimal times of day. You can even try posting the same thing many times a day to increase the chances of it being seen.

This last tactic is called “Repeat Posting” and it’s soon going to be clogging up all our streams with duplicate messages. Why am I so sure this is going to happen? Because we are in an arms race for attention. In a room where everyone is talking, everyone starts talking louder, and soon everyone is shouting.

Today when you post a message to Twitter, the chances of getting anyone’s attention are low, and they are getting lower. If you have a lot of followers, the chances are a little better that at least some of them will be looking at their stream at precisely the time you post. But even then, most of your followers probably won’t be online at that precise moment, and so they’ll miss it.

Scheduled Posting

But it turns out there are optimal times of day to post, when more of your followers are likely to be looking at their streams. A new category of apps, typified by Buffer, has emerged to help you schedule your Tweets to post at such optimal times.

Using apps like Buffer, you can get more attention for your Tweets, but this is only a temporary solution: the exponential growth of the Stream means that soon even posting a message at an optimal time will not be enough to get it in front of everyone who should see it.

Repeat Posting

To really get noticed, above the noise, you need your message to be available at more than one optimal time, for example many times a day, or even every hour.

To achieve this, instead of posting a message once at the optimal time per day, we may soon see utilities that automatically post the same message many times a day – maybe every hour – perhaps with slightly different wording of headlines, to increase the chances that people will see them. I call this “repeat posting” or “message rotation.”

Repeat posting tools may get so sophisticated that they will A/B test different headlines and wordings and times of day to see what gets the best clickthroughs and then optimize for those. These apps may even intelligently rotate a set of messages over several days, repeating them optimally until they squeeze out every drop of potential attention and traffic, much like ad servers and ad networks rotate ads today.
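
To make the mechanics concrete, here is a minimal JavaScript sketch of what such a hypothetical rotation tool might do: epsilon-greedy A/B selection over headline variants, posted at assumed optimal hours. The variants, hours, and posting function are all invented for illustration.

    // Track posts and clicks per headline variant.
    const variants = [
      'Our new report on stream overload is out',
      'Drowning in social streams? Read this',
    ];
    const posts = new Array(variants.length).fill(0);
    const clicks = new Array(variants.length).fill(0);

    // Epsilon-greedy choice: usually post the best-performing variant so
    // far, occasionally explore another (basic A/B testing logic).
    function pickVariant(epsilon = 0.2) {
      if (Math.random() < epsilon || posts.every(p => p === 0)) {
        return Math.floor(Math.random() * variants.length);
      }
      const rates = clicks.map((c, i) => c / Math.max(posts[i], 1));
      return rates.indexOf(Math.max(...rates));
    }

    // Rotate the message across hours when the audience is most active.
    const optimalHours = [9, 13, 18]; // assumed, learned from audience data
    function schedule(postFn) {
      for (const hour of optimalHours) {
        const i = pickVariant();
        posts[i]++;
        postFn(hour, variants[i]);
      }
    }

    schedule((hour, text) => console.log(`${hour}:00 -> ${text}`));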

But here’s the thing — as soon as anyone starts manually or automatically using repeat posting tactics, it will create an arms race – others will notice it, and compete for attention by doing the same thing. Soon everyone will have to post repeatedly to simply get noticed above the noise of all the other repeat posts.

This is exactly what happens when you are speaking in a crowded room. In a room full of people who are talking at once, some people start talking louder. Soon everyone is shouting and losing their voice at the same time.

This problem of everyone shouting at once is what is soon going to happen on Twitter and Facebook and other social networks. It’s already happening in some cases – more people are posting the same message more than once a day to get it noticed.

It’s inevitable that repeat posting behavior will increase, and when everyone starts doing it, our channels will become totally clogged with redundancy and noise. They will become unusable.

What’s the solution to this problem?

What to Do About Repeat Posting

One thing that is not the solution is to somehow create rules against repeat posting. That won’t work.

Another solution that won’t work is to attempt to detect and de-dupe repeats that occur. It’s hard to do this, and easy to create repeat posts that have different text and different links, to evade detection.

Another solution might be to recognize that repeat posting is inevitable, but to make the process smarter: Whenever a repeat posting happens, delete the previous repeat post. So at any given time the message only appears once in the stream. At least this prevents people from seeing the same thing many times at once in a stream. But it still doesn’t solve the problem of people seeing messages come by that they’ve seen already.
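
A small sketch of that replace-on-repost behavior, assuming a hypothetical repeatKey that identifies copies of the same logical message:

    // The stream holds messages newest-first. Reposting a message with the
    // same repeatKey removes the earlier copy, so it only appears once.
    const stream = [];

    function post(authorId, text, repeatKey = null) {
      if (repeatKey !== null) {
        const prev = stream.findIndex(
          m => m.authorId === authorId && m.repeatKey === repeatKey
        );
        if (prev !== -1) stream.splice(prev, 1); // delete the previous repeat
      }
      stream.unshift({ authorId, text, repeatKey, at: Date.now() });
    }

    // The same announcement posted three times occupies only one slot:
    post('nova', 'Read our new report!', 'report-2012');
    post('nova', 'ICYMI: our new report', 'report-2012');
    post('nova', 'Last chance: our new report', 'report-2012');
    console.log(stream.length); // 1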

A better solution is to create a new consumption experience for keeping up with streams, where relevant messages are actually surfaced to users, instead of simply falling below the fold and getting buried forever. This would help to ensure that people would see the messages that were intended for them, and that they really wanted to see.

If this worked well enough, there would be less reason to do scheduled posting, let alone repeat posting. You could post a message once, and there would be a much better chance of it being seen by your audience.

At Bottlenose, we’re working on exactly this issue in a number of ways. First of all, the app computes rich semantic metadata for messages in streams automatically, which makes it possible to filter them in many ways.

Bottlenose also computes the relevance of every message to every user, which enables ranking and sorting by relevancy, and the app provides smart automated assistants that can help to find and suggest relevant messages to users.

We’re only at the beginning of this and these features are still in early beta, but already we’re seeing significant productivity gains.

Fast-Moving Streams

As message volume increases exponentially in streams, our streams are not just going to be noisier, they are going to move faster. When we look at any stream there will be more updates per minute, more new messages scrolling in, and this will further reduce the chances of any message getting noticed.

Streams will begin to update so often they will literally move all the time. But how do you read, let alone keep up with, something that’s always moving?

Today, if you follow a Twitter stream for a breaking news story, such as a natural disaster like the tsunami in Japan, or the death of Steve Jobs, you can see messages scrolling in, in real time, every second.

In fact, when Steve Jobs died, Twitter hit a record peak of around 50,000 Tweets per minute. If you were following that topic on Twitter at that time, the number of new messages pouring in was impossible to keep up with.

Twitter has put together a nice infographic showing the highest Tweets Per Second events of 2011.

During such breaking news events, if you are looking at a stream for the topic, there is not even time to read a message before it has scrolled below the fold and been replaced by a bunch of more recent messages. The stream moves too fast to even read it.

But this doesn’t just happen during breaking news events. If you simply follow a lot of people and news sources, you will see that you start getting a lot of new messages every few minutes.

In fact, the more people and news sources, saved searches, and lists that you follow, the higher the chances are that at any given moment there are going to be many new messages for you.

Even if you just follow a few hundred people, the chances are pretty high that you are getting a number of new messages in Twitter and Facebook every minute. That’s way more messages than you get in email.

And even if you don’t follow a lot of people and news sources – even if you diligently prune your network, unfollow people, and screen out streams you don’t want, the mere exponential growth of message volume in coming years is soon going to catch up with you. Your streams are going to start moving faster.

But are there any ways to make it easier to keep up with these “whitewater streams?”

Scrolling is Not the Answer

One option is to just make people scroll. Since the 1990s, UX designers have been debating the issue of scrolling. Scrolling works, but it doesn’t work well when the scrolling is endless, or nearly endless. The longer the page, the lower the percentage of users who will scroll all the way down.

This becomes especially problematic if users are asked to scroll in long pages, for example infinite streams of messages going back from the present to the past, as on Twitter. The more messages in the stream, the less attention the messages lower in the stream, below the fold, will get.

But that’s just the beginning of the problem. When a stream is not only long, but it’s also moving and changing all the time, it becomes much less productive to scroll. As you scroll down new stuff is coming in above you, so then you have to scroll up again, and then down again. It’s very confusing.

In long streams that are also changing constantly, it is likely that engagement statistics will be very different from those for scrolling down static pages. I think it’s likely that engagement will be much lower the farther down such dynamic streams one goes.

Pausing the Scroll is Not the Answer

Some apps handle this problem of streams moving out from under you by pausing auto-scrolling as you read – they simply notify you that there are new messages above whatever you are looking at. You can then click to expand the stream above and see the new messages. Effectively they make dynamic streams behave as if they are not dynamic, until you are ready to see the updates.

This at least enables you to read without the stream moving out from under you. It’s less disorienting that way. But in fast moving streams where there are constantly new updates coming in, you have to click on the “new posts above” notification frequently, and it gets tedious.
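
The pattern itself is easy to sketch in JavaScript: buffer incoming messages instead of scrolling them in, and show a clickable counter above the stream. The element IDs and message shape here are assumptions, not any particular app's code.

    // Buffer new messages rather than inserting them immediately.
    const buffer = [];

    function onNewMessage(msg) {
      buffer.push(msg);
      updateBar();
    }

    function updateBar() {
      const bar = document.getElementById('new-posts-bar'); // assumed element
      bar.textContent = `${buffer.length} new posts`;
      bar.hidden = buffer.length === 0;
    }

    // The stream only moves when the reader explicitly asks for it.
    document.getElementById('new-posts-bar').addEventListener('click', () => {
      const list = document.getElementById('stream'); // assumed element
      for (const msg of buffer.splice(0)) {
        const item = document.createElement('li');
        item.textContent = msg.text;
        list.prepend(item); // newest on top
      }
      updateBar();
    });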

For example, consider Twitter on a search for Instagram, a while after the news of their acquisition by Facebook. After waiting only a few seconds, there are 20 new tweets already. If you click the bar that says “20 new Tweets,” they expand. But by the time you’ve done that and started reading them, there are 20 more.

 

Simply clicking to read “20 new tweets” again and again is tedious. And furthermore, it doesn’t really help users cope with the overwhelming number of messages and change in busy streams.

The problem here is that streams are starting to move faster than we can read, even faster than we can click. How do you keep up with this kind of change?

Tickers and Slideshows Are Helpful

Another possible solution to the problem of keeping up with moving streams is to make the streams become like news tickers, constantly updating and crawling by as new stuff comes in. Instead of trying to hide the movement of the stream, make it into a feature.

Some friends and I have tested this idea out in an iPad app we built for this purpose called StreamGlider. You can download StreamGlider and try it out for yourself.

StreamGlider shows streams in several different ways — including a ticker mode and a slideshow mode where streams advance on their own as new messages arrive.

 

The Power of Visualization

Another approach to keeping up with fast moving streams is to use visualization, like we’re doing in Bottlenose, with our Sonar feature. By visualizing what is going on in a stream you can provide a user with instant understanding of what is in the stream and what is important and potentially interesting to them, without requiring them to scroll, skim or read everything first.

Sonar reads all the messages in any stream, applies natural language and semantic analysis to them, detects and measures emerging topics, and then visualizes them in realtime as the stream changes.

It shows you what is going on in the stream – in that pile of messages you don’t have time to scroll through and read. As more messages come in, Sonar updates in realtime to show you what’s new.

You can click on any trend in Sonar that interests you, to quickly zoom into just the messages that relate.

The beauty of this approach is that it avoids scrolling until you absolutely want to. Instead of scrolling, or even skimming the messages in a stream, you just look at Sonar and see if there are any trends you care about. If there are, you click to zoom in and see only those messages. It’s extremely effective and productive.
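
To give a feel for the general technique, here is a minimal sketch of burst detection over two windows of messages. It illustrates the kind of measurement a visualization like this could be built on; it is not Bottlenose's actual algorithm.

    // Count topic-like tokens (words, #hashtags, @mentions) in a batch.
    function topicCounts(messages) {
      const counts = new Map();
      for (const text of messages) {
        for (const token of text.toLowerCase().match(/[#@]?\w+/g) || []) {
          counts.set(token, (counts.get(token) || 0) + 1);
        }
      }
      return counts;
    }

    // A topic "trends" when its frequency in the recent window greatly
    // exceeds its baseline frequency in an older window.
    function trending(recentMsgs, baselineMsgs, minBurst = 3) {
      const recent = topicCounts(recentMsgs);
      const baseline = topicCounts(baselineMsgs);
      const trends = [];
      for (const [topic, count] of recent) {
        const expected = (baseline.get(topic) || 0) *
          (recentMsgs.length / Math.max(baselineMsgs.length, 1));
        const burst = count / Math.max(expected, 1);
        if (burst >= minBurst && count > 2) trends.push({ topic, burst });
      }
      return trends.sort((a, b) => b.burst - a.burst);
    }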

Sonar is just one of many visualizations that could help with keeping up with change in huge streams. But it’s also only one piece of the solution. Another key piece of the solution is finding things in streams.

Finding Things in Streams

Above, we discussed problems and solutions related to keeping up with streams that are full of noise and constantly changing. Now let’s discuss another set of problems and solutions related to finding things in streams.

Filtering the Stream

For a visualization like Sonar to be effective, you need the ability to filter the stream for the sources and messages you want, so there isn’t too much noise in the visualization. The ability to filter the stream for just those subsets of messages you actually care about is going to be absolutely essential in coming years.

Streams are going to become increasingly filled with noise. But another way to think about noisy streams is that they are really just lots of less-noisy streams multiplexed together.

What we need is a way to intelligently and automatically de-multiplex them back into their component sub-streams.

For example, take the stream of all the messages you receive from Twitter and Facebook combined. That’s probably a pretty noisy stream. It’s hard to read, hard to keep up with, and quickly becomes a drag.

In Bottlenose you can automatically de-multiplex your streams into a bunch of sub-streams that are easier to manage. You can then read these, or view them via Sonar, to see what’s going on at a glance.

For example, you can instantly create sub-streams, which are really just filters on your stream of everything. You might make one for just messages by people you like, another for just messages by influencers, another for just news articles related to your interests, another for just messages that are trending, another for just photos and videos posted by your friends, etc.

The ability to filter streams – to mix them and then unmix them – is going to be an essential tool for working with streams.
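
A minimal sketch of that de-multiplexing, with hypothetical message fields and filter rules, looks like this:

    // Named sub-streams are just predicates over the combined stream.
    const subStreams = {
      friends:     msg => msg.author.isFriend,
      influencers: msg => msg.author.followers > 10000,
      news:        msg => msg.source === 'rss' && msg.links.length > 0,
      media:       msg => msg.attachments.some(
                     a => a.type === 'photo' || a.type === 'video'),
    };

    // Route each message into every sub-stream whose predicate matches.
    function demultiplex(messages) {
      const result = Object.fromEntries(
        Object.keys(subStreams).map(name => [name, []])
      );
      for (const msg of messages) {
        for (const [name, matches] of Object.entries(subStreams)) {
          if (matches(msg)) result[name].push(msg);
        }
      }
      return result;
    }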

Searching the Stream

In the first article in this series we saw how online attention and traffic is shifting from search to social. Social streams are quickly becoming key drivers for how content on the Web is found. But how are things found in social streams? It turns out existing search engines, like Google, are not well-suited for searching in streams.

Existing algorithms for Web search do not work well for Streams. For example, consider Google’s PageRank algorithm.

In order to rank the relevancy of Web pages, PageRank needs a very rich link structure. It needs a Web of pages with lots of links between the documents. The link structure is used to determine which pages are the best for various topics. Links are effectively votes: when pages about a topic link to other pages about that topic, they are voting for or endorsing those pages.
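
For readers who want to see the mechanics, here is the textbook power-iteration form of that voting idea, simplified (no handling of dangling pages), and of course not Google's production algorithm:

    // Each page repeatedly distributes its rank across its out-links.
    function pageRank(links, iterations = 50, damping = 0.85) {
      const pages = Object.keys(links);
      const n = pages.length;
      let rank = Object.fromEntries(pages.map(p => [p, 1 / n]));
      for (let it = 0; it < iterations; it++) {
        const next = Object.fromEntries(pages.map(p => [p, (1 - damping) / n]));
        for (const p of pages) {
          for (const q of links[p]) {
            next[q] += damping * rank[p] / links[p].length; // a link is a vote
          }
        }
        rank = next;
      }
      return rank;
    }

    // Example: C is linked to by both A and B, so it accumulates the most rank.
    console.log(pageRank({ A: ['C'], B: ['C'], C: ['A'] }));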

While PageRank may be ideal for figuring out what Web pages are best, it doesn’t help much for searching messages, because messages may have no links at all, or may be only very sparsely linked together. There isn’t enough data in individual messages to figure out much about them.

So how do you know if a given message is important? How do you figure out what messages in a stream actually matter?

When searching the stream, instead of finding everything, we need to NOT find the stuff we don’t want. We need to filter out the noise. And that requires new approaches to search. We’ve already discussed filtering above, and the ability to filter streams is a prerequisite for searching them intelligently. Beyond that, you need to be able to measure what is going on within streams, in order to detect emerging trends and influence.

The approach we’re taking in Bottlenose to solve this is a set of algorithms we call “StreamRank.” In StreamRank we analyze the series of messages in a stream to figure out what topics, people, links and messages are trending over time.

We also analyze the reputations or influence of message authors, and the amount of response (such as retweets or replies or likes) that messages receive.

In addition, we also measure the relevance of messages and their authors to the user, based on what we know of the user’s interest graph and social graph.

This knowledge enables us to rank messages in a number of ways: by date, by popularity, by relevance, by influence, and by activity.
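
Purely as an illustration of how such signals might combine, here is a hedged sketch of a composite score. The weights, decay constants, and message fields are hypothetical; this is not our actual StreamRank formula.

    // Combine recency, popularity, author influence, and relevance into
    // one score that a stream can be sorted by.
    function score(msg, user, now = Date.now()) {
      const ageHours = (now - msg.timestamp) / 3.6e6;
      const recency = Math.exp(-ageHours / 6); // decays over ~6 hours
      const popularity = Math.log1p(msg.retweets + msg.replies + msg.likes);
      const influence = Math.log1p(msg.author.followers);
      // Relevance: overlap between the message's topics and the user's
      // interest graph (here, a Set of topic strings).
      const relevance = msg.topics.filter(t => user.interests.has(t)).length;
      return 2 * relevance + popularity + 0.5 * influence + 3 * recency;
    }

    // Rank a stream by composite score, highest first.
    const ranked = (messages, user) =>
      [...messages].sort((a, b) => score(b, user) - score(a, user));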

Another issue that comes up when searching the Stream is that many messages in streams are quite strange looking: they don’t look like properly formed sentences or paragraphs. They don’t look like English, for example. They contain all sorts of abbreviations, hashtags, @replies, and short URLs, and they often lack punctuation and are scrunched to fit in 140-character Twitter messages.

Search algorithms that use any kind of linguistics, disambiguation, natural language processing, or semantics, don’t work well out of the box on these messy messages.

To apply such techniques you need to rewrite them so that they work on short, messy, strange looking messages. This is also something we’ve built in Bottlenose — we’ve built a new natural language processing and topic detection engine in Javascript that is designed specifically to handle these types of streams and messages.
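
As a small example of the kind of pre-processing this takes, here is a sketch that splits a short, messy message into typed tokens (URLs, @mentions, #hashtags, words) before any linguistic analysis is attempted. It illustrates the problem; it does not reproduce our engine.

    // Sticky regexes let us classify the token at each position in turn.
    function tokenize(text) {
      const patterns = [
        { type: 'url',     re: /https?:\/\/\S+/y },
        { type: 'mention', re: /@\w+/y },
        { type: 'hashtag', re: /#\w+/y },
        { type: 'word',    re: /[A-Za-z0-9']+/y },
      ];
      const tokens = [];
      let i = 0;
      while (i < text.length) {
        let matched = false;
        for (const { type, re } of patterns) {
          re.lastIndex = i;
          const m = re.exec(text);
          if (m) {
            tokens.push({ type, value: m[0] });
            i = re.lastIndex;
            matched = true;
            break;
          }
        }
        if (!matched) i++; // skip punctuation and whitespace
      }
      return tokens;
    }

    // tokenize('RT @novaspivack: Sonar demo http://t.co/abc123 #streams')
    // -> word, mention, word, word, url, hashtag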

These are some of the new challenges and solutions we’re applying in Bottlenose to make working with streams more productive. They are components of what we call our “StreamOS,” a new high-level Javascript and HTML5 operating system for applications that need to do smart things with streams. We’ll be writing a lot more about this in future articles.

 

Drowning in the Stream — New Challenges for a New Web

This is Part II of a three-part series of articles on how the Stream is changing the Web.

In Part I of this series, The Message is the Medium, I wrote about some of the shifts that are taking place as the center of online attention shifts from documents to messages.

Here in Part II, we will explore some of the deeper problems that this shift is bringing about.

New Challenges in the Era of the Stream

Today the Stream has truly arrived. The Stream is becoming primary and the Web is becoming secondary. And with this shift, we face tremendous new challenges, particularly around overload. I wrote about some of these problems for Mashable in an article called, “Sharepocalypse Now.”

The Sharepocalypse is here. It’s just too easy to share, there is too much stuff being shared, there are more people sharing, and more redundant ways to share the same things. The result is that we are overloaded with messages coming at us from all sides.

For example, I receive around 13,000 messages/day via various channels, and I’m probably a pretty typical case. You can see a more detailed analysis here.

As the barrier to messaging has become lower and people have started sending more messages than ever before, messaging behavior has changed. What used to be considered spam is now considered to be quite acceptable.

Noise is Increasing

In the 1990s, emailing a photo of the interesting taco you were having for lunch to everyone you know would have been considered highly spammy behavior. But today we call that “foodspotting,” and we happily send out pictures of our latest culinary adventure on multiple different social networks at once.

Spam is the New Normal

It’s not just foodspotting – the same thing is happening with check-ins, and with the new behavior of “pinning” things (the new social bookmarking) that is taking place in Pinterest. Activities that used to be considered noise have somehow started to be thought of as signal. But in fact, for most people, they are still really noise.

The reason this is happening is that the barrier to sharing is much lower than it once was. Email messages took some thought to compose – they were at least a few paragraphs long. But today you can share things that are 140 characters or less, or just a photo without even any comments. It’s instant and requires no investment or thought.

Likewise, in the days of email you had to at least think, “is it appropriate to send this or will it be viewed as spam?” Today people don’t even have that thought anymore. Send everything to everyone all the time. Spam is the new normal.

Sharing is a good thing, but like any good thing, too much of it becomes a problem.

The solution is not to get people to think before sharing, or to share less, or to unfollow people, or to join social networks where you can only follow a few people (like Path or Pair), it’s to find a smarter way to deal with the overload that is being created.

Notifications Overload

Sharing is not the only problem we’re facing. There are many other activities that generate messages as well. For example, we’re getting increasing numbers of notification messages from apps. These notifications are not the result of a person sharing something; they are the result of an app wanting to get our attention.

We’re getting many types of notifications, for example:

  • When people follow us
  • When we’re tagged in photos
  • When people want to be friends with us
  • When there are news articles that match our interests
  • When friends check-in to various places
  • When people are near us
  • When our flights are delayed
  • When our credit scores change
  • When things we ordered are shipped
  • When there are new features in apps we use
  • When issue tickets are filed or changed
  • When files are shared with us
  • When people mention or reply to us
  • When we have meeting invites, acceptances, cancellations, or meetings are about to start
  • When we have unread messages waiting for us in a social network

The last bullet bears an extra mention. I have noticed that LinkedIn, for example, sends me these notifications about notifications. Yes, we are even getting notifications about notifications!

When you get messages telling you that you have messages, that’s when you really know the problem is getting out of hand.

Fragmented Attention

Another major problem that the Stream is bringing about is the fragmentation of attention.

Today email is not enough. As if it weren’t enough work that we each have several email inboxes to manage, we are now also getting increasing volumes of messages outside of email, in entirely different inboxes for specialized apps. We have too many inboxes.

It used to be that to keep up with your messages all you needed was an email client.

Then the pendulum swung to the Web and it started to become a challenge to keep up with all the Web sites we needed to track every day.

So RSS was invented and for a brief period it seemed that the RSS reader would be adopted widely and solve the problem of keeping up with the Web.

But then social networks came out and they circumvented RSS, forcing users to keep up in social-network specific apps and inboxes.

So a new class of “social dashboard” apps (like Tweetdeck) was created to keep up with social networks, but these didn’t include email or RSS, or all the other Web apps and silos.

This trend towards fragmentation has continued – an increasing array of social apps and web apps can really only be adequately monitored in those same apps. You can’t really effectively keep up with them in email, in RSS, or via social networks. You have to login to those apps to get high-fidelity information about what is going on.

We’re juggling many different inboxes. These include email, SMS, voicemail, Twitter, Facebook, LinkedIn, Pinterest, Tumblr, Google+, YouTube, Yammer, Dropbox, Chatter, Google Reader, Flipboard, Pulse, Zite, as well as inboxes in specialized tools like Github, Uservoice, Salesforce, and many other apps and services.

Alan Lepofsky, at Constellation Research, created a somewhat sarcastic graph to illustrate this problem, in his article, “Are We Really Better Off Without Email?” The graph is qualitative – it’s not based on direct numbers – but in my opinion it is probably very close to the truth.

What this graph shows is that email usage peaked around 2005/2006, after which several new forms of messaging began to get traction. As these new apps grew, they displaced email for some kinds of messaging activities, but more importantly, they fragmented our messaging and thus our attention.

The takeaway from this graph is that we will all soon be wishing for the good old days of email overload. Email overload was nothing compared to what we’re facing now.

The Message Volume Explosion

As well as increasing noise and the fragmentation of the inbox, we’re also seeing huge increases in message volume.

Message volume per day, in all messaging channels, is growing. In some of these channels, such as social messaging, it is growing exponentially. For example, look at this graph of Twitter’s growth in message volume per day since 2009, from the Bottlenose blog:

Twitter now transmits 340 million messages per day, which is more than double the number of messages per day in March of 2011.

If this trend continues then in a year there will be between 500 million and 800 million messages per day flowing through Twitter.
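
The arithmetic behind that range is straightforward; the growth rates below are illustrative assumptions, not forecasts.

    // Project next year's daily volume from today's 340M under a few
    // assumed annual growth rates.
    const current = 340e6; // messages/day today
    for (const growth of [0.5, 1.0, 1.4]) {
      const nextYear = current * (1 + growth);
      console.log(`${Math.round(growth * 100)}% growth -> ${Math.round(nextYear / 1e6)}M/day`);
    }
    // 50% -> 510M/day, 100% -> 680M/day, 140% -> 816M/day: roughly the
    // 500 to 800 million range mentioned above.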

And that’s just Twitter: Facebook, Pinterest, LinkedIn, Google+, Tumblr, and many other streams are also growing. Email volume is increasing as well, thanks to all the notifications that various apps send to email.

Message volume is growing across all channels. This is going to have several repercussions for all of us.

Engagement is Threatened

First of all, the signal-to-noise ratio of social media, and other messaging channels, is going to become increasingly bad as volume increases. There’s going to be less signal and more noise. It is going to get harder to find the needles in the haystack that we want, because there is going to be so much more hay.

Today, on services like Twitter and Facebook, signal-to-noise is barely tolerable already. But as this situation gets worse in the next two years, we are going to become increasingly frustrated. And when this happens we are going to stop engaging.

When signal-to-noise in a channel gets too out of hand, it becomes unproductive and inefficient to use that channel. In the case of social media, we are right on the cusp of this happening. And when it does, people will simply stop engaging. And when engagement falls, the entire premise of social media will start to fail.

This is already starting to happen. A recent article by George Colony, CEO of the analyst firm Forrester Research, cites a study that found that 56% of time spent on social media is wasted.

When you start hearing numbers like this, it means that consumers are not getting the signal they need most of the time, and this will inevitably result in a decrease in satisfaction and engagement.

What’s Next?

We have seen some of the issues that are coming about, or may soon come about, as the Stream continues to grow. But what’s going to happen next? How is the Stream, and our tools for interacting with it, going to adapt?

Click here to read Part III of this series, Keeping Up With the Stream, where we’ll explore various approaches to solving these problems.

The Message is the Medium – Attention is Shifting from the Web to the Stream

Shift Happens

A major shift has taken place on the Web. Web pages and Web search are no longer the center of online activity and attention. Instead, the new center of attention is messaging and streams. We have moved from the era of the Web to the era of the Stream. This changes everything.

Back in 2009, I wrote an article called “Welcome to the Stream – Next Phase of the Web” which discussed the early signs of this shift. Around the same time, Erick Schonfeld, at TechCrunch, also used the term in his article, “Jump Into the Stream.” Many others undoubtedly were thinking the same thing: The Stream would be the next evolution of the Web.

What we predicted has come to pass, and now we’re in this new landscape of the Stream, facing new challenges and opportunities that we’re only beginning to understand.

In this series of articles I’m going to explore some of the implications of this shift to the Stream, and where I think this trend is going. Along the way we’re going to dive deep into some major sea changes, emerging problems, and new solutions.

From Documents to Messages

The shift to the Stream is the latest step in a cycle that seems to repeat. Online attention appears to swing like a pendulum from documents to messages and back every few decades.

Before the advent of the Web, the pendulum was swinging towards messaging. The center of online attention was messaging via email, chat and threaded discussions. People spent most of their online time doing things with messages. Secondarily, they spent time in documents, for example doing word-processing.

Then the Web was born and the pendulum swung rapidly from messages to documents. All of a sudden Web pages – documents – became more important than messages. During this period the Web browser became more important than the email client.

But with the growth of social media, the pendulum is swinging back from documents to messaging again.

Today, our online attention is increasingly directed at messages, not Web pages. We are getting more messages, and more types of messages, from more apps and relationships, than ever before.

We’re not only getting social messages, we’re also getting notification messages. And they are coming to us from more places – especially from social networks, content providers, and social apps of all kinds.

More importantly, messages are now our starting points for the Web — we are discovering things on the Web from messages. When we visit Web pages, it’s more often because we found a link in a message that was sent to us or shared with us. Messages are where we begin; they are primary, and Web pages are secondary.

From Search to Social

Another sign of the shift from the Web to the Stream is that consumers are spending more time in social sites like Facebook, Pinterest and Twitter than on search engines or content sites.

In December of 2011, comScore reported that social networking ranked as the most popular content category in online engagement, accounting for 19% of all consumer time spent online.

These trends have led some, such as VC Fred Wilson, to ask, “how long until social drives more traffic than search?” Fred’s observation was that his own blog was getting more traffic from social media sites than from Google.

Ben Elowitz, the CEO of Wetpaint, followed up on this by pointing out that according to several sources of metrics, the shift to social supplanting search as the primary traffic driver on the Web was well underway.

According to Ben’s analysis, the top 50 sites were getting almost as much traffic from Facebook as from Google by December of 2011. Seven of these top 50 sites were already getting 12% more visits from Facebook than from Google, up from five just a month earlier.

The shift from search to social is just one of many signs that the era of the Stream has arrived and we are now in a different landscape than before.

The Web has changed: the focus is now on messages, not documents. This brings many new challenges and opportunities. It’s almost as if we are in a new Web, starting from scratch – it’s 1994 all over again.

Click here to continue on to Part II of this series, Drowning in the Stream, where we’ll dig more deeply into some of the unique challenges of the Stream.

Consciousness is Not a Computation

In the previous article in this series, Is The Universe a Computer? New Evidence Emerges, I wrote about new evidence suggesting that the universe may be like a computer, or at least that it contains computer codes of a sort.

But while this evidence is fascinating, I don’t believe that ultimately the universe is in fact a computer. In this article, I explain why.

My primary argument is that consciousness is not computable. Since consciousness is an undeniable phenomenon that we directly experience, and since (as I argue below) a computer cannot create or simulate consciousness, the universe has to be more than a mere computer. Below I explain this in more detail.

Consciousness is More Fundamental Than Computation

If the universe is a computer, it would have to be a very different kind of computer than what we think of as a computer today. It would have to be capable of a kind of computation that transcends what our computers can presently do. It would have to be capable of generating all the strangeness of general relativity and quantum mechanics. Perhaps one might posit that it is a quantum computer of some sort.

However, it’s not that simple. If the universe is any kind of computer, it would have to be able to create every phenomenon that exists, and that includes consciousness.

The problem is that consciousness is notoriously elusive, and may not even be something a computer could ever generate. After decades of thinking about this question from many angles, I seriously doubt that consciousness is computable.

In fact, I don’t think consciousness is an information process, or a material thing, at all. It seems, from my investigations, that consciousness is not a “thing” that exists “in” the universe; rather, it is in the category of fundamentals, just like space and time. For example, space and time are not “in” the universe – rather, the universe is “in” space and time. I think the same can be said about consciousness. In fact, I would go so far as to say consciousness is probably more fundamental than space and time: they are “in” it rather than it being “in” them.

There are numerous arguments for why consciousness may be fundamental. Here I will summarize a few of my favorites:

  • Physics and Cosmology. First of all there is evidence in physics, such as the double slit experiment, that indicates there may be a fundamental causal connection between the act of consciously observing something and what is actually observed. Observation seems to be intimately connected to what the universe does, to what is actually measured. It is as if the act of observation — of measurement — actually causes the universe to make choices that collapse possibilities into specific outcomes. This implies that consciousness may be connected to the fundamental physical laws and the very nature of the universe. Taken to the extreme there are even physical theories, such as the anthropic principle, that postulate that the whole point of the universe, and all the physical laws, is consciousness.
  • Simulation. Another approach to analyzing consciousness is to attempt to simulate or synthesize consciousness with software, where one quickly ends up in either an infinite regress or a system that is not conscious of its own consciousness. Trying to build a conscious machine, even in principle, is very instructive, and everyone who is seriously interested in this subject should attempt it until they are convinced it is not possible. Self-awareness – the consciousness of consciousness – is particularly hard to model. Nobody has succeeded in designing a conscious machine so far. Nobody has even succeeded in designing a non-conscious machine that can fool a conscious being into thinking it is conscious. Try it. I dare you. I tried many times, and in the end I came to the conclusion that consciousness, and in particular self-consciousness, leads to infinite regresses that computers are not capable of resolving in finite time.
  • Neuroscience. Another approach is to try to locate consciousness in the physical brain, the body, or anywhere in the physical world – nobody has yet found it. Consciousness may have correlates in the brain, but they are not equivalent to consciousness. John Searle and others have written extensively about this issue. Why do we even have brains then? Are they the source of consciousness, or are they more like electrical circuits that merely channel it without originating it, or are brains the source of memory and cognition, but not consciousness itself? There are many possibilities, and we’re only at the beginning of understanding the mind-brain connection. However, so far, after centuries of dissecting the brain, mapping it, and measuring it in all kinds of ways, no consciousness has been found inside it.
  • Direct Introspection. One approach is through direct experience: search for an origin of knowing by observing your own consciousness directly, with your own consciousness. No origin is found. There is no homunculus in the back of our minds that we can identify. In fact, when you search, even mere consciousness is not found, let alone its source. The more we look, the more it dissolves. Consciousness is a word we use, but when we look for it we can’t find what it refers to. But that doesn’t mean consciousness isn’t a real phenomenon, or that it is an illusion. It is undeniable that we are aware of things, including of the experience of being conscious. It is unfindable, yet it is not a mere nothingness either – there is definitely some kind of awareness or consciousness taking place that is in fact the very essence of our minds. The nature of consciousness exemplifies the Buddhist concept of “emptiness” in a manner that we can easily and directly experience for ourselves. But note that “empty” in this sense doesn’t mean nothingness, or non-existence; it means that it exists in a manner that transcends being either something or nothing. From the Buddhist perspective, although consciousness cannot be found, it is in fact the ultimate nature of reality, from which everything else appears.
  • Logic. Another approach is logical: recognize that all experience is mediated by consciousness — all measurements, all science, all our personal experience, all our collective experience. Nothing ever happens or is known by us without first being mediated by consciousness. Thus consciousness is more fundamental than anything we know of; it is the most fundamental experience, even more fundamental than the experience of space and time, or our measurements thereof. From this perspective we cannot honestly say that anything can ever exist apart from consciousness, from someone or something knowing it. In fact, it would appear that everything depends on consciousness to be known, and possibly to exist, because we have no way to establish that anything exists apart from consciousness. Based on the evidence we have, consciousness is therefore fundamental. The universe appears to be in consciousness, not vice-versa: this is in fact a more logical and more scientific conclusion than the standard belief that consciousness is an emergent property of the brain, or that it is a separate phenomenon from appearances. In the extreme, this investigation leads to a philosophical view called solipsism. However, note that the Buddhist view (above) transcends solipsism because, in fact, there is no self in consciousness – anything you can label as “self” or “I” is actually just an appearance in consciousness, not consciousness in pure form. Since there is no self, you cannot claim that you own consciousness, or that everything exists in “your” consciousness – because there is no way to assert a self that owns or is a consciousness that contains everything else, nor can any “other” be asserted either. Since consciousness is more fundamental than self, or the self-other dichotomy, the view of solipsism is defeated. Instead, consciousness transcends self and other, one and many.
  • Unusual experiences. Yet another approach is to observe consciousness under unusual or extreme conditions, such as during dreaming, lucid dreaming, religious experiences, peak experiences, under the influence of mind-altering drugs, or in numerous well-documented cases of apparent reincarnation and near-death experiences. In such cases there is a wealth of both direct and anecdotal evidence suggesting that consciousness is able to transcend the limits of the body, as well as space and time. Whether you believe such evidence is valid is up to you; however, there is an increasing body of careful studies on these topics indicating that there is a lot more to consciousness than our day-to-day waking state.

Beyond Computation

Because of the above lines of reasoning and observation, I have come to the conclusion that consciousness transcends the physical, material world. It is something different, something special. And it does not seem to be computable, because it has no specific form, substance or even content that can be represented as information or as an information process.

For example, in order to be the product of a computation, consciousness would need to be comprised of information — there would need to be some way to completely describe and implement it with information, or an information process — that is, with bits in a computer system. Information processes cannot operate without information – they require bits, 1s and 0s, and some kind of program for doing things with them.

So the question is: can any set or process of 1s and 0s perfectly simulate or synthesize what it is to be conscious? I don’t think so. Consciousness, when examined, is found to be literally formless and unfindable; it has no content or form that can be represented with 1s and 0s. Furthermore, because consciousness, when examined, is essentially changeless, it is not a process – a process requires some kind of change. Therefore it is neither information nor an information process.

Some people counter the above argument by saying that consciousness is an illusion, a side-effect, or what is called an “epiphenomenon” of the brain. They claim that there is no such thing as actual consciousness, and that there is nothing more to cognition than the machinery of the brain. They are completely missing the fundamental point.

But let’s assume they are right for a moment – if there is no consciousness, then what is taking place when a being knows something, or when it knows its own knowing capacity? How could that be modeled in a computer program? Simply creating a data structure and process that represents its own state recursively is not sufficient – it is static, just data – there are no actual qualia of knowing taking place in that system.

Try as one might, there is no way to design a machine or program that manifests the ability to know or experience the actual qualia of experiences. John Searle’s Chinese Room thought experiment is a famous logical argument that illustrates this. The simple act of following instructions – which is all a computer can do – never results in actually knowing what those instructions mean, or what it is doing. The knowing aspect of mind – the consciousness – is not computable.

Not only can consciousness not be simulated or synthesized by a computer, it cannot be found in a computer or computer program. It cannot magically emerge in a computer of sufficient complexity.

For example, suppose we build up a computer or computer program by gradually adding tiny bits of additional complexity — at what point does it suddenly transition from not-conscious to conscious? There is no such transition. I call that kind of thinking “magical complexity,” and many people today are guilty of it. It’s an intellectual cop-out: there is nothing special about complexity that suddenly and magically causes consciousness to appear out of nowhere.

Consciousness is not an emergent property of anything, nor is it dependent on anything. It does not come from the brain, and it does not depend on the brain. It is not part of the brain either. Instead, it would be more correct to say that the brain is perhaps an instrument of consciousness, or a projection that occurs within consciousness.

One analogy is that the brain channels consciousness, like an electrical circuit channels electricity. In a circuit, the electricity does not come from the circuitry; it is fundamentally the energy of the universe – the circuit is just a conduit for it.

A better analogy, however, is that the brain is actually a projection of consciousness, just as a character in a dream is a projection of the dreaming mind. Within a dream there can be fully functional, free-standing characters that have bodies and personalities and that seem to have minds of their own, but in fact they are all just projections of the dreaming mind. Similarly, the brain appears to be a machine that functions a certain way, but it is less fundamental than the consciousness that projects it.

How could this be the case? It sounds so strange! Yet if I phrase it differently, all of a sudden it sounds perfectly normal. Instead of “consciousness,” let’s say “space-time.” The brain is a projection of space-time; space-time does not emerge from the brain. That sounds perfectly reasonable.

The key is that we have to think of consciousness as being on the same level of phenomena as space-time, as a fundamental aspect of the universe. The brain is a space-time-consciousness machine, and the conceptual mind is what that machine is experiencing and doing. However, space-time-consciousness is more fundamental than the machinery of the brain, and even when the brain dies, space-time-consciousness continues.

For the above reasons, I think that consciousness proves that the universe is not a computer — at least not on the ultimate, final level of analysis. Even if the universe contains computers, or contains processes that compute, the ultimate level of reality is probably not a computer.

But let’s, for the purpose of being thorough, suppose that we take the opposite view, that the universe IS a computer and everything in it is a computation. This view leads to all sorts of problems.

If we say that the universe is a computation, it would imply that everything — all energy, space, time and consciousness — is taking place within the computation. But then where does the computation come from, and where is it happening? A computation requires a computer to compute it — some substrate that does the computation. Where is this substrate? What is it made of? It cannot also be made of energy, space, time or consciousness — those are all “inside” the computation; they are not the substrate, the computer.

Where is the computer that generates this universal computation? Is it generating itself? That is a circular reference that doesn’t make sense. For example, you can’t make a computer program that generates the computer that runs it. The computer has to be there before the program, it can’t come from the program. A computation requires a computer to compute it, and that computer cannot be the same thing as the computation it generates.

If we posit a computer that exists beyond everything – beyond energy, space and time — how could it compute anything? Computation requires energy, space and time — without energy there is no information, and without space and time there is no change, and thus no computation. A computer that exists beyond everything could not actually do any computation.

One might try to answer this by saying that the universal computation takes place on a computer that exists in a meta-level space-time beyond ours — in other words it exists in a meta-universe beyond our universe. But that answer contradicts the claim that our universe is a computer – because it means that what appears to be a universe computer is really not the final level of reality. The final level of reality in this case is the meta-universe that contains the computer that is computing our universe. That just pushes the problem down a level.

Alternatively, one could claim that the meta-universe beyond our universe is also a computer – so our universe computer exists inside a meta-level universe computer. In this case it’s “computers all the way down”: an infinite regress of meta-computers containing meta-computers. But that claim is a logical cop-out, because then there is no final computer behind it all – and thus no source or end of computation. If such infinite chains of computation could exist, it would be difficult to say they actually compute anything, since they could never start or complete – which makes this claim not unlike claiming that the universe is NOT a computer.

In the end we face the same metaphysical problems we’ve always faced – either there is a fundamental level of reality that we cannot ever really understand, or we fall into paradoxes and infinite regress. Digital physics may have some explanatory power, but it has its limits.

But then what does it mean that we find error correcting codes in the equations of supersymmetry? If the fundamental laws of our universe contain computer codes in them, how can we say the universe is not a computer? Perhaps the universe IS a computer, but it’s a computer that is appearing within something that fundamentally is not computable, something like consciousness perhaps. But can something that is not computable generate or contain computations? That’s an interesting question.

Consciousness is certainly capable of containing computations, even if it is not a computation. A simple example of this would be a dream about a computer that is computing something. In such a dream there is an actual computer doing computations, but the computer and the computations depend on something (consciousness) that is not coming from a computer and is not a computation.

In the end I think it’s more likely that ultimate reality is not a computer – that it is a field of consciousness that is beyond computation. But that doesn’t mean that universes that appear to be computations can’t appear within it.

“Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man.” — Chuang Chou

Further Reading

If you are interested in exploring the nature of consciousness more directly, the next article in this series, Recognizing The Significance of Consciousness, explains what consciousness is actually like, in its pure form, and how to develop a better recognition of it for yourself.

Is the Universe a Computer? New Evidence Emerges.

I haven’t posted in a while, but this is blog-worthy material. I’ve recently become familiar with the thinking of University of Maryland physicist James Gates Jr. Dr. Gates is working on a branch of physics called supersymmetry. In the course of his work he has discovered what appears to be a form of computer code – error-correcting codes – embedded within, or resulting from, the equations of supersymmetry that describe fundamental particles.

You can read a non-technical description of what Dr. Gates has discovered in this article, which I highly recommend.

In the article, Gates asks, “How could we discover whether we live inside a Matrix? One answer might be ‘Try to detect the presence of codes in the laws that describe physics.'” And this is precisely what he has done. Specifically, within the equations of supersymmetry he has found, quite unexpectedly, what are called “doubly-even self-dual linear binary error-correcting block codes.” That’s a long-winded label for codes that are commonly used to remove errors in computer transmissions, for example to correct errors in a sequence of bits representing text that has been sent across a wire.
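To make that label concrete: the smallest example of a doubly-even self-dual binary code is the extended Hamming [8,4] code, in which every codeword’s weight is a multiple of four and the code equals its own dual. Here is a minimal sketch of how such a block code repairs a flipped bit (a toy illustration of the general idea, not Dr. Gates’ actual construction):

```python
# The extended Hamming [8,4] code: the smallest doubly-even self-dual
# binary code. It encodes 4 data bits into 8 and can correct any single
# flipped bit (minimum distance 4). Brute-force decoding for clarity.
from itertools import product

G = [  # generator matrix; each row has weight 4 (doubly-even)
    (1, 0, 0, 0, 0, 1, 1, 1),
    (0, 1, 0, 0, 1, 0, 1, 1),
    (0, 0, 1, 0, 1, 1, 0, 1),
    (0, 0, 0, 1, 1, 1, 1, 0),
]

def encode(bits):
    # A codeword is the mod-2 sum of the generator rows selected by the data bits.
    return tuple(sum(b * row[i] for b, row in zip(bits, G)) % 2 for i in range(8))

CODEWORDS = [encode(bits) for bits in product((0, 1), repeat=4)]

def correct(received):
    # Nearest-codeword decoding; fine for a 16-codeword toy example.
    return min(CODEWORDS, key=lambda c: sum(a != b for a, b in zip(c, received)))

word = encode((1, 0, 1, 1))
garbled = list(word)
garbled[5] ^= 1                          # flip one bit "in transmission"
assert correct(tuple(garbled)) == word   # the error is repaired
```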

Gates explains, “This unsuspected connection suggests that these codes may be ubiquitous in nature, and could even be embedded in the essence of reality. If this is the case, we might have something in common with the Matrix science-fiction films, which depict a world where everything human beings experience is the product of a virtual-reality-generating computer network.”

Why are these codes hidden in the laws of fundamental particles? “Could it be that codes, in some deep and fundamental way, control the structure of our reality?” he asks. It’s a good question.

If you want to explore further, here is a YouTube video by someone who is interested in popularizing Dr. Gates’ work, containing an audio interview that is worth hearing. In it, you can hear Gates describe the potential significance of his discovery in layman’s terms. The video then goes on to explain how all of this might be further evidence for Bostrom’s Simulation Hypothesis (which suggests that the universe is a computer simulation). (Note: the video is a bit annoying – in particular the melodramatic soundtrack – but it’s still worth watching to get a quick, high-level overview of what this is all about, and some of the wild implications.)

Now why does this discovery matter? Well, it is more than strange and intriguing that fundamental physics equations describing the universe would contain these error-correcting codes. Could it mean that the universe itself is built with error-correcting codes in it, codes just like those used in computers and computer networks? Did they emerge naturally, or are they artifacts of some kind of intelligent design? Or do they indicate the universe literally IS a computer? For example, maybe the universe is a cellular automata machine, or perhaps a loop quantum gravity computer.

Digital Physics – A New Kind of Science

The view that the universe is some kind of computer is called digital physics – it’s a relatively new niche field within physics that may be destined for major importance in the future. But these are still early days.

I’ve been fascinated by the possibility that the universe is a computer since college, when I first found out about Ed Fredkin’s theory that the universe is a cellular automaton — for example, something like John Conway’s Game of Life (see particularly this article, excerpted from the book Three Scientists and Their Gods).
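For readers who haven’t seen it, the Game of Life is a grid of cells that live or die by a simple local rule, yet it produces astonishingly rich behavior, which is what makes the “universe as cellular automaton” idea plausible at all. A minimal sketch of the update rule:

```python
# One update step of Conway's Game of Life on a small wrapping grid.
# Each cell lives or dies based only on its 8 neighbors -- the kind of
# purely local rule Fredkin imagined the universe might run on.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if live_neighbors(r, c) == 3
                  or (grid[r][c] and live_neighbors(r, c) == 2) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "glider": a five-cell pattern that travels across the grid forever.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):   # after 4 steps the glider has moved one cell diagonally
    grid = step(grid)
```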

Following this interest, I ended up interning in a supercomputing lab at MIT that was working on testing these possibilities, with the authors of this book on “Cellular Automata Machines.”

Later I had the opportunity to become friends with Stephen Wolfram, whose magnum opus, “A New Kind of Science” is the ultimate, and also heaviest, book on this topic.

I asked Stephen what he thinks of this idea and he said it is “a bit like saying ‘there’s a Fibonacci sequence there; this must be a phenomenon based on rabbits.’ Error-correcting codes have a certain mathematical structure, associated e.g. with sphere packing. You don’t have to use them to correct errors. But it’s definitely an amusing thought that one could detect the Matrix by looking for robustification features of code. Of course, today’s technology/code rarely has these … because our computers are already incredibly reliable (and probably getting more so).”

The work of Dr. Gates is, at the very least, an interesting new development for this field. At best it might turn out to be a very important clue about the nature of the universe, although it’s very early and purely theoretical at this point. It will be interesting to see how this develops.

However, I personally don’t believe the universe will turn out to be a computer or a computation. Read the next article in this series to find out why I think Consciousness is Not a Computation.

Notes:

  • Seth Lloyd, professor of quantum mechanical engineering at MIT, has written a book that describes his theory that the universe is a quantum computer.
  • Here’s a good article that explores, in more detail, various views related to the idea that the universe is a computation.

Bottlenose Beta 2.0 Launched Today!

Bottlenose Beta 2.0 launched today, and it’s pretty innovative.

Three good articles came out covering it:

ReadWriteWeb – Bottlenose is a 6th Sense for the Social Web

SemanticWeb.com – Bottlenose Beta Two Features New Layout, Visuals

TechCrunch – Bottlenose 2.0: Taming the “Share-pocalypse”

Also Robert Scoble blogged about it on Google+ here.

Bottlenose also blogged about the new features and why they are important here.

I’m psyched for this launch. The app has come a long way. The new Sonar features really help me keep up with all my streams in a way that just has never been possible before. Check it out. You can use the semi-secret invite code: getsonar


Live Matrix Acquired by OVGuide

I’m really pleased to announce that a startup I helped co-found, Live Matrix, has been acquired by OVGuide, a leading video portal.

TechCrunch covered the deal here.

The new combined company is a unique powerhouse in the online video space – covering the entire life cycle of online videos from when they are upcoming, to when they go live, to when they are on-demand.

Sanjay Reddy, my co-founder and friend, has done an amazing job bringing our vision to life. The deal with OVGuide is a big step forward in the evolution of this project. I look forward to great things from the combined company. Congratulations to the team, and my thanks to our loyal and helpful angel investors. It’s been a very interesting project to be a part of.

StreamGlider Launches Today!

Today I’m happy to announce the launch of StreamGlider, a new tablet app (initially on iPad) that provides the first live streaming dashboard for keeping up with your interests.

TechCrunch just broke the story.

The inspiration for StreamGlider was PointCast, a product that launched in the mid-1990s. PointCast streamed news, entertainment, ads and other updates to screensavers. PointCast was great, and we (my co-founders Bill McDaniel and John Breslin, and I) wondered whether we could evolve that concept and update it for the tablet and mobile era.

We designed StreamGlider to be the ultimate live streaming newsreader. It does what you have come to expect, plus a lot more. And it does it live – it streams live updates to your tablet.

It also offers a lot of new functionality that supports new ways of using a reader.

  • StreamGlider pulls live updates from content sources on the Web (RSS feeds, Google Reader, and Web APIs like Twitter, Facebook, YouTube, Flickr, etc.) onto mobile devices, and displays them in a variety of formats (see the sketch after this list).
  • It can function as a live digital picture frame for the Web, showing news articles, photos from friends, videos, etc. as full-screen slides that scroll past.
  • It can also show streams as live interactive filmstrips that function like tickers.
  • And it can show streams in an interactive magazine format that is similar to a newspaper layout.
  • You can also play and watch videos in StreamGlider.
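To give a feel for the plumbing this involves on the RSS side, here is a simplified sketch of my own using the feedparser library (not StreamGlider’s actual code, and with a hypothetical feed URL):

```python
# Minimal RSS polling loop of the sort a live streaming reader is
# built on. Illustrative only; StreamGlider's real pipeline handles
# many source types, layout, and live updates.
import time
import feedparser

FEED_URL = "https://example.com/feed.xml"   # hypothetical feed
seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        uid = entry.get("id") or entry.get("link")
        if uid not in seen:                 # only surface new items
            seen.add(uid)
            print(entry.get("title", "(untitled)"))
    time.sleep(60)                          # poll once a minute
```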

Powerful Features

StreamGlider is fully gesture controlled – everything can be controlled by swiping, pinching, pointing, tapping, etc. You can easily customize the streams you want.

You can also create mashups of streams that pull from many different sources on a theme – for example you can pull from different news sources about sports, or different photo and video sources about a topic.

In addition to all this, you can make very personalized streams that pull from your social media accounts, and filtered streams that search for particular topics in content sources.

StreamGlider is also social. You can share individual items, or even entire streams of items, with your friends.

We designed StreamGlider to be brandable. Partners and customers can create their own private-labelled versions of StreamGlider, with their brand and their content, for their audiences. Brands can sell it or give it away free and run ads in it if they want. (Contact StreamGlider if you’re interested in doing this for your brand.)

This frees publishers, brands, and enterprises to create their own powerful readers for their audiences, with their own brand, instead of having to live inside other apps like Flipboard or Pulse. They can have their own icon on the desktop and keep their direct relationship with their customers.

There are many use-cases for this – you might want to distribute your own branded StreamGlider for your publication, for a consumer product, to your fans, for a big event, or to your customers or employees. And you don’t have to be a software company to do it – you can almost instantly get your own branded StreamGlider.

We also designed StreamGlider to be open-source in the future. More news on that later. We hope we can become the Mozilla of newsreaders.

What’s next?

The team behind StreamGlider has a long history of making smart, semantic apps. You can expect that in future versions of StreamGlider, the app will begin to get smarter, more personalized, and even more social. This is just the beginning of our roadmap.

We will also be adding in support for more types of streams. Stay tuned!

Meanwhile, you should check it out. Download it to your iPad and see what I’m talking about.

The Problem with Stream 3.0

After my former project, Twine.com, was sold, I began to turn my attention to the Next Big Challenge: how to make sense of the growing real-time Web, or what many call “the Stream.”

I could see the writing on the wall, and it was less than 140 characters: Social media’s own success was going to be its biggest challenge. The Stream was going to soon become unusable.

In the early days of the Stream, it was actually possible to keep up with your community on Twitter and Facebook effectively. Not anymore. There are just too many people messaging too often. The chances of even seeing a message before it scrolls into history are getting lower every day.

Today, the Stream is growing exponentially. Twitter famously grew by 3x in the last year and sends out more than 250 million Tweets per day. Facebook sends billions of public and private messages per day. And this is just the tip of the iceberg — or the deluge, as it were.

There are so many new and growing sources of messages in the Stream: Google+, LinkedIn, Foursquare, YouTube, RSS feeds, and more are coming. And that’s just the consumer side – there’s a whole other side to the Stream: Chatter, Yammer, Socialcast, Jive, and many other enterprise streams are also growing rapidly.

And on top of this there is a whole new deluge of machine- and app-generated data that is just starting to join the Stream, and may eventually dwarf human-generated data.

At the same time as all these new networks are popping up to enable messaging in the Stream, the barrier to creating and sharing messages has also never been lower. I call this The Sharepocalypse.

It’s never been easier to share — people are sharing more kinds of information, more often, with more people, than ever before. And it requires less thought too, because the messages themselves are so short. This is resulting in a collective overshare of unimagined proportions.

With email, the messages were usually long and required some effort, so people sent relatively few emails per day. And at least with email there were some basic social rules about what you could send to everyone without being a spammer.

Not anymore. In the age of the Stream it’s quite normal to post what you had for lunch, or some cool product you are looking at in a store window, with a photo, to the entire world. That would have been unthinkable in the email era. In the age of the Stream, it barely merits a second thought. The Sharepocalypse is here, in spades.

The result of all this adoption and growth of the Stream is a new kind of information overload: stream overload.

Stream Overload is worse than email overload, because it includes email overload.

Email, in my opinion was “Stream 1.0.” Social media (RSS, Twitter, Facebook, etc.) were “Stream 2.0.” And now we’re entering “Stream 3.0” – when everything – all information, all applications, everyone, even things – become part of the Stream.

(Yes I know, version numbers are so Web 3.0, but it’s helpful to use them as handles for the discussion. Stream 3.0 is indeed a different era from the early days of the Stream.)

We’re already seeing the signs of stream overload — but this is just a preview of what’s to come as Stream 3.0 comes to maturity. The growth of the Stream is still only just beginning. Most of the planet isn’t using it yet. And most people don’t realize how integral it’s going to be in their lives in coming years.

If the Web is the planet’s brain, the Stream is its mind – it’s the living, breathing, thinking, learning, aware, acting part. And we’re all going to be part of it 24/7, whether we like it or not. So it better be good, it better be smart, it better be usable, or we’re all going to be gridlocked and buried in messages we don’t want.

And this is the Next Big Problem: the Stream is going to become both more important and more noisy at the same time. This is a classic crisis: either something is done to reduce the noise, or the Stream will stop being usable just when we need it most.

What happens if the Stream really breaks down under its own weight?

If the signal-to-noise problem isn’t solved, and people can’t keep up with the Stream, they’re going to give up. They’re going to stop paying attention. They’re going to stop trying to keep up. They will never be able to scroll down far enough. They won’t even log in to sites like Twitter and Facebook if they are too overloaded.

And if nobody is there listening, then there won’t be much point in posting news and updates to the Stream either. People will stop posting too.

And without the people there, marketers won’t post either – so the advertising money will go away. And even in the social enterprise, if streams for teams get too noisy, they will also stop being used and people will move to some new solution.

And without the people there, the Stream will become an automaton. All that will be left is machines posting to machines.

Unless something is done to solve it, of course.

And something IS being done, it turns out. We’re launching Bottlenose tonight. To read more about the history of the project, read Bottlenose has Launched!

Make sure to follow us on Twitter.

And come check out Bottlenose! The app is still in invite-only beta, so you either need a high enough Klout score or an invite code to get in.

The first 500 readers of my blog who want to try it out, can get into Bottlenose using the invite code: novafriends

 

Check out what the press is saying about Bottlenose:

Bottlenose Intelligent Social Dashboard Launches Private Beta  — ReadWriteWeb

Bottlenose is a Game Changer for Social Media Consumption — Mashable

Bottlenose is a Social Media Dashboard That Makes Sense of the Stream – Venturebeat

Can This Startup Eliminate Social Media Overload? — Inc.

The Day of the Dolphin: Swim in the Personalized Stream With Bottlenose — SemanticWeb

Bottlenose Launch – A Smarter Way to Skim the Stream – SiliconAngle

Bottlenose is a Web-Based Twitter Client for Power Users — AllThingsD

Managing the Sharepocalypse — AdWeek

Can Bottlenose Help Prevent the Social Sharepocalypse? — GigaOm

Social Overload? Bottlenose Promises Intelligent Filtering — Information Week

Bottlenose has Launched!

Today, after almost two years of work in stealth, I am proud to announce the launch of Bottlenose.

While I have co-founded and serve on the boards of several other ventures (The Daily Dot, Live Matrix, StreamGlider, and others), Bottlenose is different from all my other projects in that I am also in a full-time day-to-day role as the CEO. In short, Bottlenose is what I’m putting the bulk of my time into going forward, although I will continue to angel invest and advise other startups.

The story of Bottlenose began when my good friend and advisor, Josh Jones-Dilworth, introduced me to Dominiek ter Heide after I sold my last company, Twine.com in 2010.

Dominiek was at the time working on a new kind of personalization technology for social media. Meanwhile, I had been thinking about how to filter the Stream, and the emerging problem of the Sharepocalypse and what I have been calling “the Stream 3.0 Problem.”

Josh knew both of us and had a hunch that we were really thinking about the same problem from different angles. Dominiek and I started speaking via Skype and soon we teamed up. Bottlenose was officially born in 2010.

Working with Dominiek has been a true pleasure. He’s one of the most productive, talented software engineers I’ve ever met. It’s been an amazing ride so far. Soon, thanks to Dominiek, we were joined by an A-team of killer engineers with expertise in natural language processing, Node.js, Javascript, HTML 5, machine learning, cloud computing, NoSQL, and more.

Our little band of hotshots has produced an amazingly robust and powerful app — something that even large companies with huge engineering teams would be hard-pressed to develop. I’m honored to be working with these guys, and very proud of the team and of what we’ve built.

We have also been fortunate to be joined by some terrific angel investors, including Andy Jenks, of Stage One Capital, and several others (see the About page on Bottlenose for the complete list).

So what is Bottlenose, anyway? Well, one way to find out is to visit the site and check out the Tour there. But I’ll summarize here as well:

Bottlenose is the smartest social media dashboard ever built. It’s designed for busy people who make heavy use of social media: prosumers, influencers, professionals.

Bottlenose uses next-generation “stream intelligence” technology to understand the messages that are flowing through Twitter, Facebook and other social networks. It also learns about your interests.

On the basis of this knowledge, Bottlenose helps you filter your streams to find what matters to you, what’s relevant, and what’s most important. Bottlenose also includes many new features, like Sonar, which visualizes what’s going on in any stream, and powerful rules and automation capabilities to help you become more productive.
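To give a rough sense of what filtering a stream by your interests means in practice, here is a toy sketch of my own (not Bottlenose’s actual technology, which does far more: semantic analysis, trend detection, and learning over time):

```python
# Toy interest-weighted relevance scorer for stream messages.
# Purely illustrative; assumed interest weights, naive word matching.

INTERESTS = {"semantic": 3.0, "search": 2.0, "startup": 1.5}

def relevance(message: str) -> float:
    words = set(message.lower().split())
    return sum(w for term, w in INTERESTS.items() if term in words)

stream = [
    "New semantic search engine launches",
    "What I had for lunch today",
    "Startup raises funding for semantic platform",
]

# Surface the highest-signal messages first and drop zero-score noise.
for msg in sorted(stream, key=relevance, reverse=True):
    if relevance(msg) > 0:
        print(msg)
```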

This is just the beginning of this adventure. Our roadmap for Bottlenose is very ambitious, and it’s going to be a lot of fun, and hopefully will really make a difference too. We’re super excited about this product and we hope you will be as well.

Check back here for more posts and observations about Bottlenose and where I think social media is headed.

Make sure to follow us on Twitter.

And come check out Bottlenose! The app is still in invite-only beta, so you either need a high enough Klout score or an invite code to get in.

The first 500 readers of my blog who want to try it out, can get into Bottlenose using the invite code: novafriends

I look forward to seeing you in Bottlenose!

For more about the thinking behind Bottlenose, read The Problem with Stream 3.0.

Announcing Common Crawl

Several years ago my friend Gil Elbaz (CEO of Factual; forefather of Google AdWords) approached me with an ambitious vision – he wanted to create an open not-for-profit crawl of the Web to ensure that everyone would have equal access to a Web-scale search index to build on and experiment with.

Search giants like Google and Microsoft were not likely to provide open access to their search indices because they couldn’t risk giving their crown jewels to potential competitors, and furthermore they were bound by the constraints of for-profit business models.

Gil felt that in the future it would be an important service to provide a truly open Web-scale search index that was not controlled by a for-profit company and was not bound by profit motives. This index would make it possible for startups to innovate in search, and for researchers and students to explore Web Science at scale, and furthermore it would level the playing field in search and distribute the index, preventing any one company from monopolizing the index of humanity’s knowledge.

As a longtime advocate of the open Web, I was excited by the vision Gil shared with me, and agreed to join the board of directors of what became The Common Crawl Foundation, along with Carl Malamud. Gil and lead engineer, Ahad Rana, then went to work actually building the thing. This was no small undertaking and required quite a bit of innovation and ingenuity. You can read about the cloud based solution that was developed here.

Several years later, after a lot of work, it’s starting to be ready for prime time, and so we’re happy to announce the Web’s first truly open, non-profit, 5-billion-page search index!

With the recent addition of our director, Lisa Green, from Creative Commons, Common Crawl is now beginning a new phase in its rollout, and a new phase for the open Web. You can read our inaugural blog post announcing the project here.

We hope you will come in and take a look around, and we look forward to seeing what you dream up and build with this data set.
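To give a flavor of what building with this data set looks like, here is a minimal sketch of iterating over one downloaded archive segment, using the warcio library and a placeholder filename (see the Common Crawl site for the actual file listings and formats):

```python
# Minimal sketch: scan the pages in one downloaded Common Crawl
# archive segment. Uses the warcio library; the filename below is a
# placeholder, not a real segment name.
from warcio.archiveiterator import ArchiveIterator

with open("crawl-segment.warc.gz", "rb") as stream:      # hypothetical file
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":                # an archived page
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()        # raw page bytes
            print(url, len(body))
```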

Creator of Delicious Wants to Meet Your Needs With Jig

Joshua Schachter, the creator of Delicious, has launched his newest creation, Jig.

At first glance the site seems a bit like Twitter, but it has a different focus. Instead of posting about what you are doing, you post about what you need. Then other people reply with suggestions, ideas, answers, help, or presumably commercial products and services that can meet your need.

This is not a new idea. It’s been done before, at least in print, quite successfully, in the form of “the want ads.” Want ads are classified ads where, instead of offering something, you ask for something. They are basically inverse classified ads, much as a reverse auction is an inverted auction.

But although it’s not groundbreakingly new, it’s beautifully executed and quite simple and elegant. It’s elegant enough in fact that it might catch on. And if it does, it could be quite useful.

The site has some similarities to Quora, but it’s broader. It’s not just about questions and answers – it’s about getting help with any kind of need.

Looking through the initial needs being posted by early users, there are requests for restaurant suggestions, a guy asking what gift he should buy for his minimalist girlfriend, a request to understand how UFO propulsion works, requests to hire people, and even a request for affordable health insurance.

There also seems to be quite a bit of spam, or at least unhelpful questions and comments, including some harmless but irrelevant banter. Jig will need to provide a way to rank needs, comments, and authors so that noise is filtered out. This is a problem that Schachter should be able to solve in his sleep, so I’m not worried about it being a barrier to adoption. It will be resolved soon, I’m betting.

There’s a lot of potential here, if people actively start sharing their tips and advice for getting needs met. One challenge will be to make it easy for people to find needs they can help with. A categorization system, based on hashtags perhaps, would help match needs to your offers or areas of expertise.
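For instance (purely hypothetically), such matching could be as simple as intersecting a need’s hashtags with each helper’s declared expertise tags:

```python
# Hypothetical sketch of hashtag-based matching for a service like Jig:
# route each posted need to the users whose expertise tags overlap it.
import re

EXPERTISE = {                        # assumed user -> tags mapping
    "alice": {"health", "insurance"},
    "bob": {"restaurants", "travel"},
}

def tags(post: str) -> set:
    return set(re.findall(r"#(\w+)", post.lower()))

def matches(post: str) -> list:
    post_tags = tags(post)
    return [user for user, skills in EXPERTISE.items() if skills & post_tags]

print(matches("Looking for affordable #health #insurance in Texas"))
# -> ['alice']
```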

All the product-level issues are pretty easy to solve. This is not rocket science. A harder problem is how Jig is going to make money. Who is going to have to pay for what? There’s always a catch somewhere – at least if the goal is to build a revenue business.

Will users eventually be charged to post certain kinds of needs? Or is the idea to charge companies, for example, as they are asked to do when posting job ads in Craigslist? Or will there be some kind of reverse auction or group buying angle to this – when enough people have the same need they can pool together and negotiate for a group deal?

Time will tell. But since it’s Joshua Schachter, Jig is bound to get a lot of attention. Check it out for yourself and see if it meets your needs.

By the way, if you’re reading this, tell our reporters at The Daily Dot (@dailydot) what you think of Jig, and whether it’s helped you in any interesting ways. We’re curious to hear your perspective.

Check out the new visualization widget on my sidebar

The team at Icosystem invited me to try out their new Infomous cloud widget. You can see it on the top of the right column of this blog. It visualizes the concept graph in my blog posts. It has some cool features – click on any topic and explore the related posts. If you sign up at their site, you can get your own widgets like this. They work on your blog, or for your tweets, or any Google search. They have a very nice widget editor where you can configure everything on their site and see the changes immediately in your widget. Thanks guys! I like it.

The Daily Dot – Our Newest Venture Production – Launches Today!

Today I’m pleased to announce that, The Daily Dot, our newest “venture production,” has launched into public beta.

The Daily Dot is the first of its kind: the Web’s newspaper – the first community newspaper about the Web. We cover the Web like a town paper covers its community. Here’s a video overview of the site.

This venture began with the insight that each of us is spending an increasing amount of our lives online, in various online communities, yet we have very little insight into what’s going on in this new landscape. These communities are literally places, and some of them are quite large. This is beautifully illustrated in this “map” of the Web as a geography.

I believe it’s time for the Web community to have its own newspaper. The launch of the Daily Dot — the Web community’s first actual newspaper of record — is a turning point, a coming-of-age, for the Web as a medium, as a place, and as a community.

Our editorial focus is different from that of other publications that cover the Web. Instead of covering the Web as an industry, a technology or a phenomenon, we cover it as a community. We tell the stories of the people, culture, content, events and issues that are making waves in communities around the Web. And to find and report on these stories, we have embedded reporters in those communities: Facebook, YouTube, Reddit, Twitter, Tumblr, with more communities coming soon.

Just like our physical cities and towns, our online communities are constantly moving and developing, and they are full of interesting people doing newsworthy and important things. The Daily Dot’s mission is to cover these communities just like physical community newspapers cover cities and towns.

Where a town newspaper covers the latest high school sports game, the town meeting, the local crime report, we cover the story behind the hottest viral video sweeping the planet, the latest social movement in Facebook, and important issues (like cybercrime or online bullying) that are happening in our online neighborhoods.

When a major event happens in the physical world – like the revolutions in Arab world, for example — we don’t cover the events themselves, we cover their online footprint — what’s happening online that relates to the story.

The Daily Dot will also cover what’s happening around the Web in time: just like physical community newspapers have calendar sections – The Daily Dot has an online events section, provided in partnership with Live Matrix, one of our other venture productions, that aggregates the schedule of the Web. These two companies are highly synergistic and form the beginnings of our online media network.

While those of us in the Web industry have our fingers slightly more on the pulse of the Web, the vast majority of people who use the Web do not read industry blogs and have little or no visibility into what’s going on in the online world or where it’s headed. Other than a few articles a week published by mainstream media, they are not being informed.

It’s time for that to change. The Daily Dot will be publishing dozens of articles each day about what’s happening online. We’re writing for the mainstream, not for elites or geeks. The Daily Dot is for the people who use the Web — who live in it — not just the people who are building it.

Our content is designed to be entertaining, interesting, informative — and sometimes edgy and controversial – kind of like People Magazine meets USA Today, with a little bit of TMZ thrown in.

If you want to know what’s happening online, or you’re looking to find the hottest emerging entertainment, personalities, viral videos, issues, etc — and the stories behind them — The Daily Dot is your newspaper.

But The Daily Dot is not just a newspaper, it’s also a very interesting business venture. It’s a chance to build what could become one of the largest circulation newspapers in the world someday – a global newspaper about the one community that we all share in common, no matter where we actually live.

I also want to congratulate and thank the amazing editorial and development team at the Daily Dot, who made this possible. And most importantly, I want to acknowledge Nicholas White (Daily Dot CEO), Owen Thomas (Daily Dot founding editor), and Josh Jones-Dilworth (marketing guru), my co-founders in this venture.

Nick and Owen are leading business and editorial and running the operations, while Josh and I are on the board, advising in our respective areas of expertise. Nick and Owen deserve all the credit here — they have done the heavy lifting to bring this vision to market, and I’m very proud to be working with them.

Please join us in helping to spread the word about The Daily Dot — it’s your newspaper, and we need your help to make it great (we look forward to your feedback and participation in the comments).

This is going to be a fun ride and I can’t wait to see how it evolves.

Sharepocalypse Now

The social media landscape is changing quickly, but this change won’t be immediate, or for that matter, efficient. And that’s going to be a big problem for all of us.

I believe that Twitter, Facebook, Google+ and LinkedIn are fundamentally different, and thus, should not be in competition. However, I’m not sure the companies themselves see it this way. It’s likely they will continue dedicating resources to competition instead of differentiation.

And while the social media gods fight it out in the clouds above us, what will happen down here on Earth? What about all of us, the little people — the users?

We’re entering a new era of social network chaos, and this, in turn, is going to create new needs and opportunities for startups.


The Sharepocalypse

Welcome to the “Sharepocalypse,” a new era of social network insanity.

READ THE REST OF THE ARTICLE HERE

The New Social Media Landscape: A Roadmap

It may look like Google+ is competing with Facebook and Twitter, but I don’t think that is what will happen in the end. I think Google+ is a very different kind of service, and it’s not clear that it can, will, or should replace these other services.

In a series of articles here on my blog, I’ve explained the differences between these services, and what Google+ is really for and what it means for the rest of the social media giants:

  1. Google+ is Really for Sharing Knowledge, Not Social Networking
  2. Should Facebook be Worried About Google+?
  3. Why Twitter’s API Strategy Must Change in a Google+ and Facebook World
  4. Why the Google+ Developer Ecosystem Will be Different from Twitter

The conclusion I draw from all this is that instead of one social network to rule them all, I think it’s more likely that the social media landscape is going to divide into different territories, with each of the major social networks playing a different role.

Here’s how I think this is all going to shake out:

  • Facebook is for social networking
  • LinkedIn is for business networking
  • Google+ is for knowledge networking
  • Twitter is for notifications

They just don’t know it yet.

Here is some more detail on this idea:

  • Facebook is for social networking
    • Facebook is the new social infrastructure for the planet, and Google+ is no match for it. By social, I mean non-professional, personal, friend-to-friend and group communication. There’s a lot more happening on Facebook than this, however: gaming, branding, groups, marketing. But all this other activity depends on the fact that people spend so much time on Facebook, socializing. This is very different from what’s happening on Google+ and Twitter.
  • LinkedIn is for business networking
    • It’s the infrastructure for professional networking in the old-school sense – as in getting a job, finding customers, locating partners, hiring people, doing biz dev and sales, etc. LinkedIn is the most differentiated and focused of all these players: they know what they’re good at and they’re not trying to be all things to all people. Now LinkedIn needs to build more bridges into more third-party applications and services to keep people aware of it and using it.
  • Google+ is for knowledge networking
    • Google+ is an infrastructure for sharing knowledge, not social networking. Knowledge has always been Google’s strength and core focus. Knowledge is not just articles, but the conversations around them, and these conversations are one of Google+’s best features. More importantly, because Google has such a powerful search infrastructure and such a powerful computing architecture, they are in a position to combine Google+ with search, massive analytics, and machine learning to dynamically re-organize and connect both the Web and the real-time Stream. By doing this, Google+ could become a successor to the Blogosphere, and Google search could leap far ahead of its competitors as well.
  • Twitter is for notifications
    • Twitter is really a notifications infrastructure. That’s what they do best, and what they should be focusing on. They are executing on the wrong strategy right now: they are trying to be a media company, but that is not their strength, and others are already far ahead of them at it. But as an infrastructure for short notifications, Twitter has an opportunity to be unique and win, if they focus on that. Twitter has replaced RSS, for better or for worse, as the primary way people and applications share and track these kinds of notifications. Twitter could leverage this position to become the notifications infrastructure for the whole world, and for all of the other networks, even G+ and Facebook, if they play their cards right and stop focusing on competing for eyeballs.

Why Google+ Is Really For Sharing Knowledge, Not Social Networking

Everyone, including possibly even the Google+ team, is currently thinking that Google+ is a Twitter and Facebook competitor. But in fact, I think Google+ is for something entirely different.

Google+ is not really for socializing; it’s for sharing knowledge. That’s what makes it different from other social networks. It supports more flexible access permissions on content, longer form content, threaded conversations, and soon it will integrate deeply with search.

In many ways, Google+ is a potential replacement for the Blogosphere, which always suffered from the lack of an integrated commenting and search infrastructure. Blog posts and the conversations that emerge around them are fragmented around the Web, but in Google+ they are all in one place. More importantly, in Google+ the conversation around each post is something you can watch growing in real-time.

I don’t think all bloggers will move to Google+, because it lacks the power and customization potential of a WordPress or Movable Type, for example, but there’s certainly a chance that a good portion of the lightweight blogging market share may go there.

As such, Google+ may be more competitive with lightweight blogging services like Tumblr and Posterous, and with knowledge sharing and Q&A services like Quora, than with Twitter or Facebook.

But that’s just the beginning. By combining Google+ with Google Search, a new synthesis is possible that could make both the static Web and the real-time Stream better. This could be the next evolution of Google’s “organize the world’s information” mission. And this is nothing like Twitter or Facebook: It’s a totally different value proposition.

What happens when Google connects the power of their search engine and their massive compute capabilities with Google+? Both Google+ and Google search will become smarter. This is the Holy Grail of social search that we’ve all been talking about for years.

Google started out with a mission to “organize the world’s information,” and Google+ provides them with a new way to accomplish this. I think this is actually Google’s core competency, and what could be Google+’s unique role in the ecosystem.

Knowledge is not merely information; it is organized information. Google organizes the Web’s information via a search index, but with the addition of Google+ it can start to use the Stream to organize the Web, and vice-versa.

By connecting Google+ and Google Search, Google can figure out what Web resources are important to whom, by looking at the conversations around them. And it can figure out what conversations are important to whom by looking at the Web content and people they cite.
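To make this mutual-reinforcement idea concrete, here is a toy sketch in the spirit of HITS-style link analysis: conversations and the web resources they cite iteratively score each other. This is purely illustrative; the data and function names are hypothetical, and it is not a description of Google’s actual algorithms.

```typescript
// Toy mutual-reinforcement scoring: a conversation is important if it
// cites important resources; a resource is important if important
// conversations cite it. (Illustrative only; not Google's algorithm.)

type Citations = Map<string, Set<string>>; // conversation id -> cited URLs

function rank(cites: Citations, iterations = 20) {
  const convScore = new Map<string, number>();
  for (const conv of cites.keys()) convScore.set(conv, 1);
  let resScore = new Map<string, number>();

  for (let i = 0; i < iterations; i++) {
    // Resources inherit importance from the conversations that cite them.
    resScore = new Map();
    for (const [conv, urls] of cites) {
      for (const url of urls) {
        resScore.set(url, (resScore.get(url) ?? 0) + (convScore.get(conv) ?? 0));
      }
    }
    // Conversations inherit importance from the resources they cite.
    let norm = 0;
    for (const [conv, urls] of cites) {
      let score = 0;
      for (const url of urls) score += resScore.get(url) ?? 0;
      convScore.set(conv, score);
      norm += score;
    }
    // Normalize so scores stay bounded across iterations.
    for (const [conv, score] of convScore) convScore.set(conv, score / (norm || 1));
  }
  return { convScore, resScore };
}

// Hypothetical sample data: two conversations citing overlapping resources.
const sample: Citations = new Map([
  ["conv-1", new Set(["example.com/a", "example.com/b"])],
  ["conv-2", new Set(["example.com/b"])],
]);
console.log(rank(sample));
```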

Most importantly, by capturing all this content and conversation in an environment where it can be analyzed, Google can data-mine it to learn things like who is interested in what, who is an expert at what, who influences whom, who is influential about what, and which content is relevant to various people or topics.

This will make Google’s graph much richer, and it will also enable Google to begin to do new things with that graph: guiding people to conversations they are interested in, connecting similar or related conversations, helping people get answers more productively, and distributing content to the right people.

The reason Google has the potential to do this better than anyone else is not their Search engine; it’s their backend, which effectively is the world’s largest and most powerful supercomputer.

Google has unmatched computing capacity, and unmatched data to compute on. They are in the best position to do massively distributed computations that combine search analytics, social analytics, and machine learning on both the static Web and the real-time Web (“The Stream”).

With the addition of Google+ to Google, the Web is going to get a lot smarter, and Google’s original mission may evolve from “organize the world’s information” to “organize the world’s intelligence.”

But what’s important to note here is that Google+ is for doing smart things with knowledge, not necessarily fun things. Sure, Google+ can be used to share the same viral videos that one shares in other places too, but what makes Google+ different is the control it gives around sharing, and the discussions that emerge.

Currently, using Google+ requires quite a bit of thought. It’s not easy to figure out. There are many features that are hard to find, don’t quite make sense, or are simply non-obvious. At this stage it is still probably not ready for mainstream consumer use. And so the people who are making the most use of it are early-adopter types. This in turn affects the content that is being shared there: it’s pretty brainy in general.

But even once Google+ irons out its wrinkles, it may never be a replacement for the social fun of Facebook or the utility of Twitter.

Google+ is no match for Facebook at Facebook’s core value proposition: socializing. Facebook is way ahead of everyone on that front. Here’s why Facebook does not have to worry about Google+.

But at the same time, Facebook is unlikely to be able to compete with Google+ for knowledge. Google+ has the advantage of being combined with all the other Google products – especially Search – and the power of the Google supercomputer behind it. Facebook doesn’t have anything equivalent.

Google+ is also no match for Twitter at what Twitter does best: enabling everyone to keep up, via short notifications. In fact, Google+ is very hard to keep up with. Its content streams are full of massive posts that take time to read, and long threads that take up a lot of space on the page. It’s not easy to quickly scan and see what’s going on. And Google+’s notification system, while useful, simply cannot scale to notifying every user of thousands of things a day, at least not in its current form; it would be extremely overwhelming.

So there are very clear distinctions here. Google+ is a very different kind of animal from Facebook and Twitter; each service has certain talents that make it distinct from the others. There is a possible future in which they really don’t compete: they could each play a different but complementary role.

Should Facebook be Worried About Google+?

In previous articles, I’ve written about how Google+ can build a developer ecosystem on Chrome that is different from Twitter’s ecosystem, and how Twitter must change to survive against that. It’s clear that Google+ and Twitter are very different animals.

Now what about Facebook? Should Facebook be worried about Google+? Are Facebook and Google+ really competitors? I don’t think so.

Google+ is not as geeky as Twitter, but it’s still too complicated for most consumers to want to use.

Figuring out how to use Google+, and how to make effective use of it, at this early stage, is like trying to use an old shortwave radio. Actually, it’s like trying to figure out a shortwave radio that is only halfway built. This is not an activity my mom is going to enjoy.

It’s going to be a while before Google+ is ready for primetime consumer use. Facebook is way ahead on that front.

And there’s also the fun factor issue — Facebook has focused on fun: games, pokes, virtual gifts, and all sorts of social silliness that consumers just love.

The lack of play in the Google+ experience is actually a plus, not a minus, for many early users: there’s more signal and less noise there, at least potentially. And this creates a self-selecting use-case: people are using Google+ for sharing ideas and having real conversations (and, as of week two, it turns out, not only about Google+).

As of this writing there is certainly an increase in non-serious content showing up on Google+, but it’s still a drop in the bucket compared to Facebook’s content mix. This could be an early-adopter effect that changes if more mainstream users adopt G+, but currently my instincts tell me G+ content is going to be more serious than fun. I’m not convinced the mainstream consumer audience is going to use G+ for fun.

Google+ is best used for sharing knowledge. This may result in Google+ filling a role that USENET used to play and that the fragmented blogosphere never really managed to fill: a unified knowledge-sharing and conversation medium.

Hopefully the folks at Google+ will realize that the slightly more serious communication that’s happening in the service is a good thing. Instead of trying to change that by introducing more ways to play, they might want to consider celebrating it.

Keep out the silly social games, don’t introduce the fluff. This will preserve Google+ as a higher signal-to-noise communication channel and will make it unique from Facebook.

Hopefully Google+ won’t immediately integrate Zynga, for example, because that would totally ruin their differentiation from Facebook and take them in a direction they have no in-house DNA for: fun and games.

It’s just not likely that the serious engineering and science culture of Google can replicate the lightheartedness of Facebook. And anyway, even if they could make Google+ fun, would anyone want it? After all, they already have Facebook for that.

People are not going to use Facebook for serious conversations – it’s already too late for that. And they’re not going to use Google+ for superpoking. They can already poke each other to death perfectly well in Facebook.

Google+ is different from Facebook. And that’s a good thing for both companies. There may actually be room for both of them in this town.

Why Twitter’s API Strategy Must Change in a Google+ and Facebook World

As a result of the emergence of Google+, Twitter could soon find itself in a tough spot. A large chunk of their core developer base might migrate to Google+ because there is simply more opportunity there.

Why? Well, for starters, it’s really easy to crank out Chrome extensions, and you can market and sell them instantly in the Chrome Web Store to a ginormous captive audience that is many multiples of the size of Twitter’s user base. I’ve written about how Google+ can leverage Chrome to build an ecosystem here.

And if you succeed, your shiny new Google+ feature might even get you bought by Google for a million bucks. What engineer wouldn’t want to spend a few weeks making a feature that could net them a million bucks and a job at Google in a few months?
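To give a sense of how low that barrier is, here is a minimal sketch of a Google+ feature written as a Chrome content script. Everything specific here is hypothetical: the selector, the feature, and the file names. The point is just that one short script plus a small manifest is enough to ship something in the Chrome Web Store.

```typescript
// content-script.ts -- a hypothetical Google+ feature as a Chrome content
// script. Declared in manifest.json roughly like:
//   "content_scripts": [{ "matches": ["https://plus.google.com/*"],
//                         "js": ["content-script.js"] }]

// Annotate each post with a word count. The ".post-content" selector is
// made up for illustration; a real extension would target Google+'s
// actual markup.
function addWordCounts(): void {
  document.querySelectorAll<HTMLElement>(".post-content").forEach((post) => {
    const text = post.textContent ?? "";
    const words = text.trim().split(/\s+/).filter(Boolean).length;
    const badge = document.createElement("span");
    badge.textContent = ` (${words} words)`;
    post.appendChild(badge);
  });
}

addWordCounts();
```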

Compare that to what it’s like to be a Twitter developer today. Twitter has no plugin framework, no app store, no browser, and no OS; they are clamping down on their API terms of use, and even actively going to war against some of their third-party developers. And they don’t have the kind of acquisition budget or appetite that Google has. Twitter has only made a few acquisitions to date.

To make matters worse for Twitter, there’s very little loyalty to Twitter among Twitter developers right now. Mostly there’s fear, because of the recent UberMedia and TweetDeck situation, and Twitter’s recent moves to add their own photo sharing and, soon, their own analytics.

What opportunities are there really for developers on the Twitter platform that Twitter doesn’t actually want for itself? Twitter has suggested that it wants its developers to “move up the value chain,” but to what exactly? How high should they jump? And if they do, will Twitter just pull the rug out from under them when they land?

This kind of FUD may very well drive Twitter’s core third-party app developers over to the seemingly greener and safer pastures of Google+. On Google+, developers can rapidly crank out new features as Chrome extensions; they don’t have to use an API. And this gives them instant marketing to a huge captive market of Chrome users, too.

Now it’s worth noting that being a Google+ Chrome extension developer won’t necessarily be safer than developing on the Twitter API in the long run. But it will seem safer for a while, and that will be enough for many developers to go there.

Like Twitter, Google will be able to cherry-pick the best opportunities on its platform. Any Chrome extension that becomes a big hit on top of Google+ will be either acquired or copied by Google, and since Google owns the means of distribution (Chrome and Google+), there will be no competition for such deals (what buyer would compete with Google for a Chrome extension that Google wanted to own?). But there is at least a 12-to-24-month window for developers to create value and potentially get bought by Google before Google starts competing with them.

Meanwhile, as their developers start moving to Google+, Twitter is likely to continue to focus on being a media company. This could be a fatal mistake.

Twitter simply does not have the reach of Google, and they never will. Google is everywhere. Trying to be a bigger destination than Google is a hopeless battle; Google has already won it. Twitter will never be as big as Google.

What Twitter DOES have — which Google does NOT have (yet) — is a massive installed base of third party apps publishing and subscribing to their message stream API. Assuming Google+ doesn’t come out with an API quickly, and that they drive innovation onto Chrome before they release a full API, there is a window of opportunity for Twitter to beat Google on the API front.

If Twitter focused on building around their real strength, their API, as I have suggested previously, instead of trying to become a media company and destination, they could have a shot at long-term prosperity and differentiation as the messaging infrastructure of the planet. That’s a much bigger play for Twitter than being a media company, and it’s something Google+ is not positioned for. Twitter could win this.

(So why aren’t they doing this? What is Twitter’s management thinking? If you think you know, please comment on this article with your theory.)

Twitter does not have the distribution and platform leverage that Google has, nor the huge installed base that Facebook has. And they have another problem: Twitter is still too geeky for mainstream consumers.

It’s just too hard to learn to use Twitter’s syntax properly. And the 140-character limitation results in all kinds of geeky abbreviations and conventions in the content and social behaviors in the system. Compared to other apps like Google+ and Facebook, which support long messages, richer text, and real threaded discussions, Twitter is going to seem cryptic and retro – like IRC.

No offense to Twitter: they’ve done something amazing. And I love geeks and count myself as one of them. So I totally get and like the geekiness. But it’s not going to work for mainstream consumers in the long term.

Unfortunately, geekiness is hard-wired into Twitter’s DNA. It’s in the syntax of the app, their user experience, and their culture. It’s also in the DNA of the core of their audience. So it’s not something that’s going to be easy to change. But to win the eyeballs war, the consumer war, you just can’t be that geeky.

So either Twitter has to undergo gene therapy to completely change their DNA and become a lot less geeky (unlikely), or they need to embrace their inner geekiness and focus on their API and developers again: Cater to the geeks. Love the geeks. Make other geeks rich.

Is the Twitter/Apple deal the solution? Perhaps Apple could eventually buy Twitter and perform gene therapy on them, transforming them into a more consumer-friendly product company. But if that doesn’t happen (and I doubt it will), then a deal with Apple is probably not enough to transform Twitter into a mainstream consumer product.

The key is that Twitter is not the same kind of animal as Facebook or Google+. Twitter is not a media company; it’s a notification company. It’s ill-suited for creating rich content or building rich conversations, but it’s great for short, one-off notifications; the 140-character limit is actually a good thing when viewed from this perspective.

Instead of trying to be a media company, Twitter should pivot back to fundamentals and focus more on their notifications API. This is what they do best. They should do this soon, and while they are at it, they should encourage third-party clients to build on this API again, instead of discouraging them.

By doing this right, Twitter could become the publish-subscribe messaging architecture for the world, including even for the other messaging networks like Facebook and Google+.

That’s seriously frikkin huge. And it’s unique too: It’s not something that Facebook or Google+ are technically designed or positioned to do. It’s what Twitter does better than anyone else, and it’s really what everyone is using Twitter for anyway.

How Twitter Can Win As an API


As an API-focused company, Twitter could be woven into literally every app and service in the world as the means of publishing and subscribing to notifications of all kinds: notifications between people and people, between people and apps, and even between apps and apps.

If they did this right, people might even use Twitter to keep up with notifications from Facebook, LinkedIn and Google+, as well as every publisher, other apps, and individuals.

Twitter wouldn’t necessarily be where the content is created or where it lives – it would be how everyone got notified of the content. The value is in the API, not the eyeballs.
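Here is a minimal sketch of what that publish-subscribe primitive might look like. The class, channel, and sender names are all hypothetical; this is a sketch of the concept, not Twitter’s actual API. The point is that people, apps, and publishers would all share one publish/subscribe primitive.

```typescript
// A toy publish-subscribe bus illustrating "Twitter as notification
// infrastructure": any sender (person or app) publishes to a channel,
// and any subscriber (person or app) receives. Hypothetical API.

type Handler = (from: string, text: string) => void;

class NotificationBus {
  private subscribers = new Map<string, Set<Handler>>();

  subscribe(channel: string, handler: Handler): void {
    if (!this.subscribers.has(channel)) {
      this.subscribers.set(channel, new Set());
    }
    this.subscribers.get(channel)!.add(handler);
  }

  publish(channel: string, from: string, text: string): void {
    for (const handler of this.subscribers.get(channel) ?? []) {
      handler(from, text);
    }
  }
}

// People, apps, and publishers all use the same primitive:
const bus = new NotificationBus();
bus.subscribe("@alice", (from, text) => console.log(`${from}: ${text}`));
bus.publish("@alice", "calendar-app", "Meeting in 10 minutes");
bus.publish("@alice", "@bob", "Lunch today?");
```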

As a global notification infrastructure, Twitter would not be able to monetize the eyeballs on the content, but they could monetize the notifications themselves: include ads in the free notification streams, and let services pay for a premium stream that carries no Twitter ads.
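As a sketch of how that free-versus-premium split could work mechanically, here is a toy stream wrapper that interleaves ads for free-tier consumers and passes the stream through untouched for paying ones. The names and the ad-frequency parameter are hypothetical, not a real Twitter API.

```typescript
// Toy ad interleaving for a notification stream: free-tier subscribers
// get an ad after every N notifications; premium subscribers get the
// raw stream. (Hypothetical model.)

interface Notification {
  from: string;
  text: string;
  isAd?: boolean;
}

function* withAds(
  stream: Iterable<Notification>,
  ads: Notification[],
  premium: boolean,
  every = 10
): Generator<Notification> {
  let count = 0;
  let adIndex = 0;
  for (const note of stream) {
    yield note;
    count++;
    if (!premium && ads.length > 0 && count % every === 0) {
      // Mark injected items so clients can render them as ads.
      yield { ...ads[adIndex % ads.length], isAd: true };
      adIndex++;
    }
  }
}
```

A premium consumer would call the same wrapper with premium set to true (or skip it entirely), which is the free/premium tiering described in the steps below.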

Here are some steps Twitter could take to make this vision a reality:

  1. Buy Gnip and DataSift, the companies to which they presently (and inexplicably) have handed their entire API business. Twitter should own these companies and be the source for its own data.
  2. Provide a free and premium version of their API and firehose streams. The free versions carry Twitter ads, the premium versions don’t.
  3. Stop trying to own and monetize all the eyeballs on Twitter.com and official Twitter apps. Instead, do a 180 and go back to encouraging third-party developers to build Twitter client apps again. Use these apps to massively increase Twitter’s reach, traction, and monetization. Distribute Twitter into the streams of any apps that use the free API, or make money from any apps that opt out of the ads and pay for an optional premium API.
  4. Sell TweetDeck to UberMedia (or someone else) for $50mm. That would not only be ironic and hilarious, it would be brilliant. The proceeds could go toward enriching their publish-subscribe infrastructure. Twitter should be working on becoming like a TIBCO, but for the entire Internet.

Twitter has to take evasive action to increase their surface area by letting as many apps as possible integrate with their API. They have to spread out, instead of fighting to be a destination. They have to stop cherry-picking their ecosystem and instead enable it. Twitter’s strength is their ecosystem and their massive surface area. Without that, they will be marginalized.

We’re already seeing the beginnings of this marginalization: Google has not renewed its licensing agreement to include Twitter in its real-time search results, and Microsoft appears to be following suit.

Twitter’s best move to counter this is to make sure that Twitter content appears everywhere else, in every app and every website. But they can’t do this by trying to compete with those apps and websites for the same eyeballs. Instead, they should turn all of them into “Twitter clients” and build a massive distributed real-time ad network.

Twitter cannot win as a destination and they are wasting their ammunition trying to do that. Facebook has them boxed in on one side and Google+ has just flanked them on the other. They have to punch through or they will be totally surrounded. But they CAN win as a notifications infrastructure. And that’s their real strength anyway.