The Web Wide World — The Web Spreads Into the Physical World

I have noticed an interesting and important trend of late. The Web is starting to spread outside of what we think of as “the Web” and into “the World.” This trend is exemplified by many data points. For example:

  • The Web on mobile devices like the iPhone. Finally it’s really usable on a phone. Now it goes everywhere with us. Soon we will track our own paths on our phones as we move around, creating a virtual map of our favorite places and routes.
  • Location aware applications and services, such as Google Maps Mobile. They link physical places to virtual places on the Web.
  • The Web in cars. Auto navigation units will soon be Web-enabled.
  • Next-generation digital cameras are Wi-Fi-enabled, linking directly to on-camera GPS and to photo sharing and storage services. Will cloud-centric wireless cameras with zero local storage come next?
  • Web picture frames such as Ceiva bring the Web into your grandma’s living room.
  • The Web in restaurants and stores. Your server gets your reservation on the Web from OpenTable. In-store kiosks connect to the Web to help you shop, or to bring up your online account and shopping cart.
  • The Web in your garden. GardenGro’s sensor connects your garden to the Web, in order to figure out what to plant and how to cultivate it in your actual location.
  • Everything becomes trackable with RFID. Physical objects have virtual locations.
  • Sensors are connecting to the Web and popping up everywhere.
  • Plastic Logic’s portable plastic reading device. The pad of paper, version 2.0.
  • The beginnings of an Internet of Things — where every thing has an address on the Web.
  • The rise of Lifestreaming, in which everything one does (or much of it) is captured to the Web and even broadcast.
  • Progress on Augmented Reality — instead of the physical world going into virtual worlds, the virtual world is going to flow into the physical world.

These are just a few data points. There are many, many more. The trendline is clear to me.

Things are not going to turn out the way we thought. Instead of everything going digital — a future in which we all live as avatars in cyberspace — the digital world is going to invade the physical world. We already are the avatars, and the physical world is becoming cyberspace. The idea that cyberspace is some other place is going to dissolve because everything will be part of the Web. The digital world is going physical.

When this happens — and it will happen soon, perhaps within 20 years or less — the notion of “the Web” will become just a quaint, antique concept from the early days when the Web still lived in a box. Nobody will think about “going on the Web” or “going online” because they will never NOT be on the Web, they will always be online.

Think about that. A world in which every physical object, everything we do, and eventually perhaps our every thought and action is recorded, augmented, and possibly shared. What will the world be like when it’s all connected? When all our bodies and brains are connected together — when even our physical spaces, furniture, products, tools, and even our natural environments, are all online? Beyond just a Global Brain, we are really building a Global Body.

The World is becoming the Web. The “Web Wide World” is coming and is going to be a big theme of the next 20 years.

A New Economic Framework for Content in Web 3.0

(FIRST DRAFT — A Work in Progress. Comments Welcome)

——

Print media publications of all kinds — newspapers and magazines — are dying out as the Web and online advertising take their place. Increasing amounts of what used to be premium content (via paid wire services and databases, for example) are now available for free on the Web.

At the same time the rise of blogs and wikis is giving individuals and groups of people effective ways to publish and distribute content to global audiences. As the major publishing brands decline in audience, upstart online brands are rapidly gaining eyeballs. And now, in the middle of this chaos, social networks like Twitter and Facebook are changing the way content is discovered, further chipping away at the value of the traditional leading media brands.

Major newspapers are closing, journalists, writers and editors are being fired in droves, and there is a sense among those who work in print media that it is the end of an era. Print as a medium is in the process of being superseded by online media. As this happens the content and advertising industries that have formed around print media will undergo radical disruption and change as well. As we shift to an online-media-centric world the economics of content and advertising must and will adapt.

But what will the new model be like? How will the economics of content publishing and distribution be different in the near future of the Web?

In this brief article I will propose the beginnings of a possible new economic framework for Web 3.0 and beyond — one which could revitalize the media business and help it transition to the online world.

I’ll call this new economic model “Content 3.0” or “C3” (to coincide with Web 3.0, the third decade of the Web, when media goes completely online).

In the Content 3.0 (C3) media economy it all begins with pieces of original content. Each piece of content has a corresponding block of “stock” available to be owned by various kinds of investors. The principal classes of stock are:

  • Creators: Writers, journalists, photographers, artists, designers, editors, and their representatives, such as agents.
  • Distributors: Publishers or other types of distributors who aggregate an audience for content and monetize access to it, along with their agents, if any.
  • Participants: Audience members or customers who consume the content (for free or for a fee) and rate, annotate, discuss, and share it. Participants are not just any consumers of the content; they are consumers who choose to invest by earning or purchasing shares in it.

Each piece of content has a certain number of shares of virtual stock, just like a corporation.

When a piece of content is first created, 100% of its stock is owned by the Creators. The Creators may then sell some of their shares to Distributors in order to bring the content to market.

Distributors bring Participants and revenues to the content, creating a market for it. To attract Participants, Distributors pay to market the content. To attract revenues, Distributors invest in sales and other processes to attract and/or integrate with various monetization partners (such as advertisers, ad networks, affiliate networks, etc.).

Distributors frequently buy and sell shares in content with other Distributors, with some focusing on debut-only content portfolios and others on portfolios of reference and archival material. This aftermarket in content shares is facilitated by various brokers and agents.

Participants may also invest in shares of content, by helping to spread the content (and thereby earning shares) or by buying shares from the other shareholders (Creators and Distributors and any other Participants who hold stock). Participants may also buy and sell shares in content in the same aftermarket that Distributors participate in.

Any profits from monetization of a piece of content are shared as dividends, pro-rata, among the shareholders.
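To make the mechanics concrete, here is a minimal sketch in Python of how a per-content share ledger with pro-rata dividends might work. The class and field names are hypothetical illustrations of the model described above, not a reference to any existing system.

```python
from collections import defaultdict

class ContentShares:
    """Hypothetical share ledger for a single piece of content in the C3 model."""

    def __init__(self, creator, total_shares=1000):
        # At creation, the Creator owns 100% of the stock.
        self.total_shares = total_shares
        self.holdings = defaultdict(int)
        self.holdings[creator] = total_shares

    def transfer(self, seller, buyer, shares):
        """Move shares between holders: Creator to Distributor, shares earned by a Participant, etc."""
        if self.holdings[seller] < shares:
            raise ValueError("seller does not hold enough shares")
        self.holdings[seller] -= shares
        self.holdings[buyer] += shares

    def pay_dividends(self, profit):
        """Split a profit pro-rata among all current shareholders."""
        return {holder: profit * shares / self.total_shares
                for holder, shares in self.holdings.items() if shares > 0}

# Example: a Creator sells 40% to a Distributor; an early Participant earns 5%.
article = ContentShares("alice_the_writer")
article.transfer("alice_the_writer", "acme_publishing", 400)
article.transfer("alice_the_writer", "early_fan_bob", 50)
print(article.pay_dividends(100.0))
# -> {'alice_the_writer': 55.0, 'acme_publishing': 40.0, 'early_fan_bob': 5.0}
```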

Each piece of content functions like a public company stock in a virtual stock market. This virtual content stock market, like other public markets in securities, is regulated by the SEC or an equivalent regulatory body.

Once a framework like this is in place, complete with the necessary micropayment and legal systems to make it work, the new content economy can really take off. It is a much more loosely coupled and equitable world — one that creates strong entrepreneurial opportunities for professional content Creators, while still providing a solid ROI for content Distributors who team up with them. Participants can also take part by finding hot content early and investing, in order to reap a share of the profits, or to flip their shares to someone else before the price goes down. It works just like the stock market.

The final major element of this picture is that there may not be just one stock market for buying and selling shares of content items. Instead there may be many. Each of these stock markets will be the equivalent of the media empires of today. Various content Creators, Distributors and Participants will participate in these marketplaces in order to transact around the shares of particular pieces of content that are listed in them. It may also be possible for an item of content to list across more than one of these markets at the same time.

While a system like this would face numerous hurdles to actually become real and get official legal status, I believe it could be where we are ultimately headed. It may take 20 or 30 years to fully emerge, however. I believe there could be compelling business opportunities in forming new businesses that enable this Content 3.0 ecosystem.

Vote for My Panels & Twine at SXSWi 2010

The panel picker for SXSWi went live this morning, and Twine has proposed several submissions. Browsing through the huge list of proposals (over 2200), it’s clear that the Semantic Web will be a popular topic at this year’s conference.

With “Beyond Algorithms: Search and the Semantic Web,” we are planning to offer both an overview of the current state of the technology, as well as a careful look at what needs to be addressed for semantic search to finally reach its potential. We think that semantic search needs to be present, personalized, and precise. What are the catalysts? What are the roadblocks?

At last week’s SES Conference in San Jose, the interactions on and around our “Don’t Call it a Comeback: Semantic Technology and Search” panel showed just how complex these issues are, so we anticipate a lively and wide-ranging discussion for the panel at SXSWi 2010.

We have also proposed a panel on interfacing content streams as real-time interaction becomes the Web’s dominant paradigm.

As we showcased with Twine’s new interface visualization this summer, we feel there are better ways to organize and interact with the stream, and our panel “Islands in the Stream: Interfacing Real-time Content” will address user experience and interface design for the real-time Web from a variety of perspectives.

We also want to note that Brendan Kessler, the Founder and CEO of ChallengePost, has submitted a panel on “Why Challenge Prizes are the Future of Innovation”.

My $10K challenge to design unblockable, anonymous, and encrypted mobile internet access is still open, and I will be joining the discussion on the panel, as well.

Thanks for your consideration, and please help us bring these ideas to SXSWi by voting for the panels!

Welcome to the Stream – Next Phase of the Web

May 8, 2009

Welcome to The Stream

The Internet began evolving many decades before the Web emerged. And while today many people think of the Internet and the Web as one and the same, in fact they are different. The Web lives on top of the Internet’s infrastructure much like software and documents live on top of an operating system on a computer.

And just as the Web once emerged on top of the Internet, now something new is emerging on top of the Web: I call this the Stream. The Stream is the next phase of the Internet’s evolution. It’s what comes after, or on top of, the Web we’ve all been building and using.

Perhaps the best and most current example of the Stream is the rise of Twitter, Facebook and other microblogging tools. These services are visibly streamlike; their user interfaces are literally streams of ideas, thinking and conversation. In reaction to microblogs we are also starting to see the birth of new tools to manage and interact with these streams, and to help understand, search, and follow the trends that are rippling across them. Just as the Web is not any one particular site or service, the Stream is not any one site or service — it’s the collective movement that is taking place across them all.

To meet the challenges and opportunities of the Stream, a new ecosystem of services is rapidly emerging: stream publishers, stream syndication tools, stream aggregators, stream readers, stream filters, real-time stream search engines, stream analytics engines, stream advertising networks, and stream portals. All of these new services mark the beginning of the era of the Stream.

Web History

The original Tim Berners-Lee proposal that started the Web was published in March 1989. The first two decades of the Web (Web 1.0 from 1989 – 1999, and Web 2.0 from 1999 – 2009) were focused on the development of the Web itself. Web 3.0 (2009 – 2019), the third decade of the Web, officially began in March of this year and will be focused around the Stream.

  • In the 1990’s with the advent of HTTP and HTML, the metaphor of “the Web” was born and concepts of webs and sites captured our imaginations.
  • In the early 2000’s the focus shifted to graphs such as social networks and the beginnings of the Semantic Web.
  • Now, in the coming third decade, the focus is shifting to the Stream and with it, stream oriented metaphors of flows, currents, and ripples.

The Web has always been a stream. In fact it has been a stream of streams. Each site can be viewed as a stream of pages developing over time. Each page can be viewed as a stream of words that changes whenever it is edited. Branches of sites can also be viewed as streams of pages developing in various directions.

But with the advent of blogs, feeds, and microblogs, the streamlike nature of the Web is becoming more readily visible, because these newer services are more 1-dimensional and conversational than earlier forms of websites, and they update far more frequently.

Defining the Stream

Just as the Web is formed of sites, pages and links, the Stream is formed of streams.

Streams are rapidly changing sequences of information around a topic. They may be microblogs, hashtags, feeds, multimedia services, or even data streams via APIs.

The key is that streams change often. This change is an important part of the value they provide (unlike static websites, which do not necessarily need to change in order to provide value). In addition, it is important to note that streams have URIs — they are addressable entities.

So what defines a stream versus an ordinary website?

  1. Change. Change is the key reason why a stream is valuable. That is not always so with a website.  Websites do not have to change at all to be valuable — they could for example just be static but comprehensive reference library collections. But streams on the other hand change very frequently, and it is this constant change that is their main point.
  2. Interface Independence.
    Streams are streams of data, and they can be fully accessed and consumed independently of any particular user-interface — via syndication of their data into various tools. Websites on the other hand, are only accessible via their user-interfaces. In the era of the Web the provider controlled the interface. In the new era of the stream, the consumer controls the interface.
  3. Conversation is king.
    An interesting and important point is that streams are linked together not by hotlinks, but by acts of conversation — for example, replies, “retweets,” comments and ratings, and “follows.” In the era of the Web the hotlink was king. But in the era of the Stream conversation is king.

In terms of structure, streams are composed of agents, messages and interactions (a rough sketch in code follows the list below):

  • Agents are people as well as software apps that publish to streams.
  • Messages are publications by agents to streams — for example, short posts to their microblogs.
  • Interactions are communication acts, such as sending a direct message or a reply, or quoting someone (“retweeting”), that connect and transmit messages between agents.
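As a rough illustration, here is how those three structural elements might be modeled in Python. The class names and fields are hypothetical, meant only to make the definitions above concrete.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Agent:
    """A person or software app that publishes to streams."""
    handle: str

@dataclass
class Message:
    """A publication by an agent to a stream, e.g. a short microblog post."""
    author: Agent
    text: str
    posted_at: datetime

@dataclass
class Interaction:
    """A communication act (reply, retweet, follow) that connects agents and messages."""
    kind: str                          # e.g. "reply", "retweet", "follow"
    source: Agent
    target: Agent
    about: Optional[Message] = None    # the message being replied to or retweeted, if any

@dataclass
class Stream:
    """An addressable, rapidly changing sequence of messages around a topic."""
    uri: str                           # streams have URIs; they are addressable entities
    messages: List[Message] = field(default_factory=list)

    def publish(self, message: Message) -> None:
        self.messages.append(message)
```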

The Global Mind

If the Internet is our collective nervous system, and the Web is our collective brain, then the Stream is our collective mind. The nervous system and the brain are like the underlying hardware and software, but the mind is what the system is actually thinking in real-time. These three layers are interconnected, yet distinctly different, aspects of our emerging and increasingly awakened planetary intelligence.

The Stream is what the Web is thinking and doing, right now. It’s our collective stream of consciousness.

The Stream is the dynamic activity of the Web, unfolding over time. It is the conversations, the live streams of audio and video, the changes to Web sites that are happening, the ideas and trends — the memes — that are rippling across millions of Web pages, applications, and human minds.

The Now is Getting Shorter

The Web is changing faster than ever, and as this happens, it’s becoming more fluid. Sites no longer change in weeks or days, but in hours, minutes or even seconds. If we are offline even for a few minutes we may risk falling behind, or even missing something absolutely critical. The transition from a slow Web to a fast-moving Stream is happening quickly. And as this happens we are shifting our attention from the past to the present, and our “now” is getting shorter.

The era of the Web was mostly about the past — pages that were published months, weeks, days or at least hours before we looked for them. Search engines indexed the past for us to make it accessible: on the Web we are all used to searching Google and then looking at pages from the recent past and even farther back in the past. But in the era of the Stream, everything is shifting to the present — we can see new posts as they appear and conversations emerge around them, live, while we watch.

Yet as the pace of the Stream quickens, what we think of as “now” gets shorter. Instead of now being a day, it is an hour, or a few minutes. The unit of change is getting more granular.

For example, if you monitor the public timeline, or even just your friends’ timeline in Twitter or Facebook, you see that things quickly flow out of view, into the past. Our attention is mainly focused on right now: the last few minutes or hours. Anything that was posted before this period of time is “out of sight, out of mind.”

The Stream is a world of even shorter attention spans, online viral sensations, instant fame, sudden trends, and intense volatility. It is also a world of extremely short-term conversations and thinking.

This is the world we may be entering. It is both the great challenge, and the great opportunity of the coming decade of the Web.

How Will We Cope With the Stream?

The Web has always been a stream — it has been happening in real-time since it started, but it was slower — pages changed less frequently, new things were published less often, trends developed less quickly. Today it is getting so much faster, and as this happens it’s feeding back on itself and we’re feeding into it, amplifying it even more.

Things have also changed qualitatively in recent months. The streamlike aspects of the Web have really moved into the foreground of our mainstream cultural conversation. Everyone is suddenly talking about Facebook and Twitter. Celebrities. Talk show hosts. Parents. Teens.

And suddenly we’re all finding ourselves glued to various activity streams, microblogging manically and squinting to catch fleeting references to things we care about as they rapidly flow by and out of view. The Stream has arrived.

But how can we all keep up with this ever-growing onslaught of information effectively? Will we each be knocked over by our own personal firehose, or will tools emerge to help us filter our streams down to manageable levels? And if we’re already finding that we have too many streams today, and must jump between them ever more often, how will we ever be able to function with 10X more streams in a few years?

Human attention is a tremendous bottleneck in the world of the Stream. We can only attend to one thing, or at most a few things, at once. As information comes at us from various sources, we have to jump from one item to the next. We cannot absorb it all at once. This fundamental barrier may be overcome with technology in the future, but for the next decade at least it will still be a key obstacle.

We can follow many streams, but only one-item-at-a-time; and this requires rapidly shifting our focus from one article to another and from one stream to another. And there’s no great alternative: Cramming all our separate streams into one merged activity stream quickly gets too noisy and overwhelming to use.

The ability to view different streams for different contexts is very important and enables us to filter and focus our attention effectively. As a result, it’s unlikely there will be a single activity stream — we’ll have many, many streams. And we’ll have to find ways to cope with this reality.

Streams may be unidirectional or bidirectional. Some streams are more like “feeds” that go from content providers to content consumers. Other streams are more like conversations or channels in which anyone can be both a provider and a consumer of content.

As streams become a primary mode of content distribution and communication, they will increasingly be more conversational and less like feeds. And this is important — because to participate in a feed you can be passive, you don’t have to be present synchronously.  But to participate in a conversation you have to be present and synchronous — you have to be there, while it happens, or you may miss out on it entirely.

A Stream of Challenges and Opportunities

We are going to need new kinds of tools for managing and participating in streams, and we are already seeing the emergence of some of them: for example, Twitter clients like Tweetdeck, RSS feed readers, and activity stream tracking tools like Facebook and Friendfeed. There are also new tools for filtering our streams around interests, for example Twine.com (* Disclosure: the author of this article is a principal in Twine.com). Real-time search tools are also emerging to provide quick ways to scan the Stream as a whole. And trend discovery tools are helping us to see what’s hot in real-time.

One of the most difficult challenges will be how to know what to pay attention to in the Stream: Information and conversation flow by so quickly that we can barely keep up with the present, let alone the past. How will we know what to focus on, what we just have to read, and what to ignore or perhaps read later?

Recently many sites have emerged that attempt to show what is trending up in real-time, for example by measuring how many retweets various URLs are getting in Twitter. But these services only show the huge and most popular trends. What about all the important stuff that’s not trending up massively? Will people even notice things that are not widely RT’d or “liked”? Does popularity equal importance of content?

Certainly one measure of the value of an item in the Stream is social popularity. Another measure is how relevant it is to a topic, or even more importantly, to our own personal and unique interests. To really cope with the Stream we will need ways to filter that combine both these different approaches. Furthermore as our context shifts throughout the day (for example from work to various projects or clients to shopping to health to entertainment, to family etc) we need tools that can adapt to filter the Stream differently based on what we now care about.
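As a sketch of what such combined filtering might look like, the hypothetical scoring function below blends a popularity signal with relevance to the user's current set of interests. The field names and weights are illustrative assumptions, not an existing API.

```python
import math

def score_item(item, context_interests, w_popularity=0.4, w_relevance=0.6):
    """Blend social popularity with relevance to the user's current context.

    item: dict with 'retweets' (int) and 'topics' (set of strings); hypothetical fields.
    context_interests: the set of topics the user cares about right now
    (work, a client, shopping, health, entertainment, family, ...).
    """
    popularity = math.log1p(item["retweets"])          # dampen huge viral spikes
    overlap = len(item["topics"] & context_interests)
    relevance = overlap / max(len(item["topics"]), 1)  # share of the item's topics we care about
    return w_popularity * popularity + w_relevance * relevance

def filter_stream(items, context_interests, top_n=20):
    """Rank for the user's *current* context rather than one global ranking."""
    return sorted(items, key=lambda i: score_item(i, context_interests), reverse=True)[:top_n]
```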

A Stream-oriented Internet also offers new opportunities for monetization. For example, new ad distribution networks could form to enable advertisers to buy impressions in near-real time across URLs that are trending up in the Stream, or within various slices of it. For example, an advertiser could distribute their ad across dozens of pages that are getting heavily retweeted right now. As those pages begin to decline in RTs per minute, the ads might begin to move over to different URLs that are starting to gain.
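A toy sketch of that reallocation logic follows, under the assumption that some real-time attention feed supplies retweets-per-minute and a trend direction for each URL (both hypothetical inputs, not a real ad-network API):

```python
def reallocate_ad_placements(campaign_budget, url_stats, min_rt_per_min=5.0):
    """Shift an advertiser's impressions toward URLs whose retweet velocity is rising.

    url_stats: mapping of URL -> {'rt_per_min': float, 'trend': float}, a hypothetical
    feed from a real-time attention-measurement service. Budget is split in proportion
    to current retweet velocity, and URLs that have fallen below a floor are dropped.
    """
    rising = {url: s for url, s in url_stats.items()
              if s["rt_per_min"] >= min_rt_per_min and s["trend"] > 0}
    total_velocity = sum(s["rt_per_min"] for s in rising.values()) or 1.0
    return {url: campaign_budget * s["rt_per_min"] / total_velocity
            for url, s in rising.items()}
```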

Ad networks that do a good job of measuring real-time attention trends may be able to capitalize on these trends faster and provide better results to advertisers. For example, an advertiser that is able to detect and immediately jump on the hot new meme of the day, could get their ad in front of the leading influencers they want to reach, almost instantly. And this could translate to sudden gains in awareness and branding.

The emergence of the Stream is an interesting paradigm shift that may turn out to characterize the next evolution of the Web, the coming third decade of the Web’s development. Even though the underlying data model may be increasingly like a graph, or even a semantic graph, the user experience will be increasingly stream-oriented.

Whether Twitter, or some other app, the Web is becoming increasingly streamlike. How will we filter this stream? How will we cope? Whoever can solve these problems first and best is probably going to get rich.

Other Articles on This Topic

http://www.techmeme.com/090517/p6#a090517p6

http://www.techcrunch.com/2009/05/17/jump-into-the-stream/

http://www.techcrunch.com/2009/02/15/mining-the-thought-stream/

Can We Design Better Communities?

(DRAFT 2. A Work-In-Progress)

The Problem: Our Communities are Failing

I’ve been thinking about community lately. There is a great need for a new and better model for communities in the world today.

Our present communities are not working, and most are breaking down or stagnating. Cities are experiencing urbanization and a host of ensuing social and economic challenges. Meanwhile the movement towards cities has drained people — particularly young professionals — away from rural communities, causing them to stagnate and decline.

Local economies have been challenged by national and global economic integration — from the outsourcing of jobs to other places, to giant retail chains such as Walmart swooping in and driving out local businesses.

From giant megacities and multi-city urban sprawls, to inner city neighborhoods, to suburban bedroom communities, and rural towns and villages, the pain is being felt everywhere and at all levels.

Our current models for community don’t scale, they don’t work anymore, and they don’t fit the kind of world we are living in today. And why should they? After all, they were designed a long time ago for a very different world.

At the same time there are increasing numbers of singles or couples without children, and even families and neighborhoods that are breaking down as cities get larger.

The need for community is growing not declining — especially as existing communities fail and no other alternatives take their place. Loneliness, social isolation, and social fragmentation are huge and growing problems — they lead to crime, suicide, mental illness, lack of productivity, moral decay, civil unrest, and just about every other social and economic problem there is.

The need for an updated and redesigned model for community is increasingly important to all of us.

Intentional Communities

In particular, I am thinking about intentional communities — communities in which people live geographically near one another, and participate in community together, by choice. They may live together or not, dine together or not, work together or not, worship together or not — but at least they need to live within some limit of proximity to one another and participate in community together. These are the minimum requirements.

But is there a model that works? Or is it time to design a new model that better fits the time and place in which we live?

Is this simply a design problem that we can solve by adopting the right model, or is there something about human nature that makes it impossible to succeed no matter what model we apply?

I am an optimist and I don’t think human nature prevents healthy communities from forming and being sustainable. I think it’s a design problem. I think this problem can (and must) be solved with a set of design principles that work better than the ones we’ve come up with so far. This would be a great problem to solve. It could even potentially improve the lives of billions of people.

Models of Intentional Community

Community is extremely valuable and important. We are social beings. And communities enable levels of support and collaboration, economic growth, resilience, and perhaps personal growth, that individuals or families cannot achieve on their own.

However, do intentional communities work? What examples can we look at and what can we glean from them about what worked and what didn’t?

All of the cities and towns in the world started as intentional communities but today many seem to have lost their way as they got larger or were absorbed into larger communities.

As for smaller intentional communities — recent decades are littered with all kinds of spectacular failures.

The communes and experimental communities of the 1960’s and 1970’s have mostly fallen apart.

Spiritual communities seem to either tend toward becoming personality cults that are highly prone to tyranny and corruption, or they too seem to fall apart eventually.

There have been so many communities around various gurus, philosophers, or cult figures, but almost universally they have become cults or have broken apart.

Human nature is hard to wrangle without strong leadership, yet strong leadership and the power it entails leads inevitably to ego and corruption.

At least some ashrams in India seem to be working well, although their internal dynamics are usually centered around a single guru or leadership group — and while there may be a strong social agreement within these communities, this is not a model of community that will work for everyone. And in fact, only in extremely rare cases are there gurus who are actually selfless enough to hold that position without abusing it.

Other kinds of religious communities are equally prone to problems — though perhaps some, such as the Quakers, Shakers, and Amish, have solved this; I am not sure. If they were so successful, why are there so few of them?

Temporary communities, such as Burning Man, are another type of intentional community. They seem to work quite well, but only for limited periods of time — they would face the same problems as all other communities if they became institutionalized or tried not to be temporary.

Educational communities, such as university towns and campuses, do appear to work in many cases. They combine both an ongoing community (tenured faculty, staff and townspeople) and temporary communities (seasonal student and faculty residents).

Economic communes — such as the communes in Soviet-era Russia — were prone to corruption, and failed as economic experiments. In Soviet Russia “some were more equal than others,” and that ultimately led to corruption and tyranny.

Political-economic communities such as the neighborhood groups in Maoist China only worked because they were firmly, even brutally, controlled from the central government. They were not exactly voluntary intentional communities.

I don’t know enough about the Israeli Kibbutzim experiments, but they at least seem to be continuing, although I am not sure how well they function — I admit my ignorance on that topic.

One type of intentional community that does seem to work is the caregiving community — assisted living communities, nursing homes, halfway houses, etc. — but perhaps they seem to work only because their members don’t remain very long.

Why Aren’t There More Intentional Communities?

So here is my question: Do intentional communities work? And if they work so well, why aren’t there more of them? Or are they flourishing and multiplying under the radar?

Is there a model (or are there models) for intentional community that have proven long-term success? Where are the examples?

Is the fact that there are not more intentional communities emerging and thriving, evidence that intentional communities just don’t work or have stopped replicating or evolving? Or is it evidence that the communities we already live in work well enough, even though they are no longer intentional for most of us?

I don’t think our present-day communities work well enough, nor are they very healthy or rewarding to their participants. I do believe there is the possibility, and even the opportunity, to come up with a better model — one which works so well that it attracts people, grows and self-replicates around the world rapidly. But I don’t yet know what that new model is.

Design Principles

To design the next-evolution of intentional community, perhaps we can start with a set of design principles gleaned from what we have learned from existing communities?

This set of design principles should be selected to be practical for the world we live in today — a world of rapid transit, economic and social mobility, urban sprawls, cultural and ethnic diversity, cheap air travel, declining birth rates, the 24-7 work week, the Internet, and the globally interdependent economy.

In thinking about this further there are a few key “design principles” which seem to be necessary to make a successful, sustainable, healthy community.

This is not an exhaustive list, but it is what we have thought of so far:

Shared intention.
There has to be a common reason for the group of people to be together. The participants each have to share a common intention to form and participate in a community around common themes and purposes together.

Shared contribution. The participants have to each contribute in various ways to the community as part of their membership.

Shared governance.
The participants each have a role to play in the process of decision making, policy formation, dispute resolution, and operations of the community.

Shared boundaries. There are shared, mutually agreed upon and mutually enforced rules.

Freedom to leave. Anyone can leave the community at any time without pressure to remain.

Freedom of choice.
While in the community people are free to make choices about their roles and participation in the community, within the community’s boundaries and governance process. This freedom of choice also includes the freedom to opt out of any role or rule, but that might have the consequence of voluntarily recusing oneself from further participation in the community.

Freedom of expression. The ability for community members to freely and fearlessly express their opinions within the community is an essential element of healthy communities. Systems need to be designed to support and channel this activity. If it is restrained it seeks out other channels anyway (subversion, revolution, etc.). By not restraining expression, but instead designing a community process that authentically engages members in conversation with one another, the community can be more self-aware, and creativity and innovation can flow more freely.

Representative democratic leadership. The leadership is either by consensus and includes everyone equally, or there is a democratic representative process of electing leaders and making decisions.

Community mobility. This is an interesting topic. In the world today, each person may have different sets of interests and purposes, and they are not all compatible. It may be necessary or desirable to be a member of different communities in different places, times of the year, or periods of one’s life. It should be possible to be in more than one community, or to rotate through communities, or to change communities as one’s interests, goals, needs and priorities shift over time — so long as one participates in each community fully while they are there. The concept of timesharing in various communities, or what one friend calls “colonies,” is interesting. One might be a member of different colonies — one for their religious interests, one for social kinship, one for a hobby, one for recreation and vacation, etc. These might be in different places and have different members, and one’s role and level of participation might be different in each one. Rather than living in only one particular community, perhaps we need a model where there is more mobility.

Size limitations. One thing I would suggest is that communities work better when they are smaller. The reason for this is that once communities reach a size where each member no longer can maintain a personal relationship with each other member, they stop working and begin to fragment into subgroups. So perhaps limiting the size of a community is a good idea. Or alternatively, when a community reaches a certain size it spawns a new separate community where further growth can happen and all new members go there. In fact, you could even see two communities spawning a new “child” community together to absorb their growth.

Proximity. Communities don’t require that people live near each other — they can function non-locally, for example online. However, the kind of intentional communities I am interested in here are ones where people do live together or near one another, at least part of the time. For this kind of community people need to live and/or dine and/or work together on a periodic, if not frequent, basis. An eating co-op in a metropolitan area is an example — at least if everyone has to live within a certain distance, eat together a few times a week, and work a few hours in the co-op per month. A food co-op, such as a co-op grocery store, is another example.

Shared Economic Participation. For communities to function there needs to be a form of common currency (either created by the community or from a larger economy the community is situated within), and there should be a form of equitable sharing of collective costs and profits among the community members. There are different ways to distribute the wealth — everyone can be equal no matter what, or reward can be proportional to role, or reward can be proportional to level of contribution, etc. What economic model works best in the long term, for both creating sustainability and growth, for maintaining social order and social justice, and for preventing corruption?

Agility. Communities must be designed to change in order to adapt to new environmental, economic and social realities. Communities that are too rigid in structure or process, or even location, are like species of animals that are unable to continue evolving — and that usually leads to extinction. Part of being agile is being open to new ideas and opportunities. Agility is not just the ability to recognize and react to emerging threats, it is the ability to recognize and react to emerging opportunities as well.

Resilience. Communities must be designed to be resilient — challenges and even damage and setbacks are inevitable. They can be minimized and mitigated, but they will still happen to various degrees. Therefore the design should not assume they can be prevented entirely, but rather should plan for the ability to heal and eventually restore the community as effectively as possible when they do occur.

Diversity. There are many types of diversity: diversity of opinion, ethnic diversity, age group diversity, religious diversity. Not all communities need to support all kinds of diversity; however, it is probably safe to say that for a community to be healthy it must at least support diversity of beliefs and opinions among the membership. No matter what selection criteria are used, there must still be freedom of thought, belief, and expression within that group. Communities must be designed to support this diversity, and even encourage it. They also must be designed to manage and process the conversations, conflicts, and changes that diversity brings about. Diversity is a key ingredient that powers growth, agility, and resilience. In biology diversity is essential to species survival — mutations are key to evolution. Communities must be designed to mutate, and to intelligently filter in or out those mutations that help or harm the community. Processes that encourage and process diversity are essential for this to happen.

Can Twitter Survive What is About to Happen to It?

I am worried about Twitter. I love it the way it is today. But it’s about to change big time, and I wonder whether it can survive the transition.

Twitter is still relatively small in terms of users, and most of the content is still being added by people. But not for long. Two things are beginning to happen that will change Twitter massively:

  1. Mainstream Adoption. Tens of millions of new users are going to flood into the service. It is going to fill up with mainstream consumers. Many of them won’t have a clue how to use Twitter.
  2. Notifications Galore. Every service on the Web is going to rush to pump notifications and invites into Twitter.

Twitter reminds me of CB radio — and that is a double-edged blessing. In Twitter the “radio frequencies” are people and hashtags. If you post to your Twitter account, or do an @reply to someone else, you are broadcasting to all the followers of that account. Similarly, if you tweet something and add hashtags to it, you are broadcasting that to everyone who follows those hashtags.

This reminds me of something I found out about in New York City a few years back. If you have ever been in a taxi in NYC you may have noticed that your driver was chatting on the radio with other drivers — not the taxi dispatch radio, but a second radio that many of them have in their cabs. It turns out the taxi drivers were tuned into a short range radio frequency for chatting with each other — essentially a pirate CB radio channel.

This channel was full of taxi driver banter in various languages and seemed to be quite active. But there was a problem. Every five minutes or so, the normal taxi chatter would be punctuated by someone shouting insults at all the taxi drivers.

When I asked my driver about this he said, “Yes, that is very annoying. Some guy has a high powered radio somewhere in Manhattan and he sits there all day on this channel and just shouts insults at us.” This is the problem that Twitter may soon face. Open channels are great because they are open. They can also become awful, because they are open.

The fact that Twitter has open channels for communication is great. But these channels are fragile and are at risk from several kinds of overload:

  • Hypertweeting. Some Twitter users tweet legitimately, but far too much. Or the content they tweet is just inane. In doing so they market themselves and dominate everyone’s attention with their presence.
  • Hashtag Spam. For example, an advertiser could easily pump out tweets that market their products, and simply attach popular hashtags to them, thus spamming those “channels” with ads. Similarly, clueless users could do the same thing.
  • @reply Spam.
    Another way that spammers could create annoyances in Twitter is by doing @replies to people, with ads for products, or simply to make trouble.
  • Twitter Chains. It is easy to package a highly contagious meme as a tweet that spreads linearly or exponentially. Some variations of this are highly self-replicating and can quickly spread to millions of people. There are various ways to design such memes to spread exponentially and across multi-level relationships with extreme virality. Multi-level marketers and others could take advantage of this to create havoc, and even potentially flood Twitter with multi-level messages to the point of crashing it.
  • Notification Overload. Another issue is the rise of Twitter bots from various services, whether benign in nature or deliberately spammy:
    • It won’t be long before every social network starts pumping updates into Twitter.
    • News and content sites are starting to pump updates into Twitter for every article they publish.
    • Games and MMORPGs are starting to pump notifications into Twitter for things that take place in their worlds (e.g. player X just defeated player Y in a battle)
    • A variety of other desktop, online and mobile apps will be pumping notifications into Twitter

There is soon going to be vastly more content in Twitter, and too much of it will be noise.

The Solution: New Ways to Filter Twitter

The solution to this is filtering. But filtering capabilities are weak at best in existing Twitter apps. And even if app developers start adding them, there are limitations built into Twitter’s messaging system that make it difficult to do sophisticated filtering.

Number of Followers as a Filter. One way to filter would be to use social filtering to infer the value of content. For example, content by people with more followers might have a higher reputation score. But let’s face it, there are people on Twitter who are acquiring followers using all sorts of tricky techniques — like using auto-follow or simply following everyone they can find in the hopes that they will be followed back. Or offering money or prizes to followers — a recent trend. The number of followers someone has does not necessarily reflect reputation.

Re-Tweeting Activity as a Filter. A better measure of reputation might be how many times someone is re-tweeted. RT’s definitely indicate whether someone is adding value to the network. That is worth considering.

Social Network Analysis as a Filter. One might also analyze the social graph to build filters. For example, by looking at who is followed by whom. Something similar to Google PageRank might even be possible in Twitter. You could figure out that for certain topics, certain people are more central than others, by analyzing how many other people who tweet about those topics are following them. Ok good. Nobody can patent this now.
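A crude sketch of that topic-centrality idea follows, assuming a small local dataset of tweets and follow relationships. Nothing here reflects Twitter's or Google's actual algorithms; the data shapes are hypothetical.

```python
def topic_centrality(topic, tweets_by_user, follows):
    """Rank users by how many *other people who tweet about a topic* follow them.

    tweets_by_user: dict of user -> list of tweet texts (hypothetical local dataset).
    follows: set of (follower, followee) pairs.
    """
    topic_tweeters = {u for u, tweets in tweets_by_user.items()
                      if any(topic.lower() in t.lower() for t in tweets)}
    scores = {}
    for follower, followee in follows:
        if follower in topic_tweeters and followee in topic_tweeters and follower != followee:
            scores[followee] = scores.get(followee, 0) + 1
    # Highest scores first: the most "central" people for this topic.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```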

Metadata for Filtering. But we are going to need more than inferred filtering I believe. We are going to need ways to filter Twitter messages by sender, type of content, size, publisher, trust, popularity, content rating, MIME type, etc. This is going to require metadata in Twitter, ultimately.

Broadly speaking there are two main ways that metadata could be added to Twitter:

  1. Metadata Added Outside Twitter. Twitter messages could simply be URLs that point to further resources that in turn carry the actual body and metadata of each message. Thus a message might just be a single URL. Clicking that URL would yield a web page with the content and then XML or RDF metadata about the message. If this were to happen, Twitter messages would be simply URLs created and sent by outside client software — and they would require outside software (special Twitter clients) to unpack and read them.
  2. Metadata Added Inside Twitter. Another solution would be for Twitter to extend their message schema so every Twitter message has two parts: a 140-character body and a metadata section with a certain amount of space as well (a rough sketch of such a two-part message follows this list). This would be great. It would be a good move for the people at Twitter to get ahead of this by enabling it sooner rather than later. It will help them protect their control over their own franchise.
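Purely as an illustration of option 2, here is a hypothetical two-part message structure and a metadata-based client filter. The field names are invented for the sketch and are not part of Twitter's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TweetWithMetadata:
    """Hypothetical two-part message: the familiar 140-character body plus
    a small, structured metadata section that filters could act on."""
    body: str
    metadata: dict = field(default_factory=dict)  # e.g. sender type, content type,
                                                  # publisher, content rating, topic tags

    def __post_init__(self):
        if len(self.body) > 140:
            raise ValueError("body exceeds 140 characters")

def passes_filters(msg: TweetWithMetadata, blocked_publishers=(), allowed_types=None):
    """Example client-side filter that uses the metadata rather than parsing the text."""
    if msg.metadata.get("publisher") in blocked_publishers:
        return False
    if allowed_types and msg.metadata.get("content_type") not in allowed_types:
        return False
    return True
```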

One thing is certain. In the next 2 years Twitter is going to fill up with so much information, spam and noise that it will become unusable. Just like much of USENET. The solution will be to enable better filtering of Twitter, and this will require metadata about each tweet.

Someone IS going to do this — perhaps it will come from third-party developers who make Twitter clients, or perhaps from the folks who make Twitter itself. It has to happen.

(To followup on this find me at http://twitter.com/novaspivack)

Now read Part II: Best Practices – Proposed Do’s and Don’t’s for Using Twitter

See Also:

This article on CNET

A new article on CNET mentioning this article

Wolfram Alpha is Coming — And It Could be as Important as Google

Notes:

– This article last updated on March 11, 2009.

– For follow-up, connect with me about this on Twitter here.

– See also: for more details, be sure to read the new review by Doug Lenat, creator of Cyc. He just saw the Wolfram Alpha demo and has added many useful insights.

——————————————————————–

Introducing Wolfram Alpha

Stephen Wolfram is building something new — and it is really impressive and significant. In fact it may be as important for the Web (and the world) as Google, but for a different purpose. It’s not a “Google killer” — it does something different. It’s an “answer engine” rather than a search engine.

Stephen was kind enough to spend two hours with me last week to demo his new online service — Wolfram Alpha (scheduled to open in May). In the course of our conversation we took a close look at Wolfram Alpha’s capabilities, discussed where it might go, and what it means for the Web, and even the Semantic Web.

Stephen has not released many details of his project publicly yet, so I will respect that and not give a visual description of exactly what I saw. However, he has revealed it a bit in a recent article, and so below I will give my reactions to what I saw and what I think it means. And from that you should be able to get at least some idea of the power of this new system.

A Computational Knowledge Engine for the Web

In a nutshell, Wolfram and his team have built what he calls a “computational knowledge engine” for the Web. OK, so what does that really mean? Basically it means that you can ask it factual questions and it computes answers for you.

It doesn’t simply return documents that (might) contain the answers, like Google does, and it isn’t just a giant database of knowledge, like the Wikipedia. It doesn’t simply parse natural language and then use that to retrieve documents, like Powerset, for example.

Instead, Wolfram Alpha actually computes the answers to a wide range of questions — like questions that have factual answers such as “What is the location of Timbuktu?” or “How many protons are in a hydrogen atom?,” “What was the average rainfall in Boston last year?,” “What is the 307th digit of Pi?,” or “what would 80/20 vision look like?”

Think about that for a minute. It computes the answers. Wolfram Alpha doesn’t simply contain huge amounts of manually entered pairs of questions and answers, nor does it search for answers in a database of facts. Instead, it understands and then computes answers to certain kinds of questions.

(Update: in fact, Wolfram Alpha doesn’t merely answer questions; it also helps users to explore knowledge, data and relationships between things. It can even open up new questions — the “answers” it provides include computed data or facts, plus relevant diagrams, graphs, and links to other related questions and sources. It can also be used to ask questions that are new explorations of the relationships between data sets or systems of knowledge. It does not just provide textual answers to questions — it helps you explore ideas and create new knowledge as well.)

How Does it Work?

Wolfram Alpha is a system for computing the answers to questions. To accomplish this it uses built-in models of fields of knowledge, complete with data and algorithms, that represent real-world knowledge.

For example, it contains formal models of much of what we know about science — massive amounts of data about various physical laws and properties, as well as data about the physical world.

Based on this you can ask it scientific questions and it can compute the answers for you, even if it has not been explicitly programmed to answer each question you might ask.

But science is just one of the domains it knows about — it also knows about technology, geography, weather, cooking, business, travel, people, music, and more.

Alpha does not answer natural language queries — you have to ask questions in a particular syntax, or various forms of abbreviated notation. This requires a little bit of learning, but it’s quite intuitive and in some cases even resembles natural language or the keywordese we’re used to in Google.

The vision seems to be to create a system which can do for formal knowledge (all the formally definable systems, heuristics, algorithms, rules, methods, theorems, and facts in the world) what search engines have done for informal knowledge (all the text and documents in various forms of media).

How Does it Differ from Google?

Wolfram Alpha and Google are very different animals. Google is designed to help people find Web pages. It’s a big lookup system basically, a librarian for the Web. Wolfram Alpha, on the other hand, is not at all oriented towards finding Web pages; it’s for computing factual answers. It’s much more like a giant calculator for computing all sorts of answers to questions that involve or require numbers. Alpha is for calculating, not for finding. So it doesn’t compete with Google’s core business at all. In fact, it is much more competitive with the Wikipedia than with Google.

On the other hand, while Alpha doesn’t compete with Google, Google may compete with Alpha. Google is increasingly trying to answer factual questions directly — for example unit conversions, questions about the time, the weather, the stock market, geography, etc. But in this area, Alpha has a powerful advantage: it’s built on top of Wolfram’s Mathematica engine, which represents decades of work and is perhaps the most powerful calculation engine ever built.

How Smart is it and Will it Take Over the World?

Wolfram Alpha is like plugging into a vast electronic brain. It provides extremely impressive and thorough answers to a wide range of questions asked in many different ways, and it computes answers; it doesn’t merely look them up in a big database.

In this respect it is vastly smarter than (and different from) Google. Google simply retrieves documents based on keyword searches. Google doesn’t understand the question or the answer, and doesn’t compute answers based on models of various fields of human knowledge.

But as intelligent as it seems, Wolfram Alpha is not HAL 9000, and it wasn’t intended to be. It doesn’t have a sense of self or opinions or feelings. It’s not artificial intelligence in the sense of being a simulation of a human mind. Instead, it is a system that has been engineered to provide really rich knowledge about human knowledge — it’s a very powerful calculator that doesn’t just work for math problems — it works for many other kinds of questions that have unambiguous (computable) answers.

There is no risk of Wolfram Alpha becoming too smart, or taking over the world. It’s good at answering factual questions; it’s a computing machine, a tool — not a mind.

One of the most surprising aspects of this project is that Wolfram has been able to keep it secret for so long. I say this because it is a monumental effort (and achievement) and almost absurdly ambitious. The project involves more than a hundred people working in stealth to create a vast system of reusable, computable knowledge, from terabytes of raw data, statistics, algorithms, data feeds, and expertise. But he appears to have done it, and kept it quiet for a long time while it was being developed.

Computation Versus Lookup

For those who are more scientifically inclined, Stephen showed me many interesting examples — for example, Wolfram Alpha was able to solve novel numeric sequencing problems, calculus problems, and could answer questions about the human genome too. It was also able to compute answers to questions about many other kinds of topics (cooking, people, economics, etc.). Some commenters on this article have mentioned that in some cases Google appears to be able to answer questions, or at least the answers appear at the top of Google’s results. So what is the Big Deal? The Big Deal is that Wolfram Alpha doesn’t merely look up the answers like Google does, it computes them using at least some level of domain understanding and reasoning, plus vast amounts of data about the topic being asked about.

Computation is in many cases a better alternative to lookup. For example, you could solve math problems using lookup — that is what a multiplication table is after all. For a small multiplication table, lookup might even be almost as computationally inexpensive as computing the answers. But imagine trying to create a lookup table of all answers to all possible multiplication problems — an infinite multiplication table. That is a clear case where lookup is no longer a better option compared to computation.

The ability to compute the answer on a case by case basis, only when asked, is clearly more efficient than trying to enumerate and store an infinitely large multiplication table. The computation approach only requires a finite amount of data storage — just enough to store the algorithms for solving general multiplication problems — whereas the lookup table approach requires an infinite amount of storage — it requires actually storing, in advance, the products of all pairs of numbers.

(Note: If we really want to store the products of ALL pairs of numbers, it turns out this is impossible to accomplish, because there are an infinite number of numbers. It would require an infinite amount of time simply to generate the data, and an infinite amount of storage to store it. In fact, just to enumerate and store all the multiplication products of the numbers between 0 and 1 would require an infinite amount of time and storage. This is because the real numbers are uncountable — there are in fact more real numbers than integers (see the work of Georg Cantor on this). However, the same problem holds even if we are speaking only of integers: it would require an infinite amount of storage to store all their multiplication products, although they at least could be enumerated, given infinite time.)
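For readers who want the set-theoretic statement behind that note, Cantor’s result can be written in one line (standard cardinal arithmetic, nothing specific to this project):

```latex
% Cantor: the integers are countable, the reals are not
|\mathbb{Z}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; \bigl|[0,1]\bigr| \;=\; |\mathbb{R}|
```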

Using the above analogy, we can see why a computational system like Wolfram Alpha is ultimately a more efficient way to compute the answers to many kinds of factual questions than a lookup system like Google. Even though Google is becoming increasingly comprehensive as more information comes on-line and gets indexed, it will never know EVERYTHING. Google is effectively just a lookup table of everything that has been written and published on the Web, that Google has found. But not everything has been published yet, and furthermore Google’s index is also incomplete, and always will be.

Therefore Google does and always will contain gaps. It cannot possibly index the answer to every question that matters or will matter in the future — it doesn’t contain all the questions or all the answers. If nobody has ever published a particular question-answer pair onto some Web page, then Google will not be able to index it, and won’t be able to help you find the answer to that question — UNLESS Google also is able to compute the answer like Wolfram Alpha does (an area that Google is probably working on, but most likely not to as sophisticated a level as Wolfram’s Mathematica engine enables).

While Google only provides answers that are found on some Web page (or at least in some data set it indexes), a computational knowledge engine like Wolfram Alpha can provide answers to questions it has never seen before — provided that it knows the necessary algorithms for answering such questions and has sufficient data to compute the answers using those algorithms. This is a “big if” of course.

Wolfram Alpha substitutes computation for storage. It is simply more compact to store general algorithms for computing the answers to various types of potential factual questions than to store all possible answers to all possible factual questions. In the end, making this tradeoff in favor of computation wins, at least for subject domains where the space of possible factual questions and answers is large. A computational engine is simply more compact and extensible than a database of all questions and answers.

This tradeoff, as Mills Davis points out in the comments to this article is also referred to as the tradeoff between time and space in computation. For very difficult computations, it may take a long time to compute the answer. If the answer was simply stored in a database already of course that would be faster and more efficient. Therefore, a hybrid approach would be for a system like Wolfram Alpha to store all the answers to any questions that have already been asked of it, so that they can be provided by simple lookup in the future, rather than recalculated each time. There may also already be databases of precomputed answers to very hard problems, such as finding very large prime numbers for example. These should also be stored in the system for simple lookup, rather than having to be recomputed. I think that Wolfram Alpha is probably taking this approach. For many questions it doesn’t make sense to store all the answers in advance, but certainly for some questions it is more efficient to store the answers, when you already know them, and just look them up.
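Here is a minimal sketch of that hybrid approach in Python (purely illustrative; it says nothing about how Wolfram Alpha is actually built): compute an answer the first time it is requested, then serve later requests by lookup from a cache.

```python
from functools import lru_cache

# A minimal sketch of the hybrid approach described above: compute an answer the
# first time it is requested, then cache it so later requests become simple lookups.
# (Illustrative only -- this says nothing about how Wolfram Alpha is actually built.)
@lru_cache(maxsize=None)
def nth_prime(n):
    """Find the n-th prime by trial division (deliberately naive and slow)."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(5000))  # first call: computed from scratch
print(nth_prime(5000))  # second call: answered instantly from the cache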

Other Competition

Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for COMPUTING answers to questions about what we as a civilization collectively know. It’s the next step in the distribution of knowledge and intelligence around the world — a new leap in the intelligence of our collective “Global Brain.” And like any big next step, Wolfram Alpha works in a new way — it computes answers instead of just looking them up.

Wolfram Alpha, at its heart, is quite different from a brute-force statistical search engine like Google. And it is not going to replace Google — it is not a general search engine: you would probably not use Wolfram Alpha to shop for a new car, find blog posts about a topic, or choose a resort for your honeymoon. It is not a system that will understand the nuances of what you consider to be the perfect romantic getaway, for example — there is still no substitute for manual, human-guided search for that. Where it appears to excel is when you want facts about something, or when you need to compute a factual answer to some set of questions about factual data.

I think the folks at Google will be surprised by Wolfram Alpha, and they will probably want to own it, but not because it risks cutting into their core search engine traffic. Instead, it will be because it opens up an entirely new field of potential traffic around questions, answers and computations that you can’t do on Google today.

The services that are probably going to be most threatened by a service like Wolfram Alpha are Wikipedia, Cyc, Metaweb’s Freebase, True Knowledge, the START Project, natural language search engines (such as Microsoft’s upcoming search engine, based perhaps in part on Powerset‘s technology), and other services that are trying to build comprehensive factual knowledge bases.

As a side-note, my own service, Twine.com, is NOT trying to do what Wolfram Alpha is trying to do, fortunately. Instead, Twine uses the Semantic Web to help people filter the Web, organize knowledge, and track their interests. It’s a very different goal. And I’m glad, because I would not want to be competing with Wolfram Alpha. It’s a force to be reckoned with.

Relationship to the Semantic Web

During our discussion, after I tried and failed to poke holes in his natural language parser for a while, we turned to the question of just what this thing is, and how it relates to other approaches like the Semantic Web.

The first question was whether Wolfram Alpha could (or even should) be built using the Semantic Web in some manner, rather than (or as well as) on the Mathematica engine it is currently built on. Is anything missed by not building it with the Semantic Web’s languages (RDF, OWL, SPARQL, etc.)?

The answer is that there is no reason that one MUST use the Semantic Web stack to build something like Wolfram Alpha. In fact, in my opinion it would be far too difficult to try to explicitly represent everything Wolfram Alpha knows and can compute using OWL ontologies and the reasoning that they enable. It is just too wide a range of human knowledge and giant OWL ontologies are too difficult to build and curate.

It would of course at some point be beneficial to integrate with the Semantic Web so that the knowledge in Wolfram Alpha could be accessed, linked with, and reasoned with by other semantic applications on the Web, and perhaps to make it easier to pull knowledge in from outside as well. Wolfram Alpha could probably play better with other Web services in the future by providing RDF and OWL representations of its knowledge via a SPARQL query interface — the basic open standards of the Semantic Web. However, for the internal knowledge representation and reasoning that takes place in Wolfram Alpha, OWL and RDF are not required, and it appears Wolfram has found a more pragmatic and efficient representation of his own.

I don’t think he needs the Semantic Web INSIDE his engine, at least; it seems to be doing just fine without it. This view is in fact not different from the current mainstream approach to the Semantic Web — as one commenter on this article pointed out, “what you do in your database is your business” — the power of the Semantic Web is really for knowledge linking and exchange — for linking data and reasoning across different databases. As Wolfram Alpha connects with the rest of the “linked data Web,” it could benefit from providing access to its knowledge via OWL, RDF and SPARQL. But that’s off in the future.
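To make that future possibility concrete, here is a hypothetical sketch of what querying such an interface might look like from Python. The endpoint URL and the ex: vocabulary are invented for illustration, and Wolfram Alpha exposes no such interface today.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical sketch of querying a knowledge service over the open Semantic Web
# standards. The endpoint URL and the ex: vocabulary are invented for illustration;
# Wolfram Alpha does not currently expose such an interface.
endpoint = SPARQLWrapper("http://example.org/sparql")  # placeholder endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?population WHERE {
        ?city ex:name "Ulan Bator" ;
              ex:population ?population .
    }
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["population"]["value"])
```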

It is important to note that just like OpenCyc (which has taken decades to build up a very broad knowledge base of common sense knowledge and reasoning heuristics), Wolfram Alpha is also a centrally hand-curated system. Somehow, perhaps just secretly but over a long period of time, or perhaps due to some new formulation or methodology for rapid knowledge-entry, Wolfram and his team have figured out a way to make the process of building up a broad knowledge base about the world practical where all others who have tried this have found it takes far longer than expected. The task is gargantuan — there is just so much diverse knowledge in the world. Representing even a small area of it formally turns out to be extremely difficult and time-consuming.

It has generally not been considered feasible for any one group to hand-curate all knowledge about every subject. The centralized hand-curation of Wolfram Alpha is certainly more controllable, manageable and efficient for a project of this scale and complexity. It avoids problems of data quality and data consistency. But it’s also a potential bottleneck, and most certainly a cost center. Yet it appears to be a tradeoff that Wolfram can afford to make, and one worth making as well, from what I could see. I don’t yet know how Wolfram has managed to assemble his knowledge base in a comparatively short time, or even how much knowledge he and his team have really added, but at first glance it seems to be a large amount. I look forward to learning more about this aspect of the project.

Building Blocks for Knowledge Computing

Wolfram Alpha is almost more of an engineering accomplishment than a scientific one — Wolfram has broken down the set of factual questions we might ask, and the computational models and data necessary for answering them, into basic building blocks — a kind of basic language for knowledge computing if you will. Then, with these building blocks in hand his system is able to compute with them — to break down questions into the basic building blocks and computations necessary to answer them, and then to actually build up computations and compute the answers on the fly.

Wolfram’s team manually entered, and in some cases automatically pulled in, masses of raw factual data about various fields of knowledge, plus models and algorithms for doing computations with the data. By building all of this in a modular fashion on top of the Mathematica engine, they have built a system that is able to actually do computations over vast data sets representing real-world knowledge. More importantly, it enables anyone to easily construct their own computations — simply by asking questions.

The scientific and philosophical underpinnings of Wolfram Alpha are similar to those of the cellular automata systems he describes in his book, “A New Kind of Science” (NKS). Just as with cellular automata (such as the famous “Game of Life” algorithm that many have seen on screensavers), a set of simple rules and data can be used to generate surprisingly diverse, even lifelike patterns. One of the observations of NKS is that incredibly rich, even unpredictable patterns, can be generated from tiny sets of simple rules and data, when they are applied to their own output over and over again.

In fact, cellular automata, by using just a few simple repetitive rules, can compute anything any computer or computer program can compute, in theory at least. But actually using such systems to build real computers or useful programs (such as Web browsers) has never been practical because they are so low-level it would not be efficient (it would be like trying to build a giant computer, starting from the atomic level).

The simplicity and elegance of cellular automata proves that anything that may be computed — and potentially anything that may exist in nature — can be generated from very simple building blocks and rules that interact locally with one another. There is no top-down control, there is no overarching model. Instead, from a bunch of low-level parts that interact only with other nearby parts, complex global behaviors emerge that, for example, can simulate physical systems such as fluid flow, optics, population dynamics in nature, voting behaviors, and perhaps even the very nature of space-time. This is the main point of the NKS book in fact, and Wolfram draws numerous examples from nature and cellular automata to make his case.
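For readers who have not seen one, here is a self-contained sketch of an elementary cellular automaton (Rule 30) in Python. It illustrates the NKS theme that simple local rules can produce surprisingly complex patterns; it is not a claim that Wolfram Alpha itself works this way.

```python
# Elementary cellular automaton (Rule 30): a tiny, self-contained example of the
# NKS theme that very simple local rules can generate surprisingly complex patterns.
# (Illustrative only -- Wolfram Alpha itself is not a cellular automaton.)
RULE = 30
WIDTH, STEPS = 79, 40

def step(cells):
    """Compute the next row: each cell looks only at its left/self/right neighbors."""
    new = []
    for i in range(len(cells)):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (me << 1) | right  # value 0..7
        new.append((RULE >> neighborhood) & 1)          # read that bit of the rule
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single "on" cell
for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```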

But with all its focus on recombining simple bits of information according to simple rules, cellular automata is not a reductionist approach to science — in fact, it is much more focused on synthesizing complex emergent behaviors from simple elements than on reducing complexity back to simple units. The highly synthetic philosophy behind NKS is the paradigm shift at the basis of Wolfram Alpha’s approach too. It is a system that is very much “bottom-up” in orientation. This is not to say that Wolfram Alpha IS a cellular automaton itself — but rather that it is similarly based on fundamental rules and data that are recombined to form highly sophisticated structures.

Wolfram has created a set of building blocks for working with formal knowledge to generate useful computations, and in turn, by putting these computations together you can answer even more sophisticated questions and so on. It’s a system for synthesizing sophisticated computations from simple computations. Of course anyone who understands computer programming will recognize this as the very essence of good software design. But the key is that instead of forcing users to write programs to do this in Mathematica, Wolfram Alpha enables them to simply ask questions in natural language and then automatically assembles the programs to compute the answers they need.

Wolfram Alpha perhaps represents what may be a new approach to creating an “intelligent machine” that does away with much of the manual labor of explicitly building top-down expert systems about fields of knowledge (the traditional AI approach, such as that taken by the Cyc project), while simultaneously avoiding the complexities of trying to do anything reasonable with the messy distributed knowledge on the Web (the open-standards Semantic Web approach). It’s simpler than top-down AI and easier than the original vision of the Semantic Web.

Generally if someone had proposed doing this to me, I would have said it was not practical. But Wolfram seems to have figured out a way to do it. The proof is that he’s done it. It works. I’ve seen it myself.

Questions Abound

Of course, questions abound. It remains to be seen just how smart Wolfram Alpha really is, or can be. How easily extensible is it? Will it get increasingly hard to add and maintain knowledge as more is added to it? Will it ever make mistakes? What forms of knowledge will it be able to handle in the future?

I think Wolfram would agree that it is probably never going to be able to give relationship or career advice, for example, because that is “fuzzy” — there is often no single right answer to such questions. And I don’t know how comprehensive it is, or how it will be able to keep up with all the new knowledge in the world (the knowledge in the system is exclusively added by Wolfram’s team right now, which is a labor-intensive process). But Wolfram is an ambitious guy. He seems confident that he has figured out how to add new knowledge to the system at a fairly rapid pace, and he seems to be planning to make the system extremely broad.

And there is the question of bias, which we addressed as well. Is there any risk of bias in the answers the system gives because all the knowledge is entered by Wolfram’s team? Those who enter the knowledge and design the formal models in the system are in a position to define the way the system thinks — both the questions and the answers it can handle. Wolfram believes that by focusing on factual knowledge — the kinds of things you might find in Wikipedia or in textbooks or reports — the bias problem can be avoided. At least he is focusing the system on questions that have only one answer — not questions for which there might be many different opinions. Everyone generally agrees, for example, that the closing price of GOOG on a certain date is a particular dollar amount. It is not debatable. These are the kinds of questions the system addresses.

But even for some supposedly factual questions, there are potential biases in the answers one might come up with, depending on the data sources and paradigms used to compute them. Thus the choice of data sources has to be made carefully, to reflect as unbiased a view as possible. Wolfram’s strategy is to rely on widely accepted data sources: well-known scientific models, and public data about factual things like the weather, geography and the stock market published by reputable organizations and government agencies. But of course even this is a particular worldview, and it reflects certain implicit or explicit assumptions about which data sources are authoritative.

This is a system that reflects one perspective — that of Wolfram and his team — which is probably a close approximation of the mainstream consensus scientific worldview of our modern civilization. It is a tool — a tool for answering questions about the world today, based on what we generally agree that we know about it. Still, this is potentially murky philosophical territory, at least for some kinds of questions. Consider global warming — not all scientists even agree it is taking place, let alone what it signifies or where the trends are headed. Similarly in economics, based on certain assumptions and measurements we are either experiencing only mild inflation right now, or significant inflation. There is not necessarily one right answer — there are valid alternative perspectives.

I agree with Wolfram that bias in the data choices will not be a problem, at least for a while. But even scientists don’t always agree on the answers to factual questions, or on what models to use to describe the world — and this disagreement is in fact essential to progress in science. If there were only one “right” answer to any question there could never be progress, or even different points of view. Fortunately, Wolfram is designing his system to link to alternative questions and answers at least, and even to sources for more information about the answers (such as Wikipedia, for example). In this way he can provide unambiguous factual answers, yet also connect to more information and points of view about them at the same time. This is important.

It is ironic that a system like Wolfram Alpha, which is designed to answer questions factually, will probably bring up a broad range of questions that don’t themselves have unambiguous factual answers — questions about philosophy, perspective, and even public policy in the future (if it becomes very widely used). It is a system that has the potential to touch our lives as deeply as Google. Yet how widely it will be used is an open question too.

The system is beautiful, and the user interface is already quite simple and clean. In addition, answers include computationally generated diagrams and graphs — not just text. It looks really cool. But it is also designed by and for people with IQs somewhere in the altitude of Wolfram’s — some work will need to be done dumbing it down a few hundred IQ points so as not to overwhelm the average consumer with answers so comprehensive that they require a graduate degree to fully understand.

It also remains to be seen how much the average consumer thirsts for answers to factual questions. I do think all consumers have a need for this kind of intelligence once in a while, but perhaps not as often as they need something like Google. But I am sure that academics, researchers, students, government employees, journalists and a broad range of professionals in all fields definitely need a tool like this and will use it every day.

Future Potential

I think there is more potential to this system than Stephen has revealed so far. I think he has bigger ambitions for it in the long-term future. I believe it has the potential to be THE online service for computing factual answers — THE system for factual knowledge on the Web. More than that, it may eventually have the potential to learn and even to make new discoveries. We’ll have to wait and see where Wolfram takes it.

Maybe Wolfram Alpha could even do a better job of retrieving documents than Google, for certain kinds of questions — by first understanding what you really want, then computing the answer, and then giving you links to documents related to the answer. But even if it is never applied to document retrieval, I think it has the potential to play a leading role in all our daily lives — it could function like a kind of expert assistant, with all the facts and computational power in the world at our fingertips.

I would expect that Wolfram Alpha will open up various APIs in the future, and then we’ll begin to see interesting new, intelligent applications emerge based on its underlying capabilities and what it knows already.

In May, Wolfram plans to open up what I believe will be a first version of Wolfram Alpha. Anyone interested in a smarter Web will find it quite interesting, I think. Meanwhile, I look forward to learning more about this project as Stephen reveals more in months to come.

One thing is certain, Wolfram Alpha is quite impressive and Stephen Wolfram deserves all the congratulations he is soon going to get.

Appendix: Answer Engines vs. Search Engines

The above article about Wolfram Alpha has created quite a stir in the blogosphere. (Note: for those who haven’t used Techmeme before, just move your mouse over the “discussion” links under the Techmeme headline and expand them to see references to related responses.)

But while the response from most was quite positive and hopeful, some writers jumped to conclusions, went snarky, or entirely missed the point.

For example some articles such as this one by Jon Stokes at Ars Technica, quickly veered into refuting points that I in fact never made (Stokes seems to have not actually read my article in full before blogging his reply perhaps, or maybe he did read it but simply missed my point).

Other articles, such as this one by Saul Hansell of the New York Times’ Bits blog, focused on the business questions — again a topic that I did not address in my article. My article was about the technology, not the company or the business opportunity.

The most common misconception in the articles that missed the point concerns whether Wolfram Alpha is a “Google killer.”

In fact I was very careful, in both the title of my article and its content, to make the distinction between Wolfram Alpha and Google. And I tried to make it clear that Wolfram Alpha is not designed to be a “Google killer.” It has a very different purpose: it doesn’t compete with Google for general document retrieval; instead, it answers factual questions.

Wolfram Alpha is an “answer engine,” not a search engine.

Answer engines are a different category of tool from search engines. They understand and answer questions — they don’t simply retrieve documents. (Note: in fact, Wolfram Alpha doesn’t merely answer questions; it also helps users explore knowledge and data visually, and can even open up new questions.)

Of course Wolfram Alpha is not alone in making a system that can answer questions. This has been a longstanding dream of computer scientists, artificial intelligence theorists, and even a few brave entrepreneurs in the past.

Google has also been working on answering questions that are typed directly into its search box. For example, type a geography question or even “what time is it in Italy” into the Google search box and you will get a direct answer. But the reasoning and computational capabilities of Google’s “answer engine” features are primitive compared to what Wolfram Alpha does.

For example, the Google search box does not compute answers to calculus problems, or tell you what phase the moon will be in on a certain future date, or tell you the distance from San Francisco to Ulan Bator, Mongolia.
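As a toy illustration of what “computing” such an answer means, here is the distance example worked from a formula rather than a stored fact (the coordinates are approximate and hard-coded; this has nothing to do with either company’s internals):

```python
from math import radians, sin, cos, asin, sqrt

# A toy illustration of what "computing an answer" means, using the San Francisco
# to Ulan Bator example above: a great-circle distance from the haversine formula.
# The coordinates are approximate and hard-coded purely for illustration.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius of ~6,371 km

print(round(haversine_km(37.77, -122.42, 47.92, 106.92)))  # roughly 9,300 km
```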

Many questions can or might be answered by Google, using simple database lookup, provided that Google already has the answers in its index or databases. But there are many questions that Google does not yet find or store the answers to efficiently. And there always will be.

Google’s search box provides some answers to common computational questions (perhaps by looking them up in a big database in some cases, or perhaps by computing the answers in others). But so far it has limited range. Of course the folks at Google could work more on this; they have the resources if they want to. But they are far behind Wolfram Alpha and others (for example, the START project, which I learned about just today, True Knowledge, and the Cyc project, among many others).

The approach taken by Wolfram Alpha — and by others working on “answer engines” — is not to build the world’s largest database of answers, but rather to build a system that can compute answers to unanticipated questions. Google has built a system that can retrieve any document on the Web. Wolfram Alpha is designed to be a system that can answer any factual question in the world.

Of course, if the Wolfram Alpha people are clever (and they are), they will probably design their system to also leverage databases of known answers whenever they can, and to also store any new answers they compute to save the trouble of re-computing them if asked again in the future. But they are fundamentally not making a database lookup oriented service. They are making a computation oriented service.

Answer engines do not compete with search engines, but some search engines (such as Google) may compete with answer engines. Time will tell if search engine leaders like Google will put enough resources into this area of functionality to dominate it, or whether they will simply team up with the likes of Wolfram and/or others who have put a lot more time into this problem already.

In any case, Wolfram Alpha is not a “Google killer.” It wasn’t designed to be one. It does however answer useful questions — and everyone has questions. There is an opportunity to get a lot of traffic, depending on things that still need some thought (such as branding, for starters). The opportunity is there, although we don’t yet know whether Wolfram Alpha will win it. I think it certainly has all the hallmarks of a strong contender at least.

Challenges Twitter, and the Twitter Community, Will Soon Face

Challenges Twitter Will Face

As I think about Twitter more deeply, one thing that jumps out at me is that in each wave of messaging technology, the old way is supplanted by a new way that is faster, more interactive, and has less noise. And then noise inevitably comes again and everyone moves to a new tool with less noise. This is the boom and bust cycle of messaging tools on the Web. Twitter is the new “new tool,” but inevitably, as Twitter gains broader adoption, the noise will come. I see several near-term challenges for Twitter as a service, and for the community of Twitter users:

Spam.
So far I have not encountered much real, deliberate spam on Twitter. The community does a good job of self-policing, and the spammers haven’t figured out how to co-opt it. Most of what people call spam on Twitter is inadvertent, from what I can tell. But the real spammers are coming, and that is going to be a serious challenge for Twitter’s relatively simple social networking and messaging model. What is the Twitter community going to do when all the spam and noise inevitably arrives?

Mainstream Users.
Currently Twitter seems a bit like the early Web, and the early blogosphere — it is mostly an elite group of influencers and early adopters who have some sense of connectedness and decorum. But what happens when everyone else joins Twitter? What happens when the mainstream users arrive and fill Twitter up with more voices, and potentially more noise (at least from the perspective of the early users of Twitter), than it contains today?

Keeping Up.
Another challenge that I see as a new user of Twitter is that it is very hard to keep up effectively with what so many people are tweeting, and I get the feeling I miss a lot of important things because I simply don’t have time to monitor Twitter at all hours. I need a way to see just the things that are really important, popular or likely to be of interest to me, instead of everything. I’m monitoring a number of Twitter searches in my Twitter client and this seems to help. I also monitor Twitter searches and certain people’s tweets via RSS. But it’s a lot to keep up with.

Conversation Overload.
It is also difficult to manage conversations, or to follow many of them at once, because there is no threading in the Twitter clients I have tried. Without actual threading it is quite hard to follow the flow of conversations, let alone multiple simultaneous conversations. It seems like a great opportunity for visualization as well — for example, I would love a way to visually see conversations grow and split into sub-threads in real-time.
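As a minimal sketch of what client-side threading could look like, here is a small Python example that groups tweets into conversation trees. The sample tweets are made up; the only assumption about Twitter’s API is that each tweet carries a reply-to reference, as its in_reply_to status id does.

```python
from collections import defaultdict

# A minimal sketch of the threading a Twitter client could offer: group tweets into
# conversation trees using a reply-to reference (Twitter's API exposes this as an
# in_reply_to status id). The tweets below are made-up sample data.
tweets = [
    {"id": 1, "user": "alice", "text": "Trying out a new tool",  "reply_to": None},
    {"id": 2, "user": "bob",   "text": "@alice which one?",      "reply_to": 1},
    {"id": 3, "user": "alice", "text": "@bob it's called Twine", "reply_to": 2},
    {"id": 4, "user": "carol", "text": "An unrelated thought",   "reply_to": None},
]

children = defaultdict(list)
for tweet in tweets:
    children[tweet["reply_to"]].append(tweet)

def print_thread(parent_id=None, depth=0):
    """Recursively print each tweet indented under the tweet it replies to."""
    for tweet in children.get(parent_id, []):
        print("  " * depth + "@{}: {}".format(tweet["user"], tweet["text"]))
        print_thread(tweet["id"], depth + 1)

print_thread()
```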

Integration Overload.
As an increasing number of external social networks, messaging systems, and publishing engines all start to integrate with Twitter, there will be friction. What are the rules for how services can integrate with Twitter — not just at the API level, but at the user-experience level?

How many messages, of what type, for what purpose can an external service send into Twitter? Are there standards for this that everyone must abide by or is it optional?

The potential for abuse, or for Twitter to just fill up to the point of being totally overloaded with content is huge. It appears inevitable that this will happen. Will a new generation of Twitter clients with more powerful filtering have to be generated to cope with this?

These are certainly opportunities for people making Twitter clients. Whatever Twitter app solves these problems could become very widely used.

How Twitter Makes Things Faster: A Timeline

The World is Getting Faster

In the world of Twitter things happen in real-time, not Internet-time. It’s even faster than the world of the 1990’s and the early 2000’s.

Here’s an interesting timeline:

  1. In the 1980’s the fax machine made snailmail almost obsolete. Faxing was faster.
  2. In the 1990’s email made faxing almost obsolete. Email was faster.
  3. In the 2000’s social media rose to challenge email’s dominance. The blogosphere became the center of focus. Blogging about something was often a faster way to get attention (to oneself, or to the topic) than emailing people. And you could more easily reach a larger audience.
  4. In the 2010’s it looks like Twitter (and other real-time messaging systems) may become more important than email and even blogging. Twitter is simply faster. And you can reach more people in less time, more interactively, in Twitter than via email. Twitter may overcome the asynchronous nature of the Web. Even search may go “real-time.”

Why Your Brand or Company Must be On Twitter

Why Your Brand or Company Should be Watching Twitter

Messages spread so virally and quickly in Twitter when they are “hot” that there is almost no time to react. It’s at once fascinating to watch, and be a part of, and terrifying. It’s almost too “live.” There is no time to even think. And this is what I mean when I say that Twitter makes the world faster. And that this is somewhat scary.

If you have an online service or a brand that is widely used, you just cannot afford to ignore Twitter anymore. You have to have people watching it and engaging with the Twitter community, 24/7. It’s a big risk if you don’t. And a missed opportunity as well, on the positive side. My company is starting to do this via @twine_official on Twitter.

People might be complaining about you, or they might be giving you compliments or asking important questions on Twitter — about you personally (if you are a CEO or exec) or your company or support or marketing teams if they are on Twitter. Or they might be simply talking about you or your company or product.

In any case, you need to know this and you need to be there to respond either way. Twitter is becoming too important and influential to not pay attention to it.

If you wait several hours to reply to a developing Twitter flare-up it is already too late. And furthermore, if your product and marketing teams are not posting officially in Twitter you are missing the chance to keep your audience informed in what may be the most important new online medium since blogs. Because, simply put, Twitter is where the action is now, and it is going to be huge. I mean really huge. Like Google. You cannot ignore it.

Who has Time for Twitter?

But who has time for this? Nobody. But you have to make time anyway. It’s that important.

It was bad enough with email and Blackberries taking away any shred of free time or being offline. But at least with email and Blackberries you don’t have to pay attention every second.

With Twitter, there is a feeling that you have to be obsessively watching it all the time or you might miss something important or even totally vital. Positive and negative flare-ups happen all the time on Twitter and they could develop at any moment. You need to have someone from your company or brand keeping tabs on this so you are there if you need to be. Being late to the party, or the crisis, is not an option.

It appears that monitoring and participating in Twitter is absolutely vital to any big brand, and even the smaller ones. But it’s not easy to figure out how to do this effectively.

For a Twitter newbie like me, there is a bit of a learning curve. It’s not easy to figure out how to use Twitter effectively. The basic Web interface on the Twitter Website is not productive enough to manage vast amounts of tweets and conversations. I’m now experimenting with Twitter clients and so far have found TweetDeck to be pretty good.

Why Twitter is Actually Something New and Different

Why is Twitter Different From What’s Come Before?

I pride myself on being on top of the latest technologies but I think I unfairly judged Twitter a while back. I decided it wasn’t really useful or important; just another IM type tool. Chat-all-over-again. But I was wrong. Twitter is something new.

  • It is both real-time and asynchronous
  • You can reach more people, more quickly, and the ratio of influencers to the general population is still quite high on Twitter because it is early in its adoption cycle
  • The threshold for interaction and sharing is lower: People accept messages, publish messages to, and forward things (by “Re-tweeting”) along weaker social links (meaning, to, from and via people they barely know or don’t know at all; this does not happen with email except in the case of chain-letters).
  • Twitter has a different social structure and sharing dynamic than email — there is a strong sense of shared place, where everyone is able to see the public activity stream, and can easily follow sudden conversations and issues that flare up in the commons.
  • Twitter has its own somewhat unspoken rules of etiquette around following, direct messaging, tweeting, etc. The best practices for using Twitter, and for integrating with Twitter, are not easy to find, and I’m not sure there really are rules or standards. You sort of have to figure it all out on your own.
  • Twitter is also quite different from other IM systems because the way it is designed encourages public and group discourse, not just person-to-person messaging. It has more of a “commons” in it.

Twitter Changes Everything. The World Just Got Faster — A Case Study (Full Version)

Intro

Because we think Twitter is important, my company has been working on integrating Twine with Twitter. Last week we soft-launched the first features in this direction.

It turns out there is some room for improvement to our implementation of Twine-Twitter integration — which many Twitterers have pointed out. This has really opened my eyes to the power and importance of Twitter, and also to how different the Twitter-enabled world is going to be (or already is, in fact).

Before last week, I never really paid much attention to Twitter, relative to other forms of interaction. In order of time-spent-per-medium I did most of my communication via email, face-to-face, SMS, phone, or online chat. I had only used Twitter lightly and didn’t really know how to use it effectively, let alone what a “DM” was. Now I’m getting up to speed with it.

I have had an interesting experience this week really immersing myself in Twitter for the first time. It hasn’t been easy though. In fact it has been a real learning experience, even for a veteran social media tools builder like myself!

You can see a bit of what I’m referring to by following me @novaspivack on Twitter and/or searching for the keyword “twine” or the hashtag #twine on Twitter, and by viewing a recent conversation on Twitter between myself and the popular Twitterer, Chris Brogan @chrisbrogan.

Twitter changes everything. My world, and in fact The World, have just changed because of it. And I’m not sure any of us are prepared for what this is going to mean for our lives. For how we communicate. For how we do business. The world just got faster. But most people haven’t realized this yet. They soon will.

In this article I will discuss some observations about Twitter, and why Twitter is going to be so important to your brand, your business, and probably your life.

Why is Twitter Different From What’s Come Before?

I pride myself on being on top of the latest technologies but I think I unfairly judged Twitter a while back. I decided it wasn’t really useful or important; just another IM type tool. Chat-all-over-again. But I was wrong. Twitter is something new.

  • It is both real-time and asynchronous
  • You can reach more people, more quickly, and the ratio of influencers to the general population is still quite high on Twitter because it is early in its adoption cycle
  • The threshold for interaction and sharing is lower: People accept messages, publish messages to, and forward things (by “Re-tweeting”) along weaker social links (meaning, to, from and via people they barely know or don’t know at all; this does not happen with email except in the case of chain-letters).
  • Twitter has a different social structure and sharing dynamic than email — there is a strong sense of shared place, where everyone is able to see the public activity stream, and can easily follow sudden conversations and issues that flare up in the commons.
  • Twitter has its own somewhat unspoken rules of etiquette around following, direct messaging, tweeting, etc. The best practices for using Twitter, and for integrating with Twitter, are not easy to find, and I’m not sure there really are rules or standards. You sort of have to figure it all out on your own.
  • Twitter is also quite different from other IM systems because the way it is designed encourages public and group discourse, not just person-to-person messaging. It has more of a “commons” in it.

What is Twine?

Before I explain the potential for integrating Twine and Twitter, and what I’ve observed and learned so far, I’ll explain what Twine is, for those who don’t know yet.

Twine is a social network for gathering and keeping up with knowledge around interests, on your own and with other people who share your interests.

Twine is smarter than bookmarking and interest tracking tools that have come before. It combines the collective intelligence of humans with machine learning, language understanding and the Semantic Web.

For example, suppose you are interested in technology news. You can bookmark any interesting articles about tech that you find into Twine, for your own private memory, and/or into various public or private interest groups (called “twines”) that are for collecting and sharing tech news on various sub-topics. The content is found via the wisdom of crowds.

But that is just the beginning. The real payoff to users for participating in Twine is that it automatically turns your data into knowledge using machine learning, language understanding, and the Semantic Web.

Twine is Smart

What makes Twine different from social bookmarking tools like Delicious, or from social news tools like Digg, StumbleUpon and Mixx? The difference is that Twine is smarter.

Twine learns what you are interested in as you add stuff to it, by using natural language technology to crawl and read every web page you bookmark, and every note or email you send into it. Twine does this for individuals, and for groups.

From this learning, Twine auto-tags your content with tags for related people, places, organizations and other topics. That in itself is useful because your content becomes self-organizing. It becomes easier to see what a collection is about (by looking at the Semantic tags), and you can quickly search and browse to exactly what you want.
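To make the idea of auto-tagging concrete, here is a purely illustrative sketch in Python: a naive extractor that treats repeated capitalized phrases as candidate tags. It is a stand-in for illustration only, not a description of Twine’s actual natural language technology.

```python
import re
from collections import Counter

# A purely illustrative sketch of auto-tagging a bookmarked page: pull out repeated
# capitalized phrases as candidate entity tags. This naive extractor is a stand-in
# for illustration only, not a description of Twine's actual technology.
def suggest_tags(text, top_n=5):
    candidates = re.findall(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b", text)
    counts = Counter(c for c in candidates if len(c) > 3)
    return [tag for tag, _ in counts.most_common(top_n)]

page_text = ("Stephen Wolfram demonstrated Wolfram Alpha in San Francisco. "
             "Wolfram Alpha computes answers rather than looking them up.")
print(suggest_tags(page_text))  # e.g. ['Wolfram Alpha', 'Stephen Wolfram', 'San Francisco']
```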

Twine also learns from your social and group connections in Twine. By learning from your social graph, Twine is able to infer even more about who and what you might be interested in. This learning — about your Semantic graph and your Social graph in Twine — results in personalized recommendations for things you might like.

Finally, like Twitter, Twine helps you keep up with your interests by notifying you whenever new things are added to the twines you follow. You can get notified in your Interest Feed on Twine, or via our daily email digests, RSS feeds, and soon by following Twine activity in Twitter (Coming Soon).

Twine and Twitter — Different yet Complementary

Twitter is for participating in discussions. Twine is for participating in collections of knowledge. They are quite different yet complementary. Because of this I think there is great potential to integrate Twine and Twitter more deeply.

Both services have one thing in common: you can share and follow bits of information with individuals and groups — except Twine is focused on sharing larger chunks of knowledge rather than just 140-character tweets, and it also adds more value to what is shared by semantically analyzing the content and growing communal pools of shared knowledge.

Whereas Twitter is largely focused on sharing messages and brief thoughts about what you’re doing, Twine is for collecting and sharing longer-form knowledge — like bookmarks and their metadata, and metadata about videos, photos, notes, emails, longer comments.

There is a difference in user-intent between Twitter and Twine however. In Twitter the intent is to update people on what you are doing. In Twine the intent is to gather and track knowledge around interests.

Twitter + Twine = Smarter Collective Intelligence

Twitter’s live discussions plus Twine’s growing knowledge and intelligence could eventually enable a new leap in collective intelligence on the Web. We could use the analogy of a collective distributed brain — a Global Brain, as some call it.

In that (future) scenario, Twitter is the real-time attention, perception and thinking and Twine is the learning, organizing, and memory behind it. If linked together properly they could form a kind of feedback loop between people and information that exhibits the characteristics of a vast, distributed intelligent system (like the human brain, in some respects).

I spend a fair amount of time thinking about the coming Global Brain, and speaking about it to others. Twitter + Twine may be a real step in that direction. It is one route to how the Web might become dramatically more intelligent.

By connecting the real-time collective thinking of live people (Twitter), with Web-scale knowledge management and artificial intelligence on the backend (Twine) we can make both services smarter.

Our Near-Term Twitter Integration Plan

Big futuristic thoughts aside, our near-term goals for integrating Twine and Twitter are much more modest.

  • For phase 1, we are simply enabling Twine users and admins to invite Twitter followers to connect to them on Twine, and to join their twines around various topics of interest.
  • For phase 2 we plan to enable Twine users to reflect things they post to Twine to their Twitter followers — so, for example, if you bookmark a cool article to Twine it will be automatically tweeted to your followers (not as a DM). If you post something to a particular twine on a topic, it will be tweeted to anyone who follows that twine on Twitter (see the sketch after this list).
  • For phase 3 — who knows? Perhaps we might enable it to work in the other direction as well: for example, Twine could pull all or selected tweets (or the URLs they contain) into various twines and automatically semanticize them — crawl, index, tag, and organize them and make them searchable. It’s an intriguing possibility. This would make Twine into a powerful add-on for Twitter. I’m thinking this over, but this is pure speculation at this point.
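As a rough sketch of what the phase 2 mechanics could look like, here is a short Python example that tweets a bookmark on a user’s behalf. The endpoint URL, the credentials, and the shape of the bookmark object are placeholders and assumptions; this is not Twine’s actual implementation.

```python
import requests
from requests_oauthlib import OAuth1

# A rough sketch of the phase 2 mechanics: when a user bookmarks something to a twine,
# reflect it to their Twitter followers as a normal (non-DM) tweet. The endpoint URL,
# credentials, and bookmark shape are placeholders; this is not Twine's implementation.
TWITTER_UPDATE_URL = "https://api.twitter.com/1.1/statuses/update.json"  # assumed endpoint

def tweet_bookmark(bookmark, auth):
    text = "{} {} (via Twine)".format(bookmark["title"], bookmark["url"])
    if len(text) > 140:              # the old 140-character limit
        text = text[:137] + "..."
    response = requests.post(TWITTER_UPDATE_URL, data={"status": text}, auth=auth)
    response.raise_for_status()
    return response.json()

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
tweet_bookmark({"title": "A cool article", "url": "http://example.com/article"}, auth)
```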

Difficult First Step

Phase 1 of Twine-Twitter integration has had a few hiccups however.

For this phase, we enabled our users to invite their Twitter followers to connect with them on Twine, and to join their twines, from inside of Twine. This sends an invite message as a direct message (“DM” — a private tweet) from the user’s Twine account to whichever Twitter followers they select to connect with.

But the wording of our invite message came off as too impersonal and some Twitter users mistook it for a bot-generated ad rather than a personal invitation from one of their followers.

Also, we had an unexpected bug that resulted in the invite URL taking the user to log in to or join Twine, but not then landing them at a page where they could connect to a friend or join the group they were invited to.

(Note: these hiccups will be fixed by Thursday of this week. The wording of the invite message and the bugs will be fixed in a patch release. We are also thinking about ways to modify this feature to be less noisy on Twitter.)

We have certainly had a few complaints on Twitter about the way this feature is (not) working right now. Thankfully most of the comments have been positive, or at least understanding. We’re very sorry to anyone who was annoyed by the invite message seeming like an ad.

That said, we believe that we’ll have this fixed and working right very soon, and this should cut down on the annoyance factor. We’re open to suggestions however.

Flare-Ups Happen In Minutes On Twitter

Ordinarily a seemingly minor wording issue and bug like what I have described above would not be a problem and could wait a few days for resolution. But in the case of Twitter, all it took was one very widely followed Twitterer (@chrisbrogan) tweeting today that he was annoyed by the invite message, and a mini-firestorm erupted as his followers then re-tweeted it to their followers, and so on. The cascade showed signs of becoming a pretty big mess.

Fortunately I was alerted by my team in time and replied to the tweets to explain that our invite message wasn’t spam, and that fixes were in process. Chris Brogan and his followers and others were quick to reply and fortunately they were understanding and appreciative of our transparency around this issue. The transcript is here.

This situation ended well because we were quick and transparent, and because Chris and his followers were understanding. It didn’t turn into a PR nightmare. But it could have.

What worries me is what if nobody on my team had been watching Twitter when this happened??? We might have been toast. In a matter of minutes, literally, tens of thousands of people might have become angry and it would have taken on a life of its own.

Why Your Brand or Company Should be Watching Twitter

Messages spread so virally and quickly in Twitter when they are “hot” that there is almost no time to react. It’s at once fascinating to watch, and be a part of, and terrifying. It’s almost too “live.” There is no time to even think. And this is what I mean when I say that Twitter makes the world faster. And that this is somewhat scary.

If you have an online service or a brand that is widely used, you just cannot afford to ignore Twitter anymore. You have to have people watching it and engaging with the Twitter community, 24/7. It’s a big risk if you don’t. And a missed opportunity as well, on the positive side. My company is starting to do this via @twine_official on Twitter.

People might be complaining about you, or they might be giving you compliments or asking important questions on Twitter — about you personally (if you are a CEO or exec) or your company or support or marketing teams if they are on Twitter. Or they might be simply talking about you or your company or product. In any case, you need to know this and you need to be there to respond either way. Twitter is becoming too important and influential to not pay attention to it.

If you wait several hours to reply to a developing Twitter flare-up it is already too late. And furthermore, if your product and marketing teams are not posting officially in Twitter you are missing the chance to keep your audience informed in what may be the most important new online medium since blogs. Because, simply put, Twitter is where the action is now, and it is going to be huge. I mean really huge. Like Google. You cannot ignore it.

But who has time for this? It was bad enough with email and Blackberries taking away any shred of free time or being offline. But at least with email and Blackberries you don’t have to pay attention every second. With Twitter, there is a feeling that you have to be obsessively watching it all the time or you might miss something important or even totally vital. Positive and negative flare-ups happen all the time on Twitter and they could develop at any moment.

It appears that monitoring and participating in Twitter is absolutely vital to any big brand, and even the smaller ones. But it’s not easy to figure out how to do this effectively. For a Twitter newbie like me, there is a bit of a learning curve. It’s not easy to figure out how to use Twitter effectively. The basic Web interface on the Twitter Website is not productive enough to manage vast amounts of tweets and conversations. I’m now experimenting with Twitter clients and so far have found TweetDeck pretty good.

The World is Getting Faster

In the world of Twitter things happen in real-time, not Internet-time. It’s even faster than the world of the 1990’s and the early 2000’s. Here’s an interesting timeline:

  1. In the 1980’s the fax machine made snailmail almost obsolete. Faxing was faster.
  2. In the 1990’s email made faxing almost obsolete. Email was faster.
  3. In the 2000’s social media rose to challenge email’s dominance. The blogosphere became the center of focus. Blogging about something was often a faster way to get attention (to oneself, or to the topic) than emailing people. And you could more easily reach a larger audience.
  4. In the 2010’s it looks like Twitter (and other real-time messaging systems) may become more important than email and even blogging. Twitter is simply faster. And you can reach more people in less time, more interactively, in Twitter than via email. Twitter may overcome the asynchronous nature of the Web. Even search may go “real-time.”

Challenges Twitter Will Face

As I think about this, one thing that jumps out at me is that in each wave of messaging technology, the old way is supplanted by a new way that is faster, more interactive, and has less noise. But as Twitter gains broader adoption, the noise will come.

Spam. So far I have not encountered much real, deliberate spam on Twitter. The community does a good job of self-policing, and the spammers haven’t figured out how to co-opt it. Most of what people call spam on Twitter is inadvertent, from what I can tell. But the real spammers are coming, and that is going to be a serious challenge for Twitter’s relatively simple social networking and messaging model. What is the Twitter community going to do when all the spam and noise inevitably arrives?

Mainstream Users. Currently Twitter seems a bit like the early Web, and the early blogosphere — it is mostly an elite group of influencers and early adopters who have some sense of connectedness and decorum. But what happens when everyone else joins Twitter? What happens when the mainstream users arrive and fill Twitter up with more voices, and potentially more noise (at least from the perspective of the early users of Twitter), than it contains today?

Keeping Up. Another challenge that I see as a new user of Twitter is that it is very hard to keep up effectively with what so many people are tweeting, and I get the feeling I miss a lot of important things because I simply don’t have time to monitor Twitter at all hours. I need a way to see just the things that are really important, popular or likely to be of interest to me, instead of everything. I’m monitoring a number of Twitter searches in my Twitter client and this seems to help. I also monitor Twitter searches and certain people’s tweets via RSS. But it’s a lot to keep up with.

Conversation Overload. It is also difficult to manage conversations, or to follow many of them at once, because there is no threading in the Twitter clients I have tried. Without actual threading it is quite hard to follow the flow of conversations, let alone multiple simultaneous conversations. It seems like a great opportunity for visualization as well — for example, I would love a way to visually see conversations grow and split into sub-threads in real-time.

Integration Overload. As an increasing number of external social networks, messaging systems, and publishing engines all start to integrate with Twitter, there will be friction. What are the rules for how services can integrate with Twitter — not just at the API level, but at the user-experience level?

How many messages, of what type, for what purpose can an external service send into Twitter? Are there standards for this that everyone must abide by or is it optional?

The potential for abuse, or for Twitter to just fill up to the point of being totally overloaded with content is huge. It appears inevitable that this will happen. Will a new generation of Twitter clients with more powerful filtering have to be generated to cope with this?

These are certainly opportunities for people making Twitter clients. Whatever Twitter app solves these problems could become very widely used.

Conclusion

I am still just learning about Twitter but already I can tell it is going to become a major part of my online life now. I’m not sure whether I am happy about this or worried that I’m going to have no free time at all. Maybe both. It’s a new world. And it’s even faster than I expected. I don’t know how I will cope with Twitter, but I have a fascination with it that is turning into an obsession. I guess all new Twitter users go through this phase. The question is, what comes next?

One thing is for sure. You have to pay attention to Twitter.

Kevin Kelly's View of Collective Intelligence

Kevin Kelly wrote an interesting post today, which cites one of my earlier diagrams on the future of the Web. His diagram is a map of two types of collective intelligence — collective human intelligence and collective machine intelligence. It's a helpful view of where the Web is headed. I am of the opinion that the "One Machine," aka the Global Brain, will include both humans and machines working together to achieve a form of collective intelligence that transcends the limitations of either form of intelligence on its own. At Twine we are combining these two forms of intelligence to help people discover and organize content around their interests. (Thanks to Kevin for citing Twine.)

How to Build the Global Mind

Kevin Kelly recently wrote another fascinating article about evidence of a global superorganism. It’s another useful contribution to the ongoing evolution of this meme.

I tend to agree that we are at what Kevin calls Stage III. However, an important distinction in my own thinking is that the superorganism is composed not just of machines, but also of people.

(Note: I propose that we abbreviate the One Machine as "the OM." It's easier to write and it sounds cool.)

Today, humans still make up the majority of processors in the OM. Each human nervous system comprises billions of processors, and there are billions of humans. That’s a lot of processors.
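For a very rough sense of scale, here is a back-of-the-envelope calculation using commonly cited ballpark estimates (roughly 7 billion people, on the order of 10^11 neurons and 10^14 synapses per brain); the figures are approximations, not precise measurements.

```python
# Back-of-the-envelope figures only; these are rough, commonly cited ballpark
# estimates, not precise measurements.
humans             = 7e9    # world population, order of magnitude
neurons_per_brain  = 1e11   # roughly 100 billion neurons per human brain
synapses_per_brain = 1e14   # roughly 100 trillion synapses per human brain

print(f"Total human neurons:  {humans * neurons_per_brain:.1e}")   # ~7e20
print(f"Total human synapses: {humans * synapses_per_brain:.1e}")  # ~7e23
```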

However, Ray Kurzweil posits that the balance of processors is rapidly moving toward favoring machines, and that sometime in the latter half of this century machine processors will outnumber, or at least outcompute, all the human processors combined, perhaps many times over.

While I agree with Ray's point that machine intelligence will eventually outnumber human intelligence, I'm skeptical of Kurzweil's timeline, especially in light of recent research that shows evidence of quantum-level computation within microtubules inside neurons. If in fact the brain computes at the tubulin level, then it may have many orders of magnitude more processors than currently estimated. This remains to be determined. Those who argue against this claim that the brain can be modelled on a classical level and that quantum computing need not be invoked. To be clear, I am not claiming that the brain is a quantum computer; I am claiming that there seems to be evidence that computation in the brain takes place at, or near, the quantum level. Whether quantum effects have any measurable effect on what the brain does is not the question; the question is simply whether microtubules are the lowest-level processing elements of the brain. If they are, then there are a whole lot more processors in the brain than previously thought.

Another point worth considering is that much of the brain's computation takes place not within the neurons but at the synapses between them, and this computation happens chemically rather than electrically. There are vastly more synapses than neurons, and computation within the synapses happens at a much faster and more granular level than neuronal firings. Chemical-level computation takes place with elements that are many orders of magnitude smaller than neurons. This is another argument for the brain computing at a much lower level than is currently thought.

In other words, the resolution of computation in the human brain is still unknown. We have several competing approximations but no final answer. I do think, however, that the evidence points to computation being much more granular than we currently assume.

In any case, I do agree with Kurzweil that artificial computers will eventually outnumber naturally occurring human computers on this planet; it's just a question of when. In my view it will take a little longer than he thinks, perhaps as long as 100 to 200 years.

There is another aspect of my thinking on this subject which may throw a wrench in the works. I don't think that what we call "consciousness" is something that can be synthesized. Humans appear to be conscious, but we have no idea what that means yet. It is undeniable that we all have an experience of being conscious, and this experience is mysterious. It is also the case that, at least so far, nobody has built a software program or hardware device that seems to be having this experience. In fact, we don't even know how to test for consciousness. For example, the much-touted Turing Test does not test consciousness; it tests humanlike intelligence. There really isn't a test for consciousness yet. Devising one is an interesting and important goal that we should perhaps be working on.

In my own view, consciousness is probably fundamental to the substrate of the universe, like space, time and energy. We don't know what space, time and energy actually are. We cannot measure them directly either. All our measurements of space, time and energy are indirect; we measure other things that imply that space, time and energy exist. Space, time and energy are inferred from effects we observe on material things that we can measure. I think the same may be true of consciousness. So the question is, what are the measurable effects of consciousness? One candidate seems to be the double-slit experiment, which shows that the act of observation causes the quantum wave function to collapse. Are there other effects we can cite as evidence of consciousness?

I have recently been wondering how connected consciousness is to the substrate of the universe we are in. If consciousness is a property of the substrate, then it may be impossible to synthesize. For example, we never synthesize space, time or energy — no matter what we do, we are simply using the space, time and energy of the substrate that is this universe.

If this is the case, then creating consciousness is impossible. The best we can do is somehow channel the consciousness that is already there in the substrate of the universe. In fact, that may be what the human nervous system does: it channels consciousness, much in the way that an electrical circuit channels electricity. The reason that software programs will probably not become conscious is that they are too many levels removed from the substrate. There is little or no feedback between the high-level representations of cognition in AI programs and the quantum-level computation (and possibly consciousness) of the physical substrate of the universe. That is not the case in the human nervous system, where the basic computing elements and all the cognitive activity are directly tied to the physical substrate of the universe. There is at least the potential for two-way feedback between the human mind (the software), the human brain (a sort of virtual machine), and the quantum field (the actual hardware).

So the question I have been asking myself lately is: how connected is consciousness to the physical substrate? And furthermore, how important is consciousness to what we consider intelligence to be? If consciousness is important to intelligence, then artificial intelligence may not be achievable through software alone. It may require consciousness, which may in turn require a different kind of computing system, one which is more connected (through bidirectional feedback) to the physical quantum substrate of the universe.

What all this means to me is that human beings may form an important and potentially irreplaceable part of the OM, the One Machine, the emerging global superorganism. Today humans are still its most intelligent parts. But in the future, when machine intelligence may exceed human intelligence a billionfold, humans may still be the only, or at least the most, conscious parts of the system. Because of the human capacity for consciousness (animals and insects are conscious too), I think we have an important role to play in the emerging superorganism. We are its awareness. Ultimately, we are the ones who watch, feel, and know what it is thinking and doing.

Because humans are the actual witnesses and knowers of what the OM does and thinks, the function of the OM will very likely be to serve and amplify humans, rather than to replace them. It will be a system that is comprised of humans and machines working together, for human benefit, not for machine benefit. This is a very different future outlook than that of people who predict a kind of “Terminator-esque” future in which machines get smart enough to exterminate the human race. It won’t happen that way. Machines will very likely not get that smart for a long time, if ever, because they are not going to be conscious. I think we should be much more afraid of humans exterminating humanity than of machines doing it.

So to get to Kevin Kelly's Level IV, what he calls "An Intelligent Conscious Superorganism," we simply have to include humans in the system. Machines alone are not, and will not ever be, enough to get us there. I don't believe consciousness can be synthesized or that it will suddenly appear in a suitably complex computer program. I think it is a property of the substrate, and computer programs are just too many levels removed from the substrate. Now, it is possible that we might devise a new kind of computer architecture, one which is much more connected to the quantum field. Perhaps in such a system consciousness, like electricity, could be embodied. That's a possibility. It is likely that such a system would be more biological in nature, but that's just a guess. It's an interesting direction for research.

In any case, if we are willing to include humans in the global superorganism — the OM, the One Machine — then we are already at Kevin Kelly's Level IV. If we are not willing to include them, then I don't think we will reach Level IV anytime soon, or perhaps ever.

It is also important to note that consciousness has many levels, just like intelligence. There is basic raw consciousness, which simply perceives the qualia of what takes place. But there are also forms of consciousness that are more powerful — for example, consciousness that is aware of itself, consciousness that is so highly tuned that it has much higher resolution, and consciousness that is aware of the physical substrate and its qualities of being spacelike and empty of any kind of fundamental existence. These are in fact the qualities of the quantum substrate we live in. Interestingly, they are also the qualities of reality that Buddhist masters point out to be the ultimate nature of reality and of the mind (they do not consider reality and mind to be two different things ultimately). Consciousness may or may not be aware of these qualities of consciousness and of reality itself — consciousness can be dull, or low-grade, or simply not awake. The level to which consciousness is aware of the substrate is a way to measure the grade of consciousness taking place. We might call this dimension of consciousness "resolution." The higher the resolution of consciousness, the more acutely aware it is of the actual nature of phenomena, the substrate. At the highest resolution it can directly perceive the space-like, mind-like, quantum nature of what it observes. At the highest level of resolution, there is no perception of duality between observer and observed — consciousness perceives everything to be essentially consciousness appearing in different forms and behaving in a quantum fashion.

Another dimension of consciousness that is important to consider is what we could call "unity." At the lowest level of the unity scale there is no sense of unity, but rather a sense of extreme isolation or individuality. At the highest level of the scale there is a sense of total unification of everything within one field of consciousness. That highest level corresponds to what we could call "omniscience." The Buddhist concept of spiritual enlightenment is essentially consciousness that has evolved to BOTH the highest level of resolution and the highest level of unity.

The global superorganism is already conscious, in my opinion, but it has not achieved very high resolution or unity. This is because most humans, and most human groups and organizations, have only been able to achieve the most basic levels of consciousness themselves. Since humans, and groups of humans, comprise the consciousness of the global superorganism, our individual and collective conscious evolution is directly related to the conscious evolution of the superorganism as a whole. This is why it is important for individuals and groups to work on their own consciousness. Consciousness is "there" as a basic property of the physical substrate, but like mass or energy, it can be channelled, accumulated and shaped. Currently the consciousness that is present in us as individuals, and in groups of us, is at best nascent and underdeveloped.

In our young, dualistic, materialistic, and externally obsessed civilization, we have made very little progress on working with consciousness. Instead we have focused most or all of our energy on working with the more material-seeming aspects of the substrate: space, time and energy. In my opinion a civilization becomes fully mature when it spends equal if not more time on the consciousness dimension of the substrate. That is something we are just beginning to work on, thanks to the strangeness of quantum mechanics breaking our classical physical paradigms and forcing us to admit that consciousness might play a role in our reality.

But there are ways to speed up the evolution of individual and collective consciousness, and in doing so we can advance our civilization as a whole. I have lately been writing and speaking about this in more detail.

On an individual level, one way to rapidly develop our own consciousness is the path of meditation and spirituality — this is the most important and effective. There may also be technological aids, such as augmented reality or sensory augmentation, that can improve how we perceive and what we perceive. In the not too distant future we will probably have the opportunity to dramatically improve the range and resolution of our sense organs using computers or biological means. We may even develop new senses that we cannot imagine yet. In addition, using the Internet, we will be able to be aware of more things at once than ever before. But ultimately, the scope of our individual consciousness has to develop on an internal level in order to truly reach higher levels of resolution and unity. Machine augmentation can help, perhaps, but it is not a substitute for actually increasing the capacity of our consciousness. For example, if we use machines to get access to vastly more data, but our consciousness remains at a relatively low-capacity level, we may not be able to integrate or make use of all that new data anyway.

It is well known that the brain filters out most of the sensory information it receives. Furthermore, when taking a hallucinogenic drug, the filter opens up a little wider, and people become aware of things that were there all along but that they previously filtered out. Widening the scope of consciousness, increasing its resolution and unity, is akin to what happens when taking such a drug, except that it is not a temporary effect and it is more controllable and functional on a day-to-day basis. Many great Tibetan lamas I know seem to have accomplished this: the scope of their consciousness is quite vast, and its resolution is quite precise. They literally can and do see every detail of even the smallest things, and at the same time they have very little or no sense of individuality. The lack of individuality seems to remove certain barriers, which in turn enables them to perceive things that happen beyond the scope of what would normally be considered their own minds — for example, they may be able to perceive the thoughts of others, or see what is happening in other places or times. This seems to take place because they have increased the resolution and unity of their consciousness.

On a collective level, there are also things we can do to make groups, organizations and communities more conscious. In particular, we can build systems that do for groups what the “self construct” does for individuals.

The self is an illusion. And that's good news. If it weren't an illusion we could never see through it, and so, for one thing, spiritual enlightenment would not be possible to achieve. Furthermore, if it weren't an illusion we could never hope to synthesize it for machines, or for large collectives. The fact that "self" is an illusion is something that Buddhists, neuroscientists, and cognitive scientists all seem to agree on. The self is an illusion, a mere mental construct. But it's a very useful one, when applied in the right way. Without some concept of self, we humans would find it difficult to communicate or even navigate down the street. Similarly, without some concept of self, groups, organizations and communities cannot function very productively.

The self construct provides an entity with a model of itself and its environment. This model includes what is taking place "inside" and what is taking place "outside" what is considered to be self, or "me." By creating this artificial boundary, and modelling what is taking place on both sides of it, the self construct makes it possible to measure and plan behavior, and to enable a system to adjust and adapt to "itself" and the external environment. Entities that have a self construct are able to behave far more intelligently than those that do not. For example, consider the difference between the intelligence of a dog and that of a human. Much of this is really a difference in the sophistication of the self-constructs of the two species. Human selves are far more self-aware, introspective, and sophisticated than those of dogs. They are equally conscious, but humans have more developed self-constructs. This applies to simple AI programs as well, and to collective intelligences such as workgroups, enterprises, and online communities. The more sophisticated the self-construct, the smarter the system can be.
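As a purely illustrative sketch, assuming nothing about how real minds work, a self construct can be thought of as an agent that keeps separate models of its internal state and its environment and uses the boundary between them to plan:

```python
# A highly simplified sketch of a "self construct": an agent keeps separate
# models of its internal state and of its environment, and uses the boundary
# between them to evaluate and adjust its own behavior. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class SelfConstruct:
    internal: dict = field(default_factory=dict)   # what is "me" (goals, resources)
    external: dict = field(default_factory=dict)   # what is "not me" (environment)

    def perceive(self, observation: dict) -> None:
        """Fold a new observation of the environment into the external model."""
        self.external.update(observation)

    def plan(self) -> str:
        """Compare internal state against the modeled environment."""
        if self.internal.get("energy", 0) < self.external.get("task_cost", 0):
            return "rest and gather resources"
        return "act on current goal"

me = SelfConstruct(internal={"energy": 3})
me.perceive({"task_cost": 5})
print(me.plan())   # -> "rest and gather resources"
```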

The key to appropriate and effective application of the self-construct is to develop a healthy self, rather than to eliminate the self entirely. Eradication of the self is a form of nihilism that leads to an inability to function in the world, and is not something that Buddhists or neuroscientists advocate. So what is a healthy self? In an individual, a healthy self is a construct that accurately represents past, present and projected future internal and external state, and that is highly self-aware, rational but not overly so, adaptable, respectful of external systems and other beings, and open to learning and changing to fit new situations. The same is true for a healthy collective self. However, most individuals today do not have healthy selves — they have highly deluded, unhealthy self-constructs. This in turn is reflected in the higher-order self-constructs of the groups, organizations and communities we build.

One of the most important things we can work on now is creating systems that provide collectives — groups, organizations and communities — with sophisticated, healthy, virtual selves. These virtual selves provide collectives with a mirror of themselves. Having a mirror enables the members of those systems to see the whole, and how they fit in. Once they can see this, they can begin to adjust their own behavior to fit what the whole is trying to do. This simple mirroring function can catalyze dramatic new levels of self-organization and synchrony in what would otherwise be a totally chaotic "crowd" of individual entities.

In fact, I think that collectives move through three levels of development:

  • Level 1: Crowds. Crowds are collectives in which the individuals are not aware of the whole and in which there is no unified sense of identity or purpose. Nevertheless crowds do intelligent things. Consider, for example, schools of fish or flocks of birds. There is no single leader, yet the individuals, by adapting to what their nearby neighbors are doing, behave collectively as a single entity of sorts (a minimal sketch of this kind of leaderless coordination appears after this list). Crowds are amoebic entities that ooze around in a bloblike fashion. They are not that different from physical models of gases.
  • Level 2: Groups. Groups are the next step up from crowds. Groups have some form of structure, which usually includes a system for command and control. They are more organized. Groups are capable of much more directed and intelligent behaviors. Families, cities, workgroups, sports teams, armies, universities, corporations, and nations are examples of groups. Most groups have intelligences that are roughly similar to those of simple animals. They may have a primitive sense of identity and self, and on that basis they are capable of planning and acting in a more coordinated fashion.
  • Level 3: Meta-Individuals. The highest level of collective intelligence is the meta-individual. This emerges when what was once a crowd of separate individuals evolves to become a new individual in its own right, and it is facilitated by the formation of a sophisticated meta-level self-construct for the collective. This evolutionary leap is called a metasystem transition: the parts join together to form a new higher-order whole that is made of the parts themselves. The new whole resembles the parts, but transcends their abilities. To evolve a collective to the level of being a true individual, it has to have a well-designed nervous system, it has to have a collective brain and mind, and most importantly it has to achieve a high level of collective consciousness. High-level collective consciousness requires a sophisticated collective self construct to serve as a catalyst. Fortunately, this is something we can actually build, because, as asserted previously, the self is an illusion, a construct, and therefore selves can be built, even for large collectives comprised of millions or billions of members.
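Here is the leaderless-coordination sketch referred to above: a stripped-down, boids-style toy model in which each agent only adjusts its heading toward that of its nearby neighbors, yet the crowd as a whole ends up moving together. It is for illustration only and makes no claim about real flocks.

```python
# A toy "crowd" in the Level 1 sense: no leader and no shared plan, just each
# individual aligning with its nearby neighbors.
import random

N, NEIGHBOR_RADIUS, STEPS = 30, 0.3, 50

# Each agent has a position (x, y) and a heading (vx, vy) in the unit square.
agents = [{"x": random.random(), "y": random.random(),
           "vx": random.uniform(-0.01, 0.01), "vy": random.uniform(-0.01, 0.01)}
          for _ in range(N)]

def neighbors(a):
    return [b for b in agents if b is not a
            and (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < NEIGHBOR_RADIUS ** 2]

for _ in range(STEPS):
    for a in agents:
        near = neighbors(a)
        if near:  # steer toward the average heading of nearby agents (alignment)
            a["vx"] = 0.9 * a["vx"] + 0.1 * sum(b["vx"] for b in near) / len(near)
            a["vy"] = 0.9 * a["vy"] + 0.1 * sum(b["vy"] for b in near) / len(near)
        a["x"] = (a["x"] + a["vx"]) % 1.0   # wrap around the unit square
        a["y"] = (a["y"] + a["vy"]) % 1.0

# After enough steps the headings converge: the crowd moves as one "blob"
# even though no individual is in charge.
avg_vx = sum(a["vx"] for a in agents) / N
avg_vy = sum(a["vy"] for a in agents) / N
print(f"average heading: ({avg_vx:.4f}, {avg_vy:.4f})")
```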

The global superorganism has been called the Global Brain for over a century by a stream of forward-looking thinkers. Today we may start calling it the One Machine, or the OM, or something else. But in any event, I think the most important work we can do to make it smarter is to provide it with a more developed and accurate sense of collective self. To do this we might start by working on ways to provide smaller collectives with better selves — for example, groups, teams, enterprises and online communities. Can we provide them with dashboards and systems that catalyze greater collective awareness and self-organization? I really believe this is possible, and I am certain there are technological advances that can support this goal. That is what I'm working on with my own project, Twine.com. But this is just the beginning.

Let's Move Their Market Caps By Several Hundred Million! — My Panel

I’m moderating a panel at the upcoming DEMOfall 2008 conference this year on Where the Web is Going.

I’ve assembled an all-star cast of panelists, including:

Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century

Peter Norvig, Director of Research, Google Inc.

Jon Udell, Evangelist, Microsoft Corporation

Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.

You can read more about it here. I hope you can attend!

I’m hoping that the market caps of some big public companies go up or down by a few hundred million after this panel. Stock brokers will be standing by to take your orders! :^)

Please Vote for Twine! Industry Standard Innovation 100 Awards

Great news. Twine is a finalist in the Industry Standard’s Innovation 100 Awards. Twine / Radar Networks was chosen as a finalist in the community category.

There will be one "winner" in each category, determined by which companies and products receive the most community votes. You may vote for one company/product in each category. Voting will close at midnight Pacific Time on October 3, 2008.

So please vote for Twine!!!

Solving the Landmine and Cluster Bomb Problem

For decades the world has struggled with what to do about unexploded land mines and cluster bombs killing innocent civilians, even years after a conflict has ended. The problem is that a significant percentage of these weapons (10% to 40% in the case of cluster bombs) do not explode when they are deployed, and instead blow up later when they are disturbed by a person or animal. They also create dead zones that cannot be used for other purposes after a conflict, because of the risk of unexploded ordnance.

Various treaties and proposals have been floated to ban these weapons, but they are not going to go away that easily. First of all, leading nations such as the USA, Russia and China (which also lead the production and sale of these weapons) refuse to participate in these treaties, and secondly, even if they did, these weapons would still probably be used by outlaw nations.

While trying to get everyone to agree not to use these weapons is a noble goal, it is not very realistic. The genie is already out of the bottle. Putting it back in is very hard.

Instead, there is a more practical solution to this problem: timed deactivation. The basic idea is to redesign these weapons so that they simply cannot explode after a set period of time unless they are manually reset. A simple way to achieve this is to design them such that a crucial part of the weapon corrodes with exposure to naturally present environmental air or water over time. Alternatively, there could be a mechanical switch or even a battery-powered timer. In any case, after a set period of time (1 month, 6 months, 1 year, or 3 years, for example) the device simply decays and can no longer explode without a replacement part. In the best case, after an even longer period of time, the explosives in the device should also decay and become unusable, even with a replacement part.

Designing these weapons to deactivate safely on their own is a practical measure that should be part of the solution. Nations that refuse to agree not to use such weapons should at least be able to commit to designing them to deactivate automatically in this manner.

Good Article on History of Talks Between Tibet and China

This article sheds some light on the history of attempts to find a resolution between the Dalai Lama and the Chinese government. I found it quite educational. There have in fact been numerous attempts to find a solution, but the process has been frozen in a deadlock for 50 years. The Chinese government has been the principal roadblock: it does not want to engage in high-level talks with the Dalai Lama's government. It would be easy to resolve this if serious, genuine high-level talks were to happen — talks between the Dalai Lama and the Premier of China, for example. Until that happens, this situation will only get worse. It has to be resolved at the highest levels. The Dalai Lama has said he would be happy to engage in such talks. Why is the Chinese government not willing to participate?

A Note to Fans of This Blog — Thanks for your Comments and Emails

To all my readers — and especially to those of you who have commented or sent me emails — I just wanted to say thanks! I don’t always have time to reply, but I always read everything, and I do try to reply to the messages that are most relevant. As you probably understand, I get hundreds of emails a day for my work, plus many comments and emails from readers of this blog so I am somewhat buried in information overload. But I still really appreciate the feedback and hearing from you about what you think and what you are working on too.

Thank you for taking the time to comment!

— Nova

Dice.com IPO Update

I just heard some good news from a friend in the investment business about Dice.com, a company that we acquired and helped to grow when I was running EarthWeb with my co-founders Jack and Murray Hidary. It turns out they are raising more than I thought, and at a higher valuation, in their IPO. It looks like it will be $200M at what is rumored to be around a $1B valuation. Not bad! This is still not confirmed, so do your own research if this matters to you. I'm proud of Dice.com — it's the second company I've helped to build that reached a big IPO.

Moving to a Web OS

John Markoff published an interesting article today in the New York Times about the shift in software and operating systems from the desktop to the Web, in which I am quoted. The article focuses on the rivalry and different styles between Microsoft and Apple’s next-generation projects that attempt to tie desktop operating systems and the Internet together more closely. I have been tracking this trend for a while now — a trend towards the evolution of what I call a "WebOS."

In my view the coming WebOS will not live only on the desktop, rather it will be a web service that lives "in the cloud." Desktops will become views into it, rather than the center of it. The desktop PC era is almost over. We are entering a new era of mobility and plurality — our digital lives will be spread across multiple devices, most of which will be mobile. We will require access to everything, no matter what device we are on.

When a user logs onto any device — be it a laptop or a mobile device — they will connect to their account in the WebOS. The local device will synch with their WebOS account to get their latest desktop layout, their preferences, and any new notifications or changes.
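As a minimal sketch of the kind of sync handshake I have in mind, here is some illustrative code; the class names and fields are hypothetical, since no actual WebOS API exists yet.

```python
# A minimal sketch of a device syncing with a cloud-side WebOS account.
# All class names and fields here are hypothetical, invented for illustration.
from dataclasses import dataclass, field

@dataclass
class WebOSAccount:
    """The cloud-side copy of a user's digital life."""
    layout: dict = field(default_factory=lambda: {"theme": "light", "widgets": ["mail", "calendar"]})
    preferences: dict = field(default_factory=lambda: {"language": "en"})
    notifications: list = field(default_factory=lambda: ["3 new messages"])

@dataclass
class Device:
    """A laptop or phone acting as a view into the WebOS account."""
    name: str
    local_state: dict = field(default_factory=dict)

    def sync(self, account: WebOSAccount) -> None:
        # Pull the latest layout, preferences, and notifications from the cloud.
        self.local_state = {
            "layout": account.layout,
            "preferences": account.preferences,
            "notifications": account.notifications,
        }

phone = Device("my-phone")
phone.sync(WebOSAccount())
print(phone.local_state["notifications"])   # -> ['3 new messages']
```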

End-user access to the WebOS will be primarily through browser-based applications written in scripting languages, or backed by server-side apps written in Java, C# or Ruby, rather than through native desktop apps. Cases where native desktop code may still be needed will include high-end graphics and audio processing, or numerical calculations that require a lot of computation. But for most consumers such high-end needs are rare, except for gaming and multimedia. With the increase in mobile broadband and improvements in user-interface technologies, it will become less necessary to have native desktop code for such experiences — more and more of this will move to the Web. When native computation is needed, it will take place via scripts embedded and run in the local browser to leverage local resources, rather than via software installed and run locally on a permanent basis. Most applications will actually be hybrids, combining local and remote services in a seamless interface.

Once connected, the WebOS will provide users with a single point of access to their data, their relationships, their preferences, and their applications, anywhere, anytime, on any device. It will also begin to unify, or at least integrate, the data and functionality of different online and desktop applications in what will appear to the end-user to be "one place." Even though we may have accounts, data and relationships in many different services around the Web, our WebOS will provide us with a unified, centralized way to access this information. It will reduce the fragmentation in our digital lives and help to improve our productivity.

Imagine being able to go to one place on the Web to access all your email, documents, photos, videos, contacts and social relationships, RSS, data records, bookmarks, notes, and any other kind of knowledge or information. Imagine that in this place you could also access all your "applications" — which themselves would be modular widgets or bits of functionality provided by various web services and app developers around the Web. Imagine that in this place it would be easy to create new data types, populate them with data, and share them with others. Imagine that it would be just as easy to create new applications that could use that data, and to share those too.

Think of the WebOS as the ultimate personal mashup. It would not matter anymore where information was actually stored — it could live in the cloud on the Net so it was available 24/7, and it could also be cached onto local devices like phones and laptops so that it was available locally or offline when needed. You could start to mix and mash your data in all sorts of new ways — you could for example see the connections between different kinds of things, or you could generate reports that might show for example, photos and videos by people you work with, or blog posts by your friends, or files related to meetings you are scheduled for, etc.

Because all information and application functionality would start to be integrated on a meta-level in the WebOS, new efficiencies in search, navigation and discovery would become possible. But to accomplish this there would need to be an easier and more flexible way to represent the data itself — a more open, extensible, remixable data model. Enter RDF, SPARQL, OWL and the Semantic Web. I believe these technologies provide a data framework that can help to accomplish this vision.
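To give a flavor of how such queries might look, here is an illustrative example using the Python rdflib library: a tiny RDF graph mixes facts that might come from an address book and a photo service, and a SPARQL query pulls out "photos taken by people I work with." The ex: vocabulary is made up for the example.

```python
# An illustrative example of the kind of cross-source query a WebOS built on
# RDF and SPARQL could answer ("photos by people I work with"). Uses the
# rdflib library; the ex: vocabulary is invented for the example.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Facts that might come from different services: an address book, a photo site.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.me, EX.worksWith, EX.alice))
g.add((EX.photo1, RDF.type, EX.Photo))
g.add((EX.photo1, EX.takenBy, EX.alice))
g.add((EX.photo1, EX.title, Literal("Team offsite")))

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?photo ?title WHERE {
        ex:me ex:worksWith ?person .
        ?photo ex:takenBy ?person ;
               ex:title ?title .
    }
""")

for photo, title in results:
    print(photo, title)   # -> http://example.org/photo1 Team offsite
```

The point is that once data from different services shares a common, extensible model, queries like this can span sources that were never designed to work together.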

This vision of a WebOS is something I have been wishing for, and working towards, for a long time. My own startup, Radar Networks, is actually building something like this, based completely on RDF and the Semantic Web. Stay tuned! We plan to go beta in the fall. If you are interested, visit our website and sign up for our mailing list to be invited for early-access.