Eliminating the Need for Search – Help Engines

We are so focused on how to improve present-day search engines. But that is a kind of mental myopia. In fact, a more interesting and fruitful question is why do people search at all? What are they trying to accomplish? And is there a better way to help them accomplish that than search?

Instead of finding more ways to get people to search, or ways to make existing search experiences better, I am starting to think about how to reduce or eliminate the need to search — by replacing it with something better.

People don’t search because they like to. They search because there is something else they are trying to accomplish. Search is really just an inconvenience — a means to an end we have to struggle through in order to get to what we actually want. Search is “in the way” between intention and action. It’s an intermediary stepping stone. And perhaps there’s a better way to get where we want to go than searching.

Searching is a boring and menial activity. Think about it. We have to cleverly invent and try pseudo-natural-language queries that don’t really express what we mean. We try many different queries until we get results that approximate what we’re looking for. We click on a bunch of results and check them out. Then we search some more. And then some more clicking. Then more searching. And we never know whether we’ve been comprehensive, or have even entered the best query, or looked at everything we should have looked at to be thorough. It’s extremely hit or miss. And it takes up a lot of time and energy. There must be a better way! And there is.

Instead of making search more bloated and more of a focus, the goal should really be to get search out of the way: to minimize the need to search, and to make any search that is necessary as productive as possible. The goal should be to get consumers to what they really want with the least amount of searching and the least amount of effort, with the greatest amount of confidence that the results are accurate and comprehensive. To satisfy these constraints one must NOT simply build a slightly better search engine!

Instead, I think there’s something else we need to be building entirely. I don’t know what to call it yet. It’s not a search engine. So what is it?

Bing’s term “decision engine” is pretty good, pretty close to it. But what they’ve actually released so far still looks and feels a lot like a search engine. But at least it’s pushing the envelope beyond what Google has done with search. And this is good for competition and for consumers. Bing is heading in the right direction by leveraging natural language, semantics, and structured data. But there’s still a long way to go to really move the needle significantly beyond Google to be able to win dominant market share.

For the last decade the search wars have been fought in battles around index size, keyword search relevancy, and ad targeting — but I think the new battle is going to be fought around semantic understanding, intelligent answers, personal assistance, and commerce affiliate fees. What’s coming next after search engines are things that function more like assistants and brokers.

Wolfram Alpha is an example of one approach to this trend. The folks at Wolfram Alpha call their system a “computational knowledge engine” because they use a knowledge base to compute and synthesize answers to various questions. It does a lot of the heavy lifting for you, going through various data, computing and comparing, and then synthesizing a concise answer.

There are also other approaches to getting or generating answers for people — for example, by doing what Aardvark does: referring people to experts who can answer their questions or help them. Expert referral, or expertise search, helps reduce the need for networking and makes networking more efficient. It also reduces the need for searching online — instead of searching for an answer, just ask an expert.

There’s also the semantic search approach — perhaps exemplified by my own Twine “T2” project — which basically aims to improve the precision of search by helping you get to the right results faster, with less irrelevant noise. Other consumer-facing semantic search projects of interest are Goby and Powerset (now part of Bing).

Still another approach is that of Siri, which is making an intelligent “task completion assistant” that helps you search for and accomplish things like “book a romantic dinner and a movie tonight.” In some ways Siri is a “do engine” not a “search engine.” Siri uses artificial intelligence to help you do things more productively. This is quite needed and will potentially be quite useful, especially on mobile devices.

All of these approaches and projects are promising. But I think the next frontier — the thing beyond search that removes the need for it — is still a bit different: it is going to combine elements of all of the above approaches with something new.

For lack of a better term, I call this a “help engine.” A help engine proactively helps you with various kinds of needs, decisions, tasks, or goals you want to accomplish. And it does this by helping with an increasingly common and vexing problem: choice overload.

The biggest problem is that we have too many choices, and the number of choices keeps increasing exponentially. The Web and globalization have increased the number of choices that are within range for all of us, but the result has been overload. To make a good, well-researched, confident choice now requires a lot of investigation, comparisons, and thinking. It’s just becoming too much work.

For example, choosing a location for an event, or planning a trip itinerary, or choosing what medicine to take, deciding what product to buy, who to hire, what company to work for, what stock to invest in, what website to read about some topic. These kinds of activities require a lot of research, evaluations of choices, comparisons, testing, and thinking. A lot of clicking. And they also happen to be some of the most monetizable activities for search engines. Existing search engines like Google that make money from getting you to click on their pages as much as possible have no financial incentive to solve this problem — if they actually worked so well that consumers clicked less they would make less money.

I think the solution to what’s after search — the “next Google” so to speak — will come from outside the traditional search engine companies. Or at least it will be an upstart project within one of them that surprises everyone and doesn’t come from the main search teams. It’s such a new direction from traditional search that it will require some real thinking outside the box.

I’ve been thinking about this a lot over the last month or two. It’s fascinating. What if there was a better way to help consumers with the activities they are trying to accomplish than search? If it existed it could actually replace search. It’s a Google-sized opportunity, and one which I don’t think Google is going to solve.

Search engines cause choice overload. That wasn’t the goal, but it is what has happened over time due to the growth of the Web and the explosion of choices that are visible, available, and accessible to us via the Web.

What we need now is not a search engine — it’s something that solves the problem created by search engines. For this reason, the next Google probably won’t be Google or a search engine at all.

I’m not advocating for artificial intelligence or anything that tries to replicate human reasoning, human understanding, or human knowledge. I’m actually thinking about something simpler. I think that it’s possible to use computers to provide consumers with extremely good, automated decision-support over the Web and the kinds of activities they engage in. Search engines are almost the most primitive form of decision support imaginable. I think we can do a lot better. And we have to.

People use search engines as a form of decision-support, because they don’t have a better alternative. And there are many places where decision support and help are needed: Shopping, travel, health, careers, personal finance, home improvement, and even across entertainment and lifestyle categories.

What if there was a way to provide this kind of personal decision-support — this kind of help — with an entirely different user experience than search engines provide today? I think there is. And I’ve got some specific thoughts about this, but it’s too early to explain them; they’re still forming.

I keep finding myself thinking about this topic, and arriving at big insights in the process. All of the different things I’ve worked on in the past seem to connect to this idea in interesting ways. Perhaps it’s going to be one of the main themes I’ll be working on and thinking about for this coming decade.

Sneak Peek – Siri — Interview with Tom Gruber

Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff

In Part One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:

Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?

Tom Gruber: A virtual personal assistant is a software system that

  • Helps the user find or do something (focus on tasks, rather than information)
  • Understands the user’s intent (interpreting language) and context (location, schedule, history)
  • Works on the user’s behalf, orchestrating multiple services and information sources to help complete the task

In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don’t do things for me – I have to use them as tools to do something, and I have to adapt to their ways of taking input.
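To make those three properties concrete, here is a toy Python sketch. Everything in it is invented for illustration (the rules, the service stub, the names); it is not Siri’s actual code, just the shape of the idea: interpret intent, fold in context, and act on the user’s behalf.

```python
# Toy sketch of a virtual assistant's three properties (entirely hypothetical).

def interpret(utterance):
    """Map a natural-language request to a structured intent (invented rules)."""
    text = utterance.lower()
    if "eat" in text or "restaurant" in text or "dinner" in text:
        return {"task": "find_restaurant"}
    if "movie" in text:
        return {"task": "find_movie"}
    return {"task": "unknown"}

def restaurant_service(intent):
    """Stand-in for a real directory/reservation API."""
    return {"results": [f"Trattoria near {intent['near']}"]}

def assist(utterance, context):
    """Combine intent with context and dispatch to a (stubbed) service."""
    intent = interpret(utterance)
    intent["near"] = context.get("location")   # context, not just keywords
    if intent["task"] == "find_restaurant":
        return restaurant_service(intent)      # works on the user's behalf
    return {"error": "I can't help with that yet"}

print(assist("book a romantic dinner tonight", {"location": "SoMa"}))
# → {'results': ['Trattoria near SoMa']}
```

The point of the caricature: the user states a goal in their own words, and the software (rather than the user) adapts, interprets, and orchestrates.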

Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?

Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time.  Apple’s famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT’s Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book “The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us”.  These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results.  These are hallmarks of the Siri assistant.  Some of the elements of these visions
are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator.  Or self-awareness a la Singularity.  But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.

Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)

Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.

Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant.  Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.

Nova Spivack: What are the primary aspects of Siri that you would say are “novel”?

Tom Gruber: The demands of the consumer internet focus — instant usability and robust interaction with the evolving web — have driven us to come up with some new innovations:

  • A conversational interface that combines the best of speech and semantic language understanding with an interactive dialog that helps guide
    people toward saying what they want to do and getting it done. The
    conversational interface allows for much more interactivity than one-shot search-style interfaces, which aids usability and improves intent understanding. If Siri didn’t quite hear what you said, or isn’t sure what you meant, it can ask for clarifying information. For example, it can prompt on ambiguity: did you mean pizza restaurants in Chicago or Chicago-style pizza places near you? It can also make reasonable guesses based on context. Walking around with the phone at lunchtime, if the speech interpretation comes back with something garbled about food, you probably meant “places to eat near my current location”. If this assumption isn’t right, it is easy to correct in a conversation.
  • Semantic auto-complete – a combination of the familiar “autocomplete” interface of search boxes with a semantic and linguistic model of what might be worth saying. The so-called “semantic completion” makes it possible to rapidly state complex requests (Italian restaurants in the SOMA neighborhood of San Francisco that have tables available tonight) with just a few clicks. It’s sort of like the power of faceted search a la Kayak, but packaged in a clever command line style interface that works in small form factor and low bandwidth environments.
  • Service delegation – Siri is particularly deep in technology for operationalizing a user’s intent into computational form, dispatching to multiple, heterogeneous services, gathering and integrating results, and presenting them back to the user as a set of solutions to their request.  In a restaurant selection task, for instance, Siri combines information from many different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and the user’s own favorites) to show a set of candidates that meet the intent expressed in the user’s natural language request.
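The “semantic completion” idea in particular lends itself to a small illustration. Here is a toy Python sketch (my own, with invented templates and vocabulary, not Siri’s implementation): completions are drawn from a model of requests worth saying, and a semantic slot like a neighborhood expands into real values, rather than matching raw string prefixes.

```python
# Toy "semantic completion": suggest next tokens from a model of sayable
# requests. Templates and vocabulary are invented for illustration.

TEMPLATES = [
    ["italian", "restaurants", "in", "<neighborhood>"],
    ["italian", "restaurants", "near", "me"],
    ["movies", "playing", "tonight"],
]
NEIGHBORHOODS = ["soma", "mission", "north beach"]

def complete(tokens):
    """Suggest next tokens allowed by the request model after the typed tokens."""
    suggestions = set()
    for tpl in TEMPLATES:
        if tpl[:len(tokens)] == tokens and len(tpl) > len(tokens):
            nxt = tpl[len(tokens)]
            if nxt == "<neighborhood>":
                suggestions.update(NEIGHBORHOODS)  # expand the semantic slot
            else:
                suggestions.add(nxt)
    return sorted(suggestions)

print(complete(["italian", "restaurants"]))  # → ['in', 'near']
```

A few keystrokes thus build up a complex, fully structured request, which is what makes the command-line-style interface workable on a small screen.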

Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?

Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:

  • Task focus. Siri is very focused on a bounded set of specific human tasks, like finding something to do, going out with friends, and getting around town.  This task focus allows it to have a very rich model of its domain of competence, which makes everything more tractable, from language understanding to reasoning to service invocation and results presentation.
  • Structured data focus. The kinds of tasks that Siri is particularly good at involve semistructured data, usually on tasks involving multiple criteria and drawing from multiple sources.  For example, to help find a place to eat, user preferences for cuisine, price range, location, or even specific food items come into play.  Combining results from multiple sources requires
    reasoning about domain entity identity and the relative capabilities of different information providers.  These are hard problems of semantic
    information processing and integration that are difficult but feasible
    today using the latest AI technologies.
  • Architecture focus. Siri is built from deep experience in integrating multiple advanced technologies into a platform designed expressly for virtual assistants. Siri co-founder Adam Cheyer was chief architect of the CALO project, and has applied a career of experience to design the platform of the Siri product. Leading the CALO project taught him a lot about what works and doesn’t when applying AI to build a virtual assistant. Adam and I also have rather unique experience in combining AI with intelligent interfaces and web-scale knowledge integration. The result is a “pure play” dedicated architecture for virtual assistants, integrating all the components of intent understanding, service delegation, and dialog flow management. We have avoided the need to solve general AI problems by concentrating on only what is needed for a virtual assistant, and have chosen to begin with a
    finite set of vertical domains serving mobile use cases.

Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?

Tom Gruber: Rather than trying to be like a search engine to all the world’s information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface.  The smaller the form factor, the more mobile the context, the more limited the bandwidth, the more important it is that the interface make intelligent use of the user’s attention and the resources at hand.  In other words, “smaller needs to be smarter.”  And the benefits of being offered just the right level of detail or being prompted with just the right questions can make the difference between task completion and failure.  When you are on the go, you just don’t have time to wade through pages of links and disjoint interfaces, many of which are not suitable for mobile at all.

Nova Spivack: What language and platform is Siri written in?

Tom Gruber: Java, JavaScript, and Objective-C (for the iPhone).

Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?

Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards.  A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier.  For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic
Web endpoint, and that makes it easy to deal with.  The more the API declares its data model, the more automated we can make our coupling to it.

Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?

Tom Gruber: Siri’s knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models.  As much as possible we represent things declaratively (i.e., as data in models, not lines of code).  This is a tried and true best practice for complex AI systems.  This makes the whole system more robust and scalable, and the development process more agile.  It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
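The declarative-representation point is worth a tiny illustration. In the sketch below (the model and names are mine, not Siri’s internals), the domain knowledge lives in plain data rather than hard-coded branches, so generic functions can inspect the model itself to compare entities and find generalizations:

```python
# Toy declarative domain model: knowledge as data, not code. The entities,
# properties, and hierarchy here are invented for illustration.

DOMAIN_MODEL = {
    "restaurant": {"properties": ["cuisine", "price", "location"],
                   "is_a": "local_business"},
    "cafe":       {"properties": ["price", "location"],
                   "is_a": "local_business"},
}

def shared_generalization(a, b, model=DOMAIN_MODEL):
    """The system can reason over its own model, e.g. find a common parent."""
    if model[a]["is_a"] == model[b]["is_a"]:
        return model[a]["is_a"]
    return None

def common_properties(a, b, model=DOMAIN_MODEL):
    """Properties two entity types share, by inspecting the model as data."""
    return sorted(set(model[a]["properties"]) & set(model[b]["properties"]))

print(shared_generalization("restaurant", "cafe"))  # → local_business
print(common_properties("restaurant", "cafe"))      # → ['location', 'price']
```

Because the knowledge is data, extending the system means adding model entries, not rewriting logic, which is the robustness and agility benefit Gruber describes.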


Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs available, sharing linked data as RDF, etc.)?

Tom Gruber: Siri isn’t a source of data, so it doesn’t expose data using Semantic Web standards.  In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop – an intelligent interface that knows about user needs
and sources of information to meet those needs, and intermediates.  The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.).  The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data.  For example, if a virtual assistant wants to schedule a dinner it needs more than the information
about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies.  That is the original purpose of ontologies-as-specification that I promoted in the
1990s – to help specify how to interact with these agents via knowledge-level APIs.

Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication.  As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.

All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text.  So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.

Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?

Tom Gruber: Siri’s top-line measure of success is task completion (not relevance).  A subtask is intent recognition, and a subtask of that is NLP.  Speech is another element, which couples to NLP and adds its own issues.  In this context, Siri’s NLP is “pretty darn good” — if the user is talking about something in Siri’s domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single-finger typing, and bad habits from too much keywordese.  All NLP is tuned for some class of natural language, and Siri’s is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don’t know how it would compare to the standard message and news corpora used by the NLP research community.


Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?

Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.

Nova Spivack: Will Siri be able to talk back to users at any point?

Tom Gruber: It could use speech synthesis for output, in the appropriate contexts.  I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone.  For the blind, embedded appliances, and other applications it would make sense to give Siri voice output.

Nova Spivack: Can you give me more examples of how the NLP in Siri works?

Tom Gruber: Sure, here’s an example, published in Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click the link to view the table.)

Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?

Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time.  As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live.  Siri doesn’t forget things from request to request, as stateless systems like search engines do.  It always considers the user model along with the domain and task models when coming up with results.  The evolution in learning comes as users build a history with Siri, which gives it a chance to make some generalizations about preferences.  There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.

Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?

Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes.  Siri knows about the data because we (humans) explicitly model what is in those sources.  With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request.  For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
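One way to picture that run-time reasoning over declared source capabilities: each service advertises what it provides, and a dispatcher matches a request’s needs against those declarations, routing real-time needs only to real-time sources. A toy Python sketch follows; the sources, fields, and names are invented, not Siri’s actual model.

```python
# Toy capability-based source selection. Each source declares what it
# provides; the dispatcher picks sources by matching declared capabilities
# against a request's needs. All names are hypothetical.

SOURCES = [
    {"name": "ReviewSite",   "provides": {"reviews", "ratings"}, "realtime": False},
    {"name": "ReserveAPI",   "provides": {"availability"},       "realtime": True},
    {"name": "GeoDirectory", "provides": {"location", "hours"},  "realtime": False},
]

def pick_sources(needs, require_realtime=frozenset()):
    """Choose every source whose declared capabilities serve part of the request,
    skipping non-realtime sources for needs that demand live data."""
    chosen = []
    for src in SOURCES:
        useful = src["provides"] & needs
        if useful and (not (useful & require_realtime) or src["realtime"]):
            chosen.append(src["name"])
    return chosen

# "A table tonight at a well-reviewed place nearby":
print(pick_sources({"reviews", "availability", "location"},
                   require_realtime={"availability"}))
# → ['ReviewSite', 'ReserveAPI', 'GeoDirectory']
```

With declarations like these in data, adding a new provider is a model change rather than a code change, which matches the declarative approach described earlier in the interview.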


Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.

Tom Gruber: Thank you, Nova, it’s a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It’s easy to project intelligence onto an assistant, but Siri isn’t going to pass the Turing Test. It’s just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.

Video: My Talk on The Future of Libraries — "Library 3.0"

If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. The event centered on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have about 500 very vocal library directors in the audience, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!

Interest Networks are at a Tipping Point

UPDATE: There’s already a lot of good discussion going on around this post in my public twine.

I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.

In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.

At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem — they deliver substance.

So, what is an interest network?

In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.

Twine, for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.

I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more, important relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.

This is where interest networks come in. It’s still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.

We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another.

What all of these interest networks have in common is some sort of bottom-up, user-driven crawl of the Web, which is how I’ve described Twine when we get the question of how we propose to index the entire Web. (The answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead.)

Most interest networks exhibit the following characteristics as well:

  • They have some sort of bookmarking/submission/markup function to store and map data (often using existing metaphors, even if what’s under the hood is new)
  • They also have some sort of social sharing function to provide the network benefit (this isn’t exclusive to interest networks, obviously, but it is characteristic)
  • And in most cases, interest networks look to add some sort of “smarts” or “recommendations” capability to the mix (that is, you get more out than you put in)

This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.

To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.

At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.

The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.

Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.

Six months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.

I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts — Carla, Jeremiah, and others, are you listening?

Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.

Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”

Now that anyone can join, it will be fun and gratifying to watch Twine grow.

Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.

Stay tuned!

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

New Video: Leading Minds from Google, Yahoo, and Microsoft talk about their Visions for Future of The Web

Video from my panel at DEMO Fall ’08 on the Future of the Web is now available.

I moderated the panel, and our panelists were:

Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century

Peter Norvig, Director of Research, Google Inc.

Jon Udell, Evangelist, Microsoft Corporation

Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.

The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.

Many new and revealing insights were provided by our excellent panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and both covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft’s longer-term views as well.

Enjoy!!!

The Future of the Desktop

This is an older version of this article. The most recent version is located here:

http://www.readwriteweb.com/archives/future_of_the_desktop.php

—————

I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about and some answers I’ve come up with so far.

(Author’s Note: This is a raw first draft of what I think it will be like. Please forgive any typos — I am still working on this and editing it…)

What Will Happen to the Desktop?

As we enter the third decade of the Web we are seeing an increasing shift from local desktop applications towards Web-hosted software-as-a-service (SaaS). The full range of standard desktop office tools (word processors, spreadsheets, presentation tools, databases, project management, drawing tools, and more) can now be accessed as Web-hosted apps within the browser. The same is true for an increasing range of enterprise applications. This process seems to be accelerating.

As more kinds of applications become available in Web-based form, the Web browser is becoming the primary framework in which end-users work and interact. But what will happen to the desktop? Will it too eventually become a Web-hosted application? Will the Web browser swallow up the desktop? Where is the desktop headed?

Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?

No. There have already been several attempts at doing this — and they never catch on. People don’t want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.

Partly this is due to the difference in user experience between using files and folders on a local machine and doing that in “simulated” fashion via some Flash-based or HTML-based imitation of a desktop. Imitation desktops to date have been clunky and slow copies of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it. The desktop of the future — what some have called “the Webtop” — has yet to be invented.

It’s going to be a hosted web service

Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there will have to be some kind of interface that we consider to be our personal “home” and “workspace” — but ultimately it will have to be a unified space that all our devices connect to and share. This requires that it be a hosted online service.

Currently we have different information spaces on different devices (laptop, mobile device, PC). These will merge. Native local clients could be created for various devices, but ultimately the simplest and therefore most likely choice is to just use the browser as the client. This coming “Webtop” will provide an interface to your local devices, applications and information, as well as to your online life and information.

Today we think of our Web browser running inside our desktop as an application. But actually it will be the other way around in the future: our desktop will run inside our browser as an application.

Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it either as a browser add-in or as a web-based application.

The Web 3.0 desktop is going to be completely merged with the Web — it is going to be part of the Web. In fact there may eventually be no distinction between the desktop and the Web anymore.

The focus shifts from information to attention

As our digital lives shift from being focused on the old-fashioned desktop to the Web environment, we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (feeds, lifestreams, microblogs, timelines, etc.).

Instead of being just a directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing in and out of the user’s environment. The interface will be tuned to help the user understand what the trends are, rather than just on how things are organized.

The focus will be on helping the user to manage their attention rather than just their information. This is a leap to the meta-level: A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).

Users are going to shift from acting as librarians to acting as daytraders.

Our digital roles are already shifting from acting as librarians to becoming more like daytraders. In the PC era we were all focused on trying to manage the stuff on our computers — in other words, we were acting as librarians. But this is going to shift. Librarians organize stuff, but daytraders are focused on discovering and keeping track of trends. It’s a very different focus and activity, and it’s what we are all moving towards.

We are already spending more of our time keeping up with change and detecting trends, than on organizing information. In the coming decade the shelf-life of information is going to become vanishingly short and the focus will shift from storage and recall to real-time filtering, trend detection and prediction.
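As a toy illustration of that shift, a "daytrader" interface might rank topics by recency-weighted activity rather than raw counts, so that what is trending now outranks what was merely popular once. The function and data below are a hypothetical sketch, not a description of any actual product:

```python
import time
from collections import defaultdict

def trending_tags(events, now=None, half_life=3600.0):
    """Rank tags by recency-weighted activity rather than raw counts.

    events: iterable of (timestamp, tag) pairs; half_life in seconds.
    A librarian counts everything equally; a daytrader weights the recent.
    """
    now = time.time() if now is None else now
    scores = defaultdict(float)
    for ts, tag in events:
        age = now - ts
        scores[tag] += 0.5 ** (age / half_life)  # exponential time decay
    return sorted(scores, key=scores.get, reverse=True)

# "webtop" has more total mentions, but "semantics" is trending right now.
events = [(1000, "webtop"), (5000, "webtop"), (5100, "semantics"),
          (5200, "semantics"), (5300, "semantics")]
print(trending_tags(events, now=5400, half_life=600))  # ['semantics', 'webtop']
```

The half-life parameter is the dial between librarian and daytrader: a long half-life approximates an archive, a short one approximates a ticker.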

The Webtop will be more social and will leverage and integrate collective intelligence

The Webtop is going to be more socially oriented than desktops of today — it will have built-in messaging and social networking, as well as social-media sharing, collaborative filtering, discussions, and other community features.

The social dimension of our lives is becoming perhaps our most important source of information. We get information via email from friends, family and colleagues. We get information via social networks and social media sharing services. We co-create information with others in communities.

The social dimension is also starting to play a more important role in our information management and discovery activities. Instead of those activities remaining as solitary, they are becoming more communal. For example many social bookmarking and social news sites use community sentiment and collaborative filtering to help to highlight what is most interesting, useful or important.

It’s going to have powerful semantic search and social search capabilities built-in

The Webtop is going to have more powerful search built-in. This search will combine both social and semantic search features. Users will be able to search their information and rank it by social sentiment (for example, “find documents about x and rank them by how many of my friends liked them.”)

Semantic search will enable highly granular search and navigation of information along a potentially open-ended range of properties and relationships.

You will be able to search in a highly structured way — for example, for products you once bookmarked that have a price of $10.95 and are on sale this week, or for documents you read that were authored by Sue and related to project X in the last month.

The semantics of the future desktop will be open-ended. That is to say that users as well as other application and information providers will be able to extend it with custom schemas, new data types, and custom fields to any piece of information.
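Combining those two ideas, open-ended fields plus social ranking, can be sketched in a few lines. The records, field names, and friend sets below are invented for illustration; a real Webtop would presumably use an open-standard graph store rather than Python dicts:

```python
# Toy records with open-ended, extensible fields: each item carries whatever
# properties its schema defines, and search filters on any of them.
items = [
    {"type": "product", "title": "Headphones", "price": 10.95,
     "on_sale": True, "liked_by": {"al", "bo", "cy"}},
    {"type": "product", "title": "Lamp", "price": 10.95,
     "on_sale": False, "liked_by": {"al"}},
    {"type": "document", "title": "Project X notes", "author": "Sue",
     "project": "X", "liked_by": {"bo"}},
]

def search(items, friends, **criteria):
    """Semantic filter on arbitrary properties, ranked by friend sentiment."""
    hits = [i for i in items
            if all(i.get(k) == v for k, v in criteria.items())]
    return sorted(hits, key=lambda i: len(i["liked_by"] & friends),
                  reverse=True)

# "Products I bookmarked at $10.95 that are on sale, ranked by my friends."
results = search(items, friends={"al", "bo"},
                 type="product", price=10.95, on_sale=True)
print([r["title"] for r in results])
```

Because the criteria are just property names, adding a custom field to an item automatically makes it searchable, which is the open-ended quality described above.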

Interactive shared spaces instead of folders

Forget about shared folders — that is an outmoded paradigm. Instead, the new metaphor will be interactive shared spaces.

The need for shared community space is currently being provided for online by forums, blogs, social network profile pages, wikis, and new community sites. But as we move into Web 3.0 these will be replaced by something that combines their best features into one. These next-generation shared spaces will be like blogs, wikis, communities, social networks, databases, workspaces and search engines in one.

Any group of two or more individuals will be able to participate in a shared space that connects their desktops for a particular purpose. These new shared spaces will not only provide richer semantics in the underlying data, social network, and search, but they will also enable groups to seamlessly and collectively add, organize, track, manage, discuss, distribute, and search for information of mutual interest.

The personal cloud

The future desktop will function like a “personal cloud” for users. It will connect all their identities, data, relationships, services and activities in one virtual integrated space. All incoming and outgoing activity will flow through this space. All applications and services that a user makes use of will connect to it.

The personal cloud may not have a center, but rather may be comprised of many separate sub-spaces, federated around the Web and hosted by different service-providers. Yet from an end-user perspective it will function as a seamlessly integrated service. Users will be able to see and navigate all their information and applications, as if they were in one connected space, regardless of where they are actually hosted. Users will be able to search their personal cloud from any point within it.

Open data, linked data and open-standards based semantics

The underlying data in the future desktop, and in all associated services it connects, will be represented using open-standard data formats. Not only will the data be open, but the semantics of the data — the schema — will also be defined in an open way. The emerging Semantic Web provides a good infrastructure for enabling this to happen.

The value of open linked-data and open semantics is that data will not be held prisoner anywhere and can easily be integrated with other data.

Users will be able to seamlessly move and integrate their data, or parts of their data, in different services. This means that your Webtop might even be portable to a different competing Webtop provider someday. If and when that becomes possible, how will Webtop providers compete to add value?

It’s going to be smart

One of the most important aspects of the coming desktop is that it’s going to be smart. It’s going to learn and help users to be more productive. Artificial intelligence is one of the key ways that competing Webtop providers will differentiate their offerings.

As you use it, it’s going to learn about your interests, relationships, current activities, information and preferences. It will adaptively self-organize to help you focus your attention on what is most important to whatever context you are in.

When you are reading something while on a trip to Milan, it may organize itself to be more contextually relevant to that time and place. When you later return home to San Francisco, it will automatically adapt and shift to your home context. When you do a lot of searches about a certain product, it will recognize that your context and intent have to do with that product and will adapt to help you with that activity for a while, until your behavior changes.

Your desktop will actually be a semantic knowledge base on the back-end. It will encode a rich semantic graph of your information, relationships, interests, behavior and preferences. You will be able to permit other applications to access part or all of your graph to datamine it and provide you with value-added views and even automated intelligent assistance.

For example, you might allow an agent that cross-links things to see all your data: it would go and add cross links to relevant things onto all the things you have created or collected. Another agent that makes personalized buying recommendations might only get to see your shopping history across all shopping sites you use.
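A minimal sketch of that permission model: the personal graph is a set of triples, and each agent is granted visibility over only certain predicates. The agent names, predicates, and data here are all hypothetical:

```python
# Sketch: agents see only the slice of the user's semantic graph that the
# user has permitted. Triples are (subject, predicate, object).
graph = [
    ("me", "bought", "headphones"),
    ("me", "bookmarked", "article-42"),
    ("me", "friend_of", "sue"),
]

# Per-agent grants over predicates: the shopping agent sees only purchase
# history, while the cross-linking agent is trusted with the whole graph.
permissions = {
    "shopping-agent": {"bought"},
    "crosslink-agent": {"bought", "bookmarked", "friend_of"},
}

def visible_graph(agent):
    """Return the subset of the graph this agent is allowed to data-mine."""
    allowed = permissions.get(agent, set())
    return [t for t in graph if t[1] in allowed]

print(visible_graph("shopping-agent"))  # only the purchase triple
```

The key design point is that permissions attach to the graph, not to the agents, so an unknown agent sees nothing by default.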

Your desktop may also function as a simple personal assistant at times. You will be able to converse with your desktop eventually — through a conversational agent interface. While on the road you will be able to email or SMS in questions to it and get back immediate intelligent answers. You will even be able to do this via a voice interface.

For example, you might ask, “where is my next meeting?” or “what Japanese restaurants do I like in LA?” or “What is Sue Smith’s phone number?” and you would get back answers. You could also command it to do things for you — like reminding you to do something, or helping you keep track of an interest, or monitoring for something and alerting you when it happens.

Because your future desktop will connect all the relationships in your digital life — relationships connecting people, information, behavior, preferences and applications — it will be the ultimate place to learn about your interests and preferences.

Federated, open policies and permissions

This rich graph of meta-data that comprises your future desktop will enable the next generation of smart services to learn about you and help you in an incredibly personalized manner. It will also, of course, be rife with potential for abuse, and privacy will be a major concern.

One of the biggest enabling technologies that will be necessary is a federated model for sharing meta-data about policies and permissions on data. Information that is considered to be personal and private on Web site X should be recognized and treated as such by other applications and websites you choose to share that information with. This will require a way of sharing meta-data about your policies and permissions between the different accounts and applications you use.
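One way to picture such portable policy metadata: the originating site attaches a small policy record to the data, and any cooperating service evaluates that record before displaying the field. The schema below (field names, visibility levels) is invented purely to illustrate the idea:

```python
# Hypothetical portable policy record attached to a piece of personal data.
# A receiving service honors the originating site's visibility setting.
policy = {
    "field": "home_address",
    "origin": "site-x.example",
    "visibility": "private",     # one of: private | friends | public
}

def may_display(policy, viewer_relationship):
    """A cooperating service applies the originating site's policy."""
    if policy["visibility"] == "public":
        return True
    if policy["visibility"] == "friends":
        return viewer_relationship == "friend"
    return False  # private: never shown to other viewers

print(may_display(policy, "friend"))  # False: the field is private
```

The hard problem, of course, is not evaluating such a record but getting every service to agree on its vocabulary, which is exactly why a universally accepted standard is the missing enabler.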

The Semantic Web provides a good infrastructure for building and deploying a decentralized framework for policy and privacy integration, but such a framework has yet to be developed, let alone adopted. For the full vision of the future desktop to emerge, a universally accepted standard for exchanging policy and permission data will be a necessary enabling technology.

Who is most likely to own the future desktop?

When I think about what the future desktop is going to look like it seems to be a convergence of several different kinds of services that we currently view as separate.

It will be hosted in the cloud and accessible across all devices. It will place more emphasis on social interaction, social filtering, and collective intelligence. It will provide a very powerful and extensible data model with support for both unstructured and arbitrarily structured information. It will enable almost peer-to-peer-like search federation, yet still have a unified home page and user experience. It will be smart and personalized. It will be highly decentralized, yet will manage identity, policies and permissions in an integrated, cohesive and transparent manner across services.

By cobbling together a number of different services that exist today you could build something like this in a decentralized fashion. Is that how the desktop of the future will come about? Or will it be a new application provided by one player with a lot of centralized market power? Or could an upstart suddenly emerge with the key enabling technologies to make this possible? It’s hard to predict, but one thing is certain: it will be an interesting process to watch.

A Few Predictions for the Near Future

This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.


Learning from the Future with Nova Spivack from Maarten on Vimeo.

My Visit to DERI — World's Premier Semantic Web Research Institute

Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.

DERI has become the world’s premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what’s happening there.

DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:

  • Semantic Web Search Engine (SWSE) and YARS, a massively scalable triplestore.  These projects are concerned with crawling and indexing the information on the Semantic Web so that end-users can find it. They have done good work on consolidating data and also on building a highly scalable triplestore architecture.
  • Sindice — An API and search infrastructure for the Semantic Web. This project is focused on providing a rapid indexing API that apps can use to get their semantic content indexed, and that can also be used by apps to do semantic searches and retrieve semantic content from the rest of the Semantic Web. Sindice provides Web-scale semantic search capabilities to any semantic application or service.
  • SIOC — Semantically Interlinked Online Communities. This is an ontology for linking and sharing data across online communities in an open manner, that is getting a lot of traction. SIOC is on its way to becoming a standard and may play a big role in enabling portability and interoperability of social Web data.
  • JeromeDL is developing technology for semantically enabled digital libraries. I was impressed with the powerful faceted navigation and search capabilities they demonstrated.
  • notitio.us is a project for personal knowledge management of bookmarks and unstructured data.
  • SCOT, OpenTagging and Int.ere.st.  These projects are focused on making tags more interoperable, and for generating social networks and communities from tags. They provide a richer tag ontology and framework for representing, connecting and sharing tags across applications.
  • Semantic Web Services.  One of the big opportunities for the Semantic Web that is often overlooked by the media is Web services. Semantics can be used to describe Web services so they can find one another and connect, and even to compose and orchestrate transactions and other solutions across networks of Web services, using rules and reasoning capabilities. Think of this as dynamic semantic middleware, with reasoning built-in.
  • eLite. I was introduced to the eLite project, a large e-learning initiative that is applying the Semantic Web.
  • Nepomuk.  Nepomuk is a large effort supported by many big industry players. They are making a social semantic desktop and a set of developer tools and libraries for semantic applications that are being shipped in the Linux KDE distribution. This is a big step for the Semantic Web!
  • Semantic Reality. Last but not least, and perhaps one of the most eye-opening demos I saw at DERI, is the Semantic Reality project. They are using semantics to integrate sensors with the real world. They are creating an infrastructure that can scale to handle trillions of sensors eventually. Among other things I saw, you can ask things like "where are my keys?" and the system will search a network of sensors and show you a live image of your keys on the desk where you left them, and even give you a map showing the exact location. The service can also email you or phone you when things happen in the real world that you care about — for example, if someone opens the door to your office, or a file cabinet, or your car, etc. Very groundbreaking research that could seed an entire new industry.

In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI — they are clearly the leader in the space.

Video of My Semantic Web Talk

This is a video of me giving commentary on my "Understanding the Semantic Web" talk and how it relates to Twine, to a group of French business school students who made a visit to our office last month.


Here is the link to the video, if the embedded version below does not play.

Nova Spivack – Semantic Web Talk from Nicolas Cynober on Vimeo.

A Universal Classification of Intelligence

I’ve been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system — whether naturally occurring or synthetic — can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.

One approach to such evaluation is to use a standardized test, such as an IQ test. However, this test is far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ test, but that doesn’t mean much, because the test itself is geared towards humans. What is needed is a way to evaluate and compare intelligence across different species — one that is much more granular and basic.

What we need is a system that focuses on basic building blocks of intelligence, starting by measuring the presence or ability to work with fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).

What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system — whether insect, animal, human, software, or alien — in a normalized manner.

Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: It will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence — an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map — and the rating scale I am calling for would provide us with one, for it maps the range of possible capabilities that intelligent systems are capable of.

I’m not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra for example, or how well it can recall memories, etc.) — but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are — and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks from which all cognitive behavior, and therefore all minds, are comprised.

So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question — although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.

Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?

I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth — including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well — what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?

It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some other means of more direct contact, we will reap important rewards. A system such as what I am proposing will provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with the alien civilizations we may inevitably encounter as humanity spreads throughout the galaxy. While some skeptics may claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but eventually it is inevitable that we will cross paths — if they exist at all. Not to be prepared would be irresponsible.

Artificial Stupidity: The Next Big Thing

There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don’t need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I’m skeptical to say the least. I don’t need or want artificial intelligence.

No, what I really need is artificial stupidity.

I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks — like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.

The human brain is the result of millions of years of evolution. It’s already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don’t require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it’s going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.

The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don’t mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren’t good at." In fact humans are really bad at doing relatively simple, "stupid" things — tasks that don’t require much intelligence at all.

For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and we can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That’s what computers are for – or should be for at least.

Humans are really good at higher level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving — but we are just terrible at managing email, or making sense of the Web. Let’s play to our strengths and use computers to compensate for our weaknesses.

I think it’s time we stop talking about artificial intelligence — which nobody really needs, and fewer will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.

Defining the Semantic Graph — What is it Really?

This is written in response to a post by Anne Zelenka.

I’ve been talking about the coming “semantic graph” for quite some time now, and it seems the meme has suddenly caught on thanks to a recent article by Tim Berners-Lee in which he speaks of an emerging “Giant Global Graph” or “GGG.” But if the GGG emerges it may or may not be semantic. For example, social networks are NOT semantic today, even though they contain various kinds of links between people and other things.

So what makes a graph “semantic?” How is the semantic graph different from social networks like Facebook for example?

Many people think that the difference between a social graph and a semantic graph is that a semantic graph contains more types of nodes and links. That’s potentially true, but not always the case. In fact, you can make a semantic social graph or a non-semantic social graph. The concept of whether a graph is semantic is orthogonal to whether it is social.

A graph is “semantic” if the meaning of the graph is defined and exposed in an open, machine-understandable fashion. In other words, a graph is semantic if the semantics of the graph are part of the graph, or at least linked from the graph. This can be accomplished by representing a social graph using RDF and OWL, the languages of the Semantic Web.

Today most social networks are non-semantic, but it is relatively easy to transform them into semantic graphs. A simple way to make any non-semantic social graph into a semantic social graph is to use the FOAF ontology to define the entities and links in the graph.

FOAF stands for “friend of a friend” and is a simple ontology of people and social relationships. If a social network links its data to the FOAF ontology, and exposes these linkages to other applications on the Web, then other applications can understand the meaning of the data in the network in an unambiguous manner. In other words it is now a semantic social graph because its semantics are visible to other applications.
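To make this concrete, here is a minimal sketch (not a real FOAF library) of how linking graph data to a shared vocabulary like FOAF makes its meaning machine-readable. The FOAF URIs below are real terms from the FOAF namespace; the people URIs are hypothetical examples.

```python
# Statements as (subject, predicate, object) triples, with predicates
# drawn from the shared FOAF vocabulary so any FOAF-aware application
# can interpret them without custom integration work.

FOAF = "http://xmlns.com/foaf/0.1/"

triples = [
    ("http://example.org/people/alice", FOAF + "name", "Alice"),
    ("http://example.org/people/bob",   FOAF + "name", "Bob"),
    ("http://example.org/people/alice", FOAF + "knows",
     "http://example.org/people/bob"),
]

def objects(subject, predicate):
    """Find all objects linked from `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Because foaf:knows has a published definition, the question below is
# unambiguous to any application that understands FOAF:
friends = objects("http://example.org/people/alice", FOAF + "knows")
print(friends)  # ['http://example.org/people/bob']
```

The key point is that the predicate is a URI pointing at a shared, published definition, rather than an opaque local string.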

As illustrated by the FOAF example above, one way to make a graph semantic is to use the W3C open standards for the Semantic Web (RDF and OWL) to represent, and define the meaning of, the nodes and links in the graph. By using the Semantic Web, the graph becomes machine-understandable and thus more easily navigated, imported, searched, and integrated by other applications.

For example, let’s say that social network Application A comes along and wants to use the dataset of social network Application B. App A sees the graph of nodes and links in B, and it sees something called a “has team” link connecting various nodes in the graph together. What does that mean? What kinds of things can or cannot be connected with this link? What can be inferred if things are connected this way?

The meaning of “has team” is ambiguous to App A because it’s not defined anywhere that the software can see. The only way App A can use App B’s data correctly is if the programmer of App A speaks to the programmer of App B (or reads something they wrote, such as documentation of some sort) that defines what they meant by the “has team” link.

Only by knowing what was intended by the programmer of App B can App A treat App B’s data appropriately, without any misinterpretation that might lead to mistakes or inconsistencies. This is important because, for example, if a user searches for “Yankees Players,” should people who are linked by the “has team” link to sports teams called “Yankees” be returned? Does “has team” mean “a connection from a person to a sports team they support,” or does it mean “a connection from a person to a sports team they play on,” or does it mean “a connection from a person to a workgroup they participate in?” In short, App A has no idea what to do with data that is linked by App B’s “has team” link unless it is explicitly programmed to make use of it.

The OWL language (Web Ontology Language) provides a way for the programmers of App A and App B to define what the links in their graphs mean in an unambiguous and machine-understandable way. So App A just has to look up this definition and it can instantly start to use App B’s data correctly, without any new programming or difficult integration.

How is this accomplished? The programmer of App B simply uses OWL to define an ontology of social relationships for their service: for example, they define the “has team” link to be a link that connects a person to a sports team they play on. They also define what they mean by a “sports team” (for example, “a group of two or more people that play a sport,” where a sport is one of “baseball, basketball, football, soccer, hockey, tennis”), and they link these terms to another ontology of sports somewhere else on the Web. The ontology file that defines App B’s data is added to the Website of App B, and linked from its data, so that other applications can see it.

Now when another application such as App A comes along and looks at App B’s data, it can reference App B’s ontology to see for itself what was intended by the “has team” link — it can see exactly what that link implies and what can be inferred from it. It understands how to use App B’s data set, and how to correctly make new links using that data set which are consistent with the meaning of the links it contains.
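The App A / App B scenario above can be sketched roughly as follows. Here a plain dictionary stands in for the OWL ontology file that App B publishes; all the names ("has team", "Joe", "Yankees") are hypothetical illustrations, not a real API.

```python
# A stand-in for App B's published ontology: the "has team" link now
# has an explicit, machine-readable definition including domain
# (what the subject must be) and range (what the object must be).
app_b_ontology = {
    "has team": {
        "meaning": "connects a person to a sports team they play on",
        "domain": "Person",
        "range":  "SportsTeam",
    }
}

# A stand-in for data pulled from App B's API.
app_b_data = [
    ("Joe", "has team", "Yankees"),
]

def interpret(triple, ontology):
    """Look up what a link means before using the data, instead of
    guessing or hard-coding the programmer's intent."""
    subject, link, obj = triple
    definition = ontology[link]
    return (f"{subject} ({definition['domain']}) -- {definition['meaning']} "
            f"--> {obj} ({definition['range']})")

print(interpret(app_b_data[0], app_b_ontology))
```

App A never needs to talk to App B's programmer: the definition travels with the data.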

This is the real point of the Semantic Web open standards — RDF enables data to be represented in a database-independent manner, and OWL enables the semantics of that data to be defined in an open, machine-understandable way, so that other applications can use that data without having to first be programmed to do so. As long as they speak RDF/OWL, applications can use any data they find and look up the meaning of any data they need, so they can use the data appropriately.

For example, suppose another application, App C, which is OWL-aware but has never seen App B’s data set before and was not programmed specifically to use it, pulls some data out from App B’s API. App C can immediately begin to use this data correctly and consistently with how App B uses it, because all that is necessary for understanding how to use B’s data is encoded in the OWL ontology that App B’s data refers to.

The point here is that using Semantic Web open standards such as RDF and OWL to encode what data means is a giant leap beyond just putting raw data onto the Web in an open format. It doesn’t just put the data itself on the Web, it also puts the definition of what the data means, and how to use it, on the Web in an open format. A semantic graph is far more reusable than a non-semantic graph — it’s a graph that carries its own meaning.

The semantic graph is not merely a graph with links to more kinds of things than the social graph. It’s a graph of interconnected things that is machine-understandable — its meaning or “semantics” is explicitly represented on the Web, just like its data. This is the real way to make social networks open. Merely opening up their APIs is just the first step.

Only when the semantics of data are defined and shared in an open way can any graph truly be said to be semantic. Once data around the Web is defined in a machine-understandable way, a whole new world of easy, instant mashups becomes possible. Applications can start to freely and instantly mix and match each other’s data, including new data they were not programmed in advance to understand. This opens up the door to the Web truly becoming a giant database, and eventually an integrated operating system in which all applications are able to more easily interoperate and share data.

The Giant Global Graph may or may not be a semantic graph. That depends on whether it is implemented with, or at least connected to, W3C standards for the Semantic Web.

I believe that because the Semantic Web makes data integration easier, it will ultimately be widely adopted. Simply put, applications that wish to access or integrate data in the Age of the Web can more easily do so using RDF and OWL. That alone is reason enough to use these standards.

Of course there are many other benefits as well, such as the ability to do more sophisticated reasoning across the data, but that is less important. Simply making data more accessible, connectable, and reusable across applications would be a huge benefit.

Quick Video Preview of Twine

The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it’s a sneak peek.

We’ve been letting early beta testers into Twine and we’re learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We’re letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We’re getting some very helpful user feedback to make Twine better before we open it up to the world.

For now, here’s a quick video preview:

Web 3.0 — The Best Official Definition Imaginable

Jason just blogged his take on an official definition of "Web 3.0" — in his case he defines it as better content, built using Web 2.0 technologies. There have been numerous responses already, but since I am one of the primary co-authors of the Wikipedia page on the term Web 3.0, I thought I should throw my hat in the ring here.

Web 3.0, in my opinion, is best defined as the third decade of the Web (2009–2019), during which time several key technologies will become widely used. Chief among them will be RDF and the technologies of the emerging Semantic Web. While Web 3.0 is not synonymous with the Semantic Web (there will be several other important technology shifts in that period), it will be largely characterized by semantics in general.

Web 3.0 is an era in which we will upgrade the back-end of the Web, after a decade of focus on the front-end (Web 2.0 has mainly been about AJAX, tagging, and other front-end user-experience innovations). Web 3.0 is already starting to emerge in startups such as my own Radar Networks (our product is Twine) but will really become mainstream around 2009.

Why is defining Web 3.0 as a decade of time better than just about any other possible definition of the term? Well for one thing, it's a definition that can't easily be co-opted by any company or individual around some technology or product. It's also a completely unambiguous definition — it refers to a particular time period and everything that happens in Web technology and business during that period. This would end the debate about what the term means and move it to something more useful to discuss: What technologies and trends will actually become important in the coming decade of the Web?

It's time to once again pull out my well-known graph of Web 3.0 to illustrate what I mean…

[Thumbnail: Radar Networks — Towards a WebOS]

(Click the thumbnail for a larger, reusable version)

I've written fairly extensively on the subjects of defining Web 3.0 and the Semantic Web. Here are some links to get you started if you want to dig deeper:

The Semantic Web: From Hypertext to Hyperdata
The Meaning and Future of the Semantic Web
How the WebOS Evolves
Web 3.0 Roundup
Gartner is Wrong About Web 3.0
Beyond Keyword (And Natural Language) Search
Enriching the Connections of the Web: Making the Web Smarter
Next Step for the Web
Doing for Data What HTML Did for Documents

Open Source Projects for Extracting Data and Metadata from Files & the Web

I’ve been looking around for open-source libraries (preferably in Java, but not required) for extracting data and metadata from common file formats and Web formats. One project that looks very promising is Aperture. Do you know of any others that are ready or almost ready for prime-time use? Please let me know in the comments! Thanks.

Enriching the Connections of the Web — Making the Web Smarter

Web 3.0 — aka The Semantic Web — is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.

I  believe that collective intelligence primarily comes from connections — this is certainly the case in the brain where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain’s connections than in the neurons alone. There are several kinds of connections on the Web:

  1. Connections between information (such as links)
  2. Connections between people (such as opt-in social relationships, buddy lists, etc.)
  3. Connections between applications (web services, mashups, client server sessions, etc.)
  4. Connections between information and people (personal data collections, blogs, social bookmarking, search results, etc.)
  5. Connections between information and applications (databases and data sets stored or accessible by particular apps)
  6. Connections between people and applications (user accounts, preferences, cookies, etc.)

Are there other kinds of connections that I haven’t listed? Please let me know!

I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.

In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object) where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only subject, predicate, object of statements, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It’s a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
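The extension from triples to richer tuples described above can be sketched as follows. This is a minimal illustration, not a standard representation; the field names and example URIs are hypothetical.

```python
from dataclasses import dataclass

# A statement extended beyond (subject, predicate, object) to carry
# provenance and a timestamp, as described in the text.
@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    object: str
    provenance: str   # where the data for the statement came from
    timestamp: str    # when the statement was made

stmt = Statement(
    subject="http://example.org/people/alice",
    predicate="http://example.org/ont/employee_of",
    object="http://example.org/orgs/acme",
    provenance="http://example.org/crawl/2007-11-30",
    timestamp="2007-11-30T12:00:00Z",
)

# Because the extra attributes travel with each statement, applications
# can, for example, keep only statements from sources they trust:
statements = [stmt]
trusted = [s for s in statements
           if s.provenance.startswith("http://example.org/")]
print(len(trusted))  # 1
```

The same pattern extends to any other per-statement metadata (confidence scores, access rights, etc.).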

The important point for this article however is that in this data model rather than there being just a single type of connection (as is the case on the present Web which basically just provides the HREF hotlink, which simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used.  The meaning of these connections can be very specific or very general.

For example one might define a type of connection called "friend of" or a type of connection called "employee of" — these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
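The "employee of" example above can be sketched in code. The domain/range idea mirrors OWL's property restrictions, but the mini-ontology and names here are purely illustrative, not real OWL syntax.

```python
# A toy ontology giving each link type explicit semantics: what kinds
# of things may appear as its subject (domain) and object (range).
ontology = {
    "employee of": {"domain": {"Person"}, "range": {"Person", "Organization"}},
    "friend of":   {"domain": {"Person"}, "range": {"Person"}},
}

# Type assertions about the things being linked.
types = {
    "Sue": "Person",
    "Acme Inc.": "Organization",
}

def link_is_valid(subject, link, obj):
    """Check a proposed link against the ontology's domain and range,
    so applications can reject links that contradict the semantics."""
    spec = ontology[link]
    return types[subject] in spec["domain"] and types[obj] in spec["range"]

print(link_is_valid("Sue", "employee of", "Acme Inc."))  # True
print(link_is_valid("Sue", "friend of", "Acme Inc."))    # False
```

A real OWL reasoner does far more (inference, consistency checking across ontologies), but this captures the basic idea of links whose meaning is machine-checkable.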

This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It’s a new place to put meaning in fact — you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole — the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).

Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood — it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.

It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web and Web 3.0 are just as much about people as they are about other things. Rather than excluding people, they actually enrich people’s relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.

Now where will all these rich semantic connections come from? That’s the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications that mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people — for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" — far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.

These are subtle points that are very hard for non-specialists to see — without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!

Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I’m saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.

Web 3.0 — Next-Step for Web?

The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It’s a huge article. In many ways it’s one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.

One point of clarification, just in case anyone is wondering…

Web 3.0 is not just about machines — it’s actually all about humans — it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It’s NOT Web 2.0 – people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics — often called "The Semantic Web" — will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.

So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and help each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment that using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.

Business 2.0 and BusinessWeek Articles About Radar Networks

It’s been an interesting month for news about Radar Networks. Two significant articles came out recently:

Business 2.0 Magazine published a feature article about Radar Networks in their July 2007 issue. This article is perhaps the most comprehensive article to date about what we are working on at Radar Networks; it’s also one of the better articulations of the value proposition of the Semantic Web in general. It’s a fun read, with gorgeous illustrations, and I highly recommend reading it.

BusinessWeek posted an article about Radar Networks on the Web. The article covers some of the background that led to my interests in collective intelligence and the creation of the company. It’s a good article and covers some of the bigger issues related to the Semantic Web as a paradigm shift. I would add one or two points of clarification in addition to what was stated in the article: Radar Networks is not relying solely on software to organize the Internet — in fact, the service we will be launching combines human intelligence and machine intelligence to start making sense of information, and helping people search and collaborate around interests more productively. One other minor point related to the article — it mentions the story of EarthWeb, the Internet company that I co-founded in the early 1990s: EarthWeb’s content business actually was sold after the bubble burst, and the remaining lines of business were taken private under the name Dice.com. Dice is the leading job board for techies and was one of our properties. Dice has been highly profitable all along and recently filed for a $100M IPO.

Listen to this Discussion on the Future of the Web

If you are interested in the future of the Web, you might enjoy listening to this interview with me, moderated by Dr. Paul Miller of Talis. We discuss, in-depth: the Semantic Web, Web 3.0, SPARQL, collective intelligence, knowledge management, the future of search, triplestores, and Radar Networks.

A Bunch of New Press About Radar Networks

We had a bunch of press hits today for my startup, Radar Networks:

PC World  Article on  Web 3.0 and Radar Networks

Entrepreneur Magazine interview

We’re also proud to announce that Jim Hendler, one of the founding gurus of the Semantic Web, has joined our technical advisory board.

Metaweb and Radar Networks

This is just a brief post because I am actually slammed with VC meetings right now. But I wanted to congratulate our friends at Metaweb for their pre-launch announcement. My company, Radar Networks, is the only other major venture-funded play working on the Semantic Web for consumers so we are thrilled to see more action in this sector.

Metaweb and Radar Networks are working on two very different applications (fortunately!). Metaweb is essentially making the Wikipedia of the Semantic Web. Here at Radar Networks we are making something else — but equally big — and in a different category. Just as Metaweb is making a semantic analogue to something that exists and is big, so are we: but we’re more focused on the social web — we’re building something that everyone will use. But we are still in stealth so that’s all I can say for now.

This is now an exciting two-horse space. We look forward to others joining the excitement too. Web 3.0 is really taking off this year.

An interesting side note: Danny Hillis (founder of Metaweb), myself (founder of Radar Networks) and Lew Tucker (CTO of Radar Networks) all worked together at Thinking Machines (an early AI massively parallel computer company). It’s fascinating that we’ve all somehow come to think that the only practical way to move machine intelligence forward is by having us humans and applications start to employ real semantics in what we record in the digital world.

Is it Only Wednesday?

Is it only Wednesday? It feels like a whole week already! I’ve been in back-to-back VC meetings, board discussions and strategy meetings since last week. I think this must be related to the heating-up of the "Web 3.0" meme and the semantic sector in general. Perhaps it is also due to the coverage we got in the Guidewire Report and newsletter, which went out to everyone who went to DEMO, and perhaps also because some influential people in the biz have been talking about us. We’ve been very careful not to show our app to anyone because it does some things that are really new. We don’t want to spread that around (yet). Anyway it’s been pretty busy — not just for me, but for the whole team. Everyone is on full afterburners right now.

By the way — I’m really proud of our product team (hope you guys are reading this) — the team has made an alpha that is not only a breakthrough on the technical level, but also looks incredibly good. Some of the select few who have seen our app so far have said, "the app looks beautiful" and "wow, that’s amazing," etc. We’ve done some cool things with NLP, graph analysis, and statistics under the hood. And the GUI is also very slick. Probably the best team I’ve worked with.

If you are interested in helping to beta-test the consumer Semantic Web, we’re planning invite-only beta trials this summer — sign up at our website to be on our beta invite list.

Breaking the Collective IQ Barrier — Making Groups Smarter

I’ve been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call “The Collective IQ Barrier.” Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.

In a nutshell, here is how I define this barrier:

The Collective IQ Barrier: The potential collective intelligence of a human group is exponentially proportional to group size, however in practice the actual collective intelligence that is achieved by a group is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.

Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?

I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.

The Effective Size of Groups

For millions of years — in fact since the dawn of humanity — human social organizations have been limited in effective size. Groups are most effective when they are small, but small groups have less collective knowledge at their disposal. Slightly larger groups optimize both effectiveness and access to resources such as knowledge and expertise. In my own experience working on many different kinds of teams, I think the sweet spot is between 20 and 50 people. Above this size, groups rapidly become inefficient and unproductive.

The Invention of Hierarchy

The solution that humans have used to get around this limitation in the effective size of groups is hierarchy. When organizations grow beyond 50 people we start to break them into sub-organizations of less than 50 people. As a result, if you look at any large organization, such as a Fortune 100 corporation, you find a huge complex hierarchy of nested organizations and cross-functional organizations. This hierarchy enables the organization to create specialized “cells” or “organs” of collective cognition around particular domains (like sales, marketing, engineering, HR, strategy, etc.) that remain effective despite the overall size of the organization.

By leveraging hierarchy, an organization of even hundreds of thousands of members can still achieve some level of collective IQ as a whole. The problem, however, is that the collective IQ of the whole organization is still quite a bit lower than the combined collective IQs of the sub-organizations that comprise it. Even in well-structured, well-managed hierarchies, the whole is still less than the sum of its parts. Hierarchy also has limits — the collective IQ of an organization is also inversely proportional to the number of groups it contains, and the average number of levels of hierarchy between those groups. (Perhaps this could be defined more elegantly as an inverse function of the average network distance between groups in an organization.)

The reason that organizations today still have to make such extensive use of hierarchy is that our technologies for managing collaboration, community, knowledge and intelligence on a collective scale are still extremely primitive. Hierarchy is still one of the best solutions we have at our disposal. But we’re getting better fast.

Modern organizations are larger and far more complex than would ever have been practical in the Middle Ages, for example. They contain more people, distributed more widely around the globe, with more collaboration and specialization, and more information, making more rapid decisions, than was possible even 100 years ago. This is progress.

Enabling Technologies

There have been several key technologies that made modern organizations possible: the printing press, telegraph, telephone, automobile, airplane, typewriter, radio, television, fax machine, and personal computer. These technologies have enabled information and materials to flow more rapidly, at less cost, across ever more widely distributed organizations. So we can see that technology does make a big difference in organizational productivity. The question is, can technology get us beyond the Collective IQ Barrier?

The advent of the Internet, and in particular the World Wide Web, enabled a big leap forward in collective intelligence. These technologies have further reduced the cost of distributing and accessing information and information products (and even “machines” in the form of software code and Web services). They have made it possible for collective intelligence to function more rapidly, more dynamically, on a wider scale, and at less cost than any previous generation of technology.

As a result of the evolution of the Web, we have seen new organizational structures begin to emerge that are less hierarchical, more distributed, and often more fluid. For example, virtual teams can instantly form, collaborate across boundaries, and then dissolve back into the Webs they came from when their job is finished. This process is now much easier than it ever was. Numerous hosted Web-based tools exist to facilitate it: email, groupware, wikis, message boards, list servers, weblogs, hosted databases, social networks, search portals, enterprise portals, etc.

But this is still just the cusp of this trend. Even today, with the current generation of Web-based tools available to us, we are still not able to effectively tap much more of the potential Collective IQ of our groups, teams and communities. How do we get from where we are today (the whole is dumber than the sum of its parts) to where we want to be in the future (the whole is smarter than the sum of its parts)?

The Future of Productivity

The diagram below illustrates how I think about the past, present and future of productivity. In my view, from the advent of PCs onwards we have seen rapid growth in individual and group productivity, enabling people to work with larger sets of information, in larger groups. But this will not last — as soon as we reach a critical level of information and groups of ever larger size, productivity will start to decline again, unless new technologies and tools emerge that enable us to cope with these increases in scale and complexity. You can read more about this diagram here:

http://novaspivack.typepad.com/nova_spivacks_weblog/2007/02/steps_towards_a.html

In the last 20 years the amount of information that knowledge workers (and even consumers) have to deal with on a daily basis has mushroomed by almost 10 orders of magnitude, and it will continue like this for several more decades. But our information tools — and in particular our tools for communication, collaboration, community, commerce and knowledge management — have not advanced nearly as quickly. As a result, the tools that we are using today to manage our information and interactions are grossly inadequate for the task at hand: they were simply not designed to handle the tremendous volumes of distributed information, and the rate of change of information, that we are witnessing today.

Case in point: Email. Email was never designed for what it is being used for today. Email was a simple interpersonal notification and messaging tool, and essentially that is what it is good for. But today most of us use our email as a kind of database, search engine, collaboration tool, knowledge management tool, project management tool, community tool, commerce tool, content distribution tool, etc. Email wasn’t designed for these functions, and it really isn’t very productive when applied to them.

For groups the email problem is even worse than it is for individuals. Not only is each person’s individual email productivity declining, but as group size increases (and thus group information size increases as well), there is a multiplier effect that further reduces everyone’s email productivity in inverse proportion to the size of the group. Email becomes increasingly unproductive as group size and information size increase.

This is not just true of email, however; it’s true of almost all the information tools we use today: search engines, wikis, groupware, social networks, etc. They all suffer from this fundamental problem: productivity breaks down with scale, and the problem is far worse for groups and organizations than it is for individuals. But scale is increasing incessantly — that is a fact — and it will continue to do so for decades at least. Unless something is done about this, we will simply be buried in our own information within about a decade.

The Semantic Web

I think the Semantic Web is a critical enabling technology that will help us get through this transition. It will enable the next big leap in productivity and collective intelligence. It may even be the technology that enables humans to flip the ratio so that, for the first time in human history, larger groups of people can function more productively and intelligently than smaller groups. It all comes down to enabling individuals and groups to maintain (and ultimately improve) their productivity in the face of the continuing explosion in information and social complexity that they are experiencing.

The Semantic Web provides a richer underlying fabric for expressing, sharing, and connecting information. Essentially it provides a better way to transform information into useful knowledge, and to share and collaborate with it. In effect, it upgrades the medium — in this case the Web, and any other data connected to it — that we use for our information today.

By enriching the medium we can in turn enable new leaps in how applications, people, groups and organizations can function. This has happened many times before in the history of technology. The printing press is one example; the Web is a more recent one. The Web enriched the medium (documents) with HTML, and a new transport mechanism, HTTP, for sharing it. This brought about one of the largest leaps in human collective cognition and productivity in history. But HTML really only describes formatting and links. XML came next, to start to provide a way to enrich the medium with information about structure — the parts of documents. The Semantic Web takes this one step further: it provides a way to enrich the medium with information about the meaning of the structure — what are those parts, and what do the various links actually mean?

Essentially the Semantic Web provides a means to abstract and externalize human knowledge about information. Previously the meaning of information lived only in our heads, and perhaps in certain specially written software applications that were coded to understand certain types of data. The Semantic Web will disrupt this situation by providing open standards for encoding this meaning right into the medium itself. Any application that can speak the open standards of the Semantic Web can then begin to correctly interpret the meaning of information, and treat it accordingly, without having to be specifically coded to understand each type of data it might encounter.

This is analogous to the benefit of HTML. Before HTML, every application had to be specifically coded to each different document format in order to display it. After HTML, applications could all just standardize on a single way to define the formats of different documents. Suddenly a huge new landscape of information became accessible, both to applications and to the people who used them. The Semantic Web does something similar: it provides a way to make the data itself “smarter” so that applications don’t have to know so much to correctly interpret it. Any data structure — a document or a data record of any kind — that can be marked up with HTML to define its formatting can also be marked up with RDF and OWL (the languages of the Semantic Web) to define its meaning.

Once semantic metadata is added, the document can not only be displayed properly by any application (thanks to HTML and XML), but it can also be correctly understood by that application. For example, the application can understand what kind of document it is, what it is about, what the parts are, how the document relates to other things, and what particular data fields and values mean and how they map to data fields and values in other data records around the Web.
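
To make this concrete, here is a minimal sketch in plain Python — not real RDF/OWL tooling, and the document names and triples are invented for illustration — of how meaning expressed as subject-predicate-object triples lets a generic application interpret a document it was never specifically coded for:

```python
# A toy triple store: a document's meaning expressed as
# (subject, predicate, object) statements, RDF-style.
triples = {
    ("doc:42", "rdf:type", "ex:PressRelease"),
    ("doc:42", "ex:about", "ex:ProductLaunch"),
    ("doc:42", "ex:publishedBy", "ex:AcmeCorp"),
    ("ex:AcmeCorp", "rdf:type", "ex:Company"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# A generic application can now ask "what kind of thing is doc:42?"
# without any document-format-specific code.
kinds = {o for (_, _, o) in query("doc:42", "rdf:type")}
print(kinds)  # {'ex:PressRelease'}
```

The point of the sketch is that the query logic is completely generic; all the document-specific knowledge lives in the data itself.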

The Semantic Web enriches information with knowledge about what that information means, what it is for, and how it relates to other things. With this in hand, applications can go far beyond the limitations of keyword search, text processing, and brittle tabular data structures. Applications can start to do a much better job of finding, organizing, filtering, integrating, and making sense of ever larger and more complex distributed data sets around the Web.

Another great benefit of the Semantic Web is that this additional metadata can be added in a totally distributed fashion. The publisher of a document can add their own metadata, and other parties can then annotate that with their own metadata. Even HTML doesn’t enable that level of cooperative markup (except perhaps in wikis). It takes a distributed solution to keep up with a highly distributed problem (the Web). The Semantic Web is just such a distributed solution.

The Semantic Web will enrich information and this in turn will enable people, groups and applications to work with information more productively. In particular groups and organizations will benefit the most because that is where the problems of information overload and complexity are the worst. Individuals at least know how they organize their own information so they can do a reasonably good job of managing their own data. But groups are another story — because people don’t necessarily know how others in their group organize their information. Finding what you need in other people’s information is much harder than finding it in your own.

Where the Semantic Web can help with this is by providing a richer fabric for knowledge management. Information can be connected to an underlying ontology that defines not only the types of information available, but also the meaning and relationships between different tags or subject categories, and even the concepts that occur in the information itself. This makes organizing and finding group knowledge easier. In fact, eventually the hope is that people and groups will not have to organize their information manually anymore — it will happen in an almost fully-automatic fashion. The Semantic Web provides the necessary frameworks for making this possible.

But even with the Semantic Web in place and widely adopted, more innovation on top of it will be necessary before we can truly break past the Collective IQ Barrier such that organizations can in practice achieve exponential increases in Collective IQ. Human beings are only able to cope with a few chunks of information at a given moment, and our memories and ability to process complex data sets are limited. When group size and data size grow beyond certain limits, we simply cannot cope; we become overloaded and jammed, even with rich Semantic Web content at our disposal.

Social Filtering and Social Networking — Collective Cognition

Ultimately, to remain productive in the face of such complexity we will need help. Humans in roles that require them to cope with large amounts of information, relationships and complexity often hire assistants, but not all of us can afford to do that, and in some cases even assistants are not able to keep up with the complexity that has to be managed.

Social networking and social filtering are two ways to expand the number of “assistants” we each have access to, while also reducing the price of harnessing the collective intelligence of those assistants to just about nothing. Essentially these methodologies enable people to leverage the combined intelligence and attention of large communities of like-minded people who contribute their knowledge and expertise for free. It’s a collective tit-for-tat form of altruism.

For example, Digg is a community that discovers the most interesting news articles. It does this by enabling thousands of people to submit articles and vote on them. What Digg adds are a few clever algorithms on top of this for ranking articles, such that the most active ones bubble up to the top. It’s not unlike a stock market trader’s terminal, but for a completely different class of data. This is a great example of social filtering.
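
Digg’s actual ranking algorithm has not been published, but the general idea — votes push an item up while age pulls it down — can be sketched with a toy scoring function. The formula and the gravity constant below are purely illustrative, not Digg’s:

```python
import math

def score(votes, age_hours, gravity=1.5):
    """Toy activity score: more votes raise an item, age decays it.
    The +2 offset keeps brand-new items from dividing by ~zero."""
    return votes / math.pow(age_hours + 2, gravity)

# (title, votes, age in hours)
stories = [("fresh hit", 60, 1), ("old hit", 200, 48), ("new misc", 5, 1)]
ranked = sorted(stories, key=lambda s: score(s[1], s[2]), reverse=True)
print([title for title, _, _ in ranked])
# A recent, heavily voted story outranks an older one with more total votes.
```

Time decay is the design choice that makes this a *news* filter: without it, early popular items would sit at the top forever.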

Another good example is prediction markets, where groups of people vote on which stock or movie or politician is likely to win — in some cases by buying virtual stock in them — as a means of predicting the future. It has in fact been shown that prediction markets do a pretty good job of making accurate predictions. In addition, expertise referral services help people get answers to questions from communities of experts. These services have been around in one form or another for decades and have recently come back into vogue with services like Yahoo Answers. Amazon has also taken a stab at this with the Amazon Mechanical Turk, which enables “programs” to be constructed in which people perform the work.

I think social networking, social filtering, prediction markets, expertise referral networks, and collective collaboration are extremely valuable. By leveraging other people, individuals and groups can stay ahead of complexity and can also get the benefit of wide-area collective cognition. These approaches to collective cognition are beginning to filter into the processes of organizations and other communities. For example, there is recent interest in applying social networking to niche communities and even enterprises.

The Semantic Web will enrich all of these activities, making social networks and social filtering more productive. It’s not an either/or choice — these technologies are in fact extremely compatible. By leveraging a community to tag, classify and organize content, for example, the meaning of that content can be collectively enriched. This is already happening in a primitive way in many social media services. The Semantic Web will simply provide a richer framework for doing it.

The combination of the Semantic Web with emerging social networking and social filtering will enable something greater than either on its own. Together, these two technologies will enable much smarter groups, social networks, communities and organizations. But this still will not get us all the way past the Collective IQ Barrier. It may get us close to the threshold, though. To cross the threshold we will need to enable an even more powerful form of collective cognition.

The Agent Web

To cope with the enormous future scale and complexity of the Web, the desktop and the enterprise, each individual and group will really need not just a single assistant, or even a community of human assistants working on common information (a social filtering community, for example), but thousands or millions of assistants working specifically for them. This only becomes affordable and feasible if we can virtualize what an “assistant” is.

Human assistants are at the top of the intelligence pyramid. They are extremely smart and powerful, and they are expensive; they should not be used for simple tasks like sorting content — that’s just a waste of their capabilities. It would be like using a supercomputer array to spellcheck a document. Instead, we need to free humans up to do the really high-value information tasks, and find a way to farm out the low-value, rote tasks to software. Software is cheap or even free, and it can be replicated as much as needed in order to parallelize. A virtual army of intelligent agents is less expensive than a single human assistant, and much better suited to sifting through millions of Web pages every day.

But where will these future intelligent agents get their intelligence? In past attempts at artificial intelligence, researchers tried to build gigantic expert systems that could reason as well as, for example, a small child. These attempts met with varying degrees of success, but they all had one thing in common: they were monolithic applications.

I believe that future intelligent agents should be simple. They should not be advanced AI programs or expert systems. They should be capable of a few simple behaviors, the most important of which is to reason against sets of rules and semantic data. The basic logic necessary for reasoning is not enormous and does not require any AI; it’s just the ability to follow logical rules and perhaps do set operations. Agents should be lightweight and highly mobile. Instead of vast, monolithic AI, I am talking about vast numbers of very simple agents that, working together, can do emergent, intelligent operations en masse.
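
As a sketch of what such a simple agent’s reasoning could look like — nothing more than forward chaining over triples, with no AI involved — here is a minimal rule follower. The facts and rules are invented for illustration:

```python
# A minimal "simple agent": repeatedly apply if-then rules to a set of
# facts (triples) until nothing new can be derived (forward chaining).

facts = {("italy", "has", "recipes"), ("recipes", "is_a", "content")}

# Each rule: (premises, conclusion); terms starting with "?" are variables.
rules = [
    ([("?x", "has", "?y"), ("?y", "is_a", "content")],
     ("?x", "offers", "content")),
]

def match(pattern, fact, env):
    """Try to unify one triple pattern with one fact, extending env."""
    env = dict(env)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.get(p, f) != f:
                return None      # variable already bound to something else
            env[p] = f
        elif p != f:
            return None          # constant mismatch
    return env

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            envs = [{}]
            for prem in premises:  # satisfy each premise in turn
                envs = [e2 for e in envs for f in derived
                        if (e2 := match(prem, f, e)) is not None]
            for env in envs:
                new = tuple(env.get(t, t) for t in conclusion)
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

print(("italy", "offers", "content") in forward_chain(facts, rules))  # True
```

Note that the engine itself knows nothing about Italy or recipes; all domain knowledge arrives as data, which is exactly the property the essay argues for.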

For example, search: you might deploy a thousand agents to search all the sites about Italy for recipes and then assemble those results into a database instantaneously. Or you might dispatch a thousand or more agents to watch for a job that matches your skills and goals across hundreds of thousands or millions of websites. They could watch and wait until jobs that matched your criteria appeared, and then negotiate amongst themselves to determine which of the possible jobs they found were good enough to show you. Another scenario might be commerce: you could dispatch agents to find you the best deal on a vacation package, and they could even negotiate an optimal itinerary and price for you. All you would have to do is choose between a few finalist vacation packages and make the payment. This could be a big timesaver.

The above examples illustrate how agents might help an individual, but how might they help a group or organization? For one thing, agents could continuously organize and re-organize information for a group. They could also broker social interactions — for example, by connecting people to other people with matching needs or interests, or by helping people find experts who could answer their questions. One of the biggest obstacles to getting past the Collective IQ Barrier is simply that people cannot keep track of more than a few social relationships and information sources at any given time. But with an army of agents helping them, individuals might be able to cope with more relationships and data sources at once; the agents would act as their filters, deciding what to let through and how much priority to give it. Agents could also help to make recommendations, and learn to facilitate and even automate various processes such as finding a time to meet, polling to make a decision, or escalating an issue up or down the chain of command until it is resolved.

To make intelligent agents useful, they will need access to domain expertise. But the agents themselves will not contain any knowledge or intelligence of their own. The knowledge will exist outside on the Semantic Web, and so will the intelligence. Their intelligence, like their knowledge, will be externalized and virtualized in the form of axioms or rules that will exist out on the Web just like web pages.

For example, a set of axioms about travel could be published to the Web in the form of a document that formally defined them. Any agent that needed to process travel-related content could reference these axioms in order to reason intelligently about travel in the same way that it might reference an ontology about travel in order to interpret travel data structures. The application would not have to be specifically coded to know about travel — it could be a generic simple agent — but whenever it encountered travel-related content it could call up the axioms about travel from the location on the Web where they were hosted, and suddenly it could reason like an expert travel agent. What’s great about this is that simple generic agents would be able to call up domain expertise on an as-needed basis for just about any domain they might encounter. Intelligence — the heuristics, algorithms and axioms that comprise expertise, would be as accessible as knowledge — the data and connections between ideas and information on the Web.
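
A toy sketch of this as-needed loading of expertise follows. The registry, the domain name, and the rule itself are all hypothetical; in the essay’s vision the axioms would be fetched by reference from URLs on the Web, not from a local dictionary:

```python
# A generic agent with no built-in expertise: when it encounters content
# tagged with a domain, it looks up that domain's axioms and applies them.

AXIOM_REGISTRY = {
    # Stand-in for axiom sets published on the Web. Each axiom maps a
    # fact to a conclusion (or None if it doesn't apply).
    "travel": [
        lambda fact: ("needs", "visa")
        if fact == ("destination", "abroad") else None,
    ],
}

def generic_agent(content_domain, fact):
    """Apply whatever axioms exist for this domain; know nothing itself."""
    axioms = AXIOM_REGISTRY.get(content_domain, [])
    return [c for rule in axioms if (c := rule(fact)) is not None]

print(generic_agent("travel", ("destination", "abroad")))  # [('needs', 'visa')]
print(generic_agent("cooking", ("destination", "abroad")))  # []
```

The agent code never changes as new domains appear; only the registry grows, which is what makes the scheme decentralizable.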

The axioms themselves would be created by human experts in various domains, and in some cases they might even be created or modified by agents as they learned from experience. These axioms might be provided for free as a public service, or as fee-based Web services via APIs that only paying agents could access.

The key is that this model is extremely scalable: millions or billions of axioms could be created, maintained, hosted, accessed, and evolved in a totally decentralized and parallel manner by thousands or even hundreds of thousands of experts all around the Web. Instead of a few monolithic expert systems, the Web as a whole would become a giant distributed system of experts. There might be varying degrees of quality among competing axiom sets available for any particular domain, and perhaps a ratings system could help to filter them over time. A sort of natural selection of axioms might take place as humans and applications rated the end results of reasoning with particular sets of axioms, and then fed those ratings back to the sources of the expertise, causing them to get more or less attention from other agents in the future. This process would be quite similar to the human-level forces of intellectual natural selection at work in fields of study where peer review and competition help to filter and rank ideas and their proponents.

Virtualizing Intelligence

What I have been describing is the virtualization of intelligence — making intelligence and expertise something that can be “published” to the Web and shared just like knowledge, just like an ontology, a document, a database, or a Web page. This is one of the long-term goals of the Semantic Web, and it’s already starting now via new languages, such as SWRL, that are being proposed for defining and publishing axioms or rules to the Web. For example, “a non-biological parent of a person is their step-parent” is a simple axiom. Another axiom might be, “a child of a sibling of your parent is your cousin.” Using such axioms, an agent could make inferences and do simple reasoning about social relationships, for example.
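
The cousin axiom can be worked through concretely. The sketch below uses plain Python rather than actual SWRL syntax, with an invented family, just to show what applying such an axiom to relationship facts looks like:

```python
# Facts: (parent, child) pairs, plus a symmetric sibling relation.
parent_of = {("carol", "alice"), ("dave", "bob")}
sibling_of = {("carol", "dave"), ("dave", "carol")}

def cousins():
    """Axiom: a child of a sibling of your parent is your cousin."""
    result = set()
    for parent, child in parent_of:
        for a, b in sibling_of:
            if a == parent:                 # b is a sibling of child's parent
                for p2, c2 in parent_of:
                    if p2 == b:             # c2 is b's child
                        result.add((child, c2))
    return result

print(cousins())  # alice and bob turn out to be cousins, in both directions
```

A real SWRL rule would express the same chain of `hasParent`, `hasSibling` and `hasParent` properties declaratively, and a reasoner would perform the joins.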

SWRL and other proposed rule languages provide potential open standards for defining rules and publishing them to the Web so that other applications can use them. By combining these rules with rich semantic data, applications can start to do intelligent things without actually containing any of the intelligence themselves. The intelligence — the rules and data — can live “out there” on the Web, outside the code of the various applications.

All the applications have to know how to do is find relevant rules, interpret them, and apply them. Even the reasoning that may be necessary can be virtualized into remotely accessible Web services, so applications don’t even have to do that part themselves (although many may simply include open-source reasoners, in the same way that they include open-source databases or search engines today).

In other words, just as HTML enables any application to process and format any document on the Web, SWRL plus RDF/OWL may someday enable any application to reason about what a document discusses. Reasoning is the last frontier. By virtualizing reasoning — the axioms that experts use to reason about domains — we can really begin to store the building blocks of human intelligence and expertise on the Web in a universally accessible format. This, to me, is when the actual “Intelligent Web” (what I call Web 4.0) will emerge.

The value of this for groups and organizations is that they can start to distill their intelligence from individuals that comprise them into a more permanent and openly accessible form — axioms that live on the Web and can be accessed by everyone. For example, a technical support team for a product learns many facts and procedures related to their product over time. Currently this learning is stored as knowledge in some kind of tech support knowledgebase. But the expertise for how to find and apply this knowledge still resides mainly in the brains of the people who comprise the team itself.

The Semantic Web provides ways to enrich the knowledgebase as well as to start representing and saving the expertise that the people themselves hold in their heads, in the form of sets of axioms and procedures. By storing not just the knowledge but also the expertise about the product, the humans on the team don’t have to work as hard to solve problems — agents can actually start to reason about problems and suggest solutions based on past learning embodied in the common set of axioms. Of course this is easier said than done — but the technology at least exists in nascent form today. In a decade or more it will start to be practical to apply it.

Group Minds

Someday in the not-too-distant future, groups will be able to leverage hundreds or thousands of simple intelligent agents. These agents will work for them 24/7 to scour the Web, the desktop, the enterprise, and the other services and social networks they are related to. They will help both individuals and collectives as a whole. They will be our virtual digital assistants: always alert, looking for things that matter to us, finding patterns, learning on our behalf, reasoning intelligently, organizing our information, and then filtering it, visualizing it, summarizing it, and making recommendations so that we can see the big picture, drill in wherever we wish, and make decisions more productively.

Essentially these agents will give groups something like their own brains. Today the only brains in a group reside in the skulls of the people themselves. But in the future perhaps we will see these technologies enable groups to evolve their own meta-level intelligences: systems of agents reasoning on group expertise and knowledge.

This will be a fundamental leap to a new order of collective intelligence. For the first time groups will literally have minds of their own, minds that transcend the mere sum of the individual human minds that comprise their human, living facets. I call these systems “Group Minds” and I think they are definitely coming. In fact there has been quite a bit of research on the subject of facilitating group collaboration with agents, for example, in government agencies such as DARPA and the military, where finding ways to help groups think more intelligently is often a matter of life and death.

The big win from a future in which individuals and groups can leverage large communities of intelligent agents is that they will be better able to keep up with the explosive growth of information complexity and social complexity. As the saying goes, “it takes a village.” There is just too much information, and too many relationships, changing too fast — and this is only going to get more intense in the years to come. The only way to cope with such a distributed problem is a distributed solution.

Perhaps by 2030 it will not be uncommon for individuals and groups to maintain large numbers of virtual assistants — agents that will help them keep abreast of the massively distributed, always growing and shifting information and social landscapes. When you really think about it, how else could we ever solve this? It is really the only practical long-term solution. But today it is still a bit of a pipe dream; we’re not there yet. The key, however, is that we are closer than we’ve ever been before.

Conclusions

The Semantic Web provides the key enabling technology for all of this to happen someday in the future. By enriching the content of the Web, it first paves the way to a generation of smarter applications and more productive individuals, groups and organizations.

The next major leap will come when we begin to virtualize reasoning in the form of axioms that become part of the Semantic Web. This will enable a new generation of applications that can reason across information and services. It will ultimately lead to intelligent agents that can assist individuals, groups, social networks, communities, organizations and marketplaces so that they remain productive in the face of the astonishing information and social network complexity in our future.

By adding more knowledge into our information, the Semantic Web makes it possible for applications (and people) to use information more productively. By adding more intelligence between people, information, and applications, the Semantic Web will also enable people and applications to become smarter. In the future, these more intelligent apps will facilitate higher levels of individual and collective cognition by functioning as virtual intelligent assistants for individuals and groups (as well as for online services).

Once we begin to virtualize not just knowledge (semantics) but also intelligence (axioms), we will start to build Group Minds — groups that have primitive minds of their own. When we reach this point we will finally enable organizations to break past the Collective IQ Barrier: organizations will start to become smarter than the sum of their parts. The intelligence of an organization will not come just from its people; it will also come from its applications. The number of intelligent applications in an organization may outnumber the people by 1000 to 1, effectively amplifying each individual’s intelligence as well as the collective intelligence of the group.

Because software agents work all the time, can self-replicate when necessary, and are extremely fast and precise, they are ideally suited to sifting in parallel through the millions or billions of data records on the Web, day in and day out. Humans, and even groups of humans, will never be able to do this as well. And that’s not what they should be doing! They are far too intelligent for that kind of work. Humans should be at the top of the pyramid: making the decisions, innovating, learning, and navigating.

When we finally reach this stage, where networks of humans and smart applications are able to work together intelligently for common goals, I believe we will witness a real change in the way organizations are structured. In Group Minds, hierarchy will not be as necessary — the maximum effective size of a human Group Mind will be perhaps in the thousands or even the millions, instead of around 50 people. As a result, the shape of organizations in the future will be extremely fluid, and most organizations will be flat or continually shifting networks. For more on this kind of organization, read about virtual teams and networking, such as these books (by friends of mine who taught me everything I know about network-organization paradigms).

I would also like to note that I am not proposing “strong AI” — a vision in which we someday make artificial intelligences that are as or more intelligent than individual humans. I don’t think intelligent agents will individually be very intelligent. It is only in vast communities of agents that intelligence will start to emerge. Agents are analogous to the neurons in the human brain: they really aren’t very powerful on their own.

I’m also not proposing that Group Minds will be as or more intelligent than the individual humans in groups anytime soon. I don’t think that is likely in our lifetimes. The cognitive capabilities of an adult human are the product of millions of years of evolution. Even in the accelerated medium of the Web, where evolution can take place much faster in silico, it may still take decades or even centuries to evolve AI that rivals the human mind (and I doubt such AI will ever be truly conscious, which means that humans, with their inborn natural consciousness, may always play a special and exclusive role in the world to come — but that is the subject of a different essay). Even if they will not be as intelligent as individual humans, though, I do think that Group Minds, facilitated by masses of slightly intelligent agents and humans working in concert, can go a long way toward helping individuals and groups become more productive.

It’s important to note that the future I am describing is not science fiction, but it also will not happen overnight. It will take at least several decades, if not longer. But with the seemingly exponential rate of innovation, we may make very large steps in this direction very soon. It is going to be an exciting lifetime for all of us.

Diagram: Beyond Keyword (and Natural Language) Search

Here at Radar Networks we are working on practical ways to bring the Semantic Web to end-users. One of the interesting themes that has come up a lot, both internally and in discussions with VCs, is the coming plateau in the productivity of keyword search. As the Web gets increasingly large and complex, keyword search becomes less effective as a means of making sense of it; in the future its productivity may even decline. Natural language search will be somewhat better than keyword search, but ultimately won’t solve the problem either — because, like keyword search, it cannot really see or make use of the structure of information.

I’ve put together a new diagram showing how the Semantic Web will enable the next step-function in productivity on the Web. It’s still a work in progress and may change frequently for a bit, so if you want to blog it, please link to this post, or at least the .JPG image behind the thumbnail below so that people get the latest image. As always your comments are appreciated. (Click the thumbnail below for a larger version).

[Diagram: The future of productivity on the Web]

Today a typical Google search returns hundreds of thousands or even millions of results — but we only really look at the first page or two. What about all the results we never see? There is a lot of room to improve the productivity of search, and to help people deal with increasingly large collections of information.

Keyword search doesn’t understand the meaning of information, let alone its structure. Natural language search is a little better at understanding the meaning of information — but it still won’t help with the structure of information. To really improve productivity significantly as the Web scales, we will need forms of search that are data-structure-aware — that are able to search within and across data structures, not just unstructured text or semistructured HTML. This is one of the key benefits of the coming Semantic Web: it will enable the Web to be navigated and searched just like a database.

Starting with the "data web" enabled by RDF, OWL, ontologies and SPARQL, structured data is becoming increasingly accessible, searchable and mashable. This in turn sets the stage for a better form of search: semantic search. Semantic search combines the best of keyword, natural language, database and associative search capabilities together.
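To make the contrast concrete, here is a minimal sketch in plain Python of why structure-aware search can answer questions that keyword matching cannot. All the data and names below are invented for illustration; a real semantic search system would operate over RDF triples and answer such questions with SPARQL, but the underlying idea can be shown with simple (subject, predicate, object) tuples:

```python
# Toy illustration: keyword search over flat text vs. a structure-aware
# query over (subject, predicate, object) triples. Data is invented.

# The same facts, expressed two ways.
documents = [
    "Radar Networks is a company based in San Francisco.",
    "San Francisco is a city in California.",
]

triples = [
    ("RadarNetworks", "locatedIn", "SanFrancisco"),
    ("SanFrancisco", "locatedIn", "California"),
]

def keyword_search(docs, term):
    """Keyword search: substring matching, blind to meaning and structure."""
    return [d for d in docs if term.lower() in d.lower()]

def structured_search(facts, predicate, obj, max_hops=2):
    """Structure-aware search: follow a relation transitively across
    records — a chain of reasoning keyword search cannot express."""
    results, frontier = set(), {obj}
    for _ in range(max_hops):
        frontier = {s for (s, p, o) in facts if p == predicate and o in frontier}
        results |= frontier
    return results

# "What is located in California?" Keyword search finds only the one
# document that literally contains the word "California"...
print(keyword_search(documents, "California"))
# ...while the structured query also infers RadarNetworks, by chaining
# RadarNetworks -> SanFrancisco -> California.
print(structured_search(triples, "locatedIn", "California"))
```

The point of the sketch is that the second query exploits the *structure* of the data (a navigable relation between records), which is exactly what keyword and natural language search over unstructured text cannot do.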

Without the Semantic Web, productivity will plateau and then gradually decline as the Web, desktop and enterprise continue to grow in size and complexity. I believe that with the appropriate combination of technology and user-experience we can flip this around so that productivity actually increases as the size and complexity of the Web increase.

See Also: A Visual Timeline of the Past, Present and Future of the Web

New Findings Overturn our Understanding of How Neurons Communicate

Thanks to Bram for pointing me to this article about how new research indicates that communication in the brain is quite different than we thought. Essentially neurons may release neurotransmitters all along axons, not just within synapses. This may enable new forms of global communication or state changes within the brain, beyond the "circuit model" of neuronal signaling that has been the received view for the last 100 years. It also may open up a wide range of new drugs and discoveries in brain science.

Capturing Your Digital Life

Nice article in Scientific American about Gordon Bell’s work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web — additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.