The Next Generation of Web Search — Search 3.0

The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.

Web 1.0, the first decade of the Web (1989 – 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.

Web 2.0, the second decade of the Web (1999 – 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive “web of trust” to the search repertoire, enabling search engines to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Items that were most relevant both on a keyword level and in the social graph (closer and/or more popular in the graph) were considered more relevant. Thus results could be ranked according to their social value — how many people in the community liked them, and their current activity level — as well as by semantic relevancy measures.

In the coming third decade of the Web, Web 3.0 (2009 – 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.

Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.

Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than an expert would. Searchers may also want to emphasize things that are novel over things they have already seen, or that happened in the past — the more timely something is, the more relevant it may be.

These two themes — present and personal — will define the next great search experience.

To accomplish this, we need to make progress on a number of fronts.

First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.

Metadata reduces the need for computation in order to determine what content is about — it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.

This applies especially to the real-time Web, where, for example, short “tweets” of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. Of course, metadata also makes a dramatic difference in search of the larger non-real-time Web.
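To make the point concrete, here is a minimal sketch (the field names are invented for illustration, not any real metadata standard) of the difference between a bare micro-post and one carrying explicit, machine-understandable metadata:

    # A bare tweet-sized post: a search engine must infer what it is about.
    bare_post = "Amazing pie at Tony's!"

    # The same post with a little explicit metadata attached (illustrative
    # field names): no heavy natural-language processing is needed to know
    # what it refers to.
    annotated_post = {
        "text": "Amazing pie at Tony's!",
        "kind": "restaurant_review",                 # what type of content this is
        "about": "Tony's Pizzeria",                  # the real-world entity it refers to
        "location": {"lat": 37.77, "lon": -122.42},  # where it was posted
        "sentiment": "positive",
    }

    # A query like "positive restaurant reviews" becomes a simple field
    # lookup instead of a text-understanding problem.
    def is_positive_review(post):
        return (post.get("kind") == "restaurant_review"
                and post.get("sentiment") == "positive")

    print(is_positive_review(annotated_post))  # True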

In addition to metadata, search engines need to make their algorithms more personalized. Instead of a “one-size-fits-all” ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
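As a toy illustration of what that could look like (the weights and profile format here are invented, not a description of any shipping engine), a personalized ranker might blend a result's base keyword score with its affinity to the searcher's interest profile, so the same document scores differently for a novice than for an expert:

    # Blend keyword relevancy with the searcher's interest profile.
    # The 0.7/0.3 weights are arbitrary illustrations.
    def personalized_score(result, user_interests, w_base=0.7, w_personal=0.3):
        """result: {"base_score": float, "topics": set}; user_interests: {topic: weight}"""
        affinity = sum(user_interests.get(topic, 0.0) for topic in result["topics"])
        return w_base * result["base_score"] + w_personal * affinity

    novice = {"beginner-tutorials": 1.0}
    expert = {"research-papers": 1.0}
    doc = {"base_score": 0.8, "topics": {"beginner-tutorials"}}

    print(personalized_score(doc, novice))  # ~0.86 (boosted for the novice)
    print(personalized_score(doc, expert))  # ~0.56 (same document, same query, lower rank)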

Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened, but what happened recently and is trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what’s most important. Social graph analysis is a key tool for doing this, but powerful statistical analysis and new visualizations may also be required to make a compelling experience.
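A minimal sketch of such a ranking (the half-life and weights are arbitrary assumptions) might combine recency decay with activity velocity, so items that are both fresh and accelerating float to the top rather than merely the newest ones:

    import math
    import time

    def trending_score(posted_at, mentions_last_hour, mentions_prev_hour,
                       now=None, half_life_hours=6.0):
        """Score = recency decay x attention volume x upward velocity."""
        now = now if now is not None else time.time()
        age_hours = (now - posted_at) / 3600.0
        recency = 0.5 ** (age_hours / half_life_hours)       # halves every 6 hours
        volume = math.log1p(mentions_last_hour)              # damped raw attention
        velocity = (mentions_last_hour + 1) / (mentions_prev_hour + 1)  # >1 means rising
        return recency * volume * velocity

    # An hour-old item whose mentions doubled outranks a fresh but flat one.
    print(trending_score(time.time() - 3600, 200, 100))
    print(trending_score(time.time(), 50, 50))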

Sneak Peek – Siri — Interview with Tom Gruber

Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff

In Part One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:

Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?

Tom Gruber: A virtual personal assistant is a software system that

  • Helps the user find or do something (focus on tasks, rather than information)
  • Understands the user’s intent (interpreting language) and context (location, schedule, history)
  • Works on the user’s behalf, orchestrating multiple services and information sources to help complete the task

In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don’t do things for me – I have to use them as tools to do something, and I have to adapt to their ways of taking input.

Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?

Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time.  Apple’s famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT’s Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book “The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us”.  These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results.  These are hallmarks of the Siri assistant.  Some of the elements of these visions
are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator.  Or self-awareness a la Singularity.  But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.

Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)

Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual
assistant that helps people do things.  It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.

Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant.  Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.

Nova Spivack: What are the primary aspects of Siri that you would say are “novel”?

Tom Gruber: The demands of the consumer internet focus — instant usability and robust interaction with the evolving web — have driven us to come up with some new innovations:

  • A conversational interface that combines the best of speech and semantic language understanding with an interactive dialog that helps guide people toward saying what they want to do and getting it done. The conversational interface allows for much more interactivity than one-shot search-style interfaces, which aids usability and improves intent understanding.  For example, if Siri didn’t quite hear what you said, or isn’t sure what you meant, it can ask for clarifying information.  For instance, it can prompt on ambiguity: did you mean pizza restaurants in Chicago or Chicago-style pizza places near you? It can also make reasonable guesses based on context. Walking around with the phone at lunchtime, if the speech interpretation comes back with something garbled about food, you probably meant “places to eat near my current location”. If this assumption isn’t right, it is easy to correct in a conversation.
  • Semantic auto-complete – a combination of the familiar “autocomplete” interface of search boxes with a semantic and linguistic model of what might be worth saying. The so-called “semantic completion” makes it possible to rapidly state complex requests (Italian restaurants in the SOMA neighborhood of San Francisco that have tables available tonight) with just a few clicks. It’s sort of like the power of faceted search a la Kayak, but packaged in a clever command line style interface that works in small form factor and low bandwidth environments.
  • Service delegation – Siri is particularly deep in technology for operationalizing a user’s intent into computational form, dispatching to multiple, heterogeneous services, gathering and integrating results, and presenting them back to the user as a set of solutions to their request.  In a restaurant selection task, for instance, Siri combines information from many different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and the user’s own favorites) to show a set of candidates that meet the intent expressed in the user’s natural language request. (A schematic sketch of this dispatch-and-merge pattern follows this list.)
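Here is a schematic sketch of that dispatch-and-merge pattern. To be clear, this is not Siri's code; the three stub "services" and the merge rule are invented to show the shape of the orchestration:

    def directory_search(cuisine, near):      # stand-in for a local business directory
        return [{"id": 1, "name": "Trattoria Uno"}, {"id": 2, "name": "Pasta Due"}]

    def review_lookup(candidates):            # stand-in for a review source
        return [{"id": c["id"], "name": c["name"], "rating": 3.5 + c["id"]}
                for c in candidates]

    def reservation_check(candidates, when):  # stand-in for a reservation service
        return {1}                            # ids with a table available

    def find_restaurants(intent):
        """intent: a structured request, e.g. {"cuisine": ..., "near": ..., "when": ...}"""
        # Fan the structured intent out to multiple heterogeneous services...
        candidates = directory_search(intent["cuisine"], intent["near"])
        reviews = {r["id"]: r for r in review_lookup(candidates)}
        bookable = reservation_check(candidates, intent["when"])
        # ...then integrate the results into a ranked set of solutions.
        solutions = [dict(reviews[c["id"]], table_available=True)
                     for c in candidates if c["id"] in bookable]
        return sorted(solutions, key=lambda r: r["rating"], reverse=True)

    print(find_restaurants({"cuisine": "italian", "near": (37.78, -122.41),
                            "when": "tonight"}))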

Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?

Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:

  • Task focus. Siri is very focused on a bounded set of specific human tasks, like finding something to do, going out with friends, and getting around town.  This task focus allows it to have a very rich model of its domain of competence, which makes everything more tractable, from language understanding to reasoning to service invocation and results presentation.
  • Structured data focus. The kinds of tasks that Siri is particularly good at involve semistructured data, usually with multiple criteria and drawing from multiple sources.  For example, to help find a place to eat, user preferences for cuisine, price range, location, or even specific food items come into play.  Combining results from multiple sources requires reasoning about domain entity identity and the relative capabilities of different information providers.  These are hard problems of semantic information processing and integration, but they are feasible today using the latest AI technologies.
  • Architecture focus. Siri is built from deep experience in integrating multiple advanced technologies into a platform designed expressly for virtual assistants. Siri co-founder Adam Cheyer was chief architect of the CALO project, and has applied a career of experience to design the platform of the Siri product. Leading the CALO project taught him a lot about what works and doesn’t when applying AI to build a virtual assistant. Adam and I also have rather unique experience in combining AI with intelligent interfaces and web-scale knowledge integration. The result is a “pure  play” dedicated architecture for virtual assistants, integrating all the components of intent understanding, service delegation, and dialog flow management. We have avoided the need to solve general AI problems by concentrating on only what is needed for a virtual assistant, and have chosen to begin with a
    finite set of vertical domains serving mobile use cases.

Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?

Tom Gruber: Rather than trying to be like a search engine to all the world’s information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface.  The smaller the form factor, the more mobile the context, the more limited the bandwidth: the more important it is that the interface make intelligent use of the user’s attention and the resources at hand.  In other words, “smaller needs to be smarter.”  And the benefits of being offered just the right level of detail, or being prompted with just the right questions, can make the difference between task completion and failure.  When you are on the go, you just don’t have time to wade through pages of links and disjointed interfaces, many of which are not suitable for mobile at all.

Nova Spivack: What language and platform is Siri written in?

Tom Gruber: Java, Javascript, and Objective C (for the iPhone)

Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?

Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards.  A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier.  For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic
Web endpoint, and that makes it easy to deal with.  The more the API declares its data model, the more automated we can make our coupling to it.
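As a concrete illustration of consuming such an endpoint (using the SPARQLWrapper library and DBpedia's public SPARQL endpoint purely as examples; the interview mentions geonames.org, whose actual API differs), a geospatial lookup of the kind Gruber describes might look like:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Ask a public Semantic Web endpoint for the coordinates of a named place.
    # Because the endpoint declares its data model (here, the shared WGS84 geo
    # vocabulary), the client needs no custom scraping or parsing glue.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
        SELECT ?lat ?long WHERE {
            <http://dbpedia.org/resource/Galway> geo:lat ?lat ; geo:long ?long .
        }
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["lat"]["value"], row["long"]["value"])

The more the endpoint declares about its data model, the less of this coupling code has to be written by hand, which is exactly the point made above.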

Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?

Tom Gruber: Siri’s knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models.  As much as possible we represent things declaratively (i.e., as data in models, not lines of code).  This is a tried and true best practice for complex AI systems.  This makes the whole system more robust and scalable, and the development process more agile.  It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
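A toy illustration of the "declarative, not code" principle (these patterns are invented, not Siri's actual models): the domain knowledge lives in data that a small generic engine interprets, so extending the system means adding data rather than rewriting logic:

    # Domain knowledge as data: adding a new domain means adding an entry here.
    INTENT_PATTERNS = [
        {"intent": "find_food",  "keywords": {"eat", "restaurant", "pizza", "dinner"}},
        {"intent": "find_event", "keywords": {"movie", "show", "concert", "tickets"}},
    ]

    def recognize_intent(utterance):
        """A generic matcher; everything domain-specific is in INTENT_PATTERNS."""
        words = set(utterance.lower().split())
        best = max(INTENT_PATTERNS, key=lambda p: len(words & p["keywords"]))
        return best["intent"] if words & best["keywords"] else None

    print(recognize_intent("where can I eat pizza tonight"))  # find_food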


Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs, linked data, RDF, etc. available)?

Tom Gruber: Siri isn’t a source of data, so it doesn’t expose data using Semantic Web standards.  In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop – an intelligent interface that knows about user needs
and sources of information to meet those needs, and intermediates.  The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.).  The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data.  For example, if a virtual assistant wants to schedule a dinner it needs more than the information
about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies.  That is the original purpose of ontologies-as-specification that I promoted in the
1990s – to help specify how to interact with these agents via knowledge-level APIs.

Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication.  As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.

All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text.  So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.

Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?

Tom Gruber: Siri’s top line measure of success is task completion (not relevance).  A subtask is intent recognition, and a subtask of that is NLP.  Speech is another element, which couples to NLP and adds its own issues.  In this context, Siri’s NLP is “pretty darn good” — if the user is talking about something in Siri’s domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single-finger typing, and bad habits from too much keywordese.  All NLP is tuned for some class of natural language, and Siri’s is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don’t know how it would compare to the standard message and news corpora used by the NLP research community.


Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?

Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.

Nova Spivack: Will Siri be able to talk back to users at any point?

Tom Gruber: It could use speech synthesis for output, for the appropriate contexts.  I have a long standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone.  For the blind, embedded appliances, and other applications it would make sense to give Siri voice output.

Nova Spivack: Can you give me more examples of how the NLP in Siri works?

Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)

Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?

Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time.  As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live.  Siri doesn’t forget things from request to request, the way stateless systems like search engines do. It always considers the user model along with the domain and task models when coming up with results.  The evolution in learning comes as users build a history with Siri, which gives it a chance to make some generalizations about preferences.  There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.

Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?

Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes.  Siri knows about the data because we (humans) explicitly model what is in those sources.  With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request.  For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
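A minimal sketch of that run-time reasoning (the capability descriptions are invented): each source declares what it can provide, and a planner picks a combination that covers the request:

    # Declarative capability models for each source, maintained by humans.
    SOURCES = [
        {"name": "biz_directory", "provides": {"name", "address", "cuisine"}},
        {"name": "review_site",   "provides": {"rating", "reviews"}},
        {"name": "booking_api",   "provides": {"availability"}},
    ]

    def plan_sources(required_fields):
        """Greedy cover: pick sources until every requested field is provided."""
        plan, missing = [], set(required_fields)
        for src in SOURCES:
            if src["provides"] & missing:
                plan.append(src["name"])
                missing -= src["provides"]
        return plan if not missing else None  # None: the request cannot be served

    print(plan_sources({"cuisine", "rating", "availability"}))
    # ['biz_directory', 'review_site', 'booking_api']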


Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.

Tom Gruber: Thank you, Nova, it’s a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It’s easy to project intelligence onto an assistant, but Siri isn’t going to pass the Turing Test. It’s just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.

Video: My Talk on The Future of Libraries — "Library 3.0"

If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event centered on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have about 500 very vocal library directors in the audience, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!

Twine's Explosive Growth

Twine has been growing at 50% per month since launch in October. We've been keeping that quiet while we wait to see if it holds. VentureBeat just noticed and did an article about it. It turns out our January numbers are higher than Compete.com estimates and February is looking strong too. We have a slew of cool viral features coming out in the next few months too as we start to integrate with other social networks. Should be an interesting season.

Fast Company Interview — "Connective Intelligence"

In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.

Interest Networks are at a Tipping Point

UPDATE: There’s already a lot of good discussion going on around this post in my public twine.

I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.

In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.

At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem — they deliver substance.

So, what is an interest network?

In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.

Twine, for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.

I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more, important relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.

This is where interest networks come in. It’s still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivering the best of that world to you, every day.

We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:

What all of these interest networks have in common is some sort of a bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question about how we propose to index the entire Web (the answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead).

Most interest networks exhibit the following characteristics as well:

  • They have some sort of bookmarking/submission/markup function to store and map data (often using existing metaphors, even if what’s under the hood is new)
  • They also have some sort of social sharing function to provide the network benefit (this isn’t exclusive to interest networks, obviously, but it is characteristic)
  • And in most cases, interest networks look to add some sort of “smarts” or “recommendations” capability to the mix (that is, you get more out than you put in)

This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.

To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.

At first, Twine begins to enrich your data with semantic tags and links to related content via our recommendation engine, which learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically — adding it to your personal or group search engine, and further fleshing out the semantic graph of your interests, which in turn results in even more relevant recommendations.

The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.

Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.

Six months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.

I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts — Carla, Jeremiah, and others, are you listening?

Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.

Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”

Now that anyone can join, it will be fun and gratifying to watch Twine grow.

Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.

Stay tuned!

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

The Future of the Desktop

This is an older version of this article. The most recent version is located here:

http://www.readwriteweb.com/archives/future_of_the_desktop.php

—————

I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about and some answers I’ve come up with so far.

(Author’s Note: This is a raw, first-draft of what I think it will be like. Please forgive any typos — I am still working on this and editing it…)

What Will Happen to the Desktop?

As we enter the third decade of the Web we are seeing an increasing shift from local desktop applications towards Web-hosted software-as-a-service (SaaS). The full range of standard desktop office tools (word processors, spreadsheets, presentation tools, databases, project management, drawing tools, and more) can now be accessed as Web-hosted apps within the browser. The same is true for an increasing range of enterprise applications. This process seems to be accelerating.

As more kinds of applications become available in Web-based form, the Web browser is becoming the primary framework in which end-users work and interact. But what will happen to the desktop? Will it too eventually become a Web-hosted application? Will the Web browser swallow up the desktop? Where is the desktop headed?

Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?

No. There have already been several attempts at doing this — and they never catch on. People don’t want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.

Partly this is due to the difference in user experience between using files and folders on a local machine and doing that in “simulated” fashion via some Flash-based or HTML-based imitation of a desktop. Imitation desktops to date have been, at best, clunky and slow copies of the real thing. Others have been overly slick. But one thing they all have in common: none of them have nailed it. The desktop of the future – what some have called “the Webtop” – has yet to be invented.

It’s going to be a hosted web service

Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there will have to be some kind of interface that we consider to be our personal “home” and “workspace” — but ultimately it will have to be a unified space that all our devices connect to and share. This requires that it be a hosted online service.

Currently we have different information spaces on different devices (laptop, mobile device, PC). These will merge. Native local clients could be created for various devices, but ultimately the simplest and therefore most likely choice is to just use the browser as the client. This coming “Webtop” will provide an interface to your local devices, applications and information, as well as to your online life and information.

Today we think of our Web browser as an application running inside our desktop. But actually it will be the other way around in the future: our desktop will run inside our browser as an application.

Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it either as a browser add-in or as a web-based application.

The Web 3.0 desktop is going to be completely merged with the Web — it is going to be part of the Web. In fact there may eventually be no distinction between the desktop and the Web anymore.

The focus shifts from information to attention

As our digital lives shift from being focused on the old fashioned desktop to the Web environment we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (feeds, lifestreams, microblogs, timelines, etc.).

Instead of being just a directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing in and out of the user’s environment. The interface will be tuned to help the user understand what the trends are, rather than just on how things are organized.

The focus will be on helping the user to manage their attention rather than just their information. This is a leap to the meta-level: A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).

Users are going to shift from acting as librarians to acting as daytraders.

Our digital roles are already shifting from acting as librarians to becoming more like daytraders. In the PC era we were all focused on trying to manage the stuff on our computers — in other words, we were acting as librarians. But this is going to shift. Librarians organize stuff, but daytraders are focused on discovering and keeping track of trends. It’s a very different focus and activity, and it’s what we are all moving towards.

We are already spending more of our time keeping up with change and detecting trends, than on organizing information. In the coming decade the shelf-life of information is going to become vanishingly short and the focus will shift from storage and recall to real-time filtering, trend detection and prediction.

The Webtop will be more social and will leverage and integrate collective intelligence

The Webtop is going to be more socially oriented than desktops of today — it will have built-in messaging and social networking, as well as social-media sharing, collaborative filtering, discussions, and other community features.

The social dimension of our lives is becoming perhaps our most important source of information. We get information via email from friends, family and colleagues. We get information via social networks and social media sharing services. We co-create information with others in communities.

The social dimension is also starting to play a more important role in our information management and discovery activities. Instead of remaining solitary, these activities are becoming more communal. For example, many social bookmarking and social news sites use community sentiment and collaborative filtering to help highlight what is most interesting, useful or important.

It’s going to have powerful semantic search and social search capabilities built-in

The Webtop is going to have more powerful search built-in. This search will combine both social and semantic search features. Users will be able to search their information and rank it by social sentiment (for example, “find documents about x and rank them by how many of my friends liked them.”)

Semantic search will enable highly granular search and navigation of information along a potentially open-ended range of properties and relationships.

For example, you will be able to search in a highly structured way — say, for products you once bookmarked that have a price of $10.95 and are on sale this week, or for documents you read in the last month that were authored by Sue and related to project X.
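A sketch of the structured queries behind those examples (the item schema is invented): because every item carries typed properties, each search is a precise filter rather than a keyword guess:

    from datetime import date, timedelta

    items = [
        {"type": "product", "title": "Travel mug", "price": 10.95,
         "on_sale": True, "bookmarked": True},
        {"type": "document", "title": "Project X status", "author": "Sue",
         "project": "X", "read_on": date.today() - timedelta(days=10)},
    ]

    # "Products I once bookmarked, priced $10.95, on sale this week."
    products = [i for i in items if i["type"] == "product"
                and i.get("bookmarked") and i["price"] == 10.95 and i.get("on_sale")]

    # "Documents I read in the last month, authored by Sue, related to project X."
    docs = [i for i in items if i["type"] == "document"
            and i.get("author") == "Sue" and i.get("project") == "X"
            and i["read_on"] >= date.today() - timedelta(days=30)]

    print(len(products), len(docs))  # 1 1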

The semantics of the future desktop will be open-ended. That is to say, users as well as other application and information providers will be able to extend it with custom schemas and new data types, and to attach custom fields to any piece of information.

Interactive shared spaces instead of folders

Forget about shared folders — that is an outmoded paradigm. Instead, the new metaphor will be interactive shared spaces.

The need for shared community space is currently being provided for online by forums, blogs, social network profile pages, wikis, and new community sites. But as we move into Web 3.0 these will be replaced by something that combines their best features into one. These next-generation shared spaces will be like blogs, wikis, communities, social networks, databases, workspaces and search engines in one.

Any group of two or more individuals will be able to participate in a shared space that connects their desktops for a particular purpose. These new shared spaces will not only provide richer semantics in the underlying data, social network, and search, but they will also enable groups to seamlessly and collectively add, organize, track, manage, discuss, distribute, and search for information of mutual interest.

The personal cloud

The future desktop will function like a “personal cloud” for users. It will connect all their identities, data, relationships, services and activities in one virtual integrated space. All incoming and outgoing activity will flow through this space. All applications and services that a user makes use of will connect to it.

The personal cloud may not have a center, but rather may be comprised of many separate sub-spaces, federated around the Web and hosted by different service-providers. Yet from an end-user perspective it will function as a seamlessly integrated service. Users will be able to see and navigate all their information and applications, as if they were in one connected space, regardless of where they are actually hosted. Users will be able to search their personal cloud from any point within it.

Open data, linked data and open-standards based semantics

The underlying data in the future desktop, and in all associated services it connects, will be represented using open-standard data formats. Not only will the data be open, but the semantics of the data – the schema – will also be defined in an open way. The emerging Semantic Web provides a good infrastructure for enabling this to happen.

The value of open linked-data and open semantics is that data will not be held prisoner anywhere and can easily be integrated with other data.

Users will be able to seamlessly move and integrate their data, or parts of their data, in different services. This means that your Webtop might even be portable to a different competing Webtop provider someday. If and when that becomes possible, how will Webtop providers compete to add value?

It’s going to be smart

One of the most important aspects of the coming desktop is that it’s going to be smart. It’s going to learn and help users to be more productive. Artificial intelligence is one of the key ways that competing Webtop providers will differentiate their offerings.

As you use it, it’s going to learn about your interests, relationships, current activities, information and preferences. It will adaptively self-organize to help you focus your attention on what is most important to whatever context you are in.

When you are reading something while on a trip to Milan, it may organize itself to be more relevant to that time and place. When you later return home to San Francisco, it will automatically adapt and shift to your home context. When you do a lot of searches about a certain product, it will realize your context and intent have to do with that product and will adapt to help you with that activity for a while, until your behavior changes.

Your desktop will actually be a semantic knowledge base on the back-end. It will encode a rich semantic graph of your information, relationships, interests, behavior and preferences. You will be able to permit other applications to access part or all of your graph to datamine it and provide you with value-added views and even automated intelligent assistance.

For example, you might allow an agent that cross-links things to see all your data: it would go and add cross links to relevant things onto all the things you have created or collected. Another agent that makes personalized buying recommendations might only get to see your shopping history across all shopping sites you use.
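A toy sketch of that permissioning (the scopes and the triple format are invented): each agent is granted a view onto the graph rather than the graph itself:

    # A personal semantic graph, with each statement tagged by a privacy scope.
    personal_graph = [
        ("me", "bought",       "camera",     "shopping"),
        ("me", "read",         "article-42", "reading"),
        ("me", "friends_with", "sue",        "social"),
    ]

    # Per-agent grants: the cross-linking agent sees everything,
    # the buying recommender sees only shopping history.
    GRANTS = {
        "crosslink_agent":      {"shopping", "reading", "social"},
        "shopping_recommender": {"shopping"},
    }

    def view_for(agent):
        allowed = GRANTS.get(agent, set())
        return [(s, p, o) for s, p, o, scope in personal_graph if scope in allowed]

    print(view_for("shopping_recommender"))  # [('me', 'bought', 'camera')]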

Your desktop may also function as a simple personal assistant at times. You will be able to converse with your desktop eventually — through a conversational agent interface. While on the road you will be able to email or SMS in questions to it and get back immediate intelligent answers. You will even be able to do this via a voice interface.

For example, you might ask, “where is my next meeting?” or “what Japanese restaurants do I like in LA?” or “What is Sue Smith’s phone number?” and you would get back answers. You could also command it to do things for you — like reminding you to do something, or helping you keep track of an interest, or monitoring for something and alerting you when it happens.

Because your future desktop will connect all the relationships in your digital life — relationships connecting people, information, behavior, preferences and applications — it will be the ultimate place to learn about your interests and preferences.

Federated, open policies and permissions

This rich graph of meta-data that comprises your future desktop will enable the next generation of smart services to learn about you and help you in an incredibly personalized manner. It will also, of course, be rife with potential for abuse, and privacy will be a major concern.

One of the biggest enabling technologies that will be necessary is a federated model for sharing meta-data about policies and permissions on data. Information that is considered personal and private on Web site X should be recognized and treated as such by other applications and websites you choose to share that information with. This will require a way of sharing meta-data about your policies and permissions between the different accounts and applications you use.

The semantic web provides a good infrastructure for building and deploying a decentralized framework for policy and privacy integration, but it has yet to be developed, let alone adopted. For the full vision of the future desktop to emerge a universally accepted standard for exchanging policy and permission data will be a necessary enabling technology.
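A sketch of what such an exchange format might look like (the vocabulary is invented; as noted above, no accepted standard exists yet): the policy travels with the data, and a receiving application checks it before reuse instead of applying its own defaults:

    # A record whose policy metadata travels with it between applications.
    record = {
        "data": {"email": "nova@example.org"},
        "policy": {
            "visibility": "private",
            "share_with": ["site-x"],   # applications allowed to receive the record
            "purposes": ["contact"],    # uses the owner has consented to
        },
    }

    def may_use(record, app, purpose):
        """A receiving app honors the declared policy, not its own defaults."""
        policy = record["policy"]
        if policy["visibility"] == "private" and app not in policy["share_with"]:
            return False
        return purpose in policy["purposes"]

    print(may_use(record, "site-x", "contact"))        # True
    print(may_use(record, "ad-network", "marketing"))  # False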

Who is most likely to own the future desktop?

When I think about what the future desktop is going to look like it seems to be a convergence of several different kinds of services that we currently view as separate.

It will be hosted on the cloud and accessible across all devices. It will place more emphasis on social interaction, social filtering, and collective intelligence. It will provide a very powerful and extensible data model with support for both unstructured and arbitrarily structured information. It will enable almost peer-to-peer like search federation, yet still have a unified home page and user-experience. It will be smart and personalized. It will be highly decentralized yet will manage identity, policies and permissions in an integrated cohesive and transparent manner across services.

By cobbling together a number of different services that exist today you could build something like this in a decentralized fashion. Is that how the desktop of the future will come about? Or will it be a new application provided by one player with a lot of centralized market power? Or could an upstart suddenly emerge with the key enabling technologies to make this possible? It’s hard to predict, but one thing is certain: it will be an interesting process to watch.

Great Collective Intelligence Book; Includes a Chapter I Wrote

I highly recommend this new book on Collective Intelligence. It features chapters by a Who’s Who of thinkers on Collective Intelligence, including a chapter by me about “Harnessing the Collective Intelligence of the World Wide Web.”

Here is the full-text of my chapter, minus illustrations (the rest of the book is great and I suggest you buy it to have on your shelf. It’s a big volume and worth the read):


My Visit to DERI — World's Premier Semantic Web Research Institute

Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.

DERI has become the world’s premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what’s happening there.

DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, it has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:

  • Semantic Web Search Engine (SWSE) and YARS, a massively scalable triplestore.  These projects are concerned with crawling and indexing the information on the Semantic Web so that end-users can find it. They have done good work on consolidating data and also on building a highly scalable triplestore architecture.
  • Sindice — An API and search infrastructure for the Semantic Web. This project is focused on providing a rapid indexing API that apps can use to get their semantic content indexed, and that can also be used by apps to do semantic searches and retrieve semantic content from the rest of the Semantic Web. Sindice provides Web-scale semantic search capabilities to any semantic application or service.
  • SIOC — Semantically Interlinked Online Communities. This is an ontology for linking and sharing data across online communities in an open manner, that is getting a lot of traction. SIOC is on its way to becoming a standard and may play a big role in enabling portability and interoperability of social Web data.
  • JeromeDL is developing technology for semantically enabled digital libraries. I was impressed with the powerful faceted navigation and search capabilities they demonstrated.
  • notitio.us is a project for personal knowledge management of bookmarks and unstructured data.
  • SCOT, OpenTagging and Int.ere.st.  These projects are focused on making tags more interoperable, and for generating social networks and communities from tags. They provide a richer tag ontology and framework for representing, connecting and sharing tags across applications.
  • Semantic Web Services.  One of the big opportunities for the Semantic Web that is often overlooked by the media is Web services. Semantics can be used to describe Web services so they can find one another and connect, and even to compose and orchestrate transactions and other solutions across networks of Web services, using rules and reasoning capabilities. Think of this as dynamic semantic middleware, with reasoning built-in.
  • eLite. I was introduced to the eLite project, a large e-learning initiative that is applying the Semantic Web.
  • Nepomuk.  Nepomuk is a large effort supported by many big industry players. They are making a social semantic desktop and a set of developer tools and libraries for semantic applications that are being shipped in the Linux KDE distribution. This is a big step for the Semantic Web!
  • Semantic Reality. Last but not least, and perhaps one of the most eye-opening demos I saw at DERI, is the Semantic Reality project. They are using semantics to integrate sensors with the real world. They are creating an infrastructure that can scale to handle trillions of sensors eventually. Among other things I saw, you can ask things like "where are my keys?" and the system will search a network of sensors and show you a live image of your keys on the desk where you left them, and even give you a map showing the exact location. The service can also email you or phone you when things happen in the real world that you care about — for example, if someone opens the door to your office, or a file cabinet, or your car, etc. Very groundbreaking research that could seed an entire new industry.

In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI — they are clearly the leader in the space.

Artificial Stupidity: The Next Big Thing

There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don’t need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I’m skeptical to say the least. I don’t need or want artificial intelligence.

No, what I really need is artificial stupidity.

I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks — like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.

The human brain is the result of millions of years of evolution. It’s already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don’t require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it’s going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.

The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don’t mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren’t good at." In fact humans are really bad at doing relatively simple, "stupid" things — tasks that don’t require much intelligence at all.

For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and we can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That’s what computers are for – or should be for at least.

Humans are really good at higher level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving — but we are just terrible at managing email, or making sense of the Web. Let’s play to our strengths and use computers to compensate for our weaknesses.

I think it’s time we stop talking about artificial intelligence — which nobody really needs, and fewer will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.

Powerpoint Deck: Making Sense of the Semantic Web, and Twine

Now that I have been asked by several dozen people for the slides from my talk on "Making Sense of the Semantic Web," I guess it’s time to put them online. So here they are, under the Creative Commons Attribution License (you can share them with attribution to this site).

You can download the Powerpoint file at the link below:

Download nova_spivack_semantic_web_talk.ppt


Or you can view it right here:

Enjoy! And I look forward to your thoughts and comments.

Quick Video Preview of Twine

The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it’s a sneak peek.

We’ve been letting early beta testers into Twine and we’re learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We’re letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We’re getting some very helpful user feedback to make Twine better before we open it up to the world.

For now, here’s a quick video preview:

True Knowledge is Cool

The most interesting and exciting new app I’ve seen this month (other than Twine of course!) is a new semantic search engine called True Knowledge. Go to their site and watch their screencast to see what the next generation of search is really going to look like.

True Knowledge is doing something very different from Twine — whereas Twine is about helping individuals, groups and teams manage their private and shared knowledge, True Knowledge is about making a better public knowledgebase on the Web — in a sense they are a better search engine combined with a better Wikipedia. They seem to overlap more with what is being done by natural language search companies like Powerset and companies working on public databases, such as Metaweb and Wikia.

I don't yet know whether True Knowledge supports the W3C open standards for the Semantic Web, but if they do, they will be well-positioned to become a very central service in the next phase of the Web. If they don't, they will just be yet another silo of data — but a very useful one at least. I personally hope they provide SPARQL API access at the very least. Congratulations to the team at True Knowledge! This is a very impressive piece of work.

A Video and an Audio Cast About Twine

Last night I saw that the video of my presentation of Twine at the Web 2.0 Summit is online. My session, "The Semantic Edge," featured Danny Hillis of Metaweb demoing Freebase, Barney Pell demoing Powerset, and me demoing Twine, followed by a brief panel discussion with Tim O'Reilly (in that order). It's a good panel and I recommend the video; however, the folks at Web 2.0 only filmed the presenters, and they didn't capture what we were showing on our screens, so you have to use your imagination as we describe our demos.

An audio cast of one of my presentations about Twine to a reporter was also put online recently, for a more in-depth description.

What a Week!

What a week it has been for Radar Networks. We have worked so hard these last few days to get ready to unveil Twine, and it has been a real thrill to show our work and get such positive feedback and support from the industry, bloggers, the media and potential users.

We really didn’t expect so much excitement and interest. In fact we’ve been totally overwhelmed by the response as thousands upon thousands of people have contacted us in the last 24 hours asking to join our beta, telling us how they would use Twine for their personal information management, their collaboration, their organizations, and their communities. Clearly there is such a strong and growing need out there for the kind of Knowledge Networking capabilities that Twine provides, and it’s been great to hear the stories and make new connections with so many people who want our product. We love hearing about your interest in Twine, what you would use it for, what you want it to do, and why you need it! Keep those stories coming. We read them all and we really listen to them.

Today, in unveiling Twine, over five years of R&D and the contributions of dozens of core contributors, a dedicated group of founders and investors, and hundreds of supporters, advisors, friends and family all came to fruition. As a company, and a team, we achieved an important milestone and we should all take some time to really appreciate what we have accomplished so far. Twine is a truly ambitious and paradigm-shifting product that is not only technically profound but visually stunning — there has been so much love and attention to detail in this product.

In the last six months, Twine has really matured into a product, one that solves real and growing needs (for a detailed use-case see this post). And just as our product has matured, so has our organization: as we have doubled in size, our corporate culture has become tremendously more interesting, innovative and fun. I could go on and on about the cool things we do as a company and the interesting people who work here. But it's the passion, dedication and talent of this team that is most inspiring. We are creating a team and a culture that truly has the potential to become a great Silicon Valley company: the kind of company that I've always wanted to build.

Although we launched today, this is really just the beginning of the real adventure. There is still much for us to build, learn about, and improve before Twine will really accomplish all the goals we have set out for it. We have a five-year roadmap. We know this is a marathon, not a sprint, and that "slow and steady wins the race." As an organization we also have much learning and growing to do. But this really doesn't feel like work — it feels like fun — because we all love this product and this company. We all wake up every day totally psyched to work on this.

It’s been an intense, challenging, and rewarding week. Everyone on my team has impressed me and really been at the top of their game. Very few of us got any real sleep, and most of us went far beyond the call of duty. But we did it, and we did it well. As a company we have never cut corners, and we have always preferred to do things the right way, even if the right way is the hard way. But that pays off in the end. That is how great products are built. I really want to thank my co-founders, my team, my investors, advisors, friends, and family, for all their dedication and support.

Today, we showed our smiling new baby to the world, and the world smiled back.

And tonight, we partied!!!

Radar Networks Announces Twine.com

My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: It's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now, so I'll be brief for now. I'll write a lot more about this later.


Radar Networks Coming Out of Stealth – Friday, October 19

News Flash!

My company, Radar Networks, is coming out of stealth this Friday, October 19, 2007 at the Web 2.0 Summit, in San Francisco. I’ll be speaking on "The Semantic Edge Panel" at 4:10 PM, and publicly showing our Semantic Web online service for the first time. If you are planning to come to Web 2.0, I hope to see you at my panel.

Here's the official Media Alert below:

(PRWEB) October 15, 2007 — At the Web 2.0 Summit on October 19th, Radar Networks will announce a revolutionary new service that uses the power of the emerging Semantic Web to enable a smarter way of sharing, organizing and finding information. Founder and CEO Nova Spivack will also give the first public preview of Radar's application, which is one of the first examples of "Web 3.0" – the next generation of the Web, in which the Web begins to function more like a database, and software grows more intelligent and helpful.

Join Nova as he participates in "The Semantic Edge" panel discussion with esteemed colleagues including Powerset's Barney Pell and Metaweb's Daniel Hillis, moderated by Tim O'Reilly.

Who: Radar Networks Founder and CEO Nova Spivack

When: Friday, October 19, 2007, 4:10 – 4:55 p.m.

Where: Web 2.0 Summit, Palace Hotel, Grand Ballroom, 2 New Montgomery Street, San Francisco, California 94105

The Semantic Web, Collective Intelligence and Hyperdata

I'm posting this in response to a recent post by Tim O'Reilly which focused on disambiguating what the Semantic Web is and is not, as well as on the subject of collective intelligence. I generally agree with Tim's post, but I have a few points to add by way of clarification. In particular, in my opinion, the Semantic Web is all about collective intelligence, on several levels. I would also suggest that the term "hyperdata" may be a useful way to express what the Semantic Web is really all about.

What Makes Something a Semantic Web Application?

I agree with Tim that the term "Semantic Web" refers to the use of a particular set of emerging W3C open standards. These standards include RDF, OWL, SPARQL, and GRDDL. A key requirement for an application to have "Semantic Web inside," so to speak, is that it makes use of, or is at least compatible with, basic RDF. An alternative definition is that for an application to be "Semantic Web" it must make at least some use of an ontology, using a W3C standard for doing so.

Semantic Versus Semantic Web

Many applications and services claim to be "semantic" in one manner or another, but that does not mean they are "Semantic Web." Semantic applications include any applications that can make sense of meaning, particularly in language such as unstructured text, or structured data in some cases. By this definition, all search engines today are somewhat "semantic" but few would qualify as "Semantic Web" apps.

The Difference Between "Data On the Web" and a "Web of Data"

The Semantic Web is principally about working with data in a new and hopefully better way, and making that data available on the Web, if desired, in an open fashion such that other applications can understand and reuse it more easily. We call this idea "The Data Web" — the notion is that we are transforming the Web from a distributed file server into something that is more like a distributed database.

Instead of the basic objects being web pages, they are actually pieces of data (triples) and records formed from them (sets, trees, graphs or objects comprised of triples). There can be any number of triples within a Web page, and there can also be triples on the Web that do not exist within Web pages at all — they can come directly from databases for example.

One might respond to this by noting that there is already a lot of data on the Web, in XML and other formats — how is the Semantic Web different from that? What is the difference between "Data on the Web" and the idea of "The Data Web?"

The best answer to this question that I have heard was something that Dean Allemang said at a recent Semantic Web SIG in Palo Alto. Dean said, "Sure there is data on the Web, but it’s not actually a web of data."  The difference is that in the Semantic Web paradigm, the data can be linked to other data in other places, it’s a web of data, not just data on the Web.

I call this concept of interconnected data "hyperdata." It does for data what hypertext did for text. I'm probably not the originator of this term, but I think it is a very useful term and analogy for explaining the value of the Semantic Web.

Another way to think of it is that the current Web is a big graph of interconnected nodes, where the nodes are usually HTML documents, but in the Semantic Web we are talking about a graph of interconnected data statements that can be as general or specific as you want. A data record is a set of data statements about the same subject, and they don’t have to live in one place on the network — they could be spread over many locations around the Web.

A statement to the effect of "Sue lives in Palo Alto" could exist on site A, refer to a URI for a statement defining Sue on site B, a URI for a statement that defines "lives in" on site C, and a URI for a statement defining "Palo Alto" on site D. That’s a web of data. What’s cool is that anyone can potentially add statements to this web of data, it can be completely emergent.
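
To make this concrete, here is a minimal sketch of that statement as RDF, written in Python with the open-source rdflib library. The URIs are hypothetical placeholders (there are no real sites A through D), and this illustrates the data model rather than any particular product's implementation:

```python
# A sketch of the "Sue lives in Palo Alto" statement as RDF, using
# Python's rdflib. Each URI points at a different (hypothetical) site,
# which is what makes this a web of data rather than data on the Web.
from rdflib import Graph, URIRef

sue = URIRef("http://site-b.example/people/Sue")             # defined on site B
lives_in = URIRef("http://site-c.example/terms/livesIn")     # defined on site C
palo_alto = URIRef("http://site-d.example/places/PaloAlto")  # defined on site D

g = Graph()  # the statement itself could be published on site A
g.add((sue, lives_in, palo_alto))

# Serialize as Turtle so any other application can fetch and reuse it
print(g.serialize(format="turtle"))
```

Any application that fetches the output can follow each URI back to the site that defines it, which is exactly the emergent, anyone-can-add-statements property described above.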

The Semantic Web is Built by and for Collective Intelligence

This is where I think Tim and others who think about the Semantic Web may be missing an essential point. The Semantic Web is in fact highly conducive to "collective intelligence." It doesn’t require that machines add all the statements using fancy AI. In fact, in a next-generation folksonomy, when tags are created by human users, manually, they can easily be encoded as RDF statements. And by doing this you get lots of new capabilities, like being able to link tags to concepts that define their meaning, and to other related tags.

Humans can add tags that become semantic web content. They can do this manually or software can help them. Humans can also fill out forms that generate RDF behind the scenes, just as filling out a blog posting form generates HTML, XML, ATOM etc. Humans don’t actually write all that code, software does it for them, yet blogging and wikis for example are considered to be collective intelligence tools.
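
As a hedged sketch of what that generated RDF could look like, here is a tag encoded with Python's rdflib, using the W3C SKOS vocabulary (discussed further below) to link the tag to a defining concept and to related tags; the example.org namespace and the "taggedWith" predicate are invented purely for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import SKOS  # real W3C vocabulary for concept schemes

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
tag = EX["tags/semweb"]

# The tag becomes a concept with a label, a broader defining concept,
# and links to related tags -- capabilities a plain text tag lacks
g.add((tag, SKOS.prefLabel, Literal("semweb")))
g.add((tag, SKOS.broader, EX["concepts/SemanticWeb"]))
g.add((tag, SKOS.related, EX["tags/rdf"]))

# A user tagging a bookmark is then just one more statement
g.add((URIRef("http://example.org/bookmarks/42"), EX.taggedWith, tag))
```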

So the concept of folksonomy and tagging is truly complementary to the Semantic Web; they are not mutually exclusive at all. In fact the Semantic Web — or at least "Semantic Web Lite" (RDF + only basic use of OWL + basic SPARQL) — is capable of modeling and publishing any data in the world in a more open way.

Any application that uses data could do everything it does using these technologies. Every single form of social, user-generated content and community could, and probably will, be implemented using RDF in one manner or another within the next decade or so. And in particular, RDF and OWL + SPARQL are ideal for social networking services — the data model is a much better match for the structure of the data and the network of users and the kinds of queries that need to be done.
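
As one small illustration of why the data model fits this kind of application, here is a sketch of a friends-of-friends lookup in SPARQL over a toy FOAF graph, run with Python's rdflib; the people are made up:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF  # real vocabulary for describing people

EX = Namespace("http://example.org/people/")  # hypothetical people

g = Graph()
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.knows, EX.carol))

# Friends-of-friends: awkward to express over relational tables,
# but it falls directly out of the graph data model
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?fof WHERE {
  <http://example.org/people/alice> foaf:knows ?friend .
  ?friend foaf:knows ?fof .
}
"""
for row in g.query(q):
    print(row.fof)  # http://example.org/people/carol
```

Extending the query to more hops or to different relationship types is just a matter of adding triple patterns.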

Folktologies

This notion that somehow the Semantic Web is not about folksonomy needs to be corrected. For example, take Metaweb's Freebase. Freebase is what I call a "folktology" — an emergent, community-generated ontology. Users collaborate to add to the ontology and to the knowledge base that is populated within it. That's a wonderful example of collective intelligence, user-generated content, and semantics (although, technically, to my knowledge they are not using RDF for this, their data model appears to be functionally equivalent, and I would expect at least a SPARQL interface from them eventually).

But that’s not all — check out TagCommons and this Tag Ontology discussion, and also the SKOS ontology — all of which are working on semantic ways of characterizing simple tags in order to enrich folksonomies and enable better collective intelligence.

There are at least two other places where the Semantic Web naturally leverages and supports collective intelligence. The first is the fact that people and software can generate triples (people could do it by hand, but generally they will do it by filling out Web forms or answering questions or dialog boxes etc.) and these triples can live all over the Web, yet interconnect or intersect (when they are about the same subjects or objects).

I can create data about a piece of data you created, for example to state that I agree with it, or that I know something else about it. You can create data about my data. Thus a data-set can be generated in a distributed way — it’s not unlike a wiki for example. It doesn’t have to work this way, but at least it can if people do this.

The second point is that OWL, the ontology language, is designed to support an infinite number of ontologies — there doesn’t have to be just one big ontology to "rule them all." Anyone can make a simple or complex ontology and start to then make data statements that refer to it. Ontologies can link to or include other ontologies, or pieces of them, to create bigger distributed ontologies that cover more things.

This is kind of like not only mashing up the data, but also mashing up the schemas too. Both of these are examples of collective intelligence. In the case of ontologies this is already happening; for example, many ontologies already make use of other ontologies such as Dublin Core and FOAF.
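
Here is a hedged sketch of what that schema mashup looks like in practice: one small rdflib graph describing a document with terms drawn from Dublin Core, FOAF, and a hypothetical local ontology, side by side:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, FOAF  # two real, independent vocabularies

EX = Namespace("http://example.org/")  # hypothetical local ontology

g = Graph()
doc = URIRef("http://example.org/docs/semweb-talk")

g.add((doc, DC.title, Literal("Making Sense of the Semantic Web")))  # Dublin Core
g.add((doc, DC.creator, EX.nova))                                    # Dublin Core
g.add((EX.nova, FOAF.name, Literal("Nova Spivack")))                 # FOAF
g.add((doc, EX.aboutTopic, EX.SemanticWeb))                          # local term

print(g.serialize(format="turtle"))
```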

The point here is that there is in fact a natural and very beneficial fit between the technologies of the Semantic Web and what Tim O'Reilly defines Web 2.0 to be about (essentially collective intelligence). In fact the designers of the underlying standards of the Semantic Web specifically had collective intelligence in mind when they came up with these ideas. They were trying to rectify several problems in the closed, data-silo world of old-fashioned databases. The big motivation was to make data more integrated, to enable applications to share data more easily, and to make it possible to build data with other data, and schemas with other schemas. It's all about enabling connections and network effects.

Now, whether people end up using these technologies to do interesting things that enable human-level collective intelligence (as opposed to just software level collective intelligence) is an open question. At least some companies such as my own Radar Networks and Metaweb, and Talis (thanks, Danny), are directly focused on this, and I think it is safe to say this will be a big emerging trend. RDF is a great fit for social and folksonomy-based applications.

Web 3.0 and the Concept of "Hyperdata"

Where Tim defines Web 2.0 as being about collective intelligence generally, I would define Web 3.0 as being about "connective intelligence." It’s about connecting data, concepts, applications and ultimately people. The real essence of what makes the Web great is that it enables a global hypertext medium in which collective intelligence can emerge. In the case of Web 3.0, which begins with the Data Web and will evolve into the full-blown Semantic Web over a decade or more, the key is that it enables a global hyperdata medium (not just hypertext).

As I mentioned above, hyperdata is to data what hypertext is to text. It's a great word — so simple, and yet it makes a big point: it's about data that links to other data. That's what RDF and the Semantic Web are really all about. Reasoning is NOT the main point (though it's a nice future side-effect); the main point is growing a web of data.

Just as the Web enabled a huge outpouring of collective intelligence via an open global hypertext medium, the Semantic Web is going to enable a similarly huge outpouring of collective knowledge and cognition via a global hyperdata medium. It’s the Web, only better.

Open Source Projects for Extracting Data and Metadata from Files & the Web

I’ve been looking around for open-source libraries (preferably in Java, but not required) for extracting data and metadata from common file formats and Web formats. One project that looks very promising is Aperture. Do you know of any others that are ready or almost ready for prime-time use? Please let me know in the comments! Thanks.

Knowledge Networking

I've been thinking for several years about Knowledge Networking. It's not a term I invented; it's been floating around as a meme for at least a decade or two. But recently it has started to resurface in my own work.

So what is a knowledge network? I define a knowledge network as a form of collective intelligence in which a network of people (two or more people connected by social-communication relationships) creates, organizes, and uses a collective body of knowledge. The key here is that a knowledge network is not merely a site where a group of people work on a body of information together (such as Wikipedia); it is also a social network — there is an explicit representation of social relationships within it. So it's more like a social network than, for example, a discussion forum or a wiki.

I would go so far as to say that knowledge networks are the third generation of social software. (Note this is based in part on ideas that emerged in conversations I have had with Peter Rip, so this is also his idea):

  • First-generation social apps were about communication (e.g. messaging such as email, discussion boards, chat rooms, and IM)
  • Second-generation social apps were about people and content (e.g. social networks, social media sharing, user-generated content)
  • Third-generation social apps are about relationships and knowledge (e.g. wikis, referral networks, question-and-answer systems, social recommendation systems, vertical knowledge and expertise portals, social mashup apps, and coming soon, what we're building at Radar Networks)

Just some thoughts on a Saturday morning…

The Rise of the Social Operating System

In recent months we have witnessed a number of social networking sites begin to open up their platforms to outside developers. While this trend has been exhibited most prominently by Facebook, it is being embraced by all the leading social networking services, such as Plaxo, LinkedIn, MySpace and others. Along separate dimensions we also see a similar trend towards "platformization" in IM platforms such as Skype, as well as in B2B tools such as Salesforce.com.

If we zoom out and look at all this activity from a distance, it appears that there is a race taking place to become "the social operating system" of the Web. A social operating system might be defined as a system that provides for the systematic management and facilitation of human social relationships and interactions.

We might list some of the key capabilities of an ideal "social operating system" as:

  • Identity management
    • Open portable identity
    • Personal profiles ("personas")
    • Privacy control
  • Relationship management
    • Directory and lookup services (location of people to communicate with)
    • Social networking (opt-in relationship formation, indirect social connectivity via social networks)
    • Spam control
  • Communication
    • Person to person communication
      • Synchronous (IM, VOIP)
      • Asynchronous (email, SMS)
    • Group communication
      • Synchronous (conferencing)
      • Asynchronous (group discussions)
  • Social Content distribution
    • Personal publishing (blogging, home pages)
    • Public content distribution
  • Social Coordination
    • Event management (scheduling, invitations, RSVPs)
    • Calendaring
  • Social Collaboration
    • File sharing
    • Document collaboration (communal authoring/editing)
    • Collaborative filtering
    • Recommendation systems
    • Knowledge management
    • Human powered search
    • Project management
    • Workflow
  • Commerce
    • Classified advertising
    • Auctions
    • Shopping

So far I have not seen any single player that provides a coherent solution to this entire "social stack"; however, Microsoft, Yahoo, and AOL are probably the strongest contenders. Can Facebook and other social networks truly compete, or will they ultimately be absorbed into one of these larger players?

Enriching the Connections of the Web — Making the Web Smarter

Web 3.0 — aka The Semantic Web — is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.

I believe that collective intelligence primarily comes from connections — this is certainly the case in the brain, where connections between neurons far outnumber the neurons themselves; certainly there is more "intelligence" encoded in the brain's connections than in the neurons alone. There are several kinds of connections on the Web:

  1. Connections between information (such as links)
  2. Connections between people (such as opt-in social relationships, buddy lists, etc.)
  3. Connections between applications (web services, mashups, client server sessions, etc.)
  4. Connections between information and people (personal data collections, blogs, social bookmarking, search results, etc.)
  5. Connections between information and applications (databases and data sets stored or accessible by particular apps)
  6. Connections between people and applications (user accounts, preferences, cookies, etc.)

Are there other kinds of connections that I haven't listed? Please let me know!

I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.

In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of a statement, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It's a very simple, yet very flexible and extensible, data model that can represent any kind of data structure.
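
One way to picture statements that carry provenance (a sketch of the general idea, not a description of any particular triplestore) is with named graphs, where a fourth element records which graph, and therefore which source, a statement belongs to. In Python's rdflib this looks roughly like:

```python
from rdflib import Dataset, Namespace, URIRef

EX = Namespace("http://example.org/")

ds = Dataset()
# Use the (hypothetical) source URI as the name of the graph holding
# the statement; the graph name is the quad's fourth element
source = URIRef("http://site-a.example/feed")
g = ds.graph(source)
g.add((EX.Sue, EX.livesIn, EX.PaloAlto))

# Every statement can now be traced back to the graph it came from
for s, p, o, ctx in ds.quads((None, None, None, None)):
    print(s, p, o, "from", ctx)
```

A timestamp could be handled the same way, as a statement about the named graph itself.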

The important point for this article, however, is that in this data model, rather than there being just a single type of connection (as on the present Web, which basically provides only the HREF hotlink — a link that simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.

For example, one might define a type of connection called "friend of" or a type of connection called "employee of" — these have very different meanings (different semantics), which can be made explicit and machine-readable using OWL. By linking a page about a person to a page about a different person with the "employee of" link, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" in some ontology — a definition which might, for example, specify that an "employee of" relation connects a person to the person or organization who is their employer. In other words, rather than just linking things with the generic hotlink we are all used to, things can now be linked with specific kinds of links that have very particular, unambiguous meanings and logical implications.
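
To sketch how such a typed link could be declared and then used, here is a minimal example in Python with rdflib; the namespace, class names, and individuals are all invented for illustration, and a real ontology would be richer:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical ontology namespace

g = Graph()

# Declare the meaning of the link itself, machine-readably. (The text
# above allows a person *or* organization as employer; an owl:unionOf
# class could capture that, but a single class keeps the sketch short.)
g.add((EX.employeeOf, RDF.type, OWL.ObjectProperty))
g.add((EX.employeeOf, RDFS.domain, EX.Person))       # connects a person...
g.add((EX.employeeOf, RDFS.range, EX.Organization))  # ...to their employer

# Two resources can now be linked with that specific, unambiguous
# meaning, rather than with a generic hotlink
g.add((EX.alice, EX.employeeOf, EX.AcmeCorp))
```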

This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It’s a new place to put meaning in fact — you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole — the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).

Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood — it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.

It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web and Web 3.0 are just as much about people as they are about other things. Rather than excluding people, they actually enrich people's relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.

Now where will all these rich semantic connections come from? That's the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications that mine the Web and automatically create them. Note that even when mining the Web, a lot of the data still comes from people — for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" — far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.

These are subtle points that are very hard for non-specialists to see — without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!

Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I’m saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.

Web 3.0 — Next Step for the Web?

The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It’s a huge article. In many ways it’s one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.

One point of clarification, just in case anyone is wondering…

Web 3.0 is not just about machines — it's actually all about humans. It leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 – people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics — often called "The Semantic Web" — will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.

So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment it using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.

Business 2.0 and BusinessWeek Articles About Radar Networks

It’s been an interesting month for news about Radar Networks. Two significant articles came out recently:

Business 2.0 Magazine published a feature article about Radar Networks in their July 2007 issue. It is perhaps the most comprehensive article to date about what we are working on at Radar Networks; it's also one of the better articulations of the value proposition of the Semantic Web in general. It's a fun read, with gorgeous illustrations, and I highly recommend it.

BusinessWeek posted an article about Radar Networks on the Web. The article covers some of the background that led to my interest in collective intelligence and the creation of the company. It's a good article and covers some of the bigger issues related to the Semantic Web as a paradigm shift. I would add one or two points of clarification to what was stated in the article: Radar Networks is not relying solely on software to organize the Internet — in fact, the service we will be launching combines human intelligence and machine intelligence to start making sense of information, and to help people search and collaborate around interests more productively. One other minor point related to the article — it mentions the story of EarthWeb, the Internet company that I co-founded in the early 1990s: EarthWeb's content business was actually sold after the bubble burst, and the remaining lines of business were taken private under the name Dice.com. Dice is the leading job board for techies and was one of our properties. Dice has been highly profitable all along and recently filed for a $100M IPO.

Listen to this Discussion on the Future of the Web

If you are interested in the future of the Web, you might enjoy listening to this interview with me, moderated by Dr. Paul Miller of Talis. We discuss, in-depth: the Semantic Web, Web 3.0, SPARQL, collective intelligence, knowledge management, the future of search, triplestores, and Radar Networks.

A Bunch of New Press About Radar Networks

We had a bunch of press hits today for my startup, Radar Networks:

PC World article on Web 3.0 and Radar Networks

Entrepreneur Magazine interview

We're also proud to announce that Jim Hendler, one of the founding gurus of the Semantic Web, has joined our technical advisory board.

Metaweb and Radar Networks

This is just a brief post because I am actually slammed with VC meetings right now. But I wanted to congratulate our friends at Metaweb for their pre-launch announcement. My company, Radar Networks, is the only other major venture-funded play working on the Semantic Web for consumers so we are thrilled to see more action in this sector.

Metaweb and Radar Networks are working on two very different applications (fortunately!). Metaweb is essentially making the Wikipedia of the Semantic Web. Here at Radar Networks we are making something else — but equally big — and in a different category. Just as Metaweb is making a semantic analogue to something that exists and is big, so are we; but we're more focused on the social web, and we're building something that everyone will use. But we are still in stealth, so that's all I can say for now.

This is now an exciting two-horse space. We look forward to others joining the excitement too. Web 3.0 is really taking off this year.

An interesting side note: Danny Hillis (founder of Metaweb), myself (founder of Radar Networks) and Lew Tucker (CTO of Radar Networks) all worked together at Thinking Machines (an early massively parallel AI computer company). It's fascinating that we've all somehow come to think that the only practical way to move machine intelligence forward is to have humans and applications start to employ real semantics in what we record in the digital world.