Meet Me at the Innotribe at SIBOS 2010 – October 25 – 29

I’m soon headed to Amsterdam to keynote the SIBOS conference. SIBOS is the largest annual banking conference in the world, attracting around 10,000 attendees.

But what’s most interesting about this event is the innovation stream within the conference. It’s called the Innotribe and is focused on exploring and innovating the future of financial services.

This year the Innotribe will have an incredible “who’s who” of speakers from the tech sector coming to speak about topics like cloud computing, the Long Now, Smart Data and the Semantic Web, digital identity, mobile transactions, social media for financial services, and other hot topics. Innovation is just one track among many topics at SIBOS, but it’s bigger than many stand-alone technology conferences.

It’s going to be a fascinating week. For more background, here’s a promo video showing some coverage of last year’s Innotribe Labs. The Labs are meant to be fun, creative, and a place to think “outside the box.” Here’s a PowerPoint presentation with more info.

In addition to the roster of A-list speakers, the Innotribe will have several Labs taking place in which participants will work interactively over several days to innovate together, culminating in a competition for the best breakthrough proposals. I will be helping to mentor the participants in the Smart Data and the Semantic Web stream.

If you are interested in where financial services are heading, the Innotribe is going to be the place to be.

The conference runs from October 25 – 29, 2010.

I encourage you to participate in the Innotribe if you can make it to Holland for the event. Hope to meet you there!

Web 3.0 Documentary by Kate Ray – I'm interviewed

Kate Ray has done a terrific job illustrating and explaining Web 3.0 and the Semantic Web in her new documentary. She interviews Tim Berners-Lee, Clay Shirky, me, and many others. If you’re interested in where the Web is headed, and the challenges and opportunities ahead, then you should watch this, and share it too!

Evri Ties the Knot with Twine — Twine CEO Comments and Analysis

Today I am announcing that my company, Radar Networks, and its flagship product, Twine, have been acquired by Evri. TechCrunch broke the story here.

This acquisition consolidates two leading providers of semantic discovery and search. It is also the culmination of a long and challenging venture to pioneer the adoption of the consumer Semantic Web.

As the CEO and founder of Radar Networks and Twine, it is difficult to describe what it feels like to have reached this milestone during what has been a tumultuous period of global recession. I am very proud of my loyal and dedicated team and the incredible work and accomplishments we have achieved together, and I am grateful for the unflagging support of our investors and the huge community of Twine users and supporters.

Selling was not something we had planned on doing at this time, but given the economy and the fact that Twine is a long-term project that will require significant ongoing investment and work to reach our goals, it is the best decision for the business and our shareholders.

While we received several offers for the company, and were in discussions about M&A with multiple industry leading companies in media, search and social software, we eventually selected Evri.

The Twine team is joining Evri to continue our work there. The Evri team has assured me that Twine’s data and users are safe and sound and will be transitioned into the Evri service over time, in a manner that protects privacy and data and is minimally disruptive. I believe they will handle this with care and respect for the Twine community.

It is always an emotional experience to sell a company. Building Twine has been a long, intense, challenging, rewarding, and all-consuming effort. There were incredible high points and some very deep lows along the way. But most of all, it has been an adventure I will never forget. I was fortunate to help pioneer a major new technology — the Semantic Web — with an amazing team, including many good friends. Bringing something as big, as ambitious, and as risky as Twine to market was exhilarating.

Twine has been one of the great learning experiences of my life. I am profoundly grateful to everyone I’ve worked with, and especially to those who supported us financially and personally with their moral support, ideas and advocacy.

I am also grateful to unsung heroes behind the project — the families of all of us who worked on it, who never failed to be supportive as we worked days, nights, weekends and vacations to bring Twine to market.

What I’m Doing Next

I will advise Evri through the transition, but will not be working full-time there. Instead, I will be turning my primary focus to several new projects, including some exciting new ventures:

  • Live Matrix, a new venture focusing on making the live Web more navigable. Live Matrix is led by Sanjay Reddy (CEO of Live Matrix; formerly SVP of Corp Dev for Gemstar TV Guide). Live Matrix is going to give the Web a new dimension: time. More news about this soon.
  • Klout, the leading provider of social analytics about influencers on Twitter and Facebook (which I was the first angel investor in, and which I now advise). Klout is a really hot  company and it’s growing fast.
  • I’m experimenting with a new way to grow ventures. It’s part incubator, part fund, part production company. I call it a Venture Production Studio. Through this initiative my partners and I are planning to produce a number of original startups, and selected outside startups as well. There is a huge gap in the early-stage arena, and to fill this we need to modify the economics and model of early stage venture investing.
  • I’m looking forward to working more on my non-profit interests, particularly those related to supporting democracy and human rights around the world, and one of my particular interests, Tibetan cultural preservation.
  • And last but not least, I’m getting married later this month, which may turn out to be my best project of all.

If you want to keep up with what I am thinking about and working on, you should follow me on Twitter at @novaspivack, keep up with my blog here, and join my mailing list (accessible in the upper right hand corner of this page).

The Story Behind the Story

In making this transition, it seems appropriate to tell the story of Twine. This will provide some insight into how we got here, including some of our triumphs, our mistakes, and some of the difficulties we faced along the way. Hopefully it will shed some light on the story behind the story, and may even be useful to other entrepreneurs out there in what is perhaps one of the most difficult venture capital and startup environments in history.

(Note: You may also be interested in viewing this presentation, “A Yarn About Twine” which covers the full history of the project with lots of pictures of various iterations of our work from the early semantic desktop app to Twine, to T2.)

The Early Years of the Project

The ideas that led to Twine were born in the 1990’s from my work as a co-founder of EarthWeb, where among many other things we prototyped a number of new knowledge-sharing and social networking tools, alongside our primary work developing large Web portals and communities for customers, and eventually our own communities for IT professionals. My time with EarthWeb really helped me to understand the challenges and potential of sharing and growing knowledge socially on the Web. I became passionately interested in finding new ways to network people’s minds together, to solve information overload, and to enable the evolution of a future “global brain.”

After EarthWeb’s IPO I worked with SRI and Sarnoff to build their business incubator, nVention, and then eventually started my own incubator, Lucid Ventures, through which I co-founded Radar Networks with Kristin Thorisson, from the MIT Media Lab, and Jim Wissner (the continuing Chief Architect of Twine) in 2003. Our first implementation was a peer-to-peer Java-based knowledge sharing app called “Personal Radar.”

Personal Radar was a very cool app — it organized all the information on the desktop in a single semantic information space that was like an “iTunes for information,” and then made it easy to share and annotate knowledge with others in a collaborative manner. There were some similarities to apps like Ray Ozzie’s Groove and the MIT Haystack project, but Personal Radar was built for consumers, entirely with Java, RDF, OWL and the standards of the emerging Semantic Web. You can see screenshots of this early work in this slideshow, here.

But due to the collapse of the first Internet bubble there was simply no venture funding available at the time and so instead, we ended up working as subcontractors on the DARPA CALO project at SRI. This kept our research alive through the downturn and also introduced us to a true Who’s Who of AI and Semantic Web gurus who worked on the CALO project. We eventually helped SRI build OpenIRIS, a personal semantic desktop application, which had many similarities to Personal Radar. All of our work for CALO was open-sourced under the LGPL license.

Becoming a Venture-Funded Company

Deborah L. McGuinness, who was one of the co-designers of the OWL language (the Web Ontology Language, one of the foundations of the Semantic Web standards at the W3C), became one of our science advisers and kindly introduced us to Paul Allen, who invited us to present our work to his team at Vulcan Capital. The rest is history. Paul Allen and Ron Conway led an angel round to seed-fund us and we moved out of consulting to DARPA and began work on developing our own products and services.

Our long-term plan was to create a major online portal powered by the Semantic Web that would provide a new generation of Web-scale semantic search and discovery features to consumers. But for this to happen, first we had to build our own Web-scale commercial semantic applications platform, because there was no platform available at that time that could meet the requirements we had. In the process of building our platform numerous technical challenges had to be overcome.

At the time (the early 2000’s) there were few development tools in existence for creating ontologies or semantic applications, and in addition there were no commercial-quality databases capable of delivering high-performance Web-scale storage and retrieval of RDF triples. So we had to develop our own development tools, our own semantic applications framework, and our own federated high-performance semantic datastore.
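To make the datastore problem concrete, here is a toy sketch of the permutation-index approach most RDF triplestores take (illustrative only — this is not our actual code, and our real platform was federated and disk-backed): the same triples are kept in SPO, POS, and OSP indexes so that any single triple pattern can be answered without a full scan.

```python
# Illustrative sketch only -- not our actual datastore. A toy in-memory
# RDF triplestore with the three permutation indexes (SPO, POS, OSP)
# that let any single triple pattern be answered without a full scan.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))  # subject -> predicate -> objects
        self.pos = defaultdict(lambda: defaultdict(set))  # predicate -> object -> subjects
        self.osp = defaultdict(lambda: defaultdict(set))  # object -> subject -> predicates

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def triples(self, s=None, p=None, o=None):
        """Yield every triple matching the pattern; None is a wildcard."""
        if s is not None:
            preds = [p] if p is not None else list(self.spo[s])
            for p2 in preds:
                for o2 in self.spo[s].get(p2, ()):
                    if o is None or o == o2:
                        yield (s, p2, o2)
        elif p is not None:
            objs = [o] if o is not None else list(self.pos[p])
            for o2 in objs:
                for s2 in self.pos[p].get(o2, ()):
                    yield (s2, p, o2)
        elif o is not None:
            for s2, preds in self.osp[o].items():
                for p2 in preds:
                    yield (s2, p2, o)
        else:
            for s2, preds in self.spo.items():
                for p2, objs in preds.items():
                    for o2 in objs:
                        yield (s2, p2, o2)

store = TripleStore()
store.add("twine:article1", "dc:subject", "topic:semantic-web")
store.add("twine:article1", "dc:creator", "user:nova")
store.add("twine:article2", "dc:subject", "topic:semantic-web")

# "Which items are tagged with the semantic-web topic?"
hits = sorted(s for s, _, _ in store.triples(p="dc:subject", o="topic:semantic-web"))
print(hits)  # ['twine:article1', 'twine:article2']
```

The hard part was doing this kind of indexing at Web scale, on disk, across many machines, under load — which is where so much of the engineering effort went.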

This turned out to be a nearly endless amount of work. However we were fortunate to have Jim Wissner as our lead technical architect and chief scientist. Under his guidance we went through several iterations and numerous technical breakthroughs, eventually developing the most powerful and developer-friendly semantic applications platform in the world. This led to the  development of a portfolio of intellectual property that provides fundamental DNA for the Semantic Web.

During this process we raised a Series A round led by Vulcan Capital and Leapfrog Ventures, and our team was joined by interface designer and product management expert Chris Jones (now leading strategy at HotStudio, a boutique design and user-experience firm in San Francisco). Under Chris’ guidance we developed Twine, our first application built on our semantic platform.

The mission of Twine was to help people keep up with their interests more efficiently, using the Semantic Web. The basic idea was that you could add content to Twine (most commonly by bookmarking it into the site, but also by authoring directly into it), and then Twine would use natural language processing and analysis, statistical methods, and graph and social network analysis to automatically store, organize, link and semantically tag the content into various topical areas.

These topics could easily be followed by other users who wanted to keep up with specific types of content or interests. So basically you could author or add stuff to Twine and it would then do the work of making sense of it, organizing it, and helping you share it with others who were interested. The data was stored semantically and connected to ontologies, so that it could then be searched and reused in new ways.
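As a greatly simplified sketch of that flow (the real Twine pipeline used NLP, entity extraction, ontologies and social signals; this toy stand-in just surfaces frequent keywords as candidate tags, and all the names here are mine):

```python
# Toy stand-in for Twine's auto-tagging step. The real system used NLP,
# entity extraction and ontologies; this heuristic just picks the most
# frequent meaningful words in a bookmarked page as candidate topic tags.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for",
             "on", "with", "that", "this", "it", "its"}

def auto_tag(text, max_tags=3):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(max_tags)]

bookmark = ("The Semantic Web extends the Web with machine-readable "
            "semantic metadata, letting applications share semantic data.")

tags = auto_tag(bookmark)
print(tags)  # 'semantic' ranks first; the rest depends on the heuristic
```

The real value came from the step after this: linking tags to ontology concepts so that the resulting metadata could be searched and reused across applications.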

With the help of Lew Tucker, Sonja Erickson and Candice Nobles, as well as an amazing team of engineers, product managers, systems admins and designers, Twine was announced at the Web 2.0 Summit in October of 2007 and went into full public beta in Q1 of 2008. Twine was well-received by the press and early-adopter users.

Soon after our initial beta launch we raised a Series B round, led by Vulcan Capital and Velocity Interactive Group (now named Fuse Capital), as well as DFJ. This gave us the capital to begin to grow rapidly to become the major online destination we envisioned.

In the course of this work we made a number of additional technical breakthroughs, resulting in more than 20 patent filings in total, including several fundamental patents related to semantic data management, semantic portals, semantic social networking, semantic recommendations, semantic advertising, and semantic search.

Four of those patents have been granted so far and the rest are still pending — and perhaps the most interesting of these patents are related to our most recent work on “T2” and are not yet visible.

At the time of beta launch and for almost six months after, Twine was still very much a work in progress. Fortunately our users and the press were fairly forgiving as we worked through evolving the GUI and feature set from what was initially just slightly better than an alpha site to the highly refined and graphical UI we have today.

During these early days of Twine we were fortunate to have a devoted user-base, and this became a thriving community of power-users who really helped us to refine the product and develop great content within it.

Rapid Growth, and Scaling Challenges

As Twine grew the community went through many changes and some growing pains, and eventually crossed the chasm to a more mainstream user-base. Within less than a year from launch the site grew to around 3 million monthly visitors, 300,000 registered users, 25,000 “twines” about various interests, and almost 5 million pieces of user-contributed content. It was on its way to becoming the largest semantic web on the Web.

By all accounts Twine was looking like a potential “hit.” During this period the company staff increased to more than 40 people (inclusive of contractors and offshore teams) and our monthly burn rate increased to aggressive levels of spending to keep up with growth.

Despite this growth and spending we still could not keep up with demand for new features and at times we experienced major scaling and performance challenges. We had always planned for several more iterations of our backend architecture to facilitate scaling the system. But now we could see the writing on the wall — we had to begin to develop a more powerful, more scalable backend for Twine, much sooner than we had expected we would need to.

This required us to increase our engineering spending further in order to simultaneously support the live version of Twine and its very substantial backend, and run a parallel development team working on the next generation of the backend and the next version of Twine on top of it. Running multiple development teams instead of one was a challenging and costly endeavor. The engineering team was stretched thin and we were all putting in 12 to 15 hour days every day.

Breakthrough to “T2”

We began to work in earnest on a new iteration of our back-end architecture and application framework — one that could scale fast enough to keep up with our unexpectedly fast growth rate and the increasing demands on our servers that this was causing.

This initiative yielded unexpected fruit. Not only did we solve our scaling problems, but we were able to do so to such a degree that entirely new possibilities were opened up to us — ones that had previously been out of reach for purely technical reasons. In particular, semantic search.

Semantic search had always been a long-term goal of ours. However, in the first version of Twine (the one that is currently online), search was our weakest feature area, due to the challenge of scaling a semantic datastore to handle hundreds of billions of triples. But our user studies revealed that it was in fact the feature our users most wanted us to develop – search slowly became the dominant paradigm within Twine, especially as the content in our system reached critical mass.

Our new architecture initiative solved the semantic search problem to such a degree that we realized that not only could we scale Twine itself, we could eventually scale it into a semantic search engine for the entire Web.

Instead of relying on users to crowdsource only a subset of the best content into our index, we could crawl large portions of the Web automatically and ingest millions and millions of Web pages, process them, and make them semantically searchable — using a true W3C Semantic Web compliant backend. (Note: Why did we even attempt to do this? We believed strongly in supporting open-standards for the Semantic Web, despite the fact that they posed major technical challenges and required tools that did not exist yet, because they promised to enable semantic application and data interoperability, one of the main potential benefits of the Semantic Web).

Based on our newfound ability to do Web-scale semantic search, we began planning the next version of Twine — Twine 2.0 (“T2”), with the help of Bob Morgan, Mark Erickson, Sasi Reddy, and a team of great designers.

The new T2 plan would merge new faceted semantic search features with the existing social, personalization and knowledge management features of Twine 1.0. It would be the best of both worlds: semantic search + social search. We began working intensively on developing T2, along with new hosted developer tools that would make it easy for any webmaster to add their site into our semantic index. We were certain that with T2 we had finally “cracked the code” to the Semantic Web — we had a product plan and a strategy that could really bring the Semantic Web to everyone on the Web. It elegantly solved the key challenges to adoption, and on a technical level, by using SOLR instead of a giant triplestore, we were able to scale to unprecedented levels. It was an exciting plan and everyone on the team was confident in the direction.
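The architectural trade-off behind T2 can be sketched roughly like this (an illustration with made-up field names, not T2’s actual schema): rather than answering queries over a giant graph of triples, each page is flattened into a document whose semantic annotations become faceted fields in an inverted index — exactly the shape of data SOLR handles well.

```python
# Rough illustration (not T2's actual schema): semantic annotations
# flattened into faceted fields of a document index, instead of being
# queried as triples in a triplestore.
from collections import defaultdict

class FacetedIndex:
    def __init__(self):
        self.docs = {}                                       # doc id -> fields
        self.facets = defaultdict(lambda: defaultdict(set))  # field -> value -> doc ids

    def index(self, doc_id, fields):
        self.docs[doc_id] = fields
        for field, values in fields.items():
            for value in values:
                self.facets[field][value].add(doc_id)

    def search(self, **filters):
        """Intersect facet filters, e.g. search(type={"Recipe"})."""
        result = set(self.docs)
        for field, values in filters.items():
            matching = set()
            for value in values:
                matching |= self.facets[field][value]
            result &= matching
        return sorted(result)

idx = FacetedIndex()
idx.index("page1", {"type": {"Recipe"}, "cuisine": {"thai"}})
idx.index("page2", {"type": {"Recipe"}, "cuisine": {"french"}})
idx.index("page3", {"type": {"Review"}, "cuisine": {"thai"}})

print(idx.search(type={"Recipe"}, cuisine={"thai"}))  # ['page1']
```

The cost of this flattening is giving up arbitrary graph queries, but for search and discovery, faceted filtering covers most of what users actually do — and it scales on commodity infrastructure.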

To see screenshots that demo T2 and our hosted development tools click here.

The Global Recession

Our growth was fast, and so was our spending, but at the time this seemed logical because the future looked bright and we were in a race to keep ahead of our own curve. We were quickly nearing a point where we would soon need to raise another round of funding to sustain our pace, but we were confident that with our growth trends steadily increasing and our exciting plans for T2, the necessary funding would be forthcoming at favorable valuations.

We were wrong.

The global economy crashed unexpectedly, throwing a major curveball in our path. We had not planned on that happening and it certainly was inconvenient to say the least.

The recession not only hit Wall Street, it hit Silicon Valley. Venture capital funding dried up almost overnight. VC funds sent alarming letters to their portfolio companies warning of dire financial turmoil ahead. Many startups were forced to close their doors, while others made drastic sudden layoffs for better or for worse. We too made spending cuts, but we were limited in our ability to slash expenses until the new T2 platform could be completed. Once that was done, we would be able to move Twine to a much more scalable and less costly architecture, and we would no longer need parallel development teams. But until that happened, we still had to maintain a sizeable infrastructure and engineering effort.

As the recession dragged on, and the clock kept ticking down, the urgency of raising a C round increased, and finally we were faced with a painful decision. We had to drastically reduce our spending in order to wait out the recession and live to raise more funding in the future.

Unfortunately, the only way to accomplish such a drastic reduction in spending was to lay off almost 30% of our staff and cut our monthly spending by almost 40%. But by doing that we could not possibly continue to work on as many fronts as we had been doing. The result was that we had to stop most work on Twine 1.0 (the version that was currently online) and focus all our remaining development cycles and spending on the team needed to continue our work on T2.

This was extremely painful for me as the CEO, and for everyone on our team. But it was necessary for the survival of the business and it did buy us valuable time. However, it also slowed us down tremendously. The irony of making this decision was that it reduced our burn-rate but slowed us down, reduced productivity, and cost us time to such a degree that in the end it may have cost us the same amount of money anyway.

While much of our traffic had been organic and direct, we also had a number of marketing partnerships and PR initiatives that we had to terminate. In addition, as part of this layoff we lost our amazing and talented marketing team, as well as half our product management team, our entire design team, our entire marketing and PR budget, and much of our support and community management team. This made it difficult to continue to promote the site, launch new features, fix bugs, or to support our existing online community. And as a result the service began to decline and usage declined along with it.

To make matters worse, at around the same time as we were making these drastic cuts, Google decided to de-index Twine. To this day we are still not sure why they did this – it could have been that Google suddenly decided we were a competitive search engine, or that their algorithm changed, or that some error in our HTML markup caused an indexing problem. We had literally millions of pages of topical user-generated content – but all of a sudden we saw drastic reductions in the number of pages being indexed, and in the ranking of those pages. This caused a very significant drop in organic traffic. With the little team I had remaining, we spent time petitioning Google and trying to get reinstated. But we never managed to return to our former levels of index prominence.

Eventually, with all these obstacles, and the fact that we had to focus our remaining budget on T2, we put Twine 1.0 on auto-pilot and let the traffic fall off, believing that we would have the opportunity to win it back once we launched the next version. While painful to watch, this reduction in traffic and user activity at least had the benefit of reducing the pressure on the engineering team to scale the system and support it under load, giving us time to focus all our energy on getting T2 finished and on raising more funds.

But the recession dragged on and on, without end. VCs remained extremely conservative and risk-averse. Meanwhile, we focused our internal work on growing a large semantic index of the Web in T2, vertical by vertical, starting with food, then games, and then many other topics (technology, health, sports, etc.). We were quite confident that if we could bring T2 to market it would be a turning point for Web search, and funding would follow.

Meanwhile we met with VCs in earnest. But nobody was able to invest in anything due to the recession. Furthermore, we were a pre-revenue company working on a risky advanced technology, and VC partnerships were far too terrified by the recession to make such a bet. We encountered the dreaded “wait and see” response.

The only way we could get the funding we needed to continue was to launch T2, grow it, and generate revenues from it, but the only way we could reach those milestones was to launch T2 in the first place: a classic catch-22 situation.

We took comfort in the fact that we were not alone in this predicament. Almost every tech company at our stage was facing similar funding challenges. However, we were determined to find a solution despite the obstacles in our path.

Selling the Business

Had the recession not happened, I believe we would have raised a strong C round based on the momentum of the product and our technical achievements. Unfortunately, we, like many other early-stage technology ventures, found ourselves in the worst capital crunch in decades.

We eventually came to the conclusion that there was no viable path for the company but to use the runway we had left to sell to another entity that was more able to fund the ongoing development and marketing necessary to monetize T2.

While selling the company had always been a desirable exit strategy, we had hoped to do it after the launch and growth of T2. However, we could not afford to wait any longer. With some short-term bridge funding from our existing investors, we worked with Growth Point Technology Partners to sell the company.

We met with a number of the leading Internet and media companies and received numerous offers. In the end, the best and most strategically compatible offer came from Evri, one of our sibling companies in Vulcan Capital’s portfolio. While we had the option to sell to larger and more established companies with very compelling offers, it was simply the best option to join Evri.

And so we find ourselves at the present day. We got the best deal possible for our shareholders given the circumstances. Twine, my team, our users and their data are safe and sound. As an entrepreneur and CEO it is, as one advisor put it, of the utmost importance to always keep the company moving forward. I feel that I did manage to achieve this under extremely difficult economic circumstances, and for that I am grateful.

Outlook for the Semantic Web

I’ve been one of the most outspoken advocates of the Semantic Web during my tenure at Twine. So what about my outlook for the Semantic Web now that Twine is being sold and I’m starting to do other things? Do I still believe in the promise of the Semantic Web vision? Where is it going? These are questions I expect to be asked, so I will attempt to answer them here.

I continue to believe in the promise of semantic technologies, and in particular the approach of the W3C semantic web standards (RDF, OWL, SPARQL). That said, having tried to bring them to market as hard as anyone ever has, I can truly say they present significant challenges both to developers and to end-users. These challenges all stem from one underlying problem: Data storage.

Existing SQL databases are not optimal for large-scale, high-performance semantic data storage and retrieval. Yet triplestores are still not ready for prime-time. New graph databases and column stores show a lot of promise, but they are still only beginning to emerge. This situation makes it incredibly difficult to bring Web-scale semantic applications to market cost-effectively.

Enterprise semantic applications are much more feasible today, however, because existing and emerging databases and semantic storage solutions do scale to enterprise levels. But for enormous consumer-grade Web services, there are still challenges. This is the single greatest technical obstacle that Twine faced, and it cost us a large amount of our venture funding to surmount. We did eventually find a solution with our T2 architecture, but it is still not a general solution for all types of applications.

I have recently seen some new graph data storage products that may provide the levels of scale and performance needed, but pricing has not been determined yet. In short, storage and retrieval of semantic graph datasets is a big unsolved challenge that is holding back the entire industry. We need federated database systems that can handle hundreds of billions to trillions of triples under high load conditions, in the cloud, on commodity hardware and open source software. Only then will it be affordable to make semantic applications and services at Web-scale.
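One common building block for such federated systems — sketched here purely as an illustration, not as any particular product's design — is to hash-partition triples by subject, so that every statement about a given resource lands on a single shard:

```python
# Illustration only: hash-partitioning triples by subject across shards,
# so that all statements about one resource live on one node.
import hashlib

NUM_SHARDS = 4
shards = [[] for _ in range(NUM_SHARDS)]

def shard_for(subject):
    # Stable hash so every node agrees on triple placement.
    digest = hashlib.sha1(subject.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def add(s, p, o):
    shards[shard_for(s)].append((s, p, o))

def triples_for(s):
    """Subject lookups touch exactly one shard."""
    return [t for t in shards[shard_for(s)] if t[0] == s]

add("ex:alice", "foaf:knows", "ex:bob")
add("ex:alice", "foaf:name", "Alice")
add("ex:bob", "foaf:name", "Bob")

print(triples_for("ex:alice"))
```

The catch is that graph queries which join across subjects (friends-of-friends, for example) must fan out across shards, and doing that under high load on commodity hardware is precisely the part that remains unsolved.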

I believe that semantic metadata is essential for the growth and evolution of the Web. It is one of the only ways we can hope to dig out from the increasing problem of information overload. It is one of the only ways to make search, discovery, and collaboration smart enough to really be significantly better than it is today.

But the notion that everyone will learn and adopt standards for creating this metadata themselves is flawed in my opinion. They won’t. Instead, we must focus on solutions (like Twine and Evri) that make this metadata automatically by analyzing content semantically. I believe this is the most practical approach to bringing the value of semantic search and discovery to consumers, as well as Webmasters and content providers around the Web.

The major search engines are all working on various forms of semantic search, but to my knowledge none of them are fully supporting the W3C standards for the Semantic Web. In some cases this is because they are attempting to co-opt the standards for their own competitive advantage, and in other cases it is because it is simply easier not to use them. But in taking the easier path, they are giving up the long-term potential gains of a truly open and interoperable semantic ecosystem.

I do believe that whoever enables this open semantic ecosystem first will win in the end — because it will have greater and faster network effects than any closed competing system. That is the promise and beauty of open standards: everyone can feel safe using them since no single commercial interest controls them. At least that’s the vision I see for the Semantic Web.

As far as where the Semantic Web will add the most value in years to come, I think we will see it appear in some new areas. First and foremost is e-commerce, an area that is ripe with structured data that needs to be normalized, integrated and made more searchable. This is perhaps the most potentially profitable and immediately useful application of semantic technologies. It’s also one where there has been very little innovation. But imagine if eBay or Amazon provided open-standards-compliant semantic metadata and semantic search across all their data.

Another important opportunity is search and SEO — these are the areas that Twine’s T2 project focused on, by enabling webmasters to easily and semi-automatically add semantic descriptions of their content into search indexes, without forcing them to learn RDF and OWL and do it manually. This would create a better SEO ecosystem and would be beneficial not only to content providers and search engines, but also to advertisers. This is the approach that I believe the major search engines should take.

Another area where semantics could add a lot of value is social media — by providing semantic descriptions of user profiles and user profile data, as well as social relationships on the Web, it would be possible to integrate and search across all social networks in a unified manner.

Finally, another area where semantics will be beneficial is to enable easier integration of datasets and applications around the Web — currently every database is a separate island, but by using the Semantic Web appropriately data can be freed from databases and easily reused, remixed and repurposed by other applications. I look forward to the promise of a truly open data layer on the Web, when the Web becomes essentially one big open database that all applications can use.

Lessons Learned and Advice for Startups

While the outcome for Twine was decent under the circumstances, and was certainly far better than the alternative of simply running out of money, I do wonder how it could have been different. I ask myself what I learned and what I would do differently if I had the chance or could go back in time.

I think the most important lessons I learned, and the advice that I would give to other entrepreneurs can be summarized with a few key points:

  1. Raise as little venture capital as possible. Raise less than you need, not more than you need. Don’t raise extra capital just because it is available. Later on it will make it harder to raise further capital when you really need it. If you can avoid raising venture capital at all, do so. It comes with many strings attached. Angel funding is far preferable. But best of all, self-fund from revenues as early as you can, if possible. If you must raise venture capital, raise as little as you can get by on — even if they offer you more. But make sure you have at least enough to reach your next funding round — and assume that it will take twice as long to close as you think. It is no easy task to get a startup funded and launched in this economy — the odds are not in your favor — so play defense, not offense, until conditions improve (years from now).
  2. Build for lower exits. Design your business model and capital strategy so that you can deliver a good ROI to your investors at an exit under $30mm. Exit prices are going lower, not higher. There is less competition and fewer buyers and they know it’s a buyer’s market. So make sure your capital strategy gives the option to sell in lower price ranges. If you raise too much you create a situation where you either have to sell at a loss, or raise even more funding which only makes the exit goal that much harder to reach.
  3. Spend less. Spend less than you want to, less than you need to, and less than you can. When you are flush with capital it is tempting to spend it and grow aggressively, but don’t. Assume the market will crash — downturns are more frequent and last longer than they used to. Expect that. Plan on it. And make sure you keep enough capital in reserve to spend 9 to 12 months raising your next round, because that is how long it takes in this economy to get a round done.
  4. Don’t rely on user-traction to raise funding. You cannot assume that user traction is enough to get your next round done. Even millions of users and exponential growth are not enough. VCs and their investment committees want to see revenues, and in particular at least breakeven revenues. A large service that isn’t bringing in revenues yet is not a business, it’s an experiment. Perhaps it’s one that someone will buy, but if you can’t find a buyer then what? Don’t assume that VCs will fund it. They won’t. Venture capital investing has changed dramatically — early stage and late stage deals are the only deals that are getting real funding. Mid-stage companies are simply left to die, unless they are profitable or will soon be profitable.
  5. Don’t be afraid to downsize when you have to. It sucks to fire people, but it’s sometimes simply necessary. One of the worst mistakes is to not fire people who should be fired, or to not do layoffs when the business needs require it. You lose credibility as a leader if you don’t act decisively. Often friendships and personal loyalties prevent or delay leaders from firing people that really should be fired. While friendship and loyalty are noble they unfortunately are not always the best thing for the business. It’s better for everyone to take their medicine sooner rather than later. Your team knows who should be fired. Your team knows when layoffs are needed. Ask them. Then do it. If you don’t feel comfortable firing people, or you can’t do it, or you don’t do it when you need to, don’t be the CEO.
  6. Develop cheaply, but still pay market salaries. Use offshore development resources, or locate your engineering team outside of the main “tech hub” cities. It is simply too expensive to compete with large public and private tech companies paying top dollar for engineering talent in places like San Francisco and Silicon Valley. The cost of top-level engineers is too high in major cities to be affordable, and the competition to hire and retain them is intense. If you can get engineers to work for free or for half price then perhaps you can do it, but I believe you get what you pay for. So rather than skimp on salaries, pay people market salaries, but do it where market salaries are more affordable.
  7. Only innovate on one frontier at a time. For example, either innovate by making a new platform, or a new application, or a new business model. Don’t do all of these at once, it’s just too hard. If you want to make a new platform, just focus on that, don’t try to make an application too. If you want to make a new application, use an existing platform rather than also building a platform for it. If you want to make a new business model, use an existing application and platform — they can be ones you have built in the past, but don’t attempt to do it all at once. If you must do all three, do them sequentially, and make sure you can hit cash flow breakeven at each stage, with each one. Otherwise you’re at risk in this economy.

I hope that this advice is of some use to entrepreneurs (and VCs) who are reading this. I’ve made all of these mistakes myself, so I am speaking from experience. Hopefully I can spare you the trouble of having to learn these lessons the hard way.

What We Did Well

I’ve spent considerable time in this article focusing on what didn’t go according to plan, and the mistakes we’ve learned from. But it’s also important to point out what we did right. I’m proud of the fact that Twine accomplished many milestones, including:

  • Pioneering the Semantic Web and leading the charge to make it a mainstream topic of conversation.
  • Creating the most powerful, developer friendly, platform for the Semantic Web.
  • Successfully completing our work on CALO, the largest Semantic Web project in the US.
  • Launching the first mainstream consumer application of the Semantic Web.
  • Having a very successful launch, covered by hundreds of articles.
  • Gaining users extremely rapidly — faster than Twitter did in its early years.
  • Hiring and retaining an incredible team of industry veterans.
  • Raising nearly $24mm of venture capital over 2 rounds, because our plan was so promising.
  • Developing more than 20 patents, several of which are fundamentally important for the Semantic Web field.
  • Surviving two major economic bubbles and the downturns that followed.
  • Innovating and most of all, adapting to change rapidly.
  • Breaking through to T2 — a truly awesome technological innovation for Web-scale semantic search.
  • Selling the company in one of the most difficult economic environments in history.

I am proud of what we accomplished with Twine. It’s been “a long strange trip” but one that has been full of excitement and accomplishments to remember.


If you’ve actually read this far, thank you. This is a big article, but after all, Twine is a big project – one that lasted nearly 5 years (or 9 years if you include our original research phase). I’m still bullish on the Semantic Web, and genuinely very enthusiastic about what Evri will do with it going forward.

Again I want to thank the hundreds of people who have helped make Twine possible over the years – but in particular the members of our technical and management team who went far beyond the call of duty to get us to the deal we have reached with Evri.

While this is certainly the end of an era, I believe that this story has only just begun. The first chapters are complete and now we are moving into a new era. Much work remains to be done and there are certainly still challenges and unknowns, but progress continues and the Semantic Web is here to stay.

Eliminating the Need for Search – Help Engines

We are so focused on how to improve present-day search engines. But that is a kind of mental myopia. In fact, a more interesting and fruitful question is why do people search at all? What are they trying to accomplish? And is there a better way to help them accomplish that than search?

Instead of finding more ways to get people to search, or ways to make existing search experiences better, I am starting to think about how to reduce or eliminate the need to search — by replacing it with something better.

People don’t search because they like to. They search because there is something else they are trying to accomplish. Search is really just an inconvenience — a means to an end that we have to struggle through in order to get to what we actually want to accomplish. Search is “in the way” between intention and action. It’s an intermediate stepping stone. And perhaps there’s a better way to get where we want to go than searching.

Searching is a boring and menial activity. Think about it. We have to cleverly invent and try pseudo-natural-language queries that don’t really express what we mean. We try many different queries until we get results that approximate what we’re looking for. We click on a bunch of results and check them out. Then we search some more. And then some more clicking. Then more searching. And we never know whether we’ve been comprehensive, or have even entered the best query, or looked at all the things we should have looked at to be thorough. It’s extremely hit or miss. And takes up a lot of time and energy. There must be a better way! And there is.

Instead of making search more bloated and more of a focus, the goal should really be to get search out of the way: to minimize the need to search, and to make any search that is necessary as productive as possible. The goal should be to get consumers to what they really want with the least amount of searching and the least amount of effort, with the greatest amount of confidence that the results are accurate and comprehensive. To satisfy these constraints one must NOT simply build a slightly better search engine!

Instead, I think there’s something else we need to be building entirely. I don’t know what to call it yet. It’s not a search engine. So what is it?

Bing’s term “decision engine” is pretty good, pretty close to it. But what they’ve actually released so far still looks and feels a lot like a search engine. But at least it’s pushing the envelope beyond what Google has done with search. And this is good for competition and for consumers. Bing is heading in the right direction by leveraging natural language, semantics, and structured data. But there’s still a long way to go to really move the needle significantly beyond Google to be able to win dominant market share.

For the last decade the search wars have been fought in battles around index size, keyword search relevancy, and ad targeting — But I think the new battle is going to be fought around semantic understanding, intelligent answers, personal assistance, and commerce affiliate fees. What’s coming next after search engines are things that function more like assistants and brokers.

Wolfram Alpha is an example of one approach to this trend. The folks at Wolfram Alpha call their system a “computational knowledge engine” because they use a knowledge base to compute and synthesize answers to various questions. It does a lot of the heavy lifting for you, going through various data, computing and comparing, and then synthesizing a concise answer.

There are also other approaches to getting or generating answers for people — for example, by doing what Aardvark does: referring people to experts who can answer their questions or help them. Expert referral, or expertise search, helps reduce the need for networking and makes networking more efficient. It also reduces the need for searching online — instead of searching for an answer, just ask an expert.

There’s also the semantic search approach — perhaps exemplified by my own Twine “T2” project — which basically aims to improve the precision of search by helping you get to the right results faster, with less irrelevant noise. Other consumer facing semantic search projects of interest are Goby and Powerset (now part of Bing).

Still another approach is that of Siri, which is making an intelligent “task completion assistant” that helps you search for and accomplish things like “book a romantic dinner and a movie tonight.” In some ways Siri is a “do engine” not a “search engine.” Siri uses artificial intelligence to help you do things more productively. This is quite needed and will potentially be quite useful, especially on mobile devices.

All of these approaches and projects are promising. But I think the next frontier — the thing that is beyond search and removes the need for search is still a bit different — it is going to combine elements of all of the above approaches, with something new.

For lack of a better term, I call this a “help engine.” A help engine proactively helps you with various kinds of needs, decisions, tasks, or goals you want to accomplish. And it does this by addressing an increasingly common and vexing problem: choice overload.

The biggest problem is that we have too many choices, and the number of choices keeps increasing exponentially. The Web and globalization have increased the number of choices that are within range for all of us, but the result has been overload. To make a good, well-researched, confident choice now requires a lot of investigation, comparisons, and thinking. It’s just becoming too much work.

For example, choosing a location for an event, or planning a trip itinerary, or choosing what medicine to take, deciding what product to buy, who to hire, what company to work for, what stock to invest in, what website to read about some topic. These kinds of activities require a lot of research, evaluations of choices, comparisons, testing, and thinking. A lot of clicking. And they also happen to be some of the most monetizable activities for search engines. Existing search engines like Google that make money from getting you to click on their pages as much as possible have no financial incentive to solve this problem — if they actually worked so well that consumers clicked less they would make less money.
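As an illustration of the kind of decision-support a help engine might provide, consider scoring each option against a user’s weighted criteria instead of making them click through pages of results. A minimal sketch; the venues, criteria, and weights below are hypothetical:

```python
# Weighted decision matrix: each option is scored per criterion on a
# 0-1 scale, and criteria are weighted by how much the user cares.
# A help engine would gather these scores automatically; here they
# are supplied by hand for illustration.

def best_choice(options, weights):
    """Return options sorted by weighted score across the user's criteria."""
    def score(option):
        return sum(weights[c] * option["scores"].get(c, 0) for c in weights)
    return sorted(options, key=score, reverse=True)

# Choosing an event venue: the user cares most about price, then capacity.
weights = {"price": 0.5, "capacity": 0.3, "location": 0.2}
venues = [
    {"name": "Grand Hall", "scores": {"price": 0.3, "capacity": 0.9, "location": 0.8}},
    {"name": "Loft Space", "scores": {"price": 0.9, "capacity": 0.5, "location": 0.6}},
]
ranked = best_choice(venues, weights)
```

The interesting part of a real help engine is not this arithmetic, of course, but gathering the scores and learning the weights; the sketch only shows the shape of the output: a ranked recommendation rather than a pile of links.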

I think the solution to what’s after search — the “next Google” so to speak — will come from outside the traditional search engine companies. Or at least it will come from an upstart project within one of them that surprises everyone, rather than from their main search teams. It’s such a departure from traditional search that it will require some real thinking outside of the box.

I’ve been thinking about this a lot over the last month or two. It’s fascinating. What if there was a better way to help consumers with the activities they are trying to accomplish than search? If it existed it could actually replace search. It’s a Google-sized opportunity, and one which I don’t think Google is going to solve.

Search engines cause choice overload. That wasn’t the goal, but it is what has happened over time due to the growth of the Web and the explosion of choices that are visible, available, and accessible to us via the Web.

What we need now is not a search engine — it’s something that solves the problem created by search engines. For this reason, the next Google probably won’t be Google or a search engine at all.

I’m not advocating for artificial intelligence or anything that tries to replicate human reasoning, human understanding, or human knowledge. I’m actually thinking about something simpler. I think that it’s possible to use computers to provide consumers with extremely good, automated decision-support over the Web and the kinds of activities they engage in. Search engines are almost the most primitive form of decision support imaginable. I think we can do a lot better. And we have to.

People use search engines as a form of decision-support, because they don’t have a better alternative. And there are many places where decision support and help are needed: Shopping, travel, health, careers, personal finance, home improvement, and even across entertainment and lifestyle categories.

What if there was a way to provide this kind of personal decision-support — this kind of help — with an entirely different user experience than search engines provide today? I think there is. And I’ve got some specific thoughts about this, but it’s too early to explain them; they’re still forming.

I keep finding myself thinking about this topic, and arriving at big insights in the process. All of the different things I’ve worked on in the past seem to connect to this idea in interesting ways. Perhaps it’s going to be one of the main themes I’ll be working on and thinking about for this coming decade.

Twine "T2" – Latest Demo Screenshots (Internal Alpha)

This is a series of screenshots demoing the latest build of the consumer experience and developer tools for Twine’s “T2” semantic search product. This is still an internal alpha — not yet released to the public.

The Road to Semantic Search — The Story

This is the story of Twine — our early research (with never-before-seen screenshots of our early semantic desktop work), and our evolution from Twine 1.0 towards Twine 2.0 (“T2”), which is focused on semantic search.

What's After the Real Time Web?

In typical Web-industry style we’re all intently focused on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself, it’s a means to an end. So what will it enable, where is it headed, and what’s it going to look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream are going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention not just their attention. It’s not just difficult to get people to notice something, it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending some energy or effort to do something. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web aka The Stream is really about messaging in essence. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave, PubSubHubbub, and broadcast style messaging or multicasting via screencast, conferencing and media streaming and events in virtual worlds. The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOSes. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly to various data sources and APIs in dozens of places, developers will write to a single WebOS aggregator that acts as middleware between their app and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast and complex and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools, that can spread around the Web as they work, will be able to keep up with the Web.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.
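The social-distance idea in the Socialization strand above can be sketched simply: compute each sharer’s distance from the user in the friend graph (a breadth-first search), then rank results shared by closer contacts higher. The graph and result set below are hypothetical:

```python
# Social search re-ranking: results shared by people closer to you
# in the social graph get higher visibility.

from collections import deque

def social_distance(graph, start):
    """Breadth-first search: hops from `start` to every reachable person."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for friend in graph.get(person, []):
            if friend not in dist:
                dist[friend] = dist[person] + 1
                queue.append(friend)
    return dist

def rank_socially(results, graph, user):
    """Sort results so items shared by closer contacts come first."""
    dist = social_distance(graph, user)
    # Unreachable sharers get a large distance so they sink to the bottom.
    return sorted(results, key=lambda r: dist.get(r["shared_by"], 99))

graph = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
results = [
    {"url": "a.example", "shared_by": "carol"},  # 3 hops away
    {"url": "b.example", "shared_by": "alice"},  # 1 hop away
    {"url": "c.example", "shared_by": "bob"},    # 2 hops away
]
ranked = rank_socially(results, graph, "me")
```

In practice a social search engine would blend this distance signal with keyword relevancy rather than sorting by it alone, but the principle is the same: social proximity becomes a ranking input.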

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get less good at things like memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded into our surroundings, and into augmented views of our environments. This may cause our bodies to get into better health and shape since we will be less sedentary, less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude of things to come.

The Next Generation of Web Search — Search 3.0

The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.

Web 1.0, the first decade of the Web (1989 – 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.

Web 2.0, the second decade of the Web (1999 – 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive “web of trust” to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level, and most relevant in the social graph (closer and/or more popular in the graph), were considered to be more relevant. Thus results could be ranked according to their social value — how many people in the community liked them, and their current activity level — as well as by semantic relevancy measures.

In the coming third decade of the Web, Web 3.0 (2009 – 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.

Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.

Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past — the more timely something is the more relevant it may be as well.

These two themes — present and personal — will define the next great search experience.

To accomplish this, we need to make progress on a number of fronts.

First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.

Metadata reduces the need for computation in order to determine what content is about — it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.

This applies especially to the real-time Web, where, for example, short “tweets” of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. And of course metadata makes a dramatic difference in search of the larger, non-real-time Web as well.

In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a “one-size-fits-all” ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.

Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened, but what happened recently and is trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what’s most important. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
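The ranking factors discussed in this section (keyword relevancy, social value, timeliness, and personalization) can be combined into a single score. A minimal sketch, with illustrative weights and hypothetical data; a real engine’s formula would be far more sophisticated:

```python
# Four-factor relevancy: each factor is normalized to 0-1, then
# combined with weights. The weights here are arbitrary assumptions.

def relevancy(item, query_terms, user_interests, now,
              weights=(0.4, 0.2, 0.2, 0.2)):
    w_kw, w_soc, w_time, w_pers = weights
    terms = set(item["text"].lower().split())
    # Keyword relevancy: fraction of query terms matched.
    keyword = len(terms & query_terms) / max(len(query_terms), 1)
    # Social value: saturating share-count signal.
    social = min(item["shares"] / 100.0, 1.0)
    # Timeliness: decays with age in hours.
    age_hours = (now - item["posted"]) / 3600.0
    timeliness = 1.0 / (1.0 + age_hours)
    # Personalization: overlap with the user's stated interests.
    personal = len(terms & user_interests) / max(len(user_interests), 1)
    return w_kw * keyword + w_soc * social + w_time * timeliness + w_pers * personal

now = 1_000_000_000  # arbitrary reference time (seconds)
items = [
    {"text": "beginner guide to semantic search", "shares": 10, "posted": now - 3600},
    {"text": "semantic search internals", "shares": 80, "posted": now - 86400},
]
query = {"semantic", "search"}
novice_interests = {"beginner", "guide"}
ranked = sorted(items, key=lambda i: relevancy(i, query, novice_interests, now),
                reverse=True)
```

Note how the novice’s interest profile and the recency signal together outrank the more heavily shared but older, expert-oriented result: the same query yields a different ordering for a different user, which is exactly the "present and personal" shift described above.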

Sneak Peek – Siri — Interview with Tom Gruber

Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff

In Part One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:

Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?

Tom Gruber: A virtual personal assistant is a software system that

  • Helps the user find or do something (focus on tasks, rather than information)
  • Understands the user’s intent (interpreting language) and context (location, schedule, history)
  • Works on the user’s behalf, orchestrating multiple services and information sources to help complete the task

In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don’t do things for me – I have to use them as tools to do something, and I have to adapt to their ways of taking input.
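
Gruber's three bullet points can be caricatured in a few lines of code. This is a deliberately toy sketch of the assistant loop (interpret intent, apply context, act on the user's behalf); the keyword matching and the stubbed directory are my inventions, standing in for real natural-language understanding and real services:

```python
# Toy assistant loop: interpret intent, apply context, orchestrate services.
# Keyword matching stands in for real NLU; "directory" stands in for real APIs.
def interpret(utterance):
    # Crude intent detection by keyword, for illustration only.
    if "eat" in utterance or "restaurant" in utterance:
        return {"task": "find_restaurant"}
    return {"task": "unknown"}

def assist(utterance, context):
    intent = interpret(utterance)
    if intent["task"] == "find_restaurant":
        # Work on the user's behalf: filter by the user's context (location),
        # then rank, rather than handing back raw links.
        nearby = [r for r in context["directory"] if r["city"] == context["city"]]
        return sorted(nearby, key=lambda r: -r["rating"])
    return []

context = {"city": "SF", "directory": [
    {"name": "Luigi's", "city": "SF", "rating": 4.5},
    {"name": "Deep Dish Co", "city": "Chicago", "rating": 4.8},
]}
print(assist("where should I eat tonight?", context))
```

The user never specified a city; the assistant supplied it from context. That is the shift from "tools I operate" to "software that works for me."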

Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?

Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time.  Apple’s famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT’s Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book “The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us”.  These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results.  These are hallmarks of the Siri assistant.  Some of the elements of these visions
are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator, or self-awareness à la the Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is bringing this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.

Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)

Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural-language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.

Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant.  Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The team has been evolving and hardening the technology since January 2008.

Nova Spivack: What are primary aspects of Siri that you would say are “novel”?

Tom Gruber: The demands of the consumer internet focus — instant usability and robust interaction with the evolving web — have driven us to come up with some new innovations:

  • A conversational interface that combines the best of speech and semantic language understanding with an interactive dialog that helps guide people toward saying what they want to do and getting it done. The conversational interface allows for much more interactivity than one-shot search-style interfaces, which aids usability and improves intent understanding. If Siri didn’t quite hear what you said, or isn’t sure what you meant, it can ask for clarifying information. For example, it can prompt on ambiguity: did you mean pizza restaurants in Chicago or Chicago-style pizza places near you? It can also make reasonable guesses based on context. Walking around with the phone at lunchtime, if the speech interpretation comes back with something garbled about food, you probably meant “places to eat near my current location”. If this assumption isn’t right, it is easy to correct in a conversation.
  • Semantic auto-complete – a combination of the familiar “autocomplete” interface of search boxes with a semantic and linguistic model of what might be worth saying. The so-called “semantic completion” makes it possible to rapidly state complex requests (Italian restaurants in the SOMA neighborhood of San Francisco that have tables available tonight) with just a few clicks. It’s sort of like the power of faceted search a la Kayak, but packaged in a clever command line style interface that works in small form factor and low bandwidth environments.
  • Service delegation – Siri is particularly deep in technology for operationalizing a user’s intent into computational form, dispatching to multiple, heterogeneous services, gathering and integrating results, and presenting them back to the user as a set of solutions to their request.  In a restaurant selection task, for instance, Siri combines information from many different sources (local business directories, geospatial databases, restaurant guides, restaurant review sources, online reservation services, and the user’s own favorites) to show a set of candidates that meet the intent expressed in the user’s natural language request.
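
The service-delegation bullet above can be sketched in miniature: dispatch one intent to several sources, then merge records that refer to the same entity. The stub source functions here are stand-ins I invented for real directory, review, and reservation APIs, and the name-based merge is a deliberately naive version of the entity-identity reasoning Gruber describes:

```python
# Minimal service-delegation sketch: fan one intent out to several (stubbed)
# sources, then merge results by entity name. Sources are invented stand-ins.
def directory_source(intent):
    return [{"name": "Trattoria Roma", "address": "123 4th St"}]

def review_source(intent):
    return [{"name": "Trattoria Roma", "rating": 4.4}]

def delegate(intent, sources):
    merged = {}
    for source in sources:
        for record in source(intent):
            # Naive identity resolution: records sharing a name are merged.
            merged.setdefault(record["name"], {}).update(record)
    return list(merged.values())

intent = {"cuisine": "italian", "city": "San Francisco"}
print(delegate(intent, [directory_source, review_source]))
```

The user sees one unified answer per restaurant, even though no single source held all the fields.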

Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?

Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:

  • Task focus. Siri is very focused on a bounded set of specific human tasks, like finding something to do, going out with friends, and getting around town. This task focus allows it to have a very rich model of its domain of competence, which makes everything more tractable, from language understanding to reasoning to service invocation and results presentation.
  • Structured data focus. The kinds of tasks that Siri is particularly good at involve semistructured data, usually with multiple criteria and drawing from multiple sources. For example, to help find a place to eat, user preferences for cuisine, price range, location, or even specific food items come into play. Combining results from multiple sources requires reasoning about domain entity identity and the relative capabilities of different information providers. These are hard problems of semantic information processing and integration that are difficult but feasible today using the latest AI technologies.
  • Architecture focus. Siri is built from deep experience in integrating multiple advanced technologies into a platform designed expressly for virtual assistants. Siri co-founder Adam Cheyer was chief architect of the CALO project, and has applied a career of experience to design the platform of the Siri product. Leading the CALO project taught him a lot about what works and doesn’t when applying AI to build a virtual assistant. Adam and I also have unique experience in combining AI with intelligent interfaces and web-scale knowledge integration. The result is a “pure play” dedicated architecture for virtual assistants, integrating all the components of intent understanding, service delegation, and dialog flow management. We have avoided the need to solve general AI problems by concentrating on only what is needed for a virtual assistant, and have chosen to begin with a finite set of vertical domains serving mobile use cases.

Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?

Tom Gruber: Rather than trying to be like a search engine to all the world’s information, Siri is going after mobile use cases, where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, and the more limited the bandwidth, the more important it is that the interface make intelligent use of the user’s attention and the resources at hand. In other words, “smaller needs to be smarter.” And the benefits of being offered just the right level of detail, or being prompted with just the right questions, can make the difference between task completion and failure. When you are on the go, you just don’t have time to wade through pages of links and disjointed interfaces, many of which are not suitable for mobile at all.

Nova Spivack: What language and platform is Siri written in?

Tom Gruber: Java, JavaScript, and Objective-C (for the iPhone).

Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?

Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, one of our geospatial information sources is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
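
To illustrate why an endpoint that declares its data model is "easy to deal with," here is a hedged sketch of consuming a SPARQL-style service: build a query, then read the standard SPARQL JSON results shape. The query vocabulary and the canned response are illustrative; no real endpoint is contacted:

```python
import json

# Sketch of consuming a Semantic Web style endpoint. The query below uses
# common rdfs/geo vocabulary for illustration; the "response" is canned.
def geo_query(city):
    return f"""
    SELECT ?lat ?long WHERE {{
      ?place rdfs:label "{city}"@en ;
             geo:lat ?lat ;
             geo:long ?long .
    }}"""

# Simplified example of the SPARQL JSON results format an endpoint returns.
canned_response = json.dumps({
    "results": {"bindings": [
        {"lat": {"value": "52.37"}, "long": {"value": "4.89"}}
    ]}
})

def parse_bindings(body):
    # Because the results format is standardized, parsing is fully generic:
    # no per-site scraping logic is needed.
    data = json.loads(body)
    return [{k: v["value"] for k, v in row.items()}
            for row in data["results"]["bindings"]]

print(parse_bindings(canned_response))
```

The generic parser is the payoff: when the API declares its model, the coupling code stops being site-specific.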

Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?

Tom Gruber: Siri’s knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models.  As much as possible we represent things declaratively (i.e., as data in models, not lines of code).  This is a tried and true best practice for complex AI systems.  This makes the whole system more robust and scalable, and the development process more agile.  It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.

Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked-data Web (by making open APIs available, sharing linked data as RDF, etc.)?

Tom Gruber: Siri isn’t a source of data, so it doesn’t expose data using Semantic Web standards.  In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop – an intelligent interface that knows about user needs
and sources of information to meet those needs, and intermediates.  The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.).  The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data.  For example, if a virtual assistant wants to schedule a dinner it needs more than the information
about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies.  That is the original purpose of ontologies-as-specification that I promoted in the
1990s – to help specify how to interact with these agents via knowledge-level APIs.

Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication.  As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
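
Gruber's point about string matching is easy to demonstrate: when two sources spell the same city differently, a join on names silently fails while a join on shared identifiers succeeds. The ID scheme below is made up for this sketch:

```python
# Why identifiers beat string matching: two sources spell the same city
# differently. The "geo:..." identifier scheme is invented for illustration.
listings = [{"city_name": "San Francisco", "city_id": "geo:5391959", "name": "Cafe A"}]
reviews = [{"city_name": "SF", "city_id": "geo:5391959", "venue": "Cafe A", "stars": 4}]

# Join on display names: "San Francisco" != "SF", so the match is missed.
by_name = [(l, r) for l in listings for r in reviews
           if l["city_name"] == r["city_name"]]

# Join on canonical identifiers: unambiguous, spelling-independent.
by_id = [(l, r) for l in listings for r in reviews
         if l["city_id"] == r["city_id"]]

print(len(by_name), len(by_id))
```

The same argument applies to business names, user names, products: shared identifiers make information combinable where strings cannot.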

All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text.  So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.

Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?

Tom Gruber: Siri’s top-line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri’s NLP is “pretty darn good” — if the user is talking about something in Siri’s domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single-finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri’s is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don’t know how it would compare to the standard message and news corpora used by the NLP research community.

Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?

Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.

Nova Spivack: Will Siri be able to talk back to users at any point?

Tom Gruber: It could use speech synthesis for output, in the appropriate contexts. I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone. For the blind, embedded appliances, and other applications, it would make sense to give Siri voice output.

Nova Spivack: Can you give me more examples of how the NLP in Siri works?

Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)

Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?

Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Siri doesn’t forget things from request to request, the way stateless systems like search engines do. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users have a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.

Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?

Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes.  Siri knows about the data because we (humans) explicitly model what is in those sources.  With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request.  For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
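
The idea of reasoning over declarative capability models at run time can be shown in miniature. Here each source declares what it provides and which domains it covers, and a planner picks the sources that can contribute to a request. The capability vocabulary is invented for illustration; Siri's real models are far richer:

```python
# Toy declarative source modeling: sources declare capabilities; the planner
# selects at run time. The capability vocabulary is invented for illustration.
SOURCES = [
    {"name": "biz_directory", "provides": {"address", "phone"}, "domains": {"restaurants", "shops"}},
    {"name": "review_site",   "provides": {"rating"},           "domains": {"restaurants"}},
    {"name": "reservations",  "provides": {"availability"},     "domains": {"restaurants"}},
]

def select_sources(domain, needed_fields):
    # Keep every source that is relevant to the domain and contributes at
    # least one field the request needs.
    return [s["name"] for s in SOURCES
            if domain in s["domains"] and s["provides"] & needed_fields]

print(select_sources("restaurants", {"address", "rating"}))
```

Because the capabilities are data rather than code, adding a new source means adding a declaration, not rewriting the planner.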

Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.

Tom Gruber: Thank you, Nova, it’s a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It’s easy to project intelligence onto an assistant, but Siri isn’t going to pass the Turing Test. It’s just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.

Video: My Talk on The Future of Libraries — "Library 3.0"

If you are interested in semantics, taxonomies, education, information overload, and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event focused on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have about 500 very vocal library directors in the audience, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!

Twine's Explosive Growth

Twine has been growing at 50% per month since launch in October. We've been keeping that quiet while we wait to see if it holds. VentureBeat just noticed and did an article about it. It turns out our January numbers are higher than estimates and February is looking strong too. We have a slew of cool viral features coming out in the next few months too as we start to integrate with other social networks. Should be an interesting season.

Fast Company Interview — "Connective Intelligence"

In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.

Interest Networks are at a Tipping Point

UPDATE: There’s already a lot of good discussion going on around this post in my public twine.

I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.

In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.

At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem – they deliver substance.

So, what is an interest network?

In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.

Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.

I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more, important relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.

This is where interest networks come in. It’s still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.

We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:

What all of these interest networks have in common is some sort of bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question of how we propose to index the entire Web (the answer: we don’t; we let our users tell us what they’re most interested in, and we follow their lead).

Most interest networks exhibit the following characteristics as well:

  • They have some sort of bookmarking/submission/markup function to store and map data (often using existing metaphors, even if what’s under the hood is new)
  • They also have some sort of social sharing function to provide the network benefit (this isn’t exclusive to interest networks, obviously, but it is characteristic)
  • And in most cases, interest networks look to add some sort of “smarts” or “recommendations” capability to the mix (that is, you get more out than you put in)

This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.

To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.

At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.
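
As a rough sketch of that enrichment loop (and only a sketch; Twine's real pipeline is far more sophisticated), a saved note yields links to crawl and semantic tags. The keyword-to-tag rules below are purely illustrative:

```python
import re

# Rough sketch of an enrichment step: extract links to crawl, assign semantic
# tags by simple keyword rules. The rules are illustrative stand-ins for a
# real semantic tagging engine.
TAG_RULES = {"semantic web": "SemanticWeb", "rdf": "RDF", "library": "Libraries"}

def enrich(note_text):
    links = re.findall(r"https?://\S+", note_text)
    lowered = note_text.lower()
    tags = sorted({tag for kw, tag in TAG_RULES.items() if kw in lowered})
    return {"links_to_crawl": links, "tags": tags}

note = "Great RDF overview at http://example.org/rdf-intro from the Semantic Web group"
print(enrich(note))
```

Each pass through content both deepens the user's interest graph (the tags) and widens it (the crawled links), which is where the increasing returns come from.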

The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.

Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.

6 months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.

I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts — Carla, Jeremiah, and others, are you listening?

Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.

Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”

Now that anyone can join, it will be fun and gratifying to watch Twine grow.

Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.

Stay tuned!

Watch My Best Talk: The Global Brain is Coming

I’ve posted a link to a video of my best talk — given at the GRID ’08 Conference in Stockholm this summer. It’s about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!

New Video: Leading Minds from Google, Yahoo, and Microsoft talk about their Visions for Future of The Web

Video from my panel at DEMO Fall ’08 on the Future of the Web is now available.

I moderated the panel, and our panelists were:

Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century

Peter Norvig, Director of Research, Google Inc.

Jon Udell, Evangelist, Microsoft Corporation

Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.

The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.

Many new and revealing insights were provided by our excellent panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on. They covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft’s longer-term views as well.


The Future of the Desktop

This is an older version of this article. The most recent version is located here:


I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I’ve come up with so far.

(Author’s Note: This is a raw, first-draft of what I think it will be like. Please forgive any typos — I am still working on this and editing it…)

What Will Happen to the Desktop?

As we enter the third decade of the Web we are seeing an increasing shift from local desktop applications towards Web-hosted software-as-a-service (SaaS). The full range of standard desktop office tools (word processors, spreadsheets, presentation tools, databases, project management, drawing tools, and more) can now be accessed as Web-hosted apps within the browser. The same is true for an increasing range of enterprise applications. This process seems to be accelerating.

As more kinds of applications become available in Web-based form, the Web browser is becoming the primary framework in which end-users work and interact. But what will happen to the desktop? Will it too eventually become a Web-hosted application? Will the Web browser swallow up the desktop? Where is the desktop headed?

Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?

No. There have already been several attempts at doing this — and they never catch on. People don’t want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.

Partly this is due to the difference in user experience between using files and folders on a local machine and doing so in “simulated” fashion via some Flash-based or HTML-based imitation of a desktop. Imitation desktops to date have been clunky and slow copies of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it. The desktop of the future – what some have called “the Webtop” – has yet to be invented.

It’s going to be a hosted web service

Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there will have to be some kind of interface that we consider to be our personal “home” and “workspace” — but ultimately it will have to be a unified space that all our devices connect to and share. This requires that it be a hosted online service.

Currently we have different information spaces on different devices (laptop, mobile device, PC). These will merge. Native local clients could be created for various devices, but ultimately the simplest and therefore most likely choice is to just use the browser as the client. This coming “Webtop” will provide an interface to your local devices, applications and information, as well as to your online life and information.

Today we think of our Web browser as an application running inside our desktop. But in the future it will actually be the other way around: our desktop will run inside our browser as an application.

Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it either as a browser add-in or as a web-based application.

The Web 3.0 desktop is going to be completely merged with the Web — it is going to be part of the Web. In fact there may eventually be no distinction between the desktop and the Web anymore.

The focus shifts from information to attention

As our digital lives shift from being focused on the old-fashioned desktop to the Web environment, we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (feeds, lifestreams, microblogs, timelines, etc.).

Instead of being just a directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing in and out of the user’s environment. The interface will be tuned to help the user understand what the trends are, rather than just how things are organized.

The focus will be on helping the user to manage their attention rather than just their information. This is a leap to the meta-level: A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).

Users are going to shift from acting as librarians to acting as daytraders.

Our digital roles are already shifting from acting as librarians to becoming more like daytraders. In the PC era we were all focused on trying to manage the stuff on our computers — in other words, we were acting as librarians. But this is going to shift. Librarians organize stuff, but daytraders are focused on discovering and keeping track of trends. It’s a very different focus and activity, and it’s what we are all moving towards.

We are already spending more of our time keeping up with change and detecting trends, than on organizing information. In the coming decade the shelf-life of information is going to become vanishingly short and the focus will shift from storage and recall to real-time filtering, trend detection and prediction.
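
As a toy illustration of that shift, the sketch below (all names hypothetical) keeps no permanent archive at all; it only counts what has flowed through a recent time window, the way a trend-oriented interface might:

```python
from collections import Counter, deque

class TrendTracker:
    """Toy sketch of attention-oriented filtering: rather than storing
    items for later recall, track which topics are rising right now."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.events = deque()  # (timestamp, topic) pairs, oldest first

    def observe(self, topic, now):
        self.events.append((now, topic))

    def trending(self, now, top_n=3):
        # Items older than the window have expired -- their shelf-life
        # is over, so they no longer influence the view.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()
        return Counter(t for _, t in self.events).most_common(top_n)

tracker = TrendTracker(window_seconds=60)
for topic in ["project-x", "project-x", "lunch", "project-x"]:
    tracker.observe(topic, now=100)
print(tracker.trending(now=120))  # [('project-x', 3), ('lunch', 1)]
```

The design choice is the point: a librarian's tool indexes everything forever, while this daytrader's tool deliberately forgets, trading recall for a view of what is happening now.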

The Webtop will be more social and will leverage and integrate collective intelligence

The Webtop is going to be more socially oriented than desktops of today — it will have built-in messaging and social networking, as well as social-media sharing, collaborative filtering, discussions, and other community features.

The social dimension of our lives is becoming perhaps our most important source of information. We get information via email from friends, family and colleagues. We get information via social networks and social media sharing services. We co-create information with others in communities.

The social dimension is also starting to play a more important role in our information management and discovery activities. Instead of remaining solitary, those activities are becoming more communal. For example, many social bookmarking and social news sites use community sentiment and collaborative filtering to help highlight what is most interesting, useful or important.

It’s going to have powerful semantic search and social search capabilities built-in

The Webtop is going to have more powerful search built-in. This search will combine both social and semantic search features. Users will be able to search their information and rank it by social sentiment (for example, “find documents about x and rank them by how many of my friends liked them.”)

Semantic search will enable highly granular search and navigation of information along a potentially open-ended range of properties and relationships.

You will be able to search in a highly structured way — for example, for products you once bookmarked that have a price of $10.95 and are on sale this week, or for documents you read in the last month that were authored by Sue and related to project X.
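
Under the hood, this kind of query is a match over structured properties rather than keywords. Here is a minimal sketch using a hand-rolled triple store (the data and property names are hypothetical; a real Webtop would use RDF and SPARQL for this):

```python
# A tiny in-memory triple store (hypothetical data and property names).
triples = [
    ("widget42", "type", "Product"),
    ("widget42", "priceCents", 1095),
    ("widget42", "onSale", True),
    ("gadget7", "type", "Product"),
    ("gadget7", "priceCents", 2500),
    ("gadget7", "onSale", False),
]

def match(pattern):
    """Return the subjects satisfying every (predicate, object)
    constraint -- a toy analogue of a SPARQL basic graph pattern."""
    subjects = {s for s, _, _ in triples}
    for pred, obj in pattern:
        subjects &= {s for s, p, o in triples if p == pred and o == obj}
    return subjects

# "Products I bookmarked with a price of $10.95 that are on sale this week."
print(match([("type", "Product"), ("priceCents", 1095), ("onSale", True)]))
```

Each added constraint narrows the result set along a property, which is exactly the granular navigation the paragraph above describes.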

The semantics of the future desktop will be open-ended. That is to say, users as well as other application and information providers will be able to extend it with custom schemas, new data types, and custom fields attached to any piece of information.

Interactive shared spaces instead of folders

Forget about shared folders — that is an outmoded paradigm. Instead, the new metaphor will be interactive shared spaces.

The need for shared community space is currently met online by forums, blogs, social network profile pages, wikis, and new community sites. But as we move into Web 3.0 these will be replaced by something that combines their best features into one. These next-generation shared spaces will be like blogs, wikis, communities, social networks, databases, workspaces and search engines all in one.

Any group of two or more individuals will be able to participate in a shared space that connects their desktops for a particular purpose. These new shared spaces will not only provide richer semantics in the underlying data, social network, and search, but they will also enable groups to seamlessly and collectively add, organize, track, manage, discuss, distribute, and search for information of mutual interest.

The personal cloud

The future desktop will function like a “personal cloud” for users. It will connect all their identities, data, relationships, services and activities in one virtual integrated space. All incoming and outgoing activity will flow through this space. All applications and services that a user makes use of will connect to it.

The personal cloud may not have a center, but rather may be comprised of many separate sub-spaces, federated around the Web and hosted by different service-providers. Yet from an end-user perspective it will function as a seamlessly integrated service. Users will be able to see and navigate all their information and applications, as if they were in one connected space, regardless of where they are actually hosted. Users will be able to search their personal cloud from any point within it.
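
A rough sketch of that federation idea, with made-up host names: each sub-space answers locally, and the results merge into a single view for the user:

```python
# Hypothetical sub-spaces, each hosted by a different provider.
subspaces = {
    "work.example.com": ["Project X budget", "Meeting notes"],
    "home.example.net": ["Trip to Milan itinerary", "Recipes"],
    "photos.example.org": ["Milan photo album"],
}

def federated_search(term):
    """Fan one query out to every sub-space and merge the answers,
    so the user sees a single result list regardless of hosting."""
    hits = []
    for host, documents in subspaces.items():
        # In a real federation each host would answer over its own API.
        hits += [(host, doc) for doc in documents if term.lower() in doc.lower()]
    return hits

print(federated_search("milan"))
```

The user queries once from any point in the cloud; where each answer actually lives is an implementation detail they never see.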

Open data, linked data and open-standards based semantics

The underlying data in the future desktop, and in all associated services it connects, will be represented using open-standard data formats. Not only will the data be open, but the semantics of the data – the schema – will also be defined in an open way. The emerging Semantic Web provides a good infrastructure for enabling this to happen.

The value of open linked-data and open semantics is that data will not be held prisoner anywhere and can easily be integrated with other data.

Users will be able to seamlessly move and integrate their data, or parts of their data, across different services. This means that your Webtop might even be portable to a competing Webtop provider someday. If and when that becomes possible, how will Webtop providers compete to add value?

It’s going to be smart

One of the most important aspects of the coming desktop is that it’s going to be smart. It’s going to learn and help users to be more productive. Artificial intelligence is one of the key ways that competing Webtop providers will differentiate their offerings.

As you use it, it’s going to learn about your interests, relationships, current activities, information and preferences. It will adaptively self-organize to help you focus your attention on what is most important to whatever context you are in.

When you are reading something while taking a trip to Milan, it may organize itself to be more relevant to that time and place. When you later return home to San Francisco it will automatically adapt and shift to your home context. When you do a lot of searches about a certain product it will realize that your context and intent have to do with that product and will adapt to help you with that activity for a while, until your behavior changes.

Your desktop will actually be a semantic knowledge base on the back-end. It will encode a rich semantic graph of your information, relationships, interests, behavior and preferences. You will be able to permit other applications to access part or all of your graph to datamine it and provide you with value-added views and even automated intelligent assistance.

For example, you might allow an agent that cross-links things to see all your data: it would go and add cross links to relevant things onto all the things you have created or collected. Another agent that makes personalized buying recommendations might only get to see your shopping history across all shopping sites you use.

Your desktop may also function as a simple personal assistant at times. You will be able to converse with your desktop eventually — through a conversational agent interface. While on the road you will be able to email or SMS in questions to it and get back immediate intelligent answers. You will even be able to do this via a voice interface.

For example, you might ask, “where is my next meeting?” or “what Japanese restaurants do I like in LA?” or “what is Sue Smith’s phone number?” and you would get back answers. You could also command it to do things for you — like reminding you to do something, helping you keep track of an interest, or monitoring for something and alerting you when it happens.
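
A deliberately naive sketch of such an assistant, with a hypothetical two-fact knowledge graph and hard-coded patterns standing in for real natural-language understanding:

```python
import re

# A hypothetical two-fact personal knowledge graph.
graph = {
    ("Sue Smith", "phone"): "555-0142",
    ("next meeting", "location"): "Conference Room B",
}

def answer(question):
    """Map a question to a (subject, property) lookup. Real language
    understanding would replace these hard-coded patterns."""
    m = re.search(r"what is (.+?)'s phone number", question, re.IGNORECASE)
    if m:
        return graph.get((m.group(1), "phone"), "I don't know.")
    if re.search(r"where is my next meeting", question, re.IGNORECASE):
        return graph.get(("next meeting", "location"), "I don't know.")
    return "I don't know."

print(answer("What is Sue Smith's phone number?"))  # 555-0142
print(answer("Where is my next meeting?"))          # Conference Room B
```

The interesting part is not the parsing but the back-end: once your information lives in a graph of subjects and properties, answering such questions reduces to a lookup.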

Because your future desktop will connect all the relationships in your digital life — relationships connecting people, information, behavior, preferences and applications — it will be the ultimate place to learn about your interests and preferences.

Federated, open policies and permissions

This rich graph of meta-data that comprises your future desktop will enable the next generation of smart services to learn about you and help you in an incredibly personalized manner. It will also, of course, be rife with potential for abuse, and privacy will be a major concern.

One of the biggest enabling technologies that will be necessary is a federated model for sharing meta-data about policies and permissions on data. Information that is considered personal and private on Web site X should be recognized and treated as such by other applications and websites you choose to share that information with. This will require a way to share meta-data about your policies and permissions between the different accounts and applications you use.
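
One way to picture such portable permissions (a hypothetical format, not an existing standard): the policy travels with the data, and any receiving site enforces the same rule:

```python
# A record whose permission metadata travels with it (hypothetical format).
record = {
    "value": "sue@example.com",
    "policy": {"visibility": "friends-only", "reshare": False},
}

def can_show(record, viewer_relationship):
    """Enforce the record's own policy -- any site receiving the record
    applies the same rule, which is the point of federated permissions."""
    visibility = record["policy"]["visibility"]
    if visibility == "public":
        return True
    if visibility == "friends-only":
        return viewer_relationship == "friend"
    return False  # private or unknown visibility: default to deny

print(can_show(record, "friend"))    # True
print(can_show(record, "stranger"))  # False
```

The hard part, as the text notes, is not the enforcement logic but getting every application to agree on the vocabulary of the policy itself.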

The Semantic Web provides a good infrastructure for building and deploying a decentralized framework for policy and privacy integration, but such a framework has yet to be developed, let alone adopted. For the full vision of the future desktop to emerge, a universally accepted standard for exchanging policy and permission data will be a necessary enabling technology.

Who is most likely to own the future desktop?

When I think about what the future desktop is going to look like it seems to be a convergence of several different kinds of services that we currently view as separate.

It will be hosted in the cloud and accessible across all devices. It will place more emphasis on social interaction, social filtering, and collective intelligence. It will provide a very powerful and extensible data model with support for both unstructured and arbitrarily structured information. It will enable almost peer-to-peer-like search federation, yet still have a unified home page and user experience. It will be smart and personalized. It will be highly decentralized, yet will manage identity, policies and permissions in an integrated, cohesive and transparent manner across services.

By cobbling together a number of different services that exist today you could build something like this in a decentralized fashion. Is that how the desktop of the future will come about? Or will it be a new application provided by one player with a lot of centralized market power? Or could an upstart suddenly emerge with the key enabling technologies to make this possible? It’s hard to predict, but one thing is certain: it will be an interesting process to watch.

Life in Perpetual Beta: The Film

Melissa Pierce is a filmmaker who is making a film about "Life in Perpetual Beta." It’s about people who are adapting and reinventing themselves in the moment, and about a new philosophy or approach to life. She’s interviewed a number of interesting people, and while I was in Chicago recently, she spoke with me as well. Here is a clip about how I view the philosophy of living in Beta. Her film is also in perpetual beta, and you can see the clips from her interviews on her blog as the film evolves. Eventually it will be released through the indie film circuit, and it looks like it will be a cool film. By the way, she is open to getting sponsors, so if you like this idea and want your brand on the opening credits, drop her a line!

On the Difference Between "Semantic" and "Semantic Web"

This is a brief post with one purpose: to clarify the meaning of the term "semantic." It has suddenly become chic to label every new app as somehow "semantic," but what does this really mean? Are all "semantic" apps part of the "Semantic Web"? What are the criteria for something to be "semantic" versus "Semantic Web" anyway?

It’s pretty simple actually. Any app that can understand language to some degree could be labeled as "semantic." So even Google is somewhat of a semantic application by that criterion. Of course some applications are a lot more semantic than others. Powerset is more semantic than Google, for example, because it understands natural language, not just keywords.

But for an application to be considered part of the "Semantic Web" it has to support a set of open standards defined by the W3C, including at the very least RDF, and potentially also OWL and SPARQL. These are the technologies that collectively comprise the Semantic Web. Supporting these technologies means making at least some RDF data visible to outside applications.
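
As a minimal illustration of that criterion, the sketch below emits a single fact as hand-built Turtle (the subject URI is made up; the predicates are real Dublin Core terms; a real application would use an RDF library rather than string formatting):

```python
def to_turtle(subject, properties):
    """Serialize one subject and its properties as Turtle by hand."""
    body = " ;\n".join(f'    <{pred}> "{obj}"' for pred, obj in properties)
    return f"<{subject}>\n{body} ."

# Hypothetical item URI; the predicates come from the Dublin Core
# vocabulary (http://purl.org/dc/terms/).
doc = to_turtle(
    "http://example.org/items/42",
    [
        ("http://purl.org/dc/terms/title", "My bookmark"),
        ("http://purl.org/dc/terms/creator", "Nova"),
    ],
)
print(doc)
```

An app that serves even a document like this at a public URL has crossed the line from merely "semantic" to participating in the Semantic Web: its data is now visible to outside applications in an open standard.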

I’m not sure if Powerset is doing this yet, nor whether Freebase is doing it yet, but they should (and I’m guessing they will). Twine, my company’s application, is using RDF and OWL internally and we are also exposing this via our site (although we are still in private beta, so only beta participants can see that data today). Other companies such as Digg are already making their RDF data visible to the public. Any application that at least publishes RDF data can be considered both semantic and part of the Semantic Web.

Associative Search and the Semantic Web: The Next Step Beyond Natural Language Search

Our present day search engines are a poor match for the way that our brains actually think and search for answers. Our brains search associatively along networks of relationships. We search for things that are related to things we know, and things that are related to those things. Our brains not only search along these networks, they sense when networks intersect, and that is how we find things. I call this associative search, because we search along networks of associations between things.

Human memory — in other words, human search — is associative. It works by “homing in” on what we are looking for, rather than finding exact matches. Compare this to the keyword search that is so popular on the Web today and the differences are obvious. Keyword searching provides a very weak form of “homing in” — by choosing our keywords carefully we can limit the set of things which match. But the problem is we can only find things which contain those literal keywords.

There is no actual use of associations in keyword search; it is just literal matching of keywords. Our brains, on the other hand, use a much more sophisticated form of “homing in” on answers. Instead of literal matches, our brains look for things which are associatively connected to what we remember, in order to find what we are ultimately looking for.

For example, consider the case where you cannot remember someone’s name. How do you remember it? Usually we start by trying to remember various facts about that person. By doing this our brains then start networking from those facts to other facts and finally to other memories that they intersect.  Ultimately through this process of “free association” or “associative memory” we home in on things which eventually trigger a memory of the person’s name.

Both forms of search make use of the intersections of sets, but the associative search model is exponentially more powerful, because for every additional search term in your query an entire network of concepts, and relationships between them, is implied. One additional term can result in an entire network of related queries, and when you begin to intersect the different networks that result from multiple terms in the query, you quickly home in on only those results that make sense. In keyword search, on the other hand, each additional search term only provides a linear benefit — there is no exponential amplification using networks.

Keyword search is a very weak approximation of associative search because there really is no concept of a relationship at all. By entering keywords into a search engine like Google we are simulating an associative search, but without the real power of actual relationships between things to help us. Google does not know how various concepts are related and it doesn’t take that into account when helping us find things. Instead, Google just looks for documents that contain exact matches to the terms we are looking for and weights them statistically. It makes some use of relationships between Web pages to rank the results, but it does not actually search along relationships to find new results.
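
The difference can be made concrete with a toy example (hypothetical associations): keyword search intersects literal matches, while associative search expands each term into its whole neighborhood of related concepts:

```python
# A toy concept graph of hypothetical associations.
associations = {
    "milan": {"italy", "fashion", "opera"},
    "opera": {"la scala", "music"},
    "fashion": {"design"},
}

def keyword_match(terms, documents):
    """Literal matching only: a document must contain the exact terms."""
    return [d for d in documents if all(t in d.lower() for t in terms)]

def associative_expand(terms, depth=2):
    """Spread outward along associations: each term drags in its whole
    neighborhood, so every extra term implies a network, not just a word."""
    found = set(terms)
    frontier = set(terms)
    for _ in range(depth):
        frontier = {n for t in frontier for n in associations.get(t, set())}
        found |= frontier
    return found

docs = ["La Scala opera house", "Milan fashion week"]
print(keyword_match(["la scala"], docs))      # only the literal hit
print(sorted(associative_expand(["milan"])))  # the whole neighborhood
```

One term, "milan", implies seven related concepts after two hops; intersecting the neighborhoods of several terms is what lets an associative engine home in on results no literal match would find.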

Basically the problem today is that Google does not work the way our brains think. This difference creates an inefficiency for searchers: We have to do the work of translating our associative way of thinking into “keywordese” that is likely to return results we want. Often this requires a bit of trial and error and reiteration of our searches before we get result sets that match our needs.

A recently proposed solution to the problem of “keywordese” is natural language search (or NLP search), such as what is being proposed by companies like Powerset and Hakia. Natural language search engines are slightly closer to the way we actually think because they at least attempt to understand ordinary language instead of requiring keywords. You can ask a question and get answers to that question that make sense.

Natural language search engines are able to understand the language of a query and the language in the result documents in order to make a better match between the question and potential answers. But this is still not true associative search. Although these systems bear a closer resemblance to the way we think, they still do not actually leverage the power of networks — they are still not as powerful as associative search.


Great Collective Intelligence Book; Includes a Chapter I Wrote

I highly recommend this new book on Collective Intelligence. It features chapters by a Who’s Who of thinkers on Collective Intelligence, including a chapter by me about “Harnessing the Collective Intelligence of the World Wide Web.”

Here is the full-text of my chapter, minus illustrations (the rest of the book is great and I suggest you buy it to have on your shelf. It’s a big volume and worth the read):


A Few Predictions for the Near Future

This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.

Learning from the Future with Nova Spivack from Maarten on Vimeo.

Twine and Linked Data on the Semantic Web

Tim Berners-Lee just posted his thoughts about the importance of Linked Data on the Semantic Web. Linked data support is built into Twine. All the data in Twine is accessible as open-standard RDF and OWL today, and will be accessible to other applications via several APIs, including SPARQL. You can learn more about Twine’s support for Linked Data and see some examples here.

Tim says:

In all this Semantic Web news, though, the proof of the pudding is in the eating. The benefit of the Semantic Web is that data may be re-used in ways unexpected by the original publisher. That is the value added. So when a Semantic Web start-up either feeds data to others who reuse it in interesting ways, or itself uses data produced by others, then we start to see the value of each bit increased through the network effect.

So if you are a VC funder or a journalist and some project is being sold to you as a Semantic Web project, ask how it gets extra re-use of data, by people who would not normally have access to it, or in ways for which it was not originally designed. Does it use standards? Is it available in RDF? Is there a SPARQL server?

Twine provides RDF and supports SPARQL (although while we are in beta we have not opened our SPARQL API yet, but we will…). At the same time Twine also protects privacy by only providing its data according to permissions. Apps can only get Twine data they have permission to see, such as their own data, their owner’s or users’ data, data that has been shared with them, or public data in Twine.

Twine is also designed to consume external Linked Data via its APIs. Twine will be able to consume external RDF and OWL ontologies, as a means to enable other applications and users to extend its functionality and add new data to it.

My Visit to DERI — World's Premier Semantic Web Research Institute

Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.

DERI has become the world’s premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what’s happening there.

DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:

  • Semantic Web Search Engine (SWSE) and YARS, a massively scalable triplestore.  These projects are concerned with crawling and indexing the information on the Semantic Web so that end-users can find it. They have done good work on consolidating data and also on building a highly scalable triplestore architecture.
  • Sindice — An API and search infrastructure for the Semantic Web. This project is focused on providing a rapid indexing API that apps can use to get their semantic content indexed, and that can also be used by apps to do semantic searches and retrieve semantic content from the rest of the Semantic Web. Sindice provides Web-scale semantic search capabilities to any semantic application or service.
  • SIOC — Semantically Interlinked Online Communities. This is an ontology for linking and sharing data across online communities in an open manner, that is getting a lot of traction. SIOC is on its way to becoming a standard and may play a big role in enabling portability and interoperability of social Web data.
  • JeromeDL is developing technology for semantically enabled digital libraries. I was impressed with the powerful faceted navigation and search capabilities they demonstrated.
  • is a project for personal knowledge management of bookmarks and unstructured data.
  • SCOT, OpenTagging and  These projects are focused on making tags more interoperable, and for generating social networks and communities from tags. They provide a richer tag ontology and framework for representing, connecting and sharing tags across applications.
  • Semantic Web Services.  One of the big opportunities for the Semantic Web that is often overlooked by the media is Web services. Semantics can be used to describe Web services so they can find one another and connect, and even to compose and orchestrate transactions and other solutions across networks of Web services, using rules and reasoning capabilities. Think of this as dynamic semantic middleware, with reasoning built-in.
  • eLite. I was introduced to the eLite project, a large e-learning initiative that is applying the Semantic Web.
  • Nepomuk.  Nepomuk is a large effort supported by many big industry players. They are making a social semantic desktop and a set of developer tools and libraries for semantic applications that are being shipped in the Linux KDE distribution. This is a big step for the Semantic Web!
  • Semantic Reality. Last but not least, and perhaps one of the most eye-opening demos I saw at DERI, is the Semantic Reality project. They are using semantics to integrate sensors with the real world. They are creating an infrastructure that can scale to handle trillions of sensors eventually. Among other things I saw, you can ask things like "where are my keys?" and the system will search a network of sensors and show you a live image of your keys on the desk where you left them, and even give you a map showing the exact location. The service can also email you or phone you when things happen in the real world that you care about — for example, if someone opens the door to your office, or a file cabinet, or your car, etc. Very groundbreaking research that could seed an entire new industry.

In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI — they are clearly the leader in the space.