Today I am announcing that my company, Radar Networks, and its flagship product, Twine, have been acquired by Evri. TechCrunch broke the story here.
This acquisition consolidates two leading providers of semantic discovery and search. It is also the culmination of a long and challenging venture to pioneer the adoption of the consumer Semantic Web.
As the CEO and founder of Radar Networks and Twine.com, it is difficult to describe what it feels like to have reached this milestone during what has been a tumultuous period of global recession. I am very proud of my loyal and dedicated team and of all the incredible work we have accomplished together, and I am grateful for the unflagging support of our investors and of the huge community of Twine users and supporters.
Selling Twine.com was not something we had planned on doing at this time, but given the economy and the fact that Twine.com is a long-term project that will require significant ongoing investment and work to reach our goals, it is the best decision for the business and our shareholders.
While we received several offers for the company and were in discussions about M&A with multiple industry-leading companies in media, search and social software, we eventually selected Evri.
The Twine team is joining Evri to continue our work there. The Evri team has assured me that Twine.com’s data and users are safe and sound and will be transitioned into the Evri.com service over time, in a manner that protects privacy and data, and is minimally disruptive. I believe they will handle this with care and respect for the Twine community.
It is always an emotional experience to sell a company. Building Twine.com has been a long, intense, challenging, rewarding, and all-consuming effort. There were incredible high points and some very deep lows along the way. But most of all, it has been an adventure I will never forget. I was fortunate to help pioneer a major new technology — the Semantic Web — with an amazing team, including many good friends. Bringing something as big, as ambitious, and as risky as Twine.com to market was exhilarating.
Twine has been one of the great learning experiences of my life. I am profoundly grateful to everyone I’ve worked with, and especially to those who supported us financially as well as personally, with moral support, ideas and advocacy.
I am also grateful to the unsung heroes behind the project — the families of all of us who worked on it, who never failed to be supportive as we worked days, nights, weekends and vacations to bring Twine to market.
What I’m Doing Next
I will advise Evri through the transition, but will not be working full-time there. Instead, I will be turning my primary focus to several new projects, including some exciting new ventures:
- Live Matrix, a new venture focusing on making the live Web more navigable. Live Matrix is led by Sanjay Reddy (CEO of Live Matrix; formerly SVP of Corp Dev for Gemstar TV Guide). Live Matrix is going to give the Web a new dimension: time. More news about this soon.
- Klout, the leading provider of social analytics about influencers on Twitter and Facebook (which I was the first angel investor in, and which I now advise). Klout is a really hot company and it’s growing fast.
- I’m experimenting with a new way to grow ventures. It’s part incubator, part fund, part production company. I call it a Venture Production Studio. Through this initiative my partners and I are planning to produce a number of original startups, and selected outside startups as well. There is a huge gap in the early-stage arena, and to fill this we need to modify the economics and model of early stage venture investing.
- I’m looking forward to working more on my non-profit interests, particularly those related to supporting democracy and human rights around the world, and one of my particular interests, Tibetan cultural preservation.
- And last but not least, I’m getting married later this month, which may turn out to be my best project of all.
If you want to keep up with what I am thinking about and working on, you should follow me on Twitter at @novaspivack, and also keep up with my blog here at novaspivack.com and my mailing list (accessible in the upper right hand corner of this page).
The Story Behind the Story
In making this transition, it seems appropriate to tell the Twine.com story. This will provide some insight into how we got here, including some of our triumphs, our mistakes, and the difficulties we faced along the way. Hopefully this will shed some light on the story behind the story, and may even be useful to other entrepreneurs out there in what is perhaps one of the most difficult venture capital and startup environments in history.
(Note: You may also be interested in viewing this presentation, “A Yarn About Twine” which covers the full history of the project with lots of pictures of various iterations of our work from the early semantic desktop app to Twine, to T2.)
The Early Years of the Project
The ideas that led to Twine were born in the 1990’s from my work as a co-founder of EarthWeb (which today continues as Dice.com), where among many things we prototyped a number of new knowledge-sharing and social networking tools, along with our primary work developing large Web portals and communities for customers, and eventually our own communities for IT professionals. My time with EarthWeb really helped me to understand the challenges and potential of sharing and growing knowledge socially on the Web. I became passionately interested in finding new ways to network people’s minds together, to solve information overload, and to enable the evolution of a future “global brain.”
After EarthWeb’s IPO I worked with SRI and Sarnoff to build their business incubator, nVention, and then eventually started my own incubator, Lucid Ventures, through which I co-founded Radar Networks with Kristin Thorisson, from the MIT Media Lab, and Jim Wissner (the continuing Chief Architect of Twine) in 2003. Our first implementation was a peer-to-peer Java-based knowledge sharing app called “Personal Radar.”
Personal Radar was a very cool app — it organized all the information on the desktop in a single semantic information space that was like an “iTunes for information” and then made it easy to share and annotate knowledge with others in a collaborative manner. There were some similarities to apps like Ray Ozzie’s Groove and the MIT Haystack project, but Personal Radar was built for consumers, entirely with Java, RDF, OWL and the standards of the emerging Semantic Web. You can see some screenshots of this early work in this slideshow, here.
But due to the collapse of the first Internet bubble there was simply no venture funding available at the time and so instead, we ended up working as subcontractors on the DARPA CALO project at SRI. This kept our research alive through the downturn and also introduced us to a true Who’s Who of AI and Semantic Web gurus who worked on the CALO project. We eventually helped SRI build OpenIRIS, a personal semantic desktop application, which had many similarities to Personal Radar. All of our work for CALO was open-sourced under the LGPL license.
Becoming a Venture-Funded Company
Deborah L. McGuinness, who was one of the co-designers of the OWL language (the Web Ontology Language, one of the foundations of the Semantic Web standards at the W3C), became one of our science advisers and kindly introduced us to Paul Allen, who invited us to present our work to his team at Vulcan Capital. The rest is history. Paul Allen and Ron Conway led an angel round to seed-fund us and we moved out of consulting to DARPA and began work on developing our own products and services.
Our long-term plan was to create a major online portal powered by the Semantic Web that would provide a new generation of Web-scale semantic search and discovery features to consumers. But for this to happen, we first had to build our own Web-scale commercial semantic applications platform, because there was no platform available at that time that could meet the requirements we had. In the process of building our platform, numerous technical challenges had to be overcome.
At the time (the early 2000’s) there were few development tools in existence for creating ontologies or semantic applications, and in addition there were no commercial-quality databases capable of delivering high-performance Web-scale storage and retrieval of RDF triples. So we had to develop our own development tools, our own semantic applications framework, and our own federated high-performance semantic datastore.
This turned out to be a nearly endless amount of work. However, we were fortunate to have Jim Wissner as our lead technical architect and chief scientist. Under his guidance we went through several iterations and numerous technical breakthroughs, eventually developing the most powerful and developer-friendly semantic applications platform in the world. This led to the development of a portfolio of intellectual property that provides fundamental DNA for the Semantic Web.
During this process we raised a Series A round led by Vulcan Capital and Leapfrog Ventures, and our team was joined by interface designer and product management expert, Chris Jones (now leading strategy at HotStudio, a boutique design and user-experience firm in San Francisco). Under Chris’ guidance we developed Twine.com, our first application built on our semantic platform.
The mission of Twine.com was to help people keep up with their interests more efficiently, using the Semantic Web. The basic idea was that you could add content to Twine (most commonly by bookmarking it into the site, but also by authoring directly into it), and then Twine would use natural language processing and analysis, statistical methods, and graph and social network analysis, to automatically store, organize, link and semantically tag the content into various topical areas.
These topics could easily be followed by other users who wanted to keep up with specific types of content or interests. So basically you could author or add stuff to Twine and it would then do the work of making sense of it, organizing it, and helping you share it with others who were interested. The data was stored semantically and connected to ontologies, so that it could then be searched and reused in new ways.
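For the technically inclined, here is a rough sketch of what “storing data semantically” means in practice. To be clear, this is purely an illustration written with the open-source Apache Jena library, using a made-up vocabulary; it is not Twine’s actual ontology or code:

```java
// A minimal sketch of how a bookmarked item might be represented as RDF.
// The "tw:" vocabulary below is invented for illustration; it is not
// Twine's actual ontology.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.vocabulary.RDF;

public class BookmarkSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/twine-sketch#"; // hypothetical namespace
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("tw", ns);

        Property title    = model.createProperty(ns, "title");
        Property hasTopic = model.createProperty(ns, "hasTopic");
        Property addedBy  = model.createProperty(ns, "addedBy");

        // Each bookmark becomes a small graph of typed, linked statements
        // rather than an opaque database row.
        model.createResource("http://example.org/items/42")
             .addProperty(RDF.type, model.createResource(ns + "Bookmark"))
             .addProperty(title, "Intro to the Semantic Web")
             .addProperty(hasTopic, model.createResource(ns + "topics/SemanticWeb"))
             .addProperty(addedBy, model.createResource(ns + "users/alice"));

        model.write(System.out, "TURTLE");
    }
}
```

Because the result is a graph connected to ontologies, rather than a set of flat tags, it can be queried, linked and reused in ways an ordinary tagging system cannot support.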
With the help of Lew Tucker, Sonja Erickson and Candice Nobles, as well as an amazing team of engineers, product managers, systems admins and designers, Twine was announced at the Web 2.0 Summit in October of 2007 and went into full public beta in Q1 of 2008. Twine was well-received by the press and early-adopter users.
Soon after our initial beta launch we raised a Series B round, led by Vulcan Capital and Velocity Interactive Group (now named Fuse Capital), as well as DFJ. This gave us the capital to begin to grow Twine.com rapidly to become the major online destination we envisioned.
In the course of this work we made a number of additional technical breakthroughs, resulting in more than 20 patent filings in total, including several fundamental patents related to semantic data management, semantic portals, semantic social networking, semantic recommendations, semantic advertising, and semantic search.
Four of those patents have been granted so far and the rest are still pending — and perhaps the most interesting of these patents are related to our most recent work on “T2” and are not yet visible.
At the time of beta launch and for almost six months after, Twine was still very much a work in progress. Fortunately our users and the press were fairly forgiving as we worked through evolving the GUI and feature set from what was initially just slightly better than an alpha site to the highly refined and graphical UI we have today.
During these early days of Twine.com we were fortunate to have a devoted user-base, which became a thriving community of power-users who really helped us to refine the product and develop great content within it.
Rapid Growth, and Scaling Challenges
As Twine grew the community went through many changes and some growing pains, and eventually crossed the chasm to a more mainstream user-base. Within less than a year from launch the site grew to around 3 million monthly visitors, 300,000 registered users, 25,000 “twines” about various interests, and almost 5 million pieces of user-contributed content. It was on its way to becoming the largest semantic web on the Web.
By all accounts Twine was looking like a potential “hit.” During this period the company staff increased to more than 40 people (inclusive of contractors and offshore teams) and our monthly burn rate climbed to aggressive levels as we spent to keep up with growth.
Despite this growth and spending we still could not keep up with demand for new features and at times we experienced major scaling and performance challenges. We had always planned for several more iterations of our backend architecture to facilitate scaling the system. But now we could see the writing on the wall — we had to begin to develop a more powerful, more scalable backend for Twine, much sooner than we had expected we would need to.
This required us to increase our engineering spending further in order to simultaneously support the live version of Twine and its very substantial backend, and run a parallel development team working on the next generation of the backend and the next version of Twine on top of it. Running multiple development teams instead of one was a challenging and costly endeavor. The engineering team was stretched thin and we were all putting in 12- to 15-hour days, every day.
Breakthrough to “T2”
We began to work in earnest on a new iteration of our back-end architecture and application framework — one that could scale fast enough to keep up with our unexpectedly fast growth rate and the increasing demands on our servers that this was causing.
This initiative yielded unexpected fruit. Not only did we solve our scaling problems, but we were able to do so to such a degree that entirely new possibilities were opened up to us — ones that had previously been out of reach for purely technical reasons. In particular, semantic search.
Semantic search had always been a long-term goal of ours; however, in the first version of Twine (the one that is currently online) search was our weakest feature area, due to the challenge of scaling a semantic datastore to handle hundreds of billions of triples. But our user-studies revealed that it was in fact the feature our users wanted us to develop the most – search slowly became the dominant paradigm within Twine, especially once the content in our system reached critical mass.
Our new architecture initiative solved the semantic search problem to such a degree that we realized that not only could we scale Twine.com, we could scale it to eventually become a semantic search engine for the entire Web.
Instead of relying on users to crowdsource only a subset of the best content into our index, we could crawl large portions of the Web automatically and ingest millions and millions of Web pages, process them, and make them semantically searchable — using a true W3C Semantic Web compliant backend. (Note: Why did we even attempt to do this? We believed strongly in supporting open-standards for the Semantic Web, despite the fact that they posed major technical challenges and required tools that did not exist yet, because they promised to enable semantic application and data interoperability, one of the main potential benefits of the Semantic Web).
Based on our newfound ability to do Web-scale semantic search, we began planning the next version of Twine — Twine 2.0 (“T2”), with the help of Bob Morgan, Mark Erickson, Sasi Reddy, and a team of great designers.
The new T2 plan would merge new faceted semantic search features with the existing social, personalization and knowledge management features of Twine 1.0. It would be the best of both worlds: semantic search + social search. We began working intensively on developing T2, along with a new set of hosted developer tools that would make it easy for any webmaster to add their site into our semantic index. We were certain that with T2 we had finally “cracked the code” to the Semantic Web — we had a product plan and a strategy that could really bring the Semantic Web to everyone on the Web. It elegantly solved the key challenges to adoption, and on a technical level, by using SOLR instead of a giant triplestore, we were able to scale to unprecedented levels. It was an exciting plan and everyone on the team was confident in the direction.
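For those curious about the technical heart of this, the core idea can be sketched in a few lines: flatten each resource’s RDF statements into a flat, multi-valued document that an inverted index like SOLR can facet and search. The code below is only an illustration of that general pattern under assumed field conventions (again using Apache Jena), not our production indexing pipeline:

```java
// Sketch of flattening RDF triples into a facetable "document" for an
// inverted index like SOLR. Field names and vocabulary are invented for
// illustration; this is not Twine's production code.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;

public class TripleFlattenerSketch {

    /** Collapse all triples about one subject into one flat, facetable document. */
    static Map<String, List<String>> flatten(Resource subject) {
        Map<String, List<String>> doc = new HashMap<>();
        doc.put("id", List.of(subject.getURI()));
        StmtIterator it = subject.listProperties();
        while (it.hasNext()) {
            Statement st = it.next();
            String field = st.getPredicate().getLocalName(); // e.g. "hasTopic"
            String value = st.getObject().toString();        // URI or literal
            doc.computeIfAbsent(field, k -> new ArrayList<>()).add(value);
        }
        return doc; // in practice this would be posted to the search index
    }

    public static void main(String[] args) {
        String ns = "http://example.org/sketch#"; // hypothetical vocabulary
        Model model = ModelFactory.createDefaultModel();
        Resource item = model.createResource("http://example.org/items/42")
            .addProperty(model.createProperty(ns, "title"), "Semantic search, explained")
            .addProperty(model.createProperty(ns, "hasTopic"),
                         model.createResource(ns + "SemanticWeb"));
        System.out.println(flatten(item));
    }
}
```

The trade-off is worth noting: you give up some of the expressive power of a triplestore’s arbitrary graph queries, but you gain the horizontal scalability and faceting speed of a mature text-indexing engine.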
To see screenshots that demo T2 and our hosted development tools, click here.
The Global Recession
Our growth was fast, and so was our spending, but at the time this seemed logical because the future looked bright and we were in a race to keep ahead of our own curve. We were quickly nearing a point where we would soon need to raise another round of funding to sustain our pace, but we were confident that with our growth trends steadily increasing and our exciting plans for T2, the necessary funding would be forthcoming at favorable valuations.
We were wrong.
The global economy crashed unexpectedly, throwing a major curveball our way. We had not planned on that happening, and it certainly was inconvenient to say the least.
The recession not only hit Wall Street, it hit Silicon Valley. Venture capital funding dried up almost overnight. VC funds sent alarming letters to their portfolio companies warning of dire financial turmoil ahead. Many startups were forced to close their doors, while others made drastic sudden layoffs for better or for worse. We too made spending cuts, but we were limited in our ability to slash expenses until the new T2 platform could be completed. Once that was done, we would be able to move Twine to a much more scalable and less costly architecture, and we would no longer need parallel development teams. But until that happened, we still had to maintain a sizeable infrastructure and engineering effort.
As the recession dragged on, and the clock kept ticking down, the urgency of raising a C round increased, and finally we were faced with a painful decision. We had to drastically reduce our spending in order to wait out the recession and live to raise more funding in the future.
Unfortunately, the only way to accomplish such a drastic reduction in spending was to lay off almost 30% of our staff and cut our monthly spending by almost 40%. But by doing that we could not possibly continue to work on as many fronts as we had been doing. The result was that we had to stop most work on Twine 1.0 (the version that was currently online) and focus all our remaining development cycles and spending on the team needed to continue our work on T2.
This was extremely painful for me as the CEO, and for everyone on our team. But it was necessary for the survival of the business and it did buy us valuable time. The irony of the decision was that while it reduced our burn-rate, it also slowed us down and reduced productivity to such a degree that in the end it may have cost us the same amount of money anyway.
While much of our traffic had been organic and direct, we also had a number of marketing partnerships and PR initiatives that we had to terminate. In addition, as part of this layoff we lost our amazing and talented marketing team, as well as half our product management team, our entire design team, our entire marketing and PR budget, and much of our support and community management team. This made it difficult to continue to promote the site, launch new features, fix bugs, or to support our existing online community. And as a result the service began to decline and usage declined along with it.
To make matters worse, at around the same time as we were making these drastic cuts, Google decided to de-index Twine. To this day we are still not sure why – it could have been that Google suddenly decided we were a competitive search engine, or that their algorithm changed, or that some error in our HTML markup caused an indexing problem. We had literally millions of pages of topical user-generated content – but all of a sudden we saw drastic reductions in the number of pages being indexed, and in the ranking of those pages. This caused a very significant drop in organic traffic. With what little team I had remaining, we spent time petitioning Google and trying to get reinstated. But we never managed to return to our former levels of index prominence.
Eventually, with all these obstacles, and the fact that we had to focus our remaining budget on T2, we put Twine.com on auto-pilot and let the traffic fall off, believing that we would have the opportunity to win it back once we launched the next version. While painful to watch, this reduction in traffic and user activity at least had the benefit of reducing the pressure on the engineering team to scale the system and support it under load, giving us time to focus all our energy on getting T2 finished and on raising more funds.
But the recession dragged on and on and on, without end. VC’s remained extremely conservative and risk-averse. Meanwhile, we focused our internal work on growing a large semantic index of the Web in T2, vertical by vertical, starting with food, then games, and then many other topics (technology, health, sports, etc.). We were quite confident that if we could bring T2 to market it would be a turning point for Web search, and funding would follow.
Meanwhile we met with VC’s in earnest. But nobody was able to invest in anything due to the recession. Furthermore we were a pre-revenue company working on a risky advanced technology and VC partnerships were far too terrified by the recession to make such a bet. We encountered the dreaded “wait and see” response.
The only way we could get the funding we needed to continue was to launch T2, grow it, and generate revenues from it, but the only way we could reach those milestones was to launch T2 in the first place: a classic catch-22 situation.
We took comfort in the fact that we were not alone in this predicament. Almost every tech company at our stage was facing similar funding challenges. However, we were determined to find a solution despite the obstacles in our path.
Selling the Business
Had the recession not happened, I believe we would have raised a strong C round based on the momentum of the product and our technical achievements. Unfortunately, we, like many other early-stage technology ventures, found ourselves in the worst capital crunch in decades.
We eventually came to the conclusion that there was no viable path forward but to use the runway we had left to sell the company to another entity better able to fund the ongoing development and marketing necessary to monetize T2.
While selling the company had always been a desirable exit strategy, we had hoped to do it after the launch and growth of T2. However, we could not afford to wait any longer. With some short-term bridge funding from our existing investors, we worked with Growth Point Technology Partners to sell the company.
We met with a number of the leading Internet and media companies and received numerous offers. In the end, the best and most strategically compatible offer came from Evri, one of our sibling companies in Vulcan Capital’s portfolio. While we could have sold to larger and more established companies with very compelling offers, joining Evri was simply the best option.
And so we find ourselves at the present day. We got the best deal possible for our shareholders given the circumstances. Twine.com, my team, our users and their data are safe and sound. As an entrepreneur and CEO it is, as one advisor put it, of the utmost importance to always keep the company moving forward. I feel that I did manage to achieve this under extremely difficult economic circumstances. And for that I am grateful.
Outlook for the Semantic Web
I’ve been one of the most outspoken advocates of the Semantic Web during my tenure at Twine. So what about my outlook for the Semantic Web now that Twine is being sold and I’m starting to do other things? Do I still believe in the promise of the Semantic Web vision? Where is it going? These are questions I expect to be asked, so I will attempt to answer them here.
I continue to believe in the promise of semantic technologies, and in particular the approach of the W3C semantic web standards (RDF, OWL, SPARQL). That said, having tried to bring them to market as hard as anyone ever has, I can truly say they present significant challenges both to developers and to end-users. These challenges all stem from one underlying problem: Data storage.
Existing SQL databases are not optimal for large-scale, high-performance semantic data storage and retrieval. Yet triplestores are still not ready for prime-time. New graph databases and column stores show a lot of promise, but they are still only beginning to emerge. This situation makes it incredibly difficult to bring Web-scale semantic applications to market cost-effectively.
Enterprise semantic applications are much more feasible today, however, because existing and emerging databases and semantic storage solutions do scale to enterprise levels. But for enormous, consumer-grade Web services, there are still challenges. This is the single greatest technical obstacle that Twine faced, and it cost us a large amount of our venture funding to surmount. We finally did find a solution with our T2 architecture, but it is still not a general solution for all types of applications.
I have recently seen some new graph data storage products that may provide the levels of scale and performance needed, but pricing has not been determined yet. In short, storage and retrieval of semantic graph datasets is a big unsolved challenge that is holding back the entire industry. We need federated database systems that can handle hundreds of billions to trillions of triples under high load conditions, in the cloud, on commodity hardware and open source software. Only then will it be affordable to make semantic applications and services at Web-scale.
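To make the storage problem concrete, consider the kind of query a semantic application runs constantly. The sketch below (again Apache Jena, with an invented vocabulary and data) executes instantly against a tiny in-memory graph, but each additional triple pattern in the query is another join; at hundreds of billions of triples, under load, those joins are precisely what today’s stores struggle with:

```java
// A typical semantic query joins several triple patterns. Trivial on the
// tiny in-memory graph built here; enormously hard at Web scale, where each
// pattern is a massive join. Vocabulary and data are invented for the example.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

public class JoinCostSketch {
    public static void main(String[] args) {
        String ex = "http://example.org/ns#";
        Model model = ModelFactory.createDefaultModel();
        Resource user = model.createResource(ex + "alice")
            .addProperty(model.createProperty(ex, "name"), "Alice");
        model.createResource(ex + "doc1")
            .addProperty(model.createProperty(ex, "hasTopic"),
                         model.createResource(ex + "SemanticWeb"))
            .addProperty(model.createProperty(ex, "addedBy"), user);

        String sparql =
            "PREFIX ex: <http://example.org/ns#> " +
            "SELECT ?doc ?author WHERE { " +
            "  ?doc  ex:hasTopic ex:SemanticWeb . " + // pattern 1
            "  ?doc  ex:addedBy  ?user . " +          // join on ?doc
            "  ?user ex:name     ?author . " +        // join on ?user
            "}";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution qexec = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qexec.execSelect();
            ResultSetFormatter.out(System.out, results, query);
        }
    }
}
```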
I believe that semantic metadata is essential for the growth and evolution of the Web. It is one of the only ways we can hope to dig out from the increasing problem of information overload. It is one of the only ways to make search, discovery, and collaboration smart enough to really be significantly better than it is today.
But the notion that everyone will learn and adopt standards for creating this metadata themselves is flawed in my opinion. They won’t. Instead, we must focus on solutions (like Twine and Evri) that make this metadata automatically by analyzing content semantically. I believe this is the most practical approach to bringing the value of semantic search and discovery to consumers, as well as Webmasters and content providers around the Web.
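To illustrate the pattern (and only the pattern): analyze the text of a page, then emit machine-readable statements about it automatically. In the toy sketch below, the “analysis” is a trivial keyword lookup standing in for a real NLP and entity-extraction pipeline, and the vocabulary is invented for the example:

```java
// Toy sketch of generating semantic metadata automatically: analyze text,
// then emit RDF about it. The keyword table is a stand-in for a real NLP
// pipeline; the vocabulary is hypothetical.
import java.util.Map;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class AutoTagSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/autotag#"; // hypothetical namespace
        Map<String, String> topics = Map.of(       // toy stand-in for NLP
            "semantic web", ns + "topics/SemanticWeb",
            "search",       ns + "topics/Search");

        String pageUri  = "http://example.org/some-page";
        String pageText = "A post about the Semantic Web and the future of search.";

        Model model = ModelFactory.createDefaultModel();
        Property mentionsTopic = model.createProperty(ns, "mentionsTopic");
        Resource page = model.createResource(pageUri);

        // Tag the page with every topic the (toy) analyzer finds in its text.
        String lower = pageText.toLowerCase();
        for (Map.Entry<String, String> e : topics.entrySet()) {
            if (lower.contains(e.getKey())) {
                page.addProperty(mentionsTopic, model.createResource(e.getValue()));
            }
        }
        model.write(System.out, "TURTLE");
    }
}
```

The user never sees RDF or ontologies; the system quietly produces the metadata on their behalf, which is exactly the point.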
The major search engines are all working on various forms of semantic search, but to my knowledge none of them are fully supporting the W3C standards for the Semantic Web. In some cases this is because they are attempting to co-opt the standards for their own competitive advantage, and in other cases it is because it is simply easier not to use them. But in taking the easier path, they are giving up the long-term potential gains of a truly open and interoperable semantic ecosystem.
I do believe that whoever enables this open semantic ecosystem first will win in the end — because it will have greater and faster network effects than any closed competing system. That is the promise and beauty of open standards: everyone can feel safe using them since no single commercial interest controls them. At least that’s the vision I see for the Semantic Web.
As far as where the Semantic Web will add the most value in years to come, I think we will see it appear in some new areas. First and foremost is e-commerce, an area that is ripe with structured data that needs to be normalized, integrated and made more searchable. This is perhaps the most potentially profitable and immediately useful application of semantic technologies. It’s also one where there has been very little innovation. But imagine if eBay or Amazon or Salesforce.com provided open-standards-compliant semantic metadata and semantic search across all their data.
Another important opportunity is search and SEO — these are the areas that Twine’s T2 project focused on, by enabling webmasters to easily and semi-automatically add semantic descriptions of their content into search indexes, without forcing them to learn RDF and OWL and do it manually. This would create a better SEO ecosystem and would be beneficial not only to content providers and search engines, but also to advertisers. This is the approach that I believe the major search engines should take.
Another area where semantics could add a lot of value is social media — by providing semantic descriptions of user profiles and user profile data, as well as social relationships on the Web, it would be possible to integrate and search across all social networks in a unified manner.
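This is one place where an open vocabulary already exists: FOAF (“Friend of a Friend”) describes people and their relationships in RDF. Here is a minimal sketch of a portable profile, with all names and URIs invented for the example:

```java
// Minimal sketch of a portable social profile using the real FOAF
// vocabulary (http://xmlns.com/foaf/0.1/). People and URIs are invented.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class FoafSketch {
    public static void main(String[] args) {
        String foaf = "http://xmlns.com/foaf/0.1/";
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("foaf", foaf);

        Resource personClass = model.createResource(foaf + "Person");
        Property name  = model.createProperty(foaf, "name");
        Property knows = model.createProperty(foaf, "knows");

        Resource alice = model.createResource("http://example.org/people/alice")
            .addProperty(RDF.type, personClass)
            .addProperty(name, "Alice Example");
        Resource bob = model.createResource("http://example.org/people/bob")
            .addProperty(RDF.type, personClass)
            .addProperty(name, "Bob Example");

        // One triple expresses a social link that any FOAF-aware application,
        // on any network, can read and traverse.
        alice.addProperty(knows, bob);
        model.write(System.out, "TURTLE");
    }
}
```

A profile like this is not locked inside any one social network’s database; any service that understands FOAF can consume it.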
Finally, another area where semantics will be beneficial is to enable easier integration of datasets and applications around the Web — currently every database is a separate island, but by using the Semantic Web appropriately data can be freed from databases and easily reused, remixed and repurposed by other applications. I look forward to the promise of a truly open data layer on the Web, when the Web becomes essentially one big open database that all applications can use.
Lessons Learned and Advice for Startups
While the outcome for Twine was decent under the circumstances, and was certainly far better than the alternative of simply running out of money, I do wonder how it could have been different. I ask myself what I learned and what I would do differently if I had the chance or could go back in time.
I think the most important lessons I learned, and the advice that I would give to other entrepreneurs can be summarized with a few key points:
- Raise as little venture capital as possible. Raise less than you need, not more than you need. Don’t raise extra capital just because it is available. Later on it will make it harder to raise further capital when you really need it. If you can avoid raising venture capital at all, do so. It comes with many strings attached. Angel funding is far preferable. But best of all, self-fund from revenues as early as you can, if possible. If you must raise venture capital, raise as little as you can get by on — even if they offer you more. But make sure you have at least enough to reach your next funding round — and assume that it will take twice as long to close as you think. It is no easy task to get a startup funded and launched in this economy — the odds are not in your favor — so play defense, not offense, until conditions improve (years from now).
- Build for lower exits. Design your business model and capital strategy so that you can deliver a good ROI to your investors at an exit under $30mm. Exit prices are going lower, not higher. There is less competition and fewer buyers, and they know it’s a buyer’s market. So make sure your capital strategy gives you the option to sell in lower price ranges. If you raise too much you create a situation where you either have to sell at a loss, or raise even more funding, which only makes the exit goal that much harder to reach.
- Spend less. Spend less than you want to, less than you need to, and less than you can. When you are flush with capital it is tempting to spend it and grow aggressively, but don’t. Assume the market will crash — downturns are more frequent and last longer than they used to. Expect that. Plan on it. And make sure you keep enough capital in reserve to spend 9 to 12 months raising your next round, because that is how long it takes in this economy to get a round done.
- Don’t rely on user-traction to raise funding. You cannot assume that user traction is enough to get your next round done. Even millions of users and exponential growth are not enough. VC’s and their investment committees want to see revenues, ideally at least breakeven revenues. A large service that isn’t bringing in revenues yet is not a business; it’s an experiment. Perhaps it’s one that someone will buy, but if you can’t find a buyer then what? Don’t assume that VC’s will fund it. They won’t. Venture capital investing has changed dramatically — early stage and late stage deals are the only deals that are getting real funding. Mid-stage companies are simply left to die, unless they are profitable or will soon be profitable.
- Don’t be afraid to downsize when you have to. It sucks to fire people, but it’s sometimes simply necessary. One of the worst mistakes is to not fire people who should be fired, or to not do layoffs when the business needs require it. You lose credibility as a leader if you don’t act decisively. Often friendships and personal loyalties prevent or delay leaders from firing people that really should be fired. While friendship and loyalty are noble they unfortunately are not always the best thing for the business. It’s better for everyone to take their medicine sooner rather than later. Your team knows who should be fired. Your team knows when layoffs are needed. Ask them. Then do it. If you don’t feel comfortable firing people, or you can’t do it, or you don’t do it when you need to, don’t be the CEO.
- Develop cheaply, but still pay market salaries. Use offshore development resources, or locate your engineering team outside of the main “tech hub” cities. It is simply too expensive to compete with large public and private tech companies to pay top dollar for engineering talent in places like San Francisco and Silicon Valley. The cost of top-level engineers is too high in major cities to be affordable and the competition to hire and retain them is intense. If you can get engineers to work for free or for half price then perhaps you can do it, but I believe you get what you pay for. So rather than skimp on salaries, pay people market salaries, but do it where market salaries are more affordable.
- Only innovate on one frontier at a time. For example, either innovate by making a new platform, or a new application, or a new business model. Don’t do all of these at once; it’s just too hard. If you want to make a new platform, just focus on that; don’t try to make an application too. If you want to make a new application, use an existing platform rather than also building a platform for it. If you want to make a new business model, use an existing application and platform — they can be ones you have built in the past, but don’t attempt to do it all at once. If you must do all three, do them sequentially, and make sure you can hit cash flow breakeven at each stage, with each one. Otherwise you’re at risk in this economy.
I hope that this advice is of some use to entrepreneurs (and VC’s) who are reading this. I’ve personally made all these mistakes myself, so I am speaking from experience. Hopefully I can spare you the trouble of having to learn these lessons the hard way.
What We Did Well
I’ve spent considerable time in this article focusing on what didn’t go according to plan, and the mistakes we’ve learned from. But it’s also important to point out what we did right. I’m proud of the fact that Twine accomplished many milestones, including:
- Pioneering the Semantic Web and leading the charge to make it a mainstream topic of conversation.
- Creating the most powerful, developer-friendly platform for the Semantic Web.
- Successfully completing our work on CALO, the largest Semantic Web project in the US.
- Launching the first mainstream consumer application of the Semantic Web.
- Having a very successful launch, covered by hundreds of articles.
- Gaining users extremely rapidly — faster than Twitter did in its early years.
- Hiring and retaining an incredible team of industry veterans.
- Raising nearly $24mm of venture capital over 2 rounds, because our plan was so promising.
- Developing more than 20 patents, several of which are fundamentally important for the Semantic Web field.
- Surviving two major economic bubbles and the downturns that followed.
- Innovating and most of all, adapting to change rapidly.
- Breaking through to T2 — a truly awesome technological innovation for Web-scale semantic search.
- Selling the company in one of the most difficult economic environments in history.
I am proud of what we accomplished with Twine. It’s been “a long strange trip” but one that has been full of excitement and accomplishments to remember.
Conclusions
If you’ve actually read this far, thank you. This is a big article, but after all, Twine is a big project – one that lasted nearly 5 years (or 9 years if you include our original research phase). I’m still bullish on the Semantic Web, and genuinely very enthusiastic about what Evri will do with Twine.com going forward.
Again I want to thank the hundreds of people who have helped make Twine possible over the years – but in particular the members of our technical and management team who went far beyond the call of duty to get us to the deal we have reached with Evri.
While this is certainly the end of an era, I believe that this story has only just begun. The first chapters are complete and now we are moving into a new phase. Much work remains to be done and there are certainly still challenges and unknowns, but progress continues and the Semantic Web is here to stay.