It’s Time for an Open Standard for Cards

Cards are fast becoming the hot new design paradigm for mobile apps, but their importance goes far beyond mobile. Cards are modular, bite-sized content containers designed for easy consumption and interaction on small screens, but they are also a new metaphor for user interaction that is spreading across all manner of other apps and content.

The concept of cards emerged from the stream — the short content notifications layer of the Internet — which has been evolving since the early days of RSS, Atom and social media.

Read the rest on TechCrunch

Interest Networks are at a Tipping Point

UPDATE: There’s already a lot of good discussion going on around this post in my public twine.

I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.

In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.

At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem – they deliver substance.

So, what is an interest network?

In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.

Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.

I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally important – if not more important – relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and other documents, and so on.

This is where interest networks come in. It’s still early days, to be clear, but interest networks operate on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivering the best of that world to you, every day.

We’re seeing more and more companies think about how to capitalize on this trend. Suddenly, it seems, there are lots of different services that can be viewed as interest networks in one way or another (though this category has been building for many months), and here are some examples:

What all of these interest networks have in common is some sort of bottom-up, user-driven crawl of the Web. That’s how I’ve described Twine when we get the question of how we propose to index the entire Web (the answer: we don’t – we let our users tell us what they’re most interested in, and we follow their lead).
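To make the idea of a bottom-up, user-driven crawl concrete, here is a minimal sketch in Python. Everything here is illustrative – the names and the toy link graph are hypothetical, not Twine’s actual implementation: the point is simply that the crawl frontier is seeded by what users bookmark, not by the whole Web.

```python
from collections import deque

def user_driven_crawl(user_bookmarks, get_links, max_pages=100, max_depth=2):
    """Crawl outward from user-supplied bookmarks only, rather than
    attempting to index the entire Web. `get_links(url)` returns the
    outbound links of a page (stubbed below for illustration)."""
    seen = set()
    frontier = deque((url, 0) for url in user_bookmarks)
    crawled = []
    while frontier and len(crawled) < max_pages:
        url, depth = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        crawled.append(url)
        if depth < max_depth:
            for link in get_links(url):
                if link not in seen:
                    frontier.append((link, depth + 1))
    return crawled

# A toy link graph standing in for the Web.
web = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": [],
    "d": ["e"],
    "e": [],
}
pages = user_driven_crawl(["a"], lambda u: web.get(u, []))
```

Only pages reachable within a couple of hops of a user’s bookmarks are ever visited; everything else on the (toy) Web is simply ignored.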

Most interest networks exhibit the following characteristics as well:

  • They have some sort of bookmarking/submission/markup function to store and map data (often using existing metaphors, even if what’s under the hood is new)
  • They also have some sort of social sharing function to provide the network benefit (this isn’t exclusive to interest networks, obviously, but it is characteristic)
  • And in most cases, interest networks look to add some sort of “smarts” or “recommendations” capability to the mix (that is, you get more out than you put in)

This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.

To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.

At first, Twine enriches your data with semantic tags and links to related content via our recommendation engine, which learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine, and further fleshing out the semantic graph of your interests, which in turn results in even more relevant recommendations.

The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.

Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.

Six months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.

I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts — Carla, Jeremiah, and others, are you listening?

Now, I obviously think that Twine is THE interest network of choice. After all, we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and in the growing community of passionate users we’ve attracted.

Our 1.0 release really focuses on UE/usability, a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) of 6 minutes/user (all time) and 12 minutes/user (over the last month) is something the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”

Now that anyone can join, it will be fun and gratifying to watch Twine grow.

Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.

Stay tuned!

Burma Update: Protestors Cremated Alive; Monks Massacred in Jungle

The situation in Burma is far worse than the mainstream media has reported so far. Watch this video that was just smuggled out showing soldiers beating unarmed protesters. There are now reports coming in from eyewitnesses of young school students being shot by the army, masses of injured protestors being cremated alive, and thousands of monks and other protesters being killed and dumped in mass graves in the jungles. The junta has now imprisoned thousands of monks and plans to "send them away." The Burmese military junta is known for torture, summary executions, and is listed with Somalia as one of the most corrupt regimes in the world. This is essentially a form of genocide or state-sponsored terrorism — in this case by a regime against its own people. Help get the word out. Sign this petition. This has to be stopped. The Burmese people are helpless and need protection from the world community.

Envisioning the Whole Digital Person

Another article of note on the subject of our evolving digital lives and what user-experience designers should be thinking about:

Our lives are becoming increasingly digitized—from the ways we communicate, to our entertainment media, to our e-commerce transactions, to our online research. As storage becomes cheaper and data pipes become faster, we are doing more and more online—and in the process, saving a record of our digital lives, whether we like it or not.

In the coming years, our ability to interact with the information we’re so rapidly generating will determine how successfully we can manage our digital lives. There is a great challenge at our doorsteps—a shift in the way we live with each other.

As designers of user experiences for digital products and services, we can make people’s digital lives more meaningful and less confusing. It is our responsibility to envision not only techniques for sorting, ordering, and navigating these digital information spaces, but also to devise methods of helping people feel comfortable with such interactions. To better understand and ultimately solve this information management problem, we should take a holistic view of the digital person. While our data might be scattered, people need to feel whole.

'Bemes' are Defining the Blogosphere

Tom Hayes has an interesting post in which he coins the word "beme" to mean a meme that spreads in the blogosphere.

Michael Malone’s ABC News column on Thursday mentioning "bemes" has certainly produced a lot of interest.  Originally, I coined the word beme to describe a meme propagated by blogs and bloggers.  Now I can see that the turn of phrase has a much bigger potential to capture the rapidly-moving cultural touchstones of the Bubble Generation.

As you may know, "meme" was first defined by Richard Dawkins in 1976 as "a unit of cultural information" spread from one mind to another.  In other words, a viral idea that eventually becomes common knowledge.

Fast forward three decades, and it seems to me that technology has turbo-charged the meme process.  Looking for the mot juste to describe a "purposeful" meme fed into the vast human network of the Internet – whether by blog, email, video, phonecast, social media or other viral means – beme seems to fit the bill.

A beme is a turbo-charged meme made possible entirely by the existence of the network effect.  A beme can be impactful because it is lurid (a photo of a panty-less Britney Spears), humorous (a whimsical video of the band OK Go on treadmills), or gut-wrenching (the sad tirade by comedian Michael Richards).  A beme can cement an idea with the public in a way that cannot be legislated or regulated.  No legal effort by Cisco to enforce a trademark, for example, will make the public unlearn that Apple produces the iPhone.

  • A meme is old media, a beme is new media.
  • A meme takes off by accident, a beme by design.
  • A meme can take years to surface, a beme hours.

Venice Project Making Heavy Use of RDF

I just found out from Pete, that the Venice Project is making really heavy use of RDF. Very interesting. Another major proof point. It’s looking like 2007 is going to be the year of mainstream RDF applications. It sounds like there are some similarities between what the Venice Project is making, on a platform level, and what we’ve already built on a platform level. Of course the application they are making (video syndication, as I understand it) and the application we are making (not video; not announced yet) are completely different.

What is the Semantic Web, Actually?

I’ve read several blog posts reacting to John Markoff’s article today. There seem to be some misconceptions in those posts about what the Semantic Web is and is not. Here I will try to succinctly correct a few of the larger misconceptions I’ve run into:

  • The Semantic Web is not just a single Web. There won’t be one Semantic Web, there will be thousands or even millions of them, each in their own area. They will all be part of one Semantic Web in that they will use the same open-standard languages and their data will be universally accessible, but they won’t all be run by any single company. They will connect together over time, forming a tapestry. But nobody will own this or run this as a single service. It will be just as decentralized as the Web already is.
  • The Semantic Web is not separate from the existing Web. The Semantic Web won’t be a new Web apart from the Web we already have. It simply adds new metadata and data to the existing Web. It merges right into the existing HTML Web just like XML does, except this new metadata is in RDF (since RDF can in fact be expressed in XML).
  • The Semantic Web is not just about unstructured data. In fact, the Semantic Web is really about structured data: it provides a means (RDF) to turn any content or data into structured data that other software can make use of. This is really what RDF enables.
  • The Semantic Web does not require complex ontologies. Even without making use of OWL and more sophisticated ontologies, powerful data-sharing and data-integration can be enabled on the existing Web using even just RDF alone.
  • The Semantic Web does not only exist on Web pages. RDF works inside of applications and databases, not just on Web pages. Calling it a "Web" is a misnomer of sorts — it’s not just about the Web, it’s about all information, data and applications.
  • The Semantic Web is not only about AI, and doesn’t require it. There are huge benefits from the Semantic Web without ever using a single line of artificial intelligence code. While the next-generation of AI will certainly be enabled by richer semantics, AI is not the only benefit of RDF. Making data available in RDF makes it more accessible, integratable, and reusable — regardless of any AI. The long-term future of the Semantic Web is AI for sure — but to get immediate benefits from RDF no AI is necessary.
  • The Semantic Web is not only about mining, search engines and spidering. Application developers and content providers, and end-users, can benefit from using the Semantic Web (RDF) within their own services, regardless of whether they expose that RDF metadata to outside parties. RDF is useful without doing any data-mining — it can be baked right into content within authoring tools and created transparently when information is published. RDF makes content more manageable and frees developers and content providers from having to look at relational data models. It also gives end-users better ways to collect and manage content they find.
  • The Semantic Web is not just research. It’s already in use and starting to reach the market. The government uses it of course. But also so do companies like Adobe, and more recently Yahoo (Yahoo Food has started to use some Semantic Web technologies now). And one flavor of RSS is defined with RDF. Oracle has released native RDF support in their products. The list goes on…
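The point above that "even just RDF alone" enables data-sharing and data-integration can be made concrete with a toy sketch. Plain Python tuples stand in for real RDF tooling here, and the identifiers are made up for illustration (in actual RDF they would be URIs), but the essential idea survives: everything reduces to (subject, predicate, object) triples, so merging independent data sources is trivial and requires no shared schema, no ontology, and no central owner.

```python
# Two independent "sites" publish statements as triples.
site_a = {
    ("markoff_article", "author", "John Markoff"),
    ("markoff_article", "topic", "semantic_web"),
}
site_b = {
    ("semantic_web", "uses", "RDF"),
    ("markoff_article", "published_in", "NYT"),
}

# Data integration is just set union -- no coordination required.
graph = site_a | site_b

def query(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return {t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Everything known about the article, drawn from both sources at once:
about_article = query(graph, s="markoff_article")
```

Even this trivial model shows why structured data beats unstructured text for integration: the moment both sources use triples, their knowledge composes automatically.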


Excellent Feedback from Om Malik

Today A-List blogger and emerging "media 2.0" mogul, Om Malik, dropped by our offices to get a confidential demo of what we are building. We’ve asked Om to keep a tight lid on what we showed him, but he may be releasing at least a few hints in the near future.

Om was there in the early days of the Web and really understands the industry and the content ecosystem. I remember running into him in NYC when I was a co-founder of EarthWeb. He’s seen a lot of technologies come and go, and he has a huge knowledgebase in his head. So he was an excellent person to speak to about what we are doing.

He gave us some of the most useful user feedback about our product that we’ve ever gotten. One of our target audiences is content creators, and what Om is building over at Gigaom is a perfect example. He is a hard-core content creator. So he really understands deeply the market pain that we are addressing. And he had some incredibly useful comments, tweaks and suggestions for us. During the meeting there were quite a few aha moments for me personally – several new angles on, and benefits of, our product. Meeting with folks like Om, who represent potential users of what we are building, is really helpful to us in understanding the needs and preferences of content creators today. I’m really excited to start doing some design around some of the suggestions he made.

Of course, the needs of content providers are only one half of the equation. We’re also addressing the needs of content consumers with our product. In order to really solve the problems facing content creators we also have to address the problems faced by their readers. It’s a full ecosystem, a virtuous cycle — a whole new dimension of the Web.

Radar Networks is Seeking Search Engineers for Large-Scale Web Mining Initiative

My company, Radar Networks, is building a very large dataset by crawling and mining the Web. We then apply a range of new algorithms to the data (part of our secret sauce) to generate some very interesting and useful new information about the Web. We are looking for a few experienced search engineers to join our team — specifically people with hands-on experience designing and building large-scale, high-performance Web crawling and text-mining systems. If you are interested, or you know anyone who is interested or might be qualified for this, please send them our way. This is your chance to help architect and build a really large and potentially important new system. You can read more specifics about our open jobs here.

A Cool Thingy…

This is cool. Click to see why. I think this idea has great value for viral, meme-based Web advertising. Just imagine: advertisers could release really cool animations, and site owners could add them to their sites for entertainment or humor. The animations could run ads within them as well. It’s fun. Everyone wins; everyone’s happy. And of course users can aim these animations at any other site, so visitors who like them can spread them to their own sites. Very smart!!! Very Web 2.0.

Folktologies — Beyond the Folksonomy vs. Ontology Distinction

First of all, I know Clay Shirky, and he’s a good fellow. But he’s simply wrong in his claim that "tagging" (of the flavor I call "social tagging") is inherently better than the use of formal ontologies. Clay favors the tagging approach because it is bottom-up and emergent in nature, and he argues against ontologies because pre-specification cannot anticipate the future. But this is a simplistic view of both approaches. One could just as easily argue against tagging systems because they don’t anticipate the future — they are shortsighted, now-oriented systems that fail to capture the "big picture" or to optimally organize resources for the long-term. Their saving grace is that over time they do (hopefully) self-organize and prune out the chaff, but that depends both on the level of participation and the quality of that participation.

Continue reading

My "A Physics of Ideas" Manifesto has been Published!

Change This, a project that helps to promote interesting new ideas so that they get noticed above the noise level of our culture has published my article on “A Physics of Ideas” as one of their featured Manifestos. They use an innovative PDF layout for easier reading, and they also provide a means for readers to provide feedback and even measure the popularity of various Manifestos. I’m happy this paper is getting noticed finally — I do think the ideas within it have potential. Take a look.

A Blog Novel

Rohit Gupta, a Bombay-based writer, who also reads this blog, is writing a blog-novel. He has come up with an innovative way to promote it — by letting readers choose quotes from his text to “own” — by choosing a quote and linking to his blog-novel from it, he will in return link back to your blog from that quote in his novel. It’s similar to my earlier GoMeme experiments, except in this case his novel is the meme that is spreading via a cooperative linking incentive.

Good idea, Rohit! I choose this quote from your novel:

The other article, an interesting one, is a 2000-word piece on the history of mathematical heretics known as the Circlesquarers, and the transcendental nature of the number Π.

Detailed Analysis of GoMeme 1.0 Results

Greg Tyrell, a PhD student with a strong interest in bioinformatics, has put together a detailed analysis and report on the GoMeme 1.0 experiment, containing several visualizations and results of the survey. Nice work Greg!

Also in other news, Google has started indexing the results. Currently there are 733 results when searching for sites with the original, super-long GUID. There are 867 results when searching for the unique string “To add your blog to this experiment, copy this entire posting to your blog, and fill out the info below, substituting your own information in your posting, where appropriate” which was in the instructions — this number should include sites that did not put the whole GUID in. Technorati, which seems to be working better today, finds 58 sites with the long GUID, and none for the instructions text above. So I guess Google wins so far. But I am glad that Technorati is starting to get their bugs fixed! I noticed that blog stats are starting to be updated again.

I also got an interesting link to another Meme visualization, which although having nothing to do with our experiment as far as I can tell, is a nice concept. It takes forever to build out the full visualization and the tree appears to be almost white on my white background making it hard to see, but still worth a look — Meme Tree

GoMeme 2.0 – Help Test This Meme

Note: This experiment is now finished.

GoMeme 2.0 — Copy This GoMeme From This Line to The End of this article, and paste into your blog. Then follow the instructions below to fill it out for your site.

Steal This Post!!!! This is a GoMeme – a new way to spread an idea along social networks. This is the second-generation meme in our experiment in spreading ideas. To find out what a GoMeme is, and how this experiment works, or just to see how this GoMeme is growing and discuss it with others, visit the Root Posting and FAQ for this GoMeme at .

Continue reading

Can You Imagine What Would Happen if MoveOn.Org Used the GoMeme Concept?

I wonder if anyone from MoveOn.Org or the Republicans will notice our GoMeme experiments? (Not that I’m taking sides — I’ll simply be happy if somebody wins the election!) Grassroots political campaigns could potentially really benefit from the techniques we’re testing here. For example, imagine a “blog meme” for a political campaign — a meme that states some useful facts about a candidate and their opponent, perhaps has some survey questions and a GUID, and has the added benefit of a cool Improve-Your-Google-Ranking-By-Hosting-This-Meme candy coating? Wow — it could spread the message to a lot of blogs pretty quickly if done right. That might actually work. But I try to stay out of politics, so I’m not taking sides here or endorsing anyone. If you read this and know the “right people” — feel free to suggest the idea to them.

FAQ for GoMeme 2.0

This posting is the FAQ and introduction for a new, improved, second-generation meme experiment that is designed to spread faster and more broadly than the first meme experiment. We call this kind of meme a “GoMeme” (pronounced Go-Meem), because it is a meme that is designed to Go. The actual GoMeme, which you can add to your Website, is located here. Before you do this, please read this FAQ so you know how it works.

Continue reading

A New Blogging Feature: Automated "Social Syndication" Networks

Here’s an idea I’ve had recently that is related to the Meme Propagation experiment (see posts below on this blog for more about that ongoing experiment). The concept is for a new, meme-based, way to syndicate content across blogs. Here’s how it might work:

1. You join a “meme syndication network” by joining at a central site. You get an account where you can profile your blog. You also set your blog’s syndication inputs — a set of other blogs that are also in the network that you are willing to automatically syndicate content from.

2. When you complete this, you are given an automatically generated HTML element containing a script to put in your blog sidebar, or anywhere else in your layout. This script is auto-generated for you from a central site that manages the network. The script automatically displays short excerpts for blog postings (pieces of microcontent) that have been “picked up” by your site from your registered “inputs” in the network. You place this script in your layout.

3. In the area created by the script in your site, you see a listing of blog postings that have been syndicated to your site from your inputs. You can post to your network by going to your account at the central network site and posting (or copying in the URL for anything you want to post) there. Any network-member sites that treat your node in the network as an “input” will then *automatically* pick up your posting and display it on their page.
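The steps above can be sketched as a tiny in-memory model. This is purely hypothetical — no such service exists, and all the names are made up — but it captures the propagation rule: whoever lists you as an input automatically displays your postings.

```python
class SyndicationNetwork:
    """Toy model of the 'meme syndication network' described above:
    members register, choose input blogs, and postings propagate
    automatically to every member who lists the poster as an input."""

    def __init__(self):
        self.inputs = {}   # member -> set of members they syndicate from
        self.posts = {}    # member -> list of their own postings

    def join(self, member, inputs=()):
        self.inputs[member] = set(inputs)
        self.posts.setdefault(member, [])

    def post(self, member, excerpt):
        self.posts[member].append(excerpt)

    def sidebar(self, member):
        """What the auto-generated sidebar script would display:
        excerpts picked up from this member's registered inputs."""
        return [(src, excerpt)
                for src in sorted(self.inputs.get(member, ()))
                for excerpt in self.posts.get(src, [])]

net = SyndicationNetwork()
net.join("alice")
net.join("bob", inputs=["alice"])
net.post("alice", "New essay on memes")
```

Note that syndication is pull-based from the reader’s declared inputs, so nobody can push content onto a blog that hasn’t opted in.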

Continue reading

GoMeme 1.0 — Testing Meme Propagation In Blogspace: Add Your Blog!

NOTE: This experiment is now finished.

This is an experiment in spreading ideas across weblogs using the principles of viral marketing and social networks using a new method for making content more viral, which we call a "GoMeme."

Continue reading

New Version of My "Metaweb" Graph — The Future of the Net



Many people have requested this graph and so I am posting my latest version of it. The Metaweb is the coming “intelligent Web” that is evolving from the convergence of the Web, Social Software and the Semantic Web. The Metaweb is starting to emerge as we shift from a Web focused on information to a Web focused on relationships between things — what I call “The Relationship Web” or the “Relationship Revolution.”

We see early signs of this shift to a Web of relationships in the sudden growth of social networking systems. As the semantics of these relationships continue to evolve, the richness of the “arcs” will begin to rival that of the “nodes” that make up the network.

This is similar to the human brain — individual neurons are not particularly important or effective on their own, rather it is the vast networks of relationships that connect them that encode knowledge and ultimately enable intelligence. And like the human brain, in the future Metaweb, technologies will emerge to enable the equivalent of “spreading activation” to propagate across the network of nodes and arcs. This will provide a means of automatically growing links, weighting links, making recommendations, and learning across distributed graphs of nodes and links. This may resemble a sort of “Hebbian learning” across the link structure of the network — enhancing the strength of frequently used connections and dampening less used links, and even growing new transitive links when appropriate.
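To show what this might look like mechanically, here is a minimal sketch of spreading activation over a weighted link graph, with a Hebbian-style update that strengthens frequently used links and dampens the rest. The graph and parameters are invented for illustration; this is one simple formulation of the ideas above, not a definitive design.

```python
def spread_activation(graph, source, decay=0.5, threshold=0.1):
    """Propagate activation from `source` across a weighted link graph,
    attenuating by edge weight and a decay factor at each hop."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph.get(node, {}).items():
            a = activation[node] * weight * decay
            if a > activation.get(neighbor, 0.0) and a >= threshold:
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation

def hebbian_update(graph, used_edges, boost=0.1, dampen=0.99):
    """Strengthen links that were just used; slightly dampen all others."""
    for node, edges in graph.items():
        for neighbor in edges:
            if (node, neighbor) in used_edges:
                edges[neighbor] = min(1.0, edges[neighbor] + boost)
            else:
                edges[neighbor] *= dampen

graph = {
    "web": {"semantic_web": 0.8, "social_software": 0.6},
    "semantic_web": {"rdf": 0.9},
}
act = spread_activation(graph, "web")
```

Activation starting at "web" reaches "rdf" transitively but fades with distance, which is exactly the behavior that could drive recommendations and automatic link weighting across a distributed graph.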

As such processes unfold with increasing intelligence, in a totally decentralized and grassroots manner, we will begin to see signs of emergent “transhuman” intelligences on the network. Web services are the beginning of this — but imagine if they were connected to autonomous intelligent agents, roaming the network and able to interact with one another, Web sites, and even people. These next-layer intelligences will begin to function as brokers, associators, editors, publishers, recommenders, advertisers, researchers, defenders, buyers, sellers, monitors, aggregators, distributors, integrators, translators, and also as knowledge-stewards responsible for constantly improving the structure and quality of subsets of the Web that they oversee. And while many of these agents will be able to interact intelligently with humans, not all of them will — most will probably just have interfaces for interacting with other agents.

Vast systems of “hybrid intelligence” (humans + intelligent software) will form — for example, next-generation communities that intelligently self-organize around emerging topics and trends, smart marketplaces that self-optimize to reduce the cost of transactions for their participants, ‘group minds’ and ‘enterprise minds’ that embody and manage the collective cognition of teams and organizations, and knowledge networks that function to enable distributed collective intelligence among networks of individuals, across communities and business relationships.

As the network becomes increasingly autonomous and self-organizing we may say that the network-as-a-whole is becoming “intelligent.” But it will be several steps beyond that before it finally “wakes up” — when the various processes of the network reach that point at which the entire system truly functions as a coordinated, self-aware intelligence. This will require the formation of many higher layers of intelligence — leading to something that functions like the cerebral cortex in humans. It will also require something that functions as its virtual “self-awareness” — an internal process of meta-level self-representation, self-projection, self-feedback, self-analysis and self-improvement within the network. For a map of how this may actually unfold over time we might look at the evolutionary history of nervous systems on Earth.

As structures that provide virtual higher-order cognition and self-awareness to the network emerge, connect to one another, and gain sophistication, the Global Brain will self-organize into a Global Mind — the intelligence of the whole will begin to outpace the intelligence of any of its parts and thus it will cross the threshold from being just a “bunch of interacting parts” to “a new higher-order whole” in its own right — a global intelligent Metaweb for our planet.

As I predicted .. Lifelogs are coming…

I call it a Lifelog — Nokia calls it a “Lifeblog” (my terminology is better) — but it’s the same idea — a log of all the stuff you experience — your whole life, blogged and online. OK, but the key is to make sure I can keep my Lifelog private — or at least parts of it private! I would like my camera phone to take a photo every minute and add it to my Lifelog automatically. Then I can speed through it flip-book-animation style to get to a section I am interested in. Next would be to add a digital streaming voice recorder to my phone and record whatever is being said on every phone call, and even when I am not on a call, at 1-minute intervals. Using voiceprints and speech-to-text we can then index who was speaking and what was said as a way to search and navigate the Lifelog — for example, this would make it possible to find all photos that correspond to times when Sue was speaking about “Internet.” With a little more work we could link this to additional semantics and make it really searchable.
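The "photos when Sue was speaking about Internet" query is easy to sketch once each minute of the log carries a photo plus a speaker-tagged transcript. The data model and names below are hypothetical, assuming the voiceprint and speech-to-text steps imagined above have already produced the tags:

```python
from dataclasses import dataclass

@dataclass
class LifelogEntry:
    """One minute of a hypothetical lifelog: a photo reference plus any
    speech transcribed during that minute, tagged by speaker."""
    minute: int
    photo: str
    speaker: str = ""
    transcript: str = ""

def photos_when(log, speaker, keyword):
    """Photos from minutes when `speaker` was talking about `keyword`."""
    return [e.photo for e in log
            if e.speaker == speaker
            and keyword.lower() in e.transcript.lower()]

log = [
    LifelogEntry(1, "img_0001.jpg", "Sue", "the Internet is changing"),
    LifelogEntry(2, "img_0002.jpg", "Bob", "lunch plans"),
    LifelogEntry(3, "img_0003.jpg", "Sue", "back to the Internet idea"),
]
hits = photos_when(log, "Sue", "Internet")
```

Richer semantics would replace the keyword match with concept-level links, but even this keyword version makes a minute-by-minute log navigable.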

From Application-Centric to Data-Centric Computing: The Metaweb

One of the big changes that will be enabled by the coming Metaweb is the shift from application-centric computing to data-centric computing. As the Metaweb evolves, information will be imbued with increasingly sophisticated metadata. HTML provides metadata about formatting and links. XML provides metadata about structure and behavior. RDF, RDFS and OWL provide metadata about relationships and meaning.

As higher levels of metadata are adopted and added to content, the content becomes “smarter” — more information about how to display, use and interpret the content is added to the content itself. The key here is that this metadata is added in an application-independent manner. In other words, the “intelligence” for interpreting the data is moved out of applications and into the data itself. Thus we move from “smart applications, dumb data” to “dumb applications, smart data.”

A data-centric world will be very different from the application-centric world of today — for one thing, application providers will lose much of their competitive advantages (from platform lock-in and closed formats) as data becomes increasingly portable across various tools. Another big change will be in how we think about content — rather than content being thought of as static documents, every piece of content will be more like an object with its own unique identity and behaviors on the network.

Instead of moving data around we will access these semantic data objects using Web services protocols and interact with them from anywhere like mini-online services. To edit a document we might send commands to an object that represents the document on the network, rather than actually downloading and modifying a local file.
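As a thought experiment, the "send commands to the object" model might look something like this. The interface is entirely hypothetical — a sketch of the idea, not a real protocol — but it shows a document that maintains its own state, version and access policy while clients never touch a local file:

```python
class DocumentObject:
    """Sketch of a document as a network-addressable object:
    clients send commands instead of downloading and editing a
    local copy, and the object manages its own editing rights."""

    def __init__(self, text, editors):
        self.text = text
        self.editors = set(editors)  # the object enforces its own policy
        self.version = 1

    def command(self, user, op, *args):
        if user not in self.editors:
            raise PermissionError(f"{user} may not edit this document")
        if op == "append":
            self.text += args[0]
        elif op == "replace":
            old, new = args
            self.text = self.text.replace(old, new)
        else:
            raise ValueError(f"unknown command: {op}")
        self.version += 1
        return self.version

doc = DocumentObject("Hello world", editors={"nova"})
doc.command("nova", "replace", "world", "Metaweb")
```

In a real deployment the commands would travel over Web-service protocols, but the essential shift is the same: the data object, not the application, is the unit that holds state and enforces rules.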

Ultimately this will bring about a shift from desktop computing to network computing — software will truly become a service and the business model of software will shift to be more like online service business models — based on subscriptions, a la carte pay-per-use features, and perhaps even advertising. Data objects will be accessible from everywhere and will be responsible for maintaining their own state, relationships and contents, as well as managing their own access, rights and usage policies. These are some of the changes that will come about as the Metaweb evolves.

The Metaweb is Coming… See this Diagram…

This diagram (click to see larger version) illustrates why I believe technology evolution is moving towards what I call the Metaweb. The Metaweb is emerging from the convergence of the Web, Social Software and the Semantic Web.