What's After the Real Time Web?

In typical Web-industry style we’re all focused intently on the leading trend of the year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call “The Stream,” is not an end in itself; it’s a means to an end. So what will it enable, where is it headed, and what’s it going to look like when we look back at this trend in 10 or 20 years?

In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:

  1. Web Attention Deficit Disorder. The first problem with the real-time Web that is becoming increasingly evident is that it has a bad case of ADD. There is so much information streaming in from so many places at once that it’s simply impossible to focus on anything for very long, and a lot of important things are missed in the chaos. The first generation of tools for the Stream is going to need to address this problem.
  2. Web Intention Deficit Disorder. The second problem with the real-time Web will emerge after we have made some real headway in solving Web attention deficit disorder. This second problem is about how to get large numbers of people to focus their intention, not just their attention. It’s difficult enough to get people to notice something; it’s even more difficult to get them to do something. Attending to something is simply noticing it. Intending to do something is actually taking action, expending some energy or effort to do something. Intending is a lot more expensive, cognitively speaking, than merely attending. The power of collective intention is literally what changes the world, but we don’t have the tools to direct it yet.

The Stream is not the only big trend taking place right now. In fact, it’s just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I’m tracking:

  • Messaging. The real-time Web, aka The Stream, is in essence about messaging. It’s a subset of the global trend towards building a better messaging layer for the Web. Multiple forms of messaging are emerging, from the publish-and-subscribe nature of Twitter and RSS, to things like Google Wave, PubSubHubbub, and broadcast-style messaging or multicasting via screencast, conferencing, media streaming, and events in virtual worlds (a minimal sketch of the publish-and-subscribe pattern appears after this list). The effect of these tools is that the speed and interactivity of the Web are increasing — the Web is getting faster. Information spreads more virally, more rapidly — in other words, “memes” (which we can think of as collective thoughts) are getting more sophisticated and gaining more mobility.
  • Semantics. The Web becomes more like a database. The resolution of search, ad targeting, and publishing increases. In other words, it’s a higher-resolution Web. Search will be able to target not just keywords but specific meaning. For example, you will be able to search precisely for products or content that meet certain constraints. Multiple approaches from natural language search to the metadata of the Semantic Web will contribute to increased semantic understanding and representation of the Web.
  • Attenuation. As information moves faster, and our networks get broader, information overload gets worse in multiple dimensions. This creates a need for tools to help people filter the firehose. Filtering in its essence is a process of attenuation — a way to focus attention more efficiently on signal versus noise. Broadly speaking there are many forms of filtering from automated filtering, to social filtering, to personalization, but they all come down to helping someone focus their finite attention more efficiently on the things they care about most.
  • The WebOS. As cloud computing resources, mashups, open linked data, and open APIs proliferate, a new level of aggregator is emerging. These aggregators may focus on one of these areas or may cut across them. Ultimately they are the beginning of true cross-service WebOSes. I predict this is going to be a big trend in the future — for example, instead of writing Web apps directly against data and APIs in dozens of places, developers will just write to a single WebOS aggregator that acts as middleware between their apps and all these choices. It’s much less complicated for developers. The winning WebOS is probably not going to come from Google, Microsoft or Amazon — rather it will probably come from someone neutral, with the best interests of developers as the primary goal.
  • Decentralization. As the semantics of the Web get richer, and the WebOS really emerges, it will finally be possible for applications to leverage federated, Web-scale computing. This is when intelligent agents will actually emerge and be practical. By this time the Web will be far too vast, complex, and rapidly changing for any centralized system to index and search it. Only massively federated swarms of intelligent agents, or extremely dynamic distributed computing tools that can spread around the Web as they work, will be able to keep up with it.
  • Socialization. Our interactions and activities on the Web are increasingly socially networked, whether individual, group or involving large networks or crowds. Content is both shared and discovered socially through our circles of friends and contacts. In addition, new technologies like Google Social Search enable search results to be filtered by social distance or social relevancy. In other words, things that people you follow like get higher visibility in your search results. Socialization is a trend towards making previously non-social activities more social, and towards making already-social activities more efficient and broader. Ultimately this process leads to wider collaboration and higher levels of collective intelligence.
  • Augmentation. Increasingly we will see a trend towards augmenting things with other things. For example, augmenting a Web page or data set with links or notes from another Web page or data set. Or augmenting reality by superimposing video and data onto a live video image on a mobile phone. Or augmenting our bodies with direct connections to computers and the Web.
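
To make the messaging strand concrete, here is a minimal sketch of the publish-and-subscribe pattern that underlies Twitter-style following, RSS feeds, and PubSubHubbub. It is an illustration only: the Hub class and its method names are my own invention, not any real service’s API.

```python
# Minimal publish-subscribe sketch (illustrative; the names are invented,
# not the API of Twitter, RSS, or PubSubHubbub).
from collections import defaultdict
from typing import Callable

class Hub:
    """A trivial in-memory message hub: publishers push messages to topics,
    and subscribers register callbacks per topic."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        # Fan the message out to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

hub = Hub()
hub.subscribe("web-trends", lambda m: print("reader A got:", m))
hub.subscribe("web-trends", lambda m: print("reader B got:", m))
hub.publish("web-trends", "The real-time Web is accelerating")
```

The point of the pattern is decoupling: publishers never need to know who is listening, which is what lets a message spread to any number of subscribers at once.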

If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it’s collective intelligence — not just of humans, but also our computing systems, working in concert.

Collective Intelligence

I think that these trends are all combining, and going real-time. Effectively what we’re seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is composed not just of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: a system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.

But that’s in the very distant future still. In the nearer term — the next 100 years or so — we’re going to go through some enormous changes. As the world becomes increasingly networked and social, the way collective thinking and decision making take place is going to be radically restructured.

Social Evolution

Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.

Physical Evolution

As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do on our own the things that used to be solitary tasks. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we’ll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:

  • Individual brains will get worse at things like memorization and recall, calculation, reasoning, and long-term planning and action.
  • Individual brains will get better at multi-tasking, information filtering, trend detection, and social communication. The parts of the nervous system involved in processing live information will increase disproportionately to other parts.
  • Our bodies may actually improve in certain areas. We will become more, not less, mobile, as computation and the Web become increasingly embedded into our surroundings, and into augmented views of our environments. This may cause our bodies to get into better health and shape since we will be less sedentary, less at our desks, less in front of TVs. We’ll be moving around in the world, connected to everything and everyone no matter where we are. Physical strength will probably decrease overall as we will need to do less manual labor of any kind.

These are just some of the changes that are likely to occur as a result of the things we’re working on today. The Web and the emerging Real-Time Web are just a prelude to things to come.

My Burma Meme Spreads to 17,000 Web Pages in just one week!

I’ve been tracking the progress of my Burma protest meme. In just under one week it has spread to almost 17,000 web pages and it continues to grow. (For the latest number, click here). It’s great to see the blogosphere pick this up, and I’m glad to be able to do something to help raise awareness of this important human rights issue.

This meme is also an example of an interesting new way to spread content on the Web — whether for a protest or an ad or any other kind of announcement. It’s kind of like a chain letter, but via weblogs. There are many different ways to structure these memes with varying levels of virality and benefit to participants. For some earlier work I’ve done on meme propagation on the Web see my GoMeme experiments from a few years ago. In those experiments I created a series of memes that spread widely through the blogosphere, based on different viral messages, surveys, and benefits to participants. Other people then tracked the statistics of the memes as they spread. It turned out to be a very interesting study of superdistribution of content along social networks.

Networked Genome — New Finding Shatters Current Thinking

A new study has found that the human genome may be highly networked. That is, genes do not operate in isolation; rather, they are networked together in a far more complex ecosystem than previously thought. In fact, it may be impossible to separate one gene from another. This throws into question not only our understanding of genetics and the human genome, but also the whole genomics industry, which relies heavily on the idea that genes, and drugs based on them, can be patented:

The principle that gave rise to the biotech industry promised benefits that were equally compelling. Known as the Central Dogma of molecular biology, it stated that each gene in living organisms, from humans to bacteria, carries the information needed to construct one protein.

The scientists who invented recombinant DNA in 1973 built their innovation on this mechanistic, "one gene, one protein" principle.

Because donor genes could be associated with specific functions, with discrete properties and clear boundaries, scientists then believed that a gene from any organism could fit neatly and predictably into a larger design – one that products and companies could be built around, and that could be protected by intellectual-property laws.

This presumption, now disputed, is what one molecular biologist calls "the industrial gene."

"The industrial gene is one that can be defined, owned, tracked, proven acceptably safe, proven to have uniform effect, sold and recalled," said Jack Heinemann, a professor of molecular biology in the School of Biological Sciences at the University of Canterbury in New Zealand and director of its Center for Integrated Research in Biosafety.

In the United States, the Patent and Trademark Office allows genes to be patented on the basis of this uniform effect or function. In fact, it defines a gene in these terms, as an ordered sequence of DNA "that encodes a specific functional product."

In 2005, a study showed that more than 4,000 human genes had already been patented in the United States alone. And this is but a small fraction of the total number of patented plant, animal and microbial genes.

In the context of the consortium’s findings, this definition now raises some fundamental questions about the defensibility of those patents.

If genes are only one component of how a genome functions, for example, will infringement claims be subject to dispute when another crucial component of the network is claimed by someone else?

Might owners of gene patents also find themselves liable for unintended collateral damage caused by the network effects of the genes they own?

And, just as important, will these not-yet-understood components of gene function tarnish the appeal of the market for biotech investors, who prefer their intellectual property claims to be unambiguous and indisputable?

While no one has yet challenged the legal basis for gene patents, the biotech industry itself has long since acknowledged the science behind the question.

"The genome is enormously complex, and the only thing we can say about it with certainty is how much more we have left to learn," wrote Barbara Caulfield, executive vice president and general counsel at the biotech pioneer Affymetrix, in a 2002 article on Law.com called "Why We Hate Gene Patents."

"We’re learning that many diseases are caused not by the action of single genes, but by the interplay among multiple genes," Caulfield said. She noted that just before she wrote her article, "scientists announced that they had decoded the genetic structures of one of the most virulent forms of malaria and that it may involve interactions among as many as 500 genes."

Even more important than patent laws are safety issues raised by the consortium’s findings. Evidence of a networked genome shatters the scientific basis for virtually every official risk assessment of today’s commercial biotech products, from genetically engineered crops to pharmaceuticals.

Read the rest here

Intelligence is in the Connections

Google’s Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry’s idea is that intelligence is a function of massive computation, not of “fancy whiteboard algorithms.” In other words, in his conception the brain doesn’t do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively “dumb” but from the combined power of all of them working together “intelligent” behaviors emerge.

Larry’s view is, in my opinion, an oversimplification that will not lead to actual AI. It’s certainly correct that some activities that we call “intelligent” can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible — they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today — which is still a long way short of true AI!
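
To make the “massively parallel number crunching” picture concrete, here is a minimal sketch, assuming only NumPy, of a two-layer neural network’s forward pass. It really is just a few array operations, which is exactly why this style of computation excels at pattern recognition while offering no obvious mechanism for planning or logic. The sizes and weights are arbitrary illustrations.

```python
# A neural network forward pass reduced to simple array operations
# (illustrative sketch; weights are random and no training is shown).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(100, 50))   # input-to-hidden weights
W2 = rng.normal(size=(50, 10))    # hidden-to-output weights

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)      # "dumb" parallel multiply-accumulate
    return np.tanh(hidden @ W2)   # pattern in, pattern out; no explicit reasoning

x = rng.normal(size=(1, 100))     # a stand-in for some input pattern
print(forward(x).shape)           # (1, 10)
```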

Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don’t think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software — the higher level cognitive algorithms and heuristics that the brain “runs” — also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).

Larry’s view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It’s a highly sophisticated system composed of simple parts — and actually, the jury is still out on exactly how simple the parts really are — much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much more complex system than we think.

Perhaps the Web as a whole is the closest analogue we have today for the brain — with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We’re not talking about a few hundred thousand Linux boxes — we’re talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.

One reader who commented on Larry’s talk made an excellent point on what this missing piece may be: “Intelligence is in the connections, not the bits.” The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that for example Roger Penrose and others have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations and nonlocality. In other words the connections in the brain may exist on a quantum level, not just a neurological level.

Whether quantum computation is the key or not still remains to be determined. But regardless, essentially, Larry’s approach is equivalent to just aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions for example that if all knowledge exists on the Web you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale. All answers exist on the Web, and the task is just to match questions to the right answers. But wait: is that all that intelligence does? Is Larry’s view too much of an oversimplification? Intelligence is not just about learning and recall, it’s also about reasoning and creativity. Reasoning is not just search. It’s unclear how Larry’s approach would address that.

In my own opinion, for global-scale AI to really emerge the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections — rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it’s only the first step. More will need to happen on top of it for this vision to really materialize.

My view is more of an “agent metaphor” for intelligence — perhaps it is similar to Marvin Minsky’s Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds, for example, we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes — they are more like agents — they are actually each somewhat “intelligent” on their own, they seem to be somewhat autonomous, and they interact in intelligent, almost social ways.

Ideas seem to be actors, not just passive data points — they are competing for resources and survival in a complex ecology that exists both within our individual minds and between them in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or “I” that controls the process (that is just another agent in the community in fact, perhaps one with a kind of reporting and selection role).

I’m not sure the complex social dynamics of these communities of intelligence can really be modeled by a search engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels, from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections — computation flows back down from the top, it doesn’t only flow up from the bottom. Any computing system that doesn’t include this kind of feedback in its basic architecture will not be able to implement true AI.

In short, Google is not the right architecture to truly build a global brain on. But it could be a useful tool for search and questions-and-answers in the future, if they can somehow keep up with the growth and complexity of the Web.

'Bemes' are Defining the Blogosphere

Tom Hayes has an interesting post in which he coins the word "beme" to mean a meme that spreads in the blogosphere.

Michael Malone’s ABC News column on Thursday mentioning "bemes" has certainly produced a lot of interest. Originally, I coined the word beme to describe a meme propagated by blogs and bloggers. Now I can see that the turn of phrase has a much bigger potential to capture the rapidly-moving cultural touchstones of the Bubble Generation.

As you may know, "meme" was first defined by Richard Dawkins in 1976 as "a unit of cultural information" spread from one mind to another. In other words, a viral idea that eventually becomes common knowledge.

Fast forward three decades, and it seems to me that technology has turbo-charged the meme process. Looking for the mot juste to describe a "purposeful" meme fed into the vast human network of the Internet, either by blog, email, video, phonecast, social media or other viral means, beme seems to fit the bill.

A beme is a turbo-charged meme made possible entirely by the existence of the network effect. A beme can be impactful because it is lurid (a photo of a panty-less Britney Spears), humorous (a whimsical video of the band OK Go on treadmills), or gut-wrenching (the sad tirade by comedian Michael Richards). A beme can cement an idea with the public in a way that cannot be legislated or regulated. No legal effort by Cisco to enforce a trademark, for example, will make the public unlearn that Apple produces the iPhone.

  • A meme is old media, a beme is new media.
  • A meme takes off by accident, a beme by design.
  • A meme can take years to surface, a beme hours.

Interesting Idea: Start a Magazine that is a Wiki

I was reading this article in Wired magazine about wikis, where the article itself is a wiki that the readers can contribute to — and an idea occurred to me. What if you could make an entire magazine that was in fact a wiki? This magazine would be published online via a Website running a wiki engine. Every issue would be by and for the community of readers. There would be an editorial group among the readers that would decide what to write articles about for the next issue of the magazine, and then the community would work to write the articles. To get into the editorial group, remain there, and have a vote as an editor, a community member would have to make a certain number of (non-spurious) contributions to articles on an ongoing basis (and/or maintain a certain reputation in the community as measured in some other manner).
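
As a toy sketch of the editorial-membership rule proposed above, here is one way it might look in code. The thresholds and names are hypothetical, since the idea deliberately leaves the exact numbers open.

```python
# Hypothetical editorial-membership rule for a "wikazine"
# (the thresholds below are invented for illustration).
MIN_CONTRIBUTIONS = 5     # non-spurious edits required per issue cycle
MIN_REPUTATION = 0.75     # community-assigned reputation score in [0, 1]

def keeps_editor_vote(contributions: int, reputation: float) -> bool:
    """A member keeps an editorial vote by staying active and/or well-regarded."""
    return contributions >= MIN_CONTRIBUTIONS or reputation >= MIN_REPUTATION

print(keeps_editor_vote(contributions=7, reputation=0.4))   # True: active enough
print(keeps_editor_vote(contributions=1, reputation=0.9))   # True: trusted by the community
print(keeps_editor_vote(contributions=1, reputation=0.2))   # False: loses the editorial vote
```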

I can imagine this idea taking off and a lot of these "wikazines" forming around various subject areas. It makes sense that communities of people who are interested in subjects could help to research and write about them. Of course in such communities there would be some people who put more effort in than others, and some who were more like readers or lurkers. But it would still be much more involving than old "one-way media."

In some ways communities like Digg simulate this — people essentially vote on what is interesting and this filters up to become the featured content on the site. But that is still one step removed from the creative process itself — only the readers participate, not the content authors. What’s interesting about this proposal is that it blurs the distinction between an author and a reader, and provides a way for a magazine to be truly emergent and community-driven. OK, I’m too busy to start this, but I hope someone out there on the lazyweb takes this idea and runs with it. Please let me know if you find examples of this.

Radar Networks is Seeking Search Engineers for Large-Scale Web Mining Initiative

My company, Radar Networks, is building a very large dataset by crawling and mining the Web. We then apply a range of new algorithms to the data (part of our secret sauce) to generate some very interesting and useful new information about the Web. We are looking for a few experienced search engineers to join our team — specifically people with hands-on experience designing and building large-scale, high-performance Web crawling and text-mining systems. If you are interested, or you know anyone who is interested or might be qualified for this, please send them our way. This is your chance to help architect and build a really large and potentially important new system. You can read more specifics about our open jobs here.

I'm Going to Start Blogging About Radar Networks Here

I haven’t blogged very much about my stealth startup, Radar Networks, yet. At the most, I’ve made a few cryptic posts and announcements in the past, but we’ve been keeping things pretty quiet. That’s been a conscious decision because we have been working intensively on R&D and we just weren’t ready to say much yet.

Unlike some companies which have done massive and deliberate hype about unreleased vapor software, we really felt it would be better to just focus on our work and let it speak for itself when we release it.

The fact is we have been working quietly for several years on something really big, and really hard. It hasn’t always been easy — there have been some technical challenges that took a long time to overcome. And it took us a long time to find VCs daring enough to back us.

The thing is, what we are making is not a typical Web 2.0 "build it and flip it in 6 months" kind of project. It’s deep technology that has long-term infrastructure-level implications for the Web and the future of content. And until recently we really didn’t even have a good way to describe it to non-techies. So we just focused on our work and figured we would talk about it someday in the future.

But perhaps I’ve erred on the side of caution — being so averse to gratuitous hype that I have literally said almost nothing publicly about the company. We didn’t even issue a press release about our Series A round (which happened last April — I’ll be adding one to our new corporate site, which launches on Sunday night, for historical purposes), and until today, our site at Radar has been just a one-page placeholder with no info at all about what we are doing.

But something happened that changed my mind about this recently. I had lunch with my friend Munjal Shah, the CEO of Riya. Listening to Munjal tell his stories about how he has blogged so openly about Riya’s growth, even from way before their launch, and how that has provided him and his team with amazingly valuable community feedback, support, critiques, and new ideas, really got me thinking. Maybe it’s time Radar Networks started telling a little more of its story? It seems like the team at Riya really benefited from being so open. So although we’re still in stealth mode and there are limits to what we can say at this point, I do think there are some aspects we can start to talk about, even before we’ve launched. And besides, our story itself is interesting — it’s the story of what it’s like to build and work in a deep-technology play in today’s venture economy.

So that’s what I’m going to start doing here — I’m going to start telling our story on this blog, Minding the Planet. I already have around 500 regular readers, and most of them are scientists and hard-core techies and entrepreneurs. I’ve been writing mainly about emerging technologies that are interesting enough to inspire me to post about them, and once in a while about ideas I have been thinking about. These are also subjects that are of interest to the people who read this blog. But now I’m also going to start blogging more about Radar Networks and what we are doing and how it’s going. I’ll post about our progress, the questions we have, the achievements on our team, and of course news about our launch plans. And I hope to hear from people out there who are interested in joining us when we do our private invite-only beta tests.

We’re still quite a ways from a public launch, but we do have something working in the lab and it’s very exciting. Our VCs want us to launch it now, but it’s still an early alpha and we think it needs a lot more work (and testing) before our baby is ready to step out into the big world out there. But it looks promising. I do think, all modesty aside for a moment, that it has the potential to really advance the Web on a broad scale. And it’s exciting to work on.

This post is already long enough, so I’ll finish here for the moment. In my upcoming posts I will start to talk a little bit more about the new category that Radar Networks is going to define, and some of the technologies we’re using, and challenges we’ve overcome along the way. And I’ll share some insights, and stories, and successes we’ve had.

But I’m getting ahead of myself, and besides that, my dinner’s ready. More later.

Harnessing The Collective Mind

Today I read an interesting article in the New York Times about a company called Rite-Solutions, which is using a home-grown stock market for ideas to catalyze bottom-up innovation across all levels of personnel in their organization. This is a way to very effectively harness and focus the collective creativity and energy in an organization around the best ideas that the organization generates.

Using virtual stock market systems to measure community sentiment is not a new concept, but it is a new frontier. I don’t think we’ve even scratched the surface of what this paradigm can accomplish. For lots of detailed links to resources on this topic see the Wikipedia entry on prediction markets. This prediction markets portal has also collected interesting links on the topic. Here is an informative blog post about recent prediction market attempts. Here is a scathing critique of some prediction markets.
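
For readers curious how such idea markets actually set prices, one standard mechanism from the prediction-market literature (not necessarily what Rite-Solutions uses) is Hanson’s logarithmic market scoring rule, in which every trade is priced off a cost function and the current prices can be read as probabilities. A minimal sketch:

```python
# Minimal logarithmic market scoring rule (LMSR), a common prediction-market
# mechanism; illustrative only, not a description of any specific system.
import math

def cost(quantities: list[float], b: float = 100.0) -> float:
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def prices(quantities: list[float], b: float = 100.0) -> list[float]:
    """Current price of each outcome; prices sum to 1 and read as probabilities."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

q = [0.0, 0.0]                          # two outcomes: project ships on time, or not
print(prices(q))                        # [0.5, 0.5] before any trading
charge = cost([40.0, 0.0]) - cost(q)    # cost of buying 40 shares of "on time"
print(round(charge, 2))                 # ~21.99 units of market currency
print(prices([40.0, 0.0]))              # "on time" now priced near 0.6
```

The design choice that matters is that traders who move the price toward the truth profit at the expense of those who move it away, which is what turns scattered private hunches into a single public probability.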

There are many interesting examples of prediction markets on the Web:

  • Google uses a similar kind of system — their own version of a prediction market — to enable staff members to collaboratively predict the likelihood that various internal projects and events will occur on-schedule.
  • Yahoo also has a prediction market called BuzzGame that enables visitors to help predict technology trends. 
  • Newsfutures Exchange is a prediction market about the news, which is powered by a commercial prediction market engine sold by a company called Newsfutures.
  • BlogShares, a fantasy stock market for Weblogs in which players invest virtual money in the blogs they think will gain the most audience share.
  • Intrade is another exchange for trading on idea futures.
  • The Iowa Political Futures Exchange is a prediction market that focuses on political change.
  • Tradesports is a prediction market around sports topics.
  • The Hollywood Stock Exchange is a prediction market around movies.
  • The Foresight Exchange is another prediction market for predicting future events.

Here are some interesting, more detailed discussions of prediction market ideas and potential features.

Another related area that is highly underleveraged today is enabling communities to help establish whether various ideas are correct using argumentation. By enabling masses of people to provide reasons to agree or disagree with ideas, and with those reasons as well, we can automatically rate which ideas are most agreed or disagreed with. One very interesting example of this is TruthMapping.com. Some further concepts related to this approach are discussed in this thread.
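
As a toy model of this kind of argumentation-based rating, here is a recursive scoring sketch: each claim starts with one point, reasons to agree add their own scores, and reasons to disagree subtract theirs. This is my own illustration, not TruthMapping.com’s actual algorithm.

```python
# Toy recursive scoring of a claim by its reasons to agree or disagree
# (an invented illustration, not TruthMapping.com's algorithm).
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    pro: list["Claim"] = field(default_factory=list)   # reasons to agree
    con: list["Claim"] = field(default_factory=list)   # reasons to disagree

def score(claim: Claim) -> float:
    """Each claim is worth 1, plus its supporting scores, minus its opposing scores."""
    return 1.0 + sum(score(c) for c in claim.pro) - sum(score(c) for c in claim.con)

idea = Claim(
    "Prediction markets beat expert panels",
    pro=[Claim("They aggregate dispersed information")],
    con=[Claim("Thin markets are noisy",
               con=[Claim("Subsidized market makers add liquidity")])],
)
print(score(idea))   # 2.0: the objection is itself neutralized by a counter-reason
```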

Big Thinkers' Most Dangerous Ideas

The Edge has published mini-essays by 119 "big thinkers" on their "most dangerous ideas" — fun reading.

The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?


A Cool Thingy…

This is cool. Click to see why. I think this idea has great value for viral, meme-based Web advertising. Just imagine: advertisers could release really cool animations to add to sites, and site owners could add them into their sites for entertainment or humor. The animations could run ads within them as well. It’s fun. Everyone wins, everyone’s happy. And of course users can aim these animations at any other site so visitors who like them can spread them to their own sites. Very smart!!! Very Web 2.0.

Folktologies — Beyond the Folksonomy vs. Ontology Distinction

First of all I know Clay Shirky, and he’s a good fellow. But he’s simply wrong about his claim that "tagging" (of the flavor that is appearing on del.icio.us — what I call "social tagging") is inherently better than the use of formal ontologies. Clay favors the tagging approach because it is bottom-up and emergent in nature, and he argues against ontologies because pre-specification cannot anticipate the future. But this is a simplistic view of both approaches. One could just as easily argue against tagging systems because they don’t anticipate the future — they are shortsighted, now-oriented systems that fail to capture the "big picture" or to optimally organize resources for the long-term. Their saving grace is that over time they do (hopefully) self-organize and prune out the chaff, but that depends both on the level of participation and the quality of that participation.


My "A Physics of Ideas" Manifesto has been Published!

ChangeThis, a project that helps promote interesting new ideas so that they get noticed above the noise level of our culture, has published my article on “A Physics of Ideas” as one of their featured Manifestos. They use an innovative PDF layout for easier reading, and they also provide a means for readers to give feedback and even measure the popularity of the various Manifestos. I’m happy this paper is finally getting noticed — I do think the ideas within it have potential. Take a look.

A Blog Novel

Rohit Gupta, a Bombay-based writer who also reads this blog, is writing a blog-novel. He has come up with an innovative way to promote it: he lets readers choose quotes from his text to “own.” Choose a quote and link to his blog-novel from it, and he will in return link back to your blog from that quote in his novel. It’s similar to my earlier GoMeme experiments, except in this case his novel is the meme that is spreading via a cooperative linking incentive.

Good idea, Rohit! I choose this quote from your novel:

The other article, an interesting one, is a 2000-word piece on the history of mathematical heretics known as the Circlesquarers, and the transcendental nature of the number π.

Great Article on Psychohistory and Sociophysics — Can We Predict Behavior?

Great find from Rob Usey at Psydex Corporation: This article is a survey of the emerging field of “sociophysics” which attempts to apply statistical mechanics to predict human social behavior. It’s very cool stuff if you’re interested in social networks, memes, sociology and prediction science. The article discusses recent progress towards Isaac Asimov’s vision for a science of Psychohistory as proposed in his Foundation stories. This relates in many ways to my previous article on “A Physics of Ideas” in which I proposed some elementary ways to measure the trajectories of memes as if they were moving particles in a Newtonian system.
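
In the spirit of that “Physics of Ideas” proposal, here is a minimal sketch of treating a meme like a Newtonian particle: velocity as the day-over-day change in mentions, and momentum as velocity times a “mass” such as audience size. The data and the mass definition are hypothetical simplifications of the essay’s ideas.

```python
# Treating a meme like a moving particle (a toy simplification of the
# "Physics of Ideas" proposal; the data and mass definition are invented).
daily_mentions = [120, 150, 210, 400, 390]   # hypothetical mentions per day

def velocity(series: list[int]) -> list[int]:
    """First difference: how fast attention to the meme is changing."""
    return [b - a for a, b in zip(series, series[1:])]

def momentum(series: list[int], mass: float) -> list[float]:
    """p = m * v, with 'mass' standing in for, say, average audience size."""
    return [mass * v for v in velocity(series)]

print(velocity(daily_mentions))        # [30, 60, 190, -10]
print(momentum(daily_mentions, 2.5))   # [75.0, 150.0, 475.0, -25.0]
```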

Detailed Analysis of GoMeme 1.0 Results

Greg Tyrell, a PhD student with a strong interest in bioinformatics, has put together a detailed analysis and report on the GoMeme 1.0 experiment, containing several visualizations and results of the survey. Nice work Greg!

Also in other news, Google has started indexing the results. Currently there are 733 results when searching for sites with the original, super-long GUID. There are 867 results when searching for the unique string “To add your blog to this experiment, copy this entire posting to your blog, and fill out the info below, substituting your own information in your posting, where appropriate” which was in the instructions — this number should include sites that did not put the whole GUID in. Technorati, which seems to be working better today, finds 58 sites with the long GUID, and none for the instructions text above. So I guess Google wins so far. But I am glad that Technorati is starting to get their bugs fixed! I noticed that blog stats are starting to be updated again.

I also got an interesting link to another meme visualization which, although it has nothing to do with our experiment as far as I can tell, is a nice concept. It takes forever to build out the full visualization, and the tree appears to be almost white on my white background, making it hard to see, but it’s still worth a look — Meme Tree