Peace in the Middle East: Could Alternative Energy Be the Solution?

I have been thinking about the situation in the Middle East and also the rise of oil prices, peak oil, and the problem of a world economy based on energy scarcity rather than abundance. There is, I believe, a way to solve the problems in the Middle East, and the energy problems facing the world, at the same time. But it requires thinking “outside the box.”

Middle Eastern nations must take the lead in freeing the world from dependence on their oil. This is not only their best strategy for the future of their nations and their people, but it is also what will ultimately be best for the region and the whole world.

It is inevitable that someone is going to invent a new technology that frees the world from dependence on fossil fuels. When that happens all oil empires will suddenly collapse. Far-sighted, visionary leaders in oil-producing nations must ensure that their nations are in position to lead the coming non-fossil-fuel energy revolution. This is the wisdom of “cannibalize yourself before someone else does.”

Middle Eastern nations should invest more heavily than any other nations in inventing and supplying new alternative energy technologies. For example: hydrogen, solar, biofuels, zero point energy, magnetic power, and the many new emerging alternatives to fossil fuels. This is a huge opportunity for the Middle East not only for economic reasons, but also because it may just be the key to bringing about long-term sustainable peace in the region.

There is a finite supply of oil in the Middle East — the game will and must eventually end. Are Middle Eastern nations thinking far enough ahead about this or not? There is a tremendous opportunity for them if they can take the initiative on this front and there is an equally tremendous risk if they do not. If they do not have a major stake in whatever comes after fossil fuels, they will be left with nothing when whatever is next inevitably happens (which might be very soon).

Any Middle Eastern leader who is not thinking very seriously about this issue right now is selling their people short. I sincerely advise them to make this a major focus going forward. Not only will this help them to improve quality of life for their people now and in the future, but it is the best way to help bring about world peace. The Middle East has the potential to lead a huge and lucrative global energy Renaissance. All it takes is vision and courage to push the frontier and to think outside of the box.


A Bottle That Purifies Enough Water for a Year

This is a really great invention — a handheld water bottle that can purify a year’s worth of water. It removes not only parasites and bacteria, but also viruses. It was announced recently at a defense industry tradeshow and was a big hit among military commanders who need a better way to get water to their troops. Beyond that, it could be a lifesaver in disaster areas and in developing countries where finding clean water is a daily struggle.

Scientist Says "Never in Our Imagination Could This Happen." Famous Last Words?

Whenever a scientist says something like “don’t worry, our new experiment could never get out of the lab,” or “don’t worry, the miniature black hole we are going to generate couldn’t possibly swallow up the entire planet,” I tend to get a little worried. The problem is that just about every time a scientist has said something is patently absurd, totally impossible or could never ever happen, it usually turns out that in fact it isn’t as impossible as they thought. Now here’s a new article about scientists creating new artificial lifeforms, based on new genetic building blocks — and once again there’s one of those statements. I’m guessing that this means that in about 10 years some synthetic life form is going to be found to have done the impossible and escaped from the lab — perhaps into our food supply, or maybe into our environment. Don’t get me wrong — I’m in favor of this kind of research into new frontiers. I just don’t think anyone can guarantee it won’t escape from the lab.

Plans for a Lunar Ark to Save Humanity

Researchers at the International Space University (ISU), of which I am an alumnus, are proposing an interesting initiative to build an ark on the moon to preserve human civilization and biodiversity, and the Internet, in the event of a catastrophe on earth, such as a comet impact, nuclear war, etc. This project is similar to what I proposed in my Genesis Project posting in 2003.

Humans are just beginning to send trinkets of technology and culture into space. NASA’s recently launched Phoenix Mars Lander, for example, carries a mini-disc inscribed with stories, art, and music about Mars.

The Phoenix lander is a "precursor mission" in a decades-long project to transplant the essentials of humanity onto the moon and eventually Mars. (See a photo gallery about the Phoenix mission.)

The International Space University team is now on a more ambitious mission: to start building a "lunar biological and historical archive," initially through robotic landings on the moon.

Laying the foundation for "rebuilding the terrestrial Internet, plus an Earth-moon extension of it, should be a priority," Burke said.


Networked Genome — New Finding Shatters Current Thinking

A new study has found that the human genome may be highly networked. That is, genes do not operate in isolation; rather, they are networked together in a far more complex ecosystem than previously thought. In fact, it may be impossible to separate one gene from another. This throws into question not only our understanding of genetics and the human genome, but also the whole genomics industry, which relies heavily on the idea that genes, and drugs based on them, can be patented:

The principle that gave rise to the biotech industry promised benefits that were equally compelling. Known as the Central Dogma of molecular biology, it stated that each gene in living organisms, from humans to bacteria, carries the information needed to construct one protein.

The scientists who invented recombinant DNA in 1973 built their innovation on this mechanistic, "one gene, one protein" principle.

Because donor genes could be associated with specific functions, with discrete properties and clear boundaries, scientists then believed that a gene from any organism could fit neatly and predictably into a larger design – one that products and companies could be built around, and that could be protected by intellectual-property laws.

This presumption, now disputed, is what one molecular biologist calls "the industrial gene."

"The industrial gene is one that can be defined, owned, tracked, proven acceptably safe, proven to have uniform effect, sold and recalled," said Jack Heinemann, a professor of molecular biology in the School of Biological Sciences at the University of Canterbury in New Zealand and director of its Center for Integrated Research in Biosafety.

In the United States, the Patent and Trademark Office allows genes to be patented on the basis of this uniform effect or function. In fact, it defines a gene in these terms, as an ordered sequence of DNA "that encodes a specific functional product."

In 2005, a study showed that more than 4,000 human genes had already been patented in the United States alone. And this is but a small fraction of the total number of patented plant, animal and microbial genes.

In the context of the consortium’s findings, this definition now raises some fundamental questions about the defensibility of those patents.

If genes are only one component of how a genome functions, for example, will infringement claims be subject to dispute when another crucial component of the network is claimed by someone else?

Might owners of gene patents also find themselves liable for unintended collateral damage caused by the network effects of the genes they own?

And, just as important, will these not-yet-understood components of gene function tarnish the appeal of the market for biotech investors, who prefer their intellectual property claims to be unambiguous and indisputable?

While no one has yet challenged the legal basis for gene patents, the biotech industry itself has long since acknowledged the science behind the question.

"The genome is enormously complex, and the only thing we can say about it with certainty is how much more we have left to learn," wrote Barbara Caulfield, executive vice president and general counsel at the biotech pioneer Affymetrix, in a 2002 article on Law.com called "Why We Hate Gene Patents."

"We’re learning that many diseases are caused not by the action of single genes, but by the interplay among multiple genes," Caulfield said. She noted that just before she wrote her article, "scientists announced that they had decoded the genetic structures of one of the most virulent forms of malaria and that it may involve interactions among as many as 500 genes."

Even more important than patent laws are safety issues raised by the consortium’s findings. Evidence of a networked genome shatters the scientific basis for virtually every official risk assessment of today’s commercial biotech products, from genetically engineered crops to pharmaceuticals.


Steorn Set to Demo "Free Energy" Device Tomorrow

Steorn, the Irish company that claims to have invented a mechanical device that generates unlimited free energy with no fuel, is scheduled to demonstrate its device publicly for the first time in London tomorrow. A panel of 22 independent experts from around the world has been recruited to study the device. It should be an interesting demo!

Minding The Planet — The Meaning and Future of the Semantic Web

NOTES

Prelude

Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, “Minding the Planet” about how the Internet would enable the evolution of higher forms of collective intelligence.

My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, “One thing is certain: Someday, you will write this book.” We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.

A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.

But ever since that day on the porch with my grandfather, I remembered what he said: “Someday, you will write this book.” I’ve tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I’ve continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it’s the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.

This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the far future going to be like? Is the global mind just science-fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?

I’ve often joked that it is ironic that a term that contains the word “semantic” has such an ambiguous meaning for most people. Most people just have no idea what this means; they have no context for it, and it is not connected to their experience and knowledge. This is a problem that people who are deeply immersed in the trenches of the Semantic Web have not been able to solve adequately — they have not found the words to communicate what they can clearly see, what they are working on, and why it matters for everyone. In this article I have tried, and hopefully succeeded, in providing a detailed introduction and context for the Semantic Web for non-technical people. But even technical people working in the field may find something of interest here as I piece together the fragments into a Big Picture and a vision for what might be called “Semantic Web 2.0.”

I hope the reader will bear with me as I bounce across different scales of technology and time, and from the extremes of core technology to wild speculation, in order to tell this story. If you are looking for the cold hard science of it all, this article will provide an understanding but will not satisfy your need for seeing the actual code; there are other places where you can find that level of detail and rigor. But if you want to understand what it all really means and what the opportunity and future look like – this may be what you are looking for.

I should also note that all of this is my personal view of what I’ve been working on, and what it really means to me. It is not necessarily the official view of the mainstream academic Semantic Web community — although there are certainly many places where we all agree. But I’m sure that some readers will disagree or raise objections to some of my assertions, and certainly to my many far-flung speculations about the future. I welcome those different perspectives; we’re all trying to make sense of this, and the more of us who do that together, the more we can collectively start to really understand it. So please feel free to write your own vision or response, and please let me know so I can link to it!

So with this Prelude in mind, let’s get started…

The Semantic Web Vision

The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web – a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent – not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today.

The dream of making software more intelligent has been around almost as long as software itself. And although it is taking longer to materialize than past experts had predicted, progress towards this goal is being steadily made. At the same time, the shape of this dream is changing. It is becoming more realistic and pragmatic. The original dream of artificial intelligence was that we would all have personal robot assistants doing all the work we don’t want to do for us. That is not the dream of the Semantic Web. Instead, today’s Semantic Web is about facilitating what humans do – it is about helping humans do things more intelligently. It’s not a vision in which humans do nothing and software does everything.

The Semantic Web vision is not just about helping software become smarter – it is about providing new technologies that enable people, groups, organizations and communities to be smarter.

For example, by providing individuals with tools that learn about what they know, and what they want, search can be much more accurate and productive.

Using software that is able to understand and automatically organize large collections of knowledge, groups, organizations and communities can reach higher levels of collective intelligence, and they can cope with volumes of information that are just too great for individuals or even groups to comprehend on their own.

Another example: more efficient marketplaces can be enabled by software that learns about products, services, vendors, transactions and market trends and understands how to connect them together in optimal ways.

In short, the Semantic Web aims to make software smarter, not just for its own sake, but in order to help make people, and groups of people, smarter. In the original Semantic Web vision this fact was under-emphasized, leading to the impression that the Semantic Web was only about automating the world. In fact, it is really about facilitating the world.

The Semantic Web Opportunity

The Semantic Web is one of the most significant things to happen since the Web itself. But it will not appear overnight. It will take decades. It will grow in a bottom-up, grassroots, emergent, community-driven manner just like the Web itself. Many things have to converge for this trend to really take off.

The core open standards already exist, but the necessary development tools have to mature, the ontologies that define human knowledge have to come into being and mature, and most importantly we need a few real “killer apps” to prove the value and drive adoption of the Semantic Web paradigm. The first generation of the Web had its Mozilla, Netscape, Internet Explorer, and Apache – and it also had HTML, HTTP, a bunch of good development tools, and a few killer apps and services such as Yahoo! and thousands of popular Web sites. The same things are necessary for the Semantic Web to take off.

And this is where we are today – this is all just about to start emerging. There are several companies racing to get this technology, or applications of it, to market in various forms. Within a year or two you will see mass-consumer Semantic Web products and services hit the market, and within 5 years there will be at least a few “killer apps” of the Semantic Web. Ten years from now the Semantic Web will have spread into many of the most popular sites and applications on the Web. Within 20 years all content and applications on the Internet will be integrated with the Semantic Web. This is a sea-change: a big evolutionary step for the Web.

The Semantic Web is an opportunity to redefine, or perhaps to better define, all the content and applications on the Web. That’s a big opportunity. And within it there are many business opportunities and a lot of money to be made. It’s not unlike the opportunity of the first generation of the Web. There are platform opportunities, content opportunities, commerce opportunities, search opportunities, community and social networking opportunities, and collaboration opportunities in this space. There is room for a lot of players to compete, and at this point the field is wide open.

The Semantic Web is a blue ocean waiting to be explored. And like any unexplored ocean it also has its share of reefs, pirate islands, hidden treasure, shoals, whirlpools, sea monsters and typhoons. But there are new worlds out there to be discovered, and they exert an irresistible pull on the imagination. This is an exciting frontier – and also one fraught with hard technical and social challenges that have yet to be solved. For early ventures in the Semantic Web arena, it’s not going to be easy, but the intellectual and technological challenges, and the potential financial rewards, glory, and benefit to society, are worth the effort and risk. And this is what all great technological revolutions are made of.

Semantic Web 2.0

Some people who have heard the term “Semantic Web” thrown around too much may think it is a buzzword, and they are right. But it is not just a buzzword – it actually has some substance behind it. That substance hasn’t emerged yet, but it will. Early critiques of the Semantic Web were right – the early vision did not leverage concepts such as folksonomy and user-contributed content at all. But that is largely because when the Semantic Web was originally conceived, Web 2.0 hadn’t happened yet. The early experiments that came out of research labs were geeky, to put it lightly, and impractical, but they are already being followed up by more pragmatic, user-friendly approaches.

Today’s Semantic Web – what we might call “Semantic Web 2.0” – is a kinder, gentler, more social Semantic Web. It combines the best of the original vision with what we have all learned about social software and community in the last 10 years. Although much of this is still in the lab, it is already starting to trickle out. For example, Yahoo! recently started a pilot of the Semantic Web behind their food vertical. Other organizations are experimenting with using Semantic Web technology in parts of their applications, or to store or map data. But that’s just the beginning.

The Google Factor

Entrepreneurs, venture capitalists and technologists are increasingly starting to see these opportunities. Who will be the “Google of the Semantic Web”? Will it be Google itself? That’s doubtful. Like any entrenched incumbent, Google is heavily tied to a particular technology and worldview. And in Google’s case it is anything but semantic today. It would be easier for an upstart to take this position than for Google to port its entire infrastructure and worldview to a Semantic Web way of thinking.

If it is going to be Google, it will most likely be by acquisition rather than by internal origination. And this makes more sense anyway – Google is in a position where it can just wait and buy the winner, at almost any price, rather than competing on the playing field. One thing to note, however, is that Google has at least one product offering that shows some potential for becoming a key part of the Semantic Web. I am speaking of Google Base, Google’s open database, which is meant to be a registry for structured data so that it can be found in Google search. But Google Base does not conform to or make use of the many open standards of the Semantic Web community. That may or may not be a good thing, depending on your perspective.

Of course, the downside for Google of waiting to join the mainstream Semantic Web community until after the winner is announced is very large – once there is a winner, it may be too late for Google to beat them. The winner of the Semantic Web race could very well unseat Google. The strategists at Google are probably not yet aware of this, but as soon as they see significant traction around a major Semantic Web play it will become of interest to them.

In any case, I think there won’t be just one winner; there will be several major Semantic Web companies in the future, focusing on different parts of the opportunity. And you can be sure that if Google gets into the game, every major portal will need to get into this space at some point or risk becoming irrelevant. There will be demand and many acquisitions. In many ways the Semantic Web will not be controlled by just one company — it will be more like a fabric that connects them all together.

Context is King — The Nature of Knowledge

It should be clear by now that the Semantic Web is all about enabling software (and people) to work with knowledge more intelligently. But what is knowledge? Knowledge is not just information. It is meaningful information – it is information plus context. For example, if I simply say the word “sem” to you, it is just raw information, not knowledge. It probably has no meaning to you other than a particular set of letters that you recognize, a sound you can pronounce, and the mere fact that this information was stated by me.

But if I tell you that “sem” is the Tibetan word for “mind,” then suddenly “sem means mind in Tibetan” to you. If I further tell you that Tibetans have about as many words for “mind” as Eskimos have for “snow,” this is further meaning. This is context – in other words, knowledge – about the sound “sem.” The sound is raw information. When it is given context it becomes a word, a word that has meaning, a word that is connected to concepts in your mind – it becomes knowledge. By connecting raw information to context, knowledge is formed.

Once you have acquired a piece of knowledge such as “sem means mind in Tibetan,” you may then also form further knowledge about it. For example, you may form the memory, “Nova said that ‘sem means mind in Tibetan.’” You might also connect the word “sem” to networks of further concepts you have about Tibet and your understanding of what the word “mind” means.

The mind is the organ of meaning – mind is where meaning is stored, interpreted and created. Meaning is not “out there” in the world; it is purely subjective, purely mental. Meaning is almost equivalent to mind, in fact, for the two never occur separately. Each of our individual minds has some way of internally representing meaning — when we read or hear a word that we know, our minds connect it to a network of concepts, and at that moment it means something to us.

Digging deeper, if you are really curious, or if you happen to know Greek, you may also find that a similar sound occurs in the Greek word sēmantikós – which means “having meaning” and in turn is the root of the English word “semantic,” which means “pertaining to or arising from meaning.” That’s an odd coincidence! “Sem” occurs in the Tibetan word for mind, and in the English and Greek words that relate to the concepts of “meaning” and “mind.” Even stranger is that not only do these words have a similar sound, they have a similar meaning.

With all this knowledge at your disposal, when you then see the term “Semantic Web” you may be able to infer that it has something to do with adding “meaning” to the Web. However, if you were a Tibetan, perhaps you might instead think the term had something to do with adding “mind” to the Web. In either case you would be right!

Discovering New Connections

We’ve discovered a new connection — namely that there is an implicit connection between “sem” in Greek, English and Tibetan: they all relate to meaning and mind. It’s not a direct, explicit connection – it’s not evident unless you dig for it. But it’s a useful tidbit of knowledge once it’s found. Unlike the direct migration of the sound “sem” from Greek to English, there may not have ever been a direct transfer of this sound from Greek to Sanskrit to Tibetan. But in a strange and unexpected way, they are all connected. This connection wasn’t necessarily explicitly stated by anyone before, but was uncovered by exploring our network of concepts and making inferences.

The sequence of thought about “sem” above is quite similar to the kind of intellectual reasoning and discovery that the actual Semantic Web seeks to enable software to do automatically. How is this kind of reasoning and discovery enabled? The Semantic Web provides a set of technologies for formally defining the context of information. Just as the Web relies on a standard formal specification for “marking up” information with formatting codes that enable any application that understands those codes to format the information in the same way, the Semantic Web relies on new standards for “marking up” information with statements about its context – its meaning – that enable any application to understand, and reason about, the meaning of those statements in the same way.
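Such statements about context are commonly modeled as subject-predicate-object triples, the basic data model behind the Semantic Web's RDF standard. Here is a toy sketch of the "sem" example above expressed as triples; the vocabulary ("isWordIn", "means", "derivesFrom") is invented purely for illustration, not a real ontology:

```python
# Context expressed as subject-predicate-object statements (triples).
# The predicates below are illustrative, not a standard vocabulary.
statements = [
    ("sem", "isWordIn", "Tibetan"),
    ("sem", "means", "mind"),
    ("semantikos", "isWordIn", "Greek"),
    ("semantikos", "means", "having meaning"),
    ("semantic", "derivesFrom", "semantikos"),
]

def about(subject):
    """Collect every (predicate, object) pair stated about a subject."""
    return [(p, o) for s, p, o in statements if s == subject]

# Any application that understands the triple format can now recover
# the context of "sem" without understanding natural language.
print(about("sem"))  # [('isWordIn', 'Tibetan'), ('means', 'mind')]
```

Real RDF uses globally unique URIs rather than bare strings for subjects and predicates, precisely so that different applications interpret the same statement in the same way.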

By applying semantic reasoning agents to large collections of semantically enhanced content, all sorts of new connections may be inferred, leading to new knowledge, unexpected discoveries and useful additional context around content. This kind of reasoning and discovery is already taking place in fields from drug discovery and medical research, to homeland security and intelligence. The Semantic Web is not the only way to do this — but it certainly will improve the process dramatically. And of course, with this improvement will come new questions about how to assess and explain how various inferences were made, and how to protect privacy as our inferencing capabilities begin to extend across ever more sources of public and private data. I don’t have the answers to these questions, but others are working on them and I have confidence that solutions will be arrived at over time.

Smart Data

By marking up information with metadata that formally codifies its context, we can make the data itself “smarter.” The data becomes self-describing. When you get a piece of data you also get the necessary metadata for understanding it. For example, if I sent you a document containing the word “sem” in it, I could add markup around that word indicating that it is the word for “mind” in the Tibetan language.

Similarly, a document containing mentions of “Radar Networks” could contain metadata indicating that “Radar Networks” is an Internet company, not a product or a type of radar technology. A document about a person could contain semantic markup indicating that they are residents of a certain city, experts on Italian cooking, and members of a certain profession. All of this could be encoded as metadata in a form that software could easily understand. The data carries more information about its own meaning.
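As a hypothetical sketch of such self-describing data, imagine a document that carries a metadata record disambiguating the entities it mentions. The field names here ("type", "industry") are invented for illustration, but the idea is the same: any program can read the meaning straight from the data, with no natural-language understanding required:

```python
# "Smart data": the document carries metadata describing its own contents.
# All field names below are illustrative, not a real metadata standard.
document = {
    "text": "Radar Networks is building tools for the Semantic Web.",
    "metadata": {
        "Radar Networks": {
            "type": "InternetCompany",  # not a product or a radar technology
            "industry": "Semantic Web",
        },
    },
}

def entity_type(doc, name):
    """Read an entity's type directly from the self-describing data."""
    return doc["metadata"].get(name, {}).get("type", "unknown")

print(entity_type(document, "Radar Networks"))  # InternetCompany
```

Because the metadata travels with the document, any software that knows the metadata format, however simple that software is, can interpret the document correctly.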

The alternative to smart data would be for software to actually read and understand natural language as well as humans do. But that’s really hard. To correctly interpret raw natural language, software would have to be developed that knew as much as a human being. But think about how much teaching and learning is required to raise a human being to the point where they can read at an adult level. It is likely that similar training would be necessary to build software that could do that. So far that goal has not been achieved, although some attempts have been made. While decent progress in natural language understanding has been made, most software that can do this is limited to particular vertical domains, and it’s brittle — it doesn’t do a good job of making sense of terms and forms of speech that it wasn’t trained to parse and make sense of.

Instead of trying to make software a million times smarter than it is today, it is much easier to just encode more metadata about what our information means. That turns out to be less work in the end. And there’s an added benefit to this approach — the meaning exists with the data and travels with it. It is independent of any one software program — all software can access it. And because the meaning of information is stored with the information itself, rather than in the software, the software doesn’t have to be enormous to be smart. It just has to know the basic language for interpreting the semantic metadata it finds on the information it works with.

Smart data enables relatively dumb software to be smarter with less work. That’s an immediate benefit. And in the long term, as software actually gets smarter, smart data will make it easier for it to start learning and exploring on its own. So it’s a win-win approach. Start by adding semantic metadata to data, and end up with smarter software.

Making Statements About the World

Metadata comes down to making statements about the world in a manner that machines, and perhaps even humans, can understand unambiguously. The same piece of metadata should be interpreted in the same way by different applications and readers.

There are many kinds of statements that can be made about information to provide it with context. For example, you can state a definition such as “person” means “a human being or a legal entity.” You can state an assertion such as “Sue is a human being.” You can state a rule such that “if x is a human being, then x is a person.”

From these statements it can then be inferred that “Sue is a person.” This inference is so obvious to you and me that it seems trivial, but most software today cannot do this. It doesn’t know what a person is, let alone what a name is. But if software could do this, then it could, for example, automatically organize documents by the people they are related to, discover connections between people mentioned in a set of documents, find documents about people related to particular topics, or give you a list of all the people mentioned in a set of documents, or all the documents related to a person.
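The inference above can be sketched in a few lines of code. This is an illustrative toy, not a real OWL reasoner; the triple format and the rule are invented for the example.

```python
# Toy forward-chaining inference: a hypothetical illustration, not a
# real OWL reasoner. Facts are (subject, predicate, object) triples.

facts = {("Sue", "is_a", "human being")}

def apply_rules(facts):
    """Rule: if x is a human being, then x is a person."""
    inferred = set(facts)
    for subject, predicate, obj in facts:
        if predicate == "is_a" and obj == "human being":
            inferred.add((subject, "is_a", "person"))
    return inferred

facts = apply_rules(facts)
print(("Sue", "is_a", "person") in facts)  # True
```

A real reasoner would keep applying rules until no new facts appear, but even this one-pass version captures the idea: meaning is derived from explicit statements rather than guessed from raw text.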

Of course this is a very basic example. But imagine if your software didn’t just know about people – it knew about most of the common concepts that occur in your life. Your software would then be able to help you work with your documents just about as intelligently as you are able to do by yourself, or perhaps even more intelligently, because you are just one person and you have limited time and energy but your software could work all the time, and in parallel, to help you.

Examples and Benefits

How could the existence of the Semantic Web and all the semantic metadata that defines it be really useful to everyone in the near-term?

Well, for example, the problem of email spam would finally be cured: your software would be able to look at a message and know whether it was meaningful and/or relevant to you or not.

Similarly, you would never have to file anything by hand again. Your software could automate all filing and information organization tasks for you because it would understand your information and your interests. It would be able to figure out when to file something in a single folder, multiple folders, or new ones. It would organize everything — documents, photos, contacts, bookmarks, notes, products, music, video, data records — and it would do it even better and more consistently than you could on your own. Your software wouldn’t just organize stuff, it would turn it into knowledge by connecting it to more context. It could do this not just for individuals, but for groups, organizations and entire communities.

Another example: search would be vastly better: you could search conversationally by typing in everyday natural language and you would get precisely what you asked for, or even what you needed but didn’t know how to ask for correctly, and nothing else. Your search engine could even ask you questions to help you narrow what you want. You would finally be able to converse with software in ordinary speech and it would understand you.

The process of discovery would be easier too. You could have a software agent that worked as your personal recommendation agent. It would constantly be looking in all the places you read or participate in for things that are relevant to your past, present and potential future interests and needs. It could then alert you in a contextually sensitive way, knowing how to reach you and how urgently to mark things. As you gave it feedback it could learn and do a better job over time.

Going even further with this, semantically-aware software – software that is aware of context, software that understands knowledge – isn’t just for helping you with your information, it can also help to enrich and facilitate, and even partially automate, your communication and commerce (when you want it to). So for example, your software could help you with your email. It would be able to recommend responses to messages for you, or automate the process. It would be able to enrich your messaging and discussions by automatically cross-linking what you are speaking about with related messages, discussions, documents, Web sites, subject categories, people, organizations, places, events, etc.

Shopping and marketplaces would also become better – you could search precisely for any kind of product, with any specific attributes, and find it anywhere on the Web, in any store. You could post classified ads and automatically get relevant matches according to your priorities, from all over the Web, or only from specific places and parties that match your criteria for who you trust. You could also easily invent a new custom data structure for posting classified ads for a new kind of product or service and publish it to the Web in a format that other Web services and applications could immediately mine and index without having to necessarily integrate with your software or data schema directly.

You could publish an entire database to the Web and other applications and services could immediately start to integrate your data with their data, without having to migrate your schema or their own. You could merge data from different data sources together to create new data sources without ever having to touch or look at an actual database schema.
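To make the merging idea concrete, here is a minimal sketch. The two record formats, field names, and shared property names are all invented for illustration; the point is that each source declares a mapping to shared semantics once, and merging then needs no schema surgery.

```python
# Two differently-shaped data sources. Field names are hypothetical.
source_a = [{"fname": "Sue", "surname": "Smith"}]
source_b = [{"givenName": "Bob", "familyName": "Jones"}]

# Each source declares, once, how its fields map to shared
# (ontology-defined) properties.
mapping_a = {"fname": "first_name", "surname": "last_name"}
mapping_b = {"givenName": "first_name", "familyName": "last_name"}

def to_shared(records, mapping):
    """Rewrite each record's keys into the shared vocabulary."""
    return [{mapping[k]: v for k, v in rec.items()} for rec in records]

merged = to_shared(source_a, mapping_a) + to_shared(source_b, mapping_b)
print(merged)
# [{'first_name': 'Sue', 'last_name': 'Smith'},
#  {'first_name': 'Bob', 'last_name': 'Jones'}]
```

A real Semantic Web version would express the mappings in RDF/OWL rather than Python dicts, but the division of labor is the same: describe your data once, and any consumer that knows the shared vocabulary can use it.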

Bumps on the Road

The above examples illustrate the potential of the Semantic Web today, but the reality on the ground is that the technology is still in the early phases of evolution. Even for experienced software engineers and Web developers, it is difficult to apply in practice. The main obstacles are twofold:

(1) The Tools Problem:

There are very few commercial-grade tools for doing anything with the Semantic Web today. Most of the tools for building semantically-aware applications, or for adding semantics to information, are still in the research phase and were designed for expert computer scientists who specialize in knowledge representation, artificial intelligence, and machine learning.

These tools have a steep learning curve and they don’t generally support large-scale applications – they were designed mainly to test theories and frameworks, not to actually apply them. But if the Semantic Web is ever going to become mainstream, it has to be made easier to apply – it has to be made more productive and accessible for ordinary software and content developers.

Fortunately, the tools problem is already on the verge of being solved. Companies such as my own venture, Radar Networks, are developing the next generation of tools for building Semantic Web applications and Semantic Web sites. These tools will hide most of the complexity, enabling ordinary mortals to build applications and content that leverage the power of semantics without needing PhDs in knowledge representation.

(2) The Ontology Problem:

The Semantic Web provides frameworks for defining systems of formally defined concepts called “ontologies,” which can then be used to connect information to context in an unambiguous way. Without ontologies, there really can be no semantics. The ontologies ARE the semantics; they define the meanings that are so essential for connecting information to context.

But there are still few widely used or standardized ontologies. And getting people to agree on common ontologies is not generally easy. Everyone has their own way of describing things, their own worldview, and, let’s face it, nobody wants to use somebody else’s worldview instead of their own. Furthermore, the world is very complex, and to adequately describe all the knowledge that comprises what is thought of as “common sense” would require a very large ontology (and in fact, such an ontology exists – it’s called Cyc and it is so large and complex that only experts can really use it today).

Even describing the knowledge of just a single vertical domain, such as medicine, is extremely challenging. To make matters worse, the tools for authoring ontologies are still very hard to use – one has to understand the OWL language and wrestle with difficult, buggy ontology authoring tools. Domain experts who are non-technical and not trained in formal reasoning or knowledge representation may find the process of designing ontologies frustrating with current tools. What is needed are commercial-quality tools for building ontologies that hide the underlying complexity so that people can just pour their knowledge into them as easily as they speak. That’s still a ways off, but not far off. Perhaps ten years at the most.

Of course the difficulty of defining ontologies would be irrelevant if the necessary ontologies already existed. Perhaps experts could define them and then everyone else could just use them? There are numerous ontologies already in existence, both on the general level as well as for specific verticals. However, in my own opinion, having looked at many of them, I still haven’t found one that has the right balance of coverage of the necessary concepts most applications need, and accessibility and ease-of-use for non-experts. That kind of balance is a requirement for any ontology to really go mainstream.

Furthermore, regarding the present crop of ontologies, what is still lacking is standardization. Ontologists have not agreed on which ontologies to use. As a result it’s anybody’s guess which ontology to use when writing a semantic application, and thus there is a high degree of ontology diversity today. Diversity is good, but too much diversity is chaos.

Applications that use different ontologies about the same things don’t automatically interoperate unless their ontologies have been integrated. This is similar to the problem of database integration in the enterprise. In order to interoperate, different applications that use different data schemas for records about the same things have to be mapped to each other somehow – either at the application level or the data level. This mapping can be direct or through some form of middleware.

Ontologies can be used as a form of semantic middleware, enabling applications to be mapped at the data level instead of the application level. Ontologies can also be used to map applications at the application level, by making ontologies of Web services and capabilities. This is an area in which a lot of research is presently taking place.

The OWL language can express mappings between concepts in different ontologies. But if there are many ontologies, and many of them partially overlap, it is a non-trivial task to actually make the mappings between their concepts.

Even though concept A in ontology one and concept B in ontology two may have the same names, and even some of the same properties, in the context of the rest of the concepts in their respective ontologies they may imply very different meanings. So simply mapping them as equivalent on the basis of their names is not adequate; their connections to all the other concepts in their respective ontologies have to be considered as well. It quickly becomes complex. There are some potential ways to automate the construction of mappings between ontologies, but they are still experimental. Today, integrating ontologies requires the help of expert ontologists, and to be honest, I’m not sure even the experts have it figured out. It’s more of an art than a science at this point.
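To illustrate why name matching alone fails, here is a toy sketch. The concepts and their property names are invented; the comparison (Jaccard overlap of surrounding properties) is just one naive heuristic, far simpler than what real ontology-alignment research uses.

```python
# Two hypothetical ontologies both define a concept named "Bank",
# but their surrounding properties reveal very different meanings.
ontology_a = {"Bank": {"has_account", "has_branch", "regulated_by"}}
ontology_b = {"Bank": {"adjacent_to_river", "has_slope", "erodes"}}

def property_overlap(props_a, props_b):
    """Jaccard similarity of the two concepts' property sets."""
    shared = props_a & props_b
    total = props_a | props_b
    return len(shared) / len(total)

score = property_overlap(ontology_a["Bank"], ontology_b["Bank"])
print(score)  # 0.0 -- same name, no shared context, so don't map them
```

Same name, zero contextual overlap: a mapper that looked only at labels would wrongly equate a financial institution with a riverbank.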

Darwinian Selection of Ontologies

All that is needed for mainstream adoption to begin is for a large body of mainstream content to become semantically tagged and accessible. This will cause whatever ontology is behind that content to become popular.

When developers see that there is significant content and traction around a particular ontology, they will use that ontology for their own applications about similar concepts, or at least they will do the work of mapping their own ontology to it, and in this way the world will converge in a Darwinian fashion around a few main ontologies over time.

These main ontologies will then be worth the time and effort necessary to integrate them on a semantic level, resulting in a cohesive Semantic Web. We may in fact see Darwinian natural selection take place not just at the ontology level, but at the level of pieces of ontologies.

A certain ontology may do a good job of defining what a person is, while another may do a good job of defining what a company is. These definitions may be used for a lot of content, and gradually they will become common parts of an emergent meta-ontology comprised of the most-popular pieces from thousands of ontologies. This could be great or it could be a total mess. Nobody knows yet. It’s a subject for further research.

Making Sense of Ontologies

Since ontologies are so important, it is helpful to actually understand what an ontology really is, and what it looks like. An ontology is a system of formally defined, related concepts. For example, a simple ontology is the following set of statements:

A human is a living thing.

A person is a human.

A person may have a first name.

A person may have a last name.

A person must have one and only one date of birth.

A person must have a gender.

A person may be socially related to another person.

A friendship is a kind of social relationship.

A romantic relationship is a kind of friendship.

A marriage is a kind of romantic relationship.

A person may be in a marriage with only one other person at a time.

A person may be employed by an employer.

An employer may be a person or an organization.

An organization is a group of people.

An organization may have a product or a service.

A company is a type of organization.

We’ve just built a simple ontology about a few concepts: humans, living things, persons, names, social relationships, marriages, employment, employers, organizations, groups, products and services. Within this system of concepts there is particular logic, some constraints, and some structure. It may or may not correspond to your worldview, but it is a worldview that is unambiguously defined, can be communicated, and is internally logically consistent, and that is what is important.

The Semantic Web approach provides an open-standard language, OWL, for defining ontologies. OWL also provides a way to define instances of ontologies. Instances are assertions within the worldview that a given ontology provides. In other words, OWL provides a means to make statements that connect information to the ontology so that software can understand its meaning unambiguously. For example, below is a set of statements based on the above ontology:

There exists a person x.

Person x has a first name “Sue”.

Person x has a last name “Smith”.

Person x has a full name “Sue Smith”.

Sue Smith was born on June 1, 2005.

Sue Smith has a gender: female.

Sue Smith has a friend: Jane, who is another person.

Sue Smith is married to: Bob, another person.

Sue Smith is employed by Acme Inc., a company.

Acme Inc. has a product, Widget 2.0.

The set of statements above, plus the ontology they are connected to, collectively comprise a knowledge base that, if represented formally in the OWL markup language, could be understood by any application that speaks OWL in the precise manner that it was intended to be understood.
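For illustration, here is roughly what that knowledge base looks like as plain subject–predicate–object triples, with a tiny pattern-matching query over it. The property names are invented stand-ins for what an OWL ontology would formally define, and the query function is a toy, not a real query language like SPARQL.

```python
# The Sue Smith statements as (subject, predicate, object) triples.
kb = [
    ("Sue Smith", "type", "Person"),
    ("Sue Smith", "first_name", "Sue"),
    ("Sue Smith", "last_name", "Smith"),
    ("Sue Smith", "born_on", "2005-06-01"),
    ("Sue Smith", "gender", "female"),
    ("Sue Smith", "friend_of", "Jane"),
    ("Sue Smith", "married_to", "Bob"),
    ("Sue Smith", "employed_by", "Acme Inc."),
    ("Acme Inc.", "type", "Company"),
    ("Acme Inc.", "has_product", "Widget 2.0"),
]

def query(kb, subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None = wildcard)."""
    return [
        t for t in kb
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# What products does Sue's employer make? Two hops across the graph.
employer = query(kb, subject="Sue Smith", predicate="employed_by")[0][2]
products = [t[2] for t in query(kb, subject=employer, predicate="has_product")]
print(products)  # ['Widget 2.0']
```

The two-hop question at the end is exactly the kind of thing ordinary software cannot answer from a document today, but becomes trivial once the facts are explicit statements.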

Making Metadata

The OWL language provides a way to mark up any information, such as a data record, an email message or a Web page, with metadata in the form of statements that link particular words or phrases to concepts in the ontology. When software applications that understand OWL encounter the information, they can then reference the ontology and figure out exactly what the information means – or at least what the ontology says that it means.

But something has to add these semantic metadata statements to the information – and if it doesn’t add them, or adds the wrong ones, then software applications that look at the information will get the wrong idea. And this is another challenge – how will all this metadata get created and added into content? People certainly aren’t going to add it all by hand!

Fortunately there are many ways to make this easier. The best approach is to automate it using special software that goes through information, analyzes the meaning and adds semantic metadata automatically. This works today, but the software has to be trained or provided with rules, and that takes some time. It also doesn’t scale cost-effectively to vast data-sets.
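A sketch of that rule-based approach: a small gazetteer (lookup table) maps known strings to ontology concepts, and the annotator scans text and emits metadata statements. The terms, concept names, and `ontology:` prefix are all invented for illustration; real annotators use much richer linguistic rules.

```python
import re

# Hypothetical gazetteer: known strings -> ontology concepts.
gazetteer = {
    "Sue Smith": "ontology:Person",
    "Acme Inc.": "ontology:Company",
}

def annotate(text):
    """Emit (term, predicate, concept) statements for known terms."""
    statements = []
    for term, concept in gazetteer.items():
        if re.search(re.escape(term), text):
            statements.append((term, "is_instance_of", concept))
    return statements

doc = "Sue Smith joined Acme Inc. last spring."
print(annotate(doc))
# [('Sue Smith', 'is_instance_of', 'ontology:Person'),
#  ('Acme Inc.', 'is_instance_of', 'ontology:Company')]
```

This is also where the scaling cost shows up: someone has to build and maintain the rules, which is exactly why the author turns next to end-user forms and community tagging.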

Alternatively, individuals can be provided with ways to add semantics themselves as they author information. When you post your resume on a semantically-aware job board, you could fill out a form about each of your past jobs, and the job board would connect that data to appropriate semantic concepts in an underlying employment ontology. As an end-user you would just fill out a form like you are used to doing; under the hood the job board would add the semantics for you.

Another approach is to leverage communities to get the semantics. We already see communities that are adding basic metadata “tags” to photos, news articles and maps. Already a few simple types of tags are being used pseudo-semantically: subject tags and geographical tags. These are primitive forms of semantic metadata. Although they are not expressed in OWL or connected to formal ontologies, they are at least semantically typed with prefixes, or by being entered into fields or specific namespaces that define their types.

Tagging by Example

There may also be another solution to the problem of how to add semantics to content in the not-too-distant future. Once a suitable amount of content has been marked up with semantic metadata, it may be possible, through purely statistical forms of machine learning, for software to begin to learn how to do a pretty good job of marking up new content with semantic metadata.

For example, if the string “Nova Spivack” is often marked up with semantic metadata stating that it indicates a person, and not just any person but a specific person that is abstractly represented in a knowledge base somewhere, then when software applications encounter a new non-semantically-enhanced document containing strings such as “Nova Spivack” or “Spivack, Nova” they can make a reasonably good guess that this indicates that same specific person, and they can add the necessary semantic metadata to that effect automatically.

As more and more semantic metadata is added to the Web and made accessible, it constitutes a statistical training set that can be learned and generalized from. Although humans may need to jump-start the process with some manual semantic tagging, it might not be long before software could assist them and eventually do all the tagging for them. Only in special cases would software need to ask a human for assistance — for example when totally new terms or expressions were encountered for the first several times.

The technology for doing this learning already exists — and actually it’s not very different from how search engines like Google measure the community sentiment around web pages. Each time something is semantically tagged with a certain meaning, that constitutes a “vote” for it having that meaning. The meaning that gets the most votes wins. It’s an elegant, Darwinian, emergent approach to learning how to automatically tag the Web.
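The voting idea can be sketched in a few lines. The tag data below is invented, and real systems would weight votes by source quality rather than counting them equally, but the majority-wins principle is the same.

```python
from collections import Counter

# Invented community tags: each is a "vote" for a term's meaning.
votes = [
    ("Nova Spivack", "Person"),
    ("Nova Spivack", "Person"),
    ("Nova Spivack", "Company"),  # an occasional mistagging
    ("Nova Spivack", "Person"),
]

def learned_meaning(votes, term):
    """Return the meaning with the most votes for the given term."""
    tally = Counter(meaning for t, meaning in votes if t == term)
    return tally.most_common(1)[0][0]

print(learned_meaning(votes, "Nova Spivack"))  # Person
```

Note how the single mistagging is simply outvoted: the approach tolerates noise as long as the community gets it right more often than not.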

One thing is certain: if communities were able to tag things with more types of tags, and these tags were connected to ontologies and knowledge bases, that would result in a lot of semantic metadata being added to content in a completely bottom-up, grassroots manner, and this in turn would enable the process to start to become automated, or at least machine-augmented.

Getting the Process Started

But making the user experience of semantic tagging easy (and immediately beneficial) enough that regular people will do it is a challenge that has yet to be solved. However, it will be solved shortly. It has to be. And many companies and researchers know this and are working on it right now. It has to be solved in order to jump-start the Semantic Web.

I believe that the Tools Problem – the lack of commercial-grade tools for building semantic applications – is essentially solved already (although the products have not hit the market yet; they will within a few years at most). The Ontology Problem is further from being solved. I think the way this problem will be solved is through a few “killer apps” that result in the building up of a large amount of content around particular ontologies within particular online services.

Where might we see this content initially arising? In my opinion it will most likely be within vertical communities of interest, communities of practice, and communities of purpose. Within such communities there is a need to create a common body of knowledge and to make that knowledge more accessible, connected and useful.

The Semantic Web can really improve the quality of knowledge and user experience within these domains. Because they are communities, not just static content services, these organizations are driven by user-contributed content — users play a key role in building content and tagging it. We already see this process starting to take place in communities such as Flickr, del.icio.us, the Wikipedia and Digg. We know that communities of people do tag content, and consume tagged content, if it is easy and beneficial enough for them to do so.

In the near future we may see miniature Semantic Webs arising around particular places, topics and subject areas, projects, and other organizations. Or perhaps, like almost every form of new media in recent times, we may see early adoption of the Semantic Web around online porn — what might be called “the sementic web.”

Whether you like it or not, it is a fact that pornography was one of the biggest drivers of early mainstream adoption of personal video technology, CD-ROMs, and also of the Internet and the Web.

But I think that probably is not necessary this time around. While I’m sure the so-called “sementic web” could benefit from the Semantic Web, it isn’t going to be the primary driver of the Semantic Web’s adoption. That’s probably a good thing — the world can simply skip over that phase of development and embrace this technology with both hands, so to speak.

The World Wide Database

In some ways one could think of the Semantic Web as “the world wide database” – it does for the meaning of data records what the Web did for the formatting of documents. But that’s just the beginning. It actually turns documents into richer data records. It turns unstructured data into structured data. All data becomes structured data, in fact. The structure is not merely defined structurally, but semantically.

In other words, it’s not merely that, for example, a data record or document can be defined in such a way as to specify that it contains a certain field of data with a certain label at a certain location – it defines what that field of data actually means in an unambiguous, machine-understandable way. If all you want is a Web of data, XML is good enough. But if you want to make that data interoperable and machine-understandable then you need RDF and OWL – the Semantic Web.

Like any database, the Semantic Web, or rather the myriad mini-semantic-webs that will comprise it, have to overcome the challenge of data integration. Ontologies provide a better way to describe and map data, but the data still has to be described and mapped, and this does take some work. It’s not a magic bullet.

The Semantic Web makes it easier to integrate data, but it doesn’t remove the data integration problem altogether. I think the eventual solution to this problem will combine technological and community-driven, folksonomy-oriented approaches.

The Semantic Web in Historical Context

Let’s transition now and zoom out to see the bigger picture. The Semantic Web provides technologies for representing and sharing knowledge in new ways. In particular, it makes knowledge more accessible to software, and thus to other people. Another way of saying this is that it liberates knowledge from particular human minds and organizations – it provides a way to make knowledge explicit, in a standardized format that any application can understand. This is quite significant. Let’s put this in historical perspective.

Before the invention of the printing press, there were two ways to spread knowledge – one was orally, the other was in some symbolic form such as art or written manuscripts. The oral transmission of knowledge had limited range and a high error-rate, and the only way to learn something was to meet someone who knew it and get them to tell you. The other option, symbolic communication through art and writing, provided a means to communicate knowledge independently of particular people – but it was only feasible to produce a few copies of any given artwork or manuscript because they had to be copied by hand. So the transmission of knowledge was limited to small groups or at least small audiences. Basically, the only way to get access to this knowledge was to be one of the lucky few who could acquire one of its rare physical copies.

The invention of the printing press changed this – for the first time knowledge could be rapidly and cost-effectively mass-produced and mass-distributed. Printing made it possible to share knowledge with ever-larger audiences. This enabled a huge transformation for human knowledge, society, government, technology – really every area of human life was transformed by this innovation.

The World Wide Web made the replication and distribution of knowledge even easier – with the Web you don’t even have to physically print or distribute knowledge anymore; the cost of distribution is effectively zero, and everyone has instant access to everything from anywhere, anytime. That’s a lot better than having to lug around a stack of physical books. Everyone potentially has whatever knowledge they need with no physical barriers. This has been another huge transformation for humanity – and it has affected every area of human life. Like the printing press, the Web fundamentally changed the economics of knowledge.

The Semantic Web is the next big step in this process – it will make all the knowledge of the human race accessible to software. For the first time, non-human things (software applications) will be able to start working with human knowledge to do things (for humans) on their own. This is a big leap – a leap like the emergence of a new species, or the symbiosis of two existing species into a new form of life.

The printing press and the Web changed the economics of replicating, distributing and accessing knowledge. The Semantic Web changes the economics of processing knowledge. Unlike the printing press and the Web, the Semantic Web enables knowledge to be processed by non-human things.

In other words, humans don’t have to do all the thinking on their own; they can be assisted by software. Of course we humans have to at least first create the software (until we someday learn to create software that is smart enough to create software too), and we have to create the ontologies necessary for the software to actually understand anything (until we learn to create software that is smart enough to create ontologies too), and we have to add the semantic metadata to our content in various ways (until our software is smart enough to do this for us, which it almost is already). But once we do the initial work of making the ontologies and software, and adding semantic metadata, the system starts to pick up speed on its own, and over time the amount of work we humans have to do to make it all function decreases. Eventually, once the system has encoded enough knowledge and intelligence, it starts to function without needing much help, and when it does need our help, it will simply ask us and learn from our answers.

This may sound like science fiction today, but in fact a lot of this is already built and working in the lab. The big hurdle is figuring out how to get this technology to mass-market. That is probably as hard as inventing the technology in the first place. But I’m confident that someone will solve it eventually.

Once this happens the economics of processing knowledge will truly be different than it is today. Instead of needing an actual real-live expert, the knowledge of that expert will be accessible to software that can act as their proxy – and anyone will be able to access this virtual expert, anywhere, anytime. It will be like the Web – but instead of just information being accessible, the combined knowledge and expertise of all of humanity will also be accessible, and not just to people but also to software applications.

The Question of Consciousness

The Semantic Web literally enables humans to share their knowledge with each other and with machines. It enables the virtualization of human knowledge and intelligence. With respect to machines, in doing this, it will lend machines “minds” in a certain sense – namely in that they will at least be able to correctly interpret the meaning of information and replicate the expertise of experts.

But will these machine-minds be conscious? Will they be aware of the meanings they interpret, or will they just be automatons that are simply following instructions without any awareness of the meanings they are processing? I doubt that software will ever be conscious, because from what I can tell consciousness — or what might be called the sentient awareness of awareness itself, as well as of other things that are sensed — is an immaterial phenomenon that is as fundamental as space, time and energy — or perhaps even more fundamental. But this is just my personal opinion after having searched for consciousness through every means possible for decades. It just cannot be found to be something, yet it is definitely and undeniably taking place.

Consciousness can be exemplified through the analogy of space (but unlike space, consciousness has this property of being aware; it’s not a mere lifeless void). We all agree space is there, but nobody can actually point to it somewhere, and nobody can synthesize space. Space is immaterial and fundamental. It is primordial. So is electricity. Nobody really knows what electricity is ultimately, but if you build the right kind of circuit you can channel it, and we’ve learned a lot about how to do that.

Perhaps we may figure out how to channel consciousness like we channel electricity with some sort of synthetic device someday, but I think that is highly unlikely. I think if you really want to create consciousness it’s much easier and more effective to just have children. That’s something ordinary mortals can do today with the technology they were born with. Of course when you have children you don’t really “create” their consciousness; it seems to be there on its own. We don’t really know what it is or where it comes from, or when it arises there. We know very little about consciousness today. Considering that it is the most fundamental human experience of all, it is actually surprising how little we know about it!

In any case, until we truly delve far more deeply into the nature of the mind, consciousness will be barely understood or recognized, let alone explained or synthesized by anyone. In many eastern civilizations there are multi-thousand-year traditions that focus quite precisely on the nature of consciousness. The major religions have all universally concluded that consciousness is beyond the reach of science, beyond the reach of concepts, beyond the mind entirely. All those smart people analyzing consciousness for so long, and with such precision, and so many methods of inquiry, may have a point worth listening to.

Whether or not machines will ever actually “know” or be capable of being conscious of that meaning or expertise is a big debate, but at least we can all agree that they will be able to interpret the meaning of information and rules if given the right instructions. Without having to be conscious, software will be able to process semantics quite well — this has already been proven. It’s working today.

While consciousness is and may always be a mystery that we cannot synthesize – the ability for software to follow instructions is an established fact. In its most reduced form, the Semantic Web just makes it possible to provide richer kinds of instructions. There’s no magic to it. Just a lot of details. In fact, to play on a famous line, “it’s semantics all the way down.”

The Semantic Web does not require that we make conscious software. It just provides a way to make slightly more intelligent software. There’s a big difference. Intelligence is simply a form of information processing, for the most part. It does not require consciousness, the actual awareness of what is going on, which is something else altogether.

While highly intelligent software may need to sense its environment and its own internal state and reason about these, it does not actually have to be conscious to do this. These operations are for the most part simple procedures applied vast numbers of times and in complex patterns. Nowhere in them is there any consciousness, nor does consciousness suddenly emerge when suitable levels of complexity are reached.

Consciousness is something quite special and mysterious. And fortunately for humans, it is not necessary for the creation of more intelligent software, nor is it a byproduct of the creation of more intelligent software, in my opinion.

The Intelligence of the Web

So the real point of the Semantic Web is that it enables the Web to become more intelligent. At first this may seem like a rather outlandish statement, but in fact the Web is already becoming intelligent, even without the Semantic Web.

Although the intelligence of the Web is not very evident at first glance, it can be found if you look for it. This intelligence doesn’t exist across the entire Web yet; it exists only in islands that are few and far between compared to the vast amount of information on the Web as a whole. But these islands are growing, more are appearing every year, and they are starting to connect together. And as this happens the collective intelligence of the Web is increasing.

Perhaps the premier example of an “island of intelligence” is the Wikipedia, but there are many others: the Open Directory; portals such as Yahoo and Google; vertical content providers such as CNET and WebMD; commerce communities such as Craigslist and Amazon; content-oriented communities such as LiveJournal, Slashdot, Flickr, and Digg, and of course the millions of discussion boards scattered around the Web; and social communities such as MySpace and Facebook. There are also large numbers of private islands of intelligence on the Web within enterprises, for example the many online knowledge and collaboration portals that exist within businesses, non-profits, and governments.

What makes these islands “intelligent” is that they are places where people (and sometimes applications as well) are able to interact with each other to help grow and evolve collections of knowledge. When you look at them close-up they appear to be just like any other Web site, but when you look at what they are doing as a whole, these services are thinking. They are learning, self-organizing, sensing their environments, interpreting, reasoning, understanding, introspecting, and building knowledge. These are the activities of minds, of intelligent systems.

The intelligence of a system such as the Wikipedia exists on several levels: the individuals who author and edit it are intelligent, the groups that help to manage it are intelligent, and the community as a whole, which is constantly growing, changing, and learning, is intelligent.

Flickr and Digg also exhibit intelligence. Flickr’s growing system of tags is the beginning of something resembling a collective visual sense organ on the Web. Images are perceived, stored, interpreted, and connected to concepts and other images. This is what the human visual system does. Similarly, Digg is a community that collectively detects, focuses attention on, and interprets current news. It’s not unlike a primitive collective analogue to the human facility for situational awareness.

There are many other examples of collective intelligence emerging on the Web. The Semantic Web will add one more form of intelligent actor to the mix: intelligent applications. In the future, after the Wikipedia is connected to the Semantic Web, it will be authored and edited not only by humans but also by smart applications that constantly look for new information, new connections, and new inferences to add to it.

Although the knowledge on the Web today is still mostly organized within different islands of intelligence, these islands are starting to reach out and connect together. They are forming trade routes, connecting their economies, and learning each other’s languages and cultures. The next step will be for these islands of knowledge to begin to share not just content and services, but also their knowledge: what they know about their content and services. The Semantic Web will make this possible by providing an open format for the representation and exchange of knowledge and expertise.

When applications integrate their content using the Semantic Web they will also be able to integrate their context, their knowledge, and this will make the content much more useful and the integration much deeper. For example, when an application imports photos from another application it will also be able to import semantic metadata about the meaning and connections of those photos. Everything that the community and application know about the photos in the service that provides the content (the photos) can be shared with the service that receives the content. Better yet, there will be no need for custom application integration in order for this to happen: as long as both services conform to the open standards of the Semantic Web, the knowledge is instantly portable and reusable.
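To make the photo example concrete, here is a minimal sketch (in Python, with entirely hypothetical names like “photo42”) of what it means for one service to hand another not just a photo but everything it knows about that photo, as subject-predicate-object statements. Real Semantic Web systems would exchange such knowledge in open standards like RDF; plain JSON is used here only to keep the sketch self-contained.

```python
# Sketch: service A exports a photo together with everything it "knows"
# about it, as (subject, predicate, object) statements in a portable
# format; service B imports both the content and its meaning.
import json

photo_knowledge = [
    ("photo42", "depicts", "Golden Gate Bridge"),
    ("photo42", "takenBy", "alice"),
    ("Golden Gate Bridge", "locatedIn", "San Francisco"),
]

exported = json.dumps(photo_knowledge)               # what service A sends
imported = [tuple(t) for t in json.loads(exported)]  # what service B receives

# Service B now knows not just the photo, but its meaning and connections,
# with no custom integration code between the two services.
assert imported == photo_knowledge
```

The point of the sketch is that because both sides agree on a shared, open representation, the knowledge travels with the content for free.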

Freeing Intelligence from Silos

Today much of the real value of the Web (and in the world) is still locked away in the minds of individuals, the cultures of groups and organizations, and application-specific data silos. The emerging Semantic Web will begin to unlock the intelligence in these silos by making the knowledge and expertise they represent more accessible and understandable.

It will free knowledge and expertise from the narrow confines of individual minds, groups and organizations, and applications, and make them not only more interoperable, but more portable. It will be possible, for example, for a person or an application to share everything they know about a subject of interest as easily as we share documents today. In essence the Semantic Web provides a common language (or at least a common set of languages) for sharing knowledge and intelligence as easily as we share content today.

The Semantic Web also provides standards for searching and reasoning more intelligently. The SPARQL query language enables any application to ask for knowledge from any other application that speaks SPARQL. Instead of mere keyword search, this enables semantic search: applications can search for specific types of things that have particular attributes and relationships to other things.
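The contrast with keyword search can be sketched in a few lines. The following toy example (plain Python, not actual SPARQL syntax; all names are invented) stores knowledge as subject-predicate-object triples and answers a pattern query, in which None acts as a wildcard:

```python
# Knowledge stored as (subject, predicate, object) triples; a query is a
# pattern over types and relations rather than keywords in a document.
TRIPLES = [
    ("Kona", "type", "Dog"),
    ("Kona", "owner", "Sue"),
    ("Rex", "type", "Dog"),
    ("Rex", "owner", "Bob"),
    ("Sue", "type", "Person"),
]

def match(pattern, triples=TRIPLES):
    """Return every triple consistent with the pattern (None = wildcard)."""
    return [
        t for t in triples
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# "Find everything that is a Dog" -- a question about what things ARE,
# which no keyword index can answer reliably.
dogs = [s for s, _, _ in match((None, "type", "Dog"))]
print(dogs)  # prints ['Kona', 'Rex']
```

A real SPARQL engine generalizes this idea with joins across many patterns, but the underlying operation is the same kind of structured matching.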

In addition, standards such as SWRL provide formalisms for representing and sharing axioms, or rules, as well. Rules are a particular kind of knowledge, and there is a lot of it to represent and share, for example procedural knowledge and logical structures about the world. An ontology provides a means to describe the basic entities, their attributes, and their relations, but rules enable you to also make logical assertions and inferences about them. Without going into a lot of detail about rules and how they work here, the important point to realize is that they are also included in the framework. All forms of knowledge can be represented by the Semantic Web.
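A tiny example may help show what rules add on top of an ontology’s facts. The sketch below (plain Python, not SWRL syntax; the rule and all names are invented for illustration) forward-chains a single rule, deriving new facts from the base facts until nothing more follows:

```python
# Base facts from a hypothetical ontology, plus one rule:
#   if ?x owner ?y, then ?y caresFor ?x
# The engine repeatedly applies the rule until no new facts appear.
facts = {
    ("Kona", "type", "Dog"),
    ("Kona", "owner", "Sue"),
}

def apply_rule(facts):
    """Forward-chain the caresFor rule to a fixed point."""
    while True:
        new = {
            (o, "caresFor", s)
            for (s, p, o) in facts
            if p == "owner"
        } - facts
        if not new:
            return facts
        facts |= new

inferred = apply_rule(set(facts))
# The fact ("Sue", "caresFor", "Kona") was never asserted; it was inferred.
```

Real rule languages express far richer conditions, but the essential move is the same: asserted knowledge plus rules yields inferred knowledge.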

Zooming Way, Waaaay Out

So far in this article I’ve spent a lot of time talking about plumbing: the pipes, fluids, valves, fixtures, specifications, and tools of the Semantic Web. I’ve also spent some time on illustrations of how it might be useful in the very near future to individuals, groups, and organizations. But where is it heading after this? What is its long-term potential, and what might it mean for the human race on a historical time-scale?

For those of you who would prefer not to speculate, stop reading here. For the rest of you, I believe that the true significance of the Semantic Web, on a long-term timescale, is that it provides an infrastructure that will enable the evolution of increasingly sophisticated forms of collective intelligence. Ultimately this will result in the Web itself becoming more and more intelligent, until one day the entire human species, together with all of its software and knowledge, will function as something like a single worldwide distributed mind: a global mind.

Just like the mind of a single human individual, the global mind will be very chaotic, yet out of that chaos will emerge cohesive patterns of thought and decision. Just as in an individual human mind, there will be feedback between different levels of order: from individuals to groups to systems of groups, and back down from systems of groups to groups to individuals. Because of these feedback loops the system will adapt to its environment, and to its own internal state.

The coming global mind will collectively exhibit forms of cognition and behavior that are the signs of higher forms of intelligence. It will form and react to concepts about its “self,” just like an individual human mind. It will learn and introspect and explore the universe. The thoughts it thinks may sometimes be too big for any one person to understand or even recognize; they will be comprised of shifting patterns of millions of pieces of knowledge.

The Role of Humanity

Every person on the Internet will be a part of the global mind. And collectively they will function as its consciousness. I do not believe some new form of consciousness will suddenly emerge when the Web passes some threshold of complexity. I believe that humanity IS the consciousness of the Web, and unless and until we find a way to connect other lifeforms to the Web, or we build conscious machines, humans will be the only form of consciousness of the Web.

When I say that humans will function as the consciousness of the Web I mean that we will be the things in the system that know. The knowledge of the Semantic Web is what is known, but what knows that knowledge has to be something other than knowledge. A thought is knowledge, but what knows that thought is not knowledge; it is consciousness, whatever that is. We can figure out how to enable machines to represent and use knowledge, but we don’t know how to make them conscious, and we don’t have to, because we are already conscious.

As we’ve discussed earlier in this article, we don’t need conscious machines, we just need more intelligent machines. Intelligence, at least in its basic forms, does not require consciousness. It may be the case that the very highest forms of intelligence require, or are capable of, consciousness. This may mean that software will never achieve the highest levels of intelligence, and it probably guarantees that humans (and other conscious things) will always play a special role in the world, a role that no computer system will be able to compete with: we provide the consciousness to the system. There may be all sorts of other intelligent, non-conscious software applications and communities on the Web; in fact there already are, with varying degrees of intelligence. But individual humans, and groups of humans, will be the only consciousness on the Web.

The Collective Self

Although the software of the Semantic Web will not be conscious, we can say that the system as a whole contains, or is, conscious to the extent that human consciousnesses are part of it. And like most conscious entities, it may also start to be self-conscious.

If the Web ever becomes a global mind, as I am predicting, will it have a “self”? Will there be a part of the Web that functions as its central self-representation? Perhaps someone will build something like that someday, or perhaps it will evolve. Perhaps it will function by collecting reports from applications and people in real-time: a giant collective zeitgeist.

In the early days of the Web, portals such as Yahoo! provided this function; they were almost real-time maps of the Web and what was happening. Today making such a map is nearly impossible, but services such as Google Zeitgeist at least attempt to provide approximations of it. Perhaps through random sampling it can be done on a broader scale.

My guess is that the global mind will need a self-representation at some point. All forms of higher intelligence seem to have one. It’s necessary for understanding, learning, and planning. It may evolve at first as a bunch of competing self-representations within particular services or subsystems within the collective. Eventually they will converge, or at least narrow down to just a few major perspectives. There may also be millions of minor perspectives that can be drilled down into for particular viewpoints from these top-level “portals.”

The collective self will function much like the individual self: as a mirror of sorts. Its function is simply to reflect. As soon as it exists the entire system will make a shift to a greater form of intelligence, because for the first time it will be able to see itself, to measure itself, as a whole. It is at this phase transition, when the first truly global collective self-mirroring function evolves, that we can say the transition from a bunch of cooperating intelligent parts to a new intelligent whole in its own right has taken place.

I think that the collective self, even if it converges on a few major perspectives that group and summarize millions of minor perspectives, will be community-driven and highly decentralized. At least I hope so, because the self-concept is the most important part of any mind, and it should be designed in a way that protects it from being manipulated for nefarious ends.

Programming the Global Mind

On the other hand, there are times when a little bit of adjustment or guidance is warranted. Just as in the case of an individual mind, the collective self doesn’t merely reflect; it effectively guides the interpretation of the past and present, and planning for the future.

One way to change the direction of the collective mind is to change what is appearing in the mirror of the collective self. This is a form of programming on a vast scale. When this programming is dishonest or used for negative purposes it is called “propaganda,” but there are cases where it can be done for beneficial purposes as well. Examples of this today are public service advertising and educational public television programming. All forms of mass media today are in fact collective social programming. When you realize this, it is not surprising that our present culture is violent and messed up; just look at our mass media!

In terms of the global mind, ideally one would hope that it would be able to learn and improve over time, and that it would not have the collective equivalent of psycho-social disorders. To facilitate this, just like any form of higher intelligence, it may need to be taught, and even parented a bit. It also may need a form of therapy now and then. These functions could be provided by the people who participate in it. Again, I believe that humans serve a vital and irreplaceable role in this process.

How It All Might Unfold

Now how is this all going to unfold? I believe that there are a number of key evolutionary steps that the Semantic Web will go through as the Web evolves towards a true global mind:

1. Representing individual knowledge. The first step is to make individuals’ knowledge accessible to themselves. As individuals become inundated with increasing amounts of information, they will need better ways of managing it, keeping track of it, and re-using it. They will (or already do) need “personal knowledge management.”

2. Connecting individual knowledge. Next, once individual knowledge is represented, it becomes possible to start connecting it and sharing it across individuals. This stage could be called “interpersonal knowledge management.”

3. Representing group knowledge. Groups of individuals also need ways of collectively representing their knowledge, making sense of it, and growing it over time. Wikis and community portals are just the beginning. The Semantic Web will take these “group minds” to the next level; it will make the collective knowledge of groups far richer and more re-usable.

4. Connecting group knowledge. This step is analogous to connecting individual knowledge. Here, groups become able to connect their knowledge together to form larger collectives, and it becomes possible to more easily access and share knowledge between different groups in very different areas of interest.

5. Representing the knowledge of the entire Web. This stage, what might be called “the global mind,” is still in the distant future, but at this point we will begin to be able to view, search, and navigate the knowledge of the entire Web as a whole. The distinction here is that instead of a collection of interoperating but separate intelligent applications, individuals, and groups, the entire Web itself will begin to function as one cohesive intelligent system. The crucial step that enables this to happen is the formation of a collective self-representation, which enables the system to see itself as a whole for the first time.

How it May be Organized

I believe the global mind will be organized mainly in the form of bottom-up and lateral, distributed, emergent computation and community, but it will be facilitated by certain key top-down services that help to organize and make sense of it as a whole. I think this future Web will be highly distributed, but will have certain large services within it as well, much like the human brain itself, which is organized into functional sub-systems for processes like vision, hearing, language, planning, memory, and learning.

As the Web gets more complex there will come a day when nobody understands it anymore. After that point we will probably learn more about how the Web is organized by learning about the human mind and brain; they will be quite similar, in my opinion. Likewise we will probably learn a tremendous amount about the functioning of the human brain and mind by observing how the Web functions, grows, and evolves over time, because they really are quite similar, at least in an abstract sense.

The Internet and its software and content are like a brain, and the state of its software and content is like its mind. The people on the Internet are like its consciousness. Although these are just analogies, they are actually useful, at least in helping us to envision and understand this complex system. As the field of general systems theory has shown us in the past, systems at very different levels of scale tend to share the same basic characteristics and obey the same basic laws of behavior. Not only that, but evolution tends to converge on similar solutions for similar problems. So these analogies may be more than just rough approximations; they may in fact be quite accurate.

The future global brain will require tremendous computing and storage resources, far beyond even what Google provides today. Fortunately, as Moore’s Law advances, the cost of computing and storage will eventually be low enough to do this cost-effectively. However, even with much cheaper and more powerful computing resources it will still have to be a distributed system. I doubt that there will be any central node, because quite simply no central solution will be able to keep up with all the distributed change taking place. Highly distributed problems require distributed solutions, and that is probably what will eventually emerge on the future Web.

Someday perhaps it will be more like a peer-to-peer network, comprised of applications and people who function sort of like the neurons in the human brain. Perhaps they will be connected and organized by higher-level super-peers or super-nodes which bring things together, make sense of what is going on, and coordinate mass collective activities. But even these higher-level services will probably have to be highly distributed as well. It really will be difficult to draw boundaries between parts of this system; they will all be connected as an integral whole.

In fact it may look very much like a grid computing architecture, in which all the services are dynamically distributed across all the nodes such that at any one time any node might be working on a variety of tasks for different services. My guess is that because this is the simplest, most fault-tolerant, and most efficient way to do mass computation, it is probably what will evolve here on Earth.
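As a rough illustration of that grid idea, the toy sketch below (with hypothetical service and node names) assigns tasks from several different services across a pool of nodes in round-robin fashion, so that at any moment a single node is doing work for many services at once:

```python
# Toy grid scheduler: tasks from many services are spread dynamically
# across whatever nodes exist, rather than each service owning its nodes.
from itertools import cycle

NODES = ["node-a", "node-b", "node-c"]

def distribute(tasks, nodes=NODES):
    """Assign each (service, task) pair to the next node in round-robin order."""
    assignment = {}
    ring = cycle(nodes)
    for service, task in tasks:
        assignment.setdefault(next(ring), []).append((service, task))
    return assignment

tasks = [
    ("search", "index-update"),
    ("photos", "tag-photo"),
    ("search", "answer-query"),
    ("news", "rank-story"),
]
plan = distribute(tasks)
# node-a ends up working for both the "search" and "news" services.
```

A real grid would schedule by load and locality rather than simple rotation, but the property the paragraph describes, no fixed boundary between a node and a service, is visible even in this sketch.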

The Ecology of Mind

Where we are today in this evolutionary process is perhaps equivalent to the rise of early forms of hominids: perhaps Australopithecus or Cro-Magnon, or maybe the first Homo sapiens. Compared to early man, the global mind is like the rise of 21st-century mega-cities. A lot of evolution has to happen to get there. But it probably will happen, unless humanity self-destructs first, which I sincerely hope we somehow manage to avoid. And this brings me to a final point. This vision of the future global mind is highly technological; however, I don’t think we’ll ever accomplish it without a new focus on ecology.

For most people, ecology probably conjures up images of hippies and biologists, or maybe hippies who are biologists, or at least organic farmers, but in fact it is really the science of living systems and how they work. And any system that includes living things is a living system. This means that the Web is a living system, and the global mind will be a living system too. As a living system, the Web is an ecosystem and is also connected to other ecosystems. In short, ecology is absolutely essential to making sense of the Web, let alone helping to grow and evolve it.

In many ways the Semantic Web, and the collective minds and the global mind that it enables, can be seen as an ecosystem of people, applications, information, and knowledge. This ecosystem is very complex, much like natural ecosystems in the physical world. An ecosystem isn’t built; it’s grown, and evolved. And similarly the Semantic Web, and the coming global mind, will not really be built; they will be grown and evolved. The people and organizations that end up playing a leading role in this process will be the ones that understand and adapt to the ecology most effectively.

In my opinion ecology is going to be the most important science and discipline of the 21st century: it is the science of healthy systems. What nature teaches us about complex systems can be applied to every kind of system, and especially the systems we are evolving on the Web. In order to ever have a hope of evolving a global mind, and all the wonderful levels of species-level collective intelligence that it will enable, we have to not destroy the planet before we get there. Ecology is the science that can save us, not the Semantic Web (although perhaps by improving collective intelligence, it can help).

Ecology is essentially the science of community, whether biological, technological, or social. And community is a key part of the Semantic Web at every level: communities of software, communities of people, and communities of groups. In the end the global mind is the ultimate human community. It is the reward we get for finally learning how to live together in peace and in balance with our environment.

The Necessity of Sustainability

The point of this discussion of the relevance of ecology to the future of the Web, and my vision for the global mind, is that if the global mind ever emerges it will not be in a world that is anything like what we might imagine. It won’t be like the Borg in Star Trek; it won’t be like living inside of a machine. Humans won’t be relegated to the roles of slaves or drones. Robots won’t be doing all the work. The entire world won’t be coated with silicon. We won’t all live in a virtual reality. It won’t be one of these technological dystopias.

In fact, I think the global mind can only come to pass in a much greener, more organic, healthier, more balanced, and sustainable world. Because it will take a long time for the global mind to emerge, if humanity doesn’t figure out how to create that sort of a world, it will wipe itself out sooner or later, and certainly long before the global mind really happens. Not only that, but the global mind will be smart by definition, and hopefully this intelligence will extend to helping humanity manage its resources, civilizations, and relationships to the natural environment.

The Smart Environment

The global mind also needs a global body, so to speak. It’s not going to be an isolated homunculus floating in a vat of liquid that replaces the physical world! It will be a smart environment that ubiquitously integrates with our physical world. We won’t have to sit in front of computers or deliberately log on to the network to interact with the global mind. It will be everywhere.

The global mind will be physically integrated into furniture, houses, vehicles, devices, artworks, and even the natural environment. It will sense the state of the world and different ecosystems in real-time and alert humans and applications to emerging threats. It will also be able to allocate resources intelligently to compensate for natural disasters, storms, and environmental damage, much in the way that the air traffic control system allocates and manages airplane traffic. It won’t do it all on its own; humans and organizations will be a key part of the process.

Someday the global mind may even be physically integrated into our bodies and brains, even down to the level of our DNA. It may in fact learn how to cure diseases and improve the design of the human body, extending our lives, sensory capabilities, and cognitive abilities. We may be able to interact with it by thought alone. At that point it will become indistinguishable from a limited form of omniscience, and everyone may have access to it. Although it will only extend to wherever humanity has a presence in the universe, within that boundary it will know everything there is to know, and everyone will be able to know any of it they are interested in.

Enabling a Better World

By enabling greater forms of collective intelligence to emerge we really are helping to make a better world, a world that learns and hopefully understands itself well enough to find a way to survive. We’re building something that someday will be wonderful, far greater than any of us can imagine. We’re helping to make the species and the whole planet more intelligent. We’re building the tools for the future of human community. And that future community, if it ever arrives, will be better, more self-aware, and more sustainable than the one we live in today.

I should also mention that knowledge is power, and power can be used for good or evil. The Semantic Web makes knowledge more accessible. This puts more power in the hands of the many, not just the few. As long as we stick to this vision of making knowledge open and accessible, using open standards, in as distributed a fashion as we can devise, the potential power of the Semantic Web will be protected against being co-opted or controlled by the few at the expense of the many. This is where technologists really have to be socially responsible when making development decisions. It’s important that we build a more open world, not a less open world. It’s important that we build a world where knowledge, integration, and unification are balanced with respect for privacy, individuality, diversity, and freedom of opinion.

But I am not particularly worried that the Semantic Web and the future global mind will be the ultimate evil. I don’t think it is likely that we will end up with a system of total control dominated by evil masterminds with powerful Semantic Web computer systems to do their dirty work. Statistically speaking, criminal empires don’t last very long, because they are run by criminals who tend to be very short-sighted, and who also surround themselves with other criminals who eventually unseat them, or they self-destruct. It’s possible that the Semantic Web, like any other technology, may be used by the bad guys to spy on citizens, manipulate the world, and do evil things. But only in the short-term.

In the long-term, either our civilization will get tired of endless successions of criminal empires and realize that the only way to actually survive as a species is to invent a form of government that is immune to being taken over by evil people and organizations, or it will self-destruct. Either way, that is a hurdle we have to cross before the global mind that I envision can ever come about. Many civilizations came before ours, and it is likely that ours will not be the last one on this planet. It may in fact be the case that a different form of civilization is necessary for the global mind to emerge, and is the natural byproduct of the emergence of the global mind.

We know that the global mind cannot emerge anytime soon, and therefore, if it ever emerges, then by definition it must be in the context of a civilization that has learned to become sustainable. A long-term sustainable civilization is a non-evil civilization. And that is why I think it is a safe bet to be so optimistic about the long-term future of this trend.

All Seafood Gone by 2050 — Overfishing and Overpopulation

New research suggests that all the world’s ocean seafood stocks will be gone by 2050…

WASHINGTON (AP) – Clambakes, crabcakes, swordfish steaks and even
humble fish sticks could be little more than a fond memory in a few
decades. If current trends of overfishing and pollution continue, the
populations of just about all seafood face collapse by 2048, a team of
ecologists and economists warns in a report in Friday’s issue of the
journal Science.

"Whether we looked at tide pools or studies over the entire world’s
ocean, we saw the same picture emerging. In losing species we lose the
productivity and stability of entire ecosystems," said the lead author
Boris Worm of Dalhousie University in Halifax, Nova Scotia.

"I was shocked and disturbed by how consistent these trends are – beyond anything we suspected," Worm said.

While the study focused on the oceans, concerns have been expressed by
ecologists about threats to fish in the Great Lakes and other lakes,
rivers and freshwaters, too.

Worm and an international team spent four years analyzing 32 controlled
experiments, other studies from 48 marine protected areas and global
catch data from the U.N. Food and Agriculture Organization’s database
of all fish and invertebrates worldwide from 1950 to 2003.

The scientists also looked at a 1,000-year time series for 12 coastal
regions, drawing on data from archives, fishery records, sediment cores
and archaeological data.

"At this point 29 percent of fish and seafood species have collapsed –
that is, their catch has declined by 90 percent. It is a very clear
trend, and it is accelerating," Worm said. "If the long-term trend
continues, all fish and seafood species are projected to collapse
within my lifetime – by 2048."

A World Without Elephants

This is so sad. Elephants are increasingly being wiped out due to encroachment by nearby human populations, and also by inept human attempts to help them — and of course by poaching. As their species is increasingly backed into a dead-end corner, and as older elephants are separated from their herds, younger elephants are developing psychological disorders and are becoming violent. Meanwhile female elephants are not learning to rear their young properly, leading to developmental disorders and social problems that then ripple from generation to generation. All of this is adding up to a downward spiral for elephants worldwide — and in fact, as the article illustrates, elephants in completely separate communities around the world are starting to exhibit signs of "going crazy." I’ve always loved elephants and I wish there was something that could be done.

Humanity is so out of balance with the rest of the planet. I’m a realist though — I don’t believe that governments, or even the majority of people in the world, will ever just sacrifice their own gain for the good of the environment or any other species. Only if it is clearly tied to their survival or personal gain will most people and governments "feel the pain" enough to change their behavior.

The solution to the tragedy of the commons is to privatize, or to somehow connect what happens in the commons to everyone’s survival and benefit. Locally, elephant survival and well-being could be assured if the local government and people were paid to maintain them as a world resource. I think that there really should be a form of global taxation whereby every government pays into a fund that is then used to pay certain local communities around endangered resources and species to protect and steward them.

If there were a way to turn their environments and endangered species into resources that earned money for them (more money than they could earn by destroying them), then they would finally be motivated to take care of them. I doubt that any other kind of solution will ultimately work. Maybe I’m too cynical, or too much of a realist or a pragmatist. But I really do think this solution would work, not just for the elephants but also for the rainforests, the whales, the coral reefs, and the fisheries.

Dolphins are Smarter Than We Think

This is an interesting article about recent evidence of deep thinking by dolphins:

At the Institute for Marine Mammal Studies in Mississippi, Kelly the
dolphin has built up quite a reputation. All the dolphins at the
institute are trained to hold onto any litter that falls into their
pools until they see a trainer, when they can trade the litter for
fish. In this way, the dolphins help to keep their pools clean.

Kelly has taken this task one step further. When people drop paper into the
water she hides it under a rock at the bottom of the pool. The next
time a trainer passes, she goes down to the rock and tears off a piece
of paper to give to the trainer. After a fish reward, she goes back
down, tears off another piece of paper, gets another fish, and so on.
This behaviour is interesting because it shows that Kelly has a sense
of the future and delays gratification. She has realised that a big
piece of paper gets the same reward as a small piece and so delivers
only small pieces to keep the extra food coming. She has, in effect,
trained the humans.

Her cunning has not stopped there. One day, when a gull flew into her pool,
she grabbed it, waited for the trainers and then gave it to them. It
was a large bird and so the trainers gave her lots of fish. This seemed
to give Kelly a new idea. The next time she was fed, instead of eating
the last fish, she took it to the bottom of the pool and hid it under
the rock where she had been hiding the paper. When no trainers were
present, she brought the fish to the surface and used it to lure the
gulls, which she would catch to get even more fish. After mastering
this lucrative strategy, she taught her calf, who taught other calves,
and so gull-baiting has become a hot game among the dolphins.

You've Heard about Global Warming … Now Comes "Global Cooling"

Russian scientists are now predicting that a period of "global cooling" will begin in 2012. Well, at least the good news is that Al Gore can make a sequel. And I guess this means San Francisco will have even colder summers…er, winters…now? But all jokes aside, this is something to track. The term "global warming" is misleading; a better term would simply be "global climate change." A rise in average temperature does not mean that every part of the world will get warmer — some places will actually see a precipitous drop in temperature as the Gulf Stream and global air currents shift. While everyone else is getting their sun-tan lotion ready, perhaps those in the know should be buying down jackets?

Amazon Desertification May Start Next Year — Global Warming Could Increase by 50% — Note to Self: Find New Planet

Amazon Rainforest Faces Desertification

Amazon rainforest ‘could become a desert’

And that could speed up global warming with ‘incalculable consequences’, says alarming new research

The Independent (U.K.), July 23, 2006

The vast Amazon rainforest is on the
brink of being turned into desert, with catastrophic consequences for
the world’s climate, alarming research suggests. And the process, which
would be irreversible, could begin as early as next year.

Studies by the blue-chip Woods Hole
Research Centre, carried out in Amazonia, have concluded that the
forest cannot withstand more than two consecutive years of drought
without breaking down.

Scientists say that this would spread
drought into the northern hemisphere, including Britain, and could
massively accelerate global warming with incalculable consequences,
spinning out of control, a process that might end in the world becoming
uninhabitable.

The alarming news comes in the midst
of a heatwave gripping Britain and much of Europe and the United
States. Temperatures in the south of England reached a July record of
36.3C on Tuesday. And it comes hard on the heels of a warning by an
international group of experts, led by the Eastern Orthodox "pope"
Bartholomew, last week that the forest is rapidly approaching a
"tipping point" that would lead to its total destruction.

The research, carried out by the Massachusetts-based Woods Hole centre
in Santarem on the Amazon river, has taken even the scientists
conducting it by surprise. When Dr Dan Nepstead started the experiment
in 2002, by covering a chunk of rainforest the size of a football
pitch with plastic panels to see how it would cope without rain, he
surrounded it with sophisticated sensors, expecting to record only
minor changes.

The trees managed the first year of drought without difficulty. In the
second year, they sank their roots deeper to find moisture, but
survived. But in year three, they started dying. Beginning with the
tallest, the trees started to come crashing down, exposing the forest
floor to the drying sun.

By the end of the year the trees had released more than two-thirds of
the carbon dioxide they had stored during their lives, carbon that had
previously acted as a brake on global warming. Instead, they began
accelerating climate change.

As we report today on pages 28 and 29,
the Amazon now appears to be entering its second successive year of
drought, raising the possibility that it could start dying next year.
The immense forest contains 90 billion tons of carbon, enough in itself
to increase the rate of global warming by 50 per cent.

Dr Nepstead expects "mega-fires"
rapidly to sweep across the drying jungle. With the trees gone, the
soil will bake in the sun and the rainforest could become desert.

Dr Deborah Clark from the University
of Missouri, one of the world’s top forest ecologists, says the
research shows that "the lock has broken" on the Amazon ecosystem. She
adds that the Amazon is "headed in a terrible direction".

Fred Pearce is the author of ‘The Last Generation’ (Eden Project Books), published earlier this year
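
To put the article's 90-billion-ton figure in perspective, here is a quick back-of-envelope conversion. The molar-mass ratio is standard chemistry; the 7 GtC/yr fossil-fuel emissions rate is my rough assumption for the mid-2000s, not a number from the article.

```python
# 90 GtC expressed as CO2 mass and as years of fossil-fuel output.
amazon_carbon_gt = 90.0              # from the article
co2_per_carbon = 44.0 / 12.0         # molar mass ratio of CO2 to C
annual_fossil_carbon_gt = 7.0        # rough mid-2000s rate (my assumption)

co2_gt = amazon_carbon_gt * co2_per_carbon
years_equivalent = amazon_carbon_gt / annual_fossil_carbon_gt

print(f"{co2_gt:.0f} Gt of CO2, equal to roughly "
      f"{years_equivalent:.0f} years of global fossil-fuel emissions")
```

In other words, losing the forest's stored carbon would be like adding more than a decade of extra worldwide emissions all at once.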

Electronic Smog

Are you living in a cloud of electronic smog? New research suggests that the fields produced by electrical wiring and devices in the home and office should be considered a form of pollution. Recent studies are finding that a range of cancers and other diseases may be related to exposure to these electrical fields.

Study Discovers Whale Song Syntax

New research into the mathematical properties of whale songs reveals that they have a complex language:

The songs of the humpback whale are among the most complex in the
animal kingdom. Researchers have now mathematically confirmed that
whales have their own syntax that uses sound units to build phrases
that can be combined to form songs that last for hours.

Until now, only humans have demonstrated the ability to use such a
hierarchical structure of communication. The research, published online
in the March 2006 issue of the Journal of the Acoustical Society of
America, offers a new approach to studying animal communication,
although the authors do not claim that humpback whale songs meet the
linguistic rigor necessary for a true language.

"Humpback songs
are not like human language, but elements of language are seen in their
songs," said Ryuji Suzuki, a Howard Hughes Medical Institute (HHMI)
predoctoral fellow in neuroscience at Massachusetts Institute of
Technology and first author of the paper.
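
As a toy illustration of the information-theoretic angle (my own sketch, not the method used in the paper): if knowing the current sound unit sharply reduces uncertainty about the next one, the sequence is structured rather than random, which is a prerequisite for syntax.

```python
from collections import Counter
from math import log2

# Invented "song": four sound units arranged into repeating phrases.
song = list("ABABCDCDABABCDCD" * 8)

def entropy(counts):
    """Shannon entropy (bits) of a frequency table."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Uncertainty about a unit in isolation vs. given the previous unit.
h_unigram = entropy(Counter(song))
h_pair = entropy(Counter(zip(song, song[1:])))
h_conditional = h_pair - h_unigram  # approx. H(next | current)

print(f"H(unit) = {h_unigram:.2f} bits, "
      f"H(next | current) = {h_conditional:.2f} bits")
```

The conditional entropy comes out far below the unigram entropy, which is the signature of sequential structure; the actual study went further and showed the structure is hierarchical.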

This research is important because it brings us one step closer to someday being able to decode whale songs, and eventually even to communicate with whales. In the long term it may also contribute to the development of general techniques for interspecies language translation — techniques that could someday come in handy if and when we start to interact with extraterrestrial species.

Doomsday Vault to House World Seed Bank

The Norwegians are planning to create a deep underground vault near the North Pole to house a backup copy of seeds for all known varieties of crops. The goal is to ensure food supplies and enable humanity to regenerate in the event of nuclear war, global warming or other catastrophes. It’s a good idea. This is similar to my own idea for what I call the Genesis Project, which would provide a backup of critical human knowledge as well, and a system for helping humanity relearn it, in case we get knocked back to the Stone Age for some reason.

Big Thinkers' Most Dangerous Ideas

The Edge has published mini-essays by 119 "big thinkers" on their "most dangerous ideas" — fun reading.

The history of science is replete with discoveries
that were considered socially, morally, or emotionally
dangerous in their time; the Copernican and
Darwinian revolutions are the most obvious.
What is your dangerous idea? An idea you think
about (not necessarily one you originated)
that is dangerous not because it is assumed to be false, but because it might be true?


Hydrino Power — A New Source of Energy?

A new source of inexpensive, renewable energy that depends on a modified form of hydrogen has quantum theorists up in arms: they say it violates the laws of quantum mechanics. The inventors, on the other hand, claim they have extensive proof that it works.

It seems too good to be
true: a new source of near-limitless power that costs virtually
nothing, uses tiny amounts of water as its fuel and produces next to no
waste. If that does not sound radical enough, how about this: the
principle behind the source turns modern physics on its head.

Randell Mills, a Harvard University medic who also studied electrical
engineering at Massachusetts Institute of Technology, claims to have
built a prototype power source that generates up to 1,000 times more
heat than conventional fuel. Independent scientists claim to have
verified the experiments and Dr Mills says that his company, Blacklight
Power, has tens of millions of dollars in investment lined up to bring
the idea to market. And he claims to be just months away from unveiling
his creation.

NASA Makes Plans to Deflect Possible Asteroid Hit in 2036

This just in:

NASA has outlined what it could do, and in what time frame, in case a
quarter-mile-wide asteroid named Apophis is on a course to slam into
Earth in the year 2036. The timetable was released by the B612
Foundation, a group that is pressing NASA and other government agencies
to do more to head off threats from near-Earth objects.

The plan runs like this: Eight years from now,
if there’s still a chance of a collision in 2036, NASA would start
drawing up plans to put a probe on the space rock or in orbit around it
in 2019. Measurements sent back from the probe would characterize
Apophis’ course to an accuracy of mere yards (meters) by the year 2020.

If those readings still could not rule out a strike in 2036, NASA would
try to deflect the asteroid into a non-threatening course in the
2024-2028 time frame by firing an impactor at it — using this year’s Deep Impact comet-blasting probe
as a model. Experts would start planning for the "Son of Deep Impact"
mission even before they knew whether or not it was needed.
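
Some rough orbital arithmetic shows why a 2024-2028 deflection could comfortably avert a 2036 impact: a tiny velocity change, applied years early, accumulates into a miss distance larger than the Earth. The numbers below are illustrative assumptions, not NASA's.

```python
# A 1 cm/s nudge (Deep Impact-style impactor), ~10 years before the
# predicted 2036 encounter. Both numbers are illustrative assumptions.
EARTH_RADIUS_KM = 6371
delta_v_km_s = 1e-5                       # 1 cm/s, in km/s
lead_time_s = 10 * 365.25 * 86400         # ten years, in seconds

# For an along-track push, displacement accumulates at roughly
# 3 * delta_v * t: the push changes the orbital period, and the
# timing error compounds on every orbit.
miss_distance_km = 3 * delta_v_km_s * lead_time_s

print(f"miss distance ~{miss_distance_km:.0f} km, "
      f"about {miss_distance_km / EARTH_RADIUS_KM:.1f} Earth radii")
```

This is why the plan front-loads the tracking: the earlier the nudge, the smaller (and cheaper) the impactor needs to be.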

How to Save the Amazon Rainforest

I read an article today about how Brazil is gradually losing the fight to save the Amazon. The world’s rainforests are a global resource — not only are they
directly important to the air we all breathe, they also harbor a huge,
still untapped, reservoir of species diversity which could be of
profound importance to science and future medical and pharma research.
The problem is that currently there is no direct benefit to Brazil, or
other rainforest nations, for the global use of their rainforest resources.

The key, then, is to find a way to turn rainforests into economically valuable national resources for the countries that maintain them. In other words, rainforests should be to Brazil what oil is to Saudi Arabia (or actually better, because rainforests, unlike oil, are renewable). Rainforest countries should make more money by keeping their rainforests alive and healthy than by chopping them down.


New Ice Age Coming Much Sooner than Expected?

Significant new research findings indicate that a new ice age may be starting sooner than anyone expected…

CLIMATE change researchers have detected the
first signs of a slowdown in the Gulf Stream — the mighty ocean current
that keeps Britain and Europe from freezing.

They have found that one of the “engines” driving the Gulf Stream —
the sinking of supercooled water in the Greenland Sea — has weakened to
less than a quarter of its former strength.

The weakening, apparently caused by global warming, could herald big
changes in the current over the next few years or decades.
Paradoxically, it could lead to Britain and northwestern Europe
undergoing a sharp drop in temperatures.


New Data Indicates Earth 3 Million Years Overdue for Mass Extinction

A new study claims that life on earth emerges and is wiped out in 62-million-year cycles. The dinosaurs vanished 65 million years ago, which implies we are 3 million years overdue for a mass extinction. Or maybe we’re 3 million years into one? Data indicate that species are presently going extinct at an accelerating rate not seen since the last mass extinction.

Interesting New Magnetic Motor Announced: Big Claims Made

If you are interested in alternative energy, here’s something new to look at — a new magnetic motor based on ideas that originated with Nikola Tesla. The makers claim it is an "economical solution for the world’s power and energy needs." Well, we’ve heard that before, but it’s good to keep an open mind — maybe someone will get it right eventually. For details on how it works, see their site.