Google’s Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry’s idea is that intelligence is a function of massive computation, not of “fancy whiteboard algorithms.” In other words, in his conception the brain doesn’t do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively “dumb” but from the combined power of all of them working together “intelligent” behaviors emerge.
Larry’s view is, in my opinion, an oversimplification that will not lead to actual AI. It’s certainly correct that some activities we call “intelligent” can be reduced to massively parallel, simple array operations. Neural networks have shown that this is possible — they excel at low-level tasks like pattern learning and pattern recognition, for example. But neural networks have not proved capable of higher-level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today — which is still a long way short of true AI!
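As a toy illustration of the kind of low-level pattern learning that neural nets handle well, here is a minimal sketch (the code and numbers are my own, purely illustrative): a single perceptron learning the logical AND pattern from examples. This is exactly the sort of task credited to them above, and nothing like logic, planning, or reasoning.

```python
# Toy perceptron (illustrative only): learns a simple linearly separable
# pattern -- the kind of low-level task neural nets are good at.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: a trivially learnable "pattern"
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

A few dozen weight updates suffice here precisely because the task is a low-level, linearly separable pattern; nothing in this mechanism scales up to reasoning or planning.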
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity. I don’t think this richness is reducible to massively parallel supercomputing, or even to a vast neural net architecture. The software — the higher-level cognitive algorithms and heuristics that the brain “runs” — also matters. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry’s view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It’s a highly sophisticated system composed of simple parts — and actually, the jury is still out on exactly how simple the parts really are. Much of the computation in the brain may be sub-neuronal, meaning the brain may actually be a much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain — with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We’re not talking about a few hundred thousand Linux boxes — we’re talking about hundreds of billions of parallel, distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
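To make the scale gap concrete, here is a back-of-envelope sketch using commonly cited rough figures; the exact numbers are assumptions, good only to an order of magnitude.

```python
# Back-of-envelope scale comparison (rough, commonly cited figures;
# treat every number here as an order-of-magnitude assumption).
neurons = 86e9      # ~86 billion neurons in a human brain
synapses = 100e12   # ~100 trillion synaptic connections (a low estimate)
servers = 500_000   # a hypothetical "few hundred thousand" server boxes

# Even ignoring sub-neuronal computation, each box would have to
# simulate a large slice of the brain on its own.
print(f"neurons per server:  {neurons / servers:,.0f}")
print(f"synapses per server: {synapses / servers:,.0f}")
```

Under these assumptions each server would need to model on the order of 170,000 neurons and 200 million connections, before accounting for any computation below the level of the neuron.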
One reader who commented on Larry’s talk made an excellent point about what this missing piece may be: “Intelligence is in the connections, not the bits.” The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that Roger Penrose and others, for example, have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations, and nonlocality. In other words, the connections in the brain may exist on a quantum level, not just a neurological level.
Whether quantum computation is the key or not remains to be determined. But regardless, Larry’s approach is essentially equivalent to aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions, for example, that if all knowledge exists on the Web, you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale: all answers exist on the Web, and the task is just to match questions to the right answers. But wait: is that all that intelligence does? Is Larry’s view too much of an oversimplification? Intelligence is not just about learning and recall; it’s also about reasoning and creativity. Reasoning is not just search. It’s unclear how Larry’s approach would address that.
In my own opinion, for global-scale AI to really emerge, the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections — rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app, and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it’s only the first step. More will need to happen on top of it for this vision to really materialize.
My view is more of an “agent metaphor” for intelligence — perhaps similar to Marvin Minsky’s Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds, for example, we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes — they are more like agents: each is actually somewhat “intelligent” on its own, they seem to be somewhat autonomous, and they interact in intelligent, almost social ways.
Ideas seem to be actors, not just passive data points — they compete for resources and survival in a complex ecology that exists both within our individual minds and between them, in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or “I” that controls the process — in fact, that is just another agent in the community, perhaps one with a kind of reporting and selection role.
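The idea ecology described above can be caricatured in code. This is a toy sketch built entirely on my own assumptions (the agent names, bidding rule, and reinforcement rates are invented for illustration), not a serious cognitive model: “idea” agents bid for a scarce attention resource, winners are reinforced, and losers decay.

```python
import random

# Toy "society of mind" sketch (illustrative assumptions throughout):
# competing idea-agents bid for a limited attention resource; the winner
# is reinforced, the rest decay -- a crude internal natural selection.
random.seed(42)  # fixed seed so the run is repeatable

ideas = {"idea_a": 1.0, "idea_b": 1.0, "idea_c": 1.0}

for step in range(50):
    # Each idea's bid is its current strength plus noise (its "voice").
    bids = {name: s * random.uniform(0.5, 1.5) for name, s in ideas.items()}
    winner = max(bids, key=bids.get)
    for name in ideas:
        if name == winner:
            ideas[name] *= 1.05   # reinforced by winning attention
        else:
            ideas[name] *= 0.97   # decays when ignored

print({name: round(s, 2) for name, s in ideas.items()})
```

Even this crude selection dynamic shows the qualitative behavior: because winning compounds, the strengths diverge over time and an internal “ecology” of stronger and weaker ideas emerges.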
I’m not sure the complex social dynamics of these communities of intelligence can really be modeled by a search-engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels, from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections — computation flows back down from the top; it doesn’t only flow up from the bottom. Any computing system that doesn’t include this kind of feedback in its basic architecture will not be able to implement true AI.
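The kind of top-down feedback described here can be sketched in a few lines. This is purely an illustrative assumption of mine, not a model of real neurons: a high-level state is computed bottom-up from low-level weights, and then reaches back down and modifies those same weights.

```python
# Toy sketch of bidirectional feedback (illustrative, not neuroscience):
# a high-level "thought" computed from low-level connections feeds back
# and rewires the very connections that produced it.
weights = [0.2, 0.8, 0.5]   # low-level "synaptic" connections
inputs = [1.0, 1.0, 1.0]

for step in range(3):
    # Bottom-up: low-level activity produces a high-level state.
    thought = sum(w * x for w, x in zip(weights, inputs))
    # Top-down: the high-level state alters the low-level substrate itself.
    weights = [w + 0.1 * thought * x for w, x in zip(weights, inputs)]
    print(step, round(thought, 3), [round(w, 3) for w in weights])
```

The point of the sketch is only that computation flows in both directions: by the final step the “wetware” is no longer the system that produced the first thought.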
In short, Google is not the right architecture on which to truly build a global brain. But it could be a useful tool for search and question-answering in the future, if it can somehow keep up with the growth and complexity of the Web.