Google’s Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry’s idea is that intelligence is a function of massive computation, not of “fancy whiteboard algorithms.” In other words, in his conception the brain doesn’t do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively “dumb” but from the combined power of all of them working together “intelligent” behaviors emerge.
Larry’s view is, in my opinion, an oversimplification that will not lead to actual AI. It’s certainly correct that some activities that we call “intelligent” can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible — they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today — which is still a long way short of true AI!
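The gap between pattern learning and reasoning has a classic, concrete illustration (my own toy example, not from the talk): a single-layer perceptron, the simplest neural net, readily learns a linearly separable pattern like OR but can never learn XOR, no matter how long it trains. All names and parameters below are illustrative.

```python
# Toy sketch: a single-layer perceptron learns linearly separable
# patterns (OR) but provably cannot represent XOR -- a small hint of
# why raw pattern learners fall short of general reasoning.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train weights w and bias b with the classic perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    """Fraction of samples the trained unit classifies correctly."""
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return hits / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
or_data = [(x, x[0] | x[1]) for x in inputs]    # linearly separable
xor_data = [(x, x[0] ^ x[1]) for x in inputs]   # not linearly separable

print(accuracy(or_data, *train_perceptron(or_data)))    # 1.0 -- converges
print(accuracy(xor_data, *train_perceptron(xor_data)))  # stuck below 1.0
```

Multi-layer nets do solve XOR, of course, but the point stands: each added capability requires the right architecture, not just more of the same simple units.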
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don’t think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software — the higher level cognitive algorithms and heuristics that the brain “runs” — also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry’s view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It’s a highly sophisticated system composed of simple parts — and actually, the jury is still out on exactly how simple the parts really are. Much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain — with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized and it is doubtful that any centralized service could truly match its capabilities. We’re not talking about a few hundred thousand linux boxes — we’re talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
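A back-of-the-envelope calculation makes the scale gap concrete. The figures below are common textbook estimates (roughly 10^11 neurons and 10^14 synapses), not numbers from Larry’s talk, and the one-byte-per-synapse assumption is deliberately minimal:

```python
# Rough scale check: how much storage would a naive model of the
# brain's connections need, before any computation even happens?

NEURONS = int(1e11)    # ~hundred billion neurons (common estimate)
SYNAPSES = int(1e14)   # ~hundred trillion connections (common estimate)
BYTES_PER_SYNAPSE = 1  # deliberately minimal assumption

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e12:.0f} TB of synapse state")  # 100 TB

# How many commodity boxes with 4 GB of RAM just to HOLD that state?
machines = total_bytes // (4 * 2**30)
print(machines)  # 23283
```

And that is only storage for a single static snapshot — it says nothing about updating trillions of connections in real time, which is where the comparison really breaks down.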
One reader who commented on Larry’s talk made an excellent point about what this missing piece may be: “Intelligence is in the connections, not the bits.” The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that, for example, Roger Penrose and others have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations, and nonlocality. In other words, the connections in the brain may exist on a quantum level, not just a neurological level.
Whether quantum computation is the key or not remains to be determined. But regardless, Larry’s approach is essentially equivalent to aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions, for example, that if all knowledge exists on the Web you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale. All answers exist on the Web, and the task is just to match questions to the right answers. But wait: is that all that intelligence does? Is Larry’s view too much of an oversimplification? Intelligence is not just about learning and recall; it’s also about reasoning and creativity. Reasoning is not just search, and it’s unclear how Larry’s approach would address that.
In my own opinion, for global-scale AI to really emerge the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections — rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it’s only the first step. More will need to happen on top of it for this vision to really materialize.
My view is more of an “agent metaphor” for intelligence — perhaps it is similar to Marvin Minsky’s Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds, for example, we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes — they are more like agents: they are actually each somewhat “intelligent” on their own, they seem to be somewhat autonomous, and they interact in intelligent, almost social ways.
Ideas seem to be actors, not just passive data points — they are competing for resources and survival in a complex ecology that exists both within our individual minds and between them in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or “I” that controls the process (that, in fact, is just another agent in the community, perhaps one with a kind of reporting and selection role).
I’m not sure the complex social dynamics of these communities of intelligence can really be modeled by a search engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels, from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections — computation flows back down from the top, it doesn’t only flow up from the bottom. Any computing system that doesn’t include this kind of feedback in its basic architecture will not be able to implement true AI.
In short, Google is not the right architecture to truly build a global brain on. But it could be a useful tool for search and question-answering in the future, if they can somehow keep up with the growth and complexity of the Web.