Keywords: AI, genetic algorithms, cognitive science

Title: What Is Thought?

Author: Eric B. Baum

Publisher: MIT Press

Media: Book

Reviewer: Pan

What is thought? How did it evolve? Can we model it scientifically? These are some of the main questions that Eric Baum tackles in this large and ambitious book. Combining insights from computer science, information theory, cognitive science and evolutionary theory, Baum proposes a set of theories for the processes involved in thought, without necessarily being able to point to the underlying biological mechanisms at work. In 1944, before the structure of DNA was known, the great physicist Erwin Schrödinger wrote a book called 'What Is Life?', in which he examined what processes are required for life to exist and reproduce. Remarkably, Schrödinger predicted many of the features of DNA from his knowledge of physics and his understanding of the problems that genetic reproduction has to solve. Baum acknowledges that he is attempting to do the same for the question of what thought is.

Baum's starting point is that thought and computation are synonymous. Non-computationalist explanations are rejected, including all non-material explanations. Rejecting 'supernatural' theories makes a lot of sense - after all, thinking is what brains do, and brains are physical objects when all is said and done - but the complete identification of thought with computation is not explored in sufficient detail, and the definitions of key terms are vague. There is a difference between saying that thought can be modelled as computation and saying that thought is computation.

Baum builds his theory on a number of very simple building blocks. Firstly, evolutionary principles are paramount: any explanation that is not based on evolutionary theory fails. A theory of any value must explain not just how things are now but also how they got this way.

Secondly, the semantics of the world must be encoded in DNA. The world is enormously complex, so how is it that we can carry a model of it in our heads? How is it that we can interpret the rich structure around us unless we are able to extract the signal from the noise? Baum uses information compression as a key principle: we can make sense of the world because we can extract its key features. These features are 'built in' somehow, and the only place they can be encoded is in DNA.
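
Baum's compression idea can be illustrated with a toy experiment (my own sketch, not an example from the book): data with regular structure compresses well, while structureless noise does not, so compressibility gives one crude measure of how much extractable regularity a signal contains.

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size: lower means more structure."""
    return len(zlib.compress(data)) / len(data)

# Highly regular data: a short pattern repeated many times.
structured = b"the cat sat on the mat " * 40

# Structureless data: the same number of uniformly random bytes.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(len(structured)))

print(compressed_ratio(structured))  # well under 1.0
print(compressed_ratio(noise))       # close to (or above) 1.0
```

The repeated pattern shrinks to a small fraction of its size, while the random bytes barely compress at all - a rough analogue of Baum's point that a compact description is only possible where there is real structure to find.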

Based on his own work in the fields of evolutionary algorithms and neural networks, Baum proceeds to show how these principles can be used to build a wide-ranging theory of how we think and how thinking evolved. Along the way he looks at some of the major issues that any convincing theory has to address: how organisms, from bacteria to Homo sapiens, extract information about the structure of the world; how we are able to generalise experience and abstract from the specific to the general; and how we deal with the explosion of possibilities that faces us every day - in other words, how we whittle down choices to something we can handle.

Computer science, and artificial intelligence (AI) in particular, is used to explore these issues in more detail. The failures of AI to model intelligence are discussed at some length, both to show why things have failed and to point out where there have been limited successes. The book makes good use of 'standard' problems, such as the Travelling Salesman Problem, to illustrate some of the complex issues at work.
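
To see why the Travelling Salesman Problem is such a useful illustration, consider this minimal sketch (mine, not Baum's, with made-up coordinates): brute force must try every ordering of the cities, which is trivial for five cities but astronomically large for fifty - exactly the kind of explosion of possibilities the book discusses.

```python
import math
from itertools import permutations

# Five cities as (x, y) points; coordinates chosen only for illustration.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute force: fix city 0 as the start and try every ordering of the rest.
best = min(((0,) + rest for rest in permutations(range(1, len(cities)))),
           key=tour_length)

print(best, round(tour_length(best), 2))
# The number of orderings is (n-1)!: only 4! = 24 for 5 cities,
# but 49! (roughly 6e62) for 50 - far beyond brute force.
```

The same structure - an exactly stated problem whose search space dwarfs any exhaustive method - is what forces both evolution and AI towards heuristics rather than guaranteed solutions.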

While the book is written largely for the non-expert, there is a fair degree of mathematical content, though this can be skipped without really breaking the flow of the argument. It certainly covers many of the more interesting areas of computer science, with good coverage of neural networks, genetic algorithms and reinforcement learning in particular.
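
For readers unfamiliar with the techniques covered, a genetic algorithm in its simplest form looks something like the following (a generic sketch of the standard technique on the toy 'OneMax' problem of maximising the number of 1s in a bit-string - not code from the book): a population of candidate solutions is repeatedly selected, recombined and mutated, and fitness rises generation by generation.

```python
import random

random.seed(1)

def fitness(bits):
    return sum(bits)  # OneMax: fitness is simply the count of 1s

def mutate(bits, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=30, length=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # One-point crossover of two random parents, then mutation.
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            children.append(mutate(a[:cut] + b[cut:]))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 40
```

Baum's interest is of course in far richer versions of this loop, but the skeleton - variation plus selection accumulating information over generations - is the same.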

However, the book is over-long and repetitive in places. The core argument could easily have been presented more concisely, which would have made for a slimmer and more readable book.

While there is much that is persuasive in the arguments Baum presents, there are also some quite problematic areas. First is a tendency to imbue DNA with both intention and direction, as though it were an intelligent entity itself. Second, the author makes much of the claim that semantically meaningful structures are related to compactness of code (with DNA itself as the ultimate in concision), but it is not clear that this is necessarily the case. Finally, the transition between a connectionist network (i.e. a network of neurons in the brain that can be modelled using a neural network) and a symbol-processing system (i.e. the ability to label objects, ideas, emotions, etc.) remains unclear. Again Baum draws on his knowledge of AI, pointing out that for some problems one or more groups of neurons in a network have been shown to correspond to particular states or concepts. However, as Daniel Dennett and others have pointed out, in these examples the remaining neurons do not correspond to concepts - they contain noise or random values - and yet when these noise nodes are removed the network stops working.

However, overall the book makes many interesting points and is worth reading as an account of how computationalists model our thought processes. While ultimately it failed to convince this reader, it is, despite some faults, a book that provokes the very thing that it studies.

