Fluid Concepts and Creative Analogies

The book Fluid Concepts and Creative Analogies describes a cognitive model that is very compelling and fun to study.

When you run an implementation of this cognitive model, it seems to actively pursue an agenda in a remarkably human-like way. Some aspects of its behaviour are reminiscent of ant colony simulators, with codelets as the ants. Each codelet is attracted to a "salient" ("pheromone-scented") part of the workspace and does one little piece of work - proposing or building a structure, changing the salience of another part of the workspace, changing the activation and slippability of one of the concepts in the slipnet, and so on. As the codelets run, the system seems to be pursuing an agenda, one that is different every time it runs and that is emergent rather than explicitly programmed in.
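
To make the ant-colony picture concrete, here is a minimal Python sketch of the coderack idea as I understand it - not the actual FARG code, and the names Codelet, Coderack, post and step are my own illustration: codelets carry an urgency, one is chosen stochastically in proportion to that urgency, and it does a single small piece of work that may post further codelets.

    # A minimal sketch (not the FARG code) of the coderack idea: codelets
    # carry an "urgency" and are chosen stochastically, so the overall agenda
    # emerges from many small, locally motivated actions.
    import random

    class Codelet:
        def __init__(self, name, urgency, action):
            self.name = name
            self.urgency = urgency   # higher urgency -> more likely to be picked soon
            self.action = action     # callable(workspace, coderack): one small piece of work

    class Coderack:
        def __init__(self):
            self.codelets = []

        def post(self, codelet):
            self.codelets.append(codelet)

        def step(self, workspace):
            # pick one codelet, weighted by urgency, run it, and discard it
            if not self.codelets:
                return False
            weights = [c.urgency for c in self.codelets]
            chosen = random.choices(self.codelets, weights=weights, k=1)[0]
            self.codelets.remove(chosen)
            chosen.action(workspace, self)   # may inspect or modify the workspace and post followers
            return True

Because the selection is stochastic and the urgencies shift as structures are built and salience changes, no two runs follow the same path, which is where the emergent-agenda feel comes from.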

Trying to apply this cognitive model to other domains is a mind-bending exercise. By way of contrast, to apply a genetic algorithm to a problem you just need to define a few recombination operators for generating new proposed solutions from old ones, and a fitness function for scoring each one. When such a program runs, its style is simply a parallel prioritized search. The fluid concepts cognitive model, on the other hand, has a much more compelling, beautiful style as it works toward a solution. Whether it is more or less efficient is not as important to me as its behaviour.
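
For contrast, here is roughly what the genetic algorithm recipe looks like - a generic sketch of the technique, not any particular library. The only domain-specific parts are the fitness function and the variation operators; the toy problem here (maximise the number of 1s in a bit string) is just a stand-in.

    # A bare-bones genetic algorithm: the domain knowledge lives entirely in
    # fitness(), crossover() and mutate(); the loop itself is a generic
    # generational search.
    import random

    def fitness(bits):
        # score a candidate solution (here: count the 1s)
        return sum(bits)

    def crossover(a, b):
        # recombination operator: splice two parents at a random cut point
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        # occasionally flip a bit
        return [1 - b if random.random() < rate else b for b in bits]

    def evolve(pop_size=50, length=40, generations=100):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    print(fitness(evolve()))   # typically close to 40, i.e. nearly all 1s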

When applying this cognitive model to a new domain, you need to examine the domain in more detail than with other AI techniques. You need to choose the concepts involved, decide how they are connected to each other, and work out how they manifest themselves: which "top-down" codelets they can generate, and how "bottom-up" codelets should be influenced by their activations. When you see this program running, you see a story unfold about a thought process rather than just a dry search. The system notices features, which may activate concepts, which in turn may direct the system to search for features related to those concepts. The activation of a concept may also strengthen the bond between two other concepts, causing a slippage of activation. For example, thinking about the letter "a" and the concept of successorship may lead you to start thinking about the letter "b". It's this richness and depth of domain knowledge that gives systems based on this cognitive model such a compelling, beautiful character.
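
To illustrate the kind of machinery involved, here is a small slipnet-like fragment in Python. It is my own simplification, not Copycat's actual slipnet: concepts hold activation, leak some of it along their links, and decay each step, so noticing "a" can end up priming "b" via the successorship concept.

    # An illustrative (not Copycat's actual) slipnet fragment: concepts hold an
    # activation level, spread some of it to linked concepts, and decay over time.
    # A fully active concept could then post "top-down" codelets that look for
    # instances of itself in the workspace.
    class Concept:
        def __init__(self, name, decay=0.1):
            self.name = name
            self.activation = 0.0
            self.links = {}          # neighbouring Concept -> link strength
            self.decay = decay

        def link_to(self, other, strength):
            self.links[other] = strength
            other.links[self] = strength

    def spread_activation(concepts):
        # each concept leaks activation to its neighbours, then decays a little
        incoming = {c: 0.0 for c in concepts}
        for c in concepts:
            for neighbour, strength in c.links.items():
                incoming[neighbour] += c.activation * strength
        for c in concepts:
            c.activation = min(1.0, c.activation * (1 - c.decay) + incoming[c])

    # noticing the letter "a" activates "a", which primes "successor",
    # which after further rounds of spreading primes "b"
    a, successor, b = Concept("a"), Concept("successor"), Concept("b")
    a.link_to(successor, 0.4)
    successor.link_to(b, 0.4)
    a.activation = 1.0
    for _ in range(3):
        spread_activation([a, successor, b])
    print(successor.activation, b.activation)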

As well as Fluid Concepts and Creative Analogies, I've been reading the following two books written by two of its co-authors:

Analogy-Making as Perception - A Computer Model by Melanie Mitchell

The Subtlety of Sameness by Robert French

Also, it's helpful when studying this to read An evaluation of the Fluid Concept Architecture, a good examination of the cognitive model.

Links:

The book

A review and description of the book by Daniel Dennett

Fluid Analogies Research Group

A Java implementation of Copycat by Scott Bolland.
Update: Scott Bolland appears to have disappeared from the internet. A copy of the Copycat program, along with the tutorial pages, is available on the Internet Archive.