Post by W.R.Elsberry, 1999/09/14

I do have some comments and especially questions for Dr. Dembski concerning his recent post. I have been working upon a draft of an article for eventual publication in the Reports of the National Center for Science Education, and the topic is criticism of evolutionary computation.

My draft article is online; in it, I list and comment upon criticisms from three groups: anti-evolutionary creationists, computer scientists, and biologists.

The objection currently numbered as "5" under creationist criticisms is taken from the discussion period for William Dembski's talk at the "Naturalism, Theism, and the Scientific Enterprise" conference held in 1997. I thought that I had understood Dembski's stance on evolutionary computation following that discussion, but the recent post indicates that perhaps I overlooked something. When I brought up a test case to apply in that discussion, Dembski's objection seemed to me to boil down to this:

Natural selection simulated on computer produces solutions which are informed by the intelligence that went into the operating system, system software, and evolutionary computation software.

And that is the objection that I answered in my draft article. In other words, CSI arises, but can be traced to infusion by an intelligent agent. In my draft article, I utilize the same test case as I proffered to Dembski in 1997: Where does the "infusion" of information occur in the operation of a genetic algorithm that solves a 100-city tour of the "Traveling Salesman Problem"? At the time, and in my draft report, I considered and eliminated each potential source of "infusion".
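
For readers unfamiliar with the test case, a genetic algorithm for the TSP can be sketched in a few lines. What follows is my minimal illustration of the general technique (20 random cities, swap mutation, tournament selection, elitism, no crossover), not the actual program at issue:

```python
import math
import random

# Toy genetic algorithm for the Traveling Salesman Problem. A sketch of
# the general technique only (20 random cities, swap mutation, two-way
# tournament selection, elitism, no crossover), not any specific program
# discussed in this post.
random.seed(0)
N, POP, GENS = 20, 60, 300
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(t):
    # total length of the closed tour visiting cities in order t
    return sum(math.dist(cities[t[i]], cities[t[(i + 1) % N]])
               for i in range(N))

def mutate(t):
    # swap two cities; the result remains a valid permutation
    a, b = random.sample(range(N), 2)
    t = t[:]
    t[a], t[b] = t[b], t[a]
    return t

pop = [random.sample(range(N), N) for _ in range(POP)]
start_best = min(map(tour_length, pop))
for _ in range(GENS):
    pop.sort(key=tour_length)
    # elitism: carry the best tour forward unchanged, then fill the rest
    # with mutated winners of two-way tournaments
    pop = [pop[0]] + [mutate(min(random.sample(pop, 2), key=tour_length))
                      for _ in range(POP - 1)]
end_best = min(map(tour_length, pop))
print(start_best, end_best)   # selection steadily shortens the best tour
```

Nothing in the loop above consults a pre-specified answer; selection acts only on measured tour lengths, which is the crux of the question about where any "infusion" of information occurs.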

Now, it appears that Dembski has ignored the test case I offered in 1997, and instead takes as an archetype the WEASEL program described by Richard Dawkins in "The Blind Watchmaker". I especially wish to ask some questions about the following paragraph:

WAD>It follows that Dawkins's evolutionary algorithm, by vastly
WAD>increasing the probability of getting the target sequence,
WAD>vastly decreases the complexity inherent in that sequence. As
WAD>the sole possibility that Dawkins's evolutionary algorithm can
WAD>attain, the target sequence in fact has minimal complexity
WAD>(i.e., the probability is 1 and the complexity, as measured by
WAD>the usual information measure, is 0). In general, then,
WAD>evolutionary algorithms generate not true complexity but only
WAD>the appearance of complexity. And since they cannot generate
WAD>complexity, they cannot generate specified complexity either.

First sentence: "It follows that Dawkins's evolutionary algorithm, by vastly increasing the probability of getting the target sequence, vastly decreases the complexity inherent in that sequence."

Dembski's development of the concept of "complex specified information" (or equivalent phrases) is at variance with the usage given here, or at least it seems so to me. Given an event, in this case a solution to a problem, Dembski's methodology in "The Design Inference" does not make a distinction based upon *how* the event was reached, but rather simply determines whether the event has the attribute of CSI. Otherwise, every word said about an "Explanatory Filter" is simply question-begging, since the causal story would be taken as an input rather than, as TDI asserts, the conclusion to be determined. So I would ask Dembski on what grounds he changes his definition of complexity for the case of solutions found by evolutionary computation. That is, given identical solutions produced by a human and by an evolutionary algorithm, is there some principled means of saying that the complexity measure differs between the two? Or, most generically, what principled means do we have for distinguishing the complexity of identical solutions when their sources are unspecified, such that one is to be considered truly complex and the other falsely complex?

Second sentence: "As the sole possibility that Dawkins's evolutionary algorithm can attain, the target sequence in fact has minimal complexity (i.e., the probability is 1 and the complexity, as measured by the usual information measure, is 0)."

Dawkins discusses the use of a "distant ideal target" and notes that this is not how evolution actually works. It is also not how evolutionary computation works outside of pedagogic exercises like WEASEL. I brought a real challenge to Dembski at the NTSE conference: to explain the information in a 100-city tour of the TSP found via genetic algorithm. I notice that Dembski does not mention it at all.
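
Since WEASEL figures so centrally here, it is worth setting out the scheme as commonly reconstructed from Dawkins's description. He published no source code, so the parameters below (a population of 100 offspring per generation, a 5% per-character mutation rate) are my illustrative assumptions:

```python
import random

# WEASEL-style cumulative selection toward a fixed "distant ideal
# target", reconstructed from Dawkins's description in "The Blind
# Watchmaker". Dawkins published no source code; the population size
# and mutation rate here are illustrative assumptions.
random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # fitness is proximity to the pre-specified target string
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
gens = 0
while current != TARGET and gens < 10000:
    gens += 1
    # breed 100 mutant copies and keep the one closest to the target
    current = max((mutate(current) for _ in range(100)), key=score)
print(gens, current)
```

The feature at issue is visible in score(): the algorithm "knows" the complete target in advance. That is precisely the feature absent from the TSP case, where fitness is simply a measured tour length.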

By Dembski's reasoning above, evolutionary computation that does not feature a "distant ideal target" must then stand as an instantiation of the production of specified information, whose complexity can be measured by our prior uncertainty as to which solution the algorithm eventually yields. For those cases where we do not have a solution in hand (or withhold our knowledge of that solution from the algorithm), this becomes simply one chance in the size of the relevant problem space, for those spaces which can be characterized as having a global optimum. That is why I specified a 100-city tour of the TSP at the NTSE: it is a measurable problem space that gets us into the vicinity of Dembski's CSI.
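
The arithmetic behind that choice is easy to check on the usual information measure, I(E) = -log2 P(E). An outcome of probability 1 carries 0 bits, per Dembski's point about WEASEL; whereas prior uncertainty over the distinct tours of a 100-city TSP (99!/2 of them, counting a closed tour's rotations and its two directions as the same tour) comes to a little over 500 bits, which is in the neighborhood of the roughly 500-bit level implied by Dembski's universal probability bound:

```python
import math

# 0 bits of information when the outcome has probability 1 ...
assert -math.log2(1.0) == 0.0
# ... versus prior uncertainty over the 99!/2 distinct 100-city tours
log2_tours = math.lgamma(100) / math.log(2) - 1.0   # log2(99!) - log2(2)
print(round(log2_tours, 1))   # roughly 517 bits
```

The count 99!/2 assumes tours equivalent under rotation and reversal; other conventions change the total by only a few bits either way.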

Third Sentence: "In general, then, evolutionary algorithms generate not true complexity but only the appearance of complexity."

May I suggest that Dembski modify future editions of "The Design Inference" so that there appears a section that delineates how one can distinguish "true CSI" (TCSI) from "false CSI" (FCSI), and that that fully qualified terminology be adopted throughout?

In regard to the initial clause, I would like to ask Dembski whether there is any argument that would show that the critique of Dawkins' WEASEL program, which uses a "distant ideal target", is generically applicable to evolutionary computation. That argument appears to be conspicuous by its absence from the post.

Fourth sentence: "And since they cannot generate complexity, they cannot generate specified complexity either."

Whether this holds true, even granting Dembski's other assumptions, depends upon the use or non-use of a "distant ideal target". Unless and until an argument appears that extends that to the general case, I will take it to be false.

The next paragraph is worth careful scrutiny, as well.

WAD>This conclusion may seem counterintuitive, especially given
WAD>all the marvelous properties that evolutionary algorithms do
WAD>possess. But the conclusion holds. What's more, it is
WAD>consistent with the "no free lunch" (NFL) theorems of David
WAD>Wolpert and William Macready, which place significant
WAD>restrictions on the range of problems genetic algorithms can
WAD>solve.

The "conclusion" cannot be said to hold in advance of the argument that would lead to it.

I have the Wolpert and Macready paper, "No Free Lunch Theorems for Search", obtained via ftp from the Santa Fe Institute, beside me here. I can find no reference to NFL limiting or restricting the range of problems *any* algorithm can solve. So far as I can tell, the NFL theorems are about comparing the efficiency of algorithms, not decreeing which problems can or cannot be solved by any particular algorithm. For example, there is this quote to be found therein:

"As another example, even if one's goal is to find a maximum of the cost function, hill-climbing and hill-descending are equivalent, on average."

One's intuition is not to deploy a hill-descending algorithm in order to find maxima. This is an indication that Wolpert and Macready's findings are not about whether hill-descent is *capable* of finding maxima; it is taken as a given that it is.

"Restriction" is a different question from comparison of efficiency. How does Dembski reconcile his statement of NFL determining a restriction upon range of problems for the particular case of evolutionary algorithms when the central result of NFL is that *all* algorithms are equivalent when their performance is averaged over all cost functions?

The central question that Dembski poses concerning evolutionary algorithms is not one of comparative efficiency, but rather that of essential capacity. I find it difficult to see on what grounds Dembski advances NFL as a relevant finding concerning the capability of evolutionary algorithms to perform tasks. I ask Dembski to clarify his reasoning on this point.

There are a variety of other points contained within the post that could benefit from further discussion, but which range away from my article's topic.

I found interesting Dembski's enumeration of instances of evolutionary computation.

"How does the scientific community explain specified complexity? Usually via an evolutionary algorithm. By an evolutionary algorithm I mean any algorithm that generates contingency via some chance process and then sifts the so-generated contingency via some law-like process. The Darwinian mutation-selection mechanism, neural nets, and genetic algorithms all fall within this broad definition of evolutionary algorithms."

I obtained my master's degree with a specialization in artificial intelligence, and within that my focus was artificial neural systems. But Dembski's inclusion of neural nets within evolutionary computation is the first such classification that I have seen. I would ask on what basis, or on whose authority, Dembski offers this classification. The principles of operation of ANS models show, as far as I can tell, qualitative rather than quantitative differences from canonical examples of evolutionary computation, such as genetic algorithms. A critical difference is the incorporation of *populations* of solutions within genetic algorithms and similar systems, in contrast to the single architecture whose weights and other parameters are adjusted in ANS. But my current research has focused elsewhere, so perhaps Dembski can point to more recent findings that unify these (IMO) disparate fields.
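
The contrast is easy to make concrete. In the toy below (my own construction, not a canonical example from either field), a perceptron adjusts the weights of a single architecture in place, while a genetic algorithm maintains and selects over a population of candidate weight vectors for the same trivial task:

```python
import random

# Single-architecture learning (ANS-style) versus population-based
# search (GA-style) on the same toy task: a threshold unit computing
# logical AND. Illustrative contrast only.
random.seed(2)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def errors(w):
    # misclassification count for a threshold unit with weights w
    return sum((1 if w[0] + w[1]*x1 + w[2]*x2 > 0 else 0) != y
               for (x1, x2), y in data)

# ANS view: ONE weight vector, nudged in place by the perceptron rule
w = [0.0, 0.0, 0.0]
for _ in range(100):
    for (x1, x2), y in data:
        out = 1 if w[0] + w[1]*x1 + w[2]*x2 > 0 else 0
        for i, xi in enumerate((1, x1, x2)):
            w[i] += 0.1 * (y - out) * xi

# GA view: a POPULATION of weight vectors under mutation and selection
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
init_err = min(map(errors, pop))
for _ in range(40):
    pop.sort(key=errors)
    # elitism plus mutated winners of two-way tournaments
    pop = [pop[0]] + [[g + random.gauss(0, 0.1)
                       for g in min(random.sample(pop, 2), key=errors)]
                      for _ in range(29)]
best_err = min(map(errors, pop))
print(errors(w), init_err, best_err)
```

Both approaches can solve the task; the point is the structural difference between adjusting one parameter set and selecting among a population of them.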

As a final question, I would ask why the conception of CSI should require the addition of qualifiers, whether those qualifiers are "true" and "false", or "actual" and "apparent"? It seems to me to be inconsistent with or contradictory to Dembski's previous exposition of CSI as essentially disconnected from consideration of the source of its causation (see TDI pp. 8, 36, 226-7 for specific mentions of disconnect from intelligent agency).