A response to Dembski's "Specified Complexity"

by Wesley R. Elsberry

Dembski's analysis fails to be even-handed. Dembski explores how evolutionary computation (EC) approaches a solution, but does not show that an intelligent agent can approach any particular problem in a supposedly different manner and escape the problems that Dembski asserts for EC. Specifically, if the probability of producing a solution becomes the relevant CSI metric, then an intelligent agent's probability of achieving a solution looks to be just as much a "probability amplifier" as an algorithm's.


What this means is that even though with respect to the uniform probability on the phase space the target has exceedingly small probability, the probability for the evolutionary algorithm E to get into the target in m steps is no longer small. And since complexity and improbability are for the purposes of specified complexity parallel notions, this means that even though the target is complex and specified with respect to the uniform probability on the phase space, it remains specified but is no longer complex with respect to the probability induced by evolutionary algorithm E.

[End Quote - WA Dembski, "Specified Complexity", MetaViews 152]

The above shows the kind of bait-and-switch tactic necessary to maintain the illusion that the products of algorithms or natural processes can in principle be distinguished from the products of intelligent agency. When one examines Dembski's technical discussion of "specification", one finds that the complexity is determined from the likelihood of a solution occurring under the *chance* hypothesis. Here, Dembski swaps that out for the likelihood that the non-chance hypothesis finds the solution. Were this a pinball game, the machine would lock up and flash "TILT!".

The relevant probability for assessing the complexity of some solution is given by Dembski on page 145 of TDI as P(E|H), where H is a *chance* hypothesis.
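To make the numbers concrete, the complexity measure can be expressed in information-theoretic terms as -log2 P(E|H), with Dembski's universal probability bound of 1 in 10^150 corresponding to roughly 500 bits. A minimal sketch (the function name and framing are mine, not Dembski's):

```python
import math

def complexity_bits(p):
    """Complexity of an event as -log2 of its probability P(E|H)
    under the chance hypothesis H, expressed in bits."""
    return -math.log2(p)

# Dembski's universal probability bound: 1 in 10^150.
UNIVERSAL_BOUND = 1e-150
print(complexity_bits(UNIVERSAL_BOUND))   # roughly 498 bits

# An event that a given process produces with certainty (p = 1)
# has zero complexity under the same measure.
print(complexity_bits(1.0))
```

This is the arithmetic behind the argument below: measured against a hypothesis under which the outcome is (nearly) certain, the same event registers (nearly) zero complexity.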

Essentially, what Dembski proves with his analysis of evolutionary computation is not that it cannot produce actual specified complexity, but rather that the bounded complexity measure discussed on page 144 of TDI will show reduced complexity for any problem that evolutionary computation can solve within some limited number of steps m.
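A toy sketch (my own construction, not an example from Dembski) of the "probability amplifier" at work: a target that a single uniform draw hits with probability 2^-40 is reached in a few hundred steps by a simple (1+1) evolutionary algorithm, so the probability of hitting the target induced by the algorithm is nowhere near the uniform probability.

```python
import random

random.seed(0)
N = 40                   # bitstring length; a uniform draw hits the
TARGET = [1] * N         # target with probability 2**-40

def fitness(x):
    # Number of positions agreeing with the target.
    return sum(a == b for a, b in zip(x, TARGET))

def hill_climb(max_steps=10_000):
    # (1+1) EA: flip one random bit, keep the change if no worse.
    x = [random.randint(0, 1) for _ in range(N)]
    for step in range(max_steps):
        if x == TARGET:
            return step
        y = list(x)
        y[random.randrange(N)] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return None

steps = hill_climb()
print(steps)   # a few hundred steps, not the ~10**12 draws that
               # uniform sampling would need on average
```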


But the problem is even worse. It follows by a combinatorial argument that for any partition of the phase space into pieces none of which has probability more than the probability of the target (which by assumption is less than 1 in 10^150), for the vast majority of these partition elements the probability of the evolutionary algorithm E entering them is going to be no better than pure random sampling. It follows that the vast majority of fitness functions on the phase space that coincide with our original fitness function on the target but reshuffle the function on the partition elements outside the target will not land the evolutionary algorithm in the target (this result is essentially a corollary of the No Free Lunch theorems by Wolpert and Macready). Simply put, the vast majority of fitness functions will not guide E into the target even if they coincide with our original fitness function on the target (see Appendix 8).

[End Quote - WA Dembski, "Specified Complexity", MetaViews 152]

Dembski's invocation of Wolpert and Macready's "No Free Lunch" theorems suffers from the same error as his last use of them in "Explaining Specified Complexity". Wolpert and Macready's results concern comparative efficiency, not essential capacity. As mentioned before, Wolpert and Macready treat all algorithms as having the capacity to solve the problem at hand on every possible cost function for that problem. The example that they give of hill-descending algorithms solving hill-climbing problems illustrates this point nicely.

One can characterize the fitness functions which cause some evolutionary algorithm to become less efficient than other algorithms or blind search: such a fitness function is "misleading". That is, candidate solutions near the optimum in genetic space map to worse-performing points when evaluated by the fitness function, and thus lead the search away from the solution that would terminate it. What Dembski needs to do is show that biological genetics instantiates such a situation. Unfortunately for Dembski, the diversity of protein variants which perform the same functions tends to indicate that, in general, biological fitness functions do not share the relevant features of misleading cost functions.
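To make "misleading" concrete, here is a toy trap function of my own construction, in the spirit of deceptive functions from the evolutionary computation literature: every local improvement points a hill climber away from the global optimum (all ones) and toward all zeros.

```python
import random

random.seed(1)
N = 20

def trap_fitness(x):
    # Misleading fitness: rewards zeros everywhere except at the
    # isolated global optimum of all ones.
    ones = sum(x)
    return N + 1 if ones == N else (N - ones)

def hill_climb(fitness, max_steps=5_000):
    # Flip one random bit, keep the change only if strictly better.
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(max_steps):
        y = list(x)
        y[random.randrange(N)] ^= 1
        if fitness(y) > fitness(x):
            x = y
    return x

result = hill_climb(trap_fitness)
print(sum(result))   # the climber is led to all zeros, never the optimum
```

On this function the hill climber is reliably worse than blind search at finding the optimum, which is exactly the situation Dembski would need to show holds for biological fitness functions.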


Does this mean that the evolutionary algorithm has in fact generated complex specified information, but that in referring to a loss of complexity with respect to E I'm simply engaging in some fancy redefinitions to avoid this conclusion? I don't think so. Remember that we are interested in the **generation** of specified complexity and not in its reshuffling.

[End Quote - WA Dembski, "Specified Complexity", MetaViews 152]

I do think that fancy footwork has been engaged in. By the definitions that Dembski lays out in his book, "The Design Inference", the complexity of an event is derived from a probabilistic analysis of the event given that a chance process produced it. In "Explaining Specified Complexity" and "Specified Complexity", Dembski now tells us that the relevant complexity measure must instead be taken upon the probabilistic analysis of the event given a non-chance hypothesis. Yet Dembski fails to apply his new mode of complexity measurement to solutions found by intelligent agents. Given a problem and an intelligent agent to solve it, the complexity of the solution is now just the likelihood that the intelligent agent will find a solution. For almost every example of CSI that Dembski deploys in "The Design Inference", this probability is either 1 or very close to 1, and thus none of those examples can any longer be considered to represent CSI under Dembski's universal probability bound of 1 in 10^150. This will require a major rewrite of "The Design Inference". In another essay, Dembski offers a conclusion that reads, "This is known as having your cake and eating it. Polite society frowns on such obvious bad taste."

Omniscient, omnipotent entities cannot fail to solve problems. The likelihood of problem solution for such entities is always and everywhere 1. Thus, events due to omniscient, omnipotent entities also reduce to having only the "appearance of specified complexity" rather than its actuality, according to Dembski's own logic in "Specified Complexity".

The Problem That Dembski Does Not Want To Address

Dembski says above that he is interested in the problem of the generation of specified complexity, not its reshuffling. But Dembski's design inference treats this distinction as a "don't care" condition. Dembski offers specified complexity and his design inference as a means of reliably indicating the action of an intelligent agent. The problem is that Dembski's design inference does not distinguish between an event whose specified complexity is merely a transformation of prior existing specified complexity and an event whose specified complexity is (by some means that Dembski never discusses) actual, original specified complexity. Given two events which have the same complexity as measured by probabilistic analysis of each event having occurred by a chance process, Dembski's design inference can find "design" as the relevant category for both.

But if, by the "cheating" process of transforming already existing information into new forms with the "appearance of specified complexity", natural processes and algorithms can construct counterfeit specified complexity, why does it matter that some other process can produce actual specified complexity? The two cannot be distinguished after the fact, even under Dembski's uneven rules, if one does not have in hand the actual causal story that goes with the event. But Dembski's design inference was supposed to get us away from having to rely upon such causal stories. We were supposed to be able to simply examine the properties of the produced event and declare it to be explicable only as an instance of "design". Dembski's response to the problem that evolutionary computation poses for his thesis shows that the design inference is incapable of doing this.

Dembski asserts that fitness functions must be specially constructed, but fails to show that this is the case for biological systems. Further, Dembski risks giving a wonderful argument for Deism if he continues in this vein. Let's stipulate that Dembski is right that intelligence is needed to form the fitness function for natural evolutionary computation. A Deist's god then sets up the fitness function and steps back to watch evolutionary processes accomplish everything else.

Appendix 1: The Algorithm Room

William A. Dembski's writings claim, among other things, that algorithms cannot produce Complex Specified Information (CSI), but intelligent agents can. A recent posting of Dembski's introduced qualifiers to CSI, so that we now have "apparent CSI" and "actual CSI". Dembski categorizes as "apparent CSI" those solutions which meet the formerly given criteria of CSI, but which are produced via evolutionary computation. This is contrasted with "actual CSI", in which a solution meets the CSI criteria and which an intelligent agent produces. See my Dembski link page and follow the link for "Explaining Specified Complexity".

Dembski is also fond of both practical and hypothetical illustrations to make his points. I'd like to propose a hypothetical illustration to explore the utility of the "apparent CSI"/"actual CSI" split.

Let's say that we have an intelligent agent in a room. The room is equipped with all sorts of computers and tomes on algorithms, including the complete works of Knuth. We'll call this the "Algorithm Room". We pass into the room a problem whose solution meets the criteria of CSI (say, a 100-city tour of the Traveling Salesman Problem, or perhaps the Zeller congruence applied to many dates). Enough time passes that our intelligent agent could have worked the problem by hand without recourse to references or other resources. The correct solution is passed out of the room, with an affidavit from the intelligent agent that no computational or reference assistance was utilized. Under those circumstances, we pay our intelligent agent at a high consultant rate. But if our intelligent agent simply used the references or computers, he would be paid at the lowly computer operator rate. We suspect that our intelligent agent not only utilized the computers to accomplish the task, but also used the time thus freed up to do some light reading, like "Once is Not Enough".
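Since the problems passed into the room are meant to be mechanically checkable, here is a short sketch of the Zeller congruence mentioned above (the standard Gregorian form; the variable names are mine):

```python
def zeller(year, month, day):
    """Day of week for a Gregorian date via Zeller's congruence.
    Returns 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if month < 3:            # January and February are counted as
        month += 12          # months 13 and 14 of the previous year
        year -= 1
    K, J = year % 100, year // 100
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

print(zeller(2000, 1, 1))   # 0 -> Saturday
```

A stack of dates run through this formula is exactly the sort of output that either a patient agent with pencil and paper or a one-line program could produce, which is what makes it a useful test case here.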

There are four broad categories of possible explanation of the solution that was passed back out of the "Algorithm Room". First, our intelligent agent might have employed chance, throwing dice to come up with the solution, and then waiting an appropriate period to pass the solution out. Given that the solution actually did solve the problem passed in, we can be highly confident that this category of explanation is not the actual one. Second, our intelligent agent might have ignored every resource of the "Algorithm Room" and spent the entire time working out the solution from the basic information provided with the problem (distances between cities or dates in question). Third, our intelligent agent might have gone so far as to look up and apply, via pencil and paper, some appropriate algorithm taken from one of the reference books. In this case, the sole novel intelligent action on our agent's part was looking up the algorithm. Essentially, our agent utilized himself as a computer. Fourth, our intelligent agent might simply have fed the basic data into one of the computers and run an algorithm to pop out the needed solution. Again, the intelligent agent's deployment of intelligence stopped well short of being applied to produce the actual solution to the problem at hand.

Because we suspect cheating, we wish to distinguish between a solution that is the result of the third or fourth categories of action, and a solution that is the result of the second category of action of our intelligent agent. We have only the attributes of the provided solution to the problem to go upon. Can we make a determination as to whether cheating happened or not?

Dembski's article, "Explaining Specified Complexity", critiques a specific evolutionary algorithm. Dembski does not dispute that the solution represents CSI, but categorizes the result as apparent CSI because the specific algorithm critiqued must necessarily produce it. Dembski then claims that this same critique applies to all evolutionary algorithms, and Dembski includes natural selection within that category.

All this poses the question of whether Dembski's analytical processes bearing upon CSI can, in the absence of further information from inside the "Algorithm Room", decide whether the solution received was actually the work of the intelligent agent (and thus "actual CSI") or the product of an algorithm falsely claimed to be the agent's work (and thus "apparent CSI").

If Dembski's analytical techniques cannot resolve the issue of possible cheating in the "Algorithm Room", how does he hope to resolve the issue of whether certain features of biology are necessarily the work of an intelligent agent or agents? If Dembski has no solution to this dilemma, the Design Inference is dead.