The Critic's Resource on AntiEvolution

Specified Complexity

Unacknowledged Errors in “Unacknowledged Costs” Essay


Back over the summer, William Dembski was talking up "Baylor's Evolutionary Informatics Laboratory", and one of the features there was a PDF of an essay critiquing the "ev" evolutionary computation program by Tom Schneider. Titled "Unacknowledged Information Costs in Evolutionary Computing", the essay by Robert J. Marks and William A. Dembski made some pretty stunning claims about the "ev" program. Among them, it claimed that blind search was a more effective strategy than evolutionary computation for the problem at hand, and that the search structure in place was responsible for most of the information resulting from the program. The essay was pitched as being "in review", publication unspecified. Dembski also made much of the fact that Tom Schneider had not, at some point, posted a response to the essay.

There are some things that Marks and Dembski did right, and others that were botched. Where they got it right was in posting the scripts that they used to come up with data for their conclusions, and in removing the paper from the site on notification of the errors. The posting of scripts allowed others to figure out where they got it wrong. What is surprising is just how trivial the error was, and how poor the scrutiny must have been to let things get to this point.

Now what remains to be seen is whether in any future iteration of their paper they bother to do the scholarly thing and acknowledge both the errors and those who brought the errors to their attention. Dembski has written that releasing materials online allows critics to help improve them, but his track record of crediting those critics is exceedingly poor. While Dembski has occasionally taken a clue from a critic, it is rather rare to see him acknowledge his debt to that critic.

In the current case, Marks and Dembski owe a debt to Tom Schneider, "After the Bar Closes" regular "2ndclass", and "Good Math, Bad Math" commenter David vun Kannon. Schneider worked from properties of the "ev" simulation itself to demonstrate that the numbers in the Marks and Dembski critique cannot possibly be correct. "2ndclass" made a project out of examining the Matlab script provided with the Marks and Dembski paper to find the source of the bogus data used to form the conclusions of Marks and Dembski. vun Kannon suggested an easy way to use the Java version of "ev" to quickly check the claims by Marks and Dembski.

(Also posted at the Austringer)

Specified Complexity Depends Upon Implicit Design Conjectures


William Dembski's No Free Lunch contains the following passage:

The presumption here is that if a subject S can figure out an independently given pattern to which E conforms (i.e., a detachable rejection region of probability less than alpha that includes E), then so could someone else. Indeed, the presumption is that someone else used that very same item of background knowledge -- the one used by S to eliminate H -- to bring about E in the first place.

[No Free Lunch, p. 75]

Because Dembski's framework is based upon the elimination of alternative explanations, we end up with a situation in which Dembski attributes the complement of the probability assigned to chance hypotheses to an implicit design conjecture, namely the one that underlies a particular "specification". When the "saturated" probability of the alternatives is less than 1/2, Dembski says that we should prefer "design" as our causal explanation, and because of this relationship between the specification and the putative causal story, we thereby adopt that particular causal conjecture.

Specified Complexity and Reliability


(Originally posted to, retrieved via Google Groups.)

From: "Wesley R. Elsberry"
Subject: Re: Designer as a Scientific Theory
Date: 1998/09/07
Message-ID: #1/1
X-Deja-AN: 388825198
Organization: Online Zoologists

In article ,
Ivar Ylvisaker wrote:
>Wesley R. Elsberry wrote:
>>In article

IY>[with much snipping]

Me too.


IY>I don't think that Dembski will accept Wesley's version of
IY>a filter that detects intelligent designers. Wesley's
IY>filter stage 3 passes only those phenomena that we know are
IY>caused by intelligent designers. I assume that Wesley is
IY>referring to man and, maybe animals as designers but not to
IY>unknown supernatural beings. Dembski wants to go further.

Yes, Dembski *wants* to go further. Unfortunately, we do not
have in hand the justification to do so. Is it contained in
Dembski's forthcoming book? Somehow, I doubt it, but I do
look forward to seeing Dr. Dembski try.

Finite Improbability Calculator


The Finite Improbability Calculator is a collection of routines to permit exploration of very small probabilities. Many antievolutionary arguments are based upon an argument from improbability: some phenomenon is so improbable that it must be due to an intelligent agent.


  1. Select an operation to perform from the list.
  2. Enter the parameters for the operation.
  3. Press the button for the operation.
  4. Results appear in a table at the top of this page.


Change of base


Old Base
New Base

Return to Operations list



Factorial

Enter a positive integer in the box:

Return to Operations list

Permutation and Combination


Total elements
Selected elements

Return to Operations list

Specified Anti-Information


Length of uncompressed string:

Length of program/input pair that produces the string:

Number of different symbols in strings:

Return to Operations list

Dembski's p_origin and M/N ratio


Perturbation tolerance:
Perturbation identity:
Number of symbols:
Length of string:

Page numbers refer to "No Free Lunch".

Return to Operations list

Dembski's p_local


Number of items in system (e.g., 50):
Number of copies of each item (e.g., 5):
Number of possible substitutions per item (e.g., 10):
Total number of items available (e.g., 4289):

Page numbers refer to "No Free Lunch".

Return to Operations list

Dembski's p_perturb


Number of subunits (N):
Different types of subunits (k):
Perturbation tolerance factor (q):
Perturbation identity factor (r):

Page numbers refer to "No Free Lunch".

Return to Operations list

Error in dembskis


Error Measurement
Expected Value

Return to Operations list

Hazen Functional Complexity


N (number of possible configurations)
M(Ex) (number of functional configurations)

Return to Operations list

Notes on calculations

Factorial:  The point here is to permit calculation of factorial(n) where n can be a large number, say the number of proteins which an organism codes for.  However, even a "double" floating-point number overflows at 1.7e308.  So factorials are calculated here using a logarithmic representation.  The Stirling approximation is used for very large n, and a logarithmic version of the classical iterative method is used for smaller n.  Stirling's approximation is taken as

            n! ~ n^n e^(-n) sqrt(2 * pi * n) (1 + 1/(12n))
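As a sketch of the scheme described above (not the calculator's own Perl/PHP source), the log-space factorial might look like this in Python; the threshold for switching to Stirling's approximation is an arbitrary choice here:

```python
import math

def log_factorial(n, stirling_threshold=1000):
    """Natural log of n!, computed entirely in log space.

    Below the threshold, sum ln(1) + ... + ln(n) directly (the
    logarithmic version of the classical iterative method); above
    it, use Stirling's approximation with the (1 + 1/(12n))
    correction factor, as in the note above.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    if n < stirling_threshold:
        return sum(math.log(i) for i in range(1, n + 1))
    # ln(n!) ~ n ln(n) - n + 0.5 ln(2 pi n) + ln(1 + 1/(12 n))
    return (n * math.log(n) - n
            + 0.5 * math.log(2 * math.pi * n)
            + math.log(1 + 1.0 / (12 * n)))
```

A call like log_factorial(10**6) returns the natural log of a number far beyond the 1.7e308 overflow point of a double, without any overflow.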

Change of base: Calculated as 

            new exponent = (old_exponent * ln(oldbase)) / ln(newbase)
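In Python, the change of base is a one-liner; for example, 2^-1000 re-expressed as a power of 10 has an exponent of about -301.03:

```python
import math

def change_base(old_exponent, old_base, new_base):
    """Re-express old_base**old_exponent as new_base**result."""
    return old_exponent * math.log(old_base) / math.log(new_base)
```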

Permutation and combination: Uses the factorial function discussed above.

            permutations =  n! / (n - k)!

            combinations = n! / (k! (n - k)!)
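A minimal Python sketch, using the standard library's lgamma (ln of the gamma function, where lgamma(n+1) = ln(n!)) in place of a hand-rolled factorial so that large n stays in log space:

```python
import math

def log_permutations(n, k):
    """ln of n! / (n - k)!"""
    return math.lgamma(n + 1) - math.lgamma(n - k + 1)

def log_combinations(n, k):
    """ln of n! / (k! (n - k)!)"""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
```

For small results the count can be recovered by exponentiating, e.g. exp(log_combinations(52, 5)) is the familiar 2,598,960 poker hands.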

Specified Anti-Information

Specified Anti-Information is an application of the "universal distribution" of Kirchherr et alia 1997, expounded in Elsberry and Shallit 2003. SAI is a framework intended as an alternative to Dembski's "design inference". The SAI of a bit string is defined as

SAI = max(0,|y| - C(y))

where |y| is the length of the bit string of interest and C(y) is the Kolmogorov complexity of y. Since C(y) is uncomputable, mostly we should speak of Known Specified Anti-Information, which is just the maximum SAI that can be established by application of known compression techniques.

SAI is defined for bit strings, but often we deal with strings based on a symbol set with cardinality > 2. It is straightforward to determine the length of a bit string needed to represent such a string, though, using the "change of base" function presented earlier. The second part of the SAI section permits SAI to be calculated for such strings.
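Since C(y) is uncomputable, any real calculation of Known SAI substitutes the output length of some concrete compressor as an upper bound on C(y). A minimal Python sketch using zlib (one arbitrary choice of "known compression technique"; a better compressor could only raise the Known SAI):

```python
import zlib

def known_sai_bits(data: bytes) -> int:
    """Known Specified Anti-Information of a byte string, in bits.

    SAI = max(0, |y| - C(y)); here the length of a zlib-compressed
    copy stands in as an upper bound on the uncomputable C(y), so
    the result is a lower bound on the true SAI.
    """
    original_bits = 8 * len(data)
    compressed_bits = 8 * len(zlib.compress(data, 9))
    return max(0, original_bits - compressed_bits)
```

A highly repetitive string like b"ab" * 5000 shows large Known SAI, while an incompressible string comes out at zero (the max() clamps away zlib's header overhead).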

Something to note here is the apparent difference in ease of application between SAI and the various measures introduced by Dembski.

porig approximation (as per NFL p.301):

            porig ~ symbols^(-length * (perturbation_tolerance - perturbation_identity))

The discussion on page 301 implies that functional proteins may themselves be considered "discrete combinatorial objects" to which this formula would apply.  With a little exploration, then, one can verify that any functional protein of length 1153 or greater has an origination probability smaller than Dembski's "universal small probability".
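As a check on that threshold, here is a Python sketch of the porig approximation in log10 form. The parameter values (20 amino-acid symbols and a tolerance-minus-identity difference of 0.1) are assumptions chosen to be consistent with the 1153-residue threshold quoted above, not values taken from the text:

```python
import math

def log10_porig(symbols, length, tolerance, identity):
    """log10 of Dembski's p_origin approximation:
    porig ~ symbols^(-length * (tolerance - identity))."""
    return -length * (tolerance - identity) * math.log10(symbols)
```

With symbols=20 and tolerance - identity = 0.1, a length of 1153 yields log10(porig) just under -150, crossing Dembski's universal small probability bound, while 1152 stays just above it.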

plocal calculation (as per NFL p.293):

            plocal = (units in system * substitutions / total different units)^(units in system * copies)
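A Python sketch in log10 form, to keep the tiny result representable. The example values are the ones suggested in the calculator form above (50 items, 5 copies each, 10 substitutions, 4289 total items):

```python
import math

def log10_plocal(units, copies, substitutions, total):
    """log10 of Dembski's p_local:
    plocal = (units * substitutions / total)^(units * copies)."""
    return units * copies * math.log10(units * substitutions / total)
```

log10_plocal(50, 5, 10, 4289) comes out at roughly -233, i.e. plocal ~ 10^-233.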

M/N ratio approximation (as per NFL p.297):

            M/N ratio ~ (combinations(length, tolerance * length) * (symbols-1)^(tolerance * length)) / (combinations(length, identity * length) * (symbols-1)^(identity * length))

There is a discrepancy between the result which Dembski reports for his example calculation of an M/N ratio on p.297 and what the Finite Improbability Calculator reports.  Plug in symbols=30, length=1000, identity=0.2 and the result comes out as 5.555117e-223, whereas Dembski reports 10^-288, a discrepancy of about 65 orders of magnitude.  Jeff Shallit noted this error in Dembski's text some time back.

DCO pperturb approximations (as per NFL pp.299 and 300):

            pperturb (p.299) ~ (combinations(length, tolerance * length) / combinations(length, identity * length)) * (symbols-1)^(length * (tolerance - identity))

            pperturb (p.300) ~ symbols^(length * (tolerance - identity))

Error in dembskis

The idea that error might be measured in a unit called the "dembski", scaling discrepancies in orders of magnitude, came up in discussion of errors in an essay by Marks and Dembski. The reference unit of error for the measure is taken from the case mentioned above in the M/N ratio calculation note, where Dembski had an error of about 65 orders of magnitude. "Dave W." formalized the notion with an equation, and W. Kevin Vicklund suggested using a rounded-off value of 150 as the constant in the denominator, based upon Dembski's figure of 10^150 as a universal small probability. Thus, the final form of quantifying error in dembskis (Reed Cartwright proposed the symbol Δ) is

Δ = | ln(erroneous measure) - ln(correct measure) | / 150
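A direct Python transcription of the formula; note that the reference case above (5.555117e-223 versus Dembski's reported 10^-288) comes out at almost exactly one dembski, since 65 * ln(10) ≈ 149.7:

```python
import math

def dembskis(erroneous, correct):
    """Error in dembskis: |ln(erroneous) - ln(correct)| / 150."""
    return abs(math.log(erroneous) - math.log(correct)) / 150
```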

There is not yet a consensus on what to term the unit, but two proposals being considered are "Dmb" and "duns".

Hazen Functional Complexity

The calculation is made per the 2007 PNAS paper by Hazen et al. Given a number of possible configurations, N, and a (smaller) number of functionally equivalent configurations, M(Ex), one obtains the functional complexity metric, I(Ex), as:

I(Ex) = - log2(M(Ex) / N)
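The metric is a one-line calculation in Python; for example, 1 functional configuration out of 1024 possibilities gives exactly 10 bits of functional complexity:

```python
import math

def functional_complexity(M_Ex, N):
    """Hazen et al. functional complexity: I(Ex) = -log2(M(Ex) / N)."""
    return -math.log2(M_Ex / N)
```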


Dembski, William A. 2002. No Free Lunch. Rowman & Littlefield Publishers.

Elsberry, Wesley R. and Jeffrey Shallit. 2003. Information Theory, Evolutionary Computation, and Dembski's "Complex Specified Information".

Hazen RM, Griffin PL, Carothers JM, Szostak JW (2007) Functional information and the emergence of biocomplexity. Proc Natl Acad Sci U S A 104 Suppl 1:8574-81.

Kirchherr, W., M. Li, and P. Vitanyi. The miraculous universal distribution. Math. Intelligencer 19(4) (1997), 7-15.

The Finite Improbability Calculator was first coded in spring of 2002, following publication of William Dembski's book, "No Free Lunch". The original utilized a Perl CGI script. The FIC was ported to a PHP instantiation in January, 2004, with routines added for calculating Specified Anti-Information. The FIC then was altered to work within a Drupal page using the "PHP code" option.

The name of this page was inspired by "The Hitchhiker's Guide to the Galaxy" by the late great Douglas Adams.

