I think I've uncovered a disconnect between the way the people coding up these genetic algorithms (GAs) and the IDers are discussing the issues at hand. It was actually re-reading Dembski's paper on "active information" that brought this to light (we'll ignore the tautological nature of Dembski's "active information" for the time being).
Dembski spends a significant amount of effort pointing out (ad nauseam) that GAs and other search techniques cannot violate the No Free Lunch theorems (NFLT). The NFLT is interesting, so we'll take an aside to talk about it for anyone who hasn't grasped it yet. The NFLT basically states the following: averaged over all possible fitness functions, any search algorithm performs as well as any other search algorithm. This leads to what seems to be a very counter-intuitive result, namely that averaged over all possible fitness landscapes, a *hill climbing* algorithm is as effective as a *hill descending* algorithm *for finding local maxima*. At first glance this result seems highly unusual. But deeper reflection on the first clause, "averaged over all possible fitness landscapes", gives you a clue as to what is going on here.
It turns out that almost all (I mean that in the formal sense: http://en.wikipedia.org/wiki/Almost_all) of the possible fitness functions in the phrase "averaged over all fitness functions" look absolutely nothing like any "function" you've ever seen - notably, *almost all* error surfaces are nowhere differentiable and nowhere continuous. Now consider a search over such a space (having trouble imagining it? surf(rand(100)); in MATLAB will give you a vague notion), and it's obvious that no matter where you are, you have no idea where to go next to find the maximum. In other words, walking "uphill" will do no better than walking "downhill" in general. So that maybe explains the NFLT... This ends the NFLT aside.
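You can see this for yourself with a toy experiment (a sketch of my own in Python, not anything from Dembski's paper - the `onemax`/`rough` landscapes and the evaluation budget are just illustrative choices). On a structured fitness function like counting 1-bits, a greedy hill climber crushes blind sampling; on an arbitrary, hash-scrambled fitness function, the climber's local information is worthless and it does no better than random guessing:

```python
import random

random.seed(1)

N = 40          # bits per genome
BUDGET = 2000   # fitness evaluations per run

def onemax(bits):
    """Structured fitness: count the 1-bits. One smooth 'hill' whose
    slope points toward the optimum from everywhere."""
    return sum(bits)

def rough(bits):
    """Unstructured fitness: an arbitrary hash-derived value per string -
    a crude stand-in for the incompressible landscapes that dominate the
    average the NFLT is taken over."""
    return hash(tuple(bits)) % 10_000 / 10_000 * N

def hill_climb(fitness):
    """Greedy one-bit-flip climber: keep a flip only if it doesn't hurt."""
    cur = [random.randint(0, 1) for _ in range(N)]
    best = fitness(cur)
    for _ in range(BUDGET):
        i = random.randrange(N)
        cur[i] ^= 1            # flip one bit
        f = fitness(cur)
        if f >= best:
            best = f
        else:
            cur[i] ^= 1        # flip it back
    return best

def random_search(fitness):
    """Blind sampling with the same evaluation budget."""
    return max(fitness([random.randint(0, 1) for _ in range(N)])
               for _ in range(BUDGET))

print("onemax: climb", hill_climb(onemax), "| random", random_search(onemax))
print("rough : climb", hill_climb(rough), "| random", random_search(rough))
```

On `onemax` the climber reliably reaches the optimum while random search tops out well short of it; on `rough` the two are indistinguishable (the climber can even get trapped on one of the countless local maxima), which is exactly the NFLT intuition.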
In any case, Dembski is constantly pointing the same thing out - namely that no algorithm can violate the NFLT (I think there are restrictions on the kinds of algorithms this applies to, but never mind). And Dembski is correct - nothing violates the NFLT as far as we know. Dembski takes this to mean that all search algorithms are therefore useless overall, and that any time a search algorithm does better than random chance, some clever programmer somewhere must have slipped special (CSI, IC, whatever) information into the code.
Thus the claims on UD that programs like Word Mutagenation and other GAs that implement very basic crossover and mutation on strings must be "sneaking information in", because they appear to be violating the NFLT! But the point of these GAs that programmers are throwing together to show IDers how wrong they are is not to violate the NFLT - we are quite aware that we can't violate it. The point of these algorithms is that *the search spaces of interest to us (word spaces, etc.) are nothing like the nowhere-continuous, nowhere-differentiable surfaces that give rise to the NFLT*. In other words, the world we live in is filled with error surfaces very conducive to genetic-algorithm-based search strategies (and gradient searches, branch-and-bound searches, etc.).
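To make that concrete, here's a bare-bones GA of the kind being discussed - this is my own minimal sketch in Python, not the actual Word Mutagenation code, and the target word, population size, and mutation rate are all arbitrary choices for the demo. Nothing is "snuck in" beyond a fitness function that scores strings, yet because word space is structured (one correct character is better than none), selection plus crossover plus mutation homes in on the target fast:

```python
import random
import string

random.seed(42)

TARGET = "evolution"            # arbitrary target word for the demo
ALPHABET = string.ascii_lowercase

def fitness(s):
    """Structured fitness on word space: characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Point mutation: each character has a small chance of being replaced."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def crossover(a, b):
    """Single-point crossover between two parent strings."""
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

# Random initial population, then rounds of selection, crossover, mutation.
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for gen in range(500):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:20]          # truncation selection; keeping parents = elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(80)]

print("generation", gen, "->", max(pop, key=fitness))
```

Run it against a fitness function that assigned random scores to strings instead, and it would flounder exactly as the NFLT predicts - the GA's power comes from the structure of the landscape, not from hidden information in the code.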
That's why, in Patrick's newest post, he posits a search problem that is very difficult for a GA to solve and says "Aha! You can't solve this, so GAs aren't universal problem-solving tools". But this misses the point - of course GAs aren't universal problem-solving tools - the NFLT says they can't be! The point is that for most problems of interest to us, GAs perform well.