Wesley R. Elsberry
Posts: 4937 Joined: May 2002

Salvador T. Cordova critiqued a "misunderstanding" concerning TSPGRID and Dembski's LCI in a thread on ARN.
Quote  Originally posted by Salvador T. Cordova:
Originally posted by Lars the digital caveman: "Hence, large amounts of CSI that weren't there before have been generated. This clearly contradicts the LCI. "
I agree with you that there are major problems with definitions in ID, and confusion is still rampant. It would be in ID's interest to establish uniform standards.
However, consider the following: running a program that loops from 1 to a trillion and fills an array in memory with the numbers 1 to a trillion. Is more information (not CSI) generated than was in the program before the run? When one applies algorithmic compression, one sees that the trillion bytes of information are not generated by running the program. The information is algorithmically compressible to the program by definition.
Likewise, the authors misunderstood what is going on in the case of TSPGRID, because instead of simple integers, they were generating CSI entities. But they forgot that the sum total of what was being generated was algorithmically compressible. Apply the compression, and one sees that no information was added within the system boundary.
If we take:
X = TSPGRID
Y = inputs (25, 461, 330)
as the starting point, that establishes the 'thermodynamic boundary', so to speak, right?
Running the program generates the following SET:
A = F(25) = CSI corresponding to 25
B = F(461) = CSI corresponding to 461
C = F(330) = CSI corresponding to 330
It appears that we've generated lots of new CSI, but this is not true because the above SET of CSI entities is algorithmically compressible to the following by definition:
X = TSPGRID
Y = input of (25, 461, 330)
thus (X,Y) is 'isomorphic' to (A,B,C)
under algorithmic compression. Thus LCI is not violated.
However, the confusion is understandable, and thus I don't personally appeal to LCI very much. And as I said, it's hard to create real-world thermodynamically closed systems to run experiments on.
"Salvador, one has to distinguish between information in general and CSI."
Agreed. I believe ID might be better served by reconsidering its definitions of information, CSI, and detectability techniques.
One can do a lot of detection without appealing to Dembski's definition of CSI. Some of those methods I show in my threads.
The state of ID is more exotic than it needs to be, in my opinion. ID could benefit by emphasizing simpler detection methods.
Once the less exotic are demonstrated to be effective, then things like what Dembski is showing, with some reformulation, will be more acceptable.
I love uncle Bill Dembski, but at times his definitions kill me.
Respectfully, Salvador 
I agree that there is a misunderstanding, but disagree as to who has the misunderstanding. Let's review a bit about TSPGRID.
Quote  Our algorithm is called TSPGRID, and takes an integer n as an input. It then solves the traveling salesman problem on a 2n * 2n square grid of cities. Here the distance between any two cities is simply Euclidean distance (the ordinary distance in the plane). Since it is possible to visit all 4n^2 cities and return to the start in a tour of cost 4n^2, an optimal traveling salesman tour corresponds to a Hamiltonian cycle in the graph where each vertex is connected to its neighbor by a grid line.
As we have seen above in Section 9, Dembski sometimes objects that problem-solving algorithms cannot generate specified complexity because they are not contingent. In his interpretation of the word this means they produce a unique solution with probability 1. Our algorithm avoids this objection under one interpretation of specified complexity, because it chooses randomly among all the possible optimal solutions, and there are many of them.
In fact, Göbel has proved that the number of different Hamiltonian cycles on the 2n * 2n grid is bounded above by c * 28^(n^2) and below by c' * 2.538^(n^2), where c, c' are constants [31]. We do not specify the details of how the Hamiltonian cycle is actually found, and in fact they are unimportant. A standard genetic algorithm could indeed be used, provided that a sufficiently large set of possible solutions is generated, with each solution having roughly equal probability of being output. For the sake of ease of analysis, we assume our algorithm has the property that each solution is equally likely to occur.

If TSPGRID selects randomly among the many possible solutions for each input (and elsewhere in the paper we define "random" in AIT terms as incompressible), how is it that there is a compressible representation of the sort Salvador claims? As I see it, either TSPGRID is being asserted not to select among possible solutions randomly, despite what we plainly said, or Salvador is redefining compressibility here.
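A back-of-the-envelope count makes the point concrete. Using only Göbel's lower bound quoted above (the function name and the arithmetic below are my own illustration, not from the paper): with at least c' * 2.538^(n^2) equally likely optimal tours, identifying which tour a run actually produced takes on the order of n^2 * log2(2.538) bits over and above the fixed-length description "TSPGRID on input n".

```python
import math

def min_bits_to_specify_tour(n):
    """Bits needed (up to an additive constant) to single out one tour
    among the >= c' * 2.538**(n*n) equally likely optimal tours on a
    2n x 2n grid, per Gobel's lower bound."""
    return n * n * math.log2(2.538)

# For n = 25 (a 50 x 50 grid of cities), roughly 840 bits must come
# from somewhere beyond the short description "TSPGRID on input 25" --
# namely, from the random bits the algorithm consumed.
print(round(min_bits_to_specify_tour(25)))  # -> 840
```

Since that bit count grows with n^2 while the description "(X, Y)" stays essentially fixed, the claimed compression of the outputs down to (X, Y) cannot work.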
Quote  Running the program generates the following SET:
A = F(25) = CSI corresponding to 25
B = F(461) = CSI corresponding to 461
C = F(330) = CSI corresponding to 330 
But running the TSPGRID program another three times generates another set,
A', where it is highly unlikely that A = A'
B', where it is highly unlikely that B = B'
C', where it is highly unlikely that C = C'
Et cetera.
Perhaps Salvador could explain how his idea of compression works, since I'm not seeing it. I think the problem here is that Salvador is treating TSPGRID as a deterministic algorithm when it isn't. The whole point of describing TSPGRID was to avoid a situation where every run of the program on the same input yielded the same result.
 "You can't teach an old dogma new tricks."  Dorothy Parker
