scordova
Posts: 64 Joined: Dec. 2003
Greetings Wesley,
If I am mistaken about any of your statements, please clarify. We may need to go a few rounds to tidy things up. I may have a few typos in my notation too, so let's help each other out to at least clarify things.
---------------------------------------------------------------

From my vantage point we have 3 components to the TSPGRID operation.
At first glance we see 2 components
1. TSPGRID program itself
2. Random inputs in the form of "n", where 4n^2 is the number of cities
However, in actuality TSPGRID is composed of
A. Deterministic elements
B. Random selector "R", to select a solution: "it chooses randomly among all the possible optimal solutions"
Thus the 3 components correspond to 1A, 1B, 2:

1. TSPGRID program itself
   A. Deterministic elements
   B. Random selector, which I label "R", to select a solution
2. Random inputs in the form of "n", where 4n^2 is the number of cities
Thus in reality we have two random inputs, namely "n" and "R".
For a given "n", each run of TSPGRID corresponds to an "R". So we effectively have a doubly nested loop: the outer loop selects "n", and each inner run for that "n" takes a fresh "R" as an input. Thus the system is thermodynamically open with respect to "R". Each run of the TSPGRID program adds one more integer "R" to the mix.
To close the system we need to redefine the thermodynamic boundary for each additional run. We can do the following.
Let "R" be traced and recorded such that each run can be reconstructed.
TSPGRID for a given "n" running under R25 might generate the following segments for a shortest path: (S1, S5, S30, ... S4n^2) = CSI25
Thus:
CSI25 = (S1, S5, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R25)
similarly, for example
CSI461 = (S2, S65, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R461)
CSI330 = (S25, S22, S650, ... S4n^2) is compressibly equivalent to TSPGRID("n",R330)
By way of analogy with factoring in algebra: T*25 + T*461 + T*330 = T*(25 + 461 + 330)
CRUDELY SPEAKING the CSI entities
(S1, S5, S30, ... S4n^2) + (S2, S65, S30, ... S4n^2) + (S25, S22, S650, ... S4n^2)
= TSPGRID("n",R25) + TSPGRID("n",R461) + TSPGRID("n",R330) =
TSPGRID("n") ( R25 + R461 + R330 )
If we define the information boundary (the 'thermodynamic boundary') around the system
TSPGRID("n") + R25 + R461 + R330 = CSI25 + CSI461 + CSI330
we see there is no violation of LCI. Algorithmic compression is applied to the outputs (CSI25, CSI461, CSI330), not to the inputs "n" and "R".
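To make the bookkeeping concrete, here is a rough sketch of the accounting I have in mind. All the bit counts, including the assumed 5000-bit program length, are placeholders rather than measurements of the real TSPGRID:

```python
import math

def bits_to_store_tour(n):
    """Bits needed to index an arbitrary tour (permutation) of 4n^2
    grid cities explicitly: log2((4n^2)!), computed via lgamma."""
    m = 4 * n * n
    return math.lgamma(m + 1) / math.log(2)

def bits_inside_boundary(program_bits, n, r_values):
    """Description length once the boundary encloses TSPGRID itself,
    the input n, and the recorded selectors R."""
    enc = lambda k: max(1, math.ceil(math.log2(k + 2)))   # crude integer code length
    return program_bits + enc(n) + sum(enc(r) for r in r_values)

n, r_values = 3, [25, 461, 330]
raw = len(r_values) * bits_to_store_tour(n)        # ~ I(CSI25) + I(CSI461) + I(CSI330)
closed = bits_inside_boundary(5000, n, r_values)   # program + "n" + R25 + R461 + R330
print(round(raw), round(closed))   # the closed-boundary total grows only with the
                                   # lengths of the recorded R's, not with the tours
```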
-------------------------------------------------------------------------
To offer a second view:
A polynomial of m-th order has m solutions, and potentially all of those solutions are unique.
In contrast, in the travelling salesman problem on the 4n^2 grid, for a given "n" there are m unique solutions, and m is bounded above by c * 28^(n^2) and below by c' * 2.538^(n^2), where c and c' are constants.
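Just to get a feel for how fast that solution space grows, here is a quick calculation taking c = c' = 1 (the actual constants are not specified, so this only shows the scaling):

```python
# Rough growth of the bounds on m, the number of distinct optimal tours,
# taking c = c' = 1 purely to see the scaling.
for n in range(1, 5):
    lower = 2.538 ** (n ** 2)
    upper = 28.0 ** (n ** 2)
    print(f"n={n}: roughly between {lower:.3g} and {upper:.3g} optimal tours")
```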
There are unique pathways such that:
We take run #1: our sum total of outputs is P1
We take run #2: our sum total of outputs is P1 + P2
...
We take run #m: our sum total of outputs is P1 + P2 + ... + Pm
The maximum CSI defined by the space of all possible solutions is bounded by (though not necessarily equal to) I(P1) + I(P2) + ... + I(Pm), where I is the information content, P is the pathway, and "m" is the number of pathways. It is important to see that the total information is finite; it has a maximum.
I-total is less than or equal to I(P1) + I(P2) + ... + I(Pm)
Now consider:
We take run #1: our sum total of outputs is equal to TSPGRID(n,1)
We take run #2: our sum total of outputs is equal to TSPGRID(n,1) + TSPGRID(n,2)
...
We take run #m: our sum total of outputs is equal to TSPGRID(n,1) + TSPGRID(n,2) + ... + TSPGRID(n,m)
What was actually demonstrated was that I(P1) + I(P2) + ... + I(Pm) was compressible to I(TSPGRID(n)) + I(1) + ... + I(m): the information content of the TSPGRID algorithm plus the information content of all "m" inputs.
What made it look like LCI was violated is that each run of TSPGRID redefined the system boundary: I-total appeared to increase by I(Pk), when in fact it increased only by I(k) once algorithmic compression is applied to the sum total of outputs from all runs.
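A toy tally of that bookkeeping (the 500 bits per path is an arbitrary placeholder for I(Pk) at some fixed "n"):

```python
import math

BITS_PER_PATH = 500    # arbitrary placeholder for I(P_k) at some fixed n

apparent = actual = 0
for k in range(1, 1001):
    apparent += BITS_PER_PATH                       # looks like +I(P_k) each run
    actual += max(1, math.ceil(math.log2(k + 1)))   # really only +I(k) each run
print(apparent, actual)   # 500000 vs. roughly 9000: the boundary was moving, not the CSI
```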
In a sense, LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions!!!
Taken to the extreme, when all solutions have been found, the system boundary can no longer be redefined and LCI will be enforced. The same holds for information in a closed universe: at some point, LCI will be enforced.
I know the above descriptions look crude, horrendous, and convoluted, but if there are ways I can clarify, please ask.
The problem in application, as Chaitin points out, is that we never really know that we have the optimal compression. We only know that one compression is better than another. What that means is that one may be able to create a TSPGRID so convoluted that the optimal compression will not be apparent, as it was in this case. LCI may be true in that case, but it would be hard to prove.
A very severe problem in applying LCI is however evident: an executable file, like an encrypted self-extracting zip, may look unbelievably chaotic, and as it self-extracts it suddenly looks as if CSI came out of nowhere. LCI is preserved, but it looks like it was violated!
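A quick toy illustration, with ordinary compression (zlib) standing in for the encrypted self-extracting archive:

```python
import zlib

# A highly ordered payload: the kind of output that would register as "CSI".
payload = b"0123456789" * 1000

blob = zlib.compress(payload, 9)            # what ships inside the self-extracting file
print(len(payload), len(blob))              # 10000 bytes shrink to a few dozen

# The shipped blob itself looks chaotic: compressing it again gains nothing.
print(len(zlib.compress(blob, 9)) >= len(blob))

# "Self-extraction": the structure reappears, yet no information was created;
# it was inside the boundary all along, just in compressed form.
assert zlib.decompress(blob) == payload
```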
Thus in my heart I believe in LCI, but it's hard persuading others. LCI is actually derivable from set theory, and in the end it's pretty bland. We have a hint of it in the travelling salesman problem. LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions.
Also, the problem of "R" entering the TSPGRID system is exactly the problem in the analysis of biological systems. "R" enters through random quantum events. As cells mutate, it is analogous to several "R"s being added; the basic cell is the TSPGRID program. Thus each mutation generates CSI, but any evaluation of information increase must be made after algorithmic compression is applied.
I proposed a simple test with I-ERP (Information as Evidenced by Replicas of Polymers), which is basically the tally of alleles (is that the right term? you may need to help me out here). I-ERP may be increasing in the human gene pool; it is CSI, but it is DEADLY CSI.
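To be concrete about the tally I have in mind (this is only my crude reading of I-ERP, not a settled definition; the HBB/HbA/HbS names are just the textbook sickle-cell example):

```python
from collections import defaultdict

def ierp_tally(population):
    """Crude allele tally: number of distinct alleles seen at each locus.
    Each genotype is a dict mapping a locus name to its pair of alleles."""
    seen = defaultdict(set)
    for genotype in population:
        for locus, pair in genotype.items():
            seen[locus].update(pair)
    return {locus: len(alleles) for locus, alleles in seen.items()}

# Toy gene pool: a deleterious variant entering the pool raises the tally.
pool = [
    {"HBB": ("HbA", "HbA")},
    {"HBB": ("HbA", "HbS")},   # sickle-cell allele: more "I-ERP", but deadly CSI
]
print(ierp_tally(pool))        # {'HBB': 2}
```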
Worldwide I-ERP may be decreasing because of extinctions, but within each living species' population it is increasing until that species reaches extinction. The 2nd law guarantees I-ERP will be zero at some stage, but I fear life on Earth will be rapidly approaching I-ERP = 0 long before the universe burns out.
It's like propagating a bug during software execution. I don't believe natural selection will clean this out of the human gene pool any more than kicking a computer will clean out serious bugs. The number of DEADLY mutations (infusions of DEADLY CSI), I fear, is accumulating faster than natural selection can clean them out. Thus, if some ID paradigms are true, this has bearing on our very lives. For example, I am disturbed at the persistence of sickle-cell anemia and the persistence of many bad mutations. If they continue to accumulate, that is one ID prediction that will not be a very happy one.
I speak here not as an ID advocate so much as a concerned citizen. Thus ID, if for no other reason, should be explored to help alleviate the pain of the inevitable end of all things.
If DEADLY CSI is emerging faster than natural selection can purge it, I think in the interest of science we should explore this possibility.
Also, I fear that loads of I-ERP are being forever lost because of damage to our ecosystems (species extinctions), as in the rainforests.
You may be on the other side of the ID debate, but I think this is where there can be common ground for valuable research based on concern for ourselves and the environment. Improved definitions of information would be useful for the scientific enterprise, and I hope both sides will find a way to cooperate.
Sincerely, Salvador