Topic: Elsberry & Shallit on Dembski, Discussion of the criticism
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Nov. 12 2003,08:26   

I'm starting this thread for discussion of the paper that Jeff Shallit and I wrote on Dembski's ideas. Since it is now known to the public, I expect some criticisms of our criticisms will be made.

For example...

In a thread on ARN, "Rock" gripes that we imply that we have a positive theory but that we don't expound upon it. Well, we do have a positive approach to examining bit strings that is expounded upon for a couple of pages in our appendix. This is apparently not clear when one is simply "skimming" our paper. I have also started a thread here for discussion of our specified anti-information (SAI) as a replacement for Dembski's notion of specification.

"Rock" also complained that there was "nothing original" in our paper. It is certainly true that many of our criticisms had been expressed less formally and separately elsewhere in discussion on the Internet, but I'm not sure that that applies to all of the criticisms that we made. SAI is an application of the universal probability distribution, but the application itself is original with us.

In his last sentence, "Rock" asks if our ideas bear closer examination than Dembski's on these matters. Clearly, I think so. We identified a number of problems in Dembski's approach that we feel are insurmountable. Our SAI addresses each of those problems.


Please use this thread to bring attention to criticism made in other fora.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 06 2003,11:58   

Salvador T. Cordova critiqued a "misunderstanding" concerning TSPGRID and Dembski's LCI in a thread on ARN.

Quote
Originally posted by Salvador T. Cordova:

Originally posted by Lars the digital caveman:
"Hence, large amounts of CSI that weren't there before have been generated. This clearly contradicts the LCI. "

I agree with you that there are major problems with definitions in ID, and confusion is still rampant.  It would be in ID's interest to establish uniform standards.

However, consider the following: running a program that loops from 1 to a trillion and fills an array in memory with numbers 1 to a trillion. Is more information (not CSI) generated than was in the program before the run?  When one applies algorithmic compression, one sees the trillion bytes of information are not generated by running the program.  The information is algorithmically compressed to the program by definition.

Likewise, the authors misunderstood what is going on in the case of TSPGRID: instead of simple integers, they were generating CSI entities. But they forgot that the sum total of what was being generated was algorithmically compressible.  Apply the compression and one sees that no information was added within the system boundary.

If we take:

X =  TSPGRID
Y =  inputs (25, 461, 330)

as the starting point, that establishes the 'thermodynamic boundary', so to speak, right?

Running the program generates the following SET:
A  =  F(25) = CSI corresponding to 25
B  =  F(461) = CSI corresponding to 461
C  =  F (330) = CSI corresponding to 330

It appears that we've generated lots of new CSI, but this is not true because the above SET of CSI entities is algorithmically compressible to the following by definition:

X =  TSPGRID
Y = input of (25, 461, 330)

thus (X,Y) is 'isomorphic' to (A,B,C)

under algorithmic compression. Thus LCI is not violated.

However, the confusion is understandable, and thus I don't appeal to LCI personally very much.  And as I said, it's hard to create real-world thermodynamically closed systems to run experiments on.

"Salvador, one has to distinguish between information in general and CSI."

Agreed, ID might be better served, I believe, by reconsidering its definitions of information, CSI, and detectability techniques.

One can do a lot of detection without appealing to Dembski's definition of CSI.  Some of those methods I show in my threads.

The state of ID is more exotic than it needs to be, in my opinion. ID could benefit by emphasizing simpler detection methods.  

Once the less exotic are demonstrated to be effective, then things like what Dembski is showing, with some reformulation, will be more acceptable.

I love uncle Bill Dembski, but at times his definitions kill me.

Respectfully,
Salvador


I agree that there is a misunderstanding, but disagree as to who has the misunderstanding. Let's review a bit about TSPGRID.

Quote

Our algorithm is called TSPGRID, and takes an integer n as an input. It then solves the traveling salesman problem on a 2n * 2n square grid of cities. Here the distance between any two cities is simply Euclidean distance (the ordinary distance in the plane). Since it is possible to visit all 4n^2 cities and return to the start in a tour of cost 4n^2, an optimal traveling salesman tour corresponds to a Hamiltonian cycle in the graph where each vertex is connected to its neighbor by a grid line.

As we have seen above in Section 9, Dembski sometimes objects that problem-solving algorithms cannot generate specified complexity because they are not contingent. In his interpretation of the word this means they produce a unique solution with probability 1. Our algorithm avoids this objection under one interpretation of specified complexity, because it chooses randomly among all the possible optimal solutions, and there are many of them.

In fact, Gobel has proved that the number of different Hamiltonian cycles on the 2n * 2n grid is bounded above by c * 28^(n^2) and below by c' * 2.538^(n^2), where c, c' are constants [31]. We do not specify the details of how the Hamiltonian cycle is actually found, and in fact they are unimportant. A standard genetic algorithm could indeed be used provided that a sufficiently large set of possible solutions is generated, with each solution having roughly equal probability of being output. For the sake of ease of analysis, we assume our algorithm has the property that each solution is equally likely to occur.


If TSPGRID selects among the many possible solutions for each input randomly (and elsewhere in the paper we define random in AIT as incompressible), how is it that there is a compressible representation of the sort Salvador claims? As I see it, either TSPGRID is being asserted to not select among possible solutions randomly, despite what we plainly said, or compressibility is being redefined by Salvador here.
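To make the contrast concrete, here is a brute-force sketch (Python; the paper deliberately leaves TSPGRID's internals unspecified, so this toy enumerator is an assumption for illustration, not the authors' implementation). A deterministic program like Salvador's loop-to-a-trillion really is compressible to (program, input); the contested step is the final random choice below, where fresh randomness enters on every run.

Code Sample

import random

def optimal_tours(n):
    # Enumerate Hamiltonian cycles of the 2n x 2n grid graph by backtracking.
    # Feasible only for tiny n: the count grows exponentially (Gobel's bounds).
    side = 2 * n
    cells = {(x, y) for x in range(side) for y in range(side)}

    def nbrs(c):
        x, y = c
        return [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if p in cells]

    tours, start = [], (0, 0)

    def extend(path, seen):
        if len(path) == len(cells):
            if start in nbrs(path[-1]):
                tours.append(tuple(path))  # a complete closed tour
            return
        for p in nbrs(path[-1]):
            if p not in seen:
                seen.add(p)
                path.append(p)
                extend(path, seen)
                path.pop()
                seen.discard(p)

    extend([start], {start})
    return tours

def tspgrid(n):
    # Uniform choice among all optimal tours; fresh randomness on every call.
    return random.choice(optimal_tours(n))

print(len(optimal_tours(2)))     # 12 directed tours on the 4x4 grid (6 cycles, 2 directions each)
print(tspgrid(2) == tspgrid(2))  # usually False: two runs rarely agree

Already at n = 2 repeated runs on the same input typically disagree, which is the sense in which the output is not a function of the input alone.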

Quote
Running the program generates the following SET:

A  =  F(25) = CSI corresponding to 25

B  =  F(461) = CSI corresponding to 461

C  =  F (330) = CSI corresponding to 330


But running the TSPGRID program another three times generates another set,

A', but highly unlikely that A = A'

B', but highly unlikely that B = B'

C', but highly unlikely that C = C'

Et cetera.

Perhaps Salvador could explain how his idea of compression works, since I'm not seeing it. I think the problem here is that Salvador is treating TSPGRID as a deterministic algorithm when it isn't. The whole point of describing TSPGRID was to avoid a situation where every run of the program on the same input yielded the same result.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 07 2003,23:13   

I will review it and mull over it again.  Inasmuch as I have nothing but a little embarrassment at stake (nothing monetary), I will give you my honest opinion, but I must ponder how to better express what I wrote. I will not try to defend LCI, but will express why I believe TSPGRID is not a valid counterexample against LCI.

By the way, please forgive me if any of my posts on the ARN thread were a little caustic toward your work (I used the word 'misrepresentation' once).  Please accept my apologies.  If I argue a point forcefully, it is not meant as an attack on you personally.

Despite our differences, I want you to know I have the highest respect for your intellect and ability. It shines through in everything I've seen you post (like at Talk Origins).

You raise very good points that we in ID must address.  I will post as it comes to me and make retractions if appropriate; hopefully the truth will become evident to all sides of the debate.



Sincerely,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 07 2003,23:33   

Salvador,

Thanks for the quick reply.

Since I am working on a shorter version of the paper for publication, working out potential issues is quite useful. I appreciate comments that shed light on whether we're hitting the marks we set or not. I'm still thinking that TSPGRID demonstrates some problems in the argument for LCI, but it's possible that we've overlooked something. If that's the case, we'll have to revise the discussion of TSPGRID or abandon it.

I'm not convinced yet that it's time to man the lifeboats, though.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 08 2003,11:22   

Greetings Wesley,


If I am mistaken about any of your statements, please clarify.  We may need to go a few rounds to tidy things up. I may have a few typos in my notation too, so let's help each other out to at least clarify things.


---------------------------------------------------------------
From my vantage point we have 3 components to the TSPGRID operation.

At first glance we see 2 components

1.  TSPGRID program itself
2.  Random inputs in the form of "n", where 4n^2 is the number of cities

However, in actuality TSPGRID is composed of

A.  Deterministic elements
B.  Random selector "R", to select a solution:
"it chooses randomly among all the possible optimal solutions"


Thus the 3 components correspond to 1A, 1B, 2:
1.  TSPGRID program itself
 A.  Deterministic elements
 B.  Random selector which I label "R", to select a solution

2.  Random inputs in the form of "n", where 4n^2 is the number of cities


Thus in reality we have two random inputs, namely "n" and "R".

For a given "n", each run of TSPGRID corresponds to an "R".  So we effectively have a doubly nested loop:
each run for a given "n" admits "R" as an input.  Thus the system is thermodynamically open with respect to "R".  Each run of the TSPGRID program adds one integer of "R" to the mix.



To close the system we need to redefine the thermodynamic boundary for each  additional run.  We can do the following.

Let "R" be traced and recorded such that each run can be reconstructed.

TSPGRID for a given "n" running under R25 might generate the following segments for a shortest path:
(S1, S5, S30, ... S4n^2) = CSI25

Thus:

CSI25 = (S1, S5, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R25)

similarly, for example

CSI461 = (S2, S65, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R461)

CSI330 = (S25, S22, S650, ... S4n^2) is compressibly equivalent to TSPGRID("n",R330)


By way of analogy in algebra:
     T25 + T461 + T330 = T(25 + 461 + 330)

CRUDELY SPEAKING the CSI entities

(S1, S5, S30, ... S4n^2) + (S2, S65, S30, ... S4n^2) + (S25, S22, S650, ... S4n^2)

= TSPGRID("n",R25) + TSPGRID("n",R461) + TSPGRID("n",R330) =

TSPGRID("n") ( R25 + R461 + R330 )

If we define the information boundary ('thermodynamic boundary') around the system

     TSPGRID("n") + R25 + R461 + R330 = CSI25 + CSI461 + CSI330


we see there is no  violation of LCI.  Algorithmic compression is applied on the outputs (CSI25, CSI461, CSI330) not the inputs "n" and "R".
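Salvador's bookkeeping can be sketched directly (Python; treating the recorded random stream R as a seed, and using a labeled placeholder list for the exponentially large set of optimal tours, are my assumptions for illustration, not notation from either side):

Code Sample

import random

def tspgrid(n, R):
    # Stand-in for TSPGRID with the random selector R made an explicit input.
    # A labeled placeholder list stands in for the exponentially large set of
    # optimal tours, so the sketch stays self-contained.
    rng = random.Random(R)  # all of the run's nondeterminism flows from R
    solutions = [f"tour_{k}_of_grid_{2*n}x{2*n}" for k in range(1000)]
    return rng.choice(solutions)

# Once R is traced and recorded, every run is reconstructible from (n, R):
assert tspgrid(5, R=25) == tspgrid(5, R=25)
print(tspgrid(5, R=25) == tspgrid(5, R=461))  # almost surely False: new R, new tour

With R recorded, the output is a deterministic function of (n, R); whether that move answers the paper's claim is what a later reply from Wesley disputes.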

-------------------------------------------------------------------------

To offer a second view:

A polynomial of m-th order has m solutions; potentially all solutions may be unique.

However, in contrast, in the travelling salesman problem, for a given "n", there are m unique solutions, and m is bounded above by c * 28^(n^2) and below by c' * 2.538^(n^2), where c, c' are constants.

There are unique pathways such that:

We take run #1: our sum total of outputs is P1
We take run #2:  our sum total of outputs is P1 + P2
.
.
.
We take run #m:  our sum total of outputs is P1 + P2 + ... + Pm



The maximum CSI defined by the space of all possible solutions is bounded by (not necessarily equivalent to) I(P1) + I(P2) + ... + I(Pm), where I is the information content, P is the pathway, and "m" is the number of pathways. It is important to see that the total information is finite; it has a maximum.

I-total <= I(P1) + I(P2) + ... + I(Pm)

Now consider:
We take run #1: our sum total of outputs equal to TSPGRID(n,1)

We take run #2: our sum total of outputs equal to TSPGRID(n,1) + TSPGRID(n,2)
.
.
.

We take run #m: our sum total of outputs equal to TSPGRID(n,1) + TSPGRID(n,2) + ... + TSPGRID(n,m)


What was actually demonstrated was that I(P1) + I(P2) + ... + I(Pm) was
compressible to I(TSPGRID(n)) + I(1) + ... + I(m): this is the information content of the TSPGRID algorithm plus the information content of all the "m" inputs.

What made it look like LCI was violated was that each run of TSPGRID redefined the system boundary, and
I-total appeared to increase by I(Pk) when in fact it increased only by I(k) once algorithmic compression was applied to the sum total of outputs from all runs.


In a sense, LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions!!!

Taken to the extreme, when all solutions are found, the system boundary can no longer be redefined, and LCI
will be enforced.  This is the case with information in a closed universe.  At some point, LCI will be enforced.


I know the above descriptions look crude, horrendous, and convoluted, but if there are ways I can clarify,
please ask.

The problem in application, as Chaitin points out, is that we never really know that we have the optimal compression.  We only know that one compression is better than another!  What that means is one may be able to create a TSPGRID so convoluted that the optimal compression will not be apparent as it was in this case.  LCI may be true in that case, but it would be hard to prove.
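That point is easy to demonstrate with off-the-shelf compressors (a Python sketch; the repetitive "segment" string is an arbitrary example, not data from either side):

Code Sample

import bz2
import zlib

data = b"S1 S5 S30 " * 1000  # an arbitrary, highly repetitive stand-in string

for name, packed in (("zlib", zlib.compress(data, 9)),
                     ("bz2", bz2.compress(data, 9))):
    # Each compressed length is an upper bound on the string's true
    # (uncomputable) algorithmic information; we can rank the bounds,
    # never certify one as optimal.
    print(name, len(packed), "bytes, down from", len(data))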

A very severe problem in applying LCI is, however, evident: an executable file, like an encrypted self-extracting zip, may look unbelievably chaotic, and as it self-extracts it suddenly looks like CSI came out of nowhere. LCI is preserved, but it looks like it was violated!

Thus in my heart I believe in LCI, but it's hard persuading others. LCI is actually derivable from set theory, and in the end it's actually pretty bland.  We have a hint of it in the travelling salesman problem.  LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions.


Also, the problem of "R" entering the TSPGRID system is exactly the problem in the analysis of biological systems. "R" enters through random quantum events.  As cells mutate, it is analogous to several "R"s being added; the basic cell is the TSPGRID program.  Thus each mutation generates CSI, but evaluation of information increase must be made after algorithmic compression is applied.

I proposed a simple test with I-ERP (Information as Evidenced by Replicas of Polymers); this is basically the tally of alleles (is that right? you may need to help me out here).  I-ERP may be increasing in the human gene pool; it is CSI, but it is DEADLY CSI.

Worldwide, I-ERP may be decreasing because of extinctions, but within each living species' population it is increasing until the species reaches extinction.  The 2nd law guarantees I-ERP will be zero at some stage, but I fear life on Earth will be rapidly approaching I-ERP = 0 long before the universe burns out.

It's like propagating a bug during software execution. I don't believe natural selection will clean this out of the human gene pool any more than kicking a computer will clean out serious bugs.  The number of DEADLY mutations (infusions of DEADLY CSI), I fear, is accumulating faster than natural selection can clean them out.  Thus, if some ID paradigms are true, this has bearing on our very lives.  For example, I am disturbed at the persistence of sickle-cell anemia and the persistence of many bad mutations.  If they continue to accumulate, that is one ID prediction that will not be a very happy one.

I speak here not as an ID advocate so much as a concerned citizen.  Thus ID, if for no other reason, should be explored to help alleviate the pain of the inevitable end of all things.

If DEADLY CSI is emerging faster than natural selection can purge it, I think in the interest of science we should explore this possibility.


Also, I fear that loads of I-ERP are being forever lost because of damage to our ecosystems (species extinction), like in the rainforests.

You may be on the other side of the ID debate, but I think this is where there can be common ground for valuable research based on concern for ourselves and the environment.  Improved definitions of information would be useful for the scientific enterprise, and I hope both sides will find a way to cooperate.


Sincerely,
Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 08 2003,19:23   

By the way Wesley,

   Thank you for soliciting my thoughts and giving me a chance to clarify.  I hope my post will be of assistance to you.  I look forward to your reply.


Best Regards,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 09 2003,02:02   

Salvador,

Thanks for your clarifications.

As we note in section 5 and in the appendix, we believe that what CSI actually identifies, when it can be said to work at all, is the outcome of simple computational processes. That's why our "specified anti-information" (SAI) is a superior approach to "specification" than Dembski's methods. Given your obvious interest in algorithmic information theory, you should be able to confirm this for yourself briefly.

I'm afraid that I don't concur with your analysis of the TSPGRID algorithm. In order to get to compressibility, you've converted TSPGRID into TSPGRIDdet, a separate, deterministic algorithm that solves the same problem, and added to your "background knowledge" the particular sequence of random numbers that specify a particular output solution for TSPGRIDdet. That doesn't set aside our claim that CSI is increased by TSPGRID when one uses the "uniform probability interpretation". Essentially, your compressibility approach uses the "causal history based interpretation", which was not our claim. See section 7 for a thorough critique of the "causal history based interpretation".

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 09 2003,13:16   

Greetings Wesley,

What you point out are exactly the things that need to be resolved in ID's presentation of CSI.

I have claimed, as you can see, that CSI can emerge from an algorithmically constrained process in a thermodynamically open system being pumped by random inputs.  I do not believe CSI can emerge without an algorithmic process somewhere or intelligence in the pipeline (be it the laws of physics, like in a snowflake, or whatever).

    We in ID are hurting ourselves by not addressing what you have brought up.  I believe CSI cannot emerge apart from an algorithmically constraining influence or intelligence.  Your definition is worth exploring on that count.  Whether algorithms can spontaneously be implemented is where much of ID differs from ideas of undirected abiogenesis.

I think your critiques should be respected and answered.  Again, thank you for soliciting my thoughts.

With your permission, I'd like to reference this thread on the ARN board and the ISCID board.


With much respect,
Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 10 2003,00:29   

Greetings Wesley,

I have continued to review your work on the SAI.  If it is as powerful as you say, and my intuition is in agreement with that, then independent of the whole origins debate, you've given ID an absolute gift!

Please, if you would, offer your thoughts:  Do you believe algorithmic processes can emerge from purely random processes?  Can SAI structures emerge without some intervening computational constraint (after all, that's what SAI was intended to detect)?

This is basically the origin of life issue in my mind:  computational constraints do not emerge spontaneously (except for the computational constraints offered by the laws of physics).

 

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 11 2003,14:20   

We cover some natural instances of computational systems in section 5 of the paper.

I think that you should have a look at this thread on SAI before getting too excited about how much of a "gift" SAI is to ID. SAI does not support the inference of action of an intelligent agent, just a simple computational process. As such, there is no distinction between naturally occurring computational processes, algorithms deployed by an intelligent agent, and the direct action of intelligent agents to be had via use of SAI. All may "generate" arbitrarily large amounts of SAI.

I think that Dembski is likely to think of SAI as more of a poison pill than a gift.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Dec. 17 2003,13:45   

Wesley,

I saw your and Shallit's paper referenced on the ARN forum, read it, started thinking about Dembski's use of the word "specification," and have a few comments about it.  This isn't exactly a critique of your paper but it may be helpful.

If we could visit many earth-like planets in the universe, we would expect to see some things that were similar to things on Earth and other things that were different.  For example, we would expect to see life, but we wouldn't expect to see George Bush.  We would expect sometimes to hear language, but we wouldn't expect to hear English.  The first kind of thing is what I suspect Dembski had in mind when he first defined his "specification."  The second kind of thing is what he would call a chance event.

Dembski says that his "specification" is the rejection region used in Fisherian statistics.  However, there are usually many possible rejection regions that one might choose when assessing the null hypothesis for a questioned event.  Dembski in The Design Inference says that one of the events in the specification must be the event at issue.  This seems to be cheating since, if the event is in the rejection region, the null hypothesis (that nature did it) is rejected.  However, Dembski claims that this is legitimate providing one can specify the rest of a rejection region using information that is independent of the null hypothesis.  Dembski says he wants to avoid "cherry picking," meaning that he does not want to pick a rejection region merely because it contains the questioned event.  But if many rejection regions are possible and he deliberately picks one that contains the questioned event, what else is he doing?  Put another way, how does he demonstrate that he is not "cherry picking"?

If the null hypothesis is that nature did it and if the information that defines the specification is also derived from nature, would Dembski still conclude that rejecting the null hypothesis implies that the event must be a design event?  Put another way, is it logical to conclude design without using a rejection region that is based on a competing design hypothesis?  Dembski's examples of rejection regions typically postulate that a man or a man-like alien is responsible.  Examples are Caputo and the prime numbers in the film "Contact."  He also said that the bacterial flagellum was like a boat motor.

Having said all that, it is not obvious to me that Dembski actually needs the concept of a specification.  Dembski's goal is to show that there is a designer who might be God.  From the "Intelligent Design Coming Clean" paper on his web site: "... a designer, who for both Van Till and me is God...." Dembski doesn't need a general procedure to do this.  All he needs is a good argument for one event.  A specification is an event (usually, an event that is a collection of other events).  If he can show that an event that is a specification must have been designed, then it is unimportant that the specification specifies other events.  Dembski has the answer he wants.  Put another way, a specification is nothing more than an event that would be interesting to working scientists anyway.

I don't understand "Appendix A.1 A different kind of specification."  Some strings are random and cannot be compressed, some strings can be compressed using a known program, and still other strings could be compressed except that we don't know how.  If there is a program to compress a string, it could be the invention of an intelligent designer or it could be a model of a natural process.  So what does this have to do with specifications?

A suggestion that may be helpful in your quest to shorten your paper:  Focus on issues that Dembski can't repair.  Ignore issues such as the claim that telephone numbers are CSI or the error in the prime number sequence. Discussions about Dembski's gaffes tend to obscure the more significant problems in Dembski's writing.

Ivar

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: July 19 2004,16:03   

Ivar wrote:

Quote

I don't understand "Appendix A.1 A different kind of specification."  Some strings are random and cannot be compressed, some strings can be compressed using a known program, and still other strings could be compressed except that we don't know how.  If there is a program to compress a string, it could be the invention of an intelligent designer or it could be a model of a natural process.  So what does this have to do with specifications?


The existence of a minimal program/input pair that results in a certain output indicates that there exists an effective method for production of the output. Since effective methods are common to intelligent agents and instances of natural computation, one cannot distinguish which of the two sorts of causation might have resulted in the output, but one can reject chance causation for the output. We haven't so much repaired specification as we have pointed out a better alternative to it.

This leads me to a claim about Dembski's design inference: Everything which is supposedly explained by a design inference is better and more simply explained by Specified Anti-Information.

SAI identifies an effective method for the production of the output of interest. The result of a design inference is less specific, being simply the negation of currently known (and considered) regularity and chance. The further arguments Dembski gives to go from a design inference to intelligent agency are flawed. On both practical and theoretical grounds, SAI is a superior methodology to that of the design inference.
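The flavor of an SAI-style test can be sketched with a real compressor (Python; SAI as defined in the paper's appendix uses Kolmogorov complexity, which is uncomputable, so the zlib stand-in here is my substitution for illustration, not the paper's construction). A string that a concrete compressor shortens by a clear margin admits a much shorter program/input pair, so chance (incompressibility) can be rejected for it:

Code Sample

import os
import zlib

def looks_nonrandom(s, margin=8):
    # Crude SAI-flavored test: compression by more than `margin` bytes
    # witnesses a short effective method (program/input pair) for s,
    # so chance (incompressibility) can be rejected.
    return len(zlib.compress(s, 9)) + margin < len(s)

print(looks_nonrandom(b"H" * 500))       # True: output of a simple process
print(looks_nonrandom(os.urandom(500)))  # almost surely False: no short description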

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: July 31 2004,01:49   

Wesley,

I am now beginning a critique of your paper at:

ARN Discussion

The moderators have roped off the discussion to you, me, Jeffrey Shallit, Bill Dembski, and Jason Rosenhouse (if he wishes to participate).

I invite your participation.

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 04 2004,05:37   

On "specified complexity" and equivocation:

Salvador writes:

Quote

Wesley and Jeffrey say "Strongly implies that Davies' use of the term is the same as his own". I don't think that is a charitable reading of page 180.


If Dembski had simply noted Davies' use of the term "specified complexity" and stated how his use differed, that would be one thing. But Dembski criticizes Davies for his willingness to credit natural mechanisms with the production of "specified complexity". Dembski makes no distinction between the use Davies makes of "specified complexity" and the different use Dembski does. It is Dembski who has the deficit in charity here.

Quote

For the sake of completeness I ask Wesley to justify that Davies ever gave a precise mathematical definition of "specified complexity" (not complexity) in terms of Kolmogorov complexity.


It's completely irrelevant, which is the only sort of completeness I can make out for the above question. We never claimed that Davies did any such thing.

Quote

The issue is not "complexity per se", but "tightly specified complexity". I invite Wesley to explain how Davies distinguishes plain vanilla K-complexity from "tightly specified complexity".


No, the issue is whether Davies' use of "specified complexity" is open to the criticism that Dembski makes of it. The fact that Davies uses "complexity" to mean something entirely different from Dembski's usage is a clue that the two usages of "specified complexity" differ significantly.

Quote

Wesley, you're entitled to your opinion, but I think you do not give page 180 of No Free Lunch a charitable reading whatsoever.


<shrug> I don't think Dembski reads Davies charitably. We seem to be at an impasse on this one.

Quote

Bill clarifies his position versus that of Davies and Orgel in Design Revolution page 84.


Dembski notes that Orgel and Davies use the term "loosely". He doesn't say that their usage is significantly different from his own. The implication is that the difference is in the degree of precision of use, with Dembski having greater precision.

Quote

Is Granite K-complex in terms of the composition and the positioning of the molecules? If so, then even Orgel does not use complexity the way you argue Davies uses it.


<shrug> We never said that the usage of Orgel was the same as that of Davies.

Quote

Bottom line is, Bill has made an effort to distinguish his definitions from others'. The complaint that Bill "strongly implies that Davies' use of the term is the same as his own" I think has been settled in a subsequent book, Design Revolution.


I don't agree. Dembski has not retracted the criticism of Davies which is dependent upon Davies' use being the same as Dembski's. Simply saying that Davies' use was "loose" in some sense doesn't get Dembski off the hook for this.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 04 2004,06:27   

To Define "Intelligence", or Not to Define It...

Salvador takes issue with a criticism of ours:

Quote

Just as Dembski fails to give a positive account of the second half of "intelligent design", he
also fails to define the first half: intelligence.


Salvador notes something Dembski said earlier:

Quote

Bill Dembski in IS INTELLIGENT DESIGN A FORM OF NATURAL THEOLOGY
Within intelligent design, intelligence is a primitive notion much as force or energy are primitive notions within physics. We can say intelligible things about these notions and show how they can be usefully employed in certain contexts. But in defining them, we gain no substantive insight.


Salvador concludes:

Quote

I think therefore that Wesley and Jeffrey's claim about Bill:

quote:
"he also fails to define the first half: intelligence"

is an unfair representation of Bill's position on intelligence. If intelligence is primitive to reality, not defining it but leaving it as an undefined term is reasonable.

I suggest Wesley and Jeffrey withdraw that complaint from their paper in the interest of representing Bill's position fairly.


Wesley and Jeffrey may not agree with Bill, but owe him the courtesy of representing his work fairly. Bill has explicitly said he did not believe "defining" intelligence will gain substantive insight. Jeffrey I'm sure could offer examples of undefined terms in mathematics, etc....


So, the issue isn't that there is an inaccuracy in what we said, but rather that we aren't "fair" in making this observation.

I think that I will need to revise this criticism, as it becomes more trenchant with the noting of Dembski's demurral at even making an attempt to clarify what "intelligence" means when he deploys it.

Just as ID advocates like to note that the term "evolution" can have many different meanings, it is possible to note that "intelligence" also has many different meanings. Salvador's defense of an "undefined" use of intelligence critically depends upon the undefined term having a unitary and agreed-upon significance to the class of readers, and while this might be true for the concept of "force" in physics, this is clearly not the case for "intelligence".

The phrase "intelligent design", for example, doesn't really mean that a "design" will have characteristics that indicate that it was intelligently arrived at. Rather, all that is meant is that some agent (as opposed to a process) was involved in causing some event. The putative agent is carefully relieved of any responsibility for actually displaying what an outside observer might call "intelligence" (see Dembski's essay on "optimal design").

(Actually, I find it interesting that Salvador's quote concerning "force" is incomplete. The whole paragraph is: "In most expositions of mechanics, force is usually taken as a primitive, without an explicit definition. Rather it is taken to be defined implicitly by the (often vague) presentation of the theory within which it is contained. Various physicists, philosophers and mathematicians, such as Ernst Mach, Clifford Truesdell and Walter Noll have contributed to the intellectual effort of obtaining a more rational, non-circular, and explicit definition of force." Salvador only quoted the part in italics. The rest of the paragraph indicates that not everyone is just as comfy with undefined terms lying about as Salvador is.)

Of course, in the interest of brevity that whole sentence and the possible further line of criticism suggested by Salvador could be dropped, as its absence would do no harm to the remainder of the section on "Intelligence" and the next sub-section, "Animal Intelligence".

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 05 2004,19:47   

Salvador writes:

Quote

I've done all I can to let Elsberry know that I'm responding to his criticism of you. He hasn't shown himself at ARN. They've accused you of not responding to your critics; well, I'm giving them a taste of their own medicine. They don't seem eager to respond to me: Elsberry, Shallit, Dembski, Cordova.


That's amusing. Salvador apparently doesn't think that a response can be made anywhere except where he dictates. He is wrong on that point, too. Consider it a vote of "no confidence" in ARN's management on my part.

Dembski has been accused of not responding to his critics, and Salvador has eagerly expressed (and in public, no less) his willingness to take a "grenade" for Dembski so that Dembski can continue to not respond to critics. Who is that supposed to fool?

There are many issues that I have raised that have received no response from Dembski. Some are more serious than others. Some date back to our first encounter in 1997. Salvador has had his ARN thread up for just a few weeks, and substantive commentary in it is only a few days old. Even if I hadn't already responded, Salvador would simply be getting a taste of the medicine that Dembski so freely dispenses to critics.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Aug. 08 2004,18:19   

Quote (Wesley R. Elsberry @ July 19 2004,16:03)
We haven't so much repaired specification as we have pointed out a better alternative to it.

I don't see the point of this.  Why do you want a "better alternative" to specification?

Dembski had to define a term like specification so that he could justify ignoring those improbable chance events that occur all the time, e.g., strings of coin flips.  His definition of specification is confusing, and, probably for him, that is good.  Now he can write books rather than short and obviously flawed papers.  But I don't see why you want to help him to continue the confusion.

There is another alternative to specification that is less confusing: limit the design question to events that are biological events.  Actually, this may not be a real limitation.  So far as I know, the only possible specified, complex events (as defined by Dembski) are man-made events (which are irrelevant except as examples) and biological events.  (Fictional events are also irrelevant.)

Questions about the origins of biological events are legitimate.  However, Dembski's answer is not.  He asserts that if we do not have a detailed, experimentally verified theory that explains how nature did it, then we can presume that some unknown designer did it.  No experimental evidence confirming a design hypothesis is required.  Dembski seems to believe that this assertion is genuine science.

Incidentally, I assumed that when Dembski wrote that, "Where direct, empirical corroboration is possible, design actually is present whenever specified complexity is present," he was referring to man and to man-made objects.  (See here.)  Maybe this is his experimental evidence confirming that life was designed.

Ivar

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 12 2004,21:22   

Wesley,

I'm willing to discuss the paper with you here.  I have taken time to learn the material better.  I took your paper seriously enough to study it.

My mind has changed on a few issues since that time, mostly against the content in your paper.

If you won't come to ARN, I'm willing to come here to your website.

You're a gentleman, Wesley, and it's not in my nature to be polemical toward a gentleman, but I think there are some things seriously wrong with what you wrote.

For starters:



Quote

Wesley and Jeffrey wrote:

Dembski also identifies CSI or "specified complexity" with similarly-worded concepts
in the literature. But these identifications are little more than equivocation.

For example, Dembski quotes Paul Davies' book, The Fifth Miracle, where Davies uses the term "specified complexity", and strongly implies that Davies' use of the term is the same as his own [19, p. 180]. This is simply false. For Davies, the term complexity means high Kolmogorov
complexity, and has nothing to do with improbability.



What Bill wrote on page 180:

Quote

In The Fifth Miracle Davies goes so far as to suggest that any laws capable of explaining the origin of life must be radically different from any scientific laws known to date. The problem, as he sees it, with currently known scientific laws, like the laws of chemistry and physics, is that they cannot explain the key feature of life that needs to be explained. That feature is specified complexity. As Davies puts it: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity." Nonetheless, once life (or more generally some self-replicator) arrives on the scene, Davies thinks there is no problem accounting for specified complexity...

In this chapter I will argue that the problem of explaining specified complexity is even worse than Davies makes out in The Fifth Miracle.


You're free to say what you want, Wesley, but I think the way you represented page 180 was a stretch.

Further, Bill clarified his position versus that of Davies and Orgel in his latest book, Design Revolution, page 84:

   
Quote
 
The term specified complexity is about thirty years old. To my knowledge, origin-of-life researcher Leslie Orgel was the first to use it.  The term appeared in his 1973 book The Origins of Life, where he wrote, "Living organisms are distinguished by their specified complexity.  Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity."  More recently, in his 1999 book The Fifth Miracle, Paul Davies identified specified complexity as the key to resolving the problem of life's origin:

" Living organisms are mysterious not for their complexity  per se, but for their tightly specified complexity.l  To comprehend fully how life arose from nonlife, we need to know not only how biological information was concentrated, but also how biologically useful information came to be specified"

Neither Orgel nor Davies, however, provided a precise analytic account of specified complexity.  I provide such an account in The Design Inference (1998) and its sequel No Free Lunch (2002).  Here I will merely sketch that account of specified complexity. Orgel and Davies used the term specified complexity loosely.


Is Granite K-complex in terms of the composition and the positioning of the molecules?  If so, then even Orgel does not use complexity the way you argue Davies uses it.  

Bottom line is, Bill has made an effort to distinguish his definitions from others'.  The complaint that Bill "strongly implies that Davies' use of the term is the same as his own" I think has been settled in a subsequent book, Design Revolution.


cheers,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 12 2004,23:04   

Salvador,

Look a little further up the page. I've already responded to this bit under the heading, "On 'specified complexity' and equivocation".

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 13 2004,09:58   

On "simple comuptational processes":

Salvador wrote:

Quote

But for starters, if I have "500 coins heads" in a box and this was done by a coin-ordering robot, how can one say the robot is performing a "simple computational process"? That robot could be incredibly complex or simple; the resulting output of "500 coins heads" speaks nothing of the complexity inside the robot needed to achieve "500 coins heads".


This would be the "Rube Goldberg" objection. One can come to the same result by any of a number of means, some of them much more complex than others. But the point of Algorithmic Information Theory is that no more information exists in the output than is to be found in the shortest program/input pair that produces that output. That longer program/input pairs exist is irrelevant to the result.

Quote

I invite Wesley to quantify the phrase "simple computational process".


That would be the appendix detailing "Specified Anti-Information".

Quote

I invite Wesley and Jeffrey to define the number of bits needed to implement a basic computer, such as a Universal Turing Machine that can perform "the simple computational process".


I don't see why one need postulate a UTM for every job. That's overkill. That's another reason why we made reference to cellular automata.

Quote

Bottom line, an orderly arrangement (like coins all heads) speaks nothing of the level of complexity required to create that orderly arrangement. The above quote is therefore seriously flawed.


Non sequitur. Dembski's argument offers to exclude natural processes in principle; the possible existence of simple computational processes instantiated by natural processes capable of producing the observed event vitiates that claim. That more complex processes might also do the same job in no way reduces the force of this rebuttal.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 13 2004,15:14   

Wesley,

First of all, I thank you for the courtesy of replying to me.  I am a hostile critic of your work on these emotionally charged issues.  I've not been exactly kind in some of my comments about your work or the things you have said (which you see all over ARN), and I am thus all the more grateful for the favor of your replies.

I recognize I'm a guest here at your website, thus I will try to keep my postings on this thread to a minimum unless of course you wish me to elaborate or debate more.

I may post verbosely at ARN on your paper.  Post your responses wherever you please, if you are so inclined.  The questions that need clarification from you, however, I will post here.  My goal now in posting here is specifically to ensure that I represent and understand your position accurately.

I know you'd rather debate Bill Dembski than me, so with that in mind, I will not try to be too much of a distraction to you at your website.

Again, thank you for the favor of your responses.  If you really want me to engage your paper I will; otherwise, I will limit my participation on this thread.



respectfully,
Salvador

PS
I'll post at ARN to let everyone know that you have responded to me now.  Thank you.

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 13 2004,15:45   

Quote

Wesley wrote:

There are many issues that I have raised that have received no response from Dembski.



Well, for the record, what avenue would you think appropriate for a public exchange with William Dembski?

Do you want him to respond with counter-papers to your papers?  Seriously, do you want him to come out here and post to your thread?

I think a lot of what you write about his work does not represent his work or his position at all.

Your own ideas have merit, such as SAI.  However, your attempts to state Bill's ideas in your own words I don't think are very charitable and end up being strawmen.

Seriously Wesley,  I corresponded with Bill over some of the points I too had questions on.  

What I dispute with you are things I think are as plain as day.  For example, my quotes from Design Revolution I think cleared some things up as far as Davies', Orgel's, and Dembski's definitions of Specified Complexity.  You obviously disagree, but I thought what Bill wrote in that book was quite sufficient to address a point you raised in your paper.

We're not going to resolve anything on this thread, but I want to make sure I represent your words accurately.  As much as I'll be tempted to quibble, I'm probably going to let a lot of things go.  I may post more elaborate responses at ARN.  You are welcome to respond or not respond.

I will make an effort from now on not to make a big deal if you have no response.  I am willing to do that because I see you have made an effort to respond.

State what you want from me, and what you feel is fair in this dialogue.  I will do my best to keep the discourse open.  If I say something over at ARN you feel is unfair, rude, or misrepresents you, you are free to challenge me on it, and I'll do my best to make amends.  I'm for fair play.  OK?


Thank you.


Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 13 2004,17:21   

Hi Wesley,

As I indicated elsewhere, I will post some comments to your site.  I will try to be respectful of your time. My intent is not protracted discussion, but to make sure I represent your work correctly.

Do you believe genetically engineered products evidence CSI by Dembski's definition?  Here is a case where potentially non-algorithmically compressible information is CSI.  This would refute your paper's claim that Dembski confuses what you call SAI and CSI.  It seems to be a misinterpretation Mark Perakh also makes.

I would actually argue that there is an overlap between objects exhibiting SAI and CSI, but in actuality they are not the same.  Some forms of SAI are a subset of CSI.


Thanks.

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 16 2004,20:35   

Salvador T. Cordova wrote:

Quote

Do you believe genetically engineered products evidence CSI by Dembski's definition?


I have no need to believe that anything evidences CSI by Dembski's definition. The reason that I need not believe any such thing is that there has never been a successful application of Dembski's EF/DI via the GCEA meeting or exceeding the "universal small probability" of any event whatsoever. If you had a citation of such a published successful, fully worked-out calculation, I'm sure that you would share that with us.

Until then, it's all just blowin' smoke.

Quote

Here is a case where potentially non-algorithmically compressible information is CSI.  This would refute your paper's claim that Dembski confuses what you call SAI and CSI.


I have no recollection of saying that Dembski confuses SAI with anything else. That would hardly be sporting, since "SAI" as a term was introduced in that paper. Perhaps a specific citation of the purported faulty language would be appropriate?

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,08:32   

We clearly will not agree on many things Wesley, but thank you for responding.

My intent is to make sure that I am representing you correctly.  And that is why I am asking you questions.

The fact you said:
Quote

I have no need to believe that anything evidences CSI by Dembski's definition.


evidences that you do not represent, and possibly do not understand, CSI.  When a designer like myself creates an ID artifact, there can be no doubt that in many cases there is CSI.  It is the blueprint-artifact metaphor.

That is why I asked about DNA genetic engineering.  I corresponded with Dembski (who by the way was Shallit's student as evidenced in the Acknowledgements of Design Inference) last week to quadruple check that my interpretation was correct.  I was certain I was right, and I was.

Thus, as I have suspected, your paper incorrectly represents Dembski's work.  If your complaint is one of clarity, I will pass that on and we'll make the adjustments.

Your SAI concept has merit.  


For the record, not all of my posts on the matter at ARN are technically correct, and I have to fix a few things.  I would not be surprised to see the ID leadership or rank and file at some point write a refutation of your and Shallit's paper.

I know that we are on both sides of an emotionally charged issue, and I am grateful you have offered to dialogue with me, even if the communications are mostly dysfunctional, we at least have some dialogue.

Thank you.

Respectfully,
Salvador

  
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,10:20   

Quote (scordova @ Sep. 17 2004,08:32)

When a designer like myself creates an ID artifact, there can be no doubt that in many cases there is CSI.  It is the blueprint-artifact metaphor.

To reiterate a comment made on the ARN board, an object does not have CSI merely because it was made by man.  Dembski's definition of CSI requires that one show that a non-intelligent nature could not create the object, i.e., that it could not be an object resulting from regular and chance events.

Note that if a non-intelligent nature did create life and, eventually, man, then there has been a chain of regular and chance events that resulted in the objects that have been made by man.  The probability of such a chain cannot be smaller than Dembski's Universal Probability Bound of 10^-150, i.e., it cannot be "complex."  One cannot deduce that man is the product of an intelligent designer merely because man is an intelligent designer.
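For scale (a quick Python check of the arithmetic; the 500-coin-flip example is the canonical one for this bound):

Code Sample

import math

UPB = 1e-150                  # Dembski's universal probability bound
p = 0.5 ** 500                # probability of one particular string of 500 fair flips
print(p < UPB)                # True: 2^-500 is about 10^-150.5
print(500 * math.log10(0.5))  # about -150.5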

Ivar

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 17 2004,10:21   

Salvador,

I repeat: I have no need of belief in evidence of CSI according to Dembski's definitions. If CSI were claimed by Dembski's definitions, which involve working out all the parts of the EF/DI confirmed by GCEA, there would be no question of what was claimed, nor how the conclusion was drawn, and all would be open to examination and critique.

This has not been done. I have no knowledge of a specific example that corresponds to what you are talking about, and I certainly am not convinced by mysterious private email exchanges that I am not privy to. What you describe sounds like an example of what Jeff Shallit and I referred to as the "Sloppy Chance Elimination Argument" in our paper.

It's a longstanding criticism of mine that Dembski has not made available the calculations that his public claims imply have already been accomplished (as in his 1998 "Science and Design" article in "First Things", which strongly implied that his EF/DI and GCEA had been applied to the systems labeled as IC by Michael Behe).

This latest missive of yours simply confirms that my analysis on this point has been spot-on.

Edited by Wesley R. Elsberry on Sep. 17 2004,10:36

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Art



Posts: 69
Joined: Dec. 2002

(Permalink) Posted: Sep. 17 2004,11:22   

Sal, this may be "piling on", but I have a recommendation for you.  You need to review Dembski's "parable" about the archer and the bullseyes.  Most, if not all, of the things you are arguing (here and on other boards) as possessing CSI are actually items that, using this parable, are rightly called fabrications.

   
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,16:42   

Quote


This has not been done. I have no knowledge of a specific example that corresponds to what you are talking about, and I certainly am not convinced by mysterious private email exchanges that I am not privy to. What you describe sounds like an example of what Jeff Shallit and I referred to as the "Sloppy Chance Elimination Argument" in our paper.

It's a longstanding criticism of mine that Dembski has not made available the calculations that his public claims imply have already been accomplished (as in his 1998 "Science and Design" article in "First Things", which strongly implied that his EF/DI and GCEA had been applied to the systems labeled as IC by Michael Behe).

This latest missive of yours simply confirms that my analysis on this point has been spot-on.


Fair enough.  I'll suggest to Bill we at least do some of these for human examples, and maybe you will be convinced CSI at least does exist in human affairs.

You actually solved a major problem for establishing detachable, non-postdictive specifications with your SAI.  That was a gift!

An example closely analogous to SAI is the problem of "convergent evolution", which Sternberg calls "neo-Darwinian" epicycles.

The weakness of arguing protein sequences as evidencing CSI I think needs review, as I believe Art makes a very good case which the IDists need to address.  Same with the flagellum.


Quote

I have no recollection of saying that Dembski confuses SAI with anything else. That would hardly be sporting, since "SAI" as a term was introduced in that paper. Perhaps a specific citation of the purported faulty language would be appropriate?



It is not my intent to ever misrepresent you; that is why I am here asking for clarifications and your own words.

You in fact wrote:
Quote


An alternate view is that if specified complexity detects anything at all, it detects the output of simple computational processes. This is consonant with Dembski's claim. It is CSI that, within the Chaitin-Kolmogorov-Solomonoff theory of algorithmic information, identifies the highly compressible, nonrandom strings of digits.


Are not compressible strings strings which evidence SAI?  If not, I'll amend my assertion, no problem.  I'm for fair play.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 18 2004,11:22   

This is something that I responded to on The Panda's Thumb. I'm repeating it here so that there is another channel of communication for this message to Salvador.

*****************

Salvador T. Cordova wrote:

Quote

Sternberg’s professional qualifications in relevant fields, it seems, exceed even those of Gishlick, Elsberry, Matzke combined.  So I hope that will be taken into consideration in view of charges the article is substandard science.


The credentials of Sternberg don't change the content of Meyer 2004. That's pure argument by authority, and it just doesn't work in science.

The similar situation with regard to antievolutionist fascination with (mis)quotation is something I've commented upon before:

Quote

The antievolution fascination with quotations seems to stem from the anti-science mindset of "revelation": testimonial evidence reigns supreme in theology, thus many antievolutionists may mistake that condition as being the same in science. However, science has pretty much eschewed assigning any intrinsic worth to testimonial evidence. Quotations from some source are taken as being an indication that some condition as stated holds according to the reliability of the speaker, as seen by reviewing the evidence. Antievolutionists "get" the first part, but have real difficulty coming to terms with the second part. If some Expert A says X, then the antievolutionist expects that no lesser known mortal will dare gainsay Expert A's opinion on X. However, such a situation is routine in science. Anyone presenting Evidence Q that is inconsistent with X then has shown Expert A to be incorrect on X. If the person holding forth shows repeatedly that they can't be trusted to tell us correct information on, say, trilobites, then that just means that we likely don't hold any further talk on trilobites from that source in high regard.


http://www.antievolution.org/people/wre/quotes/

We pointed out problems with Meyer 2004. The issue is whether our criticism stands up to scrutiny. Salvador has avoided dealing with the content of our criticism, and is apparently forced to adopt fallacious modes of argumentation to defend Meyer 2004.

I've pointed out to Salvador exactly what he needs to do to show that his boasting about the Elsberry and Shallit 2003 paper being the wrong citation to critique Meyer 2004 by was on track. These are items that, if I were wrong about them, Salvador should quickly be able to show it. This is the FOURTH TIME I've entered this in response to Salvador's comments here since August 31st. I'll email them to him, too, just to eliminate any weak apologetic that he had somehow overlooked the previous presentations.

===================

(From http://www.pandasthumb.org/pt-archives/000430.html#c7223 )

Quote

In the meantime, I hope Stephen Meyers will read these reviews and learn.  I can confidently say he can ignore any challenges offered by the "Elsberry and Shallit 2003" paper.  I don't mind you guys building your case on it, though. It'll just be that much more of an embarrassment to see it all collapse when that paper is refuted.

It doesn’t matter if “the paper” is “refuted”; what matters is whether the particular claims made are supported and true. Here are the claims again:

Quote

2. Meyer relies on Dembski’s “specified complexity,” but even if he used it correctly (by rigorously applying Dembski’s filter, criteria, and probability calculations), Dembski’s filter has never been demonstrated to be able to distinguish anything in the biological realm — it has never been successfully applied by anyone to any biological phenomena (Elsberry and Shallit, 2003).

3. Meyer claims, “The Cambrian explosion represents a remarkable jump in the specified complexity or ‘complex specified information’ (CSI) of the biological world.” Yet to substantiate this, Meyer would have to yield up the details of the application of Dembski’s “generic chance elimination argument” to this event, which he does not do. There’s small wonder in that, for the total number of attempted uses of Dembski’s CSI in any even partially rigorous way number a meager four (Elsberry and Shallit, 2003).


In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (2), all one has to do is produce a citation in the published literature (dated prior to our paper) showing a complete and correct application of Dembski’s GCEA to a biological system such that “CSI” is concluded. Thus far, I’m unaware of any such instance. The only thing that makes any moves in that direction at all is Dembski’s section 5.10 of “NFL”, and we were careful to make clear why that one was both incomplete and incorrect.

In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (3), all one has to do is produce citations in the published literature (dated prior to our paper) showing the attempted application of Dembski’s GCEA to more than four cases. I’m unaware of any further examples that have been published, but I’m perfectly open to revising our number to account for all the instances.

Until and unless those citations are forthcoming, the braggadocio about how the Elsberry and Shallit 2003 paper can be safely ignored seems somewhat out of place.

=====

I posted that on August 31st. As far as I can tell, neither Salvador nor any other ID advocate has made the slightest headway in showing that I was inaccurate in either claim made above. Salvador has taken up an aggressive grandstanding technique, though I think that it is obvious to all that there is little to no substance as yet to back it up. If I were wrong on the two points above, it seems to me that it would be simplicity itself for some ID advocate to show that I was wrong, and I would have expected that to happen already. I predict that what I've written here will again disappear into the ID memory hole of inconveniently true criticisms.

If I'm wrong here, though, I'm willing both to take my lumps and acknowledge whoever it is that shows me to be wrong. I'm still waiting for the documentation. I suspect I will wait a long, long time.

Edited by Wesley R. Elsberry on Sep. 18 2004,11:31

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    