Topic: Elsberry & Shallit on Dembski, Discussion of the criticism
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Nov. 12 2003,08:26   

I'm starting this thread for discussion of the paper that Jeff Shallit and I wrote on Dembski's ideas. Since it is now known to the public, I expect some criticisms of our criticisms will be made.

For example...

In a thread on ARN, "Rock" gripes that we imply that we have a positive theory but that we don't expound upon it. Well, we do have a positive approach to examining bit strings that is expounded upon for a couple of pages in our appendix. This is apparently not clear when one is simply "skimming" our paper. I have also started a thread here for discussion of our specified anti-information (SAI) as a replacement for Dembski's notion of specification.

"Rock" also complained that there was "nothing original" in our paper. It is certainly true that many of our criticisms had been expressed less formally and separately elsewhere in discussion on the Internet, but I'm not sure that that applies to all of the criticisms that we made. SAI is an application of the universal probability distribution, but the application itself is original with us.

In his last sentence, "Rock" asks if our ideas bear closer examination than Dembski's on these matters. Clearly, I think so. We identified a number of problems in Dembski's approach that we feel are insurmountable. Our SAI addresses each of those problems.


Please use this thread to bring attention to criticisms made in other fora.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 06 2003,11:58   

Salvador T. Cordova critiqued a "misunderstanding" concerning TSPGRID and Dembski's LCI in a thread on ARN.

Quote
Originally posted by Salvador T. Cordova:

Originally posted by Lars the digital caveman:
"Hence, large amounts of CSI that weren't there before have been generated. This clearly contradicts the LCI. "

I agree with you that there are major problems with definitions in ID, and confusion is still rampant.  It would be in ID's interest to establish uniform standards.

However, consider the following: running a program that loops from 1 to a trillion and fills an array in memory with the numbers 1 to a trillion. Is more information (not CSI) generated than was in the program before the run?  When one applies algorithmic compression, one sees the trillion bytes of information are not generated by running the program.  The information is algorithmically compressed to the program by definition.
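Salvador's trillion-number example can be illustrated at small scale. The sketch below is my own, and uses zlib's compressed length as a crude stand-in for algorithmic (Kolmogorov) compressibility, which is uncomputable: the output of a tiny generating program compresses back down to almost nothing, while genuinely random bytes of the same length do not compress at all.

```python
import os
import zlib

# Output of a tiny "program": the numbers 0..2**20-1, reduced mod 256.
# The generator is a one-liner, so running it adds no new information.
generated = bytes(i % 256 for i in range(2**20))

# The same length of genuinely random bytes, for contrast.
random_bytes = os.urandom(2**20)

print(len(generated), len(zlib.compress(generated, 9)))        # shrinks to a few KB
print(len(random_bytes), len(zlib.compress(random_bytes, 9)))  # stays around 1 MiB
```

The regular megabyte collapses to roughly the size of its generating rule; the random megabyte does not, which is the distinction Salvador's compression argument relies on.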

Likewise, the authors misunderstood what is going on in the case of TSPGRID, because instead of simple integers they were generating CSI entities. But they forgot that the sum total of what was being generated was algorithmically compressible.  Applying the compression, one sees no information was added within the system boundary.

If we take:

X =  TSPGRID
Y =  inputs (25, 461, 330)

as the starting point, that establishes the 'thermodynamic boundary', so to speak, right?

Running the program generates the following SET:
A  =  F(25) = CSI corresponding to 25
B  =  F(461) = CSI corresponding to 461
C  =  F (330) = CSI corresponding to 330

It appears that we've generated lots of new CSI, but this is not true because the above SET of CSI entities is algorithmically compressible to the following by definition:

X =  TSPGRID
Y = input of (25, 461, 330)

thus (X,Y) is 'isomorphic' to (A,B,C)

under algorithmic compression. Thus LCI is not violated.

However, the confusion is understandable, and thus I don't personally appeal to LCI very much.  And as I said, it's hard to create real-world thermodynamically closed systems to run experiments on.

"Salvador, one has to distinguish between information in general and CSI."

Agreed. ID might be better served, I believe, to reconsider its definitions of information, CSI, and detectability techniques.

One can do a lot of detection without appealing to Dembski's definition of CSI.  Some of those methods I show in my threads.

The state of ID is more exotic than it needs to be, in my opinion. ID could benefit by emphasizing simpler detection methods.  

Once the less exotic are demonstrated to be effective, then things like what Dembski is showing, with some reformulation, will be more acceptable.

I love uncle Bill Dembski, but at times his definitions kill me.

Respectfully,
Salvador


I agree that there is a misunderstanding, but disagree as to who has the misunderstanding. Let's review a bit about TSPGRID.

Quote

Our algorithm is called TSPGRID, and takes an integer n as an input. It then solves the traveling salesman problem on a 2n * 2n square grid of cities. Here the distance between any two cities is simply Euclidean distance (the ordinary distance in the plane). Since it is possible to visit all 4n^2 cities and return to the start in a tour of cost 4n^2, an optimal traveling salesman tour corresponds to a Hamiltonian cycle in the graph where each vertex is connected to its neighbor by a grid line.

As we have seen above in Section 9, Dembski sometimes objects that problem-solving algorithms cannot generate specified complexity because they are not contingent. In his interpretation of the word this means they produce a unique solution with probability 1. Our algorithm avoids this objection under one interpretation of specified complexity, because it chooses randomly among all the possible optimal solutions, and there are many of them.

In fact, Gobel has proved that the number of different Hamiltonian cycles on the 2n * 2n grid is bounded above by c * 28^n^2 and below by c' * 2.538^n^2, where c, c' are constants [31]. We do not specify the details of how the Hamiltonian cycle is actually found, and in fact they are unimportant. A standard genetic algorithm could indeed be used provided that a sufficiently large set of possible solutions is generated, with each solution having roughly equal probability of being output. For the sake of ease of analysis, we assume our algorithm has the property that each solution is equally likely to occur.
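The unit-cost optimal tour the quoted passage alludes to is easy to exhibit concretely. Here is a sketch of one such Hamiltonian cycle (my own boustrophedon construction, not taken from the paper): it visits all 4n^2 cities in unit steps, so the closed tour has Euclidean cost exactly 4n^2.

```python
import math

def grid_tour(n):
    """One optimal TSP tour on the 2n x 2n grid of cities: traverse row 0,
    snake through the columns x >= 1, then come home down column x = 0.
    Every step between consecutive cities has length exactly 1."""
    side = 2 * n
    tour = [(x, 0) for x in range(side)]                  # across row 0
    for y in range(1, side):                              # snake rows 1..side-1
        xs = range(1, side) if y % 2 == 0 else range(side - 1, 0, -1)
        tour.extend((x, y) for x in xs)
    tour.extend((0, y) for y in range(side - 1, 0, -1))   # down column 0
    return tour

def tour_cost(tour):
    """Total Euclidean length of the closed tour."""
    return sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

n = 3
t = grid_tour(n)
print(len(set(t)) == 4 * n * n, tour_cost(t) == 4 * n * n)
```

This exhibits only one optimal cycle; TSPGRID's relevant property is that it chooses randomly among the exponentially many such cycles.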


If TSPGRID selects among the many possible solutions for each input randomly (and elsewhere in the paper we define random in AIT as incompressible), how is it that there is a compressible representation of the sort Salvador claims? As I see it, either TSPGRID is being asserted to not select among possible solutions randomly, despite what we plainly said, or compressibility is being redefined by Salvador here.

Quote
Running the program generates the following SET:

A  =  F(25) = CSI corresponding to 25

B  =  F(461) = CSI corresponding to 461

C  =  F (330) = CSI corresponding to 330


But running the TSPGRID program another three times generates another set,

A', but highly unlikely that A = A'

B', but highly unlikely that B = B'

C', but highly unlikely that C = C'

Et cetera.

Perhaps Salvador could explain how his idea of compression works, since I'm not seeing it. I think the problem here is that Salvador is treating TSPGRID as a deterministic algorithm when it isn't. The whole point of describing TSPGRID was to avoid a situation where every run of the program on the same input yielded the same result.
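The point that repeated runs almost surely differ holds for any selector that draws uniformly from a huge solution set. A toy sketch (a stand-in selector over permutations, not TSPGRID itself; the names are mine): independent runs produce distinct outputs, so the sequence of outputs carries random bits that are not present in the program plus its input.

```python
import random

def randomized_solution(seed, n=20):
    """Stand-in for TSPGRID's random choice among many equally optimal
    solutions: draw one permutation uniformly from the n! possibilities."""
    rng = random.Random(seed)
    tour = list(range(n))
    rng.shuffle(tour)
    return tour

# Two independent runs (modeled here by two independent seeds): with
# 20! possibilities, a collision is astronomically unlikely, so A != A'.
a = randomized_solution(seed=1)
a_prime = randomized_solution(seed=2)
print(a == a_prime)   # False: the runs disagree
```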


    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 07 2003,23:13   

I will review it and mull it over again.  Inasmuch as I have nothing but a little embarrassment at stake (nothing monetary), I will give you my honest opinion, but I must ponder how to better express what I wrote. I will not try to defend LCI, but will express why I believe TSPGRID is not a valid counterexample against LCI.

By the way, please forgive me if any of my posts on the ARN thread were a little caustic toward your work (I used the word 'misrepresentation' once).  Please accept my apologies.  If I argue a point forcefully, it is not meant as an attack on you personally.

Despite our differences, I want you to know I have the highest respect for your intellect and ability. It shines through in everything I've seen you post (like at Talk Origins).

You raise very good points that we in ID must address.  I will post as it comes to me, make retractions if appropriate, hopefully the truth will become evident for all sides of the debate.



Sincerely,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 07 2003,23:33   

Salvador,

Thanks for the quick reply.

Since I am working on a shorter version of the paper for publication, working out potential issues is quite useful. I appreciate comments that shed light on whether we're hitting the marks we set or not. I'm still thinking that TSPGRID demonstrates some problems in the argument for LCI, but it's possible that we've overlooked something. If that's the case, we'll have to revise the discussion of TSPGRID or abandon it.

I'm not convinced yet that it's time to man the lifeboats, though.


    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 08 2003,11:22   

Greetings Wesley,


If I am mistaken about any of your statements, please clarify.  We may need to go a few rounds to tidy things up. I may have a few typos in my notation too, so let's help each other out to at least clarify things.


---------------------------------------------------------------
From my vantage point we have 3 components to the TSPGRID operation.

At first glance we see 2 components

1.  TSPGRID program itself
2.  Random inputs in the form of "n", where 4n^2 is the number of cities

However, in actuality TSPGRID is composed of

A.  Deterministic elements
B.  Random selector "R", to select a solution:
"it chooses randomly among all the possible optimal solutions"


Thus the 3 components correspond to 1A, 1B, 2:
1.  TSPGRID program itself
 A.  Deterministic elements
 B.  Random selector which I label "R", to select a solution

2.  Random inputs in the form of "n", where 4n^2 is the number of cities


Thus in reality we have two random inputs, namely "n" and "R".

For a given "n", each run of TSPGRID corresponds to an "R".  So we effectively have a doubly nested loop: each run for a given "n" permits "R" as an input.  Thus the system is thermodynamically open with respect to "R".  Each run of the TSPGRID program adds one integer of "R" to the mix.



To close the system we need to redefine the thermodynamic boundary for each  additional run.  We can do the following.

Let "R" be traced and recorded such that each run can be reconstructed.

TSPGRID for a given "n" running under R25 might generate the following segments for a shortest path:
(S1, S5, S30, ... S4n^2) = CSI25

Thus:

CSI25 = (S1, S5, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R25)

similarly, for example

CSI461 = (S2, S65, S30, ... S4n^2) is compressibly equivalent to TSPGRID("n",R461)

CSI330 = (S25, S22, S650, ... S4n^2) is compressibly equivalent to TSPGRID("n",R330)


By way of analogy in algebra:
     T25 + T461 + T330 =  T (25+461+330)

CRUDELY SPEAKING the CSI entities

(S1, S5, S30, ... S4n^2) + (S2, S65, S30, ... S4n^2) + (S25, S22, S650, ... S4n^2)

= TSPGRID("n",R25) + TSPGRID("n",R461) + TSPGRID("n",R330) =

TSPGRID("n") ( R25 + R461 + R330 )

If we define the information boundary ('thermodynamic boundary' ) around the system

     TSPGRID("n") + R25 + R461 + R330 = CSI25 + CSI461 + CSI330


we see there is no  violation of LCI.  Algorithmic compression is applied on the outputs (CSI25, CSI461, CSI330) not the inputs "n" and "R".
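Salvador's bookkeeping amounts to promoting the random draw "R" to an explicit, logged input. The sketch below (names such as `tspgrid_like` are my own illustrative stand-ins) shows the sense in which that makes each output 'compressible': once the seed is recorded, (program, n, R) regenerates the output exactly, so the run log can stand in for the outputs themselves.

```python
import random

def tspgrid_like(n, R):
    """Once the random selector is driven by a recorded seed R, the run
    is fully deterministic: same (n, R) in, same 'solution' out."""
    rng = random.Random(R)
    tour = list(range(4 * n * n))
    rng.shuffle(tour)
    return tour

# Log each run as (n, R), echoing CSI25 = TSPGRID("n", R25), etc.
run_log = [(2, 25), (2, 461), (2, 330)]
outputs = [tspgrid_like(n, R) for n, R in run_log]

# Replaying the log reconstructs every output bit-for-bit, so storing
# the tiny log is equivalent to storing the bulky outputs.
print([tspgrid_like(n, R) for n, R in run_log] == outputs)   # True
```

Note that this works only because R has been moved inside the system boundary; it is exactly the move Wesley objects to below.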

-------------------------------------------------------------------------

To offer a second view:

A polynomial of m-th order has m solutions; potentially all solutions may be unique.

However, in contrast, in the travelling salesman problem, for a given "n" there are m unique solutions, and m is bounded "above by c * 28^n^2 and below by c' * 2.538^n^2, where c, c' are constants".

There are unique pathways such that:

We take run #1: our sum total of outputs is P1
We take run #2:  our sum total of outputs is P1 + P2
.
.
.
We take run #m:  our sum total of outputs is P1 + P2 + ....  + Pm



The maximum CSI defined by the space of all possible solutions is bounded by (not necessarily equivalent to) I(P1) + I(P2) + ... + I(Pm), where I is the information content, P is the pathway, and "m" is the number of pathways. It is important to see that the total information is finite; it has a maximum.

I-total is less than or equal to I(P1) + I(P2) + ... + I(Pm)

Now consider:
We take run #1: our sum total of outputs equal to TSPGRID(n,1)

We take run #2: our sum total of outputs equal to TSPGRID(n,1) + TSPGRID(n,2)
.
.
.

We take run #m: our sum total of outputs equal to TSPGRID(n,1) + TSPGRID(n,2) + .... TSPGRID(n,m)


What was actually demonstrated was that I(P1) + I(P2) + ... + I(Pm) was compressible to I(TSPGRID(n)) + I(1) + ... + I(m): this is the information content of the TSPGRID algorithm plus the information content of all the "m" inputs.

What made it look like LCI was violated was that each run of TSPGRID redefined the system boundary, and I-total appeared to increase by I(Pk) when in fact it increased only by I(k) when algorithmic compression is applied to the sum total of outputs from all runs.


In a sense, LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions!!!

Taken to the extreme, when all solutions are found, the system boundary can no longer be redefined, LCI
will be enforced at some point.  This is the case with information in a closed universe.  At some point, LCI will be enforced.


I know the above descriptions look crude, horrendous, and convoluted, but if there are ways I can clarify,
please ask.

The problem in application, as Chaitin points out, is that we never really know that we have the optimal compression.  We only know that one compression is better than another!  What that means is one may be able to create a TSPGRID so convoluted that the optimal compression will not be apparent as it was in this case.  LCI may be true in that case, but it would be hard to prove.
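Chaitin's point, that optimal compression is unknowable, is visible even with everyday compressors. A sketch (my own illustration): we can rank two compressors against each other on the same data, but the better result is still only an upper bound on the true Kolmogorov complexity, never a proof of optimality.

```python
import bz2
import zlib

data = bytes(range(256)) * 4096   # 1 MiB of highly regular data

size_zlib = len(zlib.compress(data, 9))
size_bz2 = len(bz2.compress(data, 9))

# We can say which compressor did better here, but neither figure is
# known to be optimal: Kolmogorov complexity is uncomputable, so every
# real compressor only ever bounds it from above.
print(size_zlib, size_bz2, "best so far:", min(size_zlib, size_bz2))
```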

A very severe problem in applying LCI is, however, evident: an executable file, like an encrypted self-extracting zip, may look unbelievably chaotic, and as it self-extracts it suddenly looks like CSI came out of nowhere: LCI is preserved, but it looks like it was violated!

Thus in my heart I believe in LCI, but it's hard persuading others. LCI is actually derivable from set theory, and in the end it's actually pretty bland.  We have a hint of it in the travelling salesman problem.  LCI pertains to the space of solutions, not the algorithm that finds (generates) the solutions.


Also, the problem of "R" entering the TSPGRID system is exactly the problem in the analysis of biological systems. "R" enters through random quantum events.  As cells mutate it is analogous to several "R's" being added, the basic cell is the TSPGRID program.  Thus each mutation generates CSI, but evaluation of information increase must be applied after algorithmic compression is applied.

I proposed a simple test with I-ERP (Information as Evidenced by Replicas of Polymers); this is basically the tally of alleles (is that right? you may need to help me out here).  I-ERP may be increasing in the human gene pool; it is CSI, but it is DEADLY CSI.

Worldwide, I-ERP may be decreasing because of extinctions, but within each living species' population it is increasing until the species reaches extinction.  The 2nd law guarantees I-ERP will be zero at some stage, but I fear life on Earth will be rapidly approaching I-ERP = 0 long before the universe burns out.

It's like propagating a bug during software execution. I don't believe natural selection will clean this out of the human gene pool any more than kicking a computer will clean out serious bugs.  The number of DEADLY mutations (infusions of DEADLY CSI), I fear, is accumulating faster than natural selection can clean them out.  Thus, if some ID paradigms are true, this has bearing on our very lives.  For example, I am disturbed at the persistence of sickle-cell anemia and the persistence of many bad mutations.  If they continue to accumulate, that is one ID prediction that will not be a very happy one.

I speak here not as an ID advocate so much as a concerned citizen.  Thus ID, if for no other reason, should be explored to help alleviate the pain of the inevitable end of all things.

If DEADLY CSI is emerging faster than natural selection can purge it, I think in the interest of science we should explore this possibility.


Also, I fear that loads of I-ERP is being forever lost because of damage to our eco-systems (species extinction) like in the rainforests.

You may be on the other side of the ID debate, but I think this is where there can be common ground for valuable research based on concern for ourselves and the environment.  Improved definitions of information would be useful for the scientific enterprise, and I hope both sides will find a way to cooperate.


Sincerely,
Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 08 2003,19:23   

By the way Wesley,

   Thank you for soliciting my thoughts and giving me a chance to clarify.  I hope my post will be of assistance to you.  I look forward to your reply.


Best Regards,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 09 2003,02:02   

Salvador,

Thanks for your clarifications.

As we note in section 5 and in the appendix, we believe that what CSI actually identifies, when it can be said to work at all, is the outcome of simple computational processes. That's why our "specified anti-information" (SAI) is a superior approach to "specification" than Dembski's methods. Given your obvious interest in algorithmic information theory, you should be able to confirm this for yourself briefly.

I'm afraid that I don't concur with your analysis of the TSPGRID algorithm. In order to get to compressibility, you've converted TSPGRID into TSPGRIDdet, a separate, deterministic algorithm that solves the same problem, and added to your "background knowledge" the particular sequence of random numbers that specify a particular output solution for TSPGRIDdet. That doesn't set aside our claim that CSI is increased by TSPGRID when one uses the "uniform probability interpretation". Essentially, your compressibility approach uses the "causal history based interpretation", which was not our claim. See section 7 for a thorough critique of the "causal history based interpretation".


    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 09 2003,13:16   

Greetings Wesley,

    What you point out is exactly the kind of thing that needs to be resolved in ID's presentation of CSI.

    I have claimed, as you can see, that CSI can emerge from an algorithmically constrained process in a thermodynamically open system being pumped by random inputs.  I do not believe CSI can emerge without an algorithmic process somewhere or intelligence in the pipeline (be it the laws of physics, as in a snowflake, or whatever).

    We in ID are hurting ourselves by not addressing what you have brought up.  I believe CSI cannot emerge apart from an algorithmically constraining influence or intelligence.  Your definition is worth exploring on that count.  Whether algorithms can spontaneously be implemented is where much of ID differs from ideas of undirected abiogenesis.

I think your critiques should be respected and answered.  Again, thank you for soliciting my thoughts.

With your permission, I'd like to reference this thread on the ARN board and the ISCID board.


With much respect,
Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Dec. 10 2003,00:29   

Greetings Wesley,

   I have continued to review your work on the SAI.  If it is as powerful as you say, and my intuition is in agreement with that, then independent of the whole origins debate, you've given ID an absolute gift!

   Please, if you would, offer your thoughts:  Do you believe algorithmic processes can emerge from purely random processes?  Can SAI structures emerge without some intervening computational constraint (after all, that's what SAI was intended to detect)?

This is basically the origin of life issue in my mind:  computational constraints do not emerge spontaneously (except for the computational constraints offered by the laws of physics).

 

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 11 2003,14:20   

We cover some natural instances of computational systems in section 5 of the paper.

I think that you should have a look at this thread on SAI before getting too excited about how much of a "gift" SAI is to ID. SAI does not support the inference of action of an intelligent agent, just a simple computational process. As such, there is no distinction between naturally occurring computational processes, algorithms deployed by an intelligent agent, and the direct action of intelligent agents to be had via use of SAI. All may "generate" arbitrarily large amounts of SAI.

I think that Dembski is likely to think of SAI as more of a poison pill than a gift.


    
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Dec. 17 2003,13:45   

Wesley,

I saw your and Shallit's paper referenced on the ARN forum, read it, started thinking about Dembski's use of the word "specification," and have a few comments about it.  This isn't exactly a critique of your paper but it may be helpful.

If we could visit many earth-like planets in the universe, we would expect to see some things that were similar to things on Earth and other things that were different.  For example, we would expect to see life, we wouldn't expect to see George Bush.  We would expect sometimes to hear language, we wouldn't expect to hear English.  The first kind of thing is what I suspect Dembski had in mind when he first defined his "specification."  The second kind of thing is what he would call a chance event.

Dembski says that his "specification" is the rejection region used in Fisherian statistics.  However, there are usually many possible rejection regions that one might choose when assessing the null hypothesis for a questioned event.  Dembski in The Design Inference says that one of the events in the specification must be the event at issue.  This seems to be cheating since, if the event is in the rejection region, the null hypothesis (that nature did it) is rejected.  However, Dembski claims that this is legitimate providing one can specify the rest of a rejection region using information that is independent of the null hypothesis.  Dembski says he wants to avoid "cherry picking," meaning that he does not want to pick a rejection region merely because it contains the questioned event.  But if many rejection regions are possible and he deliberately picks one that contains the questioned event, what else is he doing?  Put another way, how does he demonstrate that he is not "cherry picking"?

If the null hypothesis is that nature did it and if the information that defines the specification is also derived from nature, would Dembski still conclude that rejecting the null hypothesis implies that the event must be a design event?  Put another way, is it logical to conclude design without using a rejection region that is based on a competing design hypothesis?  Dembski's examples of rejection regions typically postulate that a man or a man-like alien is responsible.  Examples are Caputo and the prime numbers in the film "Contact."  He also said that the bacterial flagellum was like a boat motor.

Having said all that, it is not obvious to me that Dembski actually needs the concept of a specification.  Dembski's goal is to show that there is a designer who might be God.  From the "Intelligent Design Coming Clean" paper on his web site: "... a designer, who for both Van Till and me is God...." Dembski doesn't need a general procedure to do this.  All he needs is a good argument for one event.  A specification is an event (usually, an event that is a collection of other events).  If he can show that an event that is a specification must have been designed, then it is unimportant that the specification specifies other events.  Dembski has the answer he wants.  Put another way, a specification is nothing more than an event that would be interesting to working scientists anyway.

I don't understand "Appendix A.1 A different kind of specification."  Some strings are random and cannot be compressed, some strings can be compressed using a known program, and still other strings could be compressed except that we don't know how.  If there is a program to compress a string, it could be the invention of an intelligent designer or it could be a model of a natural process.  So what does this have to do with specifications?

A suggestion that may be helpful in your quest to shorten your paper:  Focus on issues that Dembski can't repair.  Ignore issues such as the claim that telephone numbers are CSI or the error in the prime number sequence. Discussions about Dembski's gaffes tend to obscure the more significant problems in Dembski's writing.

Ivar

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: July 19 2004,16:03   

Ivar wrote:

Quote

I don't understand "Appendix A.1 A different kind of specification."  Some strings are random and cannot be compressed, some strings can be compressed using a known program, and still other strings could be compressed except that we don't know how.  If there is a program to compress a string, it could be the invention of an intelligent designer or it could be a model of a natural process.  So what does this have to do with specifications?


The existence of a minimal program/input pair that results in a certain output indicates that there exists an effective method for production of the output. Since effective methods are something that are in common between intelligent agents and instances of natural computation, one cannot distinguish which of the two sorts of causation might have resulted in the output, but one can reject chance causation for the output. We haven't so much repaired specification as we have pointed out a better alternative to it.

This leads me to a claim about Dembski's design inference: Everything which is supposedly explained by a design inference is better and more simply explained by Specified Anti-Information.

SAI identifies an effective method for the production of the output of interest. The result of a design inference is less specific, being simply the negation of currently known (and considered) regularity and chance. The further arguments Dembski gives to go from a design inference to intelligent agency are flawed. On both practical and theoretical grounds, SAI is a superior methodology to that of the design inference.
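Since SAI is defined through Kolmogorov complexity, which is uncomputable, any concrete demonstration has to approximate. The sketch below is my own illustration, not the paper's formal procedure: it uses a real compressor to bound complexity from above, so 'length minus compressed length' is a conservative flag for the output of a simple computational process.

```python
import os
import zlib

def anti_information_proxy(x: bytes) -> int:
    """Crude stand-in for specified anti-information: how many bytes a
    real compressor saves.  A real compressor only upper-bounds K(x),
    so a large positive value reliably signals compressibility, i.e.
    the output of some simple computational process."""
    return len(x) - len(zlib.compress(x, 9))

patterned = b"0110" * 25000      # produced by a tiny effective method
random_x = os.urandom(100000)    # chance: no short generating rule

print(anti_information_proxy(patterned) > 0)    # True: chance rejected
print(anti_information_proxy(random_x) <= 0)    # True: nothing detected
```

As the post says, a positive result points only to an effective method, with no way to tell natural computation from an intelligent agent.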


    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: July 31 2004,01:49   

Wesley,

I am now beginning a critique of your paper at:

ARN Discussion

The moderators have roped off the discussion to you, me, Jeffrey Shallit, Bill Dembski, and Jason Rosenhouse (if he wishes to participate).

I invite your participation.

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 04 2004,05:37   

On "specified complexity" and equivocation:

Salvador writes:

Quote

Wesley and Jeffrey say "Strongly implies that Davies' use of the term is the same as his own". I don't think that is a charitable reading of page 180.


If Dembski had simply noted Davies' use of the term "specified complexity" and stated how his use differed, that would be one thing. But Dembski criticizes Davies for his willingness to credit natural mechanisms with the production of "specified complexity". Dembski makes no distinction between the use Davies makes of "specified complexity" and the different use Dembski does. It is Dembski who has the deficit in charity here.

Quote

For the sake of completeness, I ask Wesley to justify that Davies ever gave a precise mathematical definition of "specified complexity" (not complexity) in terms of Kolmogorov complexity.


It's completely irrelevant, which is the only sort of completeness I can make out for the above question. We never claimed that Davies did any such thing.

Quote

The issue is not "complexity per se" but "tightly specified complexity". I invite Wesley to explain how Davies distinguishes plain-vanilla K-complexity from "tightly specified complexity".


No, the issue is whether Davies' use of "specified complexity" is open to the criticism that Dembski makes of it. The fact that Davies uses "complexity" to mean something entirely different from Dembski's usage is a clue that the two usages of "specified complexity" differ significantly.

Quote

Wesley, you're entitled to your opinion, but I think you do not give page 180 of No Free Lunch a charitable reading whatsoever.


<shrug> I don't think Dembski reads Davies charitably. We seem to be at an impasse on this one.

Quote

Bill clarifies his position versus that of Davies and Orgel in Design Revolution page 84.


Dembski notes that Orgel and Davies use the term "loosely". He doesn't say that their usage is significantly different from his own. The implication is that the difference is in the degree of precision of use, with Dembski having greater precision.

Quote

Is granite K-complex in terms of the composition and the positioning of the molecules? If so, then even Orgel does not use complexity the way you argue Davies uses it.


<shrug> We never said that the usage of Orgel was the same as that of Davies.

Quote

Bottom line is, Bill has made an effort to distinguish his definitions from others'.  The complaint that Bill "strongly implies that Davies' use of the term is the same as his own" I think has been settled in a subsequent book, Design Revolution.


I don't agree. Dembski has not retracted the criticism of Davies which is dependent upon Davies' use being the same as Dembski's. Simply saying that Davies' use was "loose" in some sense doesn't get Dembski off the hook for this.


    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 04 2004,06:27   

To Define "Intelligence", or Not to Define It...

Salvador takes issue with a criticism of ours:

Quote

Just as Dembski fails to give a positive account of the second half of "intelligent design", he
also fails to define the first half: intelligence.


Salvador notes something Dembski said earlier:

Quote

Bill Dembski in IS INTELLIGENT DESIGN A FORM OF NATURAL THEOLOGY
Within intelligent design, intelligence is a primitive notion much as force or energy are primitive notions within physics. We can say intelligible things about these notions and show how they can be usefully employed in certain contexts. But in defining them, we gain no substantive insight.


Salvador concludes:

Quote

I think therefore Wesley and Jeffrey's claim about Bill:


quote:
"he also fails to define the first half: intelligence"

is therefore an unfair representation of Bill's position on intelligence. If intelligence is primitive to reality, then not defining it but leaving it undefined is reasonable.

I suggest Wesley and Jeffrey withdraw that complaint from their paper, in the interest of representing Bill's position correctly.


Wesley and Jeffrey may not agree with Bill, but owe him the courtesy of representing his work fairly. Bill has explicitly said he did not believe "defining" intelligence will gain substantive insight. Jeffrey I'm sure could offer examples of undefined terms in mathematics, etc....


So, the issue isn't that there is an inaccuracy in what we said, but rather that we aren't "fair" in making this observation.

I think that I will need to revise this criticism, as it becomes more trenchant with the noting of Dembski's demurral at even making an attempt to clarify what "intelligence" means when he deploys it.

Just as ID advocates like to note that the term "evolution" can have many different meanings, it is possible to note that "intelligence" also has many different meanings. Salvador's defense of an "undefined" use of intelligence critically depends upon the undefined term having a unitary and agreed-upon significance to the class of readers, and while this might be true for the concept of "force" in physics, this is clearly not the case for "intelligence".

The phrase "intelligent design", for example, doesn't really mean that a "design" will have characteristics that indicate that it was intelligently arrived at. Rather, all that is meant is that some agent (as opposed to a process) was involved in causing some event. The putative agent is carefully relieved of any responsibility for actually displaying what an outside observer might call "intelligence" (see Dembski's essay on "optimal design").

(Actually, I find it interesting that Salvador's quote concerning "force" is incomplete. The whole paragraph is: "In most expositions of mechanics, force is usually taken as a primitive, without an explicit definition. Rather it is taken to be defined implicitly by the (often vague) presentation of the theory within which it is contained. Various physicists, philosophers and mathematicians, such as Ernst Mach, Clifford Truesdell and Walter Noll have contributed to the intellectual effort of obtaining a more rational, non-circular, and explicit definition of force." Salvador only quoted the part in italics. The rest of the paragraph indicates that not everyone is just as comfy with undefined terms lying about as Salvador is.)

Of course, in the interest of brevity that whole sentence and the possible further line of criticism suggested by Salvador could be dropped, as its absence would do no harm to the remainder of the section on "Intelligence" and the next sub-section, "Animal Intelligence".

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 05 2004,19:47   

Salvador writes:

Quote

I've done all I can to let Elsberry know that I'm responding to his criticism of you. He hasn't shown himself at ARN. They've accused you of not responding to your critics: well, I'm giving them a taste of their own medicine. They don't seem eager to respond to me: Elsberry, Shallit, Dembski, Cordova.


That's amusing. Salvador apparently doesn't think that a response can be made anywhere except where he dictates. He is wrong on that point, too. Consider it a vote of "no confidence" in ARN's management on my part.

Dembski has been accused of not responding to his critics, and Salvador has eagerly expressed (and in public, no less) his willingness to take a "grenade" for Dembski so that Dembski can continue to not respond to critics. Who is that supposed to fool?

There are many issues that I have raised that have received no response from Dembski. Some are more serious than others. Some date back to our first encounter in 1997. Salvador has had his ARN thread up for just a few weeks, and substantive commentary in it is only a few days old. Even if I hadn't already responded, Salvador would simply be getting a taste of the medicine that Dembski so freely dispenses to critics.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Aug. 08 2004,18:19   

Quote (Wesley R. Elsberry @ July 19 2004)
We haven't so much repaired specification as we have pointed out a better alternative to it.

I don't see the point of this.  Why do you want a "better alternative" to specification?

Dembski had to define a term like specification so that he could justify ignoring those improbable chance events that occur all the time, e.g., strings of coin flips.  His definition of specification is confusing, and, probably for him, that is good.  Now he can write books rather than short and obviously flawed papers.  But I don't see why you want to help him to continue the confusion.

There is another alternative to specification that is less confusing: limit the design question to events that are biological events.  Actually, this may not be a real limitation.  So far as I know, the only possible specified, complex events (as defined by Dembski) are man-made events (which are irrelevant except as examples) and biological events.  (Fictional events are also irrelevant.)

Questions about the origins of biological events are legitimate.  However, Dembski's answer is not.  He asserts that if we do not have a detailed, experimentally verified theory that explains how nature did it, then we can presume that some unknown designer did it.  No experimental evidence confirming a design hypothesis is required.  Dembski seems to believe that this assertion is genuine science.

Incidentally, I assumed that when Dembski wrote that, "Where direct, empirical corroboration is possible, design actually is present whenever specified complexity is present," he was referring to man and to man-made objects.  (See here.)  Maybe this is his experimental evidence confirming that life was designed.

Ivar

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 12 2004,21:22   

Wesley,

I'm willing to discuss the paper with you here.  I have taken time to learn the material better.  I took your paper seriously enough to study it.

My mind has changed on a few issues since that time, mostly against the content in your paper.

If you won't come to ARN, I'm willing to come here to your website.

You're a gentleman, Wesley; it's not in my nature to be polemical toward a gentleman, but I think there are some things seriously wrong with what you wrote.

For starters:



Quote

Wesley and Jeffrey wrote:

Dembski also identifies CSI or "specified complexity" with similarly-worded concepts in the literature. But these identifications are little more than equivocation.

For example, Dembski quotes Paul Davies' book, The Fifth Miracle, where Davies uses the term "specified complexity", and strongly implies that Davies' use of the term is the same as his own [19, p. 180]. This is simply false. For Davies, the term complexity means high Kolmogorov complexity, and has nothing to do with improbability.



What Bill wrote on page 180

Quote

In The Fifth Miracle Davies goes so far as to suggest that any laws capable of explaining the origin of life must be radically different from any scientific laws known to date. The problem, as he sees it, with currently known scientific laws, like the laws of chemistry and physics, is that they cannot explain the key feature of life that needs to be explained. That feature is specified complexity. As Davies puts it: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity." Nonetheless, once life (or more generally some self-replicator) arrives on the scene, Davies thinks there is no problem accounting for specified complexity...

In this chapter I will argue that the problem of explaining specified complexity is even worse than Davies makes out in The Fifth Miracle


You're free to say what you want, Wesley, but I think the way you represented page 180 was a stretch.

Further, Bill clarified his position versus that of Davies and Orgel in his latest book, Design Revolution, page 84:

   
Quote
 
The term specified complexity is about thirty years old. To my knowledge, origin-of-life researcher Leslie Orgel was the first to use it.  The term appeared in his 1973 book  The Origins of Life, where he wrote, "Living organisms are distinguished by their specified complexity.  Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity."  More recently, in his 1999 book  The Fifth Miracle, Paul Davies identified specified complexity as the key to resolving the problem of life's origin:

"Living organisms are mysterious not for their complexity  per se, but for their tightly specified complexity.  To comprehend fully how life arose from nonlife, we need to know not only how biological information was concentrated, but also how biologically useful information came to be specified."

Neither Orgel nor Davies, however, provided a precise analytic account of specified complexity.  I provide such an account in  The Design Inference (1998) and its sequel  No Free Lunch (2002).  Here I will merely sketch that account of specified complexity. Orgel and Davies used the term  specified complexity loosely.


Is Granite K-complex in terms of the composition and the positioning of the molecules?  If so, then even Orgel does not use complexity the way you argue Davies uses it.  

Bottom line is that Bill has made an effort to distinguish his definitions from others.  The complaint that Bill "strongly implies that Davies' use of the term is the same as his own" I think has been settled in a subsequent book,  Design Revolution.


cheers,
Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 12 2004,23:04   

Salvador,

Look a little further up the page. I've already responded to this bit under the heading, "On 'specified complexity' and equivocation".

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 13 2004,09:58   

On "simple computational processes":

Salvador wrote:

Quote

But for starters, if I have "500 coins heads" in a box and this was done by a coin-ordering robot, how can one say the robot is performing a "simple computational process"?  That robot could be incredibly complex or simple; the resulting output of "500 coins heads" speaks nothing of the complexity inside the robot needed to achieve "500 coins heads".


This would be the "Rube Goldberg" objection. One can come to the same result by any of a number of means, some of them much more complex than others. But the point of Algorithmic Information Theory is that no more information exists in the output than is to be found in the shortest program/input pair that produces that output. That longer program/input pairs exist is irrelevant to the result.
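The algorithmic-information point can be made concrete. Below is a minimal sketch of my own (not from the paper): Kolmogorov complexity is uncomputable, so a general-purpose compressor stands in as a computable upper bound, and the function names are purely illustrative.

```python
import zlib

# The "500 coins all heads" outcome.
output = "H" * 500

# Two generators for the same output: a one-liner, and a deliberately
# convoluted "Rube Goldberg" pipeline that flips 1000 virtual coins and
# filters out the tails.
def simple():
    return "H" * 500

def rube_goldberg():
    s = ["H" if i % 2 == 0 else "T" for i in range(1000)]
    return "".join(c for c in s if c == "H")

assert simple() == rube_goldberg() == output

# Any compressor gives an upper bound on the information in the output.
# That bound is tiny no matter which generator produced the string.
raw_bits = 8 * len(output)
compressed_bits = 8 * len(zlib.compress(output.encode()))
print(raw_bits, compressed_bits)
```

The only point tracked here is the asymmetry: the two generators differ wildly in internal complexity, but the output's shortest description does not grow to match the longer generator.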

Quote

I invite Wesley to quantify the phrase "simple computational process".


That would be the appendix detailing "Specified Anti-Information".

Quote

I invite Wesley and Jeffrey to define the number of bits needed to implement a basic computer, such as a Universal Turing Machine that can perform "the simple computational process".


I don't see why one need postulate a UTM for every job. That's overkill. That's another reason why we made reference to cellular automata.

Quote

Bottom line, an orderly arrangement (like coins all heads), speaks nothing of the level of complexity required to create that orderly arrangement. The above quote is therefore seriously flawed.


Non sequitur. Dembski's argument offers to exclude natural processes in principle; the possible existence of simple computational processes instantiated by natural processes capable of producing the observed event vitiates that claim. That more complex processes might also do the same job in no way reduces the force of this rebuttal.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 13 2004,15:14   

Wesley,

First of all, I thank you for the courtesy of replying to me.  I am a hostile critic of your work on these emotionally charged issues.  I've not been exactly kind in some of my comments about your work or the things you have said (which you see all over ARN), and I am thus all the more grateful for the favor of your replies.

I recognize I'm a guest here at your website, thus I will try to keep my postings on this thread to a minimum unless of course you wish me to elaborate or debate more.

I may post verbosely at ARN on your paper.  Post your responses wherever you please, if you are so inclined.  The questions that need clarification from you, however, I will post here.  My goal now in posting here is to ensure that I represent and understand your position accurately.

I know you'd rather debate Bill Dembski rather than me, so with that in mind, I will not try to be too much of a distraction to you at your website.  

Again, thank you for the favor of your responses.  If you really want me to engage your paper I will, otherwise, I will limit my participation on this thread.



respectfully,
Salvador

PS
I'll post at ARN to let everyone know that you have responded to me now.  Thank you.

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Aug. 13 2004,15:45   

Quote

Wesley wrote:

There are many issues that I have raised that have received no response from Dembski.



Well, for the record what avenue would you think appropriate for a public exchange with William Dembski?  

Do you want him to respond with counter-papers to your papers?  Seriously, do you want him to come out here and post to your thread?

I think a lot of what you write about his work does not represent his work or his position at all.

Your own ideas have merit, such as SAI.  However, your attempts to state Bill's ideas in your own words I don't think are very charitable and end up being strawmen.

Seriously Wesley,  I corresponded with Bill over some of the points I too had questions on.  

What I dispute with you is things I think are as plain as day.  For example, my quotes from Design Revolution I think cleared some things up as far as the Davies, Orgel, and Dembski definitions of Specified Complexity.  You obviously disagree, but I thought what Bill wrote in that book was quite sufficient to address a point you raised in your paper.

We're going to not resolve anything on this thread, but I want to make sure I represent your words accurately.   As much as I'll be tempted to quibble, I'm probably going to let a lot of things go.  I may post more elaborate responses at ARN.  You are welcome to respond or not respond.  

I will make an effort from now on not to make a big deal if you have no response.  I am willing to do that because I see you have made an effort to respond.

State what you want from me, and what you feel is fair in this dialogue.  I will do my best to keep the discourse open.  If I say something over at ARN you feel is unfair, rude, or mis-represents you, you are free to challenge me on it, and I'll do my best to make amends.  I'm for fair play.  OK?


Thank you.


Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 13 2004,17:21   

Hi Wesley,

As I indicated elsewhere, I would post some comments to your site.  I will try to be respectful of your time.  My intent is not protracted discussion, but to make sure I represent your work correctly.

Do you believe genetically engineered products evidence CSI by Dembski's definition?  Here is a case where potentially non-algorithmically-compressible information is CSI.  This would refute your paper's claim that Dembski confuses what you call SAI and CSI.  It seems to be a misinterpretation Mark Perakh also makes.

I would actually argue that there is an overlap between objects exhibiting SAI and CSI, but in actuality they are not the same.  Some forms of SAI are a subset of CSI.


Thanks.

Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 16 2004,20:35   

Salvador T. Cordova wrote:

Quote

Do you believe genetically engineered products evidence CSI by Dembski's definition?


I have no need to believe that anything evidences CSI by Dembski's definition. The reason that I need not believe any such thing is that there has never been a successful application of Dembski's EF/DI via the GCEA meeting or exceeding the "universal small probability" of any event whatsoever. If you had a citation of such a published successful, fully worked-out calculation, I'm sure that you would share that with us.

Until then, it's all just blowin' smoke.

Quote

Here is a case where potentially non-algorithmically-compressible information is CSI.  This would refute your paper's claim that Dembski confuses what you call SAI and CSI.


I have no recollection of saying that Dembski confuses SAI with anything else. That would hardly be sporting, since "SAI" as a term was introduced in that paper. Perhaps a specific citation of the purported faulty language would be appropriate?

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,08:32   

We clearly will not agree on many things Wesley, but thank you for responding.

My intent is to make sure that I am representing you correctly.  And that is why I am asking you questions.

The fact you said:
Quote

I have no need to believe that anything evidences CSI by Dembski's definition.


Evidences that you do not represent, and possibly do not understand, CSI.  When a designer like myself creates an ID artifact, there can be no doubt that in many cases there is CSI.  It is the blueprint-artifact metaphor.

That is why I asked about DNA genetic engineering.  I corresponded with Dembski (who, by the way, was Shallit's student, as evidenced in the Acknowledgements of The Design Inference) last week to quadruple-check that my interpretation was correct.  I was certain I was right, and I was.

Thus, as I have suspected, your paper incorrectly represents Dembski's work.  If your complaint is one of clarity, I will pass that on and we'll make the adjustments.

Your SAI concept has merit.  


For the record, not all of my posts on the matter at ARN are technically correct, and I have to fix a few things.  I would not be surprised to see the ID leadership or rank and file at some point write a refutation of yours and Shallit's paper.

I know that we are on both sides of an emotionally charged issue, and I am grateful you have offered to dialogue with me, even if the communications are mostly dysfunctional, we at least have some dialogue.

Thank you.

Respectfully,
Salvador

  
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,10:20   

Quote (scordova @ Sep. 17 2004,08:32)

When a designer like myself creates an ID artifact there can be no doubt that in many cases there is CSI.  It is the blueprint-artifact metaphor.

To reiterate a comment made on the ARN board, an object does not have CSI merely because it was made by man.  Dembski's definition of CSI requires that one show that a non-intelligent nature could not create the object, i.e., that it could not be an object resulting from regular and chance events.

Note that if a non-intelligent nature did create life and, eventually, man, then there has been a chain of regular and chance events that resulted in the objects that have been made by man.  The probability of such a chain cannot be smaller than Dembski's Universal Probability Bound of 10^-150, i.e., it cannot be "complex."  One cannot deduce that man is the product of an intelligent designer merely because man is an intelligent designer.
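For concreteness, the bound Ivar cites can be converted into bits; a quick back-of-the-envelope sketch (my own illustration, not from either paper):

```python
from math import log2

# Dembski's universal probability bound, expressed in bits of improbability.
upb = 10 ** -150
bits = -log2(upb)      # = 150 * log2(10)
print(round(bits, 2))  # -> 498.29
```

So an event must carry roughly 498 or more bits of improbability under the relevant chance hypothesis before Dembski's framework calls it "complex"; 500 independent fair coin flips, at 500 bits, just clear the bound.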

Ivar

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 17 2004,10:21   

Salvador,

I repeat: I have no need of belief in evidence of CSI according to Dembski's definitions. If CSI were claimed by Dembski's definitions, which involve working out all the parts of the EF/DI confirmed by GCEA, there would be no question of what was claimed, nor how the conclusion was drawn, and all would be open to examination and critique.

This has not been done. I have no knowledge of a specific example that corresponds to what you are talking about, and I certainly am not convinced by mysterious private email exchanges that I am not privy to. What you describe sounds like an example of what Jeff Shallit and I referred to as the "Sloppy Chance Elimination Argument" in our paper.

It's a longstanding criticism of mine that Dembski has not made available the calculations that his public claims imply have already been accomplished (as in his 1998 "Science and Design" article in "First Things", which strongly implied that his EF/DI and GCEA had been applied to the systems labeled as IC by Michael Behe).

This latest missive of yours simply confirms that my analysis on this point has been spot-on.

Edited by Wesley R. Elsberry on Sep. 17 2004,10:36

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Art



Posts: 69
Joined: Dec. 2002

(Permalink) Posted: Sep. 17 2004,11:22   

Sal, this may be "piling on", but I have a recommendation for you.  You need to review Dembski's "parable" about the archer and the bullseyes.  Most, if not all, of the things you are arguing (here and on other boards) as possessing CSI are actually items that, using this parable, are rightly called fabrications.

   
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 17 2004,16:42   

Quote


This has not been done. I have no knowledge of a specific example that corresponds to what you are talking about, and I certainly am not convinced by mysterious private email exchanges that I am not privy to. What you describe sounds like an example of what Jeff Shallit and I referred to as the "Sloppy Chance Elimination Argument" in our paper.

It's a longstanding criticism of mine that Dembski has not made available the calculations that his public claims imply have already been accomplished (as in his 1998 "Science and Design" article in "First Things", which strongly implied that his EF/DI and GCEA had been applied to the systems labeled as IC by Michael Behe).

This latest missive of yours simply confirms that my analysis on this point has been spot-on.


Fair enough.  I'll suggest to Bill we at least do some of these for human examples, and maybe you will be convinced CSI at least does exist in human affairs.

You actually solved a major problem for establishing detachable, non-postdictive specifications with your SAI.  That was a gift!

An example closely analogous to SAI is the problem of "convergent evolution" which Sternberg calls "neo-Darwinian" epicycles.

The argument for protein sequences as evidencing CSI I think needs review, as I believe Art makes a very good case which the IDists need to address.  Same with the flagellum.


Quote

I have no recollection of saying that Dembski confuses SAI with anything else. That would hardly be sporting, since "SAI" as a term was introduced in that paper. Perhaps a specific citation of the purported faulty language would be appropriate?



It is not my intent to ever misrepresent you, that is why I am here asking for clarifications and your own words.

You in fact wrote:
Quote


An alternate view is that if specified complexity detects anything at all, it detects the output of simple computational processes. This is consonant with Dembski's claim. It is CSI that within the Chaitin-Kolmogorov-Solomonoff theory of algorithmic information identifies the highly compressible, nonrandom strings of digits.


Are not compressible strings, strings which evidence SAI?  If not, I'll amend my assertion, no problem.  I'm for fair play.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 18 2004,11:22   

This is something that I responded to on The Panda's Thumb. I'm repeating it here so that there is another channel of communication for this message to Salvador.

*****************

Salvador T. Cordova wrote:

Quote

Sternberg’s professional qualifications in relevant fields, it seems, exceed even those of Gishlick, Elsberry, Matzke combined.  So I hope that will be taken into consideration in view of charges the article is substandard science.


The credentials of Sternberg don't change the content of Meyer 2004. That's pure argument by authority, and it just doesn't work in science.

The similar situation with regard to antievolutionist fascination with (mis)quotation is something I've commented upon before:

Quote

The antievolution fascination with quotations seems to stem from the anti-science mindset of "revelation": testimonial evidence reigns supreme in theology, thus many antievolutionists may mistake that condition as being the same in science. However, science has pretty much eschewed assigning any intrinsic worth to testimonial evidence. Quotations from some source are taken as being an indication that some condition as stated holds according to the reliability of the speaker, as seen by reviewing the evidence. Antievolutionists "get" the first part, but have real difficulty coming to terms with the second part. If some Expert A says X, then the antievolutionist expects that no lesser known mortal will dare gainsay Expert A's opinion on X. However, such a situation is routine in science. Anyone presenting Evidence Q that is inconsistent with X then has shown Expert A to be incorrect on X. If the person holding forth shows repeatedly that they can't be trusted to tell us correct information on, say, trilobites, then that just means that we likely don't hold any further talk on trilobites from that source in high regard.


http://www.antievolution.org/people/wre/quotes/

We pointed out problems with Meyer 2004. The issue is whether our criticism stands up to scrutiny. Salvador has avoided dealing with the content of our criticism, and is apparently forced to adopt fallacious modes of argumentation to defend Meyer 2004.

I've pointed out to Salvador exactly what he needs to do to show that his boasting about the Elsberry and Shallit 2003 paper being the wrong citation to critique Meyer 2004 by was on track. These items are things that if I were wrong about, Salvador should quickly be able to show that I was wrong on. This is the FOURTH TIME I've entered this in response to Salvador's comments here since August 31st. I'll email them to him, too, just to eliminate any weak apologetic that he had somehow overlooked the previous presentations.

===================

(From http://www.pandasthumb.org/pt-archives/000430.html#c7223 )

[quote=Salvador T. Cordova]
In the meantime, I hope Stephen Meyers will read these reviews and learn.  I can confidently say he can ignore any challenges offered by the “Elsberry and Shallit 2003” paper.  I don’t mind you guys building your case on it though. It’ll just be that much more of an embarrassment to see it all collapse when that paper is refuted.
[/quote]

It doesn’t matter if “the paper” is “refuted”; what matters is whether the particular claims made are supported and true. Here are the claims again:

Quote

2. Meyer relies on Dembski’s “specified complexity,” but even if he used it correctly (by rigorously applying Dembski’s filter, criteria, and probability calculations), Dembski’s filter has never been demonstrated to be able to distinguish anything in the biological realm — it has never been successfully applied by anyone to any biological phenomena (Elsberry and Shallit, 2003).

3. Meyer claims, “The Cambrian explosion represents a remarkable jump in the specified complexity or ‘complex specified information’ (CSI) of the biological world.” Yet to substantiate this, Meyer would have to yield up the details of the application of Dembski’s “generic chance elimination argument” to this event, which he does not do. There’s small wonder in that, for the total number of attempted uses of Dembski’s CSI in any even partially rigorous way number a meager four (Elsberry and Shallit, 2003).


In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (2), all one has to do is produce a citation in the published literature (dated prior to our paper) showing a complete and correct application of Dembski’s GCEA to a biological system such that “CSI” is concluded. Thus far, I’m unaware of any such instance. The only thing that makes any moves in that direction at all is Dembski’s section 5.10 of “NFL”, and we were careful to make clear why that one was both incomplete and incorrect.

In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (3), all one has to do is produce citations in the published literature (dated prior to our paper) showing the attempted application of Dembski’s GCEA to more than four cases. I’m unaware of any further examples that have been published, but I’m perfectly open to revising our number to account for all the instances.

Until and unless those citations are forthcoming, the braggadocio about how the Elsberry and Shallit 2003 paper can be safely ignored seems somewhat out of place.

=====

I posted that on August 31st. As far as I can tell, neither Salvador nor any other ID advocate has made the slightest headway in showing that I was inaccurate in either claim made above. Salvador has taken up an aggressive grandstanding technique, though I think that it is obvious to all that there is little to no substance as yet to back it up. If I were wrong on the two points above, it seems to me that it would be simplicity itself for some ID advocate to show that I was wrong, and I would have expected that to happen already. I predict that what I've written here will again disappear into the ID memory hole of inconveniently true criticisms.

If I'm wrong here, though, I'm willing both to take my lumps and acknowledge whoever it is that shows me to be wrong. I'm still waiting for the documentation. I suspect I will wait a long, long time.

Edited by Wesley R. Elsberry on Sep. 18 2004,11:31

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 19 2004,23:58   

So here goes, and I want you to correct me if I don't represent your position correctly.  

I will go into the examples of Monsanto and other genetically engineered products as examples of human CSI and then show how they are extensible to other kinds of CSI.  Your SAI concept is critical to helping create detachable CSI.

The following example should be quite derivable from his book, No Free Lunch, pages 139-141.

Quote


We have a "pill-box" of 1000 bins with a coin in each bin.  Clearly the existence of the coins is one layer of design, and the fact that they are in bins is an indication of another layer of design.  Thus we have two layers of design in evidence; however, we wish to determine if the Heads/Tails configuration evidences yet another layer of design.

The space of possibilities, Omega, is defined by all possible configurations of the 1000 coins.

Let the detachable specification, T, be defined as the set of configurations where the pattern of the first 500 coins is replicated by the last 500 coins.

P(T) = 2^500 / 2^1000

thus

I(T) = -log2(2^500 / 2^1000) = 500 bits

If the first 500 coins exhibit a K-complex configuration, then seeing any such arrangement E of coins evidences CSI with respect to Heads and Tails in this example.



If, however, you dispute that the ordered pair (T, E) exhibits CSI, I would argue the E's exhibit at least SAI.

Does E exhibit SAI?  

These coin examples are a start and are extensible to DNA and proteins, but we must start somewhere.
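The arithmetic in the pill-box example can be checked mechanically; a minimal sketch (the predicate name in_T is mine, purely illustrative):

```python
from math import log2

def in_T(config):
    # T: the last 500 coins replicate the first 500.
    assert len(config) == 1000
    return config[:500] == config[500:]

# |T| = 2^500 (first half free, second half forced); |Omega| = 2^1000.
p_T = 2 ** 500 / 2 ** 1000   # = 2^-500, still representable as a float
info_bits = -log2(p_T)
print(info_bits)             # -> 500.0
```

This confirms the 500-bit figure; whether the pattern counts as a detachable specification, and whether K-complexity of the first half matters, are the points actually in dispute in this thread.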

Salvador

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,00:35   

Quote

Sal, this may be "piling on", but I have a recommendation for you.  


You didn't want to engage me on the CSI topic I created at ARN, CSI for Dummies.  So you come here where you feel you can pile on. OK.

Quote

You need to review Dembski's "parable" about the archer and the bullseyes.  


As usual you try to present me as not understanding.  


Quote

Most, if not all, of the things you are arguing (here and on other boards) as possessing CSI are actually items that, using this parable, are rightly called fabrications.


Fabrications are designs.  Coin examples are designs.  But we see analogs of these very fabrications in biological systems.

We see several molecular and morphological convergences not attributable to horizontal gene transfer.  That is exactly the CSI (not Sternberg's term) which bothered Sternberg regarding what he called the "Darwinian epicycle of convergent evolution".  Two independent pathways arriving at the same molecular configuration in two unrelated lineages is a nasty problem for Darwinian evolution, and Sternberg knows it.  This is especially nasty for non-functional "spacer" sequences (essentially invisible to selection, whatever the right term is) appearing in unrelated lineages and arrived at through different expression pathways.

But beyond molecular convergences, even within a single organism like the nematode we have two separate developmental pathways creating its symmetric halves.  Such a peculiar fact makes no sense in the light of Darwinian evolution, but does in terms of CSI.




Quote

page 335, Nature's Destiny by Denton:

A curious aspect of the development of the nematode and one that would never have been predicted is that although the organism is bilaterally symmetrical--that is, its left and right halves are mirror images of each other--the equivalent organs and cells on the right- and lefthand sides of the body of the larva are not derived from equivalent cells in the embryo. In other words, identical components on the right and left sides of the body are generated in different ways from different and nonsymmetrically placed progenitor cells in the early embryo and have therefore lineage patterns which are in some cases completely dissimilar. This is like making the right and left headlight on an automobile in completely different ways and utilizing completely different processes.

Even individual cells of the same cell type in any one organ, such as, say, the muscle cells, gland cells, or nerve cells of the pharynx, are also derived from different lineages. For example, one particular cell progenitor of the pharynx gives rise to muscle cells, interneurons, gland cells, and epithelial cells. Another progenitor gives rise to muscle and gland cells.




The nematode halves combined are, conceptually speaking at least, algorithmically compressible, since they are symmetric.  However, two independent pathways arrive at each half.  Evidence of CSI.

Do you now have an inkling why I'm exploring the above symmetric coin patterns?

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,00:41   

Quote

To reiterate a comment made on the ARN board, an object does not have CSI merely because it was made by man.  Dembski's definition of CSI requires that one show that a non-intelligent nature could not create the object, i.e., that it could not be an object resulting from regular and chance events.


I ignored your comment, Ivar, because it didn't even reflect what I was saying.

Quote

page 141 of No Free Lunch
Complex Specified Information :

The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.


Dembski's definition here doesn't look like the definition described by Ivar.

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,01:19   

Let me point something out about CSI that is counterintuitive.  I will, for the sake of clarity, use a string smaller than 500 elements, but the example is extensible.

Say we have a space of possibilities Omega defined by 8 coins

There are 256 possible configurations.
Each possible T can have one or more elements.  For example, here are two T's, T1 and T2:

T1 =
{
0111 1111
}

or
T2 =
{
0000 0100,  1000 0000, 0100 1000, 1111 0010
}


T1 can be represented by one 8-bit string, and T1 represents 8 bits in Omega space, occupying only 1 of the 256 possibilities.  Using Dembski's calculation:

I(T1) = - log2 ( P(T1) )  =  - log2 ( 1 /256) = 8 bits


NOW HERE IS THE CATCH:

T2 can be represented by four 8-bit strings, and T2 occupies 4 of the 256 possibilities in Omega space.

I(T2) = - log2 ( P(T2) )  = - log2 ( 4/256 ) = 6 bits


The specification of T1 requires 8 bits and it represents 8 bits in Omega Space

HOWEVER, the specification of T2 requires 32 bits (4 * 8) but it only represents 6 bits in Omega.        :eek:        

What this means is that for the 1000-coin example above, had I not used symmetric patterns but rather listed every 1000-bit configuration explicitly until I reached the 500-bit threshold within Omega space, there would not be sufficient resources in the universe to do such a task.       :0
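The arithmetic in this post can be checked mechanically. A minimal sketch (the set literals transcribe the T1 and T2 above; the helper name info_bits is my own, not Dembski's):

```python
import math

# Omega for 8 coins has 2^8 = 256 configurations.
OMEGA_SIZE = 256
T1 = {0b01111111}                                      # "0111 1111"
T2 = {0b00000100, 0b10000000, 0b01001000, 0b11110010}  # four strings

def info_bits(t):
    # Dembski-style information: I(T) = -log2(P(T)), P(T) = |T| / |Omega|
    return -math.log2(len(t) / OMEGA_SIZE)

# Listing T2 explicitly costs 4 * 8 = 32 bits of description,
# even though T2 carries only 6 bits of information in Omega.
description_cost = len(T2) * 8

print(info_bits(T1), info_bits(T2), description_cost)  # 8.0 6.0 32
```

This makes the stated catch explicit: the cost of *describing* a specification by enumeration grows with the number of strings listed, while the information it picks out in Omega shrinks.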

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,01:38   

I will recommend the following for you and IDists.

When identifying CSI:

1.  Describe the space defined by Omega

2.  Describe the space defined by T

3.  Give conceptual examples E which would evidence CSI within T

4.  Defend the reasons why one believes T is detachable and not post-dictive

5.  Provide sample calculations

6.  Do not insist on absolute K-complexity for E, since K-complexity is not computable.  Rather, use something like a maximal Huffman compression (or similar) test for operational utility.
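The compressibility test in item 6 can be sketched with an off-the-shelf entropy coder. A minimal illustration (my own construction, not anything from the thread; zlib's DEFLATE, which includes Huffman coding, stands in for the compressor):

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Pack a 0/1 string into bytes, then DEFLATE it.  The compressed
    size is only an operational proxy for K-complexity, which is not
    computable."""
    packed = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
    return len(zlib.compress(packed, 9))

# A 1000-bit string whose second half replicates its first half
# compresses far better than typical coin-flip data of the same length.
regular = "0110" * 250
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))
print(compressed_size(regular) < compressed_size(noisy))  # True
```

The comparison, not the absolute compressed size, is what matters: a short compressed form flags a candidate pattern, but says nothing by itself about how the pattern arose.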

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,02:28   

Quote


In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (2), all one has to do is produce a citation in the published literature (dated prior to our paper) showing a complete and correct application of Dembski’s GCEA to a biological system such that “CSI” is concluded. Thus far, I’m unaware of any such instance. The only thing that makes any moves in that direction at all is Dembski’s section 5.10 of “NFL”, and we were careful to make clear why that one was both incomplete and incorrect.


That would be a sufficient but not a necessary condition for refuting your point.  Meeting the challenge the way you specify is thus not a necessary condition for proving CSI in biology; it is only a sufficient one, as far as you are concerned.  Again, you're pulling the "if I don't see it in peer review, I don't believe it" gimmick.  Absence of meeting that challenge does not refute CSI.


Michael S. Y. Lee, “Molecular phylogenies become functional,” Trends in Ecology and Evolution 14 (1999): 177-178.

where he said:
Quote

...the mitochondrial cytochrome b gene implied...an absurd phylogeny of mammals, regardless of the method of tree construction. Cats and whales fell within primates, grouping with simians (monkeys and apes) and strepsirhines (lemurs, bush-babies and lorises) to the exclusion of tarsiers. Cytochrome b is probably the most commonly sequenced gene in vertebrates, making this surprising result even more disconcerting.


My argument here is not against phylogenetic reconstruction.  It is against the fact that the "Darwinian epicycle" of "evolutionary convergence" needs to be appealed to in order to solve the problem of identical features in unrelated creatures.  Anyone is welcome to outline a generic method for searching for CSI in evolutionary convergences and its calculation.  I will offer my attempt in this thread.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 20 2004,02:54   

Salvador T. Cordova wrote:

Quote

The NCSE website links to that thread. I believe Wesley couldn't tolerate the very embarrassing data I provided about Sternberg and the flaws in Elsberry's paper.

I have long suspected the guys over there can't deal with direct scientific debate but rather rely on misrepresentation, strawmen, and ad hominem.


(Source: http://www.arn.org/ubb/ultimatebb.php?ubb=get_topic;f=12;t=001289 , "Darwinist Censorship at PandasThumb.org" )

The fact is that every post to Panda's Thumb that Salvador has ever made is still on Panda's Thumb. Several of the off-topic ones now grace "The Bathroom Wall", but they are still there. There is no censorship of Salvador, and in fact I've taken the time to respond to some of Salvador's persistent claims in this very thread, as well as in threads on Panda's Thumb.

I've asked Salvador to provide the basis for his claim that my citation of E&S 2003 on two points of critique on Meyer 2004 is a bad move. I've been asking for that since August 31st. (See the page before this one in the thread here.) Salvador has studiously avoided that discussion. In this case, I am the one who has consistently pursued "direct scientific debate" and Salvador the one who has taken to "misrepresentation, strawmen, and ad hominem".

As far as I am concerned, there is no "conversation" possible at this point with Salvador. At least, until Salvador takes responsibility for his false and malicious claim concerning "censorship" by me and addresses the specific points I raised in criticism of Meyer 2004 where I cited E&S 2003, I don't expect to engage Salvador on much of anything.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Ivar



Posts: 4
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,03:38   

Ivar originally wrote:
Quote

To reiterate a comment made on the ARN board, an object does not have CSI merely because it was made by man.  Dembski's definition of CSI requires that one show that a non-intelligent nature could not create the object, i.e., that it could not be an object resulting from regular and chance events.

Salvador responded:
Quote
I ignored your comment, Ivar, because it didn't even reflect what I was saying.

page 141 of No Free Lunch
Complex Specified Information:  The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.

Dembski's definition here doesn't look like the definition described by Ivar.


From the text on pages 140 and 141 of No Free Lunch:
Quote
This information-theoretic account of complexity is entirely consistent with the account of complexity given in sections 1.3 and 1.5.... It follows that information can be complex, specified, or both. Information that is both complex and specified will be called complex specified information, or CSI for short (see figure 3.2).

From pages 18, 19, and 22 of No Free Lunch:
Quote
Since complexity and probability are correlative notions (i.e., higher complexity corresponds to smaller probability), this question can be reformulated probabilistically: How small does a probability have to be so that in the presence of a specification it reliably implicates design? ....

A probability of 1 in 10^150 is therefore a universal probability bound. [Reference section 6.5 of The Design Inference] A universal probability bound is impervious to all available probabilistic resources that may be brought against it. Indeed, all the probabilistic resources in the known physical world cannot conspire to render remotely probable an event whose probability is less than this universal probability bound.

Complexity is calculated assuming that the "known physical world" generated the event, object, or information in question.  If information (or whatever) is too complex, i.e., too improbable, to be generated by the physical world (and is also specified), only then is design inferred.  Dembski is still relying on his Explanatory Filter (page 13 of No Free Lunch).  In Dembski's world, complexity is undefined when one assumes that a designer did it.
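Ivar's quoted bound can be made concrete in bits (a sketch; the 1-in-10^150 figure is Dembski's, the conversion is my illustration):

```python
import math

# Dembski's universal probability bound: 1 chance in 10^150.
# Converted to an information threshold in bits:
bound_bits = -math.log2(10.0 ** -150)   # = 150 * log2(10)
print(round(bound_bits, 1))             # 498.3

# This is why 500 bits recurs as the specified-complexity threshold:
# 500 bits of specified information corresponds to a probability of
# 2^-500, which falls below the 1-in-10^150 universal bound.
assert 2.0 ** -500 < 10.0 ** -150
```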

Ivar

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,12:46   

Quote


2. Meyer relies on Dembski’s “specified complexity,” but even if he used it correctly (by rigorously applying Dembski’s filter, criteria, and probability calculations), Dembski’s filter has never been demonstrated to be able to distinguish anything in the biological realm — it has never been successfully applied by anyone to any biological phenomena (Elsberry and Shallit, 2003).

3. Meyer claims, “The Cambrian explosion represents a remarkable jump in the specified complexity or ‘complex specified information’ (CSI) of the biological world.” Yet to substantiate this, Meyer would have to yield up the details of the application of Dembski’s “generic chance elimination argument” to this event, which he does not do. There’s small wonder in that, for the total number of attempted uses of Dembski’s CSI in any even partially rigorous way number a meager four (Elsberry and Shallit, 2003).


Point 2.  Successfully applying it to biological phenomena would be a sufficient condition.  If we achieve detection of CSI in bio-engineered agents, then does the CSI concept begin to make headway for you?  If I cited papers in bio-defense and genetic engineering that use CSI (though not exactly by that name, "CSI"), would that suffice to show that CSI can be detected in biology?  

My posts above are there to show that in your paper you don't even represent CSI correctly.  You know CSI exists in bio-engineering (it cannot possibly be otherwise).  By way of extension, particularly in the area of evolutionary convergence, we have plausible candidates for CSI.  I sketched some examples; would you really care for me to elaborate?


Point 3.  Well, that number can increase.  Why?  Sternberg knows about all these "Darwinian epicycles" posed by both morphological and molecular convergence; that is an area ripe for research.  He was right to let Meyer publish a "review article" that can be the basis of further research.  Sternberg rightly said:

Quote

Sternberg on Information

His [Meyer's] paper—by addressing the problem of novel organismal morphologies from an information standpoint—provided insight into why this fundamental problem has not yet been solved. While Meyer presented a controversial alternative hypothesis, he did so in a scientific manner and in a way that advances understanding of why his view has reemerged as an option for some scientists. Overall, his discussion is certainly relevant to current fundamental issues in systematics and paleontology.




Sternberg uses the word "information"; there are ways we can formalize it in terms of CSI.  Do you want to stick around for the sample calculations?  Formulating the arguments in terms of convergence is, I believe, one of the best ways to avoid complaints of postdiction.

Your paper does not even represent CSI correctly, Wesley, so how can it be used as a refutation of Meyer's or anyone's work?  I've gone through the trouble of outlining some of the sample calculations in this thread, and it seems that just as I was getting close to highlighting the most important points in the calculations, you disengaged from the dialogue.  

For your information, the above example with the 1000 coins qualifies as an example of SAI (you are invited to demonstrate otherwise).  Now what sort of "naturally arising simple computational processes" would generate such physical examples?  It is an indefensible assertion on your part that "simple computational processes" generate all such SAI phenomena.  

I've given ideas of how to correct your paper.  You can choose to keep the paper as is.

Also, where are your definitions of Omega, T, and E in all of your supposed counterexamples like TSPGRID?

My feeling is that Shallit did a good job as Dembski's teacher 16 years ago, and I'm surprised a mathematician of his stature would write such a paper on his former student's fine work.

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,13:37   

CSI Detection Application

The above link does not use the term CSI, but it demonstrates that intelligent intervention can be detected in biological agents through essentially eliminative approaches.

What Elsberry's paper fails to grasp is the basic definition and applications of CSI.  


His comment:
Quote

I have no need to believe that anything evidences CSI by Dembski's definition. The reason that I need not believe any such thing is that there has never been a successful application of Dembski's EF/DI via the GCEA meeting or exceeding the "universal small probability" of any event whatsoever. If you had a citation of such a published successful, fully worked-out calculation, I'm sure that you would share that with us.

Until then, it's all just blowin' smoke.


is therefore indefensible.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 20 2004,14:04   

With respect to Salvador's responses to the points made against Meyer 2004, there's loads of hand-waving but absolutely no citations of work that would support setting aside our critique. It doesn't matter what may be done in the future; what matters is that Meyer's reliance upon CSI at the time he wrote his paper (and at the time we wrote ours) was based on ... nothing. No successful calculations showing CSI for any event whatsoever. It's still true today, AFAICT, but the challenge to Salvador is to show that we were wrong in 2003, not we might be wrong in 2753. And ... Salvador's still blowin' smoke.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 20 2004,18:13   

Well,

Ok, Wesley.  I think we've exhausted most of what we'll say on this thread.

I actually provisionally agreed with your critique of the protein CSI, and I will concede that Meyer did not provide detailed calculations in that review for the kinds of CSI that I look for.  If it turns out, however, that we find proteins that have higher improbability, I will happily recant my one major complaint about Meyer's paper.

I'll even concede that CSI has not reached its full maturity and use within the ID community, and that the calculations thus far have not appeared EXPLICITLY in peer review.

Do I conclude that lack of peer-reviewed articles at this stage of the game is sufficient evidence against CSI's viability as a concept?  Absolutely not.  

I have worked in relevant fields where the ability to detect ID artifacts is a given (target recognition).  There was no need to appeal to a peer-reviewed article because it was so obvious that ID detection is possible.  

With bio-reporter engineering, I doubt they will really even bother doing probability calculations each time they make a detection, because the design inference in the bio-reporters is so obvious: novel traits like bio-reporter bioluminescence would not arise via Darwinian evolution.  They can count on the fact that if such a critter is seen, it came from our labs and not from purely Darwinian processes.

An analogous inference is very reasonable for the Cambrian explosion, that the creatures did not arise via Darwinian evolution.


I maintain that you are a gentleman (much more so than I), and you are far better mannered than most I have dealt with.  Sorry we're on opposite sides of these issues.

If you have any further questions for me, you can post them at ARN and send me a private message there.  

I advise you that your paper is badly flawed.  It does not represent what CSI is.  I gave the definitions from page 141 of No Free Lunch and how to calculate it.  If you apply such techniques with care, you'll see you'll have to reject TSPGRID as a counterexample to the LCI.  Further, your SAI has merit, but if you suggest all SAI is creatable through "simple computational processes", I have just provided a counterexample to that claim with the coins above.  No such thing would be possible through "simple computational processes", because:

1.  If a human made such a coin string, it would be an act of intelligence

2.  If a robot (or some machine like it) created the coin strings, such a robot is anything but simple.

You are free to keep insisting your paper is correct, but by doing so you will invite persistent citations of those errors in your paper.  I leave the decision to change or not change the paper in your hands.


Well, thank you for your time.
Salvador

  
Jkrebs



Posts: 590
Joined: Sep. 2004

(Permalink) Posted: Sep. 21 2004,22:20   

Salvador writes,

Quote
With bio-reporter engineering, I doubt they will really even bother doing probability calculations each time they make a detection, because the design inference in the bio-reporters is so obvious: novel traits like bio-reporter bioluminescence would not arise via Darwinian evolution.  They can count on the fact that if such a critter is seen, it came from our labs and not from purely Darwinian processes.


This is a total punt.  It's not a matter of them not "doing probability calculations each time they make a detection, because the design inference in the bio-reporters is so obvious"; it's a matter of them never having done them at all!  That is a very simple fact that you consistently ignore.

Your example of 500 coins is irrelevant.  The issue is detecting design in biological organisms.  No one, ever, has even offered a methodology for showing that CSI exists in biology, much less attempted to implement a methodology, much less shown CSI that exists according to Dembski's definition.

Without going off on a tangent, Salvador, do you agree with these statements, and if not, can you show evidence (not just arguments, but actual evidence) that they are not true?

  
scordova



Posts: 64
Joined: Dec. 2003

(Permalink) Posted: Sep. 22 2004,13:56   

Hi Jack Krebs,

Well, I think your objections have merit.  I for one have sided with the critics on a few points, and I try to give them credit when credit is due.   For example, I've been very positive on Shallit's concept of SAI (that must be Shallit's idea, since he was Dembski's algorithmic information mentor).

I think responding to the challenges in Wesley's paper, as well as the points you just raised, is a good thing.

I direct readers to Response to Elsberry Shallit 2003.  

Anything of extreme relevance I might bring back here to this website, especially since I know ISCID is finicky about who posts there these days.

Thanks to you for pointing out perceived deficiencies in my line of reasoning, and I will do my best to make amendments.


Salvador

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Sep. 22 2004,15:39   

My email to Salvador

Over on ISCID, Salvador says:

Quote

The co-author, Wesley Elsberry, having seen my writings at ARN personally requested I discuss his paper at his website antievolution.org


To help others understand how these things come about, herewith are my emails to Salvador of 2003/12/07 and 08:

Quote


Date: Sun, 7 Dec 2003 14:23:13 -0600 (CST)
From: "Wesley R. Elsberry" <welsberr@vangogh.fdisk.net>
Message-Id: <200312072023.hB7KNDCO018697@vangogh.fdisk.net>
To: _@hotmail.com
Subject: TSPGRID
Cc: welsberr@vangogh.fdisk.net
Reply-To: welsberr@onlinezoologists.com


Thanks for your recent comments on the TSPGRID algorithm. I think, though, that you did not read our description of the TSPGRID algorithm carefully. Please see my response at
http://www.antievolution.org/cgi-bin....=2;t=78

Wesley


Quote

Date: Sun, 7 Dec 2003 14:23:13 -0600 (CST)
From: "Wesley R. Elsberry" <welsberr@vangogh.fdisk.net>
Message-Id: <200312072023.hB7KNDCO018697@vangogh.fdisk.net>
To: _@hotmail.com
Subject: Re: TSPGRID
Cc: welsberr@vangogh.fdisk.net
Reply-To: welsberr@onlinezoologists.com

Thanks for the response. I've responded as well in the AE thread.

I expect criticisms to come with pointy ends. We stated our own criticisms without much in the way of sugar-coating. I think Mayr once said that he wrote his stuff in the mode of dialectic: thesis, expected antithesis, and hoped-for synthesis. It seems a reasonable way to get to where we can be sure of arguments.

Wesley


Edited by Wesley R. Elsberry on Feb. 06 2005,21:04

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Feb. 06 2005,21:27   

The "Forbidden URL"?

Salvador Cordova has been saying some unkind things about me over on ARN. Part of what he's saying is that I can't stand for people to know about a thread on the ISCID bulletin board that he has concerning comments on Elsberry and Shallit 2003. Salvador has taken to calling it "the Forbidden URL".

I'm not sure why, precisely, Salvador wants to do this. In some places, Salvador boasts that I "personally requested" his commentary on our paper. (See above for a response as to how "personal" my communication was.) Elsewhere, Salvador wishes to assert that I instead must somehow "censor" his commentary. Given that we've covered pretty much all of the ground that Salvador has on ISCID here in this thread and I have neither posting privileges nor "censoring" privileges there, I don't see much point in belaboring my points. If people can't see through the rather sophomoric posturing Salvador engages in ("To write a paper to refute CSI and not include the most central definition of CSI is inexcusable", when we extensively critiqued the mathematics that instantiate CSI according to Dembski, for instance), I don't know that further discussion on my part will do much to correct the situation.

As for "censorship" on the Panda's Thumb weblog, I don't believe that I've ever deleted a comment by Salvador. I did move several off-topic comments entered by Salvador to "the Bathroom Wall" thread, which is PT's place for miscellany. I've moved some of my own posts there, so I certainly do not concur with Salvador that this constitutes "censorship".

In any case, "the Forbidden URL" isn't so much "forbidden" as it is irrelevant to the various threads that Salvador posted to on PT, redundant to the present thread here, and inaccessible to me for responses in any case. (Not that I have any great desire to post on ISCID. This BB is perfectly fine, and I have the added benefit of knowing that my posts won't just happen to disappear here.) Which makes the claim of "censorship" on my part ring somewhat hollow, I think.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Feb. 06 2005,22:04   

Salvador and the Meyer paper

Here's something from the comments at PT that I'd like to remind Salvador of. If his claims were correct, he should have been able to demonstrate that convincingly a long time ago. His silence has been eloquent.


=== http://www.pandasthumb.org/pt-archives/000484.html#c7763 ===

Salvador T. Cordova:

Quote

Sternberg’s professional qualifications in relevant fields, it seems, exceed even those of Gishlick, Elsberry, Matzke combined.  So I hope that will be taken into consideration in view of charges the article is substandard science.


The credentials of Sternberg don't change the content of Meyer 2004. That's pure argument by authority, and it just doesn't work in science.

The similar situation with regard to antievolutionist fascination with (mis)quotation is something I've commented upon before:

Quote

The antievolution fascination with quotations seems to stem from the anti-science mindset of "revelation": testimonial evidence reigns supreme in theology, thus many antievolutionists may mistake that condition as being the same in science. However, science has pretty much eschewed assigning any intrinsic worth to testimonial evidence. Quotations from some source are taken as being an indication that some condition as stated holds according to the reliability of the speaker, as seen by reviewing the evidence. Antievolutionists "get" the first part, but have real difficulty coming to terms with the second part. If some Expert A says X, then the antievolutionist expects that no lesser known mortal will dare gainsay Expert A's opinion on X. However, such a situation is routine in science. Anyone presenting Evidence Q that is inconsistent with X then has shown Expert A to be incorrect on X. If the person holding forth shows repeatedly that they can't be trusted to tell us correct information on, say, trilobites, then that just means that we likely don't hold any further talk on trilobites from that source in high regard.


http://www.antievolution.org/people/wre/quotes/

We pointed out problems with Meyer 2004. The issue is whether our criticism stands up to scrutiny. Salvador has avoided dealing with the content of our criticism, and is apparently forced to adopt fallacious modes of argumentation to defend Meyer 2004.

I've pointed out to Salvador exactly what he needs to do to show that his boasting about the Elsberry and Shallit 2003 paper being the wrong citation to critique Meyer 2004 by was on track. These items are things that if I were wrong about, Salvador should quickly be able to show that I was wrong on. This is the FOURTH TIME I've entered this in response to Salvador's comments here since August 31st. I'll email them to him, too, just to eliminate any weak apologetic that he had somehow overlooked the previous presentations.

===================

(From http://www.pandasthumb.org/pt-archives/000430.html#c7223 )

Salvador T. Cordova:
Quote

In the meantime, I hope Stephen Meyers will read these reviews and learn.  I can confidently say he can ignore any challenges offered by the “Elsberry and Shallit 2003” paper.  I don’t mind you guys building your case on it though. It’ll just be that much more of an embarrassment to see it all collapse when that paper is refuted.


It doesn’t matter if “the paper” is “refuted”; what matters is whether the particular claims made are supported and true. Here are the claims again:

Quote

2. Meyer relies on Dembski’s “specified complexity,” but even if he used it correctly (by rigorously applying Dembski’s filter, criteria, and probability calculations), Dembski’s filter has never been demonstrated to be able to distinguish anything in the biological realm — it has never been successfully applied by anyone to any biological phenomena (Elsberry and Shallit, 2003).

3. Meyer claims, “The Cambrian explosion represents a remarkable jump in the specified complexity or ‘complex specified information’ (CSI) of the biological world.” Yet to substantiate this, Meyer would have to yield up the details of the application of Dembski’s “generic chance elimination argument” to this event, which he does not do. There’s small wonder in that, for the total number of attempted uses of Dembski’s CSI in any even partially rigorous way number a meager four (Elsberry and Shallit, 2003).


In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (2), all one has to do is produce a citation in the published literature (dated prior to our paper) showing a complete and correct application of Dembski’s GCEA to a biological system such that “CSI” is concluded. Thus far, I’m unaware of any such instance. The only thing that makes any moves in that direction at all is Dembski’s section 5.10 of “NFL”, and we were careful to make clear why that one was both incomplete and incorrect.

In order to demonstrate that Elsberry and Shallit 2003 is incorrect on point (3), all one has to do is produce citations in the published literature (dated prior to our paper) showing the attempted application of Dembski’s GCEA to more than four cases. I’m unaware of any further examples that have been published, but I’m perfectly open to revising our number to account for all the instances.

Until and unless those citations are forthcoming, the braggadocio about how the Elsberry and Shallit 2003 paper can be safely ignored seems somewhat out of place.

=====

I posted that on August 31st. As far as I can tell, neither Salvador nor any other ID advocate has made the slightest headway in showing that I was inaccurate in either claim made above. Salvador has taken up an aggressive grandstanding technique, though I think that it is obvious to all that there is little to no substance as yet to back it up. If I were wrong on the two points above, it seems to me that it would be simplicity itself for some ID advocate to show that I was wrong, and I would have expected that to happen already. I predict that what I've written here will again disappear into the ID memory hole of inconveniently true criticisms.

If I'm wrong here, though, I'm willing both to take my lumps and acknowledge whoever it is that shows me to be wrong. I'm still waiting for the documentation. I suspect I will wait a long, long time.

Edited by Wesley R. Elsberry on Feb. 06 2005,22:05

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
charlie d



Posts: 56
Joined: Oct. 2002

(Permalink) Posted: Feb. 07 2005,08:26   

I think it's worth noting that Dembski himself is now on the record as stating that existing EF-related probability calculations such as those he made for the flagellum in NFL are "incomplete and sloppy ".  In his "Reply to Henry Morris", Dembski says:
Quote
Nonetheless, I found the probabilistic reasoning in the creationist literature incomplete and sloppy. For instance, authors often referred to the probability of the chance formation of a particular protein, but failed to note that the relevant probability was that of any protein that performed the same function (this is a much more difficult probability to calculate, and one with which recent ID research has been having some success).
I have no idea what recent "success" Dembski is alluding to, but most definitely his flagellum calculations did not consider the possibility of alternative forms of flagellar proteins capable of performing the same functions.  Instead, he took a straightforward, chance-alone, classical creationist "tornado in a junkyard" approach.  

Thus, the claim that the EF "has never been successfully applied by anyone to any biological phenomena" is now supported by the EF author himself.  I think Sal would have a really hard time contradicting Dembski on that one.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Feb. 07 2005,11:30   

Actually, I don't think that Dembski means that his calculations are sloppy. He will say that he doesn't include his works in "the creationist literature". We still have to point out that his "calculation" for the E. coli flagellum is incomplete (according to Dembski's own statements of what components make up the use of the GCEA) and incorrect. But I'm quite comfy in stating that.

Edited by Wesley R. Elsberry on Feb. 07 2005,11:31

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
PaulK



Posts: 37
Joined: June 2004

(Permalink) Posted: Feb. 07 2005,17:56   

I believe that there is a problem with Dembski's view of specification that is not adequately accounted for in the paper.

Dembski allows a specification that is read off from the result, provided it meets the requirements he has established. However, this is not equivalent to providing a specification in advance.

Consider a sequence of 500 bits.  If we specify the full sequence in advance then the probability of hitting it by pure chance is 2^-500.

If on the other hand we read the specification off from the result, the probability we should consider is the probability of producing one of the sequences of 500 bits fully defined by a specification that Dembski would consider valid. Thus to know the probability of achieving such a result by chance we should not consider the probability of producing that particular sequence - we must consider the probability of producing ANY ONE of those sequences. But how many are there? This is where your point that "specification" is quite loosely defined hits hardest: the more sequences that have specifications meeting Dembski's criteria, the greater the probability of hitting one by pure chance. One is reminded of the mathematical "proof" that there are no uninteresting numbers*.

This represents a very serious problem with Dembski's method, and dealing with it will add very greatly to the (already huge) amount of work that must be done to apply Dembski's criterion.


*The proof relies on the idea that being the first uninteresting number is itself an interesting property.  Thus there can be no "first uninteresting number".
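PaulK's point can be made concrete with a toy enumeration (my own illustration, not from the paper, and at a feasible string length of 12 bits rather than 500): each after-the-fact specification picks out a string, and the relevant chance hypothesis probability is that of landing on *any* specified string, which grows with the number of admissible specifications.

```python
from itertools import product

L = 12  # toy string length (2^-500 is the scale in the post; 2^-12 keeps enumeration feasible)

# A toy family of "read-off" specifications, each picking out exactly one
# L-bit string: all zeros, all ones, and the two alternating patterns.
specs = [
    lambda s: all(b == 0 for b in s),                      # 000...0
    lambda s: all(b == 1 for b in s),                      # 111...1
    lambda s: all(s[i] == i % 2 for i in range(L)),        # 0101...
    lambda s: all(s[i] == (i + 1) % 2 for i in range(L)),  # 1010...
]

# Probability of one fixed, specified-in-advance target under a uniform draw:
p_single = 2 ** -L

# Probability of landing on ANY string that admits some specification:
hits = sum(1 for s in product((0, 1), repeat=L) if any(spec(s) for spec in specs))
p_any = hits / 2 ** L

print(hits, p_any / p_single)  # 4 distinct specified strings -> 4x the single-target probability
```

With only four toy specifications the inflation is a factor of four; the open question PaulK raises is how large this factor becomes when every description Dembski would accept as a valid specification is counted.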

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Feb. 08 2005,06:21   

Complaints by Salvador Cordova

I posted a comment to the Bathroom Wall on PT that included a link to Salvador's supposedly "Forbidden URL". Salvador made this comment on ARN:

Quote

Thanks Wes for linking to my thread from the bathroom wall of PT where no else will read.


It is a certainty that Salvador made this comment in ignorance. He has no information about the relative popularity of pages at PT. And that leads to an amusing circumstance.

Salvador noted a particular post of his that had been moved to the Bathroom Wall from a thread.

Quote

Well why wouldn't one think so. PvM opens a thread, posts my name on it, I respond, and he deletes the thread to the bathroom wall to where few will even read and where the post is out of place.

http://www.pandasthumb.org/pt-archives/000525.html#c10599


That was in response to :
http://www.pandasthumb.org/pt-archives/000602.html

My response was perfectly relevent. He deleted it. His pre-rogative, and my pre-rogative to complain.


In fact, the opening post by Pim does not mention Salvador by name. Salvador is mentioned in the comments that followed that post. I'm afraid I'm not seeing how the comment Salvador lists is relevant to Pim's opening post. In fact, Salvador's comment was not deleted; it was moved to a thread where it was not off-topic. It should be noted that despite Salvador's claims that any mention of "the Forbidden URL" was moved to the Bathroom Wall, we have this comment of Salvador's within the thread he was complaining about being shut out of.

So I looked at the logs, and here is what I found. The Icons of ID: Argument from Ignorance and other logical fallacies article has been accessed 1,385 times. The Bathroom Wall where Salvador's comment was moved to has been accessed 4,443 times. Salvador is complaining about having his comment moved to a page that was accessed more than three times as often. It seems to me that 4,443 accesses is rather a lot of "no one else" having a read.

It's reassuring to have an antievolutionist shoot himself in the foot so convincingly. Perhaps the wages of ignorance are, sometimes, a public faux pas of this grand scale.

Edited by Wesley R. Elsberry on Feb. 08 2005,13:34

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Aug. 27 2005,20:25   

This is a reply to a comment by Salvador over on Panda's Thumb.


Sal,

Yes, that is amusing. Wrong again, but amusing.

As to definitions, I have repeatedly made the point that what CSI is depends upon how it is recognized, which is a property (allegedly) of the math Dembski has given. The “physical/conceptual” text is a descriptive interpretation of what the math defines. It is not, itself, the definition. We addressed the math. We didn’t address every handwaving description Dembski wrote.

As to “omega”, Sal is utterly confused. There are two different uses of “omega” in Dembski’s stuff. In The Design Inference, “omega” refers to “probabilistic resources”, a mapping function that yields “saturated” probabilities and events. TSPGRID doesn’t change “omega”_TDI, contrary to Sal’s claim. In No Free Lunch, “omega” is the “reference class of possible events”. TSPGRID is incapable of “increasing omega” by its operation.

Dembski discusses calculation of “omega” on p.52 of NFL. There, he gives the example of a six-sided die rolled 6,000,000 times. His “omega” for this “event” is “all 6-tuples of nonnegative integers that sum to 6,000,000”. In other words, “omega” includes every possible way that one could roll a die 6,000,000 times. In general, if one rolls an n-sided die k times, “omega” is k*n. (This is for the case in which only the distribution of rolls matters, which is the context of Dembski’s example, and not the sequence of rolls. For a sequence of die rolls, “omega” becomes n^k.)
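The distribution-versus-sequence distinction in the parenthetical can be checked by brute force at small scale (a sketch of my own, using n=6 and k=3 as a tiny stand-in for Dembski's 6 and 6,000,000): many sequences of rolls collapse to the same distribution, and the set of distributions — the n-tuples of nonnegative integers summing to k — is counted in closed form by the standard stars-and-bars identity.

```python
from collections import Counter
from itertools import product
from math import comb

n, k = 6, 3  # an n-sided die rolled k times

# Every possible *sequence* of rolls:
sequences = list(product(range(1, n + 1), repeat=k))
assert len(sequences) == n ** k  # 6^3 = 216

# Collapse each sequence to its *distribution* (how many times each face
# came up), i.e. an n-tuple of nonnegative integers summing to k:
distributions = {tuple(Counter(seq).get(face, 0) for face in range(1, n + 1))
                 for seq in sequences}

# Stars and bars counts those n-tuples in closed form: C(k+n-1, n-1)
print(len(sequences), len(distributions), comb(k + n - 1, n - 1))
```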

As for Sal’s claim that TSPGRID “increases omega as it outputs data”, that’s just silly. One does have to take into account the number of runs of TSPGRID, just as Sal takes into account the number of coins in his idée fixe. Sal’s objection to TSPGRID is exactly the same as objecting to coin-stacking on the grounds that he “increases omega as he adds coins”.

Sal says that we didn’t give “omega” for TSPGRID. This is literally true, but we do expect some minimal competence from our readers. The “omega”_NFL for TSPGRID with 4n^2 nodes run k times stated in the same way as Dembski’s dice example is “all (4n^2)!-tuples of nonnegative integers that sum to k”, or, more simply, k*(4n^2)! as anyone with a clue should be able to work out from the information that we gave. If you change n or k, you get a different “omega”, just as you get a different “omega” if you stack dice instead of coins, or stack a different number of dice or coins. Once n and k are fixed, as in some specific instance of one or more runs of TSPGRID to be analyzed as an “event” in Dembski’s parlance, “omega” is fixed as well.
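Taking the figure as stated in the post at face value, the arithmetic the paragraph asks of the reader is a one-liner (a sketch using the k*(4n^2)! expression given above; the function name is my own):

```python
from math import factorial

def omega_tspgrid(n: int, k: int) -> int:
    """Size of "omega" for k runs of TSPGRID on a 4n^2-node instance,
    using the k * (4n^2)! figure as stated in the post."""
    nodes = 4 * n * n
    return k * factorial(nodes)

# Once n and k are fixed, "omega" is fixed; change either and it changes.
print(omega_tspgrid(2, 3))  # 16 nodes run 3 times -> 3 * 16!
```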

So Sal’s random charge of “error” here is just as amusingly inept as his previous outings. It seems that Sal is not well acquainted with Dembski’s work, as “omega” is not all that mysterious. I suspect that Sal “knows” that the TSPGRID example just “has” to be wrong, therefore, any scattershot objection made will do. But if TSPGRID were actually wrong, and Sal were actually capable of analyzing it, he would have come up with a valid objection in the first place, and not have had to resort to flinging any odd objection at hand and hoping something sticks. So far there has been the “a deterministic version of TSPGRID doesn’t output CSI!” objection (which is why TSPGRID is non-deterministic), the “TSPGRID doesn’t provide PHYSICAL information!” objection (though several of Dembski’s own examples share this “error” and a run of TSPGRID or any other algorithm certainly is physical), and now the “you didn’t say what Omega was!” objection (where “omega” is easily calculated given the information we provided).

But I guess I will have to make do with amusement at further instances of random objections.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Dec. 12 2018,18:39   

It's been another thirteen years. "iscid.org" is defunct, the ARN forums have long since gone the way of the dodo, and as far as I know, Sal thinks his various errors are still good responses.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
  52 replies since Nov. 12 2003,08:26 < Next Oldest | Next Newest >  
