Since ARN's moderation is twitchy and its archived threads occasionally disappear from view, I'm archiving two posts I made there on irreducible complexity. This is the first.
In another thread jon_e provided a link to a recent paper by Dembski revisiting irreducible complexity. jon_e was making the point that "irreducible complexity" is alive and well in ID. I had previously scanned the paper but had not read it carefully. Looking again at it tonight, I see that Dembski has made a significant change in Behe's original conception of irreducible complexity, a change that eviscerates the utility of "irreducible complexity." Rather than being alive and well, in the light of Dembski's new paper irreducible complexity is dead on arrival.
To realize the nature of the change, it's first necessary to know what an "operational definition" is. Very briefly, an operational definition is a description of the procedures (operations) used to measure the value of a variable. So, for example, an operational definition of "temperature" is a description of how temperature is measured -- the apparatus used, the conditions that apply, and the steps performed in making the measurement. The Methods sections of research papers contain explicit or implicit operational definitions of the variables under study.
With respect to any system, "irreducible complexity" is a variable that takes one of two values, 1 or 0 -- present or absent, true or false. So an operational definition of irreducible complexity is a description of the steps carried out to determine whether a given system is or is not IC. In Behe's original conception, the IC value for a system is assigned to be "1" (true) if the loss of any part/element/component prevents the system from performing the primary function that it performs when it is whole -- a 'knock-out' operation -- and "0" (false) otherwise. So Dembski wrote in 1998, two years after DBB (Darwin's Black Box):
|Central to his [Behe's] argument is his notion of irreducible complexity. A system is irreducibly complex if it consists of several interrelated parts so that removing even one part completely destroys the system’s function.|
The operation specified for determining IC is to knock out a part and see whether the system still works: that's the operational definition of "irreducible complexity".
|Also, whether a biochemical system is irreducibly complex is a fully empirical question: Individually knock out each protein constituting a biochemical system to determine whether function is lost. If so, we are dealing with an irreducibly complex system. Experiments of this sort are routine in biology. |
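Behe's original operational definition really is that simple a procedure, and it can be sketched as one. The following is a minimal illustration, not anything from Behe or Dembski: the part names and the `works` predicate are hypothetical stand-ins for the empirical knockout experiments.

```python
def is_irreducibly_complex(parts, works):
    """Return 1 (IC) if knocking out any single part destroys the basic
    function, else 0.

    parts: a collection of part identifiers
    works: a predicate taking a set of parts and reporting whether a system
           built from exactly those parts performs its basic function
           (this stands in for the actual knockout experiment)
    """
    whole = set(parts)
    if not works(whole):
        return 0  # the intact system must perform the function to begin with
    # Knock out each part individually; IC requires every knockout to fail.
    return int(all(not works(whole - {p}) for p in whole))

# Toy example: a mousetrap-like system that functions only when complete.
MOUSETRAP = {"platform", "spring", "hammer", "catch", "holding_bar"}
print(is_irreducibly_complex(MOUSETRAP, lambda s: s == MOUSETRAP))  # 1
```

Note that this procedure terminates: there are only as many knockouts to perform as there are parts, which is why Dembski could call the question "fully empirical."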
In the recent paper referenced by jon_e, though, Dembski adds another operation to the procedure used to determine the value taken by IC:
|Thus, removing parts, even a single part, from the irreducible core results in complete loss of the system’s basic function. Nevertheless, to determine whether a system is irreducibly complex, it is not enough simply to identify those parts whose removal renders the basic function unrecoverable from the remaining parts. To be sure, identifying such indispensable parts is an important step for determining irreducible complexity in practice. But it is not sufficient. Additionally, we need to establish that no simpler system achieves the same basic function. (Emphasis added)|
That last criterion is an IC killer, at least empirically. One must show that no system that is simpler than the system under analysis can perform the function performed by the system under analysis. (I'm leaving aside the other change, the "rearranging and adapting remaining parts" addition to the original knockout operation. That change also raises problems for determining the value taken by IC.)
|To determine whether a system is irreducibly complex therefore employs two approaches: (1) An empirical analysis of the system that by removing parts (individually and in groups) and then by rearranging and adapting remaining parts determines whether the basic function can be recovered among those remaining parts. (2) A conceptual analysis of the system, and specifically of those parts whose removal renders the basic function unrecoverable, to demonstrate that no system with (substantially) fewer parts exhibits the basic function. (Emphases added)|
Note carefully that it's not sufficient to show that some subsystem of the system under analysis can't perform the function; according to Dembski it is necessary to show that no simpler system can perform it, regardless of whether that simpler system resembles the system under analysis or not.
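The contrast between the two criteria can be made vivid by trying to write each one down as a procedure. Criterion (1) is at least checkable in principle, since it quantifies over subsets of the system's own parts, a finite collection. (I'm setting aside the "rearranging and adapting" clause here, since it's not clear how to operationalize it.) Criterion (2) has no such procedure. This sketch is my own illustration, not Dembski's:

```python
from itertools import combinations

def criterion_1(parts, works):
    """Finite check: the whole system works, but no proper subset of its
    parts performs the basic function."""
    whole = set(parts)
    for r in range(len(whole)):           # every proper-subset size
        for subset in combinations(whole, r):
            if works(set(subset)):
                return False              # some subsystem suffices
    return works(whole)

# Criterion (2) -- "no simpler system achieves the same basic function" --
# would have to range over every conceivable simpler system, not just
# subsystems of this one. There is no enumeration of that domain, so no
# analogous terminating check can be written at all.
```

That asymmetry is the whole problem: criterion (1) is operational, criterion (2) is not.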
It might be thought that I'm giving Dembski's words an uncharitable reading, but that's belied by Dembski's own example, sandwiched between the two quotations above:
Please pause and think about that for a moment.
|Consider, for instance, a three-legged stool. Suppose the stool’s basic function is to provide a seat by means of a raised platform. In that case each of the legs is indispensable for achieving this basic function (remove any leg and the basic function can’t be recovered among the remaining parts). Nevertheless, because it’s possible for a much simpler system to exhibit this basic function (for example, a solid block), the three-legged stool is not irreducibly complex.|
Now continue reading.
On Behe's original operational definition, that three-legged stool is irreducibly complex: remove any of the four components (three legs and the seat -- Dembski forgot to mention the seat) and it can no longer function "to provide a seat by means of a raised platform". But under Dembski's revised operational definition, the three-legged stool is not irreducibly complex, because some simpler system (one that contains none of the original stool's parts) can perform that same function.
As a result, in order to show that a system is IC, intelligent design "theorists" must show not only that the system fails to perform its function when any part is removed, but they must also show that no other simpler system can perform that function. That is, they must establish a universal negative. And (ask your friendly neighborhood logician) it is impossible to establish a universal negative. (Hint: black swans.) Dembski is back in the inductive soup. On Dembski's new operational definition, not even Behe's mousetrap is irreducibly complex!
In my less-than-humble opinion, in revising irreducible complexity's operational definition Dembski has thoroughly gutted the notion.
Edited by RBH on Mar. 27 2005,21:12
"There are only two ways we know of to make extremely complicated things, one is by engineering, and the other is evolution. And of the two, evolution will make the more complex." - Danny Hillis.