  Topic: Avida simulation of IC evolution, Links to resources, discussions, etc.
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 13 2003,00:23   

This thread is intended to serve as a central reference for disparate net resources and discussions on this recent paper:

Quote

Nature 2003 May 8;423(6936):139-44
 
The evolutionary origin of complex features.

Lenski RE, Ofria C, Pennock RT, Adami C.

Department of Microbiology & Molecular Genetics, Michigan State University, East Lansing, Michigan 48824, USA.

A long-standing challenge to evolutionary theory has been whether it can explain the origin of complex organismal features. We examined this issue using digital organisms - computer programs that self-replicate, mutate, compete and evolve. Populations of digital organisms often evolved the ability to perform complex logic functions requiring the coordinated execution of many genomic instructions. Complex functions evolved by building on simpler functions that had evolved earlier, provided that these were also selectively favoured. However, no particular intermediate stage was essential for evolving complex functions. The first genotypes able to perform complex functions differed from their non-performing parents by only one or two mutations, but differed from the ancestor by many mutations that were also crucial to the new functions. In some cases, mutations that were deleterious when they appeared served as stepping-stones in the evolution of complex features. These findings show how complex functions can originate by random mutation and natural selection.


The pdf of the paper, the supplementary information, links to Avida, etc., are all freely available online here:

http://myxo.css.msu.edu/papers/nature2003/
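
A note on the target function: EQU (bitwise equality) is the most complex of the logic functions rewarded in these experiments, and NAND is the only primitive logic operation available to the organisms, so performing EQU requires many coordinated instructions. Here is a minimal Python sketch of one way to compose EQU purely from NANDs - purely illustrative, not the paper's genome encoding or Avida's actual instruction set:

Code Sample

def nand(a, b):
    """32-bit bitwise NAND, the only primitive logic operation."""
    return ~(a & b) & 0xFFFFFFFF

def equ(a, b):
    # NOT x     == NAND(x, x)
    # x AND y   == NOT NAND(x, y)
    # x OR y    == NAND(NOT x, NOT y)
    # EQU(a, b) == (a AND b) OR ((NOT a) AND (NOT b))
    not_a = nand(a, a)
    not_b = nand(b, b)
    a_and_b = nand(nand(a, b), nand(a, b))
    na_and_nb = nand(nand(not_a, not_b), nand(not_a, not_b))
    return nand(nand(a_and_b, a_and_b), nand(na_and_nb, na_and_nb))

# Matching bits -> 1, mismatched bits -> 0:
assert equ(0b1100, 0b1010) & 0b1111 == 0b1001

Every one of those nested NAND calls corresponds to an instruction that has to be acquired and wired up correctly, which is what makes EQU the hard target.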

Links to discussions:

ISCID: Nature refutes ID [?]

ARN: The evolutionary origin of complex features

IIDB: A-life and the evolution of "complex functions"

t.o. thread 1
t.o. thread 2


Other related material:

Pennock's role in the ID debate is well-known.  Lenski evidently wrote a letter on the NY Times' ID article:

http://www.jodkowski.pl/ip/Letters003.html

  
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 13 2003,16:45   

Another author makes the point that this work was going after IC (note that the Nature paper postdates this talk abstract by about a year):

Quote

http://www-smi.stanford.edu/events....43.html

The Evolution of Complex Features in Digital Organisms
Date: April 8, 2002
Time: 11:30 AM to 1:00 PM
Location: Lee B. Lusted Library (MSOB, Conference Room x275)
Speaker: Dr. Charles Ofria, Research Assistant Professor, Center for Microbial Ecology
Affiliation: Research Assistant Professor, Center for Microbial Ecology, joint with the Computer Science Department, Michigan State University, http://www.cse.msu.edu/~ofria/

Ever since Darwin first published his theory of evolution by natural selection, critics have focused on the theory's apparent inability to explain the evolutionary origin of complex traits. Indeed, the issue of how "irreducibly complex structures" can arise has become the primary focal point of Intelligent Design Theory, a recent branch of creationism.

To address evolutionary questions such as this, we developed Avida, a software-based research platform. In Avida, Darwinian evolution acts on a population of self-replicating computer programs with a Turing-complete genetic basis. These programs (or "digital organisms") experience natural selection through random mutations and competition for the limited resources that are required for them to express (execute) their genome. The digital organisms evolve entirely new genes used to interact with their environment and each other. The specific adaptations incorporated into the genomes in each experiment are unique, unpredictable, and could even seem inspired to an outside observer.

I shall discuss the design of the Avida system and give an overview of the types of experiments it allows us to perform, going into detail about our work on the evolutionary origin of complex traits.
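
As a rough illustration of the loop that abstract describes - self-replication with copy errors, and faster execution ("merit") for organisms performing rewarded tasks - here is a deliberately tiny toy model. Everything in it (the alphabet, the stand-in merit function, the constants) is invented for illustration; real Avida's virtual CPU, instruction set, and scheduler are far richer:

Code Sample

import random

INSTRUCTIONS = "abcdefgh"     # toy instruction alphabet (Avida has 26)
GENOME_LEN, POP_SIZE = 20, 50
COPY_ERROR_RATE = 0.02        # per-site mutation rate during self-copying

def toy_merit(genome):
    # Stand-in for Avida's task rewards: real organisms gain merit by
    # performing logic functions (NOT, AND, ..., EQU) on input numbers.
    return 1.0 + len(set(genome))

def replicate(genome):
    # Self-copy with occasional copy errors (point mutations).
    return "".join(random.choice(INSTRUCTIONS)
                   if random.random() < COPY_ERROR_RATE else site
                   for site in genome)

population = ["".join(random.choices(INSTRUCTIONS, k=GENOME_LEN))
              for _ in range(POP_SIZE)]

for update in range(500):
    # Higher-merit organisms replicate more often; each offspring
    # overwrites a random cell, as in Avida's fixed-size grid.
    parent = random.choices(population,
                            weights=[toy_merit(g) for g in population])[0]
    population[random.randrange(POP_SIZE)] = replicate(parent)

print("best merit:", max(toy_merit(g) for g in population))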

  
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 13 2003,16:59   

Slashdot discussion

Breeding Computer Code

Studying Evolution with Digital Organisms

  
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 13 2003,17:10   

Heh.  There is an actual post by one of the authors (Ofria) on Slashdot here:

Evolution and Avida (from one of the authors)

Quote

Evolution and Avida (from one of the authors) (Score:1)
by mercere99 (193493) on Thursday May 08, @07:55PM (#5915074)
(http://www.cse.msu.edu/~ofria/)  
The parts of working on this that have amazed me most are when the evolution doesn't go as I've planned. In particular, when writing Avida, the best debugging technique I have is to just run it, and then see how some of the evolved organisms work. If I made any mistakes, they will find a way to exploit my errors.

One key thing about Avida is that it's not exactly a genetic algorithm. The digital organisms must self-replicate. No matter how skilled they are at performing any of the rewarded computations, if they can't also copy themselves their genetic material will not make it into the next generation. Some people may consider this a minor difference, but it causes certain side effects, such as evolution toward being more robust to mutations (see this [space.com] previous space.com article), and in general helps prevent the population from running into a complexity barrier.

Now, back to the organisms exploiting any mistakes I've made. The story that convinces most biologists that these organisms can, in some sense, be considered alive is this. I was working on a project where I didn't want any more beneficial mutations to be able to occur, so that I might be able to study more ecological effects. Since I'm using a computer system, I actually have the ability to fully analyze every mutation as it happens -- I can take the resulting organism, start up a new population, and run it for a while to see how it does. Now obviously this will slow down an experiment tremendously, but if I'm willing to take such a time hit, it will work. I can then take any mutation that would be beneficial and kill off those organisms (or even just revert that mutation). I implemented it and set it up to run.

What happened? Well, I watched the run as it was going (looking particularly at these test environments), and was surprised to see that the fitness of the organisms appeared to be dropping. I couldn't understand it; even though there would be no mutations to improve their ability to survive, fitness still shouldn't drop. So I looked more carefully at the population itself, and there the fitness appeared to continue to rise. In neither case did I see the stasis that I expected.

Upon further investigation, it turned out that the organisms had evolved a way to distinguish between the test environments and the real one. I had made a slight difference in how I gave them the input numbers for the computations they needed to be able to perform, thinking it wouldn't really matter in the end. But from the organisms' perspective this was something they could use -- if the inputs looked like those from a test environment, the organisms would purposefully not perform any tasks, while if they were from a real population they would do all of the rewarded tasks they could and continue to adapt to perform more.

It shocked me that they were able to figure this out so easily. A biologist friend of mine equated it to predator avoidance -- if they showed a particular behavior they would be killed for it, so they were careful how they did it. Kind of like how if a squirrel isn't careful collecting nuts, a bird might swoop down and get it. Even being careful there is an occasional problem, but they can do quite well for themselves.

I went in and fixed it: I made both the real population and the test environment as random as possible, and started my runs up again.

Did this work? Of course not. What they started to do now was just play a probability game. They would usually do all of their tasks, but sometimes they would do none of them. If an otherwise beneficial mutation happened to occur while they weren't doing tasks, it would slip through and get into the population, never to be checked again. This really slowed down the rate of evolution (most beneficial mutations were purged), but enough still slipped through.

I am actually at a loss on how to get rid of them all! Here I have this system that should make experimental evolution all the easier because I have "complete control" over it, when in truth life does always seem to find a way.

Charles Ofria  
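
That "probability game" is easy to see in a toy simulation. In the sketch below, every new mutant is tested and reverted if it looks beneficial, but organisms express their tasks only with some probability, so a beneficial mutant that happens to be silent during its test slips past the filter. All the numbers are made up for illustration; this is not Ofria's actual experimental setup:

Code Sample

import random

N_MUTANTS = 10_000
P_BENEFICIAL = 0.10    # fraction of mutations that are beneficial
P_EXPRESS = 0.90       # organisms usually perform their tasks...

purged = slipped = 0
for _ in range(N_MUTANTS):
    if random.random() >= P_BENEFICIAL:
        continue                 # neutral/deleterious: filter ignores it
    # The experimenter's filter only sees a fitness gain if tasks are
    # expressed during the test run; silent mutants look neutral.
    if random.random() < P_EXPRESS:
        purged += 1              # detected as beneficial and reverted
    else:
        slipped += 1             # enters the population, never checked again

print(f"purged {purged}, slipped through {slipped}")

Most beneficial mutations get purged (so evolution slows down), but enough slip through - just as the post describes.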

  
Dr.GH



Posts: 2333
Joined: May 2002

(Permalink) Posted: May 17 2003,18:37   

I gotta say that that last post made me a bit unsettled.

"Der ar' tings man is not ment to know, Herr Doctor!  Der villagers might riot!"

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
RBH



Posts: 49
Joined: Sep. 2002

(Permalink) Posted: May 24 2003,14:44   

Two contiguous posts from the ISCID thread on the paper referenced above:
Quote
Posted by RBH (Member # 380) on 23. May 2003, 18:17:

Just to remind us all of the 'canonical' definitions of irreducible complexity, these are from ISCID's Encyclopedia:

Irreducible Complexity

Michael Behe's Original Definition:

A single system composed of several well-matched, interacting parts that contribute to the basic function of the system, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin's Black Box, 39)

William Dembski's Enhanced Definition:
A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. (No Free Lunch, 285)

[With a link to "irreducible core"]
Irreducible Core

The parts of a complex system which are indispensable to the basic functioning of the system.

Michael Behe's "Evolutionary" Definition
An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.


The first definition, Behe's original DBB formulation, is clearly an ahistorical one. There is no reference to the past or the pathway to the state of ICness so long as we interpret "basic function" to mean "current function" and assume that a system performs only one function or, if it performs more than one function, we can tell which is "basic." It is also the definition that specifies the operation necessary to classify a system as IC: the knockout procedure. "Interacting" can also be operationally determined by observing correlations between the behaviors of parts. The vagueness is in the term "well-matched." There is no way mentioned in the definition (nor elsewhere in DBB) for 'well-matchedness' to be measured. Hence operationally - that is, experimentally - we have only the knockout procedure and identifying interactions on which to determine IC or not-IC. On Behe's first definition, the programs that evolved to perform EQU meet the two operational criteria - knockout loss of function and interactions. Only the ill-defined "well-matched" stands between the programs and ICness.

Dembski's refinement of Behe's definition introduces two new elements: "basic, and therefore original, function" and "nonarbitrarily individuated parts." The first addition's reference to "original function" introduces history. In order to classify a system as IC we must know that the current function of some system was also its original function. The effect of this move is to definitionally eliminate cooption (which we know to be common in evolution) as a route to an IC system. Hence this definition is restricted to only those systems in which we know cooption did not play a role in the evolution of the system. This definition, in its reference to "irreducible core," preserves the knockout criterion.

The second addition in Dembski's definition is ambiguous. It is a negative prescription ('do not pick parts arbitrarily') but gives no guidance on what is non-arbitrary. In his NFL example of the flagellum, Dembski works with two levels. There's the 'parts of an outboard motor' level - power source, rotor, propeller - and the level of calculation - proteins. There is no clear justification for which level of parts to use for what part (!) of the definition; the choice seems to be arbitrary.

The programs that evolved to perform EQU do not meet Dembski's definition of ICness, since the final function performed by those programs is not the "basic, and therefore original" function. They coopted other functions. While some of those precursor functions are also performed by the final programs, other precursors were sometimes lost along the way. Hence the "original" functions were not always present in the final program.

Behe's "evolutionary" definition also invokes history. It requires that we know the complete pathway by which a candidate IC system evolved, so we can count the number of "unselected steps." This is also interesting for introducing the notion that "irreducible complexity" can take on values other than 0 or 1: "The degree of irreducible complexity is the number of unselected steps in the pathway."

By this "evolutionary" definition the programs that evolved to perform EQU are IC to some degree, since not every step on the path to the programs that performed EQU was "selected." In fact, some steps in at least some of the lineages leading to the final programs were deleterious and hence were selectively disadvantageous - there was selective pressure against them. Hence they display some degree of irreducible complexity.

Thus, depending on the definition one chooses, the programs are IC, not IC, or IC to some degree, and we have no guidance in deciding which it is. Therefore, unless and until Behe, Dembski, et al. settle what IC means, it is useless from the point of view of doing meaningful research.

RBH

and
Quote
 Posted by Argon (Member # 276) on 24. May 2003, 12:06:

RBH writes:
Dembski's refinement of Behe's definition introduces two new elements: "basic, and therefore original, function" and "nonarbitrarily individuated parts." The first addition's reference to "original function" introduces history. In order to classify a system as IC we must know that the current function of some system was also its original function. The effect of this move is to definitionally eliminate cooption (which we know to be common in evolution) as a route to an IC system. Hence this definition is restricted to only those systems in which we know cooption did not play a role in the evolution of the system. This definition, in its reference to "irreducible core," preserves the knockout criterion.

You know, this definition would probably remove things like blood clotting, a large chunk of the immune system, and maybe even parts of the flagellum from the list of IC systems. For example, the blood clotting cascade is composed of numerous proteases that bear striking similarities to other proteases that are both ancestral to the clotting system and which have different "functions" in the cell. Thus the "original function" of most of the components had nothing to do with clotting. Ditto with the immune system. The current functions of flagellar components are mostly propulsion and cell adhesion, but parts of this system might have originated from a protein translocation system or pore. And so the "original functions" of the flagellar system components might not have been the same. How does one actually determine such things in ancient, ubiquitous systems that have undergone strong selection before diversification?

Obviously (to biochemists at least), it's a practical impossibility to be sure of the "original" function of any component. I think Dembski has a Platonic and "separate creation" view of organisms and biology. Ernst Mayr knocks that viewpoint down in a series of books.

In ruling out the possibility of co-opting other components Dembski seems to convert the IC definition into the truism that unevolvable IC systems are unevolvable. After all, does Dembski think that cells acquire large stretches of DNA that spontaneously appear out of nowhere and have no past? Yes, if one wanted to describe a system that had no past as "IC", one could do it. But then that definition would have little to do with the mechanisms by which evolution operates and would thus be orthogonal to the important question of evolvability.

RBH also wrote
Thus depending on the definition one chooses, the programs are IC, not IC, or IC to some degree, and we have no guidance in deciding which it is. Therefore unless and until Behe/Dembski, et al settle what IC means, it is useless from the point of view of doing meaningful research.

Why should they be the final arbiters of what is and isn't IC? Behe spent plenty of time writing his book and developing his ideas. His whole thesis fundamentally relies upon the ability to properly identify IC systems. Dembski also had a long time to develop a mathematical "model" of ICness. They have both observed and participated in many discussions about these definitions and the problems associated with their various criteria. They have also given many talks on the subject. Since both men understand the crucial importance of having useable, workable guidelines when performing research, particularly in a new area, I have little doubt that they would not have let all these years go by without already presenting all the clarifications and important distinctions that they could on this subject.

IC was first defined in Behe's book, DBB. As RBH mentions, it was an ahistorical definition. All subsequent changes made by Behe and Dembski require historical knowledge about a system and thus substantially change the nature of evaluation. I cannot see any good reason why they should bear the "IC" moniker without a subheading to indicate the additional criteria that have been met. For example, it can sometimes be simple to apply Behe's original, operational criteria to determine whether a system is IC. But what this does not tell us is whether such a system was evolvable. Thus we potentially have two classes of IC systems: evolvable or unevolvable. Once a system is determined to be IC, we can now apply additional tests to determine the subclass to which it belongs. Until then it should be placed in a third subclass: "evolvability unknown". Here is how I see the current organization:

Class: IC
* Evolvability status:
  * Unknown
  * Evolvable
    * Criteria type 1: Intermediate steps reproduced.
    * Criteria type 2: Similarity to other evolvable systems.
    * Criteria type 3: etc....
  * Unevolvable
    * Criteria type 1: Requires too many "lucky" neutral mutations. (from Behe IC v2)
    * Criteria type 2: No possible ancestors. (modification of Dembski IC v2)
    * Criteria type 3: Intelligent designer observed.
    * etc...

Personally, I'd prefer that the "IC" label be dropped from the subsequent "redefinitions". I think clever people could invent other, more appropriately descriptive labels that would reflect the actual criteria being applied.

[ 24. May 2003, 12:09: Message edited by: Argon ]
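
The knockout procedure these definitions lean on is simple to state in code. Below is a toy version run against a hand-built straight-line NAND program for EQU (compare the NAND composition sketched earlier in the thread): delete each step in turn and re-test the function. This only illustrates the operational criterion; it is not Avida's actual analysis tooling:

Code Sample

MASK = 0b1111          # 4-bit toy registers

def nand(a, b):
    return ~(a & b) & MASK

# Straight-line program: each step is (destination, source1, source2).
PROGRAM = [
    ("na",  "a",  "a"),    # NOT a
    ("nb",  "b",  "b"),    # NOT b
    ("t1",  "a",  "b"),
    ("ab",  "t1", "t1"),   # a AND b
    ("t2",  "na", "nb"),
    ("nn",  "t2", "t2"),   # (NOT a) AND (NOT b)
    ("t3",  "ab", "ab"),
    ("t4",  "nn", "nn"),
    ("out", "t3", "t4"),   # (a AND b) OR ((NOT a) AND (NOT b)) == EQU
]

def run(program, a, b):
    env = {"a": a, "b": b}
    for dest, s1, s2 in program:
        if s1 not in env or s2 not in env:
            return None            # reads a register no remaining step wrote
        env[dest] = nand(env[s1], env[s2])
    return env.get("out")

def computes_equ(program):
    return all(run(program, a, b) == (~(a ^ b) & MASK)
               for a in range(16) for b in range(16))

assert computes_equ(PROGRAM)

# The knockout test: remove one step at a time and re-test the function.
essential = [i for i, _ in enumerate(PROGRAM)
             if not computes_equ(PROGRAM[:i] + PROGRAM[i + 1:])]
print(f"{len(essential)} of {len(PROGRAM)} steps are indispensable")

By Behe's original operational criterion, every step of this little program belongs to the "irreducible core" - yet the whole thing is just a composition of simpler functions, which is the point at issue.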


--------------
"There are only two ways we know of to make extremely complicated things, one is by engineering, and the other is evolution. And of the two, evolution will make the more complex." - Danny Hillis.

  
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 27 2003,12:53   

Pennock's homepage, where the article can also be downloaded:

http://www.msu.edu/~pennock5/research/publications.html#EvoOrgComFeat

  
niiicholas



Posts: 319
Joined: May 2002

(Permalink) Posted: May 27 2003,13:17   

A great set of Paul Nelson quotes re-posted by GP:

Since joining this forum, I have on occasion asked several IDists to define what it means for something to be designed. I believe the closest answer came when Paul Nelson wrote me:
    quote:
    --------------------------------------------------------------------------------
    For design (as a real cause, and scientific explanation) to have any empirical content, it must have a contrast class -- namely, natural causes. Now, if one requires the logically impossible, i.e., that we exhaust the universe of possible natural causes, one cannot infer design.
    --------------------------------------------------------------------------------

    Even in the critical thread about Dembski's CSI concept, I asked once again what it means to be not designed. And there, Nelson Alonso[+] tells me:
    quote:
    --------------------------------------------------------------------------------
    Indeed, there are many false negatives in Biology that provide fruitful research prospects for the design inference (i.e. sub-optimal design). A true negative can also be given, as for example, the hemoglobin case, where the specification is quite small and natural selection can indeed select functional intermediates.
    --------------------------------------------------------------------------------

    As far as I can see, there is only one common theme running in these two quotations: design is not natural, not evolved. So y'see, RBH isn't the only person with a narrow view of design here when he's in good company with the likes of the Nelsons, nor should he be faulted for picking up on the design notion du jour on an IDist site. Or to bring the point home, if we carry Micah's logic all the way to the other end of the spectrum, everything is designed -- for that is the broadest notion of design possible. And why not? The real shame of it is that design as a concept (as evinced by Nelson) is defined by its antithesis (supposedly natural evolution).


    http://www.iscid.org/ubb....01;p=12

    msparacio



    Posts: 10
    Joined: Feb. 2003

    (Permalink) Posted: May 29 2003,14:31   

    Hey nic.
    The line might get blurry towards the middle, but there has got to be more wiggle room on the design side than the typical "kid playing with his legos" type of design.  That's all I'm trying to say;-)

    Maybe the design/non-design line is an ontological blur.  But it seems that at the extremes, a demarcation can be made, and that the demarcation seems to persist beyond the obvious (strawman design).

    I'm not going to be around long...just popping my head in.

    --------------
    http://www.iscid.org/....cid.org

       
    niiicholas



    Posts: 319
    Joined: May 2002

    (Permalink) Posted: May 29 2003,22:35   

    Another story:

    http://www.eetimes.com/story/OEG20030521S0044

    ...one thing that the story mentions that I hadn't really seen mentioned elsewhere (including the paper IIRC, so I'm a bit suspicious) is that the building up of the functions sometimes "burned its bridges", i.e. eventually no trace of some intermediates can be found in the final program.  Would this be an instance of scaffolding?  It would be cool if so...

      
    niiicholas



    Posts: 319
    Joined: May 2002

    (Permalink) Posted: May 29 2003,22:52   

    Hey Micah, thanks for stopping by...

    I'm guessing you're replying to GP's post that I re-posted.

    I liked that post because GP found several quotes (which I could never find) from Paul Nelson that indicate that he understands the importance of having a fairly bright line between EvolutionDidIt and IDdidIt if you're going to attempt to make a strong argument for ID.

    True, it's always possible that the designer was just being difficult and mimicked/used evolution in various ways.  However, if such things are what is hypothesized, then the assertion that the arguments for ID are strong has to be tossed out the window.

    ID generally claims that design is fairly obvious; my experience has been, however, that when strong arguments for evolution of X are made, IDists don't say "Oh, you know you're probably right, it looks like X probably evolved after all" -- usually what happens is that we get some form of "if you can't beat 'em, join 'em" -- the ID hypothesis morphs and wriggles up to the evolution hypothesis and we get various versions of "the IDer frontloaded evolution" or "mutational mechanism Y was designed so there was still design in X" or other such obscure notions.  ID becomes a kind of parasite, good at hiding from criticism but without content of its own.

    Nelson, at least, sees that being irrelevant is worse than being wrong.

      
    RBH



    Posts: 49
    Joined: Sep. 2002

    (Permalink) Posted: May 30 2003,23:05   

    Nic wrote
    Quote
    ...one thing that the story mentions that I hadn't really seen mentioned elsewhere (including the paper IIRC, so I'm a bit suspicious) is that the building up of the functions sometimes "burned its bridges", i.e. eventually no trace of some intermediates can be found in the final program.  Would this be an instance of scaffolding?  It would be cool if so...

    I'll have to trudge back through the paper again, and probably through a lineage or two or three, to see if that was the case.  I know that the EETimes article had several serious errors in its description of the research, so much so that I emailed Johnson with them, and he asked if they were serious enough to warrant a Letter to the Editor with the corrections.  I said they were and did so - actually, I told him to use my original email as the correction letter.  This is the email I sent:
    Quote
    Dear Mr. Johnson,

    It was good to see your Avida story in EETimes.  However, there are a couple of misconceptions in it that distort the actual research.

    First, the story says "The organism's metabolism consists in the endless execution of the sequence of instructions. Energy from the environment, or "food," is modeled as single-instruction processors (SIPs) that are fed to the CPU. The number of SIPs that a CPU receives is proportional to the length of its tape. Thus, as the CPU becomes more complex in terms of the length of its instruction tape, it is able to get more food from the environment, giving more-complex organisms a competitive advantage."

    "SIPs" are "Single Instruction Processing" units, not "processors."  A SIP is a quantum - unit - of 'energy' that allows processing one instruction.  It is not additional processors.  And providing SIPs in proportion to length does not give longer organisms a competitive advantage.  It actually neutralizes genome length as a selective variable.
    Second, the story says "SIPs introduce new instructions to the CPU, allowing it to grow as well as to reproduce."  Nope.  SIPs have nothing at all to do with introducing new instructions.  That was the role of mutations.

    Third, the story says "The researchers performed evolutionary runs starting with individuals that could replicate themselves but could not perform any logic operations except the simple NAND."

    In fact, the initial organisms (Ancestors) could perform NO logic operations, not even NAND.  NAND was in the instruction set available to be inserted by mutations, but was not in the initial organisms.  Further, the primitive NAND instruction could not by itself perform NAND in the context of a critter's program: It had to be appropriately embedded in a context of other instructions that gave it access to registers and I/O.  By itself, NAND in a critter's genome just sits there.

    Yours,
    RBH

    RBH
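
    A bit of arithmetic makes the first correction vivid: if each organism's SIP allotment is proportional to its genome length, then replication time measured in updates is independent of raw length, and length by itself confers no advantage. A sketch with invented constants (not Avida's actual scheduler):

Code Sample

SIPS_PER_SITE = 1.0   # 'energy' allotted per genome site per update (invented)

def updates_to_replicate(length, work_per_copied_site=3):
    sips_per_update = SIPS_PER_SITE * length      # proportional allotment
    total_work = work_per_copied_site * length    # instructions to copy genome
    return total_work / sips_per_update           # length cancels out

for length in (50, 100, 400):
    print(length, updates_to_replicate(length))   # identical for every length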

    --------------
    "There are only two ways we know of to make extremely complicated things, one is by engineering, and the other is evolution. And of the two, evolution will make the more complex." - Danny Hillis.

      
    RBH



    Posts: 49
    Joined: Sep. 2002

    (Permalink) Posted: June 06 2003,23:08   

    I want to capture this before it gets Modded away on Brainstorms.  The second to last paragraph is the one vulnerable to Modding:
    Quote
    posted 06. June 2003 23:56

    Argon,

    The program above performs EQU, NOT, OR-N, OR, AND-N, and NOR. It does not perform NAND, AND, or XOR. It's difficult to say by inspecting the code which of those simpler functions are necessary components of the code that performs EQU, since in the evolution of the program various primitive instructions become involved in multiple functions. There's an example above - three instructions that are part of the replication code have been recruited to also participate in performing EQU. Thus individual instructions can have multiple roles, contributing to several functions, making program analysis difficult to nigh unto impossible.

    There are other instances of this same kind of difficulty. For example, in the hardware evolution literature there was a flurry of publicity a year or so ago about a (hardware) radio receiver evolving under conditions that were selective for oscillation. In other outcomes of that same study, oscillators evolved as fairly simple circuits in the sense of using relatively few components of known properties that behaved appropriately under the selective conditions. The experimenters, experienced circuit designers themselves, could not figure out how the circuits did it. The outcomes of evolution in these kinds of studies are not simple, not obvious, and certainly not transparent in how they perform their behaviors.

    Mike,

    The principal function of the programs is to perform the logic operation EQU. All 23 programs (147 in the whole experiment, actually) evolved to perform EQU. The 23 that did so in the main condition are all different from one another. I don't know about the 124 that evolved in the various control conditions. As I noted above for the Case Study program, they all also performed some of the simpler logic operations, which are legitimately labeled "functions," too. But just as various kinds of bacteria evolved different sorts of flagella to perform the same principal function, motility, so the various Avida evolutionary runs evolved different programs to perform EQU.

    Note, by the way, that once a program that performs EQU has evolved, the simpler logic operations can drop out of the competitive race - now be unrewarded - and the main function will persist in the population so long as it continues to be selectively advantageous. Way downstream, since they no longer confer selective advantage, those simpler functions may no longer be performed and thus won't be visible in the current population. We'd see no "precursors" of EQU in the extant population and we'd wonder where the heck our programs with the ability to perform EQU came from. And we could generate an "irreducibly complex programs that perform EQU can't evolve" conjecture, and we could challenge critics of our IC program conjecture to produce the exact pathway and tell us what those hypothetical precursors are. (And if they can't specify the precise pathway and the exact precursors, we could say "Neener neener neener!" ;) )

    As to parts, in constructing your list of 14 you have conflated "part" and "kind of part." 'q' is a "kind of part;" 'q as instruction #11' is a "part." It's like a brick pillar made of a slew of bricks piled up one on top of another. All the bricks are identical - all are the same "kind of part" - but pulling out the brick on the bottom of the pillar has a considerably different effect than pulling out the brick on top. They are different "parts" in the context of the structure. I take "part" in the program above to be coterminous with "Instruction #." Thus 'q as Instruction #11' is a different part from 'q as Instruction #17'. So the example above contains 60 "parts." In a computer program, the same primitive instruction used in different places is most definitely not redundancy!

    RBH
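
    That "precursors drop out" scenario is easy to caricature in a few lines. In this toy model (all rates invented, nothing from the paper's actual reward schedule), EQU can only arise in organisms that still perform the simpler precursor tasks, but once EQU is present the unrewarded precursors are free to decay - so the extant EQU performers show little trace of the scaffolding:

Code Sample

import random

POP_SIZE, UPDATES = 100, 20_000
PRECURSORS = ("NOT", "AND", "OR")

population = [set(PRECURSORS) for _ in range(POP_SIZE)]

for _ in range(UPDATES):
    org = random.choice(population)
    # EQU is only reachable by building on existing precursor functions.
    if "EQU" not in org and org >= {"AND", "OR"} and random.random() < 0.01:
        org.add("EQU")
    # Once EQU is rewarded and the simpler tasks are not, mutations that
    # destroy a precursor are no longer selected against.
    elif "EQU" in org and random.random() < 0.05:
        org.discard(random.choice(PRECURSORS))

equ_orgs = [o for o in population if "EQU" in o]
print(f"{len(equ_orgs)} organisms perform EQU;",
      f"{sum('AND' in o for o in equ_orgs)} of them still perform AND")

    Run it and nearly every organism performs EQU while almost none still performs the precursors - so an observer of the final population would see no trace of the pathway.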


    --------------
    "There are only two ways we know of to make extremely complicated things, one is by engineering, and the other is evolution. And of the two, evolution will make the more complex." - Danny Hillis.

      