Topic: Evolutionary Computation, Stuff that drives AEs nuts
Wesley R. Elsberry

Posts: 4937
Joined: May 2002

Posted: Feb. 21 2018,20:37

Disclosure: I did my postdoc working with Avida, I worked on the Avida-ED project, and my spouse is the architect and programmer on the current web-based Avida-ED.

Science Daily had an article discussing a research paper on the educational effectiveness of using Avida-ED, the award-winning software putting a user-friendly interface on the Avida research platform, to teach students basic concepts in evolutionary biology. Other recent notice came in a page at the IEEE 360Electronics site.

From the Results section of the research paper:


Student Acceptance of Evolution

Average student acceptance scores across all ten cases ranged from 73.95 to 90.04 on the pre-test and 76.28 to 91.06 on the post-test, or moderate to very high acceptance for both pre- and post-tests, using Rutledge’s (1996) guide to interpretation (Table 8). Average acceptance score increased significantly from pre- to post-test in four of the ten cases (Fig. 5). These four cases were all lower-division courses that also had statistically significant gains in average content score. Students in two of the three upper-division evolution courses, B_300Evo and F_400Evo, had very high acceptance on both the pre- and post-tests, with no significant change. These were the same two upper-division courses in which the highest pre-test content scores were observed. Students in the remaining upper-division course, C_400Evo, also did not show a significant change in acceptance from pre- to post-test, with the lowest average acceptance score on the post-test (76.28). Thus, case C_400Evo showed the lowest average scores for both content and acceptance on the post-test, despite being a senior-level evolution course (discussed below).

Understanding and Acceptance of Evolution

Most of the students in lower-division courses had significant increases in both average content and average acceptance scores, suggesting a relationship between the two. Again, we accounted for differences in levels of student prior knowledge by using normalized gains (g-avg; Hake, 2002), calculated for each student’s pre- and post-assessment scores, which were then averaged for each case. The Pearson correlation confirmed a significant, positive association between the change in average normalized content score and in average normalized acceptance score across the ten cases (r = 0.60, p < 0.05; Fig. 6).

Of course, whenever evolutionary computation gets favorable notice, you can count on the Discovery Institute to say "Pffff-ff-ff-fft" to that.


It’s time for a bit of honesty in evolution education! Avida shows that evolutionary processes require intelligent design to hit predetermined targets. That’s the candid takeaway from a lesson about this software. Since we don’t recommend trying to bring ID into public school classrooms, there are undoubtedly more effective uses of class time than playing with Avida-ED.

Well, predetermined targets are one thing, and actually "cheating", as Dembski has routinely called it, is quite another. There is a finite set of logic operations that can operate on one or two inputs. Avida has all of them implemented such that they could be specified and recognized if a program accomplished any of them. The Avida results that Dembski and others concern themselves with are a small sample, nine tasks, of that larger number. The 2003 Lenski et al. paper is based on the "logic-nine" definition of an environment.

Other environments are possible. By other, I mean something like 8.511301e10 of them or more when considered with nine selected tasks per environment. Choose other numbers of tasks and you'll get different numbers, but mostly larger. Avida doesn't care which ones you specify, which makes it difficult to credit that the information of "intelligently designed" results for each of them is somehow crammed into the Avida executable, which the last time I checked weighed in at about 2e6 bytes.
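For what it's worth, that 8.511301e10 figure is consistent with choosing nine tasks from a pool of 72 available logic functions. The pool size of 72 is my back-calculation from the figure, not something stated above, so treat it as an assumption; the arithmetic itself is just a binomial coefficient:

```python
from math import comb

# Number of distinct environments you can build by selecting 9 tasks
# from a pool of n available logic functions (order doesn't matter).
n_tasks = 72   # assumed pool size, back-calculated from the 8.511301e10 figure
environments = comb(n_tasks, 9)
print(environments)   # 85113005120, i.e. about 8.5113e10
```

Larger task selections grow the count further, which is the point: the environment definition is a free parameter, not a baked-in target.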

During my postdoc, I used Avida, but here is the number of the existing predefined logic tasks I used in my research: 0. I added three instructions to the Avida instruction set, one to move the Avidian from its cell to an adjacent cell, one to randomly change the direction it faced, and one that would load the difference in a resource between the current cell and the faced cell into a register. A task placeholder, "don't-care", got no merit bonus of its own; instead, Avidians received merit modulated by the concentration of resource in the cell the Avidian moved into, and only during the update in which it moved there. What I was looking for was the evolution of movement strategies, of which there were this many defined in Avida: 0. What came out were a plethora of programs, hardly any alike, that moved Avidians around the grid in about eight general classes of movement. One of those classes corresponded to implementations of gradient ascent programs, the optimal movement strategy for the single-peaked resource environment I evolved them in. I certainly coded no "target" (where "target" would be a program or template for a program for moving) into Avida, and the Avida codebase didn't even have anything to move Avidians around before I worked on that.

Other researchers modified Avida to accomplish real-world things unrelated to the "logic-nine" environment Dembski and his comrades are fixated upon. There have been successful projects to code robotic controller programs, wirelessly networked sensor firmware that handles consensus-finding, and evolving feature sets specifying program UML. None of those things had a pre-specified endpoint in mind other than meeting a functional definition.

The DI screed also complains that "information sources" exist in Avida. Well, yeah, the things that are considered analogous to the environment are information sources. Just like environments are information sources for organisms. But information sources corresponding to what an Avidian genome is supposed to look like in the end? Nope.

The DI encourages people to look at the Evolutionary Informatics "Lab"-(not-affiliated-with-Baylor-despite-trying-really-hard) web dingus, Minivida, to see their version of what they imagine Avida is doing. I had a look. Avida is premised on self-replicating digital organisms. Minivida is not. Replication is something every viable Avidian does. Minivida just does some copying with mutation. The Minivida folks do offhandedly announce this departure from the source:


An attempt has been made to maintain as much compatilibity [sic] with Avida as possible, so most Avida programs should run on this simulator with the same results. However, all instructions relating to copying instructions are either ignored or only partially implemented.

Here's the code for reproduction:

Code Sample

// Make the next generation
AvidaSim.prototype.make_babies = function()
{
    // create an array with fitness(x) elements for every x in the current population
    var parents = new Array();
    for( var idx in this.population )
    {
        var parent = this.population[idx];
        for( var pos = 0; pos < parent.score*2 + 1; pos++ )
        {
            parents.push( parent.program );
        }
    }

    // create babies
    // select from above array so probability of being selected is correlated with fitness
    var babies = new Array();
    for( var child = 0; child < this.config.population_size; child++ )
    {
        var parent_idx = Math.floor( Math.random() * parents.length );
        babies.push( new ChildGenome( this, parents[parent_idx] ) );
    }

    this.population = babies;
};

There's a call in there, so let's have a look:

Code Sample

function Genome(simulation)
{
    this.program = "";
    for( var i = 0; i < 85; i++ )
    {
        this.program += simulation.random_letter();
    }
    this.score = undefined;
}

function ChildGenome(simulation, parent)
{
    var idx = Math.floor( Math.random() * parent.length );
    var letter = simulation.random_letter();

    this.program = parent.substr(0, idx).concat( letter, parent.substr(idx+1) );
    this.score = undefined;
}

Mutation in Minivida is always triggered at exactly one location in each child genome. The child genome is the parent genome with one instruction substituted randomly.

Minivida forces a fixed genome size (85) and a fixed mutation rate (0.9615/85). Why the odd number? Their mutation routine does not check that the mutated instruction actually differs from the one being replaced, so 1/26th of the time it will be the same. Avida can be set to have a fixed or an unconstrained genome size. Avida-ED uses a fixed size of 50. Avida and Avida-ED allow the setting of the mutation rate, which is a per-site rate, and mutation happens probabilistically. A standard initial exercise with Avida-ED is to have a class set a particular mutation rate, have all students put an organism in the organism viewer, run it, and count the number of mutations in the offspring. The numbers are collected, and they show a roughly gaussian distribution centered close to the expectation for the set mutation rate. That sort of demonstration is impossible in Minivida, because looking even a little like what happens in biology isn't even on their radar.
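The arithmetic behind that 0.9615 figure is simple enough to sketch. The 26-letter alphabet size is implied by the 1/26 figure above; everything else follows:

```python
# Minivida overwrites one randomly chosen site with a random letter from
# a 26-letter instruction alphabet, without checking that the new letter
# differs from the old one.
alphabet_size = 26
genome_length = 85   # Minivida's fixed genome size

# Probability that the "mutation" actually changes the genome:
p_changed = (alphabet_size - 1) / alphabet_size   # 25/26 ≈ 0.9615

# Expressed as a per-site rate, the way Avida configures mutation:
per_site_rate = p_changed / genome_length
print(round(p_changed, 4), per_site_rate)
```

So the "odd number" is just 25/26 spread over 85 sites, hard-coded rather than settable.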

Minivida only provides the current best genome for display, showing a diagram of a circuit, often with several levels, and I haven't found any explanation of what it is showing or how it is supposed to relate to Avida. The code looks to be picking out the math and logical operators from the organism program and showing transformations and sources going back to inputs. As the Minivida program goes on, the graphics are continuously changing. In Avida, small mutation rates mean that one is pretty likely to end up with Avidians that are accurate copies of their parents. So the best organism in an Avida population may well be the parent of an identical best Avidian in a later generation; one can see some stability in the best Avidian. This does not appear to be the case for Minivida, where getting an accurate copy of a Minivida program will only happen 1/26th of the time.

In both Avida-ED and Minivida, one can pause the program at any point in a run. In Avida-ED, one can then examine any Avidian in the grid, see its genome, watch how its execution runs, see a possible offspring from its replication, and examine various other statistics about the population, plus charts of several different population-level properties. In Minivida, only the best genome and a circuit representation of it can be seen.
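To put rough numbers on that stability difference: the 50-site genome comes from the Avida-ED settings described above, while the per-site rate used here is purely an illustrative value I picked, not anything from the text:

```python
# Probability an offspring is an exact copy of its parent.

# Minivida: exactly one site is overwritten with a random letter from a
# 26-letter alphabet, so a faithful copy requires drawing the same letter.
p_copy_minivida = 1 / 26

# Avida-ED: per-site probabilistic mutation over a fixed 50-site genome;
# a faithful copy requires that no site mutates.
mu = 0.02                       # illustrative per-site rate (my assumption)
p_copy_avida = (1 - mu) ** 50

print(round(p_copy_minivida, 3))   # ~0.038
print(round(p_copy_avida, 3))      # ~0.364
```

Under these assumptions a faithful copy is roughly ten times more likely in Avida-ED than in Minivida, which is why the best Avidian can persist across generations while Minivida's best genome churns.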

The sorts of things one can use Avida-ED to demonstrate in a classroom on first introduction can be seen in the Lab Manual, in curriculum guides, and in various YouTube video tutorials. Assessments of the utility of Avida-ED are presented in peer-reviewed research, such as the example the DI got hurt feelings over. As far as I can tell, the notion that Minivida has something to show people relative to Avida-ED is unfounded. You can go through the Minivida code and note all the "TODO" lick-and-promise points in its incomplete approach to mimicking Avida. You can look for, and fail to find, documentation that would show any useful educational purpose for Minivida.

Now, I'll give the antievolutionists pushing Minivida and misguided critiques of Avida and Avida-ED one prop, which is that they haven't minified their Javascript or otherwise obfuscated the source to make it harder to see just how dreadful it is. (And I hope that observation doesn't lead them to do just that.)

But the DI did give Diane and me something else to have in common: we both now have the public disapproval of the DI.

"You can't teach an old dogma new tricks." - Dorothy Parker

399 replies since Mar. 17 2009,11:00