  Topic: Evolutionary Computation, Stuff that drives AEs nuts
Turncoat



Posts: 129
Joined: Dec. 2007

Posted: Oct. 10 2009,15:42

Quote (dvunkannon @ Sep. 09 2009,16:37)
Here's a little something related to genetic algorithms that I was thinking about during my recent vacation. I decided to write it down and share it with y'all in the hope of getting some feedback on the idea. Now that I've got the idea sketched out, I'll implement it in the little GA I've been building.

Thanks in advance for any comments.

Something few people understand about the NFL theorems is that they level algorithms in terms of how well they do with n evaluations of the fitness function, and not how well they do with t units of running time. If you do not have prior knowledge of a problem instance that leads you to prefer some algorithm over others, or you choose to ignore that knowledge, then select the algorithm that runs the fastest (i.e., completes the greatest number of fitness evaluations in available running time).

You'll generally do better letting the computer architecture drive the choice of algorithm than by choosing the algorithm and trying to get its parameters right.
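To make the "pick the fastest" point concrete: under a fixed running-time budget, the algorithm with less per-evaluation overhead simply completes more fitness evaluations, and absent prior knowledge of the problem instance, NFL says evaluations are all that count. Here's a toy sketch of that trade-off; the overhead figures and the stand-in fitness function are invented for illustration, not from any real GA:

```python
import time

def evaluations_in_budget(overhead_s, budget_s=0.05):
    """Count fitness evaluations completed within a wall-clock budget.
    `overhead_s` models the algorithm's own per-evaluation cost
    (selection, recombination, bookkeeping) via a busy-wait."""
    def busy_wait(seconds):
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    evals = 0
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:
        _ = sum(i * i for i in range(50))  # stand-in fitness evaluation
        busy_wait(overhead_s)              # algorithm overhead per evaluation
        evals += 1
    return evals

# A lean algorithm with low per-evaluation overhead completes more
# evaluations than a heavy one in the same running time.
lean = evaluations_in_budget(overhead_s=0.0001)
heavy = evaluations_in_budget(overhead_s=0.001)
```

In NFL terms, the two algorithms are equivalent at equal evaluation counts, so the lean one dominates at equal running time.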

Back in 1991, before I'd hit on the NFL rationale (1994), I was working with a SIMD architecture with a toroidal mesh of processing elements. While giving a course on GA's, I designed a GA to run really, really fast on that architecture, abandoning bio-mimicry when necessary, and handed off the programming to two doctoral students. One of the students -- the one who subsequently presented our work at a conference -- liked to speak of my "big, messy genetic algorithm." The phrase was not in our paper, but it evidently made its way to Goldberg. There have been a fair number of BMGA papers coming out of his lab.

The size of demes on individual processing elements was determined entirely by available memory. The parameter we played with most was the number of generations between migrations of individuals between neighboring processing elements. (I can't remember how we selected the migrants.) We thought, buying into Goldberg, that migration and recombination were essential. There is actually a theoretical argument for isolated parallel runs. So nowadays I would leave out migration unless I had reason to believe it would be beneficial, even though our program did not spend much time on it.
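For anyone who hasn't seen the deme setup before, here is a toy serial sketch of a ring of demes with a migration interval as the tunable parameter. This is not our SIMD program; the onemax fitness function, operator choices, and all parameter values are invented for illustration. Setting migration_interval=None gives the isolated parallel runs mentioned above:

```python
import random

def onemax(bits):
    # Stand-in fitness: count of 1-bits (an assumed toy problem).
    return sum(bits)

def evolve_demes(n_demes=4, deme_size=20, genome_len=32,
                 generations=60, migration_interval=10, seed=0):
    """Ring-of-demes GA sketch. Each deme evolves independently; every
    `migration_interval` generations its best individual replaces the
    worst in the next deme on the ring. migration_interval=None means
    no migration, i.e., fully isolated parallel runs."""
    rng = random.Random(seed)
    demes = [[[rng.randint(0, 1) for _ in range(genome_len)]
              for _ in range(deme_size)] for _ in range(n_demes)]

    def step(pop):
        # Tournament selection, one-point crossover, bit-flip mutation.
        new = []
        for _ in range(len(pop)):
            a = max(rng.sample(pop, 3), key=onemax)
            b = max(rng.sample(pop, 3), key=onemax)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:
                i = rng.randrange(genome_len)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            new.append(child)
        return new

    for gen in range(1, generations + 1):
        demes = [step(pop) for pop in demes]
        if migration_interval and gen % migration_interval == 0:
            for i, pop in enumerate(demes):
                best = max(pop, key=onemax)
                neigh = demes[(i + 1) % n_demes]
                worst = min(range(len(neigh)), key=lambda j: onemax(neigh[j]))
                neigh[worst] = best[:]  # migrant replaces neighbor's worst

    return max((ind for pop in demes for ind in pop), key=onemax)
```

On the real machine the deme size would be fixed by per-processor memory and the migration would be a neighbor-to-neighbor transfer on the mesh; here the ring and the replace-worst policy are just illustrative choices.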

The program blazed, in terms of wall-clock time, and this was due to exploitation of the architecture. My students also implemented and studied many existing GA's, and we discovered just how much unreported parameter tweaking underlay reported results. There is no magic in any form of evolutionary computation, and exploiting the architecture should be Job One unless you have prior knowledge of the problem instance to exploit.

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
  419 replies since Mar. 17 2009,11:00
