Topic: Evolutionary Computation, Stuff that drives AEs nuts
midwifetoad



Posts: 4003
Joined: Mar. 2008

(Permalink) Posted: June 11 2009,18:11   

Quote
fast-reproducing sexual species that have existed a few million years should have all been extinct by now, but they're not.


Isn't this the ultimate test of a simulation -- that it must model the fact that populations don't go extinct simply because their genes degrade?

Any simulation where this happens is obviously flawed. History trumps any theory that says something that has happened can't happen.

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 11 2009,18:30   

OK, why are there still Amoeba dubia around? I haven't found an explicit statement of average generation time for the species, but it is likely on the order of 24 hours based on generation times for other amoebae. Its genome is about 670 billion base pairs. That would seem to qualify as a large genome, wouldn't it?
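For scale, assuming the commonly cited eukaryotic copy-error rate of roughly 1e-9 per nucleotide per replication (my assumption here, not a published figure for this species), a genome that size picks up hundreds of new mutations every daily generation, and yet the lineage persists:

Code Sample
rate = 1e-9                  # assumed copy errors per nucleotide per replication
genome = 670e9               # Amoeba dubia genome, ~670 billion base pairs
per_gen = rate * genome
print(per_gen)               # ~670 new mutations per ~24-hour generation
print(per_gen * 365 * 1e6)   # ~2.4e11 per lineage per million years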

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 11 2009,18:47   

Quote (Wesley R. Elsberry @ June 11 2009,18:30)
OK, why are there still Amoeba dubia around? I haven't found an explicit statement of average generation time for the species, but it is likely on the order of 24 hours based on generation times for other amoebae. Its genome is about 670 billion base pairs. That would seem to qualify as a large genome, wouldn't it?

Right, that's that objection answered then!

  
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 11 2009,19:05   

By the way, all this genetic entropy (why the stupid name, why not just Muller's Ratchet?) stuff relates to the work of Laurence Loewe at Edinburgh.  He's done a lot of research on Muller's Ratchet, well worth checking out:

http://evolutionary-research.net/people/lloewe

also see these classic papers by Michael Lynch:

Lynch, M. et al. 1993. Mutational meltdowns in asexual populations. J. Heredity 84: 339-344

http://www.indiana.edu/~lynchlab/PDF/Lynch58.pdf

Gabriel, W. et al. 1993. Muller's ratchet and mutational meltdowns. Evolution 47: 1744-1757.

http://www.indiana.edu/~lynchlab/PDF/Lynch62.pdf

I'm not a population geneticist or indeed any kind of evolutionary biologist whatsoever.  But it's my impression that Sanford is saying nothing new; he's just trying to repackage issues that pop gen people have known about for decades.  Indeed, occasional creationist basher Joe Felsenstein published one of the classic papers in this respect:

Felsenstein, J. (1974). The Evolutionary Advantage of Recombination. Genetics, 78, 737–756

Some time ago on PandasThumb, Felsenstein said he'd probably better read the Sanford book as creationists would be using it.  S Cordova offered to send it to him.  It'd be great to get his thoughts.  I think this is the discussion:

http://pandasthumb.org/archives/2008/05/gamblers-ruin-i.html

  
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 11 2009,19:08   

Aaah yes, found it on the final page of comments:



Quote
Dr. Felsenstein,

I sent you a copy of John Sanford’s Genetic Entropy.

Let me know if you received it or not. The admins at PT should have my e-mail.

Thank you again for taking time to read what I wrote at UD and for taking the time to respond. I’m deeply honored.

regards, Salvador Cordova


Quote
Sorry for the delay, I didn’t notice this inquiry until recently. Yes, the book arrived. Thanks for sending it. It will be helpful to have it, I am sure.

  
Zachriel



Posts: 2723
Joined: Sep. 2006

(Permalink) Posted: June 11 2009,20:41   

Take a look at the distribution of beneficial mutations. (Parameters are as shown on the image, with Maximal beneficial mutation effects = 0.1.) Beneficial mutations spike, then disappear.


[Screenshot: Generation 3970, Fitness 0.106, Deleterious 38398, Favorable 0]

The program doesn't seem to use my available memory and quits well before the specified generations. It doesn't seem to reseed the randomizer with each run.


[Screenshot: Generation 3972, Fitness 0.101, Deleterious 38422, Favorable 0]

Just look at those graphs. That just doesn't look right at all.

--------------

You never step on the same tard twice—for it's not the same tard and you're not the same person.

   
AnsgarSeraph



Posts: 11
Joined: June 2009

(Permalink) Posted: June 11 2009,20:50   

Quote (Zachriel @ June 11 2009,20:41)
The program doesn't seem to use my available memory and quits well before the specified generations.

With a fitness level at 0.1, I'm sure your populations went extinct. I can't keep populations below 1000 alive for very long; they certainly won't last for more than 20,000 generations.

—Sam

  
Zachriel



Posts: 2723
Joined: Sep. 2006

(Permalink) Posted: June 11 2009,21:01   

I manually changed the seed. This is what I got with the same parameters.


[Screenshot: Generation 4376, Fitness 0.111, Deleterious 42350, Favorable 0]

It's very odd having to change the seed every time. A common method of investigation is to rerun the same parameters to help distinguish trends from flukes.
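A common convention is to draw a fresh seed for each run but log it, so runs differ by default yet any run can still be reproduced on demand. A minimal sketch of that convention (illustrative only, not Mendel's actual code):

Code Sample
import random
import time

def make_rng(seed=None):
    # Fresh seed per run by default; log it so a run can be repeated exactly.
    if seed is None:
        seed = time.time_ns() % 2**32
    print("run seed:", seed)
    return random.Random(seed)

rng = make_rng()     # differs from run to run
rng = make_rng(42)   # pins the seed to reproduce an earlier run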

There's something odd about the distribution. That might be due to the small population, though.

--------------

You never step on the same tard twice—for it's not the same tard and you're not the same person.

   
deadman_932



Posts: 3094
Joined: May 2006

(Permalink) Posted: June 11 2009,22:53   

Quote (mammuthus @ June 11 2009,18:08)
     
Quote (deadman_932 @ June 11 2009,15:57)
Sanford's "genomic (mutational) meltdown" scenarios are a hoot. Even DaveScot was bright enough to see that Sanford's proposed mutation rates were out of line with reality: fast-reproducing sexual species that have existed a few million should have all been extinct by now, but they're not. Sanford inflates deleterious mutation rates and disregards compensatory mechanisms.

His argument is a little more involved than that.  It seems to revolve around genome size; the smaller genome of something like P.falciparum prevents genetic meltdown, but meltdown would occur in mammals with their larger genomes.  So genetic entropy is a problem for the latter (if not on Sanford's YEC timescales).  You can't just take fast-reproducing things like P.falciparum, show that Genetic Entropy fails in that case, and apply the result widely.  At least that's how I read it.

Well, Wes mentioned one example of a "large"-genomed, rapidly-reproducing species, and there are a lot more available. Mammal genomes average between 2 and 3 gigabases (Gb), but lots of insect and plant genomes can be larger: around 16 Gb in wheat or in grasshoppers (Podisma pedestris) -- five times larger than the human genome.

Nailing Sanford down on questions about interesting populations like California condors would be fun -- they're the only North American remnant of Gymnogyps, been around since the early Pleistocene, and their population dropped down to 22 individuals not very long ago... and their est. genome size is 1.5 Gb. They should have accumulated enough deleterious mutations that such a small, closely-related group would produce nothin' but dead young, right? Or how about Przewalski's horse?

Sanford is a YEC of sorts, so he skewed his parameters to fit his skewed view of the Earth's entire biome being less than 100 K years old, as I recall (I may be wrong with the exact figure there).

-------------------------------------------

ETA: I was curious about known recessives in the existing condors, and there is one identified (chondrodystrophy) that results in fatal abnormalities:

J. Geyer, O.A. Ryder, L.G. Chemnick and E.A. Thompson, Analysis of relatedness in the California condors: from DNA fingerprints, Mol. Biol. Evol. 10 (1993), pp. 571–589

Romanov MN, Koriabine M, Nefedov M, de Jong PJ, Ryder OA (2006) Construction of a California Condor BAC Library and First-generation Chicken-condor Comparative Physical Map as an Endangered Species Conservation Genomics Resource, Genomics, 88 (6), 711-8

--------------
AtBC Award for Thoroughness in the Face of Creationism

  
Steve Schaffner



Posts: 13
Joined: June 2009

(Permalink) Posted: June 11 2009,23:09   

Quote (Zachriel @ June 11 2009,20:41)
Take a look at the distribution of beneficial mutations.

Looks right to me, given your parameters. You're getting 10 mutations/individual for 100 individuals, or 1000 mutations per generation. Of those, 1/100,000 is beneficial, so you're only getting one beneficial mutation every 100 generations. Those are the tiny blips. Once in a while one or two of them drift up to an appreciable frequency, and the mean number of beneficial alleles per individual climbs above 1.0.

None of them fix, though, which is not surprising, since they're almost all effectively neutral. That means you should have one fixing by chance every 20,000 generations, plus some probability from the tail at higher selection coefficients.
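A back-of-envelope check of those numbers (plain arithmetic from the parameters above, not output from Mendel's Accountant):

Code Sample
N = 100                 # population size
mut_per_ind = 10        # new mutations per individual per generation
frac_ben = 1e-5         # fraction of new mutations that are beneficial

ben_per_gen = N * mut_per_ind * frac_ben   # 0.01 -> one per 100 generations
p_fix = 1 / (2 * N)                        # neutral fixation probability, 1/(2N)
print(1 / (ben_per_gen * p_fix))           # 20000.0 generations per fixation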

  
Occam's Aftershave



Posts: 5287
Joined: Feb. 2006

(Permalink) Posted: June 11 2009,23:47   

Over at TWeb, where this started, I asked the same question: why haven't all the fast-reproducing mammal species died out from genetic meltdown yet?  The topic of mice was raised, because while mice have a genome roughly the size of the human genome (approx. 3 Gb), they have a generation time some 170x shorter (6 weeks vs. 20 years).  So why haven't all the mice gone extinct by now?

I made the statement "All other things being equal, the population that breeds faster will accumulate mutations faster."

Jorge Fernandez (a YEC who was acting as a go-between to Sanford) supposedly forwarded my questions to Sanford and got this reply:

Sanford:  " No, it is just the opposite, short generation times means more frequent and better selective filtering."

Which makes zero sense and is trivially easy to refute with their own program:

Run Mendel with two populations that are identical in every way (i.e. genome size, mutation rate, selection pressure, etc.) except generation time: make one 2x the other, say two generations per year vs. one per year.

If you run them both for 1000 generations, both will end up with the same (lower) fitness level, but the two-per-year population will take only 500 years to get there.

If you run them both for 1000 years, the once-per-year population will end up at exactly the same fitness as in the first trial, but the two-per-year population will have gone through 2000 generations and will end up with an even lower fitness level, if it doesn't just go extinct first.
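A minimal sketch of that comparison, with an assumed per-generation fitness decay (an illustrative model, not Mendel's actual algorithm):

Code Sample
decay_per_gen = 0.001   # assumed mean fitness loss per generation

def fitness_after(generations, w0=1.0):
    return w0 * (1 - decay_per_gen) ** generations

# Trial 1: equal generation counts give equal fitness, in half the time.
print(fitness_after(1000))   # one generation/year, after 1000 years
print(fitness_after(1000))   # two generations/year, after only 500 years

# Trial 2: equal elapsed time (1000 years) leaves the fast breeder worse off.
print(fitness_after(1000))   # one generation/year  -> 1000 generations
print(fitness_after(2000))   # two generations/year -> 2000 generations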

These guys are busted, and they know they're busted.  Now it's just a question of how far they can push this shit and how much money they can make before the errors become well known.

--------------
"CO2 can't re-emit any trapped heat unless all the molecules point the right way"
"All the evidence supports Creation baraminology"
"If it required a mind, planning and design, it isn't materialistic."
"Jews and Christians are Muslims."

- Joke "Sharon" Gallien, world's dumbest YEC.

  
k.e..



Posts: 5432
Joined: May 2007

(Permalink) Posted: June 12 2009,00:06   

So they have gone from shining shit to simulating shit?

As a game strategy it could be a winner.

More obscurantism in the tard market makes it easier to collect loose fundy shekels.

--------------
"I get a strong breeze from my monitor every time k.e. puts on his clown DaveTard suit" dogdidit
"ID is deader than Lenny Flanks granmaws dildo batteries" Erasmus
"I'm busy studying scientist level science papers" Galloping Gary Gaulin

  
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 12 2009,04:18   

This new paper may be of interest:

Quote
Mustonen, V. and Lassig, M.  (2009) From fitness landscapes to seascapes: non-equilibrium dynamics of selection and adaptation.  Trends in Genetics, 25, 111-119.

Evolution is a quest for innovation. Organisms adapt to changing natural selection by evolving new phenotypes. Can we read this dynamics in their genomes? Not every mutation under positive selection responds to a change in selection: beneficial changes also occur at evolutionary equilibrium, repairing previous deleterious changes and restoring existing functions. Adaptation, by contrast, is viewed here as a non-equilibrium phenomenon: the genomic response to time-dependent selection. Our approach extends the static concept of fitness landscapes to dynamic fitness seascapes. It shows that adaptation requires a surplus of beneficial substitutions over deleterious ones. Here, we focus on the evolution of yeast and Drosophila genomes, providing examples where adaptive evolution can and cannot be inferred, despite the presence of positive selection.


There's a section on Muller's Ratchet:

Quote
Here, we argue for a sharpened concept of adaptive evolution at the molecular level. Adaptation requires positive selection, but not every mutation under positive selection is adaptive. Selection and adaptation always refer to a molecular phenotype depending on a single genomic locus or on multiple loci, such as the energy of a transcription-factor-binding site in our first example. This correlates the direction of selection at all loci contributing to the phenotype and calls for the distinction between adaptation and compensation. The infinite-sites approximation, which is contained in many population-genetic models, neglects such correlations and is therefore not optimally suited to infer adaptation [16] and [23]. Here, we address this problem by a joint dynamical approach to selection and genomic response in a genome with finite number of sites. In this approach, adaptive evolution is characterized by a positive fitness flux φ, which measures the surplus of beneficial over deleterious substitutions.

It is instructive to contrast this view of adaptive evolution with Muller's ratchet, a classical model of evolution by deleterious substitutions [53] and [54]. This model postulates a well-adapted initial state of the genome so that all, or the vast majority of, mutations have negative fitness effects. Continuous fixations of slightly deleterious changes then lead to a stationary decline in fitness (i.e. to negative values of φ). Similarly to the infinite-sites approximation, this model neglects compensatory mutations. In a picture of a finite number of sites, it becomes clear that every deleterious substitution leads to the opportunity for at least one compensatory beneficial mutation (or more, if the locus contributes to a quantitative trait), so that the rate of beneficial substitutions increases with decreasing fitness. Therefore, assuming selection is time-independent, decline of fitness (φ < 0) is only a transient state and the genome will eventually reach detailed balance between deleterious and beneficial substitutions, that is, evolutionary equilibrium (φ = 0). As long as selection is time-independent, an equilibrium state exists for freely recombining loci and in a strongly linked (i.e. weakly recombining) genome, although its form is altered in the latter case by interference selection [55] and [56]. Conversely, an initially poorly adapted system will have a transient state of adaptive evolution (φ > 0) before reaching equilibrium. Time-dependent selection, however, continuously opens new windows of positive selection, the genome is always less adapted than at equilibrium and the adaptive state becomes stationary. Thus, we reach a conclusion contrary to Muller's ratchet. Because selection in biological systems is generically time-dependent, decline of fitness is less likely even as a transient state than suggested by Muller's ratchet: the model offers no explanation of how a well-adapted initial state without opportunities of beneficial mutations is reached in the first place.

As a minimal model for adaptive evolution, we have introduced the Fisher-Wright process in a macro-evolutionary fitness seascape, which is defined by stochastic changes of selection coefficients at individual genomic positions on time scales larger than the fixation time of polymorphisms (and is thus different from micro-evolutionary selection fluctuations and genetic draft). Time-dependence of selection is required to maintain fitness flux: the seascape model is the simplest model that has a non-equilibrium stationary state with positive φ. The two parameters of the minimal model (strength and rate of selection changes) are clearly just summary variables for a much more complex reality. The vastly larger genomic datasets within and across species will enable us to infer the dynamics of selection beyond this minimal model.

  
damitall



Posts: 331
Joined: Jan. 2009

(Permalink) Posted: June 12 2009,04:33   

Quote (mammuthus @ June 11 2009,18:08)
 
Quote (deadman_932 @ June 11 2009,15:57)
Sanford's "genomic (mutational) meltdown" scenarios are a hoot. Even DaveScot was bright enough to see that Sanford's proposed mutation rates were out of line with reality: fast-reproducing sexual species that have existed a few million should have all been extinct by now, but they're not. Sanford inflates deleterious mutation rates and disregards compensatory mechanisms.

His argument is a little more involved than that.  It seems to revolve around genome size; the smaller genome of something like P.falciparum prevents genetic meltdown, but meltdown would occur in mammals with their larger genomes.  So genetic entropy is a problem for the latter (if not on Sanford's YEC timescales).  You can't just take fast-reproducing things like P.falciparum, show that Genetic Entropy fails in that case, and apply the result widely.  At least that's how I read it.

   
Quote
It occurred to me recently that Sanford’s projected rate of genetic decay doesn’t square with the observed performance of P.falciparum. P.falciparum’s genome is about 23 million nucleotides. At Sanford’s lowest given rate of nucleotide copy errors that means each individual P.falciparum should have, on average, about 3 nucleotide errors compared to its immediate parent. If those are nearly neutral but slightly deleterious mutations (as the vast majority of eukaryote mutations appear to be) then the number should be quite sufficient to cause a genetic meltdown from their accumulation over the course of billions of trillions of replications. Near neutral mutations are invisible to natural selection but the accumulation of same will eventually become selectable. If all individuals accumulate errors the result is decreasing fitness and natural selection will eventually kill every last individual (extinction). Yet P.falciparum clearly didn’t melt down but rather demonstrated an amazing ability to keep its genome perfectly intact. How?

After thinking about it for a while I believe I found the answer - the widely given rate of eukaryote replication errors is correct. If P.falciparum individuals get an average DNA copy error rate of one in one billion nucleotides then it follows that approximately 97% of all replications result in a perfect copy of the parent genome. That’s accurate enough to keep a genome that size intact. An environmental catastrophe such as an ice age which lowers temperatures even at the equator below the minimum of ~60F in which P.falciparum can survive would cause it to become extinct while genetic meltdown will not. Mammals however, with an average genome size 100 times that of P.falciparum, would have an average of 3 replication errors in each individual. Thus mammalian genomes would indeed be subject to genetic decay over a large number of generations which handily explains why the average length of time between emergence to extinction for mammals and other multicelled organisms with similar genome sizes is about 10 million years if the fossil and geological evidence paints an accurate picture of the past. I DO believe the fossil and geological records present us with an incontrovertible picture of progressive phenotype evolution that occurred over a period of billions of years. I don’t disbelieve common ancestry and phenotype evolution by descent with modification - I question the assertion that random mutation is the ultimate source of modification which drove phylogenetic diversification.
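The arithmetic in that quote is easy to check under its own assumptions, treating copy errors as Poisson-distributed (genome sizes from the quote, error rate as "widely given"):

Code Sample
import math

rate = 1e-9                       # assumed copy errors per nucleotide per replication
pf_genome = 23e6                  # P. falciparum, ~23 million nucleotides
mammal_genome = 100 * pf_genome   # "100 times that of P.falciparum"

mean_pf = rate * pf_genome        # ~0.023 errors per replication
print(math.exp(-mean_pf))         # ~0.977: ~97% of copies are perfect
print(rate * mammal_genome)       # ~2.3 errors per individual per generation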



Here is an abstract which might inform this particular question

  
Lou FCD



Posts: 5455
Joined: Jan. 2006

(Permalink) Posted: June 12 2009,06:35   

You've all forgotten the most important part of the simulation, and that's why your results are skewed.

You have to throw the computer off a cliff to get an accurate simulation.

duh.

--------------
“Why do creationists have such a hard time with commas?

Linky“. ~ Steve Story, Legend

   
deadman_932



Posts: 3094
Joined: May 2006

(Permalink) Posted: June 12 2009,09:28   

Quote (Lou FCD @ June 12 2009,06:35)
You've all forgotten the most important part of the simulation, and that's why your results are skewed.

You have to throw the computer off a cliff to get an accurate simulation.

duh.

Lou = absotively correckt. Heck, even checker-playing computers have to be painted in squares. Everyone knows that.

--------------
AtBC Award for Thoroughness in the Face of Creationism

  
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 12 2009,10:58   

Jorge Fernandez at TWeb is in contact with Sanford.  He just posted the following from Sanford:

Quote
Hi Jorge - I have been traveling ... The comment ... about "cooking the books" is, of course, a false accusation. The issue has to do with memory limits. Before a Mendel run starts it allocates the memory needed for different tasks. With deleterious mutations this is straight-forward - the upper range of mutation count is known. With beneficials it is harder to guess final mutation count - some beneficials can be vastly amplified. Where there is a high rate of beneficials they can quickly exhaust RAM and the run crashes. Wesley Brewer [one of the creators of Mendel] has tried to avoid this by placing certain limits - but fixing this is a secondary priority and will not happen right away. With more RAM we can do bigger experiments. It is just a RAM issue.

Best - John


This is in response to - "Wes Elsberry made a comment that I think could be a good title, 'Mendel's Accountant cooks the books.'"  I assume that they're talking about the failure of the program to increase fitness when a high number of beneficial mutations are specified.

I guess Sanford et al. would argue that this problem isn't a big issue, since there's never a case in which there are loads (e.g. 90%) of beneficial mutations.  Deleterious or slightly deleterious mutations are in the majority in reality, there's no RAM problem with these, and so the main conclusion they draw from Mendel is unaffected by the problems shown with beneficial mutations.  At least I guess that's what he'd say.

Sanford also says:

Quote
The fact that our runs crash when we run out of RAM is not by design. If someone can help us solve this problem we would be very grateful. We typically need to track hundreds of millions of mutations. Beneficials create a problem for us because they amplify in number. We are doing the best we can.

I would urge your colleagues [Heaven help me - John is under the impression that you people are my colleagues ... brrrrrrrr!] to use more care. In science we should be slow to raise claims of fraud without first talking to the scientist in question to get their perspective. Otherwise one might unwittingly be engaging in character assassination.


http://www.theologyweb.com/campus....unt=131

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,11:06   

Quote

I guess Sanford et al. would argue that this problem isn't a big issue, since there's never a case in which there are loads (e.g. 90%) of beneficial mutations.


No, the problem is quantitative and not qualitative. If the program doesn't handle the 90% case correctly, it isn't handling the 0.001% case correctly, either. And we know that v1.2.1 did not handle it correctly. If you are going around claiming to have produced an "accurate" simulation, you are on the hook for that.

The 90% case just makes the error blatantly obvious.

Speaking of hypocrisy, how careful is Sanford in not making sweeping generalizations about biologists having gotten things wrong?

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,11:13   

As demonstrated in the two runs I did comparing the output of v1.2.1 and v1.4.1 on the very same configuration, v1.2.1 has a major error in its handling of beneficial mutations. This has nothing at all to do with memory limits; I also ran both with the default case, and the experimental case used in both merely changed the two parameters as specified by Zachriel above. The memory usage was under 130MB for all cases I ran; the memory I had was sufficient and the simulations ran to completion. Sanford either was given a garbled account of the issue or is deploying a meaningless digression as a response.

ETfix: 130,000KB = 130MB

Edited by Wesley R. Elsberry on June 12 2009,11:19

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 12 2009,11:18   

Quote (Wesley R. Elsberry @ June 12 2009,11:06)
Quote

I guess Sanford et al. would argue that this problem isn't a big issue, since there's never a case in which there are loads (e.g. 90%) of beneficial mutations.


No, the problem is quantitative and not qualitative. If the program doesn't handle the 90% case correctly, it isn't handling the 0.001% case correctly, either. And we know that v1.2.1 did not handle it correctly. If you are going around claiming to have produced an "accurate" simulation, you are on the hook for that.

The 90% case just makes the error blatantly obvious.

Speaking of hypocrisy, how careful is Sanford in not making sweeping generalizations about biologists having gotten things wrong?

Ok, thanks Wesley.  I know nothing about programming, so a lot of what I have to say on related subjects will be utter nonsense!

I totally concur about Sanford's sweeping generalisations.  He claims that Mendel's Accountant has "falsified" Neo-Darwinian evolution:

Quote
When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person.


and

Quote
As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt. In this light, if science is to actually be self-correcting, geneticists must “come clean” and acknowledge the historical error, and must now embrace honest genetic accounting procedures.


http://www.icr.org/i....ory.pdf

I have zero respect for anyone who provides such rhetoric without actually submitting their claims to review by the scientific community, the very people they are lambasting.  That is fundamentally dishonest.

Sam at TWeb has emailed Sanford to see if he will engage directly at that messageboard.  Could be interesting.

  
mammuthus



Posts: 13
Joined: June 2009

(Permalink) Posted: June 12 2009,11:22   

Oh, and an additional response from Sanford.  This is an explanation of why such low population sizes (1000) were used and why this doesn't affect their conclusions.  In addition, it's a response to the question of why mice (as an example of a pretty fast-reproducing species) have not yet gone extinct.

Quote
Hi Jorge - Please tell these folks that I appreciate their interest in Mendel, and if they see certain ways we can make it more realistic, we will try and accommodate them.

Mendel is fundamentally a research tool, and so offers a high degree of user-specification. There is no inherently "realistic" population size - it just depends on what circumstance you wish to study. The default setting for population size is set at 1000 because it is convenient - whether you are using the Windows version on your laptop, or any other computer, you are less likely to run out of memory. We are proceeding to study population size and also population sub-structure. I believe larger populations should realistically be set up as multiple tribes with a given migration rate between tribes. Under these conditions we see little improvement with larger population sizes. But they are welcome to do bigger runs if they have the memory resources.

The mouse question is interesting. I think one would need to change various parameters for mouse - each species is different. I would like to know the maximal (not minimal) generation time - do they know? This would define the maximal time to extinction. I have read that the per generation mutation rate is about an order of magnitude lower in mouse - which makes sense if there are fewer cell divisions in the generative cells between generations. I would be happy to do such experiments when I get the input data.

Best - John


http://www.theologyweb.com/campus....unt=134

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,11:25   

Does anyone know of an open-source UML system that takes FORTRAN code as input?

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
deadman_932



Posts: 3094
Joined: May 2006

(Permalink) Posted: June 12 2009,11:47   

Quote (mammuthus @ June 12 2009,10:58)
Jorge Fernandez at TWeb is in contact with Sanford.  He just posted the following from Sanford:

   
Quote
Hi Jorge - I have been traveling ...The comment...about "cooking the books" is, of course, a false accusation. The issue has to do with memory limits. Before a Mendel run starts it allocates the memory needed for different tasks. With deleterious mutations this is straight-forward - the upper range of mutation count is known. With beneficials it is harder to guess final mutation count - some beneficials can be vastly amplified. Where there is a high rate of beneficials they can quickly exhaust RAM and the run crashes. Wesley Brewer [one of the creators of Mendel] has tried to avoid this by placing certain limits - but fixing this is a secondary priority and will not happen right away. With more RAM we can do bigger experiments. It is just a RAM issue.

Best - John


This is in response to - "Wes Elsberry made a comment that I think could be a good title, 'Mendel's Accountant cooks the books.'"  I assume that they're talking about the failure of the program to increase fitness when a high number of beneficial mutations are specified...
[snip]

Sanford also says:
 
Quote
"The fact that our runs crash when we run out of RAM is not by design. If someone can help us solve this problem we would be very grateful. We typically need to track hundreds of millions of mutations. Beneficials create a problem for us because they amplify in number. We are doing the best we can. I would urge your colleagues [Heaven help me - John is under the impression that you people are my colleagues ... brrrrrrrr!] to use more care. In science we should be slow to raise claims of fraud without first talking to the scientist in question to get their perspective. Otherwise one might unwittingly be engaging in character assassination."

http://www.theologyweb.com/campus....unt=131

That's interesting, because the 2008 ICR "Proceedings of the Sixth International Conference on Creationism" (pp. 87–98) has a "paper" by John Baumgardner, John Sanford, Wesley Brewer, Paul Gibson and Wally ReMine.

The title of that paper is  "Mendel’s Accountant: A New Population Genetics Simulation Tool for Studying Mutation and Natural Selection"  (.PDF link)

So what does John Sanford say there? Well, he says this:  

 
Quote
Mendel  represents  an  advance  in  forward-time simulations by incorporating several improvements over previous simulation tools...
Mendel is tuned for speed, efficiency and memory usage to handle large populations and high mutation rates....
We recognized that to track millions of individual mutations in a sizable population over many generations, efficient use of memory would be a critical issue – even with the large amount of memory commonly available on current generation computers. We therefore selected an approach that uses a single 32-bit (four-byte) integer to encode a mutation’s fitness effect, its location in the genome, and whether it is dominant or recessive. Using this approach, given 1.6 gigabytes of memory on a single microprocessor, we can accommodate at any one time some 400 million mutations...This implies that, at least in terms of memory, we can treat reasonably large cases using a single processor of the type found in many desktop computers today.


I await the actual achievement of these claims with un-bated breath. All emphases are mine.
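For what it's worth, packing a dominance flag, a fitness-effect index, and a genome location into one 32-bit integer is straightforward. A hypothetical sketch; the paper doesn't give Mendel's actual field widths, so this 1/11/20-bit split is my invention:

Code Sample
def pack(dominant, effect_idx, location):
    # 1 bit dominance, 11-bit fitness-effect index, 20-bit location (assumed split)
    assert 0 <= effect_idx < 2**11 and 0 <= location < 2**20
    return (int(dominant) << 31) | (effect_idx << 20) | location

def unpack(m):
    return bool(m >> 31), (m >> 20) & 0x7FF, m & 0xFFFFF

print(unpack(pack(True, 513, 123456)))   # (True, 513, 123456)
# 400 million such 4-byte integers is the quoted 1.6 GB figure.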

--------------
AtBC Award for Thoroughness in the Face of Creationism

  
deadman_932



Posts: 3094
Joined: May 2006

(Permalink) Posted: June 12 2009,12:11   

Quote (Wesley R. Elsberry @ June 12 2009,11:25)
Does anyone know of an open-source UML system that takes FORTRAN code as input?

You might want to look through these: http://olex.openlogic.com/wazi....lopment

ETA: Sorry, nope, I can't find anything open-source... and I looked quite a bit at various fora, etc.

--------------
AtBC Award for Thoroughness in the Face of Creationism

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,12:14   

Acceleo seems to be able to generate FORTRAN from UML, but I'm looking for a free tool to generate UML from FORTRAN.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,13:42   

Mutations are not beneficial, neutral, or detrimental on their own, nor is their contribution to fitness fixed for all time. Mutations contribute to fitness in a context, and as the context changes, so may the value of their contribution to fitness. Fitness is a value that applies to the phenotype in ensemble. Mendel's Accountant appears instead to assume that mutations have a fixed value that cannot be changed by context. Thus, Mendel's Accountant appears to completely ignore research on compensatory mutations.

Because the value of a mutation depends on context, a particular mutation may be beneficial, neutral, or detrimental at initial appearance, but later become part of a different class as other mutations come into play. Mendel's Accountant treats mutations as falling into a fixed class.

These faults alone suffice to disqualify Mendel's Accountant from any claim to providing an accurate simulation of biological evolution.
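A toy illustration with made-up numbers: the same mutation A is deleterious alone but part of a beneficial combination once B arises, which a fixed-class scheme cannot represent:

Code Sample
def fitness(genotype):
    w = 1.0
    if "A" in genotype:
        w *= 0.90               # A alone: deleterious
    if {"A", "B"} <= genotype:
        w *= 1.17               # epistasis: B more than compensates for A
    return w

print(fitness(set()))           # 1.0   baseline
print(fitness({"A"}))           # 0.9   A costs fitness on its own
print(fitness({"A", "B"}))      # ~1.05 same A, now a net benefit in context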

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: June 12 2009,16:28   

Of course, I tend to think that a good approach to critiquing a program that does a particular task is to actually produce a program that does that task better. I think that is something we could give some thought to here. Much of the same background work applies to analysis of MA or to design of an alternative.

Some ideas:

- Develop a test suite based on published popgen findings in parallel with development (a sketch of one such test follows below)

- Base it on the most general, abstract principles for broad applicability

- Aim for number of generations to be limited only by amount of disk or other long-term storage available

- Consider means for handling large population sizes

- Start with a simple system, either as run-up to version 1 or with sufficient generality to be extensible to more complex systems

It seems to me that producing a thoroughly-vetted and tested platform that covers fewer cases is far better than producing a large, unwieldy, and bug-ridden product whose output cannot be trusted.
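As a concrete example of the first bullet, a test could validate a toy Wright-Fisher engine against the textbook result that a new neutral mutation fixes with probability 1/(2N). A sketch under those assumptions, not a finished test suite:

Code Sample
import numpy as np

def neutral_fixation_rate(N=50, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        count = 1                            # one copy of a new neutral allele
        while 0 < count < 2 * N:
            count = rng.binomial(2 * N, count / (2 * N))
        fixed += count == 2 * N
    return fixed / trials

print(neutral_fixation_rate(), "vs expected", 1 / (2 * 50))   # ~0.01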

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Bob O'H



Posts: 2564
Joined: Oct. 2005

(Permalink) Posted: June 12 2009,17:02   

Quote
I'm not a population geneticist or indeed any kind of evolutionary biologist whatsoever.  But it's my impression that Sanford is saying nothing new; he's just trying to repackage issues that pop gen people have known about for decades.  

What's new is his claim that meltdown affects sexual populations.  I should check the evolution-of-sex literature; I'm sure they (Sally Otto and Nick Barton, amongst others) showed that it doesn't happen.  In his book Sanford ignores the recent evolution-of-sex literature.

Wes -
Quote
OK, why are there still Amoeba dubia around?

Indeed - hasn't it turned into Amoeba dubya?

Anyway, remember that Sanford is a YEC, so millions of years aren't relevant for him.

--------------
It is fun to dip into the various threads to watch cluelessness at work in the hands of the confident exponent. - Soapy Sam (so say we all)

   
Dr.GH



Posts: 2333
Joined: May 2002

(Permalink) Posted: June 12 2009,20:44   

Quote (Wesley R. Elsberry @ June 12 2009,14:28)
Of course, I tend to think that a good approach to critique of a program to do a particular task is to actually produce a program that does that task better. I think that is something that we could give some thought to here. Much of the same background work applies to analysis of MA or design of an alternative.

Some ideas:

- Develop a test suite based on published popgen findings in parallel with development

- Base it on the most general, abstract principles for broad applicability

- Aim for number of generations to be limited only by amount of disk or other long-term storage available

- Consider means for handling large population sizes

- Start with a simple system, either as run-up to version 1 or with sufficient generality to be extensible to more complex systems

It seems to me that producing a thoroughly-vetted and tested platform that covers fewer cases is far better than producing a large, unwieldy, and bug-ridden product whose output cannot be trusted.

Wes, how would your proposed project improve on other programs? For example, of the goals that you list, does existing software such as AVIDA or other models not already satisfy your criticisms?

Next, I see that there are two goals. The first is to refute lame-ass creatocrap like "Mendel's Accountant provides overwhelming empirical evidence that all of the 'fatal flaws' inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified--with a degree of certainty that should satisfy any reasonable and open-minded person."

The second would be to actually advance the scientific work of evo simulations.

I might be able to assist the first, and I am happy to leave the second to the rest of you.

Your list of ideas does add to the refutation of the creatocrap, as the ideas are features of what a good simulator should be able to do.

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
Steve Schaffner



Posts: 13
Joined: June 2009

(Permalink) Posted: June 12 2009,21:56   

There may be some value in checking Mendel's Accountant, to see whether it really implements the model that it claims to, but I don't see much point in trying to cobble together a new program to simulate evolution here. That is a major research project with many unknown parameters; a truly realistic simulation of evolution isn't possible yet.

The important questions about MA, assuming the program isn't simply fatally flawed, concern the model that it is implementing. For the default values, you don't have to run the program to know that it will produce genetic collapse of the population -- that's inevitable, given the assumptions of the model. The model assumes a large number of mildly deleterious mutations, so mild that they are unaffected by purifying selection. It also assumes purely hard selection, in which lower fitness translates directly into loss of fertility for the population, and few beneficial mutations (which are also of small effect), independent of the fitness of the population (i.e. no compensating mutations). Given those assumptions, the population will inevitably decline towards extinction, since there is no force counteracting the relentless accumulation of deleterious mutations. The model stands or falls on those assumptions; the code is a side-issue.
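A minimal sketch of those assumptions (all parameters invented for illustration) shows why the decline is built in:

Code Sample
U_del = 1.0     # deleterious mutations per genome per generation
s_del = 1e-4    # per-mutation cost, below the selection threshold
U_ben = 1e-5    # beneficial supply: tiny, small-effect, non-compensating
s_ben = 1e-4

w = 1.0
for gen in range(1, 5001):
    w *= (1 - s_del) ** U_del    # accumulates unchecked by selection
    w *= (1 + s_ben) ** U_ben    # negligible offset
    if gen % 1000 == 0:
        print(gen, round(w, 4))  # monotone decline toward extinction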

  