Topic: Evolutionary Computation, Stuff that drives AEs nuts
deadman_932



Posts: 3094
Joined: May 2006

(Permalink) Posted: July 25 2009,08:44   

The broad form that a reply to Sanford should take -- to give the fullest possible refutation of Sanford -- would be to eventually demonstrate how Mendel's Accountant is a deceptive product of Sanford's overall views.

Because Sanford believes that all life on Earth is between 5,000-100,000 years old, Mendel's Accountant essentially cooks the books to arrive at output which is intended to bolster Sanford's claims set out in his book "Genetic Entropy and the Mystery of the Genome."

I think everyone here is aware of that. Mendel's Accountant and Sanford's "Genetic Entropy" book go hand-in-hand.

Over at TalkRational, Febble, Vox Rat and others are going through Sanford's book. This can be useful in the future. See here, here and here, keeping in mind that the last two are currently less useful because the discussion really hasn't begun yet -- due to "AF Dave" serving as a foil for Febble. He's really putting off any in-depth discussion because (1) he's an idiot and (2) he's doing what he usually does: use a discussion for propaganda purposes rather than anything difficult like, y'know...learning. Then there's the good discussion at Theology Web. I'll look around and see what else I can find at other BBs.

Anyway, all of this has to be brought together at some point to show the pattern of pseudoscience and deception inherent in Sanford's efforts. It's a largish task, but manageable when broken down into parts.

--------------
AtBC Award for Thoroughness in the Face of Creationism

  
midwifetoad



Posts: 3607
Joined: Mar. 2008

(Permalink) Posted: July 25 2009,10:57   

So is genetic entropy almost like a prediction of ID, or is it just debris tossed behind to impede pursuit?

--------------
“let’s not make a joke of ourselves.”

Pat Robertson

  
Bob O'H



Posts: 1994
Joined: Oct. 2005

(Permalink) Posted: July 25 2009,17:08   

Shit, I really should write my review of Genetic Entropy.

Shorter version: Sanford ignores (a) multiple genes, and (b) sex.

--------------
It is fun to dip into the various threads to watch cluelessness at work in the hands of the confident exponent. - Soapy Sam (so say we all)

   
Steve Schaffner



Posts: 13
Joined: June 2009

(Permalink) Posted: July 25 2009,21:27   

[I posted this on TW, and Dr. GH asked me to repost it here. It mostly addresses the genetic model used in MA, rather than the implementation.]


John Sanford wrote me several weeks ago, replying to my previous comments on his model of evolution. I have just replied to his email. Since I do not have permission to quote his words, I tried to make my mail stand on its own as much as possible; if context is not clear, please ask me for clarification. (Or reply to praise my limpid prose style, or to tell me I'm a nitwit, or whatever. I.e. the usual.)

Here is my reply:

Hi John,

Viewed from a high level, populations crash in your model because of three features of the model. First, it has a high rate of very slightly deleterious mutations, ones that have too weak an effect to be weeded out by selection. Second, the accumulation of these mutations reduces the absolute fitness of the entire population. Third, beneficial mutations (and in particular compensating mutations) are rare enough (and remain rare enough even as fitness declines), and of weak enough effect, that they do not counteract the deleterious mutations. As far as I can tell, any model of evolution that has these features will lead to eventual extinction -- the details of the simulation shouldn't matter at this level. (Indeed, Kondrashov pointed out this general problem in 1995; I wouldn't be surprised if others have made the same point earlier.)
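A minimal toy simulation shows how those three features interact; every name and parameter below is purely illustrative and is not taken from Mendel's Accountant:

```python
import random

random.seed(1)

# Purely illustrative parameters -- not Mendel's Accountant's defaults.
N = 200      # population size
U = 0.5      # chance that an offspring carries one new deleterious mutation
s = 0.001    # cost per mutation; N*s = 0.2, so selection is nearly blind to it
GENS = 500

pop = [0] * N  # each entry: number of deleterious mutations carried
for _ in range(GENS):
    # selection acts on relative fitness (1 - s)**k ...
    weights = [(1 - s) ** k for k in pop]
    parents = random.choices(pop, weights=weights, k=N)
    # ... but new mutations arrive faster than selection removes them
    pop = [k + (1 if random.random() < U else 0) for k in parents]

mean_w = sum((1 - s) ** k for k in pop) / N
print(f"mean absolute fitness after {GENS} generations: {mean_w:.2f}")
```

Absolute mean fitness drifts steadily downward here because the sketch has no beneficial mutations at all, which is exactly the third premise in question.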

So there is no question that if these premises of the model are correct, organisms with modest population sizes (including all mammals, for example) are doomed, and Darwinian evolution fails as an explanation for the diversity of life. If one wishes to conclude that evolution does fail, however, it is necessary to show that all of the premises are true -- not merely that they are possible, but that they reflect the real processes occurring in natural populations. From my perspective, that means you need to provide empirical evidence to support each of them, and I don't think you have done so.

Turning specifically to the issue of soft selection: it matters here because it severs the connection between relative fitness and absolute population fitness. The essence of soft selection is that the absolute fitness of the population does not change, regardless of the relative fitness effects of individual mutations that accumulate in the population. As Kimura put it, "Therefore, under soft selection, the average fitness of the population remains the same even if the genetic constitution of the population changes drastically. This type of selection does not apply to recessive lethals that unconditionally kill homozygotes. However, if we consider the fact that weak competitors could still survive if strong competitors are absent, soft selection may not be uncommon in nature." (p. 126, The Neutral Theory of Molecular Evolution).

(An unimportant point: my understanding from reading Wallace is that he introduced the term "soft selection" in the context of accumulating deleterious mutations (especially concerns about them raised by Jim Crow), not in connection with Haldane's dilemma or the rate of beneficial substitution. If you have a citation that provides evidence otherwise, I would be interested in seeing it. The basic model of soft selection actually goes back at least to Levene in 1953 (predating Haldane's work by a few years), when he was considering the maintenance of varied alleles in a mixed environment. So this is not a new idea, and it is (contra your suggestion) a well-defined concept, one that is in fact often considered in the context of deleterious mutations and genetic load. Are there any recent published discussions of genetic load that do not consider soft selection as a possibility?)

In your reply to me, you said that the default in your program is purely soft selection. I don't know what the actual default is for deciding whether fitness affects fertility (since I have not run the program), but the online user manual says that an effect on fertility is in fact the default ("The default value is “Yes”, which means that fertility declines with fitness, especially as fitness approaches zero.") Regardless of the direct effect on fertility, the use of an additive model of fitness means that deleterious selection in your program ultimately ceases to be soft, since accumulating additive fitness always ends up at or below zero, at which point the relative fitness values no longer matter. In a model of soft selection, the magnitude of the population's fitness makes no difference at all; only the relative values of individuals have an effect. In your program, that is not the case. So in practice, your program does not seem to model long-term soft selection.

(As an aside, I'm afraid I don't understand your comments about having tested a multiplicative model of fitness. You say that in such a model, as the mean fitness falls, you see increasing numbers of individuals inherit a set of mutations that give a fitness less than or equal to zero. Under a multiplicative model, the fitness is given by f = (1-s1) * (1-s2) * (1-s3) *..., where s1, s2, s3... are the selection coefficients for the different mutations. If the various s values are less than 1.0 (as they must be if the mutations have been inherited), then f must always be greater than 0. I don't see how you can have a multiplicative model with the reported behavior. Perhaps you have a noise term that is still additive?)
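The arithmetic behind that aside can be checked directly (the coefficients below are arbitrary illustrations):

```python
# 300 deleterious mutations, each with selection coefficient s = 0.005
coeffs = [0.005] * 300

additive = 1.0
multiplicative = 1.0
for s in coeffs:
    additive -= s              # additive: fitness can reach and pass zero
    multiplicative *= (1 - s)  # multiplicative: each factor lies in (0, 1)

print(f"additive: {additive:.3f}")              # 1 - 300*0.005 = -0.500
print(f"multiplicative: {multiplicative:.3f}")  # 0.995**300 = 0.222
```

The multiplicative product of factors strictly between 0 and 1 can approach zero but never reach it, so individuals with fitness at or below zero are impossible under a purely multiplicative model, as the letter argues.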

The real question is whether or not soft selection is actually important and needs to be modeled. As you say, soft selection is a mental construct -- but so is hard selection. You dismiss it as a real phenomenon, but do you have any evidence to support your point here? Your populations crash because of very slightly deleterious mutations, and as far as I know, virtually nothing is known about what kind of fitness effects these mutations have. In general, there has been very little empirical work distinguishing soft from hard selection (or equivalently, quantifying the difference between absolute and relative fitness). The only recent study I know of to attempt it looked only at plant defense traits in A. thaliana (Kelley et al, Evolutionary Ecology Research, 2005, 7: 287–302), and they found soft selection effects to be more powerful than hard effects. So I do not see good empirical grounds for rejecting an important role for soft selection.

This isn't to suggest that all selection is soft, or that many mutations don't have real effects on the population fitness -- but there are good theoretical and empirical reasons to think that the net effect of many deleterious mutations is smaller when they are fixed in the population than their relative fitness would suggest. (Not that we actually know what the distribution of relative fitnesses looks like, either. You can pick a functional form for that distribution for the purpose of doing a simulation, but it is based on no real experimental evidence. Are deleterious mutations really so highly weighted toward very slight effects? There are just no data available to decide.)

If much selection actually is soft, then humans (and other mammals) could have in their genome millions of deleterious mutations already, the result of hundreds of millions of years of evolution; this is the standard evolutionary model. These mutations would have accumulated as population sizes shrank slowly (relaxing selection) and functional genome sizes grew (increasing the deleterious mutation rate). Indeed, many functional parts of the genome may never have been optimized at all: the deleterious "mutations" were there from the start. The results of this process are organisms that are imperfect compared to a platonic ideal version of the species, but perfectly functional in their own right. In your response, you cite systems biology's assessment that many organisms are highly optimized as a counter to this possibility. I do not find this persuasive, partly because systems biologists can also cite many features that are suboptimal, but mostly because no branch of biology has the ability to quantify the overall optimization of an organism, or to detect tiny individual imperfections in fitness.

Alternatively, beneficial mutations may be more common and of larger effect than in your default model. I pointed to one recent example of a beneficial mutation with a much larger selective advantage than your model would allow (lactase persistence in human adults). In turn you suggest that such large effects occur only in response to fatal environmental conditions, but the example I gave does not fall in that class. Do you have any empirical evidence that the selective advantage is restricted to such small values?

Michael Whitlock has a nice discussion of this kind of model in a paper from 2000: "Fixation of new alleles and the extinction of small populations: drift load, beneficial alleles, and sexual selection," Evolution 54(6): 1855–1861. His model tries to answer very similar questions to yours. With the choice of parameters that he thinks is reasonable, he finds that only a few hundred individuals are needed to prevent genetic decline.

He also discusses many of the same issues that we're discussing here. For example, on the subject of soft selection he writes, "We also have insufficient information about the relationship between the effects of alleles on relative fitness in segregating populations and their effects on absolute fitness when fixed. Whitlock and Bourguet (2000) have shown that for new mutations in Drosophila melanogaster, there is a positive correlation across alleles between the effects of alleles on productivity (a combined measure of the fecundity of adults and the survivorship of offspring) and male mating success. This productivity score should reflect effects of alleles on mean fitness, but the effects of male mating success are relative. Without choice, females will eventually mate with the males available, but given a choice the males with deleterious alleles have a low probability of mating. Other studies on the so-called good-genes hypothesis have confirmed that male mating success correlates with offspring fitness (e.g., Partridge 1980; Welch et al. 1998; see Andersson 1994)."

His conclusion about his own model strikes me as equally appropriate to yours: "We should not have great confidence in the quantitative values of the predictions made in this paper. In addition to the usual concern that the theoretical model may not include enough relevant properties of the system (e.g., this model neglects dominance and interlocus interactions, the Hill-Robertson effect, the effects of changing environments), the empirical measurements of many of the most important genetic parameters range from merely controversial to nearly nonexistent."

Using this kind of model to explore what factors might be important in evolution is fine, but I think using them to draw conclusions about the viability of evolution as a theory is quite premature.

  
Dr.GH



Posts: 1969
Joined: May 2002

(Permalink) Posted: July 26 2009,00:14   

Thanks, Steve. Your post, the observations and graphics by Zachriel, plus the graphics by Sam (AKA Ansgar Seraph) should add up to a solid refutation.

This is not my area, so if I missed anyone else's solid contribution, I apologize now rather than later.

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
MichaelJ



Posts: 455
Joined: June 2009

(Permalink) Posted: July 26 2009,06:54   

Quote (deadman_932 @ July 25 2009,08:44)
The broad form that a reply to Sanford should take -- to give the fullest possible refutation of Sanford -- would be to eventually demonstrate how Mendel's Accountant is a deceptive product of Sanford's overall views.

Because Sanford believes that all life on Earth is between 5,000-100,000 years old, Mendel's Accountant essentially cooks the books to arrive at output which is intended to bolster Sanford's claims set out in his book "Genetic Entropy and the Mystery of the Genome."

I think everyone here is aware of that. Mendel's Accountant and Sanford's "Genetic Entropy" book go hand-in-hand

Over at TalkRational, Febble, Vox Rat and others are going through Sanford's book. This can be useful in the future. See here,  here and here keeping in mind that the last two are currently less useful because the discussion really hasn't begun yet -- due to "AF Dave" serving as a foil for Febble. He's really putting off any in-depth discussion because (1) he's an idiot and (2) he's doing what he usually does; use a discussion for propaganda purposes rather than anything difficult like, y'know...learning. Then there's the good discussion at Theology Web. I'll look around and see what else I can find at other BB's

Anyway, all of this has to be brought together at some point to show the pattern of pseudoscience and deception inherent in Sanford's efforts. It's a largish task, but manageable when broken down into parts.

Febble has to be one of the clearest and most logical writers I have come across. afDave hasn't changed at all.

  
dvunkannon



Posts: 1377
Joined: June 2008

(Permalink) Posted: Aug. 01 2009,14:42   

I saw Nilsson and Pelger come up again on UD. Is there really no eye evolution computer simulation on the web?

--------------
I’m referring to evolution, not changes in allele frequencies. - Cornelius Hunter
I’m not an evolutionist, I’m a change in allele frequentist! - Nakashima

  
dvunkannon



Posts: 1377
Joined: June 2008

(Permalink) Posted: Sep. 09 2009,16:37   

Here's a little something related to genetic algorithms that I was thinking about during my recent vacation. I decided to write it down and share it with y'all in the hope of getting some feedback on the idea. Now that I've got the idea sketched out, I'll implement it in the little GA I've been building.

Thanks in advance for any comments.

Quote
The Valley of the Demes
An Island Model GA with Asymmetric Topology, Parameters, and Migration Policy

Overview

The Valley Model ("Valley of the Demes") consists of several changes to the standard Island Model for deme-structured GAs. The biological inspiration for the Valley Model is the typical alpine valley. The fertile central valley is connected to a branching network of side valleys in which conditions are harsher and more challenging.
While deme-structured GAs offer the opportunity to take advantage of multiple CPUs, there is also some evidence that the separation into sub-populations itself helps to maintain diversity and slow premature convergence. The Valley Model attempts to take advantage of, and to encourage, this diversity in a variety of ways that make sense from a general EA point of view, and are still consistent with a specific ecological metaphor.

Topology
The demes are connected in the topology of a truncated Bethe lattice. The central deme has three connections to demes in the next ring outwards. Each of these demes has two demes connected to it in the next most outward ring, and so on until the last ring. The total number of rings is a model parameter.
The number of demes in each ring is 1, 3, 6, 12, 24,... and the total number of demes is 1, 4, 10, 22, 46,...
In the valley metaphor, the side valleys are connected downwards to the central plain, but not to neighboring side valleys.
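The ring counts follow directly from the branching rule; a quick sketch (the function name is mine, for illustration):

```python
def ring_sizes(rings):
    """Demes per ring of the truncated Bethe lattice described above."""
    sizes = [1]                        # the central deme
    if rings > 1:
        sizes.append(3)                # three demes connect to the center
    while len(sizes) < rings:
        sizes.append(sizes[-1] * 2)    # each deme branches into two outward
    return sizes

print(ring_sizes(5))       # [1, 3, 6, 12, 24]
print(sum(ring_sizes(5)))  # 46 demes in total for five rings
```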

Deme Sizing
The central deme has an initial (and average, if varying) carrying capacity of half the total population. Each ring outward holds half of the remaining population, with the last ring keeping the remainder. For example, a model with four rings would allocate the population carrying capacities between the rings as 1/2, 1/4, 1/8, 1/8. However, because the number of demes grows in each ring, the per-deme carrying capacities would be 1/2, 1/12, 1/48, 1/96. Obviously, it would not make sense to run this four-ring model with a total population of less than 192 if all demes are intended to be occupied and the smallest demes are to hold at least two individuals.
In the valley metaphor, the carrying capacity of each deme is reduced with altitude: less space and poorer resources.
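The capacity split works out as follows (a sketch; `per_deme_fraction` is just an illustrative name):

```python
from fractions import Fraction

def per_deme_fraction(rings):
    """Fraction of the total population per deme in each ring.
    Each ring takes half the remaining population; the last keeps the rest."""
    sizes = [1]
    for i in range(1, rings):
        sizes.append(3 if i == 1 else sizes[-1] * 2)   # 1, 3, 6, 12, ...
    remaining = Fraction(1)
    shares = []
    for i in range(rings):
        share = remaining if i == rings - 1 else remaining / 2
        shares.append(share)
        remaining -= share
    return [share / n for share, n in zip(shares, sizes)]

caps = per_deme_fraction(4)
print([str(c) for c in caps])   # ['1/2', '1/12', '1/48', '1/96']
print(2 / min(caps))            # 192: minimum total population if the
                                # smallest deme must hold two individuals
```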

Deme Parameters
While crossover rates are expected to be constant across demes, the mutation rate is higher in each ring outward from the center.
In the valley metaphor, this could be attributed to more cosmic rays at higher altitudes, but the major reason for raising the mutation rate is to give the model a chance to generate diversity.

Migration Policy
At the end of each 'generation', the population of a deme must be reduced to its current carrying capacity. (Carrying capacity can be parametric in time, for each deme independently.) In a typical GA, losers of some selection process (or the entire old generation) die. In the Valley Model, these poor performers are instead exported to demes one ring outwards. Only in the last ring of demes do losers die.
(The above assumes that migration takes place every generation. If migration only occurs every few generations, as in many Island Models, over capacity populations in the central demes would have to cull individuals.)
Except for the central deme, demes may also attempt to reduce their population to the carrying-capacity level by sending the best of their population inwards, replacing some individual in the receiving deme.
While this inflow to the center is a 'best replaces worst', the outward migration is not, so the overall effect should not raise selection pressures. Also, the outward flow is based on population carrying capacity and population growth, while the inflow is based on the topology. If the generation size was small, close to a Steady State GA, these flows would balance.
In the valley metaphor, poor performers are pushed out of the fertile territory and forced to move higher upslope by overcrowding. The only hope to move back towards the lower ground is to take someone else's place.
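The reduce-and-export step for a single deme might look like this (a sketch of the stated policy; individuals are reduced to bare fitness values, and all names are illustrative):

```python
import random

def cull_and_export(deme, capacity, outward, last_ring=False):
    """Keep the best `capacity` individuals; export the rest one ring
    outward (or let them die if this is the last ring)."""
    deme.sort(reverse=True)            # individuals are fitness values
    losers = deme[capacity:]
    del deme[capacity:]
    if not last_ring:
        for ind in losers:
            random.choice(outward).append(ind)
    return losers

random.seed(0)
central = [0.9, 0.7, 0.5, 0.3, 0.1]
side_a, side_b = [], []
exported = cull_and_export(central, capacity=3, outward=[side_a, side_b])
print(central)           # [0.9, 0.7, 0.5]
print(sorted(exported))  # [0.1, 0.3] -- pushed upslope, not killed
```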

Initialization
To take advantage of founder effects, the model can be initialized so that only the central deme contains population members at time 0. In this way, several generations will pass before any population member is actually removed from the model.
In the valley metaphor, this initialization choice parallels the colonization of the valley for the first time by population members from elsewhere.

Summary
The Valley Model is intended to explore areas of GA model design in which model asymmetries help to preserve diversity in the population. The resulting diversity may support discovery of multiple solutions (niching) or simply avoid premature convergence, thereby improving the solution quality or speed to solution metrics.
The specific choice of asymmetries in the Valley Model is inspired by the ecologies and dynamics of real world mountain valleys.


--------------
I’m referring to evolution, not changes in allele frequencies. - Cornelius Hunter
I’m not an evolutionist, I’m a change in allele frequentist! - Nakashima

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 10 2009,14:12   

DiEb and I want to get serious about identifying errors in the IEEE SMC-A article of Dembski and Marks. Do you folks feel that the discussion deserves its own thread?

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 10 2009,15:42   

Quote (dvunkannon @ Sep. 09 2009,16:37)
Here's a little something related to genetic algorithms that I was thinking about during my recent vacation. I decided to write it down and share it with y'all in the hope getting some feedback on the idea. Now that I've got the idea sketched out, I'll implement it in the little GA I've been building.

Thanks in advance for any comments.

Something few people understand about the NFL theorems is that they level algorithms in terms of how well they do with n evaluations of the fitness function, and not how well they do with t units of running time. If you do not have prior knowledge of a problem instance that leads you to prefer some algorithm over others, or you choose to ignore that knowledge, then select the algorithm that runs the fastest (i.e., completes the greatest number of fitness evaluations in available running time).

You'll generally do better letting the computer architecture drive the choice of algorithm than by choosing the algorithm and trying to get its parameters right.
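One way to act on that advice is to benchmark each candidate by fitness evaluations completed in a fixed time budget and let the winner run. Everything below is an illustrative sketch, not code from the 1991 work:

```python
import random
import time

def throughput(step, fitness, budget=0.05):
    """Count how many fitness evaluations `step` completes in `budget` seconds."""
    count = 0
    def counted(x):
        nonlocal count
        count += 1
        return fitness(x)
    deadline = time.perf_counter() + budget
    while time.perf_counter() < deadline:
        step(counted)
    return count

def lean_step(f):    # cheap bookkeeping per evaluation
    f(random.random())

def heavy_step(f):   # same evaluation, plus expensive extra bookkeeping
    f(random.random())
    sum(i * i for i in range(2000))

fitness = lambda x: -(x - 0.5) ** 2
fast = throughput(lean_step, fitness)
slow = throughput(heavy_step, fitness)
print(fast > slow)   # with no prior knowledge, prefer the leaner loop
```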

Back in 1991, before I'd hit on the NFL rationale (1994), I was working with a SIMD architecture with a toroidal mesh of processing elements. While giving a course on GA's, I designed a GA to run really, really fast on that architecture, abandoning bio-mimicry when necessary, and handed off the programming to two doctoral students. One of the students -- the one who subsequently presented our work at a conference -- liked to speak of my "big, messy genetic algorithm." The phrase was not in our paper, but it evidently made its way to Goldberg. There have been a fair number of BMGA papers to come from his lab.

The size of demes on individual processing elements was determined entirely by available memory. The parameter we played with most was the number of generations between migrations of individuals between neighboring processing elements. (I can't remember how we selected the migrants.) We thought, buying into Goldberg, that migration and recombination were essential. There is actually a theoretical argument for isolated parallel runs. So nowadays I would leave out migration unless I had reason to believe it would be beneficial, even though our program did not spend much time on it.

The program blazed, in terms of wall-clock time, and this was due to exploitation of the architecture. My students also implemented and studied many existing GA's, and we discovered just how much unreported parameter tweaking underlay reported results. There is no magic in any form of evolutionary computation, and exploiting the architecture should be Job One unless you have prior knowledge of the problem instance to exploit.

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
dvunkannon



Posts: 1377
Joined: June 2008

(Permalink) Posted: Oct. 10 2009,16:12   

Quote (Turncoat @ Oct. 10 2009,15:12)
DiEb and I want to get serious about identifying errors in the IEEE SMC-A article of Dembski and Marks. Do you folks feel that the discussion deserves its own thread?

I'd prefer for that discussion to happen here. Special purpose threads are hard to find (at least for me). I prefer general threads that pick up topics as necessary. Viz. the discussion of Mendel's Accountant and previous Weasel discussions on this thread.

--------------
I’m referring to evolution, not changes in allele frequencies. - Cornelius Hunter
I’m not an evolutionist, I’m a change in allele frequentist! - Nakashima

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 10 2009,21:31   

Quote (Turncoat @ Oct. 10 2009,15:42)
There is actually a theoretical argument for isolated parallel runs. So nowadays I would leave out migration unless I had reason to believe it would be beneficial, even though our program did not spend much time on it.

I should clarify that a bit. I would not run isolated GA's with small populations, high crossover rates, and low mutation rates in parallel. There would be a lot of wasteful reevaluation of individuals. I would run isolated algorithms with relatively high mutation rates in parallel. With high mutation rates, few individuals are evaluated more than once.
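The back-of-envelope version of that point, for a mutation-only bitstring scheme: with per-bit mutation rate u over L bits, an offspring is an exact copy of its parent with probability (1 - u)**L (numbers below are illustrative):

```python
L = 100  # genome length in bits
for u in (0.001, 0.01, 0.1):
    p_copy = (1 - u) ** L
    # high mutation rates make exact re-evaluation of an old genotype rare
    print(f"u = {u}: P(offspring identical to parent) = {p_copy:.5f}")
```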

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
dvunkannon



Posts: 1377
Joined: June 2008

(Permalink) Posted: Oct. 10 2009,22:17   

Quote (Turncoat @ Oct. 10 2009,16:42)
Quote (dvunkannon @ Sep. 09 2009,16:37)
Here's a little something related to genetic algorithms that I was thinking about during my recent vacation. I decided to write it down and share it with y'all in the hope getting some feedback on the idea. Now that I've got the idea sketched out, I'll implement it in the little GA I've been building.

Thanks in advance for any comments.

Something few people understand about the NFL theorems is that they level algorithms in terms of how well they do with n evaluations of the fitness function, and not how well they do with t units of running time. If you do not have prior knowledge of a problem instance that leads you to prefer some algorithm over others, or you choose to ignore that knowledge, then select the algorithm that runs the fastest (i.e., completes the greatest number of fitness evaluations in available running time).

Taken to the extreme, that says just run "Generate and Test" and give up on this fancy selecting, sorting, etc. NFL is NFL.

However, I think we agree that most problems humans find interesting are in a class where some level of GA can help if we don't already know a closed-form solution. I realize there is a view that demes are just a way to allocate hardware appropriately, but I've also seen research suggesting that avoiding panmixia is a benefit irrespective of hardware. YMMV.

--------------
I’m referring to evolution, not changes in allele frequencies. - Cornelius Hunter
I’m not an evolutionist, I’m a change in allele frequentist! - Nakashima

  
midwifetoad



Posts: 3607
Joined: Mar. 2008

(Permalink) Posted: Oct. 11 2009,07:14   

There are only about half a dozen threads that maintain visibility here, so I vote for keeping the number down, and keeping them general.

--------------
“let’s not make a joke of ourselves.”

Pat Robertson

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 11 2009,23:36   

Anyone else notice the attributed oxymoron here?
   
Quote
English’s Law of Conservation of Information (COI) [15] notes “the futility of attempting to design a generally superior optimizer” without problem-specific information about the search.

If you're attempting to design a generally superior optimizer, you are not looking at problem-specific information.

Mighty rude to complain about being made infamous, ain't it?

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 11 2009,23:50   

Quote (dvunkannon @ Oct. 10 2009,22:17)
Quote (Turncoat @ Oct. 10 2009,16:42)
 
Quote (dvunkannon @ Sep. 09 2009,16:37)
Here's a little something related to genetic algorithms that I was thinking about during my recent vacation. I decided to write it down and share it with y'all in the hope getting some feedback on the idea. Now that I've got the idea sketched out, I'll implement it in the little GA I've been building.

Thanks in advance for any comments.

Something few people understand about the NFL theorems is that they level algorithms in terms of how well they do with n evaluations of the fitness function, and not how well they do with t units of running time. If you do not have prior knowledge of a problem instance that leads you to prefer some algorithm over others, or you choose to ignore that knowledge, then select the algorithm that runs the fastest (i.e., completes the greatest number of fitness evaluations in available running time).

Taken to the extreme, that says just run "Generate and Test" and give up on this fancy selecting, sorting, etc. NFL is NFL.

However, I think we agree that most human interesting problems are in a class where some level of GA can help if we don't already know a closed form solution. I realize there is a view that demes are just to allocate hardware appropriately, but I've also seen research that avoiding panmixis is a benefit irrespective of hardware. YMMV.

My main point was that hardware considerations must sometimes take precedence over algorithmics.

When we get into the Dembski and Marks article, you'll see that I believe we have learned a lot about problems, and that D&M ignore this source of information in their comments about evolutionary optimizers. I don't believe that their "search for a search" regress of probability measures models our learning through experience.

It seems to me that you have a metaheuristic, and that you would instantiate generic operations differently for different problems, presumably exploiting knowledge of the problem. It is not merely a fine theoretical point that different instantiations give different algorithms.

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
Dr.GH



Posts: 1969
Joined: May 2002

(Permalink) Posted: Oct. 12 2009,09:48   

Quote (Turncoat @ Oct. 11 2009,21:36)
Anyone else notice the attributed oxymoron here?
   
Quote
English’s Law of Conservation of Information (COI) [15] notes “the futility of attempting to design a generally superior optimizer” without problem-specific information about the search.

If you're attempting to design a generally superior optimizer, you are not looking at problem-specific information.

Mighty rude to complain about being made infamous, ain't it?

HehHeh
Nicely put.

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
DiEb



Posts: 238
Joined: May 2008

(Permalink) Posted: Oct. 13 2009,13:40   

I'm mainly interested in the evolutionary algorithms used in W. Dembski's and R. Marks's paper Conservation of Information in Search - Measuring the Cost of Success, i.e., examples E and F in section III, Examples of Active Information in Search. I tried to gather my thoughts, at first on my blog, but now on this page of <a href="http://rationalwiki.com" target="_blank">rationalwiki</a>: this wiki allows for math tags and, of course, for collaboration. I'd love to get some input/critique/reactions...

   
Bob O'H



Posts: 1994
Joined: Oct. 2005

(Permalink) Posted: Oct. 13 2009,13:55   

Quote
I don't believe that their "search for a search" regress of probability measures models our learning through experience.

I recall from a previous incarnation of this idea (when wMad assumed, for consistency's sake, that q < p and proved, 4 pages later, that log(q) < log(p)) that I thought a nice metaphor would be that Dembski was saying it's easier to find a needle in a haystack than to find that huge electromagnet in the shed next to the haystack.

--------------
It is fun to dip into the various threads to watch cluelessness at work in the hands of the confident exponent. - Soapy Sam (so say we all)

   
midwifetoad



Posts: 3607
Joined: Mar. 2008

(Permalink) Posted: Oct. 13 2009,14:08   

I can't evaluate the math, but I think people are being led down a garden bath with the concept of search.

I don't see any evidence that biology is modeled by a search algorithm. In biology a change might affect fitness for unknown reasons, and a subsequent change to the same position might further affect fitness. There is no single correct value for any given position in the string.

Behe and Dembski want you to believe that evolution must progress toward goals (consider Behe's obsession with the flagellum), but biology merely chugs along with whatever is adequate.
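That point, that no position has a context-free correct value, can be made with a two-line toy fitness function (my own construction, purely illustrative):

```python
# Toy epistatic fitness: the "right" value at each site depends on its
# neighbor, so no site has a single correct value in isolation.
def fitness(genome):
    # reward adjacent sites that disagree (a minimal NK-flavored interaction)
    return sum(genome[i] != genome[i + 1] for i in range(len(genome) - 1))

print(fitness([0, 1, 0, 1]))  # 3: a 0 in the first slot is good here...
print(fitness([1, 0, 1, 0]))  # 3: ...but the full complement scores the same
print(fitness([0, 0, 0, 0]))  # 0: the same allele everywhere scores nothing
```

A later change at one site can raise or lower the contribution of an earlier, unchanged site, which is exactly the "subsequent change to the same position" situation.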

--------------
”let’s not make a joke of ourselves.”

Pat Robertson

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 13 2009,14:15   

Quote (Bob O'H @ Oct. 13 2009,13:55)
Quote
I don't believe that their "search for a search" regress of probability measures models our learning through experience.

I recall from a previous incarnation of this idea (when wMad assumed, for consistency's sake, that q < p and proved, 4 pages later, that log(q) < log(p)) that I thought a nice metaphor would be that Dembski was saying it's easier to find a needle in a haystack than to find that huge electromagnet in the shed next to the haystack.

Neigh-h-h-h.



--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 13 2009,14:46   

Quote (midwifetoad @ Oct. 13 2009,14:08)
I can't evaluate the math, but I think people are being led down a garden bath with the concept of search.

I don't see any evidence that biology is modeled by a search algorithm. In biology a change might affect fitness for unknown reasons, and a subsequent change to the same position might further affect fitness. There is no single correct value for any given position in the string.

Behe and Dembski want you to believe that evolution must progress toward goals (consider Behe's obsession with the flagellum), but biology merely chugs along with whatever is adequate.

I agree entirely. I raised the issue of whether optimization was a good model of biological evolution in my first NFL paper, back in 1996. Now I am completely convinced that it is not. As Allen MacNeill rightly emphasizes, the consequence of variety, heredity, and fecundity is demography. A novel biological type can survive by virtue of its difference from the type that gave rise to it. There is not necessarily any basis for saying that the difference makes it better or worse.

Dembski and Marks have indicated that there are "implicit targets" for biological search. It's hilarious that they smuggle in teleology while accusing others of smuggling information into computational models of biological evolution. Creationists have long mistaken what did happen for what had to happen.
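The demography point can be illustrated with a neutral Wright-Fisher simulation: a variant that is neither better nor worse than the type it arose from still sometimes takes over the whole population. A sketch with parameters of my own choosing:

```python
import random

def neutral_fixation(pop_size=50, trials=2000, seed=0):
    # Wright-Fisher drift: one neutral mutant in a population of pop_size.
    # Count how often it takes over despite being no better or worse.
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # copies of the new type
        while 0 < count < pop_size:
            # each slot in the next generation copies a uniformly chosen
            # parent; there are no fitness differences anywhere
            count = sum(rng.random() < count / pop_size
                        for _ in range(pop_size))
        fixed += (count == pop_size)
    return fixed / trials

print(neutral_fixation())  # should land near 1/pop_size = 0.02
```

The estimate hovers around the textbook expectation that a neutral mutant fixes with probability equal to its initial frequency, 1/N, with no optimization anywhere in sight.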

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
midwifetoad



Posts: 3607
Joined: Mar. 2008

(Permalink) Posted: Oct. 13 2009,15:00   

Quote
I agree entirely. I raised the issue of whether optimization was a good model of biological evolution in my first NFL paper, back in 1996. Now I am completely convinced that it is not.


I think fitness is a useful concept, as long as you don't conflate it with correctness.

BTW, bath=path, for more fitness.

--------------
”let’s not make a joke of ourselves.”

Pat Robertson

  
Dr.GH



Posts: 1969
Joined: May 2002

(Permalink) Posted: Oct. 13 2009,15:17   

Quote (Turncoat @ Oct. 13 2009,12:46)
Creationists have long mistaken what did happen for what had to happen.

I like that a lot. Coupled with evolutionary ratcheting, it punches a big hole in creationist arguments.

Of course, religionists can all believe that their god(s) knew how everything would turn out all along. Except they never seem to be able to figure it out.

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
Turncoat



Posts: 124
Joined: Dec. 2007

(Permalink) Posted: Oct. 13 2009,15:50   

In every introductory AI lecture I gave, I asked my students, "Which are more intelligent, cats or dogs?" The question is just as silly with "fit" in place of "intelligent."

I've read an interesting article by Sober on the role of reproductive fitness in biological modeling. As I recall, he indicates that its main use is in modeling at the level of a few alleles and traits (i.e., in population genetics). I am not sure that the concept is necessarily tautological at the level of whole organisms, but I definitely have seen many folks slip into tautology.

Creationist arguments that bacteria gain antibiotic resistance through a decrease in "real" fitness (i.e., the environment outside the hospital, in which the antibiotic-resistant strain fares poorly, is more real than the environment inside the hospital) are silly. And they go "poof" with emphasis on demography, as opposed to fitness.
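The hospital/outside point is really just that fitness is a function of the pair (genotype, environment), not of genotype alone. A toy table (all numbers invented for illustration):

```python
# Invented growth rates: which strain counts as "fitter" depends entirely
# on the environment, so neither number is more "real" than the other.
growth = {
    ("wild_type", "hospital"): 0.2,  # antibiotic present: wild type suffers
    ("wild_type", "outside"):  1.0,
    ("resistant", "hospital"): 0.9,
    ("resistant", "outside"):  0.7,  # resistance carries a cost out here
}

def fitter(env):
    # rank the two strains by growth rate in the given environment
    return max(["wild_type", "resistant"], key=lambda s: growth[(s, env)])

print(fitter("hospital"))  # resistant
print(fitter("outside"))   # wild_type
```

There's no column of the table you can delete and still call the rest "real" fitness; demography in each environment is the whole story.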

--------------
I never give them hell. I just tell the truth about them, and they think it's hell. — Harry S Truman

  
DiEb



Posts: 238
Joined: May 2008

(Permalink) Posted: Oct. 13 2009,19:39   

I completed my little project on the evolutionary algorithms in Dembski's and Marks's paper here. Completed? Well, it's a complete draft :-)
One little insight: I suppose that one author wrote 2) Optimization by Mutation and the other 3) Optimization by Mutation With Elitism - the two are virtually the same, just in different notation, so they didn't spot what they were doing....
I'd bet that Marks wrote 2) - it feels a little bit more rigorous.
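For readers who haven't compared the two schemes: the difference between mutation-based optimization with and without elitism can be as small as a single comparison. A generic toy climber of my own (not the algorithms as Dembski and Marks wrote them):

```python
import random

def mutate(bits, rate):
    # flip each bit independently with the given probability
    return [b ^ (random.random() < rate) for b in bits]

def optimize(length=20, gens=200, rate=0.05, elitist=True, seed=0):
    # toy one-max climber; with elitism the parent survives a worse
    # offspring, without it the offspring always replaces the parent
    random.seed(seed)
    current = [random.randint(0, 1) for _ in range(length)]
    for _ in range(gens):
        child = mutate(current, rate)
        # the two variants differ only in this one acceptance rule
        current = max(current, child, key=sum) if elitist else child
    return sum(current)

print(optimize(elitist=True), optimize(elitist=False))
```

Textually the variants are nearly identical, which makes it easy to describe the same procedure twice in different notation without noticing.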

   
Dr.GH



Posts: 1969
Joined: May 2002

(Permalink) Posted: Oct. 13 2009,22:55   

Dang. I thought I had a thought, but I forgot. Glad to see you are all happy.

(Actually, I think that a GA needs to replicate the Hardy-Weinberg (HW) equilibrium data on a real population to defeat Sanford.)
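For reference, the Hardy-Weinberg expectation that a simulated population would have to reproduce is just p², 2pq, q² at a biallelic locus with allele frequency p:

```python
def hardy_weinberg(p):
    # expected genotype frequencies at Hardy-Weinberg equilibrium
    # for a biallelic locus with allele frequency p (and q = 1 - p)
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

freqs = hardy_weinberg(0.7)
print({k: round(v, 4) for k, v in freqs.items()})
# {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
```

A GA whose population departs from these frequencies without a modeled cause (selection, drift, non-random mating) isn't tracking real population genetics.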

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
DiEb



Posts: 238
Joined: May 2008

(Permalink) Posted: Oct. 19 2009,05:24   

R. Marks and W. Dembski are trying to get a new paper through peer review. In a podcast, W. Dembski announced it as

We have some powerful results that follow up on this paper [Conservation of Information in Search:Measuring the Cost of Success]. This is a paper called "The Search for the Search" which is coming out. It should be out now, but there is some delay in the journal's publishing vent[?]

You can find a draft here.

   
DiEb



Posts: 238
Joined: May 2008

(Permalink) Posted: Oct. 21 2009,01:09   

I stated my problems with Dembski's and Marks's new paper The Search for a Search here. Don't worry, I kept it entirely non-technical :-)

   
DiEb



Posts: 238
Joined: May 2008

(Permalink) Posted: Oct. 22 2009,07:42   

I tried to post the following at UncommonDescent:

The Horizontal No Free Lunch Theorem doesn't work for a search of length m > 1 for a target T ⊆ Ω:

Let Ω be a finite search space and T ⊆ Ω the target, a non-empty subset of Ω. A search is a (finite) sequence of Ω-valued random variables (φ_1, φ_2, ..., φ_m). A search is successful if φ_n ∈ T for some n, 1 ≤ n ≤ m.

I suppose we do agree here. Now, we look at a search Φ as an Ω^m-valued random variable, i.e., Φ := (φ_1, φ_2, ..., φ_m).

When is it successful? If we are still looking for a T ⊆ Ω, we can say that we found T during our search if

Φ ∈ Ω^m \ (Ω \ T)^m

Let's define Ξ as the family of subsets of Ω^m which consists of the representations of targets in Ω, i.e.,

Ξ := {Ω^m \ (Ω \ T)^m | T non-empty subset of Ω}

Obviously, Ξ is much smaller than the set of all subsets of Ω^m.

But this Ξ is the space of feasible targets. And if you take an exhaustive partition of Ξ instead of Ω^m in Theorem III.1 (Horizontal No Free Lunch), you'll find that you can indeed have positive values for the active entropy as defined in the same theorem.

But that's not much of a surprise, as random sampling without repetition works better than random sampling with repetition.
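(That claim is easy to check numerically: with m distinct queries the miss probability is hypergeometric, C(N-k, m)/C(N, m), versus (1 - k/N)^m when repeats are allowed. A quick comparison, toy numbers mine:)

```python
from math import comb

def p_hit_with_repetition(N, k, m):
    # m independent uniform draws from a space of N points, target size k
    return 1 - (1 - k / N) ** m

def p_hit_without_repetition(N, k, m):
    # m distinct uniform draws; the miss probability is hypergeometric
    return 1 - comb(N - k, m) / comb(N, m)

N, k, m = 100, 5, 20
print(round(p_hit_with_repetition(N, k, m), 3))     # ~0.642
print(round(p_hit_without_repetition(N, k, m), 3))  # ~0.681
```

Sampling without repetition never does worse, and does strictly better whenever m > 1 and the target doesn't fill the space.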

But if you allow T to be any subset of Ω^m, your results get somewhat trivial, as you are now looking at m independent searches of length 1 for different targets.

The searches which you state as examples in this paper and the previous one all work with a fixed target, i.e., elements of Ξ. You never mention the possibility that the target changes between the steps of the search (one possible interpretation of taking arbitrary subsets of Ω^m into account).

So, I'm facing two possibilities:

  1. You didn't realize the switch from stationary targets to moving ones when you introduced searching for an arbitrary subset of Ω^m
  2. You realized this switch to a very different concept, but chose not to stress the point.

   
  360 replies since Mar. 17 2009,11:00 < Next Oldest | Next Newest >  
