Bob O'H – October 3, 2019 at 2:08 am
ba77 – As you’ve read Durrett and Schmidt, you’ll be able to tell me how it is relevant, i.e. go through the examples of positive selection and show that the Durrett and Schmidt model is appropriate.
Bornagain77 – October 3, 2019 at 5:18 am
Bob (and weave) O’Hara wants to know how the waiting time problem is even ‘relevant’ to his claims that there is evidence for positive selection for gradual Darwinian evolution. That is an interesting question coming from a professor/statistician who admits on his LinkedIn page that “I torture data until it confesses. Sometimes I have to resort to Bayesianism”
Professor at NTNU
“I torture data until it confesses. Sometimes I have to resort to Bayesianism” – 2016
“I tortured data, mainly in ecology and evolutionary biology.” – 2009
Of course the problem with Bob (and weave) O’Hara admitting that he statistically ‘tortures data until it confesses’ is that data, like people, will confess to anything you want if you torture it long enough.
“If you torture the data long enough, it will confess to anything.”
– Ronald Harry Coase (/ˈkoʊs/; 29 December 1910 – 2 September 2013) was a British economist. Coase believed economists should study real markets and not theoretical ones.
And just as economists should study real markets and not theoretical ones, I suggest that Bob (and weave) O’Hara should study the real world of the fossil record, mutations, and genetics, and not his theoretical world based on his self-admitted ‘tortured’ version of population genetics.
Right of Reply: Our Response to Jerry Coyne
written by Günter Bechly, Brian Miller and David Berlinski – Sept. 2019
Excerpt: The whales? And in twelve million years? Not likely. The available window of time for the transition from the terrestrial pakicetids to fully marine basilosaurids (Pelagiceti) is only 4.5 million years. This corresponds to the lifespan of a single larger mammal species, as Donald Prothero correctly notes. Prothero is Coyne’s ideological ally. They should be better friends. Short time spans give rise to a generic waiting time problem—a much-discussed issue in mainstream population genetics. It is easy to see why. The time required for even a single pair of coordinated mutations to originate and spread in a population is, at least, an order of magnitude longer than the window of time established by the fossil record. Either the fossil record must go, or the waiting time must go, but they cannot go on together. The whales are the least of it. The emergence of a single pair of coordinated mutations in the human lineage required a waiting time of 216 million years. The separation of the chimpanzee and human lineages took place only six or seven million years ago. These figures are clearly in conflict. This is the standard view, the one held by mainstream evolutionary biologists. (Rick Durrett and Deena Schmidt GENETICS November 1, 2008)
Also of note to the waiting time problem:
The waiting time problem in a model hominin population – 2015 Sep 17
John Sanford, Wesley Brewer, Franzine Smith, and John Baumgardner
Excerpt: The program Mendel’s Accountant realistically simulates the mutation/selection process,,,
Given optimal settings, what is the longest nucleotide string that can arise within a reasonable waiting time within a hominin population of 10,000? Arguably, the waiting time for the fixation of a “string-of-one” is by itself problematic (Table 2). Waiting a minimum of 1.5 million years (realistically, much longer), for a single point mutation is not timely adaptation in the face of any type of pressing evolutionary challenge. This is especially problematic when we consider that it is estimated that it only took six million years for the chimp and human genomes to diverge by over 5 % . This represents at least 75 million nucleotide changes in the human lineage, many of which must encode new information.
While fixing one point mutation is problematic, our simulations show that the fixation of two co-dependent mutations is extremely problematic – requiring at least 84 million years (Table 2). This is ten-fold longer than the estimated time required for ape-to-man evolution. In this light, we suggest that a string of two specific mutations is a reasonable upper limit, in terms of the longest string length that is likely to evolve within a hominin population (at least in a way that is either timely or meaningful). Certainly the creation and fixation of a string of three (requiring at least 380 million years) would be extremely untimely (and trivial in effect), in terms of the evolution of modern man.
It is widely thought that a larger population size can eliminate the waiting time problem. If that were true, then the waiting time problem would only be meaningful within small populations. While our simulations show that larger populations do help reduce waiting time, we see that the benefit of larger population size produces rapidly diminishing returns (Table 4 and Fig. 4). When we increase the hominin population from 10,000 to 1 million (our current upper limit for these types of experiments), the waiting time for creating a string of five is only reduced from two billion to 482 million years.
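For intuition only, here is a back-of-envelope neutral-drift calculation of the waiting-time scale. This is not the Mendel’s Accountant simulation quoted above, it ignores selection entirely, and the population size, mutation rate, and generation time are illustrative assumptions:

```python
# Back-of-envelope sketch (assumptions, not values from the paper):
mu = 1e-8        # probability a *specific* point mutation arises per gamete per generation
N = 10_000       # diploid hominin population size
gen_years = 20   # years per generation

# Each generation, about 2*N*mu new copies of the target mutation appear,
# and a neutral copy eventually fixes with probability 1/(2*N). So the
# expected number of generations until a copy destined to fix appears is:
wait_generations = 1 / (2 * N * mu * (1 / (2 * N)))   # simplifies to 1/mu
wait_years = wait_generations * gen_years

print(f"Expected wait: {wait_years:.1e} years")  # 2.0e+09 years
```

The point of the sketch is only that 1/μ sets the scale for a specific neutral site; selection changes the numbers (which is why simulations such as the paper’s report shorter, though still long, waiting times), but the basic scale problem is visible even in this simplest case.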
The real world simply is not kind to Bob (and weave) O’Hara’s statistically tortured theoretical one in the least. For instance, when the ‘real world’ effects of mutations were added to Fisher’s ‘fundamental’ theorem for Darwinian evolution, Fisher’s supposed theoretical proof of Darwinian evolution was falsified by real-world data.
Fisher’s proof of Darwinian evolution has been flipped?
– December 27, 2017
Excerpt: we re-examine Fisher’s Theorem, showing that because it disregards mutations, and because it is invalid beyond one instant in time, it has limited biological relevance. We build a differential equations model from Fisher’s first principles with mutations added, and prove a revised theorem showing the rate of change in mean fitness is equal to genetic variance plus a mutational effects term. We refer to our revised theorem as the fundamental theorem of natural selection with mutations. Our expanded theorem, and our associated analyses (analytic computation, numerical simulation, and visualization), provide a clearer understanding of the mutation–selection process, and allow application of biologically realistic parameters such as mutational effects. The expanded theorem has biological implications significantly different from what Fisher had envisioned.
The fundamental theorem of natural selection with mutations – June 2018
Excerpt: Because the premise underlying Fisher’s corollary is now recognized to be entirely wrong, Fisher’s corollary is falsified. Consequently, Fisher’s belief that he had developed a mathematical proof that fitness must always increase is also falsified.
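The revised theorem described in the excerpts can be written schematically as follows (my notation, not the paper’s):

$$\frac{d\bar{w}}{dt} \;=\; V_g \;+\; M$$

where $\bar{w}$ is mean fitness, $V_g$ is the genetic variance in fitness (always $\geq 0$), and $M$ is the mutational-effects term (typically negative, since most mutations are deleterious). Fisher’s original theorem is the special case $M = 0$, which is what guaranteed $d\bar{w}/dt \geq 0$; with $M$ included, mean fitness can decline.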
As well, it is now found that “Promising efforts at disentangling the effects of genes and the environment on complicated traits may have been confounded by statistical problems.”
New Turmoil Over Predicting the Effects of Genes – April 2019
Promising efforts at disentangling the effects of genes and the environment on complicated traits may have been confounded by statistical problems.
Excerpt:,,, But now, two results published last month have cast doubt on those findings, and have illustrated that problems with interpretations of GWAS, (genome-wide association studies), results are far more pervasive than anyone realized. The work has implications for how scientists think about the interactions between genetic and environmental effects. It also “raise[s] the ghosts of the possibility that we overestimate … how important genetics is in contributing to differences between people,”
,,, “The new studies are really quite disconcerting,” Barton said, because they demonstrated that scientists had been mistaking biases in the polygenic score calculations for something biologically interesting.,,,
,,, Though it was always understood to be a problem, “no one realized how big of a problem it was,”,,,
“It was just that sort of feeling where the world shifts under your feet slightly,”,,, “It’s fairly humbling to see all of that work go away.”,,,
That’s not to say that genome-wide association studies do not have incredible power.,,,
It’s when they’re accumulated to make inferences about differences between populations, both in evolutionary and medical contexts, that things can go wrong.
“We have to go back to the thinking box,” Nielsen said. “This is a major wake-up call … a game changer.”
i.e. the ‘real world’ was not kind to the ‘tortured’ statistical one that Darwinists had constructed!
Of course the real world is never kind to those who prefer to live in their own imagined version of reality. But it is simply insulting, especially in science, that Darwinists such as Bob (and weave) insist that we accept their imagined/tortured statistical version of reality as representative of the real world.
Here is a song you may enjoy Bob (and weave)
“I wish the real world, would just stop hassling me
I wish the real world, would just stop hassling me
I wish the real world, would just stop hassling me
And you, and me”
– Matchbox Twenty – Real World (Official Video)
Bornagain77 – October 3, 2019 at 8:02 am
Of note to the preceding comment: that is not to say that statistics, when properly used, is not very useful. What I am trying to say is that statistics, especially in the hands of Darwinists, is very much ripe for abuse:
Scientific method: Statistical errors – P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume. – Regina Nuzzo – 12 February 2014
Excerpt: “P values are not doing their job, because they can’t,” says Stephen Ziliak, an economist at Roosevelt University in Chicago, Illinois, and a frequent critic of the way statistics are used.,,,
“Change your statistical philosophy and all of a sudden different things become important,” says Steven Goodman, a physician and statistician at Stanford. “Then ‘laws’ handed down from God are no longer handed down from God. They’re actually handed down to us by ourselves, through the methodology we adopt.”,,
One researcher suggested rechristening the methodology “statistical hypothesis inference testing”, presumably for the acronym it would yield.,,
The irony is that when UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look. The idea was to run an experiment, then see if the results were consistent with what random chance might produce.,,,
Neyman called some of Fisher’s work mathematically “worse than useless”,,,
“The P value was never meant to be used the way it’s used today,” says Goodman.,,,
The more implausible the hypothesis — telepathy, aliens, homeopathy — the greater the chance that an exciting finding is a false alarm, no matter what the P value is.,,,
“It is almost impossible to drag authors away from their p-values, and the more zeroes after the decimal point, the harder people cling to them”,,
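The ‘torture the data’ problem described above has a simple arithmetic core: run enough independent tests on pure noise and ‘significant’ p-values appear by chance alone. A minimal sketch (the test counts are arbitrary):

```python
# Chance of at least one p < 0.05 "discovery" among k independent tests
# when every null hypothesis is actually true (i.e. pure noise):
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> {p_any:.1%} chance of a false alarm")
# e.g. 20 independent tests already give about a 64% chance of at least
# one "significant" result from noise alone.
```

This is why a researcher who keeps re-slicing a dataset until something crosses the 0.05 threshold will almost always find a ‘confession’ eventually.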
Yet to repeat: statistics, when properly used (i.e. letting the data speak for itself instead of trying to ‘torture’ it into saying what you want it to say), can indeed be very useful,
This Could Be One of the Most Important Scientific Papers of the Decade – July 23, 2018
Excerpt: Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (UniRef-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree.
This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets.
Response to a Critic: But What About Undirected Graphs? – Andrew Jones – July 24, 2018
Excerpt: The thing is, Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.” Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of “reticulation” at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree.
New Paper by Winston Ewert Demonstrates Superiority of Design Model – Cornelius Hunter – July 20, 2018
Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data.
Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model.
Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree.
Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process.
Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
Where It Counts
Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous.
Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent.
Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other.
We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand.
Ten thousand is a big number. But it gets worse, much worse.
Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division.
The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division.
Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how, in base 10, a logarithm of 2 really means 100, a 3 means 1,000, and so forth?
Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That’s the ratio of how probable the data are on these two models!
By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent.
10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence.
This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model is compared to the common descent model, we get 10,064 bits.
But It Gets Worse
The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450.
In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450.
We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data.
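The bit arithmetic in the excerpt is easy to verify. A log-Bayes-factor of B bits means a probability ratio of 2^B, and the number of decimal digits in 2^B is floor(B · log10 2) + 1, so there is no need to compute the huge number itself. A quick check, using the bit values quoted above:

```python
import math

def decimal_digits_of_pow2(bits: int) -> int:
    """Number of decimal digits in 2**bits, computed via logarithms."""
    return math.floor(bits * math.log10(2)) + 1

# Bayes factors (in bits) quoted in the excerpt: HomoloGene, and the
# low and high ends of the range for the other eight data sets.
for bits in (10_064, 40_967, 515_450):
    print(f"2**{bits:,} has {decimal_digits_of_pow2(bits):,} decimal digits")
```

For the HomoloGene value of 10,064 bits this gives about 3,030 decimal digits, consistent with the excerpt’s “a 1 followed by more than 3,000 zeros.”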
Bob O'H – October 3, 2019 at 9:00 am
ba77 – you have utterly failed to answer my question. It can’t have been that difficult to grasp the issue – even ET understood it. Yes, waiting time is important, but waiting time for what? And is the situation examined by Durrett and Schmidt relevant to positive selection in whale evolution?