Topic: Evolutionary Computation, Stuff that drives AEs nuts
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Feb. 21 2017,22:37   

Dieb,

I did the empirical approximation thing some time ago with "weasel" to plot a measure of average number of candidates across a swath of parameter space for population size and mutation rate:

http://www.antievolution.org/cgi-bin....y142675

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Feb. 21 2017,22:45   

And a bit of forensics on the unknown parameters Richard Dawkins used for his 1986 book and his 1987 video featuring 'weasel':

http://www.antievolution.org/cgi-bin....y142694


    
DiEb



Posts: 310
Joined: May 2008

(Permalink) Posted: Feb. 28 2017,02:32   

Quote (Wesley R. Elsberry @ Feb. 22 2017,04:45)
And a bit of forensics on the unknown parameters Richard Dawkins used for his 1986 book and his 1987 video featuring 'weasel':

http://www.antievolution.org/cgi-bin....y142694

Thanks for the links - it was fun to revisit your posts! Wow, time flies....

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: May 02 2017,20:10   

Sometime back, I had another look at a "weasel" post on my blog, and checked out an antievolutionist's blog where I had left a comment. I found he had responded about two months after I left my message. It took years for me to see it.

But it also goaded me into doing some more to describe "weasel" math right on up to incorporating population-level probabilities. I'll be putting this up as a blog post soon, but I'll put some of the goodies here first.

So...

If you generate a population of organisms randomly, what are the odds that at least one organism will have at least one match to the target?

C = # of correct bases
L = # of bases in organism genome
K = # of separate characters in the pool
N = population size
u = mutation rate (may be expressed as a fraction with L)

We need p(org_random(C>=1)), the probability that an organism with a randomly assigned genome will have one or more matches to the target. We can get that by taking the probability that a randomly assigned base is not a match, raising it to the power of the number of bases, and taking the complement.

p(org_random(C>=1)) = (1.0 - (1.0 - 1/K)**L)

For "weasel", K = 27 and L = 28 and p(org_random(C>=1)) = 0.652

Getting the population probability is straightforward.

p(pop_random(C>=1)) = 1.0 - ((1.0 - 1/K)**L)**N

So how does this go, over, say, populations of size from 1 to 10? Like this:

p(pop_random(C>=1)) N= 1 p = 0.652
p(pop_random(C>=1)) N= 2 p = 0.879
p(pop_random(C>=1)) N= 3 p = 0.958
p(pop_random(C>=1)) N= 4 p = 0.985
p(pop_random(C>=1)) N= 5 p = 0.995
p(pop_random(C>=1)) N= 6 p = 0.998
p(pop_random(C>=1)) N= 7 p = 0.999
p(pop_random(C>=1)) N= 8 p = 0.9998
p(pop_random(C>=1)) N= 9 p = 0.9999
p(pop_random(C>=1)) N= 10 p = 0.99997
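Those numbers can be reproduced with a few lines of Python (a sketch following the definitions above; K and L default to the "weasel" values):

```python
def p_org_random(K=27, L=28):
    """Probability that a randomly generated organism has at least one
    base matching the target: complement of all L bases missing."""
    return 1.0 - (1.0 - 1.0 / K) ** L

def p_pop_random(N, K=27, L=28):
    """Probability that at least one organism in a random population of
    size N has at least one match to the target."""
    return 1.0 - ((1.0 - 1.0 / K) ** L) ** N

if __name__ == "__main__":
    for N in range(1, 11):
        print(f"p(pop_random(C>=1)) N={N:2d} p = {p_pop_random(N):.5f}")
```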

And there is a graph.



Some more graphs...

This one is the graph of the probability that at least one daughter organism will have more correct bases than its parent given population size, mutation rate (at u=1/L), and number of correct bases.



This shows how increasing the population size makes a huge difference in the probability of progress in each generation.

And there is the ever-popular notion of "latching". I should note that what cumulative selection is about is showing that you can have retention of beneficial changes without having a system that privileges any particular portion of the genome. It is that claim of specific privilege that makes assertions of latching so wildly misleading.

I have two graphs for this. Both are with u=1/L, that is, an expectation of mutation of one base per replication. There are points plotted in red for my direct calculation of probability, and blue for the Monte Carlo (MC) estimate of the probability. For runs with smaller population sizes, there are ten thousand MC trials per point, and for the runs including larger population sizes, there are a thousand MC trials per point.

These probability graphs show the probability that one would see a stepback in fitness for a particular population size, mutation rate, and number of correct bases in the parent. These demonstrate that for a reasonable mutation rate, even tiny population sizes can reduce the expectation of observing a stepback in fitness to very slim probabilities. As population sizes reach anything near a reasonable size, those probabilities fall very close to zero.
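A minimal Monte Carlo sketch of the stepback probability, under the assumption (one possible reading of the setup above, not a statement of the actual implementation) that a mutated base is redrawn uniformly from the K characters, so a mutation leaves a base correct with probability 1/K:

```python
import random

def stepback_prob(N, C, L=28, K=27, u=None, trials=10000, seed=42):
    """Estimate the probability that the best of N mutated offspring
    has fewer correct bases than a parent with C correct bases."""
    rng = random.Random(seed)
    if u is None:
        u = 1.0 / L  # expectation of one mutation per replication
    stepbacks = 0
    for _ in range(trials):
        best = -1
        for _ in range(N):
            c = C
            for pos in range(L):
                if rng.random() < u:
                    # mutated base is redrawn uniformly from K characters
                    if pos < C:
                        # a correct base stays correct with prob 1/K
                        if rng.random() >= 1.0 / K:
                            c -= 1
                    else:
                        # an incorrect base becomes correct with prob 1/K
                        if rng.random() < 1.0 / K:
                            c += 1
            best = max(best, c)
        if best < C:
            stepbacks += 1
    return stepbacks / trials
```

With u = 1/L, even modest population sizes drive the estimate toward zero as N grows, matching the pattern in the graphs.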






    
Dr.GH



Posts: 2331
Joined: May 2002

(Permalink) Posted: May 04 2017,00:09   

I should never forget population size as a massive factor. But I seem to do that.

--------------
"Science is the horse that pulls the cart of philosophy."

L. Susskind, 2004 "SMOLIN VS. SUSSKIND: THE ANTHROPIC PRINCIPLE"

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Feb. 21 2018,20:37   

Disclosure: I did my postdoc working with Avida, I worked on the Avida-ED project, and my spouse is the architect and programmer on the current web-based Avida-ED.

Science Daily had an article discussing a research paper on the educational effectiveness of using Avida-ED, the award-winning software putting a user-friendly interface on the Avida research platform, to teach students basic concepts in evolutionary biology. Other recent notice came in a page at the IEEE 360Electronics site.

From the Results section of the research paper:

         
Quote

Student Acceptance of Evolution

Average student acceptance scores across all ten cases ranged from 73.95 to 90.04 on the pre-test and 76.28 to 91.06 on the post-test, or moderate to very high acceptance for both pre- and post-tests, using Rutledge’s (1996) guide to interpretation (Table 8). Average acceptance score increased significantly from pre- to post-test in four of the ten cases (Fig. 5). These four cases were all lower-division courses that also had statistically significant gains in average content score. Students in two of the three upper-division evolution courses, B_300Evo and F_400Evo, had very high acceptance on both the pre- and post-tests, with no significant change. These were the same two upper-division courses in which the highest pre-test content scores were observed. Students in the remaining upper-division course, C_400Evo, also did not show a significant change in acceptance from pre- to post-test, with the lowest average acceptance score on the post-test (76.28). Thus, case C_400Evo showed the lowest average scores for both content and acceptance on the post-test, despite being a senior-level evolution course (discussed below).

Understanding and Acceptance of Evolution

Most of the students in lower-division courses had significant increases in both average content and average acceptance scores, suggesting a relationship between the two. Again, we accounted for differences in levels of student prior knowledge by using normalized gains (g-avg; Hake, 2002), calculated for each student’s pre- and post-assessment scores, which were then averaged for each case. The Pearson correlation confirmed a significant, positive association between the change in average normalized content score and in average normalized acceptance score across the ten cases (r = 0.60, p < 0.05; Fig. 6).


Of course, whenever evolutionary computation gets favorable notice, you can count on the Discovery Institute to say "Pffff-ff-ff-fft" to that.

         
Quote

It’s time for a bit of honesty in evolution education! Avida shows that evolutionary processes require intelligent design to hit predetermined targets. That’s the candid takeaway from a lesson about this software. Since we don’t recommend trying to bring ID into public school classrooms, there are undoubtedly more effective uses of class time than playing with Avida-ED.


Well, predetermined targets are one thing, and actually "cheating", as Dembski has routinely called it, is quite another. There is a finite set of logic operations that can operate on one or two inputs. Avida has all of them implemented such that they could be specified and recognized if a program accomplished any of them. The Avida results that Dembski and others concern themselves with are a small sample, nine tasks, of that larger number. The 2003 Lenski et al. paper is based on the "logic-nine" definition of an environment.

Other environments are possible. By other, I mean something like 8.511301e10 of them or more when considered with nine selected tasks per environment. Choose other numbers of tasks and you'll get different numbers, but mostly larger. Avida doesn't care which ones you specify, which makes it difficult to credit that the information for "intelligently designed" results for each of them is somehow crammed into the Avida executable, which the last time I checked weighed in at about 2e6 bytes.
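That count is consistent with drawing nine tasks from a pool of 72 one- and two-input logic operations; the 72 here is my inference from the quoted figure, not a number stated above:

```python
import math

# number of distinct nine-task environments, choosing from a pool of
# 72 one- and two-input logic tasks (pool size inferred, not given)
environments = math.comb(72, 9)
print(environments)  # 85113005120, i.e. about 8.5113e10
```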

During my postdoc, I used Avida, but here is the number of the existing predefined logic tasks I used in my research: 0. I added three instructions to the Avida instruction set: one to move the Avidian from its cell to an adjacent cell, one to randomly change which way it faced, and one to load the difference in a resource between the current cell and the faced cell into a register. A task placeholder, "don't-care", got no merit bonus of its own; instead, Avidians received merit modulated by the concentration of resource in the cell the Avidian moved into, and only in the update it moved into it.

What I was looking for was the evolution of movement strategies, of which there were this many defined in Avida: 0. What came out was a plethora of programs, hardly any alike, that moved Avidians around the grid in about eight general classes of movement. One of those classes corresponded to implementations of gradient ascent, the optimal movement strategy for the single-peaked resource environment I evolved them in. I certainly coded no "target" (where "target" would be a program or template for a program for moving) into Avida, and the Avida codebase didn't even have anything to move Avidians around before I worked on that.

Other researchers modified Avida to accomplish real-world things unrelated to the "logic-nine" environment Dembski and his comrades are fixated upon. There have been successful projects to code robotic controller programs, wirelessly networked sensor firmware that handles consensus-finding, and evolving feature sets specifying program UML. None of those things had a pre-specified endpoint in mind other than meeting a functional definition.

The DI screed also complains that "information sources" exist in Avida. Well, yeah, the things that are considered analogous to the environment are information sources. Just like environments are information sources for organisms. But information sources corresponding to what an Avidian genome is supposed to look like in the end? Nope.

The DI encourages people to look at the Evolutionary Informatics "Lab"-(not-affiliated-with-Baylor-despite-trying-really-hard) web dingus, Minivida, to see their version of what they imagine Avida is doing. I had a look. Avida is premised on self-replicating digital organisms. Minivida is not. Replication is something every viable Avidian does. Minivida just does some copying with mutation. The Minivida folks do offhandedly announce this departure from the source:

         
Quote

An attempt has been made to maintain as much compatilibity with Avida as possible, so most Avida programs should run on this simulator with the same results. However, all instructions relating to copying instructions are either ignored or only partially implemented.


Here's the code for reproduction:

Code Sample

// Make the next generation
AvidaSim.prototype.make_babies = function()
{
   // create an array with fitness(x) elements for every x in the current population
   var parents = new Array();
   for( var idx in this.population )
   {
       var parent = this.population[idx];
       for( var pos = 0; pos < parent.score*2 + 1;pos ++)
       {
           parents.push( parent.program );
       }
   }

   // create babies
   // select from above array so probability of being selected is correlated with fitness
   var babies = new Array();
   for( var child = 0; child < this.config.population_size; child++)
   {
       var parent_idx = Math.floor( Math.random() * parents.length );
       babies.push( new ChildGenome( this, parents[parent_idx] ) );
   }

   this.population = babies;
}


There's a call in there, so let's have a look:

Code Sample

function Genome(simulation)
{
   this.program = "";
   for(var i = 0; i < 85;i++)
   {
       this.program += simulation.random_letter();
   }
   this.score = undefined;
}

function ChildGenome(simulation, parent)
{

   var idx = Math.floor( Math.random() * parent.length);
   var letter = simulation.random_letter();

   this.program = parent.substr(0, idx).concat( letter, parent.substr(idx+1) );
   this.score = undefined;
}


Mutation in Minivida is always triggered for exactly one location in a genome. The child genome is the parent genome with one instruction substituted at random.

Minivida forces a fixed genome size (85) and fixed mutation rate (0.9615/85). Why the odd number? Their mutation routine does not check that the mutated instruction actually differs from the one being replaced, so 1/26th of the time it will be the same.

Avida can be set to have fixed genome size or unconstrained genome size. Avida-ED uses a fixed size of 50. Avida and Avida-ED allow the setting of the mutation rate, which is a per-site rate, and mutation happens probabilistically. A standard initial exercise with Avida-ED is to have a class set a particular mutation rate, have all students put an organism in the organism viewer, run, and count the number of mutations in the offspring. The numbers are collected, and they show an approximately Gaussian distribution whose mean comes close to what the set mutation rate predicts. That sort of demonstration is impossible in Minivida, because looking even a little like what happens in biology isn't on their radar.
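The 0.9615 factor falls out directly: a replacement drawn uniformly from a 26-symbol instruction set matches the original 1/26th of the time, so the effective per-replication mutation rate is 25/26 ≈ 0.9615. A sketch (the lowercase alphabet here is just a stand-in for Minivida's 26-symbol instruction set):

```python
import random
import string

def effective_mutation_fraction(trials=200000, seed=1):
    """Fraction of Minivida-style 'mutations' that actually change the
    instruction: overwrite one position with a uniform random symbol,
    with no check that the replacement differs from the original."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase  # stand-in 26-symbol instruction set
    changed = 0
    for _ in range(trials):
        original = rng.choice(alphabet)
        replacement = rng.choice(alphabet)
        if replacement != original:
            changed += 1
    return changed / trials
```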

Minivida only provides the current best genome for display, showing it as a circuit diagram, often with several levels, and I haven't found any explanation of what the diagram represents or how it is supposed to relate to Avida. The code appears to pick out the math and logical operators from the organism program and display transformations and sources going back to inputs. As a Minivida run goes on, the graphics change continuously.

In Avida, small mutation rates mean that one is pretty likely to end up with Avidians that are accurate copies of their parents. So the best organism in an Avida population may well be the parent of an identical best Avidian in a later generation; one can see some stability in the best Avidian. This does not appear to be the case for Minivida, where an accurate copy of a Minivida program happens only 1/26th of the time.

Both Avida-ED and Minivida let one pause the program at any point in a run. In Avida-ED, one can then examine any Avidian in the grid, see its genome, watch its execution, view a possible offspring from its replication, and examine various other statistics about the population, plus charts of several population-level properties. In Minivida, only the best genome and a circuit representation of it can be seen.

The sorts of things one can use Avida-ED to demonstrate in a classroom on first introduction can be seen in the Lab Manual, in curriculum guides, and in various YouTube video tutorials. Assessments of the utility of Avida-ED are presented in peer-reviewed research, such as the example the DI got hurt feelings over. As far as I can tell, the notion that Minivida has something to show people relative to Avida-ED is unfounded. You can go through the Minivida code and note all the "TODO" lick-and-promise points in its incomplete approach to mimicking Avida. You can look for, and fail to find, documentation that would show any useful educational purpose for Minivida.

Now, I'll give the antievolutionists pushing Minivida and misguided critiques of Avida and Avida-ED one prop, which is that they haven't minified their Javascript or otherwise obfuscated the source to make it harder to see just how dreadful it is. (And I hope that observation doesn't lead them to do just that.)

But the DI did give Diane and I something else to have in common: we both now have the public disapproval of the DI.


    
Henry J



Posts: 5778
Joined: Mar. 2005

(Permalink) Posted: Feb. 21 2018,21:06   

Re "But the DI did give Diane and I something else to have in common: we both now have the public disapproval of the DI."

That's what you get for intelligently designing something! :p

  
k.e..



Posts: 5430
Joined: May 2007

(Permalink) Posted: Feb. 22 2018,19:19   

Quote (Henry J @ Feb. 22 2018,05:06)
Re "But the DI did give Diane and I something else to have in common: we both now have the public disapproval of the DI."

That's what you get for intelligently designing something! :p

And it's a very worthy credit to Wes and his spouse. Proof that the DI is "Anti-evolution".

--------------
"I get a strong breeze from my monitor every time k.e. puts on his clown DaveTard suit" dogdidit
"ID is deader than Lenny Flanks granmaws dildo batteries" Erasmus
"I'm busy studying scientist level science papers" Galloping Gary Gaulin

  
fnxtr



Posts: 3502
Joined: June 2006

(Permalink) Posted: Feb. 22 2018,21:14   

Ya but sooper-genius-confined-to-his-pigpen disagrees and he knows more than anybody!

--------------
"[A] book said there were 5 trillion witnesses. Who am I supposed to believe, 5 trillion witnesses or you? That shit's, like, ironclad. " -- stevestory

"Wow, you must be retarded. I said that CO2 does not trap heat. If it did then it would not cool down at night."  Joe G

  
DiEb



Posts: 310
Joined: May 2008

(Permalink) Posted: Feb. 23 2018,17:20   

Quote (Wesley R. Elsberry @ Feb. 22 2018,02:37)


The sorts of things one can use Avida-ED to demonstrate in a classroom on first introduction can be seen in the Lab Manual, in curriculum guides, and in various YouTube video tutorials. Assessments of the utility of Avida-ED are presented in peer-reviewed research, such as the example the DI got hurt feelings over. As far as I can tell, the notion that Minivida has something to show people relative to Avida-ED is unfounded. You can go through the Minivida code and note all the "TODO" lick-and-promise points in its incomplete approach to mimicking Avida. You can look for, and fail to find, documentation that would show any useful educational purpose for Minivida.


Interestingly, the number of  TODOs has not been reduced over the last six years: here is the identical version of the script from March 2012.

At that time, the "Evolutionary Informatics Lab" listed a programmer as one of its people, but he isn't listed nowadays.

Dembski, Marks, and Ewert have shown that they are able to botch even more simple computer tasks: they obviously got it wrong when they tried to implement a simulation for the "Cracker Barrel Puzzle".

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 03 2020,22:51   

Quote (Wesley R. Elsberry @ Feb. 15 2017,12:06)
Quote (DiEb @ Feb. 14 2017,12:16)
 
Quote (WebHopper @ Feb. 14 2017,12:45)
   
Quote (DiEb @ Feb. 13 2017,05:40)
Frankly, I'm not sure what you mean by "trial"...

The number of trials is the number of tries. When you roll dice, for instance, a trial is one throw of the dice. Or for the Weasel algorithm, it's the number of generations.

I calculated the probability distribution for the Weasel algorithm with a mutation rate of 0.05 and a population size of 100 according to Utiger's paper:

P(v)=|H.F^(v-2).A|

where v is the number of trials (or generations), H and F are matrices, A is a vector and |.| is the 1-norm. This yields



As you can see, the numerical blue points and the analytical red curve perfectly fit. The mean calculated analytically is 79.19 and the standard deviation is 24.64. So the number of queries is 100*79.19 = 7919 according to your indications. On your graphic above however, the intersection point of the vertical line at 5e-02=0.05 with the green curve passing through the point 100... is about 2e+05=2*10^5. This is the number of queries if I understand you well. So there is disagreement with both results...

Great - I will try to find out where my error laid...

Dieb, I don't think it is your error. The plot WebHopper has is for population size 100, and should have been run for population size 9 to speak to your numbers.

It's come to my attention that "WebHopper" was likely simply a sockpuppet of Utiger himself.

The self-praise is something I now see he has been carrying out in a number of venues.


    
DiEb



Posts: 310
Joined: May 2008

(Permalink) Posted: Jan. 05 2020,16:26   

Quote (Wesley R. Elsberry @ Jan. 04 2020,04:51)

It's come to my attention that "WebHopper" was likely simply a sockpuppet of Utiger himself.

The self-praise is something I now see he has been carrying out in a number of venues.

WTF? So, when I linked to his own article, he wrote  
Quote
Thanks for the link, I will look into this. Seems to be complicated however...


Has anybody ever encountered an honest proponent of ID?

PS: Wesley, I enjoyed re-reading your thoughts! Should be done at least every two years....

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 06 2020,10:26   

Quote (DiEb @ Jan. 05 2020,16:26)
Quote (Wesley R. Elsberry @ Jan. 04 2020,04:51)

It's come to my attention that "WebHopper" was likely simply a sockpuppet of Utiger himself.

The self-praise is something I now see he has been carrying out in a number of venues.

WTF? So, when I linked to his own article, he wrote    
Quote
Thanks for the link, I will look into this. Seems to be complicated however...


Has anybody ever encountered an honest proponent of ID?

PS: Wesley, I enjoyed re-reading your thoughts! Should be done at least every two years....

That graph showing the distribution of generations to completion for N=100? That was hosted at historycycles.tripod.com, which now redirects to historycycles.org, which is Utiger's page for his book on 'salvation history'. That's just one clue.


    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 06 2020,11:07   

Quote (DiEb @ Feb. 23 2018,17:20)
 
Quote (Wesley R. Elsberry @ Feb. 22 2018,02:37)


The sorts of things one can use Avida-ED to demonstrate in a classroom on first introduction can be seen in the Lab Manual, in curriculum guides, and in various YouTube video tutorials. Assessments of the utility of Avida-ED are presented in peer-reviewed research, such as the example the DI got hurt feelings over. As far as I can tell, the notion that Minivida has something to show people relative to Avida-ED is unfounded. You can go through the Minivida code and note all the "TODO" lick-and-promise points in its incomplete approach to mimicking Avida. You can look for, and fail to find, documentation that would show any useful educational purpose for Minivida.


Interestingly, the number of  TODOs has not been reduced over the last six years: here is the identical version of the script from March 2012.

At that time, the "Evolutionary Informatics Lab" listed a programmer as one of its people, but he isn't listed nowadays.

Dembski, Marks, and Ewert have shown that they are able to botch even more simple computer tasks: they obviously got it wrong when they tried to implement a simulation for the "Cracker Barrel Puzzle".

One would think QA would be higher on their priority list after the "unacknowledged costs" fiasco.


    
DiEb



Posts: 310
Joined: May 2008

(Permalink) Posted: Jan. 06 2020,16:36   

Quote (Wesley R. Elsberry @ Jan. 06 2020,17:07)
One would think QA would be higher on their priority list after the "unacknowledged costs fiasco.

Another example of ID's approach to science:

Our math is strong because our cause is just!

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 07 2020,22:36   

Quote (DiEb @ Jan. 05 2020,16:26)
Quote (Wesley R. Elsberry @ Jan. 04 2020,04:51)

It's come to my attention that "WebHopper" was likely simply a sockpuppet of Utiger himself.

The self-praise is something I now see he has been carrying out in a number of venues.

WTF? So, when I linked to his own article, he wrote    
Quote
Thanks for the link, I will look into this. Seems to be complicated however...


Has anybody ever encountered an honest proponent of ID?

PS: Wesley, I enjoyed re-reading your thoughts! Should be done at least every two years....

I skimmed over my essay for the 1997 (DI CRSC) conference, "Naturalism, Theism, and the Scientific Enterprise" just now, and it reminds me of various things that I think I need to emphasize more in argumentation. The "axiom of comprehensibility" that I introduced there is, as I'm looking at it, a critical component of saying *why* the IDC effort to redefine science itself is so dangerous. (It's because, inherent in their new conception of "science", there is no expectation that all phenomena are capable of being comprehended by humans.)

But besides that, I liked seeing a replay of kicking Wilder-Smith's notions to the curb.


    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 08 2023,15:29   

So far as I know, this isn't really evolutionary computation, but it is computation-related.

A friend of mine is pretty much the sole technical talent at a start-up. Last month, we started chatting about his use of ChatGPT and other generative AI solutions. He is doing a lot of firmware coding in Rust now, and he has been telling me having the generative AI is like having a bunch of interns or volunteers on hand so far as routine coding tasks go. So much so that he has sprung for an RTX 3090 GPU in order to self-host models and get past arbitrary restrictions on usage.

I myself only got started using ChatGPT, via OpenAI's API, this weekend. I've asked for, and gotten, several Python code examples, including what purports to be an inverse CDF for the logit function (I still need to test it), and some Python-to-database functions (which I have tested). I've requested some text concerning topics with references, which it does OK at. And that got me thinking about long-planned projects and how this tech could assist in those.

One of the tasks I've been thinking about a long time is a refresh of Marty Leipzig's keyword-indexed bibliography hosted on the TalkOrigins Archive. That was put up in the mid-1990s and I think may have had one update from Marty over twenty years ago. Obviously, time marches on. While I'm at it, the 'Biographica' material at the TOA could use a major update, too.

So I wrote a program that takes a topic name from the bibliography, builds a prompt, and calls the OpenAI API to get the completion. There are currently two prompts built per topic: one to write a summary paragraph, and another to find references for the topic. There's some waste/redundancy because the API apparently limits the length of the completion sent back, and I have to send back context and another prompt to resume from where it truncated. There are other things I had to build in, like delays to avoid 'rate-limit' error messages. The topic summaries and bibliography entries get saved to a SQLite database.
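The shape of that pipeline can be sketched with the API call stubbed out as a pluggable function. Every name below is hypothetical, and the real program's prompts, schema, and rate-limit handling are more involved than shown:

```python
import sqlite3
import time

SCHEMA = """CREATE TABLE IF NOT EXISTS topics (
    name TEXT PRIMARY KEY, summary TEXT, refs TEXT)"""

def process_topic(db, topic, complete, delay=1.0):
    """Build the two prompts for a topic, fetch completions, and save
    them. `complete` is the completion call (in real use, the OpenAI
    API); `delay` spaces out requests to dodge rate limits."""
    summary = complete(f"Write a summary paragraph on the topic: {topic}")
    time.sleep(delay)
    refs = complete(f"List key references for the topic: {topic}")
    time.sleep(delay)
    db.execute("INSERT OR REPLACE INTO topics VALUES (?, ?, ?)",
               (topic, summary, refs))
    db.commit()

# usage with a stand-in completion function
if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    fake = lambda prompt: f"(completion for: {prompt})"
    process_topic(db, "Arthropods", fake, delay=0.0)
    print(db.execute("SELECT name FROM topics").fetchall())
```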

I have run this to get a sample of topics covered (35 of them), and then checked my OpenAI account, where I've used $13 from an $18 credit. Taking off a buck for the incidental exploration, my estimate of cost to complete the exercise for the TOA materials is:

Topic count: 1049, likely cost to complete $347.66
Biography count: 318, likely cost to complete $109.03

The bibliography topic estimate is low, because some more needs to be done to fill out the number of bibliography entries, and that will take both exploration and more prompts with context to get the completions I am after.

If I go forward on this, I'm seriously considering following my friend's lead on getting a good graphics card capable of self-hosting one or more generative AI models. The cost I'd sink into interactive use is already about a third of my estimate of what it would take to augment my current best desktop, and about a fifth of what I'd estimate a purpose-built new system would take.


    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 08 2023,19:26   

This is an example of a ChatGPT-generated summary paragraph for the topic of 'Arthropods'.

Quote
Arthropods are an incredibly diverse group of animals that have been around since the Cambrian explosion, making them one of the most successful groups of animals in evolutionary history. They have adapted to almost any type of environment on Earth, ranging from deserts to oceans, and are found in nearly every habitat imaginable. Arthropods have evolved a vast range of morphological and behavioral adaptations, enabling them to thrive in almost any environment. They also display immense genetic diversity, with many species displaying impressive levels of biochemical sophistication.  in a different way. Arthropods have had a profound impact on evolution, forming vital links in food chains, cycling nutrients, and helping to keep ecosystems healthy and functioning.


If I were fine-tuning that, I'd add diagnostic criteria for the group, because if you didn't already know what arthropods looked like, there's not a clue in that paragraph to help you.


    
Dr.GH



Posts: 2331
Joined: May 2002

(Permalink) Posted: Jan. 08 2023,20:17   

I have no grasp of what your comment means.

I feel old.

:O

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 08 2023,21:19   

Quote (Dr.GH @ Jan. 08 2023,20:17)
I have no grasp of what your comment means.

I feel old.

:O

I asked ChatGPT to write a paragraph explaining itself, and it came up with this:

Quote
ChatGPT is a type of artificial intelligence technology that enables computers to answer questions and generate text conversations in natural language. It uses a combination of machine learning, natural language processing, and neural networks to power these conversations. For example, when you enter a paragraph, ChatGPT will analyze the content and generate a response based on its understanding of the conversation. In other words, it can act like a virtual chatbot that engages in conversations with human users. So don't worry about feeling old- ChatGPT is here to help us navigate this new technology!



    
Dr.GH



Posts: 2331
Joined: May 2002

(Permalink) Posted: Jan. 09 2023,21:42   

Wow.  :O

   
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 10 2023,07:37   

It gets worse.

I asked ChatGPT to translate a MATLAB function into Python. The MATLAB function I chose was one called 'motion' written by an acoustics expert who apparently thought variable names longer than two characters would be harmful. I removed all the MATLAB comments from it.

ChatGPT passed back a creditable start on a translation, mapping things like "cut()" to "np.cut()" (the 'np' prefix referring to NumPy, the usual numerical and linear algebra package in Python). But it missed all the two-character mathematical operators in MATLAB, like "./" for "element-wise right divide", and failed to pick up the apostrophe as "ctranspose". It passed those on unchanged in the output, though, so its translation was not destructive. So far, so good; a reasonably limited result given the generic nature of ChatGPT's training.

Then I asked ChatGPT to write an explanation of the function as a Python docstring. It made good guesses on expanding the single-character parameter names "t" and "x" as "(time)" and "state vector", and stated that the function calculated the derivatives of motion of a 2D mass-spring system, which is exactly what it does. There was nothing in the input saying "mass" or "spring", and nothing but code to derive the dimensionality from. This bit is not what I expected. ChatGPT's generalized understanding seems to cover technical topics to much greater depth than I expected. There are some implications that I think are concerning. There appears to be potential here for ChatGPT, or the broader class of generative AI models, to be an extremely disruptive technology.
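For readers who don't use MATLAB, the idioms mentioned above translate to NumPy roughly as follows. This is a generic sketch with made-up array values for illustration, not the 'motion' function itself:

```python
import numpy as np

# MATLAB: y = a ./ b   -- "./" is element-wise right divide
a = np.array([2.0, 4.0, 9.0])
b = np.array([1.0, 2.0, 3.0])
y = a / b              # NumPy's '/' is already element-wise

# MATLAB: M'           -- the apostrophe is ctranspose (conjugate transpose)
M = np.array([[1 + 2j, 3], [4, 5 - 1j]])
Mt = M.conj().T        # for real-valued arrays, plain M.T suffices

# MATLAB: a .* b and a .^ 2  -- element-wise multiply and power
p = a * b
s = a ** 2
```

The point of the two-character operators is exactly the distinction a naive translator misses: MATLAB's bare `*` and `/` are matrix operations, while the dotted forms are element-wise, which is NumPy's default.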

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
KevinB



Posts: 525
Joined: April 2013

(Permalink) Posted: Jan. 12 2023,16:54   

But would ChatGPT make a better job of writing a Dawkins' Weasel program than Eric Holloway?

  
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 12 2023,22:55   

Quote (KevinB @ Jan. 12 2023,16:54)
But would ChatGPT make a better job of writing a Dawkins' Weasel program than Eric Holloway?

Interesting. I asked ChatGPT to write one.

That took longer to do than I thought it would.

One thing ChatGPT has going for it is that it isn't as obstinate as certain antievolutionary-axe-to-grind coders are.

Quote

Wesley Elsberry
Explain Richard Dawkins' 'weasel' program.

The "weasel program" is a thought experiment proposed by evolutionary biologist and author Richard Dawkins in his book "The Blind Watchmaker." The experiment is meant to demonstrate the process of natural selection and how it can lead to the evolution of complex systems. The program starts with a simple string of random letters, which represents a randomly generated "creature." The program then repeatedly makes small random changes to the letters in the string, simulating the process of genetic mutation. The program then selects the strings that are most similar to a target string, such as the word "METHINKSITISLIKEAWESAEL," which represents the phrase "METHINKS IT IS LIKE A WEASEL." Over many generations, the program will gradually converge on the target string, demonstrating how natural selection can lead to the evolution of complex systems.
Wesley Elsberry
Write a well-commented version of Dawkins' 'weasel' program in Python.

import random

# The target string represents the phrase "METHINKS IT IS LIKE A WEASEL"
target = "METHINKS IT IS LIKE A WEASEL"

# The initial string is randomly generated
initial = "".join([random.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ ") for _ in range(len(target))])
print("Initial String:", initial)

# The mutation rate controls the probability of a character changing in the string
mutation_rate = 0.05

def mutate(string):
   """
   This function simulates the process of genetic mutation by randomly
   changing characters in the string. The mutation rate controls the
   probability of a character changing.
   """
   return "".join([random.choices(["ABCDEFGHIJKLMNOPQRSTUVWXYZ "], k=1, weights=[1-mutation_rate, mutation_rate])[0] if random.random() < mutation_rate else c for c in string])

def get_fitness(string):
   """
   This function calculates the fitness of the string by comparing it
   to the target string. The fitness is the number of characters that
   match the target string.
   """
   return sum(1 for c1, c2 in zip(string, target) if c1 == c2)

# The number of generations to run the simulation
generations = 1000

for generation in range(generations):
   initial = mutate(initial)
   fitness = get_fitness(initial)
   if fitness == len(target):
       print("Target String Found!")
       break
   print(f"Generation: {generation} String: {initial} Fitness: {fitness}")

The program starts with a simple random string as the initial string, in this case, we are using the letters A-Z and space to randomly generate the initial string. The program then repeatedly makes small random changes to the letters in the string, simulating the process of genetic mutation, this is done by the mutate function. The program then selects the strings that are most similar to the target string, in this case, we are comparing the string with the target string, the fitness of the string is defined as the number of characters that match the target string, this is done by the get_fitness function. Over many generations, the program will gradually converge on the target string, demonstrating how natural selection can lead to the evolution of complex systems. At the end of the program, the final string will be printed along with the generation number, the string and its fitness.

Wesley Elsberry

Correct the error from this traceback. Initial String: EOPLLFLRQKJOPHKYSXTEVVIYAKZL
0 EOPLLFLRQKJOPHKYSXTEVVIYAKZL 2
Traceback (most recent call last):
 File "gptweasel.py", line 34, in <module>
   initial = mutate(initial)
 File "gptweasel.py", line 19, in mutate
   return "".join([random.choices(["ABCDEFGHIJKLMNOPQRSTUVWXYZ "], k=1, weights=[1-mutation_rate, mutation_rate])[0] if random.random() < mutation_rate else c for c in string])
 File "gptweasel.py", line 19, in <listcomp>
   return "".join([random.choices(["ABCDEFGHIJKLMNOPQRSTUVWXYZ "], k=1, weights=[1-mutation_rate, mutation_rate])[0] if random.random() < mutation_rate else c for c in string])
 File "/home/netuser/programs/anaconda2/envs/py38-analysis/lib/python3.8/random.py", line 404, in choices
   raise ValueError('The number of weights does not match the population')
ValueError: The number of weights does not match the population

It looks like the error is coming from the line return "".join([random.choices(["ABCDEFGHIJKLMNOPQRSTUVWXYZ "], k=1, weights=[1-mutation_rate, mutation_rate])[0] if random.random() < mutation_rate else c for c in string]) in the mutate function.

The issue here is that the random.choices function requires the population to be the same length as the weights. In this case, the population is the list of characters ["ABCDEFGHIJKLMNOPQRSTUVWXYZ "] which has 27 characters, but the weights only has 2 (the probability of a character changing and the probability of a character staying the same).

One way to fix this would be to change the population to be all the characters in the target string and then set the weight of the characters that are already in the string to 1-mutation_rate and the weight of the characters that are not in the string to mutation_rate/ (len(target) - len(string)).

def mutate(string):
   """
   This function simulates the process of genetic mutation by randomly
   changing characters in the string. The mutation rate controls the
   probability of a character changing.
   """
   # Create a list of all the characters in the target string
   population = list(target)
   # Create a list of the weights for each character
   weights = []
   for c in population:
       if c in string:
           weights.append(1-mutation_rate)
       else:
           weights.append(mutation_rate/ (len(target) - len(string)))
   # Select a random character from the population using the weights
   new_char = random.choices(population, k=1, weights=weights)[0]
   # Select a random position in the string to change
   pos = random.randint(0, len(string)-1)
   # Replace the character at the selected position with the new character
   new_string = string[:pos] + new_char + string[pos+1:]
   return new_string

With this modification, the population and weights list will have the same length and the program will work as expected.

Wesley Elsberry
Why would you treat any character differently in the mutate function?

In the original version of the mutate function, the population is all the possible characters (A-Z, space), and all the characters are treated the same, meaning that any character can replace any other character, regardless of whether it's already present in the string or not. But in the corrected version, characters that are already present in the string are treated differently than the characters that are not present.

By treating the characters already present in the string differently, it increases the chances of getting a better fitness, because the characters that are already in the string are more likely to be part of the final solution. This way, the program can converge faster to the target string.

An analogy would be if you are trying to solve a puzzle and you have some pieces already in place, it's more likely that you will complete the puzzle faster if you try to fit the remaining pieces to the ones you already have, rather than trying to fit them randomly.

It's worth noting that this is just one of the ways to optimize the algorithm, there are many other ways to improve the performance of the program, such as adding a selection mechanism, or introducing a recombination operator.


Wesley Elsberry
The point of the 'weasel' program was to illustrate a biological concept, and treating characters differently for mutation is counter to that goal. Change the program so the mutate function is not counter to biological knowledge.

You are correct, the point of the "weasel" program is to demonstrate the process of natural selection and how it can lead to the evolution of complex systems, so it should adhere to the principles of natural selection.

One way to make the mutate() function adhere to the principles of natural selection is to use a mutation rate that is consistent with the expected values observed in real-world populations. This means that the mutation rate should be low, typically on the order of one per million letters per generation.

Another way to make the mutate() function adhere to the principles of natural selection is to randomly select a position in the string and change it to any of the characters in the target string. This is more consistent with the way mutations occur in nature, where any position in a genome can be affected by a mutation and the resulting change can be any of the possible nucleotides or amino acids.

Here's the modified version of the mutate() function that adheres to these principles:

def mutate(string):
   """
   This function simulates the process of genetic mutation by randomly
   changing a character in the string to any of the characters in the target string.
   """
   pos = random.randint(0, len(string)-1)
   new_char = random.choice(target)
   new_string = string[:pos] + new_char + string[pos+1:]
   return new_string

In this version, the characters already present in the string are treated the same as the characters that are not present. The character that is replaced is selected at random from the target string and the position of the string that is to be replaced is selected randomly, this way the mutation is more consistent with the way mutations occur in nature.

It's also worth noting that this version of the program will not necessarily converge as fast as the previous one but it will be more biologically accurate.


Wesley Elsberry
Consider an experiment where one is determining a mutation rate from empirical data. Data show that genetic bases have approximately equal probabilities of point mutations, and there is variation in the number of mutations observed in a long sequence of bases. Thus, a mutation rate is not an exact number drawn from invariant data, but more like the mean taken of a Gaussian distribution of observed numbers of changed bases. Reconsider your reasoning on biological accuracy and produce another version of the mutate function.

You are correct that a mutation rate is not an exact number drawn from invariant data, but more like the mean taken of a Gaussian distribution of observed numbers of changed bases. This means that in order to be more biologically accurate, the program should use a random number of mutations per generation, rather than a fixed number.

One way to achieve this is to use the Poisson distribution to generate a random number of mutations per generation based on the expected mean mutation rate. The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known average rate and independently of the time since the last event.

Here's the modified version of the mutate() function that adheres to this principle:

import math

def mutate(string):
   """
   This function simulates the process of genetic mutation by randomly
   changing a certain number of characters in the string to any of the characters in the target string.
   The number of mutations is determined by the Poisson distribution based on the expected mutation rate.
   """
   # Calculate the expected number of mutations
   expected_mutations = len(string) * mutation_rate
   # Generate a random number of mutations using the Poisson distribution
   num_mutations = math.floor(random.gammavariate(expected_mutations, 1))
   # Make the mutations
   for _ in range(num_mutations):
       pos = random.randint(0, len(string)-1)
       new_char = random.choice(target)
       string = string[:pos] + new_char + string[pos+1:]
   return string

In this version, the program uses a Gaussian distribution of observed numbers of changed bases, this way the mutation is more consistent with the way mutations occur in nature. The program calculates the expected number of mutations, and then it generates a random number of mutations using the Poisson distribution, this way the program will use a random number of mutations per generation, rather than a fixed number. This way the program will be more biologically accurate and will better simulate the real-world population.


Wesley Elsberry
Your program only ever has a single parent and offspring which fails to find the target in the given number of generations. Revise your program to use a population of strings where the number is given by a constant 'population_size', and the first population is comprised of the initial string and population_size-1 mutated copies of it.

You are correct that the previous version of the program only uses a single parent and offspring and it is unlikely to find the target string in the given number of generations. To address this, the program can be modified to use a population of strings where the number is given by a constant population_size, and the first population is comprised of the initial string and population_size-1 mutated copies of it.

Here's the modified version of the program that uses a population of strings:

# The number of strings in the population
population_size = 100

# Create the initial population
population = [initial] + [mutate(initial) for _ in range(population_size-1)]

# The number of generations to run the simulation
generations = 1000

for generation in range(generations):
   # Calculate the fitness of each string in the population
   fitness_values = [get_fitness(string) for string in population]
   # Select the most fit strings for reproduction
   parents = [string for _, string in sorted(zip(fitness_values, population), key=lambda pair: pair[0], reverse=True)][:population_size//2]
   # Create the next generation by mutating the parents
   population = [mutate(parent) for parent in parents]
   # Check if the target string has been found
   if target in population:
       print("Target String Found!")
       break
   print(f"Generation: {generation}")

In this version, the program uses a population of strings where the number is given by a constant population_size, and the first population is created by taking population_size-1 mutated copies of the initial string. The program then runs the simulation for a given number of generations, on each generation, the program calculates the fitness of each string in the population. The program then selects the most fit strings for reproduction and creates the next generation by mutating the parents. The program also checks if the target string has been found, if the target string is found the program stops. This way the program will be more likely to find the target string in the given number of generations.


Wesley Elsberry
'string' is a Python keyword. Refactor your program to use 'orgstring' instead.

You are correct that 'string' is a Python keyword, I apologize for not noticing that before. Here's the modified version of the program that uses 'orgstring' instead of 'string':

# The number of orgstrings in the population
population_size = 100

# Create the initial population
population = [initial] + [mutate(initial) for _ in range(population_size-1)]

# The number of generations to run the simulation
generations = 1000

for generation in range(generations):
   # Calculate the fitness of each orgstring in the population
   fitness_values = [get_fitness(orgstring) for orgstring in population]
   # Select the most fit orgstrings for reproduction
   parents = [orgstring for _, orgstring in sorted(zip(fitness_values, population), key=lambda pair: pair[0], reverse=True)][:population_size//2]
   # Create the next generation by mutating the parents
   population = [mutate(parent) for parent in parents]
   # Check if the target string has been found
   if target in population:
       print("Target String Found!")
       break
   print(f"Generation: {generation}")

In this version of the program, I've replaced all instances of 'string' with 'orgstring' to avoid using a Python keyword. This should not change the program's behavior, but it will make it syntactically correct.


Wesley Elsberry
Write a function to determine the realized mutation rate in a population of orgstrings.

def realized_mutation_rate(population):
   """
   This function calculates the realized mutation rate of a population of orgstrings.
   The realized mutation rate is the proportion of characters that have been changed
   in the orgstrings compared to the original orgstring.
   """
   # Get the original orgstring
   original = population[0]
   # Initialize the count of changed characters
   changes = 0
   # Iterate through the orgstrings in the population
   for orgstring in population:
       # Iterate through the characters in the orgstring
       for c1, c2 in zip(orgstring, original):
           # If the characters are different, increment the changes count
           if c1 != c2:
               changes += 1
   # Calculate the realized mutation rate
   rate = changes / (len(original) * len(population))
   return rate

This function accepts a population of orgstrings and returns the realized mutation rate. The function first gets the original orgstring from the population, then it initializes the count of changed characters. The function iterates through the orgstrings in the population, then it iterates through the characters in the orgstring, if the characters are different, the function increments the changes count. Finally, the function calculates the realized mutation


Wesley Elsberry
Rewrite realized_mutation_rate to accept a parameter 'k' with a default value of 10000, and a parameter 'mutatefxn' with default value of mutate. Within the function, generate an initial string, then apply mutatefxn k times to it, summing the number of changed characters found at each iteration. Derive the per-character observed mutation rate and return that.

def realized_mutation_rate(initial, k=10000, mutatefxn=mutate):
   """
   This function calculates the realized mutation rate of a population of orgstrings.
   The realized mutation rate is the proportion of characters that have been changed
   in the orgstrings compared to the original orgstring.
   """
   # Initialize the count of changed characters
   changes = 0
   for _ in range(k):
       # Generate a new orgstring by applying the mutatefxn to the initial orgstring
       new_orgstring = mutatefxn(initial)
       # Iterate through the characters in the orgstring
       for c1, c2 in zip(new_orgstring, initial):
           # If the characters are different, increment the changes count
           if c1 != c2:
               changes += 1
   # Calculate the realized mutation rate
   rate = changes / (len(initial) * k)
   return rate

In this version, the realized_mutation_rate function accepts an initial orgstring, a parameter k with a default value of 10000, and a parameter 'mutatefxn' with default value of mutate. Within the function, the function generates an initial orgstring and applies mutatefxn k times to it, summing the number of changed characters found at each iteration. The function then derives the per-character observed mutation rate and return that. This way you can use different mutation functions and compare the results. Also you can control the number of iterations to get more precise results.


ChatGPT Jan 9 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.


I still had to make some changes in the code to get a version that converged in something like the usual number of generations. ChatGPT was far too generous in the number of 'parents' contributing to the next generation, for one.
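For comparison, here is a minimal sketch of the classic single-survivor scheme (my own illustration under the transcript's parameter values, not the actual corrected code): each generation, the entire population is bred from the one best string, and keeping that parent in the selection pool makes the climb in fitness monotone.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05     # per-character, matching the transcript's value
POPULATION_SIZE = 100

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Each character independently mutates with probability MUTATION_RATE."""
    return "".join(random.choice(CHARSET) if random.random() < MUTATION_RATE else c
                   for c in s)

def run(max_generations=5000):
    """Return the generation at which the target is found, or None."""
    parent = "".join(random.choice(CHARSET) for _ in TARGET)
    for generation in range(1, max_generations + 1):
        # Breed the whole next generation from the single best string;
        # including the parent in the pool prevents fitness backsliding.
        pool = [mutate(parent) for _ in range(POPULATION_SIZE)] + [parent]
        parent = max(pool, key=fitness)
        if parent == TARGET:
            return generation
    return None
```

With these settings a run typically converges in under a hundred generations; keeping half the population as parents, as ChatGPT's version did, dilutes selection and slows convergence dramatically.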

BTW, there's a link to a free 'research' version of ChatGPT : https://chat.openai.com/chat....hat . You will still need to create an account, and it is monitored, so be careful of what you tell it.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 12 2023,23:01   

I suspect GPT-4 is the AI model behind the free 'research' ChatGPT. There are some differences in the feel of interaction between the new chat and the API I have been using; more flexibility and a greater ability to revise seem apparent, though this is all subjective on my part.

OpenAI has previously said that they would be releasing GPT-4 in the December to February timeframe. I think things will get very interesting shortly.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 13 2023,15:25   

Quote (Wesley R. Elsberry @ Mar. 18 2009,11:47)
   
Quote (Richardthughes @ Mar. 18 2009,11:24)
Can you derive an optimal mutation rate?

I'll have to think about that some. Later.

I asked ChatGPT about this.

   
Quote

   
Quote

Produce an equation that estimates the optimal mutation rate for a given population size.


   
Quote

Estimating the optimal mutation rate for a given population size is a complex task as it depends on various factors such as the number of characters in the orgstring, the number of characters in the symbol pool, and the specific goals of the simulation.

There are different ways to estimate the optimal mutation rate, but one possible approach is to use the concept of a "mutational load" which expresses the trade-off between the beneficial effects of mutations (which increase the chances of finding an orgstring with all correct characters) and the deleterious effects of mutations (which decrease the fitness of the orgstrings in the population).

The mutational load is defined as the product of the deleterious effect of a mutation (1 - (1/size of symbol pool) * mutation rate) and the number of characters in the orgstring.

The optimal mutation rate is the one that minimizes the mutational load.

Given a population size n, the optimal mutation rate can be estimated by minimizing the following equation :
mutational load = n * (1 - (1/size of symbol pool) * mutation rate) * len(orgstring)

This equation is a simplified version of the mutational load and it would be important to consider other factors that can influence the optimal mutation rate such as the specific goals of the simulation, the size of the symbol pool, and the number of characters in the orgstring.

It's important to note that the optimal mutation rate is not a fixed value, but it can change during the simulation depending on the specific goals of the simulation.


 
Quote

Write a fully-parameterized Python function to calculate the optimal mutation rate by minimizing the mutational load equation.


 
Quote

Code Sample

import scipy.optimize as optimize

def optimal_mutation_rate(population_size, orgstring, symbol_pool):
   def mutational_load(mutation_rate):
       return population_size * (1 - (1/len(symbol_pool)) * mutation_rate) * len(orgstring)
   
   result = optimize.minimize_scalar(mutational_load)
   return result.x


This function takes in three parameters: population_size, orgstring and symbol_pool, and it uses the optimize package of scipy library to minimize the mutational load equation using the minimize_scalar function.

The mutational_load function is defined inside the optimal_mutation_rate function and it calculates the mutational load equation using the input parameters.

The minimize_scalar function finds the minimum value of the mutational_load function and the x attribute of the result variable holds the optimal mutation rate.

It's important to note that this function provides an estimation of the optimal mutation rate based on the assumptions of the mutational load equation, which may not hold in practice. Additionally, it's important to take into account other factors that can influence the optimal mutation rate such as the specific goals of the simulation, the size of the symbol pool, and the number of characters in the orgstring.




As a friend of mine said to me, he was both amazed and appalled by ChatGPT.

Generative AI models are going to democratize expertise like bullets democratized soldiers.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Wesley R. Elsberry



Posts: 4983
Joined: May 2002

(Permalink) Posted: Jan. 30 2023,08:32   

Back in the 1990s, I wrote the start of a draft paper on responding to antievolution criticisms of evolutionary computation. One of the objection/response pairs went like this:

 
Quote

The generation of a natural language sentence via means of evolutionary computation is either difficult or impossible.

I think that instead of either being difficult or impossible, the correct classification is that it would be time-consuming to generate such an application. I’ll lay out the approach I would take if I had the time and inclination to do such. First, I would not use fixed-length bit strings, so the underlying computational approach would not quite match the definition of a GA, although most of the same code would likely be useful. Second, the initialization of the evaluation function would involve scanning a large source of text in the language of choice, building a symbol sequence frequency table. (A possible or likely objection here is that this gives information about the language to be generated. However, this procedure gives far less information than is provided to developing humans, who in the absence of examples of language use do not generate grammatically correct sentences, either.) Third, the evaluation function would return a probability value for a bit-string based on the likelihood that the bit-string could be drawn from the distribution represented by the symbol sequence frequency table, with extra points for the final symbol being a period, and the initial symbol being a capital letter. The GA would finish when a bit-string achieved a threshold evaluation value. The likely results will be the production of nonsensical, but often grammatically correct or near-correct sentences. I say this on the basis of experience in coding ‘travesty’ generators and information entropy analysis applications. The use of evolutionary computation in this regard would be no huge stretch.


I ran across a YouTube video about coding a GPT model from scratch. It is a couple of hours long. At the start, though, the material seems pretty familiar. It talks about using as a training data set some existing text, and it starts from a bigram analysis of characters in the text, then extending to higher order n-grams. It really isn't any surprise that this general form of approach is the basis of modern language models; I was basing my conjecture on information theoretic work, and info theory works well.
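The symbol-sequence frequency table described in that old draft is easy to sketch with bigrams. This is a toy illustration with a made-up training string, not the actual 1990s code: count adjacent character pairs in a training text, then score a candidate by the smoothed log-likelihood of its pairs, which is the kind of evaluation function an evolutionary search could maximize.

```python
from collections import Counter
from math import log

def bigram_table(text):
    """Count adjacent character pairs in the training text."""
    return Counter(zip(text, text[1:]))

def score(candidate, table, vocab_size=27):
    """Add-one-smoothed log-likelihood of a candidate's character pairs."""
    denom = sum(table.values()) + vocab_size * vocab_size
    return sum(log((table[pair] + 1) / denom)
               for pair in zip(candidate, candidate[1:]))

# Toy training corpus; in practice this would be a large text sample.
training = "methinks it is like a weasel " * 50
table = bigram_table(training)
```

Scoring an English-like string against same-length gibberish shows the expected separation; a search maximizing this score would tend toward pronounceable, near-grammatical strings, which is exactly the conjectured outcome.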

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
  416 replies since Mar. 17 2009,11:00
