GaryGaulin
Posts: 3722 Joined: Oct. 2012

Quote (olegt @ Dec. 06 2012,19:13)  Hehehe, Gary thinks his code is maybe also a Monte Carlo code, even though he has no idea what a Monte Carlo procedure does. 
First, please excuse the AE = EA typo in my last post. I was in a rush to get it online. At least I made it!
Second, I at least have a Wikipedia-level understanding of how the "method" follows a particular pattern, one that is similar in the physics "algorithm" too. There is a "Generate inputs randomly" step that equates to what happens after a deterministic guess is taken into the motor latch, which in turn changes what happens at the inputs of the RAM:
Quote  Monte Carlo method From Wikipedia, the free encyclopedia
Monte Carlo methods (or Monte Carlo experiments) are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in computer simulations of physical and mathematical systems. These methods are most suited to calculation by a computer and tend to be used when it is infeasible to compute an exact result with a deterministic algorithm.[1] This method is also used to complement theoretical derivations.
Monte Carlo methods are especially useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). They are used to model phenomena with significant uncertainty in inputs, such as the calculation of risk in business. They are widely used in mathematics, for example to evaluate multidimensional definite integrals with complicated boundary conditions. When Monte Carlo simulations have been applied in space exploration and oil exploration, their predictions of failures, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.[2]
The term "Monte Carlo method" was coined in the 1940s by John von Neumann, Stanislaw Ulam and Nicholas Metropolis, while they were working on nuclear weapon projects (Manhattan Project) in the Los Alamos National Laboratory. It was named after the Monte Carlo Casino, a famous casino where Ulam's uncle often gambled away his money.
Introduction
Monte Carlo method applied to approximating the value of π. After placing 30000 random points, the estimate for π is within 0.07% of the actual value. This happens with an approximate probability of 20%. After 30000 points it is within 7%. [needs reference or/and verification].
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs.
Generate inputs randomly from a probability distribution over the domain.
Perform a deterministic computation on the inputs.
Aggregate the results.
For example, consider a circle inscribed in a unit square. Given that the circle and the square have a ratio of areas that is π/4, the value of π can be approximated using a Monte Carlo method:[4]
Draw a square on the ground, then inscribe a circle within it.
Uniformly scatter some objects of uniform size (grains of rice or sand) over the square.
Count the number of objects inside the circle and the total number of objects.
The ratio of the two counts is an estimate of the ratio of the two areas, which is π/4. Multiply the result by 4 to estimate π.
In this procedure the domain of inputs is the square that circumscribes our circle. We generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the circle). Finally, we aggregate the results to obtain our final result, the approximation of π.
If grains are purposefully dropped into only the center of the circle, they are not uniformly distributed, so our approximation is poor. Second, there should be a large number of inputs. The approximation is generally poor if only a few grains are randomly dropped into the whole square. On average, the approximation improves as more grains are dropped. 
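The four-step pattern from the quoted article can be sketched in a few lines of Python. This is just an illustration of the quoted grain-counting example, not anyone's actual model code: a quarter circle in the unit square plays the role of the inscribed circle, and the function name is mine.

```python
import random

def estimate_pi(n_points=30000, seed=42):
    """Estimate pi by the Monte Carlo pattern from the quoted article:
    define a domain (the unit square), generate random inputs, run a
    deterministic test on each, and aggregate the results."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        # Step 2: generate a random input uniformly over the domain.
        x, y = rng.random(), rng.random()
        # Step 3: deterministic computation - is the point in the quarter circle?
        if x * x + y * y <= 1.0:
            inside += 1
    # Step 4: aggregate - the ratio of counts estimates pi/4.
    return 4.0 * inside / n_points

print(estimate_pi())
```

With 30000 points the estimate lands close to 3.14, and as the article says, dropping more grains (raising n_points) generally tightens the approximation.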
Much of it seems even harder to compare, but the purpose, better resolving the behavior of something between the given experimental points, is the same when it is used to train what we could also call quantum mechanics Quanta Bots (which behave as QM predicts).
It's maybe not much of an MC either, but it still seems to have more features in common with that than with an EA. In a sense, because of the way it works, it does not need the Monte Carlo method to figure anything out; the line that blurs out into a probability range would be different lifetimes, with something different made to happen along the way. You would then see how often they end up going one way or another after that. Perhaps one of the Baldwin Effect lines.
 The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.
