Topic: Evolutionary Computation, Stuff that drives AEs nuts
dvunkannon



Posts: 1377
Joined: June 2008

(Permalink) Posted: Feb. 06 2012,16:33   

Quote (midwifetoad @ Feb. 06 2012,15:10)
In my word evolver I simply kill off the highest scoring candidate, "randomly," 25 percent of the time. It seems to prevent getting stuck.

How did this come up in a discussion of Chaitin and evolution? I'd be interested to hear what the arguments were.
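midwifetoad's tweak is simple enough to sketch: a weasel-style hill climber that, a quarter of the time, discards the top-scoring candidate and breeds from the runner-up instead. This is a toy reconstruction in Python — the names and parameter values here are my own illustration, not midwifetoad's actual evolver:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mut_rate=0.05, cull_prob=0.25, seed=1):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while score(parent) < len(TARGET):
        # Breed a population of mutated copies of the parent.
        pop = []
        for _ in range(pop_size):
            child = "".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                            for c in parent)
            pop.append(child)
        pop.sort(key=score, reverse=True)
        # midwifetoad's tweak: 25% of the time, "kill off" the highest
        # scorer and breed from the runner-up, to avoid getting stuck.
        if rng.random() < cull_prob and len(pop) > 1:
            parent = pop[1]
        else:
            parent = pop[0]
        gens += 1
    return gens
```

The culling only ever costs one rank of fitness per generation, so on average the population still climbs; it just can't sit frozen on a single champion.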

--------------
I’m referring to evolution, not changes in allele frequencies. - Cornelius Hunter
I’m not an evolutionist, I’m a change in allele frequentist! - Nakashima

  
midwifetoad



Posts: 4003
Joined: Mar. 2008

(Permalink) Posted: Feb. 07 2012,08:01   

Chaitin is the new darling at UD. Not sure why.

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

  
DiEb



Posts: 312
Joined: May 2008

(Permalink) Posted: Mar. 13 2012,16:19   

I just took another go on Dembski's and Marks's Horizontal No Free Lunch Theorem, as KairosFocus referred to it at UncommonDescent:

On a Wrong Remark in a Paper of Robert J. Marks II and William A Dembski

 
Quote
Abstract: In their 2010 paper The Search for a Search - Measuring the Information Cost of Higher Level Search, the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One of the consequences of this theorem is their remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search. This is quite surprising, as one would expect in the tradition of the No Free Lunch theorem that the performances are equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong - as is their theorem.

The whole essay can be found here.
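The "equally good (or bad)" expectation is easy to check numerically in a toy setting (this is my own illustration of the intuition, not DiEb's argument): when the target is uniform over n cells, every deterministic search order has the same expected cost, (n+1)/2 queries — so no assumed measure can do worse than random search on average.

```python
import itertools
from fractions import Fraction

def expected_queries(order, n):
    """Expected number of queries to hit a uniformly random target in
    {0..n-1} when cells are examined in the given fixed order."""
    # The target is equally likely to be any cell; its 1-based position
    # in `order` is the number of queries spent finding it.
    return Fraction(sum(order.index(t) + 1 for t in range(n)), n)

n = 5
baseline = Fraction(n + 1, 2)   # random search: (n+1)/2 queries on average
# Every deterministic search order performs identically under the
# uniform measure -- none is better or worse than random search.
for order in itertools.permutations(range(n)):
    assert expected_queries(list(order), n) == baseline
```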

   
Henry J



Posts: 5787
Joined: Mar. 2005

(Permalink) Posted: Mar. 13 2012,22:55   

Well of course there's no free lunch. Even a slice of pizza costs something. ;)

Henry

  
DiEb



Posts: 312
Joined: May 2008

(Permalink) Posted: May 24 2012,03:13   

Quote (DiEb @ Mar. 13 2012,22:19)
I just took another go on Dembski's and Marks's Horizontal No Free Lunch Theorem, as KairosFocus referred to it at UncommonDescent:

On a Wrong Remark in a Paper of Robert J. Marks II and William A Dembski

   
Quote
Abstract: In their 2010 paper The Search for a Search - Measuring the Information Cost of Higher Level Search, the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One of the consequences of this theorem is their remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search. This is quite surprising, as one would expect in the tradition of the No Free Lunch theorem that the performances are equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong - as is their theorem.

The whole essay can be found here.

I was just informed by Winston Ewert that there is a new erratum at the paper A Search for a Search which should address (some of) my points. Here is my first reaction. And does the Journal of Advanced Computational Intelligence and Intelligent Informatics know?

   
The whole truth



Posts: 1554
Joined: Jan. 2012

(Permalink) Posted: May 24 2012,04:35   

Quote (DiEb @ May 24 2012,01:13)
Quote (DiEb @ Mar. 13 2012,22:19)
I just took another go on Dembski's and Marks's Horizontal No Free Lunch Theorem, as KairosFocus referred to it at UncommonDescent:

On a Wrong Remark in a Paper of Robert J. Marks II and William A Dembski

     
Quote
Abstract: In their 2010 paper The Search for a Search - Measuring the Information Cost of Higher Level Search, the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One of the consequences of this theorem is their remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search. This is quite surprising, as one would expect in the tradition of the No Free Lunch theorem that the performances are equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong - as is their theorem.

The whole essay can be found here.

I was just informed by Winston Ewert that there is a new erratum at the paper A Search for a Search which should address (some of) my points. Here is my first reaction. And does the Journal of Advanced Computational Intelligence and Intelligent Informatics know?

Regarding ear stoppers, it wouldn't surprise me if one of these days IDiots are found to have evolved a flap inside their ears that automatically and quickly closes at the first sign of any sort of reality trying to get in. Of course if such a flap were found the IDiots would claim that it's the result of intelligent design by their designer/creator, who did it so that they won't be plagued with hearing realistic challenges to their unsupported beliefs and assertions.  :)

--------------
Think not that I am come to send peace on earth: I came not to send peace, but a sword. - Jesus in Matthew 10:34

But those mine enemies, which would not that I should reign over them, bring hither, and slay them before me. -Jesus in Luke 19:27

   
DiEb



Posts: 312
Joined: May 2008

(Permalink) Posted: May 24 2012,08:12   

Quote (The whole truth @ May 24 2012,10:35)
Quote (DiEb @ May 24 2012,01:13)
 
Quote (DiEb @ Mar. 13 2012,22:19)
I just took another go on Dembski's and Marks's Horizontal No Free Lunch Theorem, as KairosFocus referred to it at UncommonDescent:

On a Wrong Remark in a Paper of Robert J. Marks II and William A Dembski

       
Quote
Abstract: In their 2010 paper The Search for a Search - Measuring the Information Cost of Higher Level Search, the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One of the consequences of this theorem is their remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search. This is quite surprising, as one would expect in the tradition of the No Free Lunch theorem that the performances are equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong - as is their theorem.

The whole essay can be found here.

I was just informed by Winston Ewert that there is a new erratum at the paper A Search for a Search which should address (some of) my points. Here is my first reaction. And does the Journal of Advanced Computational Intelligence and Intelligent Informatics know?

Regarding ear stoppers, it wouldn't surprise me if one of these days that IDiots are found to have evolved a flap inside their ears that automatically and quickly closes at the first sign of any sort of reality trying to get in. Of course if such a flap were found the IDiots would claim that it's the result of intelligent design by their designer/creator, who did it so that they won't be plagued with hearing realistic challenges to their unsupported beliefs and assertions.  :)

I exchanged emails on this subject with Bob Marks back in 2010! Even before the paper was published in the first place, I had pointed out this problem - in private and in public. In Sep 2010, Bob Marks informed me that he has a policy not to engage in correspondence with anyone publicly critical of him or his work, as, independent of the validity or invalidity of the details of the exchange, these things are best discussed thoroughly before any public pronouncements. So he willfully chose to ignore every unpleasant critic, at his own peril.

   
The whole truth



Posts: 1554
Joined: Jan. 2012

(Permalink) Posted: May 25 2012,01:10   

Quote (DiEb @ May 24 2012,06:12)
Quote (The whole truth @ May 24 2012,10:35)
 
Quote (DiEb @ May 24 2012,01:13)
   
Quote (DiEb @ Mar. 13 2012,22:19)
I just took another go on Dembski's and Marks's Horizontal No Free Lunch Theorem, as KairosFocus referred to it at UncommonDescent:

On a Wrong Remark in a Paper of Robert J. Marks II and William A Dembski

         
Quote
Abstract: In their 2010 paper The Search for a Search - Measuring the Information Cost of Higher Level Search, the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One of the consequences of this theorem is their remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search. This is quite surprising, as one would expect in the tradition of the No Free Lunch theorem that the performances are equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong - as is their theorem.

The whole essay can be found here.

I was just informed by Winston Ewert that there is a new erratum at the paper A Search for a Search which should address (some of) my points. Here is my first reaction. And does the Journal of Advanced Computational Intelligence and Intelligent Informatics know?

Regarding ear stoppers, it wouldn't surprise me if one of these days that IDiots are found to have evolved a flap inside their ears that automatically and quickly closes at the first sign of any sort of reality trying to get in. Of course if such a flap were found the IDiots would claim that it's the result of intelligent design by their designer/creator, who did it so that they won't be plagued with hearing realistic challenges to their unsupported beliefs and assertions.  :)

I exchanged emails on this subject with Bob Marks back in 2010! Even before the paper was published in the first place, I had pointed out this problem - in private and in public. In Sep 2010, Bob Marks informed me that he has a policy not to engage in correspondence with anyone publicly critical of him or his work, as, independent of the validity or invalidity of the details of the exchange, these things are best discussed thoroughly before any public pronouncements. So he willfully chose to ignore every unpleasant critic, at his own peril.

Ignoring critics, whether in public or in private, is a skill that IDiots have thoroughly mastered.

I'm sure that Marks and the other IDiots never even consider that their assertions are or could be perilous, because to them being wrong just doesn't compute. They want to dictate and preach, not listen, discuss, learn, or be corrected.

From what I've seen Marks seems to be one of the most isolated IDiots (and willingly so).

--------------
Think not that I am come to send peace on earth: I came not to send peace, but a sword. - Jesus in Matthew 10:34

But those mine enemies, which would not that I should reign over them, bring hither, and slay them before me. -Jesus in Luke 19:27

   
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Jan. 16 2014,04:04   

While I was looking up use of PyQtGraph in PySide, I ran across an example program in an unfamiliar language. A little probing led me to the website for Julia, a new (2013 FOSS release) programming language for scientific and technical computing. The language is dynamic, but uses just-in-time (JIT) compilation to get within about 2x of C's running time on benchmark tests. There's an IDE, Julia Studio, that I am finding useful. Emacs and the command line work just fine as well.

As an initial exercise, I ported my "minimal weasel" program from Python to Julia. The result is a 46-line program. This is slightly longer than the Python version because I'm using a line each for the Julia convention of closing a block with "end". Python uses indentation to indicate block closure.

Code Sample

# Minimum Weasel in Julia  -- Wesley R. Elsberry
t = "METHINKS IT IS LIKE A WEASEL"    # Target phrase
b = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"     # Base pool
n = 178                          # Population size
u = (1.0 / length(t))              # Mutation rate
@printf("Popsize=%d, Mutrate=%f, Bases=%s, Target=%s\n", n,u,b,t)
p = ""                        # Parent string
for ii in [1:length(t)]            # Compose random parent string
   p = p * string(b[rand(1:length(b))])
end
@printf("                        Parent=%s\n",p)
done = false                    # Loop control variable
g = 0                          # Generation counter
bmcnt = 0                        # Base match count (max. in pop.)
bc = ""                        # Best candidate variable
while (done == false)
     pop = ASCIIString[]            # Population of strings
     bmcnt = 0                  # Start with no bases matched
     bcindex = 1                  # Point to first candidate in pop. by default
     for ii in [1:n]              # For size of population
         push!(pop,"")            # Add an empty candidate
         mcnt = 0                # Initialize candidate base match count to zero
         for jj in [1:length(t)]     # Compose a new candidate possibly mutated from the parent
             if u >= rand()        # We have a mutated base
                pop[ii] = pop[ii][1:jj-1] * string(b[rand(1:length(b))])
             else                # No mutation, just copy this base
                pop[ii] = pop[ii][1:jj-1] * string(p[jj])
             end
             if pop[ii][jj] == t[jj] # Candidate matches target at this base
                mcnt += 1          # Increment candidate base match count
             end
             if mcnt > bmcnt        # Candidate is better than current best match
                bmcnt = mcnt        # Change best match count
                bcindex = ii        # Store index of best candidate
             end
             if mcnt >= (length(t) - 0)    # Do enough bases match the target?
                done = true        # Yes, so set loop control for exit
             end
         end
     end
     bc = pop[bcindex]            # Set best candidate as candidate at index
     g += 1                    # Increment generation count
     @printf("Gen=%05d, %02d/%d matched, Best=%s, Total=%06d\n", g, bmcnt, length(t), bc, g*n)
     p = bc                    # Parent for next generation is the best candidate from this one
end
println("weasel done.")


There are a few noteworthy differences from Python. First, Julia is a base-1 language. Arrays start with index 1. Second, Julia's dynamic variable creation requires that a type be provided for declaration of empty arrays. Julia can figure out the type itself for an array that is assigned at least one element, but the programmer has to give a type in order to start with no elements at all. The type system in Julia is apparently one big reason why the developers are able to obtain the good benchmark results. Third, while I have not taken advantage of it here, Julia's syntax allows for expressions to be closer to mathematical notation. "1 + 2x" in Julia is legal, where in Python one would have to write "1 + 2*x".

I'm planning on porting a more complex program to Julia to see how it performs compared to PyPy.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
Quack



Posts: 1961
Joined: May 2007

(Permalink) Posted: Jan. 16 2014,07:14   

Looks like something I'd have enjoyed using sometime in my past. Alas.

--------------
Rocks have no biology.
              Robert Byers.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Jan. 22 2014,09:05   

My MacBook Pro suffered a mainboard failure some time ago, and I just bit the bullet to get it fixed. That was the development machine for my Avida work. I thought that too much bit rot might have rendered things uncompilable on Linux anymore, but I'm happy to say my initial pessimism was unfounded, and I now have a running Linux executable.

I should note just what I'm talking about here. Back in 2009, I presented a paper with my colleagues at Michigan State University titled, Cockroaches, drunkards, and climbers: Modeling the evolution of simple movement strategies using digital organisms. I'm working toward taking the next steps in this research. To do that, I needed to get my tools up to date. Where I have the computing power now is in an 8-core desktop running Ubuntu Linux, so I was aiming to get my code compiled there. That's happened just this week. I've been running tests to make sure I'm getting results as I did back in 2009. And the newly-compiled system is, I'm pleased to say, showing the evolution of gradient-ascent effective methods just as the runs in 2009 did.

As a recap, I'm using a version of Avida that I extended to permit movement of organisms in the Avida world grid. Avida's normal mode of operation puts an organism in a specific cell in the world grid, and there it stays for its entire life. I wrote Avida CPU instructions for "tumble", "move", and "sense-diff-facing". The "tumble" instruction simply rotates the organism to a new random facing in its grid cell. A facing is always toward another adjacent grid cell, so for an interior grid cell there are eight legal facings, five legal facings on an edge, and three at each corner grid cell. The "move" instruction puts the organism executing it into the cell that it currently faces. (If there is another organism in that cell, they swap cells.) The "sense-diff-facing" instruction puts the difference in the amount of a specified resource between the current grid cell and the faced grid cell into one of the Avida registers. The run is seeded with the default classic Avida organism. This is an organism whose only functionality is to make a copy of itself. None of the codes associated with movement is included in the initial organism. Mutation is the only way those instructions can enter the genome at the outset.
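The facing rule described above is easy to sketch: the legal facings of a cell are just its in-bounds neighbors, which gives the 8/5/3 counts for interior, edge, and corner cells. A minimal illustration in Python, assuming a simple bounded grid (not Avida's actual implementation):

```python
def legal_facings(x, y, width, height):
    """In-bounds neighbors of (x, y) on a bounded grid: the legal
    facings -- 8 for an interior cell, 5 on an edge, 3 at a corner."""
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue            # a cell never faces itself
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                cells.append((nx, ny))
    return cells
```

A "tumble" is then a uniform random pick from this list, and a "move" swaps the organism into the faced cell.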

The environment is defined with a positively rewarding resource, with a peak in the resource set off-center in the world grid. This was done so that the peak resource would not be on an edge or diagonal of the world grid.

The run also includes a 2% cost to the organism for each move instruction that it executes.

The updates are set to permit about three Avida instructions to be executed per-organism per-update. The runs go on for two million updates. The total population is capped at 200 Avidians, so the world grid has about 2% of its grid cells filled with Avidians at any time.

During the run, each grid cell has a count of the number of visits it receives from Avidians. I output these visit counts every 5000 updates. I then plot a surface map of the difference in visits between each update and the one prior, which shows in aggregate the movement of the population for 5000 updates. It becomes very clear when a gradient-ascent effective method, or "climber", becomes the dominant class of organism in the population. I have a few plots to show the transition from "drunkard" dominant to "climber" dominant, and from "climber" to a more efficient "climber".

The results show the evolution of a useful algorithm from scratch. Part of what I did in the work at MSU was in collaboration with Jacob Walker to use the evolved organisms as robotic controllers, which we did with both a Roomba Create robot and a Lego Mindstorms robot. We used a point light source to create a "resource peak" that the robot displayed phototropic behavior toward with the "climber" organisms loaded.

This isn't about adjusting weights of some existing model. This is about evolution creating algorithms that did not exist before, based on nothing more than having a resource to exploit, the ability to take a step, to change direction randomly, and to sense differences in the local environment. (Very local, just to the extent of where a move instruction would take the organism were it executed, and only so far as to give a relative difference, not an absolute number.) There's no reward system other than "organisms do better if they go where resources are more abundant". There's nothing in the system to prefer inclusion of the new instructions, and there's actually a cost associated with executing the "move" instruction. And yet, time and again, this system can produce effective methods in the provably optimal class of gradient ascent algorithms.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
OgreMkV



Posts: 3668
Joined: Oct. 2009

(Permalink) Posted: Jan. 22 2014,09:25   

Wes,

This is very cool stuff. Are you going to publish this?

I ask, because I'd like to put it on my blog.  If you're going to publish, I'll wait.  But if you don't mind, I can write something up or if you want to write something up, I'd like to post this.

It's directly relevant to a discussion I'm having with another Meyer worshiper who doesn't realize that mutations can cause huge changes in populations.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

   
fnxtr



Posts: 3504
Joined: June 2006

(Permalink) Posted: Jan. 22 2014,10:04   

A little something for Gary, from B. Kliban:



--------------
"[A] book said there were 5 trillion witnesses. Who am I supposed to believe, 5 trillion witnesses or you? That shit's, like, ironclad. " -- stevestory

"Wow, you must be retarded. I said that CO2 does not trap heat. If it did then it would not cool down at night."  Joe G

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Jan. 22 2014,11:34   

Quote (OgreMkV @ Jan. 22 2014,09:25)
Wes,

This is very cool stuff. Are you going to publish this?

I ask, because I'd like to put it on my blog.  If you're going to publish, I'll wait.  But if you don't mind, I can write something up or if you want to write something up, I'd like to post this.

It's directly relevant to a discussion I'm having with another Meyer worshiper who doesn't realize that mutations can cause huge changes in populations.

The linked paper is as published as that will be getting.

     
Quote

Elsberry, W.R.; Grabowski, L.M.; Ofria, C.; Pennock, R.T.  2009.  Cockroaches, drunkards, and climbers: Modeling the evolution of simple movement strategies using digital organisms.  SSCI: IEEE Symposium on Artificial Life, 2009: 92--99.


So anything out of that is fair game for comment.

There's a section on "future work" in the paper. What I'm hoping to do next is to move on to building environments with both a positive and a negative "resource", so that the organisms will need to evolve both appetitive and aversive responses. If that comes together quickly enough, a colleague has pointed out a conference submission deadline at the end of March.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
OgreMkV



Posts: 3668
Joined: Oct. 2009

(Permalink) Posted: Jan. 22 2014,23:10   

I posted something. If I messed anything up, please let me know.  It's been a heck of a week.

I've never heard anyone puking on a conference call... until today.  Geez.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

   
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Jan. 23 2014,03:17   

While I did stress that the genomic content of the initial organism, and thus the Avidian population, could only acquire the new instructions via mutation, once an ancestral organism had one or more of those, they would be passed down to offspring with the usual frequency. And any effects they had on the organism could yield a difference in fitness, driving the usual selective processes. I think saying mutation was the only operative process goes too far. Not including the instructions in any way in the initial organism simply eliminates the possibility that I as experimenter set up a particular outcome by whatever arrangement of movement-relevant instructions might be set in that initial organism.

One question I was asked at SSCI in 2009 was why use Avida and not something like Echo. And while the efficient answer is that when one is at the Devolab, one is usually going to be using Avida, I did survey the available software at the time for applicability to the question I was looking at. The software systems allowing for agent movement all treated movement as a primitive property, often requiring some fixed movement strategy be defined for the agents a priori. I was interested in looking at what evolution could do given just the sort of capabilities underlying movement as seen in organisms like E. coli, but without specifying how those capabilities were used. And that kind of question was not what the other software packages could address.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
BillB



Posts: 388
Joined: Aug. 2009

(Permalink) Posted: Jan. 23 2014,04:32   

Quote (Wesley R. Elsberry @ Jan. 23 2014,09:17)
While I did stress that the genomic content of the initial organism, and thus the Avidian population, could only acquire the new instructions via mutation, once an ancestral organism had one or more of those, they would be passed down to offspring with the usual frequency. And any effects they had on the organism could yield a difference in fitness, driving the usual selective processes. I think saying mutation was the only operative process goes too far. Not including the instructions in any way in the initial organism simply eliminates the possibility that I as experimenter set up a particular outcome by whatever arrangement of movement-relevant instructions might be set in that initial organism.

One question I was asked at SSCI in 2009 was why use Avida and not something like Echo. And while the efficient answer is that when one is at the Devolab, one is usually going to be using Avida, I did survey the available software at the time for applicability to the question I was looking at. The software systems allowing for agent movement all treated movement as a primitive property, often requiring some fixed movement strategy be defined for the agents a priori. I was interested in looking at what evolution could do given just the sort of capabilities underlying movement as seen in organisms like E. coli, but without specifying how those capabilities were used. And that kind of question was not what the other software packages could address.

Excellent stuff, and something I'm really interested in despite having no time to work on it any more ...

I'm not intimately familiar with Avida but a few things jumped to mind whilst reading the description:
     
Quote
A facing is always toward another adjacent grid cell, so for an interior grid cell there are eight legal facings, five legal facings on an edge, and three at each corner grid cell.


I would say that there should be no illegal facings, just an inability to move when facing an edge – this would prevent a bias towards movement back to the centre – A bit like breeding E. coli in a jar: They cannot pass through the glass container but they could repeatedly bump against it until they die. By having illegal facings you are, in one sense, providing them with obstacle avoidance behaviour for free.

     
Quote
The "sense-diff-facing" instruction puts the difference in the amount of a specified resource between the current grid cell and the faced grid cell into one of the Avida registers.


What if this was expanded to be a “sense X,Y diff” instruction where X and Y can be any of the surrounding cells, or your own cell? The values for X and Y would be heritable. (And I don't know what you do about sensing the cell in front of you when facing the edge of the world)

Perhaps if you wanted to add an interesting twist you could turn that into something like "Z=F(X,Y)" where X and Y are as described above but the function F is a heritable operand (Add, Subtract, Multiply, Divide, or Modulo) - you might even include bit shifting as a possible operand? Z=X<<Y or Z=X>>Y

The point would be to provide multiple pathways for this sensory apparatus to work - and for it to fail to work.

Expanding on this a bit more (if it is worth doing) you could allow for more distal sensing - maybe a Z=F((A,B)(X,Y)) instruction where A and B, and X and Y, are relative cell co-ordinates, perhaps capped to a maximum range of +/- 5. If you did this then I would be tempted to add a cost for longer range sensing (You need more energy to grow those longer whiskers!)
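To make the Z=F(X,Y) idea concrete, here is one hypothetical encoding in Python: the operand index and both sensed offsets live in the genome, so mutation can rewire both what is sensed and how the readings are combined. None of these names or structures are Avida's actual API; this is just a sketch of the proposal.

```python
import operator

# Heritable operand table: arithmetic plus bit shifts, with division
# and modulo guarded so a mutant genome can't crash the sense step.
OPERANDS = [operator.add, operator.sub, operator.mul,
            lambda a, b: a // b if b else 0,   # guarded divide
            lambda a, b: a % b if b else 0,    # guarded modulo
            lambda a, b: a << (b % 8),         # shift left
            lambda a, b: a >> (b % 8)]         # shift right

def sense(genome, read_cell):
    """genome = (op_index, (ax, ay), (bx, by)); read_cell maps a
    relative cell offset to an integer resource reading."""
    op_i, off_a, off_b = genome
    f = OPERANDS[op_i % len(OPERANDS)]   # any integer decodes to some F
    return f(read_cell(off_a), read_cell(off_b))
```

The modular decoding means every mutation of the operand gene yields *some* working sense function — multiple pathways for the apparatus to work, and to fail to work, as suggested above.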

     
Quote
The environment is defined with a positively rewarding resource, with a peak in the resource set off-center in the world grid.


Can you make this more complex and dynamic? Perhaps try something more akin to a simple hydrothermal vent model:

A source (of the resource) pops up at a random location and begins churning out the ‘resource’, creating a gradient. Eventually the source is exhausted and the gradient disappears. You can have a maximum of x sources in the world at any time and when the number of sources is less than x a new source has some probability of appearing at a new random location.

It would also be nice to have a negative resource – something that causes harm but which is not simply a lack of positive resource – using the same hydrothermal vent model you could have a second resource whose intensity costs or harms an agent. This should result in a much more interesting and dynamic resource landscape for the agents to navigate.
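The vent model described above can be sketched in a few lines of Python. Everything here — class name, parameters, the gradient falloff — is illustrative, not anything Avida provides:

```python
import random

class VentField:
    """Toy 'hydrothermal vent' resource field: up to max_sources
    sources appear at random cells, emit a decaying gradient, and
    expire after a fixed lifetime."""
    def __init__(self, width, height, max_sources=3, lifetime=50,
                 spawn_prob=0.1, seed=0):
        self.w, self.h = width, height
        self.max_sources = max_sources
        self.lifetime = lifetime
        self.spawn_prob = spawn_prob
        self.rng = random.Random(seed)
        self.sources = []   # list of (x, y, remaining_updates)

    def update(self):
        # Age sources; drop any that are exhausted.
        self.sources = [(x, y, t - 1) for x, y, t in self.sources if t > 1]
        # Below the cap, a new source may pop up at a random location.
        if len(self.sources) < self.max_sources and self.rng.random() < self.spawn_prob:
            self.sources.append((self.rng.randrange(self.w),
                                 self.rng.randrange(self.h), self.lifetime))

    def level(self, x, y):
        """Resource at (x, y): each source contributes a gradient
        falling off linearly with Chebyshev distance."""
        total = 0.0
        for sx, sy, _ in self.sources:
            d = max(abs(x - sx), abs(y - sy))
            total += max(0.0, 1.0 - d / 10.0)
        return total
```

A negative resource would be a second field of the same kind whose `level` is subtracted from (or charged against) the organism's energy, giving the shifting good/bad landscape described above.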

I'm not sure if this should be a sense-able resource (something the agent can sense) or if it just causes harm without the agent realising -- Something I'm not clear on with Avida: can the agent sense its own 'energy' and as a result tell if it is being rewarded or harmed?

I am tempted to suggest actually defining a spectrum of resources (some good, some bad) but this would require many more methods for the agent to sense them (and makes for a much more complex research project). What I am thinking of here (and it is a vague thought without any of the important details) is to include potential routes by which an agent can gain an advantage by combining certain resources in certain ratios – it can create a more potent energy source than the ones it absorbs passively – This would, of course, be balanced by the potential for agents to combine resources into fatal concoctions.

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: Jan. 23 2014,09:41   

Quote (BillB @ Jan. 23 2014,04:32)
     
Quote (Wesley R. Elsberry @ Jan. 23 2014,09:17)
While I did stress that the genomic content of the initial organism, and thus the Avidian population, could only acquire the new instructions via mutation, once an ancestral organism had one or more of those, they would be passed down to offspring with the usual frequency. And any effects they had on the organism could yield a difference in fitness, driving the usual selective processes. I think saying mutation was the only operative process goes too far. Not including the instructions in any way in the initial organism simply eliminates the possibility that I as experimenter set up a particular outcome by whatever arrangement of movement-relevant instructions might be set in that initial organism.

One question I was asked at SSCI in 2009 was why use Avida and not something like Echo. And while the efficient answer is that when one is at the Devolab, one is usually going to be using Avida, I did survey the available software at the time for applicability to the question I was looking at. The software systems allowing for agent movement all treated movement as a primitive property, often requiring some fixed movement strategy be defined for the agents a priori. I was interested in looking at what evolution could do given just the sort of capabilities underlying movement as seen in organisms like E. coli, but without specifying how those capabilities were used. And that kind of question was not what the other software packages could address.

Excellent stuff, and something I'm really interested in despite having no time to work on it any more ...

I'm not intimately familiar with Avida but a few things jumped to mind whilst reading the description:
             
Quote
A facing is always toward another adjacent grid cell, so for an interior grid cell there are eight legal facings, five legal facings on an edge, and three at each corner grid cell.


I would say that there should be no illegal facings, just an inability to move when facing an edge – this would prevent a bias towards movement back to the centre. A bit like breeding E. coli in a jar: they cannot pass through the glass container, but they could repeatedly bump against it until they die. By having illegal facings you are, in one sense, providing them with obstacle avoidance behaviour for free.


Avida giveth, and Avida taketh away. Facing is very basic to the software. Illegal facings, when exercised, terminate the program with an ugly "bus error" message.

On the other hand, the world geometry options are (or I should say "were"; I haven't checked the latest code) grid, torus, and clique. I have no idea what clique does. Torus, though, wraps the edges of the world grid. Using torus would solve the illegal facing issue, since every cell would then be an interior cell. However, I also thought of torus as giving Avidians something for free, since on a grid whose dimensions are relatively prime I think movement on the diagonal will give the organism access to a lot of the grid, if not all of it.
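The facing rules are easy to make concrete with a toy sketch (plain Python; the function names are invented for illustration and this is not Avida's actual grid code):

```python
# Toy model of facing/neighbour lookup on a bounded grid vs. a torus.
# The eight facings are offsets into the Moore neighbourhood of a cell.
FACINGS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def legal_facings_grid(x, y, w, h):
    """Bounded grid: a facing is legal only if the faced cell exists."""
    return [(dx, dy) for dx, dy in FACINGS
            if 0 <= x + dx < w and 0 <= y + dy < h]

def faced_cell_torus(x, y, dx, dy, w, h):
    """Torus: every facing is legal, since coordinates wrap modulo
    the grid size and every cell is effectively an interior cell."""
    return ((x + dx) % w, (y + dy) % h)
```

On the bounded grid this gives exactly the counts quoted above: eight legal facings for an interior cell, five on an edge, three at a corner. On the torus all eight are always legal.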

   
Quote (BillB @ Jan. 23 2014,04:32)

             
Quote
The "sense-diff-facing" instruction puts the difference in the amount of a specified resource between the current grid cell and the faced grid cell into one of the Avida registers.


What if this was expanded to be a “sense X,Y diff” instruction where X and Y can be any of the surrounding cells, or your own cell? The values for X and Y would be heritable. (And I don't know what you do about sensing the cell in front of you when facing the edge of the world)


As I recall it, access to adjoining cells is entirely defined by facing. It would be nice to have X,Y addressable during the run, but as I recall, it doesn't work that way.

I think this issue, among others, led a colleague of mine to give up on modifying the Avida grid system entirely, and instead implemented a separate arena-style system that was instantiated on a per-organism basis, what she referred to in the planning stages as "dream-a-grid". Many of the things that I am describing as constraints would not be in her codebase. (Her Avidians evolved such things as perfect maze-running, but she had a complex system of markers that when correctly sensed and acted upon would lead to that.) The tradeoff, though, is that her movement experiments were all about individual performances, and no interaction between members of the population would be possible. I'm thinking in terms of future experiments possibly having a larger role for competition.

   
Quote (BillB @ Jan. 23 2014,04:32)

Perhaps if you wanted to add an interesting twist you could turn that into something like "Z=F(X,Y)" where X and Y are as described above but the function F is a heritable operator (Add, Subtract, Multiply, Divide, or Modulo) - you might even include bit shifting as a possible operator? Z=X<<Y or Z=X>>Y

The point would be to provide multiple pathways for this sensory apparatus to work - and for it to fail to work.

Expanding on this a bit more (if it is worth doing) you could allow for more distal sensing - maybe a Z=F((A,B)(X,Y)) instruction where A and B, and X and Y, are relative cell co-ordinates, perhaps capped to a maximum range of +/- 5. If you did this then I would be tempted to add a cost for longer range sensing (You need more energy to grow those longer whiskers!)


There was already code in Avida for distinguishing resources. This was based on a label system, where several bases in the genome get interpreted as a label, so what the organism gets when it processes a sensory instruction is heritable. All the sensory instruction does is put a value into an Avidian CPU register. What happens to it after that has to evolve, too.

Like I said above, I don't know that distant sensing has an obvious implementation pathway.
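For what it's worth, the heritable-operator idea is simple to sketch in isolation (a hypothetical toy in plain Python, nothing like Avida's actual instruction set; the names are made up):

```python
import operator

# The operator F in Z = F(X, Y) is selected by an index encoded in the
# genome, so the choice of operator is itself subject to mutation.
OPERATORS = [operator.add, operator.sub, operator.mul,
             operator.floordiv, operator.mod,
             operator.lshift, operator.rshift]

def sense(genome_op_index, x_val, y_val):
    """Z = F(X, Y), where F is picked by a heritable index."""
    f = OPERATORS[genome_op_index % len(OPERATORS)]
    try:
        return f(x_val, y_val)
    except ZeroDivisionError:
        # Division/modulo by zero is one of the "fail to work"
        # pathways such an apparatus would introduce.
        return 0
```

The point of the try/except is exactly the "multiple pathways for this sensory apparatus to work - and for it to fail to work": some heritable operator choices are simply broken on some inputs.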

   
Quote (BillB @ Jan. 23 2014,04:32)

             
Quote
The environment is defined with a positively rewarding resource, with a peak in the resource set off-center in the world grid.


Can you make this more complex and dynamic? Perhaps try something more akin to a simple hydrothermal vent model:

A source (of the resource) pops up at a random location and begins churning out the ‘resource’, creating a gradient. Eventually the source is exhausted and the gradient disappears. You can have a maximum of x sources in the world at any time and when the number of sources is less than x a new source has some probability of appearing at a new random location.


The current way I define a resource gradient is quite cumbersome: I have a Perl script that sets up CELL declarations in the environment config for every cell in the grid. I do have code for a method to establish a resource gradient at runtime, but that's not yet tested. Yes, I'd like to have a moving resource at some point. I don't think it will be the first thing out the gate.
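The vent model described in the quote above reduces to a small update loop. Here is a minimal sketch (toy parameters and names, not Avida's environment system):

```python
import math
import random

class Vent:
    """A resource source that emits a distance-decaying gradient
    until its supply is exhausted."""
    def __init__(self, x, y, supply):
        self.x, self.y, self.supply = x, y, supply

def resource_at(vents, x, y):
    """Resource level at (x, y): sum of the gradients of all live vents."""
    return sum(v.supply / (1.0 + math.hypot(x - v.x, y - v.y))
               for v in vents)

def step(vents, w, h, max_vents=3, spawn_prob=0.05, drain=1.0):
    """One update: drain every vent, remove exhausted vents, and
    (while below the cap) maybe spawn a new vent at a random cell."""
    for v in vents:
        v.supply -= drain
    vents[:] = [v for v in vents if v.supply > 0]
    if len(vents) < max_vents and random.random() < spawn_prob:
        vents.append(Vent(random.randrange(w), random.randrange(h),
                          supply=random.uniform(50.0, 150.0)))
```

Repeatedly calling `step` gives the dynamic landscape BillB describes: gradients appear at random locations, decay, vanish, and are replaced.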

   
Quote (BillB @ Jan. 23 2014,04:32)

It would also be nice to have a negative resource – something that causes harm but which is not simply a lack of positive resource. Using the same hydrothermal vent model, you could have a second resource whose intensity imposes a cost on, or harms, an agent. This should result in a much more interesting and dynamic resource landscape for the agents to navigate.

I'm not sure if this should be a sense-able resource (something the agent can sense) or if it just causes harm without the agent realising. Something I'm not clear on with Avida: can the agent sense its own 'energy' and as a result tell if it is being rewarded or harmed?


The "detrimental resource" is likely the first thing out the gate. There are some issues with how this gets implemented, but I think I see a way forward that won't impact what I've already done too much.

As far as the Avidians sensing whether they are doing well or not, I think the answer is "no". The system scheduler assigns cycles based on merit, so poorly performing Avidians are also slowly performing Avidians. As far as I know, permitting an Avidian to have access to some transformation of its own merit would require setting up an instruction to do just that. Plus, an absolute value for merit wouldn't be terribly useful: in the first hundred updates, a merit of 0.29 would be excellent, but it would be pretty miserable not much further into the run. What would be useful to the Avidian is some relative number related to its ranking in the population. I don't know of any biological correlate to that, though.
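The merit-proportional scheduling, and the difference between an absolute merit value and a relative ranking, can be sketched like this (a toy illustration, not Avida's scheduler):

```python
def allocate_cycles(merits, total_cycles):
    """Hand out CPU cycles in proportion to merit, so low-merit
    organisms also execute slowly. (Integer floor; a real scheduler
    would also distribute the rounding remainder.)"""
    total = sum(merits)
    return [int(total_cycles * m / total) for m in merits]

def relative_rank(merits, i):
    """Fraction of the population whose merit organism i meets or
    beats: the kind of relative signal that an absolute merit value
    (excellent early in a run, miserable later) cannot provide."""
    return sum(1 for m in merits if m <= merits[i]) / len(merits)
```

Note that `relative_rank` requires population-wide knowledge, which is exactly why it has no obvious biological correlate for an individual organism.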

   
Quote (BillB @ Jan. 23 2014,04:32)

I am tempted to suggest actually defining a spectrum of resources (some good, some bad), but this would require many more methods for the agent to sense them (and makes for a much more complex research project). What I am thinking of here (and it is a vague thought without any of the important details) is to include potential routes by which an agent can gain an advantage by combining certain resources in certain ratios – it can create a more potent energy source than the ones it absorbs passively. This would, of course, be balanced by the potential for agents to combine resources into fatal concoctions.


Actually, the sensing system is already label-based, so multiplying the resources could be done without any particular hassle for the programmer. What it would do to the Avidians... that's an experiment.

The first experiment was pretty much a stab in the dark. We set up something that hadn't been tried, and we didn't know whether we were posing a challenge outside the scope of what could be evolved in Avida. Now that we know that Avidians can evolve movement strategies, including ones in an optimal class of strategies, we can raise the bar some.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
BillB



Posts: 388
Joined: Aug. 2009

(Permalink) Posted: Jan. 23 2014,10:51   

Thanks for the detailed reply. I think I might try to find some time to familiarise myself with Avida ... I have, in the past, sketched out a framework for doing experiments like this myself, but based on what you have written I think my system would not be possible to implement in Avida.

I might go back over my early notes and see if there is anything sensible I can summarise here.

A thought occurred after my last post - would it be possible to have the resource inflict a penalty when it is above a certain level? Think of it like a nutrient-rich gradient coming from a hydrothermal vent - if you get too close to the source you literally start to cook. There would be an optimal distance (a habitable zone?), and I would expect the resulting pattern of activity to appear as a ring rather than a point (referring to your plots above).
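The habitable-zone intuition checks out in a two-line toy model (made-up numbers, purely illustrative): if benefit falls off with distance but a penalty applies above a concentration threshold, the net payoff is negative at the vent, peaks at an intermediate distance, and decays beyond it - a ring.

```python
def concentration(d, peak=100.0):
    """Resource concentration as a function of distance d from the vent."""
    return peak / (1.0 + d)

def net_payoff(d, threshold=40.0, penalty=3.0):
    """Benefit from the resource, minus a 'cooking' penalty charged
    on any concentration above the threshold."""
    c = concentration(d)
    return c - penalty * max(0.0, c - threshold)
```

With these parameters `net_payoff(0)` is negative (too close, you cook), the maximum sits at a small positive distance, and distant cells pay little - so selection should concentrate activity on a ring around the source.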

  
BillB



Posts: 388
Joined: Aug. 2009

(Permalink) Posted: Jan. 24 2014,11:43   

This caught my eye - a thermodynamic theory of the origin of life. Reminds me of the Maximum Entropy Production Principle.

  
Henry J



Posts: 5787
Joined: Mar. 2005

(Permalink) Posted: Jan. 24 2014,14:21   

Quote (BillB @ Jan. 24 2014,10:43)
This caught my eye - a thermodynamic theory of the origin of life. Reminds me of the Maximum Entropy Production Principle.

So he thinks that abiogenesis is caused by some sort of positive feedback loop among systems that redistribute incoming energy?

  
Wesley R. Elsberry



Posts: 4991
Joined: May 2002

(Permalink) Posted: April 18 2014,10:02   

Physicist Sean Devine critiques Dembski's "design inference" in Zygon.

Jeff Shallit and I are cited extensively.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

    
midwifetoad



Posts: 4003
Joined: Mar. 2008

(Permalink) Posted: April 18 2014,11:27   

Quote (Wesley R. Elsberry @ April 18 2014,10:02)
Physicist Sean Devine critiques Dembski's "design inference" in Zygon.

Jeff Shallit and I are cited extensively.

Quote
The fundamental choice to be made, given the available information, is not whether chance provides a better explanation than design, but whether natural laws provide a better explanation than a design.


Sig worthy

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

  
Quack



Posts: 1961
Joined: May 2007

(Permalink) Posted: April 19 2014,02:54   

Quote
Sig worthy

Indeed, but who am I quoting?

--------------
Rocks have no biology.
              Robert Byers.

  
midwifetoad



Posts: 4003
Joined: Mar. 2008

(Permalink) Posted: April 19 2014,14:48   

Quote (Quack @ April 19 2014,02:54)
Quote
Sig worthy

Indeed, but who am I quoting?

Physicist Sean Devine critiques Dembski's "design inference" in Zygon.

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

  
DiEb



Posts: 312
Joined: May 2008

(Permalink) Posted: Sep. 28 2014,13:14   

William Dembski gave a talk at the University of Chicago in August 2014. There is a YouTube video of this one-hour talk, which the Discovery Institute provided.

At the moment, I'm trying to transcribe this video in a series of posts on my blog: William Dembski's talk at the University of Chicago.

At 45 minutes in, I had to stop for a moment, as there is such an amusing elementary mistake on the slides which the eminent Dr. Dr. uses... And, as 1/2 != 3/5, Dr. Dr. Dembski gets a wrong result - which is, as he says, "typical for these search-for-a-search situations". I couldn't agree more.

Please, take a look at: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 4 . Thanks :-)

Edited by DiEb on Sep. 28 2014,21:53

   
Henry J



Posts: 5787
Joined: Mar. 2005

(Permalink) Posted: Sep. 28 2014,18:30   

Yeah, but he doesn't have to stoop to your pathetic level of detail!

Or something.

  
fusilier



Posts: 252
Joined: Feb. 2003

(Permalink) Posted: Sep. 29 2014,06:44   

Quote (DiEb @ Sep. 28 2014,14:14)
William Dembski gave a talk at the University of Chicago in August 2014. There is a YouTube video of this one-hour talk, which the Discovery Institute provided.

At the moment, I'm trying to transcribe this video in a series of posts on my blog: William Dembski's talk at the University of Chicago.

At 45 minutes in, I had to stop for a moment, as there is such an amusing elementary mistake on the slides which the eminent Dr. Dr. uses... And, as 1/2 != 3/5, Dr. Dr. Dembski gets a wrong result - which is, as he says, "typical for these search-for-a-search situations". I couldn't agree more.

Please, take a look at: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 4 . Thanks :-)

Is that "one over two-factorial equals three over five"  or "one over two does not equal three over five?"

Either way it's not sensible, but ....

=B^0

--------------
fusilier
James 2:24

  
k.e..



Posts: 5432
Joined: May 2007

(Permalink) Posted: Sep. 30 2014,09:51   

Quote (fusilier @ Sep. 29 2014,14:44)
Quote (DiEb @ Sep. 28 2014,14:14)
William Dembski gave a talk at the University of Chicago in August 2014. There is a YouTube video of this one-hour talk, which the Discovery Institute provided.

At the moment, I'm trying to transcribe this video in a series of posts on my blog: William Dembski's talk at the University of Chicago.

At 45 minutes in, I had to stop for a moment, as there is such an amusing elementary mistake on the slides which the eminent Dr. Dr. uses... And, as 1/2 != 3/5, Dr. Dr. Dembski gets a wrong result - which is, as he says, "typical for these search-for-a-search situations". I couldn't agree more.

Please, take a look at: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 4 . Thanks :-)

Is that "one over two-factorial equals three over five"  or "one over two does not equal three over five?"

Either way it's not sensible, but ....

=B^0

Didn't you mean =BS^0 ?

--------------
"I get a strong breeze from my monitor every time k.e. puts on his clown DaveTard suit" dogdidit
"ID is deader than Lenny Flanks granmaws dildo batteries" Erasmus
"I'm busy studying scientist level science papers" Galloping Gary Gaulin

  
fusilier



Posts: 252
Joined: Feb. 2003

(Permalink) Posted: Sep. 30 2014,20:14   

Quote (k.e.. @ Sep. 30 2014,10:51)
Quote (fusilier @ Sep. 29 2014,14:44)
Quote (DiEb @ Sep. 28 2014,14:14)
William Dembski gave a talk at the University of Chicago in August 2014. There is a YouTube video of this one-hour talk, which the Discovery Institute provided.

At the moment, I'm trying to transcribe this video in a series of posts on my blog: William Dembski's talk at the University of Chicago.

At 45 minutes in, I had to stop for a moment, as there is such an amusing elementary mistake on the slides which the eminent Dr. Dr. uses... And, as 1/2 != 3/5, Dr. Dr. Dembski gets a wrong result - which is, as he says, "typical for these search-for-a-search situations". I couldn't agree more.

Please, take a look at: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 4 . Thanks :-)

Is that "one over two-factorial equals three over five"  or "one over two does not equal three over five?"

Either way it's not sensible, but ....

=B^0

Didn't you mean =BS^0 ?

touche!

--------------
fusilier
James 2:24

  
  418 replies since Mar. 17 2009,11:00