Evolving Dynamical Neural Networks for Adaptive Behavior (Beer & Gallagher 1992)


This paper, though it gives substantial space to an evolved chemotaxis agent, is the main place to find the multilegged controller elaborated. It is also the paper that distinguishes central pattern generators from reflexive pattern generators and mixed pattern generators.

It also makes a small case back in favor of mutation as an operator in GAs.

It elaborates the main model I'll be using in my work: the six-legged agent with a 5-neuron controller per leg.

  • A CPG (central pattern generator) generates a limit cycle in the absence of sensory feedback.
  • An RPG (reflexive pattern generator) generates locomotion only with sensory feedback; if sensation is lesioned, the agent stands still.
  • An MPG (mixed pattern generator) operates both with and without sensory feedback. You get one by evolving an agent under both conditions, selecting for the mean of the average forward velocity across the two kinds of trial (see the sketch after this list).
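The per-leg controller is a fully interconnected continuous-time recurrent neural network (CTRNN). Below is a minimal sketch of one Euler-integrated update step, assuming my own parameter names (ctrnn_step, lesion_sensor, dt) rather than anything from the paper; in the actual model the weights, biases, and time constants are what the GA evolves.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, w, theta, tau, sensor_input, dt=0.1, lesion_sensor=False):
    """One Euler step of a 5-neuron CTRNN leg controller (illustrative sketch).

    y            : (5,) neuron states
    w            : (5, 5) weights, w[j, i] = connection from neuron j to neuron i
    theta        : (5,) biases
    tau          : (5,) time constants
    sensor_input : (5,) external input, e.g. a weighted leg-angle sensor signal
    lesion_sensor: if True, cut the sensory feedback (the RPG/MPG lesion test)
    """
    I = np.zeros(5) if lesion_sensor else sensor_input
    # Beer-style CTRNN dynamics:
    # tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i
    dydt = (-y + sigmoid(y + theta) @ w + I) / tau
    return y + dt * dydt
```

Fitness for the MPG condition would then be the mean of the average forward velocity over a trial run with `lesion_sensor=False` and one with `lesion_sensor=True`.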

With sensation, the MPG exhibits a higher stepping frequency and 'cleaner phasing' than without it.

Really interesting result:

Evolution can be sped up by evolving subproblems and creating modules: solve one leg, then copy it six times and evolve the full controller from that seed (rough sketch below). However, performance is always better in a (slower) network evolved all at once. Conversely, a leg controller coevolved with the other legs doesn't perform well in isolation; sometimes it doesn't even oscillate.
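A rough sketch of the "solve one leg, copy it six times" seeding, assuming a flat real-valued genome where the per-leg parameters come first and the interleg coupling parameters are appended and randomized; the function name and layout are mine, not the paper's.

```python
import numpy as np

def seed_six_leg_genome(leg_genome, n_coupling, rng=np.random.default_rng(0)):
    """Build an initial six-legged genome from a single evolved leg controller.

    leg_genome : 1-D array of the evolved single-leg parameters
                 (weights, biases, time constants)
    n_coupling : number of interleg coupling parameters left for the GA to tune
    """
    # Copy the single-leg solution into all six leg slots...
    legs = np.tile(leg_genome, 6)
    # ...and initialize the interleg coupling randomly; the GA now only has to
    # tune the coupling (and fine-tune the copies) instead of searching from scratch.
    coupling = rng.uniform(-1.0, 1.0, size=n_coupling)
    return np.concatenate([legs, coupling])
```

The paper's point is that this kind of seeding converges faster but plateaus below the fitness of networks evolved all at once.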


"It is also unclear how well dynamical nerual neworks can cope with discontinuous tasks, such as sequential decistion making, though the switches beween tropotaxis and kilnotaxis are an encouraging step in this direction"

"Like a number of other authors, we have found that the performance of generic search on neural netwrok sapces does not scale well with problem size using the naive encoding we employed here."

The last paragraph is good: it talks about how these GAs fix the behavior a priori by measuring fitness in terms of it, whereas real life doesn't do that but instead has an 'intrinsic' fitness.