The simplest evolution/learning hybrid: LEM with KNN

Guleng Sheri, David W. Corne

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

9 Citations (Scopus)


The Learnable Evolution Model (LEM) was introduced by Michalski in 2000, and involves interleaved bouts of evolution and learning. Here we investigate LEM in (we think) its simplest form, using k-nearest neighbour as the 'learning' mechanism. The essence of the hybridisation is that candidate children are filtered, before evaluation, based on predictions from the learning mechanism (which learns based on previous populations). We test the resulting 'KNNGA' on the same set of problems that were used in the original LEM paper. We find that KNNGA provides very significant advantages in both solution speed and quality over the unadorned GA. This is in keeping with the original LEM paper's results, in which the learning mechanism was AQ and the evolution/learning interface was more sophisticated. It is surprising and interesting to see such beneficial improvement in the GA after such a simple learning-based intervention. Since the only application-specific demand of KNN is a suitable distance measure (in that way it is more generally applicable than many other learning mechanisms), LEM methods using KNN are clearly worth exploring for large-scale optimization tasks in which savings in evaluation time are necessary. © 2008 IEEE.
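The mechanism the abstract describes can be sketched in a few lines. The following is an illustrative toy example, not the authors' exact KNNGA: it assumes a continuous sphere benchmark, a median-split good/bad labelling of the previous population, and a hand-rolled majority-vote KNN using Euclidean distance. Children the learner predicts to be 'bad' are discarded before the (nominally expensive) fitness evaluation.

```python
import math
import random

DIM, POP, K = 5, 20, 3

def fitness(x):
    # Sphere function (minimise); stands in for an expensive evaluation.
    return sum(v * v for v in x)

def knn_predicts_good(child, archive, k=K):
    # archive: list of (vector, is_good) pairs from the previous population.
    nearest = sorted(archive, key=lambda a: math.dist(child, a[0]))[:k]
    votes = sum(1 for _, good in nearest if good)
    return votes * 2 > k  # majority vote among k nearest neighbours

def mutate(x, sigma=0.1):
    return [v + random.gauss(0, sigma) for v in x]

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]

for gen in range(30):
    scored = sorted(pop, key=fitness)
    median = fitness(scored[POP // 2])
    # Label each individual 'good' if at or below the median fitness.
    archive = [(ind, fitness(ind) <= median) for ind in pop]
    children, attempts = [], 0
    while len(children) < POP and attempts < 10 * POP:
        attempts += 1
        child = mutate(random.choice(scored[:POP // 2]))
        # Filter before evaluation: keep only children KNN predicts as good.
        if knn_predicts_good(child, archive):
            children.append(child)
    # Elitist survivor selection over parents plus accepted children.
    pop = sorted(pop + children, key=fitness)[:POP]

best = fitness(min(pop, key=fitness))
```

The only problem-specific ingredient here is the distance measure inside `knn_predicts_good`, which is exactly the generality argument the abstract makes for KNN over heavier learning mechanisms.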

Original language: English
Title of host publication: 2008 IEEE Congress on Evolutionary Computation, CEC 2008
Number of pages: 8
Publication status: Published - 2008
Event: 2008 IEEE Congress on Evolutionary Computation - Hong Kong, China
Duration: 1 Jun 2008 - 6 Jun 2008


Conference: 2008 IEEE Congress on Evolutionary Computation
Abbreviated title: CEC 2008
City: Hong Kong


