Abstract
The Learnable Evolution Model (LEM), introduced by Michalski in 2000, interleaves bouts of evolution and learning. Here we investigate LEM in what we believe is its simplest form, using k-nearest neighbour (KNN) as the 'learning' mechanism. The essence of the hybridisation is that candidate children are filtered, before evaluation, based on predictions from the learning mechanism, which learns from previous populations. We test the resulting 'KNNGA' on the same set of problems used in the original LEM paper. We find that KNNGA provides very significant advantages in both solution speed and quality over the unadorned GA. This is in keeping with the original LEM paper's results, in which the learning mechanism was AQ and the evolution/learning interface was more sophisticated. It is surprising and interesting to see such a beneficial improvement in the GA after such a simple learning-based intervention. Since the only application-specific demand of KNN is a suitable distance measure (making it more generally applicable than many other learning mechanisms), LEM methods using KNN are clearly worth exploring for large-scale optimization tasks in which savings in evaluation time are necessary. © 2008 IEEE.
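The filtering idea the abstract describes can be made concrete in a short sketch. The Python below is a minimal illustration, not the authors' implementation: the names `knn_predicts_good` and `next_generation`, the choice K = 5, the truncation selection, and the 'good'/'bad' labelling of the previous population's better and worse halves are all assumptions made for the example.

```python
import random
from math import dist  # Euclidean distance between equal-length coordinate tuples

# Illustrative sketch of one KNNGA generation, under assumptions the abstract
# leaves open: real-valued genomes, fitness minimisation, and labels taken
# from the better/worse halves of the previous population.

K = 5  # neighbourhood size; an illustrative choice, not from the paper

def knn_predicts_good(child, labelled):
    """Majority vote over the K labelled points nearest to the child."""
    neighbours = sorted(labelled, key=lambda p: dist(child, p[0]))[:K]
    return sum(good for _, good in neighbours) * 2 >= K

def next_generation(population, fitness, crossover, mutate, pop_size):
    # Label the previous population: better half 'good', worse half 'bad'.
    ranked = sorted(population, key=fitness)
    half = len(ranked) // 2
    labelled = [(ind, i < half) for i, ind in enumerate(ranked)]

    children, attempts = [], 0
    while len(children) < pop_size:
        a, b = random.sample(ranked[:half], 2)  # simple truncation selection
        child = mutate(crossover(a, b))
        attempts += 1
        # The LEM step: filter BEFORE evaluation, so rejected children never
        # consume a (possibly expensive) fitness call. The attempt cap is a
        # safeguard so the loop terminates even if the learner rejects most
        # candidates.
        if knn_predicts_good(child, labelled) or attempts > 20 * pop_size:
            children.append(child)
    return children
```

Under this scheme only the surviving children go on to fitness evaluation, which is where the reported savings in evaluation time would come from; the original LEM paper's AQ-based evolution/learning interface is more elaborate than this sketch.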
Original language | English |
---|---|
Title of host publication | 2008 IEEE Congress on Evolutionary Computation, CEC 2008 |
Pages | 3244-3251 |
Number of pages | 8 |
DOIs | |
Publication status | Published - 2008 |
Event | 2008 IEEE Congress on Evolutionary Computation - Hong Kong, China. Duration: 1 Jun 2008 → 6 Jun 2008 |
Conference
Conference | 2008 IEEE Congress on Evolutionary Computation |
---|---|
Abbreviated title | CEC 2008 |
Country/Territory | China |
City | Hong Kong |
Period | 1/06/08 → 6/06/08 |