TY - CHAP
T1 - Training Sets for Algorithm Tuning
T2 - Exploring the Impact of Structural Distribution
AU - Alkhalifa, Mashal
AU - Corne, David
N1 - Funding Information:
We are grateful to King Abdulaziz City for Science and Technology (KACST) for their support of the first author.
Publisher Copyright:
© 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - Algorithm tuning, especially in the context of metaheuristic algorithms, is a growing area of research and practice which concerns configuring the parameters of an algorithm on the basis of a ‘training set’ of problem instances. There is much interest in (but little research so far on) the relationship between the training set and generalization performance. In general terms, we expect that, if an algorithm is targeted at problems of a certain type, then the training set of problem instances should represent a diverse collection of problems of that type. However, many questions in this area remain open. In this paper, we explore how certain structural aspects of the training-set problem instances (in contrast, for example, to problem size) affect generalization performance. Specifically, we look at spaces of simple TSP instances in which the number of cities is constant but the spatial distributions differ. We find that, when the training problem sets are suitably difficult, an algorithm tuned for a specific spatial distribution tends to be the best algorithm on test sets following that distribution. Conversely, and significantly, if the target problem set is not from a ‘particularly difficult’ spatial distribution, a better optimizer for that distribution may be produced by choosing a different spatial distribution for the training set. This has implications for the development of real-world optimizers for specific problem classes, such as vehicle routing problems with particular depot and customer distributions.
KW - Algorithm tuning
KW - Combinatorial optimization
KW - Parameter tuning
UR - http://www.scopus.com/inward/record.url?scp=85101098322&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-61659-5_16
DO - 10.1007/978-3-030-61659-5_16
M3 - Chapter
AN - SCOPUS:85101098322
SN - 9783030616588
T3 - Studies in Fuzziness and Soft Computing
SP - 183
EP - 195
BT - Studies in Fuzziness and Soft Computing
A2 - Matoušek, Radek
A2 - Kůdela, Jakub
PB - Springer
ER -