Can We Use Small/Easy Problems to Tune Algorithms for Large/Difficult Problems?

Mashal Alkhalifah, David Corne

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Algorithm tuning is a thriving area of current research. Tuning an algorithm involves a set of problem instances, called the 'training set', together with a procedure that modifies the parameters of the algorithm to improve its overall performance on the training set. Tuning is invariably successful, improving subsequent performance on new problem instances. However, very little research has looked at the relationship between the composition of the training set and the algorithm's generalization performance. For example, if an algorithm is targeted at problems of a certain type, then we would expect that the training set should contain instances of the same type. Similarly, we would expect that to perform well on (say) large-scale problems, an algorithm needs to be tuned on large-scale instances. We explore such expectations, using subspaces of Traveling Salesperson Problem (TSP) instances as our testbed. In one experiment, we create TSP instances that vary in terms of a single parameter of spatial structure; in another experiment, the instances only vary in problem size (number of cities). In both experiments, a suite of algorithms is generated by tuning on each subspace, and each trained algorithm is then tested on every subspace. We find, in both experiments, that the prior expectations are clearly refuted. In particular, the difficulty of the training set (estimated by fitness-distance-correlation) seems to have no influence on the relationship between the training set and generalization performance (or, any such influence is swamped by other factors). Meanwhile, the best tuned algorithm for a given subspace is often one that was tuned on a different subspace; this includes several cases where the best algorithm for a given problem size was one tuned on a subspace of smaller problems. We conclude with consideration of the limitations of the current work, and speculate on the potential implications.

Original language: English
Title of host publication: 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS)
Publisher: IEEE
ISBN (Electronic): 9781728100036
DOIs
Publication status: Published - 26 Dec 2019
Event: 3rd International Conference on Intelligent Computing in Data Sciences 2019 - Marrakech, Morocco
Duration: 28 Oct 2019 - 30 Oct 2019

Conference

Conference: 3rd International Conference on Intelligent Computing in Data Sciences 2019
Abbreviated title: ICDS 2019
Country/Territory: Morocco
City: Marrakech
Period: 28/10/19 - 30/10/19

Keywords

  • algorithm tuning
  • combinatorial optimization
  • parameter tuning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Modelling and Simulation
  • Control and Optimization
