TY - GEN
T1 - Quantifying the effects of objective space dimension in evolutionary multiobjective optimization
AU - Knowles, Joshua
AU - Corne, David
PY - 2007
Y1 - 2007
N2 - The scalability of EMO algorithms is an issue of significant concern for both algorithm developers and users. A key aspect of the issue is scalability to objective space dimension, other things being equal. Here, we make some observations about the efficiency of search in discrete spaces as a function of the number of objectives, considering both uncorrelated and correlated objective values. Efficiency is expressed in terms of a cardinality-based (scaling-independent) performance indicator. Considering random sampling of the search space, we measure, empirically, the fraction of the true PF covered after p iterations, as the number of objectives grows, and for different correlations. A general analytical expression for the expected performance of random search is derived, and is shown to agree with the empirical results. We postulate that for even moderately large numbers of objectives, random search will be competitive with an EMO algorithm and show that this is the case empirically: on a function where each objective is relatively easy for an EA to optimize (an NK-landscape with K=2), random search compares favourably to a well-known EMO algorithm when objective space dimension is ten, for a range of inter-objective correlation values. The analytical methods presented here may be useful for benchmarking of other EMO algorithms. © Springer-Verlag Berlin Heidelberg 2007.
KW - Coverage indicator
KW - Inter-objective correlation
KW - Many objectives
KW - Multiobjective optimization
KW - Non-dominated ranking
KW - Nondominated sorting
KW - Random search
UR - http://www.scopus.com/inward/record.url?scp=37249057083&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9783540709275
VL - 4403
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 757
EP - 771
BT - Evolutionary Multi-Criterion Optimization - 4th International Conference, EMO 2007, Proceedings
T2 - 4th International Conference on Evolutionary Multi-Criterion Optimization
Y2 - 5 March 2007 through 8 March 2007
ER -