TY - CHAP

T1 - Mean-field Particle Swarm Optimization

AU - Grassi, Sara

AU - Huang, Hui

AU - Pareschi, Lorenzo

AU - Qiu, Jinniao

N1 - Funding Information:
This chapter has been written within the activities of the GNCS group of INdAM (National Institute of Higher Mathematics). The support of the MIUR PRIN Project 2017, No. 2017KKJP4X "Innovative numerical methods for evolutionary partial differential equations and applications" and of the ESF PhD grant "Mathematical and statistical methods for machine learning in biomedical and socio-sanitary applications" is acknowledged. H. H. is partially supported by a Pacific Institute for the Mathematical Sciences (PIMS) postdoctoral fellowship. J. Q. is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by start-up funds from the University of Calgary.
Publisher Copyright:
© 2023 World Scientific Publishing Company.

PY - 2023/2

Y1 - 2023/2

N2 - In this chapter we survey some recent results on the global minimization of a non-convex and possibly non-smooth, high-dimensional objective function by means of particle-based, gradient-free methods. Such problems arise in many situations of contemporary interest in machine learning and signal processing. After a brief overview of metaheuristic methods based on particle swarm optimization (PSO), we introduce a continuous formulation via second-order systems of stochastic differential equations that generalizes PSO methods and provides the basis for their theoretical analysis. Subsequently, we show how mean-field techniques can be used to derive, in the limit of a large number of particles, the corresponding mean-field PSO description based on Vlasov-Fokker-Planck type equations. Finally, in the zero-inertia limit, we analyze the corresponding macroscopic hydrodynamic equations, showing that they generalize the recently introduced consensus-based optimization (CBO) methods by including memory effects. Rigorous results concerning the mean-field limit, the zero-inertia limit, and the convergence of the mean-field PSO method towards the global minimum are provided, along with a suite of numerical examples.

AB - In this chapter we survey some recent results on the global minimization of a non-convex and possibly non-smooth, high-dimensional objective function by means of particle-based, gradient-free methods. Such problems arise in many situations of contemporary interest in machine learning and signal processing. After a brief overview of metaheuristic methods based on particle swarm optimization (PSO), we introduce a continuous formulation via second-order systems of stochastic differential equations that generalizes PSO methods and provides the basis for their theoretical analysis. Subsequently, we show how mean-field techniques can be used to derive, in the limit of a large number of particles, the corresponding mean-field PSO description based on Vlasov-Fokker-Planck type equations. Finally, in the zero-inertia limit, we analyze the corresponding macroscopic hydrodynamic equations, showing that they generalize the recently introduced consensus-based optimization (CBO) methods by including memory effects. Rigorous results concerning the mean-field limit, the zero-inertia limit, and the convergence of the mean-field PSO method towards the global minimum are provided, along with a suite of numerical examples.

UR - http://www.scopus.com/inward/record.url?scp=85149258030&partnerID=8YFLogxK

U2 - 10.1142/9789811266140_0003

DO - 10.1142/9789811266140_0003

M3 - Chapter

AN - SCOPUS:85149258030

SN - 9789811266133

T3 - Lecture Notes Series, Institute for Mathematical Sciences

SP - 127

EP - 193

BT - Modeling and Simulation for Collective Dynamics

PB - WSPC

ER -