In this chapter we survey recent results on the global minimization of non-convex, possibly non-smooth, high-dimensional objective functions by means of particle-based gradient-free methods. Such problems arise in many contemporary applications in machine learning and signal processing. After a brief overview of metaheuristic methods based on particle swarm optimization (PSO), we introduce a continuous formulation via second-order systems of stochastic differential equations that generalizes PSO methods and provides the basis for their theoretical analysis. Subsequently, we show how mean-field techniques allow one to derive, in the limit of a large number of particles, the corresponding mean-field PSO description based on Vlasov-Fokker-Planck-type equations. Finally, in the zero-inertia limit, we analyze the corresponding macroscopic hydrodynamic equations, showing that they generalize the recently introduced consensus-based optimization (CBO) methods by including memory effects. Rigorous results concerning the mean-field limit, the zero-inertia limit, and the convergence of the mean-field PSO method towards the global minimum are provided, along with a suite of numerical examples.
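To fix ideas, the following is a minimal sketch of the classical discrete PSO iteration that the continuous second-order SDE formulation generalizes: each particle carries a velocity updated by inertia, attraction towards its personal best, and attraction towards the swarm's global best. The function name, parameter values, and test objective below are illustrative choices, not taken from the chapter.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=50, n_iters=200,
                 inertia=0.7, c_personal=1.5, c_global=1.5, seed=0):
    """Schematic classical PSO: gradient-free minimization of f on R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))                     # particle velocities
    p_best = x.copy()                                    # personal best positions
    p_val = np.apply_along_axis(f, 1, x)                 # personal best values
    g_best = p_best[p_val.argmin()].copy()               # global best position
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))              # random weights in [0, 1)
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + attraction to personal and global bests
        v = (inertia * v
             + c_personal * r1 * (p_best - x)
             + c_global * r2 * (g_best - x))
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < p_val                          # update memory (bests)
        p_best[improved] = x[improved]
        p_val[improved] = vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# Illustrative run on the convex sphere function, minimum at the origin.
best_x, best_f = pso_minimize(lambda z: float(np.sum(z**2)), dim=2)
```

The personal-best memory in this update is precisely the memory effect that survives in the macroscopic limit and distinguishes the resulting dynamics from plain CBO.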