A stochastic use of the Kurdyka-Łojasiewicz property: Investigation of optimization algorithms behaviours in a non-convex differentiable framework

Jean-Baptiste Fest*, Audrey Repetti, Emilie Chouzenoux

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Asymptotic analysis of generic stochastic algorithms often relies on descent conditions. In a convex setting, some technical shortcuts can be taken to establish asymptotic convergence guarantees for the associated scheme. In a non-convex setting, however, obtaining similar guarantees is usually more complicated and relies on the Kurdyka-Łojasiewicz (KL) property. While this tool has become popular in deterministic optimization, it is much less widespread in the stochastic context, and the few works making use of it are essentially based on trajectory-by-trajectory approaches. In this paper, we propose a new framework for using the KL property in a non-convex stochastic setting, based on conditioning theory. We show that this framework allows for deeper asymptotic investigation of stochastic schemes verifying some generic descent conditions. We further show that our methodology can be used to prove convergence of generic stochastic gradient descent (SGD) schemes, and that it unifies conditions investigated in multiple articles of the literature.
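To make the setting concrete, the sketch below runs a generic SGD scheme of the kind the abstract refers to: a smooth non-convex objective, stochastic gradients equal to the true gradient plus zero-mean noise, and diminishing step sizes. The objective f(x) = x² + 3 sin²(x), the noise level, and the step-size schedule are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of the non-convex, differentiable toy objective
    # f(x) = x^2 + 3*sin(x)^2 (chosen for illustration only).
    return 2.0 * x + 6.0 * np.sin(x) * np.cos(x)

def sgd(x0, steps=5000, noise=0.1):
    """Generic SGD: stochastic gradient = true gradient + zero-mean noise,
    with diminishing steps gamma_k = k^(-0.6), so that
    sum gamma_k = inf and sum gamma_k^2 < inf (a standard descent-type condition)."""
    x = x0
    for k in range(1, steps + 1):
        gamma = k ** (-0.6)
        g = grad_f(x) + noise * rng.standard_normal()
        x = x - gamma * g
    return x

x_final = sgd(x0=2.0)
print(x_final)  # settles near the critical point x = 0
```

The step-size condition above is one typical instance of the "generic descent conditions" under which such schemes are analyzed; the paper's contribution is establishing convergence of the iterates themselves (not just of function values) in the non-convex case via the KL property.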
Original language: English
Journal: Foundations of Data Science
Early online date: 18 Aug 2025
DOIs
Publication status: E-pub ahead of print - 18 Aug 2025
