Probabilistic Spiking Neural Networks Training with Expectation-Propagation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



In this paper, we propose to use variational inference based on expectation-propagation (EP) for training probabilistic spiking networks. We adopt a Bayesian formalism and formulate the training process as finding parametric approximations to the marginal distribution of the network weights given the training data. We investigate two variants of EP and, through an image classification problem, illustrate how EP methods can handle large training sets and multimodal, sparsity-promoting priors. This preliminary study thus provides methodological tools that can be leveraged to increase the interpretability and potentially accelerate the training of deeper and more complex spiking networks for computer vision tasks.
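To illustrate the core operation behind EP-style training (the paper itself gives no code, so the following is only a generic sketch, not the authors' method): EP approximates an intractable posterior by Gaussian factors whose first two moments match those of the exact "tilted" distribution. The toy target below, a Gaussian prior times a probit factor, p(w) ∝ N(w; μ₀, v₀)·Φ(y·w), admits closed-form moments, which the sketch checks against numerical integration. All names (`match_moments`, `phi`, `Phi`) are illustrative, not from the paper.

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def match_moments(mu0, v0, y):
    """Closed-form mean and variance of p(w) ∝ N(w; mu0, v0) * Phi(y*w).

    This moment-matching step is the building block of EP: the non-Gaussian
    factor Phi(y*w) is replaced by a Gaussian site chosen so the product with
    the prior reproduces these two moments.
    """
    z = y * mu0 / math.sqrt(1.0 + v0)
    r = phi(z) / Phi(z)
    mu = mu0 + y * v0 * r / math.sqrt(1.0 + v0)
    v = v0 - v0 * v0 * r * (z + r) / (1.0 + v0)
    return mu, v

# Sanity check: compare against brute-force numerical integration.
mu0, v0, y = 0.0, 1.0, 1.0
n, lo, hi = 200001, -8.0, 8.0
dw = (hi - lo) / (n - 1)
grid = [lo + i * dw for i in range(n)]
dens = [phi((w - mu0) / math.sqrt(v0)) / math.sqrt(v0) * Phi(y * w) for w in grid]
Z = sum(dens) * dw                                   # normalising constant
mu_num = sum(w * d for w, d in zip(grid, dens)) * dw / Z
mu_ep, v_ep = match_moments(mu0, v0, y)
```

In a full EP pass over a spiking network, one such update would be applied per likelihood factor, with the remaining factors' sites forming the "cavity" prior; the multimodal, sparsity-promoting priors mentioned in the abstract would replace the simple Gaussian prior used here.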
Original language: English
Title of host publication: 9th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
ISBN (Electronic): 9798350344523
Publication status: Published - 31 Jan 2024
