Probabilistic Spiking Neural Networks Training with Expectation-Propagation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In this paper, we propose to use variational inference based on expectation-propagation (EP) for training probabilistic spiking neural networks. We adopt a Bayesian formalism and formulate the training process as finding parametric approximations to the marginal distribution of the network weights given the training data. We investigate two variants of EP and, through an image classification problem, illustrate how EP methods can handle large training sets and multimodal, sparsity-promoting priors. This preliminary study thus provides methodological tools that can be leveraged to increase the interpretability, and potentially accelerate the training, of deeper and more complex spiking networks for computer vision tasks.
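
The paper's implementation is not reproduced on this page, but a minimal sketch can illustrate the EP moment-matching loop the abstract alludes to, in the simpler setting of the "sparse regression" keyword: Bayesian linear regression with a spike-and-slab (multimodal, sparsity-promoting) prior on the weights. Everything below is an assumption made for illustration — the function names (`ep_spike_slab`, `gauss_times_gauss`), the linear-Gaussian likelihood, and the damped parallel-update EP variant are not taken from the paper.

```python
import numpy as np

def gauss_times_gauss(m1, v1, m2, v2):
    """Mean, variance, and log-normalizer of N(w; m1, v1) * N(w; m2, v2)."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    log_z = -0.5 * np.log(2.0 * np.pi * (v1 + v2)) - 0.5 * (m1 - m2) ** 2 / (v1 + v2)
    return m, v, log_z

def ep_spike_slab(X, y, noise_var=0.1, pi=0.2, v_slab=1.0, v_spike=1e-4,
                  n_sweeps=50, damping=0.5):
    """EP sketch for Bayesian linear regression with a spike-and-slab prior.

    The Gaussian likelihood is handled exactly; each weight's mixture prior
    pi * N(0, v_slab) + (1 - pi) * N(0, v_spike) gets a Gaussian "site"
    (precision tau[i], precision-mean nu[i]) refined by moment matching.
    """
    n, d = X.shape
    tau = np.ones(d)              # site precisions
    nu = np.zeros(d)              # site precision-means
    A = X.T @ X / noise_var       # likelihood contribution to the posterior precision
    b = X.T @ y / noise_var       # likelihood contribution to the precision-mean

    for _ in range(n_sweeps):
        Sigma = np.linalg.inv(A + np.diag(tau))   # global Gaussian approximation
        mu = Sigma @ (b + nu)
        for i in range(d):
            # Cavity: remove site i from the marginal q(w_i).
            v_i, m_i = Sigma[i, i], mu[i]
            prec_cav = 1.0 / v_i - tau[i]
            if prec_cav <= 0.0:                   # skip numerically unstable updates
                continue
            v_cav = 1.0 / prec_cav
            m_cav = v_cav * (m_i / v_i - nu[i])
            # Tilted moments: cavity times the true (bimodal) prior factor.
            ms, vs, lzs = gauss_times_gauss(m_cav, v_cav, 0.0, v_slab)
            mp, vp, lzp = gauss_times_gauss(m_cav, v_cav, 0.0, v_spike)
            ws, wp = pi * np.exp(lzs), (1.0 - pi) * np.exp(lzp)
            z = ws + wp
            m_t = (ws * ms + wp * mp) / z
            v_t = (ws * (vs + ms ** 2) + wp * (vp + mp ** 2)) / z - m_t ** 2
            # Moment matching: the new site is tilted / cavity, applied with damping.
            tau[i] = (1 - damping) * tau[i] + damping * (1.0 / v_t - 1.0 / v_cav)
            nu[i] = (1 - damping) * nu[i] + damping * (m_t / v_t - m_cav / v_cav)

    Sigma = np.linalg.inv(A + np.diag(tau))
    return Sigma @ (b + nu), Sigma

# Hypothetical usage on synthetic sparse data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
X = rng.normal(size=(100, 5))
y = X @ w_true + 0.3 * rng.normal(size=100)
mu, Sigma = ep_spike_slab(X, y, noise_var=0.09)
print(np.round(mu, 2))   # zero coefficients are shrunk hard toward the spike
```

Each sweep replaces every non-Gaussian prior factor with the Gaussian site whose moments match the "tilted" distribution (cavity times true factor). The multimodality of the spike-and-slab prior is thus absorbed one factor at a time, which is what lets EP handle priors that defeat simple unimodal Gaussian variational families.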
Original language: English
Title of host publication: 9th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
Publisher: IEEE
Pages: 41-45
Number of pages: 5
ISBN (Electronic): 9798350344523
DOIs
Publication status: Published - 31 Jan 2024

Keywords

  • Expectation-Propagation
  • Spiking neural network
  • Variational inference
  • Sparse regression

ASJC Scopus subject areas

  • Artificial Intelligence
  • Control and Optimization
  • Signal Processing
  • Instrumentation
  • Computer Networks and Communications
