Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning

George Tzougas, Konstantin Kutzkov

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)
148 Downloads (Pure)

Abstract

We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network (CANN) approach, embeddings and transfer learning. Model training was achieved by minimizing either the deviance or the cross-entropy loss function, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose were employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations (LIME) approach.
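
As context for the two training objectives named in the abstract, their standard Bernoulli forms are given below. This is a textbook sketch; the paper's exact definitions and scaling may differ.

\mathcal{L}_{\mathrm{CE}} = -\frac{1}{n}\sum_{i=1}^{n}\big[\, y_i \log \hat p_i + (1 - y_i)\log(1 - \hat p_i) \,\big],
\qquad
D = 2\sum_{i=1}^{n}\Big[\, y_i \log\frac{y_i}{\hat p_i} + (1 - y_i)\log\frac{1 - y_i}{1 - \hat p_i} \,\Big].

With the convention 0 log 0 = 0, the saturated-model terms vanish for y_i ∈ {0, 1}, giving D = 2n \mathcal{L}_{\mathrm{CE}}: the two criteria coincide up to a constant factor in the binary setting, and pairing each network architecture with each loss is what produces the fourteen model variants counted above.

The CANN approach combines a fitted logistic regression with a neural network that learns an additive correction on the logit scale. The following is a minimal sketch in Keras with simulated stand-in data; the layer sizes, optimizer and all variable names are illustrative assumptions, not the paper's configuration.

# A minimal CANN-style sketch for binary claim occurrence, assuming
# TensorFlow/Keras and scikit-learn; data and hyperparameters are
# illustrative stand-ins, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from tensorflow import keras

# Toy features/labels standing in for a motor insurance portfolio.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)).astype("float32")
y = (rng.random(1000) < 0.1).astype("float32")

# Step 1: fit the logistic regression and keep its linear predictor.
glm = LogisticRegression().fit(X, y)
glm_logit = (X @ glm.coef_.ravel() + glm.intercept_).reshape(-1, 1).astype("float32")

# Step 2: neural network learning an additive correction on the logit scale.
x_in = keras.Input(shape=(5,))
glm_in = keras.Input(shape=(1,))  # precomputed GLM logit, held fixed
h = keras.layers.Dense(16, activation="tanh")(x_in)
h = keras.layers.Dense(8, activation="tanh")(h)
# Zero initialization: training starts exactly at the logistic regression fit.
nn_logit = keras.layers.Dense(1, kernel_initializer="zeros")(h)
p = keras.layers.Activation("sigmoid")(keras.layers.Add()([glm_in, nn_logit]))

model = keras.Model([x_in, glm_in], p)
# Cross-entropy; for 0/1 labels this is the Bernoulli deviance up to a factor of 2.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([X, glm_logit], y, epochs=5, batch_size=64, verbose=0)

Because the correction layer is initialized at zero, training starts from the logistic regression fit and can only refine it; swapping in a deviance loss, adding embedding layers for categorical rating factors, or deepening the network yields further variants of the kind the abstract enumerates.
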
Original language: English
Article number: 99
Journal: Algorithms
Volume: 16
Issue number: 2
Publication status: Published - 9 Feb 2023

Keywords

  • CANN approach
  • LIME model-agnostic approach
  • Newton–Raphson algorithm
  • cross-entropy loss function
  • deviance loss function
  • embedding layers
  • gradient descent algorithm
  • logistic regression
  • neural networks
  • predictive insurance analytics
  • regularization procedures
  • transfer learning

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Numerical Analysis
  • Computational Theory and Mathematics
  • Computational Mathematics
