To BI or not to BI?

Adil Alizada*, Johan John Thomas*, Mir Imaad Ali*, Kayvan Karim

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Suicide continues to be a pressing issue in our society. Studies agree that suicidal ideation is best addressed in its early stages, and for this reason researchers have been training different neural network (NN) architectures to detect it. Transformers are the dominant neural network architecture in the domain of suicidal ideation detection, offering a robust solution not only to this problem but to a wide variety of NLP problems. LSTM-CNN, one of the prominent architectures in the field, has also been proposed as a strong solution. This study evaluates the performance of BERT, RoBERTa, LSTM-CNN, and Bi-LSTM-CNN models for suicidal ideation detection. Our experiments indicated that BERT models have an edge over both LSTM-CNN and Bi-LSTM-CNN models, scoring up to 0.986 accuracy on our test set. Furthermore, when directly comparing LSTM-CNN with Bi-LSTM-CNN, the difference between the models was not significant. Our paper contributes to the domain by showing that LSTM-CNN models offer no advantage over Transformers.
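The accuracy figure quoted in the abstract is the fraction of test examples classified correctly. As an illustrative sketch only (the labels and predictions below are invented, not the paper's data), comparing two models on the same held-out test set could look like:

```python
# Illustrative only: toy labels/predictions, not the paper's data.
# 1 = post expresses suicidal ideation, 0 = it does not.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
bert_pred = [1, 0, 1, 1, 0, 0, 1, 1]   # hypothetical BERT outputs
lstm_pred = [1, 0, 0, 1, 0, 1, 1, 1]   # hypothetical LSTM-CNN outputs

print(accuracy(y_true, bert_pred))  # 0.875
print(accuracy(y_true, lstm_pred))  # 0.625
```

In practice the paper's 0.986 figure would come from the same computation applied to the models' predictions over the full test split.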
Original language: English
Title of host publication: 5th International Conference on Computers and Artificial Intelligence Technology (CAIT)
Publisher: IEEE
Pages: 586-590
Number of pages: 5
ISBN (Electronic): 9798331530891
DOIs
Publication status: Published - 17 Apr 2025
Event: 5th International Conference on Computers and Artificial Intelligence Technology 2024 - Hangzhou, China
Duration: 20 Dec 2024 – 22 Dec 2024
Conference number: 5th
https://www.cait.net/2024.html

Conference

Conference: 5th International Conference on Computers and Artificial Intelligence Technology 2024
Abbreviated title: CAIT 2024
Country/Territory: China
City: Hangzhou
Period: 20/12/24 – 22/12/24

Keywords

  • suicide
  • depression
  • machine learning
  • NLP
  • natural language processing
  • deep learning

