A multimodal multitask deep learning framework for vibrotactile feedback and sound rendering

Joolekha Bibi Joolee, Md Azher Uddin

Research output: Contribution to journal › Article › peer-review



Data-driven approaches are often used to model and generate vibrotactile feedback and sounds for rigid stylus-based interaction. Nevertheless, prior research has typically addressed these two modalities separately due to challenges related to synchronization and design complexity. To address this, we introduce a novel multimodal multitask deep learning framework. In this paper, we develop a comprehensive end-to-end data-driven system that encompasses the capture of contact acceleration signals and sound data from various textured surfaces. The framework introduces novel encoder-decoder networks for modeling and rendering vibrotactile feedback through an actuator while routing sound to headphones. The proposed encoder-decoder networks combine stacked transformers with convolutional layers to capture both local variability and overall trends within the data. To the best of our knowledge, this is the first attempt to apply a transformer-based, data-driven approach to modeling and rendering vibrotactile signals and sounds during tool-surface interactions. In numerical evaluations, the proposed framework achieves a lower RMS error than state-of-the-art models for both vibrotactile signals and sound data. A subjective similarity evaluation also confirms the superiority of the proposed method over the state of the art.
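The abstract pairs convolutional layers (local variability) with transformer-style attention (overall trends). The following is a minimal NumPy sketch of that idea, not the authors' implementation: all shapes, kernel sizes, and weights are illustrative assumptions, and the "acceleration trace" is random stand-in data.

```python
# Hedged sketch: a conv front-end for local features followed by one
# self-attention block for global structure, loosely mirroring the
# "stacked transformers with convolutional layers" described above.
# Hyperparameters (T, C, D, kernel size 5) are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Same'-padded 1-D convolution. x: (T, C_in), w: (K, C_in, C_out)."""
    K, _, C_out = w.shape
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    T = x.shape[0]
    out = np.zeros((T, C_out))
    for t in range(T):
        # Contract the (K, C_in) window against the (K, C_in, C_out) kernel.
        out[t] = np.tensordot(xp[t:t + K], w, axes=([0, 1], [0, 1]))
    return out

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product attention over the time axis."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

T, C, D = 128, 1, 16                    # signal length, input channels, model dim
signal = rng.standard_normal((T, C))    # stand-in for a contact acceleration trace

w_conv = rng.standard_normal((5, C, D)) * 0.1
local = np.maximum(conv1d(signal, w_conv), 0.0)   # conv front-end + ReLU

wq, wk, wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
encoded = self_attention(local, wq, wk, wv)       # transformer-style mixing

print(encoded.shape)  # (128, 16)
```

In a full encoder-decoder, blocks like this would be stacked per modality, with separate decoder heads driving the actuator and the headphone audio.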
Original language: English
Article number: 13335
Journal: Scientific Reports
Publication status: Published - 10 Jun 2024


  • Computer science
  • Information technology


