Robust Attentional Aggregation of Deep Feature Sets for Multi-view 3D Reconstruction

Bo Yang, Sen Wang, Andrew Markham, Niki Trigoni

Research output: Contribution to journal › Article › peer-review



We study the problem of recovering an underlying 3D shape from a set of images. Existing learning-based approaches usually resort to recurrent neural nets, e.g., GRUs, or intuitive pooling operations, e.g., max/mean pooling, to fuse multiple deep features encoded from input images. However, GRU-based approaches cannot consistently estimate 3D shapes given different permutations of the same set of input images, as the recurrent unit is permutation variant. They are also unlikely to refine the 3D shape given more images, due to the long-term memory loss of GRUs. Commonly used pooling approaches capture only partial information, e.g., max/mean values, ignoring other valuable features. In this paper, we present a new feed-forward neural module, named AttSets, together with a dedicated training algorithm, named FASet, to attentively aggregate an arbitrarily sized deep feature set for multi-view 3D reconstruction. The AttSets module is permutation invariant, computationally efficient and flexible to implement, while the FASet algorithm enables the AttSets-based network to be remarkably robust and to generalize to an arbitrary number of input images. We thoroughly evaluate FASet and the properties of AttSets on multiple large public datasets. Extensive experiments show that AttSets together with the FASet algorithm significantly outperforms existing aggregation approaches.
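The core idea of attentional aggregation can be illustrated with a minimal sketch: each element of the feature set receives learned attention scores, a softmax is taken over the set dimension, and the features are fused as an attention-weighted sum. This is a simplified NumPy illustration of the general technique, not the authors' implementation; the function name `attsets_aggregate` and the single weight matrix `W` are assumptions for the example.

```python
import numpy as np

def attsets_aggregate(features, W):
    """Permutation-invariant attentional aggregation (illustrative sketch).

    features: (N, D) set of N deep features of dimension D
    W:        (D, D) learned attention weights (hypothetical parameter)
    returns:  (D,) aggregated feature vector
    """
    scores = features @ W                                  # (N, D) per-element attention scores
    scores -= scores.max(axis=0, keepdims=True)            # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)  # softmax over the set
    return (weights * features).sum(axis=0)                # attention-weighted sum over the set

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                            # a set of 5 features
W = rng.standard_normal((8, 8))
out1 = attsets_aggregate(x, W)
out2 = attsets_aggregate(x[::-1], W)                       # same set, reversed order
assert np.allclose(out1, out2)                             # permutation invariance
```

Because both the softmax normalization and the final sum run over the set dimension, the output is unchanged under any reordering of the input images, and the same module accepts any number of inputs — the two properties the abstract highlights over GRU-based fusion.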
Original language: English
Pages (from-to): 1-21
Number of pages: 21
Journal: International Journal of Computer Vision
Early online date: 28 Aug 2019
Publication status: E-pub ahead of print - 28 Aug 2019


Keywords

  • Deep learning on sets
  • Multi-view 3D reconstruction
  • Robust attention model

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence


