TY - JOUR
T1 - Deep Generative Model for Spatial-Spectral Unmixing With Multiple Endmember Priors
AU - Shi, Shuaikai
AU - Zhang, Lijun
AU - Altmann, Yoann
AU - Chen, Jie
N1 - Funding Information:
The work of Yoann Altmann was supported in part by the Royal Academy of Engineering through the Research Fellowship Scheme under Grant RF201617/16/31.
Publisher Copyright:
IEEE
PY - 2022/04/18
Y1 - 2022/04/18
N2 - Spectral unmixing is an effective tool for mining information at the subpixel level from complex hyperspectral images. To account for the spatially correlated distributions of materials in a scene, many algorithms unmix the data in a spatial-spectral fashion; however, existing models are usually unable to model spectral variability at the same time. In this article, we present a variational autoencoder-based deep generative model for spatial-spectral unmixing (DGMSSU) with endmember variability, which links the generated endmembers, via discriminators, to the probability distributions of endmember bundles extracted from the hyperspectral imagery. Beyond a convolutional autoencoder-like architecture, which can only model spatial information within regular patch inputs, DGMSSU can alternatively use graph convolutional networks or self-attention modules to handle irregular but more flexible inputs: superpixels. Experimental results on a simulated dataset and on two well-known real hyperspectral images show the superiority of the proposed approach over other state-of-the-art spatial-spectral unmixing methods. Compared with conventional unmixing methods that consider endmember variability, the proposed model generates more accurate endmembers on each subimage through its adversarial training process. The code for this work will be available at https://github.com/shuaikaishi/DGMSSU for reproducibility.
KW - Deep neural network
KW - endmember variability
KW - graph convolution
KW - self-attention
KW - spatial-spectral model
KW - spectral unmixing
UR - http://www.scopus.com/inward/record.url?scp=85128657642&partnerID=8YFLogxK
DO - 10.1109/TGRS.2022.3168712
M3 - Article
AN - SCOPUS:85128657642
SN - 0196-2892
VL - 60
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
M1 - 5527214
ER -