Masked Autoencoder-Based Knowledge Transfer for Spectral Reconstruction From RGB Images

Gamba P.
2025-01-01

Abstract

Mainstream spectral reconstruction methods typically design complex, computationally intensive convolutional neural network (CNN) or Transformer architectures to model the mapping from RGB images to hyperspectral images (HSIs). However, the bottleneck in achieving accurate spectral reconstruction may not lie in model complexity. Direct end-to-end learning on limited training samples struggles to capture discriminative and generalizable feature representations, leading to overfitting and consequently suboptimal reconstruction fidelity. To address these challenges, we propose a new Masked Autoencoder-based Knowledge Transfer network for Spectral Reconstruction from RGB images (MAE-KTSR). MAE-KTSR decouples feature representation into a two-stage paradigm, facilitating a holistic comprehension of diverse objects and scenes and thereby enhancing the generalizability of spectral reconstruction. In the first stage, we introduce a Spatial-Spectral Masked Autoencoder (S^{2}-MAE) to extract discriminative spectral features through masked modeling under constrained spectral conditions. S^{2}-MAE reconstructs spectral images from partially masked inputs, learning a generalizable feature representation that provides useful prior knowledge for RGB-to-HSI reconstruction. In the second stage, a lightweight convolutional reconstruction network is deployed to further extract and aggregate local spectral-spatial features. Specifically, an Inter-Stage Feature Fusion (ISFF) module is introduced to effectively exploit the global MAE-based spectral priors learned in the first stage. Experimental results on three spectral reconstruction benchmarks (NTIRE2020-Clean, CAVE, and Harvard) and one real-world hyperspectral dataset (Pavia University) demonstrate the effectiveness of MAE-KTSR. Additionally, MAE-KTSR is experimentally validated to facilitate downstream real-world applications, such as HSI classification.
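
The abstract describes the masked-modeling recipe only at a high level (tokenize the spectral image, hide part of it, reconstruct the whole cube). The following is a minimal PyTorch sketch of that general idea, not the paper's actual S^{2}-MAE: the class name TinySpectralMAE, the patch size, mask ratio, layer widths, and depth are all illustrative assumptions, since the abstract does not specify the architecture.

    import torch
    import torch.nn as nn

    class TinySpectralMAE(nn.Module):
        # Toy spatial-spectral masked autoencoder: embed HSI patches as
        # tokens, replace a random subset with a learned mask token, and
        # regress the pixel values of every patch from the visible context.
        # All hyperparameters below are illustrative, not the paper's.
        def __init__(self, bands=31, patch=8, dim=128, mask_ratio=0.75):
            super().__init__()
            self.mask_ratio = mask_ratio
            self.embed = nn.Conv2d(bands, dim, kernel_size=patch, stride=patch)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.decoder = nn.Linear(dim, bands * patch * patch)

        def forward(self, hsi):                          # hsi: (B, bands, H, W)
            tokens = self.embed(hsi)                     # (B, dim, H/p, W/p)
            B, D, h, w = tokens.shape
            tokens = tokens.flatten(2).transpose(1, 2)   # (B, N, dim), N = h*w
            N = tokens.shape[1]
            # Randomly mask a fraction of tokens, independently per sample.
            mask = torch.rand(B, N, device=hsi.device) < self.mask_ratio
            tokens = torch.where(mask.unsqueeze(-1),
                                 self.mask_token.expand(B, N, D), tokens)
            latent = self.encoder(tokens)                # global spectral context
            return self.decoder(latent), mask            # per-token pixels, mask

A possible pretraining step under the same assumptions, with the loss computed only on the masked tokens as in standard masked autoencoding:

    model = TinySpectralMAE()
    hsi = torch.randn(2, 31, 64, 64)                     # dummy 31-band cube
    recon, mask = model(hsi)
    # Patchify the input the same way to build the regression target.
    target = hsi.unfold(2, 8, 8).unfold(3, 8, 8)         # (B, bands, h, w, p, p)
    target = target.permute(0, 2, 3, 1, 4, 5).flatten(1, 2).flatten(2)
    loss = ((recon - target) ** 2)[mask].mean()          # masked tokens only
    loss.backward()

In the paper's second stage, features from an encoder pretrained this way would be fused into the lightweight convolutional reconstruction network through the ISFF module; the abstract gives no details of that fusion, so it is not sketched here.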

Use this identifier to cite or link to this document: https://hdl.handle.net/11571/1542507
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 4