New Publication: Fujiang Ji Develops Deep Learning Framework for High-Resolution Hyperspectral Reconstruction

We are excited to share that Fujiang Ji has published a new paper, “Robust hyperspectral reconstruction from satellite and airborne observations via a deep hierarchical fusion network across heterogeneous scenarios,” in Remote Sensing of Environment. Co-authored with Jiaqi Yang, Philip A. Townsend, Ting Zheng, Kyle R. Kovach, Tong Yu, Ruqi Yang, Ming Liu, and Min Chen, the study presents a new deep learning framework for reconstructing high-resolution hyperspectral imagery from satellite and airborne observations.

The paper addresses a major challenge in remote sensing: generating hyperspectral data with both strong spectral fidelity and fine spatial detail under real-world, cross-sensor conditions. By fusing low-resolution hyperspectral imagery from NASA’s EMIT sensor with high-resolution multispectral imagery from PlanetScope, and validating the results against AVIRIS observations across three ecologically distinct landscapes in the western United States, the team showed that their approach consistently outperformed seven state-of-the-art fusion models.

Beyond the strong technical performance, this work highlights the value of thoughtful, task-specific model design over simply increasing model complexity. The framework offers a scalable path toward high-fidelity hyperspectral reconstruction, with exciting potential for vegetation trait mapping, ecosystem monitoring, biodiversity studies, and environmental change analysis.