
Existing hyperspectral imaging systems produce low spatial resolution images due to hardware constraints. We propose a sparse representation based approach for hyperspectral image super-resolution. The proposed approach first extracts the distinct reflectance spectra of the scene from the available hyperspectral image. Then, the signal sparsity, non-negativity and the spatial structure in the scene are exploited to explain a high-spatial but low-spectral resolution image of the same scene in terms of the extracted spectra. This is done by learning a sparse code with our algorithm, G-SOMP+. Finally, the learned sparse code is used with the extracted scene spectra to estimate the super-resolution hyperspectral image. Comparison of the proposed approach with state-of-the-art methods on both ground-based and remotely sensed public hyperspectral image databases shows that the presented method achieves the lowest error rate on all test images in the three datasets.
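
The fusion pipeline described above can be sketched in a few lines of NumPy. The sketch below is only illustrative: a generic non-negative factorization stands in for the spectra extraction step, a plain non-negative least-squares solver stands in for the G-SOMP+ sparse coding step, and all names, shapes and parameters are assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

def fuse_hyperspectral(Y_hs, Y_rgb, R, n_atoms=30):
    """Y_hs:  (L, n_low)  pixels of the low spatial / high spectral resolution image
       Y_rgb: (l, n_high) pixels of the high spatial / low spectral resolution image
       R:     (l, L)      spectral response mapping hyperspectral bands to the RGB bands."""
    # 1) Extract distinct scene spectra (dictionary D) from the hyperspectral image;
    #    a crude non-negative factorization stands in for the dictionary learning step.
    D = NMF(n_components=n_atoms, max_iter=500).fit(Y_hs.T).components_.T   # (L, k)

    # 2) Sparse-code every high resolution pixel against the spectrally transformed
    #    dictionary R @ D, enforcing non-negativity (stand-in for G-SOMP+).
    D_rgb = R @ D                                                           # (l, k)
    A = np.stack([nnls(D_rgb, y)[0] for y in Y_rgb.T], axis=1)              # (k, n_high)

    # 3) Reconstruct the super-resolution hyperspectral image from the scene spectra
    #    and the learned codes.
    return D @ A                                                            # (L, n_high)
```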

Despite the proven efficacy of hyperspectral imaging in many computer vision tasks, its widespread use is hindered by its low spatial resolution, which results from hardware limitations. We propose a hyperspectral image super-resolution approach that fuses a high resolution image with the low resolution hyperspectral image using non-parametric Bayesian sparse representation. The proposed approach first infers probability distributions for the material spectra in the scene and their proportions. The distributions are then used to compute sparse codes of the high resolution image. To that end, we propose a generic Bayesian sparse coding strategy to be used with Bayesian dictionaries learned with the Beta process, and we theoretically analyze the strategy to establish its accuracy. The computed sparse codes are used with the estimated scene spectra to construct the super-resolution hyperspectral image. Exhaustive experiments on two public databases of ground-based hyperspectral images and a remotely sensed image show that the proposed approach outperforms the existing state of the art.
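
As a rough illustration of the kind of non-parametric Bayesian sparse representation the approach builds on, the sketch below draws a Beta-Bernoulli support (the standard finite approximation of the Beta process) and combines it with weights and a dictionary of scene spectra. The posterior inference over these distributions is not shown, and all sizes, hyperparameters and values are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, k, n = 31, 20, 1000     # hyperspectral bands, dictionary atoms, image pixels

# Beta-Bernoulli support: pi_j ~ Beta(a/k, b(k-1)/k), z_ij ~ Bernoulli(pi_j).
a, b = 1.0, 1.0
pi = rng.beta(a / k, b * (k - 1) / k, size=k)
Z = rng.random((k, n)) < pi[:, None]          # sparse binary support of the codes
S = rng.normal(size=(k, n))                   # code weights
A = Z * S                                     # sparse codes of the high resolution image

# Given a dictionary D of scene spectra (here a random non-negative placeholder),
# the super-resolution hyperspectral image is reconstructed as D @ A.
D = np.abs(rng.normal(size=(L, k)))
X_sr = D @ A                                  # (L, n) reconstructed pixels
```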

Hyperspectral cameras acquire precise spectral information; however, their spatial resolution is very low due to hardware constraints. We propose an image fusion based hyperspectral super-resolution approach that employs a Bayesian representation model. The proposed model accounts for spectral smoothness and spatial consistency of the representation by using Gaussian Processes and a spatial kernel in a hierarchical formulation of the Beta Process. Our approach first employs the model to infer Gaussian Processes for the spectra present in the hyperspectral image. Then, it uses the model to estimate the activity level of the inferred processes in a sparse representation of a high resolution image of the same scene. Finally, we use the model to compute multiple sparse codes of the high resolution image, which are merged with samples of the Gaussian Processes for an accurate estimate of the high resolution hyperspectral image. We perform experiments with remotely sensed and ground-based hyperspectral images to establish the effectiveness of our approach.
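
The spectral smoothness ingredient can be illustrated with a small Gaussian Process prior over the reflectance spectra: correlating neighbouring bands through a kernel makes the drawn spectra smooth. The squared-exponential kernel, its hyperparameters, and the non-negativity proxy below are assumptions for illustration, not the paper's exact hierarchical model.

```python
import numpy as np

def smooth_spectra_prior(n_bands=31, n_atoms=15, length_scale=5.0, var=1.0, seed=0):
    """Draw n_atoms candidate reflectance spectra from a GP prior whose kernel
    correlates neighbouring spectral bands, yielding smooth spectra."""
    rng = np.random.default_rng(seed)
    bands = np.arange(n_bands)
    # Squared-exponential covariance over band indices.
    K = var * np.exp(-0.5 * (bands[:, None] - bands[None, :]) ** 2 / length_scale ** 2)
    K += 1e-8 * np.eye(n_bands)                # jitter for numerical stability
    spectra = rng.multivariate_normal(np.zeros(n_bands), K, size=n_atoms)  # (n_atoms, n_bands)
    return np.abs(spectra.T)                   # (n_bands, n_atoms), non-negative proxy

D = smooth_spectra_prior()                     # smooth candidate scene spectra
```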
