A new pan-sharpening method based on deep neural networks (DNNs) is proposed in this letter for the remote sensing image fusion problem. Research on representation learning suggests that a DNN can effectively model complex relationships between variables by composing several levels of nonlinearity. Motivated by this observation, a modified sparse denoising autoencoder (MSDA) algorithm is proposed to learn the relationship between high-resolution (HR) and low-resolution (LR) image patches, which is represented by the DNN. The HR and LR patches are sampled only from the HR and LR panchromatic (PAN) images at hand, respectively, so no additional training images are required.
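The core MSDA idea, training an autoencoder whose "corrupted" input is an LR patch and whose clean target is the corresponding HR patch, can be illustrated with a minimal NumPy sketch. The patch size, hidden width, learning rate, and single tied-weight sigmoid layer are assumptions for illustration, not the letter's actual configuration, and the sparsity penalty of the full MSDA is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 7x7 patches flattened to 49-d vectors.
n_vis, n_hid = 49, 64

# Parameters of one autoencoder layer (tied encoder/decoder weights).
W = rng.normal(0.0, 0.1, (n_hid, n_vis))
b = np.zeros(n_hid)   # hidden bias
c = np.zeros(n_vis)   # visible (reconstruction) bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x_lr, x_hr, lr=0.1):
    """One gradient step mapping an LR patch toward its HR counterpart.

    In a standard denoising autoencoder the input is a corrupted copy of
    the target; here the LR patch plays the role of the corrupted input
    and the HR patch is the clean target (the 'modified' idea).
    Returns the reconstruction loss before the update.
    """
    global W, b, c
    h = sigmoid(W @ x_lr + b)        # encode the LR patch
    y = sigmoid(W.T @ h + c)         # decode toward the HR patch
    err = y - x_hr
    loss = 0.5 * np.sum(err ** 2)
    # Backpropagate through both sigmoid layers (tied weights get
    # gradient contributions from the encoder and decoder paths).
    dy = err * y * (1.0 - y)
    dh = (W @ dy) * h * (1.0 - h)
    W -= lr * (np.outer(h, dy) + np.outer(dh, x_lr))
    b -= lr * dh
    c -= lr * dy
    return loss

# Synthetic stand-ins for one LR/HR PAN patch pair.
x_lr, x_hr = rng.random(n_vis), rng.random(n_vis)
loss = train_step(x_lr, x_hr)
```

In practice many such patch pairs would be drawn from the PAN image pyramid and trained in mini-batches; this sketch shows only the direction of the gradient step.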
By stacking a series of MSDAs, we obtain a stacked MSDA (S-MSDA), which effectively pretrains the DNN. To improve training further, the entire DNN is fine-tuned with the back-propagation algorithm after pretraining. Finally, assuming that the relationship between HR and LR multispectral (MS) image patches is the same as that between HR and LR PAN image patches, the HR MS image is reconstructed from the observed LR MS image using the trained DNN. Comparative experiments with several quality assessment indexes show that the proposed method outperforms other pan-sharpening methods in both visual quality and numerical measures.
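The final reconstruction step described above, applying the PAN-trained network to MS patches, can be sketched as a patch-wise scan with overlap averaging. The function name, the callable `dnn` interface, and the overlap-averaging scheme are assumptions for illustration; the letter does not specify this API.

```python
import numpy as np

def reconstruct_hr_ms(lr_ms, dnn, patch=7, stride=1):
    """Reconstruct one HR MS band from an LR MS band.

    Every LR patch is flattened, mapped through the trained network
    (`dnn`: any callable taking a flattened patch and returning an HR
    patch of the same length -- a hypothetical interface), and the
    overlapping predictions are averaged into the output image.
    """
    H, W = lr_ms.shape
    out = np.zeros_like(lr_ms, dtype=float)
    weight = np.zeros_like(out)
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            p = lr_ms[i:i + patch, j:j + patch].ravel()
            hr_p = np.asarray(dnn(p)).reshape(patch, patch)
            out[i:i + patch, j:j + patch] += hr_p
            weight[i:i + patch, j:j + patch] += 1.0
    # Average overlapping predictions; guard against uncovered pixels.
    return out / np.maximum(weight, 1.0)
```

Under this scheme each MS band is processed independently with the same network, reflecting the assumption that the HR/LR relationship learned from PAN patches transfers to MS patches.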