Optimized Kernel Nonparametric Weighted Feature Extraction for Hyperspectral Image Classification

Mohammad Hasheminejad
Department of Electrical Engineering, University of Jiroft, Kerman, Iran

Journal of Information Systems and Telecommunication (http://jist.acecr.org) | ISSN: 2322-1437 / E-ISSN: 2345-2773
Subject Area: Pattern Recognition
Received: 13 May 2021 / Revised: 04 Oct 2021 / Accepted: 21 Dec 2021

Abstract:
Hyperspectral image (HSI) classification is an essential means of analyzing remotely sensed images, with applications in remote sensing of natural resources, astronomy, medicine, agriculture, food health, and many other areas. Since hyperspectral images contain redundant measurements, it is crucial to identify a subset of efficient features for modeling the classes. Kernel-based methods are widely used in this field. In this paper, we introduce a new kernel-based method that defines the class hyperplanes more optimally than previous methods. In many kernel-based HSI classification methods, noisy data perturb the boundary samples and, as a result, lead to incorrectly trained class hyperplanes. We therefore propose an optimized kernel nonparametric weighted feature extraction (KNWFE) method for hyperspectral image classification. KNWFE is a kernel-based feature extraction method with promising results in classifying remotely sensed image data; however, it does not take the closeness or distance of the data to the target classes into account. To solve this problem, we propose optimized KNWFE, which results in better classification performance. Our extensive experiments show that the proposed method improves the accuracy of HSI classification and is superior to state-of-the-art HSI classifiers.

Keywords: Feature Extraction; Image Classification; Optimized KNWFE; Hyperspectral; Kernel

References
[1] H. Li, H. Zhou, L. Pan, and Q. Du, “Gabor feature-based composite kernel method for hyperspectral image classification,” Electron. Lett., vol. 54, no. 10, 2018, doi: 10.1049/el.2018.0272.
[2] D. Hong, X. Wu, P. Ghamisi, J. Chanussot, N. Yokoya, and X. X. Zhu, “Invariant Attribute Profiles: A Spatial-Frequency Joint Feature Extractor for Hyperspectral Image Classification,” IEEE Trans. Geosci. Remote Sens., pp. 1–18, 2020, doi: 10.1109/TGRS.2019.2957251.
[3] S. Suresh and S. Lal, “A Metaheuristic Framework based Automated Spatial-Spectral Graph for Land Cover Classification from Multispectral and Hyperspectral Satellite Images,” Infrared Phys. Technol., vol. 105, p. 103172, 2020, doi: 10.1016/j.infrared.2019.103172.
[4] P. Xiang et al., “Hyperspectral anomaly detection by local joint subspace process and support vector machine,” Int. J. Remote Sens., vol. 41, no. 10, pp. 3798–3819, 2020.
[5] P. Ghamisi, J. Plaza, Y. Chen, J. Li, and A. J. Plaza, “Advanced spectral classifiers for hyperspectral images: A review,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 1, pp. 8–32, 2017.
[6] L. Fang, Z. Liu, and W. Song, “Deep Hashing Neural Networks for Hyperspectral Image Feature Extraction,” IEEE Geosci. Remote Sens. Lett., vol. PP, pp. 1–5, 2019, doi: 10.1109/LGRS.2019.2899823.
[7] M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “Deep&Dense Convolutional Neural Network for Hyperspectral Image Classification,” Remote Sens., vol. 10, no. 9, pp. 1–21, 2018, doi: 10.3390/rs10091454.
[8] H. Lee, M. Kim, D. Jeong, S. Delwiche, K. Chao, and B.-K. Cho, “Detection of cracks on tomatoes using a hyperspectral near-infrared reflectance imaging system,” Sensors, vol. 14, no. 10, pp. 18837–18850, 2014.
[9] B.-C. Kuo and D. A. Landgrebe, “Nonparametric weighted feature extraction for classification,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 5, pp. 1096–1105, 2004.
[10] M. R. Almeida, L. P. L. Logrado, J. J. Zacca, D. N. Correa, and R. J. Poppi, “Raman hyperspectral imaging in conjunction with independent component analysis as a forensic tool for explosive analysis: The case of an ATM explosion,” Talanta, vol. 174, pp. 628–632, 2017.
[11] Z. Chen, J. Jiang, X. Jiang, X. Fang, and Z. Cai, “Spectral-spatial feature extraction of hyperspectral images based on propagation filter,” Sensors (Switzerland), vol. 18, no. 6, pp. 1–16, 2018, doi: 10.3390/s18061978.
[12] J. Jiang, J. Ma, C. Chen, Z. Wang, Z. Cai, and L. Wang, “SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery,” IEEE Trans. Geosci. Remote Sens., vol. 56, no. 8, pp. 4581–4593, Aug. 2018, doi: 10.1109/TGRS.2018.2828029.
[13] H. Su, B. Zhao, Q. Du, and P. Du, “Kernel Collaborative Representation With Local Correlation Features for Hyperspectral Image Classification,” IEEE Trans. Geosci. Remote Sens., vol. PP, pp. 1–12, 2018, doi: 10.1109/TGRS.2018.2866190.
[14] G. Camps-Valls and L. Bruzzone, “Kernel-based methods for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 43, no. 6, pp. 1351–1362, Jun. 2005, doi: 10.1109/TGRS.2005.846154.
[15] M. Khodadadzadeh, P. Ghamisi, C. Contreras, and R. Gloaguen, “Subspace Multinomial Logistic Regression Ensemble for Classification of Hyperspectral Images,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Jul. 2018, pp. 5740–5743, doi: 10.1109/IGARSS.2018.8519404.
[16] S. Song, H. Zhou, J. Zhou, K. Qian, K. Cheng, and Z. Zhang, “Hyperspectral anomaly detection based on anomalous component extraction framework,” Infrared Phys. Technol., vol. 96, pp. 340–350, 2019, doi: 10.1016/j.infrared.2018.12.008.
[17] E. Blanzieri and F. Melgani, “Nearest Neighbor Classification of Remote Sensing Images With the Maximal Margin Principle,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1804–1811, Jun. 2008, doi: 10.1109/TGRS.2008.916090.
[18] D. Tuia and G. Camps-Valls, “Semisupervised Remote Sensing Image Classification With Cluster Kernels,” IEEE Geosci. Remote Sens. Lett., vol. 6, no. 2, pp. 224–228, Apr. 2009, doi: 10.1109/LGRS.2008.2010275.
[19] Y. Chen, N. M. Nasrabadi, and T. D. Tran, “Hyperspectral Image Classification via Kernel Sparse Representation,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 1, pp. 217–231, 2013.
[20] X. Weng, W. Lei, and X. Ren, “Kernel sparse representation for hyperspectral unmixing based on high mutual coherence spectral library,” Int. J. Remote Sens., vol. 41, no. 4, pp. 1286–1301, 2020, doi: 10.1080/01431161.2019.1666215.
[21] Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, “Nonlocal Patch Tensor Sparse Representation for Hyperspectral Image Super-Resolution,” IEEE Trans. Image Process., vol. 28, no. 6, pp. 3034–3047, 2019, doi: 10.1109/TIP.2019.2893530.
[22] G. Cheng, Z. Li, J. Han, X. Yao, and L. Guo, “Exploring Hierarchical Convolutional Features for Hyperspectral Image Classification,” IEEE Trans. Geosci. Remote Sens., vol. PP, pp. 1–11, 2018, doi: 10.1109/TGRS.2018.2841823.
[23] M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “A new deep convolutional neural network for fast hyperspectral image classification,” ISPRS J. Photogramm. Remote Sens., vol. 145, pp. 120–147, 2018, doi: 10.1016/j.isprsjprs.2017.11.021.
[24] B. Pan, Z. Shi, and X. Xu, “MugNet: Deep learning for hyperspectral image classification using limited samples,” ISPRS J. Photogramm. Remote Sens., 2017, doi: 10.1016/j.isprsjprs.2017.11.003.
[25] O. Okwuashi and C. E. Ndehedehe, “Deep support vector machine for hyperspectral image classification,” Pattern Recognit., vol. 103, pp. 2–25, 2020, doi: 10.1016/j.patcog.2020.107298.
[26] B. C. Kuo, C. H. Li, and J. M. Yang, “Kernel nonparametric weighted feature extraction for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 4, pp. 1139–1155, 2009, doi: 10.1109/TGRS.2008.2008308.
[27] L. Sun, C. Ma, Y. Chen, H. J. Shim, Z. Wu, and B. Jeon, “Adjacent superpixel-based multiscale spatial-spectral kernel for hyperspectral classification,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 12, no. 6, pp. 1905–1919, 2019.
[28] T. Zhan, L. Sun, Y. Xu, G. Yang, Y. Zhang, and Z. Wu, “Hyperspectral classification via superpixel kernel learning-based low rank representation,” Remote Sens., vol. 10, no. 10, p. 1639, 2018.
[29] J. Liu, Z. Wu, Z. Xiao, and J. Yang, “Region-based relaxed multiple kernel collaborative representation for hyperspectral image classification,” IEEE Access, vol. 5, pp. 20921–20933, 2017.
[30] Y. Xu, B. Du, F. Zhang, and L. Zhang, “Hyperspectral image classification via a random patches network,” ISPRS J. Photogramm. Remote Sens., vol. 142, pp. 344–357, 2018.
1- Introduction
Hyperspectral image (HSI) classification is widely used in many fields such as agriculture, mineralogy [1], environmental monitoring, and material analysis [2]. An HSI contains joint spatial-spectral information: for each location in the image plane, it records the spectrum over the visible, near-infrared, and short-wavelength infrared ranges. Such images are usually acquired by airborne or spaceborne spectrometers [3]. They consist of many spectral bands and complex spatial structures that carry a large amount of information over a wide spectral range, so each pixel vector is a highly detailed spectral signature of the captured land-cover material. Since the types of materials on the ground are better identified using HSIs, these images can be used in many applications that rely on surface analysis. A central step in HSI analysis is classification, whose goal is to assign a unique class label to each pixel vector.
Support vector machines (SVMs) are one example of an HSI classification method [4]; an SVM searches for an optimal hyperplane that separates the data in a multi-dimensional feature space. Other widely used spectral classification methods include k-nearest neighbors, maximum likelihood, logistic regression, and neural networks [5]. To avoid a heavy computational burden and to increase classification accuracy, it is recommended to use dimensionality reduction techniques [6]. Over the past several years, many feature extraction and classification methods have been presented for hyperspectral data [1], [7]. An example of supervised dimensionality reduction is linear discriminant analysis [8]. In addition, non-parametric weighted feature extraction (NWFE) [9], local joint subspace (LJS) detection [4], independent component analysis [10], principal component analysis [11], superpixelwise PCA [12], and semi-supervised discriminant analysis (SDA) [12] are dimensionality reduction methods considered by the community. However, due to the imbalance between the limited number of training samples and the high dimensionality of the data, HSI classification is still a highly challenging task [13].
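For orientation, the following is a minimal sketch of the conventional pixel-wise pipeline discussed above: dimensionality reduction with PCA followed by an RBF-kernel SVM classifier. It is not the method proposed in this paper; the cube size, the number of retained components, and the SVM parameters are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative data: an HSI cube of shape (rows, cols, bands) with a label map.
# Label 0 marks unlabeled pixels; labeled pixels are used for training.
rng = np.random.default_rng(0)
cube = rng.random((50, 50, 103))            # e.g. 103 spectral bands
labels = rng.integers(0, 4, size=(50, 50))  # 0 = unlabeled, 1..3 = classes

X = cube.reshape(-1, cube.shape[-1])        # pixels as spectral vectors
y = labels.ravel()
mask = y > 0                                # keep only labeled pixels for training

# Dimensionality reduction + kernel classifier (parameters are assumptions).
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    SVC(kernel="rbf", C=100.0, gamma="scale"),
)
clf.fit(X[mask], y[mask])

# Classify every pixel and reshape back to a thematic map.
prediction_map = clf.predict(X).reshape(labels.shape)
print(prediction_map.shape)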
In hyperspectral image classification, each pixel is labeled with one of the classes based on its features. The SVM is known as a powerful method for HSI classification [14]. Another widely used classifier is multinomial logistic regression, which uses the logistic function to provide posterior class probabilities; in [15], an ensemble of subspace multinomial logistic regression classifiers is used for HSI classification. An anomalous component extraction framework for the detection of hyperspectral anomalies, based on independent component analysis (ICA) and orthogonal subspace projection (OSP), is proposed in [16]. Kernel-based SVM approaches can offer satisfying performance in HSI classification. Blanzieri and Melgani showed that using a nonlinear kernel with a local k-nearest-neighbor adaptation improves the performance of localized SVM approaches [17]. A regularization method is proposed in [18] to address the issue of kernel predetermination; the technique identifies the kernel structure through the analysis of unlabeled samples. Li et al. proposed an HSI classifier that projects Gabor features of the hyperspectral image into the kernel-induced space through a composite kernel technique [1]. Representation-based methods such as sparse representation have proven promising in pattern recognition. Sparse representation classification of HSI is based on the assumption that pixels belonging to the same class lie in the same subspace; in [19], it is applied to HSI classification in a feature space induced by a kernel function. Sparse representation is now also a popular method in hyperspectral unmixing: Weng et al. used a kernel to map hyperspectral data and library atoms into a suitable space in order to unmix the hyperspectral information [20]. Sparse representation has likewise been used to enhance hyperspectral images [21].
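Several of the works above combine kernels; the following hedged sketch shows the generic composite-kernel idea, in which a spectral kernel and a spatial kernel are mixed with a weight mu and passed to an SVM as a precomputed Gram matrix. The feature dimensions, gamma values, and mu are illustrative assumptions and do not correspond to the settings of the cited papers.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(spectral, spatial, mu=0.6, gamma_w=0.1, gamma_s=0.1):
    """Weighted sum of a spectral and a spatial RBF kernel (illustrative)."""
    K_spectral = rbf_kernel(spectral, spectral, gamma=gamma_w)
    K_spatial = rbf_kernel(spatial, spatial, gamma=gamma_s)
    return mu * K_spectral + (1.0 - mu) * K_spatial

# Toy features: spectral vectors and spatial features (e.g. neighborhood means).
rng = np.random.default_rng(1)
spectral = rng.random((200, 103))
spatial = rng.random((200, 103))
y = rng.integers(1, 4, size=200)

K_train = composite_kernel(spectral, spatial)
svm = SVC(kernel="precomputed", C=100.0).fit(K_train, y)
print(svm.predict(K_train)[:10])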
Recently, a variety of deep learning-based algorithms have shown promising performance in various applications, including HSI classification [22]. Owing to the success of deep learning in pattern recognition, it has attracted many researchers in hyperspectral image classification and analysis [23], [24]. In [23], a convolutional neural network (CNN) architecture is proposed for HSI classification: a 3-D network that uses both spectral and spatial information, with a border-mirroring strategy to effectively process the border areas of the image, implemented on graphics processing units. In [24], a simplified deep neural network called MugNet is proposed; it exploits the relationships between different spectral bands and between neighboring pixels, and generates its convolution kernels in a semi-supervised manner. The application of deep SVMs to HSI classification, using four kernel functions, is investigated in [25].
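To make the 3-D spectral-spatial idea concrete, here is a minimal sketch of a small 3-D CNN that maps HSI patches of shape (bands, height, width) to class scores. It is a generic illustration rather than the architecture of [23] or [24]; the layer sizes, patch size, and number of classes are assumptions.

import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Tiny 3-D CNN over spectral-spatial HSI patches (illustrative only)."""
    def __init__(self, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # collapse spectral and spatial dimensions
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):              # x: (batch, 1, bands, height, width)
        z = self.features(x).flatten(1)
        return self.classifier(z)

# One forward pass on a dummy batch of 7x7 patches with 103 bands.
model = Small3DCNN(n_classes=9)
patches = torch.randn(4, 1, 103, 7, 7)
logits = model(patches)
print(logits.shape)   # torch.Size([4, 9])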
However, it is commonly necessary to pre-process the spectral information before it is used in HSI analysis. This pre-processing includes reducing the number of bands using proper techniques. Here, non-parametric weighted feature extraction (NWFE) has shown promising results in HSI dimension reduction [9], and it was further improved in [26] as KNWFE by taking advantage of the kernel method. In this paper, we improve the within-class and between-class scatter matrices by correcting the sample weightings.
The rest of this paper is organized as follows: Section 2 overviews the KNWFE method; Section 3 presents our corrections to KNWFE; Section 4 reports the experiments; and Section 5 concludes the paper.
2- Related Work
Most of the time, HSIs are not linearly separable. Therefore, kernel methods are used to project the data into a feature space in which the classes are linearly separable. A kernel function is a similarity function that corresponds to an inner product in some expanded feature space. Popular kernel functions include the linear kernel, the polynomial kernel, and the Gaussian radial basis function (RBF) kernel.
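For completeness, the standard forms of these three kernels are given below; here x and y are sample vectors, and the degree d, the offset c, and the width parameter γ are user-chosen hyperparameters.

$$k_{\mathrm{linear}}(\mathbf{x},\mathbf{y}) = \mathbf{x}^{\top}\mathbf{y}, \qquad k_{\mathrm{poly}}(\mathbf{x},\mathbf{y}) = \left(\mathbf{x}^{\top}\mathbf{y} + c\right)^{d}, \qquad k_{\mathrm{RBF}}(\mathbf{x},\mathbf{y}) = \exp\!\left(-\gamma \,\lVert \mathbf{x}-\mathbf{y} \rVert^{2}\right).$$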
The proposed algorithm is a nonlinear kernel method based on the nonparametric weighted feature extraction (NWFE) approach [26]. NWFE is a nonparametric method for high-dimensional multi-class pattern recognition problems and is built on a nonparametric expression of the scatter matrices. The algorithm first calculates the Euclidean distance between each pair of samples and places it in a distance matrix. The weight matrix is then computed from the distance matrix, and a weighted mean is obtained for each sample by putting different weights on the surrounding samples. Next, the distance between each sample and its weighted means is calculated as a measure of its closeness to the class boundary. Finally, nonparametric between-class and within-class scatter matrices are defined so as to put large weights on the samples close to the boundary and to deemphasize samples far from the boundary. These matrices are defined respectively as [26]:
$$S_b^{\mathrm{NW}} = \sum_{i=1}^{L} P_i \sum_{\substack{j=1 \\ j \neq i}}^{L} \sum_{k=1}^{n_i} \frac{\lambda_k^{(i,j)}}{n_i}\, \bigl(x_k^{(i)} - M_j(x_k^{(i)})\bigr)\bigl(x_k^{(i)} - M_j(x_k^{(i)})\bigr)^{\top}$$

$$S_w^{\mathrm{NW}} = \sum_{i=1}^{L} P_i \sum_{k=1}^{n_i} \frac{\lambda_k^{(i,i)}}{n_i}\, \bigl(x_k^{(i)} - M_i(x_k^{(i)})\bigr)\bigl(x_k^{(i)} - M_i(x_k^{(i)})\bigr)^{\top}$$

where $L$ is the number of classes, $P_i$ and $n_i$ are the prior probability and the number of training samples of class $i$, and $\lambda_k^{(i,j)}$ is the scatter matrix weight, defined by

$$\lambda_k^{(i,j)} = \frac{\operatorname{dist}\bigl(x_k^{(i)}, M_j(x_k^{(i)})\bigr)^{-1}}{\sum_{t=1}^{n_i} \operatorname{dist}\bigl(x_t^{(i)}, M_j(x_t^{(i)})\bigr)^{-1}}$$

with

$$M_j(x_k^{(i)}) = \sum_{t=1}^{n_j} w_{kt}^{(i,j)} x_t^{(j)}, \qquad w_{kt}^{(i,j)} = \frac{\operatorname{dist}\bigl(x_k^{(i)}, x_t^{(j)}\bigr)^{-1}}{\sum_{s=1}^{n_j} \operatorname{dist}\bigl(x_k^{(i)}, x_s^{(j)}\bigr)^{-1}},$$

where $M_j(x_k^{(i)})$ denotes the weighted mean of $x_k^{(i)}$ with respect to class $j$, $\operatorname{dist}(a,b)$ is the Euclidean distance between $a$ and $b$, and $w_{kt}^{(i,j)}$ is the corresponding weight.
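To make the computation concrete, here is a minimal NumPy sketch of the NWFE scatter matrices defined above. It assumes equal class priors P_i = 1/L, adds a small eps to keep the inverse-distance weights finite, excludes a sample from its own weighted mean in the within-class case, and finishes with the regularization and eigen-decomposition step commonly used with NWFE; it is an illustrative sketch, not the implementation of [9] or [26].

import numpy as np

def weighted_means(Xi, Xj, exclude_self=False, eps=1e-12):
    """Weighted means M_j(x_k^(i)) of the samples in Xi with respect to class j."""
    d = np.linalg.norm(Xi[:, None, :] - Xj[None, :, :], axis=2)  # pairwise Euclidean distances
    if exclude_self:                       # i == j: ignore the zero self-distance
        np.fill_diagonal(d, np.inf)
    w = 1.0 / (d + eps)                    # inverse-distance weights w_kt^(i,j)
    w /= w.sum(axis=1, keepdims=True)
    return w @ Xj                          # one weighted mean per sample of class i

def nwfe_scatter(X, y, eps=1e-12):
    """Nonparametric between-class (Sb) and within-class (Sw) scatter matrices."""
    classes = np.unique(y)
    L, dim = len(classes), X.shape[1]
    Sb, Sw = np.zeros((dim, dim)), np.zeros((dim, dim))
    prior = 1.0 / L                        # equal class priors P_i (assumption)
    for ci in classes:
        Xi = X[y == ci]
        ni = len(Xi)
        for cj in classes:
            Xj = X[y == cj]
            M = weighted_means(Xi, Xj, exclude_self=(ci == cj), eps=eps)
            diff = Xi - M                  # x_k^(i) - M_j(x_k^(i))
            lam = 1.0 / (np.linalg.norm(diff, axis=1) + eps)
            lam /= lam.sum()               # scatter-matrix weights lambda_k^(i,j)
            S = (lam[:, None] * diff).T @ diff * (prior / ni)
            if ci == cj:
                Sw += S
            else:
                Sb += S
    return Sb, Sw

# Toy usage: extract ten NWFE features from random labeled spectra.
rng = np.random.default_rng(0)
X = rng.random((150, 30))
y = rng.integers(0, 3, size=150)
Sb, Sw = nwfe_scatter(X, y)
Sw_reg = 0.5 * Sw + 0.5 * np.diag(np.diag(Sw))        # common NWFE regularization of Sw
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw_reg) @ Sb)
order = np.argsort(eigvals.real)[::-1]
features = X @ eigvecs[:, order[:10]].real            # project onto leading eigenvectors
print(features.shape)                                 # (150, 10)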
Although NWFE performs better than LDA, it is still a linear method. The KNWFE method, a kernel-based nonlinear version of NWFE, was presented to derive features from non-Gaussian data [26]. In this method, the inner product between samples in the scatter matrices is replaced by $k(x_i, x_j)$, where $k(\cdot,\cdot)$ is a kernel function.
2-1- Kernel Nonparametric Weighted Feature Extraction
The strategy of kernel-based methods is to map the data from the original space into a higher-dimensional Hilbert space, where the data are expected to be more separable. The kernel (Gram) matrix is an N×N matrix, where N is the total number of samples. In KNWFE, a weight matrix is first defined based on the data.
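As an illustration of this first building block, the sketch below constructs the N×N RBF Gram matrix for a set of training spectra. The choice of the RBF kernel and of gamma is an illustrative assumption; the KNWFE-specific weighting and scatter-matrix construction that follow in the equations below are not reproduced here.

import numpy as np

def rbf_gram_matrix(X, gamma=0.5):
    """N x N Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# Toy usage: 200 training spectra with 103 bands each.
rng = np.random.default_rng(0)
X = rng.random((200, 103))
K = rbf_gram_matrix(X, gamma=0.5)
print(K.shape, np.allclose(K, K.T))   # (200, 200) True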
[Equations (6)–(21), which detail the KNWFE formulation, could not be recovered from the source document.]