Drone Detection by Neural Network Using GLCM and SURF Features
Subject Areas: Image Processing
Tanzia Ahmed 1, Tanvir Rahman 2, Bir Ballav Roy 3, Jia Uddin 4 *
1 - Software Engineering
2 - Brac University
3 - Brac University
4 - Woosong University
Keywords: Feature Extraction, GLCM Method, Image Processing, Neural Network, SURF Algorithm
Abstract:
This paper presents a vision-based drone detection method. Numerous studies on object detection employ different feature extraction methods, but each method is typically used on its own. In the proposed model, by contrast, a hybrid feature extraction method combining SURF and GLCM — a combination not previously explored — is used to detect objects with a neural network. Both are widely used feature extraction techniques. Speeded-Up Robust Features (SURF) is a blob detection algorithm that extracts points of interest from an integral image, converting the image into a 2D feature vector. The Gray-Level Co-occurrence Matrix (GLCM) counts the occurrences of pairs of pixels in a given spatial relationship and represents them in an 8 × 8 matrix capturing the most descriptive attributes of an image. SURF enables fast feature extraction and image matching, whereas GLCM distills the most informative attributes. In the proposed model, the images are first preprocessed to fit the feature extraction methods; SURF is then applied to extract features into a 2D vector, and GLCM subsequently condenses that vector into an 8 × 8 matrix of the best possible features. Combining SURF and GLCM therefore improves the quality of the training dataset: features are extracted quickly (with SURF) while the most informative points of interest are retained (with GLCM). The extracted features are used to train and test a neural network, with a pattern recognition algorithm serving as the machine learning tool. In the experimental evaluation, the performance of the proposed model is measured by per-instance cross entropy and percentage error.
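As a rough illustration of the GLCM step described above — not the paper's actual implementation, which appears to use MATLAB's graycomatrix — the sketch below quantizes a grayscale image to 8 gray levels and counts horizontally adjacent pixel pairs, yielding the 8 × 8 co-occurrence matrix. The function name `glcm_8x8`, the choice of the horizontal offset, and the toy image are assumptions for illustration only.

```python
import numpy as np

def glcm_8x8(gray, levels=8):
    """Count co-occurrences of horizontally adjacent pixel pairs.

    gray: 2D uint8 array (0-255). Intensities are quantized into
    `levels` bins, then each pair (p[i, j], p[i, j+1]) increments
    one cell of a levels x levels matrix.
    """
    # Quantize 0-255 intensities into bins 0 .. levels-1.
    q = (gray.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = np.zeros((levels, levels), dtype=np.int64)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)  # accumulate pair counts
    return glcm

# Toy 4x4 "image": 16 pixels -> 12 horizontal pixel pairs.
img = np.array([[  0,   0, 255, 255],
                [  0,   0, 255, 255],
                [128, 128, 128, 128],
                [255, 255,   0,   0]], dtype=np.uint8)
m = glcm_8x8(img)
print(m.shape)  # (8, 8)
print(m.sum())  # 12 pairs counted in total
```

In practice the matrix would be normalized and summarized into texture statistics (contrast, correlation, energy, homogeneity) before being fed to the classifier.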
For the tested drone dataset, experimental results demonstrate improved performance over state-of-the-art models, exhibiting lower cross entropy and percentage error.
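The two evaluation metrics named above can be sketched as follows. This is a minimal illustration, assuming one-hot targets and softmax-style network outputs; the function names, the two-class (drone vs. non-drone) setup, and the sample values are hypothetical, not taken from the paper's experiments.

```python
import numpy as np

def cross_entropy(targets, outputs, eps=1e-12):
    """Mean cross entropy between one-hot targets and predicted probabilities."""
    outputs = np.clip(outputs, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(targets * np.log(outputs), axis=1))

def percentage_error(targets, outputs):
    """Percentage of samples whose predicted class differs from the target."""
    return 100.0 * np.mean(np.argmax(outputs, axis=1) != np.argmax(targets, axis=1))

# Three samples, two classes (e.g. drone vs. non-drone) -- illustrative values.
t = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
y = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(round(cross_entropy(t, y), 4))  # ~0.2798
print(percentage_error(t, y))         # 0.0 -- all three predictions correct
```

Lower values of both metrics indicate a better-performing model, which is the sense in which the abstract reports an improvement.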