• Open Access
    • List of Articles

      • Open Access Article

        1 - Fast Automatic Face Recognition from Single Image per Person Using GAW-KNN
        Hassan Farsi, Mohammad Hasheminejad
        Real-time face recognition systems face several limitations, such as collecting features: with only one training sample per target, fewer feature extraction techniques are available. To obtain acceptable accuracy, most face recognition algorithms need more than one training sample per target; with a single sample, recognition accuracy drops dramatically because of head rotation and variation in illumination. In this paper, a new hybrid face recognition method using a single image per person is proposed, which is robust against illumination variations. To achieve robustness against head variations, a rotation detection and compensation stage is added. The method, called Weighted Graphs and PCA (WGPCA), uses the harmony of face components to extract and normalize features, and a genetic algorithm with a training set learns the most useful features and the real-valued weights associated with individual attributes. The k-nearest neighbor algorithm classifies new faces based on their weighted features against the templates of the training set. Each template contains the corrected distances (graphs) of different points on the face components and the results of Principal Component Analysis (PCA) applied to the output of the face detection rectangle. The proposed hybrid algorithm is trained in MATLAB to determine the best features and their associated weights, and is then implemented in the Delphi XE2 programming environment to recognize faces in real time. The main advantage of this algorithm is its capability to recognize a face from only one picture in real time. The results obtained on the FERET database demonstrate the accuracy and effectiveness of the proposed technique.
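        The classification step described above, a k-NN over features scaled by learned per-feature weights, can be sketched as follows. This is a minimal NumPy illustration with hand-fixed weights and toy templates, whereas the paper learns the weights with a genetic algorithm over its graph-and-PCA features.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, w, k=1):
    """Classify x by majority vote of its k nearest templates under a
    feature-weighted Euclidean distance (w holds one weight per feature)."""
    d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy templates: one sample per person, mirroring the paper's setting.
X_train = np.array([[1.0, 0.0], [0.0, 1.0]])
y_train = np.array([0, 1])
w = np.array([0.9, 0.1])  # illustrative weights; the paper learns these with a GA
pred = weighted_knn_predict(X_train, y_train, np.array([0.8, 0.9]), w, k=1)
```

        With k=1 this reduces to nearest-template matching, which is the natural choice when only one training sample per person exists.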
      • Open Access Article

        2 - Cover Selection Steganography Via Run Length Matrix and Human Visual System
        Sara Nazari, Mohammad Shahram Moin
        A novel approach to steganography cover selection is proposed, based on image texture features and the human visual system. The proposed algorithm employs the run-length matrix to select a set of appropriate images from an image database and creates their stego versions after the embedding process. It then computes the similarity between the original images and their stego versions, using structural similarity as the image quality metric, and selects as the best cover the image with maximum similarity to its stego. Comparison of the proposed cover selection algorithm with other steganography methods confirms that it increases stego quality. We also evaluated the robustness of the algorithm against steganalysis methods such as wavelet-based and block-based steganalysis; the experimental results show that the proposed approach decreases the risk of message-hiding detection.
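        The selection criterion can be illustrated with a simplified, single-window SSIM (the paper applies the full windowed structural similarity metric): among hypothetical cover/stego pairs, pick the cover least distorted by embedding. A NumPy-only sketch:

```python
import numpy as np

def global_ssim(a, b, L=255.0):
    # Single-window SSIM over the whole image: a simplification of the
    # windowed structural similarity metric the paper uses.
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))

def select_cover(covers, stegos):
    # Best cover: the one with maximum similarity to its own stego version.
    return int(np.argmax([global_ssim(c, s) for c, s in zip(covers, stegos)]))

rng = np.random.default_rng(0)
covers = [rng.integers(0, 216, (16, 16)).astype(float) for _ in range(2)]
stegos = [covers[0] + 40.0, covers[1] + 1.0]  # the second embedding distorts less
best = select_cover(covers, stegos)
```

        The texture-based pre-selection with the run-length matrix would run before this step; here the candidate set is assumed to be given.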
      • Open Access Article

        3 - Online Signature Verification: a Robust Approach for Persian Signatures
        Mohammad Esmaeel Yahyatabar, Yasser Baleghi, Mohammad Reza Karami-Mollaei
        In this paper, the specific traits of Persian signatures are applied to signature verification. Efficient features, which can discriminate among Persian signatures, are investigated in this approach. Persian signatures, in comparison with signatures in other languages, have more curvature and end in a specific style; they usually have special characteristics in terms of speed, acceleration, and pen pressure while curves are drawn. An experiment has been designed to determine the functions indicating the most robust features of Persian signatures, and its results are used in the feature extraction stage. To improve verification performance, a combination of shape-based and dynamic extracted features is applied to Persian signature verification, and a Support Vector Machine (SVM) is applied for classification. The proposed method is examined on two common Persian datasets, the new Persian dataset proposed in this paper (Noshirvani Dynamic Signature Dataset), and an international dataset (SVC2004). For the three Persian datasets, the EER values are 3, 3.93, and 4.79, respectively, while for SVC2004 the EER value is 4.43.
      • Open Access Article

        4 - Fusion of Infrared and Visible Images Using Optimal Weights
        Mehrnoush Gholampour, Hassan Farsi, Sajad Mohammadzadeh
        Image fusion is a process in which different images recorded by several sensors from one scene are combined to provide a final image with higher quality than each individual input image. The fusion is performed by maintaining useful features and reducing or removing useless ones, and its aim has to be clearly specified. In this paper we propose a new method which combines visible and infrared images by a weighted average to provide better image quality. The weighted averaging is performed in the gradient domain, and the weight of each image depends on its useful features. Since these images are recorded at night, the useful features are related to clear scene details. For this reason, object detection is applied to the infrared image and used as its weight, and the visible image weight is taken as its complement. The averaging is performed on the gradients of the input images, and the final composed image is obtained by the Gauss-Seidel method. The quality of the image produced by the proposed algorithm is compared to images obtained by state-of-the-art algorithms using quantitative and qualitative measures. The obtained results show that the proposed algorithm provides better image quality.
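        The gradient-domain averaging and Poisson reconstruction can be sketched roughly as follows, in NumPy only. The per-pixel infrared weight `w_ir` is supplied by the caller here (in the paper it comes from object detection), and the iterative sweeps are a vectorized Jacobi-style stand-in for the Gauss-Seidel solver.

```python
import numpy as np

def fuse_gradient_domain(vis, ir, w_ir, iters=300):
    """Blend the gradients of visible and infrared images with per-pixel
    weights, then recover an image by iterative sweeps on the Poisson
    equation. Rough sketch: boundary pixels stay fixed at the visible image."""
    w_vis = 1.0 - w_ir
    gx = w_ir * np.gradient(ir, axis=1) + w_vis * np.gradient(vis, axis=1)
    gy = w_ir * np.gradient(ir, axis=0) + w_vis * np.gradient(vis, axis=0)
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)  # divergence of blend
    f = vis.astype(float).copy()  # initial guess
    for _ in range(iters):
        f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                                f[1:-1, :-2] + f[1:-1, 2:] - div[1:-1, 1:-1])
    return f

rng = np.random.default_rng(0)
vis = rng.random((32, 32))
ir = rng.random((32, 32))
fused = fuse_gradient_domain(vis, ir, w_ir=np.full((32, 32), 0.5))
```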
      • Open Access Article

        5 - Application of Curve Fitting in Hyperspectral Data Classification and Compression
        S. Abolfazl Hosseini
        Given the high between-band correlation and large volume of hyperspectral data, feature reduction (either feature selection or extraction) is an important part of the classification process for this data type. A variety of feature reduction methods have been developed using the spectral and spatial domains. In this paper, a feature extraction technique is proposed based on rational function curve fitting. For each pixel of a hyperspectral image, a specific rational function approximation is developed to fit the spectral response curve of that pixel. The coefficients of the numerator and denominator polynomials of these functions are taken as the new extracted features. This technique is based on the fact that the ordering of reflectance coefficients in the spectral response curve contains information which is not considered by statistical-analysis-based methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and their nonlinear versions. We also show that naturally different curves can be approximated by rational functions of equal form but with different coefficient values. Maximum likelihood classification results demonstrate that the Rational Function Curve Fitting Feature Extraction (RFCF-FE) method provides better classification accuracy than competing feature extraction algorithms. The method also supports lossy data compression, since the original data can be reconstructed from the fitted curves. In addition, the proposed algorithm can be applied to all pixels of an image individually and simultaneously, unlike PCA and other methods which need the whole dataset to compute the transform matrix.
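        Per-pixel fitting can be sketched by linearizing y ≈ p(x)/q(x) into a least-squares problem (fixing the constant term of q to 1); the stacked coefficients then serve as the extracted feature vector. A NumPy sketch on a synthetic spectral curve, with illustrative polynomial degrees:

```python
import numpy as np

def rational_fit_features(x, y, p_deg=2, q_deg=1):
    """Fit y ~ p(x)/q(x), with q's constant term fixed to 1, by solving
    the linearized system y*q(x) = p(x) in least squares. Returns the
    stacked coefficients [p0..p_{p_deg}, q1..q_{q_deg}] as features."""
    P = np.vander(x, p_deg + 1, increasing=True)
    Q = np.vander(x, q_deg + 1, increasing=True)[:, 1:]  # drop constant column
    A = np.hstack([P, -y[:, None] * Q])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x = np.linspace(0.1, 1.0, 20)            # normalized band index
y = (1.0 + 2.0 * x) / (1.0 + 0.5 * x)    # synthetic spectral response curve
feat = rational_fit_features(x, y)
```

        Because the same functional form is fitted to every pixel independently, each pixel can be processed without seeing the rest of the image, which is the property the abstract contrasts with PCA.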
      • Open Access Article

        6 - On-Road Vehicle Detection Based on Hierarchical Clustering Using Adaptive Vehicle Localization
        Moslem Mohammadi Jenghara, Hossein Ebrahimpour Komleh
        Vehicle detection is one of the important tasks in automatic driving. It is a hard problem on which many researchers have focused. Most commercial vehicle detection systems are based on radar, but these methods have some problems, such as difficulty with zigzag motions. Image processing techniques can overcome these problems. This paper introduces a method based on hierarchical clustering using low-level image features for on-road vehicle detection. Each vehicle is treated as a cluster. In traditional clustering methods, the threshold distance for each cluster is fixed; in this paper, an adaptive threshold varies according to the position of each cluster and is computed from a bivariate normal distribution. Sampling and member selection for each cluster are performed by a member-based weighted average. For this purpose, unlike other methods that use only horizontal or vertical lines, a full edge detection algorithm is utilized. Corners are important features of video images commonly used in vehicle detection systems; in this paper, Harris features are applied to detect corners. The LISA dataset is used to evaluate the proposed method, and several experiments are applied to investigate the performance of the proposed algorithm. Experimental results show good performance compared to other algorithms.
      • Open Access Article

        7 - Concept Detection in Images Using SVD Features and Multi-Granularity Partitioning and Classification
        Kamran Farajzadeh, Esmail Zarezadeh, Jafar Mansouri
        New visual and static features, namely the right singular feature vector, left singular feature vector, and singular value feature vector, are proposed for semantic concept detection in images. These features are derived by applying singular value decomposition (SVD) "directly" to the "raw" images. In SVD features, edge, color, and texture information is integrated simultaneously and sorted by its importance for concept detection. Feature extraction is performed in a multi-granularity partitioning manner. In contrast to existing systems, classification is carried out for each grid partition of each granularity separately, which decouples the classification of partitions containing the target concept from those without it. Since SVD features have high dimensionality, classification is carried out with the K-nearest neighbor (K-NN) algorithm using a new and "stable" distance function, namely the multiplicative distance. Experimental results on the PASCAL VOC and TRECVID datasets show the effectiveness of the proposed SVD features and the multi-granularity partitioning and classification method.
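        The core feature extraction is just an SVD of the raw pixel grid; a minimal NumPy sketch follows. The dimensionality choices are illustrative, and the multi-granularity partitioning and K-NN stages are omitted.

```python
import numpy as np

def svd_features(patch, k=8):
    """Apply SVD directly to a raw image patch and keep the leading left
    singular vector, leading right singular vector, and top-k singular
    values; singular values come out sorted by importance."""
    U, s, Vt = np.linalg.svd(patch.astype(float), full_matrices=False)
    return np.concatenate([U[:, 0], Vt[0, :], s[:k]])

patch = np.arange(256, dtype=float).reshape(16, 16)  # stand-in for one grid partition
feat = svd_features(patch)
```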
      • Open Access Article

        8 - Improved Generic Object Retrieval in Large-Scale Databases by SURF Descriptor
        Hassan Farsi, Reza Nasiripour, Sajad Mohammadzadeh
        Normally, state-of-the-art methods in the field of object retrieval for large databases rely on a training process. We propose a novel large-scale generic object retrieval method which uses only a single query image and is training-free. Current object retrieval methods require part of the image database for training to construct the classifier, which can be supervised, unsupervised, or semi-supervised. In the proposed method, the query image can be a typical real image of the object. The object model is constructed from Speeded Up Robust Features (SURF) points acquired from the image: information about the relative positions, scale, and orientation between SURF points is calculated and built into the model. Dynamic programming is used to try all possible combinations of SURF points between the query and dataset images. The ability to match partially affine-transformed object images comes from the robustness of SURF points and the flexibility of the model. Occlusion is handled by specifying the probability of a missing SURF point in the model. Experimental results show that this matching technique is robust under partial occlusion and rotation. The properties and performance of the proposed method are demonstrated on large databases. The obtained results illustrate that the proposed method improves efficiency, speeds up retrieval, and reduces the storage space.
      • Open Access Article

        9 - Mitosis Detection in Breast Cancer Histological Images Based on Texture Features Using AdaBoost
        Sooshiant Zakariapour, Hamid Jazayeri, Mehdi Ezoji
        Counting the mitotic figures present in tissue samples from a patient with cancer plays a crucial role in assessing the patient's survival chances. In clinical practice, mitotic cells are counted manually by pathologists in order to grade the proliferative activity of breast tumors. However, detecting mitoses under a microscope is a laborious, time-consuming task which can benefit from computer-aided diagnosis. In this research we aim to detect mitotic cells present in breast cancer tissue using only texture and pattern features. To classify cells into mitotic and non-mitotic classes, we use an AdaBoost classifier, an ensemble learning method which combines other (weak) classifiers into a strong classifier. Eleven different classifiers were used separately as base learners, and their classification performance was recorded. The proposed ensemble classifier is tested on the standard MITOS-ATYPIA-14 dataset, where a pixel window around each cell's center was extracted to be used as training data. It was observed that an AdaBoost using logistic regression as its base learner achieved an F1 score of 0.85 using only texture features as input, a significant improvement over the status quo. It was also observed that decision trees provide the best recall among the base classifiers and random forests the best precision.
      • Open Access Article

        10 - The Separation of Radar Clutters Using Multi-Layer Perceptron
        Mohammad Akhondi Darzikolaei, Ataollah Ebrahimzadeh, Elahe Gholami
        Clutter usually has a negative influence on the detection performance of radars, so the recognition of clutter is crucial for detecting targets, and its role in detection cannot be ignored. The design of radar detectors and clutter classifiers is a really complicated issue. Therefore, this paper aims to classify radar clutters. A novel MLP-based classifier for separating radar clutters is introduced. This classifier is designed with different hidden layers and five training algorithms: Levenberg-Marquardt, conjugate gradient, resilient back-propagation, BFGS, and one-step secant. Statistical distributions are established models widely used in performance calculations for radar clutter; hence, in this research, Rayleigh, log-normal, Weibull, and K-distribution clutters are utilized as input data. Burg's reflection coefficients, skewness, and kurtosis are the three features applied to extract the best characteristics of the input data. In the next step, the proposed classifier is tested under different conditions, and the results show that the proposed MLP-based classifier is very successful and can distinguish clutters with high accuracy. Comparing the results of the proposed technique with an RBF-based classifier shows that the proposed method is more efficient. The simulation results prove the validity of the MLP-based method.
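        The pipeline (simulate clutter from the named distributions, extract statistical features, train an MLP) can be sketched with scikit-learn. Only two of the four clutter families and two of the three feature types are shown; Burg's reflection coefficients are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def skew_kurt(x):
    # Skewness and kurtosis per sample vector, two of the paper's features.
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return np.stack([(z ** 3).mean(axis=1), (z ** 4).mean(axis=1)], axis=1)

rng = np.random.default_rng(0)
ray = rng.rayleigh(1.0, size=(300, 256))          # Rayleigh clutter, class 0
logn = rng.lognormal(0.0, 1.0, size=(300, 256))   # log-normal clutter, class 1
X = StandardScaler().fit_transform(np.vstack([skew_kurt(ray), skew_kurt(logn)]))
y = np.repeat([0, 1], 300)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
acc = clf.score(X, y)
```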
      • Open Access Article

        11 - Eye Gaze Detection Based on Learning Automata by Using SURF Descriptor
        Hassan Farsi, Reza Nasiripour, Sajad Mohammadzadeh
        In the last decade, eye gaze detection has become one of the most important areas in image processing and computer vision. The performance of an eye gaze detection system depends on iris detection and recognition (IR), and iris recognition plays a very important role in person identification. The aim of this paper is to achieve a higher recognition rate compared to learning automata based methods. Iris retrieval based systems usually consist of several parts applied to the captured eye region: pre-processing, iris detection, normalization, feature extraction, and classification. In this paper, a new method without the normalization step is proposed. The Speeded Up Robust Features (SURF) descriptor is used to extract features of the iris images; the descriptor of each iris image creates a vector with 64 dimensions. For the classification step, a learning automata classifier is applied. The proposed method is tested on three known iris databases: UBIRIS, MMU, and UPOL. It achieves recognition rates of 100% for the UBIRIS and UPOL databases and 99.86% for the MMU iris database. Also, the EER rates of the proposed method for the UBIRIS, UPOL, and MMU iris databases are 0.00%, 0.00%, and 0.008%, respectively. Experimental results show that the proposed learning automata classifier results in minimum classification error and improves precision and computation time.
      • Open Access Article

        12 - Handwritten Digits Recognition Using an Ensemble Technique Based on the Firefly Algorithm
        Azar Mahmoodzadeh, Hamed Agahi, Marzieh Salehi
        This paper develops a multi-step procedure for classifying Farsi handwritten digits using a combination of classifiers. Generally, the technique relies on extracting a set of characteristics from handwritten samples, training multiple classifiers to learn to discriminate between digits, and finally combining the classifiers to enhance the overall system performance. First, a pre-processing course is performed to prepare the images for the main steps. Then three structural and statistical characteristics comprising several features are extracted, among which a multi-objective genetic algorithm selects the more effective ones in order to reduce the computational complexity of the classification step. For the base classification, a decision tree (DT), an artificial neural network (ANN), and a k-nearest neighbor (KNN) model are employed. Finally, the outcomes of the classifiers are fed into a classifier ensemble system to make the final decision. This hybrid system assigns different weights to each class selected by each classifier. These voting weights are adjusted by the metaheuristic firefly algorithm, which optimizes the accuracy of the overall system. The performance of the implemented approach on the standard HODA dataset is compared with the base classifiers and some state-of-the-art methods. The evaluation demonstrates that the proposed hybrid system attains high performance indices, including an accuracy of 98.88% with only eleven features.
      • Open Access Article

        13 - Long-Term Spectral Pseudo-Entropy (LTSPE): A New Robust Feature for Speech Activity Detection
        Mohammad Rasoul Kahrizi, Seyed Jahanshah Kabudian
        Speech detection systems are a type of audio classifier used to recognize, detect, or mark parts of an audio signal that contain human speech. Applications of such systems include speech enhancement, noise cancellation, identification, reducing the size of audio signals in communication and storage, and many others. Here, a novel robust feature named Long-Term Spectral Pseudo-Entropy (LTSPE) is proposed for speech detection; its purpose is to improve performance in combination with other features and to increase accuracy. To this end, the proposed method is compared to other new and well-known methods in this field under two different conditions: with and without a well-known speech enhancement algorithm used to improve the quality of the audio signals. This research uses the MUSAN dataset, which includes a large number of audio signals in the form of music, speech, and noise, together with various known machine learning methods. The criteria for measuring accuracy and error in this paper are the F-score and the Equal Error Rate (EER), respectively. Experimental results on the MUSAN dataset show that when the proposed LTSPE feature is combined with other features, the performance of the detector improves. Moreover, this feature has higher accuracy and lower error compared to similar ones.
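        LTSPE itself is the paper's contribution, but its starting point, the entropy of a normalized power spectrum, is classical and easy to sketch: a tonal, speech-like frame yields lower spectral entropy than white noise.

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy (bits) of the frame's normalized power spectrum.
    This is the classic short-term variant; LTSPE extends the idea to
    long-term spectral information."""
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.arange(512) / 8000.0
tone = np.sin(2 * np.pi * 440.0 * t)   # concentrated, structured spectrum
noise = rng.standard_normal(512)        # flat spectrum

e_tone = spectral_entropy(tone)
e_noise = spectral_entropy(noise)
```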
      • Open Access Article

        14 - Graph Based Feature Selection Using Symmetrical Uncertainty in Microarray Dataset
        Soodeh Bakhshandeh, Azmi Azmi, Mohammad Teshnehlab
        Microarray data, with small sample sizes and thousands of genes, poses a difficult challenge for researchers. Gene selection in microarray data helps to select the most relevant genes from the original dataset, reducing the dimensionality of the microarray data as well as increasing the prediction performance. In this paper, a new gene selection method is proposed based on a community detection technique and ranking of the best genes. In the first step, symmetric uncertainty is used to select the best genes by calculating the similarity between pairs of genes and between each gene and the class label, which leads to a representation of the search space as a graph. Afterwards, the graph is divided into several clusters using a community detection algorithm, and finally, after ranking the genes, those with maximum ranks are selected as the best genes. This approach is a supervised/unsupervised filter-based gene selection method that minimizes the redundancy between genes and maximizes the relevance of genes to the class label. The performance of the proposed method is compared with thirteen well-known unsupervised/supervised gene selection approaches on six microarray datasets using four classifiers: SVM, DT, NB, and k-NN. The results show the advantages of the proposed approach.
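        Symmetric uncertainty, the similarity measure used to build the graph, is straightforward to compute for discretized genes; a NumPy sketch with toy expression vectors:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetric_uncertainty(x, y):
    """SU(X,Y) = 2*I(X;Y) / (H(X) + H(Y)), normalized to [0, 1]; the
    paper computes it gene-to-gene and gene-to-class-label."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0.0:
        return 0.0
    h_joint = entropy([f"{a}|{b}" for a, b in zip(x, y)])  # joint entropy
    return 2.0 * (hx + hy - h_joint) / (hx + hy)

gene = [0, 0, 1, 1, 0, 1]    # discretized expression levels
label = [0, 0, 1, 1, 0, 1]   # class label perfectly predicted by the gene
weak = [0, 1, 0, 1, 0, 1]    # nearly unrelated labeling
```

        SU equals 1 for a perfectly informative gene and approaches 0 for an unrelated one, which is what makes it usable both as a graph edge weight and as a relevance score.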
      • Open Access Article

        15 - Drone Detection by Neural Network Using GLCM and SURF Features
        Tanzia Ahmed, Tanvir Rahman, Bir Ballav Roy, Jia Uddin
        This paper presents a vision-based drone detection method. There is a substantial body of research on object detection involving different feature extraction methods, all of which are used separately. In the proposed model, by contrast, a hybrid feature extraction method using SURF and GLCM is used to detect objects with a neural network, a combination which has not been tried before. Speeded-Up Robust Features (SURF) is a blob detection algorithm which extracts points of interest from an integral image and converts the image into a 2D vector, offering fast feature extraction and matching. The Gray-Level Co-occurrence Matrix (GLCM) counts the occurrences of pairs of pixels in the same spatial relationship and represents them as an 8 × 8 matrix of the best possible attributes of an image. In the proposed model, the images are first processed to fit the feature extraction methods; the SURF method then extracts features from those images into a 2D vector, and GLCM extracts the best possible features from that vector into an 8 × 8 matrix. Combining SURF and GLCM thus ensures the quality of the training dataset, both extracting features quickly (with SURF) and keeping the best points of interest (with GLCM). The extracted features are used in the neural network for training and testing, with a pattern recognition algorithm as the machine learning tool. In the experimental evaluation, the performance of the proposed model is examined by the cross entropy of each instance and the percentage error. For the tested drone dataset, experimental results demonstrate improved performance over state-of-the-art models, exhibiting lower cross entropy and percentage error.
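        The 8 × 8 co-occurrence matrix can be sketched by hand in NumPy (one offset only, horizontal neighbors, and quantization to 8 gray levels; the SURF stage and the neural network are omitted):

```python
import numpy as np

def glcm_8x8(img, levels=8):
    """Count co-occurrences of horizontally adjacent pixel values after
    quantizing a uint8 image to `levels` gray levels, yielding the 8 x 8
    texture matrix described above (single offset, no normalization)."""
    q = img.astype(int) * levels // 256
    m = np.zeros((levels, levels), dtype=int)
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m

img = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (4, 1))  # 4x8 gray ramp
M = glcm_8x8(img)
```

        On this ramp every horizontal pair steps up one gray level, so all counts land on the first superdiagonal; real images spread counts according to their texture.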
      • Open Access Article

        16 - Farsi Font Detection using the Adaptive RKEM-SURF Algorithm
        Zahra Hossein-Nejad Hamed Agahi Azar Mahmoodzadeh
        Farsi font detection is considered the first stage in Farsi optical character recognition (FOCR) of scanned printed texts. To this aim, this paper proposes an improved version of the speeded-up robust features (SURF) algorithm as the feature detector in the font recognition process. The SURF algorithm suffers from the creation of several redundant features during the detection phase. Thus, the presented version employs the redundant keypoint elimination method (RKEM) to enhance the matching performance of SURF by reducing unnecessary keypoints. Although the performance of the RKEM is acceptable in this task, it exploits a fixed experimental threshold value, which has a detrimental impact on the results. In this paper, an adaptive RKEM is proposed for the SURF algorithm which considers image type and distortion when adjusting the threshold value. This improved version is then applied to recognize Farsi fonts in texts: the proposed adaptive RKEM-SURF detects the keypoints, SURF is used as the descriptor for the features, and matching is done using the nearest neighbor distance ratio. The proposed approach is compared with recently published FOCR algorithms to confirm its superiority. The method can be generalized to other languages such as Arabic and English.
      • Open Access Article

        17 - An Effective Method of Feature Selection in Persian Text for Improving the Accuracy of Detecting Request in Persian Messages on Telegram
        Zahra Khalifeh Zadeh, Mohammad Ali Zare Chahooki
        In recent years, the amount of data received from social media has increased exponentially, and it has become a valuable source of information for many analysts and businesses seeking to expand. Automatic document classification is an essential step in extracting knowledge from these sources. In automatic text classification, words are assessed as a set of features; selecting useful features from each text reduces the size of the feature vector and improves classification performance. Many algorithms have been applied to the automatic classification of text. Although the methods proposed for other languages are applicable and comparable, studies on classification and feature selection in Persian text have not been sufficiently carried out. The present research is conducted in Persian, and the introduction of a Persian dataset is part of its innovation. In this article, an innovative approach is presented to improve the performance of Persian text classification. The authors extracted 85,000 Persian messages from the Idekav-system, a Telegram search engine. The new idea presented in this paper for processing and classifying this textual data is based on expanding the feature vector by adding selected features, using the most extensively used feature selection methods based on local and global filters. The new feature vector is then filtered by applying secondary feature selection, which selects the more appropriate features among those added in the first step to enhance the effect of applying wrapper methods on classification performance. In the third step, combined filter-based methods and a combination of the results of different learning algorithms are used to achieve higher accuracy. At the end of the three selection stages, the proposed method increased accuracy to 0.945 and reduced training time and computation on the Persian dataset.
      • Open Access Article

        18 - A Hybrid Machine Learning Approach for Sentiment Analysis of Beauty Products Reviews
        Kanika Jindal Rajni Aron
        Nowadays, social media platforms have become a mirror of opinions and feelings about any specific product or event. Product reviews can enhance communication between entrepreneurs and their customers, but they need to be extracted and analyzed to predict sentiment polarity, i.e., whether a review is positive or negative. This paper aims to predict the sentiments expressed in beauty product reviews extracted from Amazon and to improve classification accuracy. The three phases of our work are data pre-processing, feature extraction using the Bag-of-Words (BoW) method, and sentiment classification using Machine Learning (ML) techniques. A Global Optimization-based Neural Network (GONN) is proposed for sentiment classification. An empirical study is then conducted to analyze the performance of the proposed GONN and compare it with other machine learning algorithms, such as Random Forest (RF), Naive Bayes (NB), and Support Vector Machine (SVM). We further cross-validate these techniques with ten folds to identify the most accurate classifier, and we also investigate the models on the Precision-Recall (PR) curve to assess the best technique. Experimental results demonstrate that the proposed method is the most appropriate for predicting classification accuracy on our dataset; specifically, it trains textual sentiment classifiers better, thereby enhancing the accuracy of sentiment prediction.
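        The Bag-of-Words feature extraction step mentioned above can be illustrated with a minimal sketch: build a vocabulary over the review corpus and represent each review as a vector of word counts. This is a generic BoW construction under whitespace tokenization, not the paper's exact pre-processing; the resulting vectors are what classifiers such as GONN, RF, NB, or SVM would consume.

```python
def build_bow(corpus):
    """Build a Bag-of-Words vocabulary and count vectors.

    corpus: list of documents (strings); tokenization is plain whitespace
    splitting, a simplification of the paper's pre-processing phase.
    Returns (sorted vocabulary, list of count vectors aligned to it)."""
    vocab = sorted({w for doc in corpus for w in doc.split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for doc in corpus:
        v = [0] * len(vocab)
        for w in doc.split():
            v[index[w]] += 1   # count occurrences of each vocabulary term
        vectors.append(v)
    return vocab, vectors
```

        For ten-fold cross-validation, the labeled vectors would then be split into ten partitions, each serving once as the held-out test set.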
      • Open Access Article

        19 - An Automatic Thresholding Approach to Gravitation-Based Edge Detection in Grey-Scale Images
        Hamed Agahi Kimia Rezaei
        This paper presents an optimal auto-thresholding approach for the gravitational edge detection method in grey-scale images. The goal is to enhance the performance measures of the edge detector under clean and noisy conditions. To this end, an optimal threshold is found automatically, according to which the proposed method dichotomizes pixels into edges and non-edges. First, some pre-processing operations are applied to the image. Then, the vector sum of the gravitational forces applied to each pixel by its neighbors is computed according to the universal law of gravitation. Afterwards, the force magnitude is mapped to a new characteristic called the force feature. The histogram of this feature is then determined, for which an optimal threshold is sought. Three thresholding techniques are proposed, two of which involve iterative processes. The parameters of the formulations used in these techniques are adjusted by means of the metaheuristic grasshopper optimization algorithm. To evaluate the proposed system, two standard databases and multiple qualitative and quantitative measures were used. The results confirm that the proposed methodology outperforms some conventional and recent detectors, achieving an average precision of 0.894 on the BSDS500 dataset; moreover, the outputs are highly similar to the ideal edge maps.
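        The force computation described above can be sketched as follows: each pixel's intensity is treated as a mass, the attraction from each 8-neighbor follows |F| = m1·m2/r² (taking G = 1 and unit pixel spacing), and the magnitude of the vector sum gives the force feature. This is a bare illustration of the gravitational step under assumed constants, not the paper's full formulation with pre-processing and auto-thresholding; note that a uniform region yields zero net force while an intensity step yields a large one, which is why thresholding the feature separates edges from non-edges.

```python
import math

def force_feature(img):
    """Magnitude of the net 'gravitational' force on each interior pixel of a
    grey-scale image (list of lists of intensities), with G = 1 and unit
    pixel spacing assumed. Border pixels are left at 0.0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            fx = fy = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    r2 = di * di + dj * dj            # squared distance to the neighbor
                    f = img[i][j] * img[i + di][j + dj] / r2   # |F| = m1*m2 / r^2
                    r = math.sqrt(r2)
                    fx += f * dj / r                  # project force onto x and y axes
                    fy += f * di / r
            out[i][j] = math.hypot(fx, fy)            # magnitude of the vector sum
    return out
```

        Thresholding this feature map (with the threshold chosen by one of the three proposed techniques) would then produce the binary edge map.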
      • Open Access Article

        20 - Optimized kernel Nonparametric Weighted Feature Extraction for Hyperspectral Image Classification
        Mohammad Hasheminejad
        Hyperspectral image (HSI) classification is an essential means of analyzing remotely sensed images. Remote sensing of natural resources, astronomy, medicine, agriculture, and food health are examples of possible applications of this technique. Since hyperspectral images contain redundant measurements, it is crucial to identify a subset of efficient features for modeling the classes. Kernel-based methods are widely used in this field. In this paper, we introduce a new kernel-based method that defines the class hyperplane more optimally than previous methods. Noisy data in many kernel-based HSI classification methods cause changes in boundary samples and, as a result, incorrect training of the class hyperplane. We propose an optimized Kernel Nonparametric Weighted Feature Extraction (KNWFE) for hyperspectral image classification. KNWFE is a kernel-based feature extraction method with promising results in classifying remotely sensed image data; however, it does not take into account the closeness or distance of the data to the target classes. To solve this problem, we propose optimized KNWFE, which yields better classification performance. Our extensive experiments show that the proposed method improves the accuracy of HSI classification and is superior to state-of-the-art HSI classifiers.
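        Kernel-based feature extraction methods of the KNWFE family operate on a kernel (Gram) matrix over the training spectra rather than on the raw features. As a generic sketch of that building block only (not the paper's weighting scheme or its optimized variant), the RBF Gram matrix K[i][j] = exp(-γ·‖xᵢ - xⱼ‖²) can be computed as:

```python
import math

def rbf_gram(X, gamma=1.0):
    """Gram matrix for the RBF kernel: K[i][j] = exp(-gamma * ||x_i - x_j||^2).
    X is a list of feature vectors (e.g. per-pixel spectra); gamma is a
    bandwidth parameter chosen by the user. Symmetric with unit diagonal."""
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))  # squared distance
            K[i][j] = math.exp(-gamma * d2)
    return K
```

        Scatter matrices built from such a Gram matrix are what nonparametric weighted feature extraction then decomposes to obtain the reduced feature space.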
      • Open Access Article

        21 - Content-based Retrieval of Tiles and Ceramics Images based on Grouping of Images and Minimal Feature Extraction
        Simin RajaeeNejad Farahnaz Mohanna
        One of the most important databases in e-commerce is the tile and ceramics database, for which no specific retrieval method has been provided so far. In this paper, a method is proposed for the content-based retrieval of digital images from tile and ceramics databases. First, a database of 520 images is created by photographing different tiles and ceramics on the market from different angles and directions. A query image and the database images are each divided into nine equal sub-images, and all are grouped based on their sub-images. Next, selected color and texture features are extracted from the sub-images of the database and query images, so that each image has a feature vector. The selected features are the minimum required, which reduces the amount of computation and stored information and speeds up retrieval. Average precision is calculated as the similarity measure. Finally, comparing the query feature vector with the feature vectors of all database images yields the retrieval results. The proposed method improves accuracy and speed by 16.55% and 23.88%, respectively, compared to the most similar methods.
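        The split-and-compare scheme above can be sketched minimally: divide each image into a 3×3 grid of sub-images, reduce each block to a feature (here just the block's mean intensity, a deliberately tiny stand-in for the paper's selected color and texture features), and rank database images by distance between feature vectors. The function names and the Euclidean ranking are illustrative assumptions, not the paper's exact pipeline.

```python
def block_features(img):
    """Split a grey-scale image (list of rows) into a 3x3 grid of sub-images
    and use the mean intensity of each block as a minimal feature vector."""
    h, w = len(img), len(img[0])
    bh, bw = h // 3, w // 3
    feats = []
    for bi in range(3):
        for bj in range(3):
            vals = [img[i][j]
                    for i in range(bi * bh, (bi + 1) * bh)
                    for j in range(bj * bw, (bj + 1) * bw)]
            feats.append(sum(vals) / len(vals))
    return feats

def retrieve(query, database):
    """Return database indices ranked by Euclidean distance between the
    query's feature vector and each database image's feature vector."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    q = block_features(query)
    return sorted(range(len(database)), key=lambda k: dist(q, block_features(database[k])))
```

        Keeping the per-block features minimal is what makes both the stored vectors and the per-query comparisons cheap, which is the speed argument the abstract makes.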
      • Open Access Article

        22 - Application of Machine Learning in the Telecommunications Industry: Partial Churn Prediction by using a Hybrid Feature Selection Approach
        Fatemeh Mozaffari Iman Raeesi Vanani Payam Mahmoudian Babak Sohrabi
        The telecommunications industry is one of the most competitive industries in the world. Because of the high cost of customer acquisition and the adverse effects of customer churn on company performance, customer retention is an inseparable part of strategic decision-making and one of the main objectives of customer relationship management. Although customer churn prediction models are widely studied in various domains, several challenges remain in designing and implementing an effective model. This paper addresses the customer churn prediction problem with a practical approach. The experimental analysis was conducted on customer data gathered from available sources at a telecom company in Iran. First, partial churn was defined in a new way that exploits the status of customers based on criteria that can be measured easily in the telecommunications industry; this definition also relies on data mining techniques that can find the degree of similarity between assorted customers and active ones or churners. Moreover, a hybrid feature selection approach was proposed in which various feature selection methods, along with the wisdom of the crowd, were applied; the wisdom of the crowd was found to serve as a useful feature selection method. Finally, a predictive model was developed using advanced machine learning algorithms such as bagging, boosting, stacking, and deep learning. Partial customer churn was predicted with more than 88% accuracy by the Gradient Boosting Machine algorithm using 5-fold cross-validation. Comparative results indicate that the proposed model performs efficiently compared to those applied in previous studies.
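        The 5-fold cross-validation used to report the 88% accuracy figure works by partitioning the customer records into five folds, training on four and testing on the fifth, rotating until every fold has been held out once. A generic index splitter (not tied to the paper's data or its Gradient Boosting implementation) looks like:

```python
def k_fold_indices(n, k=5):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation
    over n samples. The last fold absorbs any remainder when k does not
    divide n evenly; no shuffling is done in this sketch."""
    idx = list(range(n))
    fold = n // k
    for f in range(k):
        # held-out slice for this fold
        test = idx[f * fold:(f + 1) * fold] if f < k - 1 else idx[f * fold:]
        held = set(test)
        train = [i for i in idx if i not in held]
        yield train, test
```

        The model's reported accuracy is then the average of the per-fold test accuracies, which guards against an optimistic estimate from any single train/test split.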