List of Articles: Classification

      • Open Access Article

        1 - Tracking Performance of Semi-Supervised Large Margin Classifiers in Automatic Modulation Classification
        Hamidreza Hosseinzadeh, Farbod Razzazi, Afrooz Haghbin
        Automatic modulation classification (AMC) of detected signals is an intermediate step between signal detection and demodulation, and is an essential task for an intelligent receiver in various civil and military applications. In this paper, we propose a semi-supervised large-margin AMC scheme and evaluate its ability to track changes in the received signal-to-noise ratio (SNR) when classifying all forms of signals in a cognitive radio environment. To achieve this objective, two structures for self-training of large-margin classifiers were developed for additive white Gaussian noise (AWGN) channels with a priori unknown SNR. A suitable combination of higher-order statistics and instantaneous characteristics of the digital modulations is selected as the effective feature set. Simulation results show that adding unlabeled input samples to the training set improves the tracking capability of the presented system, making it robust against environmental SNR changes.
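        A minimal sketch of the self-training idea this abstract describes, using scikit-learn's SelfTrainingClassifier around a linear-kernel SVM as the large-margin base learner. The feature matrix, modulation labels, and labeling fractions below are synthetic placeholders, not the authors' actual features or training structures.

```python
# Hypothetical sketch: self-training a large-margin (SVM) classifier with
# unlabeled samples, loosely mirroring the semi-supervised AMC idea above.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Placeholder features: rows = received signals, columns = higher-order
# statistics / instantaneous characteristics (synthetic here).
X = rng.normal(size=(600, 8))
y = rng.integers(0, 4, size=600)          # 4 hypothetical modulation classes

# Mark most samples as unlabeled (-1), as SelfTrainingClassifier expects.
y_semi = y.copy()
y_semi[rng.random(600) < 0.8] = -1

base = SVC(kernel="linear", probability=True)   # large-margin base classifier
model = SelfTrainingClassifier(base, threshold=0.9)
model.fit(X, y_semi)

print("samples labeled after self-training:", np.sum(model.transduction_ != -1))
```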
      • Open Access Article

        2 - A new Sparse Coding Approach for Human Face and Action Recognition
        Mohsen Nikpoor, Mohammad Reza Karami-Mollaei, Reza Ghaderi
        Sparse coding is an unsupervised method that learns a set of over-complete bases to represent data such as images and videos. When similar images belong to different classes, sparse coding may assign them to the same class and degrade classification performance. In this paper, we propose an Affine Graph Regularized Sparse Coding approach to resolve this problem. We extend the sparse coding and graph regularized sparse coding approaches by adding an affinity constraint to the objective function to improve the recognition rate. Several experiments have been conducted on well-known face datasets such as ORL and YALE. The first experiment was performed on the ORL dataset for face recognition and the second on the YALE dataset for facial expression detection. Both experiments were compared with the basic approaches to evaluate the proposed method. The simulation results show that the proposed method significantly outperforms previous methods in face classification. In addition, the proposed method was applied to the KTH action dataset, and the results show that the proposed sparse coding approach can also be used for action recognition.
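        For orientation, a baseline sparse-coding sketch follows: it learns an over-complete dictionary and sparse codes with scikit-learn and classifies in the code space. The affine graph regularization term the paper adds to the objective is not reproduced, and the face data and dimensions are synthetic stand-ins.

```python
# Hypothetical baseline: plain sparse coding (dictionary learning) on image
# vectors; the paper's affine graph-regularized variant adds an affinity
# term to this objective, which is omitted in this sketch.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # stand-in for vectorized face images
y = rng.integers(0, 10, size=200)         # stand-in identity labels

# Learn an over-complete dictionary (more atoms than input dimensions).
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)             # sparse codes used as features

# Classify in the sparse-code space (simple nearest-neighbour baseline).
clf = KNeighborsClassifier(n_neighbors=3).fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```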
      • Open Access Article

        3 - Instance Based Sparse Classifier Fusion for Speaker Verification
        Mohammad Hasheminejad, Hassan Farsi
        This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method for improving the performance of a classification system, as it exploits a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance of the input utterance to the pre-enrolled target speaker. Since a speech signal carries a variety of information, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision. Such a system receives several scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Further research has addressed ensemble classification by adding different regularization terms to the logistic regression formulation. However, this type of ensemble classification overlooks two points: the correlation among the base classifiers and the superiority of certain base classifiers for each test instance. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive experiments on the NIST 2004 speaker recognition evaluation (SRE) corpus, in terms of EER, minDCF and minCLLR, show the effectiveness of the proposed method.
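        A minimal sketch of the baseline fusion scheme the abstract contrasts against: a weighted linear combination of base-classifier scores with weights estimated by logistic regression. The scores and labels are synthetic, and the paper's instance-based selection step is not shown.

```python
# Hypothetical sketch: fusing base-classifier scores for accept/reject
# decisions with logistic regression (the baseline the paper builds on).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row: matching scores from several base speaker-verification systems
# for one trial; label 1 = target (accept), 0 = impostor (reject).
scores = rng.normal(size=(1000, 4))
labels = (scores.mean(axis=1) + 0.3 * rng.normal(size=1000) > 0).astype(int)

fusion = LogisticRegression().fit(scores, labels)
print("fusion weights:", fusion.coef_.ravel())

# Fused decision for a new trial = weighted score combination through a sigmoid.
new_trial = rng.normal(size=(1, 4))
print("accept probability:", fusion.predict_proba(new_trial)[0, 1])
```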
      • Open Access Article

        4 - Concept Detection in Images Using SVD Features and Multi-Granularity Partitioning and Classification
        Kamran Farajzadeh, Esmail Zarezadeh, Jafar Mansouri
        New visual and static features, namely the right singular feature vector, left singular feature vector and singular value feature vector, are proposed for semantic concept detection in images. These features are derived by applying singular value decomposition (SVD) directly to the raw images. In SVD features, edge, color and texture information is integrated simultaneously and sorted according to its importance for concept detection. Feature extraction is performed in a multi-granularity partitioning manner. In contrast to existing systems, classification is carried out for each grid partition of each granularity separately. This decouples the classification of partitions that contain the target concept from that of partitions that do not. Since SVD features have high dimensionality, classification is carried out with a K-nearest neighbor (K-NN) algorithm that utilizes a new and stable distance function, namely the multiplicative distance. Experimental results on the PASCAL VOC and TRECVID datasets show the effectiveness of the proposed SVD features and the multi-granularity partitioning and classification method.
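        A small sketch of the core feature idea: applying SVD directly to a raw image block (one grid partition) and keeping the singular values and singular vectors as features. The multiplicative distance and the multi-granularity partitioning scheme are not reproduced; block size and number of retained components are placeholders.

```python
# Hypothetical sketch: SVD features from a raw image block, as described above.
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((64, 64))              # stand-in for one grid partition

# Singular value decomposition applied directly to the raw pixel block.
U, s, Vt = np.linalg.svd(block, full_matrices=False)

k = 8                                     # keep the k most significant components
features = np.concatenate([
    U[:, :k].ravel(),                     # left singular feature vectors
    s[:k],                                # singular value feature vector
    Vt[:k, :].ravel(),                    # right singular feature vectors
])
print("feature dimension:", features.shape[0])
```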
      • Open Access Article

        5 - An Experimental Study on Performance of Text Representation Models for Sentiment Analysis
        Sajjad Jahanbakhsh Gudakahriz, Amir Masoud Eftekhari Moghaddam, Fariborz Mahmoudi
        Sentiment analysis in social networks has been an active research field since 2000 and is highly useful in the decision-making processes of various domains and applications. In sentiment analysis, the goal is to analyze the opinion texts posted on social networks and other web-based resources and to extract the necessary information from them. The data collected from various social networks and web sites do not have a structured format, and this lack of structure is the main challenge in handling such data. To overcome this challenge, the texts must be represented with a text representation model before the required analysis can be performed. Research on text modeling started a few decades ago, and various models have since been proposed. The main purpose of this paper is to evaluate the efficiency and effectiveness of a number of common and well-known text representation models for sentiment analysis. The evaluation is carried out by using these models for sentiment classification with ensemble methods: after preprocessing, the texts are represented by the selected models and classified by an ensemble classifier. The selected models are TF-IDF, LSA, Word2Vec, and Doc2Vec, and the evaluation measures are Accuracy, Precision, Recall, and F-Measure. The results show that, in general, the Doc2Vec model provides better performance than the other models in sentiment analysis, with a best-case accuracy of 0.72.
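        A minimal sketch of one configuration from this comparison: a TF-IDF representation fed to a simple voting ensemble. The toy corpus and the choice of base classifiers are placeholders; the other representations (LSA, Word2Vec, Doc2Vec) would be swapped in at the vectorization step.

```python
# Hypothetical sketch: TF-IDF text representation + an ensemble classifier
# for sentiment, mirroring one setting of the comparison described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

texts = ["great product, loved it", "terrible service, very slow",
         "works fine", "awful experience, would not recommend"]
labels = [1, 0, 1, 0]                     # 1 = positive, 0 = negative

X = TfidfVectorizer().fit_transform(texts)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("nb", MultinomialNB()),
                ("rf", RandomForestClassifier(n_estimators=50))],
    voting="hard",
)
ensemble.fit(X, labels)
print(ensemble.predict(X))
```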
      • Open Access Article

        6 - Predicting Student Performance for Early Intervention using Classification Algorithms in Machine Learning
        Kalaivani K, Ulagapriya K, Saritha A, Ashutosh Kumar
        The Student Performance Prediction System aims to find students who may require early intervention before they fail to graduate, and is generally meant for teaching faculty members to analyze student performance and results. It stores student details in a database and uses a machine learning model, together with Python data analysis tools such as Pandas and data visualization tools such as Seaborn, to analyze the overall performance of the class. The proposed system performs student performance prediction through machine learning algorithms and data mining techniques. The data mining technique used here is classification, which classifies the students based on their attributes. The front end of the application is built with the React JS library and data visualization charts, and is connected to a backend where all student records are stored in MongoDB; the machine learning model is trained and deployed through Flask. In this process, the machine learning algorithm is trained on a dataset to create a model and predict the output on the basis of that model. Three different types of data used in machine learning are continuous, categorical and binary. In this study, a brief description and comparative analysis of various classification techniques is given using a student performance dataset. The six machine learning classification algorithms compared are Logistic Regression, Decision Tree, K-Nearest Neighbor, Naïve Bayes, Support Vector Machine and Random Forest. The results of the Naïve Bayes classifier are higher than those of the other techniques in terms of metrics such as precision, recall and F1 score, with values of 0.93, 0.92 and 0.92, respectively.
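        A minimal sketch of the classifier comparison described above, using scikit-learn on a synthetic stand-in for the student performance dataset; the actual attributes, the MongoDB/Flask/React pieces, and the evaluation protocol are not reproduced.

```python
# Hypothetical sketch: comparing the six classifiers named above on a
# synthetic stand-in for the student performance dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Support Vector Machine": SVC(),
    "Random Forest": RandomForestClassifier(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```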
      • Open Access Article

        7 - Diagnosis of Gastric Cancer via Classification of the Tongue Images using Deep Convolutional Networks
        Elham Gholami, Seyed Reza Kamel Tabbakh, Maryam Khairabadi
        Gastric cancer is the second most common cancer worldwide and is responsible for the death of many people. One of the issues regarding this disease is the absence of early and accurate detection. In the medical field, gastric cancer is diagnosed by conducting numerous tests and imaging procedures, which are costly and time-consuming, so doctors are seeking a cost-effective and time-efficient alternative. One such alternative is traditional Chinese medicine, in which diseases are diagnosed by observing changes of the tongue: detecting disease from the appearance and color of various sections of the tongue is one of its key components. In this study, a method is presented that can localize the tongue surface regardless of the pose of the person in the image. If the localization of face components, especially the mouth, is done correctly, the components leading to the greatest distinction in the dataset can be used, which is favorable in terms of time and space complexity. Moreover, given the best estimation, the best features can be extracted relative to those components and the best possible accuracy can be achieved. The extraction of appropriate features is done using deep convolutional neural networks. Finally, we use the random forest algorithm to train the proposed model and evaluate the criteria. Experimental results show that the average classification accuracy reaches approximately 73.78, which demonstrates the superiority of the proposed method compared to other methods.
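        A minimal sketch of the "deep convolutional features + random forest" pipeline this abstract describes, using a pretrained ResNet-18 from torchvision as a generic feature extractor. The tongue localization step, the authors' actual network, and the dataset are not reproduced; the image batch and labels below are placeholders.

```python
# Hypothetical sketch: deep convolutional features + random forest classifier,
# loosely mirroring the pipeline described above (not the authors' exact model).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# Pretrained CNN with its classification head removed -> 512-d feature vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

# Placeholder batch standing in for cropped tongue images (N, 3, 224, 224).
images = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))       # toy normal / suspected-cancer labels

with torch.no_grad():
    feats = backbone(images).numpy()

clf = RandomForestClassifier(n_estimators=100).fit(feats, labels.numpy())
print("training accuracy:", clf.score(feats, labels.numpy()))
```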
      • Open Access Article

        8 - Performance Analysis of Hybrid SOM and AdaBoost Classifiers for Diagnosis of Hypertensive Retinopathy
        Wiharto Wiharto, Esti Suryani, Murdoko Susilo
        The diagnosis of hypertensive retinopathy can be made by observing the tortuosity of the retinal vessels; tortuosity is a feature that is able to show the characteristics of normal or abnormal blood vessels. This study aims to analyze the performance of a computer-aided diagnosis of hypertensive retinopathy (CAD-RH) system based on tortuosity features extracted from the retinal blood vessels. The study uses a segmentation method based on self-organizing map (SOM) clustering combined with feature extraction, feature selection, and the ensemble Adaptive Boosting (AdaBoost) classification algorithm. Feature extraction was performed using fractal analysis with the box-counting method, lacunarity with the gliding-box method, and invariant moments. Feature selection is done using the information gain method to rank all of the produced features, which are then selected according to their gain values. The best system performance is obtained with two clusters, using the fractal dimension, lacunarity with box sizes 22-29, and invariant moments M1 and M3. Under these conditions the system provides 84% sensitivity, 88% specificity, a positive likelihood ratio (LR+) of 7.0, and 86% area under the curve (AUC). This model also outperforms a number of ensemble algorithms, such as bagging and random forest. Based on these results, it can be concluded that this model can serve as an alternative for CAD-RH, with performance in the good category.
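        A small sketch of two pieces of this pipeline: a box-counting estimate of fractal dimension on a binary vessel mask, and an AdaBoost classifier over such features. The SOM segmentation, lacunarity, invariant moments, and information-gain selection are not reproduced, and the masks and labels are synthetic.

```python
# Hypothetical sketch: box-counting fractal dimension of a binary vessel mask,
# then AdaBoost over such features (SOM segmentation, lacunarity, invariant
# moments and information-gain selection from the paper are omitted).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        trimmed = mask[:mask.shape[0] // s * s, :mask.shape[1] // s * s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of log(count) versus log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
masks = rng.random((100, 64, 64)) > 0.7           # toy "vessel" masks
X = np.array([[box_counting_dimension(m)] for m in masks])
y = rng.integers(0, 2, size=100)                  # toy normal/abnormal labels

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```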
      • Open Access Article

        9 - Optimized kernel Nonparametric Weighted Feature Extraction for Hyperspectral Image Classification
        Mohammad Hasheminejad
        Hyperspectral image (HSI) classification is an essential means of analyzing remotely sensed images. Remote sensing of natural resources, astronomy, medicine, agriculture, and food health are examples of possible applications of this technique. Since hyperspectral images contain redundant measurements, it is crucial to identify a subset of efficient features for modeling the classes. Kernel-based methods are widely used in this field. In this paper, we introduce a new kernel-based method that defines the class hyperplane more optimally than previous methods. The presence of noisy data in many kernel-based HSI classification methods causes changes in the boundary samples and, as a result, incorrect training of the class hyperplane. We propose an optimized kernel nonparametric weighted feature extraction (KNWFE) for hyperspectral image classification. KNWFE is a kernel-based feature extraction method with promising results in classifying remotely sensed image data; however, it does not take into account the closeness or distance of the data to the target classes. To solve this problem, we propose the optimized KNWFE, which results in better classification performance. Our extensive experiments show that the proposed method improves the accuracy of HSI classification and is superior to state-of-the-art HSI classifiers.
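        As a generic point of reference for kernel-based feature extraction (not the paper's optimized KNWFE), a short sketch follows using kernel PCA to project hyperspectral pixels into a lower-dimensional space before classification. The pixel matrix, band count, class count, and kernel parameters are all placeholders.

```python
# Hypothetical sketch: generic kernel-based feature extraction (kernel PCA)
# followed by classification, as a baseline analogue of KNWFE -- this is not
# the paper's optimized KNWFE method.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))           # stand-in: 300 pixels x 100 spectral bands
y = rng.integers(0, 5, size=300)          # stand-in: 5 land-cover classes

model = make_pipeline(
    KernelPCA(n_components=15, kernel="rbf", gamma=0.01),  # kernel feature extraction
    SVC(kernel="linear"),                                   # classifier on extracted features
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```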