• List of Articles


      • Open Access Article

        1 - Opinion Mining in Persian Language Using Supervised Algorithms
        Saeedeh Alimardani, Abdollah Aghaei
        The rapid growth of the Internet has produced a large amount of user-generated content in social media, forums, and blogs, and automatic analysis is needed to extract valuable information from this content. Opinion mining is the process of analyzing opinions, sentiments, and emotions to recognize people's preferences about different subjects. One of its main tasks is classifying a text document as positive or negative. Most research in this field has applied opinion mining to English; although Persian is spoken in several countries, few studies address opinion mining in Persian. In this article, a comprehensive study of opinion mining for Persian is conducted to examine its performance under different conditions. First, we create a Persian SentiWordNet using the Persian WordNet. This lexicon is then used to weight features. The results of three machine learning algorithms, support vector machine (SVM), naive Bayes (NB), and logistic regression, are compared before and after weighting by the lexicon. Experiments show that SVM and logistic regression achieve better results in most cases, and that applying semantic orientation (SO) improves the accuracy of logistic regression. Increasing the number of instances and using an unbalanced dataset have a positive effect on performance. Overall, this research provides better results than previous work on opinion mining in Persian.
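
        The following minimal Python sketch (an editorial illustration, not the authors' code) shows the lexicon-weighted classification step described above; the documents, the lexicon entries, and the default weight for out-of-lexicon terms are illustrative placeholders.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        docs = ["این فیلم خوب بود", "این فیلم بد بود"]   # tiny Persian sample corpus
        labels = [1, 0]                                    # 1 = positive, 0 = negative
        sentiwordnet = {"خوب": 0.9, "بد": -0.8}            # illustrative SO scores in [-1, 1]

        vec = TfidfVectorizer()
        X = vec.fit_transform(docs).toarray()

        # Scale each TF-IDF feature by the absolute semantic orientation (SO) of
        # its term, so sentiment-bearing words dominate the representation.
        weights = np.array([abs(sentiwordnet.get(t, 0.1))
                            for t in vec.get_feature_names_out()])
        clf = LogisticRegression().fit(X * weights, labels)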
      • Open Access Article

        2 - Application of Curve Fitting in Hyperspectral Data Classification and Compression
        S. Abolfazl Hosseini
        Given the high between-band correlation and large volume of hyperspectral data, feature reduction (either feature selection or extraction) is an important part of the classification process for this data type. A variety of feature reduction methods have been developed using the spectral and spatial domains. In this paper, a feature extraction technique based on rational function curve fitting is proposed. For each pixel of a hyperspectral image, a specific rational function approximation is developed to fit the spectral response curve of that pixel, and the coefficients of the numerator and denominator polynomials of these functions are taken as the new extracted features. The technique rests on the fact that the ordering of reflectance coefficients in the spectral response curve contains information that is not exploited by statistical methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and their nonlinear versions. We also show that naturally different curves can be approximated by rational functions of the same form but with different coefficient values. Maximum likelihood classification results demonstrate that the Rational Function Curve Fitting Feature Extraction (RFCF-FE) method provides better classification accuracies than competing feature extraction algorithms. The method also supports lossy data compression, since the original data can be reconstructed from the fitted curves. In addition, the proposed algorithm can be applied to each pixel individually and simultaneously, unlike PCA and other methods that need the whole dataset to compute the transform matrix.
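
        A minimal sketch of the per-pixel rational-function fit, assuming SciPy; the degree-2 numerator and denominator and the synthetic spectrum are illustrative choices, not the paper's settings.

        import numpy as np
        from scipy.optimize import curve_fit

        def rational(x, a0, a1, a2, b1, b2):
            # Degree-2 over degree-2 rational function; the denominator's constant
            # term is fixed to 1 to remove the scale ambiguity.
            return (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x + b2 * x**2)

        bands = np.linspace(0.0, 1.0, 200)          # normalized band indices
        spectrum = np.exp(-4 * (bands - 0.4)**2)    # synthetic spectral response curve

        coeffs, _ = curve_fit(rational, bands, spectrum, p0=[1.0, 0, 0, 0, 0])
        features = coeffs   # the five coefficients become this pixel's feature vector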
      • Open Access Article

        3 - Acoustic Noise Cancellation Using an Adaptive Algorithm Based on Correntropy Criterion and Zero Norm Regularization
        Mojtaba Hajiabadi
        The least mean square (LMS) adaptive algorithm is widely used in acoustic noise cancellation (ANC). In a noise cancellation scenario, speech signals usually exhibit high amplitudes and sudden variations that are modeled by impulsive noise, and when the additive noise process is non-Gaussian or impulsive, the LMS algorithm performs very poorly. On the other hand, acoustic channels are well known to have sparse impulse responses. When the impulse response changes from non-sparse to highly sparse, conventional LMS-based adaptive filters cannot exploit a priori knowledge of system sparsity and thus fail to improve either transient or steady-state performance. Impulsive noise and sparsity are two important features of the ANC scenario that have received special attention recently. Given the poor performance of the LMS algorithm in the presence of impulsive noise and sparse systems, this paper presents a novel adaptive algorithm that addresses both. To eliminate impulsive disturbances from the speech signal, an information-theoretic criterion named correntropy is used in the proposed cost function, and a zero-norm penalty is employed to handle the sparsity of the acoustic channel impulse response. Simulation results indicate the superiority of the proposed algorithm in the presence of impulsive noise and a sparse acoustic channel.
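
        A minimal sketch of one adaptation step combining a correntropy-based (maximum-correntropy) gain with an approximate zero-norm penalty; the step sizes, kernel width, and sparse test system are illustrative, not the paper's parameters.

        import numpy as np

        def mcc_l0_step(w, x, d, mu=0.05, sigma=1.0, rho=1e-4, beta=5.0):
            """One update: x is the input regressor, d the desired (noisy) sample."""
            e = d - w @ x
            # Correntropy-induced gain: large (impulsive) errors are exponentially
            # attenuated, unlike in plain LMS.
            w = w + mu * np.exp(-e**2 / (2 * sigma**2)) * e * x
            # Gradient of the common zero-norm surrogate 1 - exp(-beta*|w|),
            # which drives small taps toward zero (sparsity).
            return w - rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))

        rng = np.random.default_rng(0)
        w = np.zeros(16)                         # estimate of a sparse channel
        for _ in range(2000):
            x = rng.standard_normal(16)
            d = 0.9 * x[3] + 0.1 * rng.standard_normal()   # one-tap (sparse) system
            w = mcc_l0_step(w, x, d)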
      • Open Access Article

        4 - Effects of Wave Polarization on Microwave Imaging Using Linear Sampling Method
        Mehdi Salar Kaleji, Mohammad Zoofaghari, Reza Safian, Zaker Hossein Firouzeh
        The Linear Sampling Method (LSM) is a simple and effective method for reconstructing the shape of unknown objects, and it is also fast and robust for locating an object. It is based on the far-field operator, which relates the far-field radiation to its associated line source in the object. There has been extensive research on different aspects of the method, but from the experimental point of view little work exists, especially on how polarization affects the imaging quality. In this paper, we study the effect of polarization on the quality of shape reconstruction of two-dimensional targets. Examples are presented that compare the effect of transverse electric (TE) and transverse magnetic (TM) polarizations on the reconstruction quality of penetrable and non-penetrable objects.
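
        A minimal sketch of the LSM indicator: for every sampling point z, solve the discretized far-field equation F g = r_z with Tikhonov regularization and evaluate 1/||g||; here F is a random placeholder, whereas in practice it holds measured far-field data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 36                                   # incident/observation directions
        F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        angles = 2 * np.pi * np.arange(n) / n
        dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

        def indicator(z, alpha=1e-3, k=2 * np.pi):
            rhs = np.exp(-1j * k * dirs @ z)     # far-field pattern of a point source at z
            # Tikhonov-regularized solve: (F^H F + alpha I) g = F^H rhs
            g = np.linalg.solve(F.conj().T @ F + alpha * np.eye(n), F.conj().T @ rhs)
            return 1.0 / np.linalg.norm(g)       # large where z lies inside the scatterer

        grid = np.linspace(-1.0, 1.0, 50)
        image = np.array([[indicator(np.array([x, y])) for x in grid] for y in grid])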
      • Open Access Article

        5 - A New Approach to the Quantitative Measurement of Software Reliability
        Abbas Rasoolzadegan
        Nowadays, software systems play a very important role in many sensitive and critical applications, where even a small error can cause financial or health losses. Reliability assurance, as a non-functional requirement, is therefore vital. One of the key tasks in ensuring error-free operation of software is to measure its reliability quantitatively. Software reliability engineering is defined as the quantitative study of the operational behavior of software systems with respect to user requirements concerning reliability, and software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Quantifying software reliability is increasingly becoming necessary. We have recently proposed an approach (referred to as SDAFlex&Rel) to the development of "reliable yet flexible" software. In this paper, we first define a set of key terms needed to communicate the scope and contributions of this work. Based on the fact that software reliability is directly proportional to the reliability of the development approach used, a new approach is then proposed to quantitatively measure the reliability of software developed using SDAFlex&Rel, thereby making the previously informal claims of reliability improvement precise. The quantitative results confirm the reliability improvement that SDAFlex&Rel informally promises.
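
        As a minimal numeric illustration of the quoted definition only (the abstract does not give the SDAFlex&Rel model), assume a constant failure rate, so reliability is R(t) = exp(-lambda*t) with lambda estimated from observed inter-failure times.

        import math

        def reliability(inter_failure_times, t):
            """Probability of failure-free operation for duration t, under a
            constant-failure-rate (exponential) assumption."""
            lam = len(inter_failure_times) / sum(inter_failure_times)  # MLE of the rate
            return math.exp(-lam * t)

        print(reliability([120.0, 340.0, 90.0], t=24.0))   # e.g., hours of operation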
      • Open Access Article

        6 - Fusion of Infrared and Visible Images Using Optimal Weights
        Mehrnoush Gholampour, Hassan Farsi, Sajad Mohammadzadeh
        Image fusion is a process in which images of one scene recorded by several different sensors are combined to produce a final image of higher quality than any individual input image. The fusion is performed so as to keep useful features while reducing or removing useless ones, and its aim has to be clearly specified. In this paper we propose a new method that combines visible and infrared images by a weighted average to provide better image quality. The averaging is performed in the gradient domain, and the weight of each image depends on its useful features. Since these images are recorded at night, the useful features correspond to clear scene details; for this reason, object detection is applied to the infrared image and used as its weight map, with the visible image weighted as its complement. The weighted average is computed over the gradients of the input images, and the final fused image is recovered by the Gauss-Seidel method. The quality of the image produced by the proposed algorithm is compared to that of state-of-the-art algorithms using quantitative and qualitative measures, and the results show that the proposed algorithm provides better image quality.
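
        A minimal sketch of the gradient-domain fusion step, assuming NumPy; the weight map w (e.g., an object-detection mask for the infrared image) and the fixed number of Gauss-Seidel sweeps are illustrative.

        import numpy as np

        def fuse(ir, vis, w, sweeps=500):
            """ir, vis: float images of equal shape; w in [0, 1] weights the IR gradients."""
            gx = w * np.gradient(ir, axis=1) + (1 - w) * np.gradient(vis, axis=1)
            gy = w * np.gradient(ir, axis=0) + (1 - w) * np.gradient(vis, axis=0)
            div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)  # divergence of target field
            u = vis.copy()                       # initial guess for the fused image
            for _ in range(sweeps):              # Gauss-Seidel on the Poisson equation
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1] - div[i, j])
            return u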
      • Open Access Article

        7 - A Persian Fuzzy Plagiarism Detection Approach
        Shima Rakian, Faramarz Safi Esfahani, Hamid Rastegari
        Plagiarism is a common problem in all organizations that deal with electronic content. Current plagiarism detection tools only detect word-for-word or exact copies of phrases, and paraphrasing is often missed. One successful and practical approach to paraphrase detection is the fuzzy method. In this study, a new fuzzy approach, called Persian Fuzzy Plagiarism Detection (PFPD), is proposed to detect external plagiarism in Persian texts. The proposed approach compares paraphrased texts with the aim of recognizing textual similarities. External plagiarism detection evaluates a query document against a document collection; to avoid unnecessary comparisons, the tool compares suspicious documents hierarchically at different levels. The method adapts the fuzzy model to the Persian language and improves on previous methods of evaluating the degree of similarity between two sentences. Experiments on three corpora, TMC, Irandoc, and a corpus extracted from prozhe.com, were performed to assess the proposed method's performance. The results show that using the proposed method for candidate document retrieval and text-similarity evaluation increases precision, recall, and F-measure by 22.41, 17.61, and 18.54 percent on average, respectively, compared with one of the best previous fuzzy methods.
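
        A minimal sketch of a fuzzy sentence-similarity score in the spirit of the approach above; the membership function (character-level Jaccard similarity between word pairs) is an illustrative choice, not the PFPD formulation.

        def word_sim(a, b):
            sa, sb = set(a), set(b)              # character sets of the two words
            return len(sa & sb) / len(sa | sb)   # fuzzy membership degree in [0, 1]

        def sentence_sim(s1, s2):
            w1, w2 = s1.split(), s2.split()
            # Match every word of s1 to its best fuzzy counterpart in s2, then average.
            return sum(max(word_sim(a, b) for b in w2) for a in w1) / len(w1)

        print(sentence_sim("دانش قدرت است", "دانایی توانایی است"))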
      • Open Access Article

        8 - Simultaneous Methods of Image Registration and Super-Resolution Using Analytical Combinational Jacobian Matrix
        Hossein Rezayi, Seyed Alireza Seyedin
        In this paper we propose two new simultaneous image registration (IR) and super-resolution (SR) methods using a novel approach to calculating the Jacobian matrix. SR is the process of fusing several low resolution (LR) images to reconstruct a high resolution (HR) image; as an inverse problem, it assumes that three principal operations, warping, blurring, and down-sampling, have been applied to the desired HR image to produce the existing LR images. Unlike previous methods, we neither calculate the Jacobian matrix numerically nor derive it by treating the three principal operations separately; instead, we derive it analytically from the combined form of the three operations. In this approach, a Gaussian blurring kernel (realistic in a wide range of applications) is assumed, which can be adaptively resized for each LR image. The main proposed method applies these ideas to the joint methods, a class of simultaneous iterative methods in which the incremental values for both the registration parameters and the HR image are obtained by solving one system of equations per iteration. Our second proposed method applies them to the alternating minimization (AM) methods, a class of simultaneous iterative methods in which the incremental values of the registration parameters are obtained after calculating the HR image at each iteration. The results show that our methods outperform recently proposed methods such as Tian's joint method and Hardie's AM method, while also reducing computational cost.
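
        A minimal sketch of the forward model that SR inverts, assuming SciPy; the translational warp, kernel width, and down-sampling factor are illustrative. A joint scheme linearizes this model around the current estimates; the paper derives that combined Jacobian analytically, whereas the last line below only checks one column numerically.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        def forward(hr, dx, dy, sigma=1.5, factor=2):
            warped = shift(hr, (dy, dx), order=1)      # warping (pure translation here)
            blurred = gaussian_filter(warped, sigma)   # Gaussian blurring kernel
            return blurred[::factor, ::factor]         # down-sampling

        hr = np.random.default_rng(0).random((64, 64)) # stand-in HR image
        eps = 1e-3
        jac_dx = (forward(hr, eps, 0) - forward(hr, 0, 0)) / eps   # numeric Jacobian column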