List of Subject Articles: Signal Processing


    • Open Access Article

      1 - A New Switched-beam Setup for Adaptive Antenna Array Beamforming
      Shahriar Shirvani Moghaddam Farida Akbari
      In this paper, a new spatio-temporal approach is proposed which improves the speed and performance of temporal-based algorithms, namely conventional Least Mean Square (LMS), Normalized LMS (NLMS) and Variable Step-size LMS (VSLMS), by using the switched-beam technique. In the proposed algorithm, the DOA of the signal source is first estimated by the MUltiple SIgnal Classification (MUSIC) algorithm. In the second step, depending on the desired user's location, the closest beam of the switched-beam system is selected and its predetermined weights are chosen as the initial values of the weight vector. Finally, the LMS/NLMS/VSLMS algorithm is applied to the initial weights and the final weights are calculated. Simulation results show improved convergence and tracking speed, as well as higher efficiency in data transmission through an increased Signal to Interference plus Noise Ratio (SINR) and a decreased Bit Error Rate (BER) and Mean Square Error (MSE). Moreover, the Error Vector Magnitude (EVM), as a measure of the distortion introduced by the proposed adaptive scheme on the received signal, is evaluated for all LMS-based proposed algorithms and is approximately the same as that of the conventional ones. In order to investigate the tracking capability of the proposed method, the system is assumed to be time-varying and the desired signal location is considered once at the centre of the initial beam and once at the edge of the fixed beam. As depicted in the simulation results, the proposed DOA-based methods offer beamforming with higher performance in both cases, centre as the best case and edge as the worst case, with respect to the conventional ones. The MSE diagrams for this time-varying system show an ideal response for the DOA-based methods in the best case. Also, in the worst case, the initial height of the MSE is reduced, so fewer iterations are required to converge than with the conventional LMS/NLMS/VSLMS.
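The three-step procedure described above (DOA estimate, nearest fixed beam as the initial weight vector, then LMS/NLMS adaptation) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the array size, beam grid, step size and noise level are all assumed values, and the MUSIC step is replaced by an assumed known DOA.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                      # array elements (half-wavelength spacing, assumed)
theta = np.deg2rad(20.0)   # desired user's DOA (assumed already found by MUSIC)

def steering(angle, m=M):
    # steering vector of a uniform linear array with d = lambda/2
    return np.exp(1j * np.pi * np.arange(m) * np.sin(angle))

# Switched-beam initialization: pick the fixed beam closest to the DOA
beam_angles = np.deg2rad(np.arange(-60, 61, 15))   # hypothetical beam grid
closest = beam_angles[np.argmin(np.abs(beam_angles - theta))]
w = steering(closest) / M                          # initial weight vector

# NLMS adaptation starting from the switched-beam weights
a = steering(theta)
mse = []
for _ in range(500):
    s = rng.choice([-1.0, 1.0])                    # BPSK desired symbol
    x = s * a + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    y = np.vdot(w, x)                              # array output w^H x
    e = s - y
    w += 0.5 * np.conj(e) * x / (np.vdot(x, x).real + 1e-9)
    mse.append(abs(e) ** 2)
```

Starting from the predetermined beam weights rather than zeros is what gives the reported faster convergence: the initial error is already small.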
    • Open Access Article

      2 - An Intelligent Algorithm for the Process Section of Radar Surveillance Systems
      Habib Rasi
      In this paper, an intelligent algorithm for the clustering, intra-pulse modulation detection, separation and identification of overlapping radar pulse trains is presented. In most cases, based only on the primary features of incoming radar signals, a modern electronic intelligence system cannot distinguish different devices of the same type or class; Measurement and Signature Intelligence therefore plays a very important role. A radar intercept receiver passively collects incoming pulse samples from a number of unknown emitters, and information such as the Pulse Repetition Interval (PRI), Angle of Arrival (AoA), Pulse Width (PW), Radio Frequency (RF), and Doppler shifts may not be usable. In the proposed algorithm, a self-organizing feature map (SOFM) neural network is used to cluster the overlapping received pulses, chosen for its high accuracy in comparison with other neural networks such as CLNN and Fuzzy ART; a matrix method is used to detect the intra-pulse modulation type; and an RBF neural network is used to identify the radar type. Simulation results show that, in the presence of 5% noise and 5% missing pulses, the accuracy of the clustering part of the proposed algorithm is 91.8%, the intra-pulse modulation recognition accuracy is 98%, the detection accuracy is 99.2%, and the overall precision of the algorithm is 89.244%.
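A minimal sketch of the clustering step only, assuming toy two-feature pulse descriptors (PRI, PW) from three hypothetical emitters and a small 1-D self-organizing feature map; the matrix method and RBF identification stages are omitted, and every parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy (PRI, PW) descriptors for three hypothetical emitters
pulses = np.vstack([
    rng.normal([1.0, 0.2], 0.02, (50, 2)),
    rng.normal([2.0, 0.5], 0.02, (50, 2)),
    rng.normal([3.0, 0.9], 0.02, (50, 2)),
])

# 1-D self-organizing feature map with 3 nodes
nodes = rng.uniform(0, 3, (3, 2))
for epoch in range(40):
    lr = max(0.05, 0.5 * (1 - epoch / 40))       # decaying learning rate
    sigma = max(0.1, 1.0 - epoch / 40)           # shrinking neighbourhood
    for p in rng.permutation(len(pulses)):
        x = pulses[p]
        bmu = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))  # best matching unit
        for j in range(3):                       # neighbourhood update
            h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
            nodes[j] += lr * h * (x - nodes[j])

# assign each pulse to its nearest trained node
labels = np.argmin(np.linalg.norm(pulses[:, None] - nodes[None], axis=2), axis=1)
```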
    • Open Access Article

      3 - Speech Intelligibility Improvement in Noisy Environments for Near-End Listening Enhancement
      Peyman Goli Mohammad Reza Karami-Mollaei
      A new speech intelligibility improvement method for near-end listening enhancement in noisy environments is proposed. This method improves speech intelligibility by optimizing the energy correlation of one-third octave bands of clean speech and enhanced noisy speech, without increasing power. The energy correlation is formulated as a cost function based on the frequency band gains of the clean speech. The interior-point algorithm, an iterative procedure for nonlinear optimization, is used to determine the optimal points of the cost function because of the nonlinearity and complexity of the energy correlation function. Two objective intelligibility measures, the speech intelligibility index and the short-time objective intelligibility measure, are employed to evaluate the enhanced noisy speech. Furthermore, the speech intelligibility scores are compared with unprocessed speech and a baseline method under various noisy conditions. The results show large intelligibility improvements with the proposed method over the unprocessed noisy speech.
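The core idea, choosing band gains that maximize the energy correlation between the clean speech and what the listener hears in noise under a fixed-power constraint, can be illustrated with a toy search. The band energies below are synthetic, and a crude random search stands in for the interior-point optimizer used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
B = 18                                  # one-third octave bands (toy count)
clean = rng.uniform(0.5, 2.0, B)        # clean speech band energies (synthetic)
noise = rng.uniform(0.1, 1.5, B)        # masker band energies (synthetic)

def corr_cost(g):
    heard = g**2 * clean + noise        # band energies reaching the listener
    return -np.corrcoef(clean, heard)[0, 1]   # negate: we maximize correlation

def project(g):
    # renormalize gains so the total speech power is unchanged
    return g * np.sqrt(clean.sum() / (g**2 * clean).sum())

g = np.ones(B)
best = corr_cost(g)
for _ in range(2000):                   # crude random search, a stand-in for
    cand = project(np.clip(g + 0.05 * rng.standard_normal(B), 0.05, None))
    c = corr_cost(cand)                 # the interior-point method
    if c < best:
        g, best = cand, c
```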
    • Open Access Article

      4 - Online Signature Verification: a Robust Approach for Persian Signatures
      Mohammad Esmaeel Yahyatabar Yasser Baleghi Mohammad Reza Karami-Mollaei
      In this paper, the specific traits of Persian signatures are applied to signature verification. Efficient features that can discriminate among Persian signatures are investigated in this approach. Persian signatures, in comparison with signatures in other languages, have more curvature and end in a specific style. Usually, Persian signatures have special characteristics in terms of speed, acceleration and pen pressure during the drawing of curves. An experiment was designed to determine the function indicating the most robust features of Persian signatures, and its results are used in the feature extraction stage. To improve verification performance, a combination of shape-based and dynamic extracted features is applied to Persian signature verification. To classify these signatures, a Support Vector Machine (SVM) is applied. The proposed method is examined on two common Persian datasets, a new Persian dataset proposed in this paper (the Noshirvani Dynamic Signature Dataset), and an international dataset (SVC2004). For the three Persian datasets, the EER values are 3, 3.93, and 4.79, while for SVC2004 the EER is 4.43.
    • Open Access Article

      5 - Early Detection of Pediatric Heart Disease by Automated Spectral Analysis of Phonocardiogram
      Azra Rasouli Kenari
      Early recognition of heart disease is an important goal in pediatrics. Developing countries have a large population of children living with undiagnosed heart murmurs, and as a result of an accompanying skills shortage, most of these children will not get the necessary treatment. Taking into account that heart auscultation remains the dominant method of heart examination in the small health centers of rural areas, and generally in primary healthcare settings, enhancement of this technique would aid significantly in the diagnosis of heart disease. The detection of murmurs from phonocardiographic recordings is an interesting problem that has been addressed before using a wide variety of techniques. We designed a system for automatically detecting systolic murmurs due to a variety of conditions, which could provide health care workers in developing countries with a tool to screen large numbers of children without the need for expensive equipment or specialist skills. For this purpose, an algorithm was designed and tested to detect heart murmurs in digitally recorded signals. Cardiac auscultatory examinations of 93 children were recorded, digitized, and stored along with the corresponding echocardiographic diagnoses, and automated spectral analysis using discrete wavelet transforms was performed. Patients without heart disease and with either no murmur or an innocent murmur (n = 40) were compared to patients with a variety of cardiac diagnoses and a pathologic systolic murmur present (n = 53). A specificity of 100% and a sensitivity of 90.57% were achieved using signal processing techniques and a k-NN classifier.
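A rough sketch of the processing chain, wavelet detail energies as features followed by a nearest-neighbour classifier, on synthetic phonocardiogram-like signals. The Haar transform, the toy murmur model and all parameters are illustrative assumptions, not the paper's clinical pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_dwt_energies(x, levels=4):
    # plain Haar discrete wavelet transform; return the detail energy per level
    energies = []
    a = x.astype(float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        energies.append((d**2).sum())
    return np.array(energies)

def make_pcg(murmur, n=512):
    # toy phonocardiogram: a low-frequency heart sound, plus a noisy
    # high-frequency component during "systole" when a murmur is present
    t = np.arange(n) / n
    s = np.sin(2 * np.pi * 4 * t)
    if murmur:
        s = s + 0.6 * rng.standard_normal(n) * (np.sin(2 * np.pi * t) > 0)
    return s + 0.05 * rng.standard_normal(n)

X = np.array([haar_dwt_energies(make_pcg(m)) for m in [0] * 30 + [1] * 30])
y = np.array([0] * 30 + [1] * 30)

# leave-one-out 1-NN classification on the wavelet energy features
correct = 0
for i in range(len(X)):
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    correct += (y[np.argmin(d)] == y[i])
acc = correct / len(X)
```

The murmur's broadband high-frequency energy lands in the first detail levels, which is what makes the wavelet-energy feature separable.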
    • Open Access Article

      6 - A Fast and Accurate Sound Source Localization Method using Optimal Combination of SRP and TDOA Methodologies
      Mohammad Ranjkesh Eskolaki Reza Hasanzadeh
      This paper presents an automatic sound source localization approach based on the combination of a basic time-delay estimation method, Time Difference of Arrival (TDOA), with Steered Response Power (SRP) methods. The TDOA method is fast, but it is vulnerable when locating sound sources at long distances and in reverberant environments, and it is sensitive to noise; the conventional SRP method, on the other hand, is time-consuming but accurately finds the sound source location in noisy and reverberant environments. Another SRP-based method, the SRP Phase Transform (SRP-PHAT), has been suggested for better noise robustness and more accurate localization. In this paper, two approaches based on the combination of TDOA and SRP methods are proposed for sound source localization. In the first, named Classical TDOA-SRP, the TDOA method is used to find the approximate sound source direction, and SRP-based methods are then used to find the accurate location of the sound source within the Field of View (FOV) obtained through the TDOA method. In the second, named Optimal TDOA-SRP, a new criterion is proposed to find the effective FOV obtained through the TDOA method, further reducing the computational time of the SRP-based methods and improving noise robustness. Experiments carried out under different conditions confirm the validity of the proposed approaches.
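The two-stage idea, a fast cross-correlation TDOA estimate followed by a steered-response-power search restricted to a narrow FOV around it, can be sketched for two microphones; the signals, delay and window sizes below are assumed toy values (delays map to angles through the mic spacing and the speed of sound, which is omitted here).

```python
import numpy as np

rng = np.random.default_rng(4)
true_delay = 5                  # samples between the two microphone channels

s = rng.standard_normal(4096)
x1 = s + 0.05 * rng.standard_normal(4096)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(4096)

# --- TDOA step: coarse delay from the cross-correlation peak ---
cc = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(cc)) - (len(x1) - 1)

# --- SRP step: delay-and-sum power over a narrow FOV around the TDOA lag ---
fov = range(lag - 2, lag + 3)   # candidate delays near the coarse estimate
power = {tau: np.sum((x1 + np.roll(x2, -tau)) ** 2) for tau in fov}
best = max(power, key=power.get)
```

Restricting the SRP scan to the small `fov` set is exactly what buys the speed-up over scanning every candidate delay.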
    • Open Access Article

      7 - Application of Curve Fitting in Hyperspectral Data Classification and Compression
      S. Abolfazl Hosseini
      Owing to the high between-band correlation and large volume of hyperspectral data, feature reduction (either feature selection or extraction) is an important part of the classification process for this data type. A variety of feature reduction methods have been developed using the spectral and spatial domains. In this paper, a feature extraction technique is proposed based on rational function curve fitting. For each pixel of a hyperspectral image, a specific rational function approximation is developed to fit the spectral response curve of that pixel. The coefficients of the numerator and denominator polynomials of these functions are taken as the new extracted features. This technique is based on the fact that the ordering of reflectance coefficients in the spectral response curve contains information which is not considered by statistical analysis based methods, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and their nonlinear versions. We also show that naturally different curves can be approximated by rational functions of the same form but with different coefficient values. Maximum likelihood classification results demonstrate that the Rational Function Curve Fitting Feature Extraction (RFCF-FE) method provides better classification accuracies than competing feature extraction algorithms. The method also supports lossy data compression: the original data can be reconstructed from the fitted curves. In addition, the proposed algorithm can be applied to each pixel of the image individually and simultaneously, unlike PCA and other methods which need the whole dataset to compute the transform matrix.
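The fitting step can be sketched with the standard linearized least-squares formulation y·q(x) = p(x), with the constant denominator coefficient fixed to 1 so the system is solvable; the degrees and the test curve below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def fit_rational(x, y, p_deg=3, q_deg=2):
    # Fit y ≈ p(x)/q(x) by linearizing y*q(x) = p(x), with q's constant term = 1.
    # Unknowns: p_0..p_{p_deg}, then q_1..q_{q_deg}.
    A = np.hstack([
        np.vander(x, p_deg + 1, increasing=True),                        # p terms
        -(y[:, None]) * np.vander(x, q_deg + 1, increasing=True)[:, 1:], # q terms
    ])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    p = coef[:p_deg + 1]
    q = np.concatenate([[1.0], coef[p_deg + 1:]])
    return p, q   # coefficients in increasing-degree order: the "features"

def eval_rational(p, q, x):
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# toy "spectral response curve" of one pixel
x = np.linspace(0, 1, 50)
y = (1 + 2 * x) / (1 + 0.5 * x)
p, q = fit_rational(x, y, p_deg=1, q_deg=1)
err = np.max(np.abs(eval_rational(p, q, x) - y))
```

Evaluating the fitted curve back on the band positions is also the reconstruction step behind the lossy-compression claim.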
    • Open Access Article

      8 - Acoustic Noise Cancellation Using an Adaptive Algorithm Based on Correntropy Criterion and Zero Norm Regularization
      Mojtaba Hajiabadi
      The least mean square (LMS) adaptive algorithm is widely used in acoustic noise cancellation (ANC) scenarios. In a noise cancellation scenario, speech signals usually have high amplitude and sudden variations that are modeled by impulsive noise. When the additive noise process is non-Gaussian or impulsive, the LMS algorithm performs very poorly. On the other hand, it is well known that acoustic channels usually have sparse impulse responses. When the impulse response of the system changes from non-sparse to highly sparse, conventional algorithms such as LMS-based adaptive filters cannot exploit prior knowledge of the system's sparsity and thus fail to improve their performance in either the transient or the steady state. Impulsive noise and sparsity are two important features of the ANC scenario that have received special attention recently. Due to the poor performance of the LMS algorithm in the presence of impulsive noise and sparse systems, this paper presents a novel adaptive algorithm that can handle both. To eliminate impulsive disturbances from the speech signal, an information-theoretic criterion known as correntropy is used in the proposed cost function, and a zero-norm regularizer is employed to deal with the sparsity of the acoustic channel impulse response. Simulation results indicate the superiority of the proposed algorithm in the presence of impulsive noise along with a sparse acoustic channel.
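A sketch of the kind of update rule the abstract describes: an LMS-style filter whose step is gated by a Gaussian (correntropy) kernel of the error, plus an approximate zero-norm attraction term for sparsity. The exact cost function, parameters and channel below are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 32
h = np.zeros(L)
h[[3, 11, 25]] = [0.8, -0.5, 0.3]                  # sparse acoustic channel (toy)

N = 4000
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]
spikes = rng.random(N) < 0.01                      # rare impulsive disturbances
d = d + spikes * 20 * rng.standard_normal(N)

w = np.zeros(L)
mu, sigma, rho, beta = 0.02, 1.0, 5e-4, 5.0        # assumed hyper-parameters
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]                   # regressor (most recent first)
    e = d[n] - w @ u
    kern = np.exp(-e**2 / (2 * sigma**2))          # correntropy gate: a huge
    w += mu * kern * e * u                         # outlier makes the step ~0
    w -= rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))  # zero-norm pull

mis = np.linalg.norm(w - h) / np.linalg.norm(h)    # normalized misalignment
```

The Gaussian gate is what makes the update robust: a 20-sigma spike yields `kern ≈ 0`, so the filter simply ignores that sample, while the exponential term nudges inactive taps toward exactly zero.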
    • Open Access Article

      9 - A new Sparse Coding Approach for Human Face and Action Recognition
      Mohsen Nikpour Mohammad Reza Karami-Mollaei Reza Ghaderi
      Sparse coding is an unsupervised method which learns a set of over-complete bases to represent data such as images and videos. In cases where similar images occur in different classes, sparse coding may assign them to the same class and degrade classification performance. In this paper, we propose an Affine Graph Regularized Sparse Coding approach to resolve this problem. We apply the sparse coding and graph regularized sparse coding approaches, adding an affinity constraint to the objective function to improve the recognition rate. Several experiments have been conducted on well-known face datasets such as ORL and YALE: the first on the ORL dataset for face recognition, and the second on the YALE dataset for facial expression detection. Both experiments are compared with the basic approaches to evaluate the proposed method. The simulation results show that the proposed method significantly outperforms previous methods in face classification. In addition, the proposed method is applied to the KTH action dataset, and the results show that the proposed sparse coding approach can be applied to action recognition applications as well.
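A minimal sketch of plain l1 sparse coding via ISTA (iterative soft-thresholding); the affine and graph regularization terms that are the paper's actual contribution are omitted, and the dictionary and signal below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # over-complete dictionary, unit atoms

truth = np.zeros(50)
truth[[4, 17, 33]] = [1.0, -2.0, 1.5]   # sparse ground-truth code
y = D @ truth                           # observed signal

# ISTA for: min_a 0.5*||y - D a||^2 + lam*||a||_1
lam = 0.05
t = 1.0 / np.linalg.norm(D.T @ D, 2)    # step size from the Lipschitz constant
a = np.zeros(50)
for _ in range(500):
    g = a - t * D.T @ (D @ a - y)                       # gradient step
    a = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0) # soft-threshold step

support = set(np.flatnonzero(np.abs(a) > 0.1).tolist())
```

Graph-regularized variants add a term that pulls the codes of affine/neighbouring samples together; structurally it only changes the gradient step above.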
    • Open Access Article

      10 - Efficient Land-cover Segmentation Using Meta Fusion
      Morteza Khademi Hadi Sadoghi Yazdi
      Most popular fusion methods have their own limitations; e.g. OWA (ordered weighted averaging) is restricted to a linear model and requires the input proportions in the fusion to sum to 1. Considering all possible models for fusion, the proposed fusion method incorporates the confusion of the input data into the fusion process for segmentation; indeed, the limitations of the proposed method are determined adaptively for each input data source separately. On the other hand, land-cover segmentation using remotely sensed (RS) images is a challenging research subject, because objects of a single land-cover class often appear dissimilar in different RS images. In this paper, multiple co-registered RS images are utilized to segment land-cover using FCM (fuzzy c-means). As an appropriate tool for modeling changes, the fuzzy concept is utilized to fuse and integrate the information of the input images. By categorizing the ground points, it is shown in this paper for the first time that fuzzy numbers are needed, and are more suitable than crisp ones, for merging multi-image information for segmentation. Finally, FCM is applied to the fused image pixels (with fuzzy values) to obtain a single segmented image. Mathematical analysis of the proposed cost function, together with simulation results, shows the significant performance of the proposed method in terms of noise-free and fast segmentation.
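A compact fuzzy c-means implementation on toy two-feature "fused pixel" data; this shows only the final FCM step of the pipeline, not the proposed fuzzy fusion of multiple co-registered images.

```python
import numpy as np

rng = np.random.default_rng(7)
# toy fused pixel values: two land-cover classes, two features per pixel
X = np.vstack([rng.normal(0.2, 0.05, (100, 2)),
               rng.normal(0.8, 0.05, (100, 2))])

def fcm(X, c=2, m=2.0, iters=50):
    U = rng.dirichlet(np.ones(c), len(X))          # memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]  # fuzzy-weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2 / (m - 1))                  # standard FCM membership
        U = inv / inv.sum(axis=1, keepdims=True)   # update
    return U, centers

U, centers = fcm(X)
labels = U.argmax(1)                               # defuzzify for a hard map
```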
    • Open Access Article

      11 - Instance Based Sparse Classifier Fusion for Speaker Verification
      Mohammad Hasheminejad Hassan Farsi
      This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method of improving the performance of a classification system, exploiting a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance between the input utterance and the pre-enrolled target speakers. Since a speech signal carries a variety of information, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision. Such a system receives some scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Further research has been performed on ensemble classification by adding different regularization terms to the logistic regression formula. However, two points are missing in this type of ensemble classification: the correlation of the base classifiers, and the superiority of some base classifiers for particular test instances. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus, in terms of EER, minDCF and minCLLR, show the effectiveness of the proposed method.
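The baseline the abstract builds on, a weighted linear combination of base-classifier scores with weights learned by logistic regression, can be sketched on synthetic scores; the instance-based selection proposed in the paper is not shown, and the score model below is invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
y = rng.integers(0, 2, n)                       # 1 = target speaker, 0 = impostor
# scores from three hypothetical base classifiers of different reliability
S = np.column_stack([
    y + 0.5 * rng.standard_normal(n),           # good classifier
    y + 1.0 * rng.standard_normal(n),           # noisier classifier
    0.2 * rng.standard_normal(n),               # uninformative classifier
])

# logistic-regression fusion: learn weights for the linear score combination
w = np.zeros(3)
b = 0.0
for _ in range(2000):                           # plain gradient descent
    p = 1 / (1 + np.exp(-(S @ w + b)))
    g = p - y
    w -= 0.1 * S.T @ g / n
    b -= 0.1 * g.mean()

fused = (1 / (1 + np.exp(-(S @ w + b))) > 0.5).astype(int)
acc_fused = (fused == y).mean()
```

The learned weights act as the reliability estimates: the uninformative classifier ends up with a near-zero weight, which is the behaviour the instance-based method refines per test utterance.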
    • Open Access Article

      12 - Concept Detection in Images Using SVD Features and Multi-Granularity Partitioning and Classification
      Kamran Farajzadeh Esmail Zarezadeh Jafar Mansouri
      New visual and static features, namely the right singular feature vector, the left singular feature vector and the singular value feature vector, are proposed for semantic concept detection in images. These features are derived by applying singular value decomposition (SVD) "directly" to the "raw" images. In SVD features, edge, color and texture information is integrated simultaneously and sorted based on its importance for concept detection. Feature extraction is performed in a multi-granularity partitioning manner. In contrast to existing systems, classification is carried out for each grid partition of each granularity separately. This decouples the classifications on partitions with and without the target concept from each other. Since SVD features have high dimensionality, classification is carried out with the K-nearest neighbor (K-NN) algorithm, which utilizes a new and "stable" distance function, namely the multiplicative distance. Experimental results on the PASCAL VOC and TRECVID datasets show the effectiveness of the proposed SVD features and the multi-granularity partitioning and classification method.
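Extracting the three proposed feature vectors from one grid partition is a direct application of the SVD; the rank-k reconstruction at the end illustrates why the singular values are "sorted by importance". The patch size and k are assumed values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(9)
patch = rng.random((16, 16))                 # one grid partition of an image

U, s, Vt = np.linalg.svd(patch, full_matrices=False)
k = 4                                        # keep the k most important components
feature = np.concatenate([U[:, :k].ravel(),  # left singular feature vector
                          Vt[:k].ravel(),    # right singular feature vector
                          s[:k]])            # singular value feature vector

# singular values are sorted by importance: the rank-k reconstruction error
# equals the energy in the discarded singular values
approx = (U[:, :k] * s[:k]) @ Vt[:k]
err = np.linalg.norm(patch - approx) / np.linalg.norm(patch)
```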
    • Open Access Article

      13 - Improved Generic Object Retrieval In Large Scale Databases By SURF Descriptor
      Hassan Farsi Reza Nasiripour Sajad Mohammadzadeh
      Normally, state-of-the-art methods in the field of object retrieval for large databases rely on a training process. We propose a novel large-scale generic object retrieval method which uses only a single query image and is training-free. Current object retrieval methods require part of the image database for training to construct the classifier; this training can be supervised, unsupervised, or semi-supervised. In the proposed method, the query image can be a typical real image of the object. The object model is constructed from Speeded Up Robust Features (SURF) points acquired from the image. Information on the relative positions, scale and orientation between SURF points is calculated and built into the object model. Dynamic programming is used to try all possible combinations of SURF points for the query and dataset images. The ability to match partially affine-transformed object images comes from the robustness of SURF points and the flexibility of the model. Occlusion is handled by specifying the probability of a missing SURF point in the model. Experimental results show that this matching technique is robust under partial occlusion and rotation. The properties and performance of the proposed method are demonstrated on large databases. The obtained results illustrate that the proposed method improves efficiency, speeds up retrieval and reduces the storage space.
    • Open Access Article

      14 - Improving Image Dynamic Range For An Adaptive Quality Enhancement Using Gamma Correction
      Hamid Hassanpour
      This paper proposes a new automatic image enhancement method that improves the image dynamic range. The improvement is performed by modifying the Gamma value of pixels in the image. Gamma distortion in an image is due to technical limitations in the imaging device and imposes a nonlinear effect. The severity of the distortion varies across an image depending on the texture and depth of the objects. The proposed method locally estimates the Gamma values in an image. In this method, the image is initially segmented using a pixon-based approach, so that the pixels in each segment have similar characteristics in terms of the need for Gamma correction. The Gamma value for each segment is then estimated by minimizing the homogeneity of the co-occurrence matrix, a feature that represents image details; its minimum value in a segment corresponds to the maximum detail of the segment. The quality of an image improves as more details are revealed via Gamma correction. In this study, it is shown that the proposed method performs well in improving the quality of images. Subjective and objective image quality assessments attest to the superiority of the proposed method over existing image quality enhancement methods.
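The per-segment estimation step can be sketched as a grid search for the gamma that minimizes the homogeneity of a grey-level co-occurrence matrix; the pixon segmentation is omitted, and the texture, level count and gamma grid are toy assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(10)
# toy segment: a detailed texture washed out by a hypothetical device gamma
clean = rng.random((64, 64))
distorted = clean ** (1 / 2.2)

def homogeneity(img, levels=16):
    # homogeneity of the horizontal grey-level co-occurrence matrix
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()     # horizontal neighbour pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return (glcm / (1 + np.abs(i - j))).sum()      # high = smooth, low = detailed

# grid-search the correcting gamma that minimizes homogeneity
gammas = np.linspace(0.5, 4.0, 36)
scores = [homogeneity(distorted ** g) for g in gammas]
g_hat = gammas[int(np.argmin(scores))]
```

Lower homogeneity means larger grey-level differences between neighbouring pixels, i.e. more visible detail, which is the criterion the abstract describes.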
    • Open Access Article

      15 - Mitosis detection in breast cancer histological images based on texture features using AdaBoost
      Sooshiant Zakariapour Hamid Jazayeri Mehdi Ezoji
      Counting the mitotic figures present in tissue samples from a patient with cancer plays a crucial role in assessing the patient's survival chances. In clinical practice, mitotic cells are counted manually by pathologists in order to grade the proliferative activity of breast tumors. However, detecting mitoses under a microscope is a laborious, time-consuming task which can benefit from computer-aided diagnosis. In this research we aim to detect mitotic cells present in breast cancer tissue using only texture and pattern features. To classify cells into mitotic and non-mitotic classes, we use an AdaBoost classifier, an ensemble learning method which combines other (weak) classifiers to construct a strong classifier. Eleven different classifiers were used separately as base learners, and their classification performance was recorded. The proposed ensemble classifier was tested on the standard MITOS-ATYPIA-14 dataset, where a pixel window around each cell's center was extracted to be used as training data. It was observed that an AdaBoost with logistic regression as its base learner achieved an F1 score of 0.85 using only texture features as input, a significant improvement over the status quo. It was also observed that decision trees provide the best recall among the base classifiers, and random forests the best precision.
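A from-scratch AdaBoost with decision stumps on synthetic two-feature "texture" data illustrates the ensemble mechanism (re-weighting hard examples each round); it is not the paper's feature set or base-learner line-up.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
# toy texture features: non-mitotic (-1) vs mitotic (+1) cells
X = np.vstack([rng.normal(0.0, 1, (n, 2)), rng.normal(2.0, 1, (n, 2))])
y = np.concatenate([-np.ones(n), np.ones(n)])

def best_stump(X, y, w):
    # exhaustive search for the lowest weighted-error decision stump
    best = (np.inf, None)
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for sign in (1, -1):
                pred = sign * np.where(X[:, f] > thr, 1.0, -1.0)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, (f, thr, sign))
    return best

w = np.full(2 * n, 1 / (2 * n))              # uniform sample weights to start
stumps, alphas = [], []
for _ in range(20):                          # boosting rounds
    err, (f, thr, sign) = best_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = sign * np.where(X[:, f] > thr, 1.0, -1.0)
    w *= np.exp(-alpha * y * pred)           # up-weight misclassified samples
    w /= w.sum()
    stumps.append((f, thr, sign))
    alphas.append(alpha)

score = sum(a * s * np.where(X[:, f] > t, 1.0, -1.0)
            for a, (f, t, s) in zip(alphas, stumps))
acc = (np.sign(score) == y).mean()
```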
    • Open Access Article

      16 - A Global-Local Noise Removal Approach to Remove High Density Impulse Noise
      Ali Mohammad Fotouhi Samane Abdoli Vahid Keshavarzi
      Impulse noise removal from images is one of the most important concerns in digital image processing. Noise must be removed in a way that preserves the main and important information of the image. Traditionally, the median filter has been the standard way to deal with impulse noise; however, the image quality obtained at high noise densities is not desirable. The aim of this paper is to propose an algorithm that improves the performance of the adaptive median filter in removing high-density impulse noise from digital images. The proposed method consists of two main stages, noise detection and noise removal. Noise detection includes a global and a local phase, and noise removal is likewise based on a two-phase algorithm. Global noise detection is done by a pixel classification approach in each block of the image, and local noise detection is performed by automatically determining two threshold values in each block. In the noise removal stage, only the noisy pixels detected in the first stage are processed, by estimating the noise density and applying an adaptive median filter over the noise-free pixels in the neighborhood. Experimental results obtained on standard images, compared with other methods, demonstrate the success of the proposed algorithm.
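A simplified version of the two-stage scheme, extreme-value noise detection followed by median replacement over noise-free neighbours in a window that grows as needed, on a toy gradient image with salt-and-pepper noise; the block-wise threshold estimation of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(12)
img = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))      # toy clean image
noisy = img.copy()
mask = rng.random(img.shape) < 0.4                     # 40% impulse noise
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())  # salt & pepper

# stage 1: detect noise candidates (extreme values only, a simple stand-in
# for the paper's global/local threshold detection)
detected = (noisy == 0.0) | (noisy == 1.0)

# stage 2: replace each detected pixel with the median of the noise-free
# pixels in a window that grows until enough clean neighbours are found
out = noisy.copy()
for i, j in zip(*np.nonzero(detected)):
    for r in range(1, 8):
        win = noisy[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
        good = win[~((win == 0.0) | (win == 1.0))]
        if good.size >= 3:
            out[i, j] = np.median(good)
            break

mae_noisy = np.abs(noisy - img).mean()
mae_out = np.abs(out - img).mean()
```

Filtering only the detected pixels, using only clean neighbours, is what keeps edges and fine detail intact at high noise densities.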
    • Open Access Article

      17 - Automatic Facial Emotion Recognition Method Based on Eye Region Changes
      Mina Navraan Charkari Muharram Mansoorizadeh
      Emotion is expressed via facial muscle movements, speech, body and hand gestures, and various biological signals such as the heart beat. However, the most natural way that humans display emotion is facial expression, and facial expression recognition has been a great challenge in the area of computer vision for the last two decades. This paper focuses on facial expression to identify the seven universal human emotions, i.e. anger, disgust, fear, happiness, sadness, surprise, and neutral. Unlike the majority of other approaches, which use the whole face or regions of interest, we restrict our facial emotion recognition (FER) method to analyzing human emotional states based on eye region changes. The reason for using this region is that the eye region is one of the most informative regions for representing facial expression; furthermore, it leads to a lower feature dimension as well as lower computational complexity. The facial expressions are described by appearance features obtained from textures encoded with Gabor filters, together with geometric features. A Support Vector Machine with RBF and polynomial kernel functions is used for proper classification of the different types of emotions. The Facial Expressions and Emotion Database (FG-Net), which contains spontaneous emotions, and the Cohn-Kanade (CK) database, with posed emotions, were used in the experiments. The proposed method was trained on the two databases separately and achieved accuracy rates of 96.63% for spontaneous emotion recognition and 96.6% for posed expression recognition, respectively.
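The appearance-feature step can be sketched by building a Gabor kernel and collecting filter-response energies at a few orientations over an eye-region patch; the kernel parameters and the random patch are assumptions, and the geometric features and SVM stage are omitted.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0):
    # real part of a Gabor filter: a Gaussian-windowed oriented cosine
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

rng = np.random.default_rng(13)
eye = rng.random((32, 32))              # stand-in for an eye-region patch

# appearance features: mean filter-response magnitude at several orientations
feats = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    k = gabor_kernel(theta=theta)
    resp = np.abs(np.fft.ifft2(np.fft.fft2(eye, (46, 46)) *
                               np.fft.fft2(k, (46, 46))))  # FFT convolution
    feats.append(resp.mean())
feature_vector = np.array(feats)
```

Restricting this filtering to the eye region rather than the whole face is what keeps the feature dimension, and hence the SVM training cost, low.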
    • Open Access Article

      18 - High-Resolution Fringe Pattern Phase Extraction, Placing a Focus on Real-Time 3D Imaging
      Amir Hooshang  Mazinan Ali  Esmaeili
      The idea behind this research is to deal with real-time 3D imaging, which is widely applied in the fields of medical science and engineering in general. Among the most effective non-contact measurement techniques are structured light patterns, projected onto the surface of an object in order to acquire its 3D depth. The traditional structured light pattern is known as the fringe pattern. In this study, the conventional approaches to fringe pattern analysis with applications to 3D imaging, such as the wavelet and Fourier transforms, are investigated. In most of these approaches, in addition to a frequency estimation algorithm, a separate unwrapping algorithm is needed to extract the phase coherently. Considering the problems regarding phase unwrapping of fringe patterns surveyed in the literature, a state-of-the-art approach is proposed here. In the proposed approach, the key characteristics of the conventional algorithms, such as frequency estimation and the Itoh algorithm, are combined. Finally, the results obtained through simulation reveal that the proposed approach is able to extract the image phase of simulated fringe patterns, and correspondingly of realistic patterns, with high quality. Another advantage of the investigated approach is its suitability for real-time application, since a significant part of the operations can be executed in parallel.
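The Fourier-plus-Itoh pipeline sketched in the abstract can be demonstrated on a 1-D fringe with numpy alone; the carrier frequency, band-pass limits, and test phase below are illustrative assumptions (note that `np.unwrap` implements exactly Itoh's integrate-the-wrapped-differences rule).

```python
import numpy as np

# Simulated 1-D fringe: carrier cos(2*pi*f0*x + phi(x)) with a smooth phase term.
x = np.arange(512) / 512
f0 = 32.0                                   # carrier frequency (assumed known)
phi_true = 6 * np.sin(2 * np.pi * x)        # phase to recover
fringe = np.cos(2 * np.pi * f0 * x + phi_true)

# Fourier method: isolate the positive-frequency lobe, shift it to baseband.
spectrum = np.fft.fft(fringe)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
lobe = spectrum * ((freqs > f0 / 2) & (freqs < 2 * f0))   # crude band-pass
analytic = np.fft.ifft(lobe) * np.exp(-2j * np.pi * f0 * x)

# Itoh unwrapping: integrate the wrapped phase differences.
phi_est = np.unwrap(np.angle(analytic))
phi_est -= phi_est.mean() - phi_true.mean()   # remove the arbitrary constant offset

print(np.max(np.abs(phi_est - phi_true)) < 0.1)
```

For a 2-D fringe image the same steps apply row by row, which is also where the parallelism mentioned above comes from.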
    • Open Access Article

      19 - An Efficient Noise Removal Edge Detection Algorithm Based on Wavelet Transform
      Ehsan Ehsaeian
      In this paper, we propose an efficient noise-robust edge detection technique based on odd Gaussian derivatives in the wavelet transform domain. At first, new basis wavelet functions are introduced and the proposed algorithm is explained. The algorithm consists of two stages. The first idea comes from multiplying the responses across derivative scales, and the second is a pruning algorithm that removes false edges. Our method is applied to binary and natural grayscale images in noise-free and noisy conditions with different noise power densities. The results are compared with the traditional wavelet edge detection method, both visually and through the statistical data in the relevant tables. With a proper selection of the wavelet basis function, an admissible edge response with significantly suppressed noise is obtained without any smoothing technique, and some of the edge detection criteria are improved. The experimental visual and statistical results show that our method is robust and has good edge detection performance, in particular under high noise contamination. Moreover, to obtain better results and further improve the edge detection criteria, a pruning algorithm is introduced as a post-processing stage and applied to the binary and grayscale images. The obtained results verify that the proposed scheme can detect reasonable edge features and dilute the noise effect properly.
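The response-multiplication idea can be shown in one dimension: multiplying first-derivative-of-Gaussian responses at two scales reinforces a true edge while suppressing noise peaks that do not persist across scales. The scales and noise level below are illustrative, not the paper's.

```python
import numpy as np

def gauss_deriv(sigma, radius=None):
    """First derivative of a Gaussian, used as an odd (antisymmetric) wavelet."""
    radius = radius or int(4 * sigma)
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    return -t / sigma**2 * g / g.sum()

# Noisy step edge at sample 128.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(128), np.ones(128)]) + 0.1 * rng.standard_normal(256)

# Responses at two scales; their product keeps features present at both.
r1 = np.convolve(signal, gauss_deriv(1.0), mode="same")
r2 = np.convolve(signal, gauss_deriv(2.0), mode="same")
product = r1 * r2

edge = int(np.argmax(np.abs(product)))
print(edge)   # near 128
```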
    • Open Access Article

      20 - Speech Emotion Recognition Based on Fusion Method
      Sara Motamed Saeed Setayeshi Azam Rabiee Arash  Sharifi
      Speech emotion signals are the quickest and most natural medium in human relationships, leading researchers to develop speech emotion recognition as a quick and efficient technique for communication between man and machine. This paper introduces a new classification method using a multi-constraints partitioning approach on emotional speech signals. To classify emotional speech signals, feature vectors are extracted using Mel-frequency cepstrum coefficients (MFCC), autocorrelation function coefficients (ACFC), and a combination of these two models. This study investigates how the number of features and the fusion method affect the rate of emotional speech recognition. The proposed model has been compared with an MLP recognition model. Results reveal that the proposed algorithm has a powerful capability to identify and explore human emotion.
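A minimal sketch of the ACFC extraction and feature-level fusion, assuming fusion by concatenation; the MFCC vector is left as a placeholder, since a full mel filterbank implementation is beyond the scope of a sketch.

```python
import numpy as np

def acf_coefficients(frame, n_coeffs=12):
    """Normalized autocorrelation function coefficients (ACFC) of one frame."""
    frame = frame - frame.mean()
    full = np.correlate(frame, frame, mode="full")
    acf = full[full.size // 2:]          # non-negative lags
    acf = acf / acf[0]                   # normalize so lag 0 == 1
    return acf[1:n_coeffs + 1]

def fuse(*feature_vectors):
    """Feature-level fusion by simple concatenation (one common fusion choice)."""
    return np.concatenate(feature_vectors)

frame = np.sin(2 * np.pi * 120 * np.arange(400) / 16000)   # 25 ms voiced-like frame
acfc = acf_coefficients(frame)
mfcc_stub = np.zeros(13)   # placeholder: real MFCCs come from a mel filterbank + DCT
print(fuse(acfc, mfcc_stub).shape)   # (25,)
```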
    • Open Access Article

      21 - The Separation of Radar Clutters using Multi-Layer Perceptron
      Mohammad Akhondi Darzikolaei Ataollah Ebrahimzadeh Elahe Gholami
      Clutter usually has a negative influence on the detection performance of radars, so the recognition of clutter is crucial for detecting targets, and its role in detection cannot be ignored. The design of radar detectors and clutter classifiers is a genuinely complicated issue. Therefore, this paper aims to classify radar clutter. A novel MLP-based classifier for separating radar clutters is introduced. This classifier is designed with different numbers of hidden layers and five training algorithms: Levenberg-Marquardt, conjugate gradient, resilient back-propagation, BFGS, and one-step secant. Statistical distributions are established models that are widely used in performance calculations for radar clutter. Hence, in this research, Rayleigh, log-normal, Weibull, and K-distribution clutters are used as input data. Burg's reflection coefficients, skewness, and kurtosis are the three features applied to extract the best characteristics of the input data. In the next step, the proposed classifier is tested under different conditions, and the results show that the proposed MLP-based classifier is very successful and can distinguish clutters with high accuracy. A comparison of the proposed technique with an RBF-based classifier shows that the proposed method is more efficient. The simulation results prove the validity of the MLP-based method.
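The feature idea can be illustrated by generating two of the clutter models named above and comparing their higher-order statistics; the distribution parameters are illustrative. Heavier-tailed clutter shows larger skewness and kurtosis, which is what gives a classifier separable features.

```python
import numpy as np

def skewness(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**3)

def kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**4)

rng = np.random.default_rng(1)
n = 20000

# Two of the clutter models above (parameters are illustrative).
rayleigh = rng.rayleigh(scale=1.0, size=n)
lognormal = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# The heavier-tailed log-normal clutter has larger skewness and kurtosis.
print(skewness(rayleigh) < skewness(lognormal))
print(kurtosis(rayleigh) < kurtosis(lognormal))
```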
    • Open Access Article

      22 - A New Calibration Method for SAR Analog-to-Digital Converters Based on All Digital Dithering
      Ebrahim Farshidi Shabnam Rahbar
      In this paper, a new digital background calibration method for successive approximation register (SAR) analog-to-digital converters is presented. In the developed method, a perturbation signal is added and a digital offset is injected. One of the main advantages of this work is that it is implemented entirely in the digital domain and eliminates the nonlinear errors caused by mismatch between the converter's capacitors by correcting their relative weights. This digital dithering method does not require extra capacitors or two independent converters, and it therefore avoids the mismatches introduced by such added elements. Also, no extra calibration overhead for complicated mathematical calculation is needed. Unlike split calibration, it does not need two independent converters to produce two separate signal paths; it has just one capacitor array, which allows a simple architecture. Furthermore, to improve DNL and INL and to correct missing-code errors, a sub-radix-2 structure is used in the converter. The proposed calibration method is implemented in a 10-bit, radix-1.87 SAR converter. Simulation results in MATLAB show great improvement in the static and dynamic characteristics of the analog-to-digital converter after calibration, so the method can be used for the calibration of SAR analog-to-digital converters.
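A behavioral sketch of a sub-radix-2 SAR conversion with radix 1.87 may clarify the structure. The weights here are ideal; in the paper's method the digital reconstruction weights would additionally be corrected by the background calibration, which is not modeled in this sketch.

```python
import numpy as np

def sar_convert(vin, n_bits=10, radix=1.87, vref=1.0):
    """Successive approximation with sub-radix-2 (redundant) bit weights."""
    weights = vref * radix ** -np.arange(1, n_bits + 1)   # w_i = vref * radix^-i
    bits, residue = [], vin
    for w in weights:
        bit = int(residue >= 0)          # comparator decision
        bits.append(bit)
        residue -= w if bit else -w      # add/subtract the trial weight
    return bits, weights

def reconstruct(bits, weights):
    """Digital reconstruction: signed sum of the (possibly calibrated) weights."""
    return sum(w if b else -w for b, w in zip(bits, weights))

vin = 0.3217
bits, weights = sar_convert(vin)
print(abs(reconstruct(bits, weights) - vin) < 0.01)
```

Because the radix is below 2, consecutive decision ranges overlap, so a wrong comparator decision can still be recovered by the later, redundant steps; this is the property that makes digital weight correction possible.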
    • Open Access Article

      23 - SSIM-Based Fuzzy Video Rate Controller for Variable Bit Rate Applications of Scalable HEVC
      Farhad Raufmehr Mehdi Rezaei
      Scalable High Efficiency Video Coding (SHVC) is the scalable extension of the latest video coding standard, H.265/HEVC. Rate control algorithms are out of the scope of video coding standards; appropriate algorithms are designed for various applications to meet practical constraints such as bandwidth and buffering. In most scalable video applications, such as video on demand (VoD) and broadcasting, encoded bitstreams with variable bit rates are preferred to bitstreams with constant bit rates. In variable bit rate (VBR) applications, the tolerable delay is relatively high; therefore, we utilize a larger buffer to allow more variation in bitrate and provide smooth, high visual quality in the output video. In this paper, we propose a fuzzy video rate controller appropriate for VBR applications of SHVC. A fuzzy controller is used for each layer of the scalable video to minimize the fluctuation of QP at the frame level while the buffering constraint is obeyed for any number of layers received by a decoder. The proposed rate controller utilizes the well-known structural similarity index (SSIM) as a quality metric to increase the visual quality of the output video. The proposed rate control algorithm is implemented in the HEVC reference software, and comprehensive experiments are executed to tune the fuzzy controllers and to evaluate the performance of the algorithm. Experimental results show a high performance for the proposed algorithm in terms of rate control, visual quality, and rate-distortion performance.
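The SSIM quality metric driving the controller can be sketched in its global, single-window form; the standard index averages local 11x11 windows, so this simplification is ours, and the frame data below is synthetic.

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global SSIM index (single window; the standard uses local 11x11 windows)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, size=(64, 64))            # stand-in for a reference frame
degraded = ref + rng.normal(0, 20, size=ref.shape)  # stand-in for a coded frame

print(round(ssim(ref, ref), 3))        # 1.0: identical frames
print(ssim(ref, degraded) < 1.0)       # coding distortion lowers SSIM
```

In the controller, per-frame SSIM values like these would feed the fuzzy rules alongside the buffer state.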
    • Open Access Article

      24 - Wavelet-based Bayesian Algorithm for Distributed Compressed Sensing
      Razieh Torkamani Ramezan Ali Sadeghzadeh
      The emerging field of compressive sensing (CS) enables the reconstruction of a signal from a small set of linear projections. Traditional CS deals with a single signal, while multiple signals can be jointly reconstructed via a distributed CS (DCS) algorithm. The DCS inversion method exploits both inter- and intra-signal correlations via joint sparsity models (JSM). Since the wavelet coefficients of many signals are sparse, in this paper the wavelet transform is used as the sparsifying transform, and a new wavelet-based Bayesian DCS algorithm (WB-DCS) is proposed, which takes into account the inter-scale dependencies among the wavelet coefficients via a hidden Markov tree (HMT) model, as well as the inter-signal correlations. This paper uses a Bayesian procedure to statistically model these correlations via prior distributions. In this work, a type-1 JSM (JSM-1) signal model is used for jointly sparse signals, in which every sparse coefficient vector is considered the sum of a common component and an innovation component. In order to jointly reconstruct multiple sparse signals, a centralized approach is used, in which all the data is processed in a fusion center (FC). A variational Bayes (VB) procedure is used to infer the posterior distributions of the unknown variables. Simulation results demonstrate that exploiting the structure within the wavelet coefficients provides superior performance in terms of average reconstruction error and structural similarity index.
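The JSM-1 signal model can be sketched directly: each signal is a common sparse component plus a per-signal sparse innovation, and each sensor takes a small set of linear projections. The sizes, sparsity levels, and shared projection matrix below are illustrative assumptions; the Bayesian inversion itself is beyond a sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k_common, k_innov, n_signals, m = 256, 8, 4, 3, 96

# JSM-1: every signal = common sparse component + its own sparse innovation.
common = np.zeros(n)
common[rng.choice(n, k_common, replace=False)] = rng.standard_normal(k_common)

signals, measurements = [], []
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random projection matrix
for _ in range(n_signals):
    innovation = np.zeros(n)
    innovation[rng.choice(n, k_innov, replace=False)] = rng.standard_normal(k_innov)
    x = common + innovation
    signals.append(x)
    measurements.append(Phi @ x)                   # m << n linear projections

# The shared component makes the signals correlated -- what DCS exploits.
print(round(np.corrcoef(signals[0], signals[1])[0, 1], 2))
```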
    • Open Access Article

      25 - Reliability Analysis of the Sum-Product Decoding Algorithm for the PSK Modulation Scheme
      Hadi Khodaei Jooshin Mahdi Nangir
      Iterative decoding and reconstruction of encoded data have received considerable attention in recent decades. Most of these iterative schemes are based on graphical codes: messages are passed over sparse graphs to reach a reliable belief about the original data. This paper presents a performance analysis of Low-Density Parity-Check (LDPC) code design methods, which approach the capacity of the Additive White Gaussian Noise (AWGN) channel model. We investigate the reliability of the system under Phase Shift Keying (PSK) modulation, and study the effects of varying the codeword length, the rate of the LDPC parity-check matrix, and the number of iterations in the Sum-Product Algorithm (SPA). By employing an LDPC encoder before the PSK modulation block and the SPA in the decoding part, the Bit Error Rate (BER) performance of the PSK modulation system can be improved significantly. The BER improvement of a point-to-point communication system is measured in different cases. Our analysis is applicable to any other iterative message-passing algorithm. The code design process and the parameter selection of the encoding and decoding algorithms are accomplished by considering the hardware limitations of a communication system; our results help to design systems and select parameters efficiently.
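A minimal log-domain sum-product decoder over a toy parity-check matrix illustrates the message passing; the matrix, noise level, and received vector below are illustrative and far smaller than any real LDPC code.

```python
import numpy as np

# Tiny parity-check matrix (illustrative, not a real LDPC design).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def spa_decode(llr_ch, H, n_iter=10):
    """Log-domain sum-product decoding; returns hard bit decisions."""
    msg_vc = np.where(H, llr_ch, 0.0)                 # variable-to-check messages
    for _ in range(n_iter):
        # Check-to-variable update (tanh rule), excluding the target edge.
        t = np.where(H, np.tanh(msg_vc / 2), 1.0)
        prod = t.prod(axis=1, keepdims=True)
        extrinsic = np.clip(prod / t, -0.999999, 0.999999)
        msg_cv = 2 * np.arctanh(extrinsic) * H
        # Variable-to-check update: channel LLR plus all other check messages.
        total = llr_ch + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H
    return (total < 0).astype(int)                    # LLR < 0 -> bit 1

# All-zero codeword over BPSK (bit 0 -> +1); the third symbol is badly corrupted.
sigma = 0.6
y = np.array([1.1, 0.9, -0.2, 1.0, 0.8, 1.2])
llr_ch = 2 * y / sigma**2                             # channel LLRs
print(spa_decode(llr_ch, H))   # the corrupted bit is corrected
```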
    • Open Access Article

      26 - Denoising and Enhancement Speech Signal Using Wavelet
      Meriane Brahim
      Speech enhancement aims to improve the quality and intelligibility of speech using various techniques and algorithms. A speech signal is always accompanied by background noise, so speech and communication processing systems must apply effective noise reduction techniques in order to extract the desired speech signal from its corrupted version. In this project we study the wavelet and the wavelet transform, and the possibility of employing them in the processing and analysis of the speech signal in order to enhance the signal and remove noise from it. We present different algorithms that depend on the wavelet transform and the mechanism for applying them in order to remove noise from speech, and we compare the results of these algorithms with some traditional speech enhancement algorithms. The basic principles of the wavelet transform are presented as an alternative to the Fourier transform and its windowed variants. The practical results obtained are based on processing a large database of speech signals polluted with various noises at many SNRs. This article is intended as an extension of practical research into improving the speech signal for hearing aid purposes, and also examines the main frequencies of letters and their uses in intelligent systems, such as voice control systems.
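The thresholding approach common to wavelet speech denoisers can be sketched with a one-level Haar transform and soft thresholding under the universal threshold. The clean test signal, the noise level, and the assumption that the noise standard deviation is known are all illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(noisy, threshold):
    a, d = haar_dwt(noisy)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)   # soft thresholding
    return haar_idwt(a, d)

rng = np.random.default_rng(5)
t = np.arange(1024) / 8000.0                       # 128 ms at 8 kHz
clean = np.sin(2 * np.pi * 200 * t)                # stand-in for a voiced segment
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Universal threshold sigma * sqrt(2 ln N), with sigma assumed known here.
den = denoise(noisy, 0.3 * np.sqrt(2 * np.log(noisy.size)))
err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((den - clean) ** 2)
print(err_den < err_noisy)
```

Practical denoisers iterate this over several decomposition levels and estimate the noise level from the finest detail band instead of assuming it.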