List of subject articles: Pervasive Computing


    • Open Access Article

      1 - A New Method for Transformation Techniques in Secure Information Systems
      Hojatallah Hamidi
      The transformation technique relies on the comparison of parity values computed in two ways. Fault detection structures are developed that not only detect subsystem faults but also correct faults introduced in the data processing system. Concurrent parity value techniques are very useful in detecting numerical errors in data processing operations, where a single fault can propagate to many output faults. Parity values are among the most effective tools for detecting faults occurring in the code stream. In this paper, we present a methodology for redundant systems that allows faults to be detected. Checkpointing is the typical technique for tolerating such faults, and this paper presents a checkpointing approach that operates on encoded data. The advantage of this method is that it achieves very low overhead according to the specific characteristics of an application. The numerical results of the multiple checkpointing technique confirm that it is more efficient and reliable, in part by distributing the process of checkpointing over groups of processors. This technique has been shown to improve both the reliability of the computation and the performance of the checkpointing.
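
      A minimal Python sketch of the parity-comparison idea with checkpoint rollback, assuming XOR parity over data words and a toy linear processing step; the function names and fault model are illustrative, not the paper's implementation.

          import copy

          def parity(words):
              # Reference parity: XOR-fold of all data words.
              p = 0
              for w in words:
                  p ^= w
              return p

          def process(words):
              # Toy processing step; for a linear operation like this, the
              # output parity can also be predicted directly from the input
              # parity, giving the two independently computed parity values.
              return [w ^ 0x5A for w in words]

          def predicted_parity(in_parity, n_words):
              # XOR-with-constant flips the parity by the constant only for
              # an odd number of words.
              return in_parity ^ (0x5A if n_words % 2 else 0)

          def run_with_checkpoint(words):
              checkpoint = copy.deepcopy(words)        # checkpoint before step
              out = process(words)
              if parity(out) != predicted_parity(parity(words), len(words)):
                  # Mismatch: a fault was detected; roll back and re-execute.
                  out = process(copy.deepcopy(checkpoint))
              return out

          print(run_with_checkpoint([0x10, 0x22, 0x33]))   # [74, 120, 105]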
    • Open Access Article

      2 - A New Upper Bound for Free Space Optical Channel Capacity Using a Simple Mathematical Inequality
      Arezu Rezazadeh Ghosheh Abed Hodtani
      In this paper, by using a simple mathematical inequality, we derive a new upper bound for the capacity of the free space optical channel in the coherent case. Then, by applying a general fading distribution, we obtain an upper bound for the mutual information in the non-coherent case. Finally, we derive the corresponding optimal input distributions for both the coherent and non-coherent cases, compare the results with previous works numerically, and illustrate that our results subsume some previous results as special cases.
    • Open Access Article

      3 - Achieving Better Performance of S-MMA Algorithm in the OFDM Modulation
      Saeed Ghazi-Maghrebi Babak Haji Bagher Naeeni Mojtaba Lotfizad
      Effective algorithms in modern digital communication systems provide a fundamental basis for increasing the efficiency of application networks, which are in many cases neither optimized nor very close to their practical limits. Equalization is one of the preferred methods for increasing the efficiency of application systems such as orthogonal frequency division multiplexing (OFDM). In this paper, we study the possibility of improving OFDM modulation by employing sliced multi-modulus algorithm (S-MMA) equalization. We compare applying the least mean square (LMS), multi-modulus algorithm (MMA) and S-MMA equalizations to per tone equalization in OFDM modulation. The paper's contribution lies in using the S-MMA technique for weight adaptation to decrease the BER in OFDM multicarrier modulation. For more efficiency, it is assumed that the channel impulse response is longer than the cyclic prefix (CP) length; as a result, the system is more efficient, but at the expense of high intersymbol interference (ISI) impairment. Both analysis and simulations demonstrate better performance of the S-MMA compared to the LMS and MMA algorithms, in standard channels with additive white Gaussian noise (AWGN) and ISI impairment simultaneously. Therefore, S-MMA equalization is a good choice for high speed and real-time applications such as OFDM based systems.
    • Open Access Article

      4 - A Conflict Resolution Approach using Prioritization Strategy
      Hojjat Emami Kamyar Narimanifar
      In the current air traffic control system, and especially in the free flight method, the resolution of conflicts between different aircraft is a critical problem. In recent years, the conflict detection and resolution problem has been an active and hot research topic in the aviation industry. In this paper, we map the aircraft conflict resolution process to the graph coloring problem and then use a prioritization method to solve it. Valid and optimal solutions for the corresponding graph are equivalent to conflict-free flight plans for the aircraft in the airspace. The proposed prioritization method is based on several score allocation metrics. After the score allocation process, the higher the score of an aircraft, the higher its priority, and vice versa. We implemented and tested our proposed method on different test cases, and the test results indicate the high efficiency of this method.
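
      A minimal sketch of the graph coloring view, assuming a greedy coloring in which higher-score aircraft are colored first; the conflict graph and scores are illustrative, and the paper's score allocation metrics are not reproduced here.

          def priority_coloring(conflicts, scores):
              # conflicts: aircraft id -> set of conflicting aircraft ids.
              # scores:    aircraft id -> priority score (higher = first).
              # Returns aircraft id -> color; one color corresponds to one
              # conflict-free maneuver/flight-plan slot.
              order = sorted(conflicts, key=lambda a: scores[a], reverse=True)
              color = {}
              for a in order:
                  used = {color[b] for b in conflicts[a] if b in color}
                  c = 0
                  while c in used:
                      c += 1
                  color[a] = c
              return color

          conflicts = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}
          scores = {1: 9.0, 2: 4.0, 3: 7.5, 4: 2.0}
          print(priority_coloring(conflicts, scores))  # {1: 0, 3: 1, 2: 1, 4: 0}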
    • Open Access Article

      5 - A Basic Proof Method for the Verification, Validation and Evaluation of Expert Systems
      Armin Ghasem Azar Zohreh Mohammad Alizadeh
      In the present paper, a basic proof method is provided for representing the verification, validation and evaluation of expert systems. The result provides an overview of the basic method for formal proof: partition larger systems into smaller subsystems, prove correctness of the small subsystems by non-recursive means, and prove that the correctness of all subsystems implies the correctness of the entire system.
    • Open Access Article

      6 - Prediction of Deadlocks in Concurrent Programs Using Neural Network
      Elmira Hasanzad Babamir
      The dependability of concurrent programs is usually limited by concurrency errors like deadlocks and data races in the allocation of resources. Deadlocks are difficult to find during program testing because they happen under very specific thread or process scheduling and environmental conditions. In this study, we extended our previous approach for online potential deadlock detection in resources allocated by multithreaded programs. Our approach is based on reasoning about deadlock possibility using prediction of the future behavior of threads. Due to their nondeterministic nature, the future behavior of multithreaded programs cannot, in most cases, be easily specified. Before prediction, the behavior of threads should be translated into a predictable format. A time series is our choice for this conversion, because many statistical and artificial intelligence techniques can be applied to predict the future members of a time series. Among the prediction techniques, artificial neural networks have shown applicable performance and flexibility in predicting the complex behavioral patterns that are the most usual cases in real-world applications. Our model focuses on multithreaded programs which use locks to allocate resources. The proposed model was used for deadlock prediction in resources allocated by multithreaded Java programs, and the results were evaluated.
    • Open Access Article

      7 - Network RAM Based Process Migration for HPC Clusters
      Hamid Sharifian msharifi
      Process migration is critical to the dynamic balancing of workloads on cluster nodes in any high performance computing cluster to achieve high overall throughput and performance. Most existing process migration mechanisms are, however, unsuccessful in achieving this goal properly because they either allow only once-only migration of processes or have complex implementations of address space transfer that degrade process migration performance. We propose a new process migration mechanism for HPC clusters that allows multiple migrations of each process by using the network RAM feature of clusters to transfer the address spaces of processes upon their multiple migrations. We show experimentally that the superiority of our proposed mechanism in attaining higher performance compared to existing comparable mechanisms is due to effective management of residual data dependencies.
    • Open Access Article

      8 - Accurate Fire Detection System for Various Environments using Gaussian Mixture Model and HSV Space
      Khosro Rezaee Seyed Jalaleddin Mousavirad Mohammad Rasegh Ghezelbash Javad Haddadnia
      Smart and timely detection of fire can be very useful in coping with this phenomenon and inhibiting it. This paper addresses fire detection by enhancing several image analysis methods, such as converting the RGB image to HSV, smart selection of the threshold in fire separation, a Gaussian mixture model, and forming a polygon around the enclosed area resulting from edge detection combined with the original image. Accuracy and precision in performance and rapid detection of fire are among the features that distinguish the proposed system from similar fire detection systems such as the Markov model, GM, DBFIR and other algorithms introduced in published articles. The average accuracy (95%) obtained from testing 35000 frames in different fire environments and the high sensitivity (96%) are quite significant. Not only can this system be regarded as a reliable and suitable alternative to the sensor sets used in residential areas, but its high speed image processing and accurate detection of fire in wide areas also make it low cost, reliable and appropriate.
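
      A minimal sketch of the HSV thresholding stage only, assuming flame pixels fall in a red-to-yellow hue band with high saturation and brightness; the band limits and the helper name fire_mask are illustrative, and the Gaussian mixture and polygon stages are not shown.

          import numpy as np
          from matplotlib.colors import rgb_to_hsv

          def fire_mask(rgb_frame, sat_min=0.3, val_min=0.5):
              # rgb_frame: float array (H, W, 3), values in [0, 1].
              # matplotlib scales hue to [0, 1], so 60 degrees -> 1/6.
              hsv = rgb_to_hsv(rgb_frame)
              h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
              hue_ok = (h < 60 / 360) | (h > 350 / 360)   # red..yellow band
              return hue_ok & (s > sat_min) & (v > val_min)

          # Tiny synthetic frame: one orange "flame" pixel, one gray pixel.
          frame = np.array([[[1.0, 0.5, 0.0], [0.5, 0.5, 0.5]]])
          print(fire_mask(frame))   # [[ True False]]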
    • Open Access Article

      9 - Multimodal Biometric Recognition Using Particle Swarm Optimization-Based Selected Features
      Sara Motamed Ali Broumandnia Azam Sadat Nourbakhsh
      Feature selection is one of the best optimization problems in human recognition: it reduces the number of features, removes noise and redundant data in images, and results in a high recognition rate. This step affects the performance of a human recognition system. This paper presents a multimodal biometric verification system based on two features, palm and ear, which has emerged as one of the most extensively studied research topics spanning multiple disciplines such as pattern recognition, signal processing and computer vision. We also present a novel feature selection algorithm based on Particle Swarm Optimization (PSO). PSO is a computational paradigm based on the idea of collaborative behavior inspired by the social behavior of bird flocking or fish schooling. In this method, we used two transform techniques: the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The identification process can be divided into the following phases: capturing the image; pre-processing; extracting and normalizing the palm and ear images; feature extraction; matching and fusion; and finally, a decision based on PSO and GA classifiers. The system was tested on a database of 60 people (240 palm and 180 ear images). Experimental results show that the PSO-based feature selection algorithm generates excellent recognition results with a minimal set of selected features.
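
      A minimal binary PSO feature selection sketch, assuming the usual sigmoid position-update rule; the fitness here is a toy class-separation score standing in for the recognizer's accuracy, and all constants and data are illustrative.

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy data: 40 samples, 8 features; only features 0-2 are informative.
          y = rng.integers(0, 2, 40)
          X = rng.normal(size=(40, 8))
          X[:, :3] += 2.0 * y[:, None]

          def fitness(bits):
              mask = bits.astype(bool)
              if not mask.any():
                  return -1.0
              Xs = X[:, mask]
              # Distance between class means, minus a small per-feature cost;
              # a real system would use recognition accuracy here instead.
              d = np.linalg.norm(Xs[y == 0].mean(0) - Xs[y == 1].mean(0))
              return d - 0.05 * mask.sum()

          n, dim = 12, 8
          pos = (rng.random((n, dim)) < 0.5).astype(float)
          vel = rng.normal(scale=0.1, size=(n, dim))
          pbest = pos.copy()
          pbest_fit = np.array([fitness(p) for p in pos])
          gbest = pbest[pbest_fit.argmax()].copy()

          for _ in range(30):
              r1, r2 = rng.random((2, n, dim))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = (rng.random((n, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
              fit = np.array([fitness(p) for p in pos])
              better = fit > pbest_fit
              pbest[better], pbest_fit[better] = pos[better], fit[better]
              gbest = pbest[pbest_fit.argmax()].copy()

          print("selected features:", np.flatnonzero(gbest))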
    • Open Access Article

      10 - Performance Analysis of SVM-Type Per Tone Equalizer Using Blind and Radius Directed Algorithms for OFDM Systems
      Babak Haji Bagher Naeeni
      In this paper, we present Support Vector Machine (SVM)-based blind per tone equalization for OFDM systems. Blind per tone equalization using the Constant Modulus Algorithm (CMA) and the Multi-Modulus Algorithm (MMA) is used as the comparison benchmark. The SVM-based cost function utilizes a CMA-like error function, and the solution is obtained by means of an Iterative Re-Weighted Least Squares (IRWLS) algorithm. Moreover, like CMA, the error function allows extending the method to multilevel modulations. In this case, a dual mode algorithm is proposed. Dual mode equalization techniques are commonly used in communication systems working with multilevel signals. Practical blind algorithms for multilevel modulation are able to open the eye of the constellation, but they usually exhibit a high residual error. In a dual mode scheme, once the eye is opened by the blind algorithm, the system switches to another algorithm, which is able to obtain a lower residual error under a suitable initial ISI level. Simulation experiments show that the performance of blind per tone equalization using support vector machines is better than that of blind per tone equalization using CMA and MMA, from the viewpoint of average Bit-Error Rate (BER).
    • Open Access Article

      11 - Latent Feature Based Recommender System for Learning Materials Using Genetic Algorithm
      Mojtaba Salehi
      With the explosion of learning materials available on personal learning environments (PLEs) in recent years, it is difficult for learners to discover the most appropriate materials using keyword searching. Recommender systems (RSs), which are used to support the activity of learners in a PLE, can deliver suitable material to learners. This technology suffers from the cold-start and sparsity problems. On the other hand, most research has paid little attention to the latent features of products. To improve the quality of recommendations and alleviate the sparsity problem, this research proposes a latent feature based recommendation approach. Since there usually is not adequate information about the observed features of learners and materials, latent features are introduced to address the sparsity problem. First, a preference matrix (PM) is used to model the interests of each learner based on the latent features of learning materials in a multidimensional information model. Then, we use a genetic algorithm (GA) as a supervised learning task whose fitness function is the mean absolute error (MAE) of the RS. The GA optimizes the latent feature weights for each learner based on his or her historical ratings. The method outperforms previous algorithms on accuracy measures and can alleviate the sparsity problem. The main contributions are the optimization of latent feature weights using a genetic algorithm and alleviation of the sparsity problem to improve the quality of recommendation.
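
      A minimal sketch of the GA step, assuming predicted ratings are a weighted combination of latent features and fitness is the MAE against historical ratings; the population size, operators and data are illustrative.

          import numpy as np

          rng = np.random.default_rng(1)

          F = rng.random((50, 6))                    # latent features of 50 materials
          w_true = np.array([0.9, 0.1, 0.7, 0.0, 0.4, 0.2])
          r = F @ w_true + rng.normal(scale=0.05, size=50)   # historical ratings

          def mae(w):
              # Fitness: mean absolute error of the weighted-feature predictor.
              return np.mean(np.abs(F @ w - r))

          pop = rng.random((30, 6))                  # population of weight vectors
          for _ in range(80):
              order = np.argsort([mae(w) for w in pop])
              parents = pop[order[:10]]              # truncation selection
              kids = []
              while len(kids) < 20:
                  a, b = parents[rng.integers(0, 10, 2)]
                  cut = rng.integers(1, 6)
                  child = np.concatenate([a[:cut], b[cut:]])    # 1-point crossover
                  child += rng.normal(scale=0.05, size=6) * (rng.random(6) < 0.2)
                  kids.append(np.clip(child, 0.0, 1.0))
              pop = np.vstack([parents] + kids)
          best = pop[np.argmin([mae(w) for w in pop])]
          print("learned latent-feature weights:", best.round(2))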
    • Open Access Article

      12 - High I/Q Imbalance Receiver Compensation and Decision Directed Frequency Selective Channel Estimation in an OFDM Receiver Employing Neural Network
      afalahati Sajjad Nasirpour
      The disparity introduced between the In-phase and Quadrature components in a digital communication system receiver, known as I/Q imbalance, is a prime concern in the employment of direct conversion architectures. It reduces the performance of channel estimation and causes data symbols to be received with errors. Even at its lowest level, this imbalance phenomenon can result in very serious signal distortions at the receiver of an OFDM multi-carrier system. In this manuscript, an algorithm based on a neural network scenario is proposed that deploys both Long Training Symbols (LTS) and data symbols to jointly estimate the channel and compensate the parameters that are damaged by the I/Q-imbalanced receiver. In this algorithm there is a tradeoff between these parameters: when only the minimum CG mean value is required, it can be chosen without regard to the others, but in the usual case the other parameters must be taken into account as well, and the limiting values of the target parameters must be known. The algorithm uses the first iterations to train the system to reach a suitable value of CG without an error floor. In this article, it is assumed that the correlation between subcarriers is low and that a small number of training and data symbols are used. The simulation results show that the proposed algorithm can compensate high I/Q imbalance values and estimate the channel frequency response more accurately than existing methods.
    • Open Access Article

      13 - Target Tracking in MIMO Radar Systems Using Velocity Vector
      Mohammad Jabbarian Jahromi Hossein Khaleghi Bizaki
      The superiority of multiple-input multiple-output (MIMO) radars over conventional radars has recently been shown in many respects. These radars consist of many transmitters and receivers located far from each other, so the MIMO radar is able to observe targets from different directions. One of the advantages of these radars is the exploitation of Doppler frequencies from different transmitter-target-receiver paths. The extracted Doppler frequencies can be used to estimate the target velocity vector, so that the radar is able to track targets using the velocity vector with reasonable accuracy. In this paper, two different processing systems are considered for MIMO radars: the first is a pulse Doppler system, and the second is a continuous wave (CW) system without range processing. The measurement of the target's velocity vector and its associated errors are taken into account. Extended Kalman target tracking using the velocity vector is also considered, and its performance is compared with that of MIMO target tracking without the velocity vector and with conventional radars. The simulation results show that MIMO radars using the velocity vector have superior performance over the other above-mentioned radars in tracking fast maneuvering targets. Since range processing is omitted in CW MIMO radar systems, the complexity of this system is much lower than that of the pulse Doppler MIMO radar system, but it has lower performance in tracking fast maneuvering targets.
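
      A worked sketch of recovering the velocity vector from multistatic Doppler measurements by least squares, assuming the bistatic Doppler of path i is f_i = (1/lambda) v . (u_tx_i + u_rx_i) with unit vectors pointing from the target toward each station (sign conventions vary in the literature); the geometry and noise level are illustrative.

          import numpy as np

          lam = 0.03                                  # wavelength in metres
          target = np.array([5000.0, 3000.0])
          tx = np.array([[0.0, 0.0], [8000.0, 0.0], [0.0, 6000.0]])
          rx = np.array([[8000.0, 6000.0], [0.0, 6000.0], [8000.0, 0.0]])

          def unit(p):
              d = p - target
              return d / np.linalg.norm(d)

          # One row per transmitter-target-receiver path.
          A = np.array([(unit(t) + unit(r)) / lam for t, r in zip(tx, rx)])

          v_true = np.array([150.0, -60.0])           # unknown in practice
          f = A @ v_true + np.random.default_rng(2).normal(scale=1.0, size=3)

          v_hat, *_ = np.linalg.lstsq(A, f, rcond=None)
          print("estimated velocity vector:", v_hat.round(1))   # close to v_true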
    • Open Access Article

      14 - Low Distance Airplanes Detection and Tracking Visually using Spectral Residual and KLT Composition
      Mohammad Anvaripour Sima Soltanpour
      This paper presents a method for detecting and tracking airplanes that can be observed visually at low distances from the sensors. Such aircraft are widely used, for example in military or unmanned aerial vehicle (UAV) applications, because of their ability to hide from radar signals; however, they can be detected and viewed by human eyes. Vision based methods are low cost and robust against jamming signals, so it is worthwhile to have visual approaches for detecting airplanes. Accordingly, we propose spectral residual saliency for airplane detection and the KLT algorithm for tracking. This approach is a hybrid of two distinct methods which have been presented by researchers and used widely for detecting or tracking specific objects. For accurate detection, the image intensity is adjusted adaptively. Correctly detected airplanes are obtained by eliminating long optical flow trajectories across image frames. The proposed method is analyzed and evaluated by comparison with state of the art approaches. The experimental results show the power of our approach in detecting multiple airplanes unless they become too small in the presence of other objects. We tested our approach by implementing it on a database presented by other researchers.
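
      A minimal sketch of the detection half only, in the style of Hou and Zhang's spectral residual saliency; the filter sizes and the synthetic frame are illustrative, and the KLT tracking stage is not shown.

          import numpy as np
          from scipy.ndimage import uniform_filter, gaussian_filter

          def spectral_residual_saliency(gray):
              F = np.fft.fft2(gray)
              log_amp = np.log(np.abs(F) + 1e-9)
              phase = np.angle(F)
              # Spectral residual: log amplitude minus its local average.
              residual = log_amp - uniform_filter(log_amp, size=3)
              sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
              return gaussian_filter(sal, sigma=2.5)

          rng = np.random.default_rng(6)
          sky = rng.normal(0.5, 0.02, size=(64, 64))
          sky[30:34, 40:48] = 0.05          # small dark airplane silhouette
          sal = spectral_residual_saliency(sky)
          print(np.unravel_index(sal.argmax(), sal.shape))   # on/near the silhouette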
    • Open Access Article

      15 - A Low-Jitter 20-110MHz DLL Based on a Simple PD and Common-Mode Voltage Level Corrected Differential Delay Elements
      Sarang Kazeminia Khayrollah Hadidi Abdollah Khoei
      In this paper, a 16-phase, 20 MHz to 110 MHz low jitter delay locked loop (DLL) is proposed in a 0.35 µm CMOS process. A sensitive open loop phase detector (PD) is introduced, based on a novel idea, to simply detect small phase differences between the reference clock and the generated delayed signals. High sensitivity, besides simplicity, reduces the dead zone of the PD and consequently gives better jitter on the generated output clock signals. A new common mode setting strategy is applied to the differential delay elements which introduces no extra parasitics on the output nodes and brings the duty cycle of the generated clock signals near 50 percent. Also, a small amplitude differential clock is carefully transferred inside the circuit to considerably suppress the noise effect of the supply voltage. Post-layout simulation results confirm an RMS jitter of less than 6.7 ps at 20 MHz and 2 ps at 100 MHz input clock frequency when the 3.3 V supply voltage is subject to 75 mV peak-to-peak noise disturbances. Total power consumption ranges from 7.5 mW to 16.5 mW as the operating frequency increases from 20 MHz to 100 MHz. The proposed low-jitter DLL can be implemented in a small active area, around 380 µm × 210 µm including the clock generation circuit, which makes it suitable to be used repeatedly inside a chip.
    • Open Access Article

      16 - Enhancing Efficiency of Software Fault Tolerance Techniques in Satellite Motion System
      Hoda Banki Babamir Azam Farokh Mohammad Mehdi Morovati
      This research shows the influence of using a multi-core architecture to reduce execution time and thus increase the performance of some software fault tolerance techniques. Given the superiority of the N-version Programming and Consensus Recovery Block techniques over other software fault tolerance techniques, our implementations were based on these two methods. A comparison of the two methods showed that Consensus Recovery Block is more reliable. Therefore, in order to improve the performance of this technique, we propose a technique named Improved Consensus Recovery Block. In this research, a satellite motion system, which is a scientific computing system, is considered as the basis for our experiments. Because any error in the system's calculations may result in total system failure, the system should not contain any errors, and its execution time must also be acceptable. The performance of our proposed technique is higher than that of the consensus recovery block technique, while its reliability is the same. The performance improvement is based on a multi-core architecture in which each version of the software key units is executed by one core. As a result, by executing versions in parallel, execution time is reduced and performance is improved.
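
      A minimal sketch of a consensus recovery block with concurrently executed versions, assuming majority voting with an acceptance test as fallback; the three versions are trivial stand-ins, and a real multi-core deployment would run one version per core in separate processes.

          from collections import Counter
          from concurrent.futures import ThreadPoolExecutor

          def version_a(x): return round(x * x, 6)
          def version_b(x): return round(x ** 2.0, 6)
          def version_c(x): return round(abs(x) * abs(x), 6)

          VERSIONS = [version_a, version_b, version_c]

          def acceptance_test(result):
              return result >= 0                    # weak fallback check

          def consensus_recovery_block(x):
              # Run every version concurrently, one worker per version.
              with ThreadPoolExecutor(max_workers=len(VERSIONS)) as pool:
                  results = list(pool.map(lambda f: f(x), VERSIONS))
              winner, votes = Counter(results).most_common(1)[0]
              if votes >= 2:                        # consensus reached
                  return winner
              for res in results:                   # fall back to acceptance test
                  if acceptance_test(res):
                      return res
              raise RuntimeError("all versions failed")

          print(consensus_recovery_block(3.0))      # 9.0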
    • Open Access Article

      17 - Design of Fall Detection System: A Dynamic Pattern Approach with Fuzzy Logic and Motion Estimation
      Khosro Rezaee Javad Haddadnia
      Every year thousands of the elderly suffer serious injuries such as articular fractures, broken bones and even death due to falls. Automatic detection of abnormal walking in people, especially accidents such as falls in the elderly, based on image processing techniques and computer vision, can help develop an efficient system whose implementation in various contexts enables us to monitor people's movements. This paper proposes a new algorithm which, by drawing on fuzzy rules for the classification of movements as well as the implementation of motion estimation, allows rapid processing of the input data. At the testing stage, a large number of video frames from the CASIA and CAVIAR databases, together with samples of elderly falls in Sabzevar's Mother Nursing Home, were used. The results show that the mean absolute percent error (MAPE), root-mean-square deviation (RMSD) and standard deviation error (SDE) were at acceptable levels. The main shortcoming of other systems is that the elderly need to wear bulky clothes, and in case they forget to do so, they will not be able to declare their situation at the time of a fall. Compared to similar techniques, the implementation of the proposed system in nursing homes and residential areas allows real time and intelligent monitoring of people.
    • Open Access Article

      18 - Fast Automatic Face Recognition from Single Image per Person Using GAW-KNN
      Hassan Farsi Mohammad Hasheminejad
      Real time face recognition systems have several limitations, such as collecting features. One training sample per target means fewer feature extraction techniques are available to use. To obtain acceptable accuracy, most face recognition algorithms need more than one training sample per target. In such applications, recognition accuracy drops dramatically in the case of one training sample per target face image because of head rotation and variation in illumination. In this paper, a new hybrid face recognition method using a single image per person is proposed which is robust against illumination variations. To achieve robustness against head variations, a rotation detection and compensation stage is added. The method is called Weighted Graphs and PCA (WGPCA). It uses the harmony of face components to extract and normalize features, and a genetic algorithm with a training set is used to learn the most useful features and the real-valued weights associated with individual attributes of the features. The k-nearest neighbor algorithm is applied to classify new faces based on their weighted features from the templates of the training set. Each template contains the corrected distances (graphs) of different points on the face components and the results of Principal Component Analysis (PCA) applied to the output of the face detection rectangle. The proposed hybrid algorithm is trained using MATLAB software to determine the best features and their associated weights, and is then implemented in the Delphi XE2 programming environment to recognize faces in real time. The main advantage of this algorithm is its capability of recognizing a face from only one picture in real time. The results obtained with the proposed technique on the FERET database show the accuracy and effectiveness of the proposed algorithm.
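
      A minimal sketch of the classification step as distance-weighted k-NN over weighted features (one flavor of the weighted k-NN named in the title); the gallery, dimensionality and weights are illustrative, with the weights standing in for what the genetic algorithm would learn.

          import numpy as np

          def gaw_knn_predict(X_train, y_train, x, weights, k=3):
              # Distances on weighted features; neighbours vote with the
              # inverse of their distance.
              d = np.linalg.norm((X_train - x) * weights, axis=1)
              votes = {}
              for i in np.argsort(d)[:k]:
                  votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-9)
              return max(votes, key=votes.get)

          # Toy gallery: two persons, 3-dimensional feature templates.
          X = np.array([[1.0, 0.2, 5.0], [1.1, 0.1, 4.0], [3.0, 0.9, 5.1]])
          y = np.array([0, 0, 1])
          w = np.array([1.0, 2.0, 0.1])     # the GA would learn these weights
          print(gaw_knn_predict(X, y, np.array([1.05, 0.15, 9.0]), w))   # 0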
    • Open Access Article

      19 - A Wideband Low-Noise Downconversion Mixer with Positive-Negative Feedbacks
      Hadi Naderian Ahmad Hakimi
      This paper presents a wideband low-noise mixer in 0.13 µm CMOS technology that operates between 2 and 10.5 GHz. The mixer has a Gilbert cell configuration that employs broadband low-noise transconductors designed using the positive-negative feedback technique used in low-noise amplifier designs. This method allows broadband input matching. The current-bleeding technique is also used so that a high conversion gain can be achieved. Simulation results show excellent noise and gain performance across the frequency span, with an average double-sideband noise figure of 2.9 dB and a conversion gain of 15.5 dB. The mixer has a third-order intermodulation intercept point of -8.7 dBm at 5 GHz.
    • Open Access Article

      20 - A Robust Data Envelopment Analysis Method for Business and IT Alignment of Enterprise Architecture Scenarios
      Mehdi Fasanghari Mohsen  Sadegh Amalnick Reza Taghipour Anvari Jafar Razmi
      Information Technology is recognized as a competitive enabler in today's dynamic business environment. The alignment of business and Information Technology processes is therefore critical, and is strongly emphasized in Information Technology governance frameworks. On the other hand, Enterprise Architectures are deployed to steer organizations toward their objectives while remaining responsive to changes. Thus, it is proposed to align business and Information Technology by investigating the suitability of Enterprise Architecture scenarios. In view of this fact, a flexible decision making method for business and information technology alignment analysis is necessary, but not sufficient, since subjective analysis is always perturbed by some degree of uncertainty. We have therefore developed a new robust Data Envelopment Analysis technique designed for Enterprise Architecture scenario analysis. Several numerical experiments and a sensitivity analysis are designed to show the performance, significance and flexibility of the proposed method in a real case.
    • Open Access Article

      21 - Wideband Log Periodic-Microstrip Antenna with Elliptic Patches
      Hamed Ghanbari Foshtami Ali Hashemi Talkhouncheh Hossein Emami
      A broadband microstrip antenna based on the log periodic technique was conceived and demonstrated practically. The antenna exhibits a wideband characteristic compared with other microstrip antennas. Over the operating frequency range, i.e. 2.5-6 GHz, a 50 Ω input impedance has been considered.
    • Open Access Article

      22 - A New Finite Field Multiplication Algorithm to Improve Elliptic Curve Cryptosystem Implementations
      Abdalhossein Rezai Parviz Keshavarzi
      This paper presents a new and efficient implementation approach for the elliptic curve cryptosystem (ECC) based on a novel finite field multiplication in GF(2^m) and an efficient scalar multiplication algorithm. The new finite field multiplication algorithm performs zero chain multiplication and the required additions in only one clock cycle instead of several clock cycles. Using a modified (limited number of shifts) barrel shifter, the partial result is also shifted in one clock cycle instead of several. Both the canonical recoding technique and the sliding window method are applied to the multiplier to reduce the average number of required clock cycles. In the scalar multiplication algorithm of the proposed implementation approach, the point addition and point doubling operations are computed in parallel. The sliding window method and the signed-digit representation are also used to reduce the average number of point operations. Based on our analysis, the computation cost (the average number of required clock cycles) is effectively reduced in both the proposed finite field multiplication algorithm and the proposed ECC implementation approach in comparison with other ECC finite field multiplication algorithms and implementation approaches.
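
      A software reference for multiplication in GF(2^m), assuming the classic bit-serial shift-and-add with polynomial reduction; the paper's contribution is collapsing the zero-chain additions and barrel-shifter shifts into single clock cycles in hardware, which this loop does not model.

          def gf2m_mul(a, b, m=4, poly=0b10011):
              # Multiply a and b in GF(2^m); poly encodes the reduction
              # polynomial (here x^4 + x + 1 for GF(2^4)).
              r = 0
              for i in range(m):
                  if (b >> i) & 1:
                      r ^= a << i                   # shift-and-add (XOR)
              for i in range(2 * m - 2, m - 1, -1):
                  if (r >> i) & 1:
                      r ^= poly << (i - m)          # reduce overflow bits
              return r

          # In GF(2^4) with x^4 + x + 1:  x * x^3 = x^4 = x + 1.
          print(bin(gf2m_mul(0b0010, 0b1000)))      # 0b11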
    • Open Access Article

      23 - Cover Selection Steganography Via Run Length Matrix and Human Visual System
      Sara Nazari Mohammad Shahram Moin
      A novel approach for steganography cover selection is proposed, based on image texture features and the human visual system. The proposed algorithm employs the run length matrix to select a set of appropriate images from an image database and creates their stego versions after the embedding process. Then, it computes the similarity between the original images and their stego versions using structural similarity as the image quality metric, and selects as the best cover the image with maximum similarity to its stego version. Comparing our new cover selection algorithm with other steganography methods confirms that the proposed algorithm is able to increase the stego quality. We also evaluated the robustness of our algorithm against steganalysis methods such as wavelet based and block based steganalysis; the experimental results show that the proposed approach decreases the risk of message hiding being detected.
    • Open Access Article

      24 - Pose-Invariant Eye Gaze Estimation Using Geometrical Features of Iris and Pupil Images
      Mohammad Reza Mohammadi Abolghasem Asadollah Raie
      In cases of severe paralysis in which the ability to control body movements is limited to the muscles around the eyes, eye movements or blinks are the only way for a person to communicate. Interfaces that assist in such communication often require special hardware or rely on active infrared illumination. In this paper, we propose a non-intrusive algorithm for eye gaze estimation that works with video input from an inexpensive camera and without special lighting. The main contribution of this paper is a new geometrical model of the eye region that requires the image of only one iris for gaze estimation. The essential parameters for this system are the best fitted ellipse of the iris and the pupil center. The algorithms used for both iris ellipse fitting and pupil center localization make no prior assumptions about the head pose. All in all, the achievement of this paper is the robustness of the proposed system to head pose variations. The performance of the method has been evaluated on both synthetic and real images, leading to errors of 2.12 and 3.48 degrees, respectively.
    • Open Access Article

      25 - Cyclic Correlation-Based Cooperative Detection for OFDM-Based Primary Users
      Hamed Sadeghi Paeez Azmi
      This paper develops a new robust cyclostationary detection technique for spectrum sensing of OFDM-based primary users (PUs). To this end, an asymptotically constant false alarm rate (CFAR) multi-cycle detector is proposed, and its statistical behavior under the null hypothesis is investigated. Furthermore, to achieve higher detection capability, a soft decision fusion rule for performing cooperative spectrum sensing (CSS) in secondary networks is established. The proposed CSS scheme aims to maximize the deflection criterion at the fusion center (FC) while the reporting channels are under Rayleigh fading. In order to evaluate the performance of the cooperative detector, analytic threshold approximation methods are provided for the cases where the FC does or does not have direct sensing capability. Through numerical simulations, the proposed local and CSS schemes are shown to significantly enhance CR network performance in terms of the detection probability metric.
    • Open Access Article

      26 - A New Cooperative Approach for Cognitive Radio Networks with Correlated Wireless Channels
      Mehdi Ghamari Adian Hassan Aghaeenia
      An effective cooperative cognitive radio system is proposed for the case when the wireless channels are highly correlated. The system model consists of two multi-antenna secondary users (SU TX and SU RX), constituting the desired link, and several single-antenna primary and secondary users. The objective is maximization of the data rate of the desired SU link subject to interference constraints on the primary users. An effective system is proposed that exploits Transmit Beamforming (TB) at SU TX, cooperation of several single-antenna SUs with Cooperative Beamforming (CB), and antenna selection at SU RX to reduce the costs associated with RF chains at the radio front end of SU RX. Due to the issue of MIMO channels with correlated fading, problems arise such as the inapplicability of the well-known Grassmannian beamforming as the TB scheme at SU TX. We propose a method to overcome this problem. After formulating the problem, a novel iterative scheme is proposed to find the best TB weight vector at SU TX and the best subset of antennas at SU RX, considering the correlated channel.
    • Open Access Article

      27 - Optimal Sensor Scheduling Algorithms for Distributed Sensor Networks
      Behrooz Safarinejadian Abdolah Rahimi
      In this paper, a sensor network is used to estimate the dynamic states of a system. At each time step, one (or more) sensors are available that can send their measured data to a central node, where all processing is done. We want to provide an optimal algorithm for scheduling sensor selection at every time step. Our goal is to select the appropriate sensor to reduce computations, optimize energy consumption and enhance the network lifetime. To achieve this goal, we must reduce the error covariance. Three algorithms are used in this work: sliding window, thresholding and random selection. Moreover, we offer a new algorithm based on circular selection. Finally, a novel algorithm for selecting multiple sensors is proposed. The performance of the proposed algorithms is illustrated with numerical examples.
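
      A minimal sketch comparing circular (round-robin) and random sensor scheduling by the trace of the Kalman error covariance, assuming a linear system with two sensors; the model matrices are illustrative.

          import numpy as np

          rng = np.random.default_rng(3)
          A = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity model
          Q = 0.01 * np.eye(2)
          H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]   # two sensors
          R = [np.array([[0.5]]), np.array([[0.1]])]

          def average_cost(schedule, steps=40):
              P = np.eye(2)
              cost = 0.0
              for k in range(steps):
                  s = schedule(k)                   # sensor chosen at step k
                  P = A @ P @ A.T + Q               # time update
                  S = H[s] @ P @ H[s].T + R[s]
                  K = P @ H[s].T @ np.linalg.inv(S)
                  P = (np.eye(2) - K @ H[s]) @ P    # measurement update
                  cost += np.trace(P)
              return cost / steps

          print("circular:", average_cost(lambda k: k % 2))
          print("random  :", average_cost(lambda k: int(rng.integers(0, 2))))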
    • Open Access Article

      28 - Theory and Experiment of Parasitic Element Effects on Spherical Probe-Fed Antenna
      Javad Soleiman Meiguni Manouchehr Kamyab Ahmad Hosseinbeig
      The theory and experimental verification of a spherical probe-fed conformal antenna with a parasitic element mounted on a spherical multilayer structure are presented in this paper. A rigorous Method of Moments (MoM) for analyzing various radiating spherical structures is presented, using Dyadic Green's Functions (DGFs) in conjunction with a Mixed Potential Integral Equation (MPIE) formulation. Linear Rao-Wilton-Glisson (RWG) triangular basis functions are applied in the MPIE formulation. Current distributions on the coaxial probe and the conformal radiating elements are computed using the spatial domain DGF and its asymptotic approximation. A prototype of such an antenna was fabricated and tested. The effect of the parasitic element on the input impedance and radiation patterns of the antenna is investigated. It is shown that the antenna characteristics are improved significantly in the presence of the conducting parasitic element. Good agreement is achieved between the results obtained from the proposed methods and the measurements.
    • Open Access Article

      29 - Ten Steps for Software Quality Rating Considering ISO/IEC
      Hassan Alizadeh Bahram Sadeghi Bigham Hossein Afsari
      In the software rating area, it is necessary to apply a measurement reference model to evaluate the quality of software. The ISO/IEC 25030 standard is an example of an evaluation system based on stakeholders' requirements. In this study, an attempt has been made to establish a model in which all implicit and explicit requirements of stakeholders, users and policy makers are taken into account. In addition, the AHP method has been used to weight the indicators in the model. The results show the applicability of the model to meeting the requirements of Iranian users.
    • Open Access Article

      30 - A New Method for Detecting the Number of Coherent Sources in the Presence of Colored Noise
      Shahriar Shirvani Moghaddam Somaye Jalaei
      In this paper, a new method for determining the number of coherent/correlated signals in the presence of colored noise is proposed, based on the Eigen Increment Threshold (EIT) method. First, we present a new approach which combines the EIT criterion with eigenvalue correction. The simulation results show that the new method estimates the number of noncoherent signals in the presence of colored noise with higher detection probability than MDL, AIC, EGM and conventional EIT. In addition, to apply the proposed EIT algorithm to detecting the number of sources in the case of coherent and/or correlated sources, a spatial smoothing preprocessing step is added. In this case, simulation results show 100% detection probability for signal to noise ratios greater than -5 dB. The final version of the proposed EIT-based method is a simple and efficient way to increase the detection probability of the EIT method in the presence of colored noise, for either coherent/correlated or noncoherent sources.
    • Open Access Article

      31 - Parameter Estimation in Hysteretic Systems Based on Adaptive Least-Squares
      Mansour Peimani Mohammad Javad Yazdanpanah Naser Khaji
      In this paper, various identification methods based on the least-squares technique for estimating the unknown parameters of structural systems with hysteresis are investigated. The Bouc-Wen model is used to describe the behavior of hysteretic nonlinear systems. The adaptive versions are based on fixed and variable forgetting factors, and the optimized version is based on an optimized adaptive coefficient matrix. Simulation results show the efficient performance of the proposed technique in identifying and tracking hysteretic structural system parameters compared with other least squares based algorithms.
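
      A minimal recursive least-squares sketch with a fixed forgetting factor, shown on a toy linear regression; for the Bouc-Wen case the regressor vector would be built from the hysteretic model's signals, which is not reproduced here.

          import numpy as np

          def rls(regressors, outputs, n, lam=0.98):
              # Recursive least squares; smaller lam forgets old data faster,
              # which lets the estimate track time-varying parameters.
              theta = np.zeros(n)
              P = 1e4 * np.eye(n)                   # large initial covariance
              for phi, y in zip(regressors, outputs):
                  k = P @ phi / (lam + phi @ P @ phi)
                  theta = theta + k * (y - phi @ theta)
                  P = (P - np.outer(k, phi @ P)) / lam
              return theta

          rng = np.random.default_rng(4)
          true = np.array([2.0, -0.5])
          Phi = rng.normal(size=(300, 2))
          Y = Phi @ true + 0.01 * rng.normal(size=300)
          print(rls(Phi, Y, 2).round(3))            # approx. [ 2.  -0.5]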
    • Open Access Article

      32 - Digital Video Stabilization System by Adaptive Fuzzy Kalman Filtering
      Mohammad Javad Tanakian Mehdi Rezaei Farahnaz Mohanna
      Digital video stabilization (DVS) allows video sequences to be acquired without disturbing jerkiness by removing unwanted camera movements. A good DVS should remove the unwanted camera movements while maintaining the intentional ones. In this article, we propose a novel DVS algorithm that compensates camera jitter by applying an adaptive fuzzy filter to the global motion of video frames. The adaptive fuzzy filter is a Kalman filter which is tuned by a fuzzy system adaptively to the camera motion characteristics. The fuzzy system is also tuned during operation according to the amount of camera jitter. The fuzzy system uses two inputs, which are quantitative representations of the unwanted and intentional camera movements. Since motion estimation is a computationally intensive operation, the global motion of video frames is estimated from the block motion vectors produced by the video encoder during its motion estimation. Furthermore, the proposed method utilizes an adaptive criterion for filtering and validation of motion vectors. Experimental results indicate good performance for the proposed algorithm.
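
      A minimal sketch of the smoothing idea with a plain one-dimensional Kalman filter on the accumulated global motion; the paper tunes the filter adaptively with a fuzzy system, whereas the noise variances q and r here are fixed and illustrative.

          import numpy as np

          def stabilize(global_dx, q=1e-3, r=0.25):
              # global_dx: estimated per-frame global motion along one axis.
              traj = np.cumsum(global_dx)           # raw camera trajectory
              x, p = 0.0, 1.0                       # state estimate, variance
              smooth = []
              for z in traj:
                  p = p + q                         # predict
                  k = p / (p + r)                   # Kalman gain
                  x = x + k * (z - x)               # update
                  p = (1.0 - k) * p
                  smooth.append(x)
              # Per-frame shift that removes jitter but follows the pan.
              return np.asarray(smooth) - traj

          rng = np.random.default_rng(5)
          pan = np.full(60, 1.0)                    # intentional slow pan
          jitter = rng.normal(scale=2.0, size=60)   # unwanted shake
          print(stabilize(pan + jitter)[:5].round(2))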
    • Open Access Article

      33 - Video Transmission Using New Adaptive Modulation and Coding Scheme in OFDM based Cognitive Radio
      Hassan Farsi Farid Jafarian
      As Cognitive Radio (CR) is used in video applications, the user-perceived video quality experienced by secondary users is an important metric for judging the effectiveness of CR technologies. We propose a new adaptive modulation and coding (AMC) scheme for CR in an OFDM based system that is compliant with IEEE 802.16. The proposed CR alters its modulation and coding rate to provide a high quality system. In this scheme, the CR selects an optimum modulation and coding rate using its awareness of various parameters, including knowledge of the white holes in the channel spectrum via channel sensing, SNR, carrier to interference and noise ratio (CINR), and Modulation order Product code Rate (MPR). We model the AMC function using an Artificial Neural Network (ANN); since AMC is inherently a nonlinear function, an ANN is selected to model it. To achieve a more accurate model, a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used to optimize the function representing the relationship between the inputs and outputs of the ANN, i.e., the AMC model. The inputs of the ANN are the CR knowledge parameters, and the outputs are the modulation type and coding rate. Presenting a complete AMC model that considers all relevant parameters, including CINR, available bandwidth, SNR and MPR, to select the optimum modulation and coding rate is an advantage of this scheme. We also show that in this application GA, rather than PSO, is the better choice of optimization algorithm.
    • Open Access Article

      34 - A Learning Automata Approach to Cooperative Particle Swarm Optimizer
      Mohammad Hasanzadeh Meybodi Mohamad Mehdi Ebadzade
      This paper presents a modification of the Particle Swarm Optimization (PSO) technique based on the cooperative behavior of swarms and the learning ability of an automaton. The approach is called Cooperative Particle Swarm Optimization based on Learning Automata (CPSOLA). The CPSOLA algorithm utilizes three layers of cooperation: intra-swarm, inter-swarm and inter-population. There are two active populations in CPSOLA. In the primary population, the particles are placed in all swarms and each swarm covers multiple dimensions of the search space. There is also a secondary population in CPSOLA which uses the conventional PSO evolution scheme. In the upper layer of cooperation, an embedded Learning Automaton (LA) is responsible for deciding whether or not to cooperate between these two populations. Experiments are conducted on five benchmark functions, and the results show the notable performance and robustness of CPSOLA, the cooperative behavior of the swarms, and successful adaptive control of the populations.
    • Open Access Article

      35 - An Improved Method for TOA Estimation in TH-UWB System considering Multipath Effects and Interference
      Mahdieh Ghasemlou Saeid Nader Esfahani Vahid  Tabataba Vakili
      UWB ranging is usually based on time-of-arrival (TOA) estimation of the first path. There are two major challenges in TOA estimation. One is dealing with the multipath channel, especially in indoor environments. The other is the existence of interference from other sources. In this paper, we propose a new TOA estimation method which is very robust against interference. In this method, during the TOA estimation phase, the transmitter sends its pulses at random positions within the frame. This makes the position of the interference relative to the main pulse random, so the energy of the interference is distributed almost uniformly along the frame. In energy detection methods, a constant interference level along the frame does not affect the detection of the arrival time and only requires an adjustment of the threshold. Simulation results on IEEE 802.15.4a channels show that, even in the presence of very strong interference, a TOA estimation error of less than 3 nanoseconds is feasible with the proposed method.
    • Open Access Article

      36 - Image Retrieval Using Color-Texture Features Extracted From Gabor-Walsh Wavelet Pyramid
      Sajad Mohammadzadeh Hassan Farsi
      Image retrieval is one of the most applicable image processing techniques and has been used extensively. Feature extraction is one of the most important procedures for interpreting and indexing images in Content-Based Image Retrieval (CBIR) systems. Effective storage, indexing and management of large image collections are critical challenges in computer systems, and many methods have been proposed to address them. However, the accuracy and speed of image retrieval are still interesting fields of research. In this paper, we propose a new method based on a combination of the Gabor filter, the Walsh transform and the Wavelet Pyramid (GWWP). The Crossover Point (CP) of precision and recall is used as the metric to evaluate and compare different methods. The obtained results show that GWWP provides better performance compared with other methods.
    • Open Access Article

      37 - Language Model Adaptation Using Dirichlet Class Language Model Based on Part-of-Speech
      Ali Hatami ahmad akbari Babak Nasersharif
      Language modeling has applications in a large variety of domains. The performance of a language model depends on its adaptation to a particular style of data. Accordingly, adaptation methods endeavour to apply the syntactic and semantic characteristics of the language to language modeling. Previous adaptation methods, such as the family of Dirichlet Class Language Models (DCLM), extract the class of history words. Due to their lack of syntactic information, these methods are not suitable for morphologically rich languages such as Farsi. In this paper, we present an idea for using syntactic information, namely part-of-speech (POS) tags, in DCLM in combination with one of the n-gram family of language models. In our work, word clustering is based on the POS of the previous words and the history words in DCLM. The performance of the language models is evaluated on the BijanKhan corpus using a hidden Markov model based ASR system. The results show that using POS information along with history words and the class of history words improves the performance of the language model and decreases the perplexity on our corpus. Exploiting POS information along with DCLM, the word error rate of the ASR system decreases by 1.2% compared to DCLM.
    • Open Access Article

      38 - Assessment of Performance Improvement in Hyperspectral Image Classification Based on Adaptive Expansion of Training Samples
      Maryam Imani
      High dimensional images in remote sensing applications allow us to analyze the surface of the earth in more detail. A relevant problem for supervised classification of hyperspectral images is the limited availability of labeled training samples, since their collection is generally expensive, difficult, and time consuming. In this paper, we propose an adaptive method for improving the classification of hyperspectral images through expansion of the training set. The proposed approach utilizes high-confidence labeled pixels as training samples to re-estimate the classifier parameters. Semi-labeled samples are samples whose class labels are determined by the Gaussian maximum likelihood (GML) classifier. Samples whose discriminant function values are large enough are selected in an adaptive process and considered as semi-labeled (pseudo-training) samples that are added to the training set to train the classifier sequentially. The experimental results show that the proposed method can overcome the limitation of training samples in hyperspectral images and improve the classification performance.
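      A minimal sketch of the adaptive training-set expansion described above, with scikit-learn's quadratic discriminant analysis standing in for the GML classifier; the confidence threshold and the synthetic data are assumptions for illustration.

      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      def expand_training_set(X_tr, y_tr, X_unlab, rounds=5, conf=0.99):
          # self-training loop: samples classified with high confidence become
          # semi-labeled (pseudo-training) samples and are added to the training set
          clf = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
          for _ in range(rounds):
              proba = clf.predict_proba(X_unlab)
              sure = proba.max(axis=1) >= conf
              if not sure.any():
                  break
              X_tr = np.vstack([X_tr, X_unlab[sure]])
              y_tr = np.concatenate([y_tr, clf.classes_[proba[sure].argmax(axis=1)]])
              X_unlab = X_unlab[~sure]
              clf = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
          return clf

      rng = np.random.default_rng(0)                 # toy 5-band "spectral" data
      X_tr = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
      y_tr = np.array([0] * 10 + [1] * 10)
      X_un = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])
      model = expand_training_set(X_tr, y_tr, X_un)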
    • Open Access Article

      39 - Low Complexity Median Filter Hardware for Image Impulsive Noise Reduction
      Hossein Zamani HosseinAbadi Shadrokh Samavi Nader Karimi
      Median filters are commonly used for removing impulse noise from images. De-noising is a preliminary step in online processing of images; thus, hardware implementation of median filters is of great interest. Hence, many methods, mostly based on sorting the pixels, have been developed to implement median filters. Utilizing a vast amount of hardware resources and low speed are the two main disadvantages of these methods. In this paper, a method for filtering images is proposed that reduces the required hardware elements. A modular pipelined median filter unit is first modeled, and the designed module is then used in a parallel structure. Since the image is applied in rows and in a parallel manner, the number of necessary hardware elements is reduced in comparison with other hardware implementations, and the image filtering speed is increased. Implementation results show that the proposed method has advantages in both speed and efficiency.
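      A software model of the filtering step may help; the sketch below computes the 3x3 median with a sorted neighbor stack, which a hardware version would realize as a pipelined sorting network fed row by row. The image size and noise rate are assumptions.

      import numpy as np

      def median3x3(img):
          # software model of a 3x3 median filter
          h, w = img.shape
          p = np.pad(img, 1, mode='edge')
          neighbors = np.stack([p[i:i + h, j:j + w]
                                for i in range(3) for j in range(3)], axis=-1)
          return np.sort(neighbors, axis=-1)[..., 4]    # middle of the 9 values

      rng = np.random.default_rng(0)
      img = np.full((64, 64), 128, dtype=np.uint8)
      salt = rng.random((64, 64)) < 0.05                # 5% impulse noise
      img[salt] = 255
      print(np.abs(median3x3(img).astype(int) - 128).max())   # expected 0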
    • Open Access Article

      40 - GoF-Based Spectrum Sensing of OFDM Signals over Fading Channels
      Seyed Sadra Kashef Paeez Azmi Hamed Sadeghi
      Goodness-of-Fit (GoF) based spectrum sensing of orthogonal frequency-division multiplexing (OFDM) signals is investigated in this paper. To this end, some novel local sensing methods based on the Shapiro-Wilk (SW), Shapiro-Francia (SF), and Jarque-Bera (JB) tests are first studied. In essence, a new threshold selection technique is proposed for the SF and SW tests. Then, the three studied methods are applied to spectrum sensing for the first time and their performance is analyzed. Furthermore, the computational complexity of the above methods is computed and compared. Simulation results demonstrate that the SF detector outperforms the other existing GoF-based methods over AWGN channels. Furthermore, the simulation results demonstrate the superiority of the proposed SF method in additive colored Gaussian noise channels and over fading channels in comparison with the conventional energy detector.
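      The Monte Carlo threshold-selection idea can be sketched as follows for the Shapiro-Wilk test (scipy's shapiro): the detection threshold is the P_fa-quantile of the test statistic under noise-only samples, since the test rejects Gaussianity (declares a signal present) for small W. The block length, false-alarm rate, and test signal are assumptions for illustration.

      import numpy as np
      from scipy.stats import shapiro

      rng = np.random.default_rng(0)
      N, runs, pfa = 64, 2000, 0.05

      # threshold selection by Monte Carlo under H0 (noise only)
      w_h0 = np.sort([shapiro(rng.standard_normal(N))[0] for _ in range(runs)])
      thr = w_h0[int(pfa * runs)]

      # sensing decision on a received block: BPSK-like signal buried in noise
      x = rng.standard_normal(N) + 1.2 * np.sign(rng.standard_normal(N))
      w, _ = shapiro(x)
      print("W =", w, "-> occupied" if w < thr else "-> idle")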
    • Open Access Article

      41 - Load Balanced Spanning Tree in Metro Ethernet Networks
      Ghasem Mirjalily Samira Samadi
      Spanning Tree Protocol (STP) is a link management standard that provides loop-free paths in Ethernet networks. Deploying STP in metro area networks is inadequate because it does not meet the requirements of these networks. STP blocks redundant links, causing the risk of congestion close to the root. As a result, STP provides poor support for load balancing in metro Ethernet networks. A solution to this problem is to use a multi-criteria spanning tree that considers criteria related to load balancing over links and switches. In our previous work, an algorithm named Best Spanning Tree (BST) was proposed to find the best spanning tree in a metro Ethernet network. BST is based on the computation of a total cost for each possible spanning tree; therefore, it is very time consuming, especially when the network is large. In this paper, two heuristic algorithms named Load Balanced Spanning Tree (LBST) and Modified LBST (MLBST) are proposed to find a near-optimal balanced spanning tree in metro Ethernet networks. The computational complexity of the proposed algorithms is much lower than that of the BST algorithm. Furthermore, simulation results show that the spanning tree obtained by the proposed algorithms is the same as or similar to the one obtained by the BST algorithm.
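      As a rough illustration of the load-balancing idea (not the authors' BST/LBST algorithms), the sketch below grows a spanning tree that prefers lightly loaded links by feeding link-load weights to a minimum spanning tree routine; the topology and load numbers are invented.

      import networkx as nx

      # toy metro topology; edge weights are estimated link loads (invented numbers)
      G = nx.Graph()
      G.add_weighted_edges_from([(1, 2, 10), (2, 3, 4), (1, 3, 3),
                                 (3, 4, 8), (2, 4, 2), (1, 4, 7)])

      # heuristic idea: prefer lightly loaded links, so traffic does not
      # concentrate near the root the way plain STP tends to
      T = nx.minimum_spanning_tree(G, weight='weight')
      print(sorted(T.edges(data='weight')))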
    • Open Access Article

      42 - Node to Node Watermarking in Wireless Sensor Networks for Authentication of Self Nodes
      Hassan Farsi Seyed Morteza Nourian
      In order to solve some security issues in Wireless Sensor Networks (WSNs), a node-to-node authentication method based on a digital watermarking technique for verification of neighboring nodes is proposed. In the proposed method, low-complexity algorithms for the generation, embedding, and detection of a security ID are designed. The data packets collected by the nodes are marked using the security ID. In the proposed method, the packet header is used to mark the packets. Since sensor networks are cooperative in nature, using the packet header for authentication is proposed. Using the marked header also prevents other nodes from sending and receiving fake data. Simulations have been performed in environments where fake data is injected with a probability ranging from 1% to 10%. Comparison with other methods shows that the proposed method is more effective in terms of security, traffic reduction, and network lifetime.
    • Open Access Article

      43 - A New Recursive Algorithm for Universal Coding of Integers
      Mehdi Nangir Hamid Behroozi Mohammad Reza Aref
      In this paper, we aim to encode the set of all positive integers so that the codewords are not only uniquely decodable but also form an instantaneous set of binary sequences. Elias introduced three recursive algorithms for universal coding of positive integers, where each codeword contains a binary representation of the integer plus an attachment portion that gives some information about the first part [1]. On the other hand, Fibonacci coding, which is based on the Fibonacci numbers, was introduced by Apostolico and Fraenkel for coding of integers [2]. In this paper, we propose a new lossless recursive algorithm for universal coding of positive integers based on both the recursive algorithms and the Fibonacci coding scheme, without using any knowledge about the source statistics [3]. Coding schemes that do not use the source statistics are called universal codes; such schemes require a universal decoding scheme at the receiver side of the communication system. All of these encoding and decoding schemes assign binary streams to positive integers, and conversely, without any need for probability masses over the positive integers. We show that if we use Fibonacci coding for the first part of each codeword, we can achieve a shorter expected codeword length than the Elias omega code. In addition, our proposed algorithm has low-complexity encoding and decoding procedures.
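      Since Fibonacci coding is the building block used for the first part of each codeword, a small sketch of it may help; the encoder below emits the Zeckendorf representation followed by a terminating '1', which is what makes the code instantaneous.

      def fib_encode(n):
          # Zeckendorf representation: each positive integer is a sum of
          # non-consecutive Fibonacci numbers; the appended final '1' creates
          # the only '11' pair, which makes the code instantaneous
          fibs = [1, 2]
          while fibs[-1] + fibs[-2] <= n:
              fibs.append(fibs[-1] + fibs[-2])
          bits = [0] * len(fibs)
          for i in range(len(fibs) - 1, -1, -1):   # greedy, largest Fibonacci first
              if fibs[i] <= n:
                  bits[i] = 1
                  n -= fibs[i]
          while bits[-1] == 0:                     # drop unused top Fibonacci terms
              bits.pop()
          return ''.join(map(str, bits)) + '1'

      def fib_decode(code):
          fibs = [1, 2]
          while len(fibs) < len(code) - 1:
              fibs.append(fibs[-1] + fibs[-2])
          return sum(f for f, b in zip(fibs, code[:-1]) if b == '1')

      for n in (1, 2, 3, 4, 11, 100):
          print(n, fib_encode(n), fib_decode(fib_encode(n)))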
    • Open Access Article

      44 - SIP Vulnerability Scan Framework
      Mitra Alidoosti Hassan Asgharian Ahmad Akbari
      The purpose of this paper is to provide a framework for detecting vulnerabilities in SIP (Session Initiation Protocol) networks. We try to find weaknesses in SIP-enabled entities that an attacker can exploit to attack and affect the system. The framework is built on the concept of penetration testing; it is designed to be flexible and extensible and can be customized for other similar session-based protocols. To satisfy the above objectives, the framework is designed with five main modules for discovery, information modeling, operation, evaluation, and reporting. After setting up a test-bed as a typical VoIP system to show the validity of the proposed framework, the system was implemented as a SIP vulnerability scanner. We also defined appropriate metrics for gathering performance statistics of the SIP components. Our test-bed is deployed using open-source applications and is used for validation and evaluation of the proposed framework.
    • Open Access Article

      45 - A New Robust Digital Image Watermarking Algorithm Based on LWT-SVD and Fractal Images
      Fardin Akhlaghian Tab Kayvan Ghaderi Parham Moradi
      This paper presents a robust copyright protection scheme based on the Lifting Wavelet Transform (LWT) and Singular Value Decomposition (SVD). We use fractal decoding to make a very compact representation of the watermark image. The fractal code is presented by a binary image. In the embedding phase of the watermarking scheme, we first decompose the host image with the 2D LWT; then SVD is applied to the sub-bands of the transformed image, and the watermark, a binary image, is embedded by modifying the singular values. In the watermark extraction phase, after the reverse steps are applied, the embedded binary image, and consequently the fractal code, are extracted from the watermarked image. The original watermark image is rendered by running the code. To verify the validity of the proposed watermarking scheme, several experiments are carried out and the results are compared with those of other algorithms. To evaluate image quality, we use the peak signal-to-noise ratio (PSNR). To measure the robustness of the proposed algorithm, the normalized correlation (NC) coefficient is evaluated. The experimental results indicate that, in addition to high transparency, the proposed scheme is strong enough to resist various signal processing operations, such as average filtering, median filtering, JPEG compression, contrast adjustment, cropping, histogram equalization, and rotation.
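      A minimal sketch of the embedding phase, with pywt's standard DWT standing in for the lifting wavelet transform and a simple +/-alpha modification of the singular values; the wavelet, sub-band choice, and strength alpha are assumptions, and the fractal coding of the watermark is not modeled.

      import numpy as np
      import pywt

      def embed(host, wm_bits, alpha=0.05):
          # decompose the host, then modify the singular values of the LL sub-band
          LL, (LH, HL, HH) = pywt.dwt2(host, 'haar')
          U, S, Vt = np.linalg.svd(LL, full_matrices=False)
          k = min(len(wm_bits), len(S))
          signs = 2.0 * np.asarray(wm_bits[:k]) - 1.0   # bit 1 -> +1, bit 0 -> -1
          S[:k] *= 1.0 + alpha * signs
          return pywt.idwt2((U @ np.diag(S) @ Vt, (LH, HL, HH)), 'haar')

      rng = np.random.default_rng(0)
      host = rng.integers(0, 256, (128, 128)).astype(float)
      marked = embed(host, rng.integers(0, 2, 32))
      print("PSNR:", 10 * np.log10(255 ** 2 / np.mean((marked - host) ** 2)))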
    • Open Access Article

      46 - A Study on Clustering for Clustering Based Image De-noising
      Hossein Bakhshi Golestani Mohsen Joneidi Mostafa Sadeghi
      In this paper, the problem of de-noising an image contaminated with Additive White Gaussian Noise (AWGN) is studied. This subject has been an open problem in signal processing for more than 50 years. In the present paper, we suggest a method based on global clustering of the image's constituent blocks. As the type of clustering plays an important role in clustering-based de-noising methods, we address two questions about the clustering. First, which parts of the data should be considered for clustering? Second, which clustering method is suitable for de-noising? Clustering is then exploited to learn an overcomplete dictionary. By obtaining a sparse decomposition of the noisy image blocks in terms of the dictionary atoms, the de-noised version is achieved. Experimental results show that our dictionary learning framework outperforms its competitors in terms of de-noising performance and execution time.
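      As a rough illustration of the global clustering step (not the authors' full dictionary-learning pipeline), the sketch below clusters mean-removed image blocks with k-means and treats the centroids as dictionary atoms; the patch size and cluster count are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.image import extract_patches_2d

      rng = np.random.default_rng(0)
      noisy = rng.normal(128, 25, (64, 64))          # stand-in for a noisy image

      # global clustering of image blocks; each centroid serves as one atom
      patches = extract_patches_2d(noisy, (8, 8), max_patches=2000, random_state=0)
      X = patches.reshape(len(patches), -1)
      X = X - X.mean(axis=1, keepdims=True)          # cluster on structure, not DC level
      atoms = KMeans(n_clusters=32, n_init=4, random_state=0).fit(X).cluster_centers_
      print(atoms.shape)                             # (32, 64): an 8x8 atom per cluster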
    • Open Access Article

      47 - Joint Relay Selection and Power Allocation in MIMO Cooperative Cognitive Radio Networks
      Mehdi Ghamari Adian Hassan Aghaeenia
      In this work, the issue of joint relay selection and power allocation in Underlay MIMO Cooperative Cognitive Radio Networks (U-MIMO-CCRN) is addressed. The system consists of a number of secondary users (SUs) in the secondary network and a primary user (PU) in the primary network. We consider the communication over the link between two selected SUs, referred to as the desired link, which is enhanced using the cooperation of one of the existing SUs. The core aim of this work is to maximize the achievable data rate on the desired link, using the cooperation of one SU which is chosen opportunistically from the existing SUs. Meanwhile, the interference imposed on the PU by the secondary transmission should not exceed the tolerable amount. An approach to determine the optimal power allocation, i.e., the optimal transmit covariance and amplification matrices of the SUs, and the optimal cooperating SU is proposed. Since the proposed optimal approach is highly complex, a low-complexity approach is further proposed and its performance is evaluated using simulations. The simulation results reveal that the performance loss due to the low-complexity approach is only about 14%, while the complexity of the algorithm is greatly reduced.
    • Open Access Article

      48 - Joint Source and Channel Analysis for Scalable Video Coding Using Vector Quantization over OFDM System
      Farid Jafarian Hassan Farsi
      Conventional wireless video encoders employ variable-length entropy encoding and predictive coding to achieve high compression ratios, but these techniques render the encoded bit-stream extremely sensitive to channel errors. To prevent error propagation, it is necessary to employ various additional error correction techniques. In contrast, an alternative technique, vector quantization (VQ), which does not use variable-length entropy encoding, has the ability to impede such error propagation through the use of fixed-length codewords. In this paper, we address the problem of joint source and channel analysis for VQ-based scalable video coding (VQ-SVC). We introduce intra-mode VQ-SVC and VQ-3D-DCT SVC, which offer compression performance similar to intra-mode H.264 and 3D-DCT respectively, while offering inherent error resilience. In intra-mode VQ-SVC, the 2D-DCT, and in VQ-3D-DCT SVC, the 3D-DCT is applied to video frames to extract DCT coefficients, and then VQ is employed to prepare the codebook of DCT coefficients. In these low-bitrate video codecs, a high level of robustness is needed against wireless channel fluctuations. To achieve such robustness, we propose and calculate an optimal VQ-SVC codebook and an optimal channel code rate using a joint source and channel coding (JSCC) technique. Next, the analysis is developed for transmission of video using an OFDM system over multipath Rayleigh fading and AWGN channels. Finally, we report the performance of these schemes in minimizing end-to-end distortion over the wireless channel.
    • Open Access Article

      49 - Ant Colony Scheduling for Network On Chip
      Neda Dousttalab Mohammad Ali Jabraeil Jamali Behnam Talebi
      The operation scheduling problem in networks-on-chip is NP-hard; therefore, effective heuristic methods are needed to provide near-optimal solutions. This paper introduces ant colony scheduling, a simple and effective method to increase allocator matching efficiency and hence network performance, particularly suited to networks with complex topologies and asymmetric traffic patterns. The proposed algorithm has been studied in torus and flattened-butterfly topologies with multiple types of traffic patterns. Evaluation results show that in many cases this algorithm reduces network delay and increases chip performance in comparison with other algorithms.
    • Open Access Article

      50 - Facial Expression Recognition Using Texture Description of Displacement Image
      Hamid Sadeghi Abolghasem Asadollah Raie Mohammad Reza Mohammadi
      In recent years, facial expression recognition, as an interesting problem in computer vision, has been performed by means of static and dynamic methods. Dynamic information plays an important role in recognizing facial expressions. However, using the entire dynamic information in the expression image sequences has a higher computational cost compared to static methods. To reduce the computational cost, instead of the entire image sequence, only the neutral and emotional faces can be employed. In previous research, this idea was used in the DLBPHS method, in which important small facial displacements were lost by subtracting the LBP features of the neutral and emotional face images. In this paper, a novel approach is proposed to utilize the two face images. In the proposed method, the face component displacements are highlighted by subtracting the neutral image from the emotional image; then, LBP features are extracted from the difference image. The proposed method is evaluated on standard databases and the results show a significant accuracy improvement compared to DLBPHS.
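      A minimal sketch of the proposed descriptor, assuming scikit-image's uniform LBP: subtract the neutral face from the emotional one, rescale the difference image, and histogram its LBP codes. P, R, and the toy "displacement" are illustrative assumptions.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def displacement_lbp_hist(neutral, emotional, P=8, R=1):
          # describe the difference image with a uniform-LBP histogram
          diff = emotional.astype(float) - neutral.astype(float)
          diff = (diff - diff.min()) / (np.ptp(diff) + 1e-9)
          codes = local_binary_pattern(diff, P, R, method='uniform')
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      rng = np.random.default_rng(0)
      neutral = rng.integers(0, 256, (96, 96))
      emotional = np.roll(neutral, 2, axis=0)        # toy stand-in for a displacement
      print(displacement_lbp_hist(neutral, emotional).round(3))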
    • Open Access Article

      51 - Fusion of Learning Automata to Optimize Multi-constraint Problem
      Sara Motamed Ali Ahmadi
      This paper aims to introduce an effective learning-based classification method for partitioning data in statistical spaces. The work is based on applying multi-constraint partitioning to stochastic learning automata. Stochastic learning automata with fixed or variable structures are a reinforcement learning method. Having no information about the optimal action, such models try to find an answer to a problem. The convergence speed of such algorithms on different problems, and their route to the answer, are such that they reach a proper state if an answer is obtained. However, despite all tricks to prevent the algorithm from getting stuck in local optima, these algorithms do not perform well on problems with many scattered local optima and give no good answer. In this paper, a fusion of stochastic learning automata algorithms is used to solve the given problems and provide a centralized control mechanism. The results show that the recommended algorithm is suitable in terms of time and speed for partitioning constraints and solving optimization problems and, given a large number of samples, yields a learning rate of 97.92%. In addition, the test results clearly indicate increased accuracy and significant efficiency of the recommended system compared with single-model systems based on different learning automata methods.
    • Open Access Article

      52 - Tracking Performance of Semi-Supervised Large Margin Classifiers in Automatic Modulation Classification
      Hamidreza Hosseinzadeh Farbod Razzazi Afrooz Haghbin
      Automatic modulation classification (AMC) of detected signals is an intermediate step between signal detection and demodulation, and is an essential task for an intelligent receiver in various civil and military applications. In this paper, we propose a semi-supervised large-margin AMC method and evaluate its ability to track changes in the received signal-to-noise ratio (SNR) in order to classify all forms of signals in a cognitive radio environment. To achieve this objective, two structures for self-training of large-margin classifiers were developed for additive white Gaussian noise (AWGN) channels with a priori unknown SNR. A suitable combination of higher-order statistics and instantaneous characteristics of digital modulation is selected as the set of effective features. Simulation results show that adding unlabeled input samples to the training set improves the tracking capability of the presented system, making it robust against environmental SNR changes.
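      As an illustration of the higher-order-statistics features mentioned above, the sketch below computes the standard normalized fourth-order cumulants |C40| and |C42|, which separate, e.g., BPSK from QPSK; the exact feature set used in the paper may differ.

      import numpy as np

      def cumulant_features(x):
          # normalized higher-order cumulants commonly used as AMC features
          x = x / np.sqrt(np.mean(np.abs(x) ** 2))      # power-normalize
          M20 = np.mean(x ** 2)
          M21 = np.mean(np.abs(x) ** 2)
          M40 = np.mean(x ** 4)
          M42 = np.mean(np.abs(x) ** 4)
          C40 = M40 - 3 * M20 ** 2
          C42 = M42 - np.abs(M20) ** 2 - 2 * M21 ** 2
          return np.abs(C40), np.abs(C42)

      rng = np.random.default_rng(0)
      bpsk = 2 * rng.integers(0, 2, 4096) - 1 + 0j
      qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
      print("BPSK:", cumulant_features(bpsk))   # |C40| near 2 for BPSK
      print("QPSK:", cumulant_features(qpsk))   # |C40| near 1 for QPSK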
    • Open Access Article

      53 - Security Analysis of Scalar Costa Scheme Against Known Message Attack in DCT-Domain Image Watermarking
      Reza Samadi Seyed Alireza Seyedin
      This paper proposes an accurate information-theoretic security analysis of the Scalar Costa Scheme (SCS) when the SCS is employed in the embedding layer of digital image watermarking. For this purpose, Discrete Cosine Transform (DCT) coefficients are extracted from the cover images. Then, the SCS is used to embed watermarking messages into mid-frequency DCT coefficients. To prevent unauthorized embedding and/or decoding, the SCS codebook is randomized using a pseudorandom dither signal which plays the role of the secret key. A passive attacker applies a Known Message Attack (KMA) on the watermarked messages to practically estimate the secret key. The security level is measured using the residual entropy (equivocation) of the secret key given the attacker's observations. It can be seen that the practical security level of the SCS depends on the host statistics, which contradicts previous theoretical results. Furthermore, the practical security analysis of the SCS leads to values of the residual entropy that differ from the previous theoretical equation. It will be shown that these differences are mainly due to the existence of uniform regions in images that cannot be captured by the previous theoretical analysis. Another source of such differences is that the previous theoretical analysis ignores the dependencies between the observations of non-uniform regions. To provide an accurate reformulation, a theoretical equation for the uniform regions and an empirical equation for the non-uniform regions are proposed. Then, by combining these equations, a new equation is presented for the whole image which considers both the host statistics and the observation dependencies. Finally, the accuracy of the proposed formulations is examined through exhaustive simulations.
    • Open Access Article

      54 - Effects of Wave Polarization on Microwave Imaging Using Linear Sampling Method
      Mehdi Salar Kaleji Mohammad Zoofaghari Reza Safian Zaker Hossein Firouzeh
      The Linear Sampling Method (LSM) is a simple and effective method for the shape reconstruction of unknown objects. It is also a fast and robust method for finding the location of an object. This method is based on the far-field operator, which relates the far-field radiation to its associated line source in the object. There has been extensive research on different aspects of the method, but from the experimental point of view there has been little research, especially on the effect of polarization on the imaging quality of the method. In this paper, we study the effect of polarization on the quality of shape reconstruction of two-dimensional targets. Several examples are illustrated to compare the effects of transverse electric (TE) and transverse magnetic (TM) polarizations on the reconstruction quality of penetrable and non-penetrable objects.
    • Open Access Article

      55 - Extracting Credit Rules from Imbalanced Data: The Case of an Iranian Export Development Bank
      Mehdi Rasul Mohammadreza Gholamian Kamran Shahanaghi
      Credit scoring is an important topic, and banks collect various data from their loan applicants to make appropriate and correct decisions. Rule bases attract particular attention in credit decision making because of their ability to explicitly distinguish between good and bad applicants. Credit scoring datasets are usually imbalanced, mainly because the number of good applicants in a loan portfolio is usually much higher than the number of loans that default. This paper applies rule-based classifiers previously used in credit scoring, including RIPPER, OneR, decision table, PART, and C4.5, to study the reliability of rule extraction and the effect of sampling on its own dataset. A real database of an Iranian export development bank is used, and imbalanced data issues are investigated by randomly oversampling the minority class of defaulters and by a three-fold undersampling of the majority class of non-defaulters. The performance criteria chosen to measure the reliability of the rule extractors are the area under the receiver operating characteristic curve (AUC), accuracy, and the number of rules. Friedman's statistic is used to test for significant differences between techniques and datasets. The results of the study show that PART performs better and is less affected by the sampling of good and bad examples.
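      A minimal sketch of the resampling-and-evaluation protocol on synthetic data, with a shallow decision tree standing in for rule learners such as PART or RIPPER (which are not available in scikit-learn); the imbalance ratio and oversampling factor are assumptions.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # synthetic imbalanced portfolio: few defaulters (invented numbers)
      X = rng.normal(size=(2000, 6))
      y = (X[:, 0] + rng.normal(0, 2, 2000) > 3.3).astype(int)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # random oversampling of the minority (defaulter) class
      minority = np.where(y_tr == 1)[0]
      extra = rng.choice(minority, size=4 * len(minority), replace=True)
      X_bal = np.vstack([X_tr, X_tr[extra]])
      y_bal = np.concatenate([y_tr, y_tr[extra]])

      clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_bal, y_bal)
      print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))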
    • Open Access Article

      56 - Trust Evaluation in Unsupervised Networks: A Fuzzy Logic Approach
      Golnar Assadat Afzali Monireh Hosseini
      Because of the possibility of anonymity and impersonation in social networks, trust plays an important role in these networks. Peer-to-peer networks, by eliminating supervisor roles, reduce management costs, but suffer from problems with the trust and security of users. In this research, social networks are used as supervised networks to evaluate the trust level of users; by identifying these users in unsupervised networks, an appropriate trust level is assigned to them.
    • Open Access Article

      57 - Detection and Removal of Rain from Video Using Predominant Direction of Gabor Filters
      Gelareh Malekshahi Hossein Ebrahimnezhad
      In this paper, we examine the visual effects of rain on an imaging system and present a new method for the detection and removal of rain in video sequences. In the proposed algorithm, a background subtraction technique is used to separate the moving foreground from the background in the frames of a video whose scenes contain moving raindrops. Then, rain streaks are detected using the predominant direction of Gabor filters, i.e., the direction that contains the maximum energy. To achieve this goal, the rainy image is partitioned into multiple sub-images. Then, all directions of the Gabor filter bank are applied to each sub-image, and the direction which maximizes the energy of the filtered sub-image is selected as the predominant direction of that region. Finally, the rainy pixels detected in each frame are replaced with non-rainy background pixels from other frames. As a result, we reconstruct a new video in which the rain streaks have been removed. Despite certain limitations and the existence of texture variations over time, the proposed method is not sensitive to these changes and operates properly. Simulation results show that the proposed method can detect and localize rain regions well.
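      The predominant-direction criterion can be sketched with OpenCV's Gabor kernels: filter a block at several orientations and keep the orientation with maximum output energy. The kernel size and filter parameters are assumptions for illustration.

      import numpy as np
      import cv2

      def predominant_direction(block, n_theta=8):
          # return the Gabor orientation whose filtered output has maximum energy
          best_theta, best_energy = 0.0, -1.0
          for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
              kern = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                        lambd=8.0, gamma=0.5, psi=0)
              resp = cv2.filter2D(block.astype(np.float32), cv2.CV_32F, kern)
              energy = float(np.sum(resp ** 2))
              if energy > best_energy:
                  best_theta, best_energy = theta, energy
          return best_theta

      block = np.zeros((64, 64), np.float32)
      cv2.line(block, (5, 60), (40, 5), 1.0, 1)   # a bright streak-like segment
      print(np.degrees(predominant_direction(block)))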
    • Open Access Article

      58 - Blog Feed Search in the Persian Blogosphere
      Mohammad Sadegh Zahedi Abolfazl Aleahmad Maseud Rahgozar Farhad Oroumchian
      Blogs are one of the main forms of user-generated content on the web, so retrieval algorithms are needed to meet the information needs of weblog users. The goal of blog feed search is to rank blogs by their recurrent relevance to the topic of the query. In this paper, the state-of-the-art blog retrieval methods are surveyed and then evaluated and compared on the Persian blogosphere. In addition, one of the best retrieval models is optimized using data fusion methods. Evaluation of the proposed algorithm is carried out on a standard Persian weblog dataset with 45 diverse queries. Our comparisons show considerable improvement over existing blog retrieval algorithms.
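      As a sketch of the data-fusion step (the paper's exact fusion method is not specified here), CombSUM below min-max normalizes each run's scores and sums them per blog; the run names and scores are invented.

      def combsum(runs):
          # CombSUM data fusion: sum min-max normalized scores across runs;
          # `runs` maps run-name -> {blog_id: score}
          fused = {}
          for run in runs.values():
              lo, hi = min(run.values()), max(run.values())
              for doc, s in run.items():
                  norm = (s - lo) / (hi - lo) if hi > lo else 0.0
                  fused[doc] = fused.get(doc, 0.0) + norm
          return sorted(fused, key=fused.get, reverse=True)

      runs = {
          'model_A': {'blog1': 2.3, 'blog2': 1.1, 'blog3': 0.4},
          'model_B': {'blog2': 9.0, 'blog1': 7.5, 'blog4': 6.0},
      }
      print(combsum(runs))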
    • Open Access Article

      59 - SRR-Shaped Dual-Band CPW-Fed Monopole Antenna for WiMAX/WLAN Applications
      Zahra Mansouri Ramezan Ali Sadeghzadeh Maryam Rahimi Ferdows Zarrabi
      The CPW structure has become a common structure for UWB and multi-band antenna design, and the SRR structure is a well-known kind of metamaterial that has been used in antenna and filter design for multi-band applications. In this paper, a CPW-fed SRR dual-band monopole antenna for WLAN and WiMAX is presented. The prototype antenna is designed for wireless communications such as WLAN and WiMAX at 2.4 GHz and 5 GHz, respectively. HFSS and CST Microwave Studio are used to simulate the prototype antenna with two different methods (FEM and time-domain), and the simulations are also compared with the experimental results. The total size of the antenna is 60 mm × 55 mm × 1.6 mm, and it is fabricated on a low-cost FR-4 substrate. The antenna is connected to a 50 Ω CPW feed line. Its bandwidth is around 3% at 2.45 GHz (2.4-2.5 GHz) and 33% at 5.15 GHz (4.3-6 GHz). Its limited bandwidth at 2.4 GHz is beneficial for power saving in indoor applications. The antenna has 2-7 dBi gain in the mentioned bands with an omni-directional pattern. The experimental results show good agreement with the simulations for both return loss and pattern. The effect of the parasitic SRR on the current distribution has been studied in the presence and absence of the parasitic element, and a comparison of the antenna return loss in the absence of each parasitic element is presented. The polarization simulations confirm that the antenna has linear polarization.
    • Open Access Article

      60 - A Robust Statistical Color Edge Detection for Noisy Images
      Mina Alibeigi Niloofar Mozafari Zohre Azimifar Mahnaz Mahmoodian
      Edge detection is a fundamental tool that plays a significant role in image processing, and the performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. Therefore, edge detection is one of the well-studied areas in image processing and computer vision. However, it is clear that accurate edge map generation is more difficult when images are corrupted with noise. Moreover, most edge detection methods have parameters which must be set manually. In recent years, different approaches have been used to address these problems. Here we propose a new color edge detector based on a statistical test, which is robust to noise. The parameters of this method are set automatically based on the image content. To show the effectiveness of the proposed method, four state-of-the-art edge detectors were implemented and the results are compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. The performance of our method at lower noise levels is very comparable to the existing approaches, whose performance depends highly on their parameter tuning stage. However, at higher noise levels, the results significantly highlight the superiority of the proposed method over the existing edge detection methods, both quantitatively and qualitatively.
    • Open Access Article

      61 - Active Steganalysis of Transform Domain Steganography Based on Sparse Component Analysis
      Hamed Modaghegh Seyed Alireza Seyedin
      This paper presents a new active steganalysis method to break transform-domain steganography. Most steganalysis techniques focus on detecting the presence or absence of a secret message in a cover (passive steganalysis), but in some cases we need to extract or estimate the hidden message (active steganalysis). Although estimating the message is important, there is little research in this area. A new active steganalysis method based on the Sparse Component Analysis (SCA) technique is presented in this work. Here, the sparsity of the cover image and the hidden message is used to extract the hidden message from the stego image. In our method, transform-domain steganography is formulated mathematically as a linear combination of sparse sources, and therefore active steganalysis can be posed as an SCA problem. The feasibility of solving the SCA problem is confirmed by linear programming methods. Then, a fast algorithm is introduced to decrease the computational cost of the steganalysis without much loss of accuracy. The accuracy of our new method has been confirmed in experiments on a variety of transform-domain steganography schemes. These experiments show that, compared to previous active steganalysis methods, our method not only reduces the error rate but also decreases the computational cost.
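      The linear-programming route mentioned above can be sketched as basis pursuit: minimizing the l1 norm subject to the linear mixing constraint, via the standard split s = u - v with u, v >= 0. The mixing matrix and sparsity level are invented for illustration.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      m, n, k = 40, 80, 5
      A = rng.standard_normal((m, n))          # mixing/transform matrix (invented)
      s_true = np.zeros(n)
      s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      y = A @ s_true                           # observed mixture

      # basis pursuit:  min ||s||_1  s.t.  A s = y
      c = np.ones(2 * n)
      A_eq = np.hstack([A, -A])
      res = linprog(c, A_eq=A_eq, b_eq=y,
                    bounds=[(0, None)] * (2 * n), method='highs')
      s_hat = res.x[:n] - res.x[n:]
      print("max recovery error:", np.abs(s_hat - s_true).max())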
    • Open Access Article

      62 - Better Performance of the New Generation of Digital Video Broadcasting-Terrestrial (DVB-T2) Using the Alamouti Scheme with Cyclic Delay Diversity
      Behnam Akbarian Saeed Ghazi-Maghrebi
      The goal of the future terrestrial digital video broadcasting (DVB-T) standard is to employ diversity and spatial multiplexing in order to achieve the full multiple-input multiple-output (MIMO) channel capacity. The DVB-T2 standard targets a system throughput improved by at least 30% over DVB-T. DVB-T2 enhances performance using improved coding methods, modulation techniques, and multiple-antenna technologies. After a brief presentation of the antenna diversity technique and its properties, we show that the well-known Alamouti decoding scheme cannot simply be used over frequency-selective channels. In other words, the Alamouti space-frequency coding in DVB-T2 provides additional diversity; however, the performance degrades in highly frequency-selective channels, because the channel frequency response is not necessarily flat over the entire Alamouti block code. The objective of this work is to present an enhanced Alamouti space-frequency block decoding scheme for MIMO and orthogonal frequency-division multiplexing (OFDM) systems using delay diversity techniques over highly frequency-selective channels. We also investigate the properties of the proposed scheme over different channels. Specifically, we show that the Alamouti scheme combined with Cyclic Delay Diversity (CDD) performs better over some particular channels. We then apply this scheme to the DVB-T2 system as an example. Simulation results confirm that the proposed scheme has a lower bit error rate (BER), especially at high SNRs, compared with the standard Alamouti decoder over highly frequency-selective channels such as single frequency networks (SFN). Furthermore, the new scheme allows high reliability and tolerance. The other advantages of the proposed method are its simplicity, flexibility, and standard compatibility with respect to conventional methods.
    • Open Access Article

      63 - Statistical Analysis of Different Traffic Types Effect on QoS of Wireless Ad Hoc Networks
      Mahmood Mollaei Gharehajlu Saadan Zokaei Yousef Darmani
      IEEE 802.11 based wireless ad hoc networks are highly appealing owing to their lack of need for infrastructure, their easy and quick deployment, and their high availability. A vast variety of applications, such as voice and video transmission over these networks, require different network performance. In order to support quality of service for these applications, characterizing both packet arrivals and available resources is essential. To address these issues we use the Effective Bandwidth/Effective Capacity theory, which statistically expresses the packet arrival and service models. The Effective Bandwidth asymptotically represents the arrival traffic specification using a single function, while the Effective Capacity statistically describes the service model of each node. Based on this theory, we first model each node's service as an ON/OFF process. Then a new closed form of the Effective Capacity is proposed, which is a simple function depending on a few parameters of the network. Afterward, the performance of different traffic patterns, such as constant bit rate, Poisson, and Markov-modulated Poisson processes, is statistically evaluated for both single and aggregate traffic modes. Using the proposed model we show that the traffic pattern affects QoS parameters even if all models have the same average packet arrival rate. We verify the accuracy of our model by a series of simulations run using the NS2 simulator.
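      For a Markov-modulated source such as the ON/OFF model above, the effective bandwidth has a standard spectral-radius form, sketched below; the transition matrix and rates are toy numbers, and this is the generic formula rather than the paper's closed-form Effective Capacity.

      import numpy as np

      def effective_bandwidth(P, rates, theta):
          # (1/theta) * log of the spectral radius of P @ diag(exp(theta*rates))
          M = P @ np.diag(np.exp(theta * np.asarray(rates)))
          return np.log(np.max(np.abs(np.linalg.eigvals(M)))) / theta

      # two-state ON/OFF source: ON emits 10 pkt/slot, OFF emits 0 (toy numbers)
      P = np.array([[0.9, 0.1],    # OFF -> OFF/ON
                    [0.2, 0.8]])   # ON  -> OFF/ON
      for theta in (0.01, 0.1, 1.0):
          # grows from the mean rate (10/3) toward the peak rate (10) with theta
          print(theta, effective_bandwidth(P, [0, 10], theta))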
    • Open Access Article

      64 - A New Architecture for Intrusion-Tolerant Web Services Based on Design Diversity Techniques
      Sadegh Bejani Mohammad Abdollahi Azgomi
      Web services are the realization of service-oriented architecture (SOA). Security is an important challenge for SOAP-based Web services. So far, several security techniques and standards based on traditional security mechanisms, such as encryption and digital signatures, have been proposed to enhance the security of Web services. One aim has been to employ the concepts and techniques of fault-tolerant computing to make Web services more secure, which is called intrusion tolerance. Intrusion tolerance means the continuous delivery of services in the presence of security attacks, and it can be used as a fundamental approach for enhancing the security of Web services. In this paper, we propose a novel architecture for intrusion-tolerant Web services with emphasis on intrusion-tolerance concepts and composite Web service techniques. The proposed architecture, which is called design-diverse intrusion-tolerant Web service (DDITWS), takes advantage of design diversity techniques. For Web service composition, BPEL4WS is used. Formal modeling and verification of the proposed architecture are performed using colored Petri nets (CPNs) and CPN Tools. We have checked the behavioral properties of the model to ensure its correctness. The reliability and security evaluation of the proposed architecture is also performed using a stochastic Petri net (SPN) model and the SHARPE tool. The results show that the reliability and mean-time-to-security-failure (MTTSF) of the proposed architecture are improved.
    • Open Access Article

      65 - A Persian Fuzzy Plagiarism Detection Approach
      Shima Rakian Faramarz Safi Esfahani Hamid Rastegari
      Plagiarism is a common problem in all organizations that deal with electronic content. At present, plagiarism detection tools only detect word-by-word or exact copies of phrases, and paraphrasing is often missed. One of the successful and applicable methods for paraphrase detection is the fuzzy method. In this study, a new fuzzy approach, called Persian Fuzzy Plagiarism Detection (PFPD), is proposed to detect external plagiarism in Persian texts. The proposed approach compares paraphrased texts with the aim of recognizing text similarities. External plagiarism detection evaluates a query document against a document collection. To avoid unnecessary comparisons, the tool compares suspicious documents hierarchically at different levels. This method adapts the fuzzy model to the Persian language and improves on previous methods for evaluating the degree of similarity between two sentences. Experiments on three corpora (TMC, Irandoc, and a corpus extracted from prozhe.com) are performed to gain confidence in the performance of the proposed method. The obtained results show that using the proposed method in candidate document retrieval and in evaluating text similarity increases precision, recall, and F-measure, compared with one of the best previous fuzzy methods, by 22.41%, 17.61%, and 18.54% on average, respectively.
    • Open Access Article

      66 - A Hybrid Object Tracking for Hand Gesture (HOTHG) Approach based on MS-MD and its Application
      Amir Hooshang Mazinan Jalal Hassanian
      In the research proposed here, a hybrid object tracking approach, namely HOTHG, and its application to hand gesture recognition in American Sign Language (ASL) are presented. The approach is proposed to track and recognize hand gestures effectively by combining mean shift (MS) and motion detection (MD) in what is called the MS/MD-based approach. The results of these two well-known object tracking techniques are jointly investigated to improve on those obtained from the traditional methods. The MS algorithm tracks objects based on specified targets, which must be provided manually as long as the MD algorithm is not employed. In the proposed approach, the advantages of the two algorithms are efficiently used to upgrade the hand tracking performance. In the first step, the MD algorithm is applied to remove regions without motion, and subsequently the MS algorithm is used for accurate hand tracking. The present approach thus eliminates the weakness of the traditional methods, which rely on the MS algorithm alone. All experiments are carried out on the Boston-104 database, where the hand gesture is tracked better than with previously existing approaches.
    • Open Access Article

      67 - Fusion Infrared and Visible Images Using Optimal Weights
      Mehrnoush Gholampour Hassan Farsi Sajad Mohammadzadeh
      Image fusion is a process in which images of one scene recorded by several different sensors are combined to provide a final image with higher quality than each individual input image. The fusion is performed by maintaining useful features and reducing or removing useless ones, and the aim of the fusion has to be clearly specified. In this paper we propose a new method which combines visible and infrared images by weighted averaging to provide better image quality. The weighted averaging is performed in the gradient domain, and the weight of each image depends on its useful features. Since these images are recorded under night-vision conditions, the useful features are related to clear scene details. For this reason, object detection is applied to the infrared image and the result is used as its weight; the visible image weight is taken as the complement of the infrared weight. The averaging is performed on the gradients of the input images, and the final composed image is obtained by the Gauss-Seidel method. The quality of the image produced by the proposed algorithm is compared to the images obtained by state-of-the-art algorithms using quantitative and qualitative measures. The obtained results show that the proposed algorithm provides better image quality.
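      A minimal sketch of the gradient-domain fusion described above: average the gradients with per-pixel weights, then relax the Poisson equation to recover the fused image. The vectorized sweep below is Jacobi-style, standing in for the paper's Gauss-Seidel solver, and the weight mask is a toy stand-in for the object-detection result.

      import numpy as np

      def fuse(vis, ir, w, iters=500):
          # weighted average of the input gradients, then iterative relaxation
          # on the Poisson equation  lap(f) = div(g)
          gx = w * np.gradient(ir, axis=1) + (1 - w) * np.gradient(vis, axis=1)
          gy = w * np.gradient(ir, axis=0) + (1 - w) * np.gradient(vis, axis=0)
          div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
          f = vis.copy()
          for _ in range(iters):
              f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                                      f[1:-1, :-2] + f[1:-1, 2:] - div[1:-1, 1:-1])
          return f

      rng = np.random.default_rng(0)
      vis, ir = rng.random((64, 64)), rng.random((64, 64))
      w = (ir > 0.7).astype(float)      # toy "object detection" weight for the IR image
      fused = fuse(vis, ir, w)
      print(fused.shape)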
    • Open Access Article

      68 - Selecting Enterprise Resource Planning System Using Fuzzy Analytic Hierarchy Process Approach
      Hojatallah Hamidi
      Selecting an enterprise resource planning (ERP) system is time consuming due to resource constraints, software complexity, and the diversity of alternatives. A comprehensive and systematic selection policy for an ERP system is very important to the success of an ERP project. In this paper, we propose a fuzzy analytic hierarchy process (FAHP) method to evaluate the alternative ERP systems. The selection criteria for an ERP system are numerous and fuzzy, so selecting an adequate ERP system is crucial in the early phase of an ERP project. The framework decomposes ERP system selection into four main factors. The goal of this paper is to select the best alternative that meets the requirements with respect to product factors, system factors, management factors, and vendor factors. The sub-attributes (sub-factors) related to ERP selection are classified into thirteen main categories (functionality, reliability, usability, efficiency, maintainability, portability, cost, implementation time, user friendliness, flexibility, vendor reputation, consultancy services, and R&D capability) and arranged in a hierarchical structure. These criteria and factors are weighted and prioritized, and finally a framework is provided for ERP selection with the fuzzy AHP method. A real case study from Iran (the PARDIS-LO Company) is also presented to demonstrate the efficiency of this method in practice.
    • Open Access Article

      69 - Simultaneous Methods of Image Registration and Super-Resolution Using Analytical Combinational Jacobian Matrix
      Hossein Rezayi Seyed Alireza Seyedin
      In this paper we propose two new simultaneous image registration (IR) and super-resolution (SR) methods using a novel approach to calculate the Jacobian matrix. SR is the process of fusing several low resolution (LR) images to reconstruct a high resolution (HR) image; as an inverse problem, it involves three principal operations (warping, blurring, and down-sampling) that are applied to the desired HR image to produce the existing LR images. Unlike previous methods, we neither calculate the Jacobian matrix numerically nor derive it by treating the three principal operations separately. We develop a new approach to derive the Jacobian matrix analytically from the combined form of the three principal operations. In this approach, a Gaussian kernel (which is more realistic in a wide range of applications) is considered for blurring, and it can be adaptively resized for each LR image. The main proposed method is established by applying the aforementioned ideas to the joint methods, a class of simultaneous iterative methods in which the incremental values for both the registration parameters and the HR image are obtained by solving one system of equations per iteration. Our second proposed method is formed by applying these ideas to the alternating minimization (AM) methods, a class of simultaneous iterative methods in which the incremental values of the registration parameters are obtained after calculating the high resolution image at each iteration. The results show that our methods are superior to recently proposed methods such as Tian's joint method and Hardie's AM method. Additionally, the computational cost of our proposed methods is also reduced.
    • Open Access Article

      70 - A Linear Model for Energy-Aware Scheduling Problem Considering Interference in Real-time Wireless Sensor Networks
      Maryam Hamidanvar Reza Rafeh
      An important factor in increasing the quality of service in real-time wireless networks is minimizing energy consumption, which conflicts with increasing the message delivery rate because a time deadline is associated with each message. In these networks, every message has a time deadline constraint, and when a message is not delivered to its destination before its deadline, it is dropped. Therefore, scheduling methods that simultaneously consider both energy consumption and time deadline constraints are needed. An effective method for reducing energy consumption is multi-hop transmission of packets; however, it takes longer than single-hop transmission. Parallel transmission is another approach which, on the one hand, reduces the transmission time and, on the other hand, increases the network throughput. A main issue with parallel transmission, however, is the presence of interference among nearby nodes. In this paper, we propose a linear model (ILP formulation) for the energy-aware scheduling problem in real-time wireless sensor networks using parallel transmission. The main objective of the model is to reduce energy consumption and packet loss using multi-hop routing and parallel transmission. Experimental results show that the proposed model finds the optimal solution for the problem and outperforms sequential scheduling based on the TDMA protocol.
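      A toy ILP in the same spirit (not the paper's formulation) can be written with the PuLP modeling library: choose a transmission mode per packet to minimize energy under a shared-slot deadline; all names and numbers are invented for the sketch.

      import pulp

      packets = range(3)
      E = {'1hop': 5.0, '2hop': 3.0}          # energy per packet per mode (invented)
      T = {'1hop': 1, '2hop': 2}              # time slots needed per mode (invented)
      deadline = 5                            # total slots on the shared medium

      prob = pulp.LpProblem("energy_aware_schedule", pulp.LpMinimize)
      x = pulp.LpVariable.dicts("x", [(p, m) for p in packets for m in E],
                                cat="Binary")

      prob += pulp.lpSum(E[m] * x[p, m] for p in packets for m in E)  # total energy
      for p in packets:                       # every packet is sent in exactly one mode
          prob += pulp.lpSum(x[p, m] for m in E) == 1
      prob += pulp.lpSum(T[m] * x[p, m] for p in packets for m in E) <= deadline

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print([(p, m) for p in packets for m in E if x[p, m].value() == 1])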
    • Open Access Article

      71 - A New Approach to the Quantitative Measurement of Software Reliability
      Abbas  Rasoolzadegan
      Nowadays software systems play a very important role in many sensitive and critical applications. Sometimes a small error in software can cause financial or even health losses in critical applications. So reliability assurance, as a non-functional requirement, is very Full Text
      Nowadays software systems play a very important role in many sensitive and critical applications, where a small error in the software can cause financial or even health losses. Reliability assurance, as a non-functional requirement, is therefore vital. One of the key tasks to ensure error-free operation of software is to have a quantitative measurement of its reliability. Software reliability engineering is defined as the quantitative study of the operational behavior of software systems with respect to user requirements concerning reliability. Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Quantifying software reliability is increasingly becoming necessary. We have recently proposed a new approach (referred to as SDAFlex&Rel) to the development of "reliable yet flexible" software. In this paper, we first present the definitions of a set of key terms that are necessary to communicate the scope and contributions of this work. Based on the fact that software reliability is directly proportional to the reliability of the development approach used, a new approach is proposed in this paper to quantitatively measure the reliability of software developed using SDAFlex&Rel, thereby making the informal claims on reliability improvement precise. The quantitative results confirm the reliability improvement that is informally promised by SDAFlex&Rel. Manuscript Document
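      As a hedged illustration of the quantitative definition cited above, the snippet below evaluates reliability under a standard exponential failure model, R(t) = exp(-t/MTBF); this textbook model is an assumption used only for illustration, not the paper's SDAFlex&Rel measurement.

```python
# Reliability as the probability of failure-free operation over time t,
# assuming exponentially distributed failures (an illustrative assumption).
import math

MTBF_HOURS = 500.0                     # assumed mean time between failures

def reliability(t_hours, mtbf=MTBF_HOURS):
    return math.exp(-t_hours / mtbf)

print(f"R(24h)  = {reliability(24):.4f}")   # ~0.9531
print(f"R(168h) = {reliability(168):.4f}")  # ~0.7146
```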
    • Open Access Article

      72 - Automatic Construction of Domain Ontology Using Wikipedia and Enhancing it by Google Search Engine
      Sedigheh  Khalatbari
      Ontologies are the foundation of the Semantic Web. They play the main role in the exchange of information and in the evolution of the Lexical Web into the Semantic Web. Manual construction of ontologies is time-consuming, expensive, and dependent on the knowledge of doma Full Text
      Ontologies are the foundation of the Semantic Web. They play the main role in the exchange of information and in the evolution of the Lexical Web into the Semantic Web. Manual construction of ontologies is time-consuming, expensive, and dependent on the knowledge of domain engineers. Also, ontologies that have been extracted automatically from corpora on the Web may contain incomplete information. The main objective of this study is to describe a method to improve and expand the information of such ontologies. Therefore, this study first discusses the automatic construction of a prototype ontology in the animals domain from Wikipedia, and then presents a method to improve the constructed ontology. The proposed method expands ontology concepts through bootstrapping, using the set of concepts and relations in the initial ontology with the help of the Google search engine. A confidence measure was used to choose the best option from the results returned by Google. Finally, the experiments showed that the information obtained using the proposed method is twice that obtained at the stage of automatic construction of the ontology from Wikipedia. Manuscript Document
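      A sketch of the bootstrapping step described above: candidate concepts returned by web search are kept only when a confidence measure exceeds a threshold. The hit_count function is a hypothetical stand-in for a search-engine API (here backed by made-up counts), and the ratio used as the confidence measure is one plausible choice, not necessarily the paper's.

```python
# Confidence of a candidate relation, estimated from (hypothetical) web
# hit counts: co-occurrence hits normalized by the candidate's own hits.
FAKE_HITS = {                      # made-up counts standing in for a search API
    '"lion" "is a" "mammal"': 120000, '"mammal"': 9000000,
    '"lion" "is a" "chair"': 40, '"chair"': 7000000,
}

def hit_count(query: str) -> int:
    """Hypothetical stand-in for a search-engine result count."""
    return FAKE_HITS.get(query, 0)

def confidence(concept: str, relation: str, candidate: str) -> float:
    joint = hit_count(f'"{concept}" "{relation}" "{candidate}"')
    alone = hit_count(f'"{candidate}"')
    return joint / alone if alone else 0.0

for cand in ["mammal", "chair"]:
    # "mammal" scores far above "chair" and would be added to the ontology.
    print(cand, confidence("lion", "is a", cand))
```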
    • Open Access Article

      73 - On-road Vehicle Detection Based on Hierarchical Clustering Using Adaptive Vehicle Localization
      Moslem  Mohammadi Jenghara Hossein Ebrahimpour Komleh
      Vehicle detection is one of the important tasks in automatic driving. It is a hard problem on which many researchers have focused. Most commercial vehicle detection systems are based on radar, but these systems have some problems, such as difficulty with zigzag motions. Im Full Text
      Vehicle detection is one of the important tasks in automatic driving. It is a hard problem on which many researchers have focused. Most commercial vehicle detection systems are based on radar, but these systems have some problems, such as difficulty with zigzag motions. Image processing techniques can overcome these problems. This paper introduces a method based on hierarchical clustering using low-level image features for on-road vehicle detection. Each vehicle is treated as a cluster. In traditional clustering methods, the threshold distance for each cluster is fixed, but in this paper an adaptive threshold varies according to the position of each cluster. The threshold measure is computed with a bivariate normal distribution. Sampling and teammate selection for each cluster are performed by a member-based weighted average. For this purpose, unlike other methods that use only horizontal or vertical lines, a full edge detection algorithm is utilized. Corners are an important feature of video images commonly used in vehicle detection systems; in this paper, Harris features are applied to detect the corners. The LISA data set is used to evaluate the proposed method. Several experiments are performed to investigate the performance of the proposed algorithm. Experimental results show good performance compared to other algorithms. Manuscript Document
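      A sketch of two ingredients of the method above: Harris corner features and a per-cluster adaptive acceptance test derived from a bivariate normal fitted to the cluster's member positions. The synthetic image and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2

gray = np.zeros((64, 64), np.float32)
gray[20:40, 20:40] = 1.0                      # synthetic object with corners

# Harris corner features (OpenCV), thresholded relative to the peak response.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(response > 0.01 * response.max())
points = np.column_stack([xs, ys]).astype(float)

def within_adaptive_threshold(cluster_pts, candidate, scale=2.0):
    # Accept `candidate` if its Mahalanobis distance under a bivariate
    # normal fitted to the cluster's points is below `scale`; the threshold
    # thus adapts to the cluster's own spread.
    mu = cluster_pts.mean(axis=0)
    cov = np.cov(cluster_pts.T) + 1e-6 * np.eye(2)   # regularized covariance
    d = candidate - mu
    return float(d @ np.linalg.inv(cov) @ d) ** 0.5 < scale

print(len(points), within_adaptive_threshold(points, np.array([30.0, 30.0])))
```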
    • Open Access Article

      74 - A Fuzzy Approach for Reducing Ambiguity in Text Similarity Estimation (Case Study: Persian Web Contents)
      Hamid Ahangarbahan gholamali montazer
      Finding similar web content is of great value to academic communities and software systems. There are many methods and metrics in the literature to measure the extent of text similarity among various documents, with applications especially in plagiarism detection s Full Text
      Finding similar web content is of great value to academic communities and software systems. There are many methods and metrics in the literature to measure the extent of text similarity among various documents, with applications especially in plagiarism detection systems. However, most of them take neither the ambiguity inherent in word or text pair comparison nor structural features into account. As a result, previous methods did not have enough accuracy to deal with vague information. Using structural features and considering the ambiguity inherent in words improves the identification of similar content. In this paper, a new method is proposed that takes both lexical and structural features into consideration in text similarity measures. After preprocessing and removing stopwords, each text is divided into general words and domain-specific knowledge words. Then, two fuzzy inference systems, one lexical and one structural, are designed to assess lexical and structural text similarity. The proposed method has been evaluated on Persian paper abstracts of the International Conference on e-Learning and e-Teaching (ICELET) corpus. The results show that the proposed method can achieve a rate of 75% in terms of precision and can detect 81% of the similar cases. Manuscript Document
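      A minimal sketch of the idea of scoring general and domain-specific words separately and fusing the two scores through fuzzy memberships; the membership shape and the weights are illustrative assumptions, not the paper's tuned inference systems.

```python
# Fuzzy-style text similarity: separate overlap scores for general and
# domain-specific vocabularies, passed through a membership function and
# fused with an assumed domain weight.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def high_membership(x: float) -> float:
    """Piecewise-linear membership of 'highly similar' (ramp from 0.2 to 0.8)."""
    return min(1.0, max(0.0, (x - 0.2) / 0.6))

def text_similarity(general1, domain1, general2, domain2, w_domain=0.7):
    mu_gen = high_membership(jaccard(general1, general2))
    mu_dom = high_membership(jaccard(domain1, domain2))
    return w_domain * mu_dom + (1 - w_domain) * mu_gen

print(text_similarity({"method", "result"}, {"ontology", "fuzzy"},
                      {"method", "data"}, {"ontology", "fuzzy"}))  # ~0.77
```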
    • Open Access Article

      75 - Privacy Preserving Big Data Mining: Association Rule Hiding
      Golnar Assadat  Afzali shahriyar mohammadi
      Data repositories contain sensitive information which must be protected from unauthorized access. Existing data mining techniques can be considered a privacy threat to sensitive data. Association rule mining is one of the foremost data mining techniques, which tries to Full Text
      Data repositories contain sensitive information which must be protected from unauthorized access. Existing data mining techniques can be considered a privacy threat to sensitive data. Association rule mining is one of the foremost data mining techniques, which tries to discover relationships between seemingly unrelated data in a database. Association rule hiding is a research area in privacy preserving data mining (PPDM) which addresses the problem of hiding sensitive rules within the data. Much research has been done in this area, but most of it focuses on reducing the undesired side effects of deleting sensitive association rules in static databases. However, in the age of big data, we confront dynamic databases with new data entering at any time, so most existing techniques would not be practical and must be updated in order to be appropriate for these huge databases. In this paper, a data anonymization technique is used for association rule hiding, while parallelization and scalability features are also embedded in the proposed model in order to speed up the big data mining process. In this way, instead of removing some instances of an existing important association rule, generalization is used to anonymize items at an appropriate level. So, if necessary, we can update important association rules based on the new data entries. We have conducted experiments using three datasets in order to evaluate the performance of the proposed model in comparison with Max-Min2 and HSCRIL. Experimental results show that the information loss of the proposed model is less than that of existing research in this area, and the model can be executed in a parallel manner for less execution time. Manuscript Document
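      A sketch of the generalization idea above: instead of deleting transactions that support a sensitive rule, items are replaced by an ancestor in a taxonomy, so the specific rule disappears while a more general, still minable rule survives. The taxonomy and data are illustrative.

```python
# Hide a sensitive association rule by generalizing its items one level
# up a (made-up) taxonomy instead of deleting transactions.
taxonomy = {"skim milk": "milk", "whole milk": "milk",
            "white bread": "bread", "rye bread": "bread"}

def generalize(transaction, sensitive_items):
    return [taxonomy.get(item, item) if item in sensitive_items else item
            for item in transaction]

db = [["skim milk", "white bread"], ["whole milk", "rye bread"]]
sensitive = {"skim milk", "whole milk"}
print([generalize(t, sensitive) for t in db])
# [['milk', 'white bread'], ['milk', 'rye bread']]: the specific rule
# "skim milk => white bread" is hidden, while "milk => bread" stays minable.
```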
    • Open Access Article

      76 - COGNISON: A Novel Dynamic Community Detection Algorithm in Social Network
      Hamideh Sadat Cheraghchi Ali Zakerolhossieni
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in a time-varying context. While recent studies argue for the usability of social science disciplines for modern social Full Text
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in a time-varying context. While recent studies argue for the usability of social science disciplines for modern social network analysis, we present a novel dynamic community detection algorithm called COGNISON inspired mainly by social theories. To be specific, we take inspiration from prototype theory and cognitive consistency theory to recognize the best community for each member, formulating the community detection algorithm by analogy with human disciplines. COGNISON belongs to the representative-based algorithm category and hints at fortifying the purely mathematical approach to community detection with established social science disciplines. The proposed model is able to determine the proper number of communities with high accuracy in both weighted and binary networks. Comparison with state-of-the-art algorithms proposed for dynamic community discovery on real datasets shows the higher performance of this method in different measures of Accuracy, NMI, and Entropy for detecting communities over time. Finally, our approach motivates the application of human-inspired models in the dynamic community detection context and suggests the fruitfulness of connecting the community detection field and social science theories to each other. Manuscript Document
    • Open Access Article

      77 - Analysis and Evaluation of Techniques for Myocardial Infarction Based on Genetic Algorithm and Weight by SVM
      hojatallah hamidi Atefeh Daraei
      Although the rate of death from Myocardial Infarction is decreasing in developed countries, it has become the leading cause of death in developing countries. Data mining approaches can be utilized to predict the occurrence of Myocardial Infarction. Because of the side effe Full Text
      Although the rate of death from Myocardial Infarction is decreasing in developed countries, it has become the leading cause of death in developing countries. Data mining approaches can be utilized to predict the occurrence of Myocardial Infarction. Because of the side effects of using angioplasty as the main method for diagnosing Myocardial Infarction, presenting a method for diagnosing MI before its occurrence seems really important. This study aims to investigate prediction models for Myocardial Infarction by applying a feature selection model based on Weight by SVM and a genetic algorithm. In our proposed method, a hybrid feature selection method is applied to improve the performance of the classification algorithm. At the first stage of this method, features are selected based on their weights, using Weight by Support Vector Machine. At the second stage, the selected features are given to a genetic algorithm for final selection. After selecting appropriate features, classification methods, including Sequential Minimal Optimization, REPTree, Multi-layer Perceptron, Random Forest, K-Nearest Neighbors and Bayesian Network, are applied to predict the occurrence of Myocardial Infarction. Finally, the best accuracies among the applied classification algorithms are achieved by Multi-layer Perceptron and Sequential Minimal Optimization. Manuscript Document
    • Open Access Article

      78 - Optimization of Random Phase Updating Technique for Effective Reduction in PAPR, Using Discrete Cosine Transform
      Babak Haji Bagher Naeeni
      One of the problems of OFDM systems is the large peak-to-average power ratio. Many attempts have been made to reduce it, among which random phase updating is an important technique. Since the power variance is computable before the IFFT block, the com Full Text
      One of the problems of OFDM systems is the large peak-to-average power ratio (PAPR). Many attempts have been made to reduce it, among which random phase updating is an important technique. Since the power variance is computable before the IFFT block, the complexity of this method is lower than that of other phase injection methods, which can be an important factor. Another interesting capability of the random phase updating technique is the possibility of applying a threshold on the power variance: the phase injection operation is repeated until the power variance reaches the threshold variance. However, this may be considered a disadvantage of the random phase updating technique, because reaching the mentioned threshold may lead to system delay. In this paper, in order to solve this problem, the DCT transform is applied to the subcarrier outputs before phase injection. This reduces the number of carriers required for reaching the threshold value, which accordingly reduces the system delay. Manuscript Document
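      A sketch of the procedure described above, under illustrative sizes and thresholds: subcarrier symbols are first passed through a DCT, then random phases are injected repeatedly until a power target (expressed here as a PAPR threshold rather than the paper's power variance) is met.

```python
# Random phase updating with a DCT applied to the subcarriers first;
# symbol count, threshold, and iteration cap are illustrative assumptions.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N = 64
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK subcarriers

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# DCT decorrelation of the subcarrier outputs before phase injection.
carriers = dct(symbols.real, norm="ortho") + 1j * dct(symbols.imag, norm="ortho")
best = np.fft.ifft(carriers)
for _ in range(100):                       # random phase updating loop
    if papr_db(best) <= 6.0:               # assumed threshold in dB
        break
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    cand = np.fft.ifft(carriers * phases)  # inject a fresh random phase vector
    if papr_db(cand) < papr_db(best):
        best = cand
print(f"PAPR after updating: {papr_db(best):.2f} dB")
```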
    • Open Access Article

      79 - Nonlinear State Estimation Using Hybrid Robust Cubature Kalman Filter
      Behrooz Safarinejadian Mohsen Taher
      In this paper, a novel filter is provided that estimates the states of any nonlinear system, both in the presence and absence of uncertainty with high accuracy. It is well understood that a robust filter design is a compromise between the robustness and the estimation a Full Text
      In this paper, a novel filter is provided that estimates the states of any nonlinear system, both in the presence and absence of uncertainty, with high accuracy. It is well understood that a robust filter design is a compromise between robustness and estimation accuracy. In fact, a robust filter is designed to obtain an accurate and suitable performance in the presence of modelling errors, so in the absence of any unknown or time-varying uncertainties, the robust filter does not provide the desired performance. The new method provided in this paper, named the hybrid robust cubature Kalman filter (CKF), is constructed by combining a traditional CKF and a novel robust CKF. The novel robust CKF is designed by merging a traditional CKF with an uncertainty estimator so that it can provide the desired performance in the presence of uncertainty. Since the presence of uncertainty results in a large innovation value, the hybrid robust CKF adapts itself according to the value of the normalized innovation. The CKF and robust CKF filters are run in parallel, and at any time a suitable decision is taken to choose the estimated state of either the CKF or the robust CKF as the final state estimation. To validate the performance of the proposed filters, two examples are given that demonstrate their promising performance. Manuscript Document
    • Open Access Article

      80 - Quality Assessment Based Coded Apertures for Defocus Deblurring
      Mina Masoudifar Hamid Reza Pourreza
      A conventional camera with small size pixels may capture images with defocused blurred regions. Blurring, as a low-pass filter, attenuates or drops details of the captured image. This fact makes deblurring as an ill-posed problem. Coded aperture photography can decrease Full Text
      A conventional camera with small pixels may capture images with defocused blurred regions. Blurring, as a low-pass filter, attenuates or drops details of the captured image. This fact makes deblurring an ill-posed problem. Coded aperture photography can decrease the destructive effects of blurring in defocused images. Hence, in this case, aperture patterns are designed or evaluated based on how well they reduce these effects. In this paper, a new function is presented for evaluating aperture patterns designed for defocus deblurring. The proposed function consists of a weighted sum of two new criteria, which are defined based on the spectral characteristics of an aperture pattern. On the basis of these criteria, a pattern whose spectral properties are more similar to a flat all-pass filter is assessed as a better pattern. The weights of these criteria are determined by a learning approach. An aggregate image quality assessment measure, including an existing perceptual metric and an objective metric, is used for determining the weights. According to the proposed evaluation function, a genetic algorithm that converges to a near-optimal binary aperture pattern is developed. In consequence, an asymmetric and a semi-symmetric pattern are proposed. The resulting patterns are compared with the circular aperture and some other patterns in different scenarios. Manuscript Document
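      A sketch of the kind of spectral criteria described above: an aperture pattern is scored on how close its power spectrum is to a flat all-pass response, penalizing near-zero frequencies and high spectral variance. The two criteria and the equal weights below are illustrative, not the paper's learned values.

```python
# Score an aperture pattern by spectral flatness: few near-zero DFT bins
# and low log-spectrum variance indicate an all-pass-like response.
import numpy as np

def aperture_score(pattern, w1=0.5, w2=0.5):
    spectrum = np.abs(np.fft.fft2(pattern)) ** 2
    spectrum /= spectrum.mean()
    crit_min = spectrum.min()                                  # penalize spectral zeros
    crit_flat = 1.0 / (1.0 + np.var(np.log(spectrum + 1e-12)))  # flatness
    return w1 * crit_min + w2 * crit_flat

rng = np.random.default_rng(1)
coded = rng.integers(0, 2, (8, 8)).astype(float)   # random binary aperture
open_ap = np.ones((8, 8))                          # fully open aperture
# The coded pattern typically scores higher: its spectrum is flatter than
# the open aperture's, which concentrates energy at DC and has spectral zeros.
print(aperture_score(coded), aperture_score(open_ap))
```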
    • Open Access Article

      81 - Design, Implementation and Evaluation of Multi-terminal Binary Decision Diagram based Binary Fuzzy Relations
      Hamid Alavi Toussi Bahram Sadeghi Bigham
      Elimination of redundancies in the memory representation is necessary for fast and efficient analysis of large sets of fuzzy data. In this work, we use MTBDDs as the underlying data-structure to represent fuzzy sets and binary fuzzy relations. This leads to elimination Full Text
      Elimination of redundancies in the memory representation is necessary for fast and efficient analysis of large sets of fuzzy data. In this work, we use MTBDDs as the underlying data structure to represent fuzzy sets and binary fuzzy relations. This leads to the elimination of redundancies in the representation, fewer computations, and faster analyses. We also extended a BDD package (BuDDy) to support MTBDDs in general and fuzzy sets and relations in particular. Representation and manipulation of MTBDD-based fuzzy sets and binary fuzzy relations are described in this paper. These include the design and implementation of different fuzzy operations such as max, min and max-min composition; in particular, an efficient algorithm for computing max-min composition is presented. The effectiveness of our MTBDD-based implementation is shown by applying it to the fuzzy connectedness and image segmentation problem. Compared to a base implementation, the running time of the MTBDD-based implementation was faster (in our test cases) by a factor ranging from 2 to 27. Also, when the MTBDD-based data structure was employed, the memory needed to represent the final results was improved by a factor ranging from 37.9 to 265.5. We also describe our base implementation, which is based on matrices. Manuscript Document
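      Since the abstract mentions a matrix-based base implementation as the comparison point, the following is a minimal sketch of max-min composition of two binary fuzzy relations stored as dense membership matrices; the MTBDD version eliminates the redundancy this dense form carries.

```python
# Max-min composition of fuzzy relations R (on X x Y) and S (on Y x Z):
# (R o S)[i, k] = max over j of min(R[i, j], S[j, k]).
import numpy as np

def max_min_composition(R, S):
    # Broadcast to shape (|X|, |Y|, |Z|), take min over pairs, max over Y.
    return np.minimum(R[:, :, None], S[None, :, :]).max(axis=1)

R = np.array([[0.2, 0.9], [0.7, 0.4]])
S = np.array([[0.5, 1.0], [0.6, 0.3]])
print(max_min_composition(R, S))
# [[0.6 0.3]
#  [0.5 0.7]]
```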
    • Open Access Article

      82 - Unsupervised Segmentation of Retinal Blood Vessels Using the Human Visual System Line Detection Model
      Mohsen Zardadi Nasser Mehrshad Seyyed Mohammad Razavi
      Retinal image assessment has been employed by the medical community for diagnosing vascular and non-vascular pathology. Computer based analysis of blood vessels in retinal images will help ophthalmologists monitor larger populations for vessel abnormalities. Automatic s Full Text
      Retinal image assessment has been employed by the medical community for diagnosing vascular and non-vascular pathology. Computer-based analysis of blood vessels in retinal images will help ophthalmologists monitor larger populations for vessel abnormalities. Automatic segmentation of blood vessels from retinal images is the initial step of the computer-based assessment of blood vessel anomalies. In this paper, a fast unsupervised method for automatic detection of blood vessels in retinal images is presented. In order to eliminate the optic disc and background noise in the fundus images, a simple preprocessing technique is introduced. First, a newly devised method, based on a simple cell model of the human visual system (HVS), enhances the blood vessels in various directions. Then, an activity function is defined on the simple cell responses. Next, an adaptive threshold is used as an unsupervised classifier that labels each pixel as a vessel pixel or a non-vessel pixel to obtain a binary vessel image. Lastly, morphological post-processing is applied to eliminate exudates which are detected as blood vessels. The method was tested on two publicly available databases, DRIVE and STARE, which are frequently used for this purpose. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques. Manuscript Document
    • Open Access Article

      83 - Node Classification in Social Network by Distributed Learning Automata
      Ahmad Rahnama Zadeh meybodi meybodi Masoud Taheri Kadkhoda
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; then the resulting graph is partitio Full Text
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; then the resulting graph is partitioned according to the labeled nodes, and a network of Distributed Learning Automata is assigned to each partition. In each partition, the maximal spanning tree is determined using DLA. Finally, nodes are labeled according to the rewards of the DLA. We have tested this algorithm on three real social network datasets, and the results show that the expected accuracy of the presented algorithm is achieved. Manuscript Document
    • Open Access Article

      84 - A Bio-Inspired Self-configuring Observer/Controller for Organic Computing Systems
      Ali Tarihi haghighi haghighi feridon Shams
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the Full Text
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* in general, which suits such systems well for pervasive computing. Achievement of these properties in organic computing systems is closely related to a proposed general feedback architecture, called the observer/controller architecture, which supports the mentioned properties through interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in the application of organic computing systems as it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the literature on organic computing systems, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of possible observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated. We also show the applicability of this architecture through a known case study. Manuscript Document
    • Open Access Article

      85 - Safe Use of the Internet of Things for Privacy Enhancing
      hojatallah hamidi
      New technologies and their uses have always had complex economic, social, cultural, and legal implications, with accompanying concerns about negative consequences. So it will probably be with the IoT and their use of data and attendant location privacy concerns. It must Full Text
      New technologies and their uses have always had complex economic, social, cultural, and legal implications, with accompanying concerns about negative consequences. So it will probably be with the IoT and its use of data, with attendant location privacy concerns. It must be recognized that management and control of information privacy may not be sufficient according to traditional user and public preferences. Society may need to balance the benefits of increased capabilities and efficiencies of the IoT against a possibly inevitable increase in visibility into everyday business processes and personal activities. Much as people have come to accept increased sharing of personal information on the Web in exchange for better shopping experiences and other advantages, they may be willing to accept increased prevalence and reduced privacy of information. Because information is a large component of the IoT, and concerns about its privacy are critical to widespread adoption and confidence, privacy issues must be effectively addressed. This paper looks at five phases of information flow, involving the sensing, identification, storage, processing, and sharing of this information in technical, social, and legal contexts in the IoT, and at three areas of privacy controls that may be considered to manage those flows; it will be helpful to practitioners and researchers when evaluating the issues involved as the technology advances. Manuscript Document
    • Open Access Article

      86 - Preserving Data Clustering with Expectation Maximization Algorithm
      Leila Jafar Tafreshi Farzin Yaghmaee
      Data mining and knowledge discovery are important technologies for business and research. Despite their benefits in various areas such as marketing, business and medical analysis, the use of data mining techniques can also result in new threats to privacy and informatio Full Text
      Data mining and knowledge discovery are important technologies for business and research. Despite their benefits in various areas such as marketing, business and medical analysis, the use of data mining techniques can also result in new threats to privacy and information security. Therefore, a new class of data mining methods called privacy preserving data mining (PPDM) has been developed. The aim of research in this field is to develop techniques that could be applied to databases without violating the privacy of individuals. In this work we introduce a new approach to preserving sensitive information in databases with both numerical and categorical attributes, using fuzzy logic. We map a database into a new one that conceals private information while preserving mining benefits. In our proposed method, we use fuzzy membership functions (MFs) such as Gaussian, P-shaped, Sigmoid, S-shaped and Z-shaped for private data. Then we cluster the modified datasets with the Expectation Maximization (EM) algorithm. Our experimental results show that using fuzzy logic for preserving data privacy guarantees valid data clustering results while protecting sensitive information. The accuracy of the clustering algorithm on the fuzzified data is approximately equivalent to that on the original data and is better than the state-of-the-art methods in this field. Manuscript Document
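      A sketch of the pipeline described above: sensitive numerical values are passed through a fuzzy membership function (a Sigmoid MF here, one of the five the abstract lists) and the masked data are clustered with the EM algorithm via scikit-learn's GaussianMixture. The synthetic data and MF parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sigmoid_mf(x, center, slope):
    # Monotone membership function: conceals raw magnitudes but keeps order,
    # so cluster structure survives the masking.
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

rng = np.random.default_rng(2)
salaries = np.concatenate([rng.normal(30, 3, 100), rng.normal(80, 5, 100)])
masked = sigmoid_mf(salaries, center=salaries.mean(), slope=0.1)

# EM clustering (Gaussian mixture) on the masked values only.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
    masked.reshape(-1, 1))
print(np.bincount(labels))   # two clusters survive the masking (~100 each)
```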
    • Open Access Article

      87 - Promoting Mobile Banking Services by Using National Smart Card Capabilities and NFC Technology
      Reza Vahedi Sayed Esmaeail Najafi Farhad Hosseinzadeh Lotfi
      With a mobile banking system, users can install an application on a mobile phone and, without visiting the bank and at any hour of the day, perform a limited set of banking operations such as checking the account balance, transferring funds and paying bills. The second password of the bank account Full Text
      With a mobile banking system, users can install an application on a mobile phone and, without visiting the bank and at any hour of the day, perform a limited set of banking operations such as checking the account balance, transferring funds and paying bills. The second password of the bank account card is the only security facility provided for mobile banking systems and financial transactions. This alone cannot create reasonable security, and for greater protection and prevention of the theft and misuse of citizens' bank accounts, banking services are offered with service limits. Using NFC (Near Field Communication) technology, identity and biometric information and the key pair stored on the smart card chip can be exchanged with the mobile phone and the mobile banking system, making identification, authentication and the digital signing of created documents possible, and thus enhancing the security of, and promoting, mobile banking services. This research is based on library studies, appropriate application tools, the opinions of experts in information technology and electronic banking, and the DEMATEL analysis method. It aims to investigate the possibility of promoting mobile banking services by using national smart card capabilities and NFC technology to overcome the obstacles and risks mentioned above. The obtained results confirm the hypothesis of the research and show the feasibility of implementing the proposed solutions in the banking system of Iran. Manuscript Document
    • Open Access Article

      88 - The Surfer Model with a Hybrid Approach to Ranking the Web Pages
      Javad Paksima - -
      Users who seek results pertaining to their queries come first. To meet users' needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. Regarding information retrieval, it is highly impor Full Text
      Users who seek results pertaining to their queries come first. To meet users' needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. Regarding information retrieval, it is highly important to design a ranking algorithm that provides the results pertaining to a user's query, due to the great deal of information on the World Wide Web. In this paper, a ranking method is proposed with a hybrid approach that considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of the externally linked pages with respect to their content. A probability, obtained using learning automata along with the content of and links to pages, is used to select a webpage to hop to. For a transition to another page, the content of the pages linked to it is used. As the surfer moves about the pages, the PageRank score of a page is recursively calculated. Two standard datasets named TD2003 and TD2004, which are subsets of the LETOR3 dataset, were used to evaluate and investigate the proposed method. The results indicated the superior performance of the proposed approach over other methods introduced in this area. Manuscript Document
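      A sketch of the surfer described above on a three-page toy graph: the probability of hopping from a page to one of its out-links is proportional to the target's content relevance, and scores are obtained by the usual recursive PageRank iteration. The graph, relevance values, and damping factor are illustrative, and the learning-automata update of the hop probabilities is omitted.

```python
# Content-biased smart surfer: transition probabilities weighted by the
# relevance of linked pages, followed by standard PageRank iteration.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0]}       # out-links of each page
relevance = np.array([0.2, 0.9, 0.5])     # assumed content score per page
n, damping = 3, 0.85

P = np.zeros((n, n))
for u, outs in links.items():
    weights = relevance[outs]
    P[u, outs] = weights / weights.sum()  # content-biased hop probabilities

rank = np.full(n, 1.0 / n)
for _ in range(100):                      # recursive PageRank computation
    rank = (1 - damping) / n + damping * rank @ P
print(rank / rank.sum())
```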
    • Open Access Article

      89 - A Hybrid Cuckoo Search for Direct Blockmodeling
      Saeed NasehiMoghaddam mehdi ghazanfari babak teimourpour
      As a way of simplifying, size reducing and making sense of the structure of each social network, blockmodeling consists of two major, essential components: partitioning of actors to equivalence classes, called positions, and clarifying relations between and within posit Full Text
      As a way of simplifying, reducing the size of, and making sense of the structure of a social network, blockmodeling consists of two major, essential components: partitioning of actors into equivalence classes, called positions, and clarifying relations between and within positions. Partitioning of actors into positions is done in various ways, and the ties between and within positions can be represented by density matrices, image matrices and reduced graphs. While actor partitioning in classic blockmodeling is performed via several equivalence definitions, such as structural and regular equivalence, generalized blockmodeling, using a local optimization procedure, searches for the partition vector that best satisfies a predetermined image matrix. The need for a known, predefined social structure and the use of a local search procedure to find the best partition vector fitting that predefined image matrix make generalized blockmodeling restricted. In this paper, we formulate the blockmodeling problem and employ a genetic algorithm to search for the partition vector that best fits the original relational data in terms of the known indices. In addition, across multiple samples and various situations, such as dichotomous, signed, ordinal or interval valued relations, and multiple relations, the quality of the results shows a better fit to the original relational data than solutions reported by researchers in the classic, generalized, and stochastic blockmodeling fields. Manuscript Document
    • Open Access Article

      90 - Investigating the Effect of Functional and Flexible Information Systems on Supply Chain Operation: Iran Automotive Industry
      abbas zareian Iraj Mahdavi Hamed Fazlollahtabar
      This research studies the relationship between supply chain and information system strategies and their effects on supply chain operation and the functionality of an enterprise. Our research goes beyond previous work because it uses a harmonized structure between information syst Full Text
      This research studies the relationship between supply chain and information system strategies and their effects on supply chain operation and the functionality of an enterprise. Our research goes beyond previous work because it uses a harmonized structure between information system and supply chain strategies in order to improve supply chain functionality. Previous research focused on the effects of information systems on modifying the relationship between supply chain strategies and supply chain function; we instead evaluate the direct effects of information systems on supply chain strategies. In this research, we show that information system strategy improves the relationship between supply chain strategies and supply chain function. Therefore, it can be said that creating alignment between information system strategy and supply chain strategies ultimately results in the improvement of supply chain functionality and the company's operation. Manuscript Document
    • Open Access Article

      91 - Short Time Price Forecasting for Electricity Market Based on Hybrid Fuzzy Wavelet Transform and Bacteria Foraging Algorithm
      keyvan borna Sepideh Palizdar
      Predicting the price of electricity is very important because electricity cannot be stored. To this end, parallel methods and adaptive regression have been used in the past, but because of dependence on the ambient temperature, the results were not good. In this study, lin Full Text
      Predicting the price of electricity is very important because electricity cannot be stored. To this end, parallel methods and adaptive regression have been used in the past, but because of dependence on the ambient temperature, the results were not good. In this study, linear prediction methods, neural networks and fuzzy logic have been studied and emulated, and an optimized fuzzy-wavelet prediction method is proposed to predict the price of electricity. In this method, in order to obtain a better prediction, the membership functions of the fuzzy regression along with the type of the wavelet transform filter have been optimized using the E. coli Bacterial Foraging Optimization Algorithm. Then, to better compare this optimal method with other prediction methods, including conventional linear prediction and neural network methods, they were analyzed with the same electricity price data. In fact, our fuzzy-wavelet method yields a more desirable solution than previous methods. More precisely, by choosing a suitable filter and a multiresolution processing method, the maximum error improved by 13.6%, and the mean squared error improved by about 17.9%. In comparison with the fuzzy prediction method, our proposed method has a higher computational load due to the use of the wavelet transform as well as the double use of fuzzy prediction. Due to the large number of layers and neurons used in it, the neural network method has a much higher computational load than our fuzzy-wavelet method. Manuscript Document
    • Open Access Article

      92 - Identification of a Nonlinear System by Determining of Fuzzy Rules
      hojatallah hamidi Atefeh  Daraei
      In this article the hybrid optimization algorithm of differential evolution and particle swarm is introduced for designing the fuzzy rule base of a fuzzy controller. For a specific number of rules, a hybrid algorithm for optimizing all open parameters was used to reach Full Text
      In this article, a hybrid optimization algorithm combining differential evolution and particle swarm optimization is introduced for designing the fuzzy rule base of a fuzzy controller. For a specific number of rules, the hybrid algorithm optimizes all open parameters to reach maximum accuracy in training. The considered hybrid computational approach includes an opposition-based differential evolution algorithm and a particle swarm optimization algorithm. Used to train a fuzzy system employed for identification of a nonlinear system, the proposed hybrid algorithm demonstrates better identification accuracy than other training approaches in identifying the nonlinear system model. The example used in this article is the Mackey-Glass chaotic system, on which the proposed method is finally applied. Manuscript Document
    • Open Access Article

      93 - An Effective Risk Computation Metric for Android Malware Detection
      Mahmood Deypir Ehsan Sharifi
      Android has been targeted by malware developers since it emerged as the most widely used operating system for smartphones and mobile devices. Android security mainly relies on user decisions regarding the installation of applications (apps) by approving their requested permissions Full Text
      Android has been targeted by malware developers since it emerged as the most widely used operating system for smartphones and mobile devices. Android security mainly relies on user decisions regarding the installation of applications (apps) by approving their requested permissions. Therefore, a systematic user assistance mechanism for making appropriate decisions can significantly improve the security of Android-based devices by preventing the installation of malicious apps. However, the criticality of permissions and the security risk values of apps are not well determined for users to make correct decisions. In this study, a new metric is introduced for effective risk computation of untrusted apps based on their required permissions. The metric leverages both the frequency of permission usage in malware and the rarity of that usage in normal apps. Based on the proposed metric, an algorithm is developed and implemented for identifying critical permissions and performing effective risk computation. The proposed solution can be used directly by mobile owners to make better decisions or by Android markets to filter out suspicious apps for further examination. Empirical evaluations on real malicious and normal app samples show that the proposed metric has a high malware detection rate and is superior to recently proposed risk score measurements. Moreover, it performs well on unseen apps in terms of security risk computation. Manuscript Document
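      A sketch of the metric's core idea: a permission is critical when it is frequent among malware and rare among normal apps, and an app's risk accumulates over its requested permissions. The product form and the tiny samples below are illustrative assumptions; the paper's exact combination may differ.

```python
# Permission criticality = frequency among malware x rarity among normal apps;
# app risk = sum of its permissions' criticality scores.
def permission_scores(malware_apps, normal_apps, permissions):
    scores = {}
    for p in permissions:
        freq_mal = sum(p in a for a in malware_apps) / len(malware_apps)
        freq_norm = sum(p in a for a in normal_apps) / len(normal_apps)
        scores[p] = freq_mal * (1.0 - freq_norm)   # frequency x rarity
    return scores

def app_risk(app, scores):
    return sum(scores.get(p, 0.0) for p in app)

malware = [{"SEND_SMS", "READ_CONTACTS"}, {"SEND_SMS"}]
normal = [{"INTERNET"}, {"INTERNET", "READ_CONTACTS"}]
scores = permission_scores(malware, normal,
                           {"SEND_SMS", "READ_CONTACTS", "INTERNET"})
print(app_risk({"SEND_SMS", "INTERNET"}, scores))  # 1.0, driven by SEND_SMS
```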
    • Open Access Article

      94 - ANFIS Modeling to Forecast Maintenance Cost of Associative Information Technology Services
      reza Ehtesham Rasi Leila Moradi
      An Adaptive Neuro-Fuzzy Inference System (ANFIS) was developed for quantifying Information Technology (IT) generated services perceptible to business users. In addition, forecasting the IT cost related to system maintenance can help managers make future and constructive deci Full Text
      An Adaptive Neuro-Fuzzy Inference System (ANFIS) was developed for quantifying Information Technology (IT) generated services perceptible to business users. In addition, forecasting the IT cost related to system maintenance can help managers make future and constructive decisions. The model was built, tuned and trained on a large volume of previous data on IT cost factors, generated services, and the associated costs. First of all, the model was fully developed, stabilized, and passed through intensive training with a large volume of data collected in an organization. It is then possible to feed data from a specific time period into the model to determine the quantity of services and their related maintenance cost. ANFIS forecasts the maintenance cost of measured service availability after first quantifying the services in a specific time period. Having an operational mechanism for measuring and quantifying information technology services tangible to users, and for estimating their costs, contributes to practical, accurate investment. Several components in the field of system maintenance have been considered and measured. The main objective of this study was identifying and determining the amount of investment needed for the maintenance of all generated services by considering their relations to tangible cost factors and also the intangible cost connected to service loss. Manuscript Document
    • Open Access Article

      95 - Effective Solving of the One-Two Gap Problem in the PageRank Algorithm
      Javad Paksima - -
      One of the criteria for search engines to determine the popularity of pages is the analysis of links in the web graph, and various methods have already been presented in this regard. The PageRank algorithm is one of the oldest web page ranking methods based on the web graph and is Full Text
      One of the criteria for search engines to determine the popularity of pages is the analysis of links in the web graph, and various methods have already been presented in this regard. The PageRank algorithm is one of the oldest web page ranking methods based on the web graph and is still used as one of the important factors for web pages on Google. Since the invention of this method, several bugs have been reported and solutions have been proposed to correct them. The most noticed problem concerns pages without any out-links, the so-called suspended (dangling) pages. In web graph analysis, we noticed another problem that occurs on some pages with an out-degree of one: under certain conditions, the linked page's score exceeds that of the source page. This problem can generate unrealistic scores for pages, and the resulting link chains can invalidate the web graph. In this paper, this problem has been investigated under the title "One-Two Gap", and a solution has been proposed for it. Experimental results confirm that the proposed solution fixes the One-Two Gap problem. The standard benchmark dataset TREC2003 is applied to evaluate the proposed method. The experimental results show that our proposed method outperforms the PageRank method theoretically and experimentally in terms of precision, accuracy, and sensitivity, with such criteria as PD, P@n, NDCG@n, MAP, and Recall. Manuscript Document
    • Open Access Article

      96 - Concatenating Approach: Improving the Performance of Data Structure Implementation
      dmp dmp Ali Mahjur
      Data structures are important parts of programs. Most programs use a variety of data structures, and the quality of data structures strongly affects the quality of the applications. In current programming languages, a data structure is defined by storing a reference to the data Full Text
      Data structures are important parts of programs. Most programs use a variety of data structures, and the quality of data structures strongly affects the quality of the applications. In current programming languages, a data structure is defined by storing a reference to the data element in the data structure node. Some shortcomings of the current approach are limits on the performance of a data structure and poor mechanisms for handling key and hash attributes. These issues can be observed in the Java programming language, which dictates that the programmer use references to data elements from the node. Clearly this is not an implementation mistake; it is a consequence of the Java paradigm, which is common to almost all object-oriented programming languages. This paper introduces a new mechanism, called the access method, to implement a data structure efficiently, based on the concatenating approach to data structure handling. In the concatenating approach, one memory block stores both the data element and the data structure node. According to the obtained results, the number of lines in the access method is reduced and reusability is increased. It builds data structures efficiently and provides suitable mechanisms to handle key and hash attributes. Performance, simplicity, reusability and flexibility are the major features of the proposed approach. Manuscript Document
    • Open Access Article

      97 - A Novel User-Centric Method for Graph Summarization Based on Syntactical and Semantical Attributes
      Nosratali  Ashrafi Payaman Mohammadreza Kangavari
      In this paper, we proposed an interactive knowledge-based method for graph summarization. Due to the interactive nature of this method, the user can decide to stop or continue summarization process at any step based on the summary graph. The proposed method is a general Full Text
      In this paper, we propose an interactive knowledge-based method for graph summarization. Due to the interactive nature of this method, the user can decide to stop or continue the summarization process at any step based on the summary graph. The proposed method is a general one that covers three kinds of graph summarization, called structural, attribute-based, and structural/attribute-based summarization. In summarization based on both structure and vertex attributes, the contributions of syntactical and semantical attributes, as well as the importance degrees of attributes, are variable and can be specified by the user. We also propose a new criterion based on density and entropy to assess the quality of a hybrid summary. For the purpose of evaluation, we generated a synthetic graph with 1000 nodes and 2500 edges and extracted the overall features of the graph using the Gephi tool and an application developed in Java. Finally, we generated summaries of different sizes and different values of the structure contribution (the α parameter), and calculated the density and entropy of each summary to assess its quality based on the proposed criterion. The experimental results show that the proposed criterion leads to summaries of better quality. Manuscript Document
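      A sketch of the two components of the proposed quality criterion: the internal edge density of a supernode's members and the entropy of an attribute inside the supernode (lower entropy means a more homogeneous group). The exact weighting of the two components in the paper may differ.

```python
# Density and entropy of one summary group (supernode).
import math
from collections import Counter

def group_density(nodes, edges):
    inside = sum(1 for u, v in edges if u in nodes and v in nodes)
    possible = len(nodes) * (len(nodes) - 1) / 2
    return inside / possible if possible else 0.0

def group_entropy(attr_values):
    counts = Counter(attr_values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

nodes, edges = {1, 2, 3}, [(1, 2), (2, 3), (3, 1)]
print(group_density(nodes, edges))            # 1.0: fully connected group
print(group_entropy(["red", "red", "blue"]))  # ~0.918 bits: mixed attribute
```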
    • Open Access Article

      98 - Using Discrete Hidden Markov Model for Modelling and Forecasting the Tourism Demand in Isfahan
      Khatereh Ghasvarian Jahromi Vida Ghasvarian Jahromi
      Tourism has been increasingly gaining acceptance as a driving force to enhance economic growth because it brings per capita income, employment and foreign currency earnings. Since tourism affects other industries, in many countries tourism is considered in the Full Text
      Tourism has been increasingly gaining acceptance as a driving force to enhance economic growth because it brings per capita income, employment and foreign currency earnings. Since tourism affects other industries, in many countries tourism is considered in the economic outlook. The perishable nature of most sectors dependent on tourism has made the prediction of tourism demand an important issue for future success. The present study, for the first time, uses the Discrete Hidden Markov Model (DHMM) to predict tourism demand. DHMM is the discrete form of the well-known HMM approach, with the capability of parametric modeling of random processes. MATLAB software is applied to simulate and implement the proposed method. The statistical reports of Iranian and foreign tourists visiting Isfahan, obtained from the Iran Cultural Heritage, Handicrafts, and Tourism Organization (ICHHTO)-Isfahan Tourism, were used for simulation of the model. To evaluate the proposed method, the prediction results are compared to the results of an Artificial Neural Network, a Grey model and the Persistence method on the same data. Three error indexes, MAPE (%), RMSE, and MAE, are also applied for a better comparison. The results reveal that, compared to the three other methods, DHMM performs better in predicting tourism demand for the next year, both for Iranian and foreign tourists. Manuscript Document
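      A minimal discrete-HMM sketch of the forecasting idea above: the forward algorithm yields a belief over hidden states from the observed demand sequence, and the distribution of the next observation follows from one more transition. The two-state model below (e.g., low/high demand seasons) is an illustrative assumption, not the paper's fitted model.

```python
# Forward algorithm for a discrete HMM, then one-step-ahead prediction.
import numpy as np

A = np.array([[0.7, 0.3],    # hidden-state transition matrix
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],    # P(observed demand level | hidden state)
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward_belief(obs):
    """Posterior over hidden states after the observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

obs = [0, 0, 1, 1]                     # observed demand levels so far
belief = forward_belief(obs)
next_obs_dist = (belief @ A) @ B       # P(next observed demand level)
print(next_obs_dist)
```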
    • Open Access Article

      99 - The Influence of ERP Usage on Organizational Learning: An Empirical Investigation
      Faisal Aburub
      A number of different hotels have been seen to direct significant investment towards Enterprise Resource Planning (ERP) systems with the aim of securing sound levels of organizational learning. As a strategic instrument, organizational learning has been recommended in t Full Text
      A number of different hotels have been seen to direct significant investment towards Enterprise Resource Planning (ERP) systems with the aim of securing sound levels of organizational learning. As a strategic instrument, organizational learning has been recommended in the modern management arena as potentially able to achieve a competitive edge and stabilize the success of businesses. Learning, as an aim, is not only able to improve the skillset and knowledge of employees but also to achieve organizational growth and development, whilst also helping to build a dynamic learning organization. Organizational learning is especially important in modern-day firms, where staff might choose to leave or change their role owing to the view that knowledge-sharing could be detrimental to their own success. The present work seeks to examine the impact of ERP usage on organizational learning. A new research model has been presented and empirically investigated in the Jordanian hotel industry. 350 questionnaires were distributed across a total of 350 hotels, of which 317 were returned. Structural equation modeling (AMOS 18) was used to analyze the data. The empirical findings emphasize that ERP usage has a significant impact on organizational learning. In line with the study findings, various aspects of organizational learning, such as continuous learning, system perspective, openness and experimentation, and transfer and integration, are recognized as best able to encourage the use of ERP. Suggestions for future work and a discussion of research limitations are also provided. Manuscript Document