• List of Articles


      • Open Access Article

        1 - Low Complex Standard Conformable Transceiver based on Doppler Spread for DVB-T2 Systems
        Saeed Ghazi-Maghrebi, Behnam Akbarian
        This paper addresses a novel Alamouti space-frequency block decoding scheme with discontinuous Doppler diversity (DDoD) and cyclic delay diversity (CDD). We investigate different antenna diversity concepts that can be applied to orthogonal frequency division multiplexing (OFDM) systems over highly frequency-selective channels. The main objectives of this research are standard compatibility and the effect of simple diversity techniques on the channel fading properties. Therefore, we analyze a receiver in terms of the effective channel transfer function, which opens the possibility of optimizing diversity. In addition, a novel transceiver using DDoD is proposed, which increases the Doppler spread of the multipath fading channel without causing additional intercarrier interference (ICI). Moreover, an efficient Alamouti encoder and decoder based on CDD is proposed, which allows high reliability and a capacity enhancement. To evaluate its capability, we have implemented this scheme for the second-generation terrestrial video broadcasting (DVB-T2) system over different channels. Mathematical analysis and simulation results show that the bit error performance of the modified encoding method with these diversity techniques is mostly better than that of other forms of Alamouti encoding over highly frequency-selective channels such as single frequency networks (SFN). Other advantages of the proposed method are simplicity, flexibility, and standard compatibility.
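As context for the encoding this abstract describes, the core of Alamouti space-frequency block coding and of cyclic delay diversity can be sketched in a few lines. This is an illustrative sketch only, not the paper's DVB-T2 implementation; the symbol values and the two-sample delay are arbitrary choices.

```python
def alamouti_sfbc_encode(symbols):
    """Map symbol pairs onto two transmit antennas across adjacent
    subcarriers (space-frequency Alamouti block coding)."""
    assert len(symbols) % 2 == 0
    ant1, ant2 = [], []
    for k in range(0, len(symbols), 2):
        s1, s2 = symbols[k], symbols[k + 1]
        # Antenna 1 carries (s1, -s2*); antenna 2 carries (s2, s1*).
        ant1 += [s1, -s2.conjugate()]
        ant2 += [s2, s1.conjugate()]
    return ant1, ant2


def cyclic_delay(samples, delay):
    """Cyclic delay diversity: cyclically shift the samples sent from one
    antenna, increasing the frequency selectivity seen by the receiver
    without extra receiver-side processing."""
    d = delay % len(samples)
    return samples[-d:] + samples[:-d]


# Four QPSK-like symbols mapped to two antennas, then a 2-sample delay
a1, a2 = alamouti_sfbc_encode([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
delayed = cyclic_delay(a2, 2)
```

In a real DVB-T2 chain the cyclic shift is applied to the time-domain OFDM signal after the IFFT; here it is shown on a bare sample list to keep the sketch self-contained.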
      • Open Access Article

        2 - AI based Computational Trust Model for Intelligent Virtual Assistant
        Babu Kumar, Ajay Vikram Singh, Parul Agarwal
        The intelligent virtual assistant (IVA), also called an AI assistant or digital assistant, is software developed as a product by organizations such as Google, Apple, Microsoft, and Amazon. A virtual assistant based on artificial intelligence works on natural-language commands given by humans, helping users work more efficiently and saving time. Voice-controlled IVAs have seen enormous growth recently, both on cell phones and as standalone devices in people's homes, and they are very useful for illiterate and visually impaired people around the world. While research has analyzed the expected advantages and drawbacks of these devices for IVA users, few studies have empirically assessed security and trust as individual factors in the decision to use IVAs. In this work, IVA users and non-users (N=1000) are surveyed to understand and analyze the barriers and motivations to adopting IVAs, how concerned users are about data privacy and trust with respect to organizational compliance and the social contract surrounding IVA data, and how these concerns have affected the acceptance and use of IVAs. We use a Naïve Bayes classifier to compute trust in IVA devices and further evaluate the probability of using different trusted IVA devices.
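The trust computation described above rests on Bayes' rule over survey responses. The following is a minimal sketch of a Naïve Bayes classifier for categorical answers; the feature names, labels, and toy responses are invented for illustration and are not the paper's N=1000 survey instrument.

```python
from collections import Counter, defaultdict


def train_naive_bayes(samples, labels):
    """Count class priors and per-feature value counts for each class."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, label) -> value counts
    for feats, label in zip(samples, labels):
        for i, value in enumerate(feats):
            cond[(i, label)][value] += 1
    return priors, cond


def predict(priors, cond, feats):
    """Return the label with the highest (unnormalized) posterior,
    with Laplace smoothing, assuming binary-valued features."""
    total = sum(priors.values())
    best_label, best_p = None, -1.0
    for label, count in priors.items():
        p = count / total
        for i, value in enumerate(feats):
            counts = cond[(i, label)]
            p *= (counts[value] + 1) / (sum(counts.values()) + 2)
        if p > best_p:
            best_label, best_p = label, p
    return best_label


# Invented survey answers: (privacy concern, owns smart-home devices)
answers = [("low", "yes"), ("low", "yes"), ("high", "no"), ("high", "no")]
outcome = ["trusts", "trusts", "distrusts", "distrusts"]
priors, cond = train_naive_bayes(answers, outcome)
```

The unnormalized posterior is enough for classification; dividing by the evidence term would also yield the trust probability the abstract refers to.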
      • Open Access Article

        3 - An Effective Method of Feature Selection in Persian Text for Improving the Accuracy of Detecting Request in Persian Messages on Telegram
        Zahra Khalifeh Zadeh, Mohammad Ali Zare Chahooki
        In recent years, data received from social media has increased exponentially, becoming a valuable source of information for many analysts and businesses seeking to expand. Automatic document classification is an essential step in extracting knowledge from these sources. In automatic text classification, words are treated as a set of features; selecting useful features from each text reduces the size of the feature vector and improves classification performance. Many algorithms have been applied to automatic text classification, and although methods proposed for other languages are applicable and comparable, studies on classification and feature selection in Persian text remain insufficient. The present research is conducted in Persian, and the introduction of a Persian dataset is part of its innovation. This article presents an innovative approach to improving the performance of Persian text classification. The authors extracted 85,000 Persian messages from Idekav, a Telegram search engine. The new idea for processing and classifying this textual data is based on expanding the feature vector by adding selective features chosen with the most widely used feature-selection methods based on Local and Global filters. The new feature vector is then filtered by a secondary feature-selection phase, which selects the more appropriate features among those added in the first step to enhance the effect of applying wrapper methods on classification performance. In the third step, combined filter-based methods and a combination of the results of different learning algorithms are used to achieve higher accuracy. At the end of the three selection stages, the proposed method increased accuracy to 0.945 and reduced training time and computation on the Persian dataset.
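The primary filter step of such a pipeline can be illustrated with a standard chi-square (Local) filter followed by a top-k cut. The toy request/non-request documents below are invented for illustration; the paper's actual filters, wrapper step, and 85,000-message dataset are not reproduced here.

```python
def chi_square_scores(docs, labels, target):
    """Score each term's association with class `target` via a 2x2
    chi-square contingency test, a common filter-based criterion."""
    n = len(docs)
    pos = sum(1 for lab in labels if lab == target)
    vocab = {t for doc in docs for t in doc}
    scores = {}
    for term in vocab:
        a = sum(1 for doc, lab in zip(docs, labels)
                if term in doc and lab == target)   # term present, target class
        b = sum(1 for doc, lab in zip(docs, labels)
                if term in doc and lab != target)   # term present, other class
        c = pos - a                                 # term absent, target class
        d = (n - pos) - b                           # term absent, other class
        den = (a + b) * (c + d) * (a + c) * (b + d)
        scores[term] = n * (a * d - b * c) ** 2 / den if den else 0.0
    return scores


def select_top_k(scores, k):
    """Primary filter step: keep the k highest-scoring terms."""
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]


docs = [["please", "send", "file"], ["send", "price", "please"],
        ["weather", "today"], ["news", "today"]]
labels = ["request", "request", "other", "other"]
scores = chi_square_scores(docs, labels, "request")
```

A Global filter would aggregate such scores across all classes; the secondary selection and wrapper stages the abstract describes would then re-rank the surviving features.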
      • Open Access Article

        4 - IT Capability Evaluation through the IT Capability Map
        Mina Ranjbarfard, Seyedeh Reyhaneh Mirsalari
        Organizations are increasingly searching for ways to derive more business value from IT investments, and the need for IT capabilities (ITC) is surging. ITC is critical for building enterprise agility and promoting organizational performance. However, IT capability is usually treated as a causal factor that already exists, and there are few studies on how it is created and evaluated. Appropriate evaluation is necessary for an organization to measure, manage, and improve enterprise ITC. This research aims to identify and map the dimensions of an organization's ITC. Using a mixed research method, this paper comprises two sections. The qualitative section adopts a systematic literature review (SLR) approach to identify the dimensions of ITC. The quantitative section employs factor analysis to validate the identified ITC dimensions and their indicators, developing a more precise model for ITC evaluation. The proposed ITC model includes the dimensions of IT management, IT human resources, IT infrastructure, and implementation of IT solutions, together with 25 related indicators. Drawing on these results, organizations can evaluate their ITC and improve or create essential ITCs based on the evaluation results.
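One simple way to operationalize an evaluation over the four proposed dimensions is to average indicator ratings within each dimension and combine the dimension scores with weights. This sketch assumes 1-5 ratings and equal weights; the paper's 25 indicators and any factor loadings from its analysis are not reproduced here.

```python
def itc_score(indicator_ratings, weights=None):
    """Average indicator ratings within each ITC dimension, then
    combine dimension scores with (default: equal) weights."""
    dims = {d: sum(r) / len(r) for d, r in indicator_ratings.items()}
    w = weights or {d: 1 / len(dims) for d in dims}
    overall = sum(dims[d] * w[d] for d in dims)
    return dims, overall


# Hypothetical 1-5 ratings per dimension (the paper defines 25 indicators)
survey = {
    "IT management": [4, 5],
    "IT human resources": [3, 3],
    "IT infrastructure": [4, 4],
    "implementation of IT solutions": [2, 4],
}
dims, overall = itc_score(survey)
```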
      • Open Access Article

        5 - Using Decision Lattice Analysis to Model IoT-based Companies’ Profit
        Nazanin Talebolfakhr, Seyed Babak Ebrahimi, Donya Rahmani
        Demand uncertainty and high initial investments in IoT-based projects call for analyzing various types of options, especially real options in project execution, to reduce these uncertainties. In this study, we investigate firms’ expected profits resulting from appropriately chosen static and dynamic pricing strategies, namely low-pricing, high-pricing, and contingent pricing, combined with binomial decision lattices. Moreover, the reciprocal influence between pricing strategies and IoT investment can provide useful insights for firms that confront demand uncertainty in selling their products. We propose a model that integrates binomial decision lattices, calculated with the Real Options Super Lattice Solver 2017 software, with pricing policies under uncertainty. The results provide insights into which pricing strategy to choose based on the project’s real option value and the level of the firm’s uncertainty about purchases by high-value consumers. Among the mentioned static and dynamic strategies, the high-pricing and contingent pricing strategies can be selected under different situations, and the expected profits of each are calculated and compared. In contrast, as the low-pricing strategy yields the lowest option value, it is not scrutinized in this study. Experimental results show that if the IoT investment level and the likelihood of high-value consumer purchases are high, the firm should implement the high-pricing strategy; otherwise, choosing contingent pricing would be appropriate given the demand uncertainty.
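The mechanics of a binomial decision lattice can be sketched as a standard backward-induction valuation of an option to invest. The numbers below are hypothetical, and the authors used the Real Options Super Lattice Solver software rather than hand-rolled code; this sketch only shows how a recombining lattice rolls terminal payoffs back to a present option value.

```python
import math


def binomial_option_value(v0, cost, up, down, rate, periods):
    """Value an option to invest `cost` in a project worth v0 today,
    using a recombining binomial lattice and backward induction."""
    p = (math.exp(rate) - down) / (up - down)  # risk-neutral probability
    # Terminal project values and option payoffs at the last period
    values = [v0 * up ** j * down ** (periods - j) for j in range(periods + 1)]
    payoff = [max(v - cost, 0.0) for v in values]
    # Roll the lattice back to the root, discounting each step
    for step in range(periods, 0, -1):
        payoff = [math.exp(-rate) * (p * payoff[j + 1] + (1 - p) * payoff[j])
                  for j in range(step)]
    return payoff[0]


# Hypothetical project: worth 100 now, costs 100 to launch, one period
option_value = binomial_option_value(100.0, 100.0, 2.0, 0.5, 0.0, 1)
```

More lattice periods capture more of the demand uncertainty, which is why the option value grows with the horizon in this toy setting.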
      • Open Access Article

        6 - Using Static Information of Programs to Partition the Input Domain in Search-based Test Data Generation
        Atieh Monemi Bidgoli, Hassan Haghighi
        The quality of test data has an important effect on the fault-revealing ability of software testing. Search-based test data generation reformulates testing goals as fitness functions so that test data generation can be automated by meta-heuristic algorithms, which search the domain of input variables for input data that cover the targets. The domain of input variables is very large even for simple programs, and its size has a major influence on the efficiency and effectiveness of all search-based methods. Despite the large volume of work on search-based test data generation, the literature contains few approaches that address the impact of search-space reduction. To partition the input domain, this study defines a relationship between the structure of the program and the input domain; based on this relationship, we propose a method for partitioning the input domain. To search the partitioned space, we select ant colony optimization, one of the most important and successful meta-heuristic algorithms. To evaluate the proposed approach against previous work, we selected a number of different benchmark programs. The experimental results show that our approach achieves 14.40% better average coverage than the competing approach.
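The idea of biasing a meta-heuristic search toward promising input-domain partitions can be sketched with a toy ant colony loop: ants sample inputs from partitions in proportion to pheromone and reinforce partitions whose inputs cover the target. The program under test, the partition bounds, and the pheromone parameters below are all invented for illustration and are not the paper's method.

```python
import random


def covers_target(x):
    """Toy program under test: the target branch fires only for a
    narrow input range (a hard target for undirected random search)."""
    return 90 <= x <= 100


def aco_search(partitions, iterations=50, ants=10, rho=0.1, seed=1):
    """Each ant picks a partition with probability proportional to its
    pheromone, samples an input from it, and deposits pheromone when the
    input covers the target; pheromone evaporates every iteration."""
    rng = random.Random(seed)
    tau = [1.0] * len(partitions)
    best = None
    for _ in range(iterations):
        for _ in range(ants):
            i = rng.choices(range(len(partitions)), weights=tau)[0]
            lo, hi = partitions[i]
            x = rng.randint(lo, hi)
            if covers_target(x):
                tau[i] += 1.0  # reinforce the promising partition
                best = x
        tau = [(1 - rho) * t for t in tau]  # evaporation
    return best, tau


# Partitions of the input domain; only (90, 100) contains the target.
parts = [(-1000, -1), (0, 89), (90, 100), (101, 1000)]
best_input, pheromone = aco_search(parts)
```

The pheromone vector ends up concentrated on the covering partition, which is the payoff of searching a partitioned space instead of the whole domain.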
      • Open Access Article

        7 - An Approach to Improve the Quality of Service in DTN and Non-DTN based VANET
        Ahmad Sarlak, Yousef Darmani
        Nowadays, given the soaring number of network users, it is necessary to find new approaches to improve network operation. Vehicular ad-hoc networks are bound to play a pivotal role in communication; as traffic in the network rises, using only WiFi is unlikely to be sufficient. Vehicles could use SDN and other networks such as 4G and 5G to distribute traffic across different networks. Moreover, many approaches handle different data types poorly because they neglect the idea of data separation. In this paper, we propose a control scheme called Improved Quality of Service in DTN and Non-DTN (IQDN), which works over the vehicular communication infrastructure using the SDN idea. IQDN separates data into Delay-Tolerant Data (DTD) and Delay-Intolerant Data (DID): DTD is buffered in a vehicle until the vehicle enters an RSU's range and is then sent using IEEE 802.11p, while DID packets are sent over cellular networks such as LTE. Before DTD is transmitted via IEEE 802.11p, the network capacity is evaluated by SDN; if the network has room to carry the data, SDN sends a control message to inform the vehicle. Simulations show that distributing data over RSUs and LTE increases throughput and decreases congestion, so the quality of service improves.
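The DTD/DID dispatch logic described above can be sketched as a simple per-packet decision. The deadline threshold and field names are assumptions made for illustration; the paper's actual classification criterion and SDN signaling are more involved than this toy routine.

```python
def classify(packet):
    """Split traffic into Delay-Tolerant Data (DTD) and Delay-Intolerant
    Data (DID); the per-packet deadline field is an assumed criterion."""
    return "DTD" if packet["deadline_ms"] > 1000 else "DID"


def dispatch(packet, in_rsu_range, rsu_has_capacity):
    """DID goes straight over the cellular (LTE) link; DTD waits in the
    vehicle's buffer until an RSU is in range and the SDN controller
    reports spare IEEE 802.11p capacity."""
    if classify(packet) == "DID":
        return "LTE"
    if in_rsu_range and rsu_has_capacity:
        return "802.11p"
    return "buffer"
```

Offloading the delay-tolerant share onto RSUs whenever capacity allows is what relieves the cellular link and yields the throughput gain the simulations report.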