• Open Access
    • List of Articles

      • Open Access Article

        1 - A New Method for Transformation Techniques in Secure Information Systems
        Hojatallah Hamidi
        The transformation technique relies on the comparison of parity values computed in two ways. The fault detection structures developed here not only detect subsystem faults but also correct faults introduced in the data processing system. Concurrent parity-value techniques are very useful in detecting numerical errors in data processing operations, where a single fault can propagate to many output faults. Parity values are among the most effective tools for detecting faults occurring in the code stream. In this paper, we present a methodology for redundant systems that allows faults to be detected. Checkpointing is the typical technique for tolerating such faults, and this paper presents a checkpointing approach that operates on encoded data. The advantage of this method is that it achieves very low overhead, tuned to the specific characteristics of an application. The numerical results confirm that the multiple-checkpointing technique is more efficient and reliable because it distributes the checkpointing process over groups of processors. This technique has been shown to improve both the reliability of the computation and the performance of the checkpointing.
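        A rough illustration of the parity-comparison idea (a sketch under assumed conventions, not the paper's implementation): a parity value is predicted from the inputs of a processing step and recomputed from its outputs, and a mismatch flags a fault. The XOR parity and the `process` step are placeholders.

        ```python
        from functools import reduce
        from typing import Callable, List

        def parity(block: List[int]) -> int:
            """XOR parity over the data words of a block."""
            return reduce(lambda a, b: a ^ b, block, 0)

        def process(block: List[int]) -> List[int]:
            """Stand-in for a parity-preserving processing step (a permutation)."""
            return list(reversed(block))

        def concurrent_check(block: List[int], step: Callable) -> bool:
            """Compute parity two ways: predicted from the inputs and recomputed
            from the outputs. A mismatch flags a fault in the processing step."""
            return parity(block) == parity(step(block))

        def faulty(block: List[int]) -> List[int]:
            out = process(block)
            out[0] ^= 0x08          # injected single-bit fault
            return out

        data = [0x1F, 0x2A, 0x33]
        print(concurrent_check(data, process))  # True: fault-free run
        print(concurrent_check(data, faulty))   # False: fault detected
        ```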
      • Open Access Article

        2 - Node to Node Watermarking in Wireless Sensor Networks for Authentication of Self Nodes
        Hassan Farsi, Seyed Morteza Nourian
        In order to solve some security issues in Wireless Sensor Networks (WSNs), a node-to-node authentication method based on a digital watermarking technique for verification of relative nodes is proposed. In the proposed method, algorithms with low computational cost for the generation, embedding, and detection of a security ID are designed. The data packets collected by the nodes are marked with the security ID, and the packet header is used to carry the mark. Since the nature of sensor networks is cooperative, using the packet header for authentication is proposed; using the marked header can also prevent other nodes from sending and receiving fake data. Simulations have been performed in environments where unrealistic data is injected with a probability of 1% to 10%. Comparing the proposed method with other methods shows that it is more effective in terms of security, reducing traffic, and increasing network lifetime.
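        A minimal sketch of marking packet headers with a security ID; the field sizes and the truncated keyed-hash construction are assumptions for illustration, not the authors' low-cost algorithms.

        ```python
        import hashlib
        import hmac

        NODE_KEY = b"shared-network-key"   # assumed pre-shared network key

        def security_id(node_id: int, payload: bytes) -> bytes:
            """Security ID: truncated keyed hash binding the sender's ID
            to the payload (illustrative construction)."""
            msg = node_id.to_bytes(2, "big") + payload
            return hmac.new(NODE_KEY, msg, hashlib.sha256).digest()[:4]

        def mark_packet(node_id: int, payload: bytes) -> bytes:
            """Embed the watermark in the header: [id | mark | payload]."""
            return node_id.to_bytes(2, "big") + security_id(node_id, payload) + payload

        def verify_packet(packet: bytes) -> bool:
            """A receiving node recomputes the mark from the header fields
            and rejects packets whose mark does not verify (fake data)."""
            node_id = int.from_bytes(packet[:2], "big")
            mark, payload = packet[2:6], packet[6:]
            return hmac.compare_digest(mark, security_id(node_id, payload))

        pkt = mark_packet(7, b"\x17\x2a")       # sensor node 7 reports a reading
        print(verify_packet(pkt))               # True
        print(verify_packet(pkt[:6] + b"\x00")) # False: tampered payload rejected
        ```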
      • Open Access Article

        3 - Computing Semantic Similarity of Documents Based on Semantic Tensors
        Navid Bahrami, Amir H. Jadidinejad, Mojdeh Nazari
        Exploiting the semantic content of texts has always been an important and challenging issue in Natural Language Processing, due to its wide range of applications such as finding documents related to a query, document classification, and computing the semantic similarity of documents. In this paper, a novel corpus-based approach for computing the semantic similarity of texts is proposed, using the Wikipedia corpus organized in a three-dimensional tensor structure. For this purpose, the semantic vectors of the words in a document are first obtained from the vector space derived from the words in Wikipedia articles; the semantic vector of each document is then formed from its word vectors. Consequently, the semantic similarity of documents can be measured by comparing their semantic vectors. The vector space of the Wikipedia corpus suffers from the curse of dimensionality because of its high-dimensional vectors: vectors in a high-dimensional space are usually very similar to one another, so identifying the most appropriate semantic vector for a word becomes meaningless. Therefore, the proposed approach mitigates the curse of dimensionality by reducing the dimensionality of the vector space through random indexing, which also significantly improves the memory consumption of the approach. Handling synonymous and polysemous words is made feasible by the structured co-occurrences captured through random indexing.
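        A minimal random-indexing sketch (dimensions, sparsity, and the toy corpus are assumptions, not the paper's settings): each article gets a sparse random index vector, each word vector accumulates the index vectors of the articles it occurs in, and a document vector is the sum of its word vectors.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        DIM, NONZERO = 512, 8        # assumed reduced dimensionality and sparsity

        def index_vector() -> np.ndarray:
            """Sparse ternary random vector: a few +1/-1 entries, rest zeros."""
            v = np.zeros(DIM)
            pos = rng.choice(DIM, size=NONZERO, replace=False)
            v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
            return v

        docs = [["bank", "money", "loan"], ["river", "bank", "water"]]
        doc_index = [index_vector() for _ in docs]   # one index vector per article

        word_vecs: dict = {}
        for d, words in enumerate(docs):
            for w in words:
                word_vecs.setdefault(w, np.zeros(DIM))
                word_vecs[w] += doc_index[d]         # accumulate context signatures

        def doc_vector(words) -> np.ndarray:
            """Document vector as the sum of its word vectors."""
            return sum((word_vecs.get(w, np.zeros(DIM)) for w in words), np.zeros(DIM))

        def cosine(a, b) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        print(cosine(doc_vector(["money", "loan"]), doc_vector(["bank"])))
        ```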
      • Open Access Article

        4 - Safe Use of the Internet of Things for Privacy Enhancing
        Hojatallah Hamidi
        New technologies and their uses have always had complex economic, social, cultural, and legal implications, with accompanying concerns about negative consequences, and so it will probably be with the IoT, its use of data, and the attendant location-privacy concerns. It must be recognized that managing and controlling information privacy according to traditional user and public preferences may not be sufficient. Society may need to balance the benefits of the increased capabilities and efficiencies of the IoT against a possibly inevitable increase in the visibility of everyday business processes and personal activities. Much as people have come to accept increased sharing of personal information on the Web in exchange for better shopping experiences and other advantages, they may be willing to accept the increased prevalence and reduced privacy of information. Because information is a large component of the IoT, and concerns about its privacy are critical to widespread adoption and confidence, privacy issues must be effectively addressed. This paper looks at five phases of information flow in the IoT (sensing, identification, storage, processing, and sharing of information) in technical, social, and legal contexts, together with three areas of privacy controls that may be considered to manage those flows; we hope this will be helpful to practitioners and researchers evaluating the issues involved as the technology advances.
      • Open Access Article

        5 - The Surfer Model with a Hybrid Approach to Ranking the Web Pages
        Javad Paksima, Homa Khajeh
        Users who seek results pertaining to their queries come first. To meet users’ needs, thousands of webpages must be ranked, which requires an efficient algorithm that places the relevant webpages in the first ranks. Given the great deal of information on the World Wide Web, designing a ranking algorithm that returns results pertinent to the user’s query is highly important for information retrieval. In this paper, a ranking method with a hybrid approach is proposed that considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of its externally linked pages according to their content. A probability, obtained using learning automata together with the content and links of pages, is used to select the webpage to hop to; for a transition to another page, the content of the pages linked to it is used. As the surfer moves about the pages, the PageRank score of each page is calculated recursively. Two standard datasets named TD2003 and TD2004, subsets of the LETOR3 dataset, were used to evaluate and investigate the proposed method. The results indicated the superior performance of the proposed approach over other methods introduced in this area.
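        For context, a minimal power-iteration sketch of the classic uniform random surfer that the proposed model modifies; the content- and learning-automata-based transition probabilities are the paper's contribution and are not reproduced here.

        ```python
        import numpy as np

        # Toy link graph: adj[i][j] = 1 if page i links to page j (assumed example).
        adj = np.array([[0, 1, 1],
                        [1, 0, 0],
                        [0, 1, 0]], dtype=float)

        def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 50) -> np.ndarray:
            """Power iteration for the classic uniform random surfer. The hybrid
            model would replace the uniform row-normalized transitions with
            content- and automata-derived hop probabilities."""
            n = adj.shape[0]
            transition = adj / adj.sum(axis=1, keepdims=True)  # uniform over out-links
            rank = np.full(n, 1.0 / n)
            for _ in range(iters):
                rank = (1 - damping) / n + damping * rank @ transition
            return rank

        print(pagerank(adj))   # scores sum to ~1; higher means more central
        ```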
      • Open Access Article

        6 - Investigating the Effect of Functional and Flexible Information Systems on Supply Chain Operation: Iran Automotive Industry
        Abbas Zareian, Iraj Mahdavi, Hamed Fazlollahtabar
        This research studies the relationship between supply chain and information system strategies and their effects on supply chain operation and enterprise performance. It goes beyond previous work by using a harmonized structure between information system and supply chain strategies to improve supply chain functionality. Previous research focused on the effects of information systems in moderating the relationship between supply chain strategies and supply chain function; here, we evaluate the direct effects of information systems on supply chain strategies. We show that an information systems strategy improves the relationship between supply chain strategies and supply chain function. Therefore, it can be said that creating alignment between the information system strategy and supply chain strategies ultimately improves supply chain functionality and the company’s operation.
      • Open Access Article

        7 - A Semantic Approach to Person Profile Extraction from Farsi Web Documents
        Hojjat Emami, Hossein Shirazi, Ahmad Abdolahzade
        Entity profiling (EP), an important task in Web mining and information extraction (IE), is the process of extracting the entities in question and their related information from given text resources. From a computational viewpoint, Farsi is one of the less-studied and less-resourced languages, and it suffers from a lack of high-quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extracting the profiles of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis, and (iii) attribute extraction. First, our system takes the raw text as input and annotates it using existing pre-processing tools. In the semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the locally processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.
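        A toy illustration of a rule-based attribute-extraction step, using English regex placeholders, since reproducing the paper's semantic rules over annotated Farsi text is out of scope here.

        ```python
        import re

        # Each rule maps a lexical pattern to a profile attribute. These patterns
        # are invented placeholders; the actual system applies semantic rules to
        # syntactically and semantically annotated Farsi text.
        RULES = {
            "birth_date": re.compile(r"born on (\d{1,2} \w+ \d{4})"),
            "occupation": re.compile(r"works as an? ([\w ]+?)(?:\.|,)"),
        }

        def extract_profile(text: str) -> dict:
            """Apply each rule and collect the first match per attribute."""
            profile = {}
            for attr, pattern in RULES.items():
                m = pattern.search(text)
                if m:
                    profile[attr] = m.group(1)
            return profile

        doc = "Ali was born on 2 May 1970. He works as a teacher, and lives in Tehran."
        print(extract_profile(doc))  # {'birth_date': '2 May 1970', 'occupation': 'teacher'}
        ```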
      • Open Access Article

        8 - Towards Accelerating IP Lookups on Commodity PC Routers using Bloom Filter: Proposal of Bloom-Bird
        Bahram Bahrambeigy, Mahmood Ahmadi, Mahmood Fazlali
        Nowadays, routers are the main backbone of computer networks, specifically the Internet. The need for high-performance, high-speed routers has become a fundamental issue due to the significant growth of information exchange through the Internet and intranets. On the other hand, the flexibility and configurability of open-source routers have extended their usage across networks. Furthermore, after the last remaining IPv4 address block was assigned in 2011, development and improvement of IPv6-enabled routers, especially open-source ones, has become one of the first priorities for network programmers and researchers. Because IPv6 has a 128-bit address space, compared to 32 bits in IPv4, much more space and time are required to store and search addresses, which can cause a speed bottleneck in routing-table lookup. Therefore, in this paper, Bird is selected as an example of an existing open-source router that supports both IPv4 and IPv6 addresses, and Bloom-Bird (our improved version of Bird) is proposed, which adds an extra stage to its IP lookups using a Bloom filter to accelerate the lookup mechanism. To the best of our knowledge, this is the first application of a Bloom filter to the Bird software router. Moreover, false positive errors are kept at an acceptable rate because Bloom-Bird scales its Bloom filter capacity. Using real-world IP prefixes and a huge number of prefixes inserted into its internal FIB (Forwarding Information Base), Bloom-Bird shows up to 61% and 56% speedup for IPv4 and IPv6 lookups, respectively, over standard Bird. Moreover, using manually generated prefix sets, a speedup of up to 93% is gained in the best case.
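        A minimal sketch of gating lookups with a Bloom filter (an exact-match toy; real routers perform longest-prefix match, and the parameters and hash scheme here are assumptions, not Bloom-Bird's design):

        ```python
        import hashlib

        class BloomFilter:
            def __init__(self, m_bits: int = 1 << 16, k: int = 4):
                self.m, self.k = m_bits, k
                self.bits = bytearray(m_bits // 8)

            def _hashes(self, item: bytes):
                for i in range(self.k):
                    h = hashlib.blake2b(item, salt=i.to_bytes(16, "big")).digest()
                    yield int.from_bytes(h[:8], "big") % self.m

            def add(self, item: bytes):
                for h in self._hashes(item):
                    self.bits[h // 8] |= 1 << (h % 8)

            def might_contain(self, item: bytes) -> bool:
                return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(item))

        fib = {b"10.0.0.0/8": "eth0", b"192.168.1.0/24": "eth1"}  # toy FIB
        bloom = BloomFilter()
        for prefix in fib:
            bloom.add(prefix)

        def lookup(prefix: bytes):
            # Fast path: a negative Bloom answer skips the expensive FIB search
            # entirely; a positive answer (possibly false) falls through to the FIB.
            if not bloom.might_contain(prefix):
                return None
            return fib.get(prefix)

        print(lookup(b"10.0.0.0/8"))     # 'eth0'
        print(lookup(b"172.16.0.0/12"))  # None, usually rejected by the filter
        ```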
      • Open Access Article

        9 - ANFIS Modeling to Forecast Maintenance Cost of Associative Information Technology Services
        Reza Ehtesham Rasi, Leila Moradi
        An Adaptive Neuro-Fuzzy Inference System (ANFIS) was developed for quantifying the Information Technology (IT) generated services perceptible to business users. In addition, forecasting the IT cost related to system maintenance can help managers make constructive decisions about the future. The model was built, tuned, and trained on a large volume of historical data on IT cost factors, generated services, and the associated costs. Once fully developed, stabilized, and intensively trained on data collected in an organization, the model can be fed data from a specific time period to determine the quantity of services and their related maintenance cost. ANFIS forecasts the maintenance cost of measured service availability by first quantifying the services in the given time period. An operational mechanism for measuring and quantifying the IT services tangible to users, and for estimating their costs, contributes to accurate and practical investment. Several components in the field of system maintenance were considered and measured. The main objective of this study was to identify and determine the amount of investment required for the maintenance of all generated services, considering their relations to tangible cost factors as well as the intangible costs connected to lost service.
      • Open Access Article

        10 - Representing a Model to Measure Absorbency of Information Technology in Small and Medium-Sized Enterprises
        Mohammad Taghi Sadeghi, Farzad Movahedi Sobhani, Ali Rajabzade Ghatari
        With the rapid development of information technology (IT) and the deepening of informatization, more and more enterprises have realized the strategic value of IT and made great investments in it. However, during the IT implementation process, decision-making, degree of adaptation, and IT performance often fall short of expectations. The assimilation of technology can be defined as the extent to which the use of information technology spreads across organizational processes and becomes routinized in activities. IT capabilities play a crucial role in an ever-changing environment and are considered one of the most important resources for enterprises; enterprises should acquire effective capabilities so that they can deploy and utilize information technology effectively. The purpose of this investigation is to present a model for measuring the absorbency of information technology in small and medium-sized enterprises. To do so, the dimensions of "absorbency of information technology" were determined through exploratory factor analysis in a survey study, and confirmatory factor analysis was used to confirm the model's validity. Findings show that three dimensions are related to the absorbency of information technology: the capability for innovative technology, Inside-Out IT capability, and IT management capability, among which the capability for innovative technology has the highest correlation with the concept.
      • Open Access Article

        11 - Confidence measure estimation for Open Information Extraction
        Vahideh Reshadat, Maryam Hourali, Heshaam Faili
        Prior relation extraction approaches were relation-specific and supervised, yielding new instances of relations known a priori. While effective, this model is not applicable when the number of relations is high or when the relations are not known a priori. Open Information Extraction (OIE) is a relation-independent extraction paradigm designed to extract relations directly from massive and heterogeneous corpora such as the Web. One of the main challenges for an Open IE system is estimating the probability that an extracted relation is correct: a confidence measure indicates how likely an extracted relation is to be a correct instance of a relation among entities. This paper proposes a new method of confidence estimation for OIE called the Relation Confidence Estimator for Open Information Extraction (RCE-OIE). It investigates the incorporation of several proposed features into a confidence metric assigned using logistic regression. These features capture diverse lexical, syntactic, and semantic knowledge, as well as extraction properties such as the number of distinct documents from which extractions are drawn and the number of relation arguments and their types. We implemented the proposed confidence measure on the extractions of Open IE systems and examined how it affects the performance of the results. Evaluations show that incorporating the designed features is promising and that the accuracy of our method is higher than that of the base methods, while keeping almost the same performance otherwise. We also demonstrate how semantic information such as coherence measures can be used in feature-based confidence estimation for Open Relation Extraction (ORE) to further improve performance.
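        A minimal sketch of feature-based confidence estimation with logistic regression; the three features and all values below are fabricated stand-ins for the kinds of features the paper describes.

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy feature matrix for extracted relations. Columns stand in for
        # (i) number of distinct source documents, (ii) number of relation
        # arguments, (iii) a syntactic well-formedness score.
        X = np.array([
            [12, 2, 0.9],   # seen in many documents, well-formed -> likely correct
            [1,  2, 0.4],
            [30, 3, 0.8],
            [2,  1, 0.1],   # rare and syntactically poor -> likely wrong
            [15, 2, 0.7],
            [1,  4, 0.2],
        ])
        y = np.array([1, 0, 1, 0, 1, 0])   # 1 = correct extraction

        clf = LogisticRegression().fit(X, y)

        # The model's predicted probability serves as the confidence measure.
        candidate = np.array([[8, 2, 0.85]])
        print(clf.predict_proba(candidate)[0, 1])   # confidence it is correct
        ```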
      • Open Access Article

        12 - Information Bottleneck and its Applications in Deep Learning
        Hassan Hafez Kolahi, Shohreh Kasaei
        Information Theory (IT) has been used in Machine Learning (ML) since the early days of the field. In the last decade, advances in Deep Neural Networks (DNNs) have led to surprising improvements in many applications of ML. The result has been a paradigm shift in the community toward revisiting previous ideas and applications in this new framework, and ideas from IT are no exception. One of the ideas being revisited by many researchers in this new era is the Information Bottleneck (IB), a formulation of information extraction based on IT. The IB is promising for both analyzing and improving DNNs. The goal of this survey is to review the IB concept and demonstrate its applications in deep learning. The information-theoretic nature of IB also makes it a good candidate for showing, more generally, how IT can be used in ML. Two important concepts are highlighted in this narrative: i) the concise and universal view that IT provides on seemingly unrelated methods of ML, demonstrated by explaining how IB relates to minimal sufficient statistics, stochastic gradient descent, and variational auto-encoders, and ii) the common technical mistakes and problems caused by applying ideas from IT, discussed through a careful study of some recent methods that suffer from them.
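        For reference, the standard IB formulation from the literature (not this survey's own notation): a representation $T$ of the input $X$ is sought that is maximally compressed while preserving information about the target $Y$, subject to the Markov chain $Y \leftrightarrow X \leftrightarrow T$:

        ```latex
        \min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
        ```

        where a larger trade-off parameter $\beta$ favors prediction over compression.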
      • Open Access Article

        13 - SGF (Semantic Graphs Fusion): A Knowledge-based Representation of Textual Resources for Text Mining Applications
        Morteza Jaderyan, Hassan Khotanlou
        The proper representation of textual documents has been the greatest challenge in text mining applications. In this paper, a knowledge-based representation model for text analysis applications is introduced. The proposed functionalities of the system are achieved by integrating structured knowledge into the core components of the system. The semantic, lexical, syntactic, and structural features are identified by the pre-processing module. The enrichment module is introduced to identify contextually similar concepts and concept maps in order to improve the representation. The information content of documents and the enriched content are then fused (merged) into the graphical structure of a semantic network to form a unified and comprehensive representation of the documents. The 20Newsgroups and Reuters-21578 datasets are used for evaluation. The evaluation results suggest that the proposed method exhibits a high level of accuracy, recall, and precision. The results also indicate that even when only a small portion of the information content is available, the proposed method performs well in standard text mining applications.
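        A toy sketch of the fusion step only (graph contents and weights are invented; in the paper they come from the pre-processing and enrichment modules): per-document concept graphs are merged into one semantic network, with weights summed on shared edges so evidence from multiple documents reinforces the same relation.

        ```python
        import networkx as nx

        def doc_graph(concept_edges):
            """Build a document's semantic graph: nodes are concepts,
            weighted edges link co-occurring or related concepts."""
            g = nx.Graph()
            for u, v, w in concept_edges:
                g.add_edge(u, v, weight=w)
            return g

        g1 = doc_graph([("bank", "loan", 2.0), ("loan", "interest", 1.0)])
        g2 = doc_graph([("bank", "loan", 1.0), ("bank", "credit", 1.5)])

        # Fuse: union of nodes and edges, summing weights on shared edges.
        fused = nx.Graph()
        for g in (g1, g2):
            for u, v, data in g.edges(data=True):
                w = fused[u][v]["weight"] + data["weight"] if fused.has_edge(u, v) else data["weight"]
                fused.add_edge(u, v, weight=w)

        print(fused["bank"]["loan"]["weight"])   # 3.0: reinforced across documents
        ```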
      • Open Access Article

        14 - The Innovation Roadmap and Value Creation for Information Goods Pricing as an Economic Commodity
        Hekmat Adelnia Najafabadi, Ahmadreza Shekarchizadeh, Akbar Nabiollahi, Naser Khani, Hamid Rastegari
        Nowadays, most books and information resources, and even movies and application programs, are produced and reproduced as information goods. Given the characteristics of information goods, their cost structure, and their market, the usual and traditional pricing methods for such commodities are not useful, and information goods pricing has undergone innovative approaches. The purpose of product pricing is to find an optimal point that maximizes the manufacturer's profit and the consumer's utility. Undoubtedly, achieving this goal requires adopting appropriate strategies and implementing innovative tactics. Innovative strategies and tactics reflect the analysis of market share, changes in customer behavior, cost patterns, customer preferences, quick response to customer needs, market forecasting, appropriate response to market changes, customer retention, discovery of customers' specific requirements, cost reduction, and increased customer satisfaction. In this research, 32 papers were selected from 540 reputable articles to create a canvas containing more than 20 possible avenues for innovation in the field of information goods pricing, which can be used by companies producing information goods regardless of their size, nationality, and the type of information goods they produce. Among the achievements of this research are some key ideas on how to increase both profits and customer satisfaction, along with three open issues for future research in the field of information goods pricing.
      • Open Access Article

        15 - A New Capacity Theorem for the Gaussian Channel with Two-sided Input and Noise Dependent State Information
        Nima S. Anzabi-Nezhad, Ghosheh Abed Hodtani
        Gaussian interference known at the transmitter can be fully canceled in a Gaussian communication channel employing dirty paper coding, as Costa shows, when the interference is independent of the channel noise and the channel input is designed independently of the interference. In this paper, a new and general version of the Gaussian channel in the presence of two-sided state information correlated with the channel input and noise is considered. By determining a general achievable rate for the channel and obtaining the capacity in a non-limiting case, we analyze and solve the Gaussian version of the Cover-Chiang theorem mathematically and information-theoretically. Our capacity theorem, while including all previous theorems as special cases, explains situations that cannot be analyzed by them; for example, the effect of the correlation between the side information and the channel input on the capacity of the channel, which cannot be analyzed with Costa's "writing on dirty paper" theorem. Meanwhile, we exemplify the concept of the "cognition" of the transmitter or the receiver about a variable (here, the channel noise) with the information-theoretic concept of "side information" correlated with that variable and known at the transmitter or at the receiver. According to our theorem, the channel capacity is an increasing function of the mutual information between the side information and the channel noise.
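        For context, Costa's classical baseline (a standard result in the literature, not this paper's new theorem): with input power constraint $P$, noise variance $N$, and interference known non-causally at the transmitter, dirty paper coding achieves the interference-free capacity

        ```latex
        C \;=\; \frac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right),
        ```

        as if the interference were absent; the paper's theorem generalizes this setting to two-sided state information correlated with both the channel input and the noise.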
      • Open Access Article

        16 - Secured Access Control in Security Information and Event Management Systems
        Leila Rikhtechi, Vahid Rafeh, Afshin Rezakhani
        Nowadays, Security Information and Event Management (SIEM) is very important in software. A SIEM stores and monitors events in software, and unauthorized access to its logs can prompt different security threats, such as information leakage and violation of confidentiality. In this paper, a novel method is suggested for secured and integrated access control in SIEM. First, the key points where the SIEM accesses the information within the software are specified, and integrated access control policies are developed for them. Accordingly, the threats entering the access control module embedded in this system are carefully detected. By applying the proposed method, it is possible to provide a secured and integrated access control module for SIEM, and the security of the access control module significantly increases in these systems. The method is implemented in three stages: requirements analysis for the establishment of a secure SIEM system, secure architectural design, and secure coding. The access control module is designed to create a secured SIEM, and a test tool module is designed for evaluating the access control module's vulnerabilities. Also, to evaluate the proposed method, a dataset of ten thousand records is considered and the accuracy is calculated. The outcomes show that the accuracy of the proposed method is significantly improved. The results of this paper can be used for designing an integrated and secured access control system in SIEM systems.
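        A minimal sketch of a single-choke-point access check in front of SIEM logs; the roles, actions, and audit hook are illustrative assumptions, not the paper's architecture.

        ```python
        from dataclasses import dataclass

        # Integrated policy table: which role may perform which action on which resource.
        POLICIES = {
            ("analyst", "read",  "event_log"),
            ("admin",   "read",  "event_log"),
            ("admin",   "purge", "event_log"),
        }

        @dataclass
        class Request:
            user: str
            role: str
            action: str
            resource: str

        def authorize(req: Request) -> bool:
            """Single choke point for every access the SIEM makes to the logs;
            denials are themselves logged so tampering attempts leave a trace."""
            allowed = (req.role, req.action, req.resource) in POLICIES
            if not allowed:
                print(f"AUDIT: denied {req.user} ({req.role}) {req.action} on {req.resource}")
            return allowed

        print(authorize(Request("alice", "analyst", "read",  "event_log")))  # True
        print(authorize(Request("bob",   "guest",   "purge", "event_log")))  # False, audited
        ```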
      • Open Access Article

        17 - IT Capability Evaluation through the IT Capability Map
        Mina Ranjbarfard, Seyedeh Reyhaneh Mirsalari
        Organizations are increasingly in search of ways to derive more business value from IT investments, and the need for IT capabilities (ITC) is surging. ITC is critical to building enterprise agility and promoting organizational performance. However, IT capability is usually treated as a causal factor that already exists, and there are few studies on how IT capability is created and evaluated. Appropriate evaluation is necessary for an organization to measure, manage, and improve enterprise ITC. This research aims to identify and map the dimensions of an organization's ITC. Using a mixed research method, this paper comprises two sections. The qualitative section adopts a systematic literature review (SLR) approach to identify the dimensions of ITC. The quantitative section employs factor analysis to validate the identified ITC dimensions and their indicators, in an attempt to develop a more precise model for ITC evaluation. The proposed ITC model includes the dimensions of IT management, IT human resources, IT infrastructure, and implementation of IT solutions, as well as 25 related indicators. Drawing on the results of this paper, organizations can evaluate their ITCs and improve or create essential ones based on the evaluation results.
      • Open Access Article

        18 - Performance Analysis of Hybrid SOM and AdaBoost Classifiers for Diagnosis of Hypertensive Retinopathy
        Wiharto Wiharto, Esti Suryani, Murdoko Susilo
        The diagnosis of hypertensive retinopathy can be made by observing the tortuosity of the retinal vessels, a feature that can show the characteristics of normal or abnormal blood vessels. This study aims to analyze the performance of a computer-aided diagnosis system for hypertensive retinopathy (CAD-RH) based on feature extraction from the tortuosity of retinal blood vessels. This study uses a segmentation method based on self-organizing map (SOM) clustering combined with feature extraction, feature selection, and the ensemble Adaptive Boosting (AdaBoost) classification algorithm. Feature extraction was performed using fractal analysis with the box-counting method, lacunarity with the gliding-box method, and invariant moments. Feature selection was done using the information gain method: all the produced features are ranked and then selected by reference to their gain value. The best system performance was obtained with 2 clusters, the fractal dimension, lacunarity with box sizes 2^2 to 2^9, and invariant moments M1 and M3. Performance in these conditions reaches 84% sensitivity, 88% specificity, a positive likelihood ratio (LR+) of 7.0, and 86% area under the curve (AUC). This model also outperforms a number of ensemble algorithms, such as bagging and random forest. Referring to these results, it can be concluded that this model can serve as an alternative CAD-RH approach, with performance in the good category.
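        A minimal box-counting sketch for estimating the fractal dimension of a binarized vessel map (the image and box sizes are toy assumptions, not the study's configuration):

        ```python
        import numpy as np

        def box_count(img: np.ndarray, size: int) -> int:
            """Number of size x size boxes containing at least one vessel pixel."""
            h, w = img.shape
            count = 0
            for y in range(0, h, size):
                for x in range(0, w, size):
                    if img[y:y + size, x:x + size].any():
                        count += 1
            return count

        def fractal_dimension(img: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
            """Slope of log N(s) versus log(1/s) over the chosen box sizes."""
            counts = [box_count(img, s) for s in sizes]
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return float(slope)

        # Toy binary "vessel" image: a diagonal line has dimension close to 1.
        img = np.eye(64, dtype=bool)
        print(fractal_dimension(img))   # roughly 1.0
        ```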
      • Open Access Article

        19 - An ICT Performance Evaluation Model based on Meta-Synthesis Approach
        Khatrehe Bamary, Mohammad Reza Behboudi, Tayebeh Abbasnjad
        Information and Communication Technology (ICT) is one of the key determinants of today’s organizational success. Companies therefore spend a significant amount of money on ICT each year without being sure that they will get a good result. The purpose of this study is to identify the dimensions and indicators of ICT performance evaluation and to suggest a model for assessing it in organizations. This research is mainly a qualitative study with a meta-synthesis approach, using the seven-stage qualitative method of Sandelowski and Barroso to systematically review the literature and find the sub-indices (codes), indices (themes), and dimensions (categories) of ICT performance evaluation. A search of scientific databases with appropriate keywords found 516 articles, of which 89 were finally chosen and used for analysis. Moreover, a questionnaire was designed and answered by ICT experts and managers to determine the importance of each indicator in the model. Based on the data analysis, the proposed ICT performance evaluation model has three dimensions: strategic, quality, and sustainability. The strategic dimension includes indicators of organizational strategy, IT strategy, and alignment. The quality dimension includes maturity and performance indicators, and the sustainability dimension includes environmental, economic, and social indicators. For each of these indicators, a detailed list of 104 sub-indices, which are substantial for the evaluation of ICT performance in organizations, was identified and explained.
      • Open Access Article

        20 - Developing a Contextual Combinational Approach for Predictive Analysis of Users' Mobile Phone Trajectory Data in LBSNs
        Fatemeh Ghanaati, Gholamhossein Ekbatanifard, Kamrad Khoshhal Roudposhti
        Today, smartphones, due to their ubiquity, have become indispensable in human daily life. Progress in mobile phone technology has recently resulted in the emergence of several popular services such as location-based social networks (LBSNs), in which predicting the next Point of Interest (POI) is an important task. The trajectory data gathered in LBSNs include various contextual information, such as geographical and temporal contextual information (GTCI), that plays a crucial role in next-POI recommendation. Various methods, including collaborative filtering (CF) and recurrent neural networks, have incorporated the contextual information of user trajectory data to predict the next POIs. CF methods do not consider the effect of sequential data on modeling, while the next-POI prediction problem is inherently a time-sequence problem. Although recurrent models have been proposed for sequential data modeling, they have limitations, such as treating the effects of different kinds of contextual information similarly, even though each has a separate impact. In the current study, a geographical-temporal contextual information-extended attention gated recurrent unit (GTCI-EAGRU) architecture is proposed to separately consider the influence of geographical and temporal contextual information on next-POI recommendation. In this research, the GRU model was extended with three separate attention gates, timestamp, geographical, and temporal contextual attention gates, to incorporate the contextual information of the user trajectory data in the recurrent layer of the GTCI-EAGRU architecture. Inspired by the matrix factorization assumption of CF approaches, a ranked list of POI recommendations is provided for each user. Moreover, a comprehensive evaluation was conducted using large-scale real-world datasets from three LBSNs: Gowalla, Brightkite, and Foursquare. The results revealed that the performance of GTCI-EAGRU was higher than that of competitive baseline methods in terms of Acc@10, on average, by 42.11% across the three datasets.
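        A simplified reading of the idea of adding a separate context attention gate to a GRU step, in plain NumPy; the shapes, initialization, and the single context channel shown are assumptions for illustration, not the GTCI-EAGRU equations.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        H, X, C = 8, 4, 3   # hidden, input, and context sizes (toy assumptions)

        def init(*shape):
            return rng.normal(0, 0.1, size=shape)

        # Standard GRU parameters plus one extra attention gate for a context
        # channel (only one channel shown; the paper uses three separate gates).
        Wz, Uz = init(H, X), init(H, H)
        Wr, Ur = init(H, X), init(H, H)
        Wh, Uh = init(H, X), init(H, H)
        Wa, Ua = init(H, C), init(H, H)   # attention gate over the context

        sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

        def gru_step(h, x, ctx):
            """One recurrent step. The extra gate `a` scales how much the
            context is allowed to influence the candidate state, which is the
            flavor of GTCI-EAGRU's separate context gates (simplified)."""
            z = sigmoid(Wz @ x + Uz @ h)              # update gate
            r = sigmoid(Wr @ x + Ur @ h)              # reset gate
            a = sigmoid(Wa @ ctx + Ua @ h)            # context attention gate
            h_cand = np.tanh(Wh @ x + Uh @ (r * h) + a * (Wa @ ctx))
            return (1 - z) * h + z * h_cand

        h = np.zeros(H)
        for _ in range(5):                            # five toy time steps
            x, ctx = rng.normal(size=X), rng.normal(size=C)
            h = gru_step(h, x, ctx)
        print(np.round(h, 3))
        ```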