• Journal of Information Systems and Telecommunication (JIST) (Scientific)
  • About Journal

     The Journal of Information Systems and Telecommunication (JIST) accepts and publishes papers containing original research and/or development results that represent an effective and novel contribution to knowledge in the area of information systems and telecommunication. Contributions are accepted in the form of Regular Papers or Correspondence. Regular Papers give a well-rounded treatment of a problem area, whereas Correspondence focuses on a single point within a defined problem area. With the permission of the editorial board, other kinds of papers may be published if they are relevant or of interest to the readers. Responsibility for the content of the papers rests with the authors alone. The Journal is aimed not only at a national community but also at an international audience; for this reason, authors are expected to write in English.

    This Journal is published with the scientific support of the Advanced Information Systems (AIS) Research Group and the Digital & Signal Processing Group, ICTRC.

     

    JIST has adopted an author-pays Open Access (OA) model for authors from abroad, effective from the 1 March 2021 volume; it applies to all new submissions received by the journal from that date.

    For further information on Article Processing Charge (APC) policies, please visit our APC page or contact us at infojist@gmail.com.


    Latest articles published

    • Open Access Article

      1 - Low Complex Standard Conformable Transceiver based on Doppler Spread for DVB-T2 Systems
      Saeed Ghazi-Maghrebi, Behnam Akbarian
      Issue 32 , Volume 8 , Autumn 2020
      This paper addresses a novel Alamouti space-frequency block decoding scheme with discontinuous Doppler diversity (DDoD) and cyclic delay diversity (CDD). We investigate different antenna diversity concepts that can be applied to orthogonal frequency division multiplexing (OFDM) systems over highly frequency-selective channels. The main objectives of this research are standard compatibility and the effect of simple diversity techniques on the channel fading properties. Therefore, we analyze a receiver in terms of the effective channel transfer function, which opens the possibility of optimizing diversity. Besides, a novel transceiver using DDoD is proposed, which increases the Doppler spread of the multipath fading channel without causing additional Intercarrier Interference (ICI). Moreover, an efficient Alamouti encoder and decoder based on CDD is proposed, which allows high reliability and capacity enhancement. To evaluate its capability, we have implemented this scheme for the second-generation terrestrial video broadcasting (DVB-T2) system over different channels. Furthermore, mathematical analysis and simulation results show that the bit error performance of the modified encoding method with these diversity techniques is generally better than that of other Alamouti encoding schemes over highly frequency-selective channels such as single frequency networks (SFN). The other advantages of the proposed method are simplicity, flexibility, and standard compatibility.
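The combining step at the heart of any Alamouti-based scheme can be sketched in a few lines. This is the textbook two-antenna combiner, not the paper's space-frequency variant with CDD and DDoD, and the channel values used below are purely illustrative:

```python
def alamouti_decode(r1, r2, h1, h2):
    """Classic Alamouti combining for two received symbols r1, r2 over
    complex channel gains h1, h2 (assumed constant over the two slots).
    In the noise-free case the combiner returns the transmitted symbols
    exactly, scaled out by the channel energy |h1|^2 + |h2|^2."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    norm = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / norm, s2_hat / norm
```

In the transmit model, slot 1 sends (s1, s2) on the two antennas and slot 2 sends (-s2*, s1*); the combiner above then separates the two symbols without interference.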

    • Open Access Article

      2 - AI based Computational Trust Model for Intelligent Virtual Assistant
      Babu Kumar, Ajay Vikram Singh, Parul Agarwal
      Issue 32 , Volume 8 , Autumn 2020
      The Intelligent Virtual Assistant (IVA), also called an AI assistant or digital assistant, is software developed as a product by organizations such as Google, Apple, Microsoft, and Amazon. A virtual assistant based on Artificial Intelligence works on natural language commands given by humans, which makes it human-friendly; it helps users work more efficiently and saves time. Voice-controlled IVAs have seen enormous growth recently, on cell phones and as standalone devices in people's homes. The intelligent virtual assistant is also very useful for illiterate and visually impaired people around the world. While research has analyzed the expected advantages and drawbacks of these devices for their users, few studies have empirically assessed the roles of security and trust in the individual decision to use IVAs. In this work, IVA users and non-users (N=1000) are surveyed to understand and analyze the barriers and motivations to adopting IVAs, how users are concerned about data privacy and trust with respect to organizational compliance and the social contract around IVA data, and how these concerns have affected the acceptance and use of IVAs. We use a Naïve Bayes classifier to compute trust in IVA devices and further evaluate the probability of using different trusted IVA devices.
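Since the abstract does not list the survey features, the sketch below uses hypothetical binary survey features (perceived privacy, brand familiarity) to illustrate how a Naïve Bayes classifier with Laplace smoothing can turn survey responses into a trust probability:

```python
from collections import defaultdict

def train_naive_bayes(samples):
    """samples: list of (feature_dict, label) pairs with binary features.
    Returns class priors plus the raw counts needed for prediction."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in samples:
        label_counts[label] += 1
        for name, value in feats.items():
            feat_counts[label][(name, value)] += 1
    total = len(samples)
    priors = {l: c / total for l, c in label_counts.items()}
    return priors, feat_counts, label_counts

def predict_trust(priors, feat_counts, label_counts, feats):
    """Return the normalised posterior P(label | feats) for each label."""
    scores = {}
    for label, prior in priors.items():
        p = prior
        for name, value in feats.items():
            count = feat_counts[label][(name, value)]
            # add-one (Laplace) smoothing over the two binary values
            p *= (count + 1) / (label_counts[label] + 2)
        scores[label] = p
    norm = sum(scores.values())
    return {l: s / norm for l, s in scores.items()}
```

A survey row such as `{"privacy_ok": 1, "brand_known": 1}` would then be scored against the "trust" and "distrust" classes learned from the training responses.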

    • Open Access Article

      3 - An Effective Method of Feature Selection in Persian Text for Improving the Accuracy of Detecting Request in Persian Messages on Telegram
      Zahra Khalifeh Zadeh, Mohammad Ali Zare Chahooki
      Issue 32 , Volume 8 , Autumn 2020
      In recent years, data received from social media has increased exponentially. It has become a valuable source of information for many analysts and businesses looking to expand. Automatic document classification is an essential step in extracting knowledge from these sources. In automatic text classification, words are treated as a set of features. Selecting useful features from each text reduces the size of the feature vector and improves classification performance. Many algorithms have been applied to the automatic classification of text. Although the methods proposed for other languages are applicable and comparable, studies on classification and feature selection in Persian text have not been carried out sufficiently. The present research is conducted in Persian, and the introduction of a Persian dataset is part of its innovation. In this article, an innovative approach is presented to improve the performance of Persian text classification. The authors extracted 85,000 Persian messages from the Idekav system, a Telegram search engine. The new idea presented in this paper for processing and classifying this textual data is based on expanding the feature vector by adding selective features chosen with the most widely used feature selection methods based on local and global filters. The new feature vector is then filtered by applying a secondary feature selection, which selects the more appropriate features among those added in the first step to enhance the effect of applying wrapper methods on classification performance. In the third step, combined filter-based methods and the combination of the results of different learning algorithms are used to achieve higher accuracy. At the end of the three selection stages, the proposed method increased accuracy up to 0.945 and reduced training time and computation on the Persian dataset.
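As a rough illustration of a "global filter" of the kind the abstract mentions, the sketch below scores terms with the standard chi-square statistic over document frequencies and keeps the top k. The toy request/non-request corpus and the choice of chi-square are assumptions for illustration, not the paper's actual filter set:

```python
from collections import Counter

def chi2_scores(docs, labels, positive):
    """Chi-square score per term for a binary task; docs are token
    lists, labels a parallel list of class names."""
    n = len(docs)
    n_pos = sum(1 for l in labels if l == positive)
    df, df_pos = Counter(), Counter()
    for tokens, label in zip(docs, labels):
        for term in set(tokens):          # document frequency, not raw counts
            df[term] += 1
            if label == positive:
                df_pos[term] += 1
    scores = {}
    for term in df:
        a = df_pos[term]                  # positive docs containing the term
        b = df[term] - a                  # negative docs containing the term
        c = n_pos - a                     # positive docs missing the term
        d = n - n_pos - b                 # negative docs missing the term
        den = (a + b) * (c + d) * (a + c) * (b + d)
        scores[term] = n * (a * d - b * c) ** 2 / den if den else 0.0
    return scores

def top_k(scores, k):
    """Keep the k best-scoring terms (the reduced feature vector)."""
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

The secondary selection stage described in the abstract would then re-filter the terms that survive this first pass.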

    • Open Access Article

      4 - IT Capability Evaluation through the IT Capability Map
      Mina Ranjbarfard, Seyedeh Reyhaneh Mirsalari
      Issue 32 , Volume 8 , Autumn 2020
      Organizations are increasingly in search of ways to derive more business value from IT investments, and the need for IT capabilities (ITC) is surging. ITC is critical to building enterprise agility and promoting organizational performance. However, IT capability is usually treated as a causal factor that already exists, and there are few studies on how IT capability is created and evaluated. Appropriate evaluation is necessary for an organization to measure, manage, and improve enterprise ITC. This research aims to identify and map the dimensions of an organization's ITC. Using a mixed research method, this paper comprises two sections. The qualitative section adopts a systematic literature review (SLR) approach to identify the dimensions of ITC. The quantitative section employs factor analysis to validate the identified ITC dimensions and their indicators, in an attempt to develop a more precise model for ITC evaluation. The proposed ITC model includes the dimensions of IT management, IT human resources, IT infrastructure, and implementation of IT solutions, as well as 25 related indicators. Drawing on the results of this paper, organizations can evaluate their ITCs and improve or create the essential ones based on the evaluation results.

    • Open Access Article

      5 - Using Decision Lattice Analysis to Model IOT-based Companies’ profit
      Nazanin Talebolfakhr, Seyed Babak Ebrahimi, Donya Rahmani
      Issue 32 , Volume 8 , Autumn 2020
      Demand uncertainty and high initial investment in IoT-based projects lead firms to analyze various types of options, especially real options, in project execution to reduce these uncertainties. In this study, we investigate the expected profits that result from appropriately chosen static and dynamic pricing strategies, namely low pricing, high pricing, and contingent pricing, combined with binomial decision lattices. Besides, the reciprocal influence between pricing strategies and IoT investment can provide useful insights for firms that confront demand uncertainty when selling their products. We propose a model that integrates binomial decision lattices, computed with the Real Option Super Lattice Solver 2017 software, with pricing policies under uncertainty. The results provide insights into which pricing strategy to choose based on the project's real option value and the firm's uncertainty about purchases by high-value consumers. Among the static and dynamic pricing strategies mentioned, the high-pricing and contingent pricing strategies can be selected under different situations, and the expected profits of each strategy are calculated and compared with each other. By contrast, as the low-pricing strategy results in the lowest option value, it is not scrutinized further in this study. Experimental results show that if the IoT investment level and the likelihood of high-value consumer purchases are high, the firm should implement the high-pricing strategy; otherwise, choosing contingent pricing would be appropriate given the demand uncertainty.
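The binomial lattice machinery behind real-option valuation can be sketched with the standard Cox-Ross-Rubinstein construction. The project-specific cash flows, the low/high/contingent pricing payoffs, and the parameter values below are illustrative assumptions, not taken from the paper:

```python
import math

def binomial_option_value(s0, k, r, sigma, t, steps, option="call"):
    """Cox-Ross-Rubinstein binomial lattice for a European option:
    build terminal payoffs, then roll back through the lattice under
    the risk-neutral probability."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor per step
    d = 1 / u                                 # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at each of the steps+1 end nodes
    values = []
    for i in range(steps + 1):
        s = s0 * (u ** (steps - i)) * (d ** i)
        payoff = max(s - k, 0.0) if option == "call" else max(k - s, 0.0)
        values.append(payoff)
    # roll back one step at a time to the root
    for _ in range(steps):
        values = [disc * (p * values[i] + (1 - p) * values[i + 1])
                  for i in range(len(values) - 1)]
    return values[0]
```

In a real-options setting, `s0` would stand for the present value of the project's cash flows and `k` for the investment cost, with the pricing strategies changing the payoff at each node.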

    • Open Access Article

      6 - Using Static Information of Programs to Partition the Input Domain in Search-based Test Data Generation
      Atieh Monemi Bidgoli, Haghighi Haghighi
      Issue 32 , Volume 8 , Autumn 2020
      The quality of test data has an important effect on the fault-revealing ability of software testing. Search-based test data generation reformulates testing goals as fitness functions so that test data generation can be automated by meta-heuristic algorithms, which search the domain of input variables for input data that cover the targets. The domain of input variables is very large even for simple programs, and its size has a major influence on the efficiency and effectiveness of all search-based methods. Despite the large volume of work on search-based test data generation, the literature contains few approaches that consider the impact of search space reduction. In order to partition the input domain, this study defines a relationship between the structure of the program and the input domain, and based on this relationship we propose a method for partitioning the input domain. Then, to search in the partitioned space, we select ant colony optimization, one of the most important and successful meta-heuristic algorithms. To evaluate the performance of the proposed approach in comparison with previous work, we selected a number of different benchmark programs. The experimental results show that our approach achieves 14.40% better average coverage than the competing approach.
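The core of search-based test data generation is a fitness function such as branch distance, minimized within one partition of the input domain at a time. The sketch below substitutes plain random search for the paper's ant colony optimization and uses an invented target branch (`x == 100`), so it illustrates the partitioning idea rather than the actual method:

```python
import random

def branch_distance(x, target=100):
    """Standard branch-distance fitness for covering the branch
    `x == target`: zero when the branch is taken, growing with how far
    the input is from satisfying it. The target is illustrative."""
    return abs(x - target)

def search_partition(lo, hi, fitness, budget=1000, seed=0):
    """Random search restricted to one partition [lo, hi] of the input
    domain, standing in for a meta-heuristic confined to that partition."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = rng.randint(lo, hi)
        f = fitness(x)
        if f < best_f:
            best_x, best_f = x, f
            if f == 0:  # branch covered; stop early
                break
    return best_x, best_f
```

Searching the partition that actually contains the satisfying input converges far faster than searching one that does not, which is the intuition behind reducing the search space before applying the meta-heuristic.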

    • Open Access Article

      7 - An Approach to Improve the Quality of Service in DTN and Non-DTN based VANET
      Ahmad Sarlak, Yousef Darmani
      Issue 32 , Volume 8 , Autumn 2020
      Nowadays, with the soaring number of network users, it is necessary to find new approaches to improve network operation. Vehicular ad-hoc networks are bound to play a pivotal role in communication; as traffic in the network rises, using only WiFi is unlikely to suffice. Vehicles could use SDN and other networks such as 4G and 5G to distribute traffic across different networks. Moreover, many approaches to handling different data types are inadequate because they neglect the idea of data separation. In this paper, we propose a control scheme called Improve Quality of Service in DTN and Non-DTN (IQDN), which works over the vehicle communication infrastructure using the SDN idea. IQDN separates data into Delay-Tolerant Data (DTD) and Delay-Intolerant Data (DID); the former is buffered in a vehicle until the vehicle enters an RSU's range and is then sent using IEEE 802.11p, while DID packets are sent over cellular networks such as LTE. To transmit DTD via IEEE 802.11p, the network capacity is evaluated by SDN; if the network has room to transmit the data, SDN sends a control message to inform the vehicle. Simulations show that sending data over RSU and LTE increases throughput and decreases congestion, so the quality of service improves.
    Most Viewed Articles

    • Open Access Article

      1 - Instance Based Sparse Classifier Fusion for Speaker Verification
      Mohammad Hasheminejad, Hassan Farsi
      Issue 15 , Volume 4 , Summer 2016
      This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method to improve the performance of a classification system by gaining the advantage of a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score that determines the resemblance of the input utterance to pre-enrolled target speakers. Since there is a variety of information in a speech signal, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision. Such a system receives several scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Additional research has been performed on ensemble classification by adding different regularization terms to the logistic regression formula. However, this type of ensemble classification misses two points: the correlation of the base classifiers and the superiority of some base classifiers for particular test instances. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus in terms of EER, minDCF, and minCLLR show the effectiveness of the proposed method.
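The baseline the paper improves on, a weighted linear combination of base-classifier scores squashed by a logistic function, is simple to state. The weights below are illustrative; in practice they would be estimated by logistic regression on held-out trials:

```python
import math

def fused_score(scores, weights, bias=0.0):
    """Logistic fusion of base-classifier scores: a weighted linear
    combination passed through a sigmoid, giving a value in (0, 1)
    that can be thresholded into accept/reject."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))
```

The instance-based method described in the abstract would, in contrast, pick the subset of base classifiers and their weights per test utterance rather than using one fixed weight vector.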

    • Open Access Article

      2 - Node Classification in Social Network by Distributed Learning Automata
      Ahmad Rahnama Zadeh, Meybodi Meybodi, Masoud Taheri Kadkhoda
      Issue 18 , Volume 5 , Spring 2017
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; the graph is then partitioned according to the labeled nodes, and a network of Distributed Learning Automata is assigned to each partition. In each partition, the maximal spanning tree is determined using DLA. Finally, nodes are labeled according to the rewards of the DLA. We have tested this algorithm on three real social network datasets, and the results show that the expected accuracy of the presented algorithm is achieved.

    • Open Access Article

      3 - Privacy Preserving Big Data Mining: Association Rule Hiding
      Golnar Assadat Afzali, Shahriyar Mohammadi
      Issue 14 , Volume 4 , Spring 2016
      Data repositories contain sensitive information that must be protected from unauthorized access. Existing data mining techniques can be considered a privacy threat to sensitive data. Association rule mining is one of the principal data mining techniques, which tries to uncover relationships between seemingly unrelated data in a database. Association rule hiding is a research area in privacy-preserving data mining (PPDM) which addresses solutions for hiding sensitive rules within the data. Much research has been done in this area, but most of it focuses on reducing the undesired side effects of deleting sensitive association rules in static databases. However, in the age of big data, we are confronted with dynamic databases that receive new data at any time, so most existing techniques are impractical and must be updated to suit such huge databases. In this paper, a data anonymization technique is used for association rule hiding, while parallelization and scalability features are also embedded in the proposed model in order to speed up the big data mining process. In this way, instead of removing some instances of an existing important association rule, generalization is used to anonymize items at an appropriate level; so, if necessary, we can update important association rules based on new data entries. We have conducted experiments using three datasets in order to evaluate the performance of the proposed model in comparison with Max-Min2 and HSCRIL. Experimental results show that the information loss of the proposed model is less than that of existing research in this area, and the model can be executed in a parallel manner for less execution time.
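The effect of generalization-based hiding can be illustrated with plain support/confidence computations. The toy transaction database and item hierarchy below are invented for illustration; the paper's parallelized pipeline is not reproduced:

```python
def support(db, itemset):
    """Fraction of transactions containing every item in the itemset."""
    itemset = set(itemset)
    return sum(1 for t in db if itemset <= set(t)) / len(db)

def confidence(db, lhs, rhs):
    """Confidence of the rule lhs -> rhs: support(lhs ∪ rhs) / support(lhs)."""
    denom = support(db, lhs)
    return support(db, set(lhs) | set(rhs)) / denom if denom else 0.0

def generalize(db, hierarchy, item):
    """Replace a concrete item with its parent category in every
    transaction: the anonymization step used instead of deleting
    transactions, so rules can later be re-derived at the general level."""
    parent = hierarchy[item]
    return [[parent if i == item else i for i in t] for t in db]
```

After generalizing "bread" to its category, a rule mentioning "bread" vanishes while the category-level rule keeps the same confidence, which is the key property the abstract describes.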

    • Open Access Article

      4 - A Bio-Inspired Self-configuring Observer/Controller for Organic Computing Systems
      Ali Tarihi, Haghighi Haghighi, Feridon Shams
      Issue 15 , Volume 4 , Summer 2016
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* properties in general, which suit such systems well for pervasive computing. Achieving these properties in organic computing systems is closely related to a proposed general feedback architecture, the observer/controller architecture, which supports the mentioned properties by interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in the application of organic computing systems, as it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the organic computing literature, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of possible observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated, and we show its applicability through a known case study.

    • Open Access Article

      5 - COGNISON: A Novel Dynamic Community Detection Algorithm in Social Network
      Hamideh Sadat Cheraghchi, Ali Zakerolhossieni
      Issue 14 , Volume 4 , Spring 2016
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in a time-varying context. While recent studies argue for the usability of social science disciplines in modern social network analysis, we present a novel dynamic community detection algorithm called COGNISON inspired mainly by social theories. To be specific, we take inspiration from prototype theory and cognitive consistency theory to recognize the best community for each member, formulating the community detection algorithm by analogy with human reasoning. COGNISON falls into the representative-based algorithm category and hints at further fortifying the pure mathematical approach to community detection with established social science disciplines. The proposed model is able to determine the proper number of communities with high accuracy in both weighted and binary networks. Comparison with state-of-the-art algorithms proposed for dynamic community discovery on real datasets shows the higher performance of this method in different measures of accuracy, NMI, and entropy for detecting communities over time. Finally, our approach motivates the application of human-inspired models in the dynamic community detection context and suggests the fruitfulness of connecting the community detection field and social science theories.

    • Open Access Article

      6 - Publication Venue Recommendation Based on Paper’s Title and Co-authors Network
      Ramin Safa, Seyed Abolghassem Mirroshandel, Soroush Javadi, Mohammad Azizi
      Issue 21 , Volume 6 , Winter 2018
      Information overload has always been a remarkable topic in scientific research, and one of the available approaches in this field is employing recommender systems. With the spread of these systems in various fields, studies show the need for more attention to applying them in scientific applications. Applying recommender systems to the scientific domain, such as paper recommendation, expert recommendation, citation recommendation, and reviewer recommendation, is a new and developing topic. With the significant growth in the number of scientific events and journals, one of the most important issues is choosing the most suitable venue for publishing a paper, and a tool to accelerate this process is necessary for researchers. Despite the importance of such systems in accelerating the publication process and decreasing possible errors, this problem has been little studied in related work. In this paper, an efficient approach is suggested for recommending related conferences or journals for a researcher's specific paper. In other words, our system is able to recommend the most suitable venues for publishing a written paper by means of social network analysis and content-based filtering, according to the researcher's preferences and the co-authors' publication history. The results of an evaluation using real-world data show acceptable accuracy in venue recommendations.

    • Open Access Article

      7 - Safe Use of the Internet of Things for Privacy Enhancing
      Hojatallah Hamidi
      Issue 15 , Volume 4 , Summer 2016
      New technologies and their uses have always had complex economic, social, cultural, and legal implications, with accompanying concerns about negative consequences. So it will probably be with the IoT, its use of data, and the attendant location privacy concerns. It must be recognized that management and control of information privacy may not be sufficient according to traditional user and public preferences. Society may need to balance the benefits of the increased capabilities and efficiencies of the IoT against a possibly inevitable increase in visibility into everyday business processes and personal activities. Much as people have come to accept increased sharing of personal information on the Web in exchange for better shopping experiences and other advantages, they may be willing to accept increased prevalence and reduced privacy of information. Because information is a large component of the IoT, and concerns about its privacy are critical to widespread adoption and confidence, privacy issues must be effectively addressed. This paper looks at five phases of information flow (sensing, identification, storage, processing, and sharing of this information in technical, social, and legal contexts) and three areas of privacy controls that may be considered to manage those flows; it should be helpful to practitioners and researchers when evaluating the issues involved as the technology advances.

    • Open Access Article

      8 - The Surfer Model with a Hybrid Approach to Ranking the Web Pages
      Javad Paksima
      Issue 15 , Volume 4 , Summer 2016
      Users who seek results pertaining to their queries come first. To meet users' needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. In information retrieval, given the great deal of information on the World Wide Web, it is highly important to design a ranking algorithm that provides results pertaining to the user's query. In this paper, a ranking method with a hybrid approach is proposed, which considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of the externally linked pages with respect to their content. A probability, obtained using learning automata along with the content and links of pages, is used to select a webpage to hop to. For a transition to another page, the content of the pages linked to it is used. As the surfer moves through the pages, the PageRank score of a page is recursively calculated. Two standard datasets named TD2003 and TD2004, subsets of the LETOR3 dataset, were used to evaluate and investigate the proposed method. The results indicated the superior performance of the proposed approach over other methods introduced in this area.
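The recursive PageRank computation the surfer model builds on can be sketched with plain power iteration. The content- and automata-based transition bias described in the abstract is replaced here by a uniform choice among out-links, and the link graph is invented:

```python
def pagerank(links, damping=0.85, iters=50):
    """Plain PageRank power iteration over a dict {page: [out-links]}.
    Each iteration redistributes rank along out-links with probability
    `damping` and teleports uniformly otherwise."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

In the paper's model, the uniform `share` would be replaced by a probability learned from page content and the automata's reward signal.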

    • Open Access Article

      9 - DBCACF: A Multidimensional Method for Tourist Recommendation Based on Users’ Demographic, Context and Feedback
      Maral Kolahkaj, Ali Harounabadi, Alireza Nikravan Shalmani, Rahim Chinipardaz
      Issue 24 , Volume 6 , Autumn 2018
      With the advent of Web 2.0 applications such as social networks, which allow users to share media, many opportunities have been provided for tourists to recognize and visit attractive and unfamiliar Areas-of-Interest (AOIs). However, finding appropriate areas based on a user's preferences is very difficult due to issues such as the huge number of tourist areas and the limited visiting time. In addition, the available methods have so far failed to provide accurate tourist recommendations based on geo-tagged media because of problems such as data sparsity, the cold-start problem, treating two users with different habits as the same (symmetric similarity), and ignoring the user's personal and contextual information. Therefore, in this paper, a method called Demographic-Based Context-Aware Collaborative Filtering (DBCACF) is proposed to investigate the mentioned problems and to develop the Collaborative Filtering (CF) method to provide personalized tourist recommendations without users' explicit requests. DBCACF considers demographic and contextual information in combination with the users' historical visits to overcome the limitations of CF methods in dealing with multi-dimensional data. In addition, a new asymmetric similarity measure is proposed to overcome the limitations of symmetric similarity methods. The experimental results on a Flickr dataset indicated that the use of demographic and contextual information and the addition of the proposed asymmetric scheme to the similarity measure could significantly improve the obtained results compared to other methods that use only user-item ratings and symmetric measures.
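One simple way to make a similarity measure asymmetric is to normalize the overlap by the target user's own history. This is a generic illustration, not DBCACF's actual measure, which also folds in demographic and contextual terms:

```python
def asymmetric_similarity(visits_u, visits_v):
    """Overlap of visited places normalised by user u's own history,
    so that sim(u, v) != sim(v, u) in general: a user whose entire
    history is shared with v looks more similar to v than vice versa."""
    u, v = set(visits_u), set(visits_v)
    return len(u & v) / len(u) if u else 0.0
```

A symmetric measure such as Jaccard would assign both directions the same value and thus treat two users with very different visiting habits as equally similar, which is the limitation the abstract points out.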

    • Open Access Article

      10 - Promote Mobile Banking Services by using National Smart Card Capabilities and NFC Technology
      Reza Vahedi, Sayed Esmaeail Najafi, Farhad Hosseinzadeh Lotfi
      Issue 15 , Volume 4 , Summer 2016
      With a mobile banking system, by installing an application on a mobile phone, some banking operations, such as checking the account balance, transferring funds, and paying bills, can be done without visiting the bank and at any hour of the day. The second password of the bank account card is the only security facility provided for mobile banking systems and financial transactions; this alone cannot provide reasonable security, and for greater protection, and to prevent the theft and misuse of citizens' bank accounts, banking services are offered with service limits. Using NFC (Near Field Communication) technology, the identity and biometric information and the key pair stored on the national smart card chip can be exchanged with the mobile phone and the mobile banking system, making identification, authentication, and the digital signing of documents possible, and thus enhancing the security of mobile banking services and promoting them. This research is conducted through library studies, the opinions of experts in information technology and electronic banking, and the DEMATEL analysis method. It aims to investigate the possibility of promoting mobile banking services by using national smart card capabilities and NFC technology to overcome the obstacles and risks mentioned above. The obtained results confirm the hypotheses of the research and support implementing the proposed solutions in the banking system of Iran.
    Upcoming Articles
  • Email
    infojist@gmail.com
    Address
    No.5, Saeedi Alley, Kalej Intersection., Enghelab Ave., Tehran, Iran.
    Phone
    021-88930150

    Search

    Statistics

    Number of Volumes 8
    Number of Issues 31
    Printed Articles 230
    Number of Authors 3286
    Article Views 343439
    Number of Article Downloads 4613
    Number of Articles Submitted 921
    Number of Rejected Articles 5
    Number of Accepted Articles 245
    Admission Time (Days) 204
    Reviewer Count 652