• About Journal

     The Journal of Information Systems and Telecommunication (JIST) accepts and publishes papers containing original research and/or development results, representing an effective and novel contribution to knowledge in the area of information systems and telecommunication. Contributions are accepted in the form of Regular papers or Correspondence. Regular papers give a well-rounded treatment of a problem area, whereas Correspondence focuses on a single point within a defined problem area. With the permission of the editorial board, other kinds of papers may be published if they are found to be relevant or of interest to the readers. Responsibility for the content of the papers rests upon the authors only. The Journal is aimed not only at a national target community but also at international audiences. For this reason, authors are expected to write in English.

    This journal is published under the scientific support of the Advanced Information Systems (AIS) Research Group and the Digital & Signal Processing Group, ICTRC.

    For further information on Article Processing Charges (APCs) policies, please visit our APC page or contact us at infojist@gmail.com.

     


    Latest published articles

    • Open Access Article

      1 - Membrane Cholesterol Prediction from Human Receptor using Rough Set based Mean-Shift Approach
      Rudra Kalyan Nayak, Ramamani Tripathy, Hitesh Mohapatra, Amiya Kumar Rath, Debahuti Mishra
      Iss. 39 , Vol. 10 , Summer 2022
      In human physiology, cholesterol plays an imperative part in membrane cells, where it regulates the function of the G-protein-coupled receptor (GPCR) family. Cholesterol is an individual type of lipid structure, and about 90 percent of cellular cholesterol is present in the plasma membrane region. The Cholesterol Recognition/interaction Amino acid Consensus (CRAC) sequence is generally written as (L/V)-X1−5-(Y)-X1−5-(K/R); a newer cholesterol-binding domain is similar to the CRAC sequence but exhibits the inverse orientation along the polypeptide chain, i.e. CARC (K/R)-X1−5-(Y/F)-X1−5-(L/V). GPCR is treated as the biggest superfamily in human physiology, and probably more than 900 protein genes are included in this family. Among all membrane proteins, GPCR is responsible for novel drug discovery throughout the pharmaceutical industry. Earlier research did not find the required number of valid motifs in terms of helices and motif types, so it lacked clinical relevance; the research gap is that motifs belonging to multiple motif types could not be predicted effectively. To find better motif sequences from human GPCR, we explored a hybrid computational model that hybridizes Rough Set with the Mean-Shift algorithm. In this paper we compared our results with other techniques such as fuzzy C-means (FCM) and FCM with spectral clustering, and we concluded that our proposed method targets the CRAC region better than the CARC region, which has higher biological relevance for the medicine industry and drug discovery.
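      Purely as an illustration of the consensus patterns quoted in this abstract (not the paper's rough-set/mean-shift pipeline), the CRAC and CARC motifs can be expressed as regular expressions and scanned over a receptor sequence. The helper function and the example fragment below are hypothetical.

```python
import re

# CRAC:  (L/V)-X(1-5)-(Y)-X(1-5)-(K/R)
# CARC:  (K/R)-X(1-5)-(Y/F)-X(1-5)-(L/V)  (inverse orientation)
CRAC = re.compile(r"[LV][A-Z]{1,5}Y[A-Z]{1,5}[KR]")
CARC = re.compile(r"[KR][A-Z]{1,5}[YF][A-Z]{1,5}[LV]")

def find_motifs(sequence: str):
    """Return (start, matched subsequence) pairs for both motif types."""
    hits = {"CRAC": [], "CARC": []}
    for name, pattern in (("CRAC", CRAC), ("CARC", CARC)):
        for m in pattern.finditer(sequence):
            hits[name].append((m.start(), m.group()))
    return hits

# Hypothetical receptor fragment, for illustration only.
print(find_motifs("MLAVRTYSSKWLKDAAYFQRLV"))
```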

    • Open Access Article

      2 - A Corpus for Evaluation of Cross Language Text Re-use Detection Systems
      Salar Mohtaj, Habibollah Asghari
      Iss. 39 , Vol. 10 , Summer 2022
      In recent years, the availability of documents through the Internet, along with automatic translation systems, has increased plagiarism, especially across languages. Cross-lingual plagiarism occurs when the source or original text is in one language and the plagiarized or re-used text is in another language. Various methods for automatic text re-use detection across languages have been developed, whose objective is to assist human experts in analyzing documents for plagiarism cases. For evaluating the performance of these systems and algorithms, standard evaluation resources are needed. To construct cross-lingual plagiarism detection corpora, the majority of earlier studies have paid attention to English and other European language pairs, and have focused less on low-resource languages. In this paper, we investigate a method for constructing an English-Persian cross-language plagiarism detection corpus based on parallel bilingual sentences that artificially generates passages with various degrees of paraphrasing. The plagiarized passages are inserted into topically related English and Persian Wikipedia articles in order to have more realistic text documents. The proposed approach can be applied to other less-resourced languages. In order to evaluate the compiled corpus, both intrinsic and extrinsic evaluation methods were employed, so the compiled corpus can suitably be included in an evaluation framework for assessing cross-language plagiarism detection systems. Our proposed corpus is free and publicly available for research purposes.

    • Open Access Article

      3 - Reducing Energy Consumption in Sensor-Based Internet of Things Networks Based on Multi-Objective Optimization Algorithms
      Mohammad Sedighimanesh, Hessam Zandhessami, Mahmood Alborzi, Mohammadsadegh Khayyatian
      Iss. 39 , Vol. 10 , Summer 2022
      Energy is an important parameter in establishing various communication types in the sensor-based IoT. Sensors usually possess low-energy and non-rechargeable batteries, since these sensors are often applied in places and applications where they cannot be recharged. The most important objective of the present study is to minimize the energy consumption of sensors and increase the IoT network's lifetime by applying multi-objective optimization algorithms when selecting cluster heads and routing between cluster heads for transferring data to the base station. In the present article, after distributing the sensor nodes in the network, a type-2 fuzzy algorithm is employed to select the cluster heads and a genetic algorithm is used to create a tree between the cluster heads and the base station. After the cluster heads are selected, the normal nodes become cluster members and send their data to the cluster head. After the data is collected and aggregated by the cluster heads, it is transferred to the base station along the path specified by the genetic algorithm. The proposed algorithm was implemented in the MATLAB simulator and compared with the LEACH, MB-CBCCP, and DCABGA protocols; the simulation results indicate the better performance of the proposed algorithm in different environments compared to the mentioned protocols. Due to the limited energy in the sensor-based IoT and the fact that the sensors cannot be recharged in most applications, the use of multi-objective optimization algorithms in the design and implementation of routing and clustering algorithms has a significant impact on increasing the lifetime of these networks.

    • Open Access Article

      4 - Dynamic Tree-Based Routing: Applied in Wireless Sensor Network and IoT
      Mehdi Khazaei
      Iss. 39 , Vol. 10 , Summer 2022
      The Internet of Things (IoT) has advanced in parallel with the wireless sensor network (WSN), and the WSN is an enabler of the IoT. The IoT, through the internet, provides the connection between defined objects for apprehending and supervising the environment. In some applications, the IoT is converted into a WSN with the same characteristics and limitations. Working with a WSN is limited by the energy, memory, and computational ability of the sensor nodes, so energy must be consumed wisely if network reliability is to be preserved. Newly developed and effective hierarchical and clustering techniques aim to overcome these limitations. The method proposed in this article for reducing energy consumption is a tree-based hierarchical technique that uses clustering based on a dynamic structure. In this method, the location-based and time-based properties of the sensor nodes are applied, leading to a greedy method for forming the subtree leaves. The rest of the tree structure, up to the root, is formed by the base station by applying the centrality concept from network theory. The simulation reveals that the scalability and fairness parameters in energy consumption have improved compared to similar methods, thus prolonging network lifetime and reliability.

    • Open Access Article

      5 - Edge Detection and Identification using Deep Learning to Identify Vehicles
      Zohreh Dorrani, Hassan Farsi, Sajad Mohammadzadeh
      Iss. 39 , Vol. 10 , Summer 2022
      A deep convolutional neural network (CNN) is used to detect edges. First, the initial features are extracted using VGG-16, which consists of 5 convolution stages, each connected to a pooling layer. For edge detection, it is necessary to extract information at different levels from each layer, map it to the pixel space of the edge, re-extract the features, and perform sampling. The attributes are mapped to the pixel space of the edge, and a threshold extracts the edges. The result is then compared with a background model, and foreground objects are detected using background subtraction. A Gaussian mixture model is used to detect the vehicles. This method is applied to three videos and compared with other methods; the results show higher accuracy, so the proposed method is stable against changes in sharpness, light, and traffic. Moreover, to improve the detection accuracy of the vehicles, shadow removal is conducted, which uses a combination of color and contour features to identify the shadow. For this purpose, the moving target is extracted, and the connected domain is marked to be compared with the background. The moving target contour is extracted, and the direction of the shadow is checked according to the contour trend to obtain shadow points and remove them. The results show that the proposed method is very resistant to changes in light, high-traffic environments, and the presence of shadows, and has the best performance compared to current methods.
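      The vehicle-detection stage described in this abstract relies on Gaussian-mixture background subtraction; a minimal OpenCV sketch of that stage alone (not the VGG-16 edge branch or the paper's contour-based shadow removal) might look like the following. The video path and area threshold are placeholders.

```python
import cv2

# Gaussian mixture background model; detectShadows=True marks shadow pixels as 127.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("traffic.mp4")  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep only confident foreground (255), discarding shadow pixels (127).
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
cap.release()
```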

    • Open Access Article

      6 - Rough Sets Theory with Deep Learning for Tracking in Natural Interaction with Deaf
      Mohammad Ebrahimi, Hossein Ebrahimpour-Komeleh
      Iss. 39 , Vol. 10 , Summer 2022
      Sign languages commonly serve as an alternative or complementary mode of human communication. Tracking is one of the most fundamental problems in computer vision and is used in a long list of applications, such as sign language recognition. Despite great advances in recent years, tracking remains challenging due to many factors, including occlusion, scale variation, etc. Mistakenly detecting the head or left hand instead of the right hand when they overlap, together with the uncertainty of the hand area over deaf news video frames, motivated us to propose two methods: first, tracking using a particle filter, and second, tracking using the idea of rough set theory on granular information with a deep neural network. The proposed combination of rough sets with a deep neural network is used for hand/head tracking in deaf news video signals, and we develop a tracking system for deaf news. We used rough set theory to increase the accuracy of skin segmentation in the video signal. Using a deep neural network, we extracted the inherent relationships available in the frame pixels and generalized the obtained features to tracking. The proposed system is tested on 33 deaf news recordings with 100 different words and 1927 video files for the words, and then recall, MOTA, and MOTP values are obtained.

    • Open Access Article

      7 - Recognition of Attention Deficit/Hyperactivity Disorder (ADHD) Based on Electroencephalographic Signals Using Convolutional Neural Networks (CNNs)
      Sara Motamed, Elham Askari
      Iss. 39 , Vol. 10 , Summer 2022
      Impulsive/hyperactive disorder is a neuro-developmental disorder that usually occurs in childhood; in most cases parents find that the child is more active than usual and has problems such as lack of attention and poor concentration control. Because this problem might interfere with the child's learning, work, and communication with others, it could be controlled by early diagnosis and treatment. Because the automatic recognition and classification of electroencephalography (EEG) signals is challenging due to the large variation in time features and signal frequency, the present study attempts to provide an efficient method for diagnosing hyperactive patients. In the proposed method, the recorded brain signals of hyperactive subjects are first read from the input, and the Fast Fourier Transform (FFT) is used to convert the signals from the time domain to the frequency domain. Also, to select an effective feature for distinguishing hyperactive subjects from healthy ones, the peak frequency (PF) is applied. Then, feature selection is carried out both with and without principal component analysis. In the final step, convolutional neural networks (CNNs) are utilized to calculate the recognition rate of individuals with hyperactivity. To assess the model's efficiency, it is compared with K-nearest neighbors (KNN) and multilayer perceptron (MLP) models. The results show that the best configuration uses feature selection by principal component analysis with CNN classification, and the recognition rate of individuals with ADHD versus healthy ones is 91%.
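      As a rough sketch of the signal-processing front end described in this abstract (FFT to move an EEG epoch into the frequency domain, then the peak frequency as a feature), under the assumption of a fixed sampling rate; the PCA and CNN stages are omitted and the test epoch is synthetic.

```python
import numpy as np

def peak_frequency(epoch: np.ndarray, fs: float = 256.0) -> float:
    """Return the frequency (Hz) with the largest spectral magnitude in an EEG epoch."""
    spectrum = np.abs(np.fft.rfft(epoch - epoch.mean()))   # remove DC, take magnitude spectrum
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Hypothetical 2-second epoch: a 10 Hz rhythm buried in noise.
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(peak_frequency(epoch, fs))  # ~10.0
```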

    • Open Access Article

      8 - An ICT Performance Evaluation Model based on Meta-Synthesis Approach
      Khatrehe Bamary, Mohammad Reza Behboudi, Tayebeh Abbasnjad
      Iss. 39 , Vol. 10 , Summer 2022
      Information and Communication Technology (ICT) is one of the key determinants of today’s organizational success. Therefore, companies spend a significant amount of money each year on ICT without being sure that they will get a good result. The purpose of this study is to identify the dimensions and indicators of ICT performance evaluation and to suggest a model for assessing it in organizations. This research is mainly a qualitative study with a meta-synthesis approach, which uses the seven-stage qualitative method of Sandelowski and Barroso to systematically review the literature to find sub-indices (codes), indices (themes), and dimensions (categories) of ICT performance evaluation. A search of scientific databases with appropriate keywords found 516 articles, of which 89 were finally chosen and used for analysis. Moreover, a questionnaire was designed and answered by ICT experts and managers to determine the importance of each indicator of the model. Based on the data analysis, the proposed ICT performance evaluation model has three dimensions: strategic, quality, and sustainability. The strategic dimension includes indicators of organization strategy, IT strategy, and alignment; the quality dimension includes maturity and performance indicators; and the sustainability dimension includes environmental, economic, and social indicators. For each of these indicators, a detailed list of 104 sub-indices, which are substantial for the evaluation of ICT performance in organizations, was identified and explained.
    Most Viewed Articles

    • Open Access Article

      1 - Privacy Preserving Big Data Mining: Association Rule Hiding
      Golnar Assadat Afzali, Shahriyar Mohammadi
      Iss. 14 , Vol. 4 , Spring 2016
      Data repositories contain sensitive information which must be protected from unauthorized access, and existing data mining techniques can be considered a privacy threat to such sensitive data. Association rule mining is one of the foremost data mining techniques, which tries to uncover relationships between seemingly unrelated data in a database. Association rule hiding is a research area in privacy-preserving data mining (PPDM) which addresses the problem of hiding sensitive rules within the data. Much research has been done in this area, but most of it focuses on reducing the undesired side effects of deleting sensitive association rules in static databases. However, in the age of big data, we are confronted with dynamic databases where new data may arrive at any time, so most existing techniques are not practical and must be updated to be appropriate for these huge databases. In this paper, a data anonymization technique is used for association rule hiding, while parallelization and scalability features are also embedded in the proposed model in order to speed up the big data mining process. In this way, instead of removing some instances of an existing important association rule, generalization is used to anonymize items at an appropriate level, so that, if necessary, important association rules can be updated based on new data entries. We have conducted experiments using three datasets in order to evaluate the performance of the proposed model in comparison with Max-Min2 and HSCRIL. Experimental results show that the information loss of the proposed model is less than that of existing research in this area and that the model can be executed in a parallel manner for shorter execution time.
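      A toy illustration of the generalization idea mentioned in this abstract (not the paper's parallel, scalable model): sensitive items in transactions are replaced by a coarser level of a hypothetical taxonomy, so an item-level rule is no longer directly minable while category-level rules remain available.

```python
# Hypothetical item taxonomy: item -> more general category.
taxonomy = {"apple": "fruit", "banana": "fruit", "beer": "beverage", "diapers": "baby care"}

transactions = [
    {"apple", "beer"},
    {"banana", "beer"},
    {"apple", "diapers", "beer"},
]

def generalize(transaction, sensitive_items):
    """Replace sensitive items with their parent category instead of deleting them."""
    return {taxonomy.get(item, item) if item in sensitive_items else item
            for item in transaction}

# Hide rules involving the specific item "beer" by lifting it to "beverage".
anonymized = [generalize(t, sensitive_items={"beer"}) for t in transactions]
print(anonymized)
```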

    • Open Access Article

      2 - Instance Based Sparse Classifier Fusion for Speaker Verification
      Mohammad Hasheminejad, Hassan Farsi
      Iss. 15 , Vol. 4 , Summer 2016
      This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method to improve the performance of the classification system, gaining the advantage of a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance between the input utterance and pre-enrolled target speakers. Since there is a variety of information in a speech signal, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision about the verification. Such a system receives some scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Additional research has been performed on ensemble classification by adding different regularization terms to the logistic regression formula. However, this type of ensemble classification misses two points: the correlation of the base classifiers and the superiority of some base classifiers for each test instance. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus in terms of EER, minDCF, and minCLLR show the effectiveness of the proposed method.
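      The weighted linear fusion baseline mentioned in this abstract (fusion weights estimated by logistic regression over the base-classifier scores) can be sketched as follows; the instance-based selection proposed in the paper is not reproduced here, and the score arrays are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic scores from 3 base verifiers for 1000 trials (1 = target, 0 = impostor).
y = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(1000, 3))

fusion = LogisticRegression().fit(scores, y)   # learn linear fusion weights
fused = fusion.decision_function(scores)       # fused verification score per trial
decision = fused > 0                           # accept / reject the claimed identity
print(fusion.coef_, fusion.intercept_)
```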

    • Open Access Article

      3 - COGNISON: A Novel Dynamic Community Detection Algorithm in Social Network
      Hamideh Sadat Cheraghchi, Ali Zakerolhossieni
      Iss. 14 , Vol. 4 , Spring 2016
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in a time-varying context. While recent studies argue for the usability of social science disciplines in modern social network analysis, we present a novel dynamic community detection algorithm called COGNISON inspired mainly by social theories. To be specific, we take inspiration from prototype theory and cognitive consistency theory to recognize the best community for each member, formulating the community detection algorithm by analogy with human disciplines. COGNISON is placed in the representative-based algorithm category and aims to further fortify the purely mathematical approach to community detection with established social science disciplines. The proposed model is able to determine the proper number of communities with high accuracy in both weighted and binary networks. Comparison with state-of-the-art algorithms proposed for dynamic community discovery on real datasets shows the higher performance of this method in different measures of accuracy, NMI, and entropy for detecting communities over time. Finally, our approach motivates the application of human-inspired models in the dynamic community detection context and suggests the fruitfulness of connecting the community detection field and social science theories.

    • Open Access Article

      4 - Node Classification in Social Network by Distributed Learning Automata
      Ahmad Rahnama Zadeh, Meybodi Meybodi, Masoud Taheri Kadkhoda
      Iss. 18 , Vol. 5 , Spring 2017
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; the graph is then partitioned according to the labeled nodes, and a network of Distributed Learning Automata is assigned to each partition. In each partition, the maximal spanning tree is determined using DLA. Finally, nodes are labeled according to the rewards of the DLA. We have tested this algorithm on three real social network datasets, and the results show that the expected accuracy of the presented algorithm is achieved.

    • Open Access Article

      5 - A Bio-Inspired Self-configuring Observer/Controller for Organic Computing Systems
      Ali Tarihi, Haghighi Haghighi, Feridon Shams
      Iss. 15 , Vol. 4 , Summer 2016
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* properties in general, which suit such systems well for pervasive computing. Achieving these properties in organic computing systems is closely related to a proposed general feedback architecture, called the observer/controller architecture, which supports the mentioned properties by interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in organic computing systems, as it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the organic computing literature, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of possible observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated, and its applicability is shown through a known case study.

    • Open Access Article

      6 - Publication Venue Recommendation Based on Paper’s Title and Co-authors Network
      Ramin Safa, Seyed Abolghassem Mirroshandel, Soroush Javadi, Mohammad Azizi
      Iss. 21 , Vol. 6 , Winter 2018
      Information overload has always been a remarkable topic in scientific research, and one of the available approaches in this field is employing recommender systems. With the spread of these systems in various fields, studies show the need for more attention to applying them in scientific applications. Applying recommender systems to the scientific domain, such as paper recommendation, expert recommendation, citation recommendation, and reviewer recommendation, is a new and developing topic. With the significant growth of the number of scientific events and journals, one of the most important issues is choosing the most suitable venue for publishing papers, and a tool to accelerate this process is necessary for researchers. Despite the importance of such systems in accelerating the publication process and decreasing possible errors, this problem has been studied little in related work. So, in this paper, an efficient approach is suggested for recommending related conferences or journals for a researcher’s specific paper. In other words, our system is able to recommend the most suitable venues for publishing a written paper, by means of social network analysis and content-based filtering, according to the researcher’s preferences and the co-authors’ publication history. The results of evaluation using real-world data show acceptable accuracy in venue recommendations.
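      A toy content-based slice of the approach described in this abstract (TF-IDF similarity between a new paper's title and the text of candidate venues' previous publications); the co-author network component is not included, and the venue names and strings below are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history: one concatenated string of past titles per venue.
venues = {
    "VenueA": "deep learning image recognition convolutional networks",
    "VenueB": "recommender systems collaborative filtering social networks",
    "VenueC": "wireless sensor networks routing energy clustering",
}
paper_title = "a publication venue recommender based on co-author networks"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(venues.values()) + [paper_title])
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranking = sorted(zip(venues, similarities), key=lambda p: -p[1])
print(ranking)  # venues ordered by content similarity to the paper
```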

    • Open Access Article

      7 - Low-Complexity Iterative Detection for Uplink Multiuser Large-Scale MIMO
      Mojtaba Amiri, Mahmoud Ferdosizade Naeiny
      Iss. 29 , Vol. 8 , Winter 2020
      In massive Multiple Input Multiple Output (MIMO) or large-scale MIMO systems, uplink detection at the Base Station (BS) is a challenging problem due to the significant increase of dimensions in comparison to ordinary MIMO systems. In this letter, a novel iterative method is proposed for detection of the transmitted symbols in uplink multiuser massive MIMO systems. Linear detection algorithms, such as minimum mean square error (MMSE) and zero forcing (ZF), are able to achieve the performance of the near-optimal detector when the number of BS antennas is high enough, but the complexity of linear detectors in massive MIMO systems is high due to the necessity of calculating the inverse of a large-dimension matrix. In this paper, we address the problem of reducing the complexity of the MMSE detector for massive MIMO systems. The proposed method is based on the Gram-Schmidt algorithm, which improves the convergence speed and also provides a better error rate than the alternative methods. It is shown that the complexity order is reduced from O(n_t^3) to O(n_t^2), where n_t is the number of users. The proposed method avoids the direct computation of matrix inversion. Simulation results show that the proposed method improves the convergence speed and achieves the performance of the MMSE detector with considerably lower computational complexity.
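      For reference, the conventional MMSE detector whose complexity the paper reduces computes x_hat = (H^H H + sigma^2 I)^-1 H^H y, where the n_t x n_t inversion is the O(n_t^3) step. A small numpy sketch of that baseline (not the proposed Gram-Schmidt iteration) is shown below with made-up dimensions and noise level.

```python
import numpy as np

rng = np.random.default_rng(1)
n_r, n_t, sigma2 = 128, 16, 0.1  # BS antennas, users, noise variance (illustrative)

H = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)
x = rng.choice([-1, 1], size=n_t) + 1j * rng.choice([-1, 1], size=n_t)   # QPSK symbols
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=n_r) + 1j * rng.normal(size=n_r))

# Conventional MMSE: solving with (H^H H + sigma^2 I) costs O(n_t^3);
# the paper's iterative method avoids this explicit inversion/solve.
G = H.conj().T @ H + sigma2 * np.eye(n_t)
x_hat = np.linalg.solve(G, H.conj().T @ y)
print(np.sign(x_hat.real) + 1j * np.sign(x_hat.imag))  # hard QPSK decisions
```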

    • Open Access Article

      8 - DBCACF: A Multidimensional Method for Tourist Recommendation Based on Users’ Demographic, Context and Feedback
      Maral Kolahkaj, Ali Harounabadi, Alireza Nikravan Shalmani, Rahim Chinipardaz
      Iss. 24 , Vol. 6 , Autumn 2018
      With the advent of web 2.0 applications such as social networks, which allow users to share media, many opportunities have been provided for tourists to recognize and visit attractive and unfamiliar Areas-of-Interest (AOIs). However, finding appropriate areas based on a user’s preferences is very difficult due to issues such as the huge number of tourist areas, the limitation of visiting time, and so on. In addition, the available methods have so far failed to provide accurate tourist recommendations based on geo-tagged media because of problems such as data sparsity, the cold start problem, considering two users with different habits as the same (symmetric similarity), and ignoring the user’s personal and context information. Therefore, in this paper, a method called Demographic-Based Context-Aware Collaborative Filtering (DBCACF) is proposed to investigate the mentioned problems and to develop the Collaborative Filtering (CF) method, providing personalized tourist recommendations without users’ explicit requests. DBCACF considers demographic and contextual information in combination with the users' historical visits to overcome the limitations of CF methods in dealing with multi-dimensional data. In addition, a new asymmetric similarity measure is proposed in order to overcome the limitations of symmetric similarity methods. The experimental results on a Flickr dataset indicate that the use of demographic and contextual information and the addition of the proposed asymmetric scheme to the similarity measure significantly improve the results compared to methods that use only user-item ratings and symmetric measures.

    • Open Access Article

      9 - Short Time Price Forecasting for Electricity Market Based on Hybrid Fuzzy Wavelet Transform and Bacteria Foraging Algorithm
      Keyvan Borna, Sepideh Palizdar
      Iss. 16 , Vol. 4 , Autumn 2016
      Predicting the price of electricity is very important because electricity cannot be stored. To this end, parallel methods and adaptive regression have been used in the past, but because of the dependence on ambient temperature, the results were not good. In this study, linear prediction methods, neural networks, and fuzzy logic have been studied and emulated, and an optimized fuzzy-wavelet prediction method is proposed to predict the price of electricity. In this method, in order to obtain a better prediction, the membership functions of the fuzzy regression, along with the type of wavelet transform filter, have been optimized using the E. coli Bacterial Foraging Optimization Algorithm. Then, to better compare this optimal method with other prediction methods, including conventional linear prediction and neural network methods, they were analyzed with the same electricity price data. In fact, our fuzzy-wavelet method yields a more desirable solution than previous methods. More precisely, by choosing a suitable filter and a multiresolution processing method, the maximum error improved by 13.6%, and the mean squared error improved by about 17.9%. In comparison with the fuzzy prediction method, our proposed method has a higher computational volume due to the use of the wavelet transform as well as the double use of fuzzy prediction. Due to the large number of layers and neurons used in it, the neural network method has a much higher computational volume than our fuzzy-wavelet method.

    • Open Access Article

      10 - The Surfer Model with a Hybrid Approach to Ranking the Web Pages
      Javad Paksima
      Iss. 15 , Vol. 4 , Summer 2016
      Users who seek results pertaining to their queries come first. To meet users’ needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. In information retrieval, designing a ranking algorithm that provides the results pertaining to a user’s query is highly important due to the great deal of information on the World Wide Web. In this paper, a ranking method is proposed with a hybrid approach that considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of the externally linked pages with respect to their content. A probability, which is obtained using learning automata along with the content of and links to pages, is used to select a webpage to hop to. For a transition to another page, the content of the pages linked to it is used. As the surfer moves about the pages, the PageRank score of a page is recursively calculated. Two standard datasets named TD2003 and TD2004, subsets of the LETOR3 dataset, were used to evaluate and investigate the proposed method. The results indicate the superior performance of the proposed approach over other methods introduced in this area.
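      The recursive score computation described in this abstract can be illustrated with a PageRank-style iteration in which the hop probability out of a page is biased by per-link weights (standing in here for the content/learning-automata probabilities of the paper); the tiny four-page graph and weights are invented.

```python
import numpy as np

# Hypothetical 4-page graph: links[src] maps target page -> content-based hop weight.
links = {0: {1: 0.7, 2: 0.3}, 1: {2: 1.0}, 2: {0: 0.5, 3: 0.5}, 3: {0: 1.0}}
n, damping = 4, 0.85

rank = np.full(n, 1.0 / n)
for _ in range(100):                       # recursive update until approximately stable
    new = np.full(n, (1 - damping) / n)
    for src, outs in links.items():
        total = sum(outs.values())
        for dst, w in outs.items():
            new[dst] += damping * rank[src] * w / total
    done = np.abs(new - rank).sum() < 1e-9
    rank = new
    if done:
        break
print(rank)  # content-weighted PageRank-style scores
```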
    Upcoming Articles
  • Email
    infojist@gmail.com
    Address
    No. 5, Saeedi Alley, Kalej Intersection, Enghelab Ave., Tehran, Iran.
    Phone
    +98 21 88930150


    Statistics

    Number of Volumes 10
    Number of Issues 39
    Printed Articles 288
    Number of Authors 2416
    Article Views 814362
    Article Downloads 191445
    Number of Submitted Articles 1386
    Number of Rejected Articles 848
    Number of Accepted Articles 327
    Acceptance Rate 23%
    Admission Time (Days) 175
    Reviewer Count 843
    Last Update 8/11/2022