Current Issue

No 21
Vol. 6 No. 1
Winter 2018

Last Published Articles

Prior relation extraction approaches were relation-specific and supervised, yielding new instances of relations known a priori. While effective, this model is not applicable when the number of relations is large or when the relations are not known a priori. Open Information Extraction (OIE) is a relation-independent extraction paradigm designed to extract relations directly from massive and heterogeneous corpora such as the Web. One of the main challenges for an Open IE system is estimating the probability that an extracted relation is correct. A confidence measure indicates how likely an extracted tuple is to be a correct instance of a relation among entities. This paper proposes a new confidence estimation method for OIE called the Relation Confidence Estimator for Open Information Extraction (RCE-OIE). It investigates the incorporation of a set of proposed features into a logistic regression model for assigning a confidence metric. These features draw on diverse lexical, syntactic and semantic knowledge, as well as extraction properties such as the number of distinct documents from which extractions are drawn and the number of relation arguments and their types. We applied the proposed confidence measure to the extractions of Open IE systems and examined how it affects the quality of the results. Evaluations show that incorporating the designed features is promising: the accuracy of our method is higher than that of the baseline methods while keeping almost the same performance. We also demonstrate how semantic information such as coherence measures can be used in feature-based confidence estimation for Open Relation Extraction (ORE) to further improve performance.
Vahideh Reshadat - Maryam Hoorali - Heshaam Faili
DOI: 10.7508/jist.2018.21.001
Keywords: Information Extraction; Open Information Extraction; Relation Extraction; Knowledge Discovery; Fact Extraction
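The abstract describes assigning a confidence score to each extraction via logistic regression over hand-designed features. A minimal sketch of that idea follows; the feature names, weights and bias below are hypothetical stand-ins, not the paper's learned model.

```python
# Illustrative sketch (not the authors' implementation): scoring an
# extraction's confidence with a logistic model over a few features.
import math

def confidence(features, weights, bias):
    """Map a feature vector to a (0, 1) confidence via the logistic function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features for one extracted triple:
# [distinct source documents, number of relation arguments, argument-type match]
features = [3.0, 2.0, 1.0]
weights = [0.4, 0.2, 0.9]   # would normally be learned from labeled extractions
bias = -1.5

print(round(confidence(features, weights, bias), 3))
```

In a real system the weights would be fit on extractions labeled correct/incorrect, and the score used to rank or threshold an Open IE system's output.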
Because of the nature of their applications, wireless sensor nodes must always be energy-efficient and small, and a number of studies have therefore aimed at reducing energy consumption. Data collection is one of the most important operations in wireless sensor networks, and given the limited energy of the nodes, energy efficiency is a key objective in sensor network design. In this paper, we present a method with three phases. In the first phase, nodes compute their own positions using the position of the base station and of two other nodes that know their geographic positions and lie outside the covered area. In the second phase, the optimal location of the base station is determined. In the third phase, cluster heads are selected based on criteria such as the remaining energy, distance (to the cluster head and to the base station), the number of neighbors (one-hop and two-hop) and centrality; a multi-criteria decision-making method is used to select the cluster heads optimally. We implemented the proposed method in the NS2 environment and compared it with the NEECP and E-LEACH protocols. Simulation results show that, by reducing energy consumption, the proposed method extends the expected network lifetime. In addition, it improves the average packet delivery ratio and the average delay.
Mohammad Reza Taghva - Robab Hamlbarani Haghi - Aziz Hanifi - Kamran feizi
DOI: 10.7508/jist.2018.21.002
Keywords: Clustering; Energy; Location; Base Station; Sensor Networks
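The third phase scores candidate cluster heads on several criteria. A minimal weighted-sum sketch of such multi-criteria selection is shown below; the node values, normalization and weights are hypothetical, not the paper's exact decision-making method.

```python
# Hedged sketch of weighted multi-criteria cluster-head selection.
# All criteria are assumed normalized to [0, 1]; smaller distances are
# better, so distance enters the score as (1 - dist_to_bs).

def score(node, weights):
    return (weights["energy"] * node["energy"]
            + weights["dist"] * (1.0 - node["dist_to_bs"])
            + weights["neigh"] * node["neighbors"]
            + weights["central"] * node["centrality"])

def pick_cluster_heads(nodes, weights, k):
    """Return the k highest-scoring nodes as cluster heads."""
    return sorted(nodes, key=lambda n: score(n, weights), reverse=True)[:k]

nodes = [
    {"id": 1, "energy": 0.9, "dist_to_bs": 0.2, "neighbors": 0.5, "centrality": 0.6},
    {"id": 2, "energy": 0.4, "dist_to_bs": 0.7, "neighbors": 0.8, "centrality": 0.3},
    {"id": 3, "energy": 0.8, "dist_to_bs": 0.3, "neighbors": 0.7, "centrality": 0.7},
]
weights = {"energy": 0.4, "dist": 0.3, "neigh": 0.2, "central": 0.1}
print([n["id"] for n in pick_cluster_heads(nodes, weights, k=1)])
```

A full decision-making method (e.g. TOPSIS or AHP) would derive the weights systematically rather than fixing them by hand.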
Data structures are important parts of programs: most programs use a variety of data structures, and their quality strongly affects the quality of applications. In current programming languages, a data structure is defined by storing a reference to the data element in the data structure node. Shortcomings of this approach include limits on the performance of a data structure and poor mechanisms for handling key and hash attributes. These issues can be observed in the Java programming language, which dictates that the programmer reference the data element from the node. Clearly this is not an implementation mistake; it is a consequence of the Java paradigm, which is common to almost all object-oriented programming languages. This paper introduces a new mechanism, called the access method, to implement data structures efficiently based on the concatenating approach to data structure handling, in which one memory block stores both the data element and the data structure node. According to the obtained results, the number of lines in the access method is reduced and reusability is increased. It builds data structures efficiently and provides suitable mechanisms for handling key and hash attributes. Performance, simplicity, reusability and flexibility are the major features of the proposed approach.
Davud Mohammadpur - Ali Mahjur
DOI: 10.7508/jist.2018.21.003
Keywords: Programming Language; Data Structure Handling; High-Level Abstraction; Concatenating
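The contrast the abstract draws — a node referencing a separately allocated element versus one memory block holding both — can be sketched conceptually. Python cannot control memory layout, so the classes below are only an analogy to the concatenating approach (close in spirit to "intrusive" containers in systems languages); they are not the paper's access-method mechanism.

```python
# Conceptual sketch only. Reference-based: the node holds a pointer to a
# separately allocated element object (two allocations per item).
# Concatenated: the element's fields live directly inside the node
# (conceptually one block per item), as in the concatenating approach.

class RefNode:
    def __init__(self, element, nxt=None):
        self.element = element     # reference to a separate data object
        self.nxt = nxt

class ConcatNode:
    def __init__(self, key, value, nxt=None):
        self.key = key             # data fields stored in the node itself
        self.value = value
        self.nxt = nxt

# A two-element list built from concatenated nodes:
head = ConcatNode("a", 1, ConcatNode("b", 2))
print(head.key, head.nxt.key)
```

In a language with explicit layout (C, C++), the concatenated form saves one allocation and one pointer dereference per element access, which is the performance benefit the abstract alludes to.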
Software Defined Networking (SDN) is an emerging architecture that can overcome the challenges facing traditional networks. SDN enables administrators and operators to build simpler, more manageable networks. New SDN paradigms encourage deploying multiple (rather than centralized) controllers to monitor the entire system. The Controller Placement Problem (CPP) is one of the key issues in SDN, affecting aspects such as scalability, convergence time, fault tolerance and node-to-controller latency. The problem has been investigated in diverse papers whose major attention is paid to optimizing the locations of an arbitrary number of controllers, while two important issues have received less attention: i) the bidirectional end-to-end latency between a switch and its controller, rather than just propagation latency, and ii) finding the minimal number of controllers, which is itself a prerequisite for locating them. In this paper, a Set Covering Controller Placement Problem Model (SCCPPM) is proposed to find the least number of controllers required to meet carrier-grade latency requirements. The model is applied to a set of 124 graphs from the Internet Topology Zoo and solved with the IBM ILOG CPLEX Optimization package. As expected, our results indicate that the number of controllers required for high resiliency depends on topology and network size. Moreover, to achieve carrier-grade requirements, 86 percent of the topologies need more than one controller.
Ahmad Jalili - Reza Akbari - Manijeh Keshtgari
DOI: 10.7508/jist.2018.21.004
Keywords: Software Defined Networks; Controller Placement Problem; Latency Constraint; Carrier Grade Requirement
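The paper solves an exact set-covering model with CPLEX; as a lightweight illustration of the same question — the fewest controller sites such that every switch is within the latency bound of some chosen site — here is the classic greedy set-cover heuristic on a hypothetical topology.

```python
# Greedy set-cover sketch for controller placement (a heuristic, not the
# paper's exact SCCPPM/CPLEX formulation). coverage maps each candidate
# controller site to the set of switches reachable within the latency bound.

def greedy_controller_placement(coverage):
    """Return a small list of sites whose coverage sets jointly cover
    every switch, chosen greedily by marginal coverage."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Pick the site covering the most still-uncovered switches.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {
    "A": {1, 2, 3},   # hypothetical sites and switch sets
    "B": {3, 4},
    "C": {4, 5, 6},
}
print(greedy_controller_placement(coverage))
```

The greedy heuristic gives a logarithmic approximation guarantee; an ILP solver such as CPLEX, as used in the paper, finds the exact minimum.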
Information overload has always been a remarkable topic in scientific research, and one of the available approaches in this field is employing recommender systems. With the spread of these systems in various fields, studies show the need for more attention to applying them in scientific applications. Applying recommender systems to the scientific domain, for tasks such as paper recommendation, expert recommendation, citation recommendation and reviewer recommendation, is a new and developing topic. With the significant growth in the number of scientific events and journals, choosing the most suitable venue for publishing a paper has become an important issue, and a tool to accelerate this process is necessary for researchers. Despite the importance of such systems in accelerating the publication process and decreasing possible errors, this problem has received little attention in related work. In this paper, we therefore suggest an efficient approach for recommending related conferences or journals for a researcher's specific paper. In other words, our system is able to recommend the most suitable venues for publishing a written paper by means of social network analysis and content-based filtering, according to the researcher's preferences and the co-authors' publication history. Evaluation on real-world data shows acceptable accuracy in venue recommendation.
Ramin Safa - SeyedAbolghasem Mirroshandel - Soroush Javadi - Mohammad Azizi
DOI: 10.7508/jist.2018.21.005
Keywords: Academic Recommender Systems; Social Network Analysis; Venue Recommendation; DBLP
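The content-based-filtering half of such a system can be sketched as cosine similarity between a paper's text and each venue's textual profile. The venues and texts below are hypothetical, and the social-network-analysis component of the authors' approach is not shown.

```python
# Minimal content-based venue recommendation sketch (not the full system).
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count dictionaries."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_venues(paper_text, venue_profiles, k=2):
    """Rank venues by textual similarity to the paper; return the top k."""
    paper = Counter(paper_text.lower().split())
    ranked = sorted(venue_profiles.items(),
                    key=lambda kv: cosine(paper, Counter(kv[1].lower().split())),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

venues = {  # hypothetical venue profiles built from past publications
    "NetConf": "wireless sensor network routing energy",
    "NLPVenue": "relation extraction text corpus language",
}
print(recommend_venues("open relation extraction from text", venues, k=1))
```

A production system would weight terms (e.g. TF-IDF) and blend this score with signals from the co-author graph, as the abstract describes.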
In the last decade, eye gaze detection has become one of the most important areas in image processing and computer vision. The performance of an eye gaze detection system depends on iris detection and recognition (IR), and iris recognition plays a very important role in person identification. The aim of this paper is to achieve a higher recognition rate than learning automata based methods. Iris retrieval based systems usually consist of several stages applied to the captured eye region: pre-processing, iris detection, normalization, feature extraction and classification. In this paper, a new method without the normalization step is proposed. The Speeded Up Robust Features (SURF) descriptor is used to extract features from iris images, producing a 64-dimensional vector for each image. For the classification step, a learning automata classifier is applied. The proposed method is tested on three well-known iris databases: UBIRIS, MMU and UPOL. It achieves recognition rates of 100% on the UBIRIS and UPOL databases and 99.86% on the MMU iris database, with EERs of 0.00%, 0.00% and 0.008%, respectively. Experimental results show that the proposed learning automata classifier yields minimal classification error and improves precision and computation time.
Hasan Farsi - Reza Nasiripour - Sajjad Mohammadzadeh
DOI: 10.7508/jist.2018.21.006
Keywords: Iris Retrieval; SURF; Learning Automata; Feature Extraction; Classification; Biometrics
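To make the descriptor-based pipeline concrete, here is a generic sketch of matching SURF-style descriptor vectors by nearest-neighbor Euclidean distance with a ratio test. The 3-D toy vectors stand in for real 64-D SURF descriptors, and the paper's learning automata classifier is not reproduced.

```python
# Hedged sketch: nearest-neighbor matching of descriptor vectors with a
# distance-ratio test (a common rejection rule for ambiguous matches).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(query, gallery, ratio=0.8):
    """Return the index of the closest gallery descriptor, or None when
    the best match is not clearly better than the second best."""
    order = sorted(range(len(gallery)), key=lambda i: euclidean(query, gallery[i]))
    best, second = order[0], order[1]
    if euclidean(query, gallery[best]) < ratio * euclidean(query, gallery[second]):
        return best
    return None

# Toy 3-D stand-ins for 64-D SURF descriptors:
gallery = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.5, 0.5, 0.0)]
print(match((0.95, 0.05, 0.0), gallery))
```

In an iris system, each image yields many such descriptors, and an identity decision aggregates the per-descriptor matches.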
One of the criteria search engines use to determine the popularity of pages is analysis of the links in the web graph, and various methods have been presented in this regard. The PageRank algorithm is one of the oldest web-graph-based page ranking methods and is still used as one of the important ranking factors in Google. Since the invention of this method, several flaws have been identified and solutions proposed to correct them. The best-known problem concerns pages without any out-link, the so-called suspended (dangling) pages. Analyzing the web graph, we noticed another problem, which occurs on some pages with an out-degree of one: under certain conditions, the linked page's score becomes higher than that of the linking page. This can produce unrealistic scores for pages, and a chain of such links can invalidate the web graph. In this paper, this problem is investigated under the title "One-Two Gap", and a solution is proposed for it. The TREC2003 test dataset is used to evaluate the proposed method. Experimental results show that the proposed solution fixes the One-Two Gap problem and that our method outperforms PageRank in terms of PD, P@n, NDCG@n, and MAP.
Javad Paksima - Homa Khajeh
DOI: 10.7508/jist.2018.21.007
Keywords: One-Two Gap; PageRank; Search Engine; Web Graph
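For reference, the standard PageRank power iteration that the paper builds on can be sketched on a toy graph. This shows the baseline algorithm only, including one common remedy for dangling (suspended) pages; it does not implement the paper's One-Two Gap fix.

```python
# Baseline PageRank power iteration on a toy graph (not the paper's method).
# Dangling pages distribute their rank uniformly over all pages, one
# common remedy discussed in the PageRank literature.

def pagerank(links, d=0.85, iters=50):
    """links: {page: [pages it links to]}. Returns {page: score}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p in pages:
            outs = links[p]
            if outs:                          # spread rank over out-links
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:                             # dangling page: spread uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

r = pagerank({"A": ["B"], "B": ["A", "C"], "C": []})
print(max(r, key=r.get))
```

The One-Two Gap concerns pages with an out-degree of one, where the single out-link can hand the target a higher score than its source under this baseline scheme.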

About Journal

Affiliated to: ICT Research Institute of ACECR
Manager in Charge: Habibollah Asghari
Editor in Chief: Masood Shafiei
Editorial Board:
Abdolali Abdipour
Mahmoud Naghibzadeh
Zabih Ghasemlooy
Mahmoud Moghavemi
Aliakbar Jalali
Ramazan Ali Sadeghzadeh
Hamidreza Sadegh Mohammadi
Saeed Ghazimaghrebi
Shaban Elahi
Alireza Montazemi
Ali Mohammad Djafari
Rahim Saeidi
Shohreh Kasaei
Mehrnoush Shamsfard
ISSN: 2322-1437
eISSN: 2345-2773
