List of Articles by Subject: Machine Learning


    • Open Access Article

      1 - A Conflict Resolution Approach using Prioritization Strategy
      Hojjat Emami Kamyar Narimanifar
In the current air traffic control system, and especially in the free flight method, resolving conflicts between aircraft is a critical problem. In recent years, the conflict detection and resolution problem has been an active research topic in the aviation industry. In this paper, we map the aircraft conflict resolution process to the graph coloring problem and then use a prioritization method to solve it. Valid and optimal solutions of the corresponding graph are equivalent to conflict-free flight plans for the aircraft in the airspace. The proposed prioritization method is based on a set of score allocation metrics: after scores are allocated, the higher the score of an aircraft, the higher its priority, and the lower the score, the lower its priority. We implemented and tested the proposed method on different test cases, and the results indicate its high efficiency.
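As a rough illustration of the prioritization idea described above (not the authors' implementation), the sketch below greedily colors a small conflict graph in descending score order; the aircraft identifiers and scores are hypothetical.

```python
# Hypothetical sketch: priority-driven greedy coloring of an aircraft conflict graph.
# Nodes are aircraft, edges connect aircraft with a predicted conflict, and colors
# correspond to conflict-free flight-plan assignments.

def priority_coloring(conflicts, scores):
    """conflicts: dict aircraft -> set of conflicting aircraft
    scores: dict aircraft -> priority score (higher score = colored earlier)."""
    coloring = {}
    # Aircraft with higher scores are assigned a plan (color) first.
    for ac in sorted(conflicts, key=scores.get, reverse=True):
        used = {coloring[nb] for nb in conflicts[ac] if nb in coloring}
        color = 0
        while color in used:          # smallest color not used by any conflicting aircraft
            color += 1
        coloring[ac] = color
    return coloring

if __name__ == "__main__":
    conflicts = {"AC1": {"AC2", "AC3"}, "AC2": {"AC1"}, "AC3": {"AC1"}}
    scores = {"AC1": 0.9, "AC2": 0.4, "AC3": 0.7}
    print(priority_coloring(conflicts, scores))   # e.g. {'AC1': 0, 'AC3': 1, 'AC2': 1}
```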
    • Open Access Article

      2 - Camera Identification Algorithm Based on Sensor Pattern Noise Using Wavelet Transform, SVD / PCA and SVM Classifier
      Kimia Bolouri Mehdi Javanmard Mohammad Firouzmand
Identifying the source camera of an image is one of the most important problems in digital forensics and is useful in many applications, such as images presented in court as evidence. Many methods rely on image noise characteristics, extracting the Sensor Pattern Noise and its correlation with the pixel non-uniformity (PNU) of the light response. In this paper we present a method based on photo response non-uniformity (PRNU) that provides features for classification by a support vector machine (SVM). Because the noise model is affected by image complexity, we use the wavelet transform to de-noise the PRNU noise pattern and reduce edge effects, thereby raising the detection accuracy. We also use precision processing theory to reduce the image size, and then simplify and summarize the data using Singular Value Decomposition (SVD) or principal component analysis (PCA). The results show that a two-level wavelet transform combined with data summarized by PCA is the more suitable choice.
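A hedged sketch of this kind of pipeline (not the paper's exact procedure) is shown below: a wavelet-based noise residual is extracted per image, compressed with PCA, and classified with an SVM. The wavelet settings and PCA size are illustrative assumptions.

```python
# Illustrative sketch: wavelet-based noise residuals -> PCA -> SVM camera classifier.
import numpy as np
from skimage.restoration import denoise_wavelet
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def noise_residual(image, levels=2):
    """Approximate the PRNU-style residual as image minus its wavelet-denoised version."""
    image = image.astype(np.float64)
    denoised = denoise_wavelet(image, wavelet="db4", wavelet_levels=levels)
    return (image - denoised).ravel()

def train_camera_classifier(images, labels, n_components=50):
    # images: list of equally sized grayscale arrays, labels: camera id per image.
    # n_components must not exceed the number of training images.
    X = np.stack([noise_residual(img) for img in images])
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    model.fit(X, labels)
    return model
```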
    • Open Access Article

      3 - A Learning Automata Approach to Cooperative Particle Swarm Optimizer
Mohammad Hasanzadeh Meybodi Meybodi Mohamad Mehdi Ebadzade
This paper presents a modification of the Particle Swarm Optimization (PSO) technique based on the cooperative behavior of swarms and the learning ability of an automaton. The approach is called Cooperative Particle Swarm Optimization based on Learning Automata (CPSOLA). The CPSOLA algorithm utilizes three layers of cooperation: intra-swarm, inter-swarm, and inter-population. There are two active populations in CPSOLA. In the primary population, the particles are placed in all swarms and each swarm covers multiple dimensions of the search space. There is also a secondary population in CPSOLA which uses the conventional PSO evolution scheme. In the upper layer of cooperation, the embedded Learning Automaton (LA) is responsible for deciding whether or not to cooperate between these two populations. Experiments on five benchmark functions show the notable performance and robustness of CPSOLA, the cooperative behavior of its swarms, and successful adaptive control of the populations.
    • Open Access Article

      4 - Analysis and Evaluation of Techniques for Myocardial Infarction Based on Genetic Algorithm and Weight by SVM
Hojatallah Hamidi Atefeh Daraei
Although the death rate from Myocardial Infarction is decreasing in developed countries, it has become the leading cause of death in developing countries. Data mining approaches can be utilized to predict the occurrence of Myocardial Infarction. Because of the side effects of using Angioplasty as the main method for diagnosing Myocardial Infarction, presenting a method for diagnosing MI before it occurs seems very important. This study aims to investigate prediction models for Myocardial Infarction by applying a feature selection model based on Weight by SVM and a genetic algorithm. In the proposed method, a hybrid feature selection method is applied to improve the performance of the classification algorithm. In the first stage of this method, features are selected based on their weights, using Weight by Support Vector Machine; in the second stage, the selected features are given to a genetic algorithm for final selection. After selecting the appropriate features, classification methods including Sequential Minimal Optimization, REPTree, Multi-layer Perceptron, Random Forest, K-Nearest Neighbors, and Bayesian Network are applied to predict the occurrence of Myocardial Infarction. Finally, the best accuracy among the applied classification algorithms is achieved by Multi-layer Perceptron and Sequential Minimal Optimization.
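A minimal sketch of the first (SVM-weight) stage of such a hybrid selection scheme is given below; the GA-based second stage and the clinical dataset are not reproduced, and the classifier and k value are illustrative assumptions.

```python
# Hedged sketch: rank features by the absolute weights of a linear SVM, keep the top-k,
# and evaluate a downstream classifier on the reduced feature set.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def svm_weight_selection(X, y, k=10):
    svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
    weights = np.abs(svm.coef_).sum(axis=0)       # aggregate weights over classes
    return np.argsort(weights)[::-1][:k]          # indices of the k highest-weight features

def evaluate(X, y, k=10):
    idx = svm_weight_selection(X, y, k)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    return cross_val_score(clf, X[:, idx], y, cv=5).mean()
```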
    • Open Access Article

      5 - Short Time Price Forecasting for Electricity Market Based on Hybrid Fuzzy Wavelet Transform and Bacteria Foraging Algorithm
Keyvan Borna Sepideh Palizdar
Predicting the price of electricity is very important because electricity cannot be stored. In the past, parallel methods and adaptive regression were used for this purpose, but because of their dependence on ambient temperature they did not give good results. In this study, linear prediction methods, neural networks, and fuzzy logic are studied and simulated, and an optimized fuzzy-wavelet prediction method is proposed to predict the price of electricity. In this method, in order to obtain a better prediction, the membership functions of the fuzzy regression along with the type of the wavelet transform filter are optimized using the E. coli Bacterial Foraging Optimization Algorithm. Then, to better compare this optimized method with other prediction methods, including conventional linear prediction and neural network methods, all were analyzed with the same electricity price data. Our fuzzy-wavelet method yields a more desirable solution than previous methods: by choosing a suitable filter and a multiresolution processing method, the maximum error is improved by 13.6% and the mean squared error by about 17.9%. In comparison with the fuzzy prediction method, our proposed method has a higher computational cost due to the use of the wavelet transform as well as the double use of fuzzy prediction. Due to the large number of layers and neurons used in it, the neural network method has a much higher computational cost than our fuzzy-wavelet method.
    • Open Access Article

      6 - Identification of a Nonlinear System by Determining of Fuzzy Rules
Hojatallah Hamidi Atefeh  Daraei
In this article, a hybrid optimization algorithm combining differential evolution and particle swarm optimization is introduced for designing the fuzzy rule base of a fuzzy controller. For a given number of rules, the hybrid algorithm optimizes all open parameters to reach maximum accuracy in training. The hybrid computational approach consists of an opposition-based differential evolution algorithm and a particle swarm optimization algorithm. When used to train a fuzzy system employed for identification of a nonlinear system, the results show that the proposed hybrid approach achieves better identification accuracy than other training approaches in identifying the nonlinear system model. The method is finally applied to the Mackey-Glass chaotic system as an example.
    • Open Access Article

      7 - Confidence measure estimation for Open Information Extraction
Vahideh Reshadat Maryam Hourali Heshaam Faili
Prior relation extraction approaches were relation-specific and supervised, yielding new instances of relations known a priori. While effective, this model is not applicable when the number of relations is large or the relations are not known a priori. Open Information Extraction (OIE) is a relation-independent extraction paradigm designed to extract relations directly from massive and heterogeneous corpora such as the Web. One of the main challenges for an Open IE system is estimating the probability that an extracted relation is correct. A confidence measure indicates how likely an extracted relation is to be a correct instance of a relation among entities. This paper proposes a new method of confidence estimation for OIE, called the Relation Confidence Estimator for Open Information Extraction (RCE-OIE). It investigates the incorporation of a set of proposed features into a confidence metric assigned using logistic regression. These features capture diverse lexical, syntactic, and semantic knowledge, as well as extraction properties such as the number of distinct documents from which extractions are drawn and the number of relation arguments and their types. We implemented the proposed confidence measure on the extractions of Open IE systems and examined how it affects the performance of the results. Evaluations show that incorporating the designed features is promising, and the accuracy of our method is higher than that of the baseline methods while keeping almost the same performance. We also demonstrate how semantic information such as coherence measures can be used in feature-based confidence estimation of Open Relation Extraction (ORE) to further improve performance.
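A minimal sketch of feature-based confidence estimation of this kind is shown below: each extraction is represented by a few hand-crafted features and logistic regression outputs P(correct). The feature names and toy numbers are hypothetical placeholders, not the paper's feature set or data.

```python
# Hedged sketch: logistic regression as a confidence estimator for OIE extractions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative feature vector per extraction:
# [num_distinct_documents, num_relation_arguments, syntactic_score, semantic_coherence]
X_train = np.array([[12, 2, 0.8, 0.7],
                    [1,  3, 0.2, 0.1],
                    [7,  2, 0.9, 0.6],
                    [2,  4, 0.3, 0.2]])
y_train = np.array([1, 0, 1, 0])   # 1 = correct extraction, 0 = incorrect

model = LogisticRegression().fit(X_train, y_train)
confidence = model.predict_proba([[5, 2, 0.85, 0.5]])[0, 1]
print(f"estimated confidence that the extraction is correct: {confidence:.2f}")
```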
    • Open Access Article

      8 - Information Bottleneck and its Applications in Deep Learning
      Hassan Hafez Kolahi Shohreh Kasaei
Information Theory (IT) has been used in Machine Learning (ML) since the early days of this field. In the last decade, advances in Deep Neural Networks (DNNs) have led to surprising improvements in many applications of ML. The result has been a paradigm shift in the community toward revisiting previous ideas and applications in this new framework, and ideas from IT are no exception. One of the ideas being revisited by many researchers in this new era is the Information Bottleneck (IB), a formulation of information extraction based on IT. The IB is promising for both analyzing and improving DNNs. The goal of this survey is to review the IB concept and demonstrate its applications in deep learning. The information-theoretic nature of IB also makes it a good candidate for illustrating the more general concept of how IT can be used in ML. Two important points are highlighted in this narrative: i) the concise and universal view that IT provides of seemingly unrelated ML methods, demonstrated by explaining how IB relates to minimal sufficient statistics, stochastic gradient descent, and variational auto-encoders, and ii) the common technical mistakes and problems caused by applying ideas from IT, discussed through a careful study of some recent methods that suffer from them.
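For reference, the IB formulation mentioned above is usually stated as the following trade-off between compression and relevance, with β the Lagrange multiplier controlling the balance:

```latex
% Information Bottleneck: find a compressed representation T of input X that
% preserves information about the target Y, trading compression against relevance.
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```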
    • Open Access Article

      9 - Social Groups Detection in Crowd by Using Automatic Fuzzy Clustering with PSO
      Ali Akbari Hassan Farsi Sajad Mohammadzadeh
Detecting social groups is one of the most important and complex problems that has recently attracted attention. Understanding this process and the relations between group members will soon be necessary for human-like robots. Moving in a group means being a subsystem of that group; in other words, a group containing two or more persons can be considered to move in the same direction with the same speed. All datasets contain some information about the trajectories and labels of the members. The aim is to detect social groups containing two or more persons, or to detect the individual motion of a person. For detecting social groups, the proposed method uses automatic fuzzy clustering with Particle Swarm Optimization (PSO), which does not need to know the number of groups. First, the locations of all people across frames are detected, and the average locations are given to the automatic fuzzy clustering with PSO. The proposed method provides reliable results on valid datasets. It is compared with a method that provides better results but needs training data, whereas the proposed method requires no training at all; this characteristic increases its suitability for implementation on robots. The results show that the proposed method can automatically find social groups without knowing the number of groups and without requiring any training data.
    • Open Access Article

      10 - A Study of Fraud Types, Challenges and Detection Approaches in Telecommunication
      Kasra Babaei ZhiYuan Chen Tomas Maul
Fraudulent activities have been rising globally, resulting in companies losing billions of dollars and suffering severe financial damage. Various approaches have been proposed by researchers for different applications, and studying these approaches can help us obtain a better understanding of the problem. The aim of this paper is to investigate different aspects of fraud prevention and detection in telecommunication. This study presents a review of different fraud categories in telecommunication, the challenges that hinder the detection process, and some proposed solutions to overcome them. In addition, the performance of some state-of-the-art approaches is reported, followed by our guidelines and recommendations for choosing the best metrics.
    • Open Access Article

      11 - AI based Computational Trust Model for Intelligent Virtual Assistant
      Babu Kumar Ajay Vikram Singh Parul  Agarwal
The Intelligent Virtual Assistant (IVA), also called an AI assistant or digital assistant, is software developed as a product by organizations such as Google, Apple, Microsoft, and Amazon. A virtual assistant based on Artificial Intelligence works on and processes natural language commands given by humans. It helps the user work more efficiently and saves time, and it is human-friendly because it operates on natural language commands. Voice-controlled Intelligent Virtual Assistants (IVAs) have recently seen enormous growth on cell phones and as standalone devices in people's homes. The intelligent virtual assistant is very useful for illiterate and visually impaired people around the world. While research has analyzed the expected advantages and drawbacks of these devices for IVA users, hardly any studies have empirically assessed the role of security and trust in an individual's decision to use IVAs. In this work, IPA users and non-users (N=1000) are surveyed to understand and analyze the barriers and motivations to adopting IPAs, how users are concerned about data privacy and trust with respect to organizational compliance and the social contract related to IPA data, and how these concerns have affected the acceptance and use of IPAs. We use a Naïve Bayes classifier to compute trust in IVA devices and further evaluate the probability of using different trusted IVA devices.
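A minimal sketch of the trust-computation step is given below: survey responses (with a hypothetical encoding, not the paper's questionnaire) train a Naive Bayes classifier, and predict_proba gives the probability that a respondent trusts a given IVA device.

```python
# Hedged sketch: Naive Bayes over illustrative survey features to estimate trust probability.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Columns (illustrative): [privacy_concern (1-5), perceived_usefulness (1-5), prior_use (0/1)]
X = np.array([[1, 5, 1], [5, 2, 0], [2, 4, 1], [4, 1, 0], [3, 4, 1], [5, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0])        # 1 = trusts/uses the IVA, 0 = does not

nb = GaussianNB().fit(X, y)
p_trust = nb.predict_proba([[2, 5, 0]])[0, 1]
print(f"estimated probability of trusting the IVA device: {p_trust:.2f}")
```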
    • Open Access Article

      12 - An Effective Method of Feature Selection in Persian Text for Improving the Accuracy of Detecting Request in Persian Messages on Telegram
Zahra Khalifeh Zadeh Mohammad Ali Zare Chahooki
In recent years, data received from social media has increased exponentially. These data have become a valuable source of information for many analysts and businesses seeking to expand their business. Automatic document classification is an essential step in extracting knowledge from these sources of information. In automatic text classification, words are assessed as a set of features. Selecting useful features from each text reduces the size of the feature vector and improves classification performance. Many algorithms have been applied to the automatic classification of text. Although all the methods proposed for other languages are applicable and comparable, studies on classification and feature selection in Persian text have not been carried out sufficiently. The present research is conducted in Persian, and the introduction of a Persian dataset is part of its innovation. In this article, an innovative approach is presented to improve the performance of Persian text classification. The authors extracted 85,000 Persian messages from the Idekav-system, which is a Telegram search engine. The new idea presented in this paper for processing and classifying this textual data is based on expanding the feature vector by adding selected features obtained with the most widely used feature selection methods based on local and global filters. The new feature vector is then filtered by applying a secondary feature selection. The secondary feature selection phase selects the more appropriate features among those added in the first step, to enhance the effect of applying wrapper methods on classification performance. In the third step, combined filter-based methods and the combination of the results of different learning algorithms are used to achieve higher accuracy. At the end of the three selection stages, the proposed method increases accuracy to 0.945 and reduces training time and computation on the Persian dataset.
    • Open Access Article

      13 - Cost Benefit Analysis of Three Non-Identical Machine Model with Priority in Operation and Repair
      Nafeesa Bashir Raeesa Bashir JP Singh Joorel Tariq Rashid Jan Jan
The paper proposes a new real-life model whose main aim is to examine the cost-benefit analysis of a textile industry model subject to different failure and repair strategies. The reliability model comprises three units, i.e., the Spinning machine (S), the Weaving machine (W), and the Colouring and Finishing machine (Cf). The working principle of the model starts with the spinning machine (S), where unit S is in the operative state while the weaving machine and the Colouring and Finishing machine are in the idle state. Complete failure of the system is observed when all three units, i.e., S, W and Cf, are in the down state. A repairperson is always available to carry out repair activities, with first priority in repair given to the Colouring and Finishing machine, followed by the Spinning and Weaving machines. The proposed model attempts to maximize the reliability of a real-life system. Reliability measures such as mean sojourn time, mean time to system failure, and profit analysis of the system are examined to describe the performance of the reliability characteristics. To conclude the study of the model, different stochastic measures are analyzed in steady state using the regenerative point technique. Tables are prepared for arbitrary values of the parameters to show the performance of some important reliability measures and to check the efficiency of the model under such situations.
    • Open Access Article

      14 - Predicting Student Performance for Early Intervention using Classification Algorithms in Machine Learning
      Kalaivani K Ulagapriya K Saritha A Ashutosh  Kumar
The Predicting Student Performance system aims to find students who may require early intervention before they fail to graduate. It is generally meant for teaching faculty members to analyze student performance and results. It stores student details in a database and uses a machine learning model with (i) Python data analysis tools such as Pandas and (ii) data visualization tools such as Seaborn to analyze the overall performance of the class. The proposed system provides student performance prediction through machine learning algorithms and data mining techniques. The data mining technique used here is classification, which classifies students based on their attributes. The front end of the application is built using the React JS library with data visualization charts and connected to a backend where all student records are stored in MongoDB, and the machine learning model is trained and deployed through Flask. In this process, the machine learning algorithm is trained on a dataset to create a model and predict the output on the basis of that model. Three different types of data used in machine learning are continuous, categorical, and binary. In this study, a brief description and comparative analysis of various classification techniques is given using a student performance dataset. The six machine learning classification algorithms compared are Logistic Regression, Decision Tree, K-Nearest Neighbor, Naïve Bayes, Support Vector Machine, and Random Forest. The results of the Naïve Bayes classifier are comparatively higher than the other techniques in terms of metrics such as precision, recall, and F1 score; the values of precision, recall, and F1 score are 0.93, 0.92, and 0.92, respectively.
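A hedged sketch of the comparison step is given below: each of the named classifiers is trained on a student-performance feature matrix and precision, recall, and F1 are reported. The dataset itself and the exact hyperparameters are not included here.

```python
# Illustrative sketch: comparing the six classifiers named above on a feature matrix X, labels y.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

def compare_classifiers(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Decision Tree": DecisionTreeClassifier(),
        "K-Nearest Neighbor": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(),
    }
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name,
              precision_score(y_te, y_pred, average="weighted"),
              recall_score(y_te, y_pred, average="weighted"),
              f1_score(y_te, y_pred, average="weighted"))
```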
    • Open Access Article

      15 - A Hybrid Machine Learning Approach for Sentiment Analysis of Beauty Products Reviews
      Kanika Jindal Rajni Aron
Nowadays, social media platforms have become a mirror that reflects opinions and feelings about any specific product or event. These product reviews are capable of enhancing communication between entrepreneurs and their customers. The reviews need to be extracted and analyzed to predict the sentiment polarity, i.e., whether a review is positive or negative. This paper aims to predict the human sentiments expressed in beauty product reviews extracted from Amazon and to improve the classification accuracy. The three phases of our work are data pre-processing, feature extraction using the Bag-of-Words (BoW) method, and sentiment classification using Machine Learning (ML) techniques. A Global Optimization-based Neural Network (GONN) is proposed for the sentiment classification. An empirical study is then conducted to analyze the performance of the proposed GONN and compare it with other machine learning algorithms such as Random Forest (RF), Naive Bayes (NB), and Support Vector Machine (SVM). We further cross-validate these techniques with ten folds to identify the most accurate classifier. The models are also investigated on the Precision-Recall (PR) curve to assess and test the best technique. Experimental results demonstrate that the proposed method is the most appropriate for predicting classification accuracy on our dataset. Specifically, we show that our approach trains textual sentiment classifiers better, thereby enhancing the accuracy of sentiment prediction.
    • Open Access Article

      16 - An Agent Based Model for Developing Air Traffic Management Software
Mahdi Yosefzadeh Seyed Reza Kamel Tabbakh Seyed Javad  Mahdavi Chabok Maryam Khairabadi
The Air Traffic Management system is a complex problem involving factors such as aircraft crash prevention, pressure on air traffic controllers, unpredictable weather conditions, flight emergencies, airplane hijacking, and the need for in-flight autonomy. Agent-based software engineering is a new branch of software engineering that can provide such autonomy. Agent-based systems have properties such as cooperation of agents with each other to meet their goals, autonomy in function, learning, and reliability, which can be used in air traffic management systems. In this paper, we first study agent-based software engineering and its methodologies, and then design an agent-based software model for air traffic management. The proposed model has five modules and is designed for aircraft, air traffic control, and navigation-aid factors based on the Belief-Desire-Intention (BDI) architecture. The agent-based system was designed using the agent tool under the Multi-agent Systems Engineering (MaSE) methodology and was eventually developed with the agent-ATC toolkit. In this model, we consider agents for special situations such as emergency flights and airplane hijackings in airport air traffic management areas, which increases the accuracy of the work. The model also makes the sequencing of flights for take-off and landing faster, indicating a relative improvement in air traffic management parameters.
    • Open Access Article

      17 - Rough Sets Theory with Deep Learning for Tracking in Natural Interaction with Deaf
      Mohammad Ebrahimi Hossein Ebrahimpour-Komeleh
Sign languages commonly serve as an alternative or complementary mode of human communication. Tracking is one of the most fundamental problems in computer vision and is used in a long list of applications such as sign language recognition. Despite great advances in recent years, tracking remains challenging due to many factors, including occlusion and scale variation. Because of mistakes such as detecting the head or the left hand instead of the right hand in overlapping regions, and due to the uncertainty of the hand area over deaf-news video frames, we propose two methods: first, tracking using a particle filter, and second, tracking using the idea of rough set theory on granular information with a deep neural network. The combination of rough sets with a deep neural network is used for hand/head tracking in deaf-news video signals, and we develop a tracking system for deaf news accordingly. Rough set theory is used to increase the accuracy of skin segmentation in the video signal. Using a deep neural network, we extract the inherent relationships available in the frame pixels and generalize the obtained features to tracking. The proposed system is tested on 33 deaf-news videos with 100 different words and 1927 word video files, and recall, MOTA, and MOTP values are reported.
    • Open Access Article

      18 - Statistical Analysis and Comparison of the Performance of Meta-Heuristic Methods Based on their Powerfulness and Effectiveness
      Mehrdad Rohani Hassan Farsi Seyed Hamid Zahiri
In this paper, the performance of meta-heuristic algorithms is compared using statistical analysis based on new criteria (powerfulness and effectiveness). Due to the large number of meta-heuristic methods reported so far, choosing one of them has always been challenging for researchers; in fact, the user does not know which of these methods is able to solve their complex problem. In this paper, new criteria are proposed in order to compare the performance of several methods from different categories of meta-heuristic methods. Using these criteria, the user is able to choose an effective method for their problem. For this reason, statistical analysis is conducted on each of these methods to clarify the applicability of each method for users. Powerfulness and effectiveness criteria are also defined to compare the performance of the meta-heuristic methods and to introduce a suitable basis and suitable quantitative parameters for this purpose. The results of these criteria clearly show the ability of each method for different applications and problems.
    • Open Access Article

      19 - Edge Detection and Identification using Deep Learning to Identify Vehicles
      Zohreh Dorrani Hassan Farsi Sajad Mohammadzadeh
A deep convolutional neural network (CNN) is used to detect edges. First, initial features are extracted using VGG-16, which consists of five convolutional stages, each connected to a pooling layer. For edge detection, it is necessary to extract information of different levels from each layer into the edge pixel space, re-extract the features, and perform sampling. The attributes are mapped to the edge pixel space and a threshold is applied to extract the edges. The result is then compared with a background model: using background subtraction, foreground objects are detected, and a Gaussian mixture model is used to detect the vehicles. This method is applied to three videos and compared with other methods; the results show higher accuracy, so the proposed method is stable against sharpness, lighting, and traffic. Moreover, to improve vehicle detection accuracy, shadow removal is conducted, using a combination of color and contour features to identify the shadow. For this purpose, the moving target is extracted and the connected domain is marked to be compared with the background. The moving target contour is extracted, and the direction of the shadow is checked according to the contour trend to obtain the shadow points and remove them. The results show that the proposed method is very resistant to changes in lighting, high-traffic environments, and the presence of shadows, and has the best performance compared to current methods.
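An illustrative OpenCV sketch of the background-subtraction stage only is shown below; the Gaussian mixture model (MOG2) yields a foreground mask from which moving-vehicle contours can be extracted. The VGG-16 edge branch and the shadow-removal step are not reproduced, and the video path and area threshold are hypothetical.

```python
# Hedged sketch: Gaussian mixture background subtraction for moving-vehicle detection.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # foreground (moving objects) mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # crude size filter for vehicle-sized blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("vehicles", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```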
    • Open Access Article

      20 - Hierarchical Weighted Framework for Emotional Distress Detection using Personalized Affective Cues
      Nagesh Jadhav
Emotional distress detection has become a hot research topic in recent years due to concerns related to mental health and the complex nature of distress identification. One of the challenging tasks is to use non-invasive technology to understand and detect emotional distress in humans. Personalized affective cues provide a non-invasive approach that considers visual, vocal, and verbal cues to recognize the affective state. In this paper, we propose a multimodal hierarchical weighted framework to recognize emotional distress. We utilize negative emotions to detect the unapparent behavior of the person. To capture facial cues, we employ hybrid models consisting of a transfer-learned residual network and CNN models; the extracted facial-cue features are processed and fused at the decision level using a weighted approach. For audio cues, we employ two different models exploiting LSTM and CNN capabilities, fusing the results at the decision level. For textual cues, we use a BERT transformer to learn the extracted features. We propose a novel decision-level adaptive hierarchical weighted algorithm to fuse the results of the different modalities, and this algorithm is used to detect the emotional distress of a person. Hence, we propose a novel algorithm for the detection of emotional distress based on visual, verbal, and vocal cues. Experiments on multiple datasets such as FER2013, JAFFE, CK+, RAVDESS, TESS, ISEAR, the Emotion Stimulus dataset, and the Daily-Dialog dataset demonstrate the effectiveness and usability of the proposed architecture, and experiments on the eNTERFACE'05 dataset for distress detection demonstrate significant results.
    • Open Access Article

      21 - A Hybrid Approach based on PSO and Boosting Technique for Data Modeling in Sensor Networks
      hadi shakibian Jalaledin Nasiri
An efficient data aggregation approach in wireless sensor networks (WSNs) is to abstract the network data into a model. In this regard, regression modeling has been addressed in many recent studies. If the limited characteristics of the sensor nodes were ignored, a common regression technique could be employed after transmitting all the network data from the sensor nodes to the fusion center; however, this is neither practical nor efficient. To overcome this issue, several distributed methods have been proposed for WSNs, in which the regression problem is formulated as an optimization-based data modeling problem. Although they are more energy-efficient than the centralized method, their latency and prediction accuracy need to be improved even further. In this paper, a new approach is proposed based on the particle swarm optimization (PSO) algorithm. Assuming a clustered network, the PSO algorithm is first employed asynchronously to learn the network model of each cluster; in this step, every cluster model is learned based on the size and data pattern of the cluster. Afterwards, the boosting technique is applied to achieve better accuracy. The experimental results show that the proposed asynchronous distributed PSO brings up to a 48% reduction in energy consumption. Moreover, the boosted model improves the prediction accuracy by about 9% on average.
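A minimal single-cluster sketch of the core idea is given below: PSO searches for linear-regression weights that minimize squared prediction error over one cluster's sensor readings. The asynchronous multi-cluster scheme and the boosting step are omitted, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch: PSO fitting linear-model weights for one cluster's data (X, y).
import numpy as np

def pso_linear_fit(X, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])      # add bias column
    dim = X1.shape[1]
    pos = rng.normal(size=(n_particles, dim))           # particle positions = weight vectors
    vel = np.zeros_like(pos)

    def cost(p):                                         # mean squared error of one particle
        return np.mean((X1 @ p - y) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()

    return gbest                                         # learned weights (last entry = bias)
```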
    • Open Access Article

      22 - Performance Analysis and Activity Deviation Discovery in Event Log Using Process Mining Tool for Hospital System
      Shanmuga Sundari M Rudra Kalyan Nayak Vijaya Chandra  Jadala Sai Kiran  Pasupuleti
In today's world, all service and manufacturing businesses are resilient and strive for more efficient and better outcomes. Data mining is data-driven and necessitates significant data to analyze patterns and train models; if the data is incorrect or was not collected from reliable sources, the analysis becomes skewed. To overcome this challenge, we introduce a procedure in which the dataset is split into training and test datasets with a specific ratio. Process mining finds the traces of actions to streamline the process and helps data mining produce a more efficient result. The healthcare industry is one of the most critical domains in this respect. In this study, we use activity data from a hospital and apply process mining algorithms such as the alpha miner and the fuzzy miner. Process mining is used to check conformance of the event log and to perform performance analysis, and a pattern of accuracy is exhibited. Finally, we use process mining techniques to show the deviation flow and fix the process flow. This study shows, by employing the alpha and fuzzy miners, that there is variation in the hospital's process flow.
    • Open Access Article

      23 - Membrane Cholesterol Prediction from Human Receptor using Rough Set based Mean-Shift Approach
      Rudra Kalyan Nayak Ramamani  Tripathy Hitesh  Mohapatra Amiya  Kumar Rath Debahuti  Mishra
In human physiology, cholesterol plays an imperative part in cell membranes, where it regulates the function of the G-protein-coupled receptor (GPCR) family. Cholesterol is an individual type of lipid structure, and about 90 percent of cellular cholesterol is present in the plasma membrane region. The Cholesterol Recognition/interaction Amino acid Consensus (CRAC) sequence is generally written as (L/V)-X1−5-(Y)-X1−5-(K/R), and the newer cholesterol-binding domain (CARC) is similar to the CRAC sequence but exhibits the inverse orientation along the polypeptide chain, i.e., (K/R)-X1−5-(Y/F)-X1−5-(L/V). GPCR is the biggest superfamily in human physiology, and probably more than 900 protein genes are included in this family. Among all membrane proteins, GPCRs are central to novel drug discovery in the pharmaceutical industry. In earlier research, the required number of valid motifs in terms of helices and motif types was not found, so the results lacked clinical relevance; the research gap is that motifs belonging to multiple motif types could not be predicted effectively. To find better motif sequences in human GPCRs, we explore a hybrid computational model that combines rough sets with the mean-shift algorithm. We compare our results with other techniques such as fuzzy C-means (FCM) and FCM with spectral clustering, and conclude that the proposed method targets the CRAC region better than the CARC region, which has higher biological relevance for medicine and drug discovery.
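As a simple illustration of the consensus patterns quoted above (not the paper's prediction model), the sketch below scans a protein sequence for CRAC and CARC motifs with regular expressions; the toy sequence is hypothetical and overlapping matches are not handled.

```python
# Sketch: regex scan for CRAC and CARC cholesterol-binding motifs in a protein sequence.
import re

CRAC = re.compile(r"[LV].{1,5}Y.{1,5}[KR]")      # (L/V)-X1-5-(Y)-X1-5-(K/R)
CARC = re.compile(r"[KR].{1,5}[YF].{1,5}[LV]")   # (K/R)-X1-5-(Y/F)-X1-5-(L/V)

def find_motifs(sequence):
    return {"CRAC": [m.group() for m in CRAC.finditer(sequence)],
            "CARC": [m.group() for m in CARC.finditer(sequence)]}

print(find_motifs("MKTAYIAKQRLVSYLKRAV"))         # toy sequence for illustration
```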
    • Open Access Article

      24 - Breast Cancer Classification Approaches - A Comparative Analysis
      Mohan Kumar Sunil Kumar Khatri Masoud Mohammadian
Breast cancer is a difficult disease to treat since it weakens the patient's immune system. In this regard, particular interest has lately been shown in identifying specific immune signals for a variety of malignancies, and in recent years several methods for predicting cancer based on proteomic datasets and peptides have been published. Cells turn cancerous for various reasons and spread very quickly while being detrimental to normal cells. Accurately categorizing and compartmentalizing the breast cancer subtype is therefore a vital job, and computerized systems built on artificial intelligence can substantially save time and reduce inaccuracy. Using the Wisconsin Breast Cancer Diagnostic (WBCD) dataset, this study evaluates the performance of various classification methods, including SVC, ETC, KNN, LR, and RF (random forest), based on a variety of measurements that are discussed in detail in the article. The goal is to determine how well each algorithm performs in terms of precision, recall, and accuracy. The variation of the classification threshold has been tested on the various algorithms, and SVM turned out to be very promising.
    • Open Access Article

      25 - Hoax Identification of Indonesian Tweeters using Ensemble Classifier
      Gus Nanang Syaifuddiin Rizal Arifin Desriyanti Desriyanti Ghulam Asrofi  Buntoro Zulkham Umar  Rosyidin Ridwan Yudha  Pratama Ali  Selamat
Fake information, better known as hoaxes, is often found on social media. Currently, social media is used not only to make friends or socialize online, but also by some to spread hate speech and false information. Hoaxes are very dangerous in social life, especially in countries with large populations and ethnically diverse cultures, such as Indonesia. Although there have been many studies on detecting false information, accuracy and efficiency still need to be improved. To help prevent the spread of these hoaxes, we built a model to identify false information in Indonesian using an ensemble classifier that combines the n-gram method, term frequency-inverse document frequency, and the passive-aggressive classifier method. The evaluation was carried out using 5000 samples from Twitter social media accounts. The testing process was carried out with four schemes, dividing the dataset into training and test data in ratios of 90:10, 80:20, 70:30, and 60:40. The results show that our model can detect hoaxes with an accuracy of 91.8%. We also found an increase in the accuracy and precision of hoax detection using the proposed method compared to several previous studies. The results show that our proposed method can be developed and used for detecting hoaxes in Indonesian on various social media platforms.
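A hedged sketch of the core components named above is shown below: word n-gram TF-IDF features feeding a passive-aggressive classifier, evaluated on an 80:20 split. The example tweets and labels are illustrative placeholders, not the study's dataset.

```python
# Illustrative sketch: TF-IDF n-grams + PassiveAggressiveClassifier for hoax detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

texts = ["vaksin menyebabkan cip ditanam",                # placeholder tweets for illustration
         "pemerintah resmi umumkan jadwal vaksinasi",
         "minum air panas menyembuhkan covid",
         "BMKG melaporkan gempa magnitudo 5,2"]
labels = [1, 0, 1, 0]                                      # 1 = hoax, 0 = valid

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.2, random_state=1)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      PassiveAggressiveClassifier(max_iter=1000))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```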
    • Open Access Article

      26 - Implementation of Machine Learning Algorithms for Customer Churn Prediction
      Manal Loukili Fayçal Messaoudi Raouya El Youbi
Churn prediction is one of the most critical issues in the telecommunications industry. The possibilities for predicting churn have increased considerably due to the remarkable progress made in the field of machine learning and artificial intelligence. In this context, we propose a process consisting of six stages. The first stage consists of data pre-processing, followed by feature analysis in the second stage and feature selection in the third. The data is then divided into two parts: a training set and a test set. In the prediction stage, the most popular predictive models were adopted, namely random forest, k-nearest neighbor, and support vector machine. In addition, we used cross-validation on the training set for hyperparameter tuning and to avoid model overfitting. The results obtained on the test set were then evaluated using the confusion matrix and the AUC curve. Finally, we found that the models used gave high accuracy values (over 79%). The highest AUC score, 84%, is achieved by the SVM classifier and by bagging as an ensemble method, which surpasses the individual models.
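A sketch of the modeling stages described above (data loading and feature engineering omitted) is given below: cross-validated hyperparameter tuning on the training set for the three named models, then confusion-matrix and ROC-AUC evaluation on the held-out test set. The parameter grids are illustrative assumptions.

```python
# Hedged sketch: RF / KNN / SVM churn models with CV tuning and test-set evaluation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, roc_auc_score

def churn_models(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    searches = {
        "random forest": GridSearchCV(RandomForestClassifier(), {"n_estimators": [100, 300]}, cv=5),
        "k-nearest neighbor": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=5),
        "svm": GridSearchCV(SVC(probability=True), {"C": [0.1, 1, 10]}, cv=5),
    }
    for name, search in searches.items():
        search.fit(X_tr, y_tr)                            # hyperparameter tuning via CV
        proba = search.predict_proba(X_te)[:, 1]
        print(name, confusion_matrix(y_te, search.predict(X_te)), roc_auc_score(y_te, proba))
```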
    • Open Access Article

      27 - An Analysis of Covid-19 Pandemic Outbreak on Economy using Neural Network and Random Forest
      Md. Nahid  Hasan Tanvir  Ahmed Md.  Ashik Md. Jahid  Hasan Tahaziba  Azmin Jia Uddin
Pandemic disease outbreaks are causing a significant financial crisis affecting the worldwide economy. Machine learning techniques are urgently required to detect, predict, and analyze the economy for early economic planning and growth. Consequently, in this paper, we use machine learning classifiers and regressors to construct an early-warning model for the economic recession caused by the Covid-19 pandemic outbreak. A publicly available database created by the National Bureau of Economic Research (NBER) is used to validate the model; it contains information about national revenue, employment rate, and workers' earnings in the USA over 239 days (1 January 2020 to 12 May 2020). Techniques such as missing-value imputation and k-fold cross-validation have been used to pre-process the dataset. Machine learning classifiers, a Multi-Layer Perceptron Neural Network (MLP-NN) and Random Forest (RF), have been used to predict recession. Additionally, machine learning regressors, Long Short-Term Memory (LSTM) and Random Forest (RF), have been used to estimate how much recession a country is facing as a result of positive Covid-19 cases. Experimental results demonstrate that the MLP-NN and RF classifiers achieve average recession-prediction accuracies of 88.33% and 85% (95%, 81%, 89% and 85%, 81%, 89% for revenue, employment rate, and workers' earnings, respectively), and the LSTM and RF regressors achieve average prediction accuracies of 90.67% and 93.67% (92%, 90%, 90% and 95%, 93%, 93%, respectively).
    • Open Access Article

      28 - A Novel Elite-Oriented Meta-Heuristic Algorithm: Qashqai Optimization Algorithm (QOA)
      Mehdi Khadem Abbas Toloie Eshlaghy Kiamars Fathi Hafshejani
Optimization problems are becoming more complicated, and their resource requirements are rising. Real-life optimization problems are often NP-hard and time- or memory-consuming. Nature has always been an excellent pattern from which humans draw the best mechanisms and the best engineering to solve their problems. The concept of optimization is seen in several natural processes, such as species evolution, swarm intelligence, social group behavior, the immune system, mating strategies, reproduction and foraging, and the cooperative hunting behavior of animals. This paper proposes a new meta-heuristic algorithm for solving NP-hard nonlinear optimization problems, inspired by the intelligent, social, and collaborative migration behavior of the Qashqai nomads, refined over many years. The design of this algorithm uses population-based features, experts' opinions, and more to improve its performance in reaching the global optimum. The performance of the algorithm is tested on well-known optimization test functions and factory facility layout problems. It was found that in many cases the proposed algorithm performed better than other known meta-heuristic algorithms in terms of convergence speed and quality of solutions. The algorithm is named the Qashqai algorithm in honor of the Qashqai nomads, the famous tribes of southwest Iran.
    • Open Access Article

      29 - Long-Term Software Fault Prediction Model with Linear Regression and Data Transformation
      Momotaz  Begum Jahid Hasan Rony Md. Rashedul Islam Jia Uddin
Validation is obligatory to ensure software reliability by determining the characteristics of an implemented software system. To ensure the reliability of software, it is required not only to detect and resolve faults that have occurred but also to predict future faults; this is performed before any actual testing phase starts. As a result, various works on software fault prediction have been carried out. This paper presents a software fault prediction model in which different data transformation methods are applied to Poisson fault count data. To pre-process the data from Poisson to Gaussian, the Box-Cox power transformation (Box-Cox_T), the Yeo-Johnson power transformation (Yeo-Johnson_T), and the Anscombe transformation (Anscombe_T) are used. Then, for long-term software fault prediction, linear regression is applied; it captures the linear relationship between the dependent and independent variables, i.e., relative error and testing days, respectively. For the synthesis analysis, three real software fault count datasets are used, in which we compare the proposed approach with the naïve Gauss method, the exponential smoothing time-series forecasting model, and conventional software reliability growth models (SRGMs), both with data transformation (With_T) and without (Non_T). Our datasets contain days and cumulative software faults represented in (62, 133), (181, 225), and (114, 189) formats, respectively. The Box-Cox power transformation with linear regression (L_Box-Cox_T) method outperforms all other methods with regard to average relative error from the short to the long term.
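A minimal sketch of the L_Box-Cox_T idea is given below: Box-Cox-transform the cumulative fault counts, fit a linear regression against testing days, and map predictions back with the inverse transform. The toy numbers are illustrative, not the paper's datasets.

```python
# Hedged sketch: Box-Cox transformation + linear regression for long-term fault prediction.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from sklearn.linear_model import LinearRegression

days = np.arange(1, 41).reshape(-1, 1)                            # testing days (toy)
faults = np.cumsum(np.random.default_rng(0).poisson(3, 40)) + 1   # positive cumulative fault counts

faults_t, lam = boxcox(faults)                                     # Poisson-like -> approximately Gaussian
reg = LinearRegression().fit(days, faults_t)

future_days = np.arange(41, 61).reshape(-1, 1)
predicted = inv_boxcox(reg.predict(future_days), lam)              # back to the fault-count scale
print(predicted[:5])
```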
    • Open Access Article

      30 - Application of Machine Learning in the Telecommunications Industry: Partial Churn Prediction by using a Hybrid Feature Selection Approach
      Fatemeh Mozaffari Iman Raeesi Vanani Payam Mahmoudian Babak Sohrabi
The telecommunications industry is one of the most competitive industries in the world. Because of the high cost of customer acquisition and the adverse effects of customer churn on the company's performance, customer retention becomes an inseparable part of strategic decision-making and one of the main objectives of customer relationship management. Although customer churn prediction models are widely studied in various domains, several challenges remain in designing and implementing an effective model. This paper addresses the customer churn prediction problem with a practical approach. The experimental analysis was conducted on customer data gathered from available sources at a telecom company in Iran. First, partial churn was defined in a new way that exploits the status of customers based on criteria that can be measured easily in the telecommunications industry. This definition is also based on data mining techniques that can find the degree of similarity between assorted customers and active ones or churners. Moreover, a hybrid feature selection approach was proposed in which various feature selection methods, along with the wisdom of the crowd, were applied. It was found that the wisdom of the crowd can serve as a useful feature selection method. Finally, a predictive model was developed using advanced machine learning algorithms such as bagging, boosting, stacking, and deep learning. Partial customer churn was predicted with more than 88% accuracy by the Gradient Boosting Machine algorithm using 5-fold cross-validation. Comparative results indicate that the proposed model performs efficiently compared to those applied in previous studies.
    • Open Access Article

      31 - A Recommender System for Scientific Resources Based on Recurrent Neural Networks
      Hadis Ahmadian Seyed Javad  Mahdavi Chabok Maryam  Kheirabadi
Over the last few years, online training courses have seen a significant increase in the number of participants. However, most web-based educational systems have drawbacks compared to traditional classrooms. On the one hand, the structure and nature of the courses directly affect the number of active participants; on the other hand, it becomes difficult for teachers to guide students in choosing an appropriate learning resource due to the abundance of online learning resources. Students also find it challenging to decide which educational resources to choose according to their own situation. A resource recommender system can be used as a guidance tool that recommends educational resources to students, with suggestions tailored to the preferences and needs of each student. In this paper, a resource recommender system based on Bi-LSTM networks is presented. This type of structure captures both the long-term and short-term interests of the user and, thanks to the system's gradual learning property, supports changes in learners' behavior. It produces more appropriate recommendations, with a mean accuracy of 0.95 and a loss of 0.19, compared to a similar work.
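A hedged Keras sketch of a Bi-LSTM next-resource recommender of this general shape is shown below: a learner's recent interaction history (a sequence of resource IDs) is embedded, encoded by a bidirectional LSTM, and mapped to a distribution over resources. The layer sizes and vocabulary are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: Bi-LSTM model over interaction sequences for resource recommendation.
import tensorflow as tf

NUM_RESOURCES = 5000      # hypothetical size of the learning-resource catalogue
SEQ_LEN = 20              # length of the interaction history fed to the model

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=NUM_RESOURCES, output_dim=64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_RESOURCES, activation="softmax"),   # next-resource scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(history_sequences, next_resource_ids, epochs=10)      # hypothetical training data
```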
    • Open Access Article

      32 - Comparing the Semantic Segmentation of High-Resolution Images Using Deep Convolutional Networks: SegNet, HRNet, CSE-HRNet and RCA-FCN
      Nafiseh Sadeghi Homayoun Mahdavi-Nasab Mansoor Zeinali Hossein Pourghasem
Semantic segmentation is a branch of computer vision used extensively in image search engines, automated driving, intelligent agriculture, disaster management, and other machine-human interactions. Semantic segmentation aims to predict a label for each pixel from a given label set, according to semantic information. Among the proposed methods and architectures, researchers have focused on deep learning algorithms due to their good feature-learning results. Thus, many studies have explored the structure of deep neural networks, especially convolutional neural networks. Most modern semantic segmentation models are based on fully convolutional networks (FCN), which first replace the fully connected layers in common classification networks with convolutional layers, obtaining pixel-level prediction results. Since then, many methods have been proposed to improve on the basic FCN results. With the increasing complexity and variety of existing data structures, more powerful neural networks and the further development of existing networks are needed. This study aims to segment a high-resolution (HR) image dataset into six separate classes. Here, an overview of some important deep learning architectures is presented, with a focus on methods producing remarkable scores in segmentation metrics such as accuracy and F1-score. Finally, their segmentation results are discussed, showing that the methods that are superior in overall accuracy and overall F1-score are not necessarily the best in all classes. The results of this paper therefore suggest choosing the segmentation algorithm according to the application and the importance of each class.