• Open Access
    • List of Articles: Reliability

      • Open Access Article

        1 - A New Architecture for Intrusion-Tolerant Web Services Based on Design Diversity Techniques
        Sadegh Bejani, Mohammad Abdollahi Azgomi
        Web services are the realization of service-oriented architecture (SOA). Security is an important challenge for SOAP-based Web services. So far, several security techniques and standards based on traditional security mechanisms, such as encryption and digital signatures, have been proposed to enhance the security of Web services. The aim has been to employ the concepts and techniques of fault-tolerant computing to make Web services more secure, an approach known as intrusion tolerance. Intrusion tolerance means the continuous delivery of services in the presence of security attacks, and it can serve as a fundamental approach for enhancing the security of Web services. In this paper, we propose a novel architecture for intrusion-tolerant Web services with emphasis on intrusion-tolerance concepts and composite Web service techniques. The proposed architecture, called design-diverse intrusion-tolerant Web service (DDITWS), takes advantage of design diversity techniques. For Web service composition, BPEL4WS is used. Formal modeling and verification of the proposed architecture are performed using colored Petri nets (CPNs) and CPN Tools. We have checked the behavioral properties of the model to ensure its correctness. The reliability and security of the proposed architecture are also evaluated using a stochastic Petri net (SPN) model and the SHARPE tool. The results show that the reliability and mean time to security failure (MTTSF) of the proposed architecture are improved.
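
        The design-diversity idea described above can be illustrated with a small sketch: the same request is dispatched to several independently implemented service replicas and the majority answer is accepted. This is a minimal, hypothetical Python illustration; the replica functions, quorum size, and error handling are assumptions for illustration, not the DDITWS architecture or its BPEL4WS composition.

        from collections import Counter

        def diverse_invoke(request, replicas, quorum=2):
            """Send the same request to independently designed replicas and
            return the majority (voted) response, masking a compromised one."""
            responses = []
            for replica in replicas:
                try:
                    responses.append(replica(request))
                except Exception:
                    continue  # a failed or attacked replica simply casts no vote
            if not responses:
                raise RuntimeError("no replica produced a response")
            answer, votes = Counter(responses).most_common(1)[0]
            if votes < quorum:
                raise RuntimeError("no quorum: possible intrusion or divergence")
            return answer

        # Hypothetical stand-ins for diversely implemented Web service variants.
        def variant_a(req): return req.upper()
        def variant_b(req): return req.upper()
        def variant_c(req): return "tampered"     # e.g. a compromised variant

        print(diverse_invoke("ok", [variant_a, variant_b, variant_c]))   # -> "OK"
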
      • Open Access Article

        2 - A New Approach to the Quantitative Measurement of Software Reliability
        Abbas Rasoolzadegan
        Nowadays, software systems play a very important role in many sensitive and critical applications. Sometimes a small error in software can cause financial or even health losses in critical applications, so reliability assurance, as a non-functional requirement, is vital. One of the key tasks in ensuring error-free operation of software is to measure its reliability quantitatively. Software reliability engineering is defined as the quantitative study of the operational behavior of software systems with respect to user requirements concerning reliability. Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Quantifying software reliability is increasingly becoming necessary. We have recently proposed a new approach (referred to as SDAFlex&Rel) to the development of "reliable yet flexible" software. In this paper, we first present the definitions of a set of key terms that are necessary to communicate the scope and contributions of this work. Based on the fact that software reliability is directly proportional to the reliability of the development approach used, a new approach is proposed to quantitatively measure the reliability of software developed using SDAFlex&Rel, thereby making precise the informal claims about reliability improvement. The quantitative results confirm the reliability improvement that is informally promised by SDAFlex&Rel.
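
        The definition quoted above, reliability as the probability of failure-free operation over a specified period, is commonly illustrated with a constant-failure-rate model. The sketch below assumes an exponential model with a hypothetical failure rate; it is not the SDAFlex&Rel measurement itself.

        import math

        def reliability(failure_rate, t):
            """R(t) = exp(-lambda * t): probability of failure-free operation
            over time t under a constant-failure-rate (exponential) model."""
            return math.exp(-failure_rate * t)

        # Example: 0.002 failures per operating hour, 100-hour mission.
        lam = 0.002
        print(f"R(100 h) = {reliability(lam, 100):.3f}")   # ~0.819
        print(f"MTTF     = {1 / lam:.0f} h")               # 500 h
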
      • Open Access Article

        3 - Toward an Enhanced Dynamic VM Consolidation Approach for Cloud Datacenters Using Continuous Time Markov Chain
        Monireh Hosseini Sayadnavard, Abolfazl Toroghi Haghighat
        Dynamic virtual machine (VM) consolidation is an effective way to reduce energy consumption and balance the resource load of physical machines (PMs) in cloud data centers, guaranteeing efficient power consumption while maintaining quality-of-service requirements. Reducing the number of active PMs using VM live migration prevents inefficient usage of resources. However, a high frequency of VM consolidation has a negative effect on system reliability, so the trade-off between energy consumption and system reliability must be addressed. In recent years, much research has been done on optimizing energy management using power management techniques. Although these methods are very efficient from the point of view of energy management, they ignore the negative impact on system reliability. In this paper, a novel approach is proposed to achieve reliable VM consolidation. A Markov chain model is designed to determine the reliability of PMs, and the PMs are then prioritized based on their CPU utilization level and reliability status. Two algorithms are presented to determine the source and destination servers. The efficiency of the proposed approach is validated by conducting extensive simulations. The evaluation results clearly show that the proposed approach significantly improves energy consumption while avoiding inefficient VM migrations.
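
        As a rough illustration of the two ingredients mentioned above, the sketch below computes a PM's long-run availability from a two-state (up/down) continuous-time Markov chain and then orders candidate PMs by reliability and CPU load. The rates, fields, and ordering rule are assumptions for illustration, not the paper's model or algorithms.

        import numpy as np

        def steady_state(Q):
            """Steady-state distribution pi of a CTMC with generator matrix Q
            (rows sum to zero): solve pi Q = 0 subject to sum(pi) = 1."""
            n = Q.shape[0]
            A = np.vstack([Q.T, np.ones(n)])
            b = np.zeros(n + 1); b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi

        # Two-state PM model: state 0 = up, state 1 = down (illustrative rates).
        lam, mu = 0.01, 0.5                      # failure and repair rates per hour
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])
        availability = steady_state(Q)[0]        # long-run fraction of time up

        # Prioritize PMs: prefer reliable, lightly loaded hosts as migration targets.
        pms = [{"id": "pm1", "avail": 0.980, "cpu": 0.70},
               {"id": "pm2", "avail": 0.999, "cpu": 0.40}]
        targets = sorted(pms, key=lambda p: (-p["avail"], p["cpu"]))
        print(round(availability, 4), [p["id"] for p in targets])
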
      • Open Access Article

        4 - Reliable resource allocation and fault tolerance in mobile cloud computing
        Zahra Najafabadi Samani, Mohammad Reza Khayyam Bashi
        By switching the computational load from mobile devices to the cloud, mobile cloud computing (MCC) allows mobile devices to offer a wider range of functionalities. There are several issues in using mobile devices as resource providers, including unstable wireless connections, limited energy capacity, and frequent location changes. Fault tolerance and reliable resource allocation are among the challenges encountered by mobile service providers in MCC. In this paper, a new reliable resource allocation and fault tolerance mechanism is proposed in order to apply a fully distributed resource allocation algorithm without exploiting any central component. The objective is to improve the reliability of mobile resources. The proposed approach involves two steps: (1) predicting device status by gathering contextual information and applying TOPSIS to prevent faults caused by the volatility of mobile devices, and (2) adapting replication and checkpointing methods for fault tolerance. A context-aware reliable offloading middleware is developed to collect contextual information and manage the offloading process. To evaluate the proposed method, several experiments are run in a real environment. The results indicate improvements in success rate, completion time, and energy consumption for tasks with a high computational load.
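
        TOPSIS, used in step (1) above, ranks alternatives by their relative closeness to an ideal solution. Below is a minimal generic TOPSIS sketch; the context features (battery level, link stability, expected dwell time) and weights are hypothetical examples, not the criteria used by the proposed middleware.

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives with TOPSIS: higher score = closer to the ideal.
            matrix: alternatives x criteria, weights: criterion weights,
            benefit: True where larger values are better for that criterion."""
            m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
            v = m * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_best = np.linalg.norm(v - ideal, axis=1)
            d_worst = np.linalg.norm(v - worst, axis=1)
            return d_worst / (d_best + d_worst)

        # Illustrative context of three candidate devices:
        # columns = battery level, link stability, expected time at current location.
        ctx = np.array([[0.80, 0.9, 30.0],
                        [0.40, 0.6, 10.0],
                        [0.95, 0.7, 45.0]])
        weights = np.array([0.4, 0.4, 0.2])
        scores = topsis(ctx, weights, benefit=np.array([True, True, True]))
        print(scores.argsort()[::-1])   # devices ordered from most to least reliable
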
      • Open Access Article

        5 - Cost Benefit Analysis of Three Non-Identical Machine Model with Priority in Operation and Repair
        Nafeesa Bashir, Raeesa Bashir, JP Singh Joorel, Tariq Rashid Jan
        The paper proposes a new real-life model whose main aim is to examine the cost-benefit analysis of a textile industry model subject to different failure and repair strategies. The reliability model comprises three units, i.e., a spinning machine (S), a weaving machine (W), and a colouring and finishing machine (Cf). The operation of the model starts with the spinning machine, wherein unit S is in the operative state while the weaving machine and the colouring and finishing machine are in the idle state. Complete failure of the system is observed when all three units, i.e., S, W, and Cf, are in the down state. A repairperson is always available to carry out repair activities in the system, in which first priority in repair is given to the colouring and finishing machine, followed by the spinning and weaving machines. The proposed model attempts to maximize the reliability of a real-life system. Reliability measures such as mean sojourn time, mean time to system failure, and profit analysis of the system are examined to characterize the performance of the reliability characteristics. To conclude the study of this model, different stochastic measures are analyzed in the steady state using the regenerative point technique. Tables are prepared for arbitrary values of the parameters to show the behavior of some important reliability measures and to check the efficiency of the model under such conditions.
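
        Mean time to system failure for models of this kind is typically obtained from the transient part of a Markov (or semi-Markov) description. The sketch below numerically solves a deliberately simplified two-failure-mode absorbing CTMC; the states and rates are hypothetical and do not reproduce the paper's three-unit model or its regenerative point analysis.

        import numpy as np

        # Minimal absorbing-CTMC sketch of mean time to system failure (MTSF).
        # Transient states: 0 = all up, 1 = one machine down, 2 = another machine
        # down; the absorbing system-failure state is left implicit.
        lam_s, lam_w, mu = 0.02, 0.05, 0.8       # hypothetical failure/repair rates

        # Generator restricted to the transient states; transitions into the
        # absorbing failure state appear only through the diagonal, so rows need
        # not sum to zero.
        Q_T = np.array([[-(lam_s + lam_w),  lam_w,           lam_s         ],
                        [ mu,             -(mu + lam_s),     0.0           ],
                        [ mu,               0.0,           -(mu + lam_w)   ]])

        # Expected time to absorption from each transient state: solve (-Q_T) t = 1.
        t = np.linalg.solve(-Q_T, np.ones(3))
        print(f"MTSF from the all-up state: {t[0]:.1f} time units")
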
      • Open Access Article

        6 - Cache Point Selection and Transmissions Reduction using LSTM Neural Network
        Malihe Bahekmat, Mohammad Hossein Yaghmaee Moghaddam
        Reliability of data transmission in wireless sensor networks (WSNs) is very important when the packet loss rate is high due to link problems or buffer congestion. In this regard, mechanisms such as intermediate cache points and congestion control can improve the reliability of transmission protocols when packets are lost. On the other hand, energy consumption in this type of network has become an important parameter in their reliability. In this paper, considering the energy constraints of the sensor nodes and the direct relationship between energy consumption and the number of transmissions made by the nodes, the proposed system tries to reduce the number of transmissions needed to send a packet from source to destination as much as possible through optimal selection of cache points and packet caching. In order to select the best cache points, information extracted from network behavior analysis by a deep learning algorithm is used. In the training phase, long short-term memory (LSTM), an example of a recurrent neural network (RNN) deep learning architecture, is used to learn the network conditions. The results show that the proposed method performs better with respect to the evaluation criteria of transmission cost, end-to-end delay, cache usage, and throughput.
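
        A minimal sketch of the learning component is given below: an LSTM reads a sequence of per-node observations and produces a score used to rank candidate cache points. The feature set, network size, and scoring head are assumptions for illustration; the paper's actual inputs and training procedure may differ.

        import torch
        import torch.nn as nn

        class CachePointScorer(nn.Module):
            """An LSTM consumes a sequence of per-node observations
            (e.g. loss rate, buffer occupancy, residual energy) and emits a
            score in (0, 1) for ranking candidate cache points."""
            def __init__(self, n_features=3, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):                 # x: (batch, time, n_features)
                out, _ = self.lstm(x)
                return torch.sigmoid(self.head(out[:, -1]))

        model = CachePointScorer()
        history = torch.rand(8, 20, 3)            # 8 nodes, 20 time steps, 3 features
        scores = model(history).squeeze(-1)
        print(scores.argsort(descending=True))    # nodes ranked as cache candidates
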
      • Open Access Article

        7 - Long-Term Software Fault Prediction Model with Linear Regression and Data Transformation
        Momotaz Begum, Jahid Hasan Rony, Md. Rashedul Islam, Jia Uddin
        Validation is obligatory to ensure software reliability by determining the characteristics of an implemented software system. To ensure the reliability of software, not only detecting and resolving faults that have occurred but also predicting future faults is required, and this is done before any actual testing phase initiates. As a result, various works on software fault prediction have been done. In this paper, we present a software fault prediction model in which different data transformation methods are applied to Poisson fault count data. For data pre-processing from Poisson data to Gaussian data, the Box-Cox power transformation (Box-Cox_T), the Yeo-Johnson power transformation (Yeo-Johnson_T), and the Anscombe transformation (Anscombe_T) are used. Then, for long-term software fault prediction, linear regression is applied. Linear regression captures the linear relationship between the dependent and independent variables, here the relative error and the testing days, respectively. For the analysis, three real software fault count datasets are used, on which we compare the proposed approach with the naïve Gauss method, the exponential smoothing time series forecasting model, and conventional software reliability growth models (SRGMs), in terms of data transformation (With_T) and non-data transformation (Non_T). Our datasets contain days and cumulative software faults represented in (62, 133), (181, 225), and (114, 189) formats, respectively. The Box-Cox power transformation with linear regression (L_Box-Cox_T) method outperformed all other methods with regard to the average relative error from the short term to the long term.
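
        The L_Box-Cox_T idea, i.e. transforming the cumulative fault count series with Box-Cox and fitting a linear regression on testing days, can be sketched as follows. The data here are synthetic, and the library calls (scipy's boxcox/inv_boxcox, scikit-learn's LinearRegression) are one plausible realization, not the authors' implementation.

        import numpy as np
        from scipy.stats import boxcox
        from scipy.special import inv_boxcox
        from sklearn.linear_model import LinearRegression

        np.random.seed(0)

        # Illustrative cumulative fault counts over testing days (synthetic data).
        days = np.arange(1, 31).reshape(-1, 1)
        faults = np.cumsum(np.random.poisson(2.0, size=30)).astype(float) + 1.0

        # Box-Cox needs strictly positive data; it returns the transformed series
        # and the fitted lambda, which is reused for the inverse transform.
        y_t, lam = boxcox(faults)

        model = LinearRegression().fit(days, y_t)          # L_Box-Cox_T-style sketch
        future = np.arange(31, 61).reshape(-1, 1)          # long-term horizon
        pred = inv_boxcox(model.predict(future), lam)      # back to fault counts
        print(pred[:5].round(1))
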