Yazar "Arasteh, Bahman" seçeneğine göre listele
Listeleniyor 1 - 20 / 37
Item: Advances in Manta Ray Foraging Optimization: A Comprehensive Survey (Springer Singapore Pte Ltd, 2024). Authors: Gharehchopogh, Farhad Soleimanian; Ghafouri, Shafi; Namazi, Mohammad; Arasteh, Bahman.
This paper comprehensively analyzes the Manta Ray Foraging Optimization (MRFO) algorithm and its integration into diverse academic fields. Introduced in 2020, MRFO is a novel metaheuristic algorithm inspired by the unique foraging behaviors of manta rays, specifically cyclone, chain, and somersault foraging. These biologically inspired strategies allow for effective solutions to intricate physical challenges. With its potent exploitation and exploration capabilities, MRFO has emerged as a promising approach for complex optimization problems, and its utility has found traction in numerous academic sectors. Since its inception in 2020, a large body of MRFO-based research has appeared in international journals published by IEEE, Wiley, Elsevier, Springer, MDPI, Hindawi, and Taylor & Francis, as well as in international conference proceedings. This paper consolidates the available literature on MRFO applications, covering hybridized, improved, and other MRFO variants, alongside optimization challenges. Research trends indicate that 12%, 31%, 8%, and 49% of MRFO studies are distributed across these four categories, respectively.

Item: A bioinspired discrete heuristic algorithm to generate the effective structural model of a program source code (Elsevier, 2023). Authors: Arasteh, Bahman; Sadegi, Razieh; Arasteh, Keyvan; Gunes, Peri; Kiani, Farzad; Torkamanian-Afshar, Mahsa.
When the source code is the only product available, program understanding has a substantial influence on software maintenance costs. The main goal of code comprehension is to extract information used during the software maintenance stage, and generating a structural model from the source code helps reduce maintenance costs. Software module clustering is regarded as a viable reverse engineering approach for building structural design models from source code. Finding the optimal clustering model is an NP-complete problem. The primary goals of this study are to minimize the number of connections between the created clusters, enhance internal connections inside clusters, and enhance clustering quality. The main flaws of previous approaches were their poor success rates, instability, and inadequate modularization quality. This paper introduces the Olympiad optimization algorithm, a novel population-based discrete heuristic algorithm, for solving the software module clustering problem. The algorithm is inspired by the competition of a group of students who increase their knowledge while preparing for an Olympiad exam. The suggested algorithm employs a divide-and-conquer strategy as well as local and global search methodologies. Its effectiveness in solving the module clustering problem was evaluated using ten real-world and standard software benchmarks. According to the experimental results, the average modularization quality of the generated clustered models for the ten benchmarks is about 3.94, with a standard deviation of 0.067. The proposed algorithm is superior to prior algorithms in terms of modularization quality, convergence, and stability of results. Furthermore, the experiments indicate that the proposed algorithm can be used to solve other discrete optimization problems efficiently.
(c) 2023 The Author(s). Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
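Several clustering entries in this listing report results as modularization quality (MQ). The sketch below shows one common MQ formulation based on per-cluster "cluster factors" (TurboMQ style); the graph representation and function name are illustrative assumptions, not code from the cited papers.

# Sketch: modularization quality (MQ) of a module clustering,
# using TurboMQ-style cluster factors. Hypothetical helper, not from the papers.
from collections import defaultdict

def modularization_quality(edges, assignment):
    """edges: iterable of (module_a, module_b) dependencies.
    assignment: dict mapping module -> cluster id."""
    intra = defaultdict(int)   # edges inside each cluster
    inter = defaultdict(int)   # edges crossing each cluster's boundary
    for a, b in edges:
        ca, cb = assignment[a], assignment[b]
        if ca == cb:
            intra[ca] += 1
        else:
            inter[ca] += 1
            inter[cb] += 1
    mq = 0.0
    for c in set(assignment.values()):
        i, j = intra[c], inter[c]
        if i > 0:
            mq += (2.0 * i) / (2.0 * i + j)   # cluster factor of cluster c
    return mq

A search algorithm such as the Olympiad optimizer described above would treat a function like this as the fitness to maximize over candidate cluster assignments.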
Item: A Bioinspired Test Generation Method Using Discretized and Modified Bat Optimization Algorithm (MDPI, 2024). Authors: Arasteh, Bahman; Arasteh, Keyvan; Kiani, Farzad; Sefati, Seyed Salar; Fratu, Octavian; Halunga, Simona; Tirkolaee, Erfan Babaee.
The process of software development is incomplete without software testing, and testing accounts for almost half of all development expenses. Automating the testing process is seen as a technique for reducing this cost. Generating test data with the highest branch coverage in the shortest time is an NP-complete optimization problem. The primary goal of this research is to generate test data that covers all branches of a software unit; increasing the convergence speed, the success rate, and the stability of the outcomes are further goals. An efficient bioinspired technique is suggested to automatically generate test data using a discretized Bat Optimization Algorithm (BOA). Modifying and discretizing the BOA and adapting it to the test generation problem are the main contributions of this study. In the first stage of the proposed method, the source code of the input program is statically analyzed to identify the branches and their predicates. Then, the developed discretized BOA iteratively generates effective test data. The fitness function was developed based on the program's branch coverage. The proposed method was implemented alongside the previous methods. The experimental results indicate that the suggested method generates test data with about 99.95% branch coverage in a limited amount of time (16 times lower than that of similar algorithms); its success rate is 99.85%, and the average number of iterations required to cover all branches is 4.70. Higher coverage, higher speed, and higher stability make the proposed method suitable as an efficient test generation method for real-world large software.
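The entry above relies on a branch-coverage fitness. A minimal sketch of such a fitness for a candidate test suite is shown below; the instrumentation hook and data structures are assumptions for illustration, not the paper's implementation.

# Sketch: branch-coverage fitness for search-based test generation.
# `run_instrumented` is a hypothetical hook that executes the unit under test
# with one input vector and returns the set of branch ids it takes.

def branch_coverage_fitness(test_suite, all_branches, run_instrumented):
    covered = set()
    for test_input in test_suite:
        covered |= run_instrumented(test_input)
    # Fraction of branches covered; the search algorithm maximizes this value.
    return len(covered & set(all_branches)) / len(all_branches)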
Item: Clustered design-model generation from a program source code using chaos-based metaheuristic algorithms (Springer Science and Business Media Deutschland GmbH, 2022). Author: Arasteh, Bahman.
Comprehension of the structure of software facilitates maintaining it more efficiently. Clustering software modules, as a reverse engineering technique, is assumed to be effective in extracting comprehensible structural models of software from the source code. Finding the best clustering model of a software system is regarded as an NP-complete problem. Minimizing the connections among the created clusters, maximizing the internal connections within the created clusters, and maximizing the clustering quality are considered the most important objectives in software module clustering (SMC). Poor success rate, low stability, and low modularization quality are regarded as the major drawbacks of the previously proposed methods. In this paper, five different heuristic algorithms (Bat, Cuckoo, Teaching-Learning-Based, Black Widow, and Grasshopper algorithms) are proposed for optimal clustering of software modules. The effects of chaos theory on the performance of these algorithms for this problem have also been investigated experimentally. The results of experiments conducted on eight standard and real-world applications indicate that the BWO, PSO, and TLB algorithms outperform the other algorithms on the SMC problem; their performance also increases when their initial populations are generated with the logistic chaos method instead of the random method. The average MQ of the clusters generated for the selected benchmark set by BWO, PSO, and TLB is 3.155, 3.120, and 2.778, respectively.
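Chaotic initialization of the kind mentioned above is usually built on the logistic map x_{k+1} = r * x_k * (1 - x_k) with r = 4. The sketch below illustrates that idea; the parameter names and the mapping of chaotic values to cluster indices are assumptions, not the cited paper's exact scheme.

# Sketch: generating an initial population with the logistic map (r = 4)
# instead of uniform random numbers. Values stay in (0, 1) and are scaled
# to a discrete decision range, e.g. cluster indices.

def logistic_chaos_population(pop_size, dim, n_clusters, seed=0.7):
    x = seed  # must avoid 0, 0.25, 0.5, 0.75, 1 (fixed/periodic points of the map)
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)          # logistic map iteration
            individual.append(int(x * n_clusters) % n_clusters)
        population.append(individual)
    return population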
Item: Data Replication in Distributed Systems Using Olympiad Optimization Algorithm (Univ Nis, 2023). Authors: Arasteh, Bahman; Bouyer, Asgarali; Ghanbarzadeh, Reza; Rouhi, Alireza; Mehrabani, Mahsa Nazeri; Tirkolaee, Erfan Babaee.
Achieving timely access to data objects is a major challenge in big distributed systems such as Internet of Things (IoT) platforms. Therefore, minimizing data read and write times in distributed systems has become a higher priority for system designers and mechanical engineers. Replication, and the appropriate placement of the replicas on the most accessible data servers, is an NP-complete optimization problem. The key objectives of the current study are minimizing the data access time, reducing the number of replicas, and improving data availability. The paper employs the Olympiad Optimization Algorithm (OOA), a novel population-based and discrete heuristic algorithm, to solve the replica placement problem; the algorithm is also applicable to other fields such as mechanical and computer engineering design problems. This discrete algorithm was inspired by the learning process of student groups preparing for Olympiad exams. The proposed algorithm, which is divide-and-conquer based with local and global search strategies, was used to solve the replica placement problem in a standard simulated distributed system. The European Union Database (EUData), which contains 28 nodes as servers and a network architecture in the form of a complete graph, was employed to evaluate the proposed algorithm. The results reveal that the proposed technique reduces data access time by 39% with around six replicas, which is vastly superior to the earlier methods. Moreover, the standard deviation of the results across the algorithm's different executions is approximately 0.0062, which is lower than that of the other techniques in the same experiments.

Item: Detecting SQL injection attacks by binary gray wolf optimizer and machine learning algorithms (Springer London Ltd, 2024). Authors: Arasteh, Bahman; Aghaei, Babak; Farzad, Behnoud; Arasteh, Keyvan; Kiani, Farzad; Torkamanian-Afshar, Mahsa.
SQL injection is one of the important security issues in web applications because it allows an attacker to interact with the application's database. SQL injection attacks can be detected using machine learning algorithms; effective features must be employed in the training stage to develop a classifier with optimal accuracy. Identifying the most effective features is an NP-complete combinatorial optimization problem. Feature selection is the process of selecting the smallest and most effective subset of the training dataset's features. The main objective of this study is to enhance the accuracy, precision, and sensitivity of SQLi detection. In this study, an effective method to detect SQL injection attacks is proposed. In the first stage, a specific training dataset consisting of 13 features was prepared. In the second stage, two different binary versions of the gray wolf algorithm were developed to select the most effective features of the dataset, and the resulting optimal datasets were used by different machine learning algorithms. Creating a new SQLi training dataset with 13 numeric features, developing two different binary versions of the gray wolf optimizer to optimally select the features of the dataset, and creating an effective and efficient classifier to detect SQLi attacks are the main contributions of this study. The results of the conducted tests indicate that the proposed SQL injection detector obtains 99.68% accuracy, 99.40% precision, and 98.72% sensitivity. The proposed method increases the efficiency of attack detection by selecting only 20% of the most effective features.
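For readers unfamiliar with wrapper-style feature selection of the kind described above, the sketch below shows the typical fitness evaluated for a binary feature mask: classification error traded off against the fraction of features kept. The classifier, weights, and cross-validation setup are generic assumptions, not the paper's binary gray wolf optimizer.

# Sketch: fitness of a binary feature mask for wrapper feature selection.
# A binary metaheuristic (e.g., a binarized gray wolf optimizer) would search
# over masks to minimize this value. Classifier and weights are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def mask_fitness(mask, X, y, alpha=0.99, beta=0.01):
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0                       # empty mask: worst possible fitness
    acc = cross_val_score(DecisionTreeClassifier(), X[:, selected], y, cv=5).mean()
    # Trade off classification error against the fraction of features kept.
    return alpha * (1.0 - acc) + beta * (selected.size / mask.size)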
Item: Discovering overlapping communities using a new diffusion approach based on core expanding and local depth traveling in social networks (Taylor & Francis Ltd, 2023). Authors: Bouyer, Asgarali; Sabavand Monfared, Maryam; Nourani, Esmaeil; Arasteh, Bahman.
This paper proposes a local diffusion-based approach, called the LDLF algorithm, for finding overlapping communities in social networks; it is based on label expansion using local depth-first search and the social influence information of nodes. It is vital to start the diffusion process, traveling in local depth from specific core nodes, based on their local topological features and their strategic position for spreading community labels. Correspondingly, to avoid assigning excessive and unessential labels, the LDLF algorithm prudently removes redundant and less frequent labels from nodes with multiple labels. Finally, the proposed method finalizes each node's labels based on the Hub Depressed index. Since it requires only two iterations for label updating, the LDLF algorithm runs with low time complexity while eliminating random behavior and achieving acceptable accuracy in finding overlapping communities in large-scale networks. Experiments on benchmark networks demonstrate the effectiveness of the LDLF method compared to state-of-the-art approaches.

Item: A discrete heuristic algorithm with swarm and evolutionary features for data replication problem in distributed systems (Springer London Ltd, 2023). Authors: Arasteh, Bahman; Allahviranloo, Tofigh; Funes, Peri; Torkamanian-Afshar, Mahsa; Khari, Manju; Catak, Muammer.
Availability and accessibility of data objects in a reasonable time is a main issue in distributed systems such as cloud computing services. As a result, reducing the time of data-related operations, such as data reads and writes, has become a major challenge in the development of these systems. In this regard, replicating data objects on different servers is a commonly used technique. In general, replica placement plays an essential role in the efficiency of distributed systems and can be implemented statically or dynamically. Estimating the minimum number of data replicas and the optimal placement of the replicas is an NP-complete optimization problem; hence, different heuristic algorithms have been proposed for optimal replica placement in distributed systems. Reducing data processing costs as well as the number of replicas, and increasing the reliability of the replica placement algorithms, are the main goals of this research. This paper presents a discrete swarm-evolutionary method, combining the shuffled frog leaping algorithm and the genetic algorithm, for the data replica placement problem in distributed systems. The experiments on the standard dataset show that the proposed method reduces data access time by up to 30% with about 14 replicas, whereas the numbers of replicas generated by the GA and ACO are 24 and 30, respectively. The average reductions in data access time by GA and ACO are 21% and 18%, which shows lower efficiency than the SFLA-GA algorithm. The SFLA-GA converges on the optimal solution before the 10th iteration, which shows the higher performance of the proposed method. Furthermore, the standard deviation among the results obtained by the proposed method over several runs is about 0.029, which is lower than that of the other algorithms. Additionally, the proposed method has a higher success rate than the other algorithms on the replica placement problem.
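Several replica placement entries above evaluate candidate placements by the data access time they induce. A minimal sketch of such a cost function is given below; the read-frequency vector, distance matrix, and nearest-replica cost model are illustrative assumptions rather than the exact objective used in the cited papers.

# Sketch: access-cost objective for a candidate replica placement.
# placement: set of server ids holding a replica of the data object.
# read_freq[i]: how often server i reads the object.
# dist[i][j]: communication cost between servers i and j.

def access_cost(placement, read_freq, dist):
    if not placement:
        return float("inf")              # no replica: object is unreachable
    total = 0.0
    for i, freq in enumerate(read_freq):
        nearest = min(dist[i][j] for j in placement)   # read from closest replica
        total += freq * nearest
    return total

A heuristic such as SFLA-GA or the Olympiad optimizer would then search over placements, trading a cost like this against the number of replicas.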
Item: A divide and conquer based development of gray wolf optimizer and its application in data replication problem in distributed systems (Springer, 2023). Authors: Fan, Wenguang; Arasteh, Bahman; Bouyer, Asgarali; Majidnezhad, Vahid.
One of the main problems of big distributed systems, such as the IoT, is the high access time to data objects. Replicating the data objects on various servers is a traditional strategy, and replica placement, which can be implemented statically or dynamically, is generally crucial to the effectiveness of distributed systems. Producing the minimum number of data copies and placing them on appropriate servers to minimize access time is an NP-complete optimization problem, and various heuristic techniques for efficient replica placement in distributed systems have been proposed. The main objectives of this research are to decrease the cost of data processing operations, decrease the number of copies, and improve the accessibility of the data objects. In this study, a discretized, group-based gray wolf optimization algorithm with swarm and evolutionary features was developed for the replica placement problem. The proposed algorithm divides the wolf population into subgroups, and each subgroup searches locally in a different part of the solution space. According to experiments conducted on the standard benchmark dataset, the suggested method provides about a 40% reduction in data access time with about five replicas. Also, the reliability of the suggested method across different executions is considerably higher than that of the previous methods.

Item: Duzen: generating the structural model from the software source code using shuffled frog leaping algorithm (Springer London, 2022). Authors: Arasteh, Bahman; Karimi, Mohammad Bagher; Sadegi, Razieh.
The cost of software maintenance is heavily influenced by program understanding. When the source code is the only product accessible, maintainers spend a significant amount of effort trying to understand the structure and behavior of the software. Program module clustering is a useful reverse-engineering technique for obtaining the software structural model from source code. Finding the best clustering is regarded as an NP-hard optimization problem, and several meta-heuristic methods have been employed to solve it. The fundamental flaws of the prior approaches were their insufficient performance and effectiveness. The major goals of this research are to achieve improved software clustering quality and stability. A new method (Duzen), which employs the shuffled frog leaping algorithm as a memetic meta-heuristic, is proposed in this research for improving software module clustering. The Duzen results were investigated and compared to those produced by earlier approaches. The proposed method was shown to be better and more successful than the others in obtaining the best clustering quality; it also had higher stability and converged to optimal solutions in fewer iterations. Furthermore, it achieved a higher mean result and a faster clustering execution time.

Item: Effective Software Mutation-Test Using Program Instructions Classification (Springer, 2023). Authors: Asghari, Zeinab; Arasteh, Bahman; Koochari, Abbas.
The number of bugs that a test dataset finds determines its effectiveness, and mutation testing is a useful technique for assessing the efficacy of a test set. The primary issues with mutation testing are its cost and time requirements; close to 40% of the bugs injected in a mutation test are effect-less (equivalent). Reducing the number of generated mutants by decreasing equivalent mutants and reducing the execution time of the mutation test are the main objectives of this study. An error-propagation-aware mutation testing approach is suggested in this research, consisting of three steps. In the first step, the data and instructions of the input program are evaluated to find a collection of instruction-level features that affect the error propagation rate. In the second step, an instruction classifier is built from the prepared dataset using supervised machine learning techniques. After the program instructions are automatically classified by the created classifier, the mutation test is performed only on the identified error-propagating instructions; the identified non-error-propagating instructions are not mutated. The experiments conducted on a set of standard benchmark programs indicate that the proposed method yields about a 19% reduction in the number of generated mutants and a 32.24% reduction in live mutants. It should be noted that the proposed method eliminates only the effect-less mutants. The key technical benefit of the suggested solution is that mutation of instructions that do not propagate errors is avoided. These findings can lead to performance improvements in existing mutation-testing methods and tools.
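The selective strategy described above amounts to filtering mutation targets before mutants are generated. The sketch below illustrates that idea with a deliberately simple operator (arithmetic operator replacement on source lines); the classifier predicate and line-based granularity are assumptions for illustration, not the tooling used in the paper.

# Sketch: selective mutant generation. Only lines judged error-propagating by
# some classifier are mutated; a single arithmetic-operator-replacement
# operator stands in for a full mutation-operator set.

ARITHMETIC_SWAPS = {"+": "-", "-": "+", "*": "/", "/": "*"}

def generate_mutants(source_lines, is_error_propagating):
    mutants = []
    for idx, line in enumerate(source_lines):
        if not is_error_propagating(line):      # skip non-propagating instructions
            continue
        for op, replacement in ARITHMETIC_SWAPS.items():
            if op in line:
                mutated = source_lines.copy()
                mutated[idx] = line.replace(op, replacement, 1)
                mutants.append(mutated)
    return mutants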
Item: Efficient software mutation test by clustering the single-line redundant mutants (Emerald Group Publishing Ltd, 2024). Authors: Arasteh, Bahman; Ghaffari, Ali.
Purpose: Reducing the number of generated mutants by clustering redundant mutants, reducing the execution time by decreasing the number of generated mutants, and reducing the cost of mutation testing are the main goals of this study.
Design/methodology/approach: In this study, a method is suggested to identify and prune the redundant mutants. First, the program source code is analyzed by the developed parser to filter out the effect-less instructions; the remaining instructions are then mutated by the standard mutation operators. The single-line mutants are partially executed by the developed instruction evaluator. Next, a clustering method is used to group the single-line mutants with the same results; only one complete run is performed per cluster.
Findings: The results of experiments on the Java benchmarks indicate that the proposed method yields a 53.51 per cent reduction in the number of mutants and a 57.64 per cent reduction in time compared to similar experiments in the MuJava and MuClipse tools.
Originality/value: Developing a classifier that takes the source code of the program and classifies the program's instructions into effective and effect-less classes using a dependency graph (filtering out the effect-less instructions reduces the total number of mutants generated); developing and implementing an instruction parser and instruction-level mutant generator for Java programs (the mutant generator takes an instruction of the original program as a string and generates its single-line mutants based on the standard mutation operators in MuJava); and developing a stack-based evaluator that takes an instruction (original or mutant) and the test data and evaluates its result without executing the whole program.
Competing interests: The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. The authors have no relevant financial or non-financial conflicts of interest.
Authors' contribution statement: The proposed method was developed and discretized by the authors. The designed algorithm was implemented and coded by the authors. The implemented method was adapted and benchmarked by the authors. The data and results analysis were performed by the authors. The manuscript of the paper was written by the authors.
Ethical and informed consent for data used: The data used in this research does not belong to any other person or third party and was prepared and generated by the researchers themselves during the research. The data of this research will be accessible to other researchers.
Data availability: The data relating to the current study is available in Google Drive and can be freely accessed at the following link: https://drive.google.com/drive/folders/1d69XSBZ-ioInjPw9L4qp-3BBe2jlkkfv?usp=share_link
Item: A fast module identification and filtering approach for influence maximization problem in social networks (Elsevier Science Inc, 2023). Authors: Beni, Hamid Ahmadi; Bouyer, Asgarali; Azimi, Sevda; Rouhi, Alireza; Arasteh, Bahman.
In this paper, we explore influence maximization, one of the most widely studied problems in social network analysis. Developing an effective algorithm for influence maximization remains a challenging task given its NP-hard nature. To tackle this issue, we propose the CSP (Combined modules for Seed Processing) algorithm, which aims to identify influential nodes. In CSP, graph modules are initially identified by a combination of criteria such as the clustering coefficient, degree, and common neighbors of nodes. Nodes with the same label are then clustered together into modules using label diffusion. Subsequently, only the most influential modules are selected using a filtering method based on their diffusion capacity. The algorithm then merges neighboring modules into distinct modules and extracts a candidate set of influential nodes using a new metric to quickly select seed sets. The number of nodes selected for the candidate set is restricted by a defined limit measure. Finally, seed nodes are chosen from the candidate set using a novel node scoring measure. We evaluated the proposed algorithm on both real-world and synthetic networks, and our experimental results indicate that the CSP algorithm outperforms other competitive algorithms in terms of solution quality and speedup on the tested networks.

Item: FIP: A fast overlapping community-based influence maximization algorithm using probability coefficient of global diffusion in social networks (Elsevier Ltd, 2023). Authors: Bouyer, Asgarali; Ahmadi Beni, Hamid; Arasteh, Bahman; Aghaee, Zahra; Ghanbarzadeh, Reza.
Influence maximization is the process of identifying a small set of influential nodes in a complex network so as to maximize the number of activated nodes. Due to critical issues such as accuracy, stability, and time complexity in selecting the seed set, many studies and algorithms have been proposed in the last decade. However, most influence maximization algorithms run into major challenges such as sub-optimal seed node selection, unsuitable influence spread, and high time complexity. This paper intends to address these challenges by decreasing the search space to reduce the time complexity. Furthermore, it selects seed nodes with a more optimal influence spread by considering the characteristics of the community structure, the diffusion capability of overlapping and hub nodes within and between communities, and the probability coefficient of global diffusion. The proposed algorithm, called FIP, first detects the overlapping communities, weighs the communities, and analyzes the emotional relationships of the communities' nodes. Moreover, the search space for choosing the seed nodes is limited by removing insignificant communities. Then, the candidate nodes are generated using the probability of global diffusion. Finally, the role of important nodes and the diffusion impact of overlapping nodes in the communities are measured to select the final seed nodes. Experimental results on real-world and synthetic networks indicate that the proposed FIP algorithm significantly outperforms other algorithms in terms of efficiency and runtime.
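Both influence maximization entries above rank seed sets by the number of nodes they eventually activate. As background, the sketch below estimates that spread under the independent cascade model by Monte Carlo simulation; the graph representation and parameters are illustrative assumptions, not the evaluation setup of the cited papers.

# Sketch: Monte Carlo estimate of influence spread under the independent
# cascade model. graph: dict node -> list of neighbors; p: activation probability.
import random

def estimated_spread(graph, seeds, p=0.05, runs=200):
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for neighbor in graph.get(node, []):
                if neighbor not in active and random.random() < p:
                    active.add(neighbor)        # each edge gets one activation trial
                    frontier.append(neighbor)
        total += len(active)
    return total / runs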
Item: Fuzuli: Automatic Test Data Generation for Software Structural Testing using Grey Wolf Optimization Algorithm and Genetic Algorithm (IEEE, 2022). Authors: Arasteh, Bahman; Sattari, Mohammad Reza; Kalan, Reza Shokri.
Software testing is a process that improves the quality of software systems through bug detection, but it is one of the most time- and cost-consuming stages of software development. Hence, software test automation is regarded as a solution that can facilitate the heavy and laborious tasks of testing. Problem: Automatic generation of test data with maximum coverage of program branches is an NP-complete optimization problem, and several heuristic algorithms have been proposed for it. Failure to maximize branch coverage, a poor success rate in optimal test data generation, and unstable results are the major demerits of the previous methods. Goal: Enhancing the branch coverage of the generated test data, the success rate in generating test data with maximum coverage, and the stability and speed criteria are the main goals of this study. Method: In this study, a combination of the grey wolf optimization algorithm and the genetic algorithm is used to automatically generate optimal test data. The proposed hybrid method (Fuzuli) tries to generate test data with maximum branch coverage at the software source code level. Results: The results obtained from the proposed algorithm were compared with those of the Shuffled Frog Leaping Algorithm (SFLA), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA). The results of running a wide range of tests on standard benchmark programs show that the proposed algorithm outperforms the other algorithms with an average coverage of 99.98%, a success rate of 99.97%, and an average output of 2.86.

Item: Generating the structural graph-based model from a program source-code using chaotic forrest optimization algorithm (Wiley, 2023). Authors: Arasteh, Bahman; Ghanbarzadeh, Reza; Gharehchopogh, Farhad Soleimanian; Hosseinalipour, Ali.
One of the most important and costly stages in software development is maintenance, and understanding the structure of software makes it easier to maintain efficiently. Clustering software modules is thought to be an effective reverse engineering technique for deriving structural models of software from source code. In software module clustering, the most essential objectives are to minimize connections between the produced clusters, maximize internal connections within the created clusters, and maximize clustering quality. Finding the appropriate clustering model of a software system is considered an NP-complete task. The key limitations of the previously proposed approaches are their low success rate, low stability, and poor modularization quality. In this paper, a chaos-based heuristic method using the forest optimization algorithm is proposed for optimal clustering of software modules. The impact of chaos theory on the performance of SFLA-GA and PSO-GA has also been investigated. The results show that using the logistic chaos approach improves the performance of these methods on the software module clustering problem. The performance of the chaos-based FOA, SFLA-GA, and PSO-GA is superior to that of the other heuristic methods in terms of modularization quality and stability of the results.

Item: A hybrid chaos-based algorithm for data object replication in distributed systems (Taylor & Francis Ltd, 2024). Authors: Arasteh, Bahman; Gunes, Peri; Bouyer, Asgarali; Rouhi, Alireza; Ghanbarzadeh, Reza.
One of the primary challenges in distributed systems, such as cloud computing, lies in ensuring that data objects are accessible within a reasonable timeframe. To address this challenge, data objects are replicated across multiple servers. Estimating the minimum number of data replicas and their optimal placement is considered an NP-complete optimization problem. The primary objectives of the current research are minimizing data processing costs, reducing the number of replicas, and maximizing the reliability of the applied algorithms in replica placement. This paper introduces a hybrid chaos-based swarm approach, using a modified shuffled frog leaping algorithm with a new local search strategy, for replicating data in distributed systems. Taking into account the algorithm's performance in static settings, the introduced method reduces the expenses associated with replica placement. The results of the experiment conducted on a standard data set indicate that the proposed approach can decrease data access time by about 33% using approximately seven replicas. When executed several times, the suggested method yields a standard deviation of approximately 0.012 in its results, which is lower than that produced by existing algorithms. Additionally, the new approach's success rate is higher than that of existing algorithms in addressing the replica placement problem.
Item: A Hybrid Heuristic Algorithm Using Artificial Agents for Data Replication Problem in Distributed Systems (MDPI, 2023). Authors: Arasteh, Bahman; Sefati, Seyed Salar; Halunga, Simona; Fratu, Octavian; Allahviranloo, Tofigh.
One of the key issues with large distributed systems, such as IoT platforms, is gaining timely access to data objects. As a result, decreasing the time of reading and writing data in distributed communication systems has become an essential demand for asymmetric systems. A common method is to replicate the data objects across multiple servers. Replica placement, which can be performed statically or dynamically, is critical to the effectiveness of distributed systems in general. Replication, and placing the replicas on the best available data servers in an optimal manner, is an NP-complete optimization problem; as a result, several heuristic strategies for replica placement in distributed systems have been presented. The primary goals of this research are to reduce the data access time, reduce the number of replicas, and increase the reliability of the replica placement algorithms. In this paper, a discretized heuristic algorithm with artificial individuals and a hybrid imitation method was developed. In the proposed method, particle- and gray-wolf-based individuals use a local memory and velocity to search for optimal solutions, and the method includes symmetry in both local and global searches. Another contribution of this research is the application of the proposed optimization algorithm to the data object replication problem in distributed systems. Regarding the results of simulations on the standard benchmark, the suggested method gives a 35% reduction in data access time with about six replicas. Furthermore, the standard deviation among the results obtained by the proposed method is about 0.015, which is lower than that of the other methods in the same experiments; hence, the method is more stable than the previous methods across different executions.

Item: Identifying influential nodes based on new layer metrics and layer weighting in multiplex networks (Springer London Ltd, 2024). Authors: Bouyer, Asgarali; Mohammadi, Moslem; Arasteh, Bahman.
Identifying influential nodes in multiplex complex networks is of critical importance for viral marketing and other real-world information diffusion applications. However, selecting suitable influential spreaders in multiplex networks is more complex due to the existence of multiple layers, each of which has its own particular importance. In this research, an important layer with strong spreaders is a layer positioned in a well-connected neighborhood, with more active edges, more active critical nodes, a higher ratio of active nodes and their connections to all possible connections, and more intersection of intra-layer communication compared to other layers. In this paper, we formulate a layer weighting method based on these layer parameters and propose an algorithm for mapping and computing the rank of nodes based on their spreading capability in multiplex networks. The result of the layer weighting is used to map and compress centrality vector values to a scalar value for calculating the centrality of nodes in multiplex networks through a coupled set of equations. In addition, based on this new method, the important layer parameters are combined for the first time to compute the influence of nodes from different layers. Experimental results on both synthetic and real-world networks show that the proposed layer weighting and mapping method is significantly more effective in detecting highly influential spreaders than the compared methods. These results validate the importance of a suitable layer weighting measure for identifying potential spreaders in multiplex networks.
Item: Identifying top influential spreaders based on the influence weight of layers in multiplex networks (Pergamon-Elsevier Science Ltd, 2023). Authors: Zhou, Xiaohui; Bouyer, Asgarali; Maleki, Morteza; Mohammadi, Moslem; Arasteh, Bahman.
Detecting influential nodes in multiplex networks is a complex task due to the presence of multiple layers. In this study, we propose a method for identifying important layers with strong spreaders based on several key parameters. These include a layer's position within a well-connected neighborhood, the number of active edges and critical nodes, the ratio of active nodes to all possible connections, and the intersection of intra-layer communication compared to other layers. To accomplish this, we have formulated a layer weighting method that takes these parameters into account, and developed an algorithm for mapping and computing the rank of nodes based on their spreading capability within multiplex networks. The resulting layer weighting is then used to map and compress centrality vector values to a scalar value, allowing us to calculate node centrality in multiplex networks via a coupled set of equations. Moreover, our method combines the important layer parameters to compute the influence of nodes from different layers. Our experimental results, conducted on both synthetic and real-world networks, demonstrate that the proposed approach significantly outperforms existing methods in detecting highly influential spreaders. These findings highlight the importance of using a suitable layer weighting measure for identifying potential spreaders in multiplex networks.
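The two multiplex entries above aggregate per-layer node centralities using layer weights. The sketch below shows the simplest form of such an aggregation, a weighted sum of per-layer centrality scores; the flat weighted sum and the normalization are assumptions for illustration, not the coupled equations proposed in the cited papers.

# Sketch: collapsing per-layer centralities of a multiplex network into one
# scalar score per node using layer weights (here: a plain weighted sum).

def layer_weighted_centrality(layer_centralities, layer_weights):
    """layer_centralities: list of dicts, one per layer, node -> centrality.
    layer_weights: list of non-negative weights, one per layer."""
    total_weight = sum(layer_weights) or 1.0
    nodes = set().union(*[c.keys() for c in layer_centralities])
    score = {}
    for node in nodes:
        score[node] = sum(
            w * layer.get(node, 0.0)
            for layer, w in zip(layer_centralities, layer_weights)
        ) / total_weight
    return score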