Listing items by author "Bouyer, Asgarali"
Showing 1 - 15 of 15
Item: A cascade information diffusion prediction model integrating topic features and cross-attention (Elsevier, 2023)
Authors: Liu, Xiaoyang; Wang, Haotian; Bouyer, Asgarali
Information cascade prediction is a crucial task in social network analysis. However, previous research has focused only on the impact of social relationships on cascade information diffusion while ignoring the differences caused by the characteristics of the cascade information itself, which limits prediction performance. We propose a novel cascade information diffusion prediction model (Topic-HGAT). First, we extract features from different topics to enrich the learned cascade representation; to implement this, we use hypergraphs to characterize cascade information more precisely and dynamically learn multiple diffusion sub-hypergraphs over time. Second, we introduce cross-attention mechanisms that learn feature representations from the perspectives of both the user representation and the cascade representation, achieving a deep fusion of the two; this addresses the poor fusion that results from simply computing self-attention on the learned user and cascade representations, as in previous studies. Finally, we conduct comparative experiments on four real datasets, including Twitter and Douban.
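The cross-attention fusion described in this abstract can be illustrated with a minimal, dependency-free sketch of scaled dot-product attention, where queries come from one representation (users) and keys/values come from the other (cascades). This is a toy illustration of the general mechanism, not the paper's Topic-HGAT implementation; the function names and toy vectors are ours.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query (e.g. a user vector)
    attends over keys/values from the other side (e.g. cascade vectors),
    so the output mixes the two feature spaces."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # each output row is a convex combination of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# toy example: 2 user queries attending over 3 cascade key/value vectors
users = [[1.0, 0.0], [0.0, 1.0]]
cascades_k = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
cascades_v = [[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]]
fused = cross_attention(users, cascades_k, cascades_v)
print(fused)
```

Running the same routine twice, with the roles of users and cascades swapped, gives the two-way fusion the abstract describes.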
Experimental results show that the proposed Topic-HGAT model achieves improvements of up to 2.91% and 1.59% on the Hits@100 and MAP@100 metrics, respectively, over 8 baseline models, verifying its rationality and effectiveness.

Item: Data Replication in Distributed Systems Using Olympiad Optimization Algorithm (Univ Nis, 2023)
Authors: Arasteh, Bahman; Bouyer, Asgarali; Ghanbarzadeh, Reza; Rouhi, Alireza; Mehrabani, Mahsa Nazeri; Tirkolaee, Erfan Babaee
Achieving timely access to data objects is a major challenge in big distributed systems such as Internet of Things (IoT) platforms. Minimizing data read and write times has therefore become a high priority for system designers and engineers. Replication, and the placement of the replicas on the most accessible data servers, is an NP-complete optimization problem. The key objectives of this study are minimizing data access time, reducing the number of replicas, and improving data availability. The paper employs the Olympiad Optimization Algorithm (OOA), a novel population-based, discrete heuristic algorithm, to solve the replica placement problem; the algorithm is also applicable to other fields, such as mechanical and computer engineering design problems. This discrete algorithm was inspired by the learning process of student groups preparing for Olympiad exams. The proposed algorithm, which is divide-and-conquer based with local and global search strategies, was applied to the replica placement problem in a standard simulated distributed system. The European Union Database (EUData), which contains 28 server nodes connected as a complete graph, was employed to evaluate the proposed algorithm. The proposed technique reduces data access time by 39% with around six replicas, substantially outperforming earlier methods.
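The objective these replica placement papers optimize can be made concrete with a small sketch: read cost as demand-weighted distance to the nearest replica, minimized over placements. A brute-force search stands in here for the OOA heuristic, which is only viable because the toy instance is tiny; the cost model and data are our illustrative assumptions, not the paper's exact formulation.

```python
import itertools

def access_cost(replicas, demand, dist):
    """Total read cost: each server fetches the object from its nearest
    replica, weighted by how often that server requests it."""
    return sum(demand[s] * min(dist[s][r] for r in replicas)
               for s in range(len(demand)))

def best_placement(n_servers, k, demand, dist):
    """Exhaustively try every placement of k replicas (fine for toy
    instances; a heuristic like OOA is needed at realistic sizes)."""
    return min(itertools.combinations(range(n_servers), k),
               key=lambda reps: access_cost(reps, demand, dist))

# 4 fully connected servers with unit link costs and uneven read demand
dist = [[0, 1, 1, 1],
        [1, 0, 1, 1],
        [1, 1, 0, 1],
        [1, 1, 1, 0]]
demand = [10, 1, 1, 5]
placement = best_placement(4, 2, demand, dist)
print(placement, access_cost(placement, demand, dist))
```

The search correctly co-locates replicas with the two heaviest readers (servers 0 and 3), which is exactly the trade-off between replica count and access time that the heuristics navigate on large systems.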
Moreover, the standard deviation of the results across the algorithm's executions is approximately 0.0062, lower than that of the other techniques in the same experiments.

Item: Discovering overlapping communities using a new diffusion approach based on core expanding and local depth traveling in social networks (Taylor & Francis Ltd, 2023)
Authors: Bouyer, Asgarali; Sabavand Monfared, Maryam; Nourani, Esmaeil; Arasteh, Bahman
This paper proposes a local diffusion-based approach, called the LDLF algorithm, that finds overlapping communities in social networks through label expansion using local depth-first search and the social influence information of nodes. It is vital to start the diffusion process with local depth traveling from specific core nodes, chosen for their local topological features and their strategic position for spreading community labels. Correspondingly, to avoid assigning excessive and unessential labels, the LDLF algorithm prudently removes redundant and infrequent labels from nodes with multiple labels. Finally, the method finalizes each node's labels based on the Hub Depressed index. Because it requires only two iterations of label updating, the LDLF algorithm runs with low time complexity while eliminating random behavior and achieving acceptable accuracy in finding overlapping communities in large-scale networks. Experiments on benchmark networks prove the effectiveness of the LDLF method compared to state-of-the-art approaches.

Item: A divide and conquer based development of gray wolf optimizer and its application in data replication problem in distributed systems (Springer, 2023)
Authors: Fan, Wenguang; Arasteh, Bahman; Bouyer, Asgarali; Majidnezhad, Vahid
One of the main problems of big distributed systems, like the IoT, is the high access time to data objects. Replicating the data objects on various servers is a traditional strategy.
Replica placement, which can be implemented statically or dynamically, is crucial to the effectiveness of distributed systems. Producing the minimum number of data copies and placing them on appropriate servers so as to minimize access time is an NP-complete optimization problem, and various heuristic techniques for efficient replica placement have been proposed. The main objectives of this research are to decrease the cost of data-processing operations, decrease the number of copies, and improve the accessibility of the data objects. In this study, a discretized, group-based gray wolf optimization algorithm combining swarm and evolutionary features was developed for the replica placement problem. The algorithm divides the wolf population into subgroups, and each subgroup searches locally in a different region of the solution space. According to experiments on the standard benchmark dataset, the suggested method provides about a 40% reduction in data access time with about five replicas. The reliability of the suggested method across different executions is also considerably higher than that of previous methods.

Item: A fast module identification and filtering approach for influence maximization problem in social networks (Elsevier Science Inc, 2023)
Authors: Beni, Hamid Ahmadi; Bouyer, Asgarali; Azimi, Sevda; Rouhi, Alireza; Arasteh, Bahman
In this paper, we explore influence maximization, one of the most widely studied problems in social network analysis. Developing an effective algorithm for influence maximization remains challenging given its NP-hard nature. To tackle this issue, we propose the CSP (Combined modules for Seed Processing) algorithm, which aims to identify influential nodes. In CSP, graph modules are initially identified by a combination of criteria such as the clustering coefficient, degree, and common neighbors of nodes.
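The three structural criteria just listed (clustering coefficient, degree, common neighbors) can each be computed locally from an adjacency structure and combined into a node score. The combination below is an illustrative sketch on a toy graph; the actual weighting CSP uses is not specified here, so the `node_score` mix is our assumption.

```python
from itertools import combinations

# toy undirected graph as adjacency sets
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def clustering_coeff(g, v):
    """Fraction of pairs of v's neighbors that are themselves linked."""
    nbrs = g[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, w in combinations(nbrs, 2) if w in g[u])
    return 2.0 * links / (k * (k - 1))

def common_neighbors(g, u, v):
    return len(g[u] & g[v])

def node_score(g, v):
    """Hypothetical mix of the three criteria; the weights are ours,
    not the CSP paper's."""
    deg = len(g[v])
    cn = sum(common_neighbors(g, v, u) for u in g[v])
    return deg * (1 + clustering_coeff(g, v)) + cn

ranked = sorted(graph, key=lambda v: node_score(graph, v), reverse=True)
print(ranked)
```

High-scoring nodes sit in dense, well-connected neighborhoods, which is what makes them useful anchors for module formation.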
Nodes with the same label are then clustered into modules using label diffusion. Subsequently, only the most influential modules are retained by a filtering method based on their diffusion capacity. The algorithm then merges neighboring modules into distinct modules and extracts a candidate set of influential nodes using a new metric, so that seed sets can be selected quickly; the size of the candidate set is restricted by a defined limit measure. Finally, seed nodes are chosen from the candidate set using a novel node-scoring measure. We evaluated the proposed algorithm on both real-world and synthetic networks, and the experimental results indicate that CSP outperforms competing algorithms in solution quality and speedup on the tested networks.

Item: FIP: A fast overlapping community-based influence maximization algorithm using probability coefficient of global diffusion in social networks (Elsevier Ltd, 2023)
Authors: Bouyer, Asgarali; Ahmadi Beni, Hamid; Arasteh, Bahman; Aghaee, Zahra; Ghanbarzadeh, Reza
Influence maximization is the process of identifying a small set of influential nodes in a complex network so as to maximize the number of activated nodes. Because of critical issues such as accuracy, stability, and time complexity in selecting the seed set, many studies and algorithms have been proposed in the last decade. However, most influence maximization algorithms face major challenges such as suboptimal seed-node selection, unsuitable influence spread, and high time complexity. This paper addresses these challenges by shrinking the search space to reduce time complexity. Furthermore, it selects seed nodes with a more optimal influence spread by considering the characteristics of the community structure, the diffusion capability of overlapping and hub nodes within and between communities, and the probability coefficient of global diffusion.
The proposed algorithm, called FIP, first detects the overlapping communities, weighs the communities, and analyzes the emotional relationships of the communities' nodes. The search space for choosing seed nodes is then reduced by removing insignificant communities, and candidate nodes are generated using the probability coefficient of global diffusion. Finally, the role of important nodes and the diffusion impact of overlapping nodes in the communities are measured to select the final seed nodes. Experimental results on real-world and synthetic networks indicate that the proposed FIP algorithm significantly outperforms other algorithms in efficiency and runtime.

Item: A hybrid chaos-based algorithm for data object replication in distributed systems (Taylor & Francis Ltd, 2024)
Authors: Arasteh, Bahman; Gunes, Peri; Bouyer, Asgarali; Rouhi, Alireza; Ghanbarzadeh, Reza
One of the primary challenges in distributed systems, such as cloud computing, lies in ensuring that data objects are accessible within a reasonable timeframe. To address this challenge, data objects are replicated across multiple servers. Determining the minimum number of data replicas and their optimal placement is an NP-complete optimization problem. The primary objectives of this research are minimizing data-processing costs, reducing the number of replicas, and maximizing the reliability of the replica placement algorithm. This paper introduces a hybrid chaos-based swarm approach that uses a modified shuffled frog leaping algorithm with a new local search strategy for replicating data in distributed systems. Taking into account the algorithm's performance in static settings, the introduced method reduces the expenses associated with replica placement. Experiments on a standard data set indicate that the proposed approach can decrease data access time by about 33% using approximately seven replicas.
Across several executions, the suggested method yields a standard deviation of approximately 0.012 in its results, which is lower than that of existing algorithms. Additionally, the new approach's success rate is higher than that of existing algorithms for the replica placement problem.

Item: Identifying influential nodes based on new layer metrics and layer weighting in multiplex networks (Springer London Ltd, 2024)
Authors: Bouyer, Asgarali; Mohammadi, Moslem; Arasteh, Bahman
Identifying influential nodes in multiplex complex networks is critically important for viral marketing and other real-world information diffusion applications. However, selecting suitable influential spreaders in multiplex networks is more complex because of the multiple layers, each of which has its own particular importance. In this research, an important layer with strong spreaders is one positioned in a well-connected neighborhood, with more active edges, more active critical nodes, a higher ratio of active nodes and their connections to all possible connections, and a larger intersection of intra-layer communication than other layers. We formulate a layer weighting method based on these layer parameters and propose an algorithm for mapping and computing the rank of nodes based on their spreading capability in multiplex networks. The layer weights are used to map and compress the centrality vector values into a scalar value, from which node centrality in the multiplex network is calculated by a coupled set of equations. In addition, this new method combines the important layer parameters for the first time to compute the influence of nodes across different layers.
Experimental results on both synthetic and real-world networks show that the proposed layer weighting and mapping method is significantly more effective at detecting highly influential spreaders than the compared methods. These results confirm the value of a suitable layer weighting measure for identifying potential spreaders in multiplex networks.

Item: Identifying top influential spreaders based on the influence weight of layers in multiplex networks (Pergamon-Elsevier Science Ltd, 2023)
Authors: Zhou, Xiaohui; Bouyer, Asgarali; Maleki, Morteza; Mohammadi, Moslem; Arasteh, Bahman
Detecting influential nodes in multiplex networks is a complex task due to the presence of multiple layers. In this study, we propose a method for identifying important layers with strong spreaders based on several key parameters: a layer's position within a well-connected neighborhood, the number of active edges and critical nodes, the ratio of active nodes to all possible connections, and the intersection of intra-layer communication compared to other layers. To accomplish this, we formulate a layer weighting method that takes these parameters into account and develop an algorithm for mapping and computing the rank of nodes based on their spreading capability within multiplex networks. The resulting layer weights are then used to map and compress the centrality vector values into a scalar value, allowing us to calculate node centrality in multiplex networks via a coupled set of equations. Moreover, our method combines the important layer parameters to compute the influence of nodes from different layers. Our experiments, conducted on both synthetic and real-world networks, demonstrate that the proposed approach significantly outperforms existing methods in detecting highly influential spreaders.
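The core idea shared by these two multiplex papers, collapsing per-layer centrality vectors into one scalar per node via layer weights, can be sketched in a few lines. Degree centrality and a plain weighted average stand in for the papers' coupled equations; the layer weights and toy layers are our assumptions.

```python
def degree_centrality(layer):
    """Degree centrality within one layer (adjacency-set dict)."""
    n = len(layer)
    return {v: len(nbrs) / (n - 1) for v, nbrs in layer.items()}

def multiplex_rank(layers, layer_weights):
    """Collapse per-layer centrality vectors into one scalar per node
    using a weighted average over layers (illustrative stand-in for the
    papers' coupled equations)."""
    total = sum(layer_weights)
    score = {}
    for layer, w in zip(layers, layer_weights):
        for v, c in degree_centrality(layer).items():
            score[v] = score.get(v, 0.0) + w * c / total
    return score

# two toy layers over the same three nodes; layer 0 is deemed more active
layer0 = {"x": {"y", "z"}, "y": {"x"}, "z": {"x"}}
layer1 = {"x": {"y"}, "y": {"x", "z"}, "z": {"y"}}
scores = multiplex_rank([layer0, layer1], layer_weights=[0.7, 0.3])
print(max(scores, key=scores.get))
```

Because layer 0 carries more weight, its hub ("x") outranks the hub of layer 1 ("y"), which is precisely the effect a layer weighting scheme is meant to produce.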
These findings highlight the importance of a suitable layer weighting measure for identifying potential spreaders in multiplex networks.

Item: Meet User's Service Requirements in Smart Cities Using Recurrent Neural Networks and Optimization Algorithm (IEEE, 2023)
Authors: Sefati, Seyed Salar; Arasteh, Bahman; Halunga, Simona; Fratu, Octavian; Bouyer, Asgarali
Despite significant advancements in Internet of Things (IoT)-based smart cities, service discovery and composition continue to pose challenges. Current methodologies are limited in optimizing Quality of Service (QoS) under diverse network conditions, creating a critical research gap. This study addresses that gap by introducing a novel three-layered recurrent neural network (RNN) algorithm. Aimed at optimizing QoS in IoT service discovery, our method incorporates user requirements into its evaluation matrix. It also integrates long short-term memory (LSTM) networks and a black widow optimization (BWO) algorithm, which together facilitate the selection and composition of optimal services for specific tasks. This approach allows the RNN algorithm to identify the top-K services based on QoS under varying network conditions. The novelty of our methodology lies in implementing LSTM in the hidden layer and employing backpropagation through time (BPTT) for parameter updates, which enables the RNN to capture temporal patterns and intricate relationships between devices and services. Further, we use the BWO algorithm, which simulates the behavior of black widow spiders, to find the optimal combination of services that meets the system requirements; it factors in both the attractive and repulsive forces between services to isolate the best candidate solutions. Compared with existing methods, our approach shows superior performance in terms of latency, availability, and reliability.
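The top-K selection step that this service discovery pipeline ends with can be sketched with a plain weighted QoS score: the paper learns the ranking with an RNN, so the weighted sum below is only a simple stand-in, and the services, attribute values, and weights are all hypothetical.

```python
def qos_score(service, weights):
    """Weighted sum of QoS attributes; latency is 'lower is better',
    so it enters with a negative sign."""
    return (weights["availability"] * service["availability"]
            + weights["reliability"] * service["reliability"]
            - weights["latency"] * service["latency"])

def top_k(services, weights, k):
    """Rank candidate services by QoS score and keep the best k."""
    return sorted(services, key=lambda s: qos_score(s, weights),
                  reverse=True)[:k]

# hypothetical candidate services with normalized QoS attributes
services = [
    {"name": "s1", "availability": 0.99, "reliability": 0.95, "latency": 0.20},
    {"name": "s2", "availability": 0.90, "reliability": 0.99, "latency": 0.05},
    {"name": "s3", "availability": 0.80, "reliability": 0.80, "latency": 0.50},
]
weights = {"availability": 0.4, "reliability": 0.4, "latency": 0.2}
best = top_k(services, weights, k=2)
print([s["name"] for s in best])
```

Changing the weight vector models a different user requirement profile, which is the role the evaluation matrix plays in the paper.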
It thus provides an efficient and effective solution for service discovery and composition in IoT-based smart cities, bridging a significant gap in current research.

Item: A Modified Horse Herd Optimization Algorithm and Its Application in the Program Source Code Clustering (Wiley-Hindawi, 2023)
Authors: Arasteh, Bahman; Gunes, Peri; Bouyer, Asgarali; Gharehchopogh, Farhad Soleimanian; Banaei, Hamed Alipour; Ghanbarzadeh, Reza
Maintenance is one of the costliest phases of the software development process. If architectural design models are available, software maintenance becomes more straightforward; when the source code is the only available resource, program comprehension profoundly affects maintenance costs. The primary objective of comprehending the source code is to extract information used during the maintenance phase, and generating a structural model from the source code is an effective way of reducing overall maintenance costs. Software module clustering is a powerful reverse-engineering technique for constructing structural design models from source code. The main objectives of clustering modules are to reduce the number of connections between clusters, increase the connections within clusters, and improve the clustering quality. Finding the ideal clustering model is an NP-complete problem, and many previous approaches suffer from low success rates, instability, and poor modularization quality. This paper applies the horse herd optimization algorithm, a distinctive population-based, discrete metaheuristic, to clustering software modules. The method's effectiveness on the module clustering problem was examined on ten real-world standard software test benchmarks.
Based on the experimental data, the quality of the produced clustered models is approximately 3.219, with a standard deviation of 0.0718 across the ten benchmarks. The proposed method surpasses former methods in convergence, modularization quality, and result stability. Furthermore, the experimental results demonstrate the versatility of this approach for various real-world discrete optimization challenges.

Item: Program source code comprehension by module clustering using combination of discretized gray wolf and genetic algorithms (Elsevier Ltd, 2022)
Authors: Arasteh, Bahman; Abdi, Mohammad; Bouyer, Asgarali
Maintenance is a critical and costly phase of the software lifecycle, and understanding the structure of software makes it much easier to maintain. Clustering the modules of software is a useful reverse-engineering technique for constructing software structural models from source code. Minimizing the connections between the produced clusters, maximizing the internal connections within clusters, and maximizing the clustering quality are the most important objectives in software module clustering, and finding the optimal clustering model is an NP-complete problem. Low success rates, limited stability, and poor modularization quality are the main drawbacks of previous methods. In this paper, a combination of the gray wolf optimization algorithm and genetic algorithms is suggested for efficient clustering of software modules. An extensive series of experiments on 14 standard benchmarks was conducted to evaluate the proposed method. The results illustrate that applying the combination of gray wolf and genetic algorithms to the software module clustering problem increases the quality of clustering.
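The "quality of clustering" that these module clustering papers optimize is commonly measured by a modularization quality (MQ) score that rewards intra-cluster edges and penalizes inter-cluster ones. The sketch below uses a TurboMQ-style formulation, each cluster contributes intra / (intra + inter/2), as an illustrative assumption; the module names and dependency edges are hypothetical.

```python
def modularization_quality(edges, clusters):
    """TurboMQ-style score: each cluster contributes
    intra / (intra + inter / 2); a higher total means tighter clusters
    with fewer cross-cluster dependencies."""
    of = {m: i for i, c in enumerate(clusters) for m in c}
    intra = [0] * len(clusters)
    inter = [0] * len(clusters)
    for u, v in edges:
        if of[u] == of[v]:
            intra[of[u]] += 1
        else:
            inter[of[u]] += 1
            inter[of[v]] += 1
    return sum(i / (i + e / 2.0)
               for i, e in zip(intra, inter) if i + e > 0)

# hypothetical module dependency graph
edges = [("m1", "m2"), ("m2", "m3"), ("m1", "m3"),
         ("m4", "m5"), ("m3", "m4")]
good = modularization_quality(edges, [{"m1", "m2", "m3"}, {"m4", "m5"}])
bad = modularization_quality(edges, [{"m1", "m4"}, {"m2", "m3", "m5"}])
print(good, bad)
```

A metaheuristic such as the gray wolf/genetic hybrid searches the space of partitions for one maximizing this score, and the partition that respects the dependency structure does score higher here.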
In terms of modularization quality and convergence speed, the proposed hybrid method outperforms the other heuristic approaches.

Item: A quality-of-service aware composition-method for cloud service using discretized ant lion optimization algorithm (Springer London Ltd, 2024)
Authors: Arasteh, Bahman; Aghaei, Babak; Bouyer, Asgarali; Arasteh, Keyvan
In cloud systems, service providers supply a pool of resources in the form of web services, and these services are merged to provide the required composite services. Composing a quality-of-service-aware web service is analogous to the knapsack problem and is NP-hard. Different artificial intelligence and heuristic methods have been used to achieve optimal or near-optimal composite services. In this paper, the ant lion optimization algorithm was modified and discretized to choose appropriate web services from the existing services and provide optimal composite services. The QWS dataset, a collection of 2507 real-world web services, was used to evaluate the proposed method, with response time, availability, throughput, success capability, reliability, and latency as the web service quality metrics. The experiments confirm that the composite service provided by the proposed method has considerably higher quality than those of related algorithms; hence, the proposed method can be used in the cloud resource discovery layer.

Item: Review of heterogeneous graph embedding methods based on deep learning techniques and comparing their efficiency in node classification (Springer Wien, 2024)
Authors: Noori, Azad; Balafar, Mohammad Ali; Bouyer, Asgarali; Salmani, Khosro
Graph embedding is an advantageous technique for reducing computational costs and effectively using graph information in machine learning tasks like classification, clustering, and link prediction. As a result, it has become a key method in various research areas.
However, different embedding methods may be needed depending on the variety of graphs involved. One of the most common graph types is the heterogeneous graph (HG), or heterogeneous information network (HIN), which poses unique challenges for graph embedding approaches because of its diverse sets of nodes and edges. Several methods for heterogeneous graph embedding have been proposed in recent years to overcome these challenges. This paper reviews the latest techniques in two main parts: the first describes the fundamental concepts and obstacles in heterogeneous graph embedding, while the second compares the most important methods. Finally, the results are summarized, outlining the challenges and opportunities for future directions.

Item: Time and cost-effective online advertising in social Internet of Things using influence maximization problem (Springer, 2024)
Authors: Molaei, Reza; Fard, Kheirollah Rahsepar; Bouyer, Asgarali
Recently, a novel concept called the Social Internet of Things (SIoT) has emerged, combining the Internet of Things (IoT) with social networks. SIoT plays a significant role in various aspects of modern life, including smart transportation, online healthcare systems, and viral marketing. One critical challenge in SIoT-based advertising is identifying the most effective objects for maximizing advertising impact. This paper introduces a highly efficient heuristic algorithm, Influence Maximization-Cost Minimization for Advertising in the Social Internet of Things (IMCMoT), inspired by real-world advertising strategies. The IMCMoT algorithm comprises three essential steps: initial preprocessing, candidate object selection, and final seed-set identification. In the initial preprocessing phase, objects that are not suitable for advertising are eliminated; reducing the problem space minimizes computational overhead and reduces execution time.
Inspired by real-world advertising, we then select influential candidate objects based on their effective sociality rate, which accounts for both an object's sociality rate and the relevant selection-cost factors. By integrating these factors simultaneously, our algorithm enables organizations to reach a broader audience at a lower cost. Finally, when identifying the final seed set, the algorithm considers the overlap between the neighborhoods of candidate objects, which helps minimize the cost of spreading duplicate advertisements. Experimental evaluations on both real-world and synthetic networks demonstrate superior performance compared to other state-of-the-art algorithms: the proposed method achieves better influence spread, reduces advertising costs by a factor of 2 to 3, and reduces duplicate advertising. Additionally, the running time of the IMCMoT algorithm is acceptable, further highlighting its practicality and efficiency.
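The final seed-set step described above, balancing reach against cost while discounting audience already covered by earlier picks, can be sketched as a greedy cost-aware selection. This is an illustrative approximation of the overlap check, not the IMCMoT algorithm itself; the candidate objects, audiences, costs, and budget are hypothetical.

```python
def greedy_seeds(candidates, budget):
    """Repeatedly pick the candidate with the best ratio of *newly*
    reached audience to cost, so audience already covered by chosen
    seeds (neighborhood overlap) contributes nothing."""
    covered, seeds, spent = set(), [], 0.0
    while True:
        best, best_gain = None, 0.0
        for name, (audience, cost) in candidates.items():
            if name in seeds or spent + cost > budget:
                continue
            gain = len(audience - covered) / cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            break
        seeds.append(best)
        audience, cost = candidates[best]
        covered |= audience
        spent += cost
    return seeds, covered

# hypothetical objects: (reachable audience, advertising cost)
candidates = {
    "cam":   ({1, 2, 3, 4}, 2.0),
    "phone": ({3, 4, 5},    1.0),
    "watch": ({1, 2},       1.5),
}
seeds, covered = greedy_seeds(candidates, budget=3.0)
print(seeds, sorted(covered))
```

Note how "cam" is skipped even though it reaches the most users on its own: after "phone" is chosen, half of cam's audience is duplicate coverage, so the cheaper non-overlapping "watch" wins, the same trade-off that lets the paper's method cut advertising cost while avoiding duplicate ads.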