Last updated: 2025-07-03 03:01 UTC
All documents
Number of pages: 142
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Wenxian Li, Pingang Cheng, Yong Feng, Nianbo Liu, Ming Liu, Yingna Li | A Blockchain-Assisted Hierarchical Data Aggregation Framework for IIoT With Computing First Networks | 2025 | Early Access | Industrial Internet of Things Blockchains Data privacy Data aggregation Cloud computing Collaboration Servers Edge computing Resource management Protection Industrial Internet of Things (IIoT) computing first networks (CFN) secure data aggregation blockchain | With an increasing number of sensor devices connected to industrial systems, the efficient and reliable aggregation of sensor data has become a key topic in the Industrial Internet of Things (IIoT). Computing First Networks (CFN) are emerging as a promising technology for aggregating vast quantities of IIoT data. However, existing CFN data collection frameworks are usually centralized: they rely heavily on third-party trusted authorities and fail to fully schedule and utilize limited computing resources. More critically, such centralized designs are prone to trust and security issues. In this paper, considering the heterogeneity and data security in complex industrial scenarios, we propose a blockchain-based and multi-edge CFN collaborative IIoT data hierarchical collection framework (ME-CIDC) to collect massive IIoT data securely and efficiently. In ME-CIDC, a blockchain-driven resource allocation algorithm is proposed for the inter-domain CFN, which achieves distributed and efficient task scheduling and data collection by constructing multiple blockchains. A self-incentive mechanism is designed to encourage inter-domain nodes to contribute resources and support the operation of the inter-domain CFN. We also propose an efficient double-layered data aggregation algorithm, which distributes computational tasks across two layers to ensure the efficient collection and aggregation of IIoT data. Extensive simulation and numerical results demonstrate the effectiveness of our proposed scheme. | 10.1109/TNSM.2025.3563237 |
Mani Shekhar Gupta, Akanksha Srivastava, Krishan Kumar | PORA: A Proactive Optimal Resource Allocation Framework for Spectrum Management in Cognitive Radio Networks | 2025 | Early Access | Resource management Switches Sensors Data communication Autonomous aerial vehicles Optimization Interference Wireless sensor networks Vehicle dynamics Throughput Cognitive radio networks spectrum management proactive resource allocation next generation networks intelligent transportation system | Cognitive radio networks with proactive resource allocation, which identify unused spectrum bands and utilize them opportunistically, are regarded as an evolving technology for handling the spectrum scarcity problem. However, it is challenging to accurately predict the availability of unused resources, owing to the randomness of licensed-user appearance and high mobility, while also keeping sensing time to a minimum. To address this issue, we mathematically model metrics such as resource availability probability, resource allocation time, throughput, connection continuance probability, and the expected number of network switchings to propose a proactive optimal resource allocation (PORA) technique. Furthermore, the performance of the proposed PORA technique is analysed under different traffic environments, namely low, moderate, and high traffic. The results show that the proposed PORA technique addresses the challenges of proactive resource allocation more effectively than traditional techniques. | 10.1109/TNSM.2025.3540717 |
Mahdi Nouri, Sima Sobhi-Givi, Hamid Behroozi, Mahrokh G. Shayesteh, Md. Jalil Piran, Zhiguo Ding | Joint Slice Resource Allocation and Hybrid Beamforming with Deep Reinforcement Learning for NOMA based Vehicular 6G Communications | 2025 | Early Access | Ultra reliable low latency communication Resource management NOMA Millimeter wave communication Array signal processing Hybrid power systems Reliability Network slicing Optimization Spectral efficiency Network slicing Non-orthogonal multiple access (NOMA) mmWave Energy Efficiency (EE) Fairness Reinforcement learning (RL) | The escalating demand for high data rates and dependable communications in forthcoming wireless networks has led to the exploration of innovative solutions. Among these, the fusion of millimeter-wave (mmWave) communications and non-orthogonal multiple access (NOMA) holds significant promise. This paper delves into the optimization of hybrid mmWave-NOMA networks coexisting with Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications, accommodating ultra-reliable low-latency communication (uRLLC) and enhanced mobile broadband (eMBB) services. Our aim is to maximize system spectral efficiency (SE) and energy efficiency (EE) while ensuring fairness among users in terms of signal-to-leakage-and-noise ratio (SLNR). To tackle the intricate optimization challenges, we decompose them into two sequential sub-problems: power allocation and beamforming design, alongside resource block (RB) allocation. We advocate the application of deep reinforcement learning (DRL) algorithms to jointly optimize these sub-problems. Extensive evaluations demonstrate that RL-based approaches effectively enhance SE, EE, fairness, and resource utilization in the mmWave-NOMA network. | 10.1109/TNSM.2025.3561251 |
Latif U. Khan, Waseem Ullah, Sami Muhaidat, Mohsen Guizani, Bechir Hamdaoui | Block Successive Upper-Bound Minimization for Resource Scheduling in Wireless Metaverse | 2025 | Early Access | Metaverse Sensors Costs Wireless sensor networks Resource management Machine learning Hands Wireless networks Performance evaluation Synchronization Metaverse convex optimization digital twins resource optimization | In recent years, there has been a rising trend towards emerging applications (e.g., brain-computer interaction and haptics-based autonomous cars) with diverse requirements. To effectively enable these applications via autonomous operation and intelligent analytics, one can use a metaverse (for more details on how a metaverse can enable emerging applications, and on its architecture, please refer to khan2024ametaverse). In a metaverse, we have two spaces: (a) a meta space based on a virtual model that performs analysis and resource management, and (b) a physical space composed of real-world entities. A metaverse effectively enables emerging applications by performing three main tasks: (a) distributed learning of metaverse models; (b) instantly serving the end-users; and (c) sensing of the physical environment and sharing it with the meta space for synchronized operation. To perform these tasks, efficient wireless resource management is needed. Therefore, a novel resource scheduling framework for the wireless metaverse to enable various applications is proposed. Our aim is to minimize the cost of learning and sensing in the metaverse, and we formulate the corresponding problem such that the reliability and latency constraints of the service-requesting devices/users are fulfilled. We assign multiple resource blocks to learning and sensing devices/units, whereas we use a concept of puncturing for service-requesting devices/users upon arrival. We use a scheme that is based on block successive upper-bound minimization and convex optimization for solving our formulated problem. Finally, we use empirical cumulative distribution function vs. cost and cost vs. metaverse entities for numerical evaluations. | 10.1109/TNSM.2025.3562516 |
Akio Kawabata, Sanetora Hiragi, Bijoy Chand Chatterjee, Eiji Oki | Distributed Processing Network Design Scheme for Virtual Application Processing Platform | 2025 | Early Access | Delays Synchronization Servers Distributed processing Memory management Cloud computing Switches Games Network topology Middleware Delay sensitive service distributed processing middleware optimistic synchronization conservative synchronization | Delay-sensitive applications have been provided through a low-delay network utilizing multiple edge clouds. For applications that involve sharing status among multiple users, it is crucial to prevent longer communication delays for users who are farther from the application server compared to those who are closer. To address this issue, this paper proposes a distributed processing network design scheme for virtual processing platforms using low-delay networks and widely distributed servers. The proposed scheme introduces Tapl as a given parameter for correcting event order: events arriving within a delay of Tapl are sorted in order of occurrence. The proposed scheme can change its operation mode from conservative synchronization to optimistic synchronization depending on the setting of Tapl. The proposed scheme is formulated as a mixed-integer linear programming problem to determine the distributed processing network configuration of users and servers. We evaluate the proposed scheme on two different network topologies. Numerical results indicate that, depending on the setting of Tapl, the proposed scheme can reduce the maximum amount of memory used for rollback processes in optimistic synchronization-based applications or realize a conservative synchronization algorithm. The computation time under the condition of 1000 users is at most nine seconds, an acceptable amount of time for preparation before starting a planned service. These results indicate that the proposed scheme realizes event order correction with excellent delay characteristics and applies to virtual processing platforms. | 10.1109/TNSM.2025.3562208 |
Fangyu Zhang, Yuang Chen, Hancheng Lu, Yongsheng Huang | Network-Aware Reliability Modeling and Optimization for Microservice Placement | 2025 | Early Access | Reliability Software reliability Microservice architectures Hardware Load modeling Routing Software Bandwidth Computational modeling Approximation algorithms Microservice Placement Reliability Model Network State Fault Tolerance Shared Backup Path | Optimizing microservice placement to enhance the reliability of services is crucial for improving the service level of microservice architecture-based mobile networks and Internet of Things (IoT) networks. Despite extensive research on service reliability, the impact of network load and routing on service reliability remains understudied, leading to suboptimal models and unsatisfactory performance. To address this issue, we propose a novel network-aware service reliability model that effectively captures the correlation between network state changes and reliability. Based on this model, we formulate the microservice placement problem as an integer nonlinear programming problem, aiming to maximize service reliability. Subsequently, a service reliability-aware placement (SRP) algorithm is proposed to solve the problem efficiently. To reduce bandwidth consumption, we further discuss the microservice placement problem with the shared backup path mechanism and propose a placement algorithm based on the SRP algorithm using shared path reliability calculation, known as the SRP-S algorithm. Extensive simulations demonstrate that the SRP algorithm reduces service failures by up to 22% compared to the benchmark algorithms. By introducing the shared backup path mechanism, the SRP-S algorithm reduces bandwidth consumption by up to 64% compared to the SRP algorithm with the fully protected path mechanism. It also reduces service failures by up to 11% compared to the SRP algorithm with the shared backup mechanism. | 10.1109/TNSM.2025.3562913 |
Yi-Huai Hsu, Chen-Fan Chang, Chao-Hung Lee | A DRL Based Spectrum Sharing Scheme for multi-MNO in 5G and Beyond | 2025 | Early Access | Resource management 5G mobile communication Long Term Evolution Games Telecommunication traffic Pricing Internet of Things Deep reinforcement learning Wireless fidelity Training 5G spectrum sharing mobile network operator deep reinforcement learning | In spectrum pooling, which is a well-known technique of spectrum sharing, the initial licensed spectrum of each Mobile Network Operator (MNO) is partitioned into reserved and shared spectrum. The reserved spectrum is for the personal use of an MNO, and the shared spectrum of all MNOs constitutes a spectrum pool that can be flexibly utilized by MNOs that require extra spectrum. Nevertheless, the management of the spectrum pool substantially impacts the spectrum efficiency among these MNOs. In this paper, we formulate this problem as a non-linear programming problem that strives to maximize the average binary scale satisfaction (BSS) of MNOs. To achieve this objective, we introduce an event-driven deep reinforcement learning-based spectrum management scheme, termed EDRL-SMS. This approach adopts a spectrum pool manager (SPM) that efficiently supervises the spectrum pool to achieve long-term optimization of network performance. The SPM smartly allocates spectrum resources by fully utilizing a DRL approach, Deep Deterministic Policy Gradient, for each stochastically arriving spectrum request event. The simulation results show that the average BSS of MNOs achieved by the proposed EDRL-SMS significantly outperforms that of our previous work, Bankruptcy Game-based Resource Allocation (BGRA), as well as greedy, random, and no-sharing schemes. | 10.1109/TNSM.2025.3562968 |
Awaneesh Kumar Yadav, Shalitha Wijethilaka, Madhusanka Liyanage | Blockchain-Based Cross-Operator Network Slice Authentication Protocol for 5G Communication | 2025 | Early Access | Authentication Protocols Security 5G mobile communication Blockchains Base stations Switches Network slicing Hospitals Costs Authentication Network Slicing Cross-Operator Blockchain | Network slicing enables the facilitation of diverse network requirements of different applications over a single physical network. Due to concepts such as Local 5G Operators (L5GOs), Mobile Virtual Network Operators (MVNOs), and high-frequency utilization of 5G and beyond networks, users need to switch among different network slices and different operators more frequently than in traditional networks. Even though a few studies have been conducted on cross-network-slice authentication, cross-operator network slice authentication is still a largely unexplored research area. Moreover, the proposed cross-network-slice authentication frameworks possess several limitations, such as vulnerability to severe attacks, high cost, a central point of failure, and the inability to support cross-operator network slice authentication. Therefore, in this research, we develop a blockchain-based network slice authentication framework that supports both cross-slice and cross-operator authentication, i.e., authentication across different network slices within the same operator as well as across different operators. The security properties of the proposed protocols are validated through formal analysis (using Real-Or-Random logic, Scyther, and the AVISPA validation tool) and informal security analysis. A comparative analysis against known and unknown attacks demonstrates the framework’s efficacy in terms of communication, computational, storage, and energy consumption costs. Also, a sample prototype of the protocols is implemented along with state-of-the-art protocols to evaluate the performance of our framework. | 10.1109/TNSM.2025.3562874 |
Jiasong Li, Yunhe Cui, Yi Chen, Guowei Shen, Chun Guo, Qing Qian | The DUDFTO Attack: Towards Down-to-UP Timeout Probing and Dynamically Flow Table Overflowing in SDN | 2025 | Early Access | Probes Heuristic algorithms Inference algorithms Accuracy Delays Statistical analysis Interference Streams Process control Training SDN switches flow table overflow attack information probing | The decoupling of the control plane and the forwarding plane has made Software-Defined Networking (SDN), a new network architecture, widely used in large-scale network scenarios. However, this decoupled architecture also introduces new vulnerabilities. The flow table overflow attack is an attack strategy that can overwhelm SDN switches. Nevertheless, existing flow table overflow attacks may fail to probe the timeouts and match fields of flow entries, due to link failures, reliance on round-trip time (RTT) measurements of different packets, and interference between hard-timeout and idle-timeout. Meanwhile, the stealthiness of existing attacks is also reduced, as they use a fixed attack rate. To improve timeout probing accuracy and attack stealthiness, a new flow table overflow attack strategy, DUDFTO, is proposed to accurately probe timeout settings and match fields and then stealthily overflow SDN flow tables. Firstly, it probes the match fields by measuring the one-sided transmission delay of the packets. After that, DUDFTO designs a down-to-up feedback-based timeout probing algorithm to eliminate the issues caused by high RTT, link failures, and interference between hard-timeout and idle-timeout. Then, DUDFTO designs a dynamic attack-packet sending algorithm to improve its stealthiness. Finally, DUDFTO probes the flow table state to stop sending new attack packets. The evaluation results demonstrate that DUDFTO outperforms existing attacks in terms of match field probing ability, timeout probing relative error, number of packet_in and flow_mod messages generated by the attack, rate distribution of packet_in and flow_mod messages generated during the attack, and number of detected attack packets. | 10.1109/TNSM.2025.3574260 |
Kaijie Wang, Zhicheng Bao, Kaijun Liu, Haotai Liang, Chen Dong, Xiaodong Xu, Lin Li | Adaptive Bitrate Video Semantic Increment Transmission System Based on Buffer and Semantic Importance | 2025 | Early Access | Streaming media Bit rate Semantic communication Encoding Heuristic algorithms Fluctuations Feature extraction Data mining Deep learning Decoding Video semantic communication increment transmission adaptive bitrate algorithm buffer | Significant progress has been made in researching video semantic communication technology and adaptive bitrate (ABR) algorithms. However, wireless network fluctuations make it difficult for video semantic communication systems without ABR algorithms to achieve a satisfactory balance between high semantic recovery accuracy and efficient bandwidth utilization. To address this issue, this paper proposes an adaptive bitrate video semantic increment transmission system based on buffer and semantic importance. Firstly, a buffer-based video semantic increment transmission system is designed to dynamically adjust the amount of video semantic data transmitted by the transmitter based on network fluctuations. Then, a novel Deep Learning and Reinforcement Learning based ABR algorithm (DR-ABR) is developed to determine the optimal video incremental ratio under the current network conditions. Furthermore, a semantic feature compression technology based on semantic importance is proposed to compress the video data according to the abovementioned ratio. Experimental results demonstrate that the proposed method outperforms traditional approaches in terms of video semantic transmission performance. | 10.1109/TNSM.2025.3563257 |
Fuhao Yang, Hua Wu, Xiangyu Zheng, Jinfeng Chen, Xiaoyan Hu, Jing Ren | A Detection Scheme for Multiplexed Asymmetric Workload DDoS Attacks in High-Speed Networks | 2025 | Early Access | Feature extraction High-speed networks Denial-of-service attack Multiplexing HTTP Cryptography Computer crime Web servers Routing Real-time systems HTTP/2 Multiplexed Asymmetric Workload DDoS high-speed network intrusion detection | The asymmetric workload attack is an application layer attack that aims to exhaust the Central Processing Unit (CPU) resources of a server. Some attackers exploit new features of the Hypertext Transfer Protocol version 2 (HTTP/2) to launch Multiplexed Asymmetric Workload DDoS (MAWD) attacks using a small number of bots, which can cause denial of service on HTTP/2 servers. Data centers in high-speed networks host a large number of web applications. However, most detection methods for asymmetric workload attacks rely on request semantic analysis, which cannot be applied to encrypted MAWD attack traffic in high-speed networks. Moreover, traditional rate-based DDoS detection methods are ineffective in detecting MAWD because MAWD attacks use legitimate HTTP requests and HTTP/2 traffic is bursty in nature. This paper proposes a practical scheme to detect MAWD attacks in high-speed networks. We construct an effective feature set based on the characteristics of MAWD attacks in high-speed networks and design MAWD-HashTable (MAWD-HT) to extract features quickly. Experimental results on real traffic traces with speeds reaching Gbps demonstrate that our scheme can detect MAWD attacks within 3 seconds, with a recall rate of more than 99%, a false positive rate (FPR) of less than 0.1%, and acceptable resource consumption. | 10.1109/TNSM.2025.3563538 |
Yating Li, Le Wang, Liang Xue, Jingwei Liu, Xiaodong Lin | Efficient and Privacy-Enhancing Non-Interactive Periocular Authentication for Access Control Services | 2025 | Early Access | Authentication Servers Privacy Homomorphic encryption Accuracy Face recognition Security Protocols Faces Vectors Secure authentication privacy preservation access control service encrypted cosine similarity matching | Periocular authentication has emerged as an increasingly prominent approach in access control services, especially in situations of face occlusion. However, its limited feature area and low information complexity make it susceptible to tampering and identity forgery. To address these vulnerabilities, we propose a practical privacy-enhancing non-interactive periocular authentication scheme for access control services. It enables encrypted cosine similarity matching on an untrusted remote authentication server by leveraging random masking (RM) and dual-trapdoor homomorphic encryption (DTHE). Additionally, a weight control matrix is introduced to enhance authentication accuracy by assigning importance to different feature dimensions. To accommodate devices with varying trust levels, we employ adaptive authentication strategies. For trusted mobile devices, we implement secure single-factor authentication based on periocular features, while for external devices with unknown security status, we enforce a two-factor authentication mechanism integrating periocular features with tokens to mitigate unauthorized access risks. Additionally, our scheme conceals users’ true identities from external devices during authentication. Security analysis demonstrates that our solution effectively mitigates tampering and replay attacks in the network while preventing privacy leakage. As validated by experimental results, the proposed scheme enables efficient authentication of obscured faces. | 10.1109/TNSM.2025.3563607 |
Junfeng Tian, Yiting Wang, Yue Shen | A Security-Enhanced Certificateless Anonymous Authentication Scheme with High Computational Efficiency for Mobile Edge Computing | 2025 | Early Access | Security Servers Authentication Computational efficiency Costs Protocols Mobile handsets Cloud computing Low latency communication Faces Authentication and key agreement (AKA) mobile edge computing (MEC) certificateless cryptography unlinkability full anonymity | Mobile Edge Computing (MEC), as a new computing paradigm, provides high-quality and low-latency services for mobile users and also reduces the load on cloud servers. However, MEC faces security threats such as data leakage, privacy leakage, and unauthorized access. To cope with these threats, many researchers have designed a series of identity-based authentication and key agreement (ID-AKA) schemes for MEC environments. However, these schemes have drawbacks such as key escrow issues, lack of unlinkability and full anonymity, use of time-consuming bilinear pairing operations, and insufficiently secure static public-private key pairs. To address these drawbacks, we propose a certificateless anonymous authentication scheme for MEC with enhanced security and high computational efficiency. The scheme achieves unlinkability and full anonymity by using one-time pseudonyms generated by a tamper-proof device (TPD). The scheme also solves the key escrow problem and uses one-time public-private key pairs for authentication, thus enhancing key security and communication security. In addition, the scheme eliminates bilinear pairing operations and precomputes some time-consuming operations in the TPD, thus optimizing computational efficiency. Finally, we perform a security analysis and performance evaluation of the scheme. The results show that, compared with other competing schemes, our scheme achieves the best computational efficiency and moderate communication costs, as well as significant security advantages. | 10.1109/TNSM.2025.3563637 |
Nicola Di Cicco, Gaetano Francesco Pittalà, Gianluca Davoli, Davide Borsatti, Walter Cerroni, Carla Raffaelli, Massimo Tornatore | Scalable and Energy-Efficient Service Orchestration in the Edge-Cloud Continuum With Multi-Objective Reinforcement Learning | 2025 | Early Access | Energy consumption Training Resource management Servers Computational modeling Optimization Scalability Delays Numerical models Energy efficiency Service Orchestration Edge-Cloud Continuum Multi-Objective Reinforcement Learning Energy Profiling | The Edge-Cloud Continuum represents a paradigm shift in distributed computing, seamlessly integrating resources from cloud data centers to edge devices. However, orchestrating services across this heterogeneous landscape poses significant challenges, as it requires finding a delicate balance between different (and competing) objectives, including service acceptance probability, offered Quality-of-Service, and network energy consumption. To address this challenge, we propose leveraging Multi-Objective Reinforcement Learning (MORL) to approximate the full Pareto Front of service orchestration policies. In contrast to conventional solutions based on single-objective RL, a MORL approach allows a network operator to inspect all possible “optimal” trade-offs, and then decide a posteriori on the orchestration policy that best satisfies the system’s operational requirements. Specifically, we first conduct an extensive measurement study to accurately model the energy consumption of heterogeneous edge devices and servers under various workloads, alongside the resource consumption of popular cloud services. Then, we develop a set-based MORL policy for service orchestration that can adapt to arbitrary network topologies without the need for retraining. Illustrative numerical results against selected heuristics show that our MORL policy outperforms baselines by 30% on average over a broad set of objective preferences, and generalizes to network topologies up to 5x larger than training. | 10.1109/TNSM.2025.3574131 |
Shashank Motepalli, Hans-Arno Jacobsen | Decentralization in PoS Blockchain Consensus: Quantification and Advancement | 2025 | Early Access | Consensus protocol Blockchains Measurement Safety Indexes Adaptation models Security Probabilistic logic Bitcoin Analytical models Blockchains Decentralized applications | Decentralization is a foundational principle of permissionless blockchains, with consensus mechanisms serving a critical role in its realization. This study quantifies the decentralization of consensus mechanisms in proof-of-stake (PoS) blockchains using a comprehensive set of metrics, including Nakamoto coefficients, Gini, Herfindahl-Hirschman Index (HHI), Shapley values, and Zipf’s coefficient. Our empirical analysis across ten prominent blockchains reveals significant concentration of stake among a few validators, posing challenges to fair consensus. To address this, we introduce two alternative weighting models for PoS consensus: Square Root Stake Weight (SRSW) and Logarithmic Stake Weight (LSW), which adjust validator influence through non-linear transformations. Results demonstrate that SRSW and LSW models improve decentralization metrics by an average of 51% and 132%, respectively, supporting more equitable and resilient blockchain systems. | 10.1109/TNSM.2025.3561098 |
Elham Amini, Jelena Mišić, Vojislav B. Mišić | Paxos With Priorities for Blockchain Applications | 2025 | Early Access | Proposals Protocols Voting Fault tolerant systems Fault tolerance Consensus algorithm Consensus protocol Computer crashes Queueing analysis Delays Paxos consensus preemptive queues with priorities blockchain technology decentralized consensus aging mechanism | Paxos is a well-known protocol for state machine replication and consensus in the face of crash faults. However, it suffers from inefficiencies in request handling, particularly in scenarios requiring preemptive prioritization. To address this, we propose a priority-aware extension similar to MultiPaxos, evaluate its performance using a queuing model, and show improvements in performance metrics such as mean completion and waiting times. Our results demonstrate that integrating prioritization mechanisms into Paxos reduces latency for high-priority requests while ensuring fairness. The aging-based approach maintains correctness of the consensus process while adding flexibility to manage time-sensitive distributed applications such as permissioned blockchains. | 10.1109/TNSM.2025.3574581 |
Jin-Xian Liu, Jenq-Shiou Leu | ETCN-NNC-LB: Ensemble TCNs With L-BFGS-B Optimized No Negative Constraint-Based Forecasting for Network Traffic | 2025 | Early Access | Telecommunication traffic Forecasting Predictive models Data models Convolutional neural networks Long short term memory Ensemble learning Complexity theory Accuracy Overfitting Deep learning ensemble learning network traffic prediction temporal convolutional network (TCN) time series forecasting | With the increasing demand for internet access and the advent of technologies such as 6G and IoT, efficient and dynamic network resource management has become crucial. Accurate network traffic prediction plays a pivotal role in this context. However, existing prediction methods often struggle with challenges such as complexity-accuracy trade-offs, limited data availability, and diverse traffic patterns, especially in coarse-grained forecasting. To address these issues, this article proposes ETCN-NNC-LB, which is a novel ensemble learning method for network traffic forecasting. ETCN-NNC-LB combines Temporal Convolutional Networks (TCNs) with No Negative Constraint Theory (NNCT) weight integration in ensemble learning and is optimized using the Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints (L-BFGS-B) algorithm. This method balances model complexity and accuracy, mitigates overfitting risks, and flexibly aggregates predictions. The ETCN-NNC-LB model also incorporates a pattern-handling method to forecast traffic behaviors robustly. Experiments on a real-world dataset demonstrate that ETCN-NNC-LB significantly outperforms state-of-the-art methods, achieving an approximately 22% reduction in the Root Mean Square Error (RMSE). The proposed method provides accurate and efficient network traffic prediction in dynamic, data-constrained environments. | 10.1109/TNSM.2025.3563978 |
Ying-Dar Lin, Yin-Tao Ling, Yuan-Cheng Lai, Didik Sudyana | Reinforcement Learning for AI as a Service: CPU-GPU Task Scheduling for Preprocessing, Training, and Inference Tasks | 2025 | Early Access | Artificial intelligence Graphics processing units Training Computer architecture Optimal scheduling Scheduling Real-time systems Complexity theory Resource management Inference algorithms AI as a Service CPU GPU Task Scheduling Reinforcement Learning Deep Q-Learning | The rise of AI solutions has driven the emergence of AI as a Service (AIaaS), offering cost-effective and scalable solutions by outsourcing AI functionalities to specialized providers. Within AIaaS, three key components are essential: segmenting AI services into preprocessing, training, and inference tasks; utilizing GPU-CPU heterogeneous systems where GPUs handle parallel processing and CPUs manage sequential tasks; and minimizing latency in a distributed architecture consisting of cloud, edge, and fog computing. Efficient task scheduling is crucial to optimize performance across these components. To enhance task scheduling in AIaaS, we propose a user-experience-and-performance-balanced reinforcement learning (UXP-RL) algorithm. The UXP-RL algorithm considers 11 factors, including queuing task information; it then estimates resource release times and observes previous action outcomes to select the optimal AI task for execution on either a GPU or CPU. This method effectively reduces the average turnaround time, particularly for rapid inference tasks. Our experimental findings show that the proposed RL-based scheduling algorithm reduces average turnaround time by 27.66% to 57.81% compared to heuristic approaches such as SJF and FCFS. In a distributed architecture, utilizing distributed RL schedulers reduces the average turnaround time by 89.07% compared to a centralized scheduler. | 10.1109/TNSM.2025.3564480 |
Kun Lan, Gaolei Li, Wenkai Huang, Jianhua Li | HFL-RD: Heterogeneous Federated Learning-Empowered Ransomware Detection via APIs and Traffic Features | 2025 | Early Access | Ransomware Encryption Market research Federated learning Feature extraction Cryptography Telecommunication traffic Servers Organizations Monitoring Ransomware detection heterogeneous federated learning convolutional neural networks | Ransomware has evolved into a more organized attack threat with stronger anti-detection and anti-analysis capabilities, resulting in significant global losses. However, traditional methods treat the external and internal behaviors of ransomware infiltrating attack targets separately, making it difficult to discover the complex and covert evolution and iteration characteristics of advanced ransomware. The main contribution of this study lies in three aspects: a) the integration of Command-and-Control (C&C) traffic behavior analysis and local API call operation analysis can effectively discern and capture the concealed characteristics of ransomware; b) the non-IID problem in aggregating ransomware features using federated learning can be resolved using dynamic regularization methods and penalty terms; c) by preprocessing the original data of ransomware traffic through one-dimensional convolution, the structural characteristics of network traffic during attack operations can be retained to the greatest extent. Comprehensive experiments are conducted to validate the effectiveness of this model; specifically, the heterogeneous federated learning-empowered ransomware detection (HFL-RD) scheme outperforms existing methods. The experimental dataset gathers runnable ransomware from three public websites, comprising 300 ransomware samples from 30 families and 200 benign software samples from 7 categories. HFL-RD achieves an accuracy of over 95%. In detecting unknown ransomware variants, it demonstrates superior capability in terms of detection time and the number of corrupted files. | 10.1109/TNSM.2025.3574716 |
Numidia Zaidi, Sylia Zenadji, Mohamed Amine Ouamri, Daljeet Singh, F. Hamida Abdelhak | CARRAS: Combined AOA and RBFNN for Resource Allocation in Single Site 5G Network | 2025 | Early Access | Resource management Throughput 5G mobile communication Artificial neural networks Optimization Wireless communication Signal to noise ratio Quality of service Delays Wireless networks Resource allocation 5G Archimedes Optimization Algorithm Radial Basis Function Neural Network Throughput | 5G and future networks must manage flows with varying Quality of Service (QoS) requirements, even under unpredictable traffic conditions. As user requirements for network capacity evolve over time, it is crucial to allocate resources appropriately to maximize the efficiency of their application. Consequently, these demands are driving the creation of new resource management policies, as conventional methods are no longer sufficient to meet them effectively. Thus, we propose a framework for resource allocation at the radio access network (RAN) level, while taking into consideration throughput and delay probability. To solve the formulated problem and make it more tractable, the Archimedes Optimization Algorithm (AOA) combined with an artificial neural network (ANN) is introduced to explore the search space and find optimal solutions. Nevertheless, in a 5G environment, the interactions between users, services, and network resources are inherently complex and non-linear. To this end, the radial basis function (RBF) is then used to predict user needs and reallocate resources according to expected results. The simulation results show that the proposed approach has a significant advantage over traditional approaches such as Particle Swarm Optimization (PSO). To the best of our knowledge, this paper is the first attempt to study 5G resource allocation using a combination of AOA and RBFNN algorithms, and it adequately describes the approach. | 10.1109/TNSM.2025.3573797 |
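
A few of the entries above describe mechanisms compact enough to sketch. For the Li et al. entry on periocular authentication, the following plaintext Python sketch shows cosine-similarity matching with per-dimension importance weights (a diagonal simplification of the paper’s weight control matrix) and a threshold decision. It deliberately omits the encryption layer (random masking and dual-trapdoor homomorphic encryption); the feature dimension, weight values, and acceptance threshold are illustrative assumptions, not values from the paper.

```python
# Illustrative only: plaintext weighted cosine-similarity matching between a probe
# periocular feature vector and an enrolled template. The encrypted matching in the
# paper (random masking + dual-trapdoor homomorphic encryption) is not reproduced here.
import numpy as np

def weighted_cosine(x, y, w):
    """Cosine similarity after scaling each feature dimension by sqrt(w)."""
    xw, yw = x * np.sqrt(w), y * np.sqrt(w)
    return float(xw @ yw / (np.linalg.norm(xw) * np.linalg.norm(yw)))

rng = np.random.default_rng(1)
template = rng.normal(size=128)              # enrolled periocular feature vector (assumed dim)
probe = template + rng.normal(0, 0.3, 128)   # fresh capture of the same user, with noise
weights = rng.uniform(0.5, 1.5, 128)         # assumed per-dimension importance weights

THRESHOLD = 0.85                             # assumed acceptance threshold
score = weighted_cosine(probe, template, weights)
print("score:", round(score, 3), "accepted:", score >= THRESHOLD)
```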
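For the Motepalli and Jacobsen entry on PoS decentralization, the sketch below computes the stake-concentration metrics named in that abstract (Nakamoto coefficient, Gini coefficient, HHI) and applies square-root and logarithmic re-weighting in the spirit of SRSW/LSW. The 1/3 threshold, the log1p transform, and the sample stake values are assumptions for illustration; the paper’s exact definitions may differ.

```python
# Illustrative (not from the paper): stake-decentralization metrics and
# square-root / logarithmic re-weighting of validator stakes.
import math

def nakamoto_coefficient(stakes, threshold=1/3):
    """Minimum number of validators whose combined share exceeds `threshold`."""
    total = sum(stakes)
    acc = 0.0
    for i, s in enumerate(sorted(stakes, reverse=True), start=1):
        acc += s
        if acc / total > threshold:
            return i
    return len(stakes)

def gini(stakes):
    """Gini coefficient of the stake distribution (0 = perfectly equal)."""
    xs = sorted(stakes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def hhi(stakes):
    """Herfindahl-Hirschman Index of stake shares (1 = fully concentrated)."""
    total = sum(stakes)
    return sum((s / total) ** 2 for s in stakes)

def reweight(stakes, mode="srsw"):
    """Hypothetical SRSW/LSW transforms: sqrt(stake) or log(1 + stake)."""
    f = math.sqrt if mode == "srsw" else (lambda s: math.log1p(s))
    return [f(s) for s in stakes]

stakes = [4000, 2500, 1500, 800, 500, 300, 200, 100, 60, 40]  # toy validator stakes
for label, w in [("raw", stakes), ("SRSW", reweight(stakes, "srsw")), ("LSW", reweight(stakes, "lsw"))]:
    print(label, nakamoto_coefficient(w), round(gini(w), 3), round(hhi(w), 3))
```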
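For the Liu and Leu entry on ETCN-NNC-LB, the sketch below shows the general pattern of fitting ensemble weights with SciPy’s L-BFGS-B optimizer under box constraints, standing in for the paper’s NNCT-based weight integration. The RMSE objective, the weight bounds, and the synthetic base-model forecasts are assumptions, since the abstract does not specify them.

```python
# Illustrative (not the authors' code): fitting ensemble combination weights by
# minimizing RMSE with scipy's L-BFGS-B under assumed box constraints.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y_true = rng.normal(100, 10, size=200)               # placeholder traffic series
preds = np.stack([y_true + rng.normal(0, s, 200)     # three imperfect base forecasters
                  for s in (3.0, 5.0, 8.0)])

def rmse(w):
    combined = w @ preds                              # weighted sum of member forecasts
    return float(np.sqrt(np.mean((combined - y_true) ** 2)))

w0 = np.full(preds.shape[0], 1.0 / preds.shape[0])    # start from equal weights
res = minimize(rmse, w0, method="L-BFGS-B",
               bounds=[(-1.0, 2.0)] * preds.shape[0]) # assumed box constraints on weights
print("weights:", np.round(res.x, 3), "RMSE:", round(res.fun, 3))
```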