Last updated: 2025-09-04 05:01 UTC
All documents
Number of pages: 146
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Menna Helmy, Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad | Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services | 2025 | Early Access | Artificial intelligence Training Resource management Network slicing Computational modeling Optimization Quality of service Ultra reliable low latency communication 6G mobile communication Heuristic algorithms Network slicing online learning resource allocation 6G networks optimization | The forthcoming 6G networks will embrace a new realm of AI-driven services that requires innovative network slicing strategies, namely slicing for AI, which involves the creation of customized network slices to meet Quality of Service (QoS) requirements of diverse AI services. This poses challenges due to time-varying dynamics of users’ behavior and mobile networks. Thus, this paper proposes an online learning framework to determine the allocation of computational and communication resources to AI services, to optimize their accuracy as one of their unique key performance indicators (KPIs), while abiding by resources, learning latency, and cost constraints. We define a problem of optimizing the total accuracy while balancing conflicting KPIs, prove its NP-hardness, and propose an online learning framework for solving it in dynamic environments. We present a basic online solution and two variations employing a pre-learning elimination method for reducing the decision space to expedite the learning. Furthermore, we propose a biased decision space subset selection by incorporating prior knowledge to enhance the learning speed without compromising performance and present two alternatives of handling the selected subset. Our results depict the efficiency of the proposed solutions in converging to the optimal decisions, while reducing decision space and improving time complexity. Additionally, our solution outperforms State-of-the-Art techniques in adapting to diverse environmental dynamics and excels under varying levels of resource availability. | 10.1109/TNSM.2025.3603391 |
Feng Liu, Zhenyu Li, Chunfang Yang, Daofu Gong, Fenlin Liu, Rui Ma, Adrian G. Bors | BotCF: Improving the Social bot Detection Performance by Focusing on the Community Features | 2025 | Early Access | Chatbots Feature extraction Social networking (online) Semantics Data mining Blogs Training Image edge detection Graph convolutional networks Focusing Social Bot Detection Sybil Detection Community Structure Graph Convolutional Network | Various malicious activities performed by social bots have brought a crisis of trust to online social networks. Existing social bot detection methods often overlook the significance of community structure features and effective fusion strategies for multimodal features. To counter these limitations, we propose BotCF, a novel social bot detection method that incorporates community features and utilizes cross-attention fusion for multimodal features. In BotCF, we extract community features using a community division algorithm based on deep autoencoder-like non-negative matrix factorization. These features capture the social interactions and relationships within the network, providing valuable insights for bot detection. Furthermore, we employ cross-attention fusion to integrate the features of the account’s semantic content, properties, and community structure. This fusion strategy allows the model to learn the interdependencies between different modalities, leading to a more comprehensive representation of each account. Extensive experiments conducted on three publicly available benchmark datasets (Twibot20, Twibot22, and Cresci-2015) demonstrate the effectiveness of BotCF. Compared to state-of-the-art social bot detection models, BotCF achieves significant improvements in accuracy, with an average increase of 1.86%, 1.67%, and 0.47% on the respective datasets. The detection accuracy is boosted to 86.53%, 81.33%, and 98.21%, respectively. | 10.1109/TNSM.2025.3600474 |
Alisson Medeiros, Antonio Di Maio, Torsten Braun | FLATWISE: Flow Latency and Throughput Aware Sensitive Routing for 6DoF VR over SDN | 2025 | Early Access | Routing Throughput Videos Routing protocols Heuristic algorithms Quality of service Computational modeling Classification algorithms Approximation algorithms Videoconferences Virtual Reality Six Degrees of Freedom End-to-end Latency Routing Protocols Multi-access Edge Computing | The next generation of Virtual Reality (VR) applications is expected to provide advanced experiences through Six Degree-of-Freedom (6DoF) technology. However, 6DoF VR applications require latency and throughput guarantees. This article presents a novel intra-domain routing algorithm with throughput guarantees for minimizing the overall end-to-end (E2E) latency for all flows deployed in the network. We investigate the Joint Flow Allocation (JFA) problem to find paths for all flows in a network such that it determines the optimal path for each flow in terms of throughput and latency. The JFA problem is NP-hard. We use mixed integer linear programming to model the system, along with a heuristic, Flow Latency and Throughput Aware Sensitive Routing (FLATWISE), which is one order of magnitude faster than optimally solving the JFA problem. FLATWISE introduces an adaptive routing approach that dynamically adjusts the path calculation based on E2E latency. The novelty of FLATWISE lies in its unique ability to precisely tune the routing path by either constraining or relaxing the path criteria to align the E2E latency of the selected path with the latency demands of each VR flow. This approach ensures that the latency of the calculated path approximates the latency of each VR flow, enabling more flexible and efficient network routing to meet diverse latency requirements. Our evaluation considers 6DoF VR application flows, which demand high throughput and ultra-low E2E latency. Extensive simulations demonstrate that FLATWISE significantly reduces flow latency, over-provisioned latency, E2E latency, and algorithm execution time when network flows are processed randomly. FLATWISE improves flow throughput and frame rate compared to related work approaches. | 10.1109/TNSM.2025.3600365 |
Menaka Pushpa Arthur, Ganesan Ramachandran, Keshav Sood, Pavan Kaarthik, Srivarshinee Sridhar, Morshed Chowdhury | Empirical Study of Hierarchical Intrusion Detection Systems for Unknown Attacks | 2025 | Early Access | Classification algorithms Adaptation models Intrusion detection Training Machine learning algorithms Accuracy 5G mobile communication Optimization Federated learning Computational modeling Intrusion Detection System zero-day attack open set problems machine learning | The attack detection models of traditional Intrusion Detection Systems (IDSs) are trained on closed-set problems, which reduces the classifiers’ performance in detecting unknown attacks in the open-set problem space. The commonly adopted one-shot learning in the classifier does not allow the traditional IDS to be an open-set recognizer. The alternative continuous learning-based IDS for unknown attack detection requires ongoing input from experts to retrain the model with newly identified samples. Hence, using a multi-layer hierarchical IDS (HIDS) with optimized classifier models, unknown attacks can be classified by comparing their patterns with those of benign and known attacks. However, although the existing HIDS provides a solid foundation in this design category for unknown attack identification, we have identified many challenges with it on various datasets. As a result, in this paper, we design an enhanced multi-tier IDS for zero-day attack detection with optimized heterogeneous classifiers in its two major phases, as the basic framework demands. We have examined the proposed enhanced hierarchical IDS on various benchmark intrusion detection datasets, such as WUSTL, CIC_IDS_2017, 5G, and UNR, to analyze its efficiency in unknown attack classification. Compared to the existing multi-tier IDS, the proposed IDS achieved the highest unknown-attack detection rates of 96.2%, 87%, 96.8%, and 100% on the 5G, WUSTL, UNR, and CIC_IDS_2017 datasets, respectively. The optimized model in the proposed IDS reduces the time complexity by 50% compared to the existing one. Implementation results show that the proposed enhanced IDS performs better than the existing hierarchical IDS, with a high true positive rate for benign, known, and unknown attack labels on various datasets. | 10.1109/TNSM.2025.3600378 |
Peng Qin, Yang Fu, Zhigang Yu, Jing Zhang, Zhou Lu | Cross-Domain Resource Allocation for Information Retrieving Delay Minimization in UAV-Empowered 6G Network | 2025 | Early Access | Sensors Autonomous aerial vehicles Resource management Delays 6G mobile communication Optimization Servers Minimization Integrated sensing and communication Wireless networks UAV-empowered 6G network cross-domain resource allocation integrated sensing communication caching and computing | The deep integration of sensing, communication, caching, and computation (SC3) is emerging as a key feature of 6G network, designed to support ubiquitous intelligent applications and enhance human quality of life. Simultaneously, unmanned aerial vehicles (UAVs) have been identified as promising edge nodes to bolster terrestrial wireless networks. To harness the coordination benefits of SC3 and address potential conflicts, we propose a UAV-empowered joint SC3 6G network, in which UAVs are outfitted with edge servers to cache and process sensing data from integrated sensing and communication devices before delivering the results to users. To maintain the freshness of sensing data in such network, we formulate an average information retrieving delay minimization problem, coordinating cross-domain resources while considering performance constraints in sensing, communication, and energy. We then develop a cross-domain resource optimization algorithm to jointly design UAV 3D deployment, subcarrier assignment, power control, caching update, and computational resource allocation. This approach combines block coordinate descent, matching theory, and successive convex approximation to iteratively solve the optimization problem. Experimental evaluations demonstrate that the proposed scheme converges rapidly and outperforms benchmark methods in reducing average information retrieving delay through the coordination of SC3 cross-domain resources. | 10.1109/TNSM.2025.3600768 |
Mingyang Yu, Haorui Yang, Shengwei Fu, Desheng Kong, Xiaoxuan Xu, Jun Zhang, Jing Xu | Improved Coverage and Redundancy Management in WSN Using ENMDBO: An Enhanced Metaheuristic Solution | 2025 | Early Access | Optimization Wireless sensor networks Heuristic algorithms Convergence Uncertainty Redundancy Layout Clustering algorithms Accuracy Internet of Things Dung Beetle Optimization Exploring Cosine Similarity Transformation Strategy Neighborhood Solution Mutation-sharing mechanism Tolerance Threshold Detection Mutation mechanism WSN coverage optimization | The widespread deployment of Wireless Sensor Networks (WSN) has made network coverage optimization crucial for improving coverage rates. However, traditional methods struggle with challenges such as energy constraints and environmental uncertainties. Metaheuristic (MH) algorithms offer promising solutions. The Dung Beetle Optimization (DBO) algorithm is a well-regarded MH approach, but it suffers from slow convergence and a propensity for local optima entrapment in WSN coverage optimization. To overcome these limitations, this study proposes the Enhanced Dung Beetle Optimization with Neighborhood Mutation (ENMDBO). ENMDBO incorporates three key mechanisms: (1) the Exploring Cosine Similarity Transformation (ECST) strategy, which dynamically adjusts individual similarity to balance global exploration and local exploitation, mitigating the risk of local optima; (2) the Neighborhood Solution Mutation Sharing (NSMS) mechanism, which enhances population diversity by sharing positional information among neighbors, improving search efficiency; and (3) the Tolerance Threshold Detection Mutation (TTDM) mechanism, which detects stagnation in fitness to strengthen the algorithm’s global search capabilities. Experiments on the CEC2017 benchmark suite (Dim = 30, 50, 100) show that ENMDBO achieves superior performance compared to state-of-the-art algorithms, approaching the global optimum. Finally, in WSN coverage optimization, ENMDBO achieves an 86.88% coverage rate, representing an 8.92% improvement over the original DBO, while effectively reducing redundancy. These results underscore ENMDBO’s robustness and effectiveness, establishing it as a practical and reliable solution. (Matlab codes of ENMDBO are available at https://ww2.mathworks.cn/matlabcentral/fileexchange/181820-enhanced-dung-beetle-optimization-with-neighborhood-mutation.) | 10.1109/TNSM.2025.3600631 |
Zhi-Bin Zuo, De-Min Wang, Mi-Mi Ma, Miao-Lei Deng, Chun Wang | An Adaptive Contention Window Backoff Scheme Differentiating Network Conditions Based on Deep Q-Learning Network | 2025 | Early Access | Throughput Data communication Wireless sensor networks Wireless networks Optimization Information science Multiaccess communication IEEE 802.11ax Standard Analytical models Wireless fidelity IEEE 802.11 Deep Q-Learning Network Wireless Networks Deep Reinforcement Learning | In IEEE 802.11 networks, the Contention Window (CW) is a crucial parameter for wireless channel sharing among numerous stations, directly influencing overall network performance. In order to mitigate the performance degradation caused by the increasing number of stations in the network, we propose a novel adaptive CW backoff scheme, termed the ACWB-DQN algorithm. This algorithm leverages the Deep Q-Learning Network (DQN) to explore a CW threshold, which is utilized as a boundary to differentiate the network load circumstances and learn the best configurations for different network conditions. When stations transmit data frames, different CW optimization strategies are employed based on station transmission status and the CW threshold. This approach aims to enhance network performance by adjusting the CW to increase transmission efficiency when there are fewer competing stations, and lower collision probabilities when there are more competing stations. Simulation results indicate that this approach can optimize station CW, reduce network collision rates, maintain constant throughput, and significantly enhance the performance of Wi-Fi networks by means of adjusting the CW threshold according to real-time network conditions. | 10.1109/TNSM.2025.3600861 |
Yadi He, Zhou Wu, Linfeng Liu | Deep Learning Based Link Prediction Method Against Strong Sparsity for Mobile Social Networks | 2025 | Early Access | Feature extraction Social networking (online) Predictive models Deep learning Accuracy Network topology Data mining Recurrent neural networks Sparse matrices Computational modeling link prediction mobile social networks strong sparsity deep learning | Link prediction refers to the prediction of the potential relationships between nodes through exploring the evolution of the historical network topologies. In mobile social networks, the topologies change frequently due to the appearance/disappearance of nodes over time, and the links between nodes are typically very sparse (i.e., mobile social networks exhibit strong sparsity), which can seriously affect the accuracy of link prediction in mobile social networks. Therefore, this paper proposes a deep learning based Link Prediction Method against Strong Sparsity (LPMSS). LPMSS integrates the graph convolutional network output with encounter matrices to mitigate the negative impact of strong sparsity. Additionally, LPMSS employs random negative sampling to alleviate the impact of imbalanced link distributions. We also adopt a Times module to capture the temporal topological changes in mobile social networks to enhance the prediction accuracy. Based on three datasets with different sparsity, extensive experiment results demonstrate that LPMSS can significantly improve AUC values while reducing MAE values, confirming its effectiveness in handling link prediction in mobile social networks with strong sparsity. | 10.1109/TNSM.2025.3601389 |
Maruthi V, Kunwar Singh | Enhancing Security and Privacy of IoMT Data for Unconscious Patient With Blockchain | 2025 | Early Access | Cryptography Security Polynomials Public key Medical services Interpolation Encryption Data privacy Blockchains Privacy Internet of Medical Things Inter-Planetary File System Proxy re-encryption+ Threshold Proxy re-encryption+ Blockchain Non-Interactive Zero Knowledge Proof Schnorr ID protocol | IoMT enables continuous monitoring through connected medical devices, producing real-time health data that must be protected from unauthorised access and tampering. Blockchain ensures this security with its decentralised, tamper-resistant, and access-controlled ledger. A critical challenge arises when patients are unconscious, making timely access to their IoMT data essential for emergency treatment. To address this, we have designed a novel Threshold Proxy Re-Encryption+ (TPRE+) framework that integrates threshold cryptography with unidirectional, non-transitive proxy re-encryption (PRE), using Shamir’s secret sharing to distribute re-encryption capabilities among multiple proxies and reduce single-point-of-failure and collusion risks. Our contributions are threefold: (i) we first propose a semantically secure TPRE+ scheme with Shamir secret sharing; (ii) we construct an IND-CCA secure TPRE+ scheme; and (iii) we develop a secure, distributed medical record storage system for unconscious patients, combining blockchain infrastructure, IPFS-based encrypted storage, and our proposed TPRE+ schemes. This integration ensures confidentiality, integrity, and fault-tolerant access to critical patient data, enabling secure and efficient deployment in real-world emergency healthcare scenarios. | 10.1109/TNSM.2025.3602117 |
Ahan Kak, Van-Quan Pham, Huu-Trung Thieu, Nakjung Choi | HexRAN: A Programmable Approach to Open RAN Base Station System Design | 2025 | Early Access | Open RAN Base stations Protocols 3GPP Computer architecture Telemetry Cellular networks Network slicing Wireless networks Prototypes Network Architecture Cellular Systems Radio Access Networks O-RAN Network Slicing Network Programmability | In recent years, the radio access network (RAN) domain has seen significant changes with increased virtualization and softwarization, driven by the Open RAN (O-RAN) movement. However, the fundamental building block of the cellular network, i.e., the base station, remains unchanged and ill-equipped to handle this architectural evolution. In particular, there exists a general lack of programmability and composability along with a protocol stack that grapples with the intricacies of the 3GPP and O-RAN specifications. Recognizing the need for an “O-RAN-native” approach to base station design, this paper introduces HexRAN, a novel base station architecture characterized by key features relating to RAN disaggregation and composability, 3GPP and O-RAN protocol integration and programmability, robust controller interactions, and customizable RAN slicing. Furthermore, the paper also includes a concrete systems-level prototype and comprehensive experimental evaluation of HexRAN on an over-the-air testbed. The results demonstrate that HexRAN uses only 8% more computing resources compared to the baseline, while managing twice the user plane traffic, delivering control plane processing latency of under 120μs, and achieving 100% processing reliability. This underscores the scalability and performance advantages of the proposed architecture. | 10.1109/TNSM.2025.3600587 |
Dániel Unyi, Ernő Rigó, Bálint Gyires-Tóth, Róbert Lovas | Explainable GNN-Based Approach to Fault Forecasting in Cloud Service Debugging | 2025 | Early Access | Debugging Microservice architectures Cloud computing Reliability Observability Computer architecture Graph neural networks Monitoring Probabilistic logic Fault diagnosis Cloud computing Software debugging Microservice architectures Deep learning Graph neural networks Explainable AI Fault prediction | Debugging cloud services is increasingly challenging due to their distributed, dynamic, and scalable nature. Traditional methods struggle to handle large state spaces and the complex interactions between microservices, making it difficult to diagnose failures and identify critical components. This paper presents a Graph Neural Network (GNN)-based approach that enhances cloud service debugging by predicting system-level fault probabilities and providing interpretable insights into failure propagation. Our method models microservice interactions as graphs, where failures propagate probabilistically. Using Markov Decision Processes (MDPs), we simulate failure behaviors, capturing the probabilistic dependencies that influence system reliability. The trained GNN not only predicts fault probabilities but also identifies the most failure-prone microservices and explains their impact. We evaluate our approach on various service mesh structures, including feature-enriched, tree-structured, and general directed acyclic graph (DAG) architectures. Results indicate that our method is effective in the operational phase of cloud services, enabling proactive debugging and targeted optimization. This work represents a step toward more interpretable, reliable, and maintainable cloud infrastructures. | 10.1109/TNSM.2025.3602223 |
José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers Benchmark testing IP networks Microservice architectures Encapsulation Complexity theory Software defined networking Packet loss Overlay networks Network interfaces Containers Container Network Interfaces Network Function Virtualization Benchmark Cloud-Native Software-Defined Networking | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines the systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including: speed, latency and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including: delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN, compared to non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments, and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
Wei-Kuo Chiang, Ting-Yu Wang, Yun-Fan Huang, Kun-Ting Liao | A Quantitative Approach to Optimize 5GC Refactoring for Minimum Signaling Latency and Resource Allocation | 2025 | Early Access | Heuristic algorithms 5G mobile communication Microservice architectures Delays Multiuser detection Merging Resource management Clustering algorithms Optimization Computer architecture Network function virtualization 5G core network (5GC) refactoring merging string matching algorithm queuing delay | This article proposes a quantitative approach to refactoring optimization, taking the 5G core (5GC) network as an example. Our previous study formulated the refactoring problem to minimize queuing delay and resource allocation cost directly in the M/M/k model and utilized the optimization tool GUROBI to derive an optimal refactored 5GC architecture, abbreviated as GUR-5GC. However, it is time-consuming; this approach to refactoring optimization is not feasible for dynamic scaling design. We design two quantitative indicators, message exchange reduction (MER) and merging utilization degradation (MUD), to evaluate the impacts of merging certain network functions. Moreover, the problem of calculating the two quantitative indicators can be reduced to a string-matching problem. We then reconstructed the optimization model formulation by using the MER and MUD indicators in the objective functions instead of the queuing delay and resource allocation cost, since the Mathematical Programming (MP) and Constraint Programming (CP) models of the optimization tool (CPLEX) could not solve the original problem. Next, we utilized the CPLEX MP model optimizer integrated with Pareto optimality to derive the CPLEX-based Refactored 5GC (CPR-5GC). In addition, we use a CURE-based clustering algorithm with MER and MUD, performing the quantitative analysis, to derive the CURE-based Refactored 5GC (CUR-5GC) architecture. Finally, we analyzed the performance of the 5GC, GUR-5GC, CPR-5GC, and CUR-5GC, and evaluated them in terms of queuing delay and scaling side effects. The performance results show that CPR-5GC and CUR-5GC outperform the original 5GC and are close to the GUR-5GC; the two heuristic algorithms for 5GC refactoring are feasible and practical. | 10.1109/TNSM.2025.3602492 |
Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors Autonomous aerial vehicles Optimization Semantic communication Data mining Feature extraction Resource management Accuracy Data models Data communication UAV crowd sensing semantic communication multi-scale dilated fusion attention reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
Iwan Setiawan, Binayak Kar, Shan-Hsiang Shen | Energy-Efficient Softwarized Networks: A Survey | 2025 | Early Access | Energy efficiency Cloud computing Surveys Wireless sensor networks Sustainable development Databases Training Network topology Industries Data mining Energy efficiency software-defined networking network functions virtualization network slicing | With the dynamic demands and stringent requirements of various applications, networks need to be high-performance, scalable, and adaptive to changes. Researchers and industries view network softwarization as the best enabler for the evolution of networking to tackle current and prospective challenges. Network softwarization must provide programmability and flexibility to network infrastructures and allow agile management, along with higher control for operators. While satisfying the demands and requirements of network services, energy cannot be overlooked, considering the effects on the sustainability of the environment and business. This paper discusses energy efficiency in modern and future networks with three network softwarization technologies: SDN, NFV, and NS, introduced in an energy-oriented context. With that framework in mind, we review the literature based on network scenarios, control/MANO layers, and energy-efficiency strategies. Following that, we compare the references regarding approach, evaluation method, criterion, and metric attributes to demonstrate the state-of-the-art. Last, we analyze the classified literature, summarize lessons learned, and present ten essential concerns to open discussions about future research opportunities on energy-efficient softwarized networks. | 10.1109/TNSM.2025.3599919 |
Huaide Liu, Fanqin Zhou, Yikun Zhao, Lei Feng, Zhixiang Yang, Yijing Lin, Wenjing Li | Autonomous Deployment of Aerial Base Station without Network-Side Assistance in Emergency Scenarios Based on Multi-Agent Deep Reinforcement Learning | 2025 | Early Access | Heuristic algorithms Disasters Optimization Estimation Wireless communication Collaboration Base stations Autonomous aerial vehicles Adaptation models Sensors Aerial base station deep reinforcement learning autonomous deployment emergency scenarios multi-agent systems | Aerial base station (AeBS) is a promising technology for providing wireless coverage to ground user equipment. Traditional methods of optimizing AeBS networks often rely on pre-known distribution models of ground user equipment. However, in practical scenarios such as natural disasters or temporary large-scale public events, the distribution of user clusters is often unknown, posing challenges for the deployment and application of AeBS. To adapt to complex and unknown user environments, this paper studies a method of estimating information from local to global and proposes a multi-agent AeBSs autonomous deployment algorithm based on deep reinforcement learning (DRL). This method attempts to dynamically deploy AeBS to autonomously identify hotspots by sensing user equipment signals without network-side assistance, providing a more comprehensive and intelligent solution for AeBS deployment. Simulation results indicate that our method effectively guides the autonomous deployment of AeBS in emergency scenarios, addressing the challenge of the lack of network-side assistance. | 10.1109/TNSM.2025.3603875 |
Wenjun Fan, Na Fan, Junhui Zhang, Jia Liu, Yifan Dai | Securing VNDN With Multi-Indicator Intrusion Detection Approach Against the IFA Threat | 2025 | Early Access | Monitoring Prevention and mitigation Electronic mail Threat modeling Telecommunication traffic Fans Blocklists Security Road side unit Intrusion detection Interest Flooding Attack Named Data Network Network Traffic Monitoring Denial of Service Road Side Unit | On a vehicular named data network (VNDN), an Interest Flooding Attack (IFA) can exhaust computing resources by sending a large number of malicious Interest packets, which prevents legitimate requests from being satisfied and seriously endangers the operation of the Internet of Vehicles (IoV). To solve this problem, this paper proposes a distributed network traffic monitoring-enabled multi-indicator detection and prevention approach for VNDN to detect and resist IFA attacks. To facilitate this approach, a distributed network traffic monitoring layer based on road side units (RSUs) is constructed. With such a monitoring layer, a multi-indicator detection approach is designed, which consists of three indicators: information entropy, self-similarity, and singularity, whereby the thresholds are tuned according to the real-time density of traffic flow. Apart from detection, a blacklisting-based prevention approach is realized to mitigate the attack impact. We validate the proposed approach by prototyping it on our VNDN experimental platform using realistic parameter settings and leveraging the original NDN packet structure to corroborate the usage of the required Source ID for identifying the source of the Interest packet, which consolidates the practicability of the approach. The experimental results show that our multi-indicator detection approach achieves considerably higher detection performance than using the indicators individually, and that the blacklisting-based prevention can effectively mitigate the attack impact as well. | 10.1109/TNSM.2025.3603630 |
Shiqi Zhang, Mridul Gupta, Behnam Dezfouli | Understanding Linux Kernel-based Packet Switching on WiFi Access Points | 2025 | Early Access | Packet switching Linux Kernel Wireless fidelity Switches Ethernet Context Throughput Multicore processing Power demand 802.11 Linux Monitoring Measurement Power Consumption Processor Utilization Function Tracing | As the number of WiFi devices and their traffic demands continue to rise, the need for a scalable and high-performance wireless infrastructure becomes increasingly essential. Central to this infrastructure are WiFi Access Points (APs), which facilitate packet switching between Ethernet and WiFi interfaces. Despite APs’ reliance on the Linux kernel’s data plane for packet switching, the detailed operations and complexities of switching packets between Ethernet and WiFi interfaces have not been investigated in existing works. This paper makes the following contributions towards filling this research gap. Through macro and micro-analysis of empirical experiments, our study reveals insights in two distinct categories. Firstly, while the kernel’s statistics offer valuable insights into system operations, we identify and discuss potential pitfalls that can severely affect system analysis. For instance, we reveal how packet switching rate and the implementation of drivers influence the meaning and accuracy of statistics related to packet-switching tasks and processor utilization. Secondly, we analyze the impact of the packet switching path and core configuration on performance and power consumption. Specifically, we identify the differences in Ethernet-to-WiFi and WiFi-to-Ethernet data paths regarding processing components, multi-core utilization, and energy efficiency. | 10.1109/TNSM.2025.3603597 |
Fan Zhang, Pengfei Yu, Shaoyong Guo, Weicong Huang, Feng Qi | Computing Sandbox Driven Secure Edge Computing System for Industrial IoT | 2025 | Early Access | Industrial Internet of Things Edge computing Security Internet of Things Blockchains Servers Computational modeling Real-time systems Intrusion detection Encryption Industrial Internet of Things Edge computing Edge computing security Trusted Execution Environment Trusted computing | With the initiation of the Internet of Everything, edge computing has emerged as a pivotal paradigm, shifting from cloud computing to better address the growing data demands and latency issues in Industrial Internet of Things (IIoT). However, securing edge computing systems remains a critical challenge as malicious attackers can compromise the IIoT systems, gain control over edge servers, and tamper with computation programs and results. Existing solutions, such as cryptographic encryption, intrusion detection, and blockchain-based methods, have been widely used to enhance security. Yet, these approaches often suffer from high computational overhead, limited adaptability to dynamic IIoT environments, and a lack of foundational trusted assurance mechanisms. Although Trusted Execution Environment (TEE)-based solutions provide a hardware-enhanced secure execution environment, they face scalability and usability challenges and cannot fully support the parallel execution requirements of multiple and diverse IIoT applications. To overcome these limitations, a novel secure edge computing system is proposed for IIoT that strengthens security from the physical layer. By establishing a computing sandbox model, we extend the trust boundaries of the TEE using a virtual Trusted Platform Module (TPM), enabling secure and efficient execution for diverse IIoT applications. The proposed approach integrates a trust guarantee mechanism with decentralized adaptive attestation, ensuring real-time integrity verification while reducing performance overhead. Through security analysis and experimental validation, it is shown that our system improves Non-Volatile Random-Access Memory (NVRAM) launch time by approximately 1,700 times compared to hardware TPM-based virtual TPM implementations, while enhancing protection against attacks such as rollback. | 10.1109/TNSM.2025.3587956 |
Jyoti Shokhanda, Utkarsh Pal, Aman Kumar, Soumi Chattopadhyay, Arani Bhattacharya | SafeTail: Tail Latency Optimization in Edge Service Scheduling via Redundancy Management | 2025 | Early Access | Servers Telecommunication traffic Communication switching Redundancy Edge computing Real-time systems Processor scheduling Computational modeling Uncertainty Solid modeling Tail Latency Redundant Scheduling Reward-based deep learning Edge Computing | Optimizing tail latency while efficiently managing computational resources is crucial for delivering high-performance, latency-sensitive services in edge computing. Emerging applications, such as augmented reality, require low-latency computing services with high reliability on user devices, which often have limited computational capabilities. Consequently, these devices depend on nearby edge servers for processing. However, inherent uncertainties in network and computation latencies—stemming from variability in wireless networks and fluctuating server loads—make service delivery on time challenging. Existing approaches often focus on optimizing median latency but fall short of addressing the specific challenges of tail latency in edge environments, particularly under uncertain network and computational conditions. Although some methods do address tail latency, they typically rely on fixed or excessive redundancy and lack adaptability to dynamic network conditions, often being designed for cloud environments rather than the unique demands of edge computing. In this paper, we introduce SafeTail, a framework that meets both median and tail response time targets, with tail latency defined as latency beyond the percentile threshold. SafeTail addresses this challenge by selectively replicating services across multiple edge servers to meet target latencies. SafeTail employs a reward-based deep learning framework to learn optimal placement strategies, balancing the need to achieve target latencies with minimizing additional resource usage. Through trace-driven simulations, SafeTail demonstrated near-optimal performance and outperformed most baseline strategies across three diverse services. | 10.1109/TNSM.2025.3587752 |
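
The TPRE+ entry above (Maruthi V and Singh) builds on Shamir's secret sharing to spread re-encryption capability across several proxies. As a rough, standalone illustration of that building block only (not the authors' TPRE+ construction), the Python sketch below splits a secret into n shares over a prime field and reconstructs it from any t of them; the prime, threshold, and share counts are arbitrary example values.

```python
"""Illustrative sketch of Shamir's (t, n) secret sharing, the building block the
TPRE+ entry combines with proxy re-encryption. Not the authors' TPRE+ scheme."""

import secrets

# A larger prime field would be used in practice; this Mersenne prime keeps the
# example readable while staying a valid prime modulus (assumed example value).
PRIME = 2**127 - 1


def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as the constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for power, c in enumerate(coeffs):
            y = (y + c * pow(x, power, PRIME)) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * -xj) % PRIME
            den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    key = secrets.randbelow(PRIME)
    shares = split(key, n=5, t=3)
    assert reconstruct(shares[:3]) == key  # any 3 of the 5 shares suffice
    print("reconstructed OK")
```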
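The VNDN intrusion-detection entry above (Wenjun Fan et al.) lists information entropy as one of its three IFA indicators. The sketch below is only a generic illustration of such an entropy indicator, not the paper's detector: it tracks the Shannon entropy of Interest name prefixes over a sliding window and flags windows whose entropy exceeds a threshold; the window size, prefix depth, and threshold are made-up example values.

```python
"""Generic entropy-based anomaly indicator for Interest floods (illustrative only)."""

import math
import random
from collections import Counter, deque


class EntropyIndicator:
    def __init__(self, window: int = 200, threshold: float = 6.0):
        self.window = deque(maxlen=window)  # recent Interest name prefixes
        self.threshold = threshold          # bits; assumed example value

    def observe(self, interest_name: str) -> bool:
        """Record one Interest and return True if the current window looks anomalous."""
        prefix = "/".join(interest_name.split("/")[:3])  # keep first two name components
        self.window.append(prefix)
        counts = Counter(self.window)
        total = len(self.window)
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        # A flood of randomly generated names drives prefix entropy sharply up.
        return total == self.window.maxlen and entropy > self.threshold


if __name__ == "__main__":
    det = EntropyIndicator()
    # Legitimate traffic: a handful of popular prefixes -> low entropy, not flagged.
    for i in range(300):
        det.observe(f"/traffic/map/segment{i % 5}")
    print("benign flagged:", det.observe("/traffic/map/segment0"))
    # Attack traffic: randomized, non-existent names -> high entropy, flagged.
    flagged = False
    for i in range(300):
        flagged = det.observe(f"/junk/{random.getrandbits(32)}/{i}")
    print("attack flagged:", flagged)
```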
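The SafeTail entry above relies on selectively replicating a service request across edge servers to curb tail latency. The toy simulation below shows only the underlying effect under an assumed log-normal per-server latency model, not SafeTail's learning-based placement policy: duplicating a request and keeping the first reply shrinks the 99th-percentile response time at the cost of extra resource usage.

```python
"""Toy demonstration of tail-latency reduction via request replication (illustrative only)."""

import random
import statistics


def one_request(replicas: int) -> float:
    # Assumed per-server latency model (ms): log-normal, so occasional slow responses.
    samples = [random.lognormvariate(3.0, 0.8) for _ in range(replicas)]
    return min(samples)  # first response wins; duplicate replies are discarded


def percentile(values, p):
    values = sorted(values)
    return values[int(p / 100 * (len(values) - 1))]


if __name__ == "__main__":
    random.seed(42)
    for k in (1, 2, 3):  # number of edge servers the request is replicated to
        lat = [one_request(k) for _ in range(20_000)]
        print(f"k={k}: median={statistics.median(lat):6.1f} ms  "
              f"p99={percentile(lat, 99):6.1f} ms")
```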