Last updated: 2025-10-20 05:01 UTC
All documents
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers; Benchmark testing; IP networks; Microservice architectures; Encapsulation; Complexity theory; Software defined networking; Packet loss; Overlay networks; Network interfaces; Containers; Container Network Interfaces; Network Function Virtualization; Benchmark; Cloud-Native; Software-Defined Networking | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines the systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including speed, latency, and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN compared to non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
Akhirul Islam, Manojit Ghose, Sudeep Pasricha | An RL-based Framework for Task Offloading and Resource Allocation in Energy Harvesting-based Multi-Access Edge Computing | 2025 | Early Access | Resource management; Energy harvesting; Energy consumption; Optimization; Heuristic algorithms; Servers; Computational modeling; Predictive models; Long short term memory; Dynamic scheduling; MEC; Edge-Cloud; Energy-harvesting devices; Machine Learning; RNN; LSTM; DDQN | With the growing awareness of sustainability concerns in many application domains, energy-harvesting (EH) devices are increasingly being used with traditional non-energy-harvesting (non-EH) devices. This paper proposes a reinforcement learning (RL)-based task offloading and scheduling strategy called DTORA for a hybrid EH and non-EH enabled multi-access edge computing environment where EH devices harvest energy from solar radiation. The applications running on user devices can have varying levels of criticality (mixed-criticality). We formulate a mixed-integer programming problem for energy and latency minimization based on the system and application model. To solve this, we use a recurrent neural network-based long short-term memory (LSTM) model for solar energy prediction, and a Double Deep Q-learning network for task-offloading decisions. The proposed strategy (DTORA) is benchmarked against several state-of-the-art (SOA) strategies and other baseline approaches, including SCOPE, OCO (Offloading Cost Optimization), a hybrid Particle Swarm Optimization and Genetic Algorithm (PSOGA), and Selective-Greedy (SG). The proposed strategy outperforms these strategies in terms of latency, energy consumption of user devices, task failure rate, and critical task failures by 39%, 77.51%, 60.98%, and 69.94%, respectively (on average). Compared to the existing best-performing strategy, DTORA achieves improvements of 17.45% for latency, 58.54% for energy consumption, 23.34% for task failure rate, and 27.77% for critical task failures. This improvement can be attributed to improved edge-cloud cooperation, an efficient energy prediction model, and efficient RL-based task offloading and scheduling in our proposed strategy. | 10.1109/TNSM.2025.3600109 |
Iwan Setiawan, Binayak Kar, Shan-Hsiang Shen | Energy-Efficient Softwarized Networks: A Survey | 2025 | Early Access | Energy efficiency; Cloud computing; Surveys; Wireless sensor networks; Sustainable development; Databases; Training; Network topology; Industries; Data mining; Energy efficiency; software-defined networking; network functions virtualization; network slicing | With the dynamic demands and stringent requirements of various applications, networks need to be high-performance, scalable, and adaptive to changes. Researchers and industries view network softwarization as the best enabler for the evolution of networking to tackle current and prospective challenges. Network softwarization must provide programmability and flexibility to network infrastructures and allow agile management, along with higher control for operators. While satisfying the demands and requirements of network services, energy cannot be overlooked, considering the effects on the sustainability of the environment and business. This paper discusses energy efficiency in modern and future networks with three network softwarization technologies: SDN, NFV, and NS, introduced in an energy-oriented context. With that framework in mind, we review the literature based on network scenarios, control/MANO layers, and energy-efficiency strategies. Following that, we compare the references regarding approach, evaluation method, criterion, and metric attributes to demonstrate the state-of-the-art. Last, we analyze the classified literature, summarize lessons learned, and present ten essential concerns to open discussions about future research opportunities on energy-efficient softwarized networks. | 10.1109/TNSM.2025.3599919 |
Feng Liu, Zhenyu Li, Chunfang Yang, Daofu Gong, Fenlin Liu, Rui Ma, Adrian G. Bors | BotCF: Improving the Social bot Detection Performance by Focusing on the Community Features | 2025 | Early Access | Chatbots; Feature extraction; Social networking (online); Semantics; Data mining; Blogs; Training; Image edge detection; Graph convolutional networks; Focusing; Social Bot Detection; Sybil Detection; Community Structure; Graph Convolutional Network | Various malicious activities performed by social bots have brought a crisis of trust to online social networks. Existing social bot detection methods often overlook the significance of community structure features and effective fusion strategies for multimodal features. To counter these limitations, we propose BotCF, a novel social bot detection method that incorporates community features and utilizes cross-attention fusion for multimodal features. In BotCF, we extract community features using a community division algorithm based on deep autoencoder-like non-negative matrix factorization. These features capture the social interactions and relationships within the network, providing valuable insights for bot detection. Furthermore, we employ cross-attention fusion to integrate the features of the account’s semantic content, properties, and community structure. This fusion strategy allows the model to learn the interdependencies between different modalities, leading to a more comprehensive representation of each account. Extensive experiments conducted on three publicly available benchmark datasets (Twibot20, Twibot22, and Cresci-2015) demonstrate the effectiveness of BotCF. Compared to state-of-the-art social bot detection models, BotCF achieves significant improvements in accuracy, with an average increase of 1.86%, 1.67%, and 0.47% on the respective datasets. The detection accuracy is boosted to 86.53%, 81.33%, and 98.21%, respectively. | 10.1109/TNSM.2025.3600474 |
Alisson Medeiros, Antonio Di Maio, Torsten Braun | FLATWISE: Flow Latency and Throughput Aware Sensitive Routing for 6DoF VR over SDN | 2025 | Early Access | Routing; Throughput; Videos; Routing protocols; Heuristic algorithms; Quality of service; Computational modeling; Classification algorithms; Approximation algorithms; Videoconferences; Virtual Reality; Six Degrees of Freedom; End-to-end Latency; Routing Protocols; Multi-access Edge Computing | The next generation of Virtual Reality (VR) applications is expected to provide advanced experiences through Six Degree-of-Freedom (6DoF) technology. However, 6DoF VR applications require latency and throughput guarantees. This article presents a novel intra-domain routing algorithm with throughput guarantees for minimizing the overall end-to-end (E2E) latency for all flows deployed in the network. We investigate the Joint Flow Allocation (JFA) problem to find paths for all flows in a network, determining the optimal path for each flow in terms of throughput and latency. The JFA problem is NP-hard. We use mixed integer linear programming to model the system, along with a heuristic, Flow Latency and Throughput Aware Sensitive Routing (FLATWISE), which is one order of magnitude faster than optimally solving the JFA problem. FLATWISE introduces an adaptive routing approach that dynamically adjusts the path calculation based on E2E latency. The novelty of FLATWISE lies in its unique ability to precisely tune the routing path by either constraining or relaxing the path criteria to align the E2E latency of the selected path with the latency demands of each VR flow. This approach ensures that the latency of the calculated path approximates the latency of each VR flow, enabling more flexible and efficient network routing to meet diverse latency requirements. Our evaluation considers 6DoF VR application flows, which demand high throughput and ultra-low E2E latency. Extensive simulations demonstrate that FLATWISE significantly reduces flow latency, over-provisioned latency, E2E latency, and algorithm execution time when network flows are processed randomly. FLATWISE improves flow throughput and frame rate compared to related approaches. | 10.1109/TNSM.2025.3600365 |
Menaka Pushpa Arthur, Ganesan Ramachandran, Keshav Sood, Pavan Kaarthik, Srivarshinee Sridhar, Morshed Chowdhury | Empirical Study of Hierarchical Intrusion Detection Systems for Unknown Attacks | 2025 | Early Access | Classification algorithms; Adaptation models; Intrusion detection; Training; Machine learning algorithms; Accuracy; 5G mobile communication; Optimization; Federated learning; Computational modeling; Intrusion Detection System; zero-day attack; open set problems; machine learning | The attack detection models of traditional Intrusion Detection Systems (IDSs) are trained on closed-set problems, which reduces the classifiers’ performance in detecting unknown attacks in the open-set problem space. The one-shot learning mostly adopted in the classifier does not allow the traditional IDS to act as an open-set recognizer. The alternative, continuous learning-based IDS for unknown attack detection requires ongoing input from experts to retrain the model with newly identified samples. Hence, using a multi-layer hierarchical IDS (HIDS) with optimized classifier models, unknown attacks can be classified by comparing their patterns with benign and known attacks. However, although the existing HIDS provides a solid foundation in this design category for unknown attack identification, we have identified many challenges in it across various datasets. As a result, in this paper, we design an enhanced multi-tier IDS for zero-day attack detection with optimized heterogeneous classifiers in its two major phases, as the basic framework demands. We examine the proposed enhanced hierarchical IDS on various benchmark intrusion detection datasets, namely WUSTL, CIC_IDS_2017, 5G, and UNR, to analyze its efficiency in unknown attack classification. Compared to the existing multi-tier IDS, the proposed IDS achieved the highest detection rates of 96.2%, 87%, 96.8%, and 100% on the 5G, WUSTL, UNR, and CIC_IDS_2017 datasets for unknown attacks. The optimized model in the proposed IDS reduces the time complexity by 50% compared to the existing one. Implementation results show that the proposed enhanced IDS performs better than the existing hierarchical IDS, with a high true positive rate for benign, known, and unknown attack labels on various datasets. | 10.1109/TNSM.2025.3600378 |
Peng Qin, Yang Fu, Zhigang Yu, Jing Zhang, Zhou Lu | Cross-Domain Resource Allocation for Information Retrieving Delay Minimization in UAV-Empowered 6G Network | 2025 | Early Access | Sensors; Autonomous aerial vehicles; Resource management; Delays; 6G mobile communication; Optimization; Servers; Minimization; Integrated sensing and communication; Wireless networks; UAV-empowered 6G network; cross-domain resource allocation; integrated sensing communication caching and computing | The deep integration of sensing, communication, caching, and computation (SC3) is emerging as a key feature of 6G network, designed to support ubiquitous intelligent applications and enhance human quality of life. Simultaneously, unmanned aerial vehicles (UAVs) have been identified as promising edge nodes to bolster terrestrial wireless networks. To harness the coordination benefits of SC3 and address potential conflicts, we propose a UAV-empowered joint SC3 6G network, in which UAVs are outfitted with edge servers to cache and process sensing data from integrated sensing and communication devices before delivering the results to users. To maintain the freshness of sensing data in such a network, we formulate an average information retrieving delay minimization problem, coordinating cross-domain resources while considering performance constraints in sensing, communication, and energy. We then develop a cross-domain resource optimization algorithm to jointly design UAV 3D deployment, subcarrier assignment, power control, caching update, and computational resource allocation. This approach combines block coordinate descent, matching theory, and successive convex approximation to iteratively solve the optimization problem. Experimental evaluations demonstrate that the proposed scheme converges rapidly and outperforms benchmark methods in reducing average information retrieving delay through the coordination of SC3 cross-domain resources. | 10.1109/TNSM.2025.3600768 |
Mingyang Yu, Haorui Yang, Shengwei Fu, Desheng Kong, Xiaoxuan Xu, Jun Zhang, Jing Xu | Improved Coverage and Redundancy Management in WSN Using ENMDBO: An Enhanced Metaheuristic Solution | 2025 | Early Access | Optimization; Wireless sensor networks; Heuristic algorithms; Convergence; Uncertainty; Redundancy; Layout; Clustering algorithms; Accuracy; Internet of Things; Dung Beetle Optimization; Exploring Cosine Similarity Transformation Strategy; Neighborhood Solution Mutation-sharing mechanism; Tolerance Threshold Detection Mutation mechanism; WSN coverage optimization | The widespread deployment of Wireless Sensor Networks (WSN) has made network coverage optimization crucial for improving coverage rates. However, traditional methods struggle with challenges such as energy constraints and environmental uncertainties. Metaheuristic (MH) algorithms offer promising solutions. The Dung Beetle Optimization (DBO) algorithm is a well-regarded MH approach, but it suffers from slow convergence and a propensity for local optima entrapment in WSN coverage optimization. To overcome these limitations, this study proposes the Enhanced Dung Beetle Optimization with Neighborhood Mutation (ENMDBO). ENMDBO incorporates three key mechanisms: (1) the Exploring Cosine Similarity Transformation (ECST) strategy, which dynamically adjusts individual similarity to balance global exploration and local exploitation, mitigating the risk of local optima; (2) the Neighborhood Solution Mutation Sharing (NSMS) mechanism, which enhances population diversity by sharing positional information among neighbors, improving search efficiency; and (3) the Tolerance Threshold Detection Mutation (TTDM) mechanism, which detects stagnation in fitness to strengthen the algorithm’s global search capabilities. Experiments on the CEC2017 benchmark suite (Dim = 30, 50, 100) show that ENMDBO achieves superior performance compared to state-of-the-art algorithms, approaching the global optimum. Finally, in WSN coverage optimization, ENMDBO achieves an 86.88% coverage rate, representing an 8.92% improvement over the original DBO, while effectively reducing redundancy. These results underscore ENMDBO’s robustness and effectiveness, establishing it as a practical and reliable solution. (Matlab codes of ENMDBO are available at https://ww2.mathworks.cn/matlabcentral/fileexchange/181820-enhanced-dung-beetle-optimization-with-neighborhood-mutation.) | 10.1109/TNSM.2025.3600631 |
Zhi-Bin Zuo, De-Min Wang, Mi-Mi Ma, Miao-Lei Deng, Chun Wang | An Adaptive Contention Window Backoff Scheme Differentiating Network Conditions Based on Deep Q-Learning Network | 2025 | Early Access | Throughput; Data communication; Wireless sensor networks; Wireless networks; Optimization; Information science; Multiaccess communication; IEEE 802.11ax Standard; Analytical models; Wireless fidelity; IEEE 802.11; Deep Q-Learning Network; Wireless Networks; Deep Reinforcement Learning | In IEEE 802.11 networks, the Contention Window (CW) is a crucial parameter for wireless channel sharing among numerous stations, directly influencing overall network performance. In order to mitigate the performance degradation caused by the increasing number of stations in the network, we propose a novel adaptive CW backoff scheme, termed the ACWB-DQN algorithm. This algorithm leverages the Deep Q-Learning Network (DQN) to explore a CW threshold, which is utilized as a boundary to differentiate network load circumstances and learn the best configurations for different network conditions. When stations transmit data frames, different CW optimization strategies are employed based on station transmission status and the CW threshold. This approach aims to enhance network performance by adjusting the CW to increase transmission efficiency when there are fewer competing stations, and lower collision probabilities when there are more competing stations. Simulation results indicate that this approach can optimize station CW, reduce network collision rates, maintain constant throughput, and significantly enhance the performance of Wi-Fi networks by adjusting the CW threshold according to real-time network conditions. | 10.1109/TNSM.2025.3600861 |
Yadi He, Zhou Wu, Linfeng Liu | Deep Learning Based Link Prediction Method Against Strong Sparsity for Mobile Social Networks | 2025 | Early Access | Feature extraction; Social networking (online); Predictive models; Deep learning; Accuracy; Network topology; Data mining; Recurrent neural networks; Sparse matrices; Computational modeling; link prediction; mobile social networks; strong sparsity; deep learning | Link prediction refers to the prediction of potential relationships between nodes by exploring the evolution of historical network topologies. In mobile social networks, the topologies change frequently due to the appearance/disappearance of nodes over time, and the links between nodes are typically very sparse (i.e., mobile social networks exhibit strong sparsity), which could seriously affect the accuracy of link prediction in mobile social networks. Therefore, this paper proposes a deep learning based Link Prediction Method against Strong Sparsity (LPMSS). LPMSS integrates the graph convolutional network output with encounter matrices to mitigate the negative impact of strong sparsity. Additionally, LPMSS employs random negative sampling to alleviate the impact of imbalanced link distributions. We also adopt a Times module to capture the temporal topological changes in mobile social networks to enhance the prediction accuracy. Based on three datasets with different sparsity, extensive experiment results demonstrate that LPMSS can significantly improve AUC values while reducing MAE values, confirming its effectiveness in handling link prediction in mobile social networks with strong sparsity. | 10.1109/TNSM.2025.3601389 |
Maruthi V, Kunwar Singh | Enhancing Security and Privacy of IoMT Data for Unconscious Patient With Blockchain | 2025 | Early Access | Cryptography; Security; Polynomials; Public key; Medical services; Interpolation; Encryption; Data privacy; Blockchains; Privacy; Internet of Medical Things; Inter-Planetary File System; Proxy re-encryption+; Threshold Proxy re-encryption+; Blockchain; Non-Interactive Zero Knowledge Proof; Schnorr ID protocol | IoMT enables continuous monitoring through connected medical devices, producing real-time health data that must be protected from unauthorised access and tampering. Blockchain ensures this security with its decentralised, tamper-resistant, and access-controlled ledger. A critical challenge arises when patients are unconscious, making timely access to their IoMT data essential for emergency treatment. To address this, we design a novel Threshold Proxy Re-Encryption+ (TPRE+) framework that integrates threshold cryptography with unidirectional, non-transitive proxy re-encryption (PRE), using Shamir’s secret sharing to distribute re-encryption capabilities among multiple proxies and reduce single-point-of-failure and collusion risks. Our contributions are threefold: (i) we first propose a semantically secure TPRE+ scheme with Shamir secret sharing; (ii) we construct an IND-CCA secure TPRE+ scheme; and (iii) we develop a secure, distributed medical record storage system for unconscious patients, combining blockchain infrastructure, IPFS-based encrypted storage, and our proposed TPRE+ schemes. This integration ensures confidentiality, integrity, and fault-tolerant access to critical patient data, enabling secure and efficient deployment in real-world emergency healthcare scenarios. | 10.1109/TNSM.2025.3602117 |
Ahan Kak, Van-Quan Pham, Huu-Trung Thieu, Nakjung Choi | HexRAN: A Programmable Approach to Open RAN Base Station System Design | 2025 | Early Access | Open RAN; Base stations; Protocols; 3GPP; Computer architecture; Telemetry; Cellular networks; Network slicing; Wireless networks; Prototypes; Network Architecture; Cellular Systems; Radio Access Networks; O-RAN; Network Slicing; Network Programmability | In recent years, the radio access network (RAN) domain has seen significant changes with increased virtualization and softwarization, driven by the Open RAN (O-RAN) movement. However, the fundamental building block of the cellular network, i.e., the base station, remains unchanged and ill-equipped to handle this architectural evolution. In particular, there exists a general lack of programmability and composability along with a protocol stack that grapples with the intricacies of the 3GPP and O-RAN specifications. Recognizing the need for an “O-RAN-native” approach to base station design, this paper introduces HexRAN, a novel base station architecture characterized by key features relating to RAN disaggregation and composability, 3GPP and O-RAN protocol integration and programmability, robust controller interactions, and customizable RAN slicing. Furthermore, the paper also includes a concrete systems-level prototype and comprehensive experimental evaluation of HexRAN on an over-the-air testbed. The results demonstrate that HexRAN uses only 8% more computing resources compared to the baseline, while managing twice the user plane traffic, delivering control plane processing latency of under 120μs, and achieving 100% processing reliability. This underscores the scalability and performance advantages of the proposed architecture. | 10.1109/TNSM.2025.3600587 |
Dániel Unyi, Ernő Rigó, Bálint Gyires-Tóth, Róbert Lovas | Explainable GNN-Based Approach to Fault Forecasting in Cloud Service Debugging | 2025 | Early Access | Debugging; Microservice architectures; Cloud computing; Reliability; Observability; Computer architecture; Graph neural networks; Monitoring; Probabilistic logic; Fault diagnosis; Cloud computing; Software debugging; Microservice architectures; Deep learning; Graph neural networks; Explainable AI; Fault prediction | Debugging cloud services is increasingly challenging due to their distributed, dynamic, and scalable nature. Traditional methods struggle to handle large state spaces and the complex interactions between microservices, making it difficult to diagnose failures and identify critical components. This paper presents a Graph Neural Network (GNN)-based approach that enhances cloud service debugging by predicting system-level fault probabilities and providing interpretable insights into failure propagation. Our method models microservice interactions as graphs, where failures propagate probabilistically. Using Markov Decision Processes (MDPs), we simulate failure behaviors, capturing the probabilistic dependencies that influence system reliability. The trained GNN not only predicts fault probabilities but also identifies the most failure-prone microservices and explains their impact. We evaluate our approach on various service mesh structures, including feature-enriched, tree-structured, and general directed acyclic graph (DAG) architectures. Results indicate that our method is effective in the operational phase of cloud services, enabling proactive debugging and targeted optimization. This work represents a step toward more interpretable, reliable, and maintainable cloud infrastructures. | 10.1109/TNSM.2025.3602223 |
Mohammad Saleh Mahdizadeh, Behnam Bahrak, Mohammad Sayad Haghighi | A GNN-Based Autopilot Recommendation Strategy to Mitigate Payment Channel Imbalance Problem in Bitcoin Lightning Network | 2025 | Early Access | Autopilot; Bitcoin; Routing; Network topology; Lightning; Graph neural networks; Scalability; Training; Topology; Blockchains; Lightning Network; Bitcoin; Payment Channels; Autopilot; Imbalance; Recommendation System; Graph Neural Network; Graph Autoencoder | The Bitcoin Lightning Network, as a second-layer solution for enhancing the scalability of Bitcoin transactions, facilitates transactions through payment channels between nodes. However, the rapid growth of the network and rising transaction volumes have exacerbated the challenge of managing payment channel imbalances. Payment channel imbalance, characterized by the concentration of liquidity in one direction, leads to a decrease in payment success rates, a reduction in the effective lifespan of payment channels, and a decline in the network’s overall efficiency and throughput. This study introduces a graph neural network-based recommendation strategy designed to enhance the Lightning Network’s autopilot system. The proposed approach proactively mitigates channel imbalances by optimizing channel recommendations, enabling dynamic and scalable liquidity management for network users. Simulations conducted using the CLoTH tool demonstrate a 45% increase in payment success rates, a 46% reduction in imbalanced channels, and a 14% increase in the lifespan of payment channels across the network compared to the existing autopilot recommendation strategies, and when compared with the commonly adopted circular rebalancing method, the proposed strategy achieves a 27% improvement in payment success rates. Additionally, we offer a comparative topological analysis between two snapshots of the LN, taken in November 2021 and August 2023, to facilitate unsupervised learning tasks. The results highlight an increase in network centralization alongside a decrease in network size, emphasizing the growing need for decentralization strategies in the LN, such as the approach proposed in this study. | 10.1109/TNSM.2025.3599393 |
Xiaoming He, Zijing He, Wenyun Li, Gang Liu, Xuan Wei | Framework for Real-Time Monitoring of Packet Loss Caused by Network Congestion | 2025 | Early Access | Packet loss; Monitoring; Real-time systems; Accuracy; Multiprotocol label switching; Encapsulation; Artificial intelligence; Transport protocols; Time-frequency analysis; Telecommunication traffic; Framework; packet loss; real-time monitoring; network congestion | Network congestion induces performance degradation and increases the uncertainty of service delivery, so it is essential to monitor it in real time. In this paper, we discuss the requirements of real-time monitoring of packet loss caused by congestion, present the problems and challenges faced by existing measurement techniques in monitoring congestion-induced packet loss, and propose a comprehensive packet loss monitoring framework. The proposed framework is described in detail and its realizability is demonstrated. The proposed scheme can not only determine the time and location of packet loss, produce accurate statistics of discarded packets, parse which traffic flows are contained in discarded packets, and identify which traffic flows lead to microbursts, but also obtain accurate packet loss ratio results with zero error. More importantly, our proposed scheme causes little or even no interference to the network, and is applicable to any data plane without modifying the forwarding chip and packet header as existing measurement methods do. Experimental results have verified the effectiveness of our proposed scheme. Furthermore, we present three typical application scenarios to demonstrate the advantages of the proposed framework. | 10.1109/TNSM.2025.3578056 |
Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors; Autonomous aerial vehicles; Optimization; Semantic communication; Data mining; Feature extraction; Resource management; Accuracy; Data models; Data communication; UAV crowd sensing; semantic communication; multi-scale dilated fusion attention; reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
Menna Helmy, Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad | Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services | 2025 | Early Access | Artificial intelligence; Training; Resource management; Network slicing; Computational modeling; Optimization; Quality of service; Ultra reliable low latency communication; 6G mobile communication; Heuristic algorithms; Network slicing; online learning; resource allocation; 6G networks; optimization | The forthcoming 6G networks will embrace a new realm of AI-driven services that requires innovative network slicing strategies, namely slicing for AI, which involves the creation of customized network slices to meet Quality of Service (QoS) requirements of diverse AI services. This poses challenges due to the time-varying dynamics of users’ behavior and mobile networks. Thus, this paper proposes an online learning framework to determine the allocation of computational and communication resources to AI services, to optimize their accuracy as one of their unique key performance indicators (KPIs), while abiding by resource, learning latency, and cost constraints. We define a problem of optimizing the total accuracy while balancing conflicting KPIs, prove its NP-hardness, and propose an online learning framework for solving it in dynamic environments. We present a basic online solution and two variations employing a pre-learning elimination method for reducing the decision space to expedite the learning. Furthermore, we propose a biased decision space subset selection by incorporating prior knowledge to enhance the learning speed without compromising performance and present two alternatives of handling the selected subset. Our results depict the efficiency of the proposed solutions in converging to the optimal decisions, while reducing decision space and improving time complexity. Additionally, our solution outperforms state-of-the-art techniques in adapting to diverse environmental dynamics and excels under varying levels of resource availability. | 10.1109/TNSM.2025.3603391 |
Huaide Liu, Fanqin Zhou, Yikun Zhao, Lei Feng, Zhixiang Yang, Yijing Lin, Wenjing Li | Autonomous Deployment of Aerial Base Station without Network-Side Assistance in Emergency Scenarios Based on Multi-Agent Deep Reinforcement Learning | 2025 | Early Access | Heuristic algorithms; Disasters; Optimization; Estimation; Wireless communication; Collaboration; Base stations; Autonomous aerial vehicles; Adaptation models; Sensors; Aerial base station; deep reinforcement learning; autonomous deployment; emergency scenarios; multi-agent systems | Aerial base station (AeBS) is a promising technology for providing wireless coverage to ground user equipment. Traditional methods of optimizing AeBS networks often rely on pre-known distribution models of ground user equipment. However, in practical scenarios such as natural disasters or temporary large-scale public events, the distribution of user clusters is often unknown, posing challenges for the deployment and application of AeBS. To adapt to complex and unknown user environments, this paper studies a method of estimating global information from local observations and proposes a multi-agent autonomous AeBS deployment algorithm based on deep reinforcement learning (DRL). This method dynamically deploys AeBSs to autonomously identify hotspots by sensing user equipment signals without network-side assistance, providing a more comprehensive and intelligent solution for AeBS deployment. Simulation results indicate that our method effectively guides the autonomous deployment of AeBS in emergency scenarios, addressing the challenge of the lack of network-side assistance. | 10.1109/TNSM.2025.3603875 |
Wenjun Fan, Na Fan, Junhui Zhang, Jia Liu, Yifan Dai | Securing VNDN With Multi-Indicator Intrusion Detection Approach Against the IFA Threat | 2025 | Early Access | Monitoring; Prevention and mitigation; Electronic mail; Threat modeling; Telecommunication traffic; Fans; Blocklists; Security; Road side unit; Intrusion detection; Interest Flooding Attack; Named Data Network; Network Traffic Monitoring; Denial of Service; Road Side Unit | In vehicular named data networking (VNDN), the Interest Flooding Attack (IFA) can exhaust computing resources by sending a large number of malicious Interest packets, which prevents legitimate requests from being satisfied and seriously endangers the operation of the Internet of Vehicles (IoV). To solve this problem, this paper proposes a distributed network traffic monitoring-enabled multi-indicator detection and prevention approach for VNDN to detect and resist IFA attacks. To facilitate this approach, a distributed network traffic monitoring layer based on road side units (RSUs) is constructed. With such a monitoring layer, a multi-indicator detection approach is designed, which consists of three indicators: information entropy, self-similarity, and singularity, whereby the thresholds are tuned according to the real-time density of traffic flow. Apart from detection, a blacklisting-based prevention approach is realized to mitigate the attack impact. We validate the proposed approach by prototyping it on our VNDN experimental platform using realistic parameter settings and leveraging the original NDN packet structure to corroborate the usage of the required Source ID for identifying the source of the Interest packet, which consolidates the practicability of the approach. The experimental results show that our multi-indicator detection approach achieves substantially higher detection performance than using the indicators individually, and the blacklisting-based prevention can effectively mitigate the attack impact as well. | 10.1109/TNSM.2025.3603630 |
Cheng Long, Haoming Zhang, Zixiao Wang, Yiming Zheng, Zonghui Li | FastScheduler: Polynomial-Time Scheduling for Time-Triggered Flows in TSN | 2025 | Early Access | Job shop scheduling; Network topology; Dynamic scheduling; Real-time systems; Heuristic algorithms; Delays; Ethernet; Schedules; Deep reinforcement learning; Training; Time-Sensitive Network; Online Scheduling Algorithm; Industrial Control | Time-Sensitive Networking (TSN) has emerged as a promising network paradigm for time-critical applications, such as industrial control, where flow scheduling is crucial to ensure low latency and determinism. As production flexibility demands increase, network topology and flow requirements may change, necessitating more efficient TSN scheduling algorithms to guarantee real-time and deterministic data transmission. In this work, we present FastScheduler, a polynomial-time, deterministic TSN scheduler, which can schedule thousands of Time-Triggered (TT) flows within arbitrary network topologies. The key innovations of FastScheduler include an Equivalent Reduction Technique to simplify the generic model while preserving the feasible scheduling space, a Deterministic Heuristic Strategy to ensure a consistent and reproducible scheduling process, and a Polynomial-Time Scheduling Algorithm to perform dynamic and real-time scheduling of periodic TT flows. Extensive experiments on various topologies show that FastScheduler can effectively simplify the model, reducing variables/constraints by 35%/62%, and schedule 1,000 TT flows in subsecond time. Furthermore, it runs 2/3 orders of magnitude faster and improves the schedulability by 12%/20% compared to heuristic/deep reinforcement learning-based methods. FastScheduler is well-suited for the dynamic requirements of industrial control networks. | 10.1109/TNSM.2025.3603844 |
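
The Flannel benchmark entry above (Santos et al., 10.1109/TNSM.2025.3602607) measures throughput across VXLAN and UDP backends. A minimal sketch of such a throughput probe follows; it assumes iperf3 servers (`iperf3 -s`) are already running in pods at the hypothetical addresses below, and the backend labels and addresses are illustrative only, not the paper's actual setup.

```python
# Minimal per-backend throughput probe, a sketch assuming reachable iperf3 servers.
import json
import subprocess

# Hypothetical per-backend target pods; replace with real cluster addresses.
TARGETS = {"vxlan": "10.244.1.10", "udp": "10.244.2.10"}

def measure_throughput(host: str, seconds: int = 10) -> float:
    """Run a TCP iperf3 test against `host` and return Mbit/s received."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # -J: JSON report
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    for backend, host in TARGETS.items():
        print(f"{backend}: {measure_throughput(host):.1f} Mbit/s")
```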
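The ACWB-DQN entry above (Zuo et al., 10.1109/TNSM.2025.3600861) learns contention-window settings per network condition. The paper uses a DQN; the toy below substitutes a tabular, bandit-style learner on a standard slotted collision approximation purely to illustrate the learning loop, and all states, actions, and parameters are illustrative assumptions.

```python
# Toy tabular sketch of learning a contention window (CW) per load level.
import random

CW_CHOICES = [15, 31, 63, 127, 255, 511, 1023]   # candidate CW values (actions)
LOADS = [5, 15, 30, 50]                          # station counts (states)

def success_prob(n_stations: int, cw: int) -> float:
    """Probability that exactly one station transmits in a slot, with each
    station attempting with probability ~2/(CW+1) (a common approximation)."""
    p = 2.0 / (cw + 1)
    return n_stations * p * (1 - p) ** (n_stations - 1)

q = {(s, a): 0.0 for s in range(len(LOADS)) for a in range(len(CW_CHOICES))}
alpha, epsilon = 0.1, 0.1

for episode in range(20000):
    s = random.randrange(len(LOADS))             # random network load state
    if random.random() < epsilon:                # epsilon-greedy exploration
        a = random.randrange(len(CW_CHOICES))
    else:
        a = max(range(len(CW_CHOICES)), key=lambda x: q[(s, x)])
    reward = success_prob(LOADS[s], CW_CHOICES[a])
    q[(s, a)] += alpha * (reward - q[(s, a)])    # bandit-style one-step update

for s, n in enumerate(LOADS):
    best = max(range(len(CW_CHOICES)), key=lambda x: q[(s, x)])
    print(f"{n} stations -> CW {CW_CHOICES[best]}")
```

The learned policy reproduces the intuition in the abstract: small CW under light load for transmission efficiency, large CW under heavy load to curb collisions.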
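The LPMSS entry above (He et al., 10.1109/TNSM.2025.3601389) reports AUC and uses random negative sampling. The sketch below shows only the bare evaluation pattern on a synthetic random graph, with a trivial common-neighbours score standing in for the paper's deep model; every name and number here is illustrative.

```python
# Minimal AUC evaluation for link prediction with random negative sampling.
import random
import numpy as np
from sklearn.metrics import roc_auc_score

rng = random.Random(0)
n = 50
adj = np.zeros((n, n), dtype=int)
for _ in range(120):                       # build a random sparse graph
    i, j = rng.randrange(n), rng.randrange(n)
    if i != j:
        adj[i, j] = adj[j, i] = 1

def score(i: int, j: int) -> float:
    """Common-neighbours score: a stand-in for a learned link predictor."""
    return float(np.dot(adj[i], adj[j]))

pos = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
neg = []
while len(neg) < len(pos):                 # balanced random negative sampling
    i, j = rng.randrange(n), rng.randrange(n)
    if i != j and not adj[i, j]:
        neg.append((i, j))

y_true = [1] * len(pos) + [0] * len(neg)
y_score = [score(i, j) for i, j in pos + neg]
print("AUC:", roc_auc_score(y_true, y_score))
```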
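The TPRE+ entry above (Maruthi V and Singh, 10.1109/TNSM.2025.3602117) builds on Shamir's secret sharing to spread re-encryption capability across proxies. Below is a minimal (t, n) Shamir split/reconstruct over a prime field; the field size and share handling are illustrative choices, and the paper's actual scheme is far richer.

```python
# Minimal Shamir (t, n) secret sharing sketch over a prime field.
import random

P = 2**127 - 1  # a Mersenne prime; the field size is an illustrative choice

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

In the threshold-PRE setting described in the abstract, each proxy would hold one such share of the re-encryption key, so no single compromised proxy can re-encrypt on its own.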
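The slicing-for-AI entry above (Helmy et al., 10.1109/TNSM.2025.3603391) selects resource allocations online to trade accuracy against cost. A UCB1 bandit over a handful of hypothetical (CPU, bandwidth) allocations, sketched below, illustrates the online-learning pattern; the arms and reward model are invented for illustration and are not the paper's formulation.

```python
# UCB1 sketch for online selection of a slice configuration.
import math
import random

# Hypothetical (CPU share, bandwidth share) candidate allocations for a slice.
ARMS = [(0.2, 0.2), (0.4, 0.3), (0.6, 0.5), (0.8, 0.7)]

def observe_reward(cpu: float, bw: float) -> float:
    """Noisy 'accuracy minus cost' signal; a stand-in for the real KPI."""
    accuracy = 1 - math.exp(-3 * min(cpu, bw))   # diminishing returns
    cost = 0.3 * (cpu + bw)
    return accuracy - cost + random.gauss(0, 0.02)

counts = [0] * len(ARMS)
means = [0.0] * len(ARMS)
for t in range(1, 2001):
    if 0 in counts:                              # play each arm once first
        a = counts.index(0)
    else:                                        # pick arm with highest UCB1 index
        a = max(range(len(ARMS)),
                key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
    r = observe_reward(*ARMS[a])
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]       # incremental mean update

best = max(range(len(ARMS)), key=lambda i: means[i])
print("chosen allocation:", ARMS[best], "estimated reward:", round(means[best], 3))
```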