Last updated: 2026-04-25 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jingyu Gan, Chen Guo, Chongxiang Yao | Construction and Post-Failure Reconstruction of Virtual Backbone Based on Regional Risk Difference in Wireless Sensor Networks | 2026 | Early Access | Broadcasting Broadcast technology Radio broadcasting Radio networks Communication systems Wireless sensor networks Computer networks Routing Wide area networks Network topology Wireless sensor network virtual backbone connected dominating set regional risk difference | In wireless sensor networks (WSNs), virtual backbones (VBs) are widely employed to address issues such as energy constraints and broadcast storms. WSNs are typically modeled as unit disk graphs (UDGs); a VB for data transmission is determined based on the construction of a connected dominating set (CDS) in the graph. Since sensor nodes may fail due to accidental damage or energy depletion, it is necessary to construct a CDS with fault tolerance. In practice, complex terrain, large altitude differences, and environmental perturbations caused by multiple factors mean that nodes in different regions of an application scenario often face markedly different failure risks. Based on this observation, we optimize the network structure by constructing different CDS types in regions with varying risk factors, introducing the concept of a regional risk difference connected dominating set (RRD-CDS) tailored for heterogeneous hazard levels. In this paper, we enhance network robustness by constructing a (k, m)-CDS in high-risk regions, while reducing the number of CDS nodes by building a global (1, 1)-CDS for other regions, thereby designing the RRD-CDS algorithm. When failures cause the RRD-CDS to lose its properties as a CDS, we design a reconstruction algorithm to restore the fault tolerance of the RRD-CDS. Simulation results verify the effectiveness of both the RRD-CDS construction algorithm and the RRD-CDS reconstruction algorithm. | 10.1109/TNSM.2026.3686606 |
| Yu Gu, Le Zhang, Yunyi Zhang, Ye Du | SatFedGuard: Semi-Supervised Federated Contrastive Learning with RL-Assisted Bidirectional Distillation for Anomaly Traffic Detection in Satellite Networks | 2026 | Early Access | Low earth orbit satellites Artificial satellites Payloads Jamming Electronic warfare Feeds Broadcasting Broadcast technology Filtering Filters Federated Learning Satellite Network Intrusion Detection Semi-Supervised Learning Edge-Cloud Collaboration | Federated learning-based intrusion detection methods for satellite networks enable model training without sharing local data, thereby ensuring network security while significantly reducing communication overhead. However, due to the difficulty of obtaining large-scale high-quality labeled data in satellite environments, a key challenge lies in how to train intrusion detection models using abundant unlabeled traffic data. We propose SatFedGuard, a semi-supervised federated contrastive learning approach for anomaly traffic detection in satellite networks. SatFedGuard effectively integrates unlabeled in-orbit data with labeled data from ground stations for model training. First, it models the unlabeled satellite traffic data using a contrastive learning framework. To address the challenge of non-IID data distribution, an attention-based dual-path aggregation strategy is designed to generate personalized models for each satellite by leveraging model similarities. Then, a bidirectional multi-granularity distillation method between larger and smaller models is implemented, where reinforcement learning is employed to optimize the weights of different loss terms dynamically. Experiments on two satellite network traffic datasets under non-IID settings demonstrate that the proposed method significantly improves anomaly detection performance while reducing dependence on in-orbit labeled data, achieving F1-Scores of 93.38% (↑11.63%) and 99.80% (↑8.72%), respectively. | 10.1109/TNSM.2026.3685416 |
| Zhenzhen Yan, Lizhi Peng, Peiqiang Liu, Yingshuo Bao, Bo Yang | NT-Transformer: A Non-Pretrained Encrypted Network Traffic Classification Model | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Antennas Motion pictures Communication systems Internet of Things Telecommunication traffic Computer networks encrypted network traffic classification Transformers byte representation uni-gram pre-training deep learning | Network traffic classification plays an indispensable role in network management, Quality of Service (QoS), and cybersecurity. As encryption techniques have become widely applied to network traffic, accurately classifying traffic into different management groups has become increasingly challenging. In recent years, pretrained Transformer-based models have been successfully applied in Natural Language Processing (NLP), and researchers have also introduced such models into encrypted network traffic analysis. However, despite the similarities between words in NLP and byte codes in network traffic, essential differences exist between them, which can render a pretrained model ineffective when applied to new traffic data. In this paper, we propose a non-pretrained encrypted network traffic classification model based on Transformer called NT-Transformer, which can directly learn labeled network traffic features at two levels of granularity, namely, byte level (uni-gram or bi-gram) and flow level (packet size and packet inter-arrival time), without the relatively expensive pre-training procedure on unlabeled data. This method is validated on three public datasets and three sets of recently collected network traffic data. Experimental results indicate that in some scenarios, pretrained models offer limited performance gains when applied to new encrypted network traffic data not encountered during pretraining, and NT-Transformer with uni-gram byte representation outperforms state-of-the-art models, raising the F1 score by 0.25%-2.24%. | 10.1109/TNSM.2026.3683410 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training Load management Packet loss Throughput Delays Topology Scheduling Telecommunication traffic Fluctuations Switches Datacenter Networks Load Balancing Data Security Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Alba Jano, Serkut Ayvaşik, Yash Deshpande, Wolfgang Kellerer | QUEST: User-Based Quality of Service Aware Uplink Resource Scheduling | 2026 | Early Access | Payloads Military aircraft Space technology Omnidirectional antennas Broadcasting Feedback Circuits Semiconductor lasers Central Processing Unit Semiconductor optical amplifiers Radio resource management quality of service user context user satisfaction energy efficiency IoTs | Efficient radio resource management (RRM) in 5G networks is increasingly challenged by the diverse quality of service (QoS) requirements of emerging applications and the growing uplink (UL) traffic from resource-constrained devices. Existing scheduling approaches often lack user- and service-specific context, limiting their ability to guarantee timely and energy-efficient data transmission, which is particularly critical for the internet of things (IoT) and mission-critical services. In this work, we introduce QUEST, a QoS-aware UL scheduling framework that exploits the 5G QoS model alongside network and device context to efficiently allocate radio resources. Designed and evaluated in an indoor factory environment, QUEST supports users with heterogeneous 5QI services under dynamic multi-user conditions. Evaluation results, validated through both real-world measurements and 3GPP-compliant simulations, show that QUEST consistently outperforms traditional channel- and QoS-aware schedulers. It improves QoS compliance, reduces packet drops and serving time, and enhances energy efficiency. For users with stringent QoS demands, measurements show a 13% increase in successfully transmitted packets and a 6.2% reduction in delay for 50% of transmissions, compared to the best-performing baseline. Benchmarking against an optimal scheduler shows that QUEST achieves the closest performance among baselines, while maintaining low complexity, making it a practical and scalable solution for 5G and beyond UL RRM. | 10.1109/TNSM.2026.3685537 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Md Arif Hassan, Bui Duc Manh, Cong T. Nguyen, Chi-Hieu Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Nguyen Van Huynh, Dusit Niyato | SBW 3.0: A Blockchain-Enabled Framework for Secure and Efficient Information Management in Web 3.0 | 2026 | Early Access | Jamming Protocols Semantic Web Smart contracts Consensus protocol Internet Communication systems Internet of Things Computer networks Web 2.0 Web 3.0 blockchain delegated proof-of-stake smart contract game theory non-cooperative game | In this paper, we propose an effective blockchain-enabled information management framework, named Smart Blockchain-based Web 3.0 (SBW 3.0). Our framework aims to handle information within Web 3.0 efficiently, enhance data security and privacy, create new revenue streams, and encourage users to contribute valuable information to websites. To this end, SBW 3.0 employs blockchain technology and smart contracts to manage the decentralized data collection in Web 3.0. Moreover, we introduce a robust consensus mechanism grounded in Delegated Proof-of-Stake (DPoS) to reward user contributions. Furthermore, we develop a non-cooperative game model to examine user behavior in this context and conduct a thorough analysis to prove the uniqueness of the Nash equilibrium in our proposed system. Through simulations, we evaluate the performance of SBW 3.0 and analyze the effects of various critical parameters on information contribution. Our results validate the theoretical analysis, showing that the proposed consensus mechanism successfully encourages nodes and users to provide more information, thus overcoming the current limitations of Web 3.0 regarding data decentralization and management. | 10.1109/TNSM.2026.3683881 |
| Li-Chin Siang, Wen-Hsing Kuo, Pei-Chieh Lin, Chih-Wei Huang, De-Nian Yang | FoV Prediction-Based Adaptive Streaming Mechanism for 6DoF Volumetric MR Applications in Multi-Base-Station Networks | 2026 | Early Access | Payloads Antennas Feeds Antennas and propagation Broadcasting Broadcast technology Kalman filters Filters Central Processing Unit Circuits and systems femto-cells resource allocation layer encoding 360-degree video streaming | The emergence of mixed reality (MR) as a significant application in mobile networks has garnered considerable attention. Wireless headsets enable unrestricted user movement within femtocell networks comprising numerous small base stations, offering a promising solution for MR applications. However, the complexity of these systems poses challenges in optimizing resource allocation across base stations. This paper proposes a novel resource allocation method for volumetric MR streaming in multi-base-station environments. The method consists of two phases. In the first, neural networks model and forecast users’ viewing directions. In the second, leveraging these predictions, their confidence levels, and layer characteristics, the algorithm adjusts video quality for each user and allocates transmission resources across base stations to optimize overall performance. Through comprehensive analysis, we prove that this novel problem is NP-hard and show that our approach achieves a performance within a bounded gap from the optimal solution. Simulation results reveal that our proposed algorithm outperforms existing techniques, enhancing aggregate performance across diverse scenarios. | 10.1109/TNSM.2026.3685670 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Faissal Ahmadou, Boubakr Nour, Makan Pourzandi, Mourad Debbabi, Chadi Assi | Automating Threat-Aligned Testflows Generation using Ontology-Grounded RAG from CTI Reports | 2026 | Early Access | Radio broadcasting Frequency modulation System-on-chip Filtering Circuits Feedback Filters Integrated circuits MIMICs Millimeter wave integrated circuits Cybersecurity Security Automation Testflow Generation Retrieval-Augmented Generation | The increasing sophistication and complexity of Advanced Persistent Threats (APTs) pose significant challenges to security practitioners. To proactively protect against these threats, security practitioners rely on the generation of testflows, structured sequences of actions designed to verify whether the tactics and behaviors of an APT are present within their organization. However, manually creating such testflows is time-consuming, error-prone, and highly dependent on expert knowledge. Moreover, existing automated approaches suffer from several limitations, including validity, efficiency, and insufficient domain adaptation. To address these challenges, this paper introduces CTI-RAGFlow to automate the generation of relevant, valid, and effective testflows from unstructured threat reports, tailored to specific organizational environments. CTI-RAGFlow introduces three key contributions: (i) a dual-ontology approach that integrates both a system ontology representing the operational environment and a cybersecurity ontology capturing adversary tactics, techniques, and procedures, improving the precision and accuracy of generated testflows; (ii) a fact-based context retrieval mechanism that combines a hypergraph structured knowledge base with a Retrieval-Augmented Generation pipeline using Large Language Models; and (iii) a fully automated testflow generation process that minimizes manual effort, reduces human error, and facilitates the generation of valid testflows. We evaluate CTI-RAGFlow against three widely used LLMs (base and fine-tuned models) using publicly available CTI reports for three well-known APTs (APT41, APT29, and APT28). The results show that CTI-RAGFlow outperforms the baselines in terms of semantic relevance, coverage, validity, and effectiveness in verifying multi-stage cyberattack scenarios. | 10.1109/TNSM.2026.3684808 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models' structural design on specific configurations limits their generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents KD-TimeLLM, a general model that applies TimeLLM to OTN failure management for performance parameter prediction across multiple equipment types. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, handling varying numbers of performance parameters, and exhibiting zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Guisong Yang, Yechao Huang, Panxing Huang, Xingyu He | A Distributed SDN Controller-Based Computing Framework for Effective in-orbit Computing | 2026 | Early Access | Low earth orbit satellites Artificial satellites Aerospace and electronic systems Telemetry Antennas Antennas and propagation Central Processing Unit Software defined networking Computer networks Communication systems Task Scheduling Software Defined Network Satellite Network Placement of SDN Controller | The rapid development of Low Earth Orbit (LEO) satellite networks has made in-orbit computing more feasible, offering a solution for processing real-time, diverse user tasks. Compared with traditional cloud computing in ground cloud computing centers, computing directly on LEO satellites can significantly reduce task-processing delay. However, challenges remain, including the limited sensing and computing capabilities of satellites, high delays in processing task requests, and frequent switching of control domains due to the relative movement between LEO satellites and nodes in other orbits. To address these challenges and improve task management, computing is treated as a Virtual Network Function (VNF), managed by Software-Defined Networking (SDN) controllers. This paper proposes a distributed SDN controller-based computing framework, where task information is forwarded to SDN controllers, which then use a task scheduling strategy to allocate tasks to suitable computing nodes for processing. To support the implementation of this framework, we first propose a heuristic SDN controller placement strategy that uses a tiling method to divide the LEO satellite network into SDN control domains and places the controller at the midpoint of each domain. Then, we propose a Double Deep Q-Network (DDQN) algorithm for in-orbit task scheduling, which adaptively optimizes the task scheduling strategy to minimize task-processing delay and ensure a high task completion rate. Finally, simulations are conducted in two parts to evaluate the framework. The first part validates the DDQN-based task scheduling strategy, achieving significant reductions in task-processing delay and improved task completion rates compared to conventional strategies. The second part assesses the impact of SDN control domain shape and size on task-processing delay, confirming domain size as the dominant factor influencing delay. | 10.1109/TNSM.2026.3685308 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely—using only Top-of-Rack (ToR) switches interconnected via static optical patch panels—to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties—namely throughput, congestion resilience, scalability, and cost—without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Xingqi Wu, Junaid Farooq, Juntao Chen | Multi-Agent Resource Orchestration Based on D3QN for Network Slicing in 5G Edge-Cloud Networks | 2026 | Vol. 23, Issue | Resource management Dynamic scheduling Network slicing Ultra reliable low latency communication Costs Training Topology Computational modeling Servers Real-time systems 5G network slicing resource orchestration MARL micro-services | Optimizing resource orchestration in network slicing is essential for the performance of diverse applications in 5G edge-cloud networks. This paper introduces a novel approach utilizing multi-agent reinforcement learning (MARL) with a dueling double deep Q-network (D3QN) to efficiently manage dynamic resource provisioning to the different traffic flows. We model a network slicing environment with applications generating stochastic resource demands, simulating real-world virtual network patterns over physical infrastructure. Our MARL-based scheme adapts to the varying needs of traffic flows, balancing compute and memory resource allocation under limited information. Comparative analysis demonstrates the superiority of our approach over traditional static methods, particularly for ultra-reliable low-latency communication (URLLC) traffic flows, by minimizing latency and enhancing resource efficiency. The effectiveness of the proposed framework is validated through extensive simulations, which demonstrate up to 45% higher average utility for URLLC traffic flows and 18% improvement in overall resource efficiency compared with baseline strategies. These results confirm that the framework can simultaneously ensure stringent service requirements and enhance system-wide performance in Next-Generation networks. | 10.1109/TNSM.2025.3643340 |
| Zilong Jin, Xin Zhang, Jian Su, Lejun Zhang, Jian Shen | Subgraph-Driven Lightweight Federated Learning for Spatiotemporal Cellular Traffic Prediction | 2026 | Vol. 23, Issue | Predictive models Spatiotemporal phenomena Data models Computational modeling Adaptation models Training Federated learning Correlation Accuracy Costs Cellular traffic prediction federated learning graph neural network | The rapid expansion of mobile communication networks has led to a surge in cellular traffic, highlighting the need for advanced prediction models to improve network performance. Federated learning (FL) offers a promising solution by enabling distributed model training across multiple nodes, aligning well with the decentralized nature of modern networks. However, applying FL to spatiotemporal cellular traffic prediction is challenging due to the substantial communication overhead in distributed learning. To address this, we propose LFedSG, a lightweight FL framework incorporating subgraph partitioning for spatiotemporal traffic prediction. LFedSG supports collaborative training while preserving inter-client dependencies critical for accurate prediction. Communication efficiency is achieved by focusing on essential model parameters, while subgraph partitioning and spatiotemporal graph convolutional networks (STGCN) enhance spatial and temporal correlation modeling. An adaptive transmission weight pruning strategy further reduces communication and computation costs. Extensive experiments on the Telecom Italia and Pems07 datasets demonstrate that LFedSG achieves higher predictive accuracy than traditional methods, with significant reductions in communication overhead and training time, validating its effectiveness and scalability for large-scale mobile network environments. | 10.1109/TNSM.2025.3645253 |
| Yali Yuan, Ruolin Ma, Jian Ge, Guang Cheng | Robust and Invisible Flow Watermarking With Invertible Neural Network for Traffic Tracking | 2026 | Vol. 23, Issue | Watermarking Decoding Feature extraction Correlation Robustness Encoding Delays Encryption Data mining Accuracy Flow watermarking inter-packet delay INN invisibility robustness | This paper introduces an innovative blind flow watermarking framework based on an Invertible Neural Network (INN), called IFW, which aims to solve the problem of suboptimal encoder-decoder coupling in existing end-to-end watermarking architectures. The framework tightly couples the encoder and decoder to achieve highly consistent feature mapping using the same parameters, thus effectively avoiding redundant feature embedding. In addition, this paper adopts the INN to implement watermarking, which supports forward encoding and backward decoding; watermark extraction depends entirely on the embedding algorithm, without requiring the original network flow. This enables both the embedding and the blind extraction of watermarks. Extensive experiments demonstrate that the proposed IFW method achieves a watermark extraction accuracy exceeding 96.6% and maintains a stable K-S test p-value above 0.85 in both simulated and real-world Tor traffic environments. These results indicate a clear advantage over mainstream baselines, highlighting the method’s ability to jointly ensure robustness and invisibility, as well as its strong potential for real-world deployment. | 10.1109/TNSM.2025.3645079 |
| Eduardo Castilho Rosa, Daniel Nunes Corujo, Flávio de Oliveira Silva | CoFIB: A Memory-Efficient NDN FIB Design for Programmable Edge Switches | 2026 | Vol. 23, Issue | Data structures Random access memory Pipelines Memory management Throughput Hardware Routing Filters Ribs Optimization FIB SDN P4 name lookup | Designing high-performance and memory-efficient data structures for the Forwarding Information Base (FIB) in Named-Data Networking (NDN) is a challenging task. Since the FIB size is orders of magnitude larger than IP routing tables, scaling it to store millions of prefixes in SRAM/TCAM memory in programmable switches is an open problem. To address this issue, we propose a Compressed FIB data structure called CoFIB, designed to run on edge programmable switches in an SDN-based environment. The CoFIB is a collection of exact-match tables placed in an optimized manner in both ingress and egress pipelines. We propose an LNPM algorithm that carefully recirculates packets in the pipeline. We also introduce the concept of canonical name prefixes to reduce memory footprint and propose an algorithm to extract canonical prefixes from the Routing Information Base (RIB). Experimental results show that CoFIB can compress memory by up to 16.58× compared to the state of the art, with no significant impact on throughput relative to hardware-based solutions. Additionally, our proposed table placement optimization for LNPM increases the number of packets processed at line rate by 23.17% compared to the linear table placement approach using a large NDN name dataset. | 10.1109/TNSM.2025.3641145 |
| Xi Xu, Yang Yang, Wei Huang, Songtao Guo, Guiyan Liu | VNF-FG Placement and Admission Control in SDN and NFV-Enabled IoT Networks: A Hierarchical Deep Reinforcement Learning Method | 2026 | Vol. 23, Issue | Admission control Feature extraction Virtual links Internet of Things Recurrent neural networks Heuristic algorithms Bandwidth Resource management Computational modeling Approximation algorithms Internet of Things network function virtualization deep reinforcement learning VNF-FG placement | Software Defined Networking (SDN) and Network Function Virtualization (NFV) are expected to provide greater flexibility and manageability for next-generation IoT networks. In this context, network services should be modeled as Virtual Network Function Forwarding Graphs (VNF-FGs). A key challenge is the efficient allocation of resources for sequentially arriving network service requests, a process known as VNF-FG placement. Most existing algorithms either manually or partially extract features from the physical network and VNF-FG or adopt a greedy approach, allocating resources as long as a feasible solution exists, which may over-allocate resources to VNF-FG requests, ultimately harming infrastructure providers’ long-term revenue. In this paper, we propose a VNF-FG placement and admission control algorithm based on hierarchical reinforcement learning, called EAC. It consists of two levels of agents: a coarse-level agent that generates placement strategies and rejects requests with no feasible placement strategy, and a refine-level agent that implements admission control and rejects requests that are detrimental to long-term revenue. To fully capture the topological features of both the physical network and the VNF-FG, we employ a customized Graph Attention Network (GAT) that incorporates link feature awareness and enables deeper exploration. To fully explore historical temporal information for admission control, we construct state triples and feed them into a Recurrent Neural Network (RNN). Using Proximal Policy Optimization (PPO) as the foundational training algorithm, the corresponding agents are trained hierarchically. Extensive experimental results demonstrate that the proposed EAC algorithm outperforms existing state-of-the-art solutions in terms of acceptance rate, revenue-to-cost ratio, and long-term average revenue. | 10.1109/TNSM.2025.3640927 |
| Koki Koshikawa, Yue Su, Jong-Deok Kim, Won-Joo Hwang, Zhetao Li, Kien Nguyen, Hiroo Sekiya | Impacts of Overlay Topologies and Peer Selection on Latencies in IoT Blockchain | 2026 | Vol. 23, Issue | Peer-to-peer computing Blockchains Topology Internet of Things Network topology Delays Security Reliability Overlay networks Propagation delay Ethereum overlay P2P proof-of-authority peer selection latency | The integration of blockchain with the Internet of Things (IoT) offers strong guarantees of data integrity and decentralized trust; however, latency remains a critical barrier to scalability. Under Ethereum’s default random peering, IoT deployments exhibit propagation delays ranging from 500 ms to 1000 ms, causing stale blocks and inconsistent state updates. This paper investigates the impact of peer-to-peer (P2P) overlay topologies on latency performance and introduces a lightweight peer-selection algorithm, Dual Perigee, designed to jointly optimize transaction-oriented latency (TOL) and block-oriented latency (BOL). We first develop a method to construct canonical overlay configurations (i.e., Erdős-Rényi, Barabási-Albert, and Random-Regular) and evaluate their influence on latency in a controlled IoT-blockchain environment. Experimental results reveal that static topologies fail to consistently minimize delay due to redundant message amplification and queuing effects. To address this, Dual Perigee extends the state-of-the-art Perigee algorithm by incorporating block propagation metrics into its scoring function while maintaining low computational overhead. In a 50-node Proof-of-Authority network emulated on Mininet-Wifi, Dual Perigee reduces TOL by up to 54.7% and BOL by 48.5% compared to Ethereum’s default peering, and outperforms Perigee by up to 23.4% in BOL. These findings demonstrate that latency-aware peer selection is essential for achieving responsive and scalable IoT-blockchain systems under dynamic network conditions. | 10.1109/TNSM.2025.3645139 |
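Several entries above lean on the same reinforcement-learning building blocks: the D3QN-based slicing orchestrator (Wu et al.) and the DDQN in-orbit scheduler (Yang et al.) both use Q-learning with a target network, and D3QN adds a dueling value/advantage decomposition on top of the double-DQN bootstrap. As a reference point only, the sketch below shows that core update in PyTorch; the layer sizes, the toy transition batch, and the hyperparameters are illustrative assumptions, not code from either paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantages A(s,a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

def d3qn_loss(online, target, batch, gamma=0.99):
    """Double-DQN target: the online net picks the action, the target net evaluates it."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1, keepdim=True)    # action selection
        q_next = target(s2).gather(1, a_star).squeeze(1)   # action evaluation
        y = r + gamma * (1.0 - done) * q_next
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.smooth_l1_loss(q, y)

# Toy usage with a random transition batch (shapes only; no real slicing environment).
state_dim, n_actions, batch_size = 8, 4, 32
online, target = DuelingQNet(state_dim, n_actions), DuelingQNet(state_dim, n_actions)
target.load_state_dict(online.state_dict())  # in practice, synced periodically
batch = (torch.randn(batch_size, state_dim),
         torch.randint(n_actions, (batch_size,)),
         torch.randn(batch_size),
         torch.randn(batch_size, state_dim),
         torch.zeros(batch_size))
print(d3qn_loss(online, target, batch).item())
```

Decoupling action selection from action evaluation is what distinguishes double DQN from vanilla DQN; the dueling heads are an orthogonal change to the network, which is why the two combine cleanly in D3QN.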
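The KD-TimeLLM entry (Guo et al.) compresses a large time-series forecaster into a small student via knowledge distillation, but the abstract does not spell out the objective. A common response-based recipe for regression, shown below as a hypothetical stand-in, blends imitation of the teacher's predictions with the ground-truth targets; the tiny MLPs, the window/horizon sizes, and the weight `alpha` are all assumptions.

```python
import torch
import torch.nn as nn

window, horizon = 24, 6  # assumed look-back and forecast lengths

# Stand-ins for the teacher (large, frozen) and student (small, trainable).
teacher = nn.Sequential(nn.Linear(window, 256), nn.ReLU(), nn.Linear(256, horizon))
student = nn.Sequential(nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, horizon))
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher stays fixed during distillation

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse, alpha = nn.MSELoss(), 0.5  # alpha trades teacher imitation vs. ground truth

for step in range(200):
    # Synthetic stand-in for performance-parameter windows and their future values.
    x = torch.randn(64, window)
    y = torch.randn(64, horizon)
    with torch.no_grad():
        y_teacher = teacher(x)  # soft targets from the teacher
    y_student = student(x)
    loss = alpha * mse(y_student, y_teacher) + (1 - alpha) * mse(y_student, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The parameter and inference-time reductions reported in the entry come from the size gap between teacher and student; the distillation term is what lets the small model recover most of the large model's accuracy.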
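The IFW entry (Yuan et al.) stresses that an invertible network lets the encoder and decoder share one set of parameters: decoding is literally the inverse pass of encoding. The affine coupling layer below (RealNVP-style) is the standard way to build such a bijection and is shown only to illustrate that property; IFW's actual architecture and the 8-dimensional toy input are not taken from the paper.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible layer: the same parameters drive encode (forward) and decode (inverse)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        # Predicts scale s and shift t for the second half from the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def _scale_shift(self, x1):
        s, t = self.net(x1).chunk(2, dim=1)
        return torch.tanh(s), t  # bound the log-scale for numerical stability

    def forward(self, x):   # encode
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self._scale_shift(x1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1)

    def inverse(self, y):   # decode: exact by construction, no decoder training needed
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self._scale_shift(y1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
print((layer.inverse(layer(x)) - x).abs().max().item())  # ~1e-7: near-perfect reconstruction
```

Because the inverse is exact by construction, there is no encoder-decoder mismatch to train away, which is the coupling problem the IFW abstract says end-to-end watermarking architectures suffer from.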