Last updated: 2026-03-21 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jing Gao, Lei Feng, Fanqin Zhou, Mianxiong Dong, Peng Yu, Kaoru Ota, Xuesong Qiu | Deterministic Delay-Aware Task Scheduling over In-Network Computing: A Graph Embedding-Based DRL Approach | 2026 | Early Access | | As the in-network computing (INC) paradigm evolves, efficient scheduling of dependent tasks within complex network systems becomes increasingly crucial. The network needs to handle high-level resource demands while adhering to strict latency requirements. Deterministic delay constraints are particularly critical in applications that rely on directed acyclic graphs (DAGs). To address this challenge, we first propose a deterministic delay-aware task scheduling optimization problem over INC to maximize resource utilization and ensure task acceptance. We accurately establish the complex deterministic delay constraint through traffic arrival and service curves and use network calculus to convert it into a tractable form. We then transform the task optimization problem into a Markov decision process (MDP) and develop a deep reinforcement learning (DRL) algorithm, called the Deterministic Delay-aware Task Scheduling (DDTS) scheme, that combines a graph neural network (GNN) with delay-aware proximal policy optimization (DPPO). It uses a multilayer GNN to handle task dependencies and applies the DPPO algorithm, which introduces deterministic delay penalty factors to evaluate policy actions, to achieve optimal task scheduling. The simulation results demonstrate the significant advantages of the DDTS scheme over existing algorithms and task scheduling schemes in terms of task acceptance rate and resource utilization. | 10.1109/TNSM.2026.3676106 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Log anomaly detection currently faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capability across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Shaohui Gong, Luohao Tang, Jianjiang Wang, Quan Chen, Cheng Zhu | A Key Node Set Analysis Method for Regional Service Denial in Mega-Constellation Networks | 2026 | Early Access | Satellites; Measurement; Analytical models; Robustness; Collaboration; Satellite constellations; Protection; Degradation; Correlation; Spatiotemporal phenomena; Mega-Constellation Networks; Regional Service; Service Denial; Key Node Set; Temporal Networks; Mixed-Integer Programming | Mega-constellation networks (MCNs) face significant threats from regional service denial attacks. To improve the robustness of regional services in MCNs against such attacks, a cost-effective approach is to identify key node sets for targeted protection efforts. This paper formally defines the key node set analysis problem for regional service denial in MCNs and develops a comprehensive solution framework. First, we develop a regional service capability analysis model that considers the dynamic collaboration of multiple satellites within regional communication service scenarios in MCNs, alongside a temporal network model for their collaborative relationships. Next, we design a multi-satellite criticality metric that quantifies the multi-dimensional impacts of satellite node set failures on regional service capabilities. Building on these, we construct a mixed-integer programming-based key node set analysis model to achieve precise identification of key node sets. Finally, simulation experiments are conducted to verify and analyze the proposed methods, providing insights to enhance the robustness of regional services in MCNs. | 10.1109/TNSM.2026.3672157 |
| Junyan Guo, Shuang Yao, Yue Song, Le Zhang, Xu Han, Liyuan Chang | EF-CPPA: Escrow-Free Conditional Privacy-Preserving Authentication Scheme for Real-Time Emergency Messages in Smart Grids | 2026 | Early Access | Authentication; Smart grids; Security; Privacy; Smart meters; Logic gates; Real-time systems; Vehicle dynamics; Time factors; Power system reliability; Smart grid; emergency message authentication; conditional privacy preservation; escrow-free key generation; unlinkability; dynamic joining and revocation | Timely and secure emergency message delivery is critical to resilient smart-grid operation and rapid disturbance response. However, existing schemes remain inadequate, leaving smart grids vulnerable to security and privacy threats and causing verification bottlenecks, particularly when nonlinear emergency measurements cannot be homomorphically aggregated, which prevents bandwidth-efficient in-network aggregation and scalable batch verification. We propose EF-CPPA, an escrow-free, conditional privacy-preserving authentication scheme for real-time emergency messaging in smart grids. EF-CPPA enables smart meters to deliver authenticated emergency messages to the control center (CC) via power gateways verifiable as legitimate relays, while ensuring the confidentiality, integrity, and unlinkability of embedded nonlinear measurements. EF-CPPA further provides conditional anonymity with accountable tracing, as well as origin authentication, intra-domain verification, and scalable batch verification under bursty multi-meter messaging. An ECDLP-based escrow-free key-generation mechanism reduces reliance on the CC and enables efficient node joining and revocation. Security analysis shows that EF-CPPA achieves existential unforgeability under chosen-message attacks (EUF-CMA) and satisfies the stated security and privacy requirements. Performance evaluation demonstrates low computational, communication, energy, and node-management overhead, making EF-CPPA suitable for security-critical, time-sensitive smart-grid emergency messaging. | 10.1109/TNSM.2026.3672754 |
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service; Resource management; Observability; Training; Delays; Job shop scheduling; Dynamic scheduling; Bandwidth; Vehicle dynamics; Thermal stability; Edge intelligence; network slicing; QoS-aware scheduling; graph attention networks; adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. | 10.1109/TNSM.2026.3673188 |
| Ying-Chin Chen, Chit-Jie Chew, Wei-Bin Lee, Iuon-Chang Lin, Jun-San Lee | IROVF: Industrial Role-Oriented Verification Framework for Safeguarding Manufacturing Line Deployment | 2026 | Early Access | Security; Manufacturing; Standards; Industrial Internet of Things; IEC Standards; Authentication; Computer crime; Smart manufacturing; Protocols; SCADA systems; Industrial role-oriented verification; production line deployment | Traditionally, industrial control systems operate in isolated networks with proprietary solutions. As smart factories and digital twins have become inevitable with AI advancement, the rapid adoption of Industrial Internet of Things (IIoT) devices has significantly increased cybersecurity risks. More precisely, the complexity of industrial environments, which includes production processes and device roles, creates substantial challenges for secure deployment. The authors introduce a bottom-up, industrial role-oriented verification framework (IROVF) for manufacturing line deployment. IROVF incorporates SCADA's MTU and RTU components, which are mapped to distinct device roles, providing authentication and least-privilege principles tailored to factory environments. The proposed framework includes an alarm strategy that detects and reports potential operational disruptions at runtime, minimizing the impact on system availability. Experimental results demonstrate the superior security coverage of the proposed framework compared to existing research, while a comprehensive application scenario validates its practical applicability. The scalable security parameters of IROVF allow organizations to select appropriate security levels based on their specific requirements. IROVF provides an effective security solution for modern industrial control systems during deployment phases. | 10.1109/TNSM.2026.3672975 |
| Pietro Spadaccino, Paolo Di Lorenzo, Sergio Barbarossa, Antonia M. Tulino, Jaime Llorca | SPARQ: An Optimization Framework for the Distribution of AI-Intensive Applications under Non-Linear Delay Constraints | 2026 | Early Access | Computational modeling; Delays; Resource management; Routing; Optimization; Load modeling; Graphics processing units; Microservice architectures; Cloud computing; Stochastic processes; Edge computing; service function chain; service graph; service placement; resource allocation; cloud network flow | Next-generation real-time compute-intensive applications, such as extended reality, multi-user gaming, and autonomous transportation, are increasingly composed of heterogeneous AI-intensive functions with diverse resource requirements and stringent latency constraints. While recent advances have enabled very efficient algorithms for joint service placement, routing, and resource allocation for increasingly complex applications, current models fail to capture the non-linear relationship between delay and resource usage that becomes especially relevant in AI-intensive workloads. In this paper, we extend the cloud network flow optimization framework to support queueing-delay-aware orchestration of distributed AI applications over edge-cloud infrastructures. We introduce two execution models, Guaranteed-Resource (GR) and Shared-Resource (SR), that more accurately capture how computation and communication delays emerge from system-level resource constraints. These models incorporate M/M/1 and M/G/1 queue dynamics to represent dedicated and shared resource usage, respectively. The resulting optimization problem is non-convex due to the non-linear delay terms. To overcome this, we develop SPARQ, an iterative approximation algorithm that decomposes the problem into two convex sub-problems, enabling joint optimization of service placement, routing, and resource allocation under nonlinear delay constraints. The modeling approach is validated against real-world data. Simulation results demonstrate that SPARQ not only offers a more faithful representation of system delays, but also substantially improves resource efficiency and the overall cost-delay tradeoff compared to existing state-of-the-art methods. | 10.1109/TNSM.2026.3673194 |
| Ebrima Jaw, Moritz Müller, Cristian Hesselman, Lambert Nieuwenhuis | Reproducibility Study and Assessment of the Evolution of Serial BGP Hijacking Events | 2026 | Early Access | Internet; Routing; Border Gateway Protocol; Routing protocols; Security; IP networks; Cloud computing; Autonomous systems; Authorization; Scalability; Border Gateway Protocol (BGP); Prefix hijacks; RPKI; Regional Internet Registries (RIR); Serial hijackers | The Border Gateway Protocol (BGP) is the Internet's most crucial protocol for efficient global connectivity and traffic routing. However, BGP is well known to be susceptible to route hijacks and leaks. Route hijacks are the intentional or unintentional illegitimate announcements of network resources that can compromise the confidentiality, integrity, and availability of communication systems. In the past, the so-called "serial hijackers" have hijacked Internet resources multiple times, some lasting for several months or years. So far, only the paper "Profiling BGP Serial Hijackers" has explicitly focused on these repeat offenders, and it dates back to 2019. Back then, its authors had to process large volumes of BGP announcements to find a few potential serial hijackers. In this paper, we revisit the profiling of serial hijackers. We reproduced the 2019 study and showed that we can identify potential offenders with less data while achieving similar accuracy. Our study confirms that there has been no significant increase in serial hijacking activity over the last five years. We then extend their research, further analyze the characteristics of the serial hijackers, and show that most of the alleged serial hijackers are still active on the Internet. We also find that 22.9% of the hijacks violated RPKI objects but were still widely propagated, and that even MANRS participants were among the propagating networks. | 10.1109/TNSM.2026.3671613 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Mohammed A. M. Ali, Liqiang Zhao, Luhan Wang, Kai Liang, Adnan A. O. Al-Awadhi, Heng Zhao, Guorong Zhou, Huda Ali, Ahmed Al-Tbali, Paolo Bellavista | Bottleneck-Based Deep Learning-Driven Resource Allocation in O-RAN | 2026 | Early Access | | With increasing demands for ultra-reliable, low-latency applications and next-generation network services, integrating artificial intelligence and machine learning (AI/ML) into Open Radio Access Network (O-RAN) components has become a critical research focus. However, realizing the full potential of AI/ML in O-RAN presents unresolved challenges due to the absence of system-level mechanisms for dynamic resource allocation and limited coordination among the functionally separated components. The paper addresses some of these challenges by proposing a bottleneck-based deep learning-driven resource allocation approach that employs a Gated Recurrent Unit (GRU)-based forecasting model to proactively identify and mitigate bottleneck resources, enabling the system to adapt to fluctuating user demands and varying network conditions, and guiding task reallocation through policy-driven decisions. Our approach combines the capabilities of the Non-Real-Time (Non-RT) and Near-Real-Time (Near-RT) RAN Intelligent Controllers (RICs) across the cloud-edge continuum. Since edge computing nodes often have limited resources and are more expensive compared to cloud infrastructure, components of the Near-RT RIC are deployed at the edge, while Non-RT RIC components are placed in the cloud. We implement this framework in both xApp and rApp forms, fully compliant with O-RAN specifications, and conduct extensive performance evaluations using real-world network data in an extended Kubernetes environment, demonstrating the integration of Near-RT RIC at the edge and Non-RT RIC in the cloud. Comprehensive performance evaluations conducted on the O-RAN Software Community (OSC) testbed demonstrate significant improvements in network efficiency, scalability, and latency, as the proposed approach significantly outperforms existing methods by reducing resource utilization by 14%–40%, reducing task delay by 21.6%–44.0%, and achieving an admittance ratio improvement ranging from 6.48% to 16.6% compared to other approaches. | 10.1109/TNSM.2026.3675573 |
| Masaki Oda, Imran Ahmed, Akio Kawabata, Eiji Oki | Optimistic Synchronization-Based Server Allocation with Preventive Start-Time Optimization Under Server Failure in Delay-Sensitive Applications | 2026 | Early Access | | Real-time applications require low latency and strict event ordering to ensure seamless operation. Distributed server processing is effective for this purpose, and there are two synchronization algorithms: a conservative synchronization algorithm (CSA) and an optimistic synchronization algorithm (OSA). OSA improves delay performance compared to CSA. While prior studies have considered OSA, they have not incorporated the impact of server failures. This paper proposes an OSA-based server allocation model for delay-sensitive applications with preventive start-time optimization (PreSO) under single-server failures (OSA-PreSO). The proposed OSA-PreSO model minimizes the largest total delay across all failure scenarios while satisfying constraints in OSA with PreSO under single-server failures. We formulate the proposed model as an integer linear programming (ILP) problem. In OSA-PreSO, the objective is to minimize the largest total delay across all failure scenarios, without giving special consideration to the total delay in the no-failure scenario. As a result, a penalty arises in the form of an increased total delay in the no-failure scenario. To reduce the penalty, we develop an improved OSA-PreSO model, OSA-PreSO-LP (low-penalty), which reduces the total delay in the no-failure scenario while maintaining the same delay characteristics in failure scenarios. We prove that the decision version of OSA-PreSO is NP-complete. We introduce heuristic algorithms to handle large-scale problems. Numerical results show that the proposed OSA-PreSO model reduces the delay compared to the conventional CSA-based model by effectively utilizing server memory resources. We observe that the proposed model achieves a lower largest total delay than start-time optimization and provides greater stability by preventing unnecessary user reassignments compared to run-time optimization. Numerical results also show that OSA-PreSO-LP reduces the penalty by up to 83% while maintaining the same delay characteristics in failure scenarios. | 10.1109/TNSM.2026.3676230 |
| Yanxu Lin, Renzhong Zhong, Jingnan Xie, Yueting Zhu, Byung-Gyu Kim, Saru Kumari, Shakila Basheer, Fatimah Alhayan | Privacy-Preserving Digital Publishing Framework for Next-Generation Communication Networks: A Verifiable Homomorphic Federated Learning Approach | 2026 | Early Access | Electronic publishing; Federated learning; Cryptography; Communication networks; Homomorphic encryption; Next generation networking; Complexity theory; Protocols; Privacy; Optimization; Digital publishing; federated learning; next-generation communication networks; Chinese remainder theorem | Next-generation communication networks are revolutionizing digital publishing through intelligent content distribution and collaborative optimization capabilities. However, existing federated learning approaches face fundamental limitations, including trusted third-party dependencies, excessive communication overhead, and vulnerability to collusion attacks between servers and participants. This paper introduces VHFL-DP, a verifiable homomorphic federated learning framework for digital publishing environments operating within 6G network infrastructures. The framework addresses critical privacy and scalability challenges through four key innovations: a distributed cryptographic key generation protocol that eliminates trusted third-party requirements, Chinese remainder theorem-based dimensionality reduction, auxiliary validation nodes that enable independent verification with constant-time complexity, and an intelligent incentive mechanism that rewards digital publishing platforms based on objective contribution quality metrics. Experimental evaluation on MNIST and Amazon reviews datasets across six baseline methods demonstrates that VHFL-DP achieves superior performance with accuracy improvements of 4.2% over the best baseline method. The framework maintains a near-constant verification time of 2.73 to 2.91 seconds as the platform count increases from ten to fifty and under dropout rates of up to thirty percent. Security evaluation reveals strong resilience, with only 2.4 percentage points of accuracy degradation under poisoning attacks compared to 6.7-7.0 points for baseline methods, inference attack success near random guessing at 51.3%, and 92.4% successful aggregation under Byzantine adversaries. | 10.1109/TNSM.2026.3667167 |
| Beibei Li | B-TWGA: A Trusted Gateway Architecture Based on Blockchain for Internet of Things | 2026 | Early Access | Internet of Things; Blockchains; Security; Hardware; Logic gates; Computer architecture; Sensors; Radiofrequency identification; Trust management; Middleware; Internet of Things; communication links; Blockchain-based Trustworthy Gateway Architecture | Internet of Things (IoT) terminals are commonly used for data sensing and edge control. The communication links between these hardware devices are critical points that are vulnerable to security attacks. Moreover, these links are usually composed of resource-constrained nodes that cannot implement strong security protections. To address these security threats, we introduce a Blockchain-based Trustworthy Gateway Architecture (B-TWGA), which does not rely on additional third-party management institutions or hardware facilities, nor does it require central control. Our proposal further considers the possibility of Denial of Service (DoS) attacks in blockchain transactions, ensuring secure storage and seamless interaction within the network. The proposed scheme offers advantages such as tamper-proofing, protection against malicious attacks, and reliability while maintaining operational simplicity. Experimental results demonstrate that B-TWGA maintains stable trust levels even when 40% of the network nodes are malicious, effectively mitigates trust degradation caused by vote-stuffing and switch attacks, and ensures high transaction processing performance, achieving an average throughput of 97.55% for storage transactions with practical response times below 0.7 s for typical trust file sizes. | 10.1109/TNSM.2026.3671208 |
| Suraj Kumar, Soumi Chattopadhyay, Chandranath Adak | Anomaly Resilient Temporal QoS Prediction using Hypergraph Convoluted Transformer Network | 2026 | Early Access | Quality of service; Accuracy; Transformers; Collaborative filtering; Matrix decomposition; Feature extraction; Tensors; Convolution; Computational modeling; Predictive models; Graph convolution; Hypergraph; Temporal QoS prediction; Transformer network | Quality-of-Service (QoS) prediction is a critical task in the service lifecycle, enabling precise and adaptive service recommendations by anticipating performance variations over time in response to evolving network uncertainties and user preferences. However, contemporary QoS prediction methods frequently encounter data sparsity and cold-start issues, which hinder accurate QoS predictions and limit the ability to capture diverse user preferences. Additionally, these methods often assume QoS data reliability, neglecting potential credibility issues such as outliers and the presence of greysheep users and services with atypical invocation patterns. Furthermore, traditional approaches fail to leverage diverse features, including domain-specific knowledge and complex higher-order patterns, essential for accurate QoS predictions. In this paper, we introduce a real-time, trust-aware framework for temporal QoS prediction to address the aforementioned challenges, featuring an end-to-end deep architecture called the Hypergraph Convoluted Transformer Network (HCTN). HCTN combines a hypergraph structure with graph convolution over hyper-edges to effectively address high-sparsity issues by capturing complex, high-order correlations. Complementing this, the transformer network utilizes multi-head attention along with parallel 1D convolutional layers and fully connected dense blocks to capture both fine-grained and coarse-grained dynamic patterns. Additionally, our approach includes a sparsity-resilient solution for detecting greysheep users and services, incorporating their unique characteristics to improve prediction accuracy. Trained with a robust loss function resistant to outliers, HCTN demonstrated state-of-the-art performance on the large-scale WSDREAM-2 datasets for response time and throughput. | 10.1109/TNSM.2026.3674650 |
| Junqing Wang, Lejun Zhang, Zhihong Tian, Kejia Zhang, Shen Su, Jing Qiu, Yanbin Sun, Ran Guo | 6Global: Dynamic IPv6 Active Address Scanning Assisted by Global Perspective | 2026 | Early Access | Clustering algorithms; Heuristic algorithms; 6G mobile communication; Accuracy; Privacy; Logic; Industrial control; Focusing; Feature extraction; Entropy; network measurement; target generation; IPv6 active address detection; dynamic scanning | Network scanning is crucial for both network management and cybersecurity. However, due to the vast address space of IPv6, brute-force scanning is infeasible. Seed-based target generation algorithms have recently attracted considerable research attention. However, existing target generation algorithms lack a deeper exploration of patterns, leading to poor capture of dense regions and consequently a low hit rate. To address this issue, we propose 6Global, a dynamic IPv6 active address scanning method assisted by a global perspective. 6Global first performs rapid clustering of seed addresses based on their descriptive attributes. Then, for each cluster, patterns are generated in a bottom-up manner based on entropy, using subranges to represent patterns and resulting in denser patterns. Finally, dynamic scanning is conducted using these patterns. During scanning, the reward of each pattern is dynamically adjusted based on its active density and global statistics, which enhances its capability to capture dense regions. Experimental results on six seed datasets show that 6Global outperforms seven baseline methods overall and demonstrates significant advantages across multiple datasets. | 10.1109/TNSM.2026.3674490 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training; Incremental learning; Adaptation models; Internet of Things; Convolutional neural networks; Reviews; Payloads; Network intrusion detection; Long short term memory; Federated learning; Network Intrusion Detection Systems; Internet of Things; Federated Learning; Class Incremental Learning; 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design and maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Jingyu Wang, Bo He, Jinyu Zhao, Yixin Xuan, Haifeng Sun, Qi Qi, Junzhe Liang, Zirui Zhuang, Jianxin Liao | LLM-powered Intent-driven Configuration Generation for Multi-vendor Networks | 2026 | Early Access | Syntactics; Codes; Manuals; Delays; Translation; Large language models; Adaptation models; Cross lingual; Multilingual; Decoding; Network Configuration; configuration generation; multi-vendor | Network configuration management has become increasingly complex, inefficient, and prone to errors due to frequent updates in command structures and the prevalence of multi-vendor network infrastructures. To tackle these challenges, this paper introduces a novel cognitive communication approach, formulating a new task called intent-driven multi-vendor network configuration generation. Within the broader intent-based networking lifecycle, this task specifically targets the realization and command generation stage—translating natural language operational intents into accurate and syntactically valid network commands compatible with multiple vendors, rather than addressing high-level intent interpretation or decomposition. Three primary challenges are addressed: syntactical command validity, vendor-specific syntax diversity, and outdated or inconsistent network knowledge. We propose ConfGen, a cognitive and intent-driven multi-vendor configuration generation framework that consists of two phases: vendor-agnostic syntax retrieval and syntax-constrained command generation. In the first phase, a cognitive retrieval mechanism and reranking strategy identify the most relevant syntax structures based on user intents, while vendor-specific syntax components are effectively generalized. The second phase employs a Large Language Model (LLM) guided by retrieved syntax constraints and user intents to generate precise and valid network commands. To ensure syntactical correctness and vendor compatibility, syntax-constrained decoding strategies are integrated into the LLM generation process. Extensive experimental evaluations conducted on a novel dataset containing network commands from Huawei, Cisco, Nokia, and Juniper demonstrate the superiority of ConfGen. Results confirm significant performance improvements over state-of-the-art solutions in generating accurate, multi-vendor-compatible network configurations driven by user intent. | 10.1109/TNSM.2026.3675409 |
| Lingling Zhang, Yuan Zhang | Hierarchical DRL Based Resource Allocation for Latency-Constrained Packet Dropping Rate Optimization in URLLC | 2026 | Early Access | Resource management; Ultra reliable low latency communication; Reliability; Error probability; Optimization; Decoding; Heuristic algorithms; Array signal processing; Throughput; Scalability; Ultra-reliable and low-latency communication; resource allocation; hierarchical deep reinforcement learning; imitation learning | Efficient resource allocation is important for realizing ultra-reliability and low latency in ultra-reliable and low-latency communication (URLLC) systems with massive numbers of users. In this paper, resource allocation in a multi-user URLLC system is studied to improve system reliability. A latency-constrained packet dropping rate (LCPDR), which jointly captures packet losses induced by transmission errors, timeout violations, and buffer overflows during packet delivery, is used to comprehensively measure end-to-end reliability. A joint optimization problem of subchannel, time slot, and power allocation is formulated to minimize the LCPDR under a maximum delay constraint. To reduce complexity and resource consumption when there are a large number of users, user grouping is proposed and the problem is reformulated as two subproblems on two timescales, i.e., user grouping on a long timescale and resource allocation within each group on a short timescale. To solve the subproblems, a three-tier hierarchical deep reinforcement learning (HDRL) architecture is proposed, in which a novel event-triggered dynamic regrouping mechanism is embedded into the long-timescale user grouping and short-timescale intra-group resource allocation tiers to dynamically migrate abnormal users between groups. A distance-aware proximal policy optimization (PPO) algorithm named H-PPO-IL, based on HDRL and imitation learning (IL), is proposed, in which IL is used to pretrain the PPO agents and accelerate convergence. Simulation results show that the proposed algorithm achieves a lower LCPDR and faster convergence than benchmark algorithms. | 10.1109/TNSM.2026.3675849 |
| Josef Koumar, Timotej Smoleň, Kamil Jeřábek, Tomáš Čejka | Comparative Analysis of Deep Learning Models for Real-World ISP Network Traffic Forecasting | 2026 | Vol. 23, Issue | Forecasting; Telecommunication traffic; Deep learning; Predictive models; Time series analysis; Measurement; Monitoring; Transformers; Analytical models; Smoothing methods; Neural networks; deep learning; network traffic forecasting; network traffic prediction; network monitoring | Accurate network traffic forecasting is crucial for Internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research. | 10.1109/TNSM.2025.3636557 |
| Shih-Chun Chien, You-Cheng Chang, Ming-Wei Su, Kate Ching-Ju Lin | Enabling Differentiated Monitoring for Sketch-Based Network Measurements | 2026 | Vol. 23, Issue | Accuracy; Monitoring; Pipelines; Resource management; Memory management; Probabilistic logic; Quality of service; Hardware; Data structures; Traffic control; Sketch-based measurements; multi-class monitoring; differentiated performance guarantee | With the advent of programmable switches, sketch-based measurements have become a powerful tool for traffic monitoring, offering high accuracy with minimal resource overhead. Conventional sketch designs provide uniform accuracy across all traffic classes, failing to address the diverse needs of different network applications. Recent efforts in priority-aware sketch-based measurements have enhanced accuracy for large flows. However, providing explicit guarantees for differentiated accuracy across multiple traffic categories under limited memory resources remains a challenge. To address this limitation, we introduce DiffSketch, a sketch-based measurement system that guarantees differential accuracy across traffic classes. DiffSketch employs a block-based biased hashing design, which dynamically adjusts block sizes and leverages biased hashing techniques to enable probabilistic block access. This design ensures that measurement accuracy aligns with operator-defined differentiation levels while optimizing memory usage. We implement DiffSketch on both bmv2 and Tofino switches, demonstrating that its block-based approach not only guarantees differential performance but also improves overall memory efficiency, reducing measurement errors compared to existing priority-aware solutions. | 10.1109/TNSM.2025.3636478 |
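
As background for the network-calculus step in the DDTS entry above (Gao et al.): with a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R(t − T)+, the classical worst-case delay bound is T + b/R whenever r ≤ R. The sketch below computes only this textbook bound; it is not code from the paper, and the parameter values in the example are illustrative.

```python
def nc_delay_bound(b: float, r: float, R: float, T: float) -> float:
    """Worst-case delay bound from deterministic network calculus.

    Arrival curve:  alpha(t) = b + r*t          (token bucket: burst b, rate r)
    Service curve:  beta(t)  = R * max(t - T, 0) (rate-latency server)
    The bound D <= T + b/R holds under the stability condition r <= R.
    """
    if r > R:
        raise ValueError("unstable: arrival rate r exceeds service rate R")
    return T + b / R

# Example: 2 Mb burst, 5 Mb/s arrival rate, 10 Mb/s server, 1 ms latency.
print(nc_delay_bound(b=2e6, r=5e6, R=10e6, T=1e-3))  # 0.201 s
```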
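
The SPARQ entry (Spadaccino et al.) grounds its Guaranteed-Resource and Shared-Resource execution models in M/M/1 and M/G/1 queue dynamics. The textbook mean-delay formulas behind those models are sketched below under the standard Poisson-arrival assumption, with the Pollaczek-Khinchine formula covering the M/G/1 case; this is background only, not the paper's optimization.

```python
def mm1_mean_delay(lam: float, mu: float) -> float:
    """Mean sojourn time of an M/M/1 queue: T = 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable queue: lambda must be < mu")
    return 1.0 / (mu - lam)


def mg1_mean_delay(lam: float, es: float, es2: float) -> float:
    """Mean sojourn time of an M/G/1 queue (Pollaczek-Khinchine):
    T = E[S] + lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S]."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be < 1")
    return es + lam * es2 / (2.0 * (1.0 - rho))


# 80 req/s offered to a server that drains 100 req/s (exponential service).
print(mm1_mean_delay(lam=80, mu=100))             # 0.05 s
# Same load, deterministic 10 ms service time (E[S^2] = E[S]^2 for constant S).
print(mg1_mean_delay(lam=80, es=0.01, es2=1e-4))  # 0.03 s
```

The gap between the two outputs illustrates why the service-time model matters: at the same utilization, deterministic service halves the queueing component relative to the exponential case.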
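
The dynamic hybrid RIS entry (Tashman and Cherkaoui) mentions a lightweight defense against reward poisoning built on reward clipping and statistical anomaly filtering. A generic realization of that idea is sketched below, assuming a sliding window and a z-score test; the window size, clip range, and threshold are illustrative choices, not values from the paper.

```python
from collections import deque
from statistics import fmean, pstdev
from typing import Optional


class RewardGuard:
    """Generic reward-poisoning mitigation for a DRL training loop:
    clip each reward to [-clip, clip], then drop samples that deviate
    from a sliding-window mean by more than k standard deviations."""

    def __init__(self, clip: float = 10.0, k: float = 3.0, window: int = 200):
        self.clip, self.k = clip, k
        self.history: deque = deque(maxlen=window)

    def filter(self, reward: float) -> Optional[float]:
        r = max(-self.clip, min(self.clip, reward))  # hard clipping
        if len(self.history) >= 30:  # wait until window statistics stabilize
            mean, std = fmean(self.history), pstdev(self.history)
            if std > 0 and abs(r - mean) > self.k * std:
                return None  # flagged as anomalous; caller skips the sample
        self.history.append(r)
        return r
```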
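
The 6Global entry (Wang et al.) generates address patterns bottom-up based on entropy. Entropy-guided IPv6 target generation typically starts from the per-nibble Shannon entropy of the seed set, computed as below; the seed addresses here are illustrative, and 6Global's clustering and reward adjustment are not reproduced.

```python
from collections import Counter
from math import log2


def nibble_entropy(addresses: list, position: int) -> float:
    """Shannon entropy of the hex nibble at one position across seed
    addresses (fully expanded IPv6, colons stripped, 32 nibbles each).
    Low entropy marks positions that are near-constant in a dense region."""
    counts = Counter(addr[position] for addr in addresses)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())


seeds = [
    "20010db8000000000000000000000001",
    "20010db8000000000000000000000002",
    "20010db8000000000000000000000003",
]
print(nibble_entropy(seeds, 0))   # 0.0    (nibble is always '2')
print(nibble_entropy(seeds, 31))  # ~1.585 (three distinct values)
```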
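
The DiffSketch entry (Chien et al.) extends sketch-based flow measurement. For orientation, a minimal count-min sketch, the canonical structure this line of work builds on, is shown below; DiffSketch's block-based biased hashing and per-class accuracy guarantees are not modeled.

```python
import hashlib


class CountMinSketch:
    """Minimal count-min sketch: depth rows of width counters.
    Estimates are one-sided: they can only overestimate true counts."""

    def __init__(self, width: int = 1024, depth: int = 4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, row: int, key: str) -> int:
        digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.rows[row][self._index(row, key)] += count

    def estimate(self, key: str) -> int:
        # The row least inflated by collisions gives the tightest bound.
        return min(self.rows[row][self._index(row, key)]
                   for row in range(self.depth))


cms = CountMinSketch()
for flow in ["10.0.0.1->10.0.0.2"] * 5 + ["10.0.0.3->10.0.0.4"] * 2:
    cms.add(flow)
print(cms.estimate("10.0.0.1->10.0.0.2"))  # 5 (exact here; >= truth in general)
```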