Last updated: 2026-03-25 05:01 UTC
All documents
Number of pages: 159
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Yanli Liu, Yue Pang, Yidi Wang, Shengnan Li, Jin Li, Min Zhang, Danshi Wang | Developing A Domain-Specific LLM for Optical Networks: A Reinforcement Learning-Based Fine-Tuning Framework | 2026 | Early Access | Optical fiber networks; Cognition; Accuracy; Location awareness; Reinforcement learning; Adaptation models; Semantics; Optimization; Maintenance; Training; Large language model; reinforcement learning from human feedback; reinforced fine-tuning; optical networks | Optical networks serve as the backbone of modern communication infrastructure, where efficient operation and maintenance (O&M) are essential for ensuring reliable and high-speed data services. However, traditional network O&M faces persistent challenges, including high labor costs, delayed response times, and difficulties in processing massive and complex network data. Although large language models (LLMs) have demonstrated strong capabilities in text understanding, generation, and reasoning, their direct application in optical network O&M is limited by domain-specific knowledge barriers, inherent reasoning biases, and insufficient performance in complex multi-step tasks. To address these issues, this study develops a domain-adaptation and system-implementation framework that applies two established reinforcement learning-based fine-tuning methods (RLHF and ReFT) to construct domain-specialized LLMs for optical network O&M tasks. In the context of log analysis, RLHF achieves improvements of 1.64 points in accuracy, 1.02 points in content richness, and a notable 10-point increase in interactivity over supervised fine-tuning. In alarm localization, ReFT achieves accuracy improvements of 2%–13% across four reasoning tasks. The extensive tests not only demonstrate the practical value of RL-based fine-tuning in enhancing alignment and reasoning for domain-specific applications, but also provide a practical methodology and implementation reference for applying reinforcement learning-based LLM adaptation in optical network O&M environments. | 10.1109/TNSM.2026.3676522 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Beibei Li | B-TWGA: A Trusted Gateway Architecture Based on Blockchain for Internet of Things | 2026 | Early Access | Internet of Things; Blockchains; Security; Hardware; Logic gates; Computer architecture; Sensors; Radiofrequency identification; Trust management; Middleware; Internet of Things; communication links; Blockchain-based Trustworthy Gateway Architecture | Internet of Things (IoT) terminals are commonly used for data sensing and edge control. The communication links between these hardware devices are critical points that are vulnerable to security attacks. Moreover, these links are usually composed of resource-constrained nodes that cannot implement strong security protections. To address these security threats, we introduce a Blockchain-based Trustworthy Gateway Architecture (B-TWGA), which does not rely on additional third-party management institutions or hardware facilities, nor does it require central control. Our proposal further considers the possibility of Denial of Service (DoS) attacks in blockchain transactions, ensuring secure storage and seamless interaction within the network. The proposed scheme offers advantages such as tamper-proofing, protection against malicious attacks, and reliability while maintaining operational simplicity. Experimental results demonstrate that B-TWGA maintains stable trust levels even when 40% of the network nodes are malicious, effectively mitigates trust degradation caused by vote-stuffing and switch attacks, and ensures high transaction processing performance, achieving an average throughput of 97.55% for storage transactions with practical response times below 0.7s for typical trust file sizes. | 10.1109/TNSM.2026.3671208 |
| Ebrima Jaw, Moritz Müller, Cristian Hesselman, Lambert Nieuwenhuis | Reproducibility Study and Assessment of the Evolution of Serial BGP Hijacking Events | 2026 | Early Access | Internet; Routing; Border Gateway Protocol; Routing protocols; Security; IP networks; Cloud computing; Autonomous systems; Authorization; Scalability; Border Gateway Protocol (BGP); Prefix hijacks; RPKI; Regional Internet Registries (RIR); Serial hijackers | The Border Gateway Protocol (BGP) is the Internet’s most crucial protocol for efficient global connectivity and traffic routing. However, BGP is well known to be susceptible to route hijacks and leaks. Route hijacks are the intentional or unintentional illegitimate announcements of network resources that can compromise the confidentiality, integrity, and availability of communication systems. In the past, the so-called “serial hijackers” have hijacked Internet resources multiple times, some lasting for several months or years. So far, only the paper “Profiling BGP Serial Hijackers” has explicitly focused on these repeat offenders, and it dates back to 2019. Back then, they had to process large amounts of BGP announcements to find a few potential serial hijackers. In this paper, we revisit the profiling of serial hijackers. We reproduced the 2019 study and showed that we can identify potential offenders with less data while achieving similar accuracy. Our study confirms that there has been no significant increase in the evolution of serial hijacking activities in the last five years. We then extend their research, further analyze the characteristics of the serial hijackers, and show that most of the alleged serial hijackers are still active on the Internet. We also find that 22.9% of the hijacks violated RPKI objects but were still widely propagated, and that even MANRS participants were among the propagating networks. | 10.1109/TNSM.2026.3671613 |
| Shaohui Gong, Luohao Tang, Jianjiang Wang, Quan Chen, Cheng Zhu | A Key Node Set Analysis Method For Regional Service Denial In Mega-Constellation Networks | 2026 | Early Access | Satellites; Measurement; Analytical models; Robustness; Collaboration; Satellite constellations; Protection; Degradation; Correlation; Spatiotemporal phenomena; Mega-Constellation Networks; Regional Service; Service Denial; Key Node Set; Temporal Networks; Mixed-Integer Programming | Mega-constellation networks (MCNs) face significant threats from regional service denial attacks. To improve the robustness of regional services in MCNs against such attacks, a cost-effective approach is to identify key node sets for targeted protection efforts. This paper formally defines the key node set analysis problem for regional service denial in MCNs and develops a comprehensive solution framework. First, we develop a regional service capability analysis model that considers the dynamic collaboration of multiple satellites within regional communication service scenarios in MCNs, alongside a temporal network model for their collaborative relationships. Next, we design a multi-satellite criticality metric that quantifies the multi-dimensional impacts of satellite node set failures on regional service capabilities. Building on these, we construct a mixed-integer programming-based key node set analysis model to achieve precise identification of key node sets. Finally, simulation experiments are conducted to verify and analyze the proposed methods, providing insights to enhance the robustness of regional services in MCNs. | 10.1109/TNSM.2026.3672157 |
| Junyan Guo, Shuang Yao, Yue Song, Le Zhang, Xu Han, Liyuan Chang | EF-CPPA: Escrow-Free Conditional Privacy-Preserving Authentication Scheme for Real-Time Emergency Messages in Smart Grids | 2026 | Early Access | Authentication; Smart grids; Security; Privacy; Smart meters; Logic gates; Real-time systems; Vehicle dynamics; Time factors; Power system reliability; Smart grid; emergency message authentication; conditional privacy preservation; escrow-free key generation; unlinkability; dynamic joining and revocation | Timely and secure emergency message delivery is critical to resilient smart-grid operation and rapid disturbance response. However, existing schemes remain inadequate, leaving smart grids vulnerable to security and privacy threats and causing verification bottlenecks, particularly when nonlinear emergency measurements cannot be homomorphically aggregated, which prevents bandwidth-efficient in-network aggregation and scalable batch verification. We propose EF-CPPA, an escrow-free, conditional privacy-preserving authentication scheme for real-time emergency messaging in smart grids. EF-CPPA enables smart meters to deliver authenticated emergency messages to the CC via power gateways verifiable as legitimate relays, while ensuring the confidentiality, integrity, and unlinkability of embedded nonlinear measurements. EF-CPPA further provides conditional anonymity with accountable tracing, as well as origin authentication, intra-domain verification, and scalable batch verification under bursty multi-meter messaging. An ECDLP-based escrow-free key-generation mechanism reduces reliance on the CC and enables efficient node joining and revocation. Security analysis shows that EF-CPPA achieves existential unforgeability under chosen-message attacks (EUF-CMA) and satisfies the stated security and privacy requirements. Performance evaluation demonstrates low computational, communication, energy, and node-management overhead, making EF-CPPA suitable for security-critical, time-sensitive smart-grid emergency messaging. | 10.1109/TNSM.2026.3672754 |
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service; Resource management; Observability; Training; Delays; Job shop scheduling; Dynamic scheduling; Bandwidth; Vehicle dynamics; Thermal stability; Edge intelligence; network slicing; QoS-aware scheduling; graph attention networks; adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. | 10.1109/TNSM.2026.3673188 |
| Ying-Chin Chen, Chit-Jie Chew, Wei-Bin Lee, Iuon-Chang Lin, Jun-San Lee | IROVF: Industrial Role-Oriented Verification Framework for safeguarding manufacture line deployment | 2026 | Early Access | Security; Manufacturing; Standards; Industrial Internet of Things; IEC Standards; Authentication; Computer crime; Smart manufacturing; Protocols; SCADA systems; Industrial role-oriented verification; production line deployment | Traditionally, industrial control systems operate in isolated networks with proprietary solutions. As smart factories and digital twins have become inevitable with AI advancement, the rapid adoption of Industrial Internet of Things (IIoT) devices has significantly increased cybersecurity risks. More precisely, the complexity of industrial environments, which includes production processes and device roles, creates substantial challenges for secure deployment. The authors introduce a bottom-up, industrial role-oriented verification framework (IROVF) for manufacturing line deployment. IROVF incorporates SCADA's MTU and RTU components, which are mapped to distinct device roles. This provides authentication and least-privilege principles that are tailored to factory environments. The proposed framework designs an alarm strategy, which can be helpful to detect and report potential operational disruptions during runtime, thus minimizing impact on system availability. Experimental results demonstrate the superior security coverage of the proposed framework compared to existing research, while a comprehensive application scenario validates its practical applicability. The scalable security parameters of IROVF allow organizations to select appropriate security levels based on their specific requirements. IROVF provides an effective security solution for modern industrial control systems during deployment phases. | 10.1109/TNSM.2026.3672975 |
| Suraj Kumar, Soumi Chattopadhyay, Chandranath Adak | Anomaly Resilient Temporal QoS Prediction using Hypergraph Convoluted Transformer Network | 2026 | Early Access | Quality of service; Accuracy; Transformers; Collaborative filtering; Matrix decomposition; Feature extraction; Tensors; Convolution; Computational modeling; Predictive models; Graph convolution; Hypergraph; Temporal QoS prediction; Transformer network | Quality-of-Service (QoS) prediction is a critical task in the service lifecycle, enabling precise and adaptive service recommendations by anticipating performance variations over time in response to evolving network uncertainties and user preferences. However, contemporary QoS prediction methods frequently encounter data sparsity and cold-start issues, which hinder accurate QoS predictions and limit the ability to capture diverse user preferences. Additionally, these methods often assume QoS data reliability, neglecting potential credibility issues such as outliers and the presence of greysheep users and services with atypical invocation patterns. Furthermore, traditional approaches fail to leverage diverse features, including domain-specific knowledge and complex higher-order patterns, essential for accurate QoS predictions. In this paper, we introduce a real-time, trust-aware framework for temporal QoS prediction to address the aforementioned challenges, featuring an end-to-end deep architecture called the Hypergraph Convoluted Transformer Network (HCTN). HCTN combines a hypergraph structure with graph convolution over hyper-edges to effectively address high-sparsity issues by capturing complex, high-order correlations. Complementing this, the transformer network utilizes multi-head attention along with parallel 1D convolutional layers and fully connected dense blocks to capture both fine-grained and coarse-grained dynamic patterns. Additionally, our approach includes a sparsity-resilient solution for detecting greysheep users and services, incorporating their unique characteristics to improve prediction accuracy. Trained with a robust loss function resistant to outliers, HCTN demonstrated state-of-the-art performance on the large-scale WSDREAM-2 datasets for response time and throughput. | 10.1109/TNSM.2026.3674650 |
| Jingyu Wang, Bo He, Jinyu Zhao, Yixin Xuan, Haifeng Sun, Qi Qi, Junzhe Liang, Zirui Zhuang, Jianxin Liao | LLM-powered Intent-driven Configuration Generation for Multi-vendor Networks | 2026 | Early Access | Syntactics; Codes; Manuals; Delays; Translation; Large language models; Adaptation models; Cross lingual; Multilingual; Decoding; Network Configuration; configuration generation; multi-vendor | Network configuration management has become increasingly complex, inefficient, and prone to errors due to frequent updates in command structures and the prevalence of multi-vendor network infrastructures. To tackle these challenges, this paper introduces a novel cognitive communication approach, formulating a new task called intent-driven multi-vendor network configuration generation. Within the broader intent-based networking lifecycle, this task specifically targets the realization and command generation stage, translating natural language operational intents into accurate and syntactically valid network commands compatible with multiple vendors, rather than addressing high-level intent interpretation or decomposition. Three primary challenges are addressed: syntactical command validity, vendor-specific syntax diversity, and outdated or inconsistent network knowledge. We propose ConfGen, a cognitive and intent-driven multi-vendor configuration generation framework that consists of two phases: vendor-agnostic syntax retrieval and syntax-constrained command generation. In the first phase, a cognitive retrieval mechanism and reranking strategy identify the most relevant syntax structures based on user intents, while vendor-specific syntax components are effectively generalized. The second phase employs a Large Language Model (LLM) guided by retrieved syntax constraints and user intents to generate precise and valid network commands. To ensure syntactical correctness and vendor compatibility, syntax-constrained decoding strategies are integrated into the LLM generation process. Extensive experimental evaluations conducted on a novel dataset containing network commands from Huawei, Cisco, Nokia, and Juniper demonstrate the superiority of ConfGen. Results confirm significant performance improvements over state-of-the-art solutions in generating accurate, multi-vendor-compatible network configurations driven by user intent. | 10.1109/TNSM.2026.3675409 |
| Masaki Oda, Imran Ahmed, Akio Kawabata, Eiji Oki | Optimistic Synchronization-Based Server Allocation with Preventive Start-Time Optimization Under Server Failure in Delay-Sensitive Applications | 2026 | Early Access | Servers; Delays; Resource management; Optimization; Numerical models; Synchronization; Computational modeling; Heuristic algorithms; Real-time systems; Network function virtualization; Server allocation; optimistic synchronization algorithm; preventive start-time optimization; server failure | Real-time applications require low latency and strict event ordering to ensure seamless operation. Distributed server processing is effective for this purpose, and there are two synchronization algorithms: a conservative synchronization algorithm (CSA) and an optimistic synchronization algorithm (OSA). OSA improves delay performance compared to CSA. While prior studies have considered OSA, they have not incorporated the impact of server failures. This paper proposes an OSA-based server allocation model for delay-sensitive applications with preventive start-time optimization (PreSO) under single-server failures (OSA-PreSO). The proposed OSA-PreSO model minimizes the largest total delay across all failure scenarios while satisfying constraints in OSA with PreSO under single-server failures. We formulate the proposed model as an integer linear programming (ILP) problem. In OSA-PreSO, the objective is to minimize the largest total delay across all failure scenarios, without giving special consideration to the total delay in the no-failure scenario. As a result, a penalty arises in the form of an increased total delay in the no-failure scenario. To reduce the penalty, we develop an improved OSA-PreSO model, OSA-PreSO-LP (low-penalty), which reduces the total delay in the no-failure scenario while maintaining the same delay characteristics in failure scenarios. We prove that the decision version of OSA-PreSO is NP-complete. We introduce heuristic algorithms to handle large-scale problems. Numerical results show that the proposed OSA-PreSO model reduces the delay compared to the conventional CSA-based model by effectively utilizing server memory resources. We observe that the proposed model achieves a lower largest total delay than start-time optimization and provides greater stability by preventing unnecessary user reassignments compared to run-time optimization. Numerical results also show that OSA-PreSO-LP reduces the penalty by up to 83%, while maintaining the same delay characteristics in failure scenarios. | 10.1109/TNSM.2026.3676230 |
| Jing Gao, Lei Feng, Fanqin Zhou, Mianxiong Dong, Peng Yu, Kaoru Ota, Xuesong Qiu | Deterministic Delay-Aware Task Scheduling over In-Network Computing: A Graph Embedding-Based DRL Approach | 2026 | Early Access | Delays; Processor scheduling; Resource management; Dynamic scheduling; Optimal scheduling; Calculus; Graph neural networks; Computational modeling; Reinforcement learning; Scheduling algorithms; Task scheduling; Network calculus; Deterministic latency; DRL | As the in-network computing (INC) paradigm evolves, efficient scheduling of dependent tasks within complex network systems becomes increasingly crucial. The network needs to handle high-level resource demands while adhering to strict latency requirements. Deterministic delay constraints are particularly critical in applications that rely on directed acyclic graphs (DAGs). To address this challenge, we first propose a deterministic delay-aware task scheduling optimization problem over INC to maximize resource utilization and ensure task acceptance. We accurately establish the complex deterministic delay constraint through traffic arrival and service curves and utilize network calculus for conversion to facilitate solving. Then, we further transform the task optimization problem into a Markov decision process (MDP) and develop a deep reinforcement learning (DRL) algorithm that combines a graph neural network (GNN) and delay-aware proximal policy optimization (DPPO) to solve it, called the Deterministic Delay-aware Task Scheduling (DDTS) scheme. It utilizes a multilayer GNN to handle task dependencies and applies the DPPO algorithm to introduce deterministic delay penalty factors to evaluate policy operations, achieving optimal task scheduling. The simulation results demonstrate the significant advantages of the DDTS scheme over existing algorithms and task scheduling schemes in terms of task acceptance rate and resource utilization. | 10.1109/TNSM.2026.3676106 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training; Incremental learning; Adaptation models; Internet of Things; Convolutional neural networks; Reviews; Payloads; Network intrusion detection; Long short term memory; Federated learning; Network Intrusion Detection Systems; Internet of Things; Federated Learning; Class Incremental Learning; 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover the attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design & maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Weichao Ding, Zhou Zhou, Qi Min, Fei Luo, Wenbo Dong, Hengrun Zhang | VDSV: Client Selection in Federated Learning Based on Value Density and Secondary Verification | 2026 | Vol. 23, Issue | Convergence; Training; Data models; Servers; Distributed databases; Analytical models; Interference; Federated learning; Costs; Artificial intelligence; Federated learning; client selection; data heterogeneity; value density; secondary verification | Client selection has been widely considered in Federated Learning (FL) to reduce communication overhead while ensuring proper convergence performance. Due to data heterogeneity in FL, a representative subset of participants should take into account both intra- and inter-client diversity. While existing works usually emphasize one of them, this paper proposes a VDSV (client selection based on Value Density and Secondary Verification) framework, which optimizes the client selection strategy from both sides. Therein, intra- and inter-client diversity are respectively measured based on a designed client data score as well as gradient distance and direction. Afterwards, a client selection model is established based on a proposed metric, called client value density. Besides, a secondary validation method is developed to dynamically tweak the current client selection and model aggregation strategies. The general idea of the above design is based on the theoretical convergence analysis and the observation that the client contribution to the global model can get changed throughout the learning process. The experimental results demonstrate that VDSV can achieve higher convergence rates and ensure comparable model performance. Specifically, our method can reduce the communication rounds by an average of 37.88%, which saves noticeable communication overhead. | 10.1109/TNSM.2025.3636990 |
| Josef Koumar, Timotej Smoleň, Kamil Jeřábek, Tomáš Čejka | Comparative Analysis of Deep Learning Models for Real-World ISP Network Traffic Forecasting | 2026 | Vol. 23, Issue | Forecasting; Telecommunication traffic; Deep learning; Predictive models; Time series analysis; Measurement; Monitoring; Transformers; Analytical models; Smoothing methods; Neural networks; deep learning; network traffic forecasting; network traffic prediction; network monitoring | Accurate network traffic forecasting is crucial for Internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research. | 10.1109/TNSM.2025.3636557 |
| Shih-Chun Chien, You-Cheng Chang, Ming-Wei Su, Kate Ching-Ju Lin | Enabling Differentiated Monitoring for Sketch-Based Network Measurements | 2026 | Vol. 23, Issue | Accuracy; Monitoring; Pipelines; Resource management; Memory management; Probabilistic logic; Quality of service; Hardware; Data structures; Traffic control; Sketch-based measurements; multi-class monitoring; differentiated performance guarantee | With the advent of programmable switches, sketch-based measurements have become a powerful tool for traffic monitoring, offering high accuracy with minimal resource overhead. Conventional sketch designs provide uniform accuracy across all traffic classes, failing to address the diverse needs of different network applications. Recent efforts in priority-aware sketch-based measurements have enhanced accuracy for large flows. However, providing explicit guarantees for differentiated accuracy across multiple traffic categories under limited memory resources remains a challenge. To address this limitation, we introduce DiffSketch, a sketch-based measurement system that guarantees differential accuracy across traffic classes. DiffSketch employs a block-based biased hashing design, which dynamically adjusts block sizes and leverages biased hashing techniques to enable probabilistic block access. This design ensures that measurement accuracy aligns with operator-defined differentiation levels while optimizing memory usage. We implement DiffSketch on both bmv2 and Tofino switches, demonstrating that its block-based approach not only guarantees differential performance but also improves overall memory efficiency, reducing measurement errors compared to existing priority-aware solutions. | 10.1109/TNSM.2025.3636478 |
| Yaqing Zhu, Liquan Chen, Suhui Liu, Bo Yang, Shang Gao | Blockchain-Based Lightweight Key Management Scheme for Secure UAV Swarm Task Allocation | 2026 | Vol. 23, Issue | Autonomous aerial vehicles Encryption Protocols Resource management Receivers Controllability Dynamic scheduling Blockchains Authentication Vehicle dynamics Lightweight certificateless pairing-free key management UAV swarm task allocation | Uncrewed Aerial Vehicle (UAV) swarms are a cornerstone technology in the rapidly growing low-altitude economy, with significant applications in logistics, smart cities, and emergency response. However, their deployment is constrained by challenges in secure communication, dynamic group coordination, and resource constraints. Although various cryptographic techniques exist, efficient and scalable group key management remains critical for secure task allocation in UAV swarms. Existing group key agreement schemes, both symmetric and asymmetric, often fail to adequately address these challenges due to their reliance on centralized control, high computational overhead, sender restrictions, and insufficient protection against physical attacks. To address these issues, we propose PCDCB (Pairing-free Certificateless Dynamic Contributory Broadcast encryption), a blockchain-assisted lightweight key management scheme designed for UAV swarm task allocation. PCDCB is particularly suitable for swarm operations as it supports efficient one-to-many broadcast of task commands, enables dynamic node join/leave, and eliminates key escrow by combining certificateless cryptography with Physical Unclonable Functions (PUFs) for hardware-bound key regeneration. Blockchain is used to maintain tamper-resistant update tables and ensure auditability, while a privacy-preserving mechanism with pseudonyms and a round mapping table provides task anonymity and unlinkability. Comprehensive security analysis confirms that PCDCB is secure and resistant to multiple attacks. Performance evaluation shows that, in large-scale swarm scenarios (n = 100), PCDCB reduces the cost of group key computation by 54.4% (up to 96.9%) and reduces the time to generate the decryption keys by at least 29.7%. In addition, PCDCB achieves the lowest communication cost among all compared schemes and demonstrates strong scalability with increasing group size. | 10.1109/TNSM.2025.3636562 |
| Yang Liu, Wenjun Zhu, Harry Chang, Yang Hong, Geoff Langdale, Kun Qiu, Jin Zhao | Hyperflex: A SIMD-Based DFA Model for Deep Packet Inspection | 2026 | Vol. 23, Issue | Single instruction multiple data Vectors Engines Automata Inspection Payloads Throughput Memory management Compression algorithms Software algorithms Deep packet inspection regular expression deterministic finite automata | Deep Packet Inspection (DPI) has been extensively employed for network security. It examines traffic payloads by searching for regular expressions (regex) with the Deterministic Finite Automaton (DFA) model. However, as network bandwidth and ruleset sizes increase rapidly, the conventional DFA model has emerged as a significant performance bottleneck of DPI. Leveraging Single-Instruction-Multiple-Data (SIMD) instructions to perform state transitions can substantially boost the efficiency of the DFA model. In this paper, we propose Hyperflex, a novel SIMD-based DFA model designed for high-performance regex matching. Hyperflex incorporates a region detection algorithm to identify regions suitable for acceleration by SIMD instructions across the whole DFA graph. We also design a hybrid state transition algorithm that enables state transitions in both SIMD-accelerated and normal regions, and ensures seamless state transitions across the two types of regions. We have implemented Hyperflex on a commodity CPU and evaluated it with real network traffic and DPI regexes. Our evaluation results indicate that Hyperflex reaches a throughput of 8.89 Gbit/s, representing an improvement of up to 2.27 times over Mcclellan, the default DFA model of the prominent multi-pattern regex matching engine Hyperscan. As a result, Hyperflex has been successfully deployed in Hyperscan, significantly enhancing its performance. | 10.1109/TNSM.2025.3636946 |
| Massimo Tornatore, Teresa Gomes, Carmen Mas-Machuca, Eiji Oki, Chadi Assi, Dominic Schupke | Guest Editors’ Introduction: Special Issue on Resilient Communication Networks for an Hyper-Connected World | 2026 | Vol. 23, Issue | Special issues and sections Privacy Wireless networks Optical fiber networks Cyber-physical systems Communication networks Security Next generation networking Resilience Resilience network reliability IoT NFV SFC optical networks wireless networks network management privacy blockchain | This Special Issue contains a set of remarkable papers covering recent research advances toward resilient communication networks for a hyper-connected world. Papers are organized into five categories: (i) Resilient Architectures for Next-Generation Networks, (ii) Edge, IoT, and Cyber-Physical Systems, (iii) Vehicular, Mobile, and Aerial Networks, (iv) Optical, Hybrid, and Satellite-based Resilient Communications, and (v) Security, Trust, and Resilience in Services and Applications. The editorial begins with an overview of the field and proceeds with a summary of the twenty-two papers included in this Special Issue. | 10.1109/TNSM.2025.3620249 |