Last updated: 2026-03-26 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service; Resource management; Observability; Training; Delays; Job shop scheduling; Dynamic scheduling; Bandwidth; Vehicle dynamics; Thermal stability; Edge intelligence; network slicing; QoS-aware scheduling; graph attention networks; adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. (An illustrative sketch of the top-K attention masking appears after this table.) | 10.1109/TNSM.2026.3673188 |
| Suraj Kumar, Soumi Chattopadhyay, Chandranath Adak | Anomaly Resilient Temporal QoS Prediction using Hypergraph Convoluted Transformer Network | 2026 | Early Access | Quality of service; Accuracy; Transformers; Collaborative filtering; Matrix decomposition; Feature extraction; Tensors; Convolution; Computational modeling; Predictive models; Graph convolution; Hypergraph; Temporal QoS prediction; Transformer network | Quality-of-Service (QoS) prediction is a critical task in the service lifecycle, enabling precise and adaptive service recommendations by anticipating performance variations over time in response to evolving network uncertainties and user preferences. However, contemporary QoS prediction methods frequently encounter data sparsity and cold-start issues, which hinder accurate QoS predictions and limit the ability to capture diverse user preferences. Additionally, these methods often assume QoS data reliability, neglecting potential credibility issues such as outliers and the presence of greysheep users and services with atypical invocation patterns. Furthermore, traditional approaches fail to leverage diverse features, including domain-specific knowledge and complex higher-order patterns, essential for accurate QoS predictions. In this paper, we introduce a real-time, trust-aware framework for temporal QoS prediction to address the aforementioned challenges, featuring an end-to-end deep architecture called the Hypergraph Convoluted Transformer Network (HCTN). HCTN combines a hypergraph structure with graph convolution over hyper-edges to effectively address high-sparsity issues by capturing complex, high-order correlations. Complementing this, the transformer network utilizes multi-head attention along with parallel 1D convolutional layers and fully connected dense blocks to capture both fine-grained and coarse-grained dynamic patterns. Additionally, our approach includes a sparsity-resilient solution for detecting greysheep users and services, incorporating their unique characteristics to improve prediction accuracy. Trained with a robust loss function resistant to outliers, HCTN demonstrated state-of-the-art performance on the large-scale WSDREAM-2 datasets for response time and throughput. | 10.1109/TNSM.2026.3674650 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. (A sketch of such a reward-sanitization step follows this table.) | 10.1109/TNSM.2026.3660728 |
| Basharat Ali, Guihai Chen | MIRAGE-DoH: Metamorphic Intelligence and Resilient AI Grid for Autonomous Governance of Encrypted DNS | 2026 | Early Access | | Existing DNS over HTTPS defenses have demonstrated limited resilience against polymorphic traffic shaping, staged tunneling, and adaptive mimicry, largely because they rely on static learning pipelines and rigid cryptographic configurations. MIRAGE-DoH was designed to examine whether adaptive inference, persistent structural encoding, and calibrated cryptographic agility could be integrated into a deployable and measurable encrypted DNS control architecture. The framework combined flow-level Cognitive MetaAgents capable of internal reconfiguration, Topological Memory Networks that preserved stable geometric irregularities across temporal windows, and Metamorphic Cryptographic Shards that adjusted key encapsulation policies according to empirically calibrated threat severity. A Causal Counterfactual Environment modeled constrained attacker decision pathways, while Spectral Game Intelligence analyzed flow interaction graphs to anticipate structural attack transitions. Evaluation on extended CIC-DoH2023 and Gen-C-DDD-2022 datasets was conducted under fixed flow-level decision intervals with explicit accounting for synchronization overhead, spectral graph construction cost, and cryptographic rotation latency. Cross-dataset experiments yielded a mean detection accuracy of 97.8% with a 0.41% false positive rate, sustaining median inference latency of 62 μs and cryptographic morph latency of 3.7 ms under load. Quantum-assisted inference was assessed through bounded simulations, indicating constrained information gain within the adopted lattice-based configuration, without asserting unconditional post-quantum immunity. These results demonstrated that adaptive encrypted DNS governance can be empirically grounded, operationally bounded, and stress-evaluated without reliance on unqualified claims of perfect security. | 10.1109/TNSM.2026.3677474 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training; Incremental learning; Adaptation models; Internet of Things; Convolutional neural networks; Reviews; Payloads; Network intrusion detection; Long short term memory; Federated learning; Network Intrusion Detection Systems; Internet of Things; Federated Learning; Class Incremental Learning; 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered in different collection points to cover the attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design & maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Jingyu Wang, Bo He, Jinyu Zhao, Yixin Xuan, Haifeng Sun, Qi Qi, Junzhe Liang, Zirui Zhuang, Jianxin Liao | LLM-powered Intent-driven Configuration Generation for Multi-vendor Networks | 2026 | Early Access | Syntactics; Codes; Manuals; Delays; Translation; Large language models; Adaptation models; Cross lingual; Multilingual; Decoding; Network Configuration; configuration generation; multi-vendor | Network configuration management has become increasingly complex, inefficient, and prone to errors due to frequent updates in command structures and the prevalence of multi-vendor network infrastructures. To tackle these challenges, this paper introduces a novel cognitive communication approach, formulating a new task called intent-driven multi-vendor network configuration generation. Within the broader intent-based networking lifecycle, this task specifically targets the realization and command generation stage—translating natural language operational intents into accurate and syntactically valid network commands compatible with multiple vendors, rather than addressing high-level intent interpretation or decomposition. Three primary challenges are addressed: syntactical command validity, vendor-specific syntax diversity, and outdated or inconsistent network knowledge. We propose ConfGen, a cognitive and intent-driven multi-vendor configuration generation framework that consists of two phases: vendor-agnostic syntax retrieval and syntax-constrained command generation. In the first phase, a cognitive retrieval mechanism and reranking strategy identify the most relevant syntax structures based on user intents, while vendor-specific syntax components are effectively generalized. The second phase employs a Large Language Model (LLM) guided by retrieved syntax constraints and user intents to generate precise and valid network commands. To ensure syntactical correctness and vendor compatibility, syntax-constrained decoding strategies are integrated into the LLM generation process. Extensive experimental evaluations conducted on a novel dataset containing network commands from Huawei, Cisco, Nokia, and Juniper demonstrate the superiority of ConfGen. Results confirm significant performance improvements over state-of-the-art solutions in generating accurate, multi-vendor-compatible network configurations driven by user intent. (A toy example of syntax-constrained decoding follows this table.) | 10.1109/TNSM.2026.3675409 |
| Henghua Zhang, Jue Chen, Haidong Peng, Junru Chen | MAT4PM: Machine Learning-Guided Adaptive Threshold Control for P4-based Monitoring in SDNs | 2026 | Early Access | | This paper presents MAT4PM, a P4-based proactive monitoring framework designed for Software-Defined Networking (SDN). This is the first monitoring framework that combines Programmable Data Plane (PDP) capabilities for event-driven data collection with control plane intelligence for real-time threshold optimization. The architecture consists of a lightweight P4-based monitoring module deployed at the switch, a Machine Learning (ML) inference engine running at the controller, and a P4Runtime feedback channel for real-time threshold updates. Traffic features are leveraged to predict optimal monitoring thresholds, which are then synchronized with the data plane. A composite cost function is introduced to jointly consider monitoring error and communication overhead, guiding the model toward a balanced trade-off between accuracy and efficiency. Experimental evaluation on BMv2 software switches demonstrates that, compared to static threshold strategies, MAT4PM reduces monitoring error to 7.0% and achieves a 5.6% reduction in overall cost, while maintaining sub-millisecond inference latency and minimal resource consumption. These results demonstrate the practical viability and scalability of MAT4PM in SDN environments. (A sketch of such a composite cost follows this table.) | 10.1109/TNSM.2026.3677416 |
| Junyan Guo, Shuang Yao, Yue Song, Le Zhang, Xu Han, Liyuan Chang | EF-CPPA: Escrow-Free Conditional Privacy-Preserving Authentication Scheme for Real-Time Emergency Messages in Smart Grids | 2026 | Early Access | Authentication; Smart grids; Security; Privacy; Smart meters; Logic gates; Real-time systems; Vehicle dynamics; Time factors; Power system reliability; Smart grid; emergency message authentication; conditional privacy preservation; escrow-free key generation; unlinkability; dynamic joining and revocation | Timely and secure emergency message delivery is critical to resilient smart-grid operation and rapid disturbance response. However, existing schemes remain inadequate, leaving smart grids vulnerable to security and privacy threats and causing verification bottlenecks, particularly when nonlinear emergency measurements cannot be homomorphically aggregated, which prevents bandwidth-efficient in-network aggregation and scalable batch verification. We propose EF-CPPA, an escrow-free, conditional privacy-preserving authentication scheme for real-time emergency messaging in smart grids. EF-CPPA enables smart meters to deliver authenticated emergency messages to the control center (CC) via power gateways verifiable as legitimate relays, while ensuring the confidentiality, integrity, and unlinkability of embedded nonlinear measurements. EF-CPPA further provides conditional anonymity with accountable tracing, as well as origin authentication, intra-domain verification, and scalable batch verification under bursty multi-meter messaging. An ECDLP-based escrow-free key-generation mechanism reduces reliance on the CC and enables efficient node joining and revocation. Security analysis shows that EF-CPPA achieves existential unforgeability under chosen-message attacks (EUF-CMA) and satisfies the stated security and privacy requirements. Performance evaluation demonstrates low computational, communication, energy, and node-management overhead, making EF-CPPA suitable for security-critical, time-sensitive smart-grid emergency messaging. | 10.1109/TNSM.2026.3672754 |
| Masaki Oda, Imran Ahmed, Akio Kawabata, Eiji Oki | Optimistic Synchronization-Based Server Allocation with Preventive Start-Time Optimization Under Server Failure in Delay-Sensitive Applications | 2026 | Early Access | Servers; Delays; Resource management; Optimization; Numerical models; Synchronization; Computational modeling; Heuristic algorithms; Real-time systems; Network function virtualization; Server allocation; optimistic synchronization algorithm; preventive start-time optimization; server failure | Real-time applications require low latency and strict event ordering to ensure seamless operation. Distributed server processing is effective for this purpose, and there are two synchronization algorithms: a conservative synchronization algorithm (CSA) and an optimistic synchronization algorithm (OSA). OSA improves delay performance compared to CSA. While prior studies have considered OSA, they have not incorporated the impact of server failures. This paper proposes an OSA-based server allocation model for delay-sensitive applications with preventive start-time optimization (PreSO) under single-server failures (OSA-PreSO). The proposed OSA-PreSO model minimizes the largest total delay across all failure scenarios while satisfying constraints in OSA with PreSO under single-server failures. We formulate the proposed model as an integer linear programming (ILP) problem. In OSA-PreSO, the objective is to minimize the largest total delay across all failure scenarios, without giving special consideration to the total delay in the no-failure scenario. As a result, a penalty arises in the form of an increased total delay in the no-failure scenario. To reduce the penalty, we develop an improved OSA-PreSO model, OSA-PreSO-LP (low-penalty), which reduces the total delay in the no-failure scenario while maintaining the same delay characteristics in failure scenarios. We prove that the decision version of OSA-PreSO is NP-complete. We introduce heuristic algorithms to handle large-scale problems. Numerical results show that the proposed OSA-PreSO model reduces the delay compared to the conventional CSA-based model by effectively utilizing server memory resources. We observe that the proposed model achieves a lower largest total delay than start-time optimization and provides greater stability by preventing unnecessary user reassignments compared to run-time optimization. Numerical results also show that OSA-PreSO-LP reduces the penalty by up to 83%, while maintaining the same delay characteristics in failure scenarios. | 10.1109/TNSM.2026.3676230 |
| Jing Gao, Lei Feng, Fanqin Zhou, Mianxiong Dong, Peng Yu, Kaoru Ota, Xuesong Qiu | Deterministic Delay-Aware Task Scheduling over In-Network Computing: A Graph Embedding-Based DRL Approach | 2026 | Early Access | Delays; Processor scheduling; Resource management; Dynamic scheduling; Optimal scheduling; Calculus; Graph neural networks; Computational modeling; Reinforcement learning; Scheduling algorithms; Task scheduling; Network calculus; Deterministic latency; DRL | As the in-network computing (INC) paradigm evolves, efficient scheduling of dependent tasks within complex network systems becomes increasingly crucial. The network needs to handle high-level resource demands while adhering to strict latency requirements. Deterministic delay constraints are particularly critical in applications that rely on directed acyclic graphs (DAGs). To address this challenge, we first propose a deterministic delay-aware task scheduling optimization problem over INC to maximize resource utilization and ensure task acceptance. We accurately establish the complex deterministic delay constraint through traffic arrival and service curves and utilize network calculus for conversion to facilitate solving. Then, we further transform the task optimization problem into a Markov decision process (MDP) and develop a deep reinforcement learning (DRL) algorithm that combines graph neural network (GNN) and delay-aware proximal policy optimization (DPPO) to solve it, called the Deterministic Delay-aware Task Scheduling (DDTS) scheme. It utilizes multilayer GNN to handle task dependencies and applies the DPPO algorithm to introduce deterministic delay penalty factors to evaluate policy operations, achieving optimal task scheduling. The simulation results demonstrate the significant advantages of the DDTS scheme over existing algorithms and task scheduling schemes in terms of task acceptance rate and resource utilization. (The underlying network-calculus delay bound is sketched after this table.) | 10.1109/TNSM.2026.3676106 |
| Yanli Liu, Yue Pang, Yidi Wang, Shengnan Li, Jin Li, Min Zhang, Danshi Wang | Developing A Domain-Specific LLM for Optical Networks: A Reinforcement Learning-Based Fine-Tuning Framework | 2026 | Early Access | Optical fiber networks; Cognition; Accuracy; Location awareness; Reinforcement learning; Adaptation models; Semantics; Optimization; Maintenance; Training; Large language model; reinforcement learning from human feedback; reinforced fine-tuning; optical networks | Optical networks serve as the backbone of modern communication infrastructure, where efficient operation and maintenance (O&M) are essential for ensuring reliable and high-speed data services. However, traditional network O&M face persistent challenges, including high labor costs, delayed response time, and difficulties in processing massive and complex network data. Although large language models (LLMs) have demonstrated strong capabilities in text understanding, generation, and reasoning, their direct application in optical network O&M is limited by domain-specific knowledge barriers, inherent reasoning biases, and insufficient performance in complex multi-step tasks. To address this issue, this study develops a domain-adaptation and system-implementation framework that applies two established reinforcement learning-based fine-tuning methods (RLHF and ReFT) to construct domain-specialized LLMs for optical network O&M tasks. In the context of log analysis, RLHF achieves improvements of 1.64 points in accuracy, 1.02 points in content richness, and a notable 10-point increase in interactivity over supervised fine-tuning. In alarm localization, ReFT achieves accuracy improvements of 2%–13% across four reasoning tasks. The extensive tests not only demonstrate the practical value of RL-based fine-tuning in enhancing alignment and reasoning for domain-specific applications, but also provide a practical methodology and implementation reference for applying reinforcement learning-based LLM adaptation in optical network O&M environments. | 10.1109/TNSM.2026.3676522 |
| Ying-Chin Chen, Chit-Jie Chew, Wei-Bin Lee, Iuon-Chang Lin, Jun-San Lee | IROVF: Industrial Role-Oriented Verification Framework for safeguarding manufacture line deployment | 2026 | Early Access | Security; Manufacturing; Standards; Industrial Internet of Things; IEC Standards; Authentication; Computer crime; Smart manufacturing; Protocols; SCADA systems; Industrial role-oriented verification; production line deployment | Traditionally, industrial control systems operate in isolated networks with proprietary solutions. As smart factories and digital twins have become inevitable with AI advancement, the rapid adoption of Industrial Internet of Things (IIoT) devices has significantly increased cybersecurity risks. More precisely, the complexity of industrial environments, which includes production processes and device roles, creates substantial challenges for secure deployment. The authors introduce a bottom-up, industrial role-oriented verification framework (IROVF) for manufacturing line deployment. IROVF incorporates SCADA's MTU and RTU components, which are mapped to distinct device roles. This provides authentication and least-privilege principles that are tailored to factory environments. The proposed framework designs an alarm strategy, which can be helpful to detect and report potential operational disruptions during runtime, thus minimizing impact on system availability. Experimental results demonstrate the superior security coverage of the proposed framework compared to existing research, while a comprehensive application scenario validates its practical applicability. The scalable security parameters of IROVF allow organizations to select appropriate security levels based on their specific requirements. IROVF provides an effective security solution for modern industrial control systems during deployment phases. | 10.1109/TNSM.2026.3672975 |
| Shaohui Gong, Luohao Tang, Jianjiang Wang, Quan Chen, Cheng Zhu | A Key Node Set Analysis Method For Regional Service Denial In Mega-Constellation Networks | 2026 | Early Access | Satellites; Measurement; Analytical models; Robustness; Collaboration; Satellite constellations; Protection; Degradation; Correlation; Spatiotemporal phenomena; Mega-Constellation Networks; Regional Service; Service Denial; Key Node Set; Temporal Networks; Mixed-Integer Programming | Mega-constellation networks (MCNs) face significant threats from regional service denial attacks. To improve the robustness of regional services in MCNs against such attacks, a cost-effective approach is to identify key node sets for targeted protection efforts. This paper formally defines the key node set analysis problem for regional service denial in MCNs and develops a comprehensive solution framework. First, we develop a regional service capability analysis model that considers the dynamic collaboration of multiple satellites within regional communication service scenarios in MCNs, alongside a temporal network model for their collaborative relationships. Next, we design a multi-satellite criticality metric that quantifies the multi-dimensional impacts of satellite node set failures on regional service capabilities. Building on these, we construct a mixed-integer programming-based key node set analysis model to achieve precise identification of key node sets. Finally, simulation experiments are conducted to verify and analyze the proposed methods, providing insights to enhance the robustness of regional services in MCNs. | 10.1109/TNSM.2026.3672157 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Ebrima Jaw, Moritz Müller, Cristian Hesselman, Lambert Nieuwenhuis | Reproducibility Study and Assessment of the Evolution of Serial BGP Hijacking Events | 2026 | Early Access | Internet; Routing; Border Gateway Protocol; Routing protocols; Security; IP networks; Cloud computing; Autonomous systems; Authorization; Scalability; Border Gateway Protocol (BGP); Prefix hijacks; RPKI; Regional Internet Registries (RIR); Serial hijackers | The Border Gateway Protocol (BGP) is the Internet’s most crucial protocol for efficient global connectivity and traffic routing. However, BGP is well known to be susceptible to route hijacks and leaks. Route hijacks are the intentional or unintentional illegitimate announcements of network resources that can compromise the confidentiality, integrity, and availability of communication systems. In the past, the so-called “serial hijackers” have hijacked Internet resources multiple times, some lasting for several months or years. So far, only the paper “Profiling BGP Serial Hijackers” has explicitly focused on these repeat offenders, and it dates back to 2019. Back then, they had to process large volumes of BGP announcements to find a few potential serial hijackers. In this paper, we revisit the profiling of serial hijackers. We reproduced the 2019 study and showed that we can identify potential offenders with less data while achieving similar accuracy. Our study confirms that there has been no significant increase in the evolution of serial hijacking activities in the last five years. We then extend their research, further analyze the characteristics of the serial hijackers, and show that most of the alleged serial hijackers are still active on the Internet. We also find that 22.9% of the hijacks violated RPKI objects but were still widely propagated, and that even MANRS participants were among the propagating networks. | 10.1109/TNSM.2026.3671613 |
| Beibei Li | B-TWGA: A Trusted Gateway Architecture Based on Blockchain for Internet of Things | 2026 | Early Access | Internet of Things; Blockchains; Security; Hardware; Logic gates; Computer architecture; Sensors; Radiofrequency identification; Trust management; Middleware; Internet of Things; communication links; Blockchain-based Trustworthy Gateway Architecture | Internet of Things (IoT) terminals are commonly used for data sensing and edge control. The communication links between these hardware devices are critical points that are vulnerable to security attacks. Moreover, these links are usually composed of resource-constrained nodes that cannot implement strong security protections. To address these security threats, we introduce a Blockchain-based Trustworthy Gateway Architecture (B-TWGA), which does not rely on additional third-party management institutions or hardware facilities, nor does it require central control. Our proposal further considers the possibility of Denial of Service (DoS) attacks in blockchain transactions, ensuring secure storage and seamless interaction within the network. The proposed scheme offers advantages such as tamper-proofing, protection against malicious attacks, and reliability while maintaining operational simplicity. Experimental results demonstrate that B-TWGA maintains stable trust levels even when 40% of the network nodes are malicious, effectively mitigates trust degradation caused by vote-stuffing and switch attacks, and ensures high transaction processing performance, achieving an average throughput of 97.55% for storage transactions with practical response times below 0.7 s for typical trust file sizes. | 10.1109/TNSM.2026.3671208 |
| Koki Koshikawa, Yue Su, Jong-Deok Kim, Won-Joo Hwang, Zhetao Li, Kien Nguyen, Hiroo Sekiya | Impacts of Overlay Topologies and Peer Selection on Latencies in IoT Blockchain | 2026 | Vol. 23, Issue | Peer-to-peer computing; Blockchains; Topology; Internet of Things; Network topology; Delays; Security; Reliability; Overlay networks; Propagation delay; Ethereum; overlay; P2P; proof-of-authority; peer selection; latency | The integration of blockchain with the Internet of Things (IoT) offers strong guarantees of data integrity and decentralized trust; however, latency remains a critical barrier to scalability. Under Ethereum’s default random peering, IoT deployments exhibit propagation delays ranging from 500 ms to 1000 ms, causing stale blocks and inconsistent state updates. This paper investigates the impact of peer-to-peer (P2P) overlay topologies on latency performance and introduces a lightweight peer-selection algorithm, Dual Perigee, designed to jointly optimize transaction-oriented latency (TOL) and block-oriented latency (BOL). We first develop a method to construct canonical overlay configurations (i.e., Erdős-Rényi, Barabási-Albert, and Random-Regular) and evaluate their influence on latency in a controlled IoT-blockchain environment. Experimental results reveal that static topologies fail to consistently minimize delay due to redundant message amplification and queuing effects. To address this, Dual Perigee extends the state-of-the-art Perigee algorithm by incorporating block propagation metrics into its scoring function while maintaining low computational overhead. In a 50-node Proof-of-Authority network emulated on Mininet-Wifi, Dual Perigee reduces TOL by up to 54.7% and BOL by 48.5% compared to Ethereum’s default peering, and outperforms Perigee by up to 23.4% in BOL. These findings demonstrate that latency-aware peer selection is essential for achieving responsive and scalable IoT-blockchain systems under dynamic network conditions. (A sketch of the latency-aware peer scoring follows this table.) | 10.1109/TNSM.2025.3645139 |
| Abdurrahman Elmaghbub, Bechir Hamdaoui | HEEDFUL: Leveraging Sequential Transfer Learning for Robust WiFi Device Fingerprinting Amid Hardware Warm-Up Effects | 2026 | Vol. 23, Issue | Fingerprint recognition; Radio frequency; Hardware; Wireless fidelity; Accuracy; Performance evaluation; Training; Wireless communication; Estimation; Transfer learning; WiFi device fingerprinting; hardware warm-up consideration; hardware impairment estimation; sequential transfer learning; temporal-domain adaptation | Deep Learning-based RF fingerprinting approaches struggle to perform well in cross-domain scenarios, particularly during hardware warm-up. This often-overlooked vulnerability has been jeopardizing their reliability and their adoption in practical settings. To address this critical gap, in this work, we first dive deep into the anatomy of RF fingerprints, revealing insights into the temporal fingerprinting variations during and post hardware stabilization. Introducing HEEDFUL, a novel framework harnessing sequential transfer learning and targeted impairment estimation, we then address these challenges with remarkable consistency, eliminating blind spots even during challenging warm-up phases. Our evaluation showcases HEEDFUL’s efficacy, achieving remarkable classification accuracies of up to 96% during the initial device operation intervals—far surpassing traditional models. Furthermore, cross-day and cross-protocol assessments confirm HEEDFUL’s superiority, achieving and maintaining high accuracy during both the stable and initial warm-up phases when tested on WiFi signals. Additionally, we release WiFi type B and N RF fingerprint datasets that, for the first time, incorporate both the time-domain representation and real hardware impairments of the frames. This underscores the importance of leveraging hardware impairment data, enabling a deeper understanding of fingerprints and facilitating the development of more robust RF fingerprinting solutions. | 10.1109/TNSM.2025.3624126 |
| Yali Yuan, Ruolin Ma, Jian Ge, Guang Cheng | Robust and Invisible Flow Watermarking With Invertible Neural Network for Traffic Tracking | 2026 | Vol. 23, Issue | Watermarking; Decoding; Feature extraction; Correlation; Robustness; Encoding; Delays; Encryption; Data mining; Accuracy; Flow watermarking; inter-packet delay; INN; invisibility; robustness | This paper introduces an innovative blind flow watermarking framework on the basis of Invertible Neural Network (INN) called IFW, which aims to solve the problem of suboptimal encoder-decoder coupling in existing end-to-end watermarking architectures. The framework tightly couples the encoder and decoder to achieve highly consistent feature mapping using the same parameters, thus effectively avoiding redundant feature embedding. In addition, this paper adopts the INN to implement watermarking, which supports forward encoding and backward decoding, and the watermark extraction is completely dependent on the embedding algorithm without the need for the original network flow. This feature enables both the embedding and the blind extraction of watermarks simultaneously. Extensive experiments demonstrate that the proposed IFW method achieves a watermark extraction accuracy exceeding 96.6% and maintains a stable K-S test p-value above 0.85 in both simulated and real-world Tor traffic environments. These results indicate a clear advantage over mainstream baselines, highlighting the method’s ability to jointly ensure robustness and invisibility, as well as its strong potential for real-world deployment. (A Kolmogorov-Smirnov invisibility check is sketched after this table.) | 10.1109/TNSM.2025.3645079 |
| Akhila Rao, Magnus Boman | Self-Supervised Pretraining for User Performance Prediction Under Scarce Data Conditions | 2026 | Vol. 23, Issue | Generators; Training; Self-supervised learning; Predictive models; Noise; Data models; Data augmentation; Base stations; Vectors; Adaptation models; User performance prediction; telecom networks; mobile networks; machine learning; self-supervised learning; structured data; tabular data; generalizability; sample efficiency | Predicting user performance at the base station in telecom networks is a critical task that can significantly benefit from advanced machine learning techniques. However, labeled data for user performance are scarce and costly to collect, while unlabeled data, consisting of base station metrics, are more readily accessible. Self-supervised learning provides a means to leverage this unlabeled data, and has seen remarkable success in the domains of computer vision and natural language processing, with unstructured data. Recently, these methods have been adapted to structured data as well, making them particularly relevant to the telecom domain. We apply self-supervised learning to predict user performance in telecom networks. Our results demonstrate that even with simple self-supervised approaches, the percentage of variance of the target values explained by the model, in low-labeled scenarios (e.g., only 100 labeled samples) can be improved fourfold, from 15% to 60%. Moreover, to promote reproducibility and further research in the domain, we open-source a dataset creation framework and a specific dataset created from it that captures scenarios that have been deemed to be challenging for future networks. | 10.1109/TNSM.2025.3622892 |
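
The PRONTO row above attributes its linear coordination cost to graph attention with top-K masking and temperature scaling. Below is a minimal sketch of that masking step; the shapes, names, and values are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: top-K masked graph attention weights with temperature
# scaling, in the spirit of PRONTO's neighborhood encoding. Illustrative
# assumptions throughout; not the paper's implementation.
import numpy as np

def topk_masked_attention(scores: np.ndarray, k: int, tau: float) -> np.ndarray:
    """Keep each node's k strongest neighbor logits, then softmax at
    temperature tau. scores: (N, N) raw attention logits."""
    masked = np.full_like(scores, -np.inf)
    for i in range(scores.shape[0]):
        keep = np.argsort(scores[i])[-k:]           # top-k neighbors (toy: self allowed)
        masked[i, keep] = scores[i, keep]
    logits = masked / tau                           # tau < 1 sharpens, tau > 1 smooths
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(logits)                        # exp(-inf) = 0 for masked entries
    return weights / weights.sum(axis=1, keepdims=True)

# Example: 5 agents, each attending to at most 2 neighbors.
rng = np.random.default_rng(0)
attn = topk_masked_attention(rng.normal(size=(5, 5)), k=2, tau=0.5)
print(attn.round(3))   # each row sums to 1 with at most 2 nonzero entries
```

Keeping only k neighbors per agent bounds message exchange at O(Nk) rather than O(N^2), which is the linear-cost property the abstract refers to.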
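
The dynamic hybrid RIS row defends its DRL agent with reward clipping and statistical anomaly filtering. Here is a minimal sketch of what such a sanitizer could look like; the window size, clip bound, and z-score threshold are assumptions, not values from the paper.

```python
# Hedged sketch: reward sanitization against poisoning, combining hard
# clipping with a rolling z-score filter. All parameter values are assumed.
from collections import deque
import math

class RewardSanitizer:
    def __init__(self, window: int = 200, clip: float = 10.0, z_max: float = 4.0):
        self.history = deque(maxlen=window)
        self.clip, self.z_max = clip, z_max

    def __call__(self, reward: float) -> float:
        r = max(-self.clip, min(self.clip, reward))        # hard clipping
        if len(self.history) >= 30:                        # enough samples for stats
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            if abs(r - mean) / (math.sqrt(var) + 1e-8) > self.z_max:
                r = mean                                   # replace statistical outlier
        self.history.append(r)
        return r

sanitizer = RewardSanitizer()
for raw in [1.2, 0.9, 1.1, 500.0]:                         # last reward is poisoned
    print(sanitizer(raw))                                  # 500.0 is clipped to 10.0
```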
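
ConfGen's second phase integrates syntax-constrained decoding into LLM generation. The toy below shows the general mechanism, masking next-token logits against a prefix grammar before the argmax; the grammar, vocabulary, and model stub are invented for illustration and are not ConfGen's implementation.

```python
# Hedged sketch: grammar-masked greedy decoding. Tokens the (toy) vendor
# grammar forbids are masked out before each argmax step.
import numpy as np

VOCAB = ["interface", "ge-0/0/0", "ip", "address", "10.0.0.1/24", "<eos>"]
# Toy prefix grammar: maps a decoded prefix (tuple) to the tokens allowed next.
GRAMMAR = {
    (): {"interface"},
    ("interface",): {"ge-0/0/0"},
    ("interface", "ge-0/0/0"): {"ip"},
    ("interface", "ge-0/0/0", "ip"): {"address"},
    ("interface", "ge-0/0/0", "ip", "address"): {"10.0.0.1/24"},
    ("interface", "ge-0/0/0", "ip", "address", "10.0.0.1/24"): {"<eos>"},
}

def fake_lm_logits(prefix):                  # stand-in for the LLM's next-token logits
    rng = np.random.default_rng(len(prefix))
    return rng.normal(size=len(VOCAB))

def constrained_decode(max_len: int = 8) -> str:
    prefix = ()
    while len(prefix) < max_len:
        logits = fake_lm_logits(prefix)
        allowed = GRAMMAR.get(prefix, {"<eos>"})
        for i, tok in enumerate(VOCAB):      # mask tokens the grammar forbids
            if tok not in allowed:
                logits[i] = -np.inf
        tok = VOCAB[int(np.argmax(logits))]
        if tok == "<eos>":
            break
        prefix += (tok,)
    return " ".join(prefix)

print(constrained_decode())   # -> "interface ge-0/0/0 ip address 10.0.0.1/24"
```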
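
The MAT4PM row introduces a composite cost that jointly weighs monitoring error and communication overhead. Here is a minimal sketch of such a cost with a brute-force threshold search over a synthetic trace; the weights and error model are assumptions, and MAT4PM itself predicts thresholds with an ML model rather than searching.

```python
# Hedged sketch: composite monitoring cost = w_err * error + w_msg * overhead.
# The error model, weights, and traffic trace are illustrative assumptions.
import numpy as np

def composite_cost(trace: np.ndarray, threshold: float,
                   w_err: float = 0.7, w_msg: float = 0.3) -> float:
    reported = trace[trace > threshold]             # event-driven: report only spikes
    err = trace[trace <= threshold].sum() / trace.sum()   # unreported traffic mass
    msg = len(reported) / len(trace)                # fraction of intervals reporting
    return w_err * err + w_msg * msg

rng = np.random.default_rng(1)
trace = rng.exponential(scale=100.0, size=1_000)    # synthetic per-interval byte counts
candidates = np.linspace(0, 500, 51)
best = min(candidates, key=lambda t: composite_cost(trace, t))
print(f"best threshold ≈ {best:.0f}")
```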
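
The DDTS row converts its deterministic delay constraint via traffic arrival and service curves. The textbook network-calculus bound below illustrates the closed form this conversion yields; it is the standard result, not necessarily the paper's exact formulation.

```latex
% Token-bucket arrival curve (burst sigma, rate rho) and rate-latency
% service curve (rate R, latency T) give a worst-case delay bound,
% valid whenever rho <= R:
\[
  \alpha(t) = \sigma + \rho\, t, \qquad
  \beta(t)  = R\,(t - T)^{+}, \qquad
  D_{\max} \;\le\; T + \frac{\sigma}{R}.
\]
```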
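
The Dual Perigee row extends Perigee's peer scoring to weigh both transaction- and block-propagation lag. Here is a minimal sketch of one scoring-and-reselection round; the weights, retention rule, and latency samples are illustrative assumptions.

```python
# Hedged sketch: Perigee-style neighbor scoring with a second (block)
# latency term, in the spirit of Dual Perigee. Lower score is better.
from statistics import median

def score(peer_stats: dict, w_tx: float = 0.5, w_blk: float = 0.5) -> float:
    """Weighted sum of median transaction- and block-propagation lag (ms)."""
    return (w_tx * median(peer_stats["tx_lag_ms"])
            + w_blk * median(peer_stats["blk_lag_ms"]))

def reselect(neighbors: dict, keep: int) -> list:
    """Retain the `keep` best-scoring peers; freed slots go to exploration."""
    return sorted(neighbors, key=lambda p: score(neighbors[p]))[:keep]

neighbors = {
    "peer-a": {"tx_lag_ms": [40, 55, 40], "blk_lag_ms": [300, 320, 310]},
    "peer-b": {"tx_lag_ms": [90, 80, 85], "blk_lag_ms": [120, 110, 150]},
    "peer-c": {"tx_lag_ms": [200, 180, 210], "blk_lag_ms": [500, 480, 520]},
}
print(reselect(neighbors, keep=2))   # drops peer-c; its slot is refilled randomly
```

Dropping the worst scorers each round and refilling those slots with randomly sampled peers is the Perigee-style exploration step that lets the overlay keep adapting under dynamic network conditions.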
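
The IFW row reports invisibility as a stable two-sample Kolmogorov-Smirnov p-value between original and watermarked traffic. Here is a minimal check of that kind, with a toy ±1 ms shift standing in for the INN embedding; the real scheme embeds and blindly extracts through an invertible network.

```python
# Hedged sketch: K-S invisibility check for a flow watermark. The
# +/-1 ms embedding below is a toy stand-in, not IFW's INN.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
ipd = rng.exponential(scale=0.05, size=2_000)        # original inter-packet delays (s)
bits = rng.integers(0, 2, size=ipd.size)             # watermark bits
stego = ipd + np.where(bits == 1, 1e-3, -1e-3)       # tiny keyed timing shifts

stat, p = ks_2samp(ipd, stego)
print(f"K-S statistic = {stat:.3f}, p-value = {p:.3f}")   # large p: indistinguishable
```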