Last updated: 2026-05-15 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Dinghao Zeng, Fagui Liu, Runbin Chen, Jingwei Tan, Dishi Xu, Qingbo Wu, C.L. Philip Chen | CoreScaler: A Resource-Efficient Hybrid Scaling Framework for Dynamic Workloads in Cloud | 2026 | Early Access | Resource management Central Processing Unit Memory Optimization Modeling Timing Clouds Conferences Algorithms Loading Cloud computing microservices hybrid autoscaling resource management | Containerized microservices face significant challenges in balancing service quality and resource efficiency under dynamic workloads. Existing approaches suffer from horizontal scaling’s cold start latency, vertical scaling’s resource ceilings, and hybrid methods’ limited adaptability. We present CoreScaler, a resource-efficient hybrid scaling framework based on analysis of CPU usage patterns revealing substantial consumption differences between working mode and waiting mode instances. This insight drives our dual-mode instance management model that distinguishes between working instances actively handling requests and waiting instances maintaining hot standby with minimal resource allocation. CoreScaler employs a master-subordinate distributed architecture where the master node performs capacity planning using multi-confidence interval predictions and contextual multi-armed bandit optimization, while subordinate nodes execute mode-aware CPU quota adjustments. Comprehensive evaluation on a Kubernetes cluster with a typical microservice system under four representative production workloads demonstrates that CoreScaler maintains SLO compliance while reducing CPU and memory allocation by 22.53% and 30.83% respectively compared to state-of-the-art solutions. The framework achieves substantially higher resource utilization than single-dimension scaling approaches, validating the effectiveness of coordinated hybrid scaling for dynamic cloud environments. | 10.1109/TNSM.2026.3692955 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and comparing protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement and security for future research. | 10.1109/TNSM.2026.3682174 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and low computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. Therefore, we propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entity management is achieved through dynamic revocability and distributed decryption among roadside units (RSUs), preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via a SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 RSUs were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Frequency modulation Radio broadcasting Filtering Filters Memory modules Virtual private networks Encrypted traffic classification Class incremental learning Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks’ final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method promotes both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Xingyu He, Nianci Li, Panxing Huang, Chunhua Gu, Guisong Yang, Yunhuai Liu | Dynamic Spatiotemporal Dual-Encoder Transformer for Long-Term Traffic Prediction in LEO Satellite Networks | 2026 | Early Access | | Accurate long-term traffic prediction in Low Earth Orbit (LEO) satellite networks is essential for proactive resource allocation and congestion avoidance, yet remains challenging due to highly dynamic topologies, intermittent connectivity, and scarce real traffic data. Existing approaches are largely limited to short-term prediction or assume static spatial dependencies, making them inadequate for non-stationary LEO environments. To address these challenges, this paper proposes DST-DEformer, a dynamic spatial–temporal Transformer framework that jointly models evolving inter-satellite topology and multi-scale temporal dependencies. Specifically, a topology-adaptive graph convolution module captures time-varying spatial correlations, while a dual temporal encoder decouples long-term global trend modeling from short-term local fluctuation learning. In addition, a hybrid simulation–calibration framework is developed to generate realistic satellite traffic by incorporating orbital dynamics, demographic information, and real-world traffic trends. Extensive experiments on simulated LEO satellite traffic and the PEMS08 benchmark show that DST-DEformer consistently outperforms state-of-the-art methods in long-term prediction, achieving 4%-13% reductions in MSE and MAE and significantly slower error accumulation as the prediction horizon increases. These results demonstrate the effectiveness and robustness of DST-DEformer for long-term traffic prediction under dynamic network topologies. | 10.1109/TNSM.2026.3693648 |
| Jiahang Pu, Hongyu Ye, Jing Cheng, Feng Shan, Runqun Xiong | Balancing Timeliness and Accuracy: A Hybrid Data-Control Plane Framework for Volumetric DDoS Defense in IoT | 2026 | Early Access | | Resource-constrained IoT devices in Industrial Internet environments are highly vulnerable to DDoS attacks due to infrequent security updates and insufficient built-in protection mechanisms. Existing defense solutions primarily rely on external filtering servers or programmable switches, but these approaches fail to simultaneously meet the stringent real-time performance and high accuracy requirements of industrial applications. To address these limitations, we propose a novel cross-plane defense framework that exploits the temporal invariance characteristics of attack traffic patterns. In the data plane, an adaptive variance threshold mechanism immediately mitigates high-volume, low-variance traffic flows, while a bidirectional dual-hash table captures low-collision flow features for efficient export to the control plane. The control plane constructs temporally-enhanced flow sequences that enable deep learning models to perform accurate attack detection, subsequently directing the data plane to block identified malicious sources. We implemented and evaluated a prototype of this framework on a software switch platform using both real-world attack datasets and custom-generated traffic patterns. Experimental results demonstrate that our framework successfully mitigates 86% of attack traffic within milliseconds and achieves complete source blocking within 52 seconds. Compared to baseline methods, our framework can effectively counter both DoS and DDoS attacks without generating false positives on benign traffic. | 10.1109/TNSM.2026.3693266 |
| Yuxiang Wang, Jiao Zhang, Leixin Cai, Tao Huang | Mercury: Multipath Spraying for Joint Congestion and Reordering Control in RDMA | 2026 | Early Access | | Due to the low-entropy traffic characteristics of LLM (Large Language Model) training, existing load balancing mechanisms such as Equal-Cost Multi-Path (ECMP) fail to fully utilize the redundant bandwidth between computing nodes in RDMA over Converged Ethernet (RoCE). The packet spraying mechanism has become a typical solution to the load balancing problem in RoCE. However, it has a negative effect on congestion control mechanisms and suffers severe out-of-order problems. In this paper, we propose Mercury, a host-driven spraying scheme that synergizes congestion feedback and reordering control. Mercury selects paths by leveraging ECN, RTT, and reordering metrics, and adjusts rates via a multi-metric window. It also employs receiver-side buffers with priority-based dropping to mitigate out-of-order penalties. Evaluations in ns-3 under AllReduce and All-to-All traffic show that Mercury consistently outperforms the ECMP-based baselines, including DCQCN, TIMELY, HPCC, SWIFT, and BOLT, with the largest reduction in Max FCT reaching 63%. Under multi-path load balancing, Mercury delivers the lowest Max FCT for large messages in AllReduce and for most message sizes in All-to-All. It outperforms STRACK and MP-RDMA by up to 28% and 35% in AllReduce, and by up to 25% and 30% in All-to-All. | 10.1109/TNSM.2026.3692452 |
| Shaimaa Alkaabi, Mark A Gregory, Shuo Li | A Stateless Orchestrated Handover Protocol for Multi-Access Edge Computing | 2026 | Early Access | | In Multi-access Edge Computing (MEC) environments, session continuity during user mobility remains a pressing challenge due to decentralized infrastructure and high-throughput, latency-sensitive applications. Existing mobility protocols often rely on stateful mechanisms or centralized control, leading to increased signaling overhead, limited scalability, and vulnerability to performance degradation in dynamic networks. This paper introduces the Server Search and Select Algorithm Protocol (SSSAP), a lightweight, UDP-based handover protocol tailored for MEC deployments. The protocol is an extension of our previous work on a handover Server Search and Selection Algorithm (SSSA). SSSAP enables seamless session redirection through a three-phase signaling scheme (pre-handover, handover initiation, and handover termination), preserving service continuity without coupling session state to transport layers. The protocol’s design features extensible headers for multi-metric evaluation and future security adaptation while maintaining minimal dependency on intermediary control nodes. Through extensive simulation and testing, we have validated the SSSAP efficiency across user equipment nodes and MEC servers. Results demonstrate high handover success rates, low session setup delays, and balanced server load distribution. SSSAP achieves superior performance in mobility robustness, packet loss mitigation, and integration simplicity. The research outcomes position SSSAP as a scalable and application-agnostic mobility protocol for MEC systems, especially in vehicular and high-mobility scenarios. | 10.1109/TNSM.2026.3692555 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Electromagnetic propagation Propagation constant Radio broadcasting Radio networks Handover Communication systems Avatars Communication switching Data transfer Cellular networks digital twin 5G/6G handover management generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at decision time. However, the decision is applied only after a non-negligible delay due to control-plane signalling, by which point it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. The DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The AMF then estimates the failure and QoS risks associated with the candidate handover and approves/rejects it via standard-compliant signalling. With this, we form a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Aerospace and electronic systems Telemetry Central Processing Unit Microcontrollers Microprocessors MIMICs Millimeter wave integrated circuits Monolithic integrated circuits Communication systems Internet of Things EDHOC OSCORE Key agreement Authentication extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered to be the most used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie Hellman Over COSE (EDHOC) protocol, a key exchange protocol that defines a session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, is able to overcome both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers additional protection only in the eCK security model and achieves, on average, performance similar to EDHOC, and even better performance in one authentication mode. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | Modeling Federated learning Privacy Recommender systems Training Educational institutions Servers Algorithms Conferences Graph neural networks Graph Neural Networks Federated Learning Personalized Recommendation Privacy Protection | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model’s expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs (PADG). The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Atri Mukhopadhyay, Dinesh Korukonda, Goutam Das | Design of Passive Optical Network Based O-RAN X-haul: A Systematic Approach | 2026 | Early Access | Timing Passive optical networks Optimization Delays Optical network units Ethernet Jitter Loading Copper Synchronization C-RAN Delay Jitter QCQP O-RAN PON | The development of high data rate communication technologies has resulted in cell densification, which in turn has led to the development of centralized radio access networks (C-RANs) followed by open radio access networks (O-RANs). The O-RAN segregates the base station into three logical entities: the central unit (CU), the distributed unit (DU), and the radio unit (RU). The CU, DU and RU require low latency, low jitter and high data rate connections for seamless operation, known as X-haul. A passive optical network (PON) is a potential solution for X-haul design. However, conventional PON uplink protocols are not inherently suitable for X-haul requirements. The packetization procedure of PON introduces jitter into the X-haul bit stream. Further, the delay requirements of the X-haul limit the number of sources that can be connected to the X-haul. Advanced features like coordinated multipoint require synchronization among the different X-haul bit streams as well. Therefore, in this paper, we develop an optimal uplink system that allows PON to be used as an X-haul connection technology. The proposal maximizes the throughput of the PON while conforming to the delay and synchronization requirements. Moreover, the proposal nullifies the jitter introduced by the PON scheduler. We have performed extensive simulations to verify our results. | 10.1109/TNSM.2026.3692242 |
| Minh-Thuyen Thi, Mohan Gurusamy | Multi-dimensional Cross-granularity Open-set Network Intrusion Detection | 2026 | Early Access | Modeling Labeling Distance measurement Signal detection Optimization Fluid flow Training Intrusion detection Magnesium Tensors Network intrusion detection out-of-distribution detection optimal transport multi-granularity analysis | Network intrusion detection systems (NIDSs) face critical challenges from continuously evolving cyber-attacks. Traditional machine learning methods, while requiring extensive labeled training data, still often fail against unknown and out-of-distribution (OOD) attacks. Furthermore, new sophisticated adversaries are exploiting the detection blind spots inherent in traditional feature representation approaches that do not provide adequate comprehensive traffic analysis. In this paper, we propose MDCG-IDS, an NIDS framework that introduces multi-dimensional cross-granularity (MDCG) feature representation for open-set detection, in which network traffic is analyzed thoroughly across three complementary dimensions (traffic statistics, temporal, spatial), each at multiple granularity levels. These dimensions and granularities jointly capture the structures of sophisticated attacks that may be invisible from single analytical perspectives. We design a tensor structure that provides a unified encoding for the MDCG features while supporting the use of optimal transport theory to measure the distance between benign traffic and known or unknown attacks. MDCG-IDS uses a semi-supervised learning model that is trained exclusively on benign traffic and validated on a small set of labeled data, significantly reducing the effort of data labeling. In experiments on various datasets, MDCG-IDS achieves AUC-ROC scores of more than 0.948, exceeding the best competing state-of-the-art methods by up to 7%. Regarding the amount of labeled validation data, MDCG-IDS obtains an AUC-ROC score of over 0.94 with only 3% of the validation samples, outperforming the baseline models. | 10.1109/TNSM.2026.3693141 |
| Md Facklasur Rahaman, Makhduma F. Saiyed, Irfan Al-Anbagi, Ramakrishna Gokaraju | A Domain-informed Hierarchical Federated Learning Framework for DDoS Detection in WSN for Critical Infrastructure | 2026 | Early Access | Modeling Internet of Things Signal detection Federated learning Accuracy Inductors Image sensors Timing Training Architecture Wireless Sensor Networks (WSN) Small Modular Reactor (SMR) Distributed IoT sensors Federated Learning LSTM Hierarchical Aggregation DDoS Attack Detection Domain-Informed LSTM Trust-Aware Systems | The deployment of Wireless Sensor Networks (WSN) in critical infrastructure, such as Small Modular Reactors (SMRs), faces cybersecurity threats like Distributed Denial of Service (DDoS) attacks that can overload these networks and disrupt monitoring and control functions. Current DDoS detection systems often suffer from high false positive rates, neglect domain-specific operational constraints, and rely on centralized architectures that pose privacy risks, making them less suitable for distributed Internet of Things (IoT) environments. To address these issues, we propose a novel Domain-informed Hierarchical Federated Learning (DHFL) framework for WSN used in SMR monitoring and control applications. Our framework features a dual-branch bidirectional Long Short-Term Memory (LSTM) architecture comprising two parallel processing branches with network-specific constraints, facilitating precise detection of DDoS attacks. It includes differentiable penalty functions to enforce domain-aligned behaviour and employs adaptive trust scoring to evaluate the reliability of individual nodes. These elements operate within a hierarchical Federated Learning (FL) structure organized into three tiers: sensor nodes, local aggregators, and a global coordinator, allowing collaborative training that preserves privacy. Unlike earlier approaches, our method not only maintains privacy by ensuring that raw sensor data never leaves the local nodes and only model updates are shared, but also considers the operational importance and trustworthiness of each node through tier-weighted aggregation. Tested on the CICIoT2023 dataset, our system achieved 93.4% accuracy, 94.5% precision, 97.5% recall, 95.5% F1-score, and 98.9% AUC, surpassing state-of-the-art FL methods in both performance and efficiency. Furthermore, it converged in fewer communication rounds (30–50) with reduced communication costs (from 45 MB to 30 MB per round). Our framework can differentiate between normal reactor transients and actual attacks, making it suitable for mission-critical SMR cybersecurity. | 10.1109/TNSM.2026.3693112 |
| Shihong Hu, Zhipeng Li, Zhihao Qu, Bin Tang, Baoliu Ye, Xiongxiong Xu | A Unified Simulation Platform and Computation-Reuse Algorithm for Task Scheduling in Vehicle-Infrastructure Collaboration | 2026 | Vol. 23, Issue | Computational modeling Edge computing Spatiotemporal phenomena Resource management Correlation Cloud computing Artificial intelligence Collaboration Bandwidth Servers Vehicle-infrastructure collaboration task offloading computation reuse simulation platform branch and bound | Vehicle-Infrastructure Collaboration (VIC) integrates vehicles and roadside infrastructure using advanced communication technologies, forming a crucial component of intelligent transportation systems (ITS). In a VIC system, tasks generated by vehicles can either be processed locally or offloaded to nearby edge servers. Current research often focuses on optimizing task scheduling but overlooks the inherent spatiotemporal correlations among tasks, which can lead to redundant computations due to similar tasks producing identical results. Additionally, the diversity in research scenarios and model constructions has resulted in the absence of a unified simulation verification platform, making it difficult to compare and validate various scheduling algorithms. To address these challenges, we have developed a comprehensive VIC simulation platform (CVSP). This platform not only features vehicle simulation capabilities like those of SUMO for modeling vehicle movement, but it also allows for the customization of driving scenarios and configurations, including edge resource settings, and incorporates a unified algorithm execution module for evaluating the performance of scheduling algorithms. Using CVSP can provide a clearer understanding of the strengths and weaknesses of scheduling algorithms, which in turn benefits the development of VIC systems. To tackle the spatiotemporal correlations observed in vehicular tasks, we propose a branch and bound algorithm based on computation-reuse (BB-CR). This algorithm integrates a computational reuse model derived from fused vehicle and road data. We simulate both non-congested and congested scenarios of vehicles traversing intersections to validate the performance of the baseline and BB-CR on the CVSP. The results highlight the versatility of the CVSP, showing that the BB-CR algorithm reduces system costs and minimizes the probability of task loss compared to the baseline. The code is publicly available at https://github.com/lzp991105/comprehensive-VIC-simulation-platform.git. | 10.1109/TNSM.2025.3642265 |
| Claudia Canali, Giuseppe Di Modica, Francesco Faenza, Luca Foschini, Riccardo Lancellotti, Domenico Scotece | OptiFog: A Framework to Optimize the Placement of Microservices in Fog Scenarios | 2026 | Vol. 23, Issue | Microservice architectures Genetic algorithms Quality of service Edge computing Optimization Internet of Things Energy consumption Software Prototypes Emulation Microservices placement fog computing genetic algorithms framework performance evaluation fog federation software platform | The Fog computing paradigm makes use of dispersed, diverse, and resource-limited devices located at the network edge to effectively implement Internet of Things (IoT) application services that demand low latency and substantial bandwidth. At the same time, the adoption of microservice-based architectures in the IoT domain is on the rise due to their ability to align with the swift evolution and deployment demands of highly dynamic IoT applications and to elastically scale to fulfill load demands. In complex environments like Fog federations, characterized by highly heterogeneous computing and networking resources, the effective allocation of microservices to available nodes, while ensuring compliance with required Quality of Service (QoS) constraints, represents a significant challenge. In this paper, we present the design and implementation of OptiFog, a comprehensive framework that enables users to model, simulate, and validate microservice placement solutions within a realistic testbed environment. Compared to state-of-the-art approaches, OptiFog offers developers a controlled environment for experimenting with placement solutions while providing the assurance that the resulting deployments will meet the targeted QoS requirements in real-world scenarios, specifically in terms of service execution time and energy consumption of Fog nodes. To demonstrate the feasibility of the proposed approach, we implemented and evaluated a representative use case, involving both sub-optimal and optimal microservice placement, and utilizing real-world microservices drawn from the IoT domain. | 10.1109/TNSM.2025.3648449 |
| Chang Chen, Guoyu Yang, Dawei Zhang, Wei Wang, Qi Chen, Jin Li | S3Cross: Blockchain-Based Cross-Domain Authentication With Self-Sovereign and Supervised Identity Management | 2026 | Vol. 23, Issue | Authentication Blockchains Security Internet of Things Identity management systems Resistance Public key Privacy Encryption Trees (botanical) Privacy-preserving self-sovereign identity Sybil resistance group signature zkSNARKs | The widespread deployment of Internet of Things (IoT) devices has driven their segmentation into distinct trust domains for the purpose of governance, creating a critical need for secure cross-domain authentication (CDA). CDA must preserve both anonymity and traceability of device identities to enable trustworthy data exchange. However, existing approaches, while exploring this trade-off, remain vulnerable to single points of failure and Sybil attacks—threats that are especially severe for unattended and resource-constrained devices. In this paper, we propose a Self-Sovereign and Supervised Cross-domain authentication scheme (S3Cross) to tackle these issues. The main building block we designed is a pseudonym management scheme (PMS) that allows devices to generate and use pseudonyms without relying on a trusted party. Although devices have full control of their identities, PMS still ensures traceability, Sybil resistance, and revocability. We define the formal security models of PMS, instantiate it under two different approaches, namely group signature (S3Cross-GS) and zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs, S3Cross-ZK), and present security proofs for our proposal. We implemented and evaluated S3Cross. The results show that our scheme achieves an effective trade-off between security and efficiency. | 10.1109/TNSM.2025.3642315 |
| Jan Luxemburk, Karel Hynek, Richard Plný, Tomáš Čejka | Universal Embedding Function for Traffic Classification via QUIC Domain Recognition Pretraining: A Transfer Learning Success | 2026 | Vol. 23, Issue | Transfer learning Training Cryptography Adaptation models Feature extraction Standards Payloads Protocols Pipelines Data augmentation Traffic classification transfer learning deep learning encrypted traffic QUIC | Encrypted traffic classification (TC) methods must adapt to new protocols and extensions as well as to advancements in other machine learning fields. In this paper, we adopt a transfer learning setup best known from computer vision. We first pretrain an embedding model on a complex task with a large number of classes and then transfer it to seven established TC datasets. The pretraining task is recognition of SNI domains in encrypted QUIC traffic, which in itself is a challenge for network monitoring due to the growing adoption of TLS Encrypted Client Hello. Our training pipeline—featuring a disjoint class setup, ArcFace loss function, and a modern deep learning architecture—aims to produce universal embeddings applicable across tasks. A transfer method based on model fine-tuning surpassed SOTA performance on nine of ten downstream TC tasks, with an average improvement of 6.4%. Furthermore, a comparison with a baseline method using raw packet sequences revealed unexpected findings with potential implications for the broader TC field. We released the model architecture, trained weights, and codebase for transfer learning experiments. | 10.1109/TNSM.2025.3642984 |