Last updated: 2026-05-10 05:01 UTC
All documents
Number of pages: 163
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Internet of Vehicles, signcryption, weak unlinkability, certificateless, revocable, multi-receiver, distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. We therefore propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results from a SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time than existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicular communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Internet of Things, EDHOC, OSCORE, Key agreement, Authentication, extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered the most widely used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie Hellman Over COSE (EDHOC) protocol, a key exchange protocol that defines the session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is also weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, overcomes both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers additional protection only in the eCK security model and has, on average, similar or even better performance than EDHOC in one authentication mode. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Cellular networks, digital twin, 5G/6G, handover management, generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at decision time. However, the decision is applied only after a non-negligible delay caused by control-plane signalling, and by the time it is applied it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The AMF then estimates the failure and QoS risks associated with the candidate handover and approves or rejects it via standards-compliant signalling. This forms a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Willie Kouam, Yezekael Hayel, Gabriel Deugoué, Charles Kamhoua | Decoy Allocation against Lateral Movement - A Network Centrality Game Approach | 2026 | Early Access | Lateral movement, Cyber deception, Centrality measure, One-sided POSGs | Targeted incidents increasingly threaten internal security, with a rise in data breaches and service disruptions. Attackers now employ sophisticated approaches, infiltrating systems for ongoing access to critical information through lateral movement. Detecting and defending against such intrusions is challenging because attackers commonly exploit specific vulnerabilities. Consequently, various deception techniques have emerged over time, aiming to divert attackers’ attention. In our scenario, attackers use lateral movement within the network to reach a specific target, while defenders strategically deploy decoys to counteract them. This dynamic, adversarial interaction is modeled as a one-sided partially observable stochastic game (OS-POSG). Several solutions have been proposed to address this challenge, particularly when the attacker possesses complete knowledge of the network’s topology through the reconnaissance stage. At the same time, recent years have seen the development of approaches that obscure the attackers’ reconnaissance phase, compelling them to operate without full comprehension of the network’s structure. We therefore introduce an innovative methodology involving intelligent players who take into account the importance of network devices, assessed by centrality measures, during the decision-making process. This approach aims to improve the effectiveness of the defender’s strategy against the attacker’s lateral movement in the network, in the context of step-by-step optimization. | 10.1109/TNSM.2026.3689344 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | RDMA, Loss Recovery, Multipath Transmission, Programmable Switch, Programmable NIC, FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC’s drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission of PFC+Go-Back-N and the difficulty of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing the PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Encrypted traffic classification, Class incremental learning, Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks’ final-layer features collapse to class-mean vectors that form a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature–classifier alignment constraints during incremental learning, our method drives both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Blockchain, Decentralized Finance (DeFi), Lending, Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols, which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise from notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical transport networks (OTNs), general performance parameter prediction, time-series large language models, knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models’ structural design on specific configurations limits their generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models can lead to inefficiency and hinder practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM to OTN failure management, for performance parameter prediction across multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM achieves generalizable performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, handling a varying number of performance parameters, and exhibiting zero-shot prediction capability, which highlights its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model’s expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose PADG, a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs. The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Log Anomaly Detection, Generative Adversarial Networks (GANs), Temporal Data Analysis | Anomaly detection on system logs is crucial for the service management of large-scale information systems. Log anomaly detection currently faces two main challenges: 1) capturing the evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capability across various data distributions. Existing methods rely heavily on domain-specific data features, making it difficult to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection of emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Shahid Mahmood, Moneeb Gohar, Seok Joo Koh | Globally Integrated Trust Authority (GITA) for Resource-Constrained Edge Devices in IoT and 6G | 2026 | Early Access | GITA, Globally Integrated Trust Authority, Network, PKDL, TSL, LMS, Security, Trust Management, Resource-Constrained Edge Device, Internet of Things, Cyber-Attack | The rapid growth of the Internet and the increasing number of edge devices have expanded the cyber-attack surface at the edge layer. Hackers exploit vulnerabilities at various levels of a network by either directly connecting to it or accessing it over the Internet. In both scenarios, edge devices remain a primary target due to their widespread use, limited resources, and critical impact. Therefore, securing edge devices is essential to counter both local and global cyber threats. Trust is a key factor in determining the level of protection required for edge devices. It can be used to assess the reliability of other devices before offering or requesting services. Since edge devices are often globally interconnected, trust levels should be verifiable across the Internet and intranet. In this paper, we propose the Globally Integrated Trust Authority (GITA), a framework that distributes verifiable trust values across networks and the Internet while minimizing communication overhead. Experimental results demonstrate that GITA improves the efficiency of trust value distribution and verification among nodes compared to digital certificates, while maintaining the same level of protection. This approach enables effective identification of malicious and benign nodes, enhancing the precision of malicious node detection locally and globally. | 10.1109/TNSM.2026.3687967 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Data Center Topologies, Clos Topology, STRAT Topology, Scalability Challenges, Network Architecture, Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely, using only Top-of-Rack (ToR) switches interconnected via static optical patch panels, to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties, namely throughput, congestion resilience, scalability, and cost, without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness under dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Beamforming, cascaded channels, cognitive radio networks, deep reinforcement learning, dynamic hybrid reconfigurable intelligent surfaces, energy harvesting, poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address the unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches that rely on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Tianzi Zhao, Xinran Liu, Zhaoxin Zhang, Dong Zhao, Ning Li, Zhichao Zhang, Xinye Wang | HMCGeo: IP Region Prediction Based on Hierarchical Multi-Label Classification | 2026 | Vol. 23, Issue | Fine-grained IP Geolocation, IP region prediction, hierarchical multi-label classification, computer networks | Fine-grained IP geolocation plays a critical role in applications such as location-based services and cybersecurity. Most existing fine-grained IP geolocation methods treat the task as a coordinate regression problem. However, due to inherent noise in the IP features, they frequently produce kilometer-scale coordinate errors, which in turn lead to inaccuracies when the predicted coordinates are mapped to geographic regions. To alleviate this issue, this paper introduces HMCGeo, a novel framework that frames IP region prediction as a hierarchical multi-label classification problem, exploiting the fact that city administrators divide urban areas into regions at multiple granularities. The proposed framework employs residual connection-based feature extraction units to obtain IP representations at each granularity and introduces class prototype attention to predict the region to which an IP belongs at the current granularity. Additionally, it adopts an output fusion strategy combined with a hierarchical focal loss to further enhance region prediction performance. We evaluate HMCGeo on real-world datasets from New York, Los Angeles, and Shanghai. It significantly outperforms existing methods in region prediction across all granularities and achieves lower coordinate errors on most samples by similarity-weighted averaging of candidate region centers. | 10.1109/TNSM.2025.3648815 |
| Surendra Kumar, Jitendra Kumar Samriya, Rajeev Tiwari, Mohit Kumar, Shilpi Harnal, Neeraj Kumar, Mohsen Guizani | A Dynamic PAPR Reduction Method Using PTS-ESSA for MIMO Generalized FDM Wireless System | 2026 | Vol. 23, Issue | Computational complexity, CCDF, ESSA, GFDM, MIMO, PAPR, PTS, RPSM | Generalized Frequency Division Multiplexing (GFDM) is considered a strong candidate to replace Orthogonal Frequency Division Multiplexing (OFDM) in 5G MIMO networks because of its enhanced spectral utilization and design flexibility. Despite these advantages, GFDM suffers from a relatively high Peak-to-Average Power Ratio (PAPR), which limits the efficiency of power amplifiers. To address this issue, the Partial Transmit Sequence (PTS) method is often employed for PAPR reduction. Nevertheless, the effectiveness of PTS is hindered by the intensive computational effort required to search over multiple phase factors. To overcome this challenge, we propose a method that integrates the Enhanced Squirrel Search Algorithm (ESSA), equipped with an adaptive parameter control mechanism, with a Grey Wolf Optimizer (GWO), enabling a dynamic balance between exploration and exploitation during phase factor selection. This improvement reduces the computational overhead, accelerates convergence, and enhances the robustness of the phase sequence optimization. Simulation results show that the hybrid PTS-ESSA-GWO-RPSM model achieves superior PAPR reduction compared to conventional ESSA-based approaches, while also providing better BER and SNR performance under varying channel conditions. The proposed method therefore offers an efficient trade-off between complexity and PAPR reduction, making it suitable for practical deployment in MIMO-GFDM-based 5G systems. The proposed scheme is evaluated against related methods using key performance indicators, including the Complementary Cumulative Distribution Function (CCDF), Bit Error Rate (BER), Peak-to-Average Power Ratio (PAPR), and Signal-to-Noise Ratio (SNR). | 10.1109/TNSM.2025.3619945 |
| Surabhi Sharma, Sateesh Kumar Peddoju | Cooperative Multi-Agent Strategy for Caching of Transient Data in Edge-Assisted IoT Networks | 2026 | Vol. 23, Issue | Internet of Things, caching, transient data, multi-agent, edge computing, cooperative computing | Internet of Things (IoT) applications continuously generate large volumes of transient data. Delivering transient data efficiently is challenging because it is short-lived, highly dynamic, and often critical for time-sensitive services. Caching at the Edge offers a practical solution by storing frequently requested content closer to users, reducing delivery delays, and easing network congestion. However, existing caching approaches in Edge-assisted IoT networks face four significant limitations: (i) lack of freshness-aware policies, leading to outdated data, (ii) static or centralized coordination, which restricts scalability, (iii) inability to adapt to bursty and heterogeneous traffic patterns, and (iv) inefficient handling of resource-constrained Edge nodes. IoT-Cooperative Caching (IoT-C2) addresses these issues with a framework based on multi-agent reinforcement learning. Using this framework, Edge servers make decentralized, adaptive decisions that account for both user demand and data freshness. IoT-C2 introduces topic-based grouping of Edge nodes and a hierarchical state model that supports collaboration across local, group, and global states. Experiments show that IoT-C2 increases cache hit rates, reduces latency, and improves freshness compared with state-of-the-art techniques. These improvements make the proposed approach well-suited for time-critical IoT applications such as smart cities, healthcare, and industrial networks. | 10.1109/TNSM.2025.3649503 |
| Mei Nakashima, Takashi Kurimoto, Eiji Oki | Link Weight Design Adopting Traffic-Engineering Links Based on Preventive Start-Time Optimization Against Link Failures | 2026 | Vol. 23, Issue | Routing protocols Routing Optimization IP networks Multiprotocol label switching Time complexity Standards Scalability Linear programming Informatics Routing link weights traffic engineering links link failure | In an Internet Protocol network running a link-state routing protocol, determining the link weights selects each source-to-destination route on which the sum of the link weights is minimized. Previous studies have focused on determining physical link weights to reduce the network congestion ratio in case of physical-link failures. However, no study has yet addressed a model that determines link weights by incorporating traffic-engineering (TE) links and investigates the effect of incorporating TE links on reducing network congestion. The link-state routing protocol treats a TE link as a logical, direct link between non-adjacent nodes. This paper proposes a link-weight design model with TE links based on preventive start-time optimization (PSO) for handling single physical-link failures, called PSO-TE. The model considers physical and TE links when determining the link weights under the assumption of all single physical-link failures. It identifies the set of link weights that minimizes the worst-case network congestion ratio across all considered failure patterns. Introducing TE links does not require additional physical link resources and thus does not increase capital expenditures. Numerical results demonstrate that PSO-TE reduces the worst-case network congestion ratio compared with other models, including PSO without TE links, start-time optimization, and inverse capacity weighting. | 10.1109/TNSM.2025.3650109 |
| Claudia Canali, Giuseppe Di Modica, Francesco Faenza, Luca Foschini, Riccardo Lancellotti, Domenico Scotece | OptiFog: A Framework to Optimize the Placement of Microservices in Fog Scenarios | 2026 | Vol. 23, Issue | Microservice architectures Genetic algorithms Quality of service Edge computing Optimization Internet of Things Energy consumption Software Prototypes Emulation Microservices placement fog computing genetic algorithms framework performance evaluation fog federation software platform | The Fog computing paradigm makes use of dispersed, diverse, and resource-limited devices located at the network edge to effectively implement Internet of Things (IoT) application services that demand low latency and substantial bandwidth. At the same time, the adoption of microservice-based architectures in the IoT domain is on the rise due to their ability to align with the swift evolution and deployment demands of highly dynamic IoT applications and to elastically scale to fulfill load demands. In complex environments like Fog federations, characterized by highly heterogeneous computing and networking resources, the effective allocation of microservices to available nodes, while ensuring compliance with required Quality of Service (QoS) constraints, represents a significant challenge. In this paper, we present the design and implementation of OptiFog, a comprehensive framework that enables users to model, simulate, and validate microservice placement solutions within a realistic testbed environment. Compared to state-of-the-art approaches, OptiFog offers developers a controlled environment for experimenting with placement solutions while providing the assurance that the resulting deployments will meet the targeted QoS requirements in real-world scenarios, specifically in terms of service execution time and energy consumption of Fog nodes. 
To demonstrate the feasibility of the proposed approach, we implemented and evaluated a representative use case, involving both sub-optimal and optimal microservice placement, and utilizing real-world microservices drawn from the IoT domain. | 10.1109/TNSM.2025.3648449 |
| Agrippina Mwangi, León Navarro-Hilfiker, Lukasz Brewka, Mikkel Gryning, Elena Fumagalli, Madeleine Gibescu | A Threshold-Triggered Deep Q-Network-Based Framework for Self-Healing in Autonomic Software-Defined IIoT-Edge Networks | 2026 | Vol. 23, Issue | Switches Routing Quality of service IEC Standards Communication networks Control systems Wind power generation Thermal management Real-time systems Ethernet Agentic AI DQN SDN NFV self-healing IEC 61850 IEC 61400-25 intents ASHRAE autonomic networking offshore wind thermal model quality of service resilience | Stochastic disruptions such as flash events arising from benign traffic bursts and switch thermal fluctuations are major contributors to intermittent service degradation in software-defined industrial networks. These events violate IEC 61850-derived quality of service requirements and user-defined service-level agreements, hindering the reliable and timely delivery of control, monitoring, and best-effort traffic in IEC 61400-25-compliant wind power plants. Failure to maintain these requirements often results in delayed or lost control signals, reduced operational efficiency, and increased risk of wind turbine generator downtime. To address these challenges, this study proposes a threshold-triggered Deep Q-Network self-healing agent that autonomically detects, analyzes, and mitigates network disruptions while adapting routing behavior and resource allocation in real time. The proposed agent was trained, validated, and tested on an emulated tri-clustered switch network deployed in a cloud-based proof-of-concept testbed. 
Simulation results show that the proposed agent improves disruption recovery performance by 53.84% compared to a baseline shortest-path and load-balanced routing approach, and outperforms state-of-the-art methods, including the Adaptive Network-based Fuzzy Inference System by 13.1% and the Deep Q-Network and Traffic Prediction-based Routing Optimization method by 21.5%, in a super-spine leaf data-plane architecture. Additionally, the agent maintains switch thermal stability by proactively initiating external rack cooling when required. These findings highlight the potential of deep reinforcement learning in building resilience in software-defined industrial networks deployed in mission-critical, time-sensitive application scenarios. | 10.1109/TNSM.2025.3647853 |
| Massimo Tornatore, Teresa Gomes, Carmen Mas-Machuca, Eiji Oki, Chadi Assi, Dominic Schupke | Guest Editors’ Introduction: Special Issue on Resilient Communication Networks for an Hyper-Connected World | 2026 | Vol. 23, Issue | Special issues and sections Privacy Wireless networks Optical fiber networks Cyber-physical systems Communication networks Security Next generation networking Resilience Resilience network reliability IoT NFV SFC optical networks wireless networks network management privacy blockchain | This Special Issue contains a set of remarkable papers covering recent research advances toward resilient communication networks for a hyper-connected world. Papers are organized into five categories: (i) Resilient Architectures for Next-Generation Networks, (ii) Edge, IoT, and Cyber-Physical Systems, (iii) Vehicular, Mobile, and Aerial Networks, (iv) Optical, Hybrid, and Satellite-based Resilient Communications, and (v) Security, Trust, and Resilience in Services and Applications. The editorial begins with an overview of the field and proceeds with a summary of the twenty-two papers included in this Special Issue. | 10.1109/TNSM.2025.3620249 |