Last updated: 2026-05-07 05:01 UTC
All documents
Number of pages: 163
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. We therefore propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results from a SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time than existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving-capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely—using only Top-of-Rack (ToR) switches interconnected via static optical patch panels—to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties—namely throughput, congestion resilience, scalability, and cost—without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models' structural design on specific configurations limits generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprint of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM to OTN failure management, for performance parameter prediction of multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM achieves generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all board data types along with a scaled_RMSE below 0.5, handling varying numbers of performance parameters, and retaining zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different board types. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Henghua Zhang, Jue Chen, Yuhang Wu, Yujie Xiong | TT-INT: A Time-Threshold-based Lightweight In-Band Network Telemetry Scheme for P4-Enabled Programmable Networks | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Military aircraft Space technology Radio broadcasting Frequency modulation Filtering Filters Central Processing Unit In-Band Network Telemetry (INT) Programming Protocol-independent Packet Processors (P4) Software-Defined Networking (SDN) Programmable Data Plane (PDP) Per-Flow Telemetry Regulation | In-band Network Telemetry (INT) has emerged as a promising solution for fine-grained, real-time monitoring in programmable data planes. However, existing INT approaches often incur excessive overhead due to per-hop metadata accumulation or lack fine-grained control over telemetry frequency. This paper presents TT-INT, a lightweight INT framework designed for P4-enabled networks, which introduces a time-threshold-based mechanism to regulate telemetry insertion dynamically. Each switch enforces local constraints based on per-flow time intervals and metadata capacity, enabling reduced overhead while preserving path visibility without requiring global coordination or clock synchronization. Additionally, TT-INT supports a two-window byte-level anomaly detector and a controller-driven adjustment mechanism for further extensibility. Experiments on a real-world-derived backbone topology demonstrate that TT-INT reduces the average per-packet telemetry overhead to as low as 3.4 bytes under the 100 ms/5v configuration at 300 pps, achieving a 97.1% reduction compared to P4-INT under the same traffic rate. Compared to DLINT-5v and PLINT-5v (fixed at 20 and 26 bytes per packet, respectively), TT-INT-5v-100ms achieves up to 83.0% and 86.9% lower overhead. It also reaches a maximum path update detection rate of 97.9% (under the 50 ms configuration) and a minimum detection delay of 0.2 s, confirming TT-INT’s effectiveness in balancing overhead, responsiveness, and monitoring fidelity under high-throughput conditions. In addition, TT-INT improves TCP throughput by 22.9% relative to P4-INT in a BMv2-based environment, further highlighting its efficiency in resource-constrained data plane settings. | 10.1109/TNSM.2026.3688086 |
| Shahid Mahmood, Moneeb Gohar, Seok Joo Koh | Globally Integrated Trust Authority (GITA) for Resource-Constrained Edge Devices in IoT and 6G | 2026 | Early Access | Payloads Filtering Central Processing Unit Filters Feedback Circuits Electronic circuits Microcontrollers Circuits and systems Microprocessors GITA Globally Integrated Trust Authority Network PKDL TSL LMS Security Trust Management Resource Constrained Edge Device Internet of Things and Cyber-Attack | The rapid growth of the Internet and the increasing number of edge devices have expanded the cyber-attack surface at the edge layer. Hackers exploit vulnerabilities at various levels of a network by either directly connecting to it or accessing it over the Internet. In both scenarios, edge devices remain a primary target due to their widespread use, limited resources, and critical impact. Therefore, securing edge devices is essential to counter both local and global cyber threats. Trust is a key factor in determining the level of protection required for edge devices. It can be used to assess the reliability of other devices before offering or requesting services. Since edge devices are often globally interconnected, trust levels should be verifiable across the Internet and intranet. In this paper, we propose the Globally Integrated Trust Authority (GITA), a framework that distributes verifiable trust values across networks and the Internet while minimizing communication overhead. Experimental results demonstrate that GITA improves the efficiency of trust value distribution and verification among nodes compared to digital certificates, while maintaining the same level of protection. This approach enables effective identification of malicious and benign nodes, enhancing the precision of malicious node detection locally and globally. | 10.1109/TNSM.2026.3687967 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | Payloads Military aircraft Space technology Feeds System-on-chip Field programmable gate arrays Circuits Application specific integrated circuits Integrated circuits Feedback RDMA Loss Recovery Multipath Transmission Programmable Switch Programmable NIC FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC’s drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission problems of PFC+Go-Back-N and the challenges of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Frequency modulation Radio broadcasting Filtering Filters Memory modules Virtual private networks Encrypted traffic classification Class incremental learning Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon—which reveals that deep neural networks’ final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks—we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method promotes both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Jayasree Sengupta, Mike Kosek, Justus Fries, Veronika Kitsul, Vaibhav Bajpai | A Long-term View of DNS over QUIC Adoption and its Performance Impact on YouTube Streaming | 2026 | Early Access | | YouTube contributes the largest share of global video traffic on the Internet, making it an important use case for understanding the impact of evolving DNS protocol choices on video streaming performance. Although traditional DNS over UDP (DoUDP) offers low latency, it lacks modern transport features. Encrypted DNS protocols such as DNS over TLS (DoT) and DNS over HTTPS (DoH) improve protocol robustness but suffer from higher latency due to their underlying transport and encryption protocols with multi-RTT handshakes. However, the recently standardized DNS over QUIC (DoQ) aims to combine the best of both worlds by leveraging the transport efficiency of QUIC while ensuring DNS privacy. In this paper, we present the first comprehensive long-term measurement study of DoQ adoption and evaluate its performance implications for YouTube video streaming. We collect data through weekly scans of the IPv4 address space over a two-year period to assess the adoption of the protocol. Our results show that DoQ adoption by public DNS resolvers has steadily increased and plateaued over 25 months. Using seven globally distributed vantage points, our video performance measurements show that DoQ’s DNS lookup time increases by only 1.5% in the median while video startup delay increases by less than 1% compared to DoUDP. In particular, in about 40% of the cases, DoQ yields faster video startup times than DoUDP. These findings position DoQ as a technically efficient DNS protocol, well suited for modern, high-demand performance-sensitive applications such as video streaming. | 10.1109/TNSM.2026.3688441 |
| Willie Kouam, Yezekael Hayel, Gabriel Deugoué, Charles Kamhoua | Decoy Allocation against Lateral Movement - A Network Centrality Game Approach | 2026 | Early Access | Circuits Feedback Network topology Reconnaissance Communication systems Radio access networks Regional area networks Routing Military communication Computer networks Lateral movement Cyber deception Centrality measure One-sided POSGs | Targeted incidents increasingly threaten internal security, with a rise in data breaches and service disruptions. Attackers now employ sophisticated approaches, infiltrating systems for ongoing access to critical information through lateral movement. Detecting and defending against such intrusions is challenging due to the specific vulnerabilities that attackers commonly exploit. Consequently, various deception techniques have emerged over time, aiming to divert attackers’ attention. In our scenario, attackers use lateral movement within the network to reach a specific target, while defenders strategically deploy decoys to counteract them. Such a dynamic and adversarial interaction is modeled as a one-sided partially observable stochastic game (OS-POSG). Several solutions have been proposed to address this challenge, particularly when the attacker possesses complete knowledge of the network’s topology acquired during the reconnaissance stage. Meanwhile, recent years have seen the development of approaches to obscure the attackers’ reconnaissance phase, compelling them to operate without a full comprehension of the network’s structure. We therefore introduce an innovative methodology involving intelligent players who take into account the importance of network devices, assessed by centrality measures, during the decision-making process. This approach aims to improve the effectiveness of the defender’s strategy to counter the attacker’s lateral movement in the network, in the context of step-by-step optimization. | 10.1109/TNSM.2026.3689344 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Electromagnetic propagation Propagation constant Radio broadcasting Radio networks Handover Communication systems Avatars Communication switching Data transfer Cellular networks digital twin 5G/6G handover management generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at decision time; however, the decision is applied only after a non-negligible control-plane signalling delay, by which point it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. The DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The Access and Mobility Management Function (AMF) then estimates the failure and QoS risks associated with the candidate handover and approves/rejects it via standard-compliant signalling. With this, we form a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flash loans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise from notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capability across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | | Transport Layer Security (TLS) is considered to be the most used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie Hellman Over COSE (EDHOC) protocol, a key exchange protocol that establishes the session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is also weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, is able to overcome both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers only additional protection in the eCK security model and has, on average, similar or even better performance in one authentication mode compared to EDHOC. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Aditya Pathak, Irfan Al-Anbagi, Howard J. Hamilton | An Early Conflict Resolution Mechanism for Blockchain-Based Delay-Sensitive IoT Networks | 2026 | Vol. 23, Issue | Internet of Things Blockchains Parallel processing Fabrics Throughput Low latency communication Distributed ledger Analytical models System performance Privacy Blockchain conflicting transaction conflict resolution dependency manager IoT networks hyperledger fabric | Blockchain technology, particularly Hyperledger Fabric (HLF), has emerged as a promising solution to enhance security and privacy in various domains, including Internet of Things (IoT) networks. Conflicting transactions in an HLF-based IoT network occur when multiple transactions attempt to modify the same asset or data concurrently. Conflicting transactions can lead to data inconsistencies, because the network may be unable to determine the correct order or the most preferred valid transaction. Existing conflict resolution mechanisms in HLF-based IoT networks often introduce considerable transaction latency, detect and resolve conflicting transactions in the late stages of the transaction lifecycle (ordering and validation), or require significant changes to the underlying HLF blockchain platform. To overcome these limitations, we propose an Early Conflict Resolution (ECR) mechanism that detects and resolves conflicts during the endorsement stage. The ECR mechanism uses a local cache (Sync.Map) and a dependency graph to efficiently detect conflicts by analyzing the Read-Sets (RS) and Write-Sets (WS) of transactions. ECR resolves the detected conflicting transactions through transaction reordering or sequential processing. It also executes non-conflicting transactions in parallel to speed up their processing. Our results show that the ECR mechanism improves transaction latency and the success rate for varying conflict rates, block sizes, and numbers of IoT devices compared to existing mechanisms. | 10.1109/TNSM.2026.3667085 |
| Zhenliang Liu, Xing Fan, Baoning Niu | Node-Based Blockchain Decentralization Measurement Model | 2026 | Vol. 23, Issue | Blockchains Peer-to-peer computing Economics Bitcoin Information entropy Robustness Memory Market research Consensus protocol Complexity theory Blockchain decentralization measurement node distribution transaction-network centralization node reliability | Blockchain decentralization refers to the property of blockchain systems whereby data storage, transaction verification, and ledger state updates are performed and maintained by distributed nodes, rather than relying on the servers of centralized institutions or organizational entities. It is one of the core features that ensure the security, censorship resistance, and robustness of blockchain systems. Quantifying the degree of blockchain decentralization helps identify potential centralization risks and evaluate the fairness of system architectures and governance mechanisms. It also provides an important basis for design optimizations that improve consensus efficiency and resource allocation, and for evaluating the credibility and understanding the future development trends of blockchain systems in terms of scalability, application domains, and regulatory adaptability. Existing blockchain decentralization measurement methods rely on only one or two dimensions, ignore trend analysis of the degree of decentralization, and lack standardized quantitative approaches, such as the consistent application of the coefficient of variation or information entropy, that would enable objective and comparable assessments across different systems. Given that blockchain systems are composed of participating nodes, we adopt a node-centric perspective and propose a Node-based Blockchain Decentralization Measurement Model (NBDMM). NBDMM uses ten key indicators drawn from three dimensions (node distribution, transaction-network centralization, and node reliability) to measure the degree of decentralization. Experimental results demonstrate that the proposed NBDMM effectively quantifies the level of decentralization across blockchain systems with varying consensus mechanisms. Moreover, the findings indicate a trend toward increasing centralization in both Bitcoin and Ethereum, suggesting a gradual erosion of their originally decentralized nature. | 10.1109/TNSM.2026.3665402 |
| Kaili Qian, Yiqin Lu, Xiaohuan Zhang, Wenqi Zhou | End-to-End Deterministic Network Slicing: A Profit-Driven Optimization Framework With Domain-Specific Algorithms | 2026 | Vol. 23, Issue | Resource management Optimization Computational modeling Dynamic scheduling Network slicing Reliability Radio access networks Delays Wireless communication Genetic algorithms Deterministic networking end-to-end network slicing resource allocation latency equalization | With the rapid evolution of 5G/6G networks, deterministic services such as the Industrial Internet, vehicular communications, and immersive applications impose stringent end-to-end (E2E) latency guarantees. Efficient resource allocation for E2E network slicing (NS) is therefore required to simultaneously satisfy strict latency constraints and improve overall resource efficiency. However, orchestrating cross-domain and multidimensional resources across the radio access network (RAN) and core network (CN) remains challenging due to strong coupling and scalability issues. To address this problem, this paper develops a profit-driven optimization framework for static E2E NS, which jointly accounts for deterministic service guarantees and system profitability. On the RAN side, a hybrid genetic algorithm (HGA) is developed to preserve the global exploration capability of genetic algorithms, while incorporating a gradient-guided resource scaling (GGRS) operator to refine elite solutions and improve solution quality. On the CN side, we design a capacity-aware workload balancing algorithm (CA-WLB). For each candidate path, CA-WLB couples capacity-adaptive virtual network function (VNF) placement with demand-responsive CPU allocation to jointly optimize latency and resource consumption, and then selects the path that yields the maximum profit. Extensive simulations across a wide range of scenarios demonstrate that the proposed framework consistently outperforms conventional heuristic and meta-heuristic approaches, while achieving performance close to high-complexity optimization benchmarks at significantly reduced computational overhead. | 10.1109/TNSM.2026.3667996 |
| Behnam Ojaghi, Ricard Vilalta, Raül Muñoz | IBNS: Optimizing Intent-Based 6G Network Slicing for Conflict Detection and Mitigation | 2026 | Vol. 23, Issue | Quality of service 6G mobile communication Complexity theory Service level agreements Optimization 5G mobile communication Network slicing Ultra reliable low latency communication Throughput Monitoring 6G intent-based network management network slicing QoS intent closed-loop handling SLA intent conflict mitigation | Sixth-Generation (6G) mobile networks aim to automate network resource allocation and support both new and existing vertical services, each with diverse Quality of Service (QoS) intent requirements. Digital Service Providers (DSPs) must consider the specific intent expectations and targets set by different services, and must re-configure and prioritize the most critical intents when resources are insufficient. This paper presents an optimization model for a flexible network paradigm using an Intent-Based Network Slicing (IBNS) framework that can manage the complexity of QoS intents and identify slice intent conflicts through closed-loop evaluation and monitoring. It dynamically adjusts slice configurations to handle and mitigate detected conflicts by enforcing the agreed-upon Service Level Agreement (SLA) objectives (%) for higher-priority slices, ensuring that critical intents are addressed. According to the results, this approach successfully meets the SLA target set for a given slice, but it negatively impacts the performance of other slices and degrades their slice capacity. | 10.1109/TNSM.2026.3668027 |
| Ziyi Teng, Juan Fang, Neal N. Xiong | DOJS: A Distributed Online Joint Scheme to Optimize Cost in Mobile Edge Networks | 2026 | Vol. 23, Issue | Optimization Costs Resource management Heuristic algorithms Base stations Long short term memory Switches Reinforcement learning Quality of service Handover Edge computing resource allocation cache placement game theory reinforcement learning | Edge computing deploys computing and storage resources at the network edge, thereby providing services closer to terminal users. However, in edge networks, the mobility of terminals, the diversity of requests, and the dynamic nature of wireless channels pose significant challenges for efficiently allocating limited wireless and caching resources among multiple terminal devices. To address the unbalanced network load and high caching costs caused by resource allocation in edge networks, we propose a Distributed Online Joint Optimization Scheme (DOJS), which combines centralized user association at the cloud with distributed cache placement at the base stations. This scheme analyzes the impact of terminal device association policies on caching costs and develops a caching cost model that integrates the activity level and content request probability of terminal devices. Based on this model, the relationship between user association selection and caching costs is analyzed, and a Game Theory-based User Association (GTUA) selection algorithm is proposed. To adapt to the dynamic characteristics of terminal-user requests in mobile edge networks, we develop a dynamic cache update method, LS-TD3, which combines Long Short-Term Memory (LSTM) and Twin Delayed Deep Deterministic policy gradient (TD3). Specifically, we integrate the LSTM layer into the policy model framework of reinforcement learning to better predict content popularity from dynamic time-series data, thus improving the accuracy of cache decision-making. To further reduce computational complexity and enhance overall system performance, we employ a distributed optimization strategy to improve the dynamic caching decision process. Extensive experimental results demonstrate the superiority of the proposed algorithm in achieving inter-node load balancing and minimizing caching costs. | 10.1109/TNSM.2026.3665360 |
| Can Wang, Run-Hua Shi, Jiang-Yuan Lian, Pei-Xuan Wang, Ze-Hui Jiang | Quantum-Enhanced Matching Mechanism for Secure and Efficient Power IoT Data Trading | 2026 | Vol. 23, Issue | Protocols Photonics Databases Indexes Error analysis Data models Light sources Differential privacy Accuracy Sensitivity Quantum computation trading matching oblivious key distribution range nearest neighbor | The exponential growth of power IoT data has created immense potential for intelligent energy management, but it also presents critical challenges in achieving secure and efficient data trading. In particular, emerging data trading scenarios demand support for range nearest neighbor matching, which current schemes fail to address. This paper proposes a novel quantum trading matching scheme tailored for power data markets, which, for the first time, supports range nearest neighbor matching while balancing accuracy, efficiency, and privacy. To improve matching efficiency, we design a quantum private query (QPQ) mechanism based on a bidirectional sliding window (BSW), which replaces traditional linear search with dynamic range expansion. Furthermore, to ensure strong privacy and security in real-world scenarios, we develop a two-layer QPQ framework that performs data feature matching and identity retrieval separately, supported by a customized key distribution strategy. Our solution resists quantum attacks and significantly reduces computational overhead. At the same time, by utilizing non-ideal photon sources, it offers a practical and privacy-preserving solution for large-scale power data trading and matching. | 10.1109/TNSM.2026.3667701 |