Last updated: 2026-02-27 05:01 UTC
Number of pages: 157
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Mahnoor Sajid, Mohib Ullah Khan, KyungHi Chang | Intelligent Xn-Based Energy Aware Handover Optimization in 5G Networks via NWDAF-Orchestrated Agent Framework | 2026 | Early Access | Handover; 5G mobile communication; Quality of service; Energy efficiency; Energy consumption; Optimization; Reliability; Computer architecture; Energy conservation; Automation; Energy-aware handover; NWDAF; gNB energy efficiency; multi-agent control; 5G mobility management | Energy-aware mobility management in dense Fifth Generation (5G) networks is increasingly challenged by frequent handovers and unnecessary activation of sleeping next-generation NodeBs (gNBs), which lead to excessive energy consumption and degraded mobility reliability. Conventional Xn-based handover schemes rely on static radio thresholds and implicitly assume always-active gNBs, while existing NWDAF-enabled approaches improve stability but do not explicitly account for energy costs during handover execution. To address these limitations, this paper proposes the Intelligent Energy-Aware Handover Framework (IEAHF), an NWDAF-orchestrated architecture that integrates coordinated agent-based control for energy-aware mobility management. The proposed framework introduces an Energy-Aware Handover Optimization Agent (EA-HOA) to guide reliability-driven handover decisions and a Handover Energy Evaluation Agent (HEEA) to assess the energy impact of candidate handover actions, with both agents operating within a closed-loop control process enforced through Operations, Administration, and Maintenance (OAM). By reformulating handover success to incorporate energy inefficiency and enabling per-handover gNB energy reasoning at decision time, IEAHF jointly optimizes service continuity and energy efficiency. System-level simulations demonstrate that the proposed Agent-NWDAF configuration consistently outperforms baseline and analytics-only schemes, achieving tightly concentrated Energy-Aware Handover Success Rates of approximately 0.98, reducing the Energy-Aware Handover Failure Rate to the range of 0.007–0.015, and delivering up to a 32% reduction in average gNB power consumption relative to an always-on baseline. These results indicate that IEAHF provides a scalable and effective solution for energy-efficient mobility management in 5G networks and establishes a foundation for energy-aware handover control in future Sixth Generation (6G) systems. | 10.1109/TNSM.2026.3668238 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Muhammad Fahimullah, Michel Kieffer, Sylvaine Kerboeuf, Shohreh Ahvar, Maria Trocan | Decentralized Coalition Formation of Infrastructure Providers for Resource Provisioning in Coverage Constrained Virtualized Mobile Networks | 2026 | Early Access | Indium phosphide; III-V semiconductor materials; Resource management; Games; Costs; Wireless communication; Quality of service; Collaboration; Protocols; Performance evaluation; Resource provisioning; wireless virtualized networks; coverage; integer linear programming; coalition formation; hedonic approach | The concept of wireless virtualized networks enables Mobile Virtual Network Operators (MVNOs) to utilize resources made available by multiple Infrastructure Providers (InPs) to set up a service. Nevertheless, existing centralized resource provisioning approaches fail to address such a scenario due to conflicting objectives among InPs and their reluctance to share private information. This paper addresses the problem of resource provisioning from several InPs for services with geographic coverage constraints. When complete information is available, an Integer Linear Program (ILP) formulation is provided, along with a greedy solution. An alternative coalition formation approach is then proposed to build coalitions of InPs that satisfy the constraints imposed by an MVNO, while requiring only limited information sharing. The proposed solution adopts a hedonic game-theoretic approach to coalition formation. For each InP, the decision to join or leave a coalition is made in a decentralized manner, relying on the satisfaction of service requirements and on individual profit. Simulation results demonstrate the applicability and performance of the proposed solution. | 10.1109/TNSM.2026.3663437 |
| Qichen Luo, Zhiyun Zhou, Ruisheng Shi, Lina Lan, Qingling Feng, Qifeng Luo, Di Ao | Revisit Fast Event Matching-Routing for High Volume Subscriptions | 2026 | Early Access | Real-time systems; Vectors; Search problems; Indexing; Filters; Data structures; Classification algorithms; Scalability; Routing; Partitioning algorithms; Content-based Publish/subscribe; Event Matching; Existence Problem; Matching Time; Subscription Aggregation | Although many event matching algorithms have been proposed to achieve scalability for publish/subscribe services, content-based pub/sub systems still suffer from performance deterioration when the system has large numbers of subscriptions, and cannot support the requirements of real-time pub/sub data services. In this paper, we model the event matching problem as an existence problem which cares only about whether there is at least one matching subscription in the given subscription set, differing from existing works that try to speed up the time-consuming search operation to find all matching subscriptions. To solve this existence problem efficiently, we propose DLS (Discrete Label Set), a novel subscription and event representation model. Based on the DLS model, we propose an event matching algorithm with O(Nd) time complexity to support real-time event matching for a large volume of subscriptions and high event arrival speed, where Nd is the node degree in the overlay network. Experimental results show that the event matching performance can be improved by several orders of magnitude compared with traditional algorithms. | 10.1109/TNSM.2026.3664517 |
| Jing Huang, Yabo Wang, Honggui Han | SCFusionLocator: A Statement-Level Smart Contract Vulnerability Localization Framework Based on Code Slicing and Multi-Modal Feature Fusion | 2026 | Early Access | Smart contracts; Feature extraction; Location awareness; Codes; Blockchains; Source coding; Fuzzing; Security; Noise; Formal verification; Smart Contract; Vulnerability Detection; Statement-level Localization; Code Slicing; Feature Fusion | Smart contract vulnerabilities have led to over $20 billion in losses, but existing methods suffer from coarse-grained detection, two-stage “detection-then-localization” pipelines, and insufficient feature extraction. This paper proposes SCFusionLocator, a statement-level vulnerability localization framework for smart contracts. It adopts a novel code-slicing mechanism (via function call graphs and data-flow graphs) to decompose contracts into single-function subcontracts and filter low-saliency statements, paired with source code normalization to reduce noise. A dual-branch architecture captures complementary features: the code-sequence branch uses GraphCodeBERT (with data-flow-aware masking) for semantic extraction, while the graph branch fuses call/control-flow/data-flow graphs into a heterogeneous graph and applies HGAT for structural modeling. SCFusionLocator enables end-to-end statement-level localization by framing tasks as statement classification. We release BJUT_SC02, a large dataset of over 240,000 contracts with line-level labels for 58 vulnerability types. Experiments on BJUT_SC02, SCD, and MANDO datasets show SCFusionLocator outperforms 8 conventional tools and nearly 20 ML baselines, achieving over 90% average F1 at the statement level, with better generalization to similar unknown vulnerabilities, and remains competitive in contract-level detection. | 10.1109/TNSM.2026.3664599 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capabilities across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Rui Huang, Qingling Li, Liangru Xie, Fei Shang | Dynamic Migration in Digital Twin-Enabled Industrial Internet: A Stochastic Network Calculus Approach | 2026 | Early Access | Delays; Stability analysis; Real-time systems; Quality of service; Optimization; Dynamic scheduling; Resource management; Reliability theory; Digital twins; Servers; Digital twin migration; stochastic network calculus; Lyapunov optimization; industrial internet systems | Digital Twin (DT) technology serves as a critical enabler in Cyber-Physical-Social Systems (CPSS), especially within Industry 5.0’s human-centric manufacturing paradigm. However, the computational intensity of processing real-time data in DT systems often leads to resource saturation and performance degradation at computation nodes. Dynamic service migration of digital twins, which offloads computation-intensive tasks to resource-rich nodes, offers a promising solution, yet introduces challenges in preserving service quality during migration. Key issues include high delay, data inconsistency, service interruptions, and limited bandwidth compromising system stability. To address these challenges, this paper proposes a dynamic migration scheduling strategy for digital twins based on a Lyapunov optimization framework. Our approach integrates Stochastic Network Calculus (SNC) for Quality of Service (QoS) quantification and a persistent queue mechanism for reliability assurance. Theoretical analysis and extensive simulations demonstrate that the proposed algorithm achieves near-optimal performance with provable bounds, effectively minimizing migration-induced delay while maintaining service reliability. The results confirm that our framework consistently outperforms existing solutions in managing service migration within industrial internet systems. | 10.1109/TNSM.2026.3665033 |
| Jiazhong Lu, Jimin Peng, Jian Shu, Jiali Yin, Xiaolei Liu | Adversarial Sample Based on Structured Fusion Noise for Botnet Detection in Industrial Control Systems | 2026 | Early Access | Botnet; Industrial control; Feature extraction; Intrusion detection; Integrated circuit modeling; Time-domain analysis; Internet of Things; Frequency-domain analysis; Biological system modeling; Training; Adversarial sample; botnet; industrial control system; fusion noise | Artificial-intelligence-based botnet intrusion detection systems for industrial control systems (ICSs) achieve high detection performance and efficiency in interference-free environments. However, these systems are not immune to evasion through adversarial samples. In this study, we introduce a feature extraction technique tailored for ICS botnet detection. This approach classifies traffic packets based on network traffic attributes and ICS-specific identification codes, encompassing the statuses of ICS devices, enhancing detection precision. Meanwhile, this strategy addresses challenges in ICS data collection and bolsters experimental efficacy. To build a comprehensive botnet intrusion dataset within an ICS, we concurrently utilized existing ICS devices to collect both standard ICS and botnet traffic. Additionally, we present an innovative adversarial sample generation method for botnet detection models, integrating both time-domain and frequency-domain noise. Testing under three real-world ICS attack scenarios revealed that our technique can markedly degrade the classification performance of eight leading AI-based detection models, emphasizing its potential for evading AI-based ICS intrusion detectors. | 10.1109/TNSM.2026.3665504 |
| Tuan-Vu Truong, Van-Dinh Nguyen, Quang-Trung Luu, Phi-Son Vo, Xuan-Phu Nguyen, Fatemeh Kavehmadavani, Symeon Chatzinotas | Accelerating Resource Allocation in Open RAN Slicing via Deep Reinforcement Learning | 2026 | Early Access | Resource management; Open RAN; Ultra reliable low latency communication; Real-time systems; Computational modeling; Optimization; Deep reinforcement learning; Costs; Complexity theory; Bandwidth; Open radio access network; network slicing; virtual network function; resource allocation; deep reinforcement learning; successive convex approximation | The transition to beyond-fifth-generation (B5G) wireless systems has revolutionized cellular networks, driving unprecedented demand for high-bandwidth, ultra-low-latency, and massive connectivity services. The open radio access network (Open RAN) and network slicing provide B5G with greater flexibility and efficiency by enabling tailored virtual networks on shared infrastructure. However, managing resource allocation in these frameworks has become increasingly complex. This paper addresses the challenge of optimizing resource allocation across virtual network functions (VNFs) and network slices, aiming to maximize the total reward for admitted slices while minimizing associated costs. By adhering to the Open RAN architecture, we decompose the formulated problem into two subproblems solved at different timescales. Initially, the successive convex approximation (SCA) method is employed to achieve at least a locally optimal solution. To handle the high complexity of binary variables and adapt to time-varying network conditions, traffic patterns, and service demands, we propose a deep reinforcement learning (DRL) approach for real-time and autonomous optimization of resource allocation. Extensive simulations demonstrate that the DRL framework quickly adapts to evolving network environments, significantly improving slicing performance. The results highlight DRL’s potential to enhance resource allocation in future wireless networks, paving the way for smarter, self-optimizing systems capable of meeting the diverse requirements of modern communication services. | 10.1109/TNSM.2026.3665553 |
| Adel Chehade, Edoardo Ragusa, Paolo Gastaldo, Rodolfo Zunino | Hardware-Aware Neural Architecture Search for Encrypted Traffic Classification on Resource-Constrained Devices | 2026 | Early Access | Accuracy; Computational modeling; Cryptography; Feature extraction; Hardware; Convolutional neural networks; Artificial neural networks; Real-time systems; Long short term memory; Internet of Things; Deep neural networks; encrypted traffic classification; hardware-aware neural architecture search; Internet of Things; resource-constrained devices | This paper presents a hardware-efficient deep neural network (DNN), optimized through hardware-aware neural architecture search (HW-NAS); the DNN supports the classification of session-level encrypted traffic on resource-constrained Internet of Things (IoT) and edge devices. Thanks to HW-NAS, a 1D convolutional neural network (CNN) is tailored on the ISCX VPN-nonVPN dataset to meet strict memory and computational limits while achieving robust performance. The optimized model attains an accuracy of 96.60% with just 88.26K parameters, 10.08M floating-point operations (FLOPs), and a maximum tensor size of 20.12K. Compared to state-of-the-art (SOTA) models, it achieves reductions of up to 444-fold, 312-fold, and 15-fold in these metrics, respectively, significantly minimizing memory footprint and runtime requirements. The model also demonstrates versatility, achieving up to 99.86% across multiple VPN and traffic classification (TC) tasks; it further generalizes to external benchmarks with up to 99.98% accuracy on USTC-TFC and QUIC NetFlow. In addition, an in-depth study of header-level preprocessing strategies confirms that the optimized model can provide notable performance across a wide range of configurations, even in scenarios with stricter privacy considerations. Likewise, a reduction in the length of sessions of up to 75% yields significant improvements in efficiency, while maintaining high accuracy with only a negligible drop of 1-2%. However, careful preprocessing and session-length selection remain important when classifying raw traffic data, as improper settings or aggressive reductions can cause a 7% drop in overall accuracy. The quantized architecture was deployed on STM32 microcontrollers and evaluated across input sizes; results confirm that the efficiency gains from shorter sessions translate to practical, low-latency embedded inference. These findings demonstrate the method’s practicality for encrypted traffic analysis in constrained IoT networks. | 10.1109/TNSM.2026.3666676 |
| Pengcheng Guo, Zhi Lin, Haotong Cao, Yifu Sun, Kuljeet Kaur, Sherif Moussa | GAN-Empowered Parasitic Covert Communication: Data Privacy in Next-Generation Networks | 2026 | Early Access | Interference; Generators; Generative adversarial networks; Blind source separation; Electronic mail; Training; Receivers; Noise; Image reconstruction; Hardware; Artificial intelligence; blind source separation; covert communication; generative adversarial network | The widespread integration of artificial intelligence (AI) in next-generation communication networks poses a serious threat to data privacy while achieving advanced signal processing. Eavesdroppers can use AI-based analysis to detect and reconstruct transmitted signals, leading to serious leakage of confidential information. In order to protect data privacy at the physical layer, we redefine covert communication as an active data protection mechanism. We propose a new parasitic covert communication framework in which communication signals are embedded into dynamically generated interference by generative adversarial networks (GANs). This method is implemented by our CDGUBSS (complex double generator unsupervised blind source separation) system. The system is explicitly designed to prevent unauthorized AI-based strategies from analyzing and compromising signals. For the intended recipient, the pretrained generator acts as a trusted key and can perfectly recover the original data. Extensive experiments have shown that our framework achieves powerful covert communication, and more importantly, it provides strong defense against data reconstruction attacks, ensuring excellent data privacy in next-generation wireless systems. | 10.1109/TNSM.2026.3666669 |
| Devanshu Anand, Gabriel-Miro Muntean | Mitigating Interferences in 5G O-RAN HetNets through ML-driven xAPP to Enhance Users’ QoS | 2026 | Early Access | Interference; 5G mobile communication; Quality of service; Throughput; Base stations; User experience; Resource management; Prevention and mitigation; Spectral efficiency; Signal to noise ratio; HetNets; Interference; Machine Learning | In today’s rapidly evolving telecommunications landscape, the demand for seamless connectivity and top-tier network performance has reached unprecedented levels. Traditional cellular systems, while valiant in their service, now struggle under the weight of spiraling data demands, spectrum scarcity, and power inefficiency. The era of ultra-dense mobile networks, with Heterogeneous Networks (HetNets) at the forefront, ushers in improved throughput, spectral efficiency, and energy management. To tackle these challenges, this paper introduces MLCIMO (Machine Learning-enhanced Classification for Interference Management and Offloading) into 5G HetNets. MLCIMO employs a multi-binary classification strategy to categorize users based on interference types and levels. It also introduces an offloading scheme tailored to user service priorities, enhancing the user quality of experience, while conserving energy. It seamlessly aligns with the evolving needs of the HetNets, addressing some of the issues introduced by small cell deployments. Simulation results show that MLCIMO achieves the highest throughput, shortest delay, and lowest packet loss ratio in comparison with alternative approaches. In a comprehensive analysis, the varying degrees of interference encountered by users under different schemes are unveiled, further establishing MLCIMO’s distinguished position in mitigating interference. | 10.1109/TNSM.2026.3667462 |
| Kaili Qian, Yiqin Lu, Xiaohuan Zhang, Wenqi Zhou | End-to-End Deterministic Network Slicing: A Profit-Driven Optimization Framework with Domain-Specific Algorithms | 2026 | Early Access | Resource management; Optimization; Computational modeling; Dynamic scheduling; Network slicing; Reliability; Radio access networks; Delays; Wireless communication; Genetic algorithms; Deterministic networking; end-to-end network slicing; resource allocation; latency equalization | With the rapid evolution of 5G/6G networks, deterministic services such as the Industrial Internet, vehicular communications, and immersive applications impose stringent end-to-end (E2E) latency guarantees. Efficient resource allocation for E2E network slicing (NS) is therefore required to simultaneously satisfy strict latency constraints and improve overall resource efficiency. However, orchestrating cross-domain and multidimensional resources across the radio access network (RAN) and core network (CN) remains challenging due to strong coupling and scalability issues. To address this problem, this paper develops a profit-driven optimization framework for static E2E NS, which jointly accounts for deterministic service guarantees and system profitability. On the RAN side, a hybrid genetic algorithm (HGA) is developed to preserve the global exploration capability of genetic algorithms, while incorporating a gradient-guided resource scaling (GGRS) operator to refine elite solutions and improve solution quality. On the CN side, we design a capacity-aware workload balancing algorithm (CA-WLB). For each candidate path, CA-WLB couples capacity-adaptive virtual network function (VNF) placement with demand-responsive CPU allocation to jointly optimize latency and resource consumption, and then selects the path that yields the maximum profit. Extensive simulations across a wide range of scenarios demonstrate that the proposed framework consistently outperforms conventional heuristic and meta-heuristic approaches, while achieving performance close to high-complexity optimization benchmarks with significantly reduced computational overhead. | 10.1109/TNSM.2026.3667996 |
| Behnam Ojaghi, Ricard Vilalta, Raül Muñoz | IBNS: Optimizing Intent-Based 6G Network Slicing for Conflict Detection and Mitigation | 2026 | Early Access | Quality of service; 6G mobile communication; Complexity theory; Service level agreements; Optimization; 5G mobile communication; Network slicing; Ultra reliable low latency communication; Throughput; Monitoring; 6G; Intent-Based Network Management; Network Slicing; QoS Intent; Closed-loop Handling; SLA; Intent Conflict Mitigation | The Sixth-Generation (6G) mobile networks aim to automate network resource allocation and support both new and existing vertical services, each with diverse Quality of Service (QoS) intent requirements. Digital Service Providers (DSPs) must consider the specific intent expectations and targets set by different services, and re-configure and prioritize the most critical intents when resources are insufficient. This paper presents an optimization model for a flexible network paradigm using an Intent-Based Network Slicing (IBNS) framework that can manage the complexity of QoS intents and identify slice intent conflicts through closed-loop evaluation and monitoring. It dynamically adjusts the slice configurations to handle and mitigate detected conflicts by executing the agreed-upon Service Level Agreement (SLA) objectives (%) for higher-priority slices to ensure that critical intents are addressed. According to the results, this approach successfully meets the SLA target set for the slice, but it negatively impacts the performance of other slices and degrades their slice capacity. | 10.1109/TNSM.2026.3668027 |
| Can Wang, Run-Hua Shi, Jiang-Yuan Lian, Pei-Xuan Wang, Ze-Hui Jiang | Quantum-Enhanced Matching Mechanism for Secure and Efficient Power IoT Data Trading | 2026 | Early Access | Protocols; Photonics; Databases; Indexes; Error analysis; Data models; Light sources; Differential privacy; Accuracy; Sensitivity; Quantum Computation; Trading Matching; Oblivious Key Distribution; Range Nearest Neighbor | The exponential growth of power IoT data has created immense potential for intelligent energy management, but it also presents critical challenges in achieving secure and efficient data trading. In particular, emerging data trading scenarios demand support for range nearest neighbor matching, which current schemes fail to address. This paper proposes a novel quantum trading matching scheme tailored for power data markets, which, for the first time, supports range nearest neighbor matching while balancing accuracy, efficiency, and privacy. To improve matching efficiency, we design a quantum private query (QPQ) mechanism based on a bidirectional sliding window (BSW), which replaces traditional linear search with dynamic range expansion. Furthermore, to ensure strong privacy and security in real-world scenarios, we develop a two-layer QPQ framework that performs data feature matching and identity retrieval separately, supported by a customized key distribution strategy. Our solution resists quantum attacks and significantly reduces computational overhead. At the same time, by utilizing non-ideal photon sources, it offers a practical and privacy-preserving solution for large-scale power data trading and matching. | 10.1109/TNSM.2026.3667701 |
| Yanxu Lin, Renzhong Zhong, Jingnan Xie, Yueting Zhu, Byung-Gyu Kim, Saru Kumari, Shakila Basheer, Fatimah Alhayan | Privacy-Preserving Digital Publishing Framework for Next-Generation Communication Networks: A Verifiable Homomorphic Federated Learning Approach | 2026 | Early Access | Electronic publishing; Federated learning; Cryptography; Communication networks; Homomorphic encryption; Next generation networking; Complexity theory; Protocols; Privacy; Optimization; Digital publishing; federated learning; next-generation communication networks; Chinese remainder theorem | Next-generation communication networks are revolutionizing digital publishing through intelligent content distribution and collaborative optimization capabilities. However, existing federated learning approaches face fundamental limitations, including trusted third-party dependencies, excessive communication overhead, and vulnerability to collusion attacks between servers and participants. This paper introduces VHFL-DP, a verifiable homomorphic federated learning framework for digital publishing environments operating within 6G network infrastructures. The framework addresses critical privacy and scalability challenges through four key innovations: a distributed cryptographic key generation protocol that eliminates trusted third-party requirements, Chinese remainder theorem-based dimensionality reduction, auxiliary validation nodes that enable independent verification with constant-time complexity, and an intelligent incentive mechanism that rewards digital publishing platforms based on objective contribution quality metrics. Experimental evaluation on MNIST and Amazon reviews datasets across six baseline methods demonstrates that VHFL-DP achieves superior performance with accuracy improvements of 4.2% over the best baseline method. The framework maintains near-constant verification time (2.73 to 2.91 seconds) as the platform count increases from ten to fifty and under dropout rates of up to thirty percent. Security evaluation reveals strong resilience, with only 2.4 percentage points of accuracy degradation under poisoning attacks compared to 6.7-7.0 points for baseline methods, inference-attack success near random guessing at 51.3%, and 92.4% successful aggregation under Byzantine adversaries. | 10.1109/TNSM.2026.3667167 |
| Xin Song, Biao Zhang, Ze Fan, Ruomeng Li, Siyang Xu | Dynamic Normalization TD3-Based Task Offloading for UAV-Assisted Collaborative Computing | 2026 | Early Access | Delays; Optimization; Energy consumption; Autonomous aerial vehicles; Heuristic algorithms; Collaboration; Real-time systems; Cloud computing; Resource management; Vehicle dynamics; computation offloading; mobile edge computing; unmanned aerial vehicle; deep reinforcement learning; quality of service | To meet the computational requirements of computation-intensive and delay-sensitive applications, we construct an Unmanned Aerial Vehicle (UAV)-assisted three-layer collaborative computing framework that integrates local, edge, and cloud computing resources. However, in dynamic UAV-assisted environments, some existing approaches lack adaptability and struggle to effectively balance delay and energy consumption. To address these challenges, we formulate a joint optimization problem that minimizes the weighted sum of delay and energy consumption, where adaptive weight factors are dynamically adjusted according to system state variations. Due to the non-convex and high-dimensional nature of our proposed problem, traditional optimization methods are generally inadequate. Hence, the problem is modeled as a Markov Decision Process (MDP), and a normalization-based reward function is designed to eliminate the dimensional imbalance between delay and energy consumption. A Dynamic Normalization Twin Delayed Deep Deterministic Policy Gradient (DN-TD3) algorithm is then proposed, which incorporates mechanisms of adaptive exploration and critic-driven policy updates to enhance convergence stability and reduce sensitivity to hyperparameters. Simulation results demonstrate that the proposed DN-TD3 algorithm outperforms benchmark schemes in terms of system cost reduction, convergence speed, and overall stability. | 10.1109/TNSM.2026.3667404 |
| Jesus Omaña Iglesias, Carlos Segura Perales, Stefan Geißler, Diego Perino, Andra Lutu | Anomaly Detection for IoT Global Connectivity | 2026 | Early Access | Internet of Things Anomaly detection Ecosystems Biological system modeling Home automation Pipelines Monitoring Knowledge engineering Elevators Data models Anomaly detection roaming internet of things mobile networks signaling traffic | Internet of Things (IoT) application providers rely on Mobile Network Operators (MNOs) and roaming infrastructures to deliver their services globally. In this complex ecosystem, where the end-to-end communication path traverses multiple entities, it becomes increasingly challenging to guarantee communication availability and reliability. Further, most platform operators take a reactive approach to communication issues, responding to user complaints only after incidents have become severe, compromising service quality. This paper presents our experience in the design and deployment of ANCHOR, an unsupervised anomaly detection solution for the IoT connectivity service of a large global roaming platform. ANCHOR assists engineers by filtering vast amounts of data to identify potentially problematic clients (i.e., those with connectivity issues affecting several of their IoT devices), enabling proactive issue resolution before the service is critically impacted. We first describe the IoT service, infrastructure, and network visibility of the IoT connectivity provider we operate. Second, we describe the main challenges and operational requirements for designing an unsupervised anomaly detection solution on this platform. Following these guidelines, we propose statistical rules as well as machine- and deep-learning models for anomaly detection in IoT verticals based on passive signaling traffic. We describe the steps we followed in working with the operational teams on the design and evaluation of our solution on the operational platform, and report an evaluation on operational IoT customers. | 10.1109/TNSM.2026.3666123 |
| Aditya Pathak, Irfan Al-Anbagi, Howard J. Hamilton | An Early Conflict Resolution Mechanism for Blockchain-Based Delay-Sensitive IoT Networks | 2026 | Early Access | Internet of Things Blockchains Parallel processing Fabrics Throughput Low latency communication Distributed ledger Analytical models System performance Privacy Blockchain Conflicting transaction Conflict resolution Dependency manager IoT networks Hyperledger Fabric | Blockchain technology, particularly Hyperledger Fabric (HLF), has emerged as a promising solution to enhance security and privacy in various domains, including Internet of Things (IoT) networks. Conflicting transactions in an HLF-based IoT network occur when multiple transactions attempt to modify the same asset or data concurrently. Conflicting transactions can lead to data inconsistencies, because the network may be unable to determine the correct order or the most preferred valid transaction. Existing conflict resolution mechanisms in HLF-based IoT networks often introduce considerable transaction latency, detect and resolve conflicting transactions in the late stages of the transaction lifecycle (ordering and validation), or require significant changes to the underlying HLF blockchain platform. To overcome these limitations, we propose an Early Conflict Resolution (ECR) mechanism that detects and resolves conflicts during the endorsement stage. The ECR mechanism uses a local cache (Sync.Map) and a dependency graph to efficiently detect conflicts by analyzing the Read-Sets (RS) and Write-Sets (WS) of transactions. ECR resolves conflicts in the detected conflicting transactions through transaction reordering or sequential processing. It also executes non-conflicting transactions in parallel to speed up their processing. Our results show that the ECR mechanism improves transaction latency and the success rate for varying conflict rates, block sizes, and numbers of IoT devices compared to existing mechanisms. | 10.1109/TNSM.2026.3667085 |
| Fernando Martinez-Lopez, Lesther Santana, Mohamed Rahouti, Abdellah Chehri, Shawqi Al-Maliki, Gwanggil Jeon | Learning in Multiple Spaces: Prototypical Few-Shot Learning with Metric Fusion for Next-Generation Network Security | 2026 | Early Access | Measurement Prototypes Extraterrestrial measurements Training Chebyshev approximation Metalearning Scalability Next generation networking Learning (artificial intelligence) Data models Few-Shot Learning Network Intrusion Detection Metric-Based Learning Multi-Space Prototypical Learning | As next-generation communication networks increasingly rely on AI-driven automation, ensuring robust and secure intrusion detection becomes critical, especially under limited labeled data. In this context, we introduce Multi-Space Prototypical Learning (MSPL), a few-shot intrusion detection framework that improves prototype-based classification by fusing complementary metric-induced spaces (Euclidean, Cosine, Chebyshev, and Wasserstein) via a constrained weighting mechanism. MSPL further enhances stability through Polyak-averaged prototype generation and balanced episodic training that mitigates class imbalance across diverse attack categories. In a few-shot setting with as few as 200 training samples, MSPL consistently outperforms single-metric baselines across three benchmarks: on CICEVSE Network2024, AUPRC improves from 0.3719 to 0.7324 and F1 increases from 0.4194 to 0.8502; on CICIDS2017, AUPRC improves from 0.4319 to 0.4799; and on CICIoV2024, AUPRC improves from 0.5881 to 0.6144. These results demonstrate that multi-space metric fusion yields more discriminative and robust representations for detecting rare and emerging attacks in intelligent network environments. | 10.1109/TNSM.2026.3665647 |