Last updated: 2026-04-01 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jianwei Zhang, Bowen Cui | Bandwidth-Delay Optimal Segment Routing: Upper-Bound and Lower-Bound Algorithms | 2026 | Early Access | Routing; Optimization; Quality of service; Delays; Complexity theory; Bandwidth; Topology; Network topology; Measurement; Approximation algorithms; Segment routing; quality-of-service routing; multicriteria optimization; labeling algorithm | Segment routing (SR) is a novel source routing paradigm that enables network programmability. However, existing research rarely considers multicriteria optimization problems in SR networks. Given the critical role of bandwidth and delay in quality-of-service (QoS) routing, we formally define the bandwidth-delay optimal SR (BDoSR) problem for the first time and prove its NP-hardness. By leveraging the label correcting algorithm schema, we design a suite of polynomial-time algorithms, including an upper-bound algorithm (BDoSR-UB) and a lower-bound algorithm (BDoSR-LB). BDoSR-UB enables rapid estimation of the optimal solution, while BDoSR-LB is accuracy-adjustable and delivers (near-)optimal feasible solutions. We rigorously analyze their performance gap through carefully constructed network examples, providing deep insights into the adjustable parameters of BDoSR-LB. Finally, we validate our algorithms on realistic network topologies, demonstrating that both BDoSR-UB and BDoSR-LB frequently converge to the optimal solution in practice while offering superior computational efficiency compared to existing approaches. (An illustrative bandwidth-delay path-search sketch follows this table.) | 10.1109/TNSM.2026.3678190 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. (An illustrative reward-sanitization sketch follows this table.) | 10.1109/TNSM.2026.3660728 |
| Junyan Guo, Shuang Yao, Yue Song, Le Zhang, Xu Han, Liyuan Chang | EF-CPPA: Escrow-Free Conditional Privacy-Preserving Authentication Scheme for Real-Time Emergency Messages in Smart Grids | 2026 | Early Access | Authentication; Smart grids; Security; Privacy; Smart meters; Logic gates; Real-time systems; Vehicle dynamics; Time factors; Power system reliability; Smart grid; emergency message authentication; conditional privacy preservation; escrow-free key generation; unlinkability; dynamic joining and revocation | Timely and secure emergency message delivery is critical to resilient smart-grid operation and rapid disturbance response. However, existing schemes remain inadequate, leaving smart grids vulnerable to security and privacy threats and causing verification bottlenecks, particularly when nonlinear emergency measurements cannot be homomorphically aggregated, which prevents bandwidth-efficient in-network aggregation and scalable batch verification. We propose EF-CPPA, an escrow-free, conditional privacy-preserving authentication scheme for real-time emergency messaging in smart grids. EF-CPPA enables smart meters to deliver authenticated emergency messages to the control center (CC) via power gateways verifiable as legitimate relays, while ensuring the confidentiality, integrity, and unlinkability of embedded nonlinear measurements. EF-CPPA further provides conditional anonymity with accountable tracing, as well as origin authentication, intra-domain verification, and scalable batch verification under bursty multi-meter messaging. An ECDLP-based escrow-free key-generation mechanism reduces reliance on the CC and enables efficient node joining and revocation. Security analysis shows that EF-CPPA achieves existential unforgeability under chosen-message attacks (EUF-CMA) and satisfies the stated security and privacy requirements. Performance evaluation demonstrates low computational, communication, energy, and node-management overhead, making EF-CPPA suitable for security-critical, time-sensitive smart-grid emergency messaging. | 10.1109/TNSM.2026.3672754 |
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service; Resource management; Observability; Training; Delays; Job shop scheduling; Dynamic scheduling; Bandwidth; Vehicle dynamics; Thermal stability; Edge intelligence; network slicing; QoS-aware scheduling; graph attention networks; adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. (An illustrative top-K attention sketch follows this table.) | 10.1109/TNSM.2026.3673188 |
| Yifei Xie, Zhi Lin, Kefeng Guo, Ruiqian Ma, Hussam Al Hamadi, Fatima Asiri, Ahlam Almusharraf | Lightweight Learning for Symbiotic Secure and Efficient ISAC in RIS-assisted Intelligent Transportation Networks | 2026 | Early Access | | Achieving real-time processing in integrated sensing and communication (ISAC) systems presents significant challenges due to the high computational burden of conventional optimization methods, particularly within intelligent transportation networks (ITN). This paper addresses these challenges by proposing lightweight supervised and unsupervised deep learning (DL) algorithms for quasi-static and dynamic environments, respectively, aiming to improve the secrecy energy efficiency (SEE) of ITN under the constraints of the Cramér-Rao bound (CRB) for direction-of-arrival (DOA) estimation and the transmission rate of each user. By jointly optimizing power allocation and reconfigurable intelligent surface (RIS) phase shifts, the framework ensures robust physical layer security (PLS) alongside communication efficiency, aligning with defense-in-depth strategies for securing next-generation ITN. For quasi-static environments, a supervised deep neural network (DNN) algorithm leverages offline codebook-generated labels to achieve near-optimal channel state information (CSI) mapping, explicitly minimizing signal leakage to eavesdroppers. In dynamic scenarios, an unsupervised channel attention mechanism-based residual network (CAM-ResNet) eliminates labeling overhead through direct physics-informed SEE optimization with adaptive constraint enforcement, enabling real-time adaptation to rapidly varying channels and evolving security threats. Simulation results demonstrate that both algorithms achieve SEE performance comparable to that of the zero-forcing (ZF) method, while significantly reducing computational complexity, with the CAM-ResNet demonstrating superior resilience to dynamic security threats. This work contributes to advancing secure and efficient ISAC solutions, reinforcing multi-layered defense mechanisms critical for future ITN. | 10.1109/TNSM.2026.3679370 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training; Incremental learning; Adaptation models; Internet of Things; Convolutional neural networks; Reviews; Payloads; Network intrusion detection; Long short term memory; Federated learning; Network Intrusion Detection Systems; Internet of Things; Federated Learning; Class Incremental Learning; 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover the heterogeneity of attack traffic, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design and maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Yanli Liu, Yue Pang, Yidi Wang, Shengnan Li, Jin Li, Min Zhang, Danshi Wang | Developing A Domain-Specific LLM for Optical Networks: A Reinforcement Learning-Based Fine-Tuning Framework | 2026 | Early Access | Optical fiber networks; Cognition; Accuracy; Location awareness; Reinforcement learning; Adaptation models; Semantics; Optimization; Maintenance; Training; Large language model; reinforcement learning from human feedback; reinforced fine-tuning; optical networks | Optical networks serve as the backbone of modern communication infrastructure, where efficient operation and maintenance (O&M) are essential for ensuring reliable and high-speed data services. However, traditional network O&M faces persistent challenges, including high labor costs, delayed response times, and difficulties in processing massive and complex network data. Although large language models (LLMs) have demonstrated strong capabilities in text understanding, generation, and reasoning, their direct application in optical network O&M is limited by domain-specific knowledge barriers, inherent reasoning biases, and insufficient performance in complex multi-step tasks. To address these issues, this study develops a domain-adaptation and system-implementation framework that applies two established reinforcement learning-based fine-tuning methods (RLHF and ReFT) to construct domain-specialized LLMs for optical network O&M tasks. In the context of log analysis, RLHF achieves improvements of 1.64 points in accuracy, 1.02 points in content richness, and a notable 10-point increase in interactivity over supervised fine-tuning. In alarm localization, ReFT achieves accuracy improvements of 2%–13% across four reasoning tasks. The extensive tests not only demonstrate the practical value of RL-based fine-tuning in enhancing alignment and reasoning for domain-specific applications, but also provide a practical methodology and implementation reference for applying reinforcement learning-based LLM adaptation in optical network O&M environments. | 10.1109/TNSM.2026.3676522 |
| Ei Theingi, Lokman Sboui, Diala Naboulsi | Adaptive and Energy-Efficient Deployment of Robotic Airborne Base Stations: A Deep Reinforcement Learning Approach | 2026 | Early Access | Energy efficiency; Base stations; Adaptation models; Energy consumption; Vehicle dynamics; Optimization; Adaptive systems; Robot kinematics; Grasping; Fluctuations; Actor-Critic; Deep Reinforcement Learning; Dynamic Network Deployment; Energy Efficiency; Robotic Airborne Base Stations; Sustainable Wireless Networks | The increasing energy demands of future wireless networks drive the need for intelligent and adaptive deployment strategies. Traditional methods often lack the flexibility required to handle the spatio-temporal fluctuations inherent in modern communication environments. To address this challenge, we investigate the energy-efficient deployment of Robotic Airborne Base Stations (RABSs) in practical scenarios, such as managing sudden traffic surges during large-scale public events and providing emergency coverage in disaster-stricken areas where terrestrial infrastructure is compromised. We propose a novel Deep Reinforcement Learning (DRL)-based framework for an energy-efficient deployment of multiple RABSs. Unlike existing approaches, our framework features both centralized and decentralized Actor-Critic DRL, enabling scalable and adaptive decision-making. The centralized model leverages global network information to optimize the collective deployment of RABSs, while the multi-agent decentralized approach allows RABSs to make independent yet coordinated decisions based on local observations, ensuring scalability in large-scale networks. In addition, we introduce a state-action representation that captures spatio-temporal traffic variations and energy consumption dynamics. Our simulations validate the effectiveness of the proposed framework, demonstrating significant improvements in energy efficiency and adaptability compared to heuristic, Gauss-Markov, and Q-Learning models. Furthermore, comparison with an exhaustive search benchmark confirms that our approach achieves optimal energy efficiency with significantly lower computational complexity. | 10.1109/TNSM.2026.3678488 |
| Henghua Zhang, Jue Chen, Haidong Peng, Junru Chen | MAT4PM: Machine Learning-Guided Adaptive Threshold Control for P4-based Monitoring in SDNs | 2026 | Early Access | Monitoring; Switches; Accuracy; Control systems; Real-time systems; Scalability; Data collection; Adaptation models; Telemetry; Process control; Software-Defined Networking; Programmable Data Plane; Machine Learning; Network Monitoring; P4 | This paper presents MAT4PM, a P4-based proactive monitoring framework designed for Software-Defined Networking (SDN). This is the first monitoring framework that combines Programmable Data Plane (PDP) capabilities for event-driven data collection with control plane intelligence for real-time threshold optimization. The architecture consists of a lightweight P4-based monitoring module deployed at the switch, a Machine Learning (ML) inference engine running at the controller, and a P4Runtime feedback channel for real-time threshold updates. Traffic features are leveraged to predict optimal monitoring thresholds, which are then synchronized with the data plane. A composite cost function is introduced to jointly consider monitoring error and communication overhead, guiding the model toward a balanced trade-off between accuracy and efficiency. Experimental evaluation on BMv2 software switches demonstrates that, compared to static threshold strategies, MAT4PM reduces monitoring error to 7.0% and achieves a 5.6% reduction in overall cost, while maintaining sub-millisecond inference latency and minimal resource consumption. These results demonstrate the practical viability and scalability of MAT4PM in SDN environments. (An illustrative threshold-selection sketch follows this table.) | 10.1109/TNSM.2026.3677416 |
| Archana Ojha, Om Jee Pandey, Prasenjit Chanak | Energy-Efficient Network Cut Detection and Recovery Mechanism for Cluster-Based IoT Networks | 2026 | Early Access | Wireless sensor networks; Data collection; Energy consumption; Relays; Internet of Things; Delays; Data communication; Detection algorithms; Smart cities; Routing; Wireless sensor networks (WSNs); internet of things (IoT); data routing; network cut detection and recovery; reinforcement learning brain storm optimization (RLBSO); mobile data collector (MDC) | Recently, the Internet of Things (IoT) has found widespread applications in diverse fields, including environmental monitoring, Industry 4.0, smart cities, and smart agriculture. In these applications, sensor nodes form Wireless Sensor Networks (WSNs) and collect data from the monitoring environment. Sensor nodes are vulnerable to various faults, including battery depletion and hardware malfunctions. These faulty nodes cut/partition the network into several isolated segments. Therefore, several non-faulty nodes become disconnected from the Base Station (BS)/Sink and are unable to transmit their data to the BS. This leads to the early demise of the network. Network cuts also significantly degrade overall network performance. Once the network is divided into isolated segments, it becomes very difficult to detect these segments and collect data from them. Therefore, this paper proposes a Mobile Data Collector (MDC)-based data-gathering approach for WSNs to collect data from isolated segments. This paper also proposes a novel MDC-based network cut detection algorithm that identifies the formation of network cuts in WSNs. A network recovery algorithm is also proposed to enable data collection from the isolated segments. Furthermore, this paper proposes a Reinforcement Learning Brain Storm Optimization (RLBSO) algorithm for optimal selection of Rendezvous Points (RPs) and optimal MDC path design. It significantly reduces data-gathering time across isolated network segments. The simulation and testbed results show that the proposed approach outperforms existing state-of-the-art approaches in terms of network lifetime, data collection ratio, energy consumption, and latency. | 10.1109/TNSM.2026.3677868 |
| Basharat Ali, Guihai Chen | MIRAGE-DoH: Metamorphic Intelligence and Resilient AI Grid for Autonomous Governance of Encrypted DNS | 2026 | Early Access | Cryptography; Domain Name System; Fingerprint recognition; Accuracy; Metadata; Artificial intelligence; Software; Perturbation methods; Network security; Monitoring; Network Security; Network Protocol; Enhancing Encrypted Network Security; Cyber Threats Detection; Anomaly Detection; Attack Detection; Traffic Classification; Quantum ML in Encrypted DNS | Existing DNS over HTTPS defenses have demonstrated limited resilience against polymorphic traffic shaping, staged tunneling, and adaptive mimicry, largely because they rely on static learning pipelines and rigid cryptographic configurations. MIRAGE-DoH was designed to examine whether adaptive inference, persistent structural encoding, and calibrated cryptographic agility could be integrated into a deployable and measurable encrypted DNS control architecture. The framework combined flow-level Cognitive MetaAgents capable of internal reconfiguration, Topological Memory Networks that preserved stable geometric irregularities across temporal windows, and Metamorphic Cryptographic Shards that adjusted key encapsulation policies according to empirically calibrated threat severity. A Causal Counterfactual Environment modeled constrained attacker decision pathways, while Spectral Game Intelligence analyzed flow interaction graphs to anticipate structural attack transitions. Evaluation on extended CIC-DoH2023 and Gen-C-DDD-2022 datasets was conducted under fixed flow-level decision intervals with explicit accounting for synchronization overhead, spectral graph construction cost, and cryptographic rotation latency. Cross-dataset experiments yielded a mean detection accuracy of 97.8% with a 0.41% false positive rate, sustaining median inference latency of 62 μs and cryptographic morph latency of 3.7 ms under load. Quantum-assisted inference was assessed through bounded simulations, indicating constrained information gain within the adopted lattice-based configuration, without asserting unconditional post-quantum immunity. These results demonstrated that adaptive encrypted DNS governance can be empirically grounded, operationally bounded, and stress-evaluated without reliance on unqualified claims of perfect security. | 10.1109/TNSM.2026.3677474 |
| Kang Liu, Jianchen Hu, Donglai Ma, Xiaoyu Cao, Yuzhou Zhou, Lei Zhu, Li Su, Wenli Zhou, Xueqi Wu, Feng Gao | Topology-Aware Virtual Machine Placement through the Buffer Migration Mechanism | 2026 | Early Access | Central Processing Unit; Filtering; Filters; Electronic circuits; Circuits; Circuits and systems; Feedback; Cloud computing; Radio access networks; Regional area networks; Buffer management; Optimization; Topology-aware VM Placement | Virtual machine (VM) placement under topology constraints is difficult because unpredictable topological VM requests impose additional structural requirements (including affinity, anti-affinity, and fault-domain constraints) on the resource pool. Thus, the service level agreement (SLA) can be violated even when the occupancy of the resource pool is quite modest. To solve this problem, we propose an efficient buffer-migration-based heuristic online algorithm. First, we build an integer programming model for the topology-aware VM placement problem. Second, we propose a hierarchical resource-preserving online approach, where the Rack and physical machine (PM) nodes are selected in the upper and lower layers, respectively. Finally, we utilize the buffer to place and migrate the unfitted VMs to enhance the capacity of the resource pool. The proposed approach is tested with a high proportion of topological VM requests (nearly 60%) in resource pools of 500, 1,000, and 1,500 PMs. The results show that our online approach (with unknown upcoming VM information) can achieve more than 85% of the performance of the offline approach (with complete upcoming VM information). The latency is lower than 5 ms per VM. | 10.1109/TNSM.2026.3678976 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training; Load management; Packet loss; Throughput; Delays; Topology; Scheduling; Telecommunication traffic; Fluctuations; Switches; Datacenter Networks; Load Balancing; Data Security; Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Zhiwei Yu, Chengze Du, Heng Xu, Ying Zhou, Bo Liu, Jialong Li | REACH: Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks | 2026 | Vol. 23, Issue | Graphics processing units; Computational modeling; Reliability; Processor scheduling; Costs; Biological system modeling; Artificial intelligence; Reinforcement learning; Transformers; Robustness; Community GPU platforms; reinforcement learning; task scheduling; distributed AI infrastructure | Community GPU (Graphics Processing Unit) platforms are emerging as a cost-effective and democratized alternative to centralized GPU clusters for AI (Artificial Intelligence) workloads, aggregating idle consumer GPUs from globally distributed and heterogeneous environments. However, their extreme hardware/software diversity, volatile availability, and variable network conditions render traditional schedulers ineffective, leading to suboptimal task completion. In this work, we present REACH (Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks), a Transformer-based reinforcement learning framework that redefines task scheduling as a sequence scoring problem to balance performance, reliability, cost, and network efficiency. By modeling both global GPU states and task requirements, REACH learns to adaptively co-locate computation with data, prioritize critical jobs, and mitigate the impact of unreliable resources. Extensive simulation results show that REACH improves task completion rates by up to 17%, more than doubles the success rate for high-priority tasks, and reduces bandwidth penalties by over 80% compared to state-of-the-art baselines. Stress tests further demonstrate its robustness to GPU churn and network congestion, while scalability experiments confirm its effectiveness in large-scale, high-contention scenarios. | 10.1109/TNSM.2026.3663316 |
| Qichen Luo, Zhiyun Zhou, Ruisheng Shi, Lina Lan, Qingling Feng, Qifeng Luo, Di Ao | Revisit Fast Event Matching–Routing for High-Volume Subscriptions | 2026 | Vol. 23, Issue | Real-time systems; Vectors; Search problems; Indexing; Filters; Data structures; Classification algorithms; Scalability; Routing; Partitioning algorithms; Content-based publish/subscribe; event matching; existence problem; matching time; subscription aggregation | Although many scalable event matching algorithms have been proposed to achieve scalability for publish/subscribe services, content-based pub/sub systems still suffer from performance deterioration when they host large numbers of subscriptions, and cannot support the requirements of real-time pub/sub data services. In this paper, we model the event matching problem as an existence problem that cares only about whether there is at least one matching subscription in the given subscription set, differing from existing works that try to speed up the time-consuming search operation to find all matching subscriptions. To solve this existence problem efficiently, we propose DLS (Discrete Label Set), a novel subscription and event representation model. Based on the DLS model, we propose an event matching algorithm with $O(N_{d})$ time complexity to support real-time event matching for a large volume of subscriptions and high event arrival speed, where $N_{d}$ is the node degree in the overlay network. Experimental results show that the event matching performance can be improved by several orders of magnitude compared with traditional algorithms. | 10.1109/TNSM.2026.3664517 |
| Domenico Scotece, Giuseppe Santaromita, Claudio Fiandrino, Luca Foschini, Domenico Giustiniano | On the Scalability of Access and Mobility Management Function: The Localization Management Function Use Case | 2026 | Vol. 23, Issue | 5G mobile communication; Scalability; Location awareness; 3GPP; Quality of service; Position measurement; Routing; Radio access networks; Protocols; Global navigation satellite system; 5G localization; 5G core; SBA; AMF; localization management function (LMF) | The adoption of Service-Based Architecture (SBA) in 5G Core Networks (5GC) has significantly transformed the design and operation of the control plane, enabling greater flexibility and agility for cloud-native deployments. While the infrastructure has initially evolved by implementing key functions, there remains significant potential for additional services, such as localization, paving the way for the integration of the Location Management Function (LMF). However, the extensive functional decomposition within SBA leads to consequences, such as the increase of control plane operations. Specifically, we observe that the additional signaling traffic introduced by the presence of the LMF overwhelms the Access and Mobility Management Function (AMF), which is responsible for authentication and mobility. In fact, in mobile positioning, each connected mobile device requires a significant amount of control traffic to support location algorithms in the 5GC. To address this scalability challenge, we analyze the impact of three well-known optimization techniques on location procedures to reduce control message traffic in the specific context of the 5GC, namely a caching system, a request aggregation system, and a service scalability system. Our solutions are evaluated in an OpenAirInterface (OAI) emulated environment with real hardware. After the analysis in the emulated environment, we select the caching system, owing to its feasibility, for analysis in a real 5G testbed. Our results demonstrate a significant reduction in the additional overhead introduced by the LMF, improving scalability by reducing AMF processing time by up to 50%. (An illustrative location-cache sketch follows this table.) | 10.1109/TNSM.2026.3664546 |
| Jordan F. Masakuna, D'Jeff K. Nkashama, Arian Soltani, Marc Frappier, Pierre-Martin Tardif, Froduald Kabanza | Enhancing Anomaly Alert Prioritization Through Calibrated Standard Deviation Uncertainty Estimation With an Ensemble of Auto-Encoders | 2026 | Vol. 23, Issue | Uncertainty; Standards; Measurement; Anomaly detection; Calibration; Bayes methods; Predictive models; Computer security; Reliability; Monitoring; Auto-encoders; security; anomaly detection; alert prioritization; uncertainty estimation | Deep auto-encoders (AEs) are widely employed deep learning methods in the field of anomaly detection across diverse domains (e.g., cybersecurity analysts managing large volumes of alerts, or medical practitioners monitoring irregular patient signals). In such contexts, practitioners often face challenges of scale and limited processing resources. To cope, strategies such as false positive reduction, human-in-the-loop review, and alert prioritization are commonly adopted. This paper explores the integration of uncertainty quantification (UQ) methods into alert prioritization for anomaly detection using ensembles of AEs. UQ models highlight doubtful classification decisions, enabling analysts to address the most certain alerts first, since higher certainty typically correlates with greater accuracy. Our study reveals a nuanced issue where applying UQ to ensembles of AEs can produce skewed distributions of large reconstruction errors (errors exceeding a pre-defined threshold), which may falsely suggest high uncertainty when standard deviation is used as the metric. Conventionally, a high standard deviation indicates high uncertainty. However, contrary to intuition, large reconstruction errors often reflect that the AE is strongly confident an input is anomalous, not uncertainty about it. Moreover, ensembles of AEs generate reconstruction errors with varying ranges, complicating interpretation. To address this, we propose an extension that calibrates the standard deviation distribution of uncertainties, mitigating erroneous prioritization. Evaluation on 10 benchmark datasets demonstrates that our calibration approach improves the effectiveness of UQ methods in prioritizing alerts, while maintaining favorable trade-offs across other key performance metrics. (An illustrative calibration sketch follows this table.) | 10.1109/TNSM.2026.3664298 |
| Muhammad Fahimullah, Michel Kieffer, Sylvaine Kerboeuf, Shohreh Ahvar, Maria Trocan | Decentralized Coalition Formation of Infrastructure Providers for Resource Provisioning in Coverage Constrained Virtualized Mobile Networks | 2026 | Vol. 23, Issue | Indium phosphide; III-V semiconductor materials; Resource management; Games; Costs; Wireless communication; Quality of service; Collaboration; Protocols; Performance evaluation; Resource provisioning; wireless virtualized networks; coverage; integer linear programming; coalition formation; hedonic approach | The concept of wireless virtualized networks enables Mobile Virtual Network Operators (MVNOs) to utilize resources made available by multiple Infrastructure Providers (InPs) to set up a service. Nevertheless, existing centralized resource provisioning approaches fail to address such a scenario due to conflicting objectives among InPs and their reluctance to share private information. This paper addresses the problem of resource provisioning from several InPs for services with geographic coverage constraints. When complete information is available, an Integer Linear Program (ILP) formulation is provided, along with a greedy solution. An alternative coalition formation approach is then proposed to build coalitions of InPs that satisfy the constraints imposed by an MVNO, while requiring only limited information sharing. The proposed solution adopts a hedonic game-theoretic approach to coalition formation. For each InP, the decision to join or leave a coalition is made in a decentralized manner, relying on the satisfaction of service requirements and on individual profit. Simulation results demonstrate the applicability and performance of the proposed solution. (An illustrative hedonic coalition-formation sketch follows this table.) | 10.1109/TNSM.2026.3663437 |
| Yuhao Chen, Jinyao Yan, Yuan Zhang, Lingjun Pu | WiLD: Learning-Based Wireless Loss Diagnosis for Congestion Control With Ultra-Low Kernel Overhead | 2026 | Vol. 23, Issue | Packet loss; Kernel; Linux; Wireless networks; Quantization (signal); Artificial neural networks; Throughput; Accuracy; Real-time systems; Computational modeling; Wireless loss diagnosis; kernel implementation; congestion control; quantization | Current congestion control algorithms (CCAs) are inefficient in wireless networks because they do not distinguish congestion losses from wireless losses. In this work, we propose a simple yet effective learning-based wireless loss diagnosis (WiLD) solution for enhancing wireless congestion control. WiLD uses a neural network (NN) to accurately distinguish between wireless packet loss and congestion packet loss. To seamlessly cooperate with rule-based CCAs and make real-time decisions, we further implement WiLD in the Linux kernel to avoid frequent kernel-user space communication. Specifically, we use a lightweight NN for inference and propose an integer quantization for WiLD deployment in various Linux versions. Real-world experiments and simulations demonstrate that WiLD can accurately differentiate wireless and congestion packet loss with negligible CPU overhead (around 1% for WiLD vs. around 100% for learning-based algorithms such as Vivace and Aurora) and fast inference time (45% less compared to TensorFlow Lite). When combined with Cubic, WiLD-Cubic can achieve around 792%, 536%, 412%, 231%, 218%, 108%, 85% and 291% throughput improvement compared with BBRv2, Cubic, Westwood, Copa, Copa+, Vivace, Aurora and Indigo in the real network environment. (An illustrative integer-quantization sketch follows this table.) | 10.1109/TNSM.2026.3664422 |
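
For the bandwidth-delay routing entry by Zhang and Cui: bandwidth is a bottleneck (max-min) metric while delay is additive, so without the segment-list constraints that make BDoSR NP-hard, the exact Pareto frontier of plain bandwidth-delay routing can be computed in polynomial time by sweeping the distinct link bandwidths and running a delay-shortest search on each pruned graph. The sketch below illustrates only that baseline bicriteria structure; it is not the paper's BDoSR-UB or BDoSR-LB label-correcting algorithm, and the adjacency-dict encoding is an assumption.

```python
import heapq

def delay_shortest(adj, src, dst, min_bw):
    """Dijkstra on delay, using only links with bandwidth >= min_bw."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, bw, delay in adj.get(u, []):
            if bw < min_bw:
                continue  # prune links below the bandwidth threshold
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

def pareto_bandwidth_delay(adj, src, dst):
    """Exact Pareto frontier of (bandwidth, delay) pairs.

    Bandwidth is a bottleneck metric, so only distinct link bandwidths
    can appear as path bandwidths; for each candidate threshold we run
    a delay-shortest search on the pruned graph and keep strict
    delay improvements as the threshold decreases.
    """
    thresholds = sorted({bw for links in adj.values() for _, bw, _ in links},
                        reverse=True)
    frontier, best_delay = [], float("inf")
    for bw in thresholds:
        d = delay_shortest(adj, src, dst, bw)
        if d is not None and d < best_delay:
            frontier.append((bw, d))
            best_delay = d
    return frontier

# adj[u] = list of (neighbor, bandwidth, delay)
adj = {
    "s": [("a", 10, 1.0), ("b", 4, 1.0)],
    "a": [("t", 10, 5.0)],
    "b": [("t", 4, 1.0)],
}
print(pareto_bandwidth_delay(adj, "s", "t"))  # [(10, 6.0), (4, 2.0)]
```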
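
The Tashman and Cherkaoui entry defends a DRL agent with reward clipping plus statistical anomaly filtering. A minimal sketch of that generic idea follows, assuming a z-score filter over a sliding window; the bounds, window size, warm-up count, and fallback-to-mean behavior are illustrative assumptions, not the paper's exact defense.

```python
from collections import deque
import math

class RewardSanitizer:
    """Clip incoming rewards and drop statistical outliers before the
    DRL agent sees them (generic defense; all parameters are assumptions)."""

    def __init__(self, r_min=-1.0, r_max=1.0, window=500, z_thresh=3.0):
        self.r_min, self.r_max = r_min, r_max
        self.history = deque(maxlen=window)
        self.z_thresh = z_thresh

    def __call__(self, reward):
        reward = max(self.r_min, min(self.r_max, reward))  # reward clipping
        if len(self.history) >= 30:  # need enough samples for stable stats
            mean = sum(self.history) / len(self.history)
            var = sum((r - mean) ** 2 for r in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-8
            if abs(reward - mean) / std > self.z_thresh:
                # flagged as likely poisoned: fall back to the running mean
                self.history.append(mean)
                return mean
        self.history.append(reward)
        return reward

# usage inside an environment loop: r = sanitize(raw_reward)
sanitize = RewardSanitizer()
print(sanitize(0.3), sanitize(50.0))  # the second reward is clipped to 1.0
```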
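
PRONTO (Mohajer et al.) encodes neighborhood influence with graph attention under top-K masking and temperature scaling. The NumPy sketch below shows only that masking-plus-temperature mechanic on a raw score matrix; the score computation itself (queries, keys, GRU-filtered features) is omitted, and the k and tau values are assumptions.

```python
import numpy as np

def topk_temperature_attention(scores, k=4, tau=0.5):
    """Row-wise attention: keep each node's k strongest neighbors and
    sharpen (tau < 1) or flatten (tau > 1) the softmax.

    scores: (N, N) raw compatibility scores between agents/nodes.
    Returns an (N, N) matrix whose rows sum to 1 over the retained
    neighbors; the top-K mask caps coordination cost at O(N * k).
    """
    n = scores.shape[0]
    k = min(k, n)
    # indices of the k largest scores in each row
    keep = np.argpartition(scores, -k, axis=1)[:, -k:]
    masked = np.full_like(scores, -np.inf)
    rows = np.arange(n)[:, None]
    masked[rows, keep] = scores[rows, keep]
    # temperature-scaled softmax over the unmasked entries
    logits = masked / tau
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)  # exp(-inf) = 0 for masked neighbors
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
s = rng.normal(size=(6, 6))
att = topk_temperature_attention(s, k=3, tau=0.5)
print(att.round(3))  # each row: 3 nonzero weights summing to 1
```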
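
MAT4PM (Zhang et al.) tunes a data-plane reporting threshold against a composite cost of monitoring error and communication overhead. A toy sketch of that trade-off follows, assuming an event-driven reporter over a synthetic counter stream and a linear weighting alpha; the paper's actual cost function and ML-predicted thresholds are not reproduced here.

```python
import numpy as np

def simulate_monitor(traffic, threshold):
    """Event-driven monitoring: the switch reports only when the counter
    drifts more than `threshold` from the last reported value."""
    last, reports, err = traffic[0], 1, 0.0
    for x in traffic[1:]:
        if abs(x - last) > threshold:
            last = x          # report: controller view is refreshed
            reports += 1
        err += abs(x - last)  # gap between true and controller view
    return err / (len(traffic) - 1), reports

def best_threshold(traffic, candidates, alpha=0.7):
    """Pick the threshold minimizing alpha * normalized monitoring error
    + (1 - alpha) * normalized report overhead (alpha is an assumed
    weight, not MAT4PM's exact composite cost)."""
    stats = [simulate_monitor(traffic, t) for t in candidates]
    errs = np.array([s[0] for s in stats], dtype=float)
    reps = np.array([s[1] for s in stats], dtype=float)
    errs /= errs.max() or 1.0
    reps /= reps.max() or 1.0
    cost = alpha * errs + (1 - alpha) * reps
    return candidates[int(cost.argmin())]

rng = np.random.default_rng(1)
traffic = np.cumsum(rng.normal(0, 5, size=2000)) + 1000  # synthetic counter
print("chosen threshold:", best_threshold(traffic, [1, 5, 10, 25, 50, 100]))
```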
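
Scotece et al. reduce LMF-induced AMF signaling with a caching system. The sketch below is a generic TTL cache in front of a positioning procedure, assuming a per-UE lookup; the `fetch` callable, TTL value, and UE identifiers are hypothetical placeholders, not the paper's implementation.

```python
import time

class LocationCache:
    """TTL cache in front of the LMF: repeated position requests for the
    same UE within `ttl` seconds are answered locally instead of
    triggering a new positioning procedure through the AMF.
    (Generic sketch; the paper's eviction policy may differ.)"""

    def __init__(self, ttl=2.0):
        self.ttl = ttl
        self.store = {}  # ue_id -> (position, timestamp)

    def get(self, ue_id, fetch):
        """fetch: callable running the full 5GC positioning procedure."""
        entry = self.store.get(ue_id)
        now = time.monotonic()
        if entry and now - entry[1] < self.ttl:
            return entry[0]          # cache hit: no extra AMF signaling
        pos = fetch(ue_id)           # cache miss: full LMF procedure
        self.store[ue_id] = (pos, now)
        return pos

cache = LocationCache(ttl=2.0)
calls = []
locate = lambda ue: calls.append(ue) or (47.37, 8.54)  # hypothetical fetch
cache.get("ue-1", locate)
cache.get("ue-1", locate)
print(len(calls))  # 1: the second request was served from the cache
```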
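
Masakuna et al. observe that a raw standard deviation across an ensemble of auto-encoders misreads large, differently scaled reconstruction errors as uncertainty. The sketch below illustrates one way to calibrate before measuring disagreement, mapping each member's errors onto its own validation empirical CDF; this is an interpretation of the idea under assumed synthetic data, not the paper's exact calibration.

```python
import numpy as np

def calibrate(per_model_val_errors, per_model_test_errors):
    """Map each AE's reconstruction error onto its own empirical CDF
    (rank within a validation set), so members with different error
    ranges become comparable before disagreement is measured."""
    calibrated = []
    for val, test in zip(per_model_val_errors, per_model_test_errors):
        val_sorted = np.sort(val)
        # empirical CDF position in [0, 1] for each test error
        calibrated.append(np.searchsorted(val_sorted, test) / len(val_sorted))
    return np.stack(calibrated)  # shape: (n_models, n_samples)

def prioritize(per_model_val_errors, per_model_test_errors):
    """Rank alerts: high mean calibrated error first, penalizing high
    cross-member std, so confidently anomalous samples come first."""
    c = calibrate(per_model_val_errors, per_model_test_errors)
    score = c.mean(axis=0) - c.std(axis=0)
    return np.argsort(-score)  # alert indices, most certain first

rng = np.random.default_rng(0)
val = [rng.gamma(2.0, s, 1000) for s in (0.5, 2.0, 8.0)]   # unequal ranges
test = [rng.gamma(2.0, s, 5) * np.array([1, 1, 1, 6, 6])   # last 2 anomalous
        for s in (0.5, 2.0, 8.0)]
print(prioritize(val, test))  # samples 3 and 4 typically rank first
```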
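
Fahimullah et al. form InP coalitions via a hedonic game in which each InP unilaterally joins or leaves coalitions based on its own payoff. A minimal sketch of such greedy hedonic dynamics follows; the utility function (coverage synergy minus a quadratic size cost) is an invented toy, and the loop stops at a Nash-stable partition or after a round cap, since greedy dynamics can cycle in general.

```python
def hedonic_partition(players, utility, max_rounds=100):
    """Greedy hedonic dynamics: each player repeatedly moves to the
    coalition (or goes solo) that maximizes its own utility; stop when
    no player wants to deviate, i.e., a Nash-stable partition.
    utility(player, coalition) is the payoff with `player` included."""
    partition = [{p} for p in players]  # start from singletons
    for _ in range(max_rounds):
        moved = False
        for p in players:
            current = next(c for c in partition if p in c)
            best, best_u = current, utility(p, current)
            for c in partition + [set()]:  # the empty set = going solo
                if c is current:
                    continue
                u = utility(p, c | {p})
                if u > best_u:
                    best, best_u = c, u
            if best is not current:
                current.remove(p)
                if best in partition:
                    best.add(p)
                else:
                    partition.append({p})   # new singleton coalition
                partition = [c for c in partition if c]
                moved = True
        if not moved:
            break
    return partition

# toy utility: own coverage + half of partners' coverage - size cost
cover = {"A": 3, "B": 2, "C": 2, "D": 1}
def u(p, coal):
    others = sum(cover[q] for q in coal if q != p)
    return cover[p] + 0.5 * others - 0.4 * (len(coal) - 1) ** 2

print(hedonic_partition(list(cover), u))  # [{'A','B','C'}, {'D'}]
```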
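
WiLD (Chen et al.) deploys its neural network inside the Linux kernel via integer quantization, since kernel code cannot rely on floating point. The sketch below shows a generic symmetric int8 scheme with int32 accumulation; the per-tensor scaling and layer shapes are assumptions, not WiLD's exact quantizer.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q.
    (Generic scheme, assumed here; not WiLD's exact quantizer.)"""
    scale = np.abs(w).max() / 127.0 or 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int_relu_layer(x_q, w_q, x_scale, w_scale):
    """Integer matmul in int32 accumulators followed by ReLU; the result
    stays in the integer domain with a combined output scale."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T
    return np.maximum(acc, 0), x_scale * w_scale  # ReLU commutes with scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.2, size=(8, 16)).astype(np.float32)   # layer weights
x = rng.normal(0, 1.0, size=(1, 16)).astype(np.float32)   # input features

w_q, w_s = quantize_int8(w)
x_q, x_s = quantize_int8(x)
y_int, y_s = int_relu_layer(x_q, w_q, x_s, w_s)
y_float = np.maximum(x @ w.T, 0)                          # float reference
print(np.max(np.abs(y_int * y_s - y_float)))              # small error
```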