Last updated: 2026-03-06 05:01 UTC
All documents
Number of pages: 158
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Woojin Jeon, Donghyun Yu, Ruei-Hau Hsu, Jemin Lee | Secure Data Sharing Framework with Fine-grained Access Control and Privacy Protection for IoT Data Marketplace | 2026 | Early Access | Internet of Things Encryption Access control Data privacy Protocols Authentication Protection Vectors Scalability Privacy IoT data marketplace fine-grained access control attributes privacy outsourcing encryption match test | The proliferation of IoT devices has led to an exponential increase in data generation, creating new opportunities for data marketplaces. However, due to the security and privacy issues arising from the sensitive nature of IoT data, as well as the need for efficient management of vast amounts of IoT data, a robust solution is necessary. Therefore, this paper proposes a secure data sharing framework with fine-grained access control and privacy protection for the internet of things (IoT) data marketplace. For fine-grained access control of the data in the proposed protocol, we develop the hidden attributes and encryption outsourced key-policy attribute-based encryption (HAEO-KP-ABE) scheme, which outsources highly complex operations to high-capability peripheral devices to reduce the computational burden on IoT devices. It achieves data privacy by hiding attributes in the ciphertext and by preventing entities that do not hold the data consumer’s secret key material (including SA/CS) from running the match test on stored ciphertexts before decryption. It also has an efficient match test algorithm that can verify that the hidden attributes of the ciphertext match the access policy of the data consumer’s private key without revealing those attributes. We demonstrate that the proposed protocol satisfies the security features required for the data sharing process in an IoT data marketplace environment. Furthermore, we evaluate the execution time of the proposed protocol according to the number of attributes and show its practicality and efficiency compared to related work. | 10.1109/TNSM.2026.3670207 |
| Ghofran Khalaf, May Itani, Sanaa Sharafeddine | A UAV-Aided Digital Twin Framework for IoT Networks with High Accuracy and Synchronization | 2026 | Early Access | | Digital Twin (DT) technology has emerged as a promising link between the physical and virtual worlds, enabling simulation, prediction, and real-time performance optimization in different domains. In this work we develop a high-fidelity digital twin framework, focusing on synchronization and accuracy between physical and digital systems to enhance data-driven decision making. To achieve this, we deploy several stationary UAVs in optimized locations to collect data from IoT devices, which are used to monitor multiple physical entities and perform computations to evaluate their status. We formulate a mixed-integer non-convex program to maximize the total amount of data collected from all IoT devices while ensuring a constrained age of digital twin (AoDT) threshold and solve it using successive convex approximation (SCA). To cope with realistic scenarios involving unpredictable environments and large network sizes, we model our problem as a Markov Decision Process (MDP), and propose a deep reinforcement learning-based approach using a Twin Delayed Deep Deterministic Policy Gradient (TD3) to optimize the unmanned aerial vehicle positions and the sum rate. Finally, we present different simulation results of the SCA- and TD3-based solutions together with two baseline approaches and evaluate the sum rate in terms of IoT device count, AoDT threshold, task arrival rate and UAVs’ computational capacity. In all simulation results, the proposed TD3-based approach consistently proved superior to the baseline solutions. | 10.1109/TNSM.2026.3670040 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Jiazhong Lu, Jimin Peng, Jian Shu, Jiali Yin, Xiaolei Liu | Adversarial Sample Based on Structured Fusion Noise for Botnet Detection in Industrial Control Systems | 2026 | Early Access | Botnet Industrial control Feature extraction Intrusion detection Integrated circuit modeling Time-domain analysis Internet of Things Frequency-domain analysis Biological system modeling Training Adversarial sample botnet industrial control system fusion noise | Artificial intelligence-based botnet intrusion detection systems for industrial control systems (ICSs) achieve high detection performance and efficiency in interference-free environments. However, these systems are not immune to evasion through adversarial samples. In this study, we introduce a feature extraction technique tailored for ICS botnet detection. This approach classifies traffic packets based on network traffic attributes and ICS-specific identification codes that encompass the statuses of ICS devices, enhancing detection precision. This strategy also addresses challenges in ICS data collection and bolsters experimental efficacy. To build a comprehensive botnet intrusion dataset within an ICS, we concurrently utilized existing ICS devices to collect both standard ICS and botnet traffic. Additionally, we present an innovative adversarial sample generation method for botnet detection models, integrating both time-domain and frequency-domain noise. Testing under three real-world ICS attack scenarios revealed that our technique can markedly degrade the classification performance of eight leading AI-based detection models, emphasizing its potential for evading AI-based ICS intrusion detectors. | 10.1109/TNSM.2026.3665504 |
| Tuan-Vu Truong, Van-Dinh Nguyen, Quang-Trung Luu, Phi-Son Vo, Xuan-Phu Nguyen, Fatemeh Kavehmadavani, Symeon Chatzinotas | Accelerating Resource Allocation in Open RAN Slicing via Deep Reinforcement Learning | 2026 | Early Access | Resource management Open RAN Ultra reliable low latency communication Real-time systems Computational modeling Optimization Deep reinforcement learning Costs Complexity theory Bandwidth Open radio access network network slicing virtual network function resource allocation deep reinforcement learning successive convex approximation | The transition to beyond-fifth-generation (B5G) wireless systems has revolutionized cellular networks, driving unprecedented demand for high-bandwidth, ultra low-latency, and massive connectivity services. The open radio access network (Open RAN) and network slicing provide B5G with greater flexibility and efficiency by enabling tailored virtual networks on shared infrastructure. However, managing resource allocation in these frameworks has become increasingly complex. This paper addresses the challenge of optimizing resource allocation across virtual network functions (VNFs) and network slices, aiming to maximize the total reward for admitted slices while minimizing associated costs. By adhering to the Open RAN architecture, we decompose the formulated problem into two subproblems solved at different timescales. Initially, the successive convex approximation (SCA) method is employed to achieve at least a locally optimal solution. To handle the high complexity of binary variables and adapt to time-varying network conditions, traffic patterns, and service demands, we propose a deep reinforcement learning (DRL) approach for real-time and autonomous optimization of resource allocation. Extensive simulations demonstrate that the DRL framework quickly adapts to evolving network environments, significantly improving slicing performance. The results highlight DRL’s potential to enhance resource allocation in future wireless networks, paving the way for smarter, self-optimizing systems capable of meeting the diverse requirements of modern communication services. | 10.1109/TNSM.2026.3665553 |
| Adel Chehade, Edoardo Ragusa, Paolo Gastaldo, Rodolfo Zunino | Hardware-Aware Neural Architecture Search for Encrypted Traffic Classification on Resource-Constrained Devices | 2026 | Early Access | Accuracy Computational modeling Cryptography Feature extraction Hardware Convolutional neural networks Artificial neural networks Real-time systems Long short term memory Internet of Things Deep neural networks encrypted traffic classification hardware-aware neural architecture search Internet of Things resource-constrained devices | This paper presents a hardware-efficient deep neural network (DNN), optimized through hardware-aware neural architecture search (HW-NAS); the DNN supports the classification of session-level encrypted traffic on resource-constrained Internet of Things (IoT) and edge devices. Thanks to HW-NAS, a 1D convolutional neural network (CNN) is tailored on the ISCX VPN-nonVPN dataset to meet strict memory and computational limits while achieving robust performance. The optimized model attains an accuracy of 96.60% with just 88.26K parameters, 10.08M floating-point operations (FLOPs), and a maximum tensor size of 20.12K. Compared to state-of-the-art (SOTA) models, it achieves reductions of up to 444-fold, 312-fold, and 15-fold in these metrics, respectively, significantly minimizing memory footprint and runtime requirements. The model also demonstrates versatility, achieving up to 99.86% across multiple VPN and traffic classification (TC) tasks; it further generalizes to external benchmarks with up to 99.98% accuracy on USTC-TFC and QUIC NetFlow. In addition, an in-depth analysis of header-level preprocessing strategies confirms that the optimized model can provide notable performance across a wide range of configurations, even in scenarios with stricter privacy considerations. Likewise, reducing session length by up to 75% yields significant efficiency improvements while maintaining high accuracy, with only a negligible 1-2% drop. Careful preprocessing and session-length selection remain important when classifying raw traffic data, however, as improper settings or aggressive reductions can cause a 7% reduction in overall accuracy. The quantized architecture was deployed on STM32 microcontrollers and evaluated across input sizes; results confirm that the efficiency gains from shorter sessions translate to practical, low-latency embedded inference. These findings demonstrate the method’s practicality for encrypted traffic analysis in constrained IoT networks. | 10.1109/TNSM.2026.3666676 |
| Pengcheng Guo, Zhi Lin, Haotong Cao, Yifu Sun, Kuljeet Kaur, Sherif Moussa | GAN-Empowered Parasitic Covert Communication: Data Privacy in Next-Generation Networks | 2026 | Early Access | Interference Generators Generative adversarial networks Blind source separation Electronic mail Training Receivers Noise Image reconstruction Hardware Artificial intelligence blind source separation covert communication generative adversarial network | The widespread integration of artificial intelligence (AI) in next-generation communication networks poses a serious threat to data privacy while achieving advanced signal processing. Eavesdroppers can use AI-based analysis to detect and reconstruct transmitted signals, leading to serious leakage of confidential information. In order to protect data privacy at the physical layer, we redefine covert communication as an active data protection mechanism. We propose a new parasitic covert communication framework in which communication signals are embedded into dynamically generated interference by generative adversarial networks (GANs). This method is implemented by our CDGUBSS (complex double generator unsupervised blind source separation) system. The system is explicitly designed to prevent unauthorized AI-based strategies from analyzing and compromising signals. For the intended recipient, the pretrained generator acts as a trusted key and can perfectly recover the original data. Extensive experiments have shown that our framework achieves powerful covert communication, and more importantly, it provides strong defense against data reconstruction attacks, ensuring excellent data privacy in next-generation wireless systems. | 10.1109/TNSM.2026.3666669 |
| Fernando Martinez-Lopez, Lesther Santana, Mohamed Rahouti, Abdellah Chehri, Shawqi Al-Maliki, Gwanggil Jeon | Learning in Multiple Spaces: Prototypical Few-Shot Learning with Metric Fusion for Next-Generation Network Security | 2026 | Early Access | Measurement Prototypes Extraterrestrial measurements Training Chebyshev approximation Metalearning Scalability Next generation networking Learning (artificial intelligence) Data models Few-Shot Learning Network Intrusion Detection Metric-Based Learning Multi-Space Prototypical Learning | As next-generation communication networks increasingly rely on AI-driven automation, ensuring robust and secure intrusion detection becomes critical, especially under limited labeled data. In this context, we introduce Multi-Space Prototypical Learning (MSPL), a few-shot intrusion detection framework that improves prototype-based classification by fusing complementary metric-induced spaces (Euclidean, Cosine, Chebyshev, and Wasserstein) via a constrained weighting mechanism. MSPL further enhances stability through Polyak-averaged prototype generation and balanced episodic training to mitigate class imbalance across diverse attack categories. In a few-shot setting with as few as 200 training samples, MSPL consistently outperforms single-metric baselines across three benchmarks: on CICEVSE Network2024, AUPRC improves from 0.3719 to 0.7324 and F1 increases from 0.4194 to 0.8502; on CICIDS2017, AUPRC improves from 0.4319 to 0.4799; and on CICIoV2024, AUPRC improves from 0.5881 to 0.6144. These results demonstrate that multi-space metric fusion yields more discriminative and robust representations for detecting rare and emerging attacks in intelligent network environments. | 10.1109/TNSM.2026.3665647 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Devanshu Anand, Gabriel-Miro Muntean | Mitigating Interferences in 5G O-RAN HetNets through ML-driven xAPP to Enhance Users’ QoS | 2026 | Early Access | Interference 5G mobile communication Quality of service Throughput Base stations User experience Resource management Prevention and mitigation Spectral efficiency Signal to noise ratio HetNets Interference Machine Learning | In today’s rapidly evolving telecommunications landscape, the demand for seamless connectivity and top-tier network performance has reached unprecedented levels. Traditional cellular systems now struggle under the weight of spiraling data demands, spectrum scarcity, and power inefficiency. The era of ultra-dense mobile networks, with Heterogeneous Networks (HetNets) at the forefront, ushers in improved throughput, spectral efficiency, and energy management. To tackle these challenges, this paper introduces MLCIMO (Machine Learning-enhanced Classification for Interference Management and Offloading) into 5G HetNets. MLCIMO employs a multi-binary classification strategy to categorize users based on interference types and levels. It also introduces an offloading scheme tailored to user service priorities, enhancing the user quality of experience while conserving energy. It seamlessly aligns with the evolving needs of HetNets, addressing some of the issues introduced by small cell deployments. Simulation results show that MLCIMO achieves the highest throughput, shortest delay, and lowest packet loss ratio in comparison with alternative approaches. In a comprehensive analysis, the varying degrees of interference encountered by users under different schemes are unveiled, further establishing MLCIMO’s distinguished position in mitigating interference. | 10.1109/TNSM.2026.3667462 |
| Yanxu Lin, Renzhong Zhong, Jingnan Xie, Yueting Zhu, Byung-Gyu Kim, Saru Kumari, Shakila Basheer, Fatimah Alhayan | Privacy-Preserving Digital Publishing Framework for Next-Generation Communication Networks: A Verifiable Homomorphic Federated Learning Approach | 2026 | Early Access | Electronic publishing Federated learning Cryptography Communication networks Homomorphic encryption Next generation networking Complexity theory Protocols Privacy Optimization Digital publishing federated learning next-generation communication networks Chinese remainder theorem | Next-generation communication networks are revolutionizing digital publishing through intelligent content distribution and collaborative optimization capabilities. However, existing federated learning approaches face fundamental limitations, including trusted third-party dependencies, excessive communication overhead, and vulnerability to collusion attacks between servers and participants. This paper introduces VHFL-DP, a verifiable homomorphic federated learning framework for digital publishing environments operating within 6G network infrastructures. The framework addresses critical privacy and scalability challenges through four key innovations: a distributed cryptographic key generation protocol that eliminates trusted third-party requirements, Chinese remainder theorem-based dimensionality reduction, auxiliary validation nodes that enable independent verification with constant-time complexity, and an intelligent incentive mechanism that rewards digital publishing platforms based on objective contribution quality metrics. Experimental evaluation on MNIST and Amazon reviews datasets across six baseline methods demonstrates that VHFL-DP achieves superior performance with accuracy improvements of 4.2% over the best baseline method. The framework maintains a near-constant verification time of 2.73 to 2.91 seconds regardless of platform count (from ten to fifty) or dropout rates of up to thirty percent. Security evaluation reveals strong resilience, with only 2.4 percentage points of accuracy degradation under poisoning attacks compared to 6.7-7.0 points for baseline methods, inference attack success near random guessing at 51.3%, and 92.4% successful aggregation under Byzantine adversaries. | 10.1109/TNSM.2026.3667167 |
| Mahnoor Sajid, Mohib Ullah Khan, KyungHi Chang | Intelligent Xn-Based Energy Aware Handover Optimization in 5G Networks via NWDAF-Orchestrated Agent Framework | 2026 | Early Access | Handover 5G mobile communication Quality of service Energy efficiency Energy consumption Optimization Reliability Computer architecture Energy conservation Automation Energy-aware handover NWDAF gNB energy efficiency multi-agent control 5G mobility management | Energy-aware mobility management in dense Fifth Generation (5G) networks is increasingly challenged by frequent handovers and unnecessary activation of sleeping next-generation NodeBs (gNBs), which lead to excessive energy consumption and degraded mobility reliability. Conventional Xn-based handover schemes rely on static radio thresholds and implicitly assume always-active gNBs, while existing NWDAF-enabled approaches improve stability but do not explicitly account for energy costs during handover execution. To address these limitations, this paper proposes the Intelligent Energy-Aware Handover Framework (IEAHF), a NWDAF-orchestrated architecture that integrates coordinated agent-based control for energy-aware mobility management. The proposed framework introduces an Energy-Aware Handover Optimization Agent (EA-HOA) to guide reliability-driven handover decisions and a Handover Energy Evaluation Agent (HEEA) to assess the energy impact of candidate handover actions, with both agents operating within a closed-loop control process enforced through Operations, Administration, and Maintenance (OAM). By reformulating handover success to incorporate energy inefficiency and enabling per-handover gNB energy reasoning at decision time, IEAHF jointly optimizes service continuity and energy efficiency. System-level simulations demonstrate that the proposed Agent-NWDAF configuration consistently outperforms baseline and analytics-only schemes, achieving tightly concentrated Energy-Aware Handover Success Rates of approximately 0.98, reducing the Energy-Aware Handover Failure Rate to the range of 0.007–0.015, and delivering up to a 32% reduction in average gNB power consumption relative to an always-on baseline. These results indicate that IEAHF provides a scalable and effective solution for energy-efficient mobility management in 5G networks and establishes a foundation for energy-aware handover control in future Sixth Generation (6G) systems. | 10.1109/TNSM.2026.3668238 |
| Honghao Gao, Qionghuizi Ran, Ye Wang, Yueshen Xu | BiTrustChain: A Dual-Blockchain Empowered Dynamic Vehicle Trust Management for Malicious Detection in IoV | 2026 | Early Access | Blockchains Vehicle dynamics Trust management Real-time systems Synchronization Scalability Data models Bayes methods Reliability Computer architecture IoV Blockchain Reputation Calculation Dual-Chain Architecture Bayesian Model | The rapid development of the Internet of Vehicles (IoV) has accelerated technological progress, but several critical security challenges remain, especially in the context of vehicle trust management. Two representative issues are malicious nodes and unreliable information transmission. To address these problems, we propose BiTrustChain, a dual-layer blockchain framework designed to enhance security and trust management in IoV environments. First, it consists of two innovative data chains: a Behavior Data Chain (BDC) and a Reputation Evaluation Chain (REC). The BDC records vehicle interaction data, whereas the REC stores and updates the trust values in real time. Second, within this framework, we develop a Multifactor Bayesian Reputation (MFBR) model that enables quantitative evaluation of node trustworthiness. It integrates a time-decay function and a penalty mechanism to regulate reputation evolution. The trust values decrease after malicious behaviors and recover through continuous normal interactions. In addition, we propose a dynamic local whitelist for indirect reputation evaluation. It filters out untrustworthy nodes and ensures that only reliable nodes remain. The filtered indirect trust is then combined with direct trust to produce a comprehensive reputation score. Third, we design a new set of event-driven smart contracts to synchronize the BDC and REC in real time and ensure secure and efficient data exchange. Finally, we performed experiments on the evaluation platform SUMO/NS-3, and the results show that our method identifies malicious nodes with higher accuracy. In particular, the framework achieves 1.5× higher throughput and reduces latency by 40% compared to the baseline single-chain system. The framework also enhances interaction data integrity and improves robustness against adversarial reputation manipulation. | 10.1109/TNSM.2026.3670385 |
| Zhaoping Li, Mingshu He, Xiaojuan Wang | HKD-Net: Hierarchical Knowledge Distillation Based on Multi-Domain Feature Fusion for Efficient Network Intrusion Detection | 2026 | Early Access | Feature extraction Telecommunication traffic Knowledge engineering Accuracy Deep learning Anomaly detection Adaptation models Network intrusion detection Knowledge transfer Convolutional neural networks Network traffic anomaly detection Knowledge distillation Multi-domain feature Deep learning Network intrusion detection | We propose HKD-Net, a hierarchical knowledge distillation network based on multi-domain feature fusion, for efficient network intrusion detection on resource-constrained edge devices. The framework incorporates dedicated feature extraction modules across temporal, frequency, and spatial domains, and introduces a dynamic gating mechanism for adaptive feature fusion, resulting in a more discriminative and comprehensive feature representation. Moreover, a hierarchical distillation mechanism is designed that not only preserves soft labels from the output layer but also aligns intermediate features from spatial, temporal, frequency, and fused domains, enabling efficient knowledge transfer from a large teacher model to a compact student model. Through knowledge distillation, the final lightweight model requires only 278,580 parameters, reducing the number of parameters by approximately 74.68% compared to the teacher, while maintaining high detection accuracy. Extensive experiments on three public datasets (Kitsune, CIRA-CIC-DoHBrw2020, and CICIoT2023) demonstrate that HKD-Net outperforms five state-of-the-art methods, achieving accuracies of 96.72%, 97.19%, and 87.19%, respectively, while reducing parameters by 74.68% and maintaining low computational cost. | 10.1109/TNSM.2026.3668812 |
| Vaishnavi Kasuluru, Luis Blanco, Cristian J. Vaca-Rubio, Engin Zeydan, Albert Bel | AI-Empowered Multivariate Probabilistic Forecasting: A Key Enabler for Sustainability in Open RAN | 2026 | Early Access | Open RAN Forecasting Probabilistic logic Switches Resource management Telecommunication traffic Sustainable development Predictive models Power demand Energy consumption Sustainability Open RAN 6G Probabilistic Forecasting Network Analytics Artificial Intelligence | This paper explores the role of multivariate probabilistic forecasting in improving O-RAN operations, focusing on network sustainability aspects. A comprehensive analysis of its potential benefits and challenges, as well as its integration into the O-RAN architecture, is described. The paper first presents an overview of the O-RAN architecture and components, followed by an examination of power consumption models relevant to O-RAN deployments and the challenges associated with traditional deterministic models in resource allocation. We then examine the performance of several state-of-the-art probabilistic multivariate forecasting techniques, namely Gaussian Process Vector Autoregression (GPVAR) and the Temporal Fusion Transformer (TFT), as well as a non-probabilistic multivariate technique, multivariate Long Short-Term Memory (LSTM), and explain their implementation details and provide their evaluations. The simulation results show the effectiveness of these techniques in predicting Physical Resource Block (PRB) utilization and optimizing resource allocation. In particular, significant energy savings of around 20-30% are achieved, depending on the percentile used by the probabilistic forecasting techniques. The benefits of probabilistic forecasting techniques compared to multivariate LSTM are also analyzed. Our results emphasize the potential of probabilistic forecasting to improve energy efficiency and sustainability in O-RAN operations. | 10.1109/TNSM.2026.3669847 |
| Zewei Han, Go Hasegawa | BBR-ES: An Extended-State Optimization for BBR Congestion Control | 2026 | Early Access | Delays Bandwidth Internet Heuristic algorithms Videos Throughput Taxonomy Reviews Market research Proposals Congestion control algorithm Bottleneck Bandwidth and Round-trip propagation time (BBR) Throughput fairness Round Trip Time (RTT) | In recent years, many optimization proposals for TCP BBR have been introduced, but most rely mainly on delay variations and do not fully resolve BBR’s limitations in RTT fairness, link utilization, and delay control in networks. This paper proposes BBR with Extended State (BBR-ES), which extends BBR’s state machine with a short stabilization state and a trend-based transition mechanism that react to per-flow bandwidth and RTT evolution instead of global delay alone. BBR-ES uses lightweight bandwidth and RTT trend tracking to adjust its sending rate while preserving BBR’s model-based design. Experiments on both emulated (Mininet) and real-world Internet paths (Amazon EC2) show that BBR-ES consistently improves RTT fairness and link utilization over BBRv1, BBRv3, and CUBIC while keeping queuing delay moderate and bounded; in most settings, it achieves Jain’s fairness index above 0.9 and link utilization above 98%. These results indicate that BBR-ES is a practical candidate for deployment in large-scale content delivery and a useful design reference for future model-based congestion control schemes. | 10.1109/TNSM.2026.3668966 |
| Wenjing Jing, Quan Zheng, Siwei Peng, Shuangwu Chen, Xiaobin Tan, Jian Yang | Equivalent Characteristic Time Approximation Based Network Planning for Cache-enabled Networks | 2026 | Early Access | Planning Resource management Costs Estimation Bandwidth Optimization Measurement Servers Investment Web and internet services Cache-enabled Network Cache Capacity Bandwidth Resources Estimation Network Planning | The exponential surge in network traffic has imposed significant challenges on traditional Internet architectures, resulting in high latency and redundant transmissions. Cache-enabled networks alleviate these issues by deploying content closer to end-users, making the planning of such networks a research focus. However, regional heterogeneity in user demand and caching interdependencies among hierarchical nodes complicate the planning process. Most existing approaches rely on simplistic uniform allocation or empirical methods, which fail to simultaneously meet user performance expectations and minimize deployment costs. This paper proposes a network planning framework based on the Equivalent Characteristic Time Approximation (ECTA). The approach begins by establishing a performance–resource mapping. Using ECTA, we decouple the tightly coupled characteristic time relationships across hierarchical nodes, thereby accurately estimating the cache capacity and bandwidth required to achieve user performance targets. Building on this foundation, we formulate network planning as a constrained convex optimization problem that minimizes deployment cost while satisfying user performance constraints. We conducted extensive experiments on a large-scale simulation platform (ndnSIM) and a real-world cache-enabled network testbed (CENI-HeFei). The results demonstrate that, under identical network topologies and total resource constraints, our method significantly improves cache hit probability while reducing deployment costs compared to homogeneous resource allocation schemes. This work provides a practical theoretical foundation and valuable insights for the design, deployment, and optimization of future cache-enabled networks. | 10.1109/TNSM.2026.3670399 |
| Chengwei Liao, Guofeng Yan, Hengliang Tan, Jiao Du, Xia Deng, Heng Wu | jTOLP-MADRL: A MADRL-based Joint Optimization Algorithm of Task Offloading Location and Proportion for Latency-sensitive Tasks in Vehicle Edge Computing Network | 2026 | Early Access | Servers Resource management Edge computing Optimization Quality of service Deep reinforcement learning Computer science Computational modeling TV Simulation Task Offloading Deep Reinforcement Learning Vehicular Edge Computing Quality of Service | In Vehicle Edge Computing Network (VECN), task offloading is a key technique to provide satisfactory quality of service (QoS) for latency-sensitive tasks. However, the diversity of computational resources in edge nodes (i.e., RSUs and idle vehicles) and the mobility of vehicles present significant challenges to task offloading. Hence, to address these challenges, this paper proposes an offloading scheme that jointly allocates RSU nodes (including MEC servers) and idle service-vehicle resources. We first prioritize tasks based on their maximum tolerable latency and design a utility function to capture the execution cost for latency-sensitive tasks. Then, we propose a joint optimization algorithm of task offloading location and proportion based on Multi-agent Deep Reinforcement Learning (jTOLP-MADRL algorithm) for latency-sensitive tasks in VECN, which consists of two sub-algorithms: the Offloading Location Selection (OLS) algorithm and the Offloading Proportion Allocation (OPA) algorithm. Additionally, we design a Convolutional Recurrent Actor-Critic Network (CRACN) to enhance the learning efficiency of the OLS algorithm. Finally, simulation results indicate that our algorithm is effective. Compared with the other benchmark algorithms, jTOLP-MADRL can significantly reduce latency and enhance system utility. | 10.1109/TNSM.2026.3669913 |
| Mengmeng Sun, Zeyu Tan, Dianlong You, Zhen Chen | PCNet: A Personalized Complementary Network via Tensor Decomposition for Service Recommendation | 2026 | Early Access | Mashups Tensors Collaborative filtering Web sites Video on demand Artificial intelligence Semantics Reviews Cloud computing Software development management Web service Complementarity Tensor Decomposition Personalized Recommendation Mashup | Web services are widely utilized across domains such as cloud computing, mobile networks, and Web applications. Due to their single-function nature, these services are often composed into Mashups to achieve more comprehensive functionality. However, the rapid growth in the number and variety of Web services has made it increasingly difficult to identify suitable services for Mashup development. Web service recommendation systems have emerged as a solution to this service overload, supporting innovative practices within the service-oriented development paradigm. While existing methods emphasize recommendation accuracy and relevance, few approaches simultaneously consider the personalized requirements of the Mashup side and the complementary relationships on the service side, both of which are essential for reconstructing the Web service ecosystem’s value chain. To address this gap, we propose PCNet, a Personalized Complementary Network for service recommendation based on tensor decomposition. We conceptualize the interaction dynamics between Mashups and services, as well as coinvocation patterns among services, using a three-dimensional tensor. The RESCAL tensor decomposition technique is then applied to jointly learn these relationships and uncover personalized complementary relationships among services. In addition, we develop a complementary perception module that uses an attention mechanism to dynamically model a Mashup’s focus on different complementary relationships, extending them to higher orders. Experimental results on real-world Web service datasets demonstrate that PCNet significantly outperforms state-of-the-art baselines. The implementation of PCNet is publicly available at: https://github.com/MengMeng3399/PCNet. | 10.1109/TNSM.2026.3669613 |
| Wenxuan Li, Yu Yao, Ni Zhang, Chuan Sheng, Ziyong Ran, Wei Yang | IMADP: Imputation-based Anomaly Detection in SCADA Systems via Adversarial Diffusion Process | 2026 | Early Access | Anomaly detection Adaptation models Data models Training SCADA systems Transformers Diffusion models Monitoring Robustness Roads SCADA Multi-sensor Anomaly Detection Imputation-based Conditional Diffusion | As the adversarial confrontation in industrial cybersecurity escalates, the multi-dimensional variables measured by SCADA multi-sensors are critical for assessing security risks in industrial field devices. While Deep Learning (DL) methods based on generative models have demonstrated effectiveness, the impact of missing features in samples and of temporal window size on modeling and detection has been consistently overlooked. To address these challenges, this work proposes the IMADP framework, which jointly solves the two tasks of missing-data imputation and anomaly detection. Firstly, a Window-based Adaptive Selection Strategy (WASS) is designed to intelligently window samples, reducing reliance on prior settings. Secondly, an imputer is constructed under WASS to restore sample integrity, implemented by a fully-connected network centered on Neural Controlled Differential Equations (NCDEs). Thirdly, an adversarial diffusion detection model with a variant Transformer as the inverse solver is proposed. Additionally, an Adaptive Dynamic Mask Mechanism (ADMM) is introduced to bolster the model’s comprehension of inter-dependencies between time and sensor nodes. Simultaneously, adversarial training is introduced to reduce the training and detection latency caused by the excessive diffusion step size of the native conditional diffusion process. The experimental results validate that the proposed framework can build detectors from training samples with missing values, and its overall detection performance, tested across six datasets, is superior to existing methods. | 10.1109/TNSM.2026.3670062 |
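
The MSPL entry above (Martinez-Lopez et al.) fuses Euclidean, Cosine, Chebyshev, and Wasserstein distances to class prototypes through a constrained weighting mechanism and stabilizes prototypes with Polyak averaging. Below is a minimal NumPy sketch of that idea, assuming softmax-normalized fusion weights and an exponential-moving-average prototype update; the paper's actual weight learning and balanced episodic training loop are not reproduced.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def distances(x, protos):
    """Per-metric distances from one embedding x to each class prototype."""
    eucl = np.linalg.norm(protos - x, axis=1)
    cos = 1.0 - (protos @ x) / (np.linalg.norm(protos, axis=1) * np.linalg.norm(x) + 1e-12)
    cheb = np.max(np.abs(protos - x), axis=1)
    wass = np.array([wasserstein_distance(x, p) for p in protos])
    return np.stack([eucl, cos, cheb, wass])          # shape (4, n_classes)

def fused_scores(x, protos, w_logits):
    """Simplex-constrained fusion: softmax keeps weights positive, summing to 1."""
    w = np.exp(w_logits) / np.exp(w_logits).sum()
    return w @ distances(x, protos)                    # one fused distance per class

def polyak_update(protos, episode_protos, tau=0.95):
    """Polyak (EMA) averaging stabilizes prototypes across episodes (tau assumed)."""
    return tau * protos + (1.0 - tau) * episode_protos

# toy usage: 3 classes, 8-dimensional embeddings
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))
x = rng.normal(size=8)
w_logits = np.zeros(4)                                 # start from equal weights
pred = int(np.argmin(fused_scores(x, protos, w_logits)))
```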
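
VHFL-DP (Lin et al., above) lists Chinese remainder theorem-based dimensionality reduction among its innovations. Below is a minimal sketch of the underlying CRT packing trick in plain Python, assuming quantized non-negative update components and pairwise-coprime moduli; how the packed integers interact with the paper's homomorphic encryption layer and verification protocol is not shown.

```python
from math import prod

def crt_pack(values, moduli):
    """Combine residues (values[i] mod moduli[i]) into one integer via the
    Chinese remainder theorem; moduli must be pairwise coprime and each
    value must be smaller than its modulus."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)   # pow(., -1, m): modular inverse (Python 3.8+)
    return x % M

def crt_unpack(x, moduli):
    """Recover each component as x mod m_i."""
    return [x % m for m in moduli]

# pack four quantized gradient components into a single integer
moduli = [257, 263, 269, 271]          # pairwise coprime (assumed choice)
vals = [12, 200, 77, 5]
packed = crt_pack(vals, moduli)
assert crt_unpack(packed, moduli) == vals
```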
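
The MFBR model in BiTrustChain (Gao et al., above) combines Bayesian reputation with a time-decay function and a penalty mechanism, so trust drops sharply after malicious behavior and recovers through continued normal interactions. Below is a minimal sketch using a Beta-distribution reputation; the decay rate, penalty weight, and the fusion with whitelist-filtered indirect trust are illustrative assumptions, not the paper's parameters.

```python
import math

class BetaReputation:
    """Beta-Bayesian trust: E[trust] = a / (a + b), with pseudo-counts
    a (positive evidence) and b (negative evidence)."""
    def __init__(self, a=1.0, b=1.0, decay=0.05, penalty=3.0):
        self.a, self.b = a, b
        self.decay = decay      # time-decay rate per unit time (assumed)
        self.penalty = penalty  # extra weight on malicious evidence (assumed)

    def age(self, dt):
        """Old evidence fades so the score tracks recent behavior."""
        f = math.exp(-self.decay * dt)
        self.a *= f
        self.b *= f

    def observe(self, positive, dt=1.0):
        self.age(dt)
        if positive:
            self.a += 1.0
        else:
            self.b += self.penalty   # penalty mechanism: misbehavior costs more

    @property
    def trust(self):
        return self.a / (self.a + self.b)

node = BetaReputation()
for _ in range(10):
    node.observe(True)          # normal interactions build trust
t_before = node.trust
node.observe(False)             # one malicious report drops it sharply
assert node.trust < t_before
```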
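
HKD-Net (Li et al., above) distills both output-layer soft labels and intermediate features from the spatial, temporal, frequency, and fused domains into a compact student. Below is a minimal NumPy sketch of such a two-level distillation loss, assuming temperature-scaled KL divergence for the logits and mean-squared error for intermediate features already projected to matching shapes; the temperature and loss weights are placeholders.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_soft_loss(t_logits, s_logits, T=4.0):
    """Temperature-scaled KL(teacher || student) on output-layer soft labels."""
    p, q = softmax(t_logits, T), softmax(s_logits, T)
    return (T * T) * np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def feature_align_loss(t_feats, s_feats):
    """MSE alignment over per-domain intermediate features
    (spatial, temporal, frequency, fused)."""
    return sum(np.mean((t - s) ** 2) for t, s in zip(t_feats, s_feats)) / len(t_feats)

def hierarchical_kd_loss(t_logits, s_logits, t_feats, s_feats, alpha=0.5, beta=0.5):
    # alpha/beta trade off soft-label vs. feature distillation (assumed values)
    return alpha * kd_soft_loss(t_logits, s_logits) + beta * feature_align_loss(t_feats, s_feats)

# toy usage: batch of 2, 3 classes, four feature domains
rng = np.random.default_rng(0)
t_logits, s_logits = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
t_feats = [rng.normal(size=(2, 16)) for _ in range(4)]
s_feats = [f + 0.1 * rng.normal(size=f.shape) for f in t_feats]
loss = hierarchical_kd_loss(t_logits, s_logits, t_feats, s_feats)
```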
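
The O-RAN forecasting entry above (Kasuluru et al.) reports energy savings that depend on the percentile of the probabilistic forecast used for provisioning. Below is a minimal sketch of that percentile-provisioning step, assuming the forecaster emits Monte Carlo sample paths of PRB utilization and that capacity is provisioned at a chosen quantile instead of the worst-case peak; the linear power model and the synthetic gamma-distributed forecasts are placeholders, not the paper's models.

```python
import numpy as np

def provision_prbs(forecast_samples, percentile=90):
    """Provision PRBs at a forecast quantile instead of the worst case.
    forecast_samples: (n_samples, horizon) Monte Carlo PRB-utilization paths."""
    return np.percentile(forecast_samples, percentile, axis=0)

def power_draw(prbs, p_idle=100.0, p_per_prb=2.0):
    """Toy linear power model (placeholder, not the paper's model)."""
    return p_idle + p_per_prb * prbs

rng = np.random.default_rng(1)
samples = rng.gamma(shape=4.0, scale=10.0, size=(500, 24))  # synthetic forecasts
peak = samples.max(axis=0)                                  # worst-case provisioning
q90 = provision_prbs(samples, 90)                           # quantile provisioning
saving = 1.0 - power_draw(q90).sum() / power_draw(peak).sum()
```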
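
The ECTA planning entry above (Jing et al.) builds on the characteristic-time view of LRU caching, in which a cache of capacity C behaves as if every object stays for a single characteristic time T satisfying sum_i (1 - exp(-lambda_i * T)) = C (the classic Che approximation), and object i's hit probability is 1 - exp(-lambda_i * T). Below is a minimal sketch that solves for T and inverts the relation to size a single cache for a target hit probability; the paper's decoupling across hierarchical nodes and the bandwidth dimension are not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

def characteristic_time(rates, capacity):
    """Che approximation: find T with sum_i (1 - exp(-rate_i * T)) = capacity."""
    f = lambda T: np.sum(1.0 - np.exp(-rates * T)) - capacity
    return brentq(f, 1e-9, 1e9)

def hit_probability(rates, capacity):
    """Traffic-weighted hit ratio implied by the characteristic time."""
    T = characteristic_time(rates, capacity)
    return float(np.sum(rates * (1.0 - np.exp(-rates * T))) / np.sum(rates))

def capacity_for_target(rates, target_hit, c_max=None):
    """Smallest capacity meeting a target hit probability (root-find on C)."""
    c_max = c_max or len(rates)
    g = lambda C: hit_probability(rates, C) - target_hit
    return brentq(g, 1e-6, c_max - 1e-6)

# Zipf(0.8) popularity over 10,000 objects (illustrative workload)
ranks = np.arange(1, 10_001)
rates = ranks ** -0.8
rates /= rates.sum()
needed = capacity_for_target(rates, target_hit=0.5)
```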
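
PCNet (Sun et al., above) applies RESCAL tensor decomposition, which factorizes each relational slice as X_k ≈ A R_k A^T over a shared entity matrix A. Below is a minimal gradient-descent sketch on a toy tensor to show the shared-factor structure; production RESCAL implementations typically use alternating least squares, and PCNet's Mashup/service tensor construction and attention module are not shown.

```python
import numpy as np

def rescal_grad(X, rank, iters=200, lr=0.01, reg=0.1, seed=0):
    """Tiny gradient-descent RESCAL: X[k] ~= A @ R[k] @ A.T.
    X: (n_slices, n, n) tensor of relational slices."""
    rng = np.random.default_rng(seed)
    K, n, _ = X.shape
    A = rng.normal(scale=0.1, size=(n, rank))
    R = rng.normal(scale=0.1, size=(K, rank, rank))
    for _ in range(iters):
        gA = reg * A
        for k in range(K):
            E = A @ R[k] @ A.T - X[k]                 # residual of slice k
            gA += E @ A @ R[k].T + E.T @ A @ R[k]     # gradient w.r.t. shared A
            R[k] -= lr * (A.T @ E @ A + reg * R[k])   # gradient step on R_k
        A -= lr * gA
    return A, R

# toy usage: 2 relation slices over 6 entities, rank-3 factorization
rng = np.random.default_rng(1)
X = (rng.random((2, 6, 6)) < 0.3).astype(float)
A, R = rescal_grad(X, rank=3)
scores = A @ R[0] @ A.T        # predicted strength of relation 0 between entities
```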