Last updated: 2026-03-07 05:01 UTC
All documents
Number of pages: 158
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Imran Ahmed, Bijoy Chand Chatterjee, Eiji Oki | AnalyticalSAR: Analytical Modeling for Blocking Performance with Security-Aware Reconfiguration in Spectrally-Spatially Elastic Optical Networks | 2026 | Early Access | Analytical models; Security; Resource management; Computational modeling; Iterative methods; Spread spectrum communication; Crosstalk; Monte Carlo methods; Elastic optical networks; System performance; Elastic optical network; spectrum reconfiguration; Markov chain; blocking analysis; security | Spectrally-spatially elastic optical networks (SS-EONs) enable ultra-high data rate transmission, which raises critical concerns about physical-layer security vulnerabilities, particularly against eavesdropping and unauthorized network access. Dynamic resource allocation through lightpath reconfiguration presents an effective approach to improving security by reducing request exposure windows. However, implementing secure reconfiguration in SS-EONs introduces significant complexity due to the intricate relationships between spectral allocation and spatial resource management constraints. This paper proposes an analytical model for blocking performance with security-aware reconfiguration (AnalyticalSAR) in SS-EONs based on continuous-time Markov chain analysis to tackle these security challenges. AnalyticalSAR provides an analytical assessment of how spectrum reconfiguration affects both network security and blocking performance while accounting for inter-core and inter-mode crosstalk. The model generates all viable states accounting for spectrum reconfiguration processes and their corresponding transitions to establish state probabilities. Our analysis incorporates two distinct spectrum allocation policies: core-mode-spectrum random fit (CMS-RF) and core-mode-spectrum first fit (CMS-FF). Our model supports diverse traffic scenarios, including single-class requests with uniform slot requirements and multi-class requests with heterogeneous bandwidth demands. To overcome computational complexity limitations in single-hop analyses, we develop a heuristic iterative approach and subsequently extend it to multi-hop network scenarios. We compare AnalyticalSAR, the heuristic iterative approach, and Monte Carlo simulation studies for a single-hop link. Analytical evaluation reveals that random spectrum reconfiguration substantially improves security metrics while introducing minimal increases in blocking probability. These performance trade-offs depend critically on the number of spectrum reconfigurations, link and network load conditions, and available link capacity. The results validate that AnalyticalSAR achieves an effective compromise between security enhancement and operational performance, providing a practical framework for secure resource management in SS-EON deployments. | 10.1109/TNSM.2026.3669870 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Yanxu Lin, Renzhong Zhong, Jingnan Xie, Yueting Zhu, Byung-Gyu Kim, Saru Kumari, Shakila Basheer, Fatimah Alhayan | Privacy-Preserving Digital Publishing Framework for Next-Generation Communication Networks: A Verifiable Homomorphic Federated Learning Approach | 2026 | Early Access | Electronic publishing; Federated learning; Cryptography; Communication networks; Homomorphic encryption; Next generation networking; Complexity theory; Protocols; Privacy; Optimization; Digital publishing; federated learning; next-generation communication networks; Chinese remainder theorem | Next-generation communication networks are revolutionizing digital publishing through intelligent content distribution and collaborative optimization capabilities. However, existing federated learning approaches face fundamental limitations, including trusted third-party dependencies, excessive communication overhead, and vulnerability to collusion attacks between servers and participants. This paper introduces VHFL-DP, a verifiable homomorphic federated learning framework for digital publishing environments operating within 6G network infrastructures. The framework addresses critical privacy and scalability challenges through four key innovations: a distributed cryptographic key generation protocol that eliminates trusted third-party requirements, Chinese remainder theorem-based dimensionality reduction, auxiliary validation nodes that enable independent verification with constant-time complexity, and an intelligent incentive mechanism that rewards digital publishing platforms based on objective contribution quality metrics. Experimental evaluation on MNIST and Amazon reviews datasets across six baseline methods demonstrates that VHFL-DP achieves superior performance, with accuracy improvements of 4.2% over the best baseline method. The framework maintains constant verification time ranging from 2.73 to 2.91 seconds regardless of platform count (increasing from ten to fifty) or dropout rates of up to thirty percent. Security evaluation reveals strong resilience: only 2.4 percentage points of accuracy degradation under poisoning attacks compared to 6.7-7.0 points for baseline methods, an inference attack success rate near random guessing at 51.3%, and 92.4% successful aggregation under Byzantine adversaries. | 10.1109/TNSM.2026.3667167 |
| Ghofran Khalaf, May Itani, Sanaa Sharafeddine | A UAV-Aided Digital Twin Framework for IoT Networks with High Accuracy and Synchronization | 2026 | Early Access | | Digital Twin (DT) technology has emerged as a promising link between the physical and virtual worlds, enabling simulation, prediction, and real-time performance optimization in different domains. In this work, we develop a high-fidelity digital twin framework, focusing on synchronization and accuracy between physical and digital systems to enhance data-driven decision making. To achieve this, we deploy several stationary UAVs in optimized locations to collect data from IoT devices, which are used to monitor multiple physical entities and perform computations to evaluate their status. We formulate a mixed-integer non-convex program to maximize the total amount of data collected from all IoT devices while ensuring a constrained age of digital twin (AoDT) threshold, and solve it using successive convex approximation (SCA). To cope with realistic scenarios involving unpredictable environments and large network sizes, we model our problem as a Markov Decision Process (MDP) and propose a deep reinforcement learning-based approach using a Twin Delayed Deep Deterministic Policy Gradient (TD3) to optimize the unmanned aerial vehicle positions and the sum rate. Finally, we present simulation results for the SCA- and TD3-based solutions together with two baseline approaches, evaluating the sum rate in terms of IoT device count, AoDT threshold, task arrival rate, and UAVs’ computational capacity. In all simulation results, the proposed TD3-based approach consistently proves superior to the baseline solutions. | 10.1109/TNSM.2026.3670040 |
| Xing Li, Ge Gao, Zhaoyu Chen, Xin Li, Qian Huang | MD-PCSN: Meta-motion Decoupling Point Cloud Sequence Network for Privacy-Preserving Human Action Recognition in AI machines | 2026 | Early Access | | In next-generation communication networks and Industry 5.0-based applications, ensuring robust security and reliability in human-computer interaction (HCI) constitutes a fundamental prerequisite for safety-critical AI machine systems. Point cloud sequence-based human action recognition demonstrates intrinsic advantages in privacy-preserving HCI, leveraging its non-intrusive sensing modality to mitigate data vulnerability while maintaining high-precision action interpretation in industrial environments. Existing spatio-temporal encoding methods for point cloud sequence-based action recognition suffer from two fundamental limitations: (1) rigid neighborhood constraints impair multi-scale feature extraction for heterogeneous body parts, and (2) independent spatial-temporal decomposition introduces motion representation distortion. We propose a Meta-motion Decoupling Point Cloud Sequence Network (MD-PCSN) that addresses these challenges through: (1) logarithmic spatio-temporal point convolution for hierarchical meta-motion construction at variable granularities, and (2) a novel Gated-KANsformer architecture with differential motion encoding to explicitly model both short-term displacements and long-term spatio-temporal dependencies. The proposed meta-motion decoupling mechanism significantly enhances robustness against sensor perturbations, making the framework particularly suitable for security-critical applications. Extensive experiments on three benchmark datasets demonstrate MD-PCSN’s superior performance. It outperforms the classic PST-Transformer by 1.5% on MSR Action3D and 4.14% on UTD-MHAD. On NTU RGB+D 60, it achieves a 2.9% cross-view gain over the latest PointActionCLIP. | 10.1109/TNSM.2026.3671357 |
| Yuanzhen Jiang, Yaqiong Liu, Xidian Wang, Nan Cheng, Zihan Jia, Duo Shi, Zhe Lv, Zhouyuan Li, Yan Zhang | Entity-level Autoregressive Relational Triple Extraction toward Knowledge Graph Construction for Network Operation and Maintenance | 2026 | Early Access | | With the significant increase in communication network scales, intelligent Network Operation and Maintenance (NOM) becomes essential. Knowledge Graphs (KGs) are a key enabler for intelligent NOM, and Relational Triple Extraction (RTE) plays a critical role in KG construction. However, most existing RTE research relies on general-domain corpora, with limited exploration of specialized domains. In this paper, we identify a novel challenge in Chinese NOM corpora, the Segmented Entity, which has garnered little attention in prior work. To address it, this paper proposes an Entity-level Autoregressive RTE (EARTE) method, which incorporates an innovative Segmented-BIO (Begin, Inside, Outside) tagging scheme. Furthermore, we construct CMIM23-NOM1-RA, the first high-quality restricted-domain RTE dataset for NOM. In our experiments, we meticulously reproduce all baselines and provide a comprehensive analysis. The results show that EARTE achieves the best performance on CMIM23-NOM1-RA. EARTE’s F1 scores surpass those of the best-performing baselines by 0.4%, 2.7%, and 0.8% under the strict criterion, the lenient criterion, and the setting focusing only on segmented entities, respectively. Finally, our codes, dataset, and reproduction guidelines are publicly available at: https://github.com/JYzzzzzz/PEAR-RTE. | 10.1109/TNSM.2026.3671463 |
| Mahnoor Sajid, Mohib Ullah Khan, KyungHi Chang | Intelligent Xn-Based Energy Aware Handover Optimization in 5G Networks via NWDAF-Orchestrated Agent Framework | 2026 | Early Access | Handover; 5G mobile communication; Quality of service; Energy efficiency; Energy consumption; Optimization; Reliability; Computer architecture; Energy conservation; Automation; Energy-aware handover; NWDAF; gNB energy efficiency; multi-agent control; 5G mobility management | Energy-aware mobility management in dense Fifth Generation (5G) networks is increasingly challenged by frequent handovers and unnecessary activation of sleeping next-generation NodeBs (gNBs), which lead to excessive energy consumption and degraded mobility reliability. Conventional Xn-based handover schemes rely on static radio thresholds and implicitly assume always-active gNBs, while existing NWDAF-enabled approaches improve stability but do not explicitly account for energy costs during handover execution. To address these limitations, this paper proposes the Intelligent Energy-Aware Handover Framework (IEAHF), an NWDAF-orchestrated architecture that integrates coordinated agent-based control for energy-aware mobility management. The proposed framework introduces an Energy-Aware Handover Optimization Agent (EA-HOA) to guide reliability-driven handover decisions and a Handover Energy Evaluation Agent (HEEA) to assess the energy impact of candidate handover actions, with both agents operating within a closed-loop control process enforced through Operations, Administration, and Maintenance (OAM). By reformulating handover success to incorporate energy inefficiency and enabling per-handover gNB energy reasoning at decision time, IEAHF jointly optimizes service continuity and energy efficiency. System-level simulations demonstrate that the proposed Agent-NWDAF configuration consistently outperforms baseline and analytics-only schemes, achieving tightly concentrated Energy-Aware Handover Success Rates of approximately 0.98, reducing the Energy-Aware Handover Failure Rate to the range of 0.007–0.015, and delivering up to a 32% reduction in average gNB power consumption relative to an always-on baseline. These results indicate that IEAHF provides a scalable and effective solution for energy-efficient mobility management in 5G networks and establishes a foundation for energy-aware handover control in future Sixth Generation (6G) systems. | 10.1109/TNSM.2026.3668238 |
| Zewei Han, Go Hasegawa | BBR-ES: An Extended-State Optimization for BBR Congestion Control | 2026 | Early Access | Delays; Bandwidth; Internet; Heuristic algorithms; Videos; Throughput; Taxonomy; Reviews; Market research; Proposals; Congestion control algorithm; Bottleneck Bandwidth and Round-trip propagation time (BBR); Throughput fairness; Round Trip Time (RTT) | In recent years, many optimization proposals for TCP BBR have been introduced, but most rely mainly on delay variations and do not fully resolve BBR’s limitations in RTT fairness, link utilization, and delay control in networks. This paper proposes BBR with Extended State (BBR-ES), which extends BBR’s state machine with a short stabilization state and a trend-based transition mechanism that react to per-flow bandwidth and RTT evolution instead of global delay alone. BBR-ES uses lightweight bandwidth and RTT trend tracking to adjust its sending rate while preserving BBR’s model-based design. Experiments on both emulated (Mininet) and real-world Internet paths (Amazon EC2) show that BBR-ES consistently improves RTT fairness and link utilization over BBRv1, BBRv3, and CUBIC while keeping queuing delay moderate and bounded; in most settings, it achieves Jain’s fairness index above 0.9 and link utilization above 98%. These results indicate that BBR-ES is a practical candidate for deployment in large-scale content delivery and a useful design reference for future model-based congestion control schemes. | 10.1109/TNSM.2026.3668966 |
| Zhaoping Li, Mingshu He, Xiaojuan Wang | HKD-Net: Hierarchical Knowledge Distillation Based on Multi-Domain Feature Fusion for Efficient Network Intrusion Detection | 2026 | Early Access | Feature extraction; Telecommunication traffic; Knowledge engineering; Accuracy; Deep learning; Anomaly detection; Adaptation models; Network intrusion detection; Knowledge transfer; Convolutional neural networks; Network traffic anomaly detection; Knowledge distillation; Multi-domain feature; Deep learning; Network intrusion detection | We propose HKD-Net, a hierarchical knowledge distillation network based on multi-domain feature fusion, for efficient network intrusion detection on resource-constrained edge devices. The framework incorporates dedicated feature extraction modules across temporal, frequency, and spatial domains, and introduces a dynamic gating mechanism for adaptive feature fusion, resulting in a more discriminative and comprehensive feature representation. Moreover, a hierarchical distillation mechanism is designed that not only preserves soft labels from the output layer but also aligns intermediate features from spatial, temporal, frequency, and fused domains, enabling efficient knowledge transfer from a large teacher model to a compact student model. Through knowledge distillation, the final lightweight model requires only 278,580 parameters, reducing the number of parameters by approximately 74.68% compared to the teacher, while maintaining high detection accuracy. Extensive experiments on three public datasets (Kitsune, CIRA-CIC-DoHBrw2020, and CICIoT2023) demonstrate that HKD-Net outperforms five state-of-the-art methods, achieving accuracies of 96.72%, 97.19%, and 87.19%, respectively, while reducing parameters by 74.68% and maintaining low computational cost. | 10.1109/TNSM.2026.3668812 |
| Vaishnavi Kasuluru, Luis Blanco, Cristian J. Vaca-Rubio, Engin Zeydan, Albert Bel | AI-Empowered Multivariate Probabilistic Forecasting: A Key Enabler for Sustainability in Open RAN | 2026 | Early Access | Open RAN; Forecasting; Probabilistic logic; Switches; Resource management; Telecommunication traffic; Sustainable development; Predictive models; Power demand; Energy consumption; Sustainability; Open RAN; 6G; Probabilistic Forecasting; Network Analytics; Artificial Intelligence | This paper explores the role of multivariate probabilistic forecasting in improving O-RAN operations, focusing on network sustainability aspects. A comprehensive analysis of its potential benefits and challenges, as well as its integration into the O-RAN architecture, is described. The paper first presents an overview of the O-RAN architecture and components, followed by an examination of power consumption models relevant to O-RAN deployments and the challenges associated with traditional deterministic models in resource allocation. We then examine the performance of several state-of-the-art probabilistic multivariate forecasting techniques, namely Gaussian Process Vector Autoregression (GPVAR) and Temporal Fusion Transformer (TFT), and a non-probabilistic multivariate technique, namely Multivariate Long Short-Term Memory (LSTM), explaining their implementation details and providing their evaluations. The simulation results show the effectiveness of these techniques in predicting Physical Resource Block (PRB) utilization and optimizing resource allocation. In particular, significant energy savings of around 20–30% are achieved, depending on the chosen percentile of the probabilistic forecasting techniques. The benefits of probabilistic forecasting techniques compared to multivariate LSTM are also analyzed. Our results emphasize the potential of probabilistic forecasting to improve energy efficiency and sustainability in O-RAN operations. | 10.1109/TNSM.2026.3669847 |
| Mengmeng Sun, Zeyu Tan, Dianlong You, Zhen Chen | PCNet: A Personalized Complementary Network via Tensor Decomposition for Service Recommendation | 2026 | Early Access | Mashups; Tensors; Collaborative filtering; Web sites; Video on demand; Artificial intelligence; Semantics; Reviews; Cloud computing; Software development management; Web service; Complementarity; Tensor Decomposition; Personalized Recommendation; Mashup | Web services are widely utilized across domains such as cloud computing, mobile networks, and Web applications. Due to their single-function nature, these services are often composed into Mashups to achieve more comprehensive functionality. However, the rapid growth in the number and variety of Web services has made it increasingly difficult to identify suitable services for Mashup development. Web service recommendation systems have emerged as a solution to this service overload, supporting innovative practices within the service-oriented development paradigm. While existing methods emphasize recommendation accuracy and relevance, few approaches simultaneously consider the personalized requirements of the Mashup side and the complementary relationships on the service side, both of which are essential for reconstructing the Web service ecosystem’s value chain. To address this gap, we propose PCNet, a Personalized Complementary Network for service recommendation based on tensor decomposition. We conceptualize the interaction dynamics between Mashups and services, as well as co-invocation patterns among services, using a three-dimensional tensor. The RESCAL tensor decomposition technique is then applied to jointly learn these relationships and uncover personalized complementary relationships among services. In addition, we develop a complementary perception module that uses an attention mechanism to dynamically model a Mashup’s focus on different complementary relationships, extending them to higher orders. Experimental results on real-world Web service datasets demonstrate that PCNet significantly outperforms state-of-the-art baselines. The implementation of PCNet is publicly available at: https://github.com/MengMeng3399/PCNet. | 10.1109/TNSM.2026.3669613 |
| Wenxuan Li, Yu Yao, Ni Zhang, Chuan Sheng, Ziyong Ran, Wei Yang | IMADP: Imputation-based Anomaly Detection in SCADA Systems via Adversarial Diffusion Process | 2026 | Early Access | Anomaly detection; Adaptation models; Data models; Training; SCADA systems; Transformers; Diffusion models; Monitoring; Robustness; Roads; SCADA; Multi-sensor; Anomaly Detection; Imputation-based Conditional Diffusion | As adversarial pressure in industrial cybersecurity escalates, multi-dimensional variables measured by SCADA multi-sensors are critical for assessing security risks in industrial field devices. While Deep Learning (DL) methods based on generative models have demonstrated effectiveness, the impact of missing features in samples and of temporal window size on modeling and detection has been consistently overlooked. To address these challenges, this work proposes the IMADP framework, which jointly solves two tasks: missing-value imputation and anomaly detection. First, the Window-based Adaptive Selection Strategy (WASS) is designed to window samples intelligently, reducing reliance on prior settings. Second, an imputer is constructed under WASS to restore sample integrity, implemented by a fully-connected network centered on Neural Controlled Differential Equations (NCDEs). Third, an adversarial diffusion detection model with a variant Transformer as the inverse solver is proposed. Additionally, the Adaptive Dynamic Mask Mechanism (ADMM) is introduced to bolster the model’s comprehension of inter-dependencies between time and sensor nodes. Simultaneously, adversarial training is introduced to reduce the training and detection latency caused by the excessive diffusion step size of the native Conditional Diffusion process. The experimental results validate that the proposed framework can build detectors from training samples with missing values, and its overall detection performance, tested across six datasets, is superior to existing methods. | 10.1109/TNSM.2026.3670062 |
| Masaki Oda, Akio Kawabata, Eiji Oki | Consistency-Aware Multi-Server Network Design for Delay-Sensitive Applications under Server Failures | 2026 | Early Access | Servers; Delays; Resource management; Approximation algorithms; Optimization; Numerical models; Computational modeling; Performance analysis; Data models; Software algorithms; Server allocation; data consistency; preventive start-time optimization; server failure; approximation algorithm | Real-time applications require low latency and event order guarantees. Distributed server processing is effective for this purpose, and data consistency between servers is crucial. Although existing models in previous work handle data consistency, they do not address server failures. This paper proposes a server allocation model for a consistency-aware multi-server network for delay-sensitive applications with preventive start-time optimization (PSO) under single-server failures. The proposed model considers data consistency between servers and handles single-server failures with PSO. PSO determines the assignment to minimize the worst-case delay over all possible failure scenarios while avoiding service disruption for users connected to non-failed servers. We formulate the proposed model as an integer linear programming (ILP) problem. The decision version of the server allocation problem is proven to be NP-complete, and it becomes difficult to solve in a practical time when the problem size is large. We develop two polynomial-time approximation algorithms with theoretical performance analysis. Numerical results show that the proposed model outperforms start-time optimization in terms of the largest total delay and run-time optimization in terms of avoiding instability. The results also show that the faster of our two developed algorithms achieves a speedup ranging from 2.26×10³ to 4.37×10⁶ times compared to the ILP approach, while the maximum delay is, on average, only 1.029 times the optimal value. The results indicate that the speedup effect becomes more significant as the number of users and servers increases. | 10.1109/TNSM.2026.3669840 |
| Chengwei Liao, Guofeng Yan, Hengliang Tan, Jiao Du, Xia Deng, Heng Wu | jTOLP-MADRL: A MADRL-based Joint Optimization Algorithm of Task Offloading Location and Proportion for Latency-sensitive Tasks in Vehicle Edge Computing Network | 2026 | Early Access | Servers; Resource management; Edge computing; Optimization; Quality of service; Deep reinforcement learning; Computer science; Computational modeling; TV; Simulation; Task Offloading; Deep Reinforcement Learning; Vehicular Edge Computing; Quality of Service | In the Vehicle Edge Computing Network (VECN), task offloading is a key technique for providing satisfactory quality of service (QoS) for latency-sensitive tasks. However, the diversity of computational resources in edge nodes (i.e., RSUs and idle vehicles) and the mobility of vehicles present significant challenges to task offloading. To address these challenges, we propose an offloading scheme in this paper that jointly allocates RSU nodes (including MEC servers) and idle service vehicle resources. We first prioritize tasks based on their maximum tolerable latency and design a utility function to capture the execution cost for latency-sensitive tasks. Then, we propose a joint optimization algorithm of task offloading location and proportion based on Multi-agent Deep Reinforcement Learning (the jTOLP-MADRL algorithm) for latency-sensitive tasks in VECN, which consists of two sub-algorithms: the Offloading Location Selection (OLS) algorithm and the Offloading Proportion Allocation (OPA) algorithm. Additionally, we design a Convolutional Recurrent Actor-Critic Network (CRACN) to enhance the learning efficiency of the OLS algorithm. Finally, simulation results demonstrate the effectiveness of our algorithm. Compared with the other benchmark algorithms, jTOLP-MADRL can significantly reduce latency and enhance system utility. | 10.1109/TNSM.2026.3669913 |
| Wenjing Jing, Quan Zheng, Siwei Peng, Shuangwu Chen, Xiaobin Tan, Jian Yang | Equivalent Characteristic Time Approximation Based Network Planning for Cache-enabled Networks | 2026 | Early Access | Planning; Resource management; Costs; Estimation; Bandwidth; Optimization; Measurement; Servers; Investment; Web and internet services; Cache-enabled Network; Cache Capacity; Bandwidth Resources Estimation; Network Planning | The exponential surge in network traffic has imposed significant challenges on traditional Internet architectures, resulting in high latency and redundant transmissions. Cache-enabled networks alleviate these issues by deploying content closer to end-users, making the planning of such networks a research focus. However, regional heterogeneity in user demand and caching interdependencies among hierarchical nodes complicate the planning process. Most existing approaches rely on simplistic even allocation or empirical methods, which fail to simultaneously meet user performance expectations and minimize deployment costs. This paper proposes a network planning framework based on the Equivalent Characteristic Time Approximation (ECTA). The approach begins by establishing a performance–resource mapping. Using ECTA, we decouple the tightly coupled characteristic time relationships across hierarchical nodes, thereby accurately estimating the cache capacity and bandwidth required to achieve user performance targets. Building on this foundation, we formulate network planning as a constrained convex optimization problem that minimizes deployment cost while satisfying user performance constraints. We conduct extensive experiments on a large-scale simulation platform (ndnSIM) and a real-world cache-enabled network testbed (CENI-HeFei). The results demonstrate that, under identical network topologies and total resource constraints, our method significantly improves cache hit probability while reducing deployment costs compared to homogeneous resource allocation schemes. This work provides a practical theoretical foundation and valuable insights for the design, deployment, and optimization of future cache-enabled networks. | 10.1109/TNSM.2026.3670399 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capabilities across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Shi Dong, Fuxiang Zhao, Longhui Shu, Junjie Huang | Android Zero-Day Guard: Zero-Shot Malware Detection Using Deep Learning and Generative Models | 2026 | Early Access | Malware Feature extraction Accuracy Zero shot learning Smart phones Generative adversarial networks Computational modeling Data models Convolutional neural networks Application programming interfaces Android Zero-Day Malware Zero-Shot Learning Wasserstein Generative Adversarial Network Malware Detection | This paper proposes an Android-oriented zero-day malware detection method named “Android Zero-Day Guard.” By integrating deep neural networks with zero-shot learning, this approach is capable of identifying emerging threats without prior exposure to malicious samples. The method converts APK files into images and extracts deep features, enabling effective capture of behavioral malware patterns. Experimental results demonstrate that the proposed method achieves a precision of 94.93%, a recall of 93.75%, and an F1-score of 94.28% across multiple malware families. Without relying on dynamic analysis, it exhibits strong detection capability and generalization performance, making it well-suited for the early identification of emerging threats. While the model performs strongly on benchmark datasets, continuous validation on the latest families is essential for deployment in a rapidly evolving threat landscape. | 10.1109/TNSM.2026.3671305 |
| Fernando Martinez-Lopez, Lesther Santana, Mohamed Rahouti, Abdellah Chehri, Shawqi Al-Maliki, Gwanggil Jeon | Learning in Multiple Spaces: Prototypical Few-Shot Learning with Metric Fusion for Next-Generation Network Security | 2026 | Early Access | Measurement Prototypes Extraterrestrial measurements Training Chebyshev approximation Metalearning Scalability Next generation networking Learning (artificial intelligence) Data models Few-Shot Learning Network Intrusion Detection Metric-Based Learning Multi-Space Prototypical Learning | As next-generation communication networks increasingly rely on AI-driven automation, ensuring robust and secure intrusion detection becomes critical, especially under limited labeled data. In this context, we introduce Multi-Space Prototypical Learning (MSPL), a few-shot intrusion detection framework that improves prototype-based classification by fusing complementary metric-induced spaces (Euclidean, Cosine, Chebyshev, and Wasserstein) via a constrained weighting mechanism. MSPL further enhances stability through Polyak-averaged prototype generation and balanced episodic training to mitigate class imbalance across diverse attack categories. In a few-shot setting with as few as 200 training samples, MSPL consistently outperforms single-metric baselines across three benchmarks: on CICEVSE Network2024, AUPRC improves from 0.3719 to 0.7324 and F1 increases from 0.4194 to 0.8502; on CICIDS2017, AUPRC improves from 0.4319 to 0.4799; and on CICIoV2024, AUPRC improves from 0.5881 to 0.6144. These results demonstrate that multi-space metric fusion yields more discriminative and robust representations for detecting rare and emerging attacks in intelligent network environments. | 10.1109/TNSM.2026.3665647 |
| Pengcheng Guo, Zhi Lin, Haotong Cao, Yifu Sun, Kuljeet Kaur, Sherif Moussa | GAN-Empowered Parasitic Covert Communication: Data Privacy in Next-Generation Networks | 2026 | Early Access | Interference Generators Generative adversarial networks Blind source separation Electronic mail Training Receivers Noise Image reconstruction Hardware Artificial intelligence blind source separation covert communication generative adversarial network | The widespread integration of artificial intelligence (AI) in next-generation communication networks enables advanced signal processing but poses a serious threat to data privacy. Eavesdroppers can use AI-based analysis to detect and reconstruct transmitted signals, leading to serious leakage of confidential information. To protect data privacy at the physical layer, we redefine covert communication as an active data protection mechanism. We propose a new parasitic covert communication framework in which communication signals are embedded into dynamically generated interference by generative adversarial networks (GANs). This method is implemented by our CDGUBSS (complex double generator unsupervised blind source separation) system. The system is explicitly designed to prevent unauthorized AI-based strategies from analyzing and compromising signals. For the intended recipient, the pretrained generator acts as a trusted key and can perfectly recover the original data. Extensive experiments have shown that our framework achieves effective covert communication and, more importantly, provides strong defense against data reconstruction attacks, ensuring excellent data privacy in next-generation wireless systems. | 10.1109/TNSM.2026.3666669 |
| Adel Chehade, Edoardo Ragusa, Paolo Gastaldo, Rodolfo Zunino | Hardware-Aware Neural Architecture Search for Encrypted Traffic Classification on Resource-Constrained Devices | 2026 | Early Access | Accuracy Computational modeling Cryptography Feature extraction Hardware Convolutional neural networks Artificial neural networks Real-time systems Long short term memory Internet of Things Deep neural networks encrypted traffic classification hardware-aware neural architecture search Internet of Things resource-constrained devices | This paper presents a hardware-efficient deep neural network (DNN), optimized through hardware-aware neural architecture search (HW-NAS); the DNN supports the classification of session-level encrypted traffic on resource-constrained Internet of Things (IoT) and edge devices. Thanks to HW-NAS, a 1D convolutional neural network (CNN) is tailored on the ISCX VPN-nonVPN dataset to meet strict memory and computational limits while achieving robust performance. The optimized model attains an accuracy of 96.60% with just 88.26K parameters, 10.08M floating-point operations (FLOPs), and a maximum tensor size of 20.12K. Compared to state-of-the-art (SOTA) models, it achieves reductions of up to 444-fold, 312-fold, and 15-fold in these metrics, respectively, significantly minimizing memory footprint and runtime requirements. The model also demonstrates versatility, achieving up to 99.86% across multiple VPN and traffic classification (TC) tasks; it further generalizes to external benchmarks with up to 99.98% accuracy on USTC-TFC and QUIC NetFlow. In addition, an in-depth analysis of header-level preprocessing strategies confirms that the optimized model can provide notable performance across a wide range of configurations, even in scenarios with stricter privacy considerations. Likewise, reducing session length by up to 75% yields significant efficiency improvements while maintaining high accuracy, with only a negligible drop of 1-2%. However, careful preprocessing and session-length selection remain important when classifying raw traffic data, as improper settings or aggressive reductions can cause a 7% reduction in overall accuracy. The quantized architecture was deployed on STM32 microcontrollers and evaluated across input sizes; results confirm that the efficiency gains from shorter sessions translate to practical, low-latency embedded inference. These findings demonstrate the method’s practicality for encrypted traffic analysis in constrained IoT networks. | 10.1109/TNSM.2026.3666676 |
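The MSPL entry above (Martinez-Lopez et al.) describes prototype-based few-shot classification that fuses complementary metric spaces via a constrained weighting mechanism. As a rough illustration only, not the authors' implementation, the following minimal sketch fuses Euclidean, cosine, and Chebyshev distances to class prototypes with fixed, illustrative simplex weights; the paper's Wasserstein term, learned weights, Polyak-averaged prototypes, and balanced episodic training are all omitted.

```python
import numpy as np

def prototypes(X, y):
    """Class prototypes: the mean embedding of each class's support samples."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def dists(q, P):
    """Per-metric distances from query q to each prototype row of P."""
    euclid = np.linalg.norm(P - q, axis=1)
    cosine = 1 - (P @ q) / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-12)
    cheby = np.abs(P - q).max(axis=1)
    return np.stack([euclid, cosine, cheby])  # shape: (n_metrics, n_classes)

def classify(q, P, classes, w):
    """Fuse metric spaces with simplex-constrained weights w (w >= 0, sum = 1)."""
    fused = w @ dists(q, P)  # weighted sum of per-metric distances per class
    return classes[np.argmin(fused)]

# Toy usage on two synthetic clusters standing in for embedded traffic samples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 8)),   # class 0 cluster
               rng.normal(2.0, 0.3, size=(20, 8))])  # class 1 cluster
y = np.array([0] * 20 + [1] * 20)
classes, P = prototypes(X, y)
w = np.array([0.5, 0.3, 0.2])  # illustrative fixed weights, not learned
print(classify(np.full(8, 1.9), P, classes, w))  # → 1
```

In the paper the weights are learned under the simplex constraint rather than fixed, so each metric space contributes in proportion to how discriminative it is for the episode at hand.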