Last updated: 2026-03-11 05:01 UTC
Number of pages: 158
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Woojin Jeon, Donghyun Yu, Ruei-Hau Hsu, Jemin Lee | Secure Data Sharing Framework with Fine-grained Access Control and Privacy Protection for IoT Data Marketplace | 2026 | Early Access | Internet of Things Encryption Access control Data privacy Protocols Authentication Protection Vectors Scalability Privacy IoT data marketplace fine-grained access control attributes privacy outsourcing encryption match test | The proliferation of IoT devices has led to an exponential increase in data generation, creating new opportunities for data marketplaces. However, due to the security and privacy issues arising from the sensitive nature of IoT data, as well as the need for efficient management of vast amounts of IoT data, a robust solution is necessary. Therefore, this paper proposes a secure data sharing framework with fine-grained access control and privacy protection for the Internet of Things (IoT) data marketplace. For fine-grained access control of the data in the proposed protocol, we develop the hidden attributes and encryption outsourced key-policy attribute-based encryption (HAEO-KP-ABE), which outsources highly complex operations to high-capability peripheral devices to reduce the computation burden of IoT devices. It achieves data privacy by hiding attributes in the ciphertext and by preventing entities that do not hold the data consumer’s secret key material (including SA/CS) from running the match test on stored ciphertexts before decryption. It also has an efficient match test algorithm that can verify that the hidden attributes of the ciphertext match the access policy of the data consumer’s private key without revealing those attributes. We demonstrate that the proposed protocol satisfies the security features required for the data sharing process in an IoT data marketplace environment. Furthermore, we evaluate the execution time of the proposed protocol according to the number of attributes and show its practicality and efficiency compared to related works. | 10.1109/TNSM.2026.3670207 |
| Zhaoping Li, Mingshu He, Xiaojuan Wang | HKD-Net: Hierarchical Knowledge Distillation Based on Multi-Domain Feature Fusion for Efficient Network Intrusion Detection | 2026 | Early Access | Feature extraction Telecommunication traffic Knowledge engineering Accuracy Deep learning Anomaly detection Adaptation models Network intrusion detection Knowledge transfer Convolutional neural networks Network traffic anomaly detection Knowledge distillation Multi-domain feature Deep learning Network intrusion detection | We propose HKD-Net, a hierarchical knowledge distillation network based on multi-domain feature fusion, for efficient network intrusion detection on resource-constrained edge devices. The framework incorporates dedicated feature extraction modules across temporal, frequency, and spatial domains, and introduces a dynamic gating mechanism for adaptive feature fusion, resulting in a more discriminative and comprehensive feature representation. Moreover, a hierarchical distillation mechanism is designed that not only preserves soft labels from the output layer but also aligns intermediate features from spatial, temporal, frequency, and fused domains, enabling efficient knowledge transfer from a large teacher model to a compact student model. Through knowledge distillation, the final lightweight model requires only 278,580 parameters, reducing the parameter count by approximately 74.68% compared to the teacher while maintaining high detection accuracy. Extensive experiments on three public datasets (Kitsune, CIRA-CIC-DoHBrw2020, and CICIoT2023) demonstrate that HKD-Net outperforms five state-of-the-art methods, achieving accuracies of 96.72%, 97.19%, and 87.19%, respectively, while maintaining low computational cost. | 10.1109/TNSM.2026.3668812 |
| Vaishnavi Kasuluru, Luis Blanco, Cristian J. Vaca-Rubio, Engin Zeydan, Albert Bel | AI-Empowered Multivariate Probabilistic Forecasting: A Key Enabler for Sustainability in Open RAN | 2026 | Early Access | Open RAN Forecasting Probabilistic logic Switches Resource management Telecommunication traffic Sustainable development Predictive models Power demand Energy consumption Sustainability Open RAN 6G Probabilistic Forecasting Network Analytics Artificial Intelligence | This paper explores the role of multivariate probabilistic forecasting in improving O-RAN operations, focusing on network sustainability aspects. A comprehensive analysis of its potential benefits and challenges, as well as its integration into the O-RAN architecture, is provided. The paper first presents an overview of the O-RAN architecture and components, followed by an examination of power consumption models relevant to O-RAN deployments and the challenges associated with traditional deterministic models in resource allocation. We then examine the performance of several state-of-the-art probabilistic multivariate forecasting techniques, namely Gaussian Process Vector Autoregression (GPVAR) and the Temporal Fusion Transformer (TFT), and a non-probabilistic multivariate technique, namely multivariate Long Short-Term Memory (LSTM), explaining their implementation details and providing their evaluations. The simulation results show the effectiveness of these techniques in predicting Physical Resource Block (PRB) utilization and optimizing resource allocation. In particular, significant energy savings of around 20-30% are achieved, depending on the percentile used in the probabilistic forecasting techniques. The benefits of probabilistic forecasting techniques compared to multivariate LSTM are also analyzed. Our results emphasize the potential of probabilistic forecasting to improve energy efficiency and sustainability in O-RAN operations. | 10.1109/TNSM.2026.3669847 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Log anomaly detection currently faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Wenxuan Li, Yu Yao, Ni Zhang, Chuan Sheng, Ziyong Ran, Wei Yang | IMADP: Imputation-based Anomaly Detection in SCADA Systems via Adversarial Diffusion Process | 2026 | Early Access | Anomaly detection Adaptation models Data models Training SCADA systems Transformers Diffusion models Monitoring Robustness Roads SCADA Multi-sensor Anomaly Detection Imputation-based Conditional Diffusion | As confrontations in industrial cybersecurity escalate, multi-dimensional variables measured by SCADA multi-sensors are critical for assessing security risks in industrial field devices. While Deep Learning (DL) methods based on generative models have demonstrated effectiveness, the impact of missing features in samples and of the temporal window size on modeling and detection has been consistently overlooked. To address these challenges, this work proposes the IMADP framework, which jointly solves the two tasks of missing-data imputation and anomaly detection. Firstly, the Window-based Adaptive Selection Strategy (WASS) is designed to intelligently window samples, reducing reliance on prior settings. Secondly, an imputer is constructed under WASS to restore sample integrity, implemented by a fully-connected network centered on Neural Controlled Differential Equations (NCDEs). Thirdly, an adversarial diffusion detection model with a variant Transformer as the inverse solver is proposed. Additionally, the Adaptive Dynamic Mask Mechanism (ADMM) is introduced to bolster the model’s comprehension of inter-dependencies between time and sensor nodes. Simultaneously, adversarial training is introduced to reduce the training and detection latency caused by the excessive diffusion step size of the native Conditional Diffusion process. The experimental results validate that the proposed framework can build detectors from missing training samples, and its overall detection performance, tested across six datasets, is superior to existing methods. | 10.1109/TNSM.2026.3670062 |
| Shankar K. Ghosh, Souvik Deb, Rishi Balamurugan, AB Santhosh | Exploring the conditional effect of RLF on handover failure based on ns-3 under stochastic channel condition | 2026 | Early Access | | A key component of handover failure (HOF) in fifth-generation (5G) cellular networks is the underlying radio link failure (RLF) event; existing model-based analyses of HOF have not adequately explored this dependency. Moreover, HOF as a function of user mobility necessitates models that incorporate spatio-temporal correlation, which has been largely ignored. In this work, based on ns-3 simulation, we characterize the relationship between RLF and HOF considering the effects of handover parameters (i.e., hysteresis (Hys), time-to-trigger (TTT), A2 threshold, A4 threshold) and RLF parameters (i.e., out-of-synch threshold (Qout), out-of-synch indication (N310), in-synch indication (N311) and RLF timer (T310)) for correlated RSRP samples. The study has been carried out for different kinds of handovers in the Non-Standalone (NSA) deployment of 5G. Our study reveals that the optimal settings of handover and RLF parameters to minimize HOF are actually constrained by the correlation characteristics of the prevailing channels. Comparison of simulation results with an existing semi-analytic model-based analysis shows the novelty of the proposed ns-3 simulation methodology in capturing the cumulative impact of all the aforementioned factors in causing HOF. This study will help mobile operators choose optimal RLF and handover parameters to minimize HOF under different UE velocities and fading scenarios. | 10.1109/TNSM.2026.3672646 |
| Fernando Martinez-Lopez, Lesther Santana, Mohamed Rahouti, Abdellah Chehri, Shawqi Al-Maliki, Gwanggil Jeon | Learning in Multiple Spaces: Prototypical Few-Shot Learning with Metric Fusion for Next-Generation Network Security | 2026 | Early Access | Measurement Prototypes Extraterrestrial measurements Training Chebyshev approximation Metalearning Scalability Next generation networking Learning (artificial intelligence) Data models Few-Shot Learning Network Intrusion Detection Metric-Based Learning Multi-Space Prototypical Learning | As next-generation communication networks increasingly rely on AI-driven automation, ensuring robust and secure intrusion detection becomes critical, especially under limited labeled data. In this context, we introduce Multi-Space Prototypical Learning (MSPL), a few-shot intrusion detection framework that improves prototype-based classification by fusing complementary metric-induced spaces (Euclidean, Cosine, Chebyshev, and Wasserstein) via a constrained weighting mechanism. MSPL further enhances stability through Polyak-averaged prototype generation and balanced episodic training to mitigate class imbalance across diverse attack categories. In a few-shot setting with as few as 200 training samples, MSPL consistently outperforms single-metric baselines across three benchmarks: on CICEVSE Network2024, AUPRC improves from 0.3719 to 0.7324 and F1 increases from 0.4194 to 0.8502; on CICIDS2017, AUPRC improves from 0.4319 to 0.4799; and on CICIoV2024, AUPRC improves from 0.5881 to 0.6144. These results demonstrate that multi-space metric fusion yields more discriminative and robust representations for detecting rare and emerging attacks in intelligent network environments. | 10.1109/TNSM.2026.3665647 |
| Pengcheng Guo, Zhi Lin, Haotong Cao, Yifu Sun, Kuljeet Kaur, Sherif Moussa | GAN-Empowered Parasitic Covert Communication: Data Privacy in Next-Generation Networks | 2026 | Early Access | Interference Generators Generative adversarial networks Blind source separation Electronic mail Training Receivers Noise Image reconstruction Hardware Artificial intelligence blind source separation covert communication generative adversarial network | The widespread integration of artificial intelligence (AI) in next-generation communication networks poses a serious threat to data privacy while achieving advanced signal processing. Eavesdroppers can use AI-based analysis to detect and reconstruct transmitted signals, leading to serious leakage of confidential information. In order to protect data privacy at the physical layer, we redefine covert communication as an active data protection mechanism. We propose a new parasitic covert communication framework in which communication signals are embedded into dynamically generated interference by generative adversarial networks (GANs). This method is implemented by our CDGUBSS (complex double generator unsupervised blind source separation) system. The system is explicitly designed to prevent unauthorized AI-based strategies from analyzing and compromising signals. For the intended recipient, the pretrained generator acts as a trusted key and can perfectly recover the original data. Extensive experiments have shown that our framework achieves powerful covert communication, and more importantly, it provides strong defense against data reconstruction attacks, ensuring excellent data privacy in next-generation wireless systems. | 10.1109/TNSM.2026.3666669 |
| Chengwei Liao, Guofeng Yan, Hengliang Tan, Jiao Du, Xia Deng, Heng Wu | jTOLP-MADRL: A MADRL-based Joint Optimization Algorithm of Task Offloading Location and Proportion for Latency-sensitive Tasks in Vehicle Edge Computing Network | 2026 | Early Access | Servers Resource management Edge computing Optimization Quality of service Deep reinforcement learning Computer science Computational modeling TV Simulation Task Offloading Deep Reinforcement Learning Vehicular Edge Computing Quality of Service | In the Vehicle Edge Computing Network (VECN), task offloading is a key technique to provide satisfactory quality of service (QoS) for latency-sensitive tasks. However, the diversity of computational resources in edge nodes (i.e., RSUs and idle vehicles) and the mobility of vehicles present significant challenges to task offloading. To address these challenges, we propose an offloading scheme that jointly allocates RSU nodes (including MEC servers) and idle service vehicle resources. We first prioritize tasks based on their maximum tolerable latency and design a utility function to capture the execution cost of latency-sensitive tasks. Then, we propose a joint optimization algorithm of task offloading location and proportion based on Multi-agent Deep Reinforcement Learning (the jTOLP-MADRL algorithm) for latency-sensitive tasks in VECN, which consists of two sub-algorithms: the Offloading Location Selection (OLS) algorithm and the Offloading Proportion Allocation (OPA) algorithm. Additionally, we design a Convolutional Recurrent Actor-Critic Network (CRACN) to enhance the learning efficiency of the OLS algorithm. Finally, simulation results demonstrate the effectiveness of our algorithm. Compared with the other benchmark algorithms, jTOLP-MADRL can significantly reduce latency and enhance system utility. | 10.1109/TNSM.2026.3669913 |
| Rong Jiang, Yulin Li, Xuetao Pu, Xueke Wang, Yukun Xue | A Contract Data Sharing Model Based on Consortium Blockchain and Local Differential Privacy | 2026 | Early Access | | Privacy preservation and sharing of contract data are crucial for enterprise collaboration. However, current approaches combining blockchain and differential privacy face challenges including high computational costs, low data processing efficiency, and trust issues in decentralized privacy mechanisms. To address this, we propose a consortium blockchain model based on multi-dimensional local differential privacy. First, a Multi-Dimensional Randomized Response (MDRR) mechanism is designed to protect privacy while retaining internal attribute correlations. Second, we construct a hybrid computation mechanism that integrates consortium blockchain and differential privacy, enabling on-chain scheduling with efficient off-chain computation, thereby significantly reducing computational overhead. Furthermore, we introduce a Trust-Utility Synergistic Optimization (TUSO) mechanism to enhance reliability by combining trust scores and utility. Experiments show superior accuracy, reduced error, and improved efficiency. | 10.1109/TNSM.2026.3672462 |
| Wenjing Jing, Quan Zheng, Siwei Peng, Shuangwu Chen, Xiaobin Tan, Jian Yang | Equivalent Characteristic Time Approximation Based Network Planning for Cache-enabled Networks | 2026 | Early Access | Planning Resource management Costs Estimation Bandwidth Optimization Measurement Servers Investment Web and internet services Cache-enabled Network Cache Capacity Bandwidth Resources Estimation Network Planning | The exponential surge in network traffic has imposed significant challenges on traditional Internet architectures, resulting in high latency and redundant transmissions. Cache-enabled networks alleviate these issues by deploying content closer to end-users, making the planning of such networks a research focus. However, regional heterogeneity in user demand and caching interdependencies among hierarchical nodes complicate the planning process. Most existing approaches rely on simplistic even allocation or empirical methods, which fail to simultaneously meet user performance expectations and minimize deployment costs. This paper proposes a network planning framework based on the Equivalent Characteristic Time Approximation (ECTA). The approach begins by establishing a performance–resource mapping. Using ECTA, we decouple the tightly coupled characteristic time relationships across hierarchical nodes, thereby accurately estimating the cache capacity and bandwidth required to achieve user performance targets. Building on this foundation, we formulate network planning as a constrained convex optimization problem that minimizes deployment cost while satisfying user performance constraints. We conduct extensive experiments on a large-scale simulation platform (ndnSIM) and a real-world cache-enabled network testbed (CENI-HeFei). The results demonstrate that, under identical network topologies and total resource constraints, our method significantly improves cache hit probability while reducing deployment costs compared to homogeneous resource allocation schemes. This work provides a practical theoretical foundation and valuable insights for the design, deployment, and optimization of future cache-enabled networks. | 10.1109/TNSM.2026.3670399 |
| Honghao Gao, Qionghuizi Ran, Ye Wang, Yueshen Xu | BiTrustChain: A Dual-Blockchain Empowered Dynamic Vehicle Trust Management for Malicious Detection in IoV | 2026 | Early Access | Blockchains Vehicle dynamics Trust management Real-time systems Synchronization Scalability Data models Bayes methods Reliability Computer architecture IoV Blockchain Reputation Calculation Dual-Chain Architecture Bayesian Model | The rapid development of the Internet of Vehicles (IoV) has accelerated technological progress, but several critical security challenges remain, especially in the context of vehicle trust management. Two representative issues are malicious nodes and unreliable information transmission. To address these problems, we propose BiTrustChain, a dual-layer blockchain framework designed to enhance security and trust management in IoV environments. First, it consists of two innovative data chains: a Behavior Data Chain (BDC) and a Reputation Evaluation Chain (REC). The BDC records vehicle interaction data, whereas the REC stores and updates trust values in real time. Second, within this framework, we develop a Multifactor Bayesian Reputation (MFBR) model that enables quantitative evaluation of node trustworthiness. It integrates a time-decay function and a penalty mechanism to regulate reputation evolution. Trust values decrease after malicious behaviors and recover through continuous normal interactions. In addition, we propose a dynamic local whitelist for indirect reputation evaluation. It filters out untrustworthy nodes and ensures that only reliable nodes remain. The filtered indirect trust is then combined with direct trust to produce a comprehensive reputation score. Third, we design a new set of event-driven smart contracts to synchronize the BDC and REC in real time and ensure secure and efficient data exchange. Finally, we perform experiments on the SUMO/NS-3 evaluation platform, and the results show that our method identifies malicious nodes with higher accuracy. In particular, the framework achieves 1.5× higher throughput and reduces latency by 40% compared to the baseline single-chain system. The framework also enhances interaction data integrity and improves robustness against adversarial reputation manipulation. | 10.1109/TNSM.2026.3670385 |
| Xing Li, Ge Gao, Zhaoyu Chen, Xin Li, Qian Huang | MD-PCSN: Meta-motion Decoupling Point Cloud Sequence Network for Privacy-Preserving Human Action Recognition in AI machines | 2026 | Early Access | Point cloud compression Convolution Three-dimensional displays Dynamics Encoding Artificial intelligence Human activity recognition Adaptation models Skeleton Feature extraction Point cloud sequence 3D action recognition spatio-temporal point convolution meta-motion | In next-generation communication networks and Industry 5.0-based applications, ensuring robust security and reliability in human-computer interaction (HCI) constitutes a fundamental prerequisite for safety-critical AI machine systems. Point cloud sequence-based human action recognition demonstrates intrinsic advantages in privacy-preserving HCI, leveraging its non-intrusive sensing modality to mitigate data vulnerability while maintaining high-precision action interpretation in industrial environments. Existing spatio-temporal encoding methods for point cloud sequence-based action recognition suffer from two fundamental limitations: (1) rigid neighborhood constraints impair multi-scale feature extraction for heterogeneous body parts, and (2) independent spatial-temporal decomposition introduces motion representation distortion. We propose a Meta-motion Decoupling Point Cloud Sequence Network (MD-PCSN) that addresses these challenges through: (1) logarithmic spatio-temporal point convolution for hierarchical meta-motion construction at variable granularities, and (2) a novel Gated-KANsformer architecture with differential motion encoding to explicitly model both short-term displacements and long-term spatio-temporal dependencies. The proposed meta-motion decoupling mechanism significantly enhances robustness against sensor perturbations, making the framework particularly suitable for security-critical applications. Extensive experiments on three benchmark datasets demonstrate MD-PCSN’s superior performance. It outperforms the classic PST-Transformer by 1.5% on MSR Action3D and 4.14% on UTD-MHAD. On NTU RGB+D 60, it achieves a 2.9% cross-view gain over the latest PointActionCLIP. | 10.1109/TNSM.2026.3671357 |
| Guolong Li, Yuan Gao, Jiongjiong Ren, Shaozhen Chen | BPF-GNN: A multi-granularity feature extraction model using graph neural networks for encrypted traffic classification | 2026 | Early Access | Feature extraction Cryptography Payloads Deep learning Protocols Telecommunication traffic Machine learning Representation learning Data mining Quality of service Encrypted traffic classification Deep learning Graph neural networks Multi-granularity feature extraction | Encrypted traffic classification is crucial for critical network management tasks such as traffic type identification, resource allocation, and risk mitigation, especially given that encrypted traffic has become the dominant form of modern network communication. However, existing classification methods are typically confined to single-level feature extraction, failing to capture the multi-granularity information inherent in traffic and thus limiting their ability to characterize complex encrypted traffic patterns. To address this issue, this paper proposes BPF-GNN, a hierarchical graph feature extraction model for encrypted traffic classification. The model enables multi-granularity feature learning by constructing a three-tier graph structure (Byte-, Packet-, and Flow-level). It sequentially extracts discriminative information inherent in each granularity level and accumulates multi-dimensional traffic characteristics, significantly improving the classification accuracy of encrypted traffic. Experiments on the ISCX-VPN2016, ISCX-Tor2016, USTC-TFC2016, and MIRAGE-2024 datasets demonstrate that BPF-GNN outperforms existing methods, validating the effectiveness and superiority of the proposed hierarchical multi-granularity feature extraction approach. | 10.1109/TNSM.2026.3671203 |
| Beibei Li | B-TWGA: A Trusted Gateway Architecture Based on Blockchain for Internet of Things | 2026 | Early Access | Internet of Things Blockchains Security Hardware Logic gates Computer architecture Sensors Radiofrequency identification Trust management Middleware Internet of Things communication links Blockchain-based Trustworthy Gateway Architecture | Internet of Things (IoT) terminals are commonly used for data sensing and edge control. The communication links between these hardware devices are critical points that are vulnerable to security attacks. Moreover, these links are usually composed of resource-constrained nodes that cannot implement strong security protections. To address these security threats, we introduce a Blockchain-based Trustworthy Gateway Architecture (B-TWGA), which does not rely on additional third-party management institutions or hardware facilities, nor does it require central control. Our proposal further considers the possibility of Denial of Service (DoS) attacks in blockchain transactions, ensuring secure storage and seamless interaction within the network. The proposed scheme offers advantages such as tamper-proofing, protection against malicious attacks, and reliability while maintaining operational simplicity. Experimental results demonstrate that B-TWGA maintains stable trust levels even when 40% of the network nodes are malicious, effectively mitigates trust degradation caused by vote-stuffing and switch attacks, and ensures high transaction processing performance, achieving an average throughput of 97.55% for storage transactions with practical response times below 0.7s for typical trust file sizes. | 10.1109/TNSM.2026.3671208 |
| Tuan-Vu Truong, Van-Dinh Nguyen, Quang-Trung Luu, Phi-Son Vo, Xuan-Phu Nguyen, Fatemeh Kavehmadavani, Symeon Chatzinotas | Accelerating Resource Allocation in Open RAN Slicing via Deep Reinforcement Learning | 2026 | Early Access | Resource management Open RAN Ultra reliable low latency communication Real-time systems Computational modeling Optimization Deep reinforcement learning Costs Complexity theory Bandwidth Open radio access network network slicing virtual network function resource allocation deep reinforcement learning successive convex approximation | The transition to beyond-fifth-generation (B5G) wireless systems has revolutionized cellular networks, driving unprecedented demand for high-bandwidth, ultra-low-latency, and massive connectivity services. The open radio access network (Open RAN) and network slicing provide B5G with greater flexibility and efficiency by enabling tailored virtual networks on shared infrastructure. However, managing resource allocation in these frameworks has become increasingly complex. This paper addresses the challenge of optimizing resource allocation across virtual network functions (VNFs) and network slices, aiming to maximize the total reward for admitted slices while minimizing associated costs. By adhering to the Open RAN architecture, we decompose the formulated problem into two subproblems solved at different timescales. Initially, the successive convex approximation (SCA) method is employed to achieve at least a locally optimal solution. To handle the high complexity of binary variables and adapt to time-varying network conditions, traffic patterns, and service demands, we propose a deep reinforcement learning (DRL) approach for real-time and autonomous optimization of resource allocation. Extensive simulations demonstrate that the DRL framework quickly adapts to evolving network environments, significantly improving slicing performance. The results highlight DRL’s potential to enhance resource allocation in future wireless networks, paving the way for smarter, self-optimizing systems capable of meeting the diverse requirements of modern communication services. | 10.1109/TNSM.2026.3665553 |
| Yuanzhen Jiang, Yaqiong Liu, Xidian Wang, Nan Cheng, Zihan Jia, Duo Shi, Zhe Lv, Zhouyuan Li, Yan Zhang | Entity-level Autoregressive Relational Triple Extraction toward Knowledge Graph Construction for Network Operation and Maintenance | 2026 | Early Access | Knowledge graphs Tagging Maintenance Electronic mail Video sequences Vectors Telecommunications Soft sensors Semantics Matrix converters Network Operation and Maintenance Knowledge Graph Relational Triple Extraction Segmented Entity Autoregressive Sequence Tagging Task BERT Segmented-BIO | With the significant increase of communication network scales, intelligent Network Operation and Maintenance (NOM) becomes essential. Knowledge Graphs (KGs) are a key enabler for intelligent NOM, and Relational Triple Extraction (RTE) plays a critical role in KG construction. However, most existing RTE research relies on general-domain corpora, with limited exploration of specialized domains. In this paper, we identify a novel challenge in Chinese NOM corpora, the Segmented Entity, which has garnered little attention in prior work. To address it, this paper proposes an Entity-level Autoregressive RTE (EARTE) method, which incorporates an innovative Segmented-BIO (Begin, Inside, Outside) tagging scheme. Furthermore, we construct CMIM23-NOM1-RA, the first high-quality restricted-domain RTE dataset for NOM. In our experiments, we reproduce all baselines and provide a comprehensive analysis. The results show that EARTE achieves the best performance on CMIM23-NOM1-RA. EARTE’s F1 scores surpass those of the best-performing baselines by 0.4%, 2.7%, and 0.8% under the strict criterion, the lenient criterion, and the setting focusing only on segmented entities, respectively. Finally, our codes, dataset, and reproduction guidelines are publicly available at: https://github.com/JYzzzzzz/PEAR-RTE. | 10.1109/TNSM.2026.3671463 |
| Shi Dong, Fuxiang Zhao, Longhui Shu, Junjie Huang | Android Zero-Day Guard: Zero-Shot Malware Detection Using Deep Learning and Generative Models | 2026 | Early Access | Malware Feature extraction Accuracy Zero shot learning Smart phones Generative adversarial networks Computational modeling Data models Convolutional neural networks Application programming interfaces Android Zero-Day Malware Zero-Shot Learning Wasserstein Generative Adversarial Network Malware Detection | This paper proposes an Android-oriented zero-day malware detection method named “Android Zero-Day Guard.” By integrating deep neural networks with zero-shot learning, this approach is capable of identifying emerging threats without prior exposure to malicious samples. The method converts APK files into images and extracts deep features, enabling effective capture of behavioral malware patterns. Experimental results demonstrate that the proposed method achieves a precision of 94.93%, a recall of 93.75%, and an F1-score of 94.28% across multiple malware families. Without relying on dynamic analysis, it exhibits strong detection capability and generalization performance, making it well-suited for the early identification of emerging threats. While the model performs strongly on benchmark datasets, continuous validation on the latest families is essential for deployment in a rapidly evolving threat landscape. | 10.1109/TNSM.2026.3671305 |
| Ebrima Jaw, Moritz Müller, Cristian Hesselman, Lambert Nieuwenhuis | Reproducibility Study and Assessment of the Evolution of Serial BGP Hijacking Events | 2026 | Early Access | Internet Routing Border Gateway Protocol Routing protocols Security IP networks Cloud computing Autonomous systems Authorization Scalability Border Gateway Protocol (BGP) Prefix hijacks RPKI Regional Internet Registries (RIR) Serial hijackers | The Border Gateway Protocol (BGP) is the Internet’s most crucial protocol for efficient global connectivity and traffic routing. However, BGP is well known to be susceptible to route hijacks and leaks. Route hijacks are intentional or unintentional illegitimate announcements of network resources that can compromise the confidentiality, integrity, and availability of communication systems. In the past, so-called “serial hijackers” have hijacked Internet resources multiple times, with some hijacks lasting for several months or years. So far, only the paper “Profiling BGP Serial Hijackers” has explicitly focused on these repeat offenders, and it dates back to 2019. At the time, its authors had to process large volumes of BGP announcements to find a few potential serial hijackers. In this paper, we revisit the profiling of serial hijackers. We reproduce the 2019 study and show that we can identify potential offenders with less data while achieving similar accuracy. Our study confirms that there has been no significant increase in serial hijacking activity over the last five years. We then extend the original research, further analyze the characteristics of the serial hijackers, and show that most of the alleged serial hijackers are still active on the Internet. We also find that 22.9% of the hijacks violated RPKI objects but were still widely propagated, and that even MANRS participants were among the propagating networks. | 10.1109/TNSM.2026.3671613 |
| Shaohui Gong, Luohao Tang, Jianjiang Wang, Quan Chen, Cheng Zhu | A Key Node Set Analysis Method For Regional Service Denial In Mega-Constellation Networks | 2026 | Early Access | Satellites Measurement Analytical models Robustness Collaboration Satellite constellations Protection Degradation Correlation Spatiotemporal phenomena Mega-Constellation Networks Regional Service Service Denial Key Node Set Temporal Networks Mixed-Integer Programming | Mega-constellation networks (MCNs) face significant threats from regional service denial attacks. To improve the robustness of regional services in MCNs against such attacks, a cost-effective approach is to identify key node sets for targeted protection. This paper formally defines the key node set analysis problem for regional service denial in MCNs and develops a comprehensive solution framework. First, we develop a regional service capability analysis model that accounts for the dynamic collaboration of multiple satellites in regional communication service scenarios, alongside a temporal network model of their collaborative relationships. Next, we design a multi-satellite criticality metric that quantifies the multi-dimensional impact of satellite node set failures on regional service capabilities. Building on these, we construct a mixed-integer programming-based key node set analysis model to achieve precise identification of key node sets. Finally, simulation experiments verify and analyze the proposed methods, providing insights for enhancing the robustness of regional services in MCNs. | 10.1109/TNSM.2026.3672157 |