Last updated: 2026-04-10 05:01 UTC
All documents
Number of pages: 161
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Log anomaly detection currently faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training; Load management; Packet loss; Throughput; Delays; Topology; Scheduling; Telecommunication traffic; Fluctuations; Switches; Datacenter Networks; Load Balancing; Data Security; Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Kang Liu, Jianchen Hu, Donglai Ma, Xiaoyu Cao, Yuzhou Zhou, Lei Zhu, Li Su, Wenli Zhou, Xueqi Wu, Feng Gao | Topology-Aware Virtual Machine Placement through the Buffer Migration Mechanism | 2026 | Early Access | Central Processing Unit; Filtering; Filters; Electronic circuits; Circuits; Circuits and systems; Feedback; Cloud computing; Radio access networks; Regional area networks; Buffer management; Optimization; Topology-aware VM Placement | Virtual machine (VM) placement under topology constraints is difficult because unpredictable topological VMs impose additional structural requirements (including affinity, anti-affinity, and fault-domain constraints) on the resource pool. Thus, the service level agreement (SLA) can be violated even when the occupancy of the resource pool is quite modest. To solve this problem, we propose an efficient buffer-migration-based heuristic online algorithm. First, we build an integer programming model for the topology-aware VM placement problem. Second, we propose a hierarchical resource-preserving online approach, where the Rack and physical machine (PM) nodes are selected in the upper and lower layers, respectively. Finally, we utilize the buffer to place and migrate the unfitted VMs to enhance the capacity of the resource pool. The proposed approach is tested with a high proportion of topological VM requests (nearly 60%) in resource pools of 500, 1000, and 1500 PMs. The results show that our online approach (with unknown upcoming VM information) achieves more than 85% of the performance of the offline approach (with complete upcoming VM information). The latency is lower than 5 ms per VM. | 10.1109/TNSM.2026.3678976 |
| Yifei Xie, Zhi Lin, Kefeng Guo, Ruiqian Ma, Hussam Al Hamadi, Fatima Asiri, Ahlam Almusharraf | Lightweight Learning for Symbiotic Secure and Efficient ISAC in RIS-assisted Intelligent Transportation Networks | 2026 | Early Access | Low earth orbit satellites; Artificial satellites; Antennas; Antenna arrays; Feeds; Antenna feeds; Receiving antennas; Transmitting antennas; Phased arrays; Planar arrays; Integrated sensing and communications; Reconfigurable intelligent surface; Secrecy energy efficiency; Deep learning; ITN | Achieving real-time processing in integrated sensing and communication (ISAC) systems presents significant challenges due to the high computational burden of conventional optimization methods, particularly within intelligent transportation networks (ITN). This paper addresses these challenges by proposing lightweight supervised and unsupervised deep learning (DL) algorithms for quasi-static and dynamic environments, respectively, aiming to improve the secrecy energy efficiency (SEE) of ITN under the constraints of the Cramér-Rao bound (CRB) for direction-of-arrival (DOA) estimation and the transmission rate of each user. By jointly optimizing power allocation and reconfigurable intelligent surface (RIS) phase shifts, the framework ensures robust physical layer security (PLS) alongside communication efficiency, aligning with defense-in-depth strategies for securing next-generation ITN. For quasi-static environments, a supervised deep neural network (DNN) algorithm leverages offline codebook-generated labels to achieve near-optimal channel state information (CSI) mapping, explicitly minimizing signal leakage to eavesdroppers. In dynamic scenarios, an unsupervised channel attention mechanism-based residual network (CAM-ResNet) eliminates labeling overhead through direct physics-informed SEE optimization with adaptive constraint enforcement, enabling real-time adaptation to rapidly varying channels and evolving security threats. Simulation results demonstrate that both algorithms achieve SEE performance comparable to the zero-forcing (ZF) method while significantly reducing computational complexity, with the CAM-ResNet demonstrating superior resilience to dynamic security threats. This work contributes to advancing secure and efficient ISAC solutions, reinforcing multi-layered defense mechanisms critical for future ITN. | 10.1109/TNSM.2026.3679370 |
| Xiaolong Wang, Haipeng Yao, Lin Zhu, Wenji He, Wei Zhang, Mohsen Guizani | Joint Optimization of Routing and Scheduling in Cross-Domain Deterministic Networks | 2026 | Early Access | Space exploration; Filtering; Filters; Central Processing Unit; Integrated circuits; Circuits and systems; Communication systems; Computer networks; Network topology; Routing; CQF; CSQF; Joint routing and scheduling | Industrial Internet applications require networks to guarantee deterministic end-to-end latency and zero packet loss at both the data link and network layers. Traditional best-effort communication models in consumer networks are insufficient to meet these stringent demands. To meet them, the IEEE 802.1 standards introduce Time-Sensitive Networking (TSN) at the data link layer, while the IETF proposes Deterministic Networking (DetNet) for the network layer. However, enabling seamless cross-domain communication between TSN and DetNet remains a significant challenge. This paper proposes a unified cross-domain network architecture and a time-slot alignment strategy that compensates for synchronization errors between the TSN and DetNet layers. We further develop a Joint Routing and Scheduling algorithm for Deterministic Cross-Domain Transmission (JRS-DCT), which simultaneously addresses routing and scheduling under cross-domain constraints. The algorithm leverages Cycle-Specified Queuing and Forwarding (CSQF) in DetNet and Cycle Queuing and Forwarding (CQF) in TSN to ensure bounded latency and deterministic transmission. Extensive simulations demonstrate that the proposed JRS-DCT algorithm significantly improves the scheduling success rate and effectively reduces network resource utilization compared to two baseline algorithms. These results validate the effectiveness and robustness of the proposed framework in supporting time-sensitive communication across heterogeneous network environments. | 10.1109/TNSM.2026.3679810 |
| Maria K. S. Barbosa, Mateus Machado, Kelvin L. Dias, Hansenclever Bassani, Adiel T. De Almeida Filho | Segment Routing Path Optimization for URLLC in NextG Mobile Transport Network via Multi-Armed Bandits | 2026 | Early Access | | In the Sixth Generation (6G)/Next Generation (NextG) of mobile networks, the enhanced Fifth Generation (5G) ultra-reliable and low-latency communications (URLLC) service category will play a pivotal role for innovative services. Existing solutions to embrace URLLC leverage radio link optimizations and Multi-access Edge Computing (MEC) deployment. In addition to URLLC requirements, applications not yet effectively materialized in 5G, such as remote surgery, Industrial IoT, cloud gaming, self-driving cars, and new ones (e.g., holographic communication), will require even more data transfer. Thus, an end-to-end (E2E) solution is required, since this 6G traffic surge may cross multiple routers in a transport network between the User Equipment (UE) and the application servers, even in MEC deployment scenarios. Such traffic flooding can lead to packet losses and increased latencies in the packet forwarding process. Recently, Segment Routing (SR) has emerged as an enabler for traffic engineering/path selection in transport networks. This proposal presents a solution based on SR, which defines the paths from the MEC application to the UE. To select the optimal path that minimizes packet loss and latency, we employ a Multi-Armed Bandit model to determine the best SR path (SRP). We categorized the problem as multi-objective and applied different exploration strategies, such as Epsilon Greedy (EG), Thompson Sampling (TS), Contextual Multi-Armed Bandit (CMAB), and n-step bootstrapping, to identify the best fit for the problem. We evaluated the proposal using two different network topologies, one balanced and one unbalanced. The results demonstrate that the CMAB reduced latency by 27% and 25% compared with EG for the balanced and unbalanced topologies, respectively. | 10.1109/TNSM.2026.3682208 |
| Luyao Jiang, Xinguo Ming, Mengli Wei | Privacy Decentralized Online Federated Learning for Smart Healthcare Service Systems | 2026 | Early Access | Broadcasting; Broadcast technology; Feedback; Circuits; Oscillators; Internet of Things; Communication systems; Internet; Computer networks; Communication networks; Decentralized federated learning; Online learning; Smart healthcare service system; Differential privacy; One-point residual feedback | Smart Healthcare Service Systems (SHSS) aim to integrate decentralized healthcare institutions, intelligent technologies, and end users into a cyber-physical system that enables high-quality medical decision-making. However, the sensitive nature of healthcare data presents significant privacy and security challenges, which hinder effective collaboration among healthcare providers. Moreover, existing research lacks a comprehensive theoretical framework that spans the full pipeline from data acquisition to intelligent decision services. To address these challenges, we propose a theoretical framework for SHSS that systematically analyzes data processing and user demand to establish the goals of secure, stable, and adaptive collaborative learning. Guided by these goals, a Decentralized Online Federated Learning (DOFL) network model is tailored for SHSS, where participating institutions interact through a decentralized federated learning structure. Building on this model, we design DP-DOOR (Differentially Private Decentralized Online Federated Learning with One-Point Residual Feedback), a fully decentralized algorithm that supports row-stochastic communication topologies, accommodating practical limitations where bidirectional synchronization is often infeasible. DP-DOOR ensures data privacy through differential privacy (DP) mechanisms and achieves efficient gradient estimation using a one-point residual feedback (OPRF) approach. Theoretical analysis shows that DP-DOOR provides ϵ-DP guarantees and achieves sub-linear regret. Experimental evaluations on diverse real-world medical datasets under both IID and non-IID settings demonstrate the algorithm's robustness and effectiveness in enabling secure, decentralized collaboration and enhancing adaptability in dynamic healthcare environments. | 10.1109/TNSM.2026.3680310 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training; Incremental learning; Adaptation models; Internet of Things; Convolutional neural networks; Reviews; Payloads; Network intrusion detection; Long short term memory; Federated learning; Network Intrusion Detection Systems; Internet of Things; Federated Learning; Class Incremental Learning; 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design and maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Xinyue Zhang, Xuan Zhou, Jie Ma, Zeqi Li, Feng He | Interference-Aware Multi-Metric Delay Evaluation and Optimization for Switched Networks | 2026 | Early Access | Aerospace electronics; Aerospace engineering; Radio broadcasting; Frequency modulation; Communication systems; Routing; Computer networks; Internet of Things; Ethernet; Software defined networking; Time-varying delay; Flow interference; Delay jitter; Worst-case delay; Routing optimization; Switched networks | Switched networks are essential to modern real-time systems, where packet delays must be tightly bounded with minimal variation. Traditional delay analysis often focuses on worst-case bounds, but may overlook delay jitter induced by fine-grained inter-flow interference, which can degrade real-time performance and stability. Existing routing schemes typically rely on proxy indicators such as link load or path length, offering limited explicit control over delay and jitter behavior. To address these limitations, we propose an interference-aware delay evaluation and optimization framework that models the encounter interval and magnitude of flow interference at the packet level. From this, we derive worst-case delay, average delay, and delay jitter, and integrate these metrics into a unified, tunable optimization objective. We design a K-shortest-path genetic algorithm to jointly reduce them. Experimental results over multiple traffic loads demonstrate consistent improvements in delay and jitter performance, indicating that the proposed approach is scalable and practical for delay-sensitive and stability-critical switched networks. | 10.1109/TNSM.2026.3680250 |
| Yaru Zhao, Yuanting Yan, Man He, Yuanwei Zhu, Yi Yue, Yakun Huang | EcoPath: Energy-Efficient Multi-Path Data Aggregation for Ubiquitous Connectivity Services | 2026 | Early Access | Payloads; Military aircraft; Space technology; Central Processing Unit; Circuits; Feedback; Electronic circuits; Circuits and systems; Microcontrollers; Microprocessors; Data aggregation; Multi-path transmission; Large-scale WSNs; Ubiquitous connectivity | Ubiquitous connectivity is a key 6G usage scenario, in which large-scale sensing systems deployed in remote and underserved regions must deliver heterogeneous sensing data under stringent energy budgets and deadline constraints. This paper presents EcoPath, a two-tier data aggregation framework for clustered large-scale sensor networks. EcoPath separates low-power intra-cluster collection from a high-rate multi-interface backhaul operated by cluster heads, where Multipath QUIC (MPQUIC) can be practically deployed to exploit path diversity. At the cluster head, EcoPath jointly integrates (i) a deadline-aware bundling controller that aggregates sensor frames into MTU-bounded bundles to amortize protocol overhead while bounding additional waiting time, and (ii) a robust multi-path scheduler that prioritizes packets using Weighted Earliest-Deadline-First (W-EDF) with fairness protection and selects backhaul paths via a stability-aware quality metric with hysteresis to avoid flapping under time-varying links. We further formulate an explicit energy–timeliness optimization and show how its outputs parameterize the online bundling and scheduling policies. Extensive simulations with realistic wireless effects, together with baselines and ablations, demonstrate that EcoPath improves energy efficiency and deadline satisfaction for large-scale aggregation. | 10.1109/TNSM.2026.3680781 |
| Hasanin Harkous, Ahan Kak, Alistair Urie, Heiko Straulino, Huanzhuo Wu, Huu-Trung Thieu, Nakjung Choi | Flat UP: A Converged RAN-Core Architecture for the 6G User Plane | 2026 | Early Access | Broadcasting; Broadcast technology; Central Processing Unit; Filtering; Filters; Matched filters; Electronic circuits; 5G mobile communication; Communication systems; Handover; 6G System Architecture; RAN-Core Convergence; 3GPP; QoS; User Plane; Control Plane | The ongoing industry shift toward Radio Access Network (RAN) disaggregation, virtualization, and cloudification has disrupted the conventional hierarchical design of cellular networks and opened the door to greater convergence between the RAN and core domains. Despite this progress, implementing such converged architectures in practice presents numerous challenges, including those related to protocol and architectural design, quality-of-service (QoS) assurance, control plane configuration, and support for emerging 6G-specific use cases. To address these challenges, this article presents the flat User Plane (UP) architecture, a novel framework for RAN-core convergence centered around a new 6G-native component: the Access User Plane Function (AUPF). The article outlines the key innovations in the newly proposed flat user plane architecture, including protocol- and feature-level design evolutions as well as enhancements to QoS provisioning. It then explores various counterpart Control Plane (CP) architectures, analyzing the impact of the new design on different 3GPP CP procedures. A concrete, system-level prototype implementation of the AUPF is developed, accompanied by a comprehensive over-the-air evaluation to assess both fundamental network performance metrics and user plane Quality of Experience (QoE). Additionally, multiple deployment models are examined to quantify the CP signaling overhead associated with different architectural options. The results demonstrate that the proposed flat UP architecture not only improves throughput, latency, and QoE but also reduces overall compute resource utilization when compared to the conventional hierarchical 5G user plane. The CP evaluation further provides practical insights and guidelines for real-world deployment scenarios. | 10.1109/TNSM.2026.3680720 |
| Kai-Chi Chen, Shan-Hsiang Shen | Named Image-Layer Networks for Containers | 2026 | Early Access | Radio broadcasting; Frequency modulation; Central Processing Unit; Contacts; Integrated circuits; Protocols; Communication systems; Internet; Network architecture; Computer networks; Named Data Networking; Container; CICD; In-network Cache | In recent years, continuous integration and continuous deployment (CICD) [1][2][3] have become industry standards in software development, aiming for automatic and efficient build and deployment processes. Nonetheless, integrating core frameworks and similar packages in each build often leads to redundancy, time wastage, and overall efficiency reduction. This study focuses on optimizing the CICD process, particularly in package management and data storage efficiency. It utilizes Network File System (NFS), which employs a robust caching mechanism, to store and deliver the necessary packages and resources. This approach significantly reduces redundant downloads and storage, enhancing efficiency. In addition, we identify challenges in package transmission and storage under the existing network architecture. To overcome these challenges, we propose the Named Image Layer Networking (NIN) technology to optimize package management and retrieval. The integration of NIN allows for a more effective selection of optimal caching nodes, thereby further improving the efficiency of the CICD process. | 10.1109/TNSM.2026.3681754 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; Cascaded channels; Cognitive radio networks; Deep reinforcement learning; Dynamic hybrid reconfigurable intelligent surfaces; Energy harvesting; Poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Cheng Ren, Jinsong Gao, Yu Wang, Yaxin Li, Hongwei Li | GCN-Transformer Assisted Live SFC Migration with Hierarchical Reinforcement Learning in Mobile Edge Computing | 2026 | Early Access | Feeds; Antennas; Filtering theory; Collaborative filtering; Filters; Filtering; Internet of Things; Routing; Communication systems; Service function chaining; Service function chain; Live migration; Hierarchical deep reinforcement learning; Transformer; Graph Convolutional Network | Empowered by network function virtualization (NFV), mobile edge computing aims to provide low-latency and ultra-reliable network services to mobile end users, achieved as a service function chain (SFC) consisting of a series of ordered virtual network functions (VNFs). Due to user mobility, live SFC migration is imperative to avoid Quality of Service (QoS) degradation. Recent advances mainly make separate decisions on VNF node remapping and migration path routing in a heuristic manner, or implement both through reinforcement learning within a single agent of ill-defined policy and action space. In this paper, given the next access node, we first formulate the live SFC migration problem as an integer linear programming (ILP) model to achieve optimal solutions. Then, we present HRL-QC, a hierarchical reinforcement learning framework that jointly optimizes VNF destination node remapping, migration path and post-migration service path selections for QoS-aware and cost-efficient live SFC migration. A GCN-Transformer block is introduced to capture long-range VNF-to-physical node dependencies, while a two-level actor-critic design couples the decisions through inter-level reward passing. Extensive evaluations show that HRL-QC outperforms the state-of-the-art in energy consumption, migration time, end-to-end service delay, and migration success rate, while remaining within a small margin of the optimal ILP solution. | 10.1109/TNSM.2026.3681690 |
| Jun Xu, Dejun Yang, Abdulelah Talea | Communication-Efficient Client Selection for Federated Learning with Unknown Channel State | 2026 | Early Access | Broadcasting; Broadcast technology; Feedback; Circuits; Internet of Things; Communication systems; Wireless communication; Internet; Mobile handsets; Uplink; Federated learning; Client selection; Unknown channel state; Whittle's index; RMAB | Federated learning (FL) utilizes distributed edge devices for training on local datasets, which preserves data privacy at the cost of frequent communication of model parameters. The channel state between clients and the aggregator affects the successful delivery of model parameters. A client under a poor channel state may fail to deliver its local model parameters, resulting in wasted energy. Besides, obtaining the channel state incurs extra overhead, which may degrade communication efficiency. This motivates us to investigate the client selection problem for FL with unknown channel state. We first derive an upper bound on the convergence of FL, which reflects the effects of the channel state and client selection decisions. We then formulate a client selection problem considering both convergence and energy consumption. To solve this problem, we further transform it into a restless multi-armed bandit (RMAB) problem. We prove its indexability and propose an index-based client selection algorithm, termed IDXSel, which has low time complexity, is easy to implement, and is proved to be asymptotically optimal. We compare our IDXSel algorithm with the FedAvg, TransP, IS, FedNorm, UCB-based, and ϵ-greedy-based algorithms on the MNIST and CIFAR-10 datasets. Results show that our algorithm achieves comparable or higher accuracy than the baselines, while wasting more than 5× less energy than the worst of the baselines across all evaluated scenarios. | 10.1109/TNSM.2026.3682080 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering; Application specific integrated circuits; Filters; Protocols; Smart contracts; Communication systems; Proof of stake; Proof of Work; Internet; Amplitude shift keying; Blockchain; Decentralized Finance (DeFi); Lending; Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols, which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Zuodong Wu, Dawei Zhang, Mianxiong Dong, Kaoru Ota | PDRAA: An Efficient Privacy Data Retrieval Protocol with Anonymous Authorization Based on Verifiable Credential | 2026 | Early Access | Payloads; Broadcasting; Broadcast technology; Communication systems; Protocols; Internet of Things; Computer networks; Internet; Radio access networks; Regional area networks; GDPR; VC; Anonymous authorization; Lawfulness; Data minimization; Labeled PSI; Batch retrieval; UC security | In the data-driven era, the unchecked collection and processing of personal data has given rise to serious privacy concerns. In response, the General Data Protection Regulation (GDPR) was introduced to grant individuals stronger control over the use of their data. Privacy data retrieval methods show considerable promise in this context, but further improvements are required to balance the principles of lawfulness and data minimization. To address this problem, we propose PDRAA, an efficient privacy data retrieval protocol with anonymous authorization based on the verifiable credential (VC). Specifically, our designed VC achieves anonymous identification of data subjects and facilitates fine-grained access control by supporting selective disclosure of attributes. By combining VC with non-interactive zero-knowledge (NIZK) proofs, PDRAA enables data subjects to anonymously authenticate via VC presentation. This allows the data controller to verify the legitimacy of retrieval requests while ensuring compliance with the principle of data minimization. Besides, PDRAA introduces a re-randomization mechanism to prevent linkability attacks during the authorization process and provides lightweight, flexible authorization revocation. Moreover, we utilize Labeled Private Set Intersection (Labeled PSI) technology to meet the privacy requirements of participants and support batch retrieval. We conduct a comprehensive security analysis of our protocol within the Universal Composability framework. Experimental results demonstrate that PDRAA outperforms existing methods in terms of performance, which is significant for promoting compliance with GDPR. | 10.1109/TNSM.2026.3681957 |
| Vibha Jain, Prabal Verma, Mohit Kumar, Aryan Kaushik | Blockchain-enabled Incentive Mechanism for Federated Learning: A Multi-Agent Deep Deterministic Policy Gradient Approach | 2026 | Early Access | Broadcasting Broadcast technology Central Processing Unit Circuits Electronic circuits Feedback Communication systems Internet of Things Internet Wireless communication Federated Learning Incentive Mechanism Blockchain Multi-Agent Deep Deterministic Policy Gradient MA-DDPG | The rapid growth of the Internet of Things (IoT) generates massive data, enabling advanced machine learning. However, the traditional approach of centralized model training raises issues of high bandwidth consumption and privacy. Federated learning (FL) mitigates this by enabling local training on raw data with centralized aggregation to generate the global model. The effectiveness of FL depends upon the active participation of resource-constrained local devices. This article presents a blockchain-enabled incentive mechanism for FL leveraging the Multi-Agent Deep Deterministic Policy Gradient (MA-DDPG) algorithm. Specifically, an incentive scheme is formulated with the MEC (Mobile Edge Computing) server as the leader agent and local devices as learning agents in a cooperative environment. We formalize a two-stage Stackelberg game to establish a Nash equilibrium, which ensures fair and utility-optimized reward distribution for the MEC server and devices. A Markov Decision Process (MDP) is utilized to solve the equilibrium with incomplete knowledge, and utilities are optimized using the MA-DDPG algorithm. The proposed model considers data quality and device contribution to obtain optimal reward distribution and participation strategies dynamically. The experimental results show an approximate 38% improvement in MEC utility and approximately 17% in device utility, with rapid convergence (approximately 300-500 episodes) at a learning rate of 0.0001. | 10.1109/TNSM.2026.3682129 |
| Archana Ojha, Om Jee Pandey, Prasenjit Chanak | Energy-Efficient Network Cut Detection and Recovery Mechanism for Cluster-Based IoT Networks | 2026 | Vol. 23, Issue | Wireless sensor networks Data collection Energy consumption Relays Internet of Things Delays Data communication Detection algorithms Smart cities Routing Wireless sensor networks (WSNs) Internet of Things (IoT) data routing network cut detection and recovery reinforcement learning brain storm optimization (RLBSO) mobile data collector (MDC) | Recently, the Internet of Things (IoT) has found widespread applications in diverse fields, including environmental monitoring, Industry 4.0, smart cities, and smart agriculture. In these applications, sensor nodes form Wireless Sensor Networks (WSNs) and collect data from the monitored environment. Sensor nodes are vulnerable to various faults, including battery depletion and hardware malfunctions. These faulty nodes cut/partition the network into several isolated segments. As a result, many non-faulty nodes become disconnected from the Base Station (BS)/Sink and are unable to transmit their data to the BS, leading to the premature demise of the network. Network cuts also significantly degrade overall network performance. Once the network is divided into isolated segments, it is very difficult to detect them and collect data from them. Therefore, this paper proposes a Mobile Data Collector (MDC)-based data-gathering approach for WSNs to collect data from isolated segments. This paper proposes a novel MDC-based network cut detection algorithm that identifies the formation of network cuts in WSNs. A network recovery algorithm is also proposed to enable data collection from the isolated segments. Furthermore, this paper proposes a Reinforcement Learning Brain Storm Optimization (RLBSO) algorithm for optimal selection of Rendezvous Points (RPs) and optimal MDC path design, which significantly reduces data-gathering time across isolated network segments. The simulation and testbed results show that the proposed approach outperforms existing state-of-the-art approaches in terms of network lifetime, data collection ratio, energy consumption, and latency. | 10.1109/TNSM.2026.3677868 |
| Ei Theingi, Lokman Sboui, Diala Naboulsi | Adaptive and Energy-Efficient Deployment of Robotic Airborne Base Stations: A Deep Reinforcement Learning Approach | 2026 | Vol. 23, Issue | Energy efficiency Base stations Adaptation models Energy consumption Vehicle dynamics Optimization Adaptive systems Robot kinematics Grasping Fluctuations Actor-critic deep reinforcement learning dynamic network deployment energy efficiency robotic airborne base stations sustainable wireless networks | The increasing energy demands of future wireless networks drive the need for intelligent and adaptive deployment strategies. Traditional methods often lack the flexibility required to handle the spatio-temporal fluctuations inherent in modern communication environments. To address this challenge, we investigate the energy-efficient deployment of Robotic Airborne Base Stations (RABSs) in practical scenarios, such as managing sudden traffic surges during large-scale public events and providing emergency coverage in disaster-stricken areas where terrestrial infrastructure is compromised. We propose a novel Deep Reinforcement Learning (DRL)-based framework for the energy-efficient deployment of multiple RABSs. Unlike existing approaches, our framework features both centralized and decentralized Actor-Critic DRL, enabling scalable and adaptive decision-making. The centralized model leverages global network information to optimize the collective deployment of RABSs, while the multi-agent decentralized approach allows RABSs to make independent yet coordinated decisions based on local observations, ensuring scalability in large-scale networks. In addition, we introduce a state-action representation that captures spatio-temporal traffic variations and energy consumption dynamics. Our simulations validate the effectiveness of the proposed framework, demonstrating significant improvements in energy efficiency and adaptability compared to heuristic, Gauss-Markov, and Q-Learning models. Furthermore, comparison with an exhaustive search benchmark confirms that our approach achieves optimal energy efficiency with significantly lower computational complexity. | 10.1109/TNSM.2026.3678488 |