Last updated: 2025-12-05 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Giovanni Simone Sticca, Memedhe Ibrahimi, Francesco Musumeci, Nicola Di Cicco, Massimo Tornatore | Hollow-Core Fibers for Latency-Constrained and Low-Cost Edge Data Center Networks | 2025 | Early Access | Optical fiber networks; Costs; Optical fiber communication; Data centers; Optical fiber devices; Optical fibers; Optical attenuators; Network topology; Fiber nonlinear optics; Throughput; Hollow Core Fiber; edge Data Centers; Network Cost Minimization; Latency-Constrained Networks | Recent advancements in Hollow Core Fiber (HCF) production are paving the way toward ground-breaking new opportunities for HCF in 6G-and-beyond applications. While Standard Single-Mode Fibers (SSMF) have been the go-to solution in optical communications for the past 50 years, HCF is expected to be a turning point in how next-generation optical networks are planned and designed. Whereas in SSMF the optical signal is transmitted in a silica core, in HCF it is transmitted in a hollow (i.e., air) core, significantly reducing latency (by 30%) while also decreasing attenuation (as low as 0.11 dB/km) and non-linearities. In this study, we investigate the optimal placement of HCF in latency-constrained optical networks to minimize the number of edge Data Centers (edgeDCs), while also ensuring physical-layer validation. Given the optimized placement of HCF and edgeDCs, we minimize the overall network cost in terms of transponders (TXPs) and Wavelength Selective Switches (WSSes) by optimizing the type, number, and transmission mode of TXPs, and the type and number of WSSes. We develop a Mixed Integer Nonlinear Programming (MINLP) model and a Genetic Algorithm (GA) to solve these problems. We validate the GA against the MINLP model on four synthetically generated topologies and perform extensive numerical evaluations on a realistic 25-node metro aggregation topology and a 22-node national topology. We show that by upgrading 25% of the links to HCF, we can reduce the number of edgeDCs by up to 40%, while also reducing network equipment cost by up to 38%, compared to an SSMF-only network. | 10.1109/TNSM.2025.3625391 |
| Junyu Li, Fei Zhou, Qi Xie, Nankun Mu, Yining Liu | Efficient Conditional Privacy-Preserving Heterogeneous Broadcast Signcryption for Collision Warning in VANETs | 2025 | Early Access | Security; Privacy; Encryption; Costs; Road side unit; Internet of Vehicles; Authentication; Alarm systems; Vehicle dynamics; Receivers; Authentication; heterogeneous; Cooperative Collision Warning (CCW); broadcast signcryption | Real-time performance is of utmost significance for communication in certain vehicle-to-infrastructure (V2I) scenarios such as collision warning systems. Vehicle-to-Everything (V2X) broadcast signcryption is well suited to these scenarios. However, current solutions prioritize generality and may not be suitable for specialized communication situations, and many broadcast signcryption schemes suffer from low verification efficiency because decryption is performed before verification. Moreover, most existing broadcast signcryption schemes based on a single cryptosystem are not applicable to the heterogeneous networks of different Internet of Vehicles systems. To address these challenges, an efficient conditional privacy-preserving heterogeneous broadcast signcryption scheme (ECPHBS) is proposed, improving roadside unit verification to support batch verification of ciphertexts through a pre-authentication mechanism, and allowing vehicles to communicate securely with roadside units under the certificateless cryptosystem and the identity-based cryptosystem. Meanwhile, a tracking and revocation mechanism is introduced to achieve conditional privacy protection. Our formal security analysis demonstrates that the ECPHBS scheme achieves IND-CCA2 security under the CDH assumption and EUF-CMA security under the ECDL problem. Experimental results confirm its superior verification efficiency, especially with an increasing number of receiving RSUs, and a constant communication overhead. Furthermore, the RSU service capability analysis shows that our scheme enables RSUs to fully handle communication requests from approximately 500 vehicles within a 150-meter range, outperforming comparative schemes. | 10.1109/TNSM.2025.3619109 |
| Aitor Brazaola-Vicario, Vasileios Kouvakis, Stylianos E. Trevlakis, Alejandra Ruiz, Alexandros-Apostolos A. Boulogeorgos, Theodoros A. Tsiftsis, Dusit Niyato | High-Fidelity Coherent-One-Way QKD Simulation Framework for 6G Networks: Bridging Theory and Reality | 2025 | Early Access | Protocols; Cows; Security; Optical fiber networks; Optical fiber theory; Optical fiber polarization; Hands; Receivers; Prevention and mitigation; Information theory; Coherent-one-way (COW); experimental validation; secrecy key rate; simulation framework; quantum bit error rate (QBER); quantum key distribution (QKD) | Quantum key distribution (QKD) has emerged as a promising solution for guaranteeing information-theoretic security. Inspired by this, a great amount of research effort has recently been put into designing and testing QKD systems as well as articulating preliminary application scenarios. However, due to the considerably high cost of QKD equipment and a lack of QKD communication system design tools, wide deployment of such systems and networks is challenging. Motivated by this, this paper introduces a QKD communication system design tool. First, we articulate the key operational elements of QKD and explain the feasibility and applicability of coherent-one-way (COW) QKD solutions. Next, we document the corresponding simulation framework and define the key performance metrics, i.e., the quantum bit error rate (QBER) and the secrecy key rate. To verify the accuracy of the simulation framework, we design and deploy a real-world QKD setup. We perform extensive experiments for three deployments of diverse transmission distances in the presence or absence of a QKD eavesdropper. The results reveal an acceptable match between simulations and experiments, rendering the simulation framework a suitable tool for QKD communication system design. | 10.1109/TNSM.2025.3619551 |
| Mario Di Mauro | Performance Assessment of Multi-Class 5G Chains: A Non-Product-Form Queueing Networks Approach | 2025 | Early Access | Delays; 5G mobile communication; Queueing analysis; MONOS devices; Load modeling; Data models; Calculus; Resource management; Quality of service; Optimization; Performance assessment of 5G chains; Queueing Networks; Multi-class SFC models | This work presents a performance assessment of 5G Service Function Chains (SFCs) by examining and comparing two architectural models. The first is the Mono chain model, which relies on a single path for data processing through a series of 5G nodes, ensuring straightforward and streamlined service delivery. The second is the Poly (or sliced) chain model, which leverages multiple paths for data flow, enhancing load balancing and resource distribution across nodes to improve network resilience. To evaluate the performance of these models, we introduce a performance indicator that captures two critical stages: the time required for user registration to the 5G infrastructure and the time needed for Protocol Data Unit (PDU) session establishment. From a performance standpoint, these stages are deemed crucial by the European Telecommunications Standards Institute (ETSI), as they can adversely affect both objective and subjective network parameters. Using a non-product-form queueing network approach, we develop an algorithm named ChainPerfEval, which accurately estimates the proposed performance indicator. This approach outperforms standard queueing network models, where the exponential assumption of inter-arrival and/or service times may lead to an inaccurate estimation of the performance indicator. An extensive experimental campaign is conducted using an Open5GS testbed to simulate real-world traffic scenarios, categorizing 5G flows into three priority classes: gold (high priority), silver (moderate priority), and bronze (low priority). The results provide significant insights into the trade-offs between the Mono and Poly chain models, particularly in terms of resource allocation strategies and their impact on SFC performance. Ultimately, this comprehensive analysis offers valuable and actionable recommendations for network operators seeking to optimize service delivery in multi-class 5G environments, ensuring enhanced user experience and efficient resource utilization. | 10.1109/TNSM.2025.3588304 |
| Jiaen Lv, Yifang Zhang, Shaowei Wang | VIDTRA: An Efficient and Resilient Video Preloading System | 2025 | Early Access | Videos; Trajectory; Codecs; Predictive models; Computational modeling; Data models; Bit rate; Real-time systems; Memory; Atmospheric modeling; Network situation map; trajectory prediction; video preloading | With the increasing demand for video-based applications, the clarity and fluidity of videos have garnered widespread attention. Current video playback mechanisms either allocate resources to clients with good channel quality or maintain video playback continuity at low bit rates, affecting the user viewing experience. In this work, we propose a video preloading system, namely VIDTRA, which departs from the current paradigm and explores a resilient design scheme. The central insight in VIDTRA is to utilize network situation maps and user trajectory prediction to forecast the serving cell and received signal strength, thereby determining the duration of video preloading. Considering the predictability of public transport route trajectories, our system is primarily designed to enhance the video watching experience for users on public transport, and encompasses functionalities such as route clustering and the identification of users’ boarding and alighting. Using real-world data collected from user equipment, we thoroughly evaluate and demonstrate the efficacy of VIDTRA. Results from the experimental evaluations show that VIDTRA can precisely estimate the future signal strength received by users and initiate video preloading before they enter areas with weak signal quality, thus reducing video interruptions while guaranteeing high definition. | 10.1109/TNSM.2025.3620295 |
| Hesam Tajbakhsh, Ricardo Parizotto, Alberto Schaeffer-Filho, Israat Haque | Reinforcement Learning-Based In-Network Load Balancing | 2025 | Early Access | Load management; Servers; Load modeling; Prediction algorithms; Data centers; Q-learning; Predictive models; Mathematical models; Data models; Computational modeling; Load Balancing; Data Plane Programmability; Reinforcement Learning | Ensuring consistent performance becomes increasingly challenging with the growing complexity of applications in data centers. This is where load balancing emerges as a vital component. A load balancer distributes network or application traffic across various servers, resources, or pathways. In this article, we present P4WISE, a load balancer designed for software-defined networks. Operating on both the data and control planes, it employs reinforcement learning to distribute computational loads at inter- and intra-server granularity. Evaluation results demonstrate that P4WISE predicts the optimal load balancing strategy with a remarkable 90% accuracy in dynamic scenarios. Notably, unlike supervised or unsupervised methods, it eliminates the need for retraining when the environment undergoes minor or major changes. Instead, P4WISE autonomously adjusts and retrains itself based on observed states within the data center. | 10.1109/TNSM.2025.3621126 |
| Zheng Gao, Danfeng Sun, Jianyong Zhao, Huifeng Wu, Jia Wu | Cost-Minimized Data Edge Access Model for Digital Twin Using Cloud-Edge Collaboration | 2025 | Early Access | Data acquisition; Cloud computing; Digital twins; Costs; Edge computing; Accuracy; Optimization; Data models; Computational modeling; Protocols; Data edge access; digital twin; cloud-edge collaboration; edge cost minimization | Industrial applications involving digital twins (e.g., behavior simulation) demand highly accurate, low-latency data, making real-time data acquisition critical. To meet performance demands, devices that do not support asynchronous communication need to acquire data at high frequency. In cloud-edge collaboration schemes, edge computing nodes typically acquire the data. However, high-frequency data acquisition and processing impose considerable costs, posing significant challenges for these resource-constrained nodes. To address this problem, we propose a model called Cost-minimized Data Edge Access (CDEA) that can dynamically minimize the edge costs while satisfying long-term performance requirements. CDEA quantifies data performance by decomposing the workflow of industrial systems into basic action units. These units are used to model data acquisition, data processing, data transmission, and cloud computing. Then, a cost minimization problem is formulated based on these components. To address irregular data changes and the general lack of available statistics on the system’s network status, the framework incorporates Lyapunov optimization to transform the long-term guarantee on data performance into a series of instantaneous decision problems. Finally, a heuristic algorithm identifies the optimal data acquisition strategy. To validate CDEA’s effectiveness, we implemented it in two representative digital twin scenarios: cathode plate stripping and AGV transportation. Experimental results demonstrate that CDEA can indeed reduce both edge costs and cloud resource consumption while still ensuring high data performance. | 10.1109/TNSM.2025.3621548 |
| Seyed Soheil Johari, Massimo Tornatore, Nashid Shahriar, Raouf Boutaba, Aladdin Saleh | Active Learning for Transformer-Based Fault Diagnosis in 5G and Beyond Mobile Networks | 2025 | Early Access | Transformers; Fault diagnosis; Data models; Labeling; Training; 5G mobile communication; Costs; Computer architecture; Active learning; Complexity theory; Fault Diagnosis; Active Learning; Transformers | As 5G and beyond mobile networks evolve, their increasing complexity necessitates advanced, automated, and data-driven fault diagnosis methods. While traditional data-driven methods falter with modern network complexities, Transformer models have proven highly effective for fault diagnosis through their efficient processing of sequential and time-series data. However, these Transformer-based methods demand substantial labeled data, which is costly to obtain. To address the lack of labeled data, we propose a novel active learning (AL) approach designed for Transformer-based fault diagnosis, tailored to the time-series nature of network data. AL reduces the need for extensive labeled datasets by iteratively selecting the most informative samples for labeling. Our AL method exploits the interpretability of Transformers, using their attention weights to create dependency graphs that represent processing patterns of data points. By formulating a one-class novelty detection problem on these graphs, we identify whether an unlabeled sample is processed differently from labeled ones in the previous training cycle and designate novel samples for expert annotation. Extensive experiments on real-world datasets show that our AL method achieves higher F1-scores than state-of-the-art AL algorithms with 50% fewer labeled samples and surpasses existing methods by up to 150% in identifying samples related to unseen fault types. | 10.1109/TNSM.2025.3622149 |
| Jingchao Tan, Tiancheng Zhang, Cheng Zhang, Chenyang Wang, Chao Qiu, Xiaofei Wang, Mohsen Guizani | Delay-Aware and Energy-Efficient Integrated Optimization System for 5G Networks | 2025 | Early Access | 5G mobile communication; Delays; Energy efficiency; Energy consumption; Optimization; Quality of service; Resource management; Heuristic algorithms; Spatiotemporal phenomena; Base stations; Energy Efficient; Delay Aware; Deep Reinforcement Learning; 5G Networks | To meet the demands of high-capacity and low-delay services, Fifth Generation (5G) Base Stations (BSs) are typically deployed in ultra-dense configurations, especially in urban areas. While this densification enhances coverage and service quality, it also leads to substantially increased energy consumption. However, the dense deployment pattern makes BS workloads more responsive to the spatiotemporal variations in user behavior, offering opportunities for energy-saving strategies that dynamically adjust BS operation states. In this context, we propose a Delay-aware and Energy-efficient Integrated Optimization System (DEIS) based on Deep Reinforcement Learning (DRL), which jointly optimizes energy consumption and network delay while maintaining user satisfaction. DEIS leverages a real-world dataset collected from operational 5G BSs provided by partner network operators, containing both BS deployment data and high-volume user request logs. Extensive simulations demonstrate that DEIS can achieve a 41% reduction in energy consumption while ensuring reliable delay performance. | 10.1109/TNSM.2025.3623778 |
| Akhila Rao, Magnus Boman | Self-Supervised Pretraining for User Performance Prediction Under Scarce Data Conditions | 2025 | Early Access | Generators; Training; Self-supervised learning; Predictive models; Noise; Data models; Data augmentation; Base stations; Vectors; Adaptation models; user performance prediction; telecom networks; mobile networks; machine learning; self-supervised learning; structured data; tabular data; generalizability; sample efficiency | Predicting user performance at the base station in telecom networks is a critical task that can significantly benefit from advanced machine learning techniques. However, labeled data for user performance are scarce and costly to collect, while unlabeled data, consisting of base station metrics, are more readily accessible. Self-supervised learning provides a means to leverage this unlabeled data, and has seen remarkable success in the domains of computer vision and natural language processing, with unstructured data. Recently, these methods have been adapted to structured data as well, making them particularly relevant to the telecom domain. We apply self-supervised learning to predict user performance in telecom networks. Our results demonstrate that even with simple self-supervised approaches, in low-labeled scenarios (e.g., only 100 labeled samples), the percentage of variance of the target values explained by the model can be improved fourfold, from 15% to 60%. Moreover, to promote reproducibility and further research in the domain, we open-source a dataset creation framework and a specific dataset created from it that captures scenarios that have been deemed to be challenging for future networks. | 10.1109/TNSM.2025.3622892 |
| Abdurrahman Elmaghbub, Bechir Hamdaoui | HEEDFUL: Leveraging Sequential Transfer Learning for Robust WiFi Device Fingerprinting Amid Hardware Warm-Up Effects | 2025 | Early Access | Fingerprint recognition; Radio frequency; Hardware; Wireless fidelity; Accuracy; Performance evaluation; Training; Wireless communication; Estimation; Transfer learning; WiFi Device Fingerprinting; Hardware Warm-up Consideration; Hardware Impairment Estimation; Sequential Transfer Learning; Temporal-Domain Adaptation | Deep Learning-based RF fingerprinting approaches struggle to perform well in cross-domain scenarios, particularly during hardware warm-up. This often-overlooked vulnerability has been jeopardizing their reliability and their adoption in practical settings. To address this critical gap, in this work, we first dive deep into the anatomy of RF fingerprints, revealing insights into the temporal fingerprint variations during and after hardware stabilization. We then introduce HEEDFUL, a novel framework harnessing sequential transfer learning and targeted impairment estimation, which addresses these challenges with remarkable consistency, eliminating blind spots even during challenging warm-up phases. Our evaluation showcases HEEDFUL’s efficacy, achieving remarkable classification accuracies of up to 96% during the initial device operation intervals, far surpassing traditional models. Furthermore, cross-day and cross-protocol assessments confirm HEEDFUL’s superiority, achieving and maintaining high accuracy during both the stable and initial warm-up phases when tested on WiFi signals. Additionally, we release WiFi type B and N RF fingerprint datasets that, for the first time, incorporate both the time-domain representation and real hardware impairments of the frames. This underscores the importance of leveraging hardware impairment data, enabling a deeper understanding of fingerprints and facilitating the development of more robust RF fingerprinting solutions. | 10.1109/TNSM.2025.3624126 |
| Hernani D. Chantre, Nelson L. S. da Fonseca | Cost Analysis of VNF Distributions in 5G MEC-Based Networks With Protection Scheme | 2025 | Early Access | Costs; Protection; 5G mobile communication; Reliability; Optimization; Computer network reliability; Resource management; Reliability engineering; Multi-access edge computing; Low latency communication; MEC location problem; Protection schemes; Multi-access Edge Computing; 5G; NFV | This paper addresses the optimal placement of Multi-Access Edge Computing (MEC) nodes in 5G networks, aiming to meet stringent performance requirements while minimizing cost. It explores the impact of various Virtual Network Function (VNF) distribution strategies on overall network cost, specifically examining the 1:1, 1:N, and 1:N:K protection schemes. To tackle the MEC location problem, bi-objective nonlinear mathematical models are employed for exact solutions in small-scale networks, while the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is applied for larger networks. The results reveal that VNF distribution can significantly escalate network costs, with fully distributed VNFs incurring the highest expenses. However, the enhanced protection scheme demonstrates improved cost-efficiency. These findings highlight the critical role of strategic MEC placement and intelligent resource allocation in building scalable, resilient, cost-effective 5G infrastructures. | 10.1109/TNSM.2025.3619023 |
| Long Chen, Yukang Jiang, Zishang Qiu, Donglin Zhu, Zhiquan Liu, Zhenzhou Tang | Towards Energy-Saving Deployment in Large-Scale Heterogeneous Wireless Sensor Networks for Q-coverage and C-connectivity: An Efficient Parallel Framework | 2025 | Early Access | Wireless sensor networks; Metaheuristics; Sensors; Costs; Three-dimensional displays; Mathematical analysis; Energy consumption; Artificial neural networks; Monitoring; Data communication; Q-Coverage; C-connectivity; energy-saving; large-scale heterogeneous WSNs; parallel framework | Efficient deployment of thousands of energy-constrained sensor nodes (SNs) in large-scale wireless sensor networks (WSNs) is critical for reliable data transmission and target sensing. This study addresses the Minimum Energy Q-Coverage and C-Connectivity (MinEQC) problem for heterogeneous SNs in three-dimensional environments. MnPF (Metaheuristic–Neural Network Parallel Framework), a two-phase method that can embed most metaheuristic algorithms (MAs) and neural networks (NNs), is proposed to address the above problem. Phase-I partitions the monitoring region via divide-and-conquer and applies NN-based dimensionality reduction to accelerate parallel optimization of local Q-coverage and C-connectivity. Phase-II employs an MA-based adaptive restoration strategy to restore connectivity among subregions and systematically assesses how different partitioning strategies affect the number of restoration steps. Experiments with four NNs and twelve MAs demonstrate the efficiency, scalability, and adaptability of MnPF, while ablation studies confirm the necessity of both phases. MnPF bridges scalability and energy efficiency, providing a generalizable approach to SN deployment in large-scale WSNs. | 10.1109/TNSM.2025.3640070 |
| Junfeng Tian, Junyi Wang | D-Chain: A Load-Balancing Blockchain Sharding Protocol Based on Account State Partitioning | 2025 | Early Access | Sharding; Blockchains; Load modeling; Delays; Throughput; Scalability; Resource management; Load management; Bitcoin; System performance; Blockchain sharding; account split; load balance | Sharding has become one of the key technologies for improving the performance of blockchain systems. However, the transaction load imbalance between shards caused by extremely hot accounts leads to unbalanced utilization of system resources, while the growth of cross-shard transactions with the number of shards limits the expansion of sharding systems; as a result, sharding systems do not achieve the desired performance improvement. We propose a new blockchain sharding system called D-Chain. D-Chain splits and distributes the state of extremely hot accounts across multiple shards, allowing transactions for an account to be processed in multiple shards, thus balancing the load between shards and reducing the number of cross-shard transactions. We have implemented a prototype of D-Chain and evaluated its performance using real-world Ethereum transactions. Experimental results show that the proposed system achieves a more balanced shard load and outperforms other baselines in terms of throughput, transaction latency, and cross-shard transaction ratio. | 10.1109/TNSM.2025.3640097 |
| Jiahe Xu, Chao Guo, Moshe Zukerman | Virtual Network Embedding for Data Centers With Composable or Disaggregated Architectures | 2025 | Early Access | Servers; Data centers; Greedy algorithms; Virtual links; Resource management; Power demand; Computer architecture; Bandwidth; Topology; Scalability; Virtual network embedding; virtual data center embedding; composable or disaggregated architecture | Virtual Network Embedding (VNE) is an important problem in network virtualization, involving the optimal allocation of resources from substrate networks to service requests in the form of Virtual Networks (VNs). This paper addresses a specific VNE problem in the context of Composable/Disaggregated Data Center (DDC) networks, characterized by the decoupling and reassembly of different resources into resource pools. Existing research on the VNE problem within Data Center (DC) networks primarily focuses on the Server-based DC (SDC) architecture. In the VNE problem within SDCs, a virtual node is typically mapped to a single server to fulfill its requirements for various resources. However, in the case of DDCs, a virtual node needs to be mapped to different resource nodes for different resources. We aim to design an optimization method to achieve the most efficient VNE within DDCs. To this end, we provide an embedding scheme that acts on each arriving VN request to embed the VN with minimized power consumption. Through this scheme, we demonstrate that we also achieve a high long-term acceptance ratio. We provide Mixed Integer Linear Programming (MILP) and scalable greedy algorithms to implement this scheme. We validate the efficiency of our greedy algorithms by comparing their performance against the MILP for small problems and demonstrate their superiority over baseline algorithms through comprehensive evaluations using both synthetic simulations and real-world Google cluster traces. | 10.1109/TNSM.2025.3639958 |
| Sajjad Alizadeh, Majid Khabbazian | On Scalability Power of Payment Channel Networks | 2025 | Early Access | Topology; Routing; Network topology; Scalability; Blockchains; Trees (botanical); Analytical models; Stars; Costs; Channel capacity; Blockchain; Scalability; Payment Channel Networks; Lightning Network | Payment channel networks have great potential to scale cryptocurrency payment systems. However, their scalability power is limited as payments occasionally fail in these networks due to various factors. In this work, we study these factors and analyze the limitations they impose. To this end, we propose a model in which a payment channel network is viewed as a compression method. In this model, the compression rate is defined as the ratio of the total number of payments entering the network to the total number of transactions that are placed on the blockchain to handle failed payments or (re)open channels. We analyze the compression rate and its upper limit, referred to as compression capacity, for various payment models, channel-reopening strategies, and network topologies. For networks with a tree topology, we show that the compression rate is inversely proportional to the average path length traversed by payments. For general networks, we show that if payment rates are even slightly asymmetric and channels are not reopened regularly, a constant fraction of payments will always fail regardless of the number of channels, the topology of the network, the routing algorithm used, and the amount of allocated funds in the network. We also examine the impact of routing and channel rebalancing on the network’s compression rate. We show that rebalancing and strategic routing can enhance the compression rate in payment channel networks where channels may be reopened, differing from the established literature on credit networks, which suggests these factors do not have an effect. | 10.1109/TNSM.2025.3640098 |
| Sheng-Wei Wang, Show-Shiow Tzeng | An Accurate and Efficient Analytical Model for Security Evaluation of PoW Blockchains With Multiple Independent Selfish Miners | 2025 | Early Access | Blockchains; Analytical models; Accuracy; Security; Computational modeling; Numerical models; Bitcoin; Consensus protocol; Proof of Work; Closed-form solutions; Blockchain; selfish mining attack; Markov chain; rewards analysis | Selfish mining poses significant security challenges to Proof-of-Work (PoW) blockchains by allowing strategic miners to gain disproportionate rewards through protocol deviation. While the impact of a single selfish miner has been extensively studied, the security implications of multiple independent selfish miners remain insufficiently understood. This paper presents an accurate and efficient analytical model for security evaluation of PoW blockchains under multiple independent selfish mining behaviors. The blockchain dynamics are modeled as a Markov chain with a novel state aggregation approximation, enabling closed-form estimation of miner rewards. Numerical results show that the proposed model achieves high accuracy, with deviations typically less than 5.09% compared to simulations in a blockchain with two selfish miners. In a blockchain with more than two selfish miners, the proposed analytical model yields an even more accurate approximation, with less than 2% error. We also propose a truncation mechanism to reduce the number of states in the proposed Markov chain. Numerical results show that the proposed analytical model with truncation significantly reduces the computation time while maintaining accuracy. Two use cases are presented: determining the profitable threshold of total selfish mining power and analyzing reward disproportionality between strong and weak selfish miners. The proposed model provides a practical framework for quantifying incentive-driven security risks and evaluating their impact on blockchain fairness and decentralization. | 10.1109/TNSM.2025.3637840 |
| Weichao Ding, Zhou Zhou, Qi Min, Fei Luo, Wenbo Dong, Hengrun Zhang | VDSV: Client Selection in Federated Learning Based on Value Density and Secondary Verification | 2025 | Early Access | Convergence; Training; Data models; Servers; Distributed databases; Analytical models; Interference; Federated learning; Costs; Artificial intelligence; Federated learning; Client Selection; Data Heterogeneity; Value Density; Secondary Verification | Client selection has been widely considered in Federated Learning (FL) to reduce communication overhead while ensuring proper convergence performance. Due to data heterogeneity in FL, a representative subset of participants should take into account both intra- and inter-client diversity. While existing works usually emphasize one of them, this paper proposes a VDSV (client selection based on Value Density and Secondary Verification) framework, which optimizes the client selection strategy from both sides. Therein, intra- and inter-client diversity are measured based on a designed client data score as well as gradient distance and direction, respectively. Afterwards, a client selection model is established based on a proposed metric called client value density. Besides, a secondary verification method is developed to dynamically tweak the current client selection and model aggregation strategies. The general idea of the above design is based on the theoretical convergence analysis and the observation that the client contribution to the global model can change throughout the learning process. The experimental results demonstrate that VDSV can achieve higher convergence rates and ensure comparable model performance. Specifically, our method can reduce the communication rounds by an average of 37.88%, which saves noticeable communication overhead. | 10.1109/TNSM.2025.3636990 |
| Yang Liu, Wenjun Zhu, Harry Chang, Yang Hong, Geoff Langdale, Kun Qiu, Jin Zhao | Hyperflex: A SIMD-Based DFA Model for Deep Packet Inspection | 2025 | Early Access | Single instruction multiple data; Vectors; Engines; Automata; Inspection; Payloads; Throughput; Memory management; Compression algorithms; Software algorithms; Deep Packet Inspection; Regular Expression; Deterministic Finite Automata | Deep Packet Inspection (DPI) has been extensively employed for network security. It examines traffic payloads by searching for regular expressions (regexes) with the Deterministic Finite Automaton (DFA) model. However, as network bandwidth and ruleset sizes increase rapidly, the conventional DFA model has emerged as a significant performance bottleneck of DPI. Leveraging Single-Instruction-Multiple-Data (SIMD) instructions to perform state transitions can substantially boost the efficiency of the DFA model. In this paper, we propose Hyperflex, a novel SIMD-based DFA model designed for high-performance regex matching. Hyperflex incorporates a region detection algorithm to identify regions suitable for acceleration by SIMD instructions across the whole DFA graph. Also, we design a hybrid state transition algorithm that enables state transitions in both SIMD-accelerated and normal regions and ensures seamless state transitions across the two types of regions. We have implemented Hyperflex on a commodity CPU and evaluated it with real network traffic and DPI regexes. Our evaluation results indicate that Hyperflex reaches a throughput of 8.89 Gbit/s, an improvement of up to 2.27 times over McClellan, the default DFA model of the prominent multi-pattern regex matching engine Hyperscan. As a result, Hyperflex has been successfully deployed in Hyperscan, significantly enhancing its performance. | 10.1109/TNSM.2025.3636946 |
| Yaqing Zhu, Liquan Chen, Suhui Liu, Bo Yang, Shang Gao | Blockchain-Based Lightweight Key Management Scheme for Secure UAV Swarm Task Allocation | 2025 | Early Access | Autonomous aerial vehicles; Encryption; Protocols; Resource management; Receivers; Controllability; Dynamic scheduling; Blockchains; Authentication; Vehicle dynamics; Lightweight certificateless pairing-free key management; UAV swarm; task allocation | Unmanned Aerial Vehicle (UAV) swarms are a cornerstone technology in the rapidly growing low-altitude economy, with significant applications in logistics, smart cities, and emergency response. However, their deployment is constrained by challenges in secure communication, dynamic group coordination, and resource constraints. Among the various cryptographic techniques, efficient and scalable group key management plays a critical role in secure task allocation in UAV swarms. Existing group key agreement schemes, both symmetric and asymmetric, often fail to adequately address these challenges due to their reliance on centralized control, high computational overhead, sender restrictions, and insufficient protection against physical attacks. To address these issues, we propose PCDCB (Pairing-free Certificateless Dynamic Contributory Broadcast encryption), a blockchain-assisted lightweight key management scheme designed for UAV swarm task allocation. PCDCB is particularly suitable for swarm operations as it supports efficient one-to-many broadcast of task commands, enables dynamic node join/leave, and eliminates key escrow by combining certificateless cryptography with Physical Unclonable Functions (PUFs) for hardware-bound key regeneration. Blockchain is used to maintain tamper-resistant update tables and ensure auditability, while a privacy-preserving mechanism with pseudonyms and a round mapping table provides task anonymity and unlinkability. Comprehensive security analysis confirms that PCDCB is secure and resistant to multiple attacks. Performance evaluation shows that, in large-scale swarm scenarios (n = 100), PCDCB reduces the cost of group key computation by 54.4% (up to 96.9%) and reduces the time to generate the decryption keys by at least 29.7%. In addition, PCDCB achieves the lowest communication cost among all compared schemes and demonstrates strong scalability with increasing group size. | 10.1109/TNSM.2025.3636562 |
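
A back-of-the-envelope check on the ~30% latency figure in the Sticca et al. (Hollow-Core Fibers) entry above. This derivation is ours, not the paper's, and assumes a silica group index of about 1.46:

```latex
% Propagation delay per unit length is t = n/c, where n is the group index.
% SSMF (silica core): n ~ 1.46;  HCF (air core): n ~ 1.0.
\[
\frac{t_{\mathrm{HCF}}}{t_{\mathrm{SSMF}}}
  = \frac{n_{\mathrm{air}}}{n_{\mathrm{silica}}}
  \approx \frac{1.0}{1.46}
  \approx 0.68
\qquad\Longrightarrow\qquad
\text{latency reduction} \approx 1 - 0.68 \approx 31\%,
\]
% consistent with the ~30% figure quoted in the abstract.
```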
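
The compression-rate notion in the Alizadeh and Khabbazian (payment channel networks) entry can be written compactly. This is a direct transcription of the verbal definition in their abstract; the symbols $\rho$, $P$, and $B$ are ours, not the paper's:

```latex
\[
\rho \;=\; \frac{P}{B}
\]
% P: total number of payments entering the payment channel network
% B: total number of transactions placed on the blockchain to handle
%    failed payments or to (re)open channels
% The compression capacity is the upper limit of \rho for a given payment
% model, channel-reopening strategy, and network topology.
```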
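
The CDEA entry (Gao et al.) mentions Lyapunov optimization for turning a long-term data-performance guarantee into per-slot decisions. A generic sketch of the standard drift-plus-penalty construction follows; the notation is ours and is not taken from the paper:

```latex
% A virtual queue Q(t) tracks accumulated violation of a long-term
% constraint \bar{x} \ge x_{req}, where x(t) is the per-slot performance:
\[
Q(t+1) = \max\{\, Q(t) + x_{\mathrm{req}} - x(t),\; 0 \,\}
\]
% Each slot, instead of the long-term problem, one minimizes the
% drift-plus-penalty bound (V trades off cost against constraint drift):
\[
\min_{\text{action}} \;\; V \cdot \mathrm{cost}(t) \;-\; Q(t)\, x(t)
\]
```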
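
The P4WISE entry (Tajbakhsh et al.) pairs in-network load balancing with reinforcement learning. As a flavor of the core idea only, here is a minimal tabular Q-learning toy in Python; nothing in it comes from the paper, and the state, action, and reward definitions are invented for illustration:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning load balancer (illustrative only, not P4WISE):
# pick one of N servers given a coarse state in which each server's load
# is bucketed into low/medium/high. The reward is the negative load of
# the chosen server, so the agent learns to favor lightly loaded servers.

N_SERVERS = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = defaultdict(float)        # maps (state, action) -> estimated value
loads = [0.0] * N_SERVERS     # current load of each server

def state():
    # Bucket each load into 0/1/2 to keep the Q-table small.
    return tuple(min(int(l // 5), 2) for l in loads)

def choose(s):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        return random.randrange(N_SERVERS)
    return max(range(N_SERVERS), key=lambda a: q[(s, a)])

for _ in range(50_000):
    s = state()
    a = choose(s)
    loads[a] += random.uniform(0.5, 1.5)          # assign a new job
    loads = [max(0.0, l - 0.3) for l in loads]    # jobs drain over time
    reward = -loads[a]                            # penalize loaded picks
    s_next = state()
    best_next = max(q[(s_next, a2)] for a2 in range(N_SERVERS))
    q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])

print("final loads:", [round(l, 1) for l in loads])
```

In a real in-network design, the learned policy would live in the programmable data plane with the control plane handling updates; the toy above only conveys the learning loop.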