Last updated: 2025-08-09 05:01 UTC
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Jiali Zheng, Jiawen Li | MGRS-PBFT: An Optimized Consensus Algorithm Based on Multi-Group Ring Signatures for Blockchain Privacy Protection | 2025 | Early Access | Blockchains Privacy Protection Consensus algorithm Security Cryptography Fault tolerant systems Fault tolerance Scalability Public key blockchain consensus protocol ring signature practical Byzantine fault tolerance (PBFT) privacy protection | In the realm of blockchain systems, the prominence of privacy and security issues is steadily increasing, thereby necessitating greater focus on privacy protection technologies. The conventional practical Byzantine fault tolerance (PBFT) consensus algorithm is characterized by limited scalability and an absence of privacy protection. Consequently, this study introduces ring signature privacy protection technology and proposes an enhanced PBFT consensus algorithm, named MGRS-PBFT, based on multi-group ring signatures, aimed at bolstering blockchain privacy protection. Firstly, a credit score mechanism is introduced to incentivize or penalize nodes’ behavior and assess their performance. The selection of the primary node is determined through a voting process utilizing the nodes’ credit scores, thereby mitigating the influence of malicious nodes within the system and enhancing overall system security. Secondly, nodes are stratified into multiple groups based on inter-node response speed, thus streamlining the consensus protocol and diminishing system communication complexity. Finally, an identity-based ring signature algorithm is implemented to protect the private data of nodes. Experimental results demonstrate that the average consensus delay of MGRS-PBFT is reduced by 46.73% compared to PBFT, 20.87% compared to double-layer PBFT, 18.61% compared to SG-PBFT, and 27.58% compared to CRBFT. Additionally, MGRS-PBFT achieves an average throughput that is 2.44 times that of PBFT, 1.27 times that of double-layer PBFT, 1.23 times that of SG-PBFT, and 1.45 times that of CRBFT. Security experiments further demonstrate that MGRS-PBFT outperforms PBFT in resistance to Byzantine nodes and fault tolerance. (A toy Python sketch of credit-score-based primary selection appears after the table.) | 10.1109/TNSM.2025.3580403 |
Xi Yao, Fagui Liu, Jun Jiang, Guoxiang Zhong, C. L. Philip Chen | Category-constrained Broad Recurrent System for Cloud Anomaly Detection | 2025 | Early Access | Cloud computing Measurement Anomaly detection Feature extraction Training Data models Monitoring Time series analysis Correlation Training data Cloud computing anomaly detection broad learning system temporal dynamics category-constrained | Anomaly detection has become a key focus in maintaining the stability and reliability of the cloud environment. Although deep learning-based anomaly detection methods offer excellent feature extraction ability, they entail a time-consuming training process. The broad learning system (BLS) provides an alternative supervised approach with efficient training. However, due to the imbalance of collected cloud computing data, in which anomalies account for a low proportion, sufficient feature extraction from anomalous behaviors with BLS becomes a challenge. Moreover, the input generation of BLS only considers the independence of data, and the generalization of BLS in the correlation modeling of cloud computing data is limited. To tackle the above issues, we introduce an effective anomaly detector, CatBRS, an improved BLS with rebalance operations. Initially, we employ a hybrid resampling method of SMOTE-Tomek to mitigate data imbalance, retain non-synthetic samples for training, and involve synthetic samples in the input generation later. Subsequently, we extend BLS by refining the process of input generation. This enhanced system employs a simple recurrent architecture to model temporal dynamics. Additionally, it integrates an autoencoder-based model with metric learning to obtain category-constrained discriminant features. The improvement in BLS facilitates more comprehensive feature extraction. Finally, extensive experiments are conducted to evaluate the performance of CatBRS on four benchmark datasets. CatBRS shows improvements of up to 3.81% in AUC and 6.09% in F1 compared to suboptimal baseline methods with a low training cost. (A minimal SMOTE-Tomek resampling sketch appears after the table.) | 10.1109/TNSM.2025.3576927 |
Bijoy Chand Chatterjee, Vinay Kumar, Eiji Oki | Optimizing Virtual Network Embedding in Spectrally-Spatially Elastic Optical Networks: A Crosstalk-Aware Perspective | 2025 | Early Access | Resource management Substrates Crosstalk Virtual links Virtualization Elastic optical networks Multicore processing Routing Space division multiplexing Optimization Virtual optical network embedding inter-core and inter-mode XT space-division multiplexing spectrally-spatially elastic optical network | The exploration of virtualization in spectrally-spatially elastic optical networks (SS-EONs), particularly focusing on virtual optical network embedding (VONE), is an emerging technology to enhance resource utilization and transport capacity. However, managing both inter-core crosstalk (IC-XT) and inter-mode crosstalk (IM-XT) in virtualized SS-EONs poses significant challenges. To tackle this, the paper proposes an optimization model for VONE in SS-EONs, named VneXT-Aw, which incorporates an XT-aware approach. VneXT-Aw employs node and link mapping techniques to seamlessly integrate VON requests into the substrate SS-EONs, ensuring precise node mapping and capacity adherence. It allocates spectrum slots along routing paths in the substrate network, ensuring spectrum contiguity, continuity, and mode continuity for virtual links, all while considering the XT-aware approach. The optimization problem for VneXT-Aw is formulated as a mixed-integer linear programming (MILP) problem. Recognizing the complexity of MILP for larger instances, the paper introduces two heuristic approaches: MILP-based heuristic (MILP-h) and rank-assisted simulated annealing (Rasa-h). A comparative analysis reveals that MILP-h accommodates more requests than Rasa-h but sacrifices computational efficiency. This trade-off highlights the delicate balance between request accommodation and computational complexity, providing insights into the practical implementation of VON embedding in SS-EONs. Furthermore, VneXT-Aw outperforms the benchmark scheme, which utilizes the XT-avoided approach. | 10.1109/TNSM.2025.3578439 |
Nilesh Kumar Jadav, Sudeep Tanwar | FL-ORA: Optimized and Decentralized Resource Allocation Scheme for D2D Communication | 2025 | Early Access | Resource management Device-to-device communication Access control Training Convergence Copper Throughput Energy efficiency Artificial intelligence Whale optimization algorithms D2D communication Federated Learning Whale Optimization Algorithm Differential Evolution Decentralization Resource Allocation Physical Layer Access Control | This article presents an optimized and decentralized resource allocation approach aimed at maximizing the system throughput and energy efficiency of device-to-device (D2D) communication. The proposed scheme modifies the meta-heuristic whale optimization algorithm (WOA) by blending the differential evolution (DE) technique into the WOA’s exploration phase to offer intelligence and reduce the computational burden. The hybrid WOA (DE+WOA) serves as a physical layer access control that efficiently finds the optimal cellular users (CUs) and D2D users (DUs) based on their channel conditions. The proposed access control acts as a restrictive filter, where only optimal CU-DUs can participate in resource allocation tasks. Furthermore, a dataset has been prepared using the optimal CUs-DUs channel conditions from the hybrid WOA to serve as input for the federated learning (FL)-based resource allocation. We utilized statistical tests (e.g., Spearman’s test) to analyze the generated dataset’s non-independent and identically distributed (non-IID) characteristics, thus providing generalization in the AI training. Allowing only the optimal CUs and DUs (from the hybrid WOA) in the FL-based resource allocation substantially reduces the computational cost of AI training and improves energy efficiency. In the FL-based resource allocation, we used a sequential convolutional neural network (CNN) trained on the aforementioned dataset to provide proactive resource allocation decisions. Furthermore, we used momentum-based weight aggregation in the FL to reduce the computational burden on the central server. The proposed scheme is assessed by utilizing different standard metrics, such as training accuracy (98.93%), training time, overall system throughput (35.62 Mbps), energy efficiency (96.42 bits/joule), and resource fairness. (A toy Python sketch of the DE-blended WOA exploration step appears after the table.) | 10.1109/TNSM.2025.3591644 |
Zhangxuan Dang, Yu Zheng, Xinglin Lian, Chunlei Peng, Qiuyu Chen, Xinbo Gao | Semi-Supervised Learning for Anomaly Traffic Detection via Bidirectional Normalizing Flows | 2025 | Early Access | Telecommunication traffic Feature extraction Anomaly detection Training Noise Knowledge engineering Image reconstruction Generative adversarial networks Accuracy Semantics network traffic detection semi-supervised learning anomaly detection | With the rapid development of the Internet, various types of anomaly traffic are threatening network security. However, the difficulty of collecting and labeling anomalous traffic is a significant challenge, so this paper proposes a semi-supervised anomaly detection framework. Considering that normal and abnormal traffic have different data distributions, our framework can generate pseudo anomaly samples without prior knowledge of anomalies to achieve the detection of anomalous data. The framework comprises three principal components. Firstly, a pre-trained feature extractor is employed to extract a feature representation of the network traffic. Secondly, a bidirectional normalizing flow module establishes a reversible transformation between the latent data distribution and a Gaussian space. Through this bidirectional mapping, samples first undergo transformation manipulation within the Gaussian distribution space, and are then transported through the generative direction of normalizing flows, translating mathematical transformations into semantic feature evolutions in the latent data space. Finally, a simple classifier explicitly learns the potential differences between anomaly and normal samples to facilitate better anomaly detection. During inference, our framework requires only two modules to detect anomalous samples, leading to a considerable reduction in model size. According to the experiments, our method achieves state-of-the-art results on common benchmark datasets for anomaly network traffic detection. Furthermore, it exhibits good generalization performance across datasets. | 10.1109/TNSM.2025.3591533 |
Xueling Wu, Long Qu, Maurice Khabbaz | Energy and Interference-Aware Scheduling for Minimizing the Age of Aggregate Information in Multi-hop IoT Networks | 2025 | Early Access | Relays Scheduling Energy consumption Spread spectrum communication Energy efficiency Real-time systems Temperature sensors Minimization Data aggregation Dynamic scheduling Age of Aggregated Information Multi-hop Internet of Things Scheduling Optimization Column Generation | In the realm of complex IoT-based smart city advancements, the real-time reception, processing, and maintenance of up-to-date multi-sourced data is essential for ensuring efficient urban infrastructure operations and functionality. Beyond the typical Age of Information (AoI), such applications underscore the urgent need for a new metric, capable of capturing and accounting for the age of the aggregated data; namely, the Age of Aggregated Information (AoAI). This paper addresses an AoAI minimization problem for mixed-path IoT networks. This problem is formulated as a Mixed Integer Linear Program (MILP) that jointly considers data packet scheduling and routing as well as nodal energy and power constraints. To overcome this problem’s notable complexity, the Column Generation Algorithm (CGA) is used to break it down into a Relaxation Master Problem (RMP) and a Pricing Problem (PP) with the objective of identifying optimal scheduling and aggregation strategies. Experimental results demonstrate the potency of the proposed CGA-based algorithm in generating accurate sub-optimal solutions with no more than 1.06% deviation from their optimal counterparts; an outstanding result that existing algorithms have failed to achieve. The variations in AoAI and latency trends were compared and found to be in line for fixed network configurations. (A small Python AoAI helper appears after the table.) | 10.1109/TNSM.2025.3578347 |
Xiaoyan Dong, Xiaoliang Chen, Zuqing Zhu | On the Risk-aware Connection Defragmentation in OCS-based Data-center Networks | 2025 | Early Access | Optical switches Correlation Artificial intelligence Bandwidth Optical packet switching Training Topology Optical fiber networks Switching circuits Optical interconnections Datacenter networks Optical circuit switching Topology engineering Traffic engineering Risk Flow correlation | The dynamic traffic changes in an optical circuit switching based data-center network (ODCN) can make the utilization of its optical connections fragmented, degrading the efficiency of service provisioning in the ODCN. Consequently, connection defragmentation becomes imperative. However, consolidating traffic onto fewer connections may increase the risk of bandwidth contention and undermine ODCNs’ robustness against unexpected traffic bursts. To address this issue, we propose risk-aware connection defragmentation (RA-cDF), which explores the topology flexibility of ODCN to consolidate traffic such that the active optical connections through optical circuit switching (OCS) switches can be minimized together with the risk of future bandwidth contention on remaining connections. We formulate a mixed integer linear programming (MILP) model to address the RA-cDF problem exactly, followed by a heuristic to solve it time-efficiently. Extensive simulations confirm the effectiveness of our proposals. | 10.1109/TNSM.2025.3592355 |
Yuhao Hou, Jiazheng Zou, Licheng Wang, Xijie Lu, Xiuhua Lu, Maoli Wang | PRBCP: Publicly Redactable Blockchain with Off-Chain Reputation-Based Consensus Protocol | 2025 | Early Access | Blockchains Decision making Hash functions Consensus protocol Evaluation models Technological innovation Industrial Internet of Things Electronic medical records Training Symbols Redactable blockchain reputation-based consensus dynamic group incentive electronic medical records | Blockchain is renowned for its immutability, a feature that ensures recorded data cannot be modified or deleted. However, malicious entities can exploit this immutability to permanently embed objectionable data. Moreover, the immutability of blockchain can conflict with the “right to be forgotten” provision in privacy protection laws such as the GDPR. To address this problem, the seminal redactable blockchain solution was proposed. Recently, one of the primary focuses in redactable blockchain solutions has been the design of global editing permission control. The design leverages the consensus voting mechanism, with its core objective being to prevent excessive centralization of editing power, thereby preserving the decentralized nature of blockchain technology. Meanwhile, such schemes exhibit the following phenomena: (i) the members of the editorial decision-making group are fixed and unchanging, and (ii) blockchain nodes display lazy voting behaviors during the editorial voting process. Considering these factors, we introduce a Publicly Redactable Blockchain scheme with an off-chain reputation-based Consensus Protocol (PRBCP). In this scheme, any node in the blockchain network has the opportunity to become an editorial node and perform editing operations. We design a reputation-based off-chain editorial voting consensus protocol leveraging the threshold signature scheme, which enables dynamic updates to the editorial decision-making group membership and enhances nodes’ participation in editorial voting. In addition, we provide rigorous security proofs and experimental efficiency analyses for our scheme. The results demonstrate that the PRBCP is both secure and efficient. Finally, we instantiate our scheme as a redactable medical blockchain (RMB) system for storing electronic medical records (EMRs). | 10.1109/TNSM.2025.3578659 |
Gabriele Gemmi, Llorenç Cerdà-Alabern, Leonardo Maccari | Next-Generation Wireless Backhaul Design for Rural Areas | 2025 | Early Access | Costs Wireless communication Backhaul networks Relays Logic gates Broadband communication Topology Optical fiber networks Optical fiber devices Internet WISP WBN Digital Divide Wireless Backhaul Resiliency Economic Modeling | Rural areas often face significant challenges in accessing reliable broadband Internet due to high infrastructure costs and low population density. To address this issue, we propose a model for evaluating the performance and the cost of a mesh-based, last- and middle-mile replacement for broadband connection in these underserved regions. We use open data from ten underserved municipalities to assess the demand, plan the mesh network, and estimate the allocated capacity per user. We consider two designs: a low-cost network using the classical 5 GHz unlicensed band, and a high-performance one using mmWave frequencies. For both designs, we estimate the Operating Expenditure and the amortized Capital Expenditure using realistic device prices and operating cost estimations. We compare the price of the mesh-based solution with alternatives based on xDSL and satellite connectivity and show that it has competitive prices compared to existing offers, covering a larger portion of households than DSL. We open-source both the code and the elaborated data to reproduce, extend, and improve our results in different settings. | 10.1109/TNSM.2025.3592442 |
Antonio Calagna, Yenchia Yu, Paolo Giaccone, Carla Fabiana Chiasserini | MOSE: A Novel Orchestration Framework for Stateful Microservice Migration at the Edge | 2025 | Early Access | Containers Microservice architectures Protocols Image restoration Quality of experience Autonomous aerial vehicles Kernel Iterative methods Autopilot Source coding Edge computing Service migration Mobile networks Computer vision Machine learning | Stateful migration has emerged as the dominant technology to support microservice mobility at the network edge while ensuring a satisfying experience to mobile end users. This work addresses two pivotal challenges, namely, the implementation and the orchestration of the migration process. We first introduce a novel framework that efficiently implements stateful migration and effectively orchestrates the migration process by fulfilling both network and application KPI targets. Through experimental validation using realistic microservices, we then show that our solution (i) greatly improves migration performance, yielding up to 77% decrease of the migration downtime with respect to the state of the art, and (ii) successfully addresses the strict user QoE requirements of critical scenarios featuring latency-sensitive microservices. Further, we consider two practical use cases, featuring, respectively, a UAV autopilot microservice and a multi-object tracking task, and demonstrate how our framework outperforms current state-of-the-art approaches in configuring the migration process and in meeting KPI targets. | 10.1109/TNSM.2025.3579051 |
Chi Guo, Cong Wang, Qiuzhan Zhou, Juan Li | A Bi-level Scheme for Mixed-Motive and Energy-Efficient Task Offloading in Vehicular Edge Computing Systems | 2025 | Early Access | Servers Optimization Games Mobile handsets Computational efficiency Energy consumption Computational modeling Performance evaluation Resource management Energy efficiency Vehicular Edge Computing Bi-level Optimization Task Offloading Stackelberg Game Multi-Agent Actor-Critic | Edge computing is considered a promising paradigm to support vehicular applications in the upcoming sixth-generation (6G) vehicular networks. In the context of vehicular edge computing (VEC), self-interested vehicular users and edge servers work towards incongruous goals. Such a mixed-motive setting is detrimental to the collective good, sometimes leading to social dilemmas. To resolve such a conflict, we first formulate a bi-level optimization problem to model mixed-motive task offloading. In this case, vehicular users aim to improve energy efficiency under strict low-latency requirements, whereas edge servers attempt to increase serving efficiency. To address it, we propose a scheme based on bi-level reinforcement learning, i.e., the bi-level multi-agent actor-critic (BLMAAC) framework. Specifically, upper-level edge servers perform iterative optimization under the best responses of lower-level vehicular users, which can be regarded as a Stackelberg game. Theoretically, we identify the conditions under which the framework converges and prove that it reaches a Stackelberg equilibrium strategy. In numerical evaluations, high-utilization edge servers and energy-efficient vehicular users demonstrate the superiority of the bi-level structure. Moreover, the proposed scheme outperforms other actor-critic based learning algorithms and two-stage methods exploring the Nash equilibrium strategy. | 10.1109/TNSM.2025.3579598 |
Kashif Mehmood, Katina Kralevska, David Palma | Knowledge-Driven Intent Life-Cycle Management for Cellular Networks | 2025 | Early Access | Knowledge graphs Translation Cellular networks Stakeholders Optimization Ontologies 5G mobile communication Resource description framework Monitoring Quality of service IBN closed-loop control service model knowledge graph learning service and network management optimization | The management of cellular networks and services has evolved due to the rapidly changing demands and complexity of service modeling and management. This paper uses intent-based networking (IBN) as a solution and couples it with contextual information from knowledge graphs (KGs) of network and service components to achieve the objective of service orchestration in cellular networks. Fusing IBN with KGs facilitates an intelligent, flexible, and resilient service orchestration process. We propose an intent completion approach using knowledge graph learning and a mapping model capable of inferring and validating the service intents in the network. Subsequently, these service intents are deployed using available network resources in a simulated fifth generation (5G) non-standalone (NSA) network. The compliance of the deployed intents is monitored, and mutual optimization against their required service key performance indicators is performed using Simultaneous Perturbation Stochastic Approximation (SPSA) and the Multiple Gradient Descent Algorithm (MGDA). The numerical results show that the knowledge graph with Gaussian embedding (KG2E) model outperforms other distance-based embedding models for the proposed service KG. Different combinations of strict latency (SL) and non-strict latency (NSL) intents are deployed, and compliance is evaluated for increasing numbers of deployed intents against baseline deployment scenarios. The results show a higher level of compliance for SL intents to target latencies in comparison to NSL intents for the proposed intent deployment and optimization algorithm. (A minimal SPSA sketch in Python appears after the table.) | 10.1109/TNSM.2025.3579547 |
Haotian Wang, Jun Tao, Yu Gao, Dingwen Chi, Yuehao Zhu | A Two-Way Auction Approach Toward Data Quality Incentive Mechanisms for Mobile Crowdsensing | 2025 | Early Access | Sensors Mobile computing Data integrity Crowdsensing Data models Recruitment Mobile handsets Hands Games Artificial intelligence Mobile crowdsensing Data quality Incentive mechanism | With the rapid growth of smart devices, mobile crowdsensing is becoming one of the most important and attractive paradigms to acquire information from physical environments. Low-quality data, a notorious but widely found issue, degrades the availability and preciseness of sensing services, especially for complex sensing task scenarios. However, most existing incentive mechanisms ignore the issue of data quality. In this paper, we define user reputation and user task preferences from a new perspective, while predicting the number of users likely to upload high-quality data using a Poisson distribution. Then, the expectation-maximization (EM) algorithm is employed to estimate the parameters of the Poisson distribution. Subsequently, a two-way auction mechanism is proposed, which encourages users to participate in the sensing task and improves the match between tasks and users. We adopt the amount of high-quality data that a user may upload as a factor in the user’s offer to maximize the quality of data received by the platform. The analysis based on the model lays a theoretical foundation for the incentive process of mobile crowdsensing considering data quality. The evaluation results show that our mechanism outperforms other existing techniques in terms of robustness and efficiency. (A small Poisson-mixture EM sketch in Python appears after the table.) | 10.1109/TNSM.2025.3592489 |
Zhichao Zhang, Yanan Cheng, Zhaoxin Zhang, Xinran Liu, Ning Li | 6Hound: An Efficient IPv6 DNS Resolver Discovery Model Based on Reinforcement Learning | 2025 | Early Access | Domain Name System Internet Heuristic algorithms Protocols Deep learning 6G mobile communication Training Resource management Reinforcement learning Probes IPv6 scanning DNS resolver Internet-wide scanning reinforcement learning target generation | DNS resolvers are important measurement targets in the IPv4/IPv6 Internet for cybersecurity and network management. However, due to the vast address space of IPv6, it is infeasible to discover IPv6 DNS resolvers using brute-force Internet-wide scanning as in IPv4. To address this issue, researchers have developed target generation algorithms (TGAs) to discover active targets in the IPv6 address space. However, most TGAs utilize ICMP as the probing protocol and depend on large, high-quality ICMP seed address datasets. When the same TGA methods are applied to the UDP/53 protocol, which has a limited number of seed addresses, the efficiency of discovering DNS resolvers is low. To solve this issue, we developed 6Hound to efficiently discover DNS resolvers in the IPv6 Internet. To mitigate the scarcity of UDP/53 seed addresses, we proposed the Pattern-merged Tree, which strategically expands the scanning space by utilizing ICMP seed addresses. To efficiently discover de-aliased active addresses within these merged patterns, we proposed a hierarchical multi-armed bandit to control the distribution of probe packets. We introduced the Sliced Address Generation algorithm and a dynamic alias detection mechanism to enhance the hit rate of each detection round and avoid the misleading effects of aliased addresses. In experiments conducted on the native IPv6 Internet, we discovered about a million de-aliased active DNS resolver addresses under a budget scale of 50M, which is 110% to 465% higher than the state-of-the-art baseline methods. (A toy bandit-based probe-budget sketch in Python appears after the table.) | 10.1109/TNSM.2025.3580281 |
Xiaoming He, Zijing He, Wenyun Li, Gang Liu, Xuan Wei | Framework for Real-Time Monitoring of Packet Loss Caused by Network Congestion | 2025 | Early Access | Packet loss Monitoring Real-time systems Accuracy Multiprotocol label switching Encapsulation Artificial intelligence Transport protocols Time-frequency analysis Telecommunication traffic Framework packet loss real-time monitoring network congestion | Network congestion induces performance degradation and increases the uncertainty of service delivery, so it is essential to monitor it in real time. In this paper, we discuss the requirements for real-time monitoring of packet loss caused by congestion, present the problems and challenges faced by existing measurement techniques in monitoring congestion-induced packet loss, and propose a comprehensive packet loss monitoring framework. The proposed framework is described in detail and its realizability is demonstrated. The proposed scheme can not only determine the time and location of packet loss, produce accurate statistics of discarded packets, parse which traffic flows are contained in discarded packets, and identify which traffic flows lead to microbursts, but also obtain accurate packet loss ratio results with zero error. More importantly, our proposed scheme introduces little or even no interference to the network, and is applicable to any data plane without modifying the forwarding chip or packet header as existing measurement methods do. Experimental results have verified the effectiveness of our proposed scheme. Furthermore, we present three typical application scenarios to demonstrate the advantages of the proposed framework. | 10.1109/TNSM.2025.3578056 |
Hamza Mokhtar, Xiaoqiang Di, Zhengang Jiang, Jing Chen, Abdelrhman Hassan | Efficient Spatiotemporal Prediction Transformer for Cooperative Satellite Remote Sensing | 2025 | Early Access | Satellites Remote sensing Data communication Telecommunication traffic Delays Real-time systems Network topology Accuracy Topology Spatiotemporal phenomena Remote sensing data Spatiotemporal Prediction Satellites network traffic Attention mechanism Encoder–decoder | Satellite remote sensing cooperation is essential for ensuring efficient data transmission in real-time applications. Network traffic prediction plays a crucial role in optimizing data transmission strategies, managing congestion, and reducing network latency. However, current research work on network traffic prediction frequently fails to fully exploit the complex spatial-temporal dependencies inherent in satellite network traffic. To address this limitation and improve the accuracy of long-term network traffic prediction, we propose an Efficient Spatiotemporal Prediction Transformer (ESPformer) for dynamic data transmission in cooperative satellite remote sensing. The proposed scheme not only considers propagation delays but also captures the temporal and spatial relationships among the network traffic. In particular, we design a spatial-temporal multi-head attention mechanism within an encoder-decoder transformer to capture the dynamic spatial dependencies and predict the network topology and its parameters, including traffic flow and bandwidth. By leveraging historical traffic data and the network traffic conditions, the model estimates expected queuing delays. Finally, based on the volume of traffic predicted and the changes in network conditions, we dynamically adjust the transmission strategies to maintain an efficient relaying mechanism. Therefore, our model enables an adaptive transmission strategy and offers an optimal delay reduction in real-time satellite data transmission. Extensive experiments conducted on four well-known traffic datasets demonstrate that the ESPformer significantly outperforms state-of-the-art baselines across all key performance metrics. | 10.1109/TNSM.2025.3580444 |
Qiangqiang Shi, Jin Liu, Lai Wei, Jiajia Jiao, Bing Han, Zhongdai Wu | SegCoT: Dependable Intrusion Detection System based on Segment-wise CoTransformer for Ship Communication Networks | 2025 | Early Access | Marine vehicles Feature extraction Intrusion detection Accuracy Artificial intelligence Security Maritime communications Data mining Training Threat modeling Intrusion detection deep learning intrusion detection system ship communication networks | Modern vessels integrate a massive digital infrastructure and navigation-dependent operating systems, allowing for ship-to-shore and ship-to-ship collaborative communication. However, the heightened interconnection of various maritime infrastructures inevitably amplifies the risk to vessel navigation and communication. Existing intrusion detection techniques were usually built on individual network events, failing to account for the multi-event long-term dependency problem caused by the high latency and low bandwidth of ship communication networks; they therefore cannot tackle sophisticated cyber-ship attacks, resulting in lower intrusion detection accuracy. In this paper, we propose a dependable Intrusion Detection System (IDS) based on a Segment-wise CoTransformer (SegCoT) to detect cyber-ship intrusion events, which primarily contains a two-stage Network Pattern Extraction Component (NPEC) and an Intrusion Event Identification Component (IEIC). The NPEC automates the extraction of long-term dependencies among massive intrusion events employing SegEvent-wise Attention (SEA). Furthermore, the extracted dependencies are leveraged by the IEIC for specific intrusion type detection from a spatio-temporal feature fusion perspective. Based on a cyber-ship dataset collected from real ocean-going vessels, the proposed model achieves 99% intrusion detection accuracy, outperforming existing state-of-the-art approaches. | 10.1109/TNSM.2025.3580471 |
Shiwen Zhang, Feixiang Ren, Wei Liang, Kuanching Li, Nam Ling | GPVO-FL: Grouped Privacy-Preserving and Verification-Outsourced Federated Learning in Cloud-Edge Collaborative Environment | 2025 | Early Access | Privacy Protection Security Federated learning Servers Computational modeling Accuracy Training Cloud computing Edge computing Federated Learning Privacy-preserving Verification-outsourced Cloud-Edge Collaboration | As a form of distributed machine learning, federated learning allows users to complete training without sharing local data, thereby protecting user privacy to a certain extent. However, the gradients uploaded by users during the training process can still leak user privacy. Additionally, malicious or lazy cloud servers may tamper with or forge the aggregated results before returning them to users, causing significant losses to the entire training process. Existing solutions focus on security issues, but most privacy protection schemes based on complex cryptographic primitives require high computational power and communication bandwidth. Moreover, to verify the aggregated results, each user must compute proofs, which imposes an additional computational burden on users. Therefore, designing more efficient and lightweight solutions that ensure security while adapting to resource-constrained scenarios is necessary. An efficient group-based scheme for privacy preservation and verification outsourcing in federated learning, referred to as GPVO-FL, is introduced in this work. Specifically, we design a lightweight privacy protection mechanism based on group structure and masking techniques to protect user gradients. In addition, we design an outsourced verification mechanism that offloads the verification process to edge servers, thus reducing the computational burden on users. A detailed security and experimental analysis demonstrates the security and efficiency of our scheme. | 10.1109/TNSM.2025.3592122 |
Yuhang Liu, Fanqin Zhou, Lei Feng, Wenjing Li, Jing Gao | DINT-Based DWRR: Decentralised INT-Based Packet Scheduling Method for Multipath Communication | 2025 | Early Access | Telemetry Switches Routing Load management Control systems Real-time systems Dynamic scheduling Performance evaluation MPTCP Bandwidth P4 INT DWRR load balance | Multipath communication is a technique that utilizes multipath transmission to improve network transmission efficiency. Multipath transport protocols like MPTCP usually require complex signaling control and lack adaptability to instantaneous network changes. The advent of programmable switches and INT has addressed these issues to some extent. In this paper, we propose DINT-Based DWRR (Decentralised INT-Based Dynamic Weight Round Robin), a dynamic weight round-robin packet scheduler based on non-centralized telemetry technology. It aims to collect telemetry information and update path weights with millisecond granularity, and to efficiently achieve load balancing while reducing telemetry overhead. The core idea of DINT-Based DWRR is to leverage data-plane programmability to achieve the convergence of the forwarding node and the computing node. The forwarding nodes forward packets using the DWRR (Dynamic Weight Round Robin) method and periodically generate telemetry messages. The computing nodes are dispersed across the forwarding nodes and efficiently push updated weights to the forwarding nodes. Testing in various experimental scenarios shows that DINT-Based DWRR provides better scheduling policies, reduces the link packet loss rate, and increases link bandwidth utilization. (A minimal DWRR sketch in Python appears after the table.) | 10.1109/TNSM.2025.3592748 |
Dan Tang, Chenguang Zuo, Jiliang Zhang, Keqin Li, Qiuwei Yang, Zheng Qin | MARS: Defending TCP Protocol Abuses in Programmable Data Plane | 2025 | Early Access | Protocols Prevention and mitigation Denial-of-service attack Monitoring Training Switches Receivers Programming Computer languages Bandwidth attack mitigation TCP protocol abuse machine learning heuristic rule programmable data plane | The TCP protocol’s inherent lack of built-in security mechanisms has rendered it susceptible to various network attacks. Conventional defense approaches face dual challenges: insufficient line-rate processing capacity and impractical online deployment requirements. The emergence of P4-based programmable data planes now enables line-speed traffic processing at the hardware level, creating new opportunities for protocol protection. In this context, we present MARS, a data-plane-native TCP abuse detection and mitigation system that synergistically combines the Beaucoup traffic monitoring algorithm with artificial neural network (ANN) based anomaly detection, enhanced by adaptive heuristic mitigation rules. Through comprehensive benchmarking against existing TCP defense mechanisms, our solution demonstrates 12.95% higher throughput maintenance and a 25.93% improvement in congestion window recovery ratio during attack scenarios. Furthermore, the proposed framework establishes several novel evaluation metrics specifically for TCP protocol protection systems. | 10.1109/TNSM.2025.3580467 |
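The sketches below illustrate, in Python, techniques named in the abstracts above; each is a toy under stated assumptions, not the authors' implementation. First, for MGRS-PBFT: a credit-score update and a score-weighted primary election. The reward and penalty values, the score bounds, and the vote-weighting rule are illustrative assumptions.

```python
# Toy credit-score update and score-weighted primary election, in the
# spirit of the MGRS-PBFT abstract. All numbers and rules are assumptions.
def update_score(score, behaved_well, reward=5, penalty=20, lo=0, hi=100):
    delta = reward if behaved_well else -penalty  # penalize faults harder
    return max(lo, min(hi, score + delta))

def elect_primary(scores, votes):
    # Each vote is weighted by the voter's own credit score.
    tally = {node: 0 for node in scores}
    for voter, candidate in votes.items():
        tally[candidate] += scores[voter]
    return max(tally, key=tally.get)

scores = {"n1": 80, "n2": 95, "n3": 40, "n4": 60}
scores["n3"] = update_score(scores["n3"], behaved_well=False)  # misbehaved
votes = {"n1": "n2", "n2": "n2", "n3": "n3", "n4": "n2"}
print(elect_primary(scores, votes))  # -> n2
```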
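For CatBRS: a minimal SMOTE-Tomek rebalancing step, here via the widely used imbalanced-learn library. The stand-in feature data and the 950/50 class split are assumptions; the paper additionally separates synthetic from non-synthetic samples, which this sketch does not show.

```python
# Minimal SMOTE-Tomek hybrid resampling with imbalanced-learn.
import numpy as np
from imblearn.combine import SMOTETomek

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))     # stand-in cloud-metric feature windows
y = np.array([0] * 950 + [1] * 50)  # heavily imbalanced labels

X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))  # imbalanced -> near-balanced
```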
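For FL-ORA: a sketch of the whale optimization algorithm in which the exploration branch is replaced by a DE/rand/1 mutation, the blend the abstract describes. The bounds, coefficients, and sphere objective are placeholders, and the vector-valued coefficient check is a simplification of canonical WOA.

```python
# WOA with a DE/rand/1 mutation substituted into the exploration phase.
import numpy as np

def hybrid_woa_de(f, dim=8, n=30, iters=200, F=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))              # whale positions
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                     # linearly decreasing coefficient
        for i in range(n):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):         # exploitation: encircle best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # exploration: DE/rand/1 mutation
                    r1, r2, r3 = rng.choice(n, 3, replace=False)
                    X[i] = X[r1] + F * (X[r2] - X[r3])
            else:                                 # spiral update around best
                D, l = np.abs(best - X[i]), rng.uniform(-1, 1)
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))
print(hybrid_woa_de(sphere))  # converges toward the zero vector
```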
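For the AoAI paper: a small helper capturing the intuition that the age of an aggregate is driven by its oldest constituent sample. The paper's formal AoAI definition may differ; this is only an illustration of the notion.

```python
# Illustrative Age of Aggregated Information (AoAI) helper.
def aoai(now, generation_times):
    """Age of an aggregate whose samples were generated at the given times."""
    return now - min(generation_times)

print(aoai(100.0, [62.0, 75.5, 91.0]))  # -> 38.0, set by the oldest sample
```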
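For the intent life-cycle paper: a minimal SPSA loop with the standard gain-schedule exponents. The quadratic objective is a stand-in for an intent-compliance metric; step sizes are illustrative.

```python
# Minimal Simultaneous Perturbation Stochastic Approximation (SPSA) loop.
import numpy as np

def spsa(f, theta, iters=200, a=0.1, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101         # standard gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Two evaluations estimate the whole gradient, regardless of dimension.
        ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) * (1 / delta)
        theta = theta - ak * ghat
    return theta

f = lambda x: float(np.sum((x - 3.0) ** 2))  # toy latency-like objective
print(spsa(f, np.zeros(4)))                  # approaches [3, 3, 3, 3]
```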
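For the two-way auction paper: the abstract's "maximum expectation algorithm" is read here as expectation-maximization (EM). A two-component Poisson mixture, separating low- and high-quality uploaders, is an illustrative assumption; the data are simulated.

```python
# EM for a two-component Poisson mixture over simulated upload counts.
import numpy as np
from scipy.stats import poisson

def poisson_mixture_em(counts, iters=100):
    lam = np.array([counts.mean() * 0.5, counts.mean() * 1.5])  # initial rates
    w = np.array([0.5, 0.5])                                    # mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component per observation.
        p = w * poisson.pmf(counts[:, None], lam)
        resp = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and rates from the responsibilities.
        w = resp.mean(axis=0)
        lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)
    return w, lam

rng = np.random.default_rng(1)
data = np.concatenate([rng.poisson(2, 300), rng.poisson(9, 100)])
print(poisson_mixture_em(data))  # recovers rates near 2 and 9
```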
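For 6Hound: a single-level UCB bandit that splits a probe budget across candidate address patterns, a simplification of the paper's hierarchical multi-armed bandit. The prefixes and per-pattern hit rates are simulated placeholders.

```python
# UCB-style bandit allocating a probe budget across address patterns.
import math, random

def ucb_probe(pattern_hit_rate, budget=5000, batch=50):
    arms = list(pattern_hit_rate)
    pulls = {a: 0 for a in arms}
    hits = {a: 0 for a in arms}
    t = 0
    while t < budget:
        def score(a):  # upper confidence bound; unexplored arms go first
            if pulls[a] == 0:
                return float("inf")
            return hits[a] / pulls[a] + math.sqrt(2 * math.log(t + 1) / pulls[a])
        a = max(arms, key=score)
        for _ in range(batch):          # spend one probe batch on that pattern
            pulls[a] += 1
            hits[a] += random.random() < pattern_hit_rate[a]
        t += batch
    return {a: hits[a] / pulls[a] for a in arms if pulls[a]}

random.seed(0)
print(ucb_probe({"2001:db8:a::/48": 0.02,
                 "2001:db8:b::/48": 0.20,
                 "2001:db8:c::/48": 0.08}))  # budget drifts to the 0.20 arm
```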
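For DINT-Based DWRR: a minimal deficit-weighted round-robin scheduler whose per-path quanta can be refreshed from telemetry reports. Path names, packet sizes, and the weight-update rule are placeholders; the paper's in-network P4 mechanics are out of scope here.

```python
# Deficit-weighted round-robin over multiple paths with updatable weights.
from collections import deque

class DWRRScheduler:
    def __init__(self, weights):
        self.weights = dict(weights)                  # path -> quantum (bytes)
        self.queues = {p: deque() for p in weights}   # path -> packet queue
        self.deficit = {p: 0 for p in weights}

    def enqueue(self, path, pkt_len):
        self.queues[path].append(pkt_len)

    def update_weights(self, telemetry):
        # e.g., quantum proportional to reported residual path bandwidth
        self.weights.update(telemetry)

    def round(self):
        sent = []
        for p, q in self.queues.items():
            self.deficit[p] += self.weights[p]
            while q and q[0] <= self.deficit[p]:
                self.deficit[p] -= q[0]
                sent.append((p, q.popleft()))
            if not q:
                self.deficit[p] = 0   # classic DRR reset on an empty queue
        return sent

s = DWRRScheduler({"path_a": 1500, "path_b": 500})
for _ in range(6):
    s.enqueue("path_a", 1000)
    s.enqueue("path_b", 1000)
print(s.round())                      # path_b starves under its small quantum
s.update_weights({"path_b": 3000})    # telemetry reports path_b freed up
print(s.round())                      # path_b now drains several packets
```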