Last updated: 2024-04-24 03:01 UTC
Number of pages: 113
Author(s) | Title | Year | Issue | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Abdullatif Albaseer, Nima Abdi, Mohamed Abdallah, Marwa Qaraqe, Saif Al-Kuwari | FedPot: A Quality-Aware Collaborative and Incentivized Honeypot-Based Detector for Smart Grid Networks | 2024 | Early Access | Security Data models Costs Training Industrial Internet of Things Data integrity Data privacy AMI Honeypot-Based Detector Security Model Machine Learning Incentive Mechanism Collaborative Learning | Honeypot technologies provide an effective defense strategy for the Industrial Internet of Things (IIoT), particularly in enhancing the Advanced Metering Infrastructure’s (AMI) security by bolstering the network intrusion detection system. For this security paradigm to be fully realized, it necessitates the active participation of small-scale power suppliers (SPSs) in implementing honeypots and engaging in collaborative data sharing with traditional power retailers (TPRs). To motivate this interaction, TPRs incentivize data sharing with tangible rewards. However, without access to an SPS’s confidential data, it is daunting for TPRs to validate shared data, thereby risking SPSs’ privacy and increasing sharing costs due to voluminous honeypot logs. These challenges can be resolved by utilizing Federated Learning (FL), a distributed machine learning (ML) technique that allows for model training without data relocation. However, the conventional FL algorithm lacks the requisite functionality for both the security defense model and the rewards system of the AMI network. This work presents two solutions: first, an enhanced and cost-efficient FedAvg algorithm incorporating a novel data quality measure, and second, FedPot, an effective security model with a fair incentive mechanism under an FL architecture. Accordingly, SPSs are limited to sharing the ML model they learn after efficiently measuring their local data quality, whereas TPRs can verify the participants’ uploaded models and fairly compensate each participant for their contributions through rewards. Moreover, the proposed scheme addresses the problem of harmful participants who share subpar models while claiming high-quality data through a two-step verification approach. Simulation results, drawn from realistic microgrid network log datasets, demonstrate that the proposed solutions outperform state-of-the-art techniques by enhancing the security model and guaranteeing fair reward distributions. | 10.1109/TNSM.2024.3387710 |
Taha Ben Salah, Marios Avgeris, Aris Leivadeas, Ioannis Lambadaris | VNF Placement and Dynamic NUMA Node Selection Through Core Consolidation at the Edge and Cloud | 2024 | Early Access | Servers Delays Resource management Costs Random access memory Quality of service Throughput Network Function Virtualization Service Function Chaining Edge Computing Cloud Computing Resource Allocation Non-Uniform Memory Access | The recent networking trends, driven primarily by virtualization technologies such as Network Function Virtualization (NFV) and Service Function Chaining (SFC), pave the way for next-generation network services. In the 5G and beyond era, such services usually have strict delay requirements, and the wider adoption of distributing their computational needs across the Edge-to-Cloud continuum is certainly a step in the right direction. However, the majority of the optimization solutions for placing the virtualized services so far focus on server selection, leaving other areas, such as the impact of Non-Uniform Memory Access (NUMA) and CPU core selection, underexplored. In this work, we formulate the problem of placing services as SFCs on an Edge/Cloud infrastructure as a Mixed Integer Programming (MIP) problem. Then, we propose a heuristic algorithm called “Dynamic NUMA node Selection through Cores consolidation – DySCo” to solve it, which optimizes the placement in terms of server, NUMA, and core selection. To the best of our knowledge, this is the first attempt to optimize network service placement in an Edge-Cloud interplay. Extensive simulation evaluation shows that DySCo is able to perform close to optimal while finding a solution in a real-time fashion. Compared to a mix of baselines and solutions from the literature modified to treat this new problem, DySCo reduces on average the deployment cost by 17.53% and the delay by 28.88% for a given SFC. | 10.1109/TNSM.2024.3387275 |
Yuxing Tian, Lei Liu, Jie Feng, Qingqi Pei, Chen Chen, Jun Du, Celimuge Wu | Towards Robust and Generalizable Federated Graph Neural Networks for Decentralized Spatial-Temporal Data Modeling | 2024 | Early Access | Data models Servers Training Graph neural networks Message passing Sensors Predictive models Federated learning split learning graph neural network spatial-temporal forecasting | Federated learning has been combined with graph learning for modeling spatial-temporal data while maintaining data confidentiality and safety. However, several issues remain: 1) In practical usage, some clients may be unable to participate in model inference due to poor network signal, malicious attacks, etc. 2) In the communication process, the uploaded information is easily disturbed by noise, and the performance of the graph model is seriously affected by its low robustness. Additionally, the assumption of identical distribution between the training and testing domains does not hold in practical scenarios, resulting in overfitting and poor generalization ability of the trained models. 3) The relations that exist among clients may change dynamically over time, and manually constructing the graph structure of clients may not accurately represent the relations among them. In this paper, we address all the above limitations by proposing a robust hierarchical split-federated graph model named DCSFG. Specifically, DCSFG combines split-federated learning and a spatial-temporal graph model to better capture the spatial-temporal dependencies. We propose a Dropclient method and introduce uncertainty estimation to enhance the robustness and generalization ability of the model. We also design a dual-sub-decoder structure for clients so that they can perform predictions locally and independently when they are unable to participate in the inference process. A novel hierarchical graph message passing structure is proposed to enable each client to perceive both global and local information. Extensive experimental results demonstrate the effectiveness of DCSFG. | 10.1109/TNSM.2024.3386740 |
Gustavo Z. Bruno, Gabriel M. Almeida, Aditya Sathish, Aloízio P. da Silva, Luiz A. DaSilva, Alexandre Huff, Kleber V. Cardoso, Cristiano B. Both | Evaluating the Deployment of a Disaggregated Open RAN Controller On a Distributed Cloud Infrastructure | 2024 | Early Access | Cloud computing Computer architecture Costs Data centers Task analysis Resource management Real-time systems near-RT RIC B5G cloud-native O-RAN disaggregation CNF placement | This article investigates the deployment of a Near-Real-Time Radio Access Network (RAN) Intelligent Controller (near-RT RIC) on a distributed cloud infrastructure composed of multiple physical sites with different amounts of resources and associated costs. The challenge is dynamically adapting the near-RT RIC deployment to the most cost-effective arrangement while meeting the latency requirements between the near-RT RIC and the controlled nodes. We introduce an optimization model to solve the disaggregated near-RT RIC placement problem, considering a cloud-native infrastructure to minimize the placement cost while satisfying the latency-sensitive control loop requirements across the cloud-edge continuum. Moreover, we describe an experimental environment we created using geographically disparate cloud sites. We present data detailing the latencies of the communication links among these sites and the costs incurred in using this real-world infrastructure. We conduct a performance evaluation of the near-RT RIC deployment, comparing the distributed approach with a traditional monolithic strategy and evaluating positioning costs; deployment, setup, and registration times; and the control loop latency across three scenarios. Our results show that in a cloud-native environment, the disaggregated near-RT RIC allows cost savings of up to 60% in comparison to a monolithic near-RT RIC while satisfying the control loop latency and achieving time efficiency in terms of deployment and registration of xApps and near-RT RIC components. | 10.1109/TNSM.2024.3386902 |
Ru Huo, Xiangfeng Cheng, Chuang Sun, Tao Huang | A Cluster-Based Data Transmission Strategy for Blockchain Network in the Industrial Internet of Things | 2024 | Early Access | Blockchains Industrial Internet of Things Edge computing Data communication Computer architecture Topology Cloud computing Industrial Internet of Things (IIoT) blockchain edge computing clustering data transmission strategy | The proliferation of devices and data in the Industrial Internet of Things (IIoT) has rendered the traditional centralized cloud model unable to meet the stringent wide-scale and low-latency requirements of these IIoT scenarios. As an emerging technology, edge computing enables real-time processing and analysis on devices situated closer to the data source while reducing bandwidth requirements. Blockchain, being decentralized, can enhance data security. Therefore, edge computing and blockchain are integrated in the IIoT to reduce latency and improve security. However, the inefficient data transmission of blockchain leads to increased transmission latency in the IIoT. To address this issue, we propose a cluster-based data transmission strategy (CDTS) for blockchain networks. Initially, an improved weighted label propagation algorithm (WLPA) is proposed for clustering blockchain nodes. Subsequently, a spanning tree topology construction (STTC) is designed to simplify the blockchain network topology based on the node clustering results. Additionally, leveraging the clustered nodes and tree topology, we propose a data transmission strategy to speed up data transmission. Simulation experiments show that CDTS effectively reduces data transmission time and better supports large-scale IIoT scenarios. | 10.1109/TNSM.2024.3387120 |
Mohammad Hossein Shokouhi, Mohammad Hadi, Mohammad Reza Pakravan | Mobility-Aware Computation Offloading for Hierarchical Mobile Edge Computing | 2024 | Early Access | Servers Cloud computing Task analysis Computer architecture Edge computing Optimization Costs Computation offloading Edge computing Mobile edge computing Mobility management Resource allocation | Mobile edge computing (MEC) is a promising technology that aims to reduce the total latency of user equipment (UE) by deploying computation resources at the edge of mobile networks. UE mobility is a challenging factor that causes the traditional MEC architecture to suffer from several issues, such as decreased efficiency and frequent service interruptions. One popular method to manage UE mobility is virtual machine (VM) migration, which requires high bandwidth and causes undesirable latency, rendering it impractical for real-time tasks with stringent latency requirements. This paper proposes a hierarchical architecture for MEC networks that facilitates mobility management and mitigates the need for VM migration. To utilize this architecture efficiently, a Markov chain-based predictive strategy is introduced to predict UE mobility. Afterward, an optimization problem is formulated to make the optimal long-term offloading decisions for UEs such that their expected cost is minimized subject to latency commitments and resource consumption constraints. Simulation results demonstrate that the proposed scheme reduces the cost of high-mobility UEs by up to 25% compared to traditional schemes. Furthermore, measures of movement direction predictability and offloading decision popularity are introduced, providing insights into the behavior of the proposed and counterpart schemes. | 10.1109/TNSM.2024.3386845 |
Zhenzhen Han, Guofeng Zhao, Yu Hu, Chuan Xu, Kefei Cheng, Shui Yu | Dynamic Bond Percolation-Based Reliable Topology Evolution Model for Dynamic Networks | 2024 | Early Access | Network topology Topology Predictive models Wireless communication Reliability Mathematical models Interference Dynamic network Markov chain Reliable topology evolution Dynamic bond percolation | With the development of wireless communications, the 6G network is evolving toward dynamics, complexity, and integration. The mobility of nodes and the intermittency of links lead to frequent variations in the network topology. When constructing the topology model, the reliability of the topology is affected not only by the physical properties of wireless links but also by the evolution of node and link states, which is indispensable for improving the accuracy of the topology model. In this paper, we propose an evolution model based on dynamic bond percolation to characterize reliable topology evolution. Firstly, key factors that cause network topology changes are analyzed, integrating the characteristics of node mobility and link channel conditions. In particular, signal interference, node buffers, and link availability are modeled for wireless link states. Then, the interactions between adjacent links are formulated by an extended Dynamic Bond Percolation (DBP) model to obtain the topology state transition matrix, which can accurately depict changes in link connectivity. Based on the quantitative analysis of wireless link states, a Markov chain and a master equation are employed to build the Dynamic Topology Evolution (DTE) model. Meanwhile, the network topology prediction problem is transformed into a linear system solution problem to obtain the steady-state network topology based on the DTE algorithm. Finally, the results suggest that utilizing the DTE model can significantly improve the accuracy of topology prediction and overall network performance. | 10.1109/TNSM.2024.3386613 |
Jun Wang, Hai Pei, Ruiliang Wang, Ruiquan Lin, Zhou Fang, Feng Shu | Defense Management Mechanism for Primary User Emulation Attack Based on Evolutionary Game in Energy Harvesting Cognitive Industrial Internet of Things | 2024 | Early Access | Games Industrial Internet of Things Game theory Throughput Security Jamming Energy harvesting Cognitive Industrial Internet of Things (CIIoT) evolutionary game theory (EGT) primary user emulation attack (PUEA) energy harvesting (EH) | The Cognitive Industrial Internet of Things (CIIoT) permits Secondary Users (SUs) to opportunistically use the spectrum bands owned by Primary Users (PUs). However, in the absence of the PUs, selfish SUs can mislead the normal SUs into leaving the spectrum bands by initiating a Primary User Emulation Attack (PUEA). In addition, the application of Energy Harvesting (EH) technology can exacerbate this security threat, because the energy cost of initiating a PUEA is offset to some extent by EH, which proactively replenishes the energy of the selfish nodes. Thus, EH technology can increase the motivation of the selfish SUs to initiate a PUEA. To address this higher attack motivation in CIIoT scenarios where EH technology is applied, this paper first establishes an EH-PUEA system model to study security countermeasures in this severe PUEA scenario. Next, a new reward and punishment defense management mechanism is proposed, and the dynamics of the selfish SUs and the normal SUs in a CIIoT network are studied based on Evolutionary Game Theory (EGT). The punishment parameter is adjusted according to the dynamics of the selfish SUs to reduce the proportion of the selfish SUs’ group choosing an attack strategy, thereby increasing the throughput achieved by the normal SUs’ group. Finally, simulation results show that the proposed mechanism is superior to the conventional mechanism in terms of the throughput achieved by the normal SUs’ group in a CIIoT scenario with EH technology applied. | 10.1109/TNSM.2024.3386617 |
Hussah Albinali, Farag Azzedin | Towards RPL Attacks and Mitigation Taxonomy: Systematic Literature Review Approach | 2024 | Early Access | Taxonomy Security Reviews Routing Measurement Internet of Things Vectors RPL routing attacks attacks classification mitigation methods classification systematic literature review | The RPL protocol was initially created to connect multiple IP-based communication applications in low-power and lossy network environments. However, it has become a target for numerous attacks, which has sparked a need for researchers to develop effective security measures. In this article, we conduct a comprehensive literature review covering 175 papers on RPL attacks and their mitigation solutions. Our rigorous selection process provides a deep understanding of how RPL is exploited for different attacks and of the solutions designed to mitigate them. Our study proposes an RPL attack taxonomy based on the attack vector, along with a taxonomy of mitigation solutions. RPL attacks are classified by attack vector, such as generating, dropping, modifying, or replaying specific control messages. The mitigation solutions include authenticating, updating, discarding, encrypting, and isolating attacking nodes. Furthermore, we present the evaluation metrics used in the literature to assess the performance of mitigation solutions, and we propose criteria for assessing their efficiency. The review aims to uncover innovative techniques that can effectively safeguard networks against routing attacks. | 10.1109/TNSM.2024.3386468 |
Bin Yuan, Chi Zhang, Jiajun Ren, Qunjinming Chen, Biang Xu, Qiankun Zhang, Zhen Li, Deqing Zou, Fan Zhang, Hai Jin | Towards Automated Attack Discovery in SDN Controllers Through Formal Verification | 2024 | Early Access | Control systems Topology Process control Security Protocols Network topology Formal verification SDN security network security model checking | Software-Defined Networking (SDN), a novel network architecture built on the separation of the data plane and the control plane, brings centralization and extensibility to network management, as well as new attacks that exploit the flexibility of SDN. OpenFlow, the protocol adopted by the majority of SDN deployments, provides a widely used definition of the communication between the controller and the switch, resulting in similar implementations across vendors. In this paper, we focus on the mechanisms of packet processing and topology discovery and their fundamental weaknesses caused by general implementations or device limitations. Beyond exposing common vulnerabilities, the universal standard mechanisms of basic SDN functions also enlighten us to present an automated attack discovery method based on formal verification with a generic model of the SDN system. We describe the abstraction of the SDN components, their key functions, and their communications, along with the malicious operations that could be executed by malicious hosts and malicious switches, and translate them into a formal model of the SDN system. The formal verification, carried out with assertions representing the security properties derived from the common vulnerabilities of the SDN system, reports the potential attack paths, each of which shows an attack process. Our evaluation shows that our method can discover feasible attack paths efficiently and effectively, with 23 attacks being identified, among which 2 are new. We further demonstrate the practicality of the 2 new attacks. | 10.1109/TNSM.2024.3386404 |
Baobao Chai, Jiguo Yu, Biwei Yan, Yong Yu, Shengling Wang | BSCDA: Blockchain-Based Secure Cross-Domain Data Access Scheme for Internet of Things | 2024 | Early Access | Blockchains Encryption Authentication Servers Organizations Computer science Access control Cross-domain Data Access Certificate Blockchain Key Negotiation | In the current hypergrowth phase of the Internet of Things, cross-domain data access occurs more and more frequently. However, the lack of trust between domains makes cross-domain data access extremely hard. Traditional schemes typically depend on a third party to establish trust between data-accessing entities, which can easily result in a single point of failure. To overcome this challenge, this paper proposes BSCDA, a blockchain-based cross-domain data access scheme designed to enable secure data transmission across domains. The decentralization, transparency, and anti-tampering features of blockchain resolve the issue of a single point of failure and foster trust among the various domains. Specifically, a certificate management method is developed to address the certificate storage issue by leveraging a mapping table to store the revocation certificate index on the blockchain. This method not only ensures the verifiability of the certificate but also reduces the storage overhead. Additionally, a four-party key agreement mechanism is designed to guarantee secure data transmission during cross-domain data access. A security analysis proves the feasibility of our proposed scheme, and extensive experiments demonstrate its superiority in cross-domain data access. | 10.1109/TNSM.2024.3385777 |
Gurpreet Singh, Keshav Sood, P. Rajalakshmi, Dinh Duc Nha Nguyen, Yong Xiang | Evaluating Federated Learning Based Intrusion Detection Scheme for Next Generation Networks | 2024 | Early Access | Data models Training Intrusion detection Computational modeling Performance evaluation Next generation networking Federated learning Cyber-security Federated learning Intrusion detection deep learning Anomaly detection | The proliferation of billions of heterogeneous Internet of Things (IoT) devices at a rapid pace has resulted in a marked expansion of attack surfaces. Numerous new attacks are constantly emerging to undermine the network’s availability, data confidentiality, and systems’ integrity due to inadequate security measures and resource limitations. Intrusion detection systems (IDSs) are used as the first line of defense to identify early instances of cyber-attacks targeting critical points. However, Next-Generation Networks (NGNs) with dense connectivity pose a challenge for traditional IDS approaches, as they raise concerns about users’ data privacy. Federated learning-based IDSs (Fed-IDSs) are an emerging and promising solution, as they permit the training of machine learning models on decentralized data stored on devices without compromising privacy. However, Fed-IDSs also have some unique issues. We identified that existing Fed-IDSs perform poorly because the datasets used for evaluation, as well as real-world data, are highly imbalanced, with classes that are not uniformly distributed. Motivated by this, we developed a novel IDS to effectively address the problem of class imbalance in federated learning at both the local and global levels. We then evaluated the performance of our Fed-IDS under both independent and identically distributed (IID) and non-IID data settings and observed that its generalizability in detecting various attacks improved greatly. Extensive experiments are conducted to illustrate the effectiveness and benefits of this proposal. | 10.1109/TNSM.2024.3385385 |
Prajjwal Gupta, Aviral Jain, Sumaiya Thaseen Ikram, Thippa Reddy Gadekallu, Gautam Srivastava | DAIDNet: A Lightweight Domain Aware Architecture for Automated Detection of Network Penetrations | 2024 | Early Access | Artificial neural networks Intrusion detection Feature extraction Training Telecommunication traffic Support vector machines Monitoring Autoencoders Cyber Security Intrusion detection Systems Neural Networks | Intrusion detection and prevention has been an area of active research in the use of machine learning for cyber security practices. Artificial Neural Networks (ANNs) are among the best-known models for accurately classifying intrusions into attack classes or benign profiles, but they are resource-intensive. A server is typically associated with a large amount of high-frequency data. In such a condition, deploying an ANN for this purpose can cause significant overhead and delays in the delivery of packets to their intended destination. Furthermore, existing deep learning approaches do not address the similarity between different attack classes, information that can be used to select defense strategies. We propose a lightweight architecture called DAIDNet that utilizes the information contained in the domain of classes extracted from packet distributions to make better predictions. Results show that DAIDNet achieves better accuracy while being significantly smaller in size than a baseline ANN model. DAIDNet achieves validation accuracy of 99.66% and 99.98% on the NSL-KDD and CICIDS-2018 datasets, respectively. | 10.1109/TNSM.2024.3384942 |
Jin Yang, Xinyun Jiang, Yulin Lei, Weiheng Liang, Zicheng Ma, Siyu Li | MTSecurity: Privacy-Preserving Malicious Traffic Classification using Graph Neural Network and Transformer | 2024 | Early Access | Cryptography Feature extraction Deep learning Transformers Payloads Privacy Graph neural networks Network intrusion detection encrypted malicious traffic classification deep learning graph neural networks transformer | Encrypting network traffic is an effective means of safeguarding user privacy and sensitive information. However, it also introduces potential vulnerabilities that can be exploited by network attackers, posing significant security risks to the Internet. In response to the challenge of low accuracy in existing methods for classifying encrypted malicious traffic, we propose a novel approach named MTSecurity, which leverages Transformer and Graph Neural Network technologies. This method automatically extracts raw byte features and graph-based traffic interaction features from encrypted malicious flows, combining them to substantially enhance the classification accuracy of encrypted malicious traffic. Furthermore, we introduce a graph structure called the Malicious Traffic Interaction Graph (MTIG) for representing encrypted malicious traffic. MTIG is based on the client-server interaction process and incorporates multidimensional traffic features. Experimental results demonstrate that the proposed MTSecurity model consistently performs well across different datasets, surpassing state-of-the-art methods. It achieves an accuracy of 0.9946 and an F1 score of 0.9940 on the MCFP dataset, and an accuracy of 0.9948 with an F1 score of 0.9934 on the USTC-TFC dataset. | 10.1109/TNSM.2024.3383851 |
Elie Inaty, Ghattas Akkad, Martin Maier | ANFIS-DBA: ANFIS Based Dynamic Bandwidth Allocation Scheme for Latency Driven Cost Effective Next Generation PON | 2024 | Early Access | Optical network units Bandwidth Costs Passive optical networks 6G mobile communication Wavelength division multiplexing Prediction algorithms 6G DBA jitter latency NG-EPON fuzzy logic artificial neural networks ANFIS throughput | Motivated by the sixth-generation (6G) system’s stringent latency and throughput requirements, we propose a high-performance dynamic bandwidth allocation (DBA) scheme for the next-generation passive optical network (PON). The proposed algorithm performs simultaneous time and wavelength scheduling, which makes the optimization problem extremely complex. For this reason, we propose a novel adaptive neuro-fuzzy inference system based DBA (ANFIS-DBA) algorithm, which simplifies our complex scheduling algorithm. The proposed ANFIS-DBA takes the latency and traffic cost as inputs and the allocated channels and bandwidth as outputs. Its prediction model is based on input–output experimental data generated from the original optimization problem, which is then used to create a Sugeno-type fuzzy inference system (FIS) for estimating the number of channels that will be scheduled for an optical network unit (ONU). Predicted data resulting from the proposed ANFIS-DBA are compared with the optimal values, indicating that both models perform nearly the same, with a root mean square error (RMSE) of less than one. In addition, a sensitivity analysis is conducted, showing that the proposed ANFIS-DBA is more sensitive to the traffic cost input. The numerical achievements of the paper show that the proposed ANFIS-DBA is capable of accommodating a network throughput higher than 1.5 Tbps while securing a latency and jitter below 100 μs and 10 μs, respectively, fully meeting the 6G network performance requirements. | 10.1109/TNSM.2024.3383951 |
Jayasree Sengupta, Mike Kosek, Justus Fries, Simone Ferlin-Reiter, Vaibhav Bajpai | On Cross-Layer Interactions of QUIC, Encrypted DNS and HTTP/3: Design, Evaluation and Dataset | 2024 | Early Access | Protocols Domain Name System Privacy Servers IP networks Encryption Cloud computing QUIC Web HTTP/3 DNS | Every Web session involves a DNS resolution. While in the last decade we witnessed a promising trend towards an encrypted Web in general, DNS encryption has only recently gained traction with the standardisation of DNS over TLS (DoT) and DNS over HTTPS (DoH). Meanwhile, the rapid rise of QUIC deployment has opened up an exciting opportunity to utilise the same protocol not only to encrypt Web communications, but also DNS. In this paper, we evaluate this benefit of using QUIC to coalesce name resolution via DNS over QUIC (DoQ) and Web content delivery via HTTP/3 (H3) with 0-RTT. We compare this scenario using several possible combinations where H3 is used in conjunction with DoH and DoQ, as well as with unencrypted DNS over UDP (DoUDP). We observe that, when using H3 1-RTT, page load times with DoH can be inflated by >30% over fixed-line and by >50% over mobile when compared to unencrypted DNS with DoUDP. However, this cost of encryption can be drastically reduced when encrypted connections are coalesced (DoQ + H3 0-RTT), reducing page load times by 1/3 over fixed-line and 1/2 over mobile, overall making connection coalescing with QUIC the best option for encrypted communication on the Internet. | 10.1109/TNSM.2024.3383787 |
Lu Yang, Songtao Guo, Defang Liu, Yue Zeng, Xianlong Jiao, Yuhao Zhou | ConViTML: A Convolutional Vision Transformer-Based Meta-Learning Framework for Real-Time Edge Network Traffic Classification | 2024 | Early Access | Feature extraction Metalearning Training Transformers Task analysis Payloads Measurement Meta-learning network traffic classification edge computing visual transformer | Traditional traffic classification methods struggle to identify emerging network traffic due to the need for model retraining, which hampers the real-time response of deployed edge devices. Furthermore, emerging network traffic samples are often scarce, and traditional methods often treat a session as a single image, thereby overlooking essential structural features. These factors can result in poor generalization ability of the trained model. To overcome these challenges, we propose ConViTML (Convolutional Vision Transformer-based Meta-Learning), a real-time end-to-end network traffic classification framework that employs meta-learning to avoid model retraining. We propose a novel feature extraction network, the Convolutional Visual Transformer (ConViT), merging a Convolutional Neural Network (CNN) and a Visual Transformer (ViT). ConViT can directly extract low-dimensional discriminative features containing both basic and structural features of the session, which is vital for improving detection accuracy and accelerating convergence in a data-scarce environment. Furthermore, we employ a Packet-based Relation Network (PRN) to analyze the matching degree of support samples and query samples. Therefore, accurate classification in novel traffic identification tasks can be achieved with just a few labeled samples, eliminating extensive data collection and labeling operations. Finally, we replace various feature extractors and compare our approach with the classic meta-learning framework Relation Network (RelationNet). Extensive experimental results demonstrate that ConViTML outperforms others across various performance indicators. | 10.1109/TNSM.2024.3383218 |
Jian Li, Qinglin Zhao, Shaohua Teng, Naiqi Wu, Guanghui Li, Yi Sun | HSA-EDI: An Efficient One-Round Integrity Verification for Mobile Edge Caching Using Hierarchical Signature Aggregation | 2024 | Early Access | Servers Data integrity Image edge detection Computational efficiency Heuristic algorithms Maintenance engineering Quality of service Edge data integrity mobile edge computing per-edge one-round signature aggregation | Mobile edge computing allows for high-performance and low-latency applications by delegating computation and data processing tasks to edge servers. However, ensuring the integrity of cached data on these servers can be challenging due to their limited resources. Current designs often use a per-edge multi-round approach, which necessitates multiple communication rounds between each edge server and the application vendor (AppVend). This approach results in high communication and computational costs, as well as the straggler effect during batch verification. To address these inefficiencies, we propose a Hierarchical Signature Aggregation for Edge Data Integrity (HSA-EDI) verification design. Our design adopts a novel per-edge one-round approach, which significantly reduces the number of communication rounds to one for each edge server, while mitigating the impact of stragglers. Furthermore, it remarkably reduces computational costs through a hierarchical aggregation mechanism. This mechanism supports intra-edge signature aggregation at the edge server level, followed by inter-edge aggregation at the AppVend, which enhances overall efficiency. We then conduct a theoretical analysis of HSA-EDI’s correctness, security, and communication, computation, and storage efficiency. Experimental results validate its superior performance over state-of-the-art designs. | 10.1109/TNSM.2024.3383239 |
Rui He, Bangbang Ren, Junjie Xie, Deke Guo, Laiping Zhao | Efficient Online Scheduling of Service Function Chains Across Multiple Geo-Distributed Regions | 2024 | Early Access | Servers Bandwidth Service function chaining Memory management Hardware Dynamic scheduling Costs Network function virtualization SFC scheduling deadline constraint geo-distributed regions | Traditional network functions are typically implemented using specialized hardware appliances, which are expensive and difficult to upgrade. Network Function Virtualization (NFV) offers an effective approach to address these challenges by implementing comparable functionalities on commercial servers through software-based virtualization. In NFV, a sequence of Virtual Network Functions (VNFs) is orchestrated to form a Service Function Chain (SFC) that provides flexible network services. However, scheduling SFCs with multiple resource constraints to achieve high reliability poses a critical challenge. Existing approaches often assume offline scheduling and overlook the dynamic nature of heterogeneous resource loads across regions. Moreover, they primarily focus on individual VNFs rather than considering the cross-region scheduling of the entire SFC, which can result in increased transmission delay. In this paper, we investigate the problem of service function chain scheduling across multiple regions (SFCS-MR) with deadline constraints, aiming to maximize the success rate of requests. We formulate this problem as an Integer Linear Programming (ILP) model and prove its NP-hardness. To address this problem effectively, we propose a two-stage algorithm that determines whether an SFC requires cross-region scheduling and selects the suitable regions for its execution. Through extensive experimental evaluations, we demonstrate that our cross-region SFC scheduling solution can achieve a maximum improvement of 32.42% in the overall request success rate compared to benchmarks. | 10.1109/TNSM.2024.3383213 |
Xueyang Feng, Zhongyuan Jiang, Wei You, Jie Yang, Xinghua Li, Jianfeng Ma | PSMA: Layered Deployment Scheme for Secure VNF Multiplexing Based on Primary and Secondary Multiplexing Architecture | 2024 | Early Access | Costs Multiplexing Security Integer linear programming Ultra reliable low latency communication Smart manufacturing Integer programming NFV Network Slicing Multiplexing Architecture Security | The adoption of SDN/NFV opens avenues for efficient network slicing deployment and cost control. However, the dynamic cost reduction brought by deployment location optimization is not suitable for all scenarios. To further reduce costs, we recommend a sharing strategy in NFV. In this paper, we introduce a two-layer VNF multiplexing architecture, named PSMA, which guarantees both efficient VNF sharing operations and secure slicing during multiplexing. Leveraging SDN/NFV features, the proposed scheme separates data processing from key management and establishes secure connections using SDN’s programmable routing. The provided framework integrates comprehensive life-cycle management and key delivery mechanisms. The article substantiates the scheme’s availability through extensive simulations of VNF reuse on a randomly generated network with Virtual Network Requests (VNRs). The empirical results indicate a significant cost reduction of 5% to 10%, particularly pronounced in scenarios involving a substantial number of short-lived and transitional VNFs. | 10.1109/TNSM.2024.3382676 |