diff --git "a/20240318/2403.05822v2.json" "b/20240318/2403.05822v2.json" new file mode 100644--- /dev/null +++ "b/20240318/2403.05822v2.json" @@ -0,0 +1,222 @@ +{ + "title": "TrafficGPT: Breaking the Token Barrier for Efficient Long Traffic Analysis and Generation", + "abstract": "Over the years, network traffic analysis and generation have advanced significantly, progressing from traditional statistical methods to sophisticated deep learning techniques. This progress has improved the ability to detect complex patterns and security threats, as well as to test and optimize network performance. However, obstacles persist, such as the dependence on labeled data for analysis and the difficulty of generating traffic samples that follow realistic patterns. Pre-trained deep neural networks have emerged as powerful tools for addressing these issues, offering improved performance by learning robust data representations from large unlabeled datasets. Despite their benefits, existing pre-trained models face challenges such as a token-length limitation, which restricts their usefulness in comprehensive traffic analysis and realistic traffic generation.
To address these challenges, we introduce TrafficGPT, a deep learning model that tackles the complex challenges of long-flow classification and generation. The model uses generative pre-training with a linear attention mechanism, which raises the supported token length from the previous limit of 512 to 12,032 tokens.
TrafficGPT demonstrates superior performance in classification tasks, reaching state-of-the-art levels. 
In generation tasks, it closely resembles real traffic flows, with low JS divergence and an F1 score close to 0.5 (representing a random guess) when discriminating generated data from real data.
These advancements hold promise for future applications in both traffic flow classification and generation tasks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The analysis and generation of network traffic have long been two critical tasks. Network traffic analysis can be used to identify patterns, detect security threats, and optimize network performance, among other applications[1, 2, 3]. Meanwhile, network traffic generation can be used to simulate various scenarios for testing network infrastructure, validating security measures, and training machine learning models to recognize and respond to different network behaviors[4, 5].
Network traffic analysis has made significant strides in recent years, transitioning from traditional statistical methods to more advanced deep learning techniques. Early approaches relied heavily on manually crafted features, which limited their ability to capture complex patterns in raw traffic data[6, 7, 8]. However, the advent of deep learning methods has revolutionized this field by enabling automatic extraction of intricate patterns, leading to remarkable performance improvements[9, 10, 11, 12, 13]. Despite these advancements, a critical obstacle remains: the dependency on labeled training data. 
The quantity and distribution of labeled data greatly influence the effectiveness and robustness of deep learning models, and scarce or skewed labels lead to biases and poor generalization in real-world scenarios.
On the other hand, significant progress has been made in generating network traffic, especially with the emergence of software-defined networking and network function virtualization. Research in this field has led to the development of experimental environments that resemble actual networks in terms of node variety and network topology[5]. However, generating diverse and realistic traffic patterns continues to be a major challenge. Despite the increased accessibility of experimental setups, creating traffic that accurately reflects real-world scenarios remains a difficult task.
In recent times, pre-trained deep neural networks have emerged as leading methodologies for both network traffic analysis and generation tasks. One such model, ET-BERT[14], uses the BERT architecture and has showcased superior performance compared to models without pre-training across various traffic classification tasks. Another model, Lens[15], employs the T5 architecture for pre-training and has achieved state-of-the-art results in generating packet header fields. By leveraging large amounts of unlabeled data, pre-training-based approaches adeptly learn robust representations. These representations can then be applied to downstream tasks through fine-tuning with limited labeled data, exemplifying pre-training's versatility and efficacy in network analysis and generation.
While pre-trained models have many benefits, they encounter two primary challenges. Firstly, the tokenization process in these models needs refinement. Existing tokenization methods have shortcomings, as they struggle to accurately reconstruct pcap files from the token lists generated by the model. 
This limitation hinders their practical usefulness. Secondly, pre-trained models have a significant constraint on token length. Most pre-trained models used for traffic analysis are restricted to a maximum of 512 tokens, which is insufficient for realistic traffic analysis. This issue becomes even more pronounced in traffic generation, where the token count for a single packet can exceed 512, making it difficult to generate real-world traffic samples.
To address these challenges, we propose TrafficGPT, a deep-learning model that leverages generative pre-training with the linear attention mechanism. Starting with the tokenization issue, we develop a reversible token representation method. This approach allows for the direct generation of pcap files from token lists, effectively solving the problem of accurately reconstructing traffic flows from the model's output. Furthermore, to overcome the token-length limitation, we implement a linear attention mechanism in place of the traditional quadratic self-attention mechanism found in the Transformer[16]. This modification significantly increases the model's capacity, supporting a maximum token length of 12,032. Together, these enhancements greatly improve the model's capabilities in both traffic analysis and generation.
Our major contributions are summarized as follows.
We introduce TrafficGPT, a deep-learning model using generative pre-training. 
It utilizes a linear attention mechanism in place of the traditional quadratic self-attention mechanism in the Transformer, supporting sequences of up to 12,032 tokens and making it suitable for both flow classification and generation tasks.
We develop a reversible token representation method, which enables bidirectional mapping between pcap files and token representations.
This approach facilitates the direct generation of pcap files from token lists, effectively addressing the challenge of accurately reconstructing traffic flows from the model's output.
Our model performs exceptionally well in classification experiments, achieving state-of-the-art results with an average improvement of 2% in Macro F1-Score across various datasets.
In the generation evaluation, our model demonstrates its ability to generate traffic flows similar to real ones, with an average JS divergence of 0.1605 for packet headers and 0.2396 for flow features.
Moreover, the F1 score for discriminating our generated flows is 0.6683, close to 0.5 (a random guess), indicating that our generated flows are realistic and difficult to distinguish from actual ones.
Roadmap. Sec. II introduces related work.
Sec. III elaborates the system design and Sec. IV evaluates it.
Sec. V discusses the limitations of our work and promising directions for future research.
We conclude in Sec. VI." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "This section provides an overview of the existing literature related to our work, focusing on three main areas: traffic classification, network traffic generation, and advancements in Transformer architectures for handling long sequences efficiently. 
Each of these areas contributes to the foundation upon which our proposed model is built, addressing the challenges and limitations encountered in current methodologies.
Traffic Classification.
There are several papers on the large-scale pre-training of models in the traffic field[14, 17, 18, 19, 20, 15].
He et al. pretrained a Transformer on the payload of encrypted packets[20].
Lin et al. extracted bursts from the traffic and used the burst bytes as the pretext for pre-training the BERT model, naming it ET-BERT[14].
Meng et al. proposed a generative pre-trained Transformer and used the first three packets with a maximum token size of 512 for training and testing; the results outperformed ET-BERT in several tasks[17].
Zhao et al. introduced a masked autoencoder-based model for traffic classification, converting the initial five packets of each flow into images and subsequently employing pre-training based on the Transformer[18].
Guthula et al. put forth a hierarchical Transformer architecture for flow modeling, incorporating a packet-burst-flow structure[19].
Wang et al. introduced Lens, a model that leverages the T5 architecture to learn representations from large-scale data[15].
Prior to the advent of pre-trained models, researchers typically relied on small-scale datasets for model training and testing.
Hayes et al. proposed a system for website fingerprinting on Tor, utilizing random forests to extract fingerprints[6].
Yan et al. examined keyword-based search fingerprinting through the development of a hand-crafted feature set[21].
Rimmer et al. 
introduced a website fingerprinting attack over Tor by comparing multiple neural network structures[9].
Liu et al. examined the use of an attention-based bidirectional gated recurrent unit neural network for the identification of HTTPS web services[10].
Holland et al. integrated nPrint with automated machine learning, streamlining the workflow for traffic classification[22].
Luo et al. developed a Transformer-based IoT device-type identification method[11].
Lin et al. devised an adaptive balancing training method to address dataset imbalances and employed multi-level features for detecting malicious traffic[23].
Li et al. achieved open-world Android app user action identification via synthesizing traffic and binary analysis[24].
Song et al. proposed an incremental and interpretable recurrent neural network model for encrypted traffic classification[12].
Qu et al. designed a hierarchical deep learning model capable of integrating multiple flows of information[13].
Guan et al. leveraged federated learning for encrypted traffic classification[25].
Xie et al. devised data augmentation techniques tailored for TCP traffic, leveraging BYOL[26] for self-supervised learning of robust features[27].
Network Traffic Generation.
Adeleke et al. comprehensively analyzed the traffic generation tools used by researchers over the past decade, including 92 different tools such as application layer generators, traffic replay tools, model-based traffic generators, and more[5].
Ring et al. utilized Generative Adversarial Network (GAN) techniques to generate flow-based characteristics[28].
Cheng et al. 
utilized a convolutional neural network-based GAN to generate IP packets[29].
Manocchio et al. mitigated the issue of model collapse by incorporating the concept of Manifold Guided Generative Adversarial Networks in the synthetic generation of network flows[30].
Fan et al. integrated the concept of differential privacy into GANs to generate flow features, aiming to achieve secure sharing of network data[31].
Zolbayar et al. utilized GANs to generate traffic features and investigated their adversarial impact on certain machine learning classifier-based methods in whitebox, blackbox, and restricted-blackbox threat models[32].
Hui et al. introduced a knowledge-enhanced GAN framework for large-scale IoT traffic generation, addressing the limitations of existing IoT synthetic data methods[33].
Yin et al. utilized a time-series GAN to generate packet header fields in flows[4].
Du et al. adopted dynamic word embedding and long short-term memory networks to generate the communication patterns between IP addresses and ports[34].
Kim et al. employed GANs to simulate and generate spectral data in the context of 5G networks[35].
Kholgh et al. fine-tuned GPT-3 to generate ICMP and DNS packets[36].
Efficient Transformers.
Keles et al. 
proved that the time complexity of self-attention is necessarily quadratic in the input length under the Strong Exponential Time Hypothesis[37].
This implies that, for now, we can only resort to approximate algorithms to reduce the complexity of self-attention.
Guo et al.[38], Beltagy et al.[39], Zaheer et al.[40] and Roy et al.[41] respectively employed different types of sparse self-attention to reduce computational overhead.
Katharopoulos et al. replaced the dot product with a simple feature map, achieving linear complexity in the Transformer[42].
Lee et al.[43], Wang et al.[44] and Zhang et al.[45] compressed key–value memory in different ways and achieved linear complexity.
Guo et al. developed low-rank attention and band attention to parameterize the self-attention mechanism[46].
Fan et al. further used low-rank decomposed self-attention to achieve linear complexity[47].
Kitaev et al. used Locality-Sensitive Hashing and a reversible residual network to reduce computational cost[48].
Peng et al. proposed a new attention mechanism reformulation that results in linear attention[49].
Sun et al. proposed the Retentive Network, demonstrating its performance to be comparable to that of a Transformer of similar size in language modeling, and highlighted its advantages, including training parallelism, cost-effective deployment, and efficient inference[50]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III System Design", + "text": "We present TrafficGPT, a deep-learning model that leverages generative pre-training with the linear attention mechanism. 
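To make the idea concrete, feature-map linear attention in the style of Katharopoulos et al.[42] can be sketched in a few lines of NumPy. This toy, non-causal version is illustrative only; the elu(x)+1 feature map follows the cited paper, and none of the shapes or names reflect TrafficGPT's actual implementation.

```python
import numpy as np

def feature_map(x):
    # phi(x) = elu(x) + 1: a positive feature map, as in Katharopoulos et al. [42]
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Replace softmax(Q K^T) V with phi(Q) (phi(K)^T V), normalised per query.

    phi(K)^T V is a fixed-size (d, d_v) summary computed once, so the cost grows
    linearly with the sequence length n instead of quadratically."""
    Qf, Kf = feature_map(Q), feature_map(K)
    context = Kf.T @ V                 # (d, d_v): keys and values summarised once
    norm = Qf @ Kf.sum(axis=0)         # (n,): per-query normaliser
    return (Qf @ context) / norm[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

A causal (autoregressive) variant maintains running prefix sums of `context` and `norm` instead of the full-sequence sums used here.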
Our approach integrates fundamental principles from ET-BERT[14] and NetGPT[17] while introducing refinements to optimize token representation and enhance the neural network architecture. Our model is tailored to effectively handle long token sequences, improving the generation and classification of network traffic." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Model Architecture", + "text": "The primary objective of TrafficGPT is to learn and represent universal features, focusing on network flows as the basic unit.
The model is tested in scenarios involving traffic generation and classification.
Its methodology encompasses a series of steps, as depicted in Figure 1. Initially, it converts network flows into token representations, followed by extensive pre-training on large-scale traffic data. The final phase involves testing the model's proficiency in traffic tasks, including traffic generation and classification.
During the token representation phase, each network flow is mapped into a list of tokens. This bijective process ensures that every token list can be accurately converted back into its original flow, thereby maintaining the integrity of the extracted information. 
In the pre-training stage, the model engages in self-supervised learning using unlabeled flows, employing an autoregressive approach to develop a comprehensive feature representation of network traffic.
For the final application stages, strategies for traffic generation and fine-tuning approaches for traffic classification are formulated.
Architecturally, TrafficGPT is grounded in a linear attention mechanism[42], enhanced by integrating local attention strategies[51] and the reversible network in Reformer[48], effectively optimizing memory usage.
The inclusion of token shift[49] is a strategic choice to expedite the model's convergence.
These mechanisms are detailed in Appendix A.
The model is characterized by a hidden layer dimension of 512, encompassing 12 attention heads and spanning a depth of 24 layers.
For a comprehensive breakdown of additional parameters, see Section IV-A." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Tokenization", + "text": "In the tokenization stage, we optimized the tokenization processes of both ET-BERT[14] and NetGPT[17], achieving a more seamless overall workflow.
One key innovation is the integration of time information into tokens, which empowers TrafficGPT to generate timestamp intervals for pcap files. By incorporating temporal data into our tokenization strategy, we have enabled a more native and comprehensive representation of the information contained in pcap files.
Firstly, we segment the pcap files within the dataset into distinct flows. A flow is precisely defined by a quintuple, representing a sequence of packets sharing identical source addresses, destination addresses, source ports, destination ports, and protocols. 
Following this segmentation, we proceed to tokenize each flow.
Figure 2 illustrates the token composition of a single flow. Within this framework, each flow comprises tokens corresponding to multiple packets, culminating in an end token denoting termination. The tokens assigned to each packet can be divided into four components:
Packet Start Token. This token signifies the start of a packet, playing a fundamental role in clearly defining the boundaries between individual packets.
Link Type Token. This token denotes the specific link layer protocol in use, discerning between protocols like Ethernet or Linux cooked mode. Its critical importance in pcap generation stems from the distinct formats inherent in different link layer protocols.
Time Interval Tokens. These tokens indicate the time interval between the current packet and the preceding one.
In the case of the initial data packet, we set its time interval to 0.
We transform the timestamp into exponential form, representing it with 8 bytes, where each byte functions as a distinct token.
Hex Tokens. These tokens encapsulate all pertinent information for each packet, encompassing values from both the packet header and payload. Considering the presence of both encrypted and non-encrypted data within packets, the decision to convert all bytes into hexadecimal tokens provides a universal representation."
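As a sketch of this reversible layout, the following toy encoder/decoder round-trips one packet. The special-token ids and the use of an 8-byte IEEE-754 float for the "exponential form" time interval are our own assumptions for illustration; the paper's exact vocabulary mapping is not specified here.

```python
import struct

# Hypothetical vocabulary layout: byte values 0-255 plus a few special tokens.
PKT_START, FLOW_END = 256, 257  # assumed special-token ids

def encode_packet(link_type: int, dt: float, data: bytes) -> list[int]:
    """One packet -> tokens: start, link type, 8 time-interval bytes, hex bytes."""
    time_bytes = struct.pack(">d", dt)  # 8-byte float as a stand-in for the
                                        # paper's exponential-form timestamp
    return [PKT_START, link_type] + list(time_bytes) + list(data)

def decode_packet(tokens: list[int]) -> tuple[int, float, bytes]:
    """Inverse mapping, so a pcap packet can be rebuilt from a token list."""
    assert tokens[0] == PKT_START
    link_type = tokens[1]
    dt = struct.unpack(">d", bytes(tokens[2:10]))[0]
    return link_type, dt, bytes(tokens[10:])

toks = encode_packet(1, 0.25, bytes.fromhex("4500003c"))
assert decode_packet(toks) == (1, 0.25, bytes.fromhex("4500003c"))  # round-trip
```

The round-trip assertion is the point: because every token maps back to a concrete byte or field, a generated token list can be written out as a pcap file directly.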
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Pre-training", + "text": "To attain efficient multitasking performance, especially in generative tasks, we employ a pre-training approach akin to the auto-regressive method utilized in GPT-2.
This methodology, characterized by the incremental generation of sequences, empowers the model to acquire nuanced representations of context, leading to improved generalization across diverse tasks.
In this process, the model incrementally predicts the subsequent vocabulary token, leveraging previously generated content as context. This approach enables the model to refine its language understanding and generation capabilities. The auto-regressive method excels particularly in generative tasks, allowing the model to produce text with contextual coherence and semantic consistency.
Specifically, for a given input traffic sequence denoted as x = (x_1, x_2, …, x_T), the model is trained to predict the probability distribution of the next token based on the preceding tokens in the sequence. This probability distribution is typically determined using a softmax activation function:
P(x_t | x_1, …, x_{t-1}) = softmax(f_θ(x_1, …, x_{t-1}))
Here, x_t is the target token at position t, and x_1, …, x_{t-1} represents the sequence of tokens from position 1 to t-1. The function f_θ embodies the model's parameters, denoted as θ.
For the training process, the auto-regressive pre-training employs cross-entropy loss, expressed as:
L = -Σ_{t=1}^{T} Σ_{i=1}^{V} y_{t,i} log P(x_t = i | x_1, …, x_{t-1})
In this equation, V represents the vocabulary size, and y_{t,i} is the one-hot encoded ground truth corresponding to the target token. This loss function quantifies the disparity between the predicted probability distribution and the actual distribution, guiding the model towards optimal token prediction." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Traffic Generation", + "text": "In the traffic generation tasks, we kickstart the process by manually providing a straightforward start token or a predefined set of initial tokens. 
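The next-token cross-entropy objective used in pre-training can be sketched numerically; this toy NumPy version only illustrates the shapes and the loss definition, not the training code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def autoregressive_loss(logits, targets):
    """Mean cross-entropy of next-token predictions.

    logits:  (T, V) scores, one row per position; targets: (T,) true next tokens.
    Equivalent to the one-hot double sum, since y_{t,i} picks one term per t."""
    probs = softmax(logits)
    T = len(targets)
    return -np.mean(np.log(probs[np.arange(T), targets]))

# Toy check: confident, correct predictions give near-zero loss.
V = 4
targets = np.array([1, 2, 0])
logits = np.full((3, V), -10.0)
logits[np.arange(3), targets] = 10.0
print(round(autoregressive_loss(logits, targets), 4))  # 0.0
```

With uniform logits the same function returns log V, the loss of a model that has learned nothing, which is a useful sanity baseline during training.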
These initial tokens serve as the foundation for the generation process. They are then input into the pre-trained model, prompting it to predict subsequent tokens sequentially until a termination token is reached.
It is important to highlight that proficiently trained models can generate sequences comprising a large number of tokens.
The length of these generated sequences can easily surpass the maximum window length established during the training phase, a limitation typically dictated by the available GPU memory.
This phenomenon is analogous to creating tokens through a sliding window. Even though the configured maximum token length confines the model's perspective during training, it consistently generates tokens based on the preceding tokens within the window of its perspective.
This continuous generation process enables the model to produce coherent and contextually relevant sequences.
We use Top-k sampling to enhance the quality and diversity of generated sequences.
Top-k sampling is a probabilistic method employed during the generation phase to select the next token from the model's probability distribution. Instead of choosing the token with the highest probability outright, Top-k sampling involves sampling from the k tokens with the highest probabilities, where k is a predefined hyperparameter.
By restricting the potential choices to a smaller set of high-probability tokens, we prioritize the model's most confident predictions. This helps mitigate the risk of introducing irrelevant or nonsensical tokens into the sequence, contributing to more contextually relevant and coherent outputs.
The subsequent phase in the workflow involves translating the generated token sequence into a format suitable for representing network traffic data, such as a pcap (Packet Capture) file. 
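Before turning to that translation step, the Top-k selection just described can be sketched as follows (illustrative NumPy; not the model's actual decoding loop).

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Sample the next token from the k highest-probability candidates only."""
    top = np.argsort(logits)[-k:]            # indices of the k largest logits
    z = logits[top] - logits[top].max()
    p = np.exp(z) / np.exp(z).sum()          # softmax renormalised over top-k
    return int(rng.choice(top, p=p))

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0, 3.0, 0.0])
samples = {top_k_sample(logits, k=2, rng=rng) for _ in range(100)}
print(samples <= {0, 3})  # only the two most likely tokens are ever drawn -> True
```

Setting k = 1 recovers greedy decoding, while a large k approaches full sampling from the model's distribution; the choice trades diversity against the risk of low-probability, protocol-breaking tokens.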
This translation process is essentially the inverse of tokenization, whereby each token in the sequence is utilized to construct a corresponding segment of the network traffic data.
The process initiates with identifying and extracting individual data packets from the token list. This is achieved by leveraging a designated packet start token, which serves as a delimiter to delineate the boundaries of each packet within the sequence. Subsequently, parsing operations extract pertinent information from the tokens, such as the link type and hexadecimal representations.
It is crucial to note that, despite the model's proficiency, there is a slight probability of generating \u201cillegal packets.\u201d These problems might occur when the model generates data packets that cannot be properly parsed due to protocol inconsistencies, such as undefined packet header fields or lengths exceeding protocol-defined limits. To mitigate the impact of such irregularities, a straightforward strategy is employed: any packet deemed \u201cillegal\u201d is discarded, and the generation process restarts from the beginning of the preceding packet using the start token. This iterative approach ensures the production of a coherent and protocol-compliant network traffic data sequence." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Fine-tuning", + "text": "Fine-tuning involves adjusting the parameters of a pre-existing, pre-trained model to better suit the specific requirements of a particular task or domain. In this paper, we align our approach with methodologies seen in ET-BERT[14] and NetGPT[17], focusing specifically on a classification task designed to categorize network flows.
Our model employs a straightforward yet effective method for fine-tuning. As a first step, we introduce a [cls] token at the beginning of the flow's token list. 
This addition signals to the model that it is about to undertake a classification task. Subsequently, both the [cls] token and the flow's tokens are fed into the model. This allows the model to utilize the next output as the label for classification, as depicted in Figure 1.
The model we have developed can handle a total of 260 different tokens. Consequently, using a single token enables the classification of up to 260 distinct classes. In scenarios where the number of classes exceeds 260, we can expand this capacity by employing multiple tokens. For instance, using two tokens can facilitate labeling for as many as 260² = 67,600 classes." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Evaluation", + "text": "We systematically assess the performance of TrafficGPT across various metrics and comparison analyses.
We begin by outlining the settings, which include detailed dataset descriptions, data preprocessing methods, and hyper-parameter setups, laying the groundwork for an in-depth evaluation. Following this, we explore both classification and generation evaluations in detail. These analyses compare our models against existing methodologies in tasks such as traffic flow classification and generation." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Settings", + "text": "Datasets.
We have assembled a comprehensive compilation of five publicly accessible datasets, totaling an extensive 189 gigabytes in size. The included datasets are ISCXTor2016 [52], USTCTFC2016 [53], ISCXVPN2016 [54], DoHBrw2020 [55], and CICIoT2022 [56], covering a diverse range of network traffic types operating within the TCP/IP framework. This encompasses terminal user internet activity, Virtual Private Network (VPN) traffic, Tor network traffic, and Internet of Things (IoT) communication. 
Notably, some datasets provide both traffic feature files and labels alongside pcap or pcapng files. Our exclusive focus in the analysis is on the raw packets, and any supplementary data has been disregarded for our study.
Data Pre-processing.
In the data preprocessing phase, we start by categorizing traffic into flows using the five-tuple approach, considering source IP address, destination IP address, source port, destination port, and protocol.
This allows us to isolate and extract specific flows within the network.
For traffic that does not fall under the TCP or UDP categories, such as Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) packets, we adopt a method akin to ET-BERT[14], simply discarding them as they are irrelevant to the particular content being transmitted.
After that, 99% of the flows are allocated for the pretraining process, while the remaining 1% is set aside for testing.
Hyper-parameters.
TrafficGPT is specified with a vocabulary of 260 tokens, a feed-forward dimension of 512, 12 attention heads, and a depth of 24 layers, catering to sequences up to 3,072 (3k) and 12,032 (12k) tokens in length. It incorporates a dropout rate of 0.1 for feed-forward layers, self-attention layers, and post-attention mechanisms to mitigate overfitting.
The model utilizes an embedding dimension of 256 and a head dimension of 256.
It is also complemented by 8 local attention heads and a local window of 256, enhancing the model's focus on relevant local sequence segments. The architecture is made reversible, drawing inspiration from the Reformer[48], to further improve memory efficiency. A GLU variant is used for enhanced non-linearity. Additionally, token shifting is applied to improve convergence. The training regimen involves a learning rate of , a batch size of 4, and spans 750,000 steps."
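The five-tuple flow segmentation described under Data Pre-processing can be sketched as follows; the dictionary-based packet representation is an assumption made for illustration, not the preprocessing pipeline's actual data model.

```python
from collections import defaultdict

def flow_key(pkt: dict) -> tuple:
    """Five-tuple identifying a flow: src/dst address, src/dst port, protocol."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def split_flows(packets):
    """Group packets into flows, dropping non-TCP/UDP traffic (e.g. ARP)."""
    flows = defaultdict(list)
    for pkt in packets:
        if pkt["proto"] in ("TCP", "UDP"):
            flows[flow_key(pkt)].append(pkt)
    return dict(flows)

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 443, "proto": "TCP"},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 443, "proto": "TCP"},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 5353, "dport": 53,  "proto": "UDP"},
    {"src": "10.0.0.9", "dst": "10.0.0.255", "sport": 0,  "dport": 0,   "proto": "ARP"},
]
print(len(split_flows(pkts)))  # 2 flows; the ARP packet is discarded
```

Each resulting flow is then tokenized independently, so the 99%/1% pretrain/test split operates on whole flows rather than on individual packets.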
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Classification Evaluation", + "text": "[Table I: Accuracy (AC) and Macro F1-Score of PERT[20], ET-BERT[14], NetGPT[17], YaTC[18], Lens[15], TrafficGPT(3k), and TrafficGPT(12k) on the Cross-Platform (iOS), Cross-Platform (Android), ISCX-VPN-App, and USTC-TFC datasets.]
We conduct experiments to evaluate the performance of flow classification tasks. We introduce two models, named TrafficGPT(3k) and TrafficGPT(12k), each pre-trained using distinct maximum token lengths of 3k and 12k, respectively.
For the Cross-Platform datasets, encompassing both iOS and Android, we adopted the preprocessing approach detailed in ET-BERT[14]. This preprocessing involved removing flow files smaller than 5KB and discarding classes with insufficient data. As a result, the dataset for Cross-Platform (iOS) comprised 196 labels, while Cross-Platform (Android) contained 215 labels.
Regarding the ISCX-VPN-App and USTC-TFC datasets, our preprocessing was aligned with the methodology used in NetGPT[17]. In this case, no flow files were removed. The ISCX-VPN-App dataset was used for application classification across 13 distinct classes. Meanwhile, the USTC-TFC dataset focused on software identification, featuring 20 classes.
To mitigate potential dataset-level overfitting associated with specific fields, we excluded MAC addresses, IP addresses, and port information from all datasets before conducting evaluations. 
This approach prevents the model from relying solely on tricks, such as IP addresses, for classification, ensuring a more suitable evaluation of model performance.\nWe compare against five pre-trained models: PERT[20 ###reference_b20###], ET-BERT[14 ###reference_b14###], NetGPT[17 ###reference_b17###], YaTC[18 ###reference_b18###], and Lens[15 ###reference_b15###].\nThese pre-trained models have demonstrated their superiority over non-pre-trained counterparts, such as K-fingerprinting[6 ###reference_b6###], FS-Net[57 ###reference_b57###], FlowPrint[58 ###reference_b58###], and TSCRNN[59 ###reference_b59###].\nThe Macro F1-Scores of the aforementioned five pre-trained models and our model\u2019s results are displayed in Table I ###reference_###. We observe that our Macro F1-Score outperforms the others across most datasets, showing an average improvement of approximately 2%. This indicates that our model achieves state-of-the-art performance in traffic classification tasks.\nAnother notable finding is that the overall performance of TrafficGPT(12k) is slightly better than that of TrafficGPT(3k), suggesting that pre-training with a longer token length may yield some benefits.\n###figure_3### In the fine-tuning process detailed in Table I ###reference_###, a maximum token length of 256 was established. Subsequent investigations into the impact of varying maximum token lengths on macro F1 scores, as depicted in Figure 3 ###reference_###, revealed a clear trend. Using TrafficGPT(12k) with token lengths ranging from 32 to 4096, we observed that classification accuracy increased sharply as the token length expanded from 32 to 128. Beyond this threshold, the improvement in F1 scores began to level off with further increases in token length.
A more detailed examination of dataset-specific performances showed subtle differences; notably, on the Cross-Platform (Android) dataset, F1 scores improved consistently with longer token lengths, reaching a peak of 0.9578 for a token length of 4096. In contrast, for other datasets, optimal F1 scores were achieved at a token length of around 256, with no significant benefits from extending the token length. This analysis highlights two critical insights. Firstly, a token length of 256 generally suffices for accurate flow classification across most datasets. Secondly, increasing the token length can significantly boost classification accuracy for specific datasets, such as Cross-Platform (Android).\n###figure_4### ###figure_5### ###figure_6###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Generation Evaluation", + "text": "###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### In Figure 4 ###reference_###, we present three traffic flows generated by our model, utilizing HTTP, DNS, and TLS protocols. These flows are stored in pcap format and visualized using Wireshark. Our model\u2019s traffic closely mimics authentic network patterns. In Figure 4(a) ###reference_sf1###, we observe precise generation of packet header fields, with the requested URL /HTTPConnTest.txt adhering to a standard format, making it indistinguishable from real traffic. Similarly, in Figure 4(b) ###reference_sf2###, DNS packet requests exhibit high realism, showcasing the model\u2019s efficacy in replicating genuine traffic patterns.\nNotably, the TLS flow in Figure 4(c) ###reference_sf3### demonstrates commendable generation quality. 
However, upon meticulous analysis, a slight deviation from standard protocol specifications is observed as a malformed Client Hello packet.\nThis anomaly indicates limitations in the model\u2019s ability to generate encrypted traffic.\nIn the realm of generative tasks, Natural Language Processing (NLP) has witnessed substantial advancements, accompanied by the emergence of robust evaluation metrics.\nHowever, assessing generated network traffic poses unique challenges that conventional NLP metrics may not adequately address. Unlike text generation, network traffic generation involves intricate patterns and structures, necessitating specialized metrics for accurate evaluation.\nCommon NLP metrics such as BLEU[60 ###reference_b60###], ROUGE[61 ###reference_b61###], or perplexity are tailored for linguistic tasks and may not effectively capture the nuances of traffic generation.\nFor instance, the intricacies involved in packet header and flow feature generation demand metrics capable of quantifying dissimilarity between probability distributions and discerning subtle differences in complex structures.\nIn recognition of these challenges, our evaluation framework incorporates specialized metrics and discriminative models tailored to the unique demands of network traffic generation.\nPacket Header Divergence.\nJensen-Shannon Divergence (JSD) is employed in this study as a key evaluation metric to quantify the dissimilarity between probability distributions. Derived from information theory, JSD offers a symmetric and continuous measure of the difference between two probability distributions, making it particularly suitable for applications such as text classification and clustering. 
It is computed by averaging the Kullback-Leibler Divergence (KL Divergence) between each distribution and their arithmetic mean.\nThe resulting metric ranges between 0 and 1, with 0 indicating perfect similarity and 1 representing complete dissimilarity.\nNetGPT utilizes JSD as a metric to assess the packet header generation quality [17 ###reference_b17###], and our paper adopts this idea for evaluating the performance.\nTo comprehensively evaluate packet header generation, we introduced assessments for several crucial fields, including IP addresses, ports, packet lengths, and TTL.\nFigure 5 ###reference_### illustrates the Cumulative Distribution Function (CDF) plots for generated traffic compared to real traffic, while Table II ###reference_### presents the JSD scores for generated samples.\nTwo notable observations emerge.\nFirstly, the distribution of the generated data closely aligns with that of the actual data, emphasizing TrafficGPT(12k)\u2019s effectiveness with an average JSD score of 0.1605.\nSecondly, the marginal discrepancies in JSD scores among various headers of the data packets suggest consistent performance across different packet header evaluations. This indicates the model\u2019s ability to maintain coherence in diverse headers of traffic generation.\nFlow Feature Divergence.\nTo further evaluate the quality of flow generation on a broader scale, we introduce a divergence analysis focused on flow features.\nSpecifically, we derive features from the generated flows and compare them with those of the test set using JSD. Flow features play a pivotal role in tasks related to traffic analysis, and in this paper, we leverage the feature generation approach outlined in [6 ###reference_b6###]. 
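The JSD computation described above can be sketched for discrete distributions as below; base-2 logarithms give the stated 0-to-1 range. In practice it would be applied to normalized histograms of header fields (e.g., packet length, TTL) or flow features from real versus generated traffic.

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions, in bits.
    Zero-probability terms of P contribute nothing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon Divergence: the average KL divergence of each
    distribution against their arithmetic mean M = (P + Q) / 2.
    With base-2 logs, 0 = identical and 1 = fully disjoint."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Unlike KL divergence, this measure is symmetric and always finite, which is why it suits comparing empirical header distributions that may not share support.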
We meticulously select the top 6 effective features for computing JSD, and the specific features are detailed in Table IV ###reference_###.\nThe assessment of flow generation is further elucidated through Figure 6 ###reference_###, illustrating CDF plots for flow features, and Table III ###reference_###, which presents the corresponding JSD scores.\nA noteworthy finding is the JSD score of TrafficGPT(12k) for flow features, standing at 0.2396, indicating a somewhat more significant deviation than the packet header scores.\nExamining the CDF plots reveals a similarity in the curves between the generated flow features and the authentic distribution, albeit with some distinctions.\nThese distinctions imply that generating flows may pose a more intricate challenge than generating packet headers. One plausible explanation could be the inherent complexity of capturing long-range token dependencies. Despite these challenges, the generated flow features still resemble the actual distribution, underscoring the model\u2019s capability to capture critical flow characteristics.\nAdditionally, the JSD scores demonstrate that the 12k model significantly outperforms the 3k model in flow generation. This observation emphasizes the positive impact of increasing token length on enhancing the effectiveness of flow generation. The longer context provided by a higher token length allows the model to better comprehend intricate patterns and dependencies within flows, resulting in a more coherent and realistic generation of network traffic flows.\nDiscriminative Model Assessment.\nIn addition to evaluating generative models through JSD analysis, we employ discriminative models to further assess the performance and authenticity of the generated flows. 
Discriminative models are instrumental in distinguishing between real and synthetic data, providing a complementary perspective to generative model evaluations.\nTo implement discriminative model assessment, we train a classifier on the combined dataset comprising both real and generated flows. This classifier is designed to discern subtle differences and patterns between genuine and synthetic flow data.\nIf the discriminator struggles to differentiate between real and generated data effectively, it suggests that the generative model has successfully captured intricate patterns and features present in the authentic dataset. This difficulty in discrimination implies that the generated flows closely resemble the characteristics of real data, demonstrating high authenticity and realism in the generated samples.\nSpecifically, we adopt a traffic classifier proposed by Qu et al. [13 ###reference_b13###]. This classifier employs a hierarchical structure, capable of taking packet byte inputs, making it well-suited for the context. The hierarchical nature of the classifier allows it to analyze and discern patterns at different levels of abstraction within the data, enhancing its ability to capture intricate details in both real and generated flow data.\nIn the experiment, 1,000 flow samples were generated and stored as pcap files, followed by a binary classification test on an equal number of randomly selected flows from the test dataset. 
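The protocol just described reduces to scoring a binary real-vs-generated test with macro F1. A stdlib-only sketch of the scoring step follows; the classifier itself is abstracted away, and the label convention (0 = generated flow, 1 = real flow) is an assumption for this example, not taken from the paper.

```python
def macro_f1(y_true, y_pred, labels=(0, 1)):
    """Macro-averaged F1 over the given labels
    (here: 0 = generated flow, 1 = real flow)."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

A score near 0.5 means the discriminator does little better than chance, i.e. the generated flows are hard to tell apart from real ones; a score near 1.0 means the generator is easily detected.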
The TrafficGPT(3k) model achieved a Macro F1-Score of 0.6634(\u00b10.0412), while the TrafficGPT(12k) model scored slightly higher at 0.6683(\u00b10.0232).\nThese results indicate a significant challenge for the discriminative model in distinguishing between real and generated flows, suggesting a high degree of realism in the generated data." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Comparative Analysis of Linear Mechanisms", + "text": "In addition to TrafficGPT utilized in this paper, we also tested two other models that have shown success in NLP tasks with linear complexity: RWKV and RetNet. Their main mechanisms are detailed in Appendix -A ###reference_###.\nFor each model, we established a learning rate of , a token embedding dimension of 256, a maximum token length of 3k, and a total training step of 3,000,000.\nDue to distinct GPU memory demands for each model, we adjusted the batch size and depth to maximize GPU efficiency. For RetNet, we opted for a batch size of 8 and a model depth of 24. In the case of RWKV, constrained by hardware limitations, we set the batch size to 2 and the model depth to 18.\nIt\u2019s worth noting that, for RetNet, we employed its chunkwise mode to economize on GPU memory, albeit with the trade-off of increased computation time.\nTable V ###reference_### showcases the performance of these models in classification tasks, while Tables VI ###reference_### and VII ###reference_### illustrate their performance in traffic generation.
Although they excel in NLP tasks, particularly classification, RWKV and RetNet demonstrate suboptimal performance in tasks related to traffic, especially in traffic generation. In our experiments, we observed that both models often generate packets that cannot be parsed correctly. We conjecture that this issue may arise from RWKV and RetNet\u2019s use of the exponential decay technique, which poses challenges in maintaining correlations between tokens over extended distances." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "The utilization of deep learning in traffic analysis has gained significant traction, owing to its inherent ability for automatic feature extraction[9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. Despite its popularity, achieving high generalization poses a challenge due to the limited number of available samples. Self-supervised pre-training, obviating the need for labeled data, emerges as a pivotal strategy for acquiring and training large-scale traffic data. The superiority of pre-trained models over non-pre-trained counterparts underscores a promising direction for the future of traffic analysis.\nAn inherent limitation in existing self-supervised pre-trained models is the token length constraint, typically capped at 512. This constraint can severely impede the effectiveness of analysis when data packets exceed this limit, leading to a failure in capturing relationships between packets, especially in traffic generation tasks. In response to this challenge, our approach enhances the model using linear attention mechanisms, extending the token length to 12k. 
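The exponential-decay conjecture raised in the comparative analysis, and the long-context motivation here, can be illustrated numerically. With a per-step decay factor close to 1 (the value 0.999 below is an illustrative assumption, not a constant taken from RWKV or RetNet), the weight tying two tokens together remains meaningful at sentence-scale distances but collapses at flow-scale distances of thousands of tokens.

```python
def decay_weight(gamma, distance):
    """Weight a decay-based linear-attention model assigns to a token
    `distance` positions in the past (gamma ** distance).
    `gamma` here is a hypothetical per-step decay factor."""
    return gamma ** distance

# At sentence-scale distances the weight is still meaningful...
near = decay_weight(0.999, 50)    # roughly 0.95
# ...but over thousands of tokens it collapses toward zero, which may
# explain the weak long-range coupling observed in traffic generation.
far = decay_weight(0.999, 3000)   # under 0.05
```

This is only a back-of-the-envelope sketch of the conjecture, but it shows why a mechanism without built-in decay may be better suited to flows whose packets depend on headers thousands of tokens earlier.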
Experimental results validate the efficacy of this modification, demonstrating improved performance in both traffic classification and generation tasks.\nWhile our model is pre-trained in an auto-regressive manner, which economically addresses classification and traffic generation tasks, it comes with minor drawbacks. For example, the lack of consideration for classification tasks during pre-training may introduce conceptual gaps.\nAdopting a multi-task training strategy could mitigate this limitation and enhance classification results. By incorporating classification tasks alongside auto-regressive learning during training, the model can develop a more comprehensive understanding of the data, potentially improving its performance across various tasks.\nFurthermore, the current model treats TCP and UDP flows as the basic units, overlooking correlations between multiple flows. We recognize this as a future direction for improvement, exploring the integration of a multi-flow architecture with self-supervised learning to potentially enhance overall performance.\nRegarding dataset composition, our current dataset primarily consists of TCP/IP data. However, the model architecture is designed to support packet analysis for diverse protocol stacks such as Bluetooth[62 ###reference_b62###, 63 ###reference_b63###, 64 ###reference_b64###], Zigbee[65 ###reference_b65###, 66 ###reference_b66###, 67 ###reference_b67###], etc. This opens up an important avenue for future research, where expanding the dataset to encompass a broader range of protocols could further enhance the model\u2019s versatility and applicability." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "We developed TrafficGPT, a deep learning model specifically designed for analyzing and generating network traffic.
Our model combines generative pre-training with a linear attention mechanism to tackle the challenges associated with traditional approaches to network traffic studies. With a limit of 12,032 tokens, our model significantly surpasses existing models in capacity, enabling a more comprehensive analysis and generation of long traffic flows.\nOur evaluation showcased our model\u2019s superiority in network traffic classification, where it consistently outperforms other models across various datasets. Moreover, in traffic generation, our model demonstrates a remarkable ability to mimic real network flows, with metrics such as JS divergence attesting to the high quality and realism of the generated traffic." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Comparison of Traffic Classification Macro F1-Scores.
Method | Cross-Platform(iOS) AC / F1 | Cross-Platform(Android) AC / F1 | ISCX-VPN-App AC / F1 | USTC-TFC AC / F1
PERT[20 ###reference_b20###] | 0.9789 / 0.9584 | 0.9772 / 0.8550 | N/A / N/A | N/A / N/A
ET-BERT[14 ###reference_b14###] | 0.9844 / 0.9643 | 0.9865 / 0.9246 | 0.9206 / 0.4314 | 0.9524 / 0.6986
NetGPT[17 ###reference_b17###] | N/A / N/A | N/A / N/A | 0.9683 / 0.8056 | 0.9563 / 0.9463
YaTC[18 ###reference_b18###] | 0.9842 / 0.9644 | 0.9816 / 0.9217 | 0.9908 / 0.9860 | 0.8071 / 0.7452
Lens[15 ###reference_b15###] | 0.9189 / 0.9143 | 0.9063 / 0.8981 | 0.9984 / 0.9958 | 0.9940 / 0.9937
TrafficGPT(3k) | 0.9844 / 0.9829 | 0.9540 / 0.9483 | 0.9912 / 0.9912 | 0.9856 / 0.9854
TrafficGPT(12k) | 0.9839 / 0.9863 | 0.9444 / 0.9498 | 1.0000 / 1.0000 | 0.9900 / 0.9877
", + "capture": "TABLE I: Comparison of Traffic Classification Macro F1-Scores." + }, + "2": { + "table_html": "
\n
TABLE II: Traffic Generation Performance Comparison on Packet-level JSD.
Method | sport | dport | src address | dst address | packet length | TTL | Average
TrafficGPT(3k) | 0.1156 | 0.1347 | 0.1689 | 0.1779 | 0.1736 | 0.2803 | 0.1752
TrafficGPT(12k) | 0.1346 | 0.1551 | 0.1684 | 0.2304 | 0.1874 | 0.0872 | 0.1605
\n
", + "capture": "TABLE II: Traffic Generation Performance Comparison on Packet-level JSD." + }, + "3": { + "table_html": "
\n
TABLE III: Traffic Generation Performance Comparison on Flow-level JSD.
Method | feature 1 | feature 2 | feature 3 | feature 4 | feature 5 | feature 6 | Average
TrafficGPT(3k) | 0.7417 | 0.2613 | 0.1161 | 0.4051 | 0.2931 | 0.2465 | 0.3440
TrafficGPT(12k) | 0.4028 | 0.2529 | 0.1146 | 0.3043 | 0.2581 | 0.1046 | 0.2396
\n
", + "capture": "TABLE III: Traffic Generation Performance Comparison on Flow-level JSD." + }, + "4": { + "table_html": "
\n
TABLE IV: The six features for calculating flow feature divergence.
ID | Feature Description
1 | Number of incoming packets.
2 | Number of outgoing packets as a fraction of the total number of packets.
3 | Number of incoming packets as a fraction of the total number of packets.
4 | Standard deviation of the outgoing packet ordering list.
5 | Number of outgoing packets.
6 | Sum of all items in the alternative concentration feature list.
\n
", + "capture": "TABLE IV: The six features for calculating flow feature divergence." + }, + "5": { + "table_html": "
\n
TABLE V: Comparison of Traffic Classification Macro F1-Scores with Various Linear Complexity Models.
Method | Cross-Platform(iOS) AC / F1 | Cross-Platform(Android) AC / F1 | ISCX-VPN-App AC / F1 | USTC-TFC AC / F1
RWKV[20 ###reference_b20###] | 0.9275 / 0.8992 | 0.8625 / 0.8269 | 0.9750 / 0.9750 | 0.9950 / 0.9946
RetNet[50 ###reference_b50###] | 0.9675 / 0.9629 | 0.9350 / 0.9190 | 0.9906 / 0.9906 | 0.9863 / 0.9856
TrafficGPT(3k) | 0.9844 / 0.9829 | 0.9540 / 0.9483 | 0.9912 / 0.9912 | 0.9856 / 0.9854
TrafficGPT(12k) | 0.9839 / 0.9863 | 0.9444 / 0.9498 | 1.0000 / 1.0000 | 0.9900 / 0.9877
", + "capture": "TABLE V: Comparison of Traffic Classification Macro F1-Scores with Various Linear Complexity Models." + }, + "6": { + "table_html": "
\n
TABLE VI: Traffic Generation Performance Comparison on Packet-level JSD with Various Linear Complexity Models.
Method | sport | dport | src address | dst address | packet length | TTL | Average
RWKV[20] | 1.1841 | 1.2595 | 0.7159 | 1.1287 | 1.0192 | 0.6510 | 0.9931
RetNet[50] | 0.9350 | 0.9578 | 1.0928 | 1.0264 | 0.9976 | 0.5813 | 0.9318
TrafficGPT(3k) | 0.1156 | 0.1347 | 0.1689 | 0.1779 | 0.1736 | 0.2803 | 0.1752
TrafficGPT(12k) | 0.1346 | 0.1551 | 0.1684 | 0.2304 | 0.1874 | 0.0872 | 0.1605
\n
", + "capture": "TABLE VI: Traffic Generation Performance Comparison on Packet-level JSD with Various Linear Complexity Models." + }, + "7": { + "table_html": "
\n
TABLE VII: Traffic Generation Performance Comparison on Flow-level JSD with Various Linear Complexity Models. \u2019None\u2019 Represents Insufficient Data to Compute the Feature.
Method | feature 1 | feature 2 | feature 3 | feature 4 | feature 5 | feature 6 | Average
RWKV[20] | 0.8785 | 0.3938 | 0.6042 | None | 0.9361 | 1.0881 | 0.7801
RetNet[50] | 1.2641 | 0.411 | 0.5257 | 0.8547 | 0.6999 | 0.5044 | 0.7100
TrafficGPT(3k) | 0.7417 | 0.2613 | 0.1161 | 0.4051 | 0.2931 | 0.2465 | 0.3440
TrafficGPT(12k) | 0.4028 | 0.2529 | 0.1146 | 0.3043 | 0.2581 | 0.1046 | 0.2396
\n
", + "capture": "TABLE VII: Traffic Generation Performance Comparison on Flow-level JSD with Various Linear Complexity Models. \u2019None\u2019 Represents Insufficient Data to Compute the Feature. " + } + }, + "image_paths": { + "1": { + "figure_path": "2403.05822v2_figure_1.png", + "caption": "Figure 1: The framework of TrafficGPT.", + "url": "http://arxiv.org/html/2403.05822v2/x1.png" + }, + "2": { + "figure_path": "2403.05822v2_figure_2.png", + "caption": "Figure 2: The structure of flow tokens.", + "url": "http://arxiv.org/html/2403.05822v2/x2.png" + }, + "3": { + "figure_path": "2403.05822v2_figure_3.png", + "caption": "Figure 3: Variation of Macro F1-Scores with token length using TrafficGPT(12k) fine-tuning in classification.", + "url": "http://arxiv.org/html/2403.05822v2/x3.png" + }, + "4(a)": { + "figure_path": "2403.05822v2_figure_4(a).png", + "caption": "(a) HTTP Flow\nFigure 4: The flows generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x4.png" + }, + "4(b)": { + "figure_path": "2403.05822v2_figure_4(b).png", + "caption": "(b) DNS Flow\nFigure 4: The flows generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x5.png" + }, + "4(c)": { + "figure_path": "2403.05822v2_figure_4(c).png", + "caption": "(c) TLS Flow\nFigure 4: The flows generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x6.png" + }, + "5(a)": { + "figure_path": "2403.05822v2_figure_5(a).png", + "caption": "(a) sport\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x7.png" + }, + "5(b)": { + "figure_path": "2403.05822v2_figure_5(b).png", + "caption": "(b) dport\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x8.png" + }, + "5(c)": { + "figure_path": "2403.05822v2_figure_5(c).png", + "caption": "(c) src address\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": 
"http://arxiv.org/html/2403.05822v2/x9.png" + }, + "5(d)": { + "figure_path": "2403.05822v2_figure_5(d).png", + "caption": "(d) dst address\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x10.png" + }, + "5(e)": { + "figure_path": "2403.05822v2_figure_5(e).png", + "caption": "(e) packet length\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x11.png" + }, + "5(f)": { + "figure_path": "2403.05822v2_figure_5(f).png", + "caption": "(f) TTL\nFigure 5: CDF plots of packet headers generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x12.png" + }, + "6(a)": { + "figure_path": "2403.05822v2_figure_6(a).png", + "caption": "(a) feature 1\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x13.png" + }, + "6(b)": { + "figure_path": "2403.05822v2_figure_6(b).png", + "caption": "(b) feature 2\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x14.png" + }, + "6(c)": { + "figure_path": "2403.05822v2_figure_6(c).png", + "caption": "(c) feature 3\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x15.png" + }, + "6(d)": { + "figure_path": "2403.05822v2_figure_6(d).png", + "caption": "(d) feature 4\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x16.png" + }, + "6(e)": { + "figure_path": "2403.05822v2_figure_6(e).png", + "caption": "(e) feature 5\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": "http://arxiv.org/html/2403.05822v2/x17.png" + }, + "6(f)": { + "figure_path": "2403.05822v2_figure_6(f).png", + "caption": "(f) feature 6\nFigure 6: CDF plots of flow features generated by TrafficGPT(12k).", + "url": 
"http://arxiv.org/html/2403.05822v2/x18.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2403.05822v2" +} \ No newline at end of file