StevenJingfeng committed on
Commit 127ab9c · verified · 1 Parent(s): 53abfcf

Update method/Readme.md

Files changed (1):
  1. method/Readme.md +22 -107
method/Readme.md CHANGED
@@ -6,126 +6,41 @@ RQ2: Can this dataset be extended to apply to other algorithms, enhancing its ut
 
  Our analysis indicates that among the four machine learning models evaluated, DNN demonstrated the best prediction performance in both fungible token airdrop and normal periods. Specifically, DNN models showed superior predictive accuracy for gas usage during these periods. The results highlighted minimal disparity in prediction loss when comparing a DNN trained on data from normal periods to a DNN trained on data specifically from the token-airdrop period. This finding suggests that DNN models exhibit robust scalability on this dataset, eliminating the need to train a new neural network specifically for token-airdrop events. In addition to evaluating DNN models, we explored the extensibility of incorporating monotonicity constraints and sentiment analysis within the Neural Additive Model (NAM). Although these enhancements did not significantly improve the predictive accuracy of the NAM model on our test dataset, the intrinsic variability and complexity of blockchain data imply that different datasets from different time periods might yield different results. This opens a significant platform for other researchers to utilize and further explore the dataset, enabling comprehensive analyses and advancing financial machine-learning models. Our contributions include developing a comprehensive dataset that integrates both on-chain and off-chain data, compatible with various machine learning algorithms for financial prediction. This dataset forms the cornerstone of a novel research framework, enabling a deeper exploration of the financial market and its mechanisms. By offering a robust and versatile dataset, we facilitate advanced exploration and optimization efforts, driving innovation and enhancing the accuracy and reliability of financial machine-learning models in blockchain technology.
 
- # Operational Measures
- 
- ## Variables
- <!DOCTYPE html>
- <html lang="en">
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0">
- </head>
- <body>
- 
- <table>
- <caption>Variable Description</caption>
- <tr>
- <th>Variable Name</th>
- <th>Description</th>
- <th>Unit</th>
- <th>Type</th>
- </tr>
- <tr>
- <td>timestamp</td>
- <td>Recording time of each block</td>
- <td></td>
- <td>String</td>
- </tr>
- <tr>
- <td>number</td>
- <td>The block number on the chain</td>
- <td></td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>gas_used</td>
- <td>Actual gas used</td>
- <td>Gas units</td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>gas_limit</td>
- <td>The maximum allowed gas per block</td>
- <td>Gas units</td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>base_fee_per_gas</td>
- <td>The base fee set for each block</td>
- <td>Ether</td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>gas_fraction</td>
- <td>Ratio of gas used to gas limit</td>
- <td></td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>gas_target</td>
- <td>The optimal gas usage for each block</td>
- <td></td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>Y</td>
- <td>Normalized gas used</td>
- <td></td>
- <td>Numeric</td>
- </tr>
- <tr>
- <td>$Y_t$</td>
- <td>Response variable, equal to the gas_fraction</td>
- <td></td>
- <td>Numeric</td>
- </tr>
- </table>
- 
- </body>
- </html>
- 
- # Hypothesis Development
- ## Machine Learning Algorithm Selection
- The machine learning models we selected are Linear Regression, Deep Neural Network (DNN), XGBoost, and Long Short-Term Memory (LSTM).
- # The Machine Learning Workflow
- ## Model Development
  ### Data Selection
  <p>We query Ethereum’s data using Google BigQuery. The raw data contains information on timestamps, block numbers, hash, parent hash, transactions, etc. Since our research aims to predict the gas used in the next block, we keep only the relevant features: timestamp, block number, gas limit, gas used, and base fee. Notably, a token airdrop can substantially boost recipients' and non-recipients' engagement levels in transactions. As a result, high volatility in gas used occurs and leads to subsequent base fee alteration. Hence, our research is structured around two distinct periods. The first period spans the apex of the ARB airdrop, recognized as the most substantial of 2023, from March 21 to April 1, encompassing 78,290 blocks. The second period pertains to a month devoid of significant airdrop activity, spanning from June 1, 2023, to July 1, 2023, and containing 213,244 blocks.</p>
 
  <p>We also query the Discord data.</p>
 
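The block-level pull described above can be sketched as follows. This is a minimal illustration, assuming Google's public `bigquery-public-data.crypto_ethereum.blocks` table; actually running the query requires the `google-cloud-bigquery` package and valid credentials, so the client call is left commented out.

```python
# Sketch of the block-level BigQuery pull described above (assumes the
# public dataset `bigquery-public-data.crypto_ethereum.blocks`; running
# it needs google-cloud-bigquery and valid credentials).
FIELDS = ["timestamp", "number", "gas_limit", "gas_used", "base_fee_per_gas"]

def build_blocks_query(start_date: str, end_date: str) -> str:
    """Return SQL selecting only the features kept in the study."""
    return (
        f"SELECT {', '.join(FIELDS)} "
        "FROM `bigquery-public-data.crypto_ethereum.blocks` "
        f"WHERE DATE(timestamp) BETWEEN '{start_date}' AND '{end_date}' "
        "ORDER BY number"
    )

# ARB-airdrop period used in the study:
sql = build_blocks_query("2023-03-21", "2023-04-01")

# from google.cloud import bigquery          # requires credentials
# df = bigquery.Client().query(sql).to_dataframe()
```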
- ### Data Processing
- 
  #### On-chain data
- <p>In the original dataset, the base fee is denominated in units of Gwei, where each Gwei is equivalent to <code>$10^{-9}$</code> Ether. Consequently, for enhanced interpretability of the dataset, we scale the base fee by <code>$10^{-9}$</code>, expressing it in terms of Ether.</p>
 
- <p>We create a regressor, denoted <code>$\alpha$</code>, by computing the ratio of gas used to the gas limit. The predicted variable <code>$Y$</code> represents the normalized gas used, determined by the formula:</p>
 
- <blockquote>
- <p>
- \[ Y = \frac{\text{gasUsed} - \text{gasTarget}}{\text{gasTarget}} \]
- </p>
- </blockquote>
 
- <p>For a varying period length <code>$k$</code>, the regressor values for the preceding <code>$k$</code> data points are collected into a list, forming the feature set <code>$X$</code>. The variable <code>$Y$</code> is the prediction target for the data point at time <code>$t$</code>.</p>
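A minimal sketch of this preprocessing in plain Python follows. Since the text does not spell out how gasTarget is computed, the EIP-1559 convention gasTarget = gas_limit / 2 is assumed here; the study's exact definition may differ.

```python
# Sketch of the on-chain preprocessing described above. The gas target is
# assumed to be gas_limit / 2 (the EIP-1559 convention).

def normalized_gas_used(gas_used: float, gas_limit: float) -> float:
    """Y = (gasUsed - gasTarget) / gasTarget."""
    gas_target = gas_limit / 2
    return (gas_used - gas_target) / gas_target

def make_features(alphas: list, k: int) -> list:
    """For each time t, X is the list of the preceding k alpha values."""
    return [(alphas[t - k:t], t) for t in range(k, len(alphas))]

gas_used = [15_000_000, 20_000_000, 30_000_000, 10_000_000]  # toy values
gas_limit = 30_000_000
alphas = [g / gas_limit for g in gas_used]       # regressor alpha
ys = [normalized_gas_used(g, gas_limit) for g in gas_used]
base_fee_eth = 12.5 * 1e-9                       # 12.5 Gwei scaled to Ether
windows = make_features(alphas, k=2)
```

A full block (gas_used equal to gas_limit) gives Y = 1, and a half-full block gives Y = 0, matching the normalization in the formula.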
 
- #### Off-chain data
 
- ## Results Presentation
- 
- ### Training and Testing
- We conduct the train-test split through k-fold cross-validation with k=5, splitting the whole data set into five subsets for training, validating, and testing the model: the training fraction is 0.44, the validation fraction is 0.22, and the testing fraction is 0.34.
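The stated fractions can be realized as a simple index split. This sketch assumes a chronological (ordered) split, which the text does not specify, and does not reproduce the k-fold rotation itself.

```python
# Sketch of the 0.44 / 0.22 / 0.34 train/validation/test split described
# above. A chronological split is assumed (block data is time-ordered).

def split_indices(n: int, train_frac=0.44, val_frac=0.22):
    """Return (train, validation, test) index ranges covering 0..n-1."""
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = range(0, n_train)
    val = range(n_train, n_train + n_val)
    test = range(n_train + n_val, n)        # remaining ~0.34
    return train, val, test

train, val, test = split_indices(78_290)    # ARB-airdrop period block count
```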
 
- ## Model Evaluation
- ### Evaluation Criteria
- The model is evaluated by Mean Squared Error (MSE).
- ### Iterative Improvement
- For both the DNN and the LSTM, the initial hyperparameter settings are a learning rate of 0.001 and a patience of 10, with early stopping triggered after 12 epochs without a decrease in loss.
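The MSE criterion and the patience-based stopping rule can be sketched as follows, in pure Python. The text quotes both a patience of 10 and a 12-epoch stopping window; the sketch uses a single patience parameter, and the loss values in the test are invented for illustration.

```python
# Sketch of the evaluation criterion (MSE) and patience-based early
# stopping described above.

def mse(y_true, y_pred):
    """Mean Squared Error, the evaluation criterion used for all models."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def train_with_early_stopping(losses_per_epoch, patience=10):
    """Stop once the loss has not improved for `patience` epochs.

    Returns the number of epochs actually run."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(losses_per_epoch, start=1):
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return epoch
```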
 
 
+ ### Data
 
  #### On-chain data
 
+ #### Off-chain data
 
+ ### Monotonicity
+ <p>In this context, we propose a novel method for predicting gas usage in blockchain transactions, inspired by the concept of pairwise monotonicity detailed by Chen <a href="#chen2023address">[1]</a>. Unlike traditional methods such as the EMA, which emphasize forgetting older information, our approach employs a monotonicity representation to attribute varying levels of importance to data over time. Monotonicity has demonstrated its interdisciplinary applicability, as evidenced by works such as Liu et al. <a href="#liu2020certified">[2]</a> and Milani <a href="#milani2016fast">[3]</a>, which focused on individual monotonicity for single variables. Our method follows Chen <a href="#chen2023address">[1]</a> in introducing pairwise monotonicity to the financial domain. For instance, in credit scoring, past-due amounts over a longer period should more significantly impact the scoring of new debt risk. Similarly, in blockchain transactions, older data points are less influential, whereas recent data points are more critical for prediction.</p>
 
+ <p>We apply monotonicity to the α feature, where changes in α for recent blocks produce greater prediction variance than changes in more distant data points. In the case of k=2, where the prediction uses data from the two previous blocks, the features are α<sub>1</sub> and α<sub>2</sub>, with values (α<sub>1</sub>=a, α<sub>2</sub>=a). Given the higher importance assigned to α<sub>2</sub>, increasing or decreasing α<sub>2</sub> by an amount t leads to a larger change in the prediction than altering α<sub>1</sub> by the same amount. This can be written as:</p>
 
+ <pre>
+ |f(α<sub>1</sub>=a, α<sub>2</sub>=a) - f(α<sub>1</sub>=a+t, α<sub>2</sub>=a)| ≤ |f(α<sub>1</sub>=a, α<sub>2</sub>=a) - f(α<sub>1</sub>=a, α<sub>2</sub>=a+t)|
+ </pre>
+
+ <p>The formal definition of pairwise monotonicity is modified from Chen's work <a href="#chen2023address">[1]</a>. The output change can be positively or negatively correlated with the variables. Thereby, given a model f, we say f is weakly monotonic with respect to x<sub>β</sub> over x<sub>γ</sub> if:</p>
+
+ <pre>
+ |f(x<sub>β</sub>, x<sub>γ</sub>+c, x<sub>¬</sub>) - f(x<sub>β</sub>, x<sub>γ</sub>, x<sub>¬</sub>)| ≤ |f(x<sub>β</sub>+c, x<sub>γ</sub>, x<sub>¬</sub>) - f(x<sub>β</sub>, x<sub>γ</sub>, x<sub>¬</sub>)|
+ ∀ x<sub>β</sub>, x<sub>γ</sub> such that x<sub>β</sub>=x<sub>γ</sub>, ∀ x<sub>¬</sub>, ∀ c ∈ ℝ.
+ </pre>
+
+ <p>Under this weak monotonicity definition, we ensure more weight is placed on the nearer data point, enhancing the model's transparency and explainability.</p>
 
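The weak pairwise monotonicity inequality can be checked numerically for any fitted model f. Below is a minimal sketch on a toy linear model; the weights 0.3 and 0.7 (favoring the nearer block α<sub>2</sub>) are invented for illustration and are not the study's model.

```python
# Numerical check of the weak pairwise monotonicity inequality from the
# text, on a toy model. The 0.3 / 0.7 weights are illustrative only; they
# make alpha2 (the nearer block) dominate alpha1, as the constraint demands.

def f(alpha1: float, alpha2: float) -> float:
    """Toy predictor with more weight on the more recent block."""
    return 0.3 * alpha1 + 0.7 * alpha2

def weakly_monotonic_over(model, a: float, t: float) -> bool:
    """Check |f(a+t, a) - f(a, a)| <= |f(a, a+t) - f(a, a)|,

    i.e. perturbing alpha2 moves the prediction at least as much as
    perturbing alpha1 by the same amount, starting from alpha1 = alpha2 = a."""
    base = model(a, a)
    return abs(model(a + t, a) - base) <= abs(model(a, a + t) - base)

# Holds for this model at every probed point and perturbation:
assert all(weakly_monotonic_over(f, a / 10, t / 10)
           for a in range(0, 10) for t in range(-5, 6))
```

For a model that weighted α<sub>1</sub> more heavily, the same check would fail, which is how the constraint can be validated on held-out inputs.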
+ <hr>
 
+ <h3>References</h3>
+ <ol>
+ <li id="chen2023address">Chen, Y. (2023). Addressing Pairwise Monotonicity in Financial Predictions.</li>
+ <li id="liu2020certified">Liu, X., et al. (2020). Certified Individual Monotonicity in Single Variable Systems.</li>
+ <li id="milani2016fast">Milani, A. (2016). Fast Monotonicity Checks in Variable Data.</li>
+ </ol>