RQ1: Can this dataset be applied to different machine learning models? <br>
RQ2: Can this dataset be extended to apply to other algorithms, enhancing its utility and revealing additional insights in financial market analysis?

## Significance

Our analysis indicates that, among the four machine learning models evaluated, the DNN demonstrated the best prediction performance in both fungible-token-airdrop and normal periods, showing superior predictive accuracy for gas usage in each case. The results revealed minimal disparity in prediction loss between a DNN trained on data from normal periods and a DNN trained specifically on data from the token-airdrop period. This finding suggests that DNN models scale robustly on this dataset, eliminating the need to train a new neural network specifically for token-airdrop events.

In addition to evaluating DNN models, we explored the extensibility of incorporating monotonicity constraints and sentiment analysis within the Neural Additive Model (NAM). Although these enhancements did not significantly improve the predictive accuracy of the NAM model on our test dataset, the intrinsic variability and complexity of blockchain data imply that datasets from different time periods might yield different results. This provides a platform for other researchers to utilize and further explore the dataset, enabling comprehensive analyses and advancing financial machine-learning models.

Our contributions include a comprehensive dataset that integrates both on-chain and off-chain data and is compatible with various machine learning algorithms for financial prediction. This dataset forms the cornerstone of a novel research framework, enabling a deeper exploration of the financial market and its mechanisms. By offering a robust and versatile dataset, we facilitate advanced exploration and optimization efforts, driving innovation and enhancing the accuracy and reliability of financial machine-learning models in blockchain technology.

# Hypothesis Development

## Machine Learning Algorithm Selection

The machine learning models we selected are Linear Regression, Deep Neural Network (DNN), XGBoost, and Long Short-Term Memory (LSTM).

# The Machine Learning Workflow

## Model Development

### Data Selection

<p>We query Ethereum’s data using Google BigQuery. The raw data contains information on timestamps, block numbers, hashes, parent hashes, transactions, etc. Since our research aims to predict the gas used in the next block, we keep only the relevant features: timestamp, block number, gas limit, gas used, and base fee. Notably, a token airdrop can substantially boost recipients' and non-recipients' transaction activity. As a result, gas used becomes highly volatile, leading to subsequent alteration of the base fee. Hence, our research is structured around two distinct periods. The first period spans the apex of the ARB airdrop, recognized as the most substantial of 2023, from March 21 to April 1, encompassing 78,290 blocks. The second period covers a month devoid of significant airdrop activity, spanning from June 1, 2023, to July 1, 2023, and containing 213,244 blocks.</p>
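As a sketch, the block-level fields above could be pulled with a query along these lines. The table `bigquery-public-data.crypto_ethereum.blocks` and its column names come from Google's public Ethereum dataset and are our assumption here, not the query actually used in this work:

```python
# Hypothetical BigQuery SQL for the block-level features described above.
# Table and column names are assumptions based on Google's public
# crypto_ethereum dataset, not the authors' actual query.
BLOCKS_QUERY = """
SELECT
  timestamp,
  number AS block_number,
  gas_limit,
  gas_used,
  base_fee_per_gas
FROM `bigquery-public-data.crypto_ethereum.blocks`
WHERE DATE(timestamp) BETWEEN '2023-03-21' AND '2023-04-01'
ORDER BY number
"""
```

The date range shown corresponds to the ARB-airdrop period described above; the second, airdrop-free period would use its own date bounds.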

<p>We also query the Discord data.</p>

### Data Processing

#### On-chain data

<p>In the original dataset, the base fee is denominated in units of Gwei, where each Gwei is equivalent to <code>$10^{-9}$</code> Ether. Consequently, for enhanced interpretability of the dataset, we scale the base fee by <code>$10^{-9}$</code>, expressing it in terms of Ether.</p>
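The scaling step can be sketched in one line (the function name is illustrative, not from the paper's code):

```python
GWEI_PER_ETHER = 10**9  # 1 Ether = 10^9 Gwei, so 1 Gwei = 10^-9 Ether

def gwei_to_ether(base_fee_gwei: float) -> float:
    """Scale a base fee quoted in Gwei into Ether, as described above."""
    return base_fee_gwei / GWEI_PER_ETHER
```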

<p>We create a regressor, denoted as <code>$\alpha$</code>, by computing the ratio of gas used to the gas limit. The predicted variable <code>$Y$</code> represents the normalized gas used, determined by the formula:</p>

<blockquote>
<p>
\[ Y = \frac{\text{gasUsed} - \text{gasTarget}}{\text{gasTarget}} \]
</p>
</blockquote>

<p>For varying periods <code>$k$</code>, the regressor values for the preceding <code>$k$</code> data points are collected into a list, forming the feature set <code>$X$</code>. The variable <code>$Y$</code> is the prediction target for the data point at time <code>$t$</code>. Under EIP-1559, the gas target is half of the gas limit.</p>
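The windowed construction of <code>$X$</code> and <code>$Y$</code> can be sketched as follows. Variable names are illustrative; taking the gas target to be half the gas limit follows EIP-1559 and is our reading, since the text does not define it explicitly:

```python
def build_features(gas_used, gas_limit, k):
    """Build (X, Y) pairs from block-level gas data.

    X[t] holds the regressor alpha = gas_used / gas_limit for the k
    preceding blocks; Y[t] is the normalized gas used of block t,
    (gasUsed - gasTarget) / gasTarget, with gasTarget assumed to be
    gas_limit / 2 per EIP-1559.
    """
    X, Y = [], []
    for t in range(k, len(gas_used)):
        alphas = [gas_used[i] / gas_limit[i] for i in range(t - k, t)]
        gas_target = gas_limit[t] / 2
        X.append(alphas)
        Y.append((gas_used[t] - gas_target) / gas_target)
    return X, Y
```

Each sample thus pairs the last <code>$k$</code> utilization ratios with the normalized gas used of the current block.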

#### Off-chain data

## Results Presentation

### Training and Testing

We conduct the train-test split through k-fold cross-validation with k=5, splitting the whole dataset into 5 subsets for training, validating, and testing the model. The fraction of training data is 0.44, the fraction of validation data is 0.22, and the fraction of testing data is 0.34.
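The text does not show how the 0.44/0.22/0.34 fractions are derived from the five folds, so the following only sketches the k=5 partitioning itself, in plain Python:

```python
def kfold_indices(n_samples, k=5):
    """Partition sample indices into k contiguous folds.

    A minimal sketch of the k-fold split described above; the exact
    assignment of folds to train/validation/test sets is assumed to
    happen downstream and is not reproduced here.
    """
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```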

## Model Evaluation

### Evaluation Criteria

The model is evaluated by Mean Squared Error (MSE).
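For reference, MSE is simply the mean of squared residuals between predictions and targets; a minimal sketch:

```python
def mean_squared_error(y_true, y_pred):
    """Mean Squared Error: the average of squared residuals."""
    assert len(y_true) == len(y_pred)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```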

### Iterative Improvement

For both the DNN and the LSTM, the initial hyperparameter settings are a learning rate of 0.001 and a patience of 10, with early stopping triggered after 12 epochs without a decrease in loss.
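The stopping rule can be sketched as a simple counter. How the patience of 10 interacts with the 12-epoch stopping threshold is not spelled out in the text, so this sketch models only the latter:

```python
class EarlyStopper:
    """Stop training after `patience` epochs without loss improvement.

    A sketch of the early-stopping rule described above; framework
    details (e.g. restoring best weights) are omitted.
    """
    def __init__(self, patience=12):
        self.patience = patience
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        """Record one epoch's loss; return True when training should stop."""
        if loss < self.best_loss:
            self.best_loss = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```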