---
license: mit
---
# FinML-Chain: A Blockchain-Integrated Dataset for Enhanced Financial Machine Learning
## Table of Contents
- Data
- Code
- Results
- Reference
## Data
### Collection for On-chain Data
We collect the on-chain data through Google BigQuery; the query we used is available in [DataQuery.txt](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/data/DataQuery.txt).
You can also refer to the [BigQuery console](https://console.cloud.google.com/bigquery?p=bigquery-public-data&d=crypto_ethereum_classic&page=dataset&project=psyched-service-412017&ws=!1m9!1m4!4m3!1sbigquery-public-data!2sethereum_blockchain!3slive_blocks!1m3!3m2!1sbigquery-public-data!2scrypto_ethereum_classic&pli=1) for more information.
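The query in DataQuery.txt can be reproduced along these lines. This is a sketch assuming the public `bigquery-public-data.crypto_ethereum.blocks` table; the date range shown is illustrative, not necessarily the exact window used for the released files:

```python
# Sketch: pull the block-level gas fields from BigQuery's public Ethereum
# dataset. Table and column names follow the public schema; the date range
# below is illustrative only.
QUERY = """
SELECT timestamp, number, gas_used, gas_limit, base_fee_per_gas
FROM `bigquery-public-data.crypto_ethereum.blocks`
WHERE DATE(timestamp) BETWEEN '2023-03-01' AND '2023-04-30'
ORDER BY number
"""

def fetch_blocks(query: str = QUERY):
    """Run the query with the google-cloud-bigquery client (needs GCP auth)."""
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    client = bigquery.Client()
    return client.query(query).to_dataframe()
```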
### Collection for Off-chain Data
### On-chain Data Information
| Data Files | Data Type | Data Content |
| ------------- | ------------- | ------------- |
| [ETH-Token-airdrop.csv](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/data/eth-onchain-03%3A2023_04%3A2023.csv) | Raw Data | Critical indicators related to gas during NFT airdrop period |
| [ETH-Normal.csv](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/data/eth-onchain-06%3A2023-07%3A2023.csv) | Raw Data | Critical indicators related to gas during normal period |
#### On-chain Data Dictionary
- **ETH-Token-airdrop.csv and ETH-Normal.csv**
| Variable Name | Description | Type |
|------------------------|-----------------------------------|---------|
| timestamp | Recording of the time of each block | String |
| number | The block number on the chain | Numeric |
| gas_used | Actual gas used | Numeric |
| gas_limit | The maximum allowed gas per block | Numeric |
| base_fee_per_gas | The base fee set for each block | Numeric |
- **Additional Variables we create**
| Variable Name | Description | Type |
|------------------------|-----------------------------------|---------|
| gas_fraction | Fraction between Gas Used and Gas Limit | Numeric |
| gas_target | The optimal gas used for each block | Numeric |
| Y | Normalized Gas Used | Numeric |
| Y<sub>t</sub> | Response variable, equal to gas_fraction | Numeric |
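The additional variables above can be derived from the raw block columns roughly as follows. This is a minimal pandas sketch: `gas_target` here follows the EIP-1559 convention of half the gas limit, and the exact normalization behind `Y` is an assumption (gas used relative to the target), not necessarily the one used in the repo's notebooks:

```python
import pandas as pd

def add_derived_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the additional variables from the raw block columns.

    Assumptions (not confirmed by the dataset card): gas_target is half the
    gas limit (EIP-1559), and Y normalizes gas used by that target.
    """
    out = df.copy()
    out["gas_fraction"] = out["gas_used"] / out["gas_limit"]   # fraction of limit used
    out["gas_target"] = out["gas_limit"] / 2                   # assumed EIP-1559 target
    out["Y"] = out["gas_used"] / out["gas_target"]             # assumed normalization
    out["Y_t"] = out["gas_fraction"]                           # response variable
    return out
```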
### Off-chain Data Information
| Variable Name | Description | Type |
|------------------------|-----------------------------------|---------|
| chat text | Users' chat messages (sentences) | String |
## Code
| Code Files | Code Description |
| ------------- | ------------- |
| [main_dataset_processing_code.ipynb](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/code/main_dataset_processing_code.ipynb) | Applying FinBERT to process Discord messages; applying the NAM model to enforce monotonicity; training the model on both on-chain and off-chain data |
| [NAM_models.py](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/code/NAM_models.py) | Implementation of the NAM model |
| [baseline_dataset_processing_code.ipynb](https://huggingface.co/datasets/dkublockchain/FinML_Chain/blob/main/code/baseline_dataset_processing_code.ipynb) | Using a linear model, DNN, XGBoost, and long short-term memory (LSTM) to predict gas used |
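For reference, the simplest of the listed baselines (the linear model) amounts to ordinary least squares on block features. A minimal NumPy sketch, with illustrative names (the DNN, XGBoost, and LSTM baselines follow the same fit-and-predict pattern):

```python
import numpy as np

def linear_baseline(X_train, y_train, X_test):
    """Fit an ordinary least-squares model with an intercept and predict on
    held-out blocks. X_* are 2-D arrays of block features (e.g. lagged
    gas_fraction values); y_train holds the gas-used targets."""
    Xb = np.column_stack([np.ones(len(X_train)), X_train])   # add intercept column
    beta, *_ = np.linalg.lstsq(Xb, y_train, rcond=None)      # least-squares fit
    Xt = np.column_stack([np.ones(len(X_test)), X_test])
    return Xt @ beta
```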
## Results
### Baseline results
<table>
<tr>
<td> Baseline loss for Token-airdrop period</td>
<td><img src="./results/s1.png" alt="Baseline loss for Token-airdrop period"></td>
<td><a href="./results/s1.png">Baseline loss for Token-airdrop period</a></td>
</tr>
<tr>
<td> Baseline variance for Token-airdrop period</td>
<td><img src="./results/s3.png" alt="Baseline variance for Token-airdrop period"></td>
<td><a href="./results/s3.png">Baseline variance for Token-airdrop period</a></td>
</tr>
</table>
<table>
<tr>
<td> Baseline loss for normal period</td>
<td><img src="./results/s2.png" alt="Baseline loss for normal period"></td>
<td><a href="./results/s2.png">Baseline loss for normal period</a></td>
</tr>
<tr>
<td> Baseline variance for normal period</td>
<td><img src="./results/s4.png" alt="Baseline variance for normal period"></td>
<td><a href="./results/s4.png">Baseline variance for normal period</a></td>
</tr>
</table>
### Two-step training loss (normal training and monotonic training)
We use the NAM model for its inherent transparency and its ability to isolate individual features, which makes it straightforward to impose monotonicity constraints on specific features. The model is trained on data from the two distinct periods and achieves weak pairwise monotonicity over the $\alpha$ feature. In the first step, standard training lets the model learn from the data; in the second step, we impose the monotonicity constraints.
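As a toy illustration of the two-step idea (not the NAM code in this repo), consider a single linear shape function f(x) = a·x + b trained by gradient descent: step one minimizes MSE alone, and step two adds a pairwise hinge penalty that discourages f from decreasing along the $\alpha$ feature. All names and hyperparameters here are illustrative:

```python
import numpy as np

def mse_grads(a, b, x, y):
    """Gradients of mean squared error for f(x) = a*x + b."""
    r = a * x + b - y
    return 2 * np.mean(r * x), 2 * np.mean(r)

def monotonic_penalty_grad_a(a, grid):
    """Subgradient (w.r.t. a) of the weak pairwise monotonicity penalty
    sum_i max(0, f(x_i) - f(x_{i+1})) over grid points sorted ascending."""
    diffs = a * grid[:-1] - a * grid[1:]          # f(x_i) - f(x_{i+1})
    active = diffs > 0                             # pairs that violate monotonicity
    return np.sum((grid[:-1] - grid[1:])[active])

def two_step_fit(x, y, lam=5.0, lr=0.01, steps=5000):
    a, b = 0.0, 0.0
    # Step 1: standard training (MSE only).
    for _ in range(steps):
        ga, gb = mse_grads(a, b, x, y)
        a -= lr * ga
        b -= lr * gb
    a_step1 = a
    # Step 2: MSE plus the monotonicity penalty on the alpha feature.
    grid = np.sort(x)
    for _ in range(steps):
        ga, gb = mse_grads(a, b, x, y)
        a -= lr * (ga + lam * monotonic_penalty_grad_a(a, grid))
        b -= lr * gb
    return a_step1, a, b
```

On data with a decreasing trend, step one learns a negative slope, and step two pushes the slope toward non-negative values, trading some fit for the weak monotonicity constraint.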
<table>
<tr>
<td> Two-step training loss </td>
<td><a href="./results/training_loss_2_step.pdf">Two-step training loss</a></td>
</tr>
</table>