StevenJingfeng committed on
Commit cdffe8f · verified · 1 Parent(s): 7dc6333

Update method/Readme.md

Files changed (1):
  1. method/Readme.md +10 -1
method/Readme.md CHANGED
@@ -15,10 +15,19 @@ Our analysis indicates that among the four machine learning models evaluated, DN

 ### Data
 #### On-chain data
+ We utilize Google BigQuery to extract Ethereum's blockchain data, including timestamps, block numbers, hashes, parent hashes, transactions, etc. We retain only the features pertinent to predicting gas usage in forthcoming blocks: timestamp, gas limit, gas used, and base fee. We exclude other variables, such as transaction counts, despite their high correlation with gas usage, based on our specific research focus. Furthermore, our study acknowledges the impact of token airdrops on transaction engagement levels for recipients and non-recipients. According to Guo \cite{guo2023spillover}, token airdrops can significantly influence engagement, resulting in pronounced gas usage volatility and subsequent base fee fluctuations. Consequently, our analysis is bifurcated into two distinct periods. The first period examines the ARB token airdrop, the most substantial airdrop event in 2023, which occurred from March 21 to April 1 and comprised 78,290 blocks. The second period, devoid of significant fungible token airdrop activity, extends from June 1, 2023, to July 1, 2023, encompassing 213,244 blocks. This temporal delineation allows for a comprehensive analysis of the effects of significant airdrop events on Ethereum's gas dynamics.
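Concretely, the extraction step can be sketched as follows. The public dataset and column names (`bigquery-public-data.crypto_ethereum.blocks`, `gas_limit`, `gas_used`, `base_fee_per_gas`) are assumptions based on Google's public Ethereum dataset, not taken from this repository; verify them in the BigQuery console before running.

```python
def block_query(start_date: str, end_date: str) -> str:
    """Build the BigQuery SQL selecting the four retained block features."""
    return f"""
    SELECT timestamp, number, gas_limit, gas_used, base_fee_per_gas
    FROM `bigquery-public-data.crypto_ethereum.blocks`
    WHERE DATE(timestamp) BETWEEN '{start_date}' AND '{end_date}'
    ORDER BY number
    """

# First study period: the ARB airdrop window described above.
airdrop_sql = block_query("2023-03-21", "2023-04-01")

# Executing the query needs google-cloud-bigquery and GCP credentials, e.g.:
# from google.cloud import bigquery
# df = bigquery.Client().query(airdrop_sql).to_dataframe()
```

The second study period would use `block_query("2023-06-01", "2023-07-01")` with the same schema.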

 #### Off-chain data
+ Another dataset is users' discussion text from Discord. Discord hosts vibrant crypto discussions ranging from market analysis to technical debates, yet remains underexplored for sentiment analysis, unlike platforms such as Twitter and Reddit, for which extensive cryptocurrency sentiment studies exist \cite{kraaijeveld2020predictive,mohapatra2019kryptooracle,khan2022business}. We focus on critical communities on Discord: the Binance, Uniswap, and Ethereum Dev channels, which serve the largest centralized exchange, the leading decentralized exchange, and Ethereum developers, respectively. Sentiment in these communities influences Ethereum's network activity and gas usage through trading dynamics and developer engagement, so analyzing it provides crucial input for predicting future gas demand and formulating network management strategies. The discussion texts are queried from Discord using DiscordChatExporter, an open-source tool on GitHub.
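A minimal loading sketch for the exported chat logs. The JSON layout assumed here (a top-level `messages` list whose items carry `timestamp` and `content` fields) follows DiscordChatExporter's JSON export format, but check an actual export file for the exact field names before relying on them.

```python
import json

def load_messages(path: str) -> list[tuple[str, str]]:
    """Return (timestamp, text) pairs for non-empty messages in one export."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return [
        (m["timestamp"], m["content"])
        for m in data["messages"]
        if m.get("content")  # skip empty / attachment-only messages
    ]
```

An export is typically produced with the tool's CLI, e.g. `DiscordChatExporter.Cli export -t <token> -c <channel id> -f Json`; consult the tool's README for the exact flags.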

 ### Monotonicity
 We propose a novel method for predicting gas usage in blockchain transactions, inspired by the concept of pairwise monotonicity detailed by Chen \cite{chen2023address}. Unlike traditional methods such as EMA, which emphasize forgetting older information, our approach employs a monotonicity representation to attribute varying levels of importance to data over time. Monotonicity has demonstrated interdisciplinary applicability: works such as Liu et al. \cite{liu2020certified} and Milani \cite{milani2016fast} focus on individual monotonicity for single variables, while Chen \cite{chen2023address} introduced pairwise monotonicity in the financial domain. For instance, in credit scoring, past due amounts over a longer period should more significantly impact the risk score of new debt. Analogously, in blockchain transactions, older data points should influence the prediction less than recent ones.
+ We apply pairwise monotonicity to the α feature: a change in α at a recent block produces a larger change in the prediction than the same change at a more distant block.
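To make the contrast with EMA concrete, a toy sketch: EMA fixes a geometric decay, while the constraint above only requires that a feature's importance never increase with block age. `check_monotone_in_lag` is a hypothetical helper for that check, not code from this repository.

```python
import numpy as np

def ema_weights(window: int, lam: float = 0.9) -> np.ndarray:
    """Normalized EMA weights over a window; index 0 = most recent block."""
    w = (1 - lam) * lam ** np.arange(window)
    return w / w.sum()

def check_monotone_in_lag(sensitivities: np.ndarray) -> bool:
    """True if importance never increases as the lag (block age) grows."""
    return bool(np.all(np.diff(sensitivities) <= 0))

w = ema_weights(8)
assert check_monotone_in_lag(w)  # EMA satisfies the constraint by construction
```

The difference is that EMA hard-codes the decay rate, whereas a monotonicity-constrained model learns any importance profile satisfying `check_monotone_in_lag`.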
+
+ ### FinBert model
+ The sentiment extraction from the text data is conducted using FinBert, a model proposed by Araci in 2019 \cite{araci2019finbert}. FinBert is a BERT-based architecture specifically trained on financial datasets, including the Financial PhraseBank, TRC2-financial, and FiQA Sentiment. This training enables FinBert to achieve state-of-the-art performance in FiQA sentiment scoring. Our research uses FinBert to predict sentiment in our text data, ensuring the analysis is aligned with financial contexts.
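A sketch of folding FinBert outputs into a single signed sentiment feature per message. The checkpoint id `ProsusAI/finbert` is a commonly used public release of FinBert and an assumption here, and `signed_score` is an illustrative helper, not repository code.

```python
def signed_score(prediction: dict) -> float:
    """Map a {label, score} output to [-1, 1]: positive +, negative -, neutral 0."""
    sign = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}
    return sign[prediction["label"].lower()] * prediction["score"]

# With the transformers library (not run here, to avoid a model download):
# from transformers import pipeline
# clf = pipeline("text-classification", model="ProsusAI/finbert")
# scores = [signed_score(p) for p in clf(["ETH gas fees are spiking again"])]
```

Per-block sentiment can then be aggregated (e.g. averaged over the messages in a block's time window) before being fed to the predictor.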
+
+ ### NAM model
+ The Neural Additive Model (NAM), proposed by Agarwal et al. in 2021 \cite{agarwal2021neural}, offers a transparent framework for modeling individual or combined features with Deep Neural Networks (DNNs): each feature (or feature group) is fed to its own subnetwork, and the subnetwork outputs are aggregated at the final layer into a unified prediction. Because our research omits interactions between unrelated features, we can impose weak monotonicity constraints on each feature's subnetwork.
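A toy forward pass showing the additive structure: one small subnetwork per feature, with outputs summed at the end. The shapes, activation, and random parameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet_forward(x, W1, b1, W2, b2):
    """One feature's subnetwork: scalar in -> hidden ReLU layer -> scalar out."""
    h = np.maximum(0.0, x * W1 + b1)  # hidden activations, shape (hidden,)
    return float(h @ W2 + b2)

def nam_forward(features, params):
    """NAM prediction: f(x) = sum_i f_i(x_i), one subnetwork per feature."""
    return sum(subnet_forward(x, *p) for x, p in zip(features, params))

# Three features, hidden width 4, random illustrative parameters.
params = [
    (rng.normal(size=4), rng.normal(size=4), rng.normal(size=4), rng.normal())
    for _ in range(3)
]
x = [0.5, -1.2, 2.0]
total = nam_forward(x, params)
```

Because the prediction decomposes per feature, each subnetwork's contribution can be inspected directly, which is what makes imposing a per-feature weak monotonicity constraint straightforward.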