Update method/Readme.md
method/Readme.md (+1 −1)
@@ -139,7 +139,7 @@ Another dataset is users' discussion text from Discord. Discord hosts vibrant cr
 </tr>
 </tbody>
 </table>
-
+
 #### Data processing
 We also incorporate an additional off-chain data source, specifically the discussion text from Discord. To analyze this data, we use a large language model to process English sentences or words, estimating the probability of each sentence being classified as positive, negative, or neutral, ensuring that the probabilities sum to 1. After obtaining this sentiment information, we organize the corpus sequentially and compute average sentiment scores over both hourly and daily intervals; we denote this sentiment information as gamma. We then synchronize the on-chain data with the off-chain sentiment using the corresponding block data from the previous time chunk, ensuring that only preceding sentiment information is included in the training data.
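The processing described in the diff above can be sketched in code. This is a minimal illustration, not the repository's actual implementation: the function names (`aggregate_sentiment`, `lagged_gamma`), the tuple message format, and the Unix-timestamp bucketing are all assumptions; the LLM classification step that produces the per-message probability triples is taken as given input.

```python
from collections import defaultdict

def aggregate_sentiment(messages, bucket="hour"):
    """Average per-message sentiment probabilities into time buckets.

    `messages` is a list of (unix_ts, p_pos, p_neg, p_neu) tuples, where
    the three probabilities come from an LLM classifier and sum to 1
    (hypothetical input format). Returns a dict mapping the bucket's
    start timestamp to the averaged (pos, neg, neu) triple, i.e. gamma.
    """
    seconds = 3600 if bucket == "hour" else 86400
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for ts, p_pos, p_neg, p_neu in messages:
        key = ts - ts % seconds  # floor timestamp to bucket start
        sums[key][0] += p_pos
        sums[key][1] += p_neg
        sums[key][2] += p_neu
        counts[key] += 1
    return {k: tuple(v / counts[k] for v in s) for k, s in sums.items()}

def lagged_gamma(gamma_by_bucket, block_ts, bucket_seconds=3600):
    """Look up gamma for the bucket *preceding* a block's timestamp,
    so only past sentiment enters the training data (the 'previous
    time chunk' synchronization described in the text)."""
    prev_bucket = (block_ts - block_ts % bucket_seconds) - bucket_seconds
    return gamma_by_bucket.get(prev_bucket)
```

For example, a block mined at timestamp 7300 would be paired with the average sentiment of the 3600–7199 hour bucket, never with sentiment from its own hour.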