Update README.md
README.md
</table>

### Two-step training loss (normal training and monotonic training)

We use a NAM (Neural Additive Model) because it is inherently transparent and isolates each feature's contribution, which makes it straightforward to impose monotonicity constraints on individual features. The model is trained on data from two distinct periods and achieves weak pairwise monotonicity over the $\alpha$ feature. In the first step, standard training lets the model learn from the data; in the second step, we impose the monotonicity constraints.

<table>
<tr>
<td> Two-step training loss </td>
<td><a href="./results/training_loss_2_step.pdf">Two-step training loss</a></td>
</tr>
</table>
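The two-step schedule can be sketched on a toy one-feature shape function. This is a minimal illustration, not the repository's code: the step-function basis, knot placement, learning rate, and penalty weight `lam` are all assumptions, and the hinge-squared penalty below is a simple pointwise stand-in for the weak pairwise monotonicity constraint described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y is noisily increasing in the single feature alpha.
alpha = rng.uniform(0.0, 1.0, size=200)
y = 2.0 * alpha + 0.3 * rng.normal(size=200)

# Shape function for alpha as a sum of step functions at fixed knots:
#   f(alpha) = bias + sum_j w_j * 1[alpha >= knot_j]
# Non-decreasing monotonicity then just means every increment w_j >= 0.
knots = np.linspace(0.0, 1.0, 11)[1:-1]        # 9 interior knots (assumed)
B = (alpha[:, None] >= knots[None, :]).astype(float)
B = np.hstack([np.ones((len(alpha), 1)), B])   # prepend a bias column
w = np.zeros(B.shape[1])

def mse_grad(w):
    """Mean-squared-error loss and its gradient for the linear-in-w model."""
    r = B @ w - y
    return float(np.mean(r ** 2)), 2.0 * B.T @ r / len(y)

# Step 1: standard training -- plain MSE, no constraint.
for _ in range(3000):
    _, g = mse_grad(w)
    w -= 0.05 * g

# Step 2: monotonic training -- add a hinge-squared penalty
#   lam * sum_j min(w_j, 0)^2   (bias w[0] excluded)
# that pushes every negative increment toward zero.
lam = 5.0                                      # penalty weight (assumed)
for _ in range(3000):
    _, g = mse_grad(w)
    g[1:] += lam * 2.0 * np.minimum(w[1:], 0.0)
    w -= 0.05 * g

print("min increment after step 2:", w[1:].min())
```

Because the model is linear in its weights, the gradients are exact; in the actual NAM each shape function is a small network, but the two-phase loss schedule is the same.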