We compare the time and space efficiency of this model and some competitors.
The experiments are run on an NVIDIA A100-SXM4-40GB GPU with a batch size of 1. The figures show the time and memory needed to run one batch. In training mode, both the forward pass and backpropagation are included; in inferring mode, only the forward pass is included.
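The measurement protocol described above can be sketched as follows. This is a minimal illustration, not the repository's benchmarking script: a toy `nn.Linear` stands in for the actual model, and timing is done on CPU (the real experiments used an A100; on CUDA one would also call `torch.cuda.max_memory_allocated()` for the memory figures).

```python
import time
import torch
import torch.nn as nn

# Toy stand-in for the real model (assumption for illustration only).
model = nn.Linear(128, 128)
x = torch.randn(1, 128)  # batch size of 1, as in the experiments

# Training mode: time one batch including forward pass and backpropagation.
model.train()
t0 = time.perf_counter()
out = model(x)
out.sum().backward()
train_time = time.perf_counter() - t0

# Inferring mode: time one batch with only the forward pass, gradients off.
model.eval()
t0 = time.perf_counter()
with torch.no_grad():
    out = model(x)
infer_time = time.perf_counter() - t0

# On GPU, memory per batch could be read with torch.cuda.max_memory_allocated().
print(f"train: {train_time:.6f}s, infer: {infer_time:.6f}s")
```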
# Inferring mode
# Training code
https://github.com/minhtriphan/LongFinBERT-base/tree/main
For any comments, questions, or feedback, please get in touch with us via phanminhtri2611@gmail.com or triminh.phan@unisg.ch.
# Paper
|
| 93 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 23 |
|
| 24 |
The experiments are implemented with an NVIDIA A100-SXM4-40GB. Batch size of 1. The figures show the time and memory needed to run one batch. In the training mode, forward pass and backpropagation are included. In the inferring model, only forward pass is included.
|
| 25 |
|
| 26 |
+

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 27 |
|
| 28 |
# Training code
|
| 29 |
https://github.com/minhtriphan/LongFinBERT-base/tree/main
|
|
|
|
| 82 |
For any comments, questions, or feedback, please get in touch with us via phanminhtri2611@gmail.com or triminh.phan@unisg.ch.
|
| 83 |
|
| 84 |
# Paper
|
| 85 |
+
```
@article{phan2024longfinbert,
  title={LongFinBERT: A Language Model for Very Long Financial Documents},
  author={Phan, Minh Tri and Senn, Erik},
  journal={Available at SSRN 5011898},
  year={2024}
}
```