Commit cc55e35 · Parent: 5e9f6ee
Update README.md

README.md CHANGED
@@ -27,7 +27,7 @@ decay of 1e-5 and a maximal learning rate of 5e-4. I train the model using a
 To fine-tune the model, I use several datasets, including:
 - A manually labeled [multi-label database of German ad-hoc announcements](https://arxiv.org/pdf/2311.07598.pdf) containing 31,771 sentences, each associated with up to 20 possible topics.
 - An extractive question-answering dataset based on the SQuAD format, which was created using 3,044 ad-hoc announcements processed by OpenAI's ChatGPT to generate and answer questions (see [here](https://huggingface.co/datasets/scherrmann/adhoc_quad)).
-- The [financial phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013) for sentiment classification, translated to German using [DeepL](https://www.deepl.com/translator) (see [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german))
+- The [financial phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013) for sentiment classification, translated to German using [DeepL](https://www.deepl.com/translator) (see [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german)).

 ### Benchmark Results

 The further pre-trained German FinBERT model demonstrated the following performance on finance-specific downstream tasks:
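The adhoc_quad dataset referenced in the diff follows the SQuAD extractive question-answering format, in which every answer is a verbatim character span of its context, recorded as the span text plus its start offset. A minimal sketch of that defining property is below; the German record is an invented placeholder for illustration, not an actual example from the dataset:

```python
def check_squad_record(record):
    """Return True if every answer span in a SQuAD-format record can be
    recovered by slicing the context at its recorded character offset."""
    context = record["context"]
    answers = record["answers"]
    for text, start in zip(answers["text"], answers["answer_start"]):
        # The extractive property: the stored answer text must appear
        # verbatim in the context at exactly this position.
        if context[start:start + len(text)] != text:
            return False
    return True


# Placeholder record in SQuAD format (not taken from adhoc_quad).
example = {
    "context": "Die Musterfirma AG erhöht ihre Dividende auf 1,50 Euro je Aktie.",
    "question": "Auf welchen Betrag wird die Dividende erhöht?",
    "answers": {
        "text": ["1,50 Euro je Aktie"],
        "answer_start": [45],
    },
}

print(check_squad_record(example))  # prints True
```

An abstractive answer, or a span whose offset drifts (as can happen when questions are machine-generated), fails this check, which is why such a validation pass is a common first step before fine-tuning an extractive QA model.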