---
license: apache-2.0
language:
- zh
base_model:
- hfl/chinese-roberta-wwm-ext
tags:
- finance
---
## Model Details
**Model Description:** FinBERT2-base is a Chinese language model pretrained for the finance domain. It is based on the 125-million-parameter RoBERTa-Base (chinese-roberta-wwm-ext) and further pre-trained on 32B tokens of Chinese financial corpora, including a large volume of research reports, news articles, and company announcements.
- **Developed by:** See [valuesimplex](https://github.com/valuesimplex) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Chinese
- **Parent Model:** See [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) for more information about the base model.
- **Resources for more information:**
- [Research Paper](https://dl.acm.org/doi/10.1145/3711896.3737219)
- [GitHub Repo](https://github.com/valuesimplex/FinBERT)
## Direct Use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
```
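Once the model and tokenizer are loaded, a common pattern is to turn financial texts into fixed-size sentence vectors. A minimal sketch follows, assuming mean pooling over non-padding tokens as the pooling strategy (one common choice; the `mean_pool` and `embed` helpers here are illustrative, not part of the release):

```python
# Hedged sketch: sentence embeddings from FinBERT2-base via mean pooling.
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Average token vectors, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # avoid divide-by-zero
    return summed / counts

def embed(texts, model, tokenizer):
    """Encode a batch of texts into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return mean_pool(out.last_hidden_state, batch["attention_mask"])

# Usage (after loading as shown above):
# vectors = embed(["公司发布年度财报", "央行下调利率"], model, tokenizer)
# vectors.shape -> (2, hidden_size)
```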
### Further Usage
For continual pre-training or fine-tuning, see the [FinBERT GitHub repository](https://github.com/valuesimplex/FinBERT).
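For fine-tuning on a downstream task such as financial sentiment classification, a minimal sketch with the standard `transformers` `Trainer` API is shown below. The label set, dataset fields, and hyperparameters are illustrative assumptions, not part of this release:

```python
# Hedged sketch: fine-tune FinBERT2-base for a hypothetical 3-class
# financial sentiment task. Labels and hyperparameters are placeholders.
LABELS = ["negative", "neutral", "positive"]  # hypothetical label set
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for i, name in enumerate(LABELS)}

def build_trainer(train_dataset, eval_dataset):
    """Assemble a Trainer; datasets must yield input_ids/attention_mask/labels."""
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)
    tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "valuesimplex-ai-lab/FinBERT2-base",
        num_labels=len(LABELS), id2label=id2label, label2id=label2id,
    )
    args = TrainingArguments(
        output_dir="finbert2-sentiment",   # hypothetical output path
        per_device_train_batch_size=16,
        learning_rate=2e-5,
        num_train_epochs=3,
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_dataset, eval_dataset=eval_dataset,
                   tokenizer=tokenizer)

# trainer = build_trainer(train_ds, eval_ds)
# trainer.train()
```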