---
license: apache-2.0
language:
- zh
base_model:
- hfl/chinese-roberta-wwm-ext
tags:
- finance
---
## Model Details
**Model Description:** This is a finance-domain pretrained Chinese language model. It is based on the 355-million-parameter RoBERTa-Large and further pre-trained on 32B tokens of Chinese financial corpora, including a large volume of research reports, news, and announcements.
- **Developed by:** See [valuesimplex](https://github.com/valuesimplex) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Chinese
- **Parent Model:** See [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) for more information about the base model.
- **Resources for more information:**
- [Research Paper](https://dl.acm.org/doi/10.1145/3711896.3737219)
- [GitHub Repo](https://github.com/valuesimplex/FinBERT)
## Direct Use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT2-large")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-large")
```
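The snippet above only loads the weights. Below is a minimal sketch of extracting a sentence-level embedding from the model; the example sentence and the [CLS]-pooling choice are illustrative assumptions rather than an officially recommended usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT2-large")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-large")
model.eval()

# Illustrative Chinese finance sentence
# ("The central bank announced a 0.5 percentage-point cut to the reserve requirement ratio.")
text = "央行宣布下调存款准备金率0.5个百分点。"

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token's final hidden state as the sentence embedding
# (an assumed pooling choice; mean pooling over tokens is another common option)
sentence_embedding = outputs.last_hidden_state[:, 0, :]
print(sentence_embedding.shape)  # torch.Size([1, 1024]) for a RoBERTa-Large hidden size
```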
### Further Usage
For continual pre-training or fine-tuning, see https://github.com/valuesimplex/FinBERT. A minimal fine-tuning sketch is shown below.
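The sketch fine-tunes FinBERT2-large for a downstream classification task with the Hugging Face `Trainer` API. The binary label set, the tiny placeholder dataset, and the hyperparameters are illustrative assumptions, not part of the official FinBERT recipe; refer to the repository above for the authors' training setup.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

model_id = "valuesimplex-ai-lab/FinBERT2-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=2 is an illustrative choice for a binary financial classification task
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tiny placeholder dataset; replace with your own labeled financial texts
# ("Net profit grew 20% year over year." / "Regulators fined the institution.")
train_data = Dataset.from_dict({
    "text": ["公司净利润同比增长20%。", "监管部门对该机构处以罚款。"],
    "label": [1, 0],
})
train_data = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

args = TrainingArguments(
    output_dir="finbert2-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```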