How to use ahmedrachid/FinancialBERT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ahmedrachid/FinancialBERT")
model = AutoModelForMaskedLM.from_pretrained("ahmedrachid/FinancialBERT")
```
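As a sketch of what the fill-mask head returns (the example sentence here is illustrative, and the first run downloads the checkpoint from the Hugging Face Hub), the pipeline can be asked for its top predictions on a masked financial sentence:

```python
from transformers import pipeline

# Load the masked-language-modeling pipeline for FinancialBERT
pipe = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")

# BERT-style models use [MASK] as the placeholder token
predictions = pipe("The company reported a quarterly [MASK] of $2.5 billion.")

# Each prediction carries the filled-in token and a confidence score
for p in predictions:
    print(f"{p['token_str']:>12}  {p['score']:.3f}")
```

By default the pipeline returns the five highest-scoring candidate tokens; pass `top_k` to change that.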
FinancialBERT is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train such a model from scratch.
The model was trained on a large corpus of financial texts:
- TRC2-financial: 1.8M news articles published by Reuters between 2008 and 2010.
- Bloomberg News: 400,000 articles published between 2006 and 2013.
- Corporate Reports: 192,000 filings (10-K & 10-Q).
- Earnings Calls: 42,156 documents.
More details on FinancialBERT can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
Created by Ahmed Rachid Hazourli