beethogedeon committed ff32051 (verified, parent 6cdbce1): Create README.md
---
language: en
license: mit
library_name: transformers
tags:
- economics
- finance
- bert
- language-model
- financial-nlp
- economic-analysis
datasets:
- custom_economic_corpus
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: text-classification
---
# SentEconBERT

## Model Description

SentEconBERT is an EconBERT-based language model fine-tuned for sentiment analysis of economic and financial text. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.

> **Note**: Complete details of the model architecture, training methodology, evaluation, and performance metrics are available in our paper. Please refer to the citation section below.

## Intended Uses & Limitations

### Intended Uses

- **Economic Text Classification**: Categorizing economic documents, papers, or news articles
- **Sentiment Analysis**: Analyzing market sentiment in financial news and reports
- **Information Extraction**: Extracting structured data from unstructured economic texts
- Related downstream NLP tasks in the economic and financial domain

### Limitations

- The model is specialized for economic and financial domains and may not perform as well on general text
- For a detailed discussion of limitations, please refer to our paper

## Training Data

SentEconBERT was trained on the FinancialPhraseBank dataset. For comprehensive information about the training data, including sources, size, preprocessing steps, and other details, please refer to our paper.

## Evaluation Results

We evaluated SentEconBERT on several economic NLP tasks and compared its performance with general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.

Key findings include:
- Improved performance on economic domain tasks compared to general BERT models

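The listed metrics (accuracy, precision, recall, F1) are computed per class from model predictions; a minimal, self-contained sketch with toy labels (illustrative only, not actual model results):

```python
# Toy illustration of the metric types reported for this model
# (accuracy, precision, recall, F1); labels below are made up for the example.
y_true = ["positive", "neutral", "negative", "positive", "neutral"]
y_pred = ["positive", "neutral", "positive", "positive", "negative"]

# Accuracy: fraction of exact matches between predictions and references
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def prf(label):
    """Per-class precision, recall, and F1 from true/false positives and negatives."""
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice these would be computed with `sklearn.metrics` over a full test split; the sketch only makes the definitions concrete.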
## How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModelForSequenceClassification.from_pretrained("YourUsername/EconBERT")

# Example usage: classify the sentiment of an economic statement
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```

For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.

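Turning classification logits into a sentiment label is a softmax followed by an argmax. A dependency-free sketch, assuming the three FinancialPhraseBank classes (negative / neutral / positive) and an illustrative logit vector; the real label mapping lives in `model.config.id2label`:

```python
import math

# Hypothetical label mapping, assumed here for illustration
id2label = {0: "negative", 1: "neutral", 2: "positive"}

# Illustrative logits, as a sequence-classification head might emit
logits = [-1.2, 0.3, 2.1]

# Softmax: exponentiate and normalize to a probability distribution
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Argmax picks the highest-probability class
label = id2label[max(range(len(probs)), key=probs.__getitem__)]
```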
## Citation

If you use SentEconBERT in your research, please cite our paper:

```bibtex
@article{LastName2025econbert,
  title={EconBERT: A Large Language Model for Economics},
  author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
  journal={SSRN Working Paper},
  year={2025},
  publisher={University of Zurich}
}
```

## Additional Information

- **Model Type**: BERT
- **Language(s)**: English
- **License**: MIT

For more detailed information about model architecture, training methodology, evaluation results, and applications, please refer to our paper.