We are thrilled to introduce a specialized collection of **68 large language models (LLMs)**, meticulously designed for accounting and finance. The FinText models have been **pre-trained** on domain-specific historical data, addressing challenges like **look-ahead bias** and **information leakage**. These models are crafted to elevate the accuracy and depth of financial research and analysis.

💡 **Key Features:**

- **Domain-Specific Training:** FinText utilises diverse financial datasets such as news articles, regulatory filings, IP records, key information, speeches (ECB, Fed), and more.
- **Time-Period Specific Models:** Separate models are pre-trained for each year from **2007 to 2023**, ensuring the utmost precision and historical relevance (see the loading sketch below).
- **RoBERTa Architecture:** The suite includes both a **base model** with **125 million parameters** and a **smaller variant** with **51 million parameters**.
- **Two Distinct Pre-Training Durations:** We also introduce a series of models to explore the impact of further pre-training.
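
To use a year-specific model, load it with the Hugging Face `transformers` library as in the minimal sketch below. The repo id `FinText/FinText-Base-2007` is hypothetical, shown only for illustration; substitute the actual model id from this collection.

```python
# Minimal sketch, assuming the models are hosted on the Hugging Face Hub and
# that "FinText/FinText-Base-2007" is a valid repo id (hypothetical; replace
# with an actual model id from this collection).
# Requires: pip install transformers torch
from transformers import pipeline

# Choose the model whose pre-training cutoff matches your sample period,
# e.g. the 2007 model for analyses of data observed up to 2007, so that no
# post-period information leaks into the representations.
fill_mask = pipeline("fill-mask", model="FinText/FinText-Base-2007")

# RoBERTa-style models use "<mask>" as the mask token.
for pred in fill_mask("The central bank decided to <mask> interest rates."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```

The smaller 51-million-parameter variants load the same way; only the repo id changes.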