Upload ./README.md with huggingface_hub
---
# 🫘🧮 BeanCounter
## Dataset Summary
BeanCounter is a low-toxicity, large-scale, and open dataset of business-oriented text. See [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827) for details of the data collection, analysis, and some explorations of using the data for continued pre-training.
The data is sourced from the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system operated by the United States Securities and Exchange Commission (SEC). Specifically, it comprises all filings submitted to EDGAR from 1996 through 2023 (validation splits are based on a random sample of data from January and February of 2024). We include four configurations of the dataset: `clean`, `default`, `fraud`, and `sample`. These consist of:
- `clean`: 159B tokens of cleaned text
- `default`: 111B tokens of cleaned and deduplicated text (referred to as "final" in the paper)
- `fraud`: 0.3B tokens of text filed during periods of fraud according to SEC [Accounting and Auditing Enforcement Releases](https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases) and [Litigation Releases](https://www.sec.gov/enforcement-litigation/litigation-releases) (note that this content is not deduplicated)
- `sample`: 1.1B tokens randomly sampled from `default` stratified by year
## How can I use this?
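
The configurations above can be loaded with the 🤗 `datasets` library. Below is a minimal sketch, assuming a hypothetical repo id (substitute this dataset's actual id on the Hub) and using streaming mode, since the larger configurations run to over a hundred billion tokens:

```python
from datasets import load_dataset

# Load the ~1.1B-token `sample` configuration in streaming mode so the
# full configuration is not downloaded up front.
ds = load_dataset(
    "user/BeanCounter",  # hypothetical repo id -- replace with the real one
    name="sample",       # one of: "clean", "default", "fraud", "sample"
    split="train",
    streaming=True,
)

# Inspect the first few records to see the available fields.
for i, record in enumerate(ds):
    print(record)
    if i >= 2:
        break
```

Swapping the `name` argument selects among the configurations listed above; the larger `clean` and `default` configurations load the same way but benefit even more from streaming.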