---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - "train/bc-*.jsonl.gz"
  - split: validation
    path:
    - "validation/bc-*.jsonl.gz"
- config_name: clean
  data_files:
  - split: train
    path:
    - "train/bc*.jsonl.gz"
    - "deduped/bc*.jsonl.gz"
  - split: validation
    path:
    - "validation/bc*.jsonl.gz"
- config_name: sample
  data_files:
  - split: train
    path:
    - "sample/train/bc*.jsonl.gz"
  - split: validation
    path:
    - "sample/validation/bc*.jsonl.gz"
- config_name: fraud
  data_files:
  - split: train
    path:
    - "fraud/bc-*.jsonl.gz"
---

# đŸ«˜đŸ§ź BeanCounter

## Dataset Summary

BeanCounter is a low-toxicity, large-scale, and open dataset of business-oriented text. See [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827) for details of the data collection, analysis, and some explorations of using the data for continued pre-training.

The data is sourced from the Electronic Data Gathering and Retrieval (EDGAR) system operated by the United States Securities and Exchange Commission (SEC). Specifically, it includes all filings submitted to EDGAR from 1996 through 2023 (validation splits are based on a random sample of data from January and February of 2024).

We include four configurations of the dataset: `clean`, `default`, `fraud`, and `sample`. These consist of:

- `clean`: 159B tokens of cleaned text
- `default`: 111B tokens of cleaned and deduplicated text (referred to as "final" in the paper)
- `fraud`: 0.3B tokens of text filed during periods of fraud according to SEC [Accounting and Auditing Enforcement Releases](https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases) and [Litigation Releases](https://www.sec.gov/enforcement-litigation/litigation-releases) (note that this content is not deduplicated)
- `sample`: 1.1B tokens randomly sampled from `default`, stratified by year

## How can I use this?

### License

The dataset is provided under the [ODC-By](https://opendatacommons.org/licenses/by/1-0/) license.
Cite our work as: ``` @inproceedings{ wang2024beancounter, title={BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text}, author={Siyan Wang and Bradford Levy}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=HV5JhUZGpP} } ``` ### In đŸ€— Datasets To load the random sample config in Datasets, one can run: ```python from datasets import load_dataset beancounter = load_dataset( "blevy41/BeanCounter", name="sample", # Load random sample, clean, or default (referred to as final in paper) ) # Print out split info print(beancounter, "\n") # Inspect an observation print(f"COLUMNS IN DATA: {','.join(beancounter['train'][1000].keys())}\n") print(f"EXCERPT: \n\n{beancounter['train'][1000]['text'][:1000]}") ``` ## What fields are in the data? The data contain seven fields: 1. `accession` - A unique identifier assigned to accepted EDGAR filings 2. `filename` - Each filing consists of one or more attachments. This is the filename of the specific attachment within the filing 3. `text` - Extracted text 4. `type_filing` - The type of the filing. A full index of SEC filing types can be found [here](https://www.sec.gov/submit-filings/forms-index) 5. `type_attachment` - The type of the attachment. For example, an 8-K filing will have a main "8-K" attachment but could also have exhibits of other types such as "EX-99" 6. `date` - The filing date assigned by the EDGAR system 7. `ts_accept` - The timestamp when the filing was accepted by the EDGAR system Note that if a filing is accepted by EDGAR after the [filing deadline](https://www.sec.gov/submit-filings/filer-support-resources/how-do-i-guides/determine-status-my-filing#section1) then EDGAR will not disseminate the form until the next business day and the `date` assigned by the EDGAR system will be the next business day, i.e., after `ts_accept`. 
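The relationship between `date` and `ts_accept` described above can be checked programmatically. Below is a minimal sketch that assumes ISO-style `YYYY-MM-DD` dates and `YYYY-MM-DD HH:MM:SS` timestamps; the exact string formats in the released files may differ, so adjust the format strings to match your loaded data:

```python
from datetime import datetime

def held_until_next_business_day(date: str, ts_accept: str) -> bool:
    """Return True if the EDGAR dissemination date (`date`) falls after the
    calendar date of acceptance (`ts_accept`), i.e., the filing was accepted
    after the deadline and held until the next business day."""
    filing_date = datetime.strptime(date, "%Y-%m-%d").date()
    accepted = datetime.strptime(ts_accept, "%Y-%m-%d %H:%M:%S")
    return filing_date > accepted.date()

# A filing accepted late on a Friday evening but dated the following Monday:
print(held_until_next_business_day("2023-06-05", "2023-06-02 22:15:41"))  # True
```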
Full details of processing can be found in [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827).

# Datasheet

Questions from the Datasheets for Datasets paper, v7.

Jump to section:

- [Motivation](#motivation)
- [Composition](#composition)
- [Collection process](#collection-process)
- [Preprocessing/cleaning/labeling](#preprocessingcleaninglabeling)
- [Uses](#uses)
- [Distribution](#distribution)
- [Maintenance](#maintenance)

## Motivation

_The questions in this section are primarily intended to encourage dataset creators to clearly articulate their reasons for creating the dataset and to promote transparency about funding interests._

### For what purpose was the dataset created?

_Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description._

BeanCounter is one of the largest business-oriented text datasets and was created to facilitate research in business-domain NLP and toxicity in NLP datasets.

### Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?

The BeanCounter dataset was created by Bradford Levy and Siyan Wang at the University of Chicago Booth School of Business.

### Who funded the creation of the dataset?

_If there is an associated grant, please provide the name of the grantor and the grant name and number._

There are no specific grants that supported the creation of the dataset; we acknowledge general financial support from the University of Chicago Booth School of Business.

### Any other comments?

No.

## Composition

_Most of these questions are intended to provide dataset consumers with the information they need to make informed decisions about using the dataset for specific tasks.
The answers to some of these questions reveal information about compliance with the EU’s General Data Protection Regulation (GDPR) or comparable regulations in other jurisdictions._

### What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?

_Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description._

The instances are publicly available financial disclosure documents filed on the Securities and Exchange Commission's Electronic Data Gathering and Retrieval system (SEC EDGAR) by entities subject to the Securities Acts of 1933 and 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940.

### How many instances are there in total (of each type, if appropriate)?

We collected 16,486,145 documents (instances) from more than 16,000 entities.

### Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?

_If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable)._

We filter out documents containing very little text or a high proportion of whitespace; see Appendix A in Wang and Levy (2024) for more details. We provide three configurations of the dataset: BeanCounter.clean, BeanCounter.final, and BeanCounter.sample. BeanCounter.clean is the set of documents remaining after applying the cleaning techniques described in Appendix A.3, BeanCounter.final is the set of documents that have additionally been deduplicated on a per-document basis (see Appendix A.4), and BeanCounter.sample is a 1% random sample of the dataset stratified by year.
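The year-stratified sampling used to construct the 1% sample can be sketched as follows. This is an illustrative reimplementation, not the authors' exact procedure, and it assumes records are dicts carrying a `YYYY-MM-DD` `date` field:

```python
import random
from collections import defaultdict

def sample_stratified_by_year(records, rate=0.01, seed=0):
    """Draw roughly `rate` of the records within each filing year, so every
    year contributes proportionally to the sample."""
    rng = random.Random(seed)
    by_year = defaultdict(list)
    for rec in records:
        by_year[rec["date"][:4]].append(rec)  # group on the year prefix
    sampled = []
    for year in sorted(by_year):
        recs = by_year[year]
        k = max(1, round(len(recs) * rate))  # at least one record per year
        sampled.extend(rng.sample(recs, k))
    return sampled
```

Note that the released `sample` configuration targets an even volume of tokens per year rather than an even count of documents, so a faithful reproduction would weight by token counts.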
### What data does each instance consist of?

_“Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description._

Each instance consists of:

- accession number: unique number assigned to each filing, derived from the filer's CIK, the filing year, and a sequence number.
- file name: name of the document submission, including the extension (e.g., .html or .txt).
- text: textual content of the document.
- filing type: indicated type of submission to fulfill a specific SEC regulation; more specific than form type; e.g., DEF 14A (filing type) vs. DEF (form type).
- attachment type: purpose of the document in the particular filing. The two main types are the main filing and exhibits (supplementary materials to the main filing).
- date: date of filing submission.
- form type: indicated type of submission to fulfill a particular SEC regulation (similar to filing type but less specific).
- accepted timestamp: second-precision timestamp of when the document was accepted by SEC EDGAR.

### Is there a label or target associated with each instance?

_If so, please provide a description._

No.

### Is any information missing from individual instances?

_If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text._

No information should be missing from instances.

### Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)?

_If so, please describe how these relationships are made explicit._

Instances are attachments to a particular filing, and each filing can contain one or more attachments. If a filing has more than one attachment (i.e., instance), all attachments in the filing share the same accession number, so the instances are linked by accession.

### Are there recommended data splits (e.g., training, development/validation, testing)?
_If so, please provide a description of these splits, explaining the rationale behind them._

The training set contains all data extracted from SEC EDGAR between 1996 and 2023. The validation set contains 100MB (uncompressed) of documents sampled from the start of 2024 through the end of February 2024. The training and validation sets are partitioned by time to ensure that data in the validation set is largely new and unobserved in the training set, since most entities are required to file updated reports at least annually.

### Are there any errors, sources of noise, or redundancies in the dataset?

_If so, please provide a description._

Since the entities are responsible for producing the documents, there is a possibility of misreported numbers or information in their filings. If the SEC finds such errors, it can require corrections from the entities; otherwise, the errors can go undetected. For discussion of reducing redundancies in the dataset, please see Appendices A.3 and A.4 of the manuscript.

### Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?

_If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a future user? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate._

The dataset is self-contained.
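The time-based train/validation partition described in this section can be sketched as a simple date cutoff. This is an illustration, assuming records carry a `YYYY-MM-DD` `date` field (ISO-format dates compare correctly as strings):

```python
def assign_split(record):
    """Assign a record to the train or validation split by filing date,
    mirroring the time-based partition (train: 1996-2023; validation: 2024)."""
    return "train" if record["date"] < "2024-01-01" else "validation"
```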
### Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?

_If so, please provide a description._

No, the data does not contain any confidential information. All financial disclosures filed on SEC EDGAR are publicly available. Discussion regarding the license of SEC EDGAR data can be found at the beginning of Section 3 in Wang and Levy (2024).

### Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?

_If so, please describe why._

We have conducted extensive toxicity analysis of the dataset and determined that it is lower in toxicity than other web-based datasets; details can be found in Section 3.4 of the manuscript. Discussion of the differences between BeanCounter and other web-based datasets can also be found in the conclusion. Based on manual inspection of toxic content in the dataset, we have found rare instances of toxic sentences in filings that include earnings call transcripts or discussions of discriminatory communication (with examples) in the context of human resources training manuals.

### Does the dataset relate to people?

_If not, you may skip the remaining questions in this section._

A small portion of our dataset may relate to people insofar as they are mentioned by the entities in our dataset. For example, Tim Cook may be mentioned in our data if Apple or its competitors discuss him.

### Does the dataset identify any subpopulations (e.g., by age, gender)?
_If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset._

BeanCounter includes references to various subpopulations; we explicitly study the toxicity of text surrounding these mentions, and details can be found in Sections 3.3 and 3.4 of Wang and Levy (2024).

### Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?

_If so, please describe how._

The dataset can contain personally identifiable information; however, the entities have consented to making this information available. See the beginning of Section 3 of the manuscript for a more detailed discussion.

### Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?

_If so, please provide a description._

No.

### Any other comments?

No.

## Collection process

_\[T\]he answers to questions here may provide information that allow others to reconstruct the dataset without access to it._

### How was the data associated with each instance acquired?

_Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how._

The data associated with each instance is derived from the SEC's daily archives of filings accepted by the EDGAR system. The EDGAR system accepts a variety of file formats.
We process all text- and HTML-based files to extract formatted long-form text from each filing. Full details of the dataset construction process can be found in Appendix A of Wang and Levy (2024).

### What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?

_How were these mechanisms or procedures validated?_

The SEC publishes daily archives of all filings accepted by the EDGAR system. We downloaded these in an automated manner, retrying any failed downloads until they succeeded.

### If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?

We process all text- and HTML-based filings. The "sample" configuration of the BeanCounter dataset consists of a random sample of 1% of the full BeanCounter dataset. We sample this data stratified by year to ensure an even volume of tokens for each year.

### Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?

The authors completed all data collection activities themselves.

### Over what timeframe was the data collected?

_Does this timeframe match the creation timeframe of the data associated with the instances (e.g. recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created._

The data was collected in February 2024; however, the SEC EDGAR system is similar to an append-only database in which each filing is associated with a timestamp denoting the date and time it was accepted by EDGAR. In that sense, any data collected retroactively, e.g., a filing from 2014, is representative of its content at the time EDGAR accepted it.

### Were any ethical review processes conducted (e.g., by an institutional review board)?
_If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation._

No.

### Does the dataset relate to people?

_If not, you may skip the remainder of the questions in this section._

A small portion of our dataset may relate to people insofar as they are mentioned by the entities in our dataset. For example, Tim Cook may be mentioned in our data if Apple or its competitors discuss him.

### Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?

All data is collected from SEC EDGAR.

### Were the individuals in question notified about the data collection?

_If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself._

They were not.

### Did the individuals in question consent to the collection and use of their data?

_If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented._

Yes, all EDGAR filers consent to the SEC's terms of use, which stipulate that "Information presented on www.sec.gov is considered public information and may be copied or further distributed by users of the web site without the SEC’s permission." More details on the SEC's policy can be found [here](https://web.archive.org/web/20240602180519/https://www.sec.gov/privacy#dissemination).

### If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?

_If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate)._

Not applicable.
### Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?

_If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation._

See Wang and Levy (2024) for a discussion of the implications and impact of the dataset.

### Any other comments?

No.

## Preprocessing/cleaning/labeling

_The questions in this section are intended to provide dataset consumers with the information they need to determine whether the “raw” data has been processed in ways that are compatible with their chosen tasks. For example, text that has been converted into a “bag-of-words” is not suitable for tasks involving word order._

### Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?

_If so, please provide a description. If not, you may skip the remainder of the questions in this section._

Yes, both raw-text and HTML-based filings had some preprocessing and cleaning applied. The goal of these steps is to extract long-form text from the original filings while preserving meaningful formatting such as paragraph breaks, indentation, and lists. See Wang and Levy (2024) for further details of the exact preprocessing and cleaning.

### Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?

_If so, please provide a link or other access point to the “raw” data._

Yes, the raw data is directly available from the SEC, and they have pledged to continue to make it available.

### Is the software used to preprocess/clean/label the instances available?

_If so, please provide a link or other access point._

Yes, please see the supplementary materials for access.

### Any other comments?

No.
## Uses

_These questions are intended to encourage dataset creators to reflect on the tasks for which the dataset should and should not be used. By explicitly highlighting these tasks, dataset creators can help dataset consumers to make informed decisions, thereby avoiding potential risks or harms._

### Has the dataset been used for any tasks already?

_If so, please provide a description._

We explored the utility of BeanCounter by continually pretraining existing models on the dataset and evaluating them on financial and toxicity-related tasks; see Section 4 of Wang and Levy (2024) for a detailed discussion.

### Is there a repository that links to any or all papers or systems that use the dataset?

_If so, please provide a link or other access point._

No, BeanCounter has not yet been used in other papers or systems.

### What (other) tasks could the dataset be used for?

The dataset could be used for tasks that evaluate social biases (e.g., CrowS-Pairs), truthfulness (e.g., TruthfulQA), timeliness (e.g., TempLAMA), and other financial-domain knowledge evaluations (e.g., ConvFinQA).

### Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?

_For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms?_

While we process all of the filings uploaded to EDGAR, our text extraction process only supports text- and HTML-based documents. As a result, the content of other document types, e.g., Excel, will not appear in our dataset.

### Are there tasks for which the dataset should not be used?
_If so, please provide a description._

Due to the nature of content in the dataset, models trained on BeanCounter may lack imagination and perform poorly on benchmarks that evaluate a model's creativity; see the Conclusion in Wang and Levy (2024) for additional discussion of the idiosyncrasies of the data.

### Any other comments?

No.

## Distribution

### Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?

_If so, please provide a description._

Yes.

### How will the dataset be distributed (e.g., tarball on website, API, GitHub)?

_Does the dataset have a digital object identifier (DOI)?_

The dataset will be available via the Hugging Face Hub as a collection of gzipped JSON Lines files.

### When will the dataset be distributed?

It will be made publicly available close to the NeurIPS conference date.

### Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?

_If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions._

Yes, the dataset will be distributed under the [Open Data Commons Attribution (ODC-By)](https://opendatacommons.org/licenses/by/) license. This permissive license allows users to share and adapt the dataset as long as they give credit to the authors.

### Have any third parties imposed IP-based or other restrictions on the data associated with the instances?

_If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions._

No.

### Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?
_If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation._

No.

### Any other comments?

No.

## Maintenance

_These questions are intended to encourage dataset creators to plan for dataset maintenance and communicate this plan with dataset consumers._

### Who is supporting/hosting/maintaining the dataset?

Bradford Levy and Siyan Wang are supporting and maintaining the dataset.

### How can the owner/curator/manager of the dataset be contacted (e.g., email address)?

Please refer to the manuscript for email addresses.

### Is there an erratum?

_If so, please provide a link or other access point._

Please see the GitHub repository for errata related to the dataset.

### Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?

_If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)?_

Yes, as soon as practicable. Updates will be communicated via GitHub and the Hugging Face Hub.

### If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?

_If so, please describe these limits and explain how they will be enforced._

No, the entities in the dataset have agreed to make it publicly available in perpetuity.

### Will older versions of the dataset continue to be supported/hosted/maintained?

_If so, please describe how. If not, please describe how its obsolescence will be communicated to users._

Yes, older versions of the dataset will continue to be hosted on the Hugging Face Hub.

### If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?

_If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not?
Is there a process for communicating/distributing these contributions to other users? If so, please provide a description._

Researchers can interact with and use the BeanCounter dataset via the Hugging Face Hub; we do not provide functionality beyond what the Hugging Face Hub provides.

### Any other comments?

No.