---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  - name: html_formatted_text
    dtype: string
  - name: tokens
    dtype: int64
  splits:
  - name: train
    num_bytes: 3769973418.129021
    num_examples: 5837
  - name: test
    num_bytes: 64587517.87097861
    num_examples: 100
  download_size: 1521418659
  dataset_size: 3834560936
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
This data originates from https://www.athexgroup.gr/el/market-data/financial-data. It consists mainly of yearly and semi-annual company filings, totalling 5937 filings. 100 of these, mainly recent ones from 2023 and 2024, were set aside as the "test" split to be used in the Greek OCR task. The "tokens" column contains the token count for that specific row, computed with the GPT-4 tokenizer.
The total token count is 0.45B with the Llama-3.1-8B tokenizer and 0.9B with the GPT-4 tokenizer.
Text extraction pipeline:
The text was extracted using Fitz (PyMuPDF).
Data cleaning pipeline:
- Lines that appeared in their entirety on every single page were removed. This identified artifacts that survived text extraction, such as company logos and other text that is present on every page but meaningless.
- Lines where more than 30% of the characters were misencoded were removed (characters outside the proper Latin or Greek alphabets were considered misencoded).
- Lines containing page counters were removed (for example, "Page 1 out of 150").
- Words with tripled characters were removed. Due to some text extraction issues, a word like "January" might be extracted as "JJJaaannnuuuaaarrryyy". Since this pattern does not occur naturally in Greek, such words were removed.
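The cleaning steps above can be sketched roughly as follows. The regexes, allowed-character set, and thresholds here are illustrative assumptions, not the exact ones used for the dataset:

```python
import re
from collections import Counter

# Assumed set of "proper" characters: Latin, Greek (incl. polytonic),
# digits, whitespace, and common punctuation.
ALLOWED = re.compile(r"[A-Za-z\u0370-\u03FF\u1F00-\u1FFF0-9\s.,;:()%€$\-]")
# Page counters, e.g. "Page 1 out of 150" (Greek variant assumed).
PAGE_COUNTER = re.compile(r"^\s*(Page|Σελίδα)\s+\d+\s+(out of|από)\s+\d+\s*$", re.I)
# Three identical word characters in a row, e.g. "JJJaaannn...".
TRIPLED = re.compile(r"(\w)\1\1")


def clean_pages(pages: list[str]) -> list[str]:
    """Apply the four filters described above to a list of page texts."""
    # 1. Find lines present in their entirety on every page (logos, boilerplate).
    line_counts = Counter()
    for page in pages:
        line_counts.update(set(page.splitlines()))
    on_every_page = {
        ln for ln, n in line_counts.items() if len(pages) > 1 and n == len(pages)
    }

    cleaned = []
    for page in pages:
        kept = []
        for line in page.splitlines():
            if line in on_every_page:
                continue
            # 2. Page-counter lines.
            if PAGE_COUNTER.match(line):
                continue
            # 3. Lines with more than 30% misencoded characters.
            if line and len(ALLOWED.findall(line)) / len(line) < 0.7:
                continue
            # 4. Drop words containing tripled characters.
            words = [w for w in line.split() if not TRIPLED.search(w)]
            kept.append(" ".join(words))
        cleaned.append("\n".join(kept))
    return cleaned
```

For example, a repeated logo line, a page counter, and a "JJJaaannn..." word would all be stripped while ordinary sentences pass through unchanged.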