| ---
|
| license: mit
|
| ---
|
|
|
|
|
| # CrediBench Cleaned
|
|
|
|
|
**CDB Cleaned** is the pre-processed version of CrediBench. The text data is curated for pre-training; accordingly, we follow the general pre-processing steps recommended before pre-training on text.
|
|
|
| ## Summary
|
|
|
|
|
|
|
| ## Included Content
|
|
|
| - `domain`: Domain name from the website.
|
| - `wet_record_txt`: Text content from the domain's corresponding CommonCrawl WET file.
|
- `language`: The predicted language of the text content, along with the model's confidence in that prediction.
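A row therefore looks roughly like the following. Note that this is a hypothetical example for illustration only; the domain, text, and confidence values are made up, and the exact shape of the `language` field may differ:

```python
# Hypothetical example row illustrating the schema described above;
# all values are invented for illustration.
row = {
    "domain": "example.com",
    "wet_record_txt": "Welcome to Example! We sell widgets...",
    "language": {"label": "en", "confidence": 0.98},
}
```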
|
|
|
|
|
| # Pre-Processing
|
|
|
CDB Cleaned is pre-processed with [nemo-curator](https://github.com/NVIDIA-NeMo/Curator). We process the text with the following steps:
|
|
|
| 
|
|
|
|
|
|
|
| ## Filtering
|
|
|
| ### Text-Cleaning
|
|
|
| * `UnicodeReformatter`: Uses [ftfy](https://ftfy.readthedocs.io/en/latest/) to fix broken Unicode characters.
|
|
|
| * `NewlineNormalizer`: Uses regex to replace 3 or more consecutive newline characters in each document with only 2 newline characters.
|
|
|
* `UrlRemover`: Uses regex to remove all URLs in each document.
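The last two steps can be approximated with a few lines of regex. This is an illustrative sketch, not the actual nemo-curator implementations (which handle more edge cases):

```python
import re

# Illustrative URL pattern; nemo-curator's real pattern is more thorough.
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def normalize_newlines(text: str) -> str:
    """Collapse runs of 3 or more newlines down to exactly 2."""
    return re.sub(r"\n{3,}", "\n\n", text)

def remove_urls(text: str) -> str:
    """Strip URLs from the document."""
    return URL_RE.sub("", text)
```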
|
|
|
| ### Deduplication
|
|
|
* `Fuzzy Deduplication`: Uses MinHash and Locality Sensitive Hashing to find and remove near-duplicate documents.
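The core idea behind MinHash is that the fraction of matching signature slots between two documents estimates their Jaccard similarity over shingles. A minimal self-contained sketch (the production pipeline additionally buckets signatures with LSH so that only likely duplicates are compared):

```python
import hashlib

def shingles(text: str, k: int = 5) -> set:
    """Set of character k-grams for a document."""
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def minhash_signature(text: str, num_hashes: int = 64) -> list:
    """For each seeded hash function, keep the minimum hash over all shingles."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sig

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching slots estimates Jaccard similarity of the shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated similarity exceeds a chosen threshold are treated as near-duplicates and collapsed.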
|
|
|
| ## Labelling
|
|
|
| ### Language Management
|
|
|
* `FastText Language ID`: To label multilingual content at scale, we use the [FastText](https://fasttext.cc/docs/en/language-identification.html) language identification model.
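FastText's language-ID models return labels of the form `__label__en` together with a probability. A minimal sketch of turning one prediction into a (language, confidence) pair; the prediction tuple below is hard-coded for illustration, where real use would call `model.predict(doc_text)`:

```python
def to_language_field(labels, probs):
    """Convert a FastText prediction into a (language, confidence) pair."""
    label = labels[0].removeprefix("__label__")
    return label, float(probs[0])

# In real use: labels, probs = model.predict(doc_text)
labels, probs = ("__label__en",), (0.98,)
language = to_language_field(labels, probs)
```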
|
|
|
|
|
| ### Quality Labelling
|
|
|
* `FastText Quality Filtering`: We train our own FastText quality classifier and label each row with a quality rating.
|
|
|
|
|
| # Statistics
|
|
|
| ## File Sizes
|
|
|
| The filtering reduces the text size as follows: 
|
|
|
| ## Deduplication
|
|
|
The deduplication step identifies large clusters of repeated text. In total, 3,615,276 domains contain duplicate text.
|
|
|
| 
|
|
|
| ### Largest Clusters
|
|
|
The largest cluster, with 122,822 domains, corresponds to a Cloudflare Scrape Shield message. This occurs because Cloudflare hides email addresses on websites to protect them from spam bots and web scrapers.
|
|
|
|
|
| >Email Protection | Cloudflare Please enable cookies. Email Protection
|
| >You are unable to access this email address centrumpoloznicze.pl
|
| >The website from which you got to this page is protected by Cloudflare. Email addresses on that page have been hidden in order to keep them from being accessed by malicious bots. You must enable Javascript in your browser in order to decode the e-mail address.
|
| >If you have a website and are interested in protecting it in a similar way, you can sign up for Cloudflare.
|
| >How does Cloudflare protect email addresses on website from spammers?
|
| >Can I sign up for Cloudflare?
|
| >Cloudflare Ray ID: 8f0443b27bf9c5a5 • Your IP:
|
| >Click to reveal
|
| >18.97.9.170 • Performance & security by Cloudflare
|
|
|
|
|
The second-largest cluster, at 54,347 domains, corresponds to a similar Web Application Firewall or anti-bot service. The third-largest cluster is another such message:
|
|
|
| >One moment, please…
|
| >Loader
|
| >Please wait while your request is being verified…
|
|
|
|
|
|
|
| ## Language Distribution
|
|
|
CDB has a diverse distribution of languages: 143 in total, with English, German, and Chinese as the top three.

**English alone appears in 14,506,809 domains.**
|
|
|
| 
|
|
|
|
|
| ## Domain Coverage
|
|
|
When merging CDB-Dec, Nov, and Oct into one large graph after filtering, about 56.4% of domains contain text, 21.2% contain English text, and 43.6% contain **no** text.
|
|
|
|
|
| 
|
|
|
|
|
|
|
| ## Tokens
|
|
|
|
|
We use the [xlm-roberta-base](https://huggingface.co/docs/transformers/model_doc/xlm-roberta) tokenizer to estimate token counts by sampling:
|
|
|
|
|
|
|
| ### Token-length distribution
|
|
|
| Computed over a sample of **2,164,574 domains** from CDB-Dec, Oct, Nov.
|
|
|
| | Statistic | Tokens |
|
| |-----------|---------|
|
| | Min | 3 |
|
| | Max | 702,667 |
|
| | Mean | 1,354 |
|
|
|
| ### Truncation rate by sequence length
|
|
|
| Percentage of texts that would lose tokens at each candidate `seq_len`.
|
|
|
| | seq_len | Truncation rate | Texts truncated |
|
| |---------|-----------------|-----------------|
|
| | 64 | 97.24% | 2,104,865 |
|
| | 128 | 93.36% | 2,020,754 |
|
| | 256 | 83.54% | 1,808,331 |
|
| | 384 | 73.55% | 1,591,996 |
|
| | 512 | 64.45% | 1,394,971 |
|
| | 1024 | 37.96% | 821,725 |
|
|
|
| **Note:** 1,394,971 texts (64.4%) exceed XLM-R's 512 max and will be truncated regardless of the `seq_len` chosen.
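The truncation rates above follow directly from the per-document token counts. A minimal sketch, using illustrative counts rather than the real sample:

```python
def truncation_rate(token_counts, seq_len):
    """Fraction of documents that would lose tokens at a given seq_len."""
    truncated = sum(1 for n in token_counts if n > seq_len)
    return truncated / len(token_counts)

# Toy example: 3 of 4 documents exceed seq_len=512.
counts = [3, 600, 1_354, 702_667]
rate = truncation_rate(counts, 512)  # 0.75
```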
|
|
|
|
|
Extrapolating from these averages gives the following domain and token counts:
|
|
|
|
|
| | CDB | Domains | Tokens |
|
| |-------|------|--------|
|
| | Dec | 43M | 58B |
|
| | Nov | 39M | 52B |
|
| | Oct | 33M | 44B |
|
| | **Total** | 115M | 155B|
|
| | **Total Unique** | 37M| 155B|
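As a sanity check, the per-snapshot totals are consistent with the sampled mean of 1,354 tokens per domain. For CDB-Dec:

```python
# Rough estimate: mean tokens per domain (from the sampled distribution above)
# multiplied by the number of domains in the CDB-Dec snapshot.
mean_tokens_per_domain = 1_354
dec_domains = 43_000_000
est_tokens = mean_tokens_per_domain * dec_domains  # ~58.2B, close to the 58B reported
```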
|
|
|
|
|
# CDB vs. OGB-MAG240M
|
|
|
Considering CDB across Dec, Nov, and Oct as a single static graph, we compare it against OGB-MAG240M (the largest text-attributed graph dataset to date):
|
|
|
|
|
|
|
| | Dataset | Domains | Tokens |
|
| |-------|------|--------|
|
| | CDB | 37M | 155B |
|
| | OGB-MAG240M | 120M | 30B |
|
|
|
|
|
|
|
|
|