---

license: mit
---



# CrediBench Cleaned


**CDB Cleaned** is the pre-processed version of CrediBench. The text is curated for pre-training, so we follow the pre-processing steps generally recommended before pre-training on web text.

## Summary 



## Included Content

- `domain`: The website's domain name.
- `wet_record_txt`: Text content from the domain's corresponding CommonCrawl WET file.
- `language`: The language of the text content, along with the predicted confidence for that language.
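As a quick illustration, a single row might look like the following. The field names come from the schema above, but the values and the exact shape of the `language` field are invented for illustration:

```python
# Hypothetical example row; values and the exact `language` shape are invented.
row = {
    "domain": "example.com",
    "wet_record_txt": "Welcome to Example, a site about ...",
    "language": "en (0.98)",  # language code plus predicted confidence (shape assumed)
}

fields = set(row)
```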


# Pre-Processing

CDB Cleaned is pre-processed with [nemo-curator](https://github.com/NVIDIA-NeMo/Curator). We processed the text with the following steps: 

![pipeline](plots/commoncrawl_pipeline_flow.png)



## Filtering

### Text-Cleaning

* `UnicodeReformatter`: Uses [ftfy](https://ftfy.readthedocs.io/en/latest/) to fix broken Unicode characters.

* `NewlineNormalizer`: Uses regex to replace 3 or more consecutive newline characters in each document with only 2 newline characters.

* `UrlRemover`: Uses regex to remove all URLs in each document.
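In spirit, the newline and URL steps amount to two regex substitutions. A minimal stand-in sketch (the helper names are invented, and this is not the actual nemo-curator code; the ftfy-based Unicode repair is a library call and is not shown):

```python
import re

# Illustrative stand-ins for the cleaning steps above (hypothetical helpers,
# not the actual nemo-curator implementations).

NEWLINE_RUNS = re.compile(r"\n{3,}")
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")

def normalize_newlines(text: str) -> str:
    """Collapse runs of 3 or more consecutive newlines down to exactly 2."""
    return NEWLINE_RUNS.sub("\n\n", text)

def remove_urls(text: str) -> str:
    """Strip URLs from a document."""
    return URL_PATTERN.sub("", text)

doc = "intro\n\n\n\n\nbody, see https://example.com for details"
cleaned = remove_urls(normalize_newlines(doc))
```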

### Deduplication

* `Fuzzy Deduplication`: Uses MinHash and Locality Sensitive Hashing to find and remove near-duplicated documents. 
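As a rough sketch of the idea (not the nemo-curator implementation): each document is reduced to a set of shingles, MinHash compresses that set into a short signature whose slot-wise agreement estimates Jaccard similarity, and LSH then bands the signatures so near-duplicates collide in a bucket instead of requiring all-pairs comparison.

```python
import hashlib

def shingles(text: str, k: int = 3) -> set:
    """k-token shingles of a document."""
    toks = text.lower().split()
    return {" ".join(toks[i:i + k]) for i in range(len(toks) - k + 1)}

def minhash(sh: set, num_perm: int = 64) -> list:
    """MinHash signature: per seeded 'permutation', the minimum hash over the set."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in sh)
        for seed in range(num_perm)
    ]

def est_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of agreeing signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = minhash(shingles("the quick brown fox jumps over the lazy dog near the river"))
b = minhash(shingles("the quick brown fox jumps over the lazy dog near the stream"))
c = minhash(shingles("completely unrelated text about graph datasets and tokenizers"))
```

In full LSH, each signature is split into bands, and any two documents that share a band become duplicate candidates; only those candidates are compared exactly.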

## Labelling

### Language Management

* `FastText Language ID`: To label multilingual content at scale we utilize the [FastText](https://fasttext.cc/docs/en/language-identification.html) language identification model.
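fastText returns predictions as labels of the form `__label__<code>` together with a confidence score; a small (hypothetical) helper to strip the prefix:

```python
def parse_fasttext_label(label: str) -> str:
    """fastText emits language IDs as '__label__xx'; return the bare code."""
    prefix = "__label__"
    return label[len(prefix):] if label.startswith(prefix) else label

parse_fasttext_label("__label__en")  # -> "en"
```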


### Quality Labelling

* `FastText Quality Filtering`: We train our own fastText quality classifier and label each row with a quality rating.


# Statistics

## File Sizes

The filtering reduces the text size as follows: ![gb_size](plots/pipeline_size_funnel.png)

## Deduplication

The deduplication step identifies large clusters of repeated text. Overall, about 3,615,276 domains contain duplicate text.

![clusters](plots/cluster_statistics.png)

### Largest Clusters

The largest cluster, spanning 122,822 domains, corresponds to a Cloudflare Scrape Shield message. This occurs because Cloudflare hides email addresses on websites to protect them from spam bots and web scrapers.


>Email Protection | Cloudflare Please enable cookies. Email Protection
>You are unable to access this email address centrumpoloznicze.pl
>The website from which you got to this page is protected by Cloudflare. Email addresses on that page have been hidden in order to keep them from being accessed by malicious bots. You must enable Javascript in your browser in order to decode the e-mail address.
>If you have a website and are interested in protecting it in a similar way, you can sign up for Cloudflare.
>How does Cloudflare protect email addresses on website from spammers?
>Can I sign up for Cloudflare?
>Cloudflare Ray ID: 8f0443b27bf9c5a5 • Your IP:
>Click to reveal
>18.97.9.170 • Performance & security by Cloudflare


The second-largest cluster, at 54,347 domains, corresponds to a similar Web Application Firewall or anti-bot interstitial. The third-largest cluster is another such message.

>One moment, please…
>Loader
>Please wait while your request is being verified…



## Language Distribution

CDB has a diverse language distribution, covering about 143 languages; English, German, and Chinese are the top three.
**English alone occurs in 14,506,809 domains.**

![lan_dist](plots/language_frequency.png)


## Domain Coverage

Squashing CDB-Dec, Nov, and Oct into one large graph after filtering, about 56.4% of domains contain text, 21.2% contain English text, and 43.6% contain **no** text.


![domain_cov](plots/domain_coverage.png)



## Tokens


We use the [xlm-roberta-base](https://huggingface.co/docs/transformers/model_doc/xlm-roberta) tokenizer to estimate the number of tokens by sampling:



### Token-length distribution

Computed over a sample of **2,164,574 domains** from CDB-Dec, Oct, Nov.

| Statistic | Tokens   |
|-----------|---------|
| Min       | 3       |
| Max       | 702,667 |
| Mean      | 1,354   |

### Truncation rate by sequence length

Percentage of texts that would lose tokens at each candidate `seq_len`.

| seq_len | Truncation rate | Texts truncated |
|---------|-----------------|-----------------|
| 64      | 97.24%          | 2,104,865       |
| 128     | 93.36%          | 2,020,754       |
| 256     | 83.54%          | 1,808,331       |
| 384     | 73.55%          | 1,591,996       |
| 512     | 64.45%          | 1,394,971       |
| 1024    | 37.96%          | 821,725         |



**Note:** 1,394,971 texts (64.45%) exceed XLM-R's 512-token maximum and will be truncated regardless of the `seq_len` chosen.
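The rates in the table can be reproduced from a list of per-document token counts; a minimal sketch, using toy counts rather than the real sample:

```python
def truncation_stats(token_counts, seq_len):
    """Return (rate, count) of documents exceeding seq_len, i.e. those losing tokens."""
    truncated = sum(1 for n in token_counts if n > seq_len)
    return truncated / len(token_counts), truncated

# Toy token counts standing in for the 2,164,574-domain sample.
counts = [3, 40, 130, 300, 600, 2000]
rate, n = truncation_stats(counts, 512)  # -> (0.333..., 2)
```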


Scaling the sample mean to the full corpus, we estimate the following domain and token counts:


| CDB              | Domains | Tokens |
|------------------|---------|--------|
| Dec              | 43M     | 58B    |
| Nov              | 39M     | 52B    |
| Oct              | 33M     | 44B    |
| **Total**        | 115M    | 155B   |
| **Total Unique** | 37M     | 155B   |
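These totals follow from the sampled mean: assuming the sample mean of 1,354 tokens per domain holds corpus-wide, tokens ≈ domains × mean. For example, for CDB-Dec:

```python
MEAN_TOKENS_PER_DOMAIN = 1354  # sample mean from the token-length table above

def estimate_tokens(num_domains: int, mean_tokens: int = MEAN_TOKENS_PER_DOMAIN) -> int:
    """Rough corpus-level token estimate, assuming the sample mean generalizes."""
    return num_domains * mean_tokens

dec = estimate_tokens(43_000_000)  # 58,222,000,000, matching the table's 58B
```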


# CDB v. OGB-MAG240M

Considering CDB across Dec, Nov, and Oct as a single static graph, we compare it against OGB-MAG240M, the largest text-attributed graph dataset to date:



| Dataset     | Domains | Tokens |
|-------------|---------|--------|
| CDB         | 37M     | 155B   |
| OGB-MAG240M | 120M    | 30B    |