shrutiramani7 and ealvaradob committed
Commit ea67f68 · verified · 0 Parent(s)

Duplicate from ealvaradob/phishing-dataset

Co-authored-by: Esteban Alvarado <ealvaradob@users.noreply.huggingface.co>

Files changed (9)
  1. .gitattributes +66 -0
  2. README.md +126 -0
  3. combined_full.json +3 -0
  4. combined_reduced.json +3 -0
  5. gitattributes +64 -0
  6. phishing-dataset.py +102 -0
  7. texts.json +3 -0
  8. urls.json +3 -0
  9. webs.json +3 -0
.gitattributes ADDED
@@ -0,0 +1,66 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ test_all.json filter=lfs diff=lfs merge=lfs -text
+ train_all.json filter=lfs diff=lfs merge=lfs -text
+ test_dataset.json filter=lfs diff=lfs merge=lfs -text
+ train_dataset.json filter=lfs diff=lfs merge=lfs -text
+ test.json filter=lfs diff=lfs merge=lfs -text
+ train.json filter=lfs diff=lfs merge=lfs -text
+ combined_full.json filter=lfs diff=lfs merge=lfs -text
+ combined_reduced.json filter=lfs diff=lfs merge=lfs -text
+ texts.json filter=lfs diff=lfs merge=lfs -text
+ urls.json filter=lfs diff=lfs merge=lfs -text
+ webs.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-classification
+ language:
+ - en
+ size_categories:
+ - 10K<n<100K
+ tags:
+ - phishing
+ - url
+ - html
+ - text
+ ---
+ # Phishing Dataset
+
+ Phishing datasets compiled from various resources for classification and phishing detection tasks.
+
+ ## Dataset Details
+
+ All datasets have been preprocessed by removing null, empty and duplicate records. Class balancing has also been performed to avoid possible biases.
+ All datasets share the same two-column structure, `text` and `label`. Depending on the dataset, the `text` field can contain samples of:
+
+ - URLs
+ - SMS messages
+ - Email messages
+ - HTML code
+
+ The combined datasets contain all of these data types. Every record is labeled as **1 (Phishing)** or **0 (Benign)**.
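+
+ For illustration, a single record has this shape (a hypothetical sample, not taken from the actual data):
+
+ ```python
+ record = {"text": "http://secure-login.example.com/verify", "label": 1}  # hypothetical phishing URL
+ ```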
+
+ ### Source Data
+
+ The datasets are a compilation of four sources, described below:
+
+ - [Mail dataset](https://www.kaggle.com/datasets/subhajournal/phishingemails) containing the body text of various emails, usable for detecting phishing emails
+ through text analysis and machine-learning classification. It contains over 18,000 emails
+ generated by Enron Corporation employees.
+
+ - [SMS message dataset](https://data.mendeley.com/datasets/f45bkkt8pr/1) of 5,971 text messages, comprising 489 Spam messages, 638 Smishing messages
+ and 4,844 Ham messages. The dataset contains attributes extracted from malicious messages that can be used
+ to classify messages as malicious or legitimate. The data was collected by converting images obtained from
+ the Internet into text using Python code.
+
+ - [URL dataset](https://www.kaggle.com/datasets/harisudhan411/phishing-and-legitimate-urls) with more than 800,000 URLs, of which 52% of the domains are legitimate and 47% are
+ phishing domains. It is a collection of data samples from various sources: the URLs were collected from the
+ JPCERT website, existing Kaggle datasets, GitHub repositories where the URLs are updated once a year, and
+ some open-source databases, including Excel files.
+
+ - [Website dataset](https://data.mendeley.com/datasets/n96ncsr5g4/1) of 80,000 instances: 50,000 legitimate websites and 30,000 phishing websites. Each
+ instance contains the URL and the HTML page. Legitimate data were collected from two sources: 1) a simple
+ keyword search on the Google search engine, collecting the first 5 URLs of each search; domain
+ restrictions were applied, with at most 10 pages collected per domain, to keep the final
+ collection diverse. 2) 25,874 active URLs collected from the Ebbu2017 Phishing Dataset
+ repository. Three sources were used for the phishing data: PhishTank, OpenPhish and PhishRepo.
+
+ > Note that, in the case of the website dataset, it was infeasible to include all 80,000 samples due to the heavy processing required.
+ > Collection was limited to the first 30,000 samples, of which only those smaller than 100 KB were kept. This makes the website dataset easier to use if you do not
+ > have powerful resources.
+
+ ### Combined dataset
+
+ The combined dataset is the one used to train BERT for phishing detection. However, you will notice that this repository contains
+ two datasets named **combined**:
+
+ - combined full
+ - combined reduced
+
+ The combined datasets owe their name to the fact that they combine all the data sources mentioned in the previous section. There is, however, a notable difference between them:
+
+ - The full combined dataset contains all 800,000+ URLs of the URL dataset.
+ - The reduced combined dataset drops 95% of the URL samples in order to keep a more balanced combination of data.
+
+ Why this reduction? Keeping every URL sample would make URLs 97% of the total, leaving emails, SMS and websites at just 3%.
+ Such under-representation could bias the model away from the realities of the environment in which it runs: with almost no
+ representation, the model could simply ignore the other data types. In fact, a test performed on the combined full dataset showed
+ very poor phishing-classification results with BERT. The reduced combined dataset is therefore recommended; the combined full dataset was added for experimentation only.
+
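+ As a rough sketch, a reduction like this could be reproduced as follows (the exact procedure used for this repository is not documented; the 5% fraction is taken from the text, while the sampling method and seed are assumptions):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the URL-only subset and keep a 5% sample, i.e. drop ~95% of the URL rows
+ urls = load_dataset("ealvaradob/phishing-dataset", "urls", trust_remote_code=True)
+ urls_df = urls["train"].to_pandas()
+ urls_reduced = urls_df.sample(frac=0.05, random_state=42)
+ ```
+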
+ #### Processing combined reduced dataset
+
+ Primarily, this dataset is intended to be used in conjunction with the BERT language model. Therefore, it has
+ not been subjected to the traditional preprocessing usually done for NLP tasks such as text classification.
+
+ _You may be wondering: is stemming, lemmatization, stop-word removal, etc., necessary to improve the performance of BERT?_
+
+ In general, **no**. Preprocessing will not change the output predictions. In fact, removing stop words (which
+ are considered noise in conventional text representations such as bag-of-words or TF-IDF) can and probably will
+ worsen the predictions of your BERT model. Since BERT uses the self-attention mechanism, these "stop words"
+ are valuable information for BERT. The same goes for punctuation: a question mark can certainly change the
+ overall meaning of a sentence. Eliminating stop words and punctuation marks would therefore only mean
+ eliminating context that BERT could have used to get better results.
+
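+ As a quick sanity check of this claim, you can see that BERT's tokenizer keeps stop words and punctuation (a minimal sketch assuming the `transformers` library and the `bert-base-uncased` tokenizer):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ # Stop words ("is", "it") and punctuation ("?", "!") survive as tokens
+ print(tokenizer.tokenize("Is this your account? Check it now!"))
+ # ['is', 'this', 'your', 'account', '?', 'check', 'it', 'now', '!']
+ ```
+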
+ However, if this dataset is to be used with another type of model, traditional NLP preprocessing may be worth
+ considering. That is at the discretion of whoever employs this dataset.
+
+ For more information, check these links:
+
+ - https://stackoverflow.com/a/70700145
+ - https://datascience.stackexchange.com/a/113366
+
+ ### How to use them
+
+ You can easily use any of these datasets by specifying its name in the following code configuration:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ealvaradob/phishing-dataset", "<desired_dataset>", trust_remote_code=True)
+ ```
+
+ For example, if you want to load the combined reduced dataset, you can use:
+
+ ```python
+ dataset = load_dataset("ealvaradob/phishing-dataset", "combined_reduced", trust_remote_code=True)
+ ```
+
+ Due to the implementation of the datasets library, executing this code generates only a training split;
+ the entire downloaded dataset lives inside that split. If you want to separate it into training and test sets, you can run this code:
+
+ ```python
+ from datasets import Dataset
+ from sklearn.model_selection import train_test_split
+
+ # Convert the single "train" split to pandas, split 80/20, and wrap back into Datasets
+ df = dataset['train'].to_pandas()
+ train, test = train_test_split(df, test_size=0.2, random_state=42)
+ train, test = Dataset.from_pandas(train, preserve_index=False), Dataset.from_pandas(test, preserve_index=False)
+ ```
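+
+ If you also want both splits to preserve the phishing/benign class balance, note that `train_test_split` accepts a `stratify` argument (e.g. `stratify=df['label']`).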
combined_full.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25d1abdeaee577c96d5a11292209485ad30095ff04903de49236b4364ea84b5d
+ size 590657583
combined_reduced.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:222fd5f841a0565841f8ede2e21b9b48cfea933a6b5c3a7da3e3a2cbc156d3d5
+ size 521149713
gitattributes ADDED
@@ -0,0 +1,64 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ TEXTS.csv filter=lfs diff=lfs merge=lfs -text
+ URLS.csv filter=lfs diff=lfs merge=lfs -text
+ WEBS.csv filter=lfs diff=lfs merge=lfs -text
+ combined_all.json filter=lfs diff=lfs merge=lfs -text
+ combined_reduced.json filter=lfs diff=lfs merge=lfs -text
+ texts.json filter=lfs diff=lfs merge=lfs -text
+ urls.json filter=lfs diff=lfs merge=lfs -text
+ webs.json filter=lfs diff=lfs merge=lfs -text
+ combined_full.json filter=lfs diff=lfs merge=lfs -text
phishing-dataset.py ADDED
@@ -0,0 +1,102 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Phishing datasets compiled from various resources for classification and phishing detection tasks."""
+
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{ealvaradob:dataset,
+ title = {Phishing Datasets},
+ author={Esteban Alvarado},
+ year={2024}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Dataset designed for phishing classification tasks in various data types.
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = ""
+
+ # Each configuration name maps to the JSON file that backs it
+ _URLS = {
+     "texts": "texts.json",
+     "urls": "urls.json",
+     "webs": "webs.json",
+     "combined_full": "combined_full.json",
+     "combined_reduced": "combined_reduced.json"
+ }
+
+
+ class PhishingDatasets(datasets.GeneratorBasedBuilder):
+     """Phishing Datasets Configuration"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="texts", version=VERSION, description="text subset"),
+         datasets.BuilderConfig(name="urls", version=VERSION, description="urls subset"),
+         datasets.BuilderConfig(name="webs", version=VERSION, description="webs subset"),
+         datasets.BuilderConfig(name="combined_full", version=VERSION, description="combined dataset that has all URLs"),
+         datasets.BuilderConfig(name="combined_reduced", version=VERSION, description="combined dataset that drops most URLs to avoid representativeness issues"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "combined_reduced"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "text": datasets.Value("string"),
+                 "label": datasets.Value("int64"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=("text", "label"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         urls = _URLS[self.config.name]
+         data_dir = dl_manager.download_and_extract(urls)
+         # Every configuration is exposed as a single "train" split
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_dir,
+                     "split": "train",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)
+             for index, sample in enumerate(data):
+                 yield index, {
+                     "text": sample['text'],
+                     "label": sample['label']
+                 }
texts.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2479fdb94abb59332cc747f7b823a1651921b9828240c5dad0ffdba66ab02581
+ size 52079789
urls.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32cfba42892c915b041a3f0ca6ffd0f484b2590c4c2ca91b13d3ea1330b2c9bd
+ size 73157496
webs.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cad05dd39b6384e1fe2f0880852004eeb1bed704394544e523decb193190f3f0
+ size 465420302