---
license: cc-by-nc-4.0
task_categories:
- text-classification
- image-classification
- multimodal
tags:
- web-security
- phishing-detection
- html-analysis
- screenshot
- manually-verified
dataset_info:
  features:
  - name: url
    dtype: string
    description: Website URL and folder name for resource access
  - name: html_content
    dtype: string
    description: "Path to HTML file (format: {url}/page.html)"
  - name: screenshot_content
    dtype: image
    description: "Path to screenshot image (format: {url}/screenshot.png)"
  - name: label
    dtype: string
    description: Classification label from metadata.txt (phishing/legitimate)
  splits:
  - name: train
    num_bytes: 1024000000
    num_examples: 1000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/dataset.csv
---

# Web Content Classification Dataset with HTML and Screenshots

## Dataset Description

This dataset contains 1,000 carefully curated examples of web content for phishing-detection classification tasks. Each example consists of a website URL along with its complete HTML content, a visual screenshot, and a manually verified classification label.

**Important note:** This entire dataset was created through manual processes. Each website was individually visited, analyzed, and verified to ensure accurate, high-quality labels.

## Dataset Structure

The dataset is organized around a main CSV file that references all resources using a consistent folder structure.

### CSV Columns:

- `url`: Website URL, also used as the folder name for resource access
- `html_content`: Path to the HTML file (`{url}/page.html`)
- `screenshot_content`: Path to the screenshot image (`{url}/screenshot.png`)
- `label`: Classification label (phishing/legitimate)
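
To make the column layout concrete, here is a minimal sketch that reads the index CSV directly with pandas; it assumes a local copy of the repository and the `data/dataset.csv` path declared in the config above (adjust the path if your copy stores the CSV elsewhere).

```python
import pandas as pd

# Load the main index CSV (path per the `configs` section of the card above).
df = pd.read_csv("data/dataset.csv")

# The four columns described above.
print(df.columns.tolist())         # ['url', 'html_content', 'screenshot_content', 'label']
print(df["label"].value_counts())  # distribution of phishing vs. legitimate examples
```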

### Directory Structure:

- `dataset.csv`
- `example-domain-1.com/`
  - `page.html`
  - `screenshot.png`
  - `metadata.txt`
- `example-domain-2.com/`
  - `page.html`
  - `screenshot.png`
  - `metadata.txt`
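
As a quick sanity check on this layout, the following sketch walks the per-site folders and reports any that are missing one of the three expected files. The root folder name `dataset` is an assumption; point it at wherever you downloaded the data.

```python
from pathlib import Path

ROOT = Path("dataset")  # assumed local root; adjust to your download location

# Every per-site folder should contain exactly these three files.
EXPECTED = {"page.html", "screenshot.png", "metadata.txt"}

for folder in sorted(ROOT.iterdir()):
    if not folder.is_dir():
        continue  # skip dataset.csv itself
    missing = EXPECTED - {f.name for f in folder.iterdir()}
    if missing:
        print(f"{folder.name}: missing {sorted(missing)}")
```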

## File Reference System

The dataset uses a consistent referencing system in which each URL serves as both the website address and the folder name:

- HTML content: `{url}/page.html` (e.g., `example.com/page.html`)
- Screenshot: `{url}/screenshot.png` (e.g., `example.com/screenshot.png`)
- Label: value contained in `{url}/metadata.txt`
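
Putting these references together, the sketch below resolves and reads one site's files. The root path `dataset` and the folder name `example-domain-1.com` are placeholders taken from the structure above, not fixed names.

```python
from pathlib import Path

def load_example(root: Path, url: str):
    """Resolve one site's files using the {url}/<file> convention."""
    folder = root / url
    html = (folder / "page.html").read_text(encoding="utf-8", errors="replace")
    screenshot_path = folder / "screenshot.png"  # open lazily, e.g. with PIL
    label = (folder / "metadata.txt").read_text(encoding="utf-8").strip()
    return html, screenshot_path, label

html, screenshot_path, label = load_example(Path("dataset"), "example-domain-1.com")
print(label)  # "phishing" or "legitimate"
```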

## Collection Methodology

### Manual Curation Process

This dataset was created through an extensive manual verification process:

1. Individual website visits: each of the 1,000 websites was personally visited and analyzed
2. Manual verification: every classification label was manually assigned after careful examination
3. Quality control: each entry was individually checked for accuracy and completeness
4. Content capture: HTML and screenshots were captured while ensuring proper rendering

### Data Points Collected:

- URL: the exact folder name and web address
- HTML content: full source code saved as `{url}/page.html`
- Visual representation: high-quality screenshot saved as `{url}/screenshot.png`
- Classification label: value stored in `{url}/metadata.txt` (verified as phishing/legitimate)

## Usage

### Using with Hugging Face Datasets

```python
from datasets import load_dataset

dataset = load_dataset("your-username/your-dataset-name")
```
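
Once loaded, individual examples can be inspected field by field. The continuation below is a sketch that assumes the default `train` split and that `screenshot_content` decodes to a PIL image, as declared by the image feature in the card above.

```python
# Inspect the first example from the train split.
example = dataset["train"][0]

print(example["url"])    # folder name / web address
print(example["label"])  # "phishing" or "legitimate"

# Per the CSV description, `html_content` holds the {url}/page.html path;
# `screenshot_content` is an image feature, so it should load as a PIL image.
example["screenshot_content"].show()
```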