---
license: cc-by-nc-4.0
task_categories:
  - text-classification
  - image-classification
  - multimodal
tags:
  - web-security
  - phishing-detection
  - html-analysis
  - screenshot
  - manually-verified
dataset_info:
  features:
    - name: url
      dtype: string
      description: Website URL and folder name for resource access
    - name: html_content
      dtype: string
      description: "Path to HTML file (format: {url}/page.html)"
    - name: screenshot_content
      dtype: image
      description: "Path to screenshot image (format: {url}/screenshot.png)"
    - name: label
      dtype: string
      description: Classification label from metadata.txt (phishing/legitimate)
  splits:
    - name: train
      num_bytes: 1024000000
      num_examples: 1000
  configs:
    - config_name: default
      data_files:
        - split: train
          path: data/dataset.csv

---
# Web Content Classification Dataset with HTML and Screenshots

## Dataset Description

This dataset contains 1,000 carefully curated examples of web content for phishing-detection classification tasks. Each example pairs a website URL with its complete HTML content, a visual screenshot, and a manually verified classification label.

Important Note: This dataset was created entirely by hand. Each website was individually visited, analyzed, and verified to ensure accurate, high-quality labels.

## Dataset Structure

The dataset is organized with a main CSV file that references all resources using a consistent folder structure:

### CSV Columns:
- url: Website URL and folder name for resource access
- html_content: Path to HTML file (`{url}/page.html`)
- screenshot_content: Path to screenshot image (`{url}/screenshot.png`) 
- label: Classification label (phishing/legitimate)
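Given these columns, a loader only needs to parse `dataset.csv` and join each URL onto the fixed file names. A minimal sketch (the sample rows below are illustrative, not taken from the dataset):

```python
import csv
import io
from pathlib import PurePosixPath

# Hypothetical sample rows mirroring the dataset.csv schema described above.
SAMPLE_CSV = """url,html_content,screenshot_content,label
example-domain-1.com,example-domain-1.com/page.html,example-domain-1.com/screenshot.png,phishing
example-domain-2.com,example-domain-2.com/page.html,example-domain-2.com/screenshot.png,legitimate
"""

def load_rows(csv_text):
    """Parse dataset.csv rows and resolve the per-URL resource paths."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "url": row["url"],
            "html_path": str(PurePosixPath(row["url"]) / "page.html"),
            "screenshot_path": str(PurePosixPath(row["url"]) / "screenshot.png"),
            "label": row["label"],
        })
    return rows

rows = load_rows(SAMPLE_CSV)
print(rows[0]["html_path"])  # example-domain-1.com/page.html
print(rows[1]["label"])      # legitimate
```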

### Directory Structure:

```
dataset/
├── dataset.csv
├── example-domain-1.com/
│   ├── page.html
│   ├── screenshot.png
│   └── metadata.txt
└── example-domain-2.com/
    ├── page.html
    ├── screenshot.png
    └── metadata.txt
```

## File Reference System

The dataset uses a consistent referencing system where each URL serves as both the website address and folder name:

- HTML Content: `{url}/page.html` (e.g., `example.com/page.html`)
- Screenshot: `{url}/screenshot.png` (e.g., `example.com/screenshot.png`)
- Label: Value contained in `{url}/metadata.txt`
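Because every URL doubles as its folder name, reading one example reduces to three file reads. A sketch of that lookup, using a throwaway folder with illustrative contents (not real dataset data):

```python
import tempfile
from pathlib import Path

def load_example(root: Path, url: str) -> dict:
    """Read the HTML, screenshot bytes, and label for one URL folder."""
    folder = root / url
    return {
        "url": url,
        "html": (folder / "page.html").read_text(encoding="utf-8"),
        "screenshot": (folder / "screenshot.png").read_bytes(),
        "label": (folder / "metadata.txt").read_text(encoding="utf-8").strip(),
    }

# Build a tiny throwaway layout to exercise the loader.
root = Path(tempfile.mkdtemp())
site = root / "example-domain-1.com"
site.mkdir()
(site / "page.html").write_text("<html><body>demo</body></html>", encoding="utf-8")
(site / "screenshot.png").write_bytes(b"\x89PNG\r\n\x1a\n")  # placeholder bytes
(site / "metadata.txt").write_text("phishing\n", encoding="utf-8")

ex = load_example(root, "example-domain-1.com")
print(ex["label"])  # phishing
```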

## Collection Methodology

### Manual Curation Process
This dataset was created through an extensive manual verification process:

1. Individual Website Visits: Each of the 1,000 websites was personally visited and analyzed
2. Manual Verification: Every classification label was manually assigned after careful examination
3. Quality Control: Each entry was individually checked for accuracy and completeness
4. Content Capture: HTML and screenshots were captured while ensuring proper rendering

### Data Points Collected:

- URL: The exact folder name and web address
- HTML content: Full page source saved as `{url}/page.html`
- Visual representation: High-quality screenshot saved as `{url}/screenshot.png`
- Classification label: Value stored in `{url}/metadata.txt` (verified as phishing/legitimate)
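The completeness check from the quality-control step above can be automated: every URL folder should contain all three expected files. A sketch (folder names are illustrative; the demo deliberately leaves one folder incomplete):

```python
import tempfile
from pathlib import Path

def check_integrity(root: Path, urls) -> list:
    """Return the URLs whose folders are missing any expected file."""
    required = ("page.html", "screenshot.png", "metadata.txt")
    missing = []
    for url in urls:
        folder = root / url
        if not all((folder / name).is_file() for name in required):
            missing.append(url)
    return missing

# Demo with a throwaway layout: one complete folder, one incomplete.
root = Path(tempfile.mkdtemp())
ok = root / "example-domain-1.com"
ok.mkdir()
for name in ("page.html", "screenshot.png", "metadata.txt"):
    (ok / name).write_text("x")
bad = root / "example-domain-2.com"
bad.mkdir()
(bad / "page.html").write_text("x")  # screenshot and metadata missing

print(check_integrity(root, ["example-domain-1.com", "example-domain-2.com"]))
# ['example-domain-2.com']
```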