---
license: cc-by-nc-4.0
task_categories:
- text-classification
- image-classification
tags:
- web-security
- phishing-detection
- html-analysis
- screenshot
- multimodal
- manually-verified
dataset_info:
  features:
  - name: url
    dtype: string
    description: Website URL and folder name for resource access
  - name: html_content
    dtype: string
    description: "Path to HTML file (format: {url}/page.html)"
  - name: screenshot_content
    dtype: image
    description: "Path to screenshot image (format: {url}/screenshot.png)"
  - name: label
    dtype: string
    description: Classification label from metadata.txt (phishing/legitimate)
  splits:
  - name: train
    num_bytes: 1024000000
    num_examples: 1000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/dataset.csv
---

# Web Content Classification Dataset with HTML and Screenshots

## Dataset Description

This dataset contains 1,000 carefully curated examples of web content for phishing-detection classification tasks. Each example consists of a website URL together with its complete HTML content, a visual screenshot, and a manually verified classification label.

Important Note: This entire dataset was created by hand. Each website was individually visited, analyzed, and verified to ensure accurate, high-quality labels.

## Dataset Structure

The dataset is organized around a main CSV file that references all resources using a consistent folder structure:

### CSV Columns

- `url`: Website URL and folder name for resource access
- `html_content`: Path to the HTML file (`{url}/page.html`)
- `screenshot_content`: Path to the screenshot image (`{url}/screenshot.png`)
- `label`: Classification label (phishing/legitimate)

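
As a minimal sketch, the CSV can be parsed with the standard library; the two sample rows below are hypothetical stand-ins for the real `data/dataset.csv`:

```python
import csv
import io

# Hypothetical sample rows mirroring the dataset.csv columns described above;
# the real file lives at data/dataset.csv.
sample_csv = """\
url,html_content,screenshot_content,label
example-domain-1.com,example-domain-1.com/page.html,example-domain-1.com/screenshot.png,phishing
example-domain-2.com,example-domain-2.com/page.html,example-domain-2.com/screenshot.png,legitimate
"""

# csv.DictReader keys each row by the header names, matching the column list above.
rows = list(csv.DictReader(io.StringIO(sample_csv)))

print(rows[0]["label"])         # phishing
print(rows[1]["html_content"])  # example-domain-2.com/page.html
```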
### Directory Structure

The dataset root contains the CSV alongside one folder per URL:

- `dataset.csv`
- `example-domain-1.com/`
  - `page.html`
  - `screenshot.png`
  - `metadata.txt`
- `example-domain-2.com/`
  - `page.html`
  - `screenshot.png`
  - `metadata.txt`

## File Reference System

The dataset uses a consistent referencing system in which each URL serves as both the website address and the folder name:

- HTML Content: `{url}/page.html` (e.g., `example.com/page.html`)
- Screenshot: `{url}/screenshot.png` (e.g., `example.com/screenshot.png`)
- Label: Value contained in `{url}/metadata.txt`

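
The referencing scheme above can be sketched as a small helper; the `resource_paths` name and the `dataset` root directory are illustrative assumptions, not part of the dataset itself:

```python
from pathlib import Path

def resource_paths(root, url):
    """Resolve one entry's resource files from its URL/folder name."""
    folder = Path(root) / url
    return {
        "html": folder / "page.html",             # {url}/page.html
        "screenshot": folder / "screenshot.png",  # {url}/screenshot.png
        "label": folder / "metadata.txt",         # file whose contents are the label
    }

paths = resource_paths("dataset", "example.com")
print(paths["html"].as_posix())  # dataset/example.com/page.html
```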
## Collection Methodology

### Manual Curation Process

This dataset was created through an extensive manual verification process:

1. Individual Website Visits: Each of the 1,000 websites was personally visited and analyzed
2. Manual Verification: Every classification label was manually assigned after careful examination
3. Quality Control: Each entry was individually checked for accuracy and completeness
4. Content Capture: HTML and screenshots were captured while ensuring proper rendering

### Data Points Collected

- URL: The exact folder name and web address
- HTML content: Full source code saved as `{url}/page.html`
- Visual representation: High-quality screenshot saved as `{url}/screenshot.png`
- Classification label: Value stored in `{url}/metadata.txt` (verified as phishing/legitimate)

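
As a quick illustration of the per-folder label storage described above, a label can be read back directly from `metadata.txt`; the folder here is created in a temporary directory purely for demonstration:

```python
import tempfile
from pathlib import Path

# Recreate one hypothetical entry's folder layout ({url}/metadata.txt)
# in a temporary directory purely for demonstration.
with tempfile.TemporaryDirectory() as root:
    folder = Path(root) / "example-domain-1.com"
    folder.mkdir()
    (folder / "metadata.txt").write_text("phishing\n")

    # The label is simply the file's contents, stripped of whitespace.
    label = (folder / "metadata.txt").read_text().strip()
    print(label)  # phishing
```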