Web Content Classification Dataset with HTML and Screenshots

Dataset Description

This dataset contains 1,000 carefully curated examples of web content for phishing-detection classification. Each example consists of a website URL along with the site's complete HTML content, a visual screenshot, and a manually verified classification label.

Important Note: This entire dataset was created manually. Each website was individually visited, analyzed, and verified to ensure accurate, high-quality labels.

Dataset Structure

The dataset is organized with a main CSV file that references all resources using a consistent folder structure:

CSV Columns:

  • url: Website URL, which also serves as the folder name for resource access
  • html_content: Path to HTML file ({url}/page.html)
  • screenshot_content: Path to screenshot image ({url}/screenshot.png)
  • label: Classification label (phishing/legitimate)

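Assuming the CSV layout above, the index can be read and each row's resource paths derived from its url column. A minimal sketch using only the standard library (the helper name load_index is illustrative, not part of the dataset):

```python
import csv
from pathlib import Path

def load_index(csv_path):
    """Read dataset.csv and attach resource paths derived from each row's url."""
    rows = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            folder = Path(row["url"])  # the url doubles as the folder name
            row["html_path"] = str(folder / "page.html")
            row["screenshot_path"] = str(folder / "screenshot.png")
            rows.append(row)
    return rows
```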
Directory Structure:

dataset/
├── dataset.csv
├── example-domain-1.com/
│   ├── page.html
│   ├── screenshot.png
│   └── metadata.txt
└── example-domain-2.com/
    ├── page.html
    ├── screenshot.png
    └── metadata.txt

File Reference System

The dataset uses a consistent referencing system where each URL serves as both the website address and folder name:

  • HTML Content: {url}/page.html (e.g., example.com/page.html)
  • Screenshot: {url}/screenshot.png (e.g., example.com/screenshot.png)
  • Label: value contained in {url}/metadata.txt
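Under this referencing scheme, a single example can be loaded directly from its url-named folder. A minimal sketch under the folder layout described above (the helper name load_example is illustrative; error handling is omitted):

```python
from pathlib import Path

def load_example(url):
    """Load one example's HTML, screenshot bytes, and label from its url-named folder."""
    folder = Path(url)
    html = (folder / "page.html").read_text(encoding="utf-8")
    screenshot = (folder / "screenshot.png").read_bytes()
    label = (folder / "metadata.txt").read_text(encoding="utf-8").strip()
    return {"url": url, "html_content": html,
            "screenshot_content": screenshot, "label": label}
```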

Collection Methodology

Manual Curation Process

This dataset was created through an extensive manual verification process:

  1. Individual Website Visits: Each of the 1,000 websites was personally visited and analyzed
  2. Manual Verification: Every classification label was manually assigned after careful examination
  3. Quality Control: Each entry was individually checked for accuracy and completeness
  4. Content Capture: HTML and screenshots were captured while ensuring proper rendering
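The quality-control step above can be approximated programmatically. A hedged sketch that checks every url folder referenced by the CSV for the three expected files (the function name missing_files and the root parameter are illustrative assumptions):

```python
import csv
from pathlib import Path

EXPECTED = ("page.html", "screenshot.png", "metadata.txt")

def missing_files(csv_path, root="."):
    """Return (url, filename) pairs for any expected file absent from a url's folder."""
    missing = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            folder = Path(root) / row["url"]
            for name in EXPECTED:
                if not (folder / name).is_file():
                    missing.append((row["url"], name))
    return missing
```

An empty return value means every referenced folder is complete.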

Data Points Collected:

  • URL: the exact folder name and web address
  • HTML content: full source code saved as {url}/page.html
  • Visual representation: high-quality screenshot saved as {url}/screenshot.png
  • Classification label: value stored in {url}/metadata.txt (verified as phishing/legitimate)
