---
dataset_info:
  features:
    - name: url
      dtype: string
    - name: image_urls
      dtype: string
    - name: images
      dtype: binary
    - name: captions
      dtype: string
    - name: neighbouring_context
      dtype: string
    - name: row_id
      dtype: int64
    - name: full_text_row_id
      dtype: int64
    - name: has_full_text
      dtype: bool
    - name: full text
      dtype: string
  splits:
    - name: train
      num_examples: 170585
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/merged_*.parquet
language:
  - da
pretty_name: Danish Wikipedia - Image, Caption, Context
size_categories:
  - 100K<n<1M
license: cc-by-4.0
task_categories:
  - image-to-text
  - feature-extraction
task_ids:
  - image-captioning
---

Dataset Card for “Danish Wikipedia — Image, Caption, Context”

Dataset Description

  • Source: Danish Wikipedia text + images from Wikimedia
  • Records: 170,585 image–text pairs
  • Storage: Split across ~5 GiB Parquet parts (data/merged_*.parquet)
  • Language: Danish (da)

Summary

This dataset contains images from Danish Wikipedia articles paired with:

  • captions — the local image caption
  • neighbouring_context — the surrounding section/block text where the image appears in markdown
  • full text — the full article markdown, stored once per article, with all other rows pointing to the canonical copy via full_text_row_id
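
A minimal loading sketch with the Hugging Face datasets library; the repo id below is assumed from this page's title and may need adjusting:

```python
from datasets import load_dataset

# Assumed repo id (taken from this page's title); adjust to the actual namespace.
ds = load_dataset("V4ldeLund/da-wiki-icc", split="train", streaming=True)

# Peek at one record without downloading all the parquet parts.
row = next(iter(ds))
print(row["url"])
print(row["captions"])
print(row["neighbouring_context"][:200])
```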

Dataset Structure

Files & Splits

  • Single split: train
  • Files: data/merged_*.parquet (~5 GiB each)

Important: The pointer columns row_id and full_text_row_id are only valid within each merged file; they do not index across files.
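
Because the pointers are file-local, resolving an image row to its article text is easiest one part at a time. A minimal sketch with pandas, assuming one part has been downloaded locally (the file name below is hypothetical):

```python
import pandas as pd

# Hypothetical local path to one downloaded part matching data/merged_*.parquet.
part = pd.read_parquet(
    "data/merged_000.parquet",
    columns=["row_id", "full_text_row_id", "has_full_text", "full text", "captions"],
)

# Map row_id -> full article markdown; only rows with has_full_text carry it.
full_text_by_id = part.loc[part["has_full_text"]].set_index("row_id")["full text"]

# Attach the article markdown to every image row in this part.
part["article_markdown"] = part["full_text_row_id"].map(full_text_by_id)
```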

Fields

| Field | Type | Description |
| --- | --- | --- |
| url | string | Wikipedia article URL. |
| image_urls | string | Image URL. |
| images | binary | Raw image bytes; cast to datasets.Image() at load time. |
| captions | string | Cleaned image caption text. |
| neighbouring_context | string | Section text near the image. |
| row_id | int64 | Row index within the merged file. |
| full_text_row_id | int64 | Row id within the same merged file that holds the article's full text. |
| has_full_text | bool | True only for the first row per article. |
| full text | string | Full article markdown, present only where has_full_text == True. |
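
As noted above, the images column holds raw bytes and can be cast to datasets.Image() so that examples decode to PIL images; a short sketch (repo id assumed as before):

```python
from datasets import load_dataset, Image

# Assumed repo id; adjust to the actual namespace.
ds = load_dataset("V4ldeLund/da-wiki-icc", split="train", streaming=True)

# Cast the raw bytes to an Image feature so each example decodes to a PIL image.
ds = ds.cast_column("images", Image())

row = next(iter(ds))
print(row["images"].size, row["captions"])
```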

Dataset Creation

Rationale

Previous Danish image-text datasets such as alexandrainst/nordjylland-news-image-captioning and alexandrainst/da-wit provide only captions as context for each image. With recent advances in vision-language modelling, we believe richer context is important for improving model performance. We hope this dataset will become a foundation for Danish VLMs and vision-language benchmarks.

Source Data

  • Articles and Images: Danish Wikipedia

Data Curation

The dataset was built by scraping Danish Wikipedia, converting the article HTML to Markdown with MarkItDown, and then extracting images and captions from the Markdown using regular expressions.
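
For reference, a minimal sketch of the HTML-to-Markdown step with MarkItDown, assuming an article's HTML has already been saved locally (the file name is hypothetical); the cleaning and extraction steps that follow are described in the pipeline below.

```python
from markitdown import MarkItDown

md = MarkItDown()
# Hypothetical local copy of a Danish Wikipedia article's HTML.
result = md.convert("aarhus_hovedbanegaard.html")
markdown_text = result.text_content  # Markdown fed into the cleaning steps below
print(markdown_text[:500])
```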

Pipeline Overview

1. Fetch & Convert

  • Download the article HTML and convert it to Markdown using MarkItDown.

2. Markdown Cleaning

  • Trim to the first # heading (article title) onward.
  • Drop typical ending sections: Noter/Kilder/Referencer/Litteratur/Ekstern(e)/External links/References/Notes/Se også/Further reading.
  • Remove inline citation footnotes (e.g., #cite_note, #cite_ref) and normalize whitespace.

3. Image Extraction

  • Parse Markdown lines for (see the regex sketch after this list):
    • Plain images (![alt](url "title"))
    • Linked images ([![alt](url "title")](href "title"))
    • Gallery bullets (- ![...](...))

4. Neighbouring Context

  • After Markdown parsing, identify the H2 section in which the image appears and take that section as the neighbouring context.
  • If the image appears before any H2 section, take the first few intro sentences instead.

5. Optimizing

  • Only the first row per article retains the full article markdown.
  • All other rows for that article have has_full_text = False and point to the row containing the full text via full_text_row_id.

6. Splitting into ~5 GiB Parts

  • For efficiency, the dataset was divided into 78 .parquet files (each ~5 GiB).
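
The exact extraction patterns are not published here; the snippet below is only an illustrative sketch of how the plain and linked image forms listed above can be matched with a regular expression (the sample line and URL are made up):

```python
import re

# Illustrative pattern (not necessarily the one used in the pipeline):
# matches plain images ![alt](url "title") and the image part of linked
# images [![alt](url "title")](href "title").
IMAGE_RE = re.compile(
    r'!\[(?P<alt>[^\]]*)\]\((?P<url>\S+?)(?:\s+"(?P<title>[^"]*)")?\)'
)

# Hypothetical Markdown line resembling the converted article text.
line = '[![Banegården set fra syd](//upload.wikimedia.org/eksempel.jpg "Aarhus H")](/wiki/Fil:eksempel.jpg)'

for m in IMAGE_RE.finditer(line):
    print(m.group("url"), "|", m.group("title") or m.group("alt"))
```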

Comparison to existing sources

Quantitative comparison

Qualitative comparison

By focusing on a single language, we have been able to build a more accurate parsing pipeline for Danish Wikipedia. For example, for the Wikipedia article Aarhus Hovedbanegård, our dataset contains 29 images, as opposed to only 9 in da-wit.


Maintainer / Contact

  • Maintainer: Vladimir Salnikov v4ldesalnikov@gmail.com
  • Issues & questions: Please open a discussion on the dataset’s Hugging Face page.