---
language:
  - en
pretty_name: 'National Data Library Core Corpus'
size_categories:
  - 100M<n<1B
---

NDL Core Corpus

Prototyping the AI-ready core of the UK National Data Library

Overview

The NDL Core Corpus is an experimental, AI-ready aggregation of UK public sector data, developed as a minimum viable prototype (MVP) for the proposed National Data Library (NDL).

The dataset demonstrates how heterogeneous public sector data can be:

  • Federated across multiple institutions,
  • Standardised and cleaned to shared norms,
  • Structured and documented to support modern AI use cases, including retrieval-augmented generation (RAG), knowledge graphs, and agentic systems.

This corpus is not an official NDL release but a proof of concept, designed to move the initiative from conceptual architecture to tangible implementation.


Purpose and Use Cases

The dataset is intended to support:

  • AI experimentation using UK public sector data
  • Knowledge-base construction for AI agents
  • Retrieval-augmented generation (RAG) pipelines
  • Policy research and evaluation
  • Prototyping data infrastructure aligned with ODI’s Data and AI programme

It is especially suited for:

  • Semantic search and question answering
  • Cross-domain pattern discovery
  • Public-sector-aware language models
  • Agentic AI systems that reason over structured metadata and text
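
As a toy illustration of the retrieval step behind such use cases, the sketch below ranks corpus-style records by keyword overlap with a query. The field names (`identifier`, `source`, `text`) follow the metadata schema described later in this card; the records and scoring function are illustrative only, and a real RAG pipeline would score against precomputed embeddings rather than bag-of-words counts.

```python
from collections import Counter

def score(query: str, text: str) -> int:
    """Count query-term matches in a record's text (toy relevance score)."""
    terms = Counter(query.lower().split())
    words = Counter(text.lower().split())
    return sum(min(terms[t], words[t]) for t in terms)

def search(records, query, top_k=3):
    """Rank records (dicts with a 'text' field, as in the corpus schema)."""
    ranked = sorted(records, key=lambda r: score(query, r["text"]), reverse=True)
    return ranked[:top_k]

# Hypothetical records mimicking the corpus schema
records = [
    {"identifier": "a", "source": "hansard", "text": "debate on data protection policy"},
    {"identifier": "b", "source": "gov.uk", "text": "guidance on housing benefit"},
]
print(search(records, "data protection")[0]["identifier"])  # → a
```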

Dataset Composition

The corpus aggregates recent and representative UK public sector data from the following sources:

Textual Data

  • GOV.UK – policy guidance and government publications
  • Hansard – UK parliamentary debates
  • legislation.gov.uk – statutory instruments and Acts of Parliament

Structured Data

  • data.gov.uk – top 10 most recent datasets per category
  • Office for National Statistics (ONS)
  • Defra (Department for Environment, Food & Rural Affairs)

Together, these sources form a cross-institutional, multi-modal snapshot of the UK’s public data landscape.


Dataset at a Glance

This section provides high-level quantitative insights into the composition and scale of the NDL Core Corpus.

Records by Source

| Source | Record count |
| --- | --- |
| Hansard | 75,897 |
| GOV.UK | 60,406 |
| Office for National Statistics (ONS) | 11,075 |
| data.gov.uk | 10,111 |
| legislation.gov.uk | 1,708 |
| environment.data.gov.uk | 933 |
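
Per-source counts like those above can be recomputed directly from the corpus, since every record carries a `source` field in the shared schema. The sketch below uses a few hypothetical in-memory records; in practice they would be loaded from the corpus Parquet files.

```python
from collections import Counter

# Hypothetical records; only the 'source' field matters for this tally
records = [
    {"source": "hansard.parliament.uk"},
    {"source": "gov.uk"},
    {"source": "gov.uk"},
]

counts = Counter(r["source"] for r in records)
for source, n in counts.most_common():
    print(f"{source}\t{n}")
```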

Data Modality Breakdown

| Data type | Record count |
| --- | --- |
| Textual data | 142,512 |
| Structured data | 17,618 |

Corpus Size Metrics

| Metric | Value |
| --- | --- |
| Total word count | 63,878,333 |
| Total token count | 100,145,266 |

Token counts are based on the tokenizer used during embedding generation (tiktoken).
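
Since `word_count` is defined as the number of space-delimited words, it can be reproduced with a plain split. Token counts depend on the tiktoken encoding used at embedding time, which this card does not pin down, so that step is only sketched in a comment.

```python
def word_count(text: str) -> int:
    """Space-delimited word count, matching the corpus's word_count field."""
    return len(text.split())

sample = "The NDL Core Corpus aggregates UK public sector data."
print(word_count(sample))  # → 9

# Token counts would instead use the embedding tokenizer, e.g. (assumed):
#   import tiktoken
#   enc = tiktoken.get_encoding("cl100k_base")  # encoding name is an assumption
#   len(enc.encode(sample))
```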

Metadata Coverage

| Metric | Coverage |
| --- | --- |
| Records with EU Data Theme tags | 43.77% |
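
A coverage figure of this kind can be recomputed as the share of records whose `tags` field is non-empty. The records and theme codes below are illustrative, not drawn from the corpus.

```python
def tag_coverage(records) -> float:
    """Percentage of records carrying at least one EU Data Theme tag."""
    tagged = sum(1 for r in records if r.get("tags"))
    return 100.0 * tagged / len(records)

# Hypothetical records; empty lists and None both count as untagged
records = [
    {"identifier": "a", "tags": ["ENVI"]},
    {"identifier": "b", "tags": []},
    {"identifier": "c", "tags": ["GOVE", "SOCI"]},
    {"identifier": "d", "tags": None},
]
print(f"{tag_coverage(records):.2f}%")  # → 50.00%
```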

Metadata Schema

Each record in the NDL Core Corpus follows a shared metadata schema to ensure consistency, traceability, and AI-readiness across heterogeneous sources.

| Field name | Type | Description |
| --- | --- | --- |
| identifier | string (UUID) | Globally unique identifier for the record. |
| title | string | Title of the resource, or the filename where a title is not available. |
| description | string | Human-readable description or summary of the resource. |
| source | string | Origin of the data (e.g. gov.uk, ons.gov.uk, legislation.gov.uk). |
| date | date (ISO 8601) | Original publication or creation date of the resource, where available. |
| collection_time | datetime (ISO 8601) | Timestamp indicating when the data was crawled or ingested into the corpus. |
| open_type | string | Classification of the openness context (e.g. Open Government, Open Data, Open Source). |
| license | string | Usage and redistribution rights associated with the resource. |
| tags | array[string] | Automatically assigned EU Data Theme Vocabulary tags describing the content domain. |
| language | string (ISO 639-1) | Automatically detected language of the resource content. |
| format | string | Data format of the record (e.g. text, parquet). |
| text | string | Full extracted textual content of the resource, where applicable. |
| word_count | integer | Number of space-delimited words in the text field. |
| token_count | integer | Number of tokens calculated using the embedding model tokenizer. |
| data_file | string | Relative path to the associated structured data file, if applicable. Data files live in the ndl-core-structured-data dataset. |
| extra_metadata | object | Source-specific, sparse metadata not covered by the core schema. |
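
A minimal validation sketch against this schema is shown below. The choice of required fields and the checks themselves are assumptions for illustration; the card does not specify which fields are mandatory.

```python
import uuid
from datetime import datetime

# Assumed minimal set of required fields (not specified by the card)
REQUIRED = {"identifier", "title", "source", "collection_time", "format"}

def validate_record(record: dict) -> list:
    """Return a list of schema problems (an empty list means the record passes)."""
    problems = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if "identifier" in record:
        try:
            uuid.UUID(record["identifier"])
        except ValueError:
            problems.append("identifier is not a valid UUID")
    if "collection_time" in record:
        try:
            datetime.fromisoformat(record["collection_time"])
        except (TypeError, ValueError):
            problems.append("collection_time is not ISO 8601")
    return problems

record = {
    "identifier": str(uuid.uuid4()),
    "title": "Example record",
    "source": "gov.uk",
    "collection_time": "2024-05-01T12:00:00",
    "format": "text",
}
print(validate_record(record))  # → []
```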

Processing and Standardisation

All component datasets were processed using a shared, automated pipeline to ensure AI-readiness.

Key Properties

  • Standardised formats

    • ISO 8601 for dates and times
    • UTF-8 encoding throughout
    • Consistent null-value handling
    • Auto-generated EU Data Theme tags
  • Semantic consistency

    • Normalised field names
    • Shared vocabularies where applicable
  • Data quality

    • Deduplication
    • Personally Identifiable Information (PII) removal
  • Unified storage

    • Delivered in Apache Parquet for efficient analytical and ML workloads
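
The standardisation steps above can be sketched as follows. The sentinel set and accepted date layouts are assumptions for illustration; the actual pipeline lives in the ndl-core-data-pipeline repository.

```python
from datetime import datetime

NULL_SENTINELS = {"", "n/a", "na", "none", "null", "-"}  # assumed sentinel set

def normalise_null(value):
    """Map common null sentinels to None for consistent null handling."""
    if value is None or (isinstance(value, str) and value.strip().lower() in NULL_SENTINELS):
        return None
    return value

def normalise_date(value):
    """Parse a few common UK date layouts into ISO 8601 (YYYY-MM-DD)."""
    value = normalise_null(value)
    if value is None:
        return None
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d %B %Y"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    return None  # unparseable dates are dropped rather than guessed

print(normalise_date("01/05/2024"))  # → 2024-05-01
print(normalise_date("n/a"))         # → None
```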

Details of the related data pipelines can be found in the ndl-core-data-pipeline repository.

Methodology

The full development process — including crawling, cleaning, transformation, and metadata generation — is documented as a formal methodology and version-controlled in a public GitHub repository linked to this dataset.

The approach builds on prior work from:

  • The ODI’s Data, AI and Collective Intelligence (DCAI) programme
  • ODI frameworks for AI-ready data

Limitations

  • This is a prototype, not a production system.
  • Coverage is selective, not exhaustive.
  • Some semantic harmonisation is necessarily shallow due to source diversity.
  • No guarantee of real-time updates.

The dataset is intended to demonstrate what is possible, not to replace official publication pipelines.


Licensing and Attribution

  • All data originates from UK public sector sources and is reused under their respective open licences (primarily the Open Government Licence).
  • Users are responsible for complying with source-specific licence terms.
  • Provenance is preserved in metadata wherever possible.

Contact and Contribution

This dataset is part of ongoing exploratory work. Issues, suggestions, and extensions are welcome via the linked GitHub repository.