NDL Core RAG Index

This dataset contains a FAISS index and associated chunk metadata to support retrieval-augmented generation (RAG) use cases on ndl-core-corpus.


Overview

  • Model: sentence-transformers/all-MiniLM-L6-v2
  • Dimension: 384
  • Normalisation: L2
  • Similarity: cosine (inner product)
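Because the vectors are L2-normalised, inner-product search over the index is equivalent to cosine similarity. A minimal numpy sketch of that equivalence (the 384 dimension matches the model; the vectors themselves are random placeholders):

```python
import numpy as np

def l2_normalise(vectors: np.ndarray) -> np.ndarray:
    """Scale each row to unit L2 norm, as done before indexing."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / norms

# Two illustrative 384-dimensional vectors (the model's output dimension).
rng = np.random.default_rng(0)
a, b = l2_normalise(rng.standard_normal((2, 384)))

# For unit vectors, the inner product equals cosine similarity.
inner = float(a @ b)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
assert abs(inner - cosine) < 1e-9
```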

Chunking

  • Strategy: recursive character-based chunking
  • Chunk size: 800 characters
  • Overlap: 100 characters
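The size and overlap parameters can be illustrated with a simplified sliding-window chunker. Note this is a sketch only: the real pipeline uses recursive character-based chunking, which prefers to split on separators such as paragraph breaks rather than at fixed offsets.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Fixed-size character windows with a fixed overlap.

    Simplified stand-in for the recursive character-based chunking the
    card describes; it only demonstrates the size/overlap parameters.
    """
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 2000)
# Each chunk is at most 800 characters; consecutive chunks share 100.
```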

Index–Metadata Alignment

The FAISS index (index.faiss) and the chunk metadata file (data/ndl_core_rag_index.parquet) are strictly index-aligned.

This means:

  • The n-th embedding vector in index.faiss corresponds exactly to the n-th row in data/ndl_core_rag_index.parquet
  • Retrieved FAISS indices can be used directly to look up chunk text, source identifiers, and metadata in the parquet file

This guarantees deterministic and reliable mapping from similarity search results back to their original source records.
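In practice, this means a FAISS result index doubles directly as a row index into the parquet file. A self-contained sketch of that lookup, where numpy brute-force inner-product search stands in for `faiss.IndexFlatIP` and a list of dicts stands in for the rows of `data/ndl_core_rag_index.parquet`:

```python
import numpy as np

rng = np.random.default_rng(42)
vectors = rng.standard_normal((5, 384)).astype("float32")
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # L2-normalise

# Row n here stands in for row n of data/ndl_core_rag_index.parquet.
metadata = [{"row": n, "chunk_text": f"chunk {n}"} for n in range(5)]

def search(query: np.ndarray, k: int = 2) -> list[dict]:
    """Inner-product search; returned indices are rows in the metadata."""
    scores = vectors @ query
    top = np.argsort(-scores)[:k]
    return [metadata[i] for i in top]

hits = search(vectors[3])
assert hits[0]["row"] == 3  # a stored vector's nearest neighbour is itself
```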


LanceDB Search Index

A LanceDB-based search index has been added to support searching for NDL Core datasets by topic and downloading them. It uses the same all-MiniLM-L6-v2 model; each record's embedding vector is computed from the concatenation of the title, the description, and the first 500 characters of the text. The LanceDB index stores the full records, so matches can be retrieved directly.
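The embedding input for each record can be reconstructed along these lines. The field names (`title`, `description`, `text`) and the space separator are assumptions; the card does not specify the exact keys:

```python
def embedding_input(record: dict) -> str:
    """Build the text each LanceDB embedding is computed from: title,
    description, and the first 500 characters of the text.

    Field names and the single-space separator are assumptions.
    """
    return " ".join([record["title"], record["description"], record["text"][:500]])

record = {
    "title": "Road safety data",
    "description": "Collisions in the UK",
    "text": "a" * 1000,
}
text = embedding_input(record)
# Only the first 500 characters of the body text contribute.
assert len(text) == len(record["title"]) + 1 + len(record["description"]) + 1 + 500
```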

Source data

Chunks reference records in the dataset ndl-core-corpus.

See rag_config.json for full, machine-readable configuration.


Example Application

This index is used in a live retrieval-augmented chat application:

🔗 NDL Core RAG Chat
https://huggingface.co/spaces/theodi/ndl-core-rag-chat

The application demonstrates:

  • Semantic retrieval over UK public sector data
  • Deterministic citation of source records
  • End-to-end RAG using the published FAISS index and metadata
