
Narodne Novine Metadata Graph

Structured metadata snapshot of the Croatian official gazette archive mirrored from narodne-novine.nn.hr.

Coverage

  • Years: 1990-2026
  • Issues: 5031
  • Acts: 97012
  • Graph links: 37183

Files

  • acts.parquet
  • issues.parquet
  • act_links.parquet
  • year_indexes.parquet
  • indexes/<year>.csv
  • indexes/<year>.xlsx
  • subjects.parquet
  • act_subjects.parquet
  • metadata.json

Notes

  • 2015+ records come from NN API / ELI metadata.
  • 1990-2014 records come from legacy search and yearly index exports.
  • Yearly index binaries are included under indexes/ so local imports can restore them exactly.
  • This dataset stores metadata, links, and source URLs. It does not provide article-level legal consolidation.
  • PDF links are preserved when NN exposes them. Many records are HTML-only by source design.

Tables

  • acts.parquet: one row per act with identifiers, titles, dates, source links, issuer/type IRIs, amendment resolution fields, and raw JSON-LD payloads as JSON strings
  • issues.parquet: one row per issue with ordered act-number lists and crawl status
  • act_links.parquet: explicit ELI graph links such as amends, changes, repeals, based_on
  • subjects.parquet: unique legal subject IRIs extracted from JSON-LD
  • act_subjects.parquet: many-to-many join between acts and extracted subjects
  • year_indexes.parquet: fetched yearly CSV/XLSX index metadata
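The link table can be joined back to the acts table to turn raw ELI links into readable amendment-graph edges. A minimal pandas sketch using toy in-memory rows; the column names assumed here (eli, source_eli, link_type, target_eli) are illustrative and should be checked against the actual parquet schemas:

```python
import pandas as pd

# Toy stand-ins for acts.parquet and act_links.parquet; the real
# column names may differ -- inspect the parquet schemas first.
acts = pd.DataFrame({
    "eli": ["eli/nn/2020/1/1", "eli/nn/2021/5/3"],
    "title": ["Zakon A", "Izmjene Zakona A"],
})
links = pd.DataFrame({
    "source_eli": ["eli/nn/2021/5/3"],
    "link_type": ["amends"],
    "target_eli": ["eli/nn/2020/1/1"],
})

# Join both link endpoints back to act titles to get readable edges.
edges = (
    links.merge(acts, left_on="source_eli", right_on="eli")
         .merge(acts, left_on="target_eli", right_on="eli",
                suffixes=("_source", "_target"))
)
print(edges[["title_source", "link_type", "title_target"]])
```

The same two-merge pattern applies to act_subjects.parquet, swapping the link table for the join table and acts for subjects.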

Caveats

  • This dataset is a derived mirror of public official-publication metadata, not the canonical legal source.
  • Some 1990-2014 fields are inferred from legacy HTML and yearly indexes rather than ELI-native structured metadata.
  • resolved_target_eli identifies target documents, not article-level legal diffs.
  • passed_by_iri is strongest for 2015+; many legacy issuer IRIs are heuristically normalized from issuer text.
  • raw_jsonld_json is empty for most legacy records because the legacy site does not expose equivalent JSON-LD.
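Because raw JSON-LD payloads are stored as JSON strings (and are empty for most legacy records), consumers need an explicit filter-and-parse step. A minimal sketch with toy rows; the raw_jsonld_json column name comes from the caveat above, but the payload shape shown is purely illustrative:

```python
import json
import pandas as pd

# Toy rows mimicking acts.parquet: a 2015+ record carrying a JSON-LD
# string and a legacy record with an empty payload.
acts = pd.DataFrame({
    "eli": ["eli/nn/2016/2/7", "eli/nn/1995/10/4"],
    "raw_jsonld_json": ['{"@type": "LegalResource"}', ""],
})

# Parse only the rows that actually carry a payload.
payloads = [json.loads(s) for s in acts["raw_jsonld_json"] if s]
print(payloads[0]["@type"])
```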

Top Document Types

  • ODLUKA: 38011
  • RJESENJE: 23368
  • PRAVILNIK: 14634
  • OSTALO: 6519
  • ZAKON: 4563
  • UREDBA: 4247
  • NAREDBA: 1673
  • PRESUDA: 1230
  • UPUTA: 867
  • IZMJENE_I_DOPUNE: 669

Example

from datasets import load_dataset

acts = load_dataset("parquet", data_files="acts.parquet")["train"]
print(acts[0]["title"])

Provenance

Source website: https://narodne-novine.nn.hr

This is a derived metadata/graph mirror built from public NN endpoints, yearly indexes, and legacy article pages. Downstream users should review source-site terms, preserve source attribution, and verify legal-critical interpretations against the official publication.

Intended Use

  • research and corpus analysis
  • legal-document discovery
  • citation and amendment-graph exploration
  • dataset prototyping for Croatian legislation metadata

Not Intended Use

  • treating this dataset as the sole authoritative legal source
  • deriving article-level consolidated law without additional verification
  • legal advice or compliance decisions without checking the official publication