🟣 Crovia — CEP Capsules (v1)
Crypto Evidence Packages for the AI Act Era
Portable, offline-verifiable provenance capsules for the world’s most-used open datasets.
If a dataset shaped modern AI, it deserves a receipt.
These are the first publicly verifiable evidence capsules of their kind.
🚀 What Are CEP Capsules?
A CEP.v1 capsule (Crypto Evidence Package) is a compact, self-contained file that proves:
- where attribution signals came from
- how payouts would be computed
- which trust bundle verified the run
- what the hashchain root is
- that everything inside is immutable
Each capsule is:
- 📦 portable (just a JSON file)
- 🔍 independently verifiable (a few lines of Python)
- 🛡 AI Act–aligned (trust bundle, receipts, payouts)
- ⛓ backed by a hashchain (tamper-evident)
Think of this not as a dataset, but as the evidence layer underneath datasets.
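As a rough sketch, a capsule's top-level layout looks something like the Python dict below. Field names are inferred from this card and from the verification snippet further down; the values are placeholders, and an actual capsule file is the authoritative reference for the CEP.v1 schema.

```python
# Illustrative sketch only -- field names inferred from this card, values
# are placeholders; an actual capsule file is the authoritative reference.
capsule_sketch = {
    "schema": "cep.v1",                     # capsule format identifier (assumed)
    "period": "2025-12",                    # attribution period the capsule covers
    "trust_bundle": {"sha256": "..."},      # trust bundle that verified the run
    "receipts": {"count": 0, "sha256": "..."},  # attribution-signal receipts
    "payouts": "...",                       # payout evidence (see verifier below)
    "hashchain": {"root": "<sha256 hex>"},  # tamper-evident chain root
}
```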
📘 Included Capsules (Dec 2025 Preview)
| Dataset Slice | Capsule |
|---|---|
| C4 | CEP-C4-2025-12.json |
| LAION 5B (sample) | CEP-LAION-2025-12.json |
| Wikipedia | CEP-WIKIPEDIA-2025-12.json |
| Wikitext | CEP-WIKITEXT-2025-12.json |
| FineWeb | CEP-FINEWEB-2025-12.json |
| The Pile | CEP-PILE-2025-12.json |
| ArXiv abstracts | CEP-ARXIV-2025-12.json |
| OpenSubtitles | CEP-OPENSUB-2025-12.json |
| Stack / Code | CEP-STACK-2025-12.json |
| BookCorpus | CEP-BOOKCORPUS-2025-12.json |
These capsules do not contain dataset content.
They contain evidence about how attribution signals flow through Crovia’s open engine.
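To fetch a capsule programmatically instead of saving it by hand, here is a minimal sketch using `huggingface_hub` (the repo id is taken from the dataset home link below; the filename is one of the entries in the table above):

```python
from huggingface_hub import hf_hub_download

# Download a single capsule file from this dataset repo into the local cache.
path = hf_hub_download(
    repo_id="Crovia/cep-capsules",
    filename="CEP-C4-2025-12.json",
    repo_type="dataset",
)
print("Capsule saved at:", path)
```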
🧪 Verify Any Capsule in a Few Lines
Save a capsule (e.g. CEP-C4-2025-12.json) locally and run:
```python
import json, hashlib

# Load the capsule and read the committed hashchain root.
with open("CEP-C4-2025-12.json") as f:
    cep = json.load(f)

root = cep["hashchain"]["root"]
# If payouts is stored as an object rather than a string, fall back to
# canonical JSON (an assumption; the capsule itself is authoritative).
payouts = cep["payouts"]
blob = payouts if isinstance(payouts, str) else json.dumps(payouts, sort_keys=True)
print("Root:", root)
print("Valid:", root == hashlib.sha256(blob.encode()).hexdigest())
```
Everything is verifiable offline, with no token, no API, no Crovia server.
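To see why a hashchain makes tampering evident, here is a toy chain in the standard pattern, where each link hashes the previous digest together with the next payload. This is an assumption for illustration, not the exact CEP.v1 chaining rule:

```python
import hashlib

def chain_root(payloads, prev="0" * 64):
    """Fold payload strings into one digest: each link hashes the
    previous digest concatenated with the next payload."""
    for payload in payloads:
        prev = hashlib.sha256((prev + payload).encode()).hexdigest()
    return prev

# Editing any earlier receipt changes every later digest, so the final
# root no longer matches the root committed inside the capsule.
receipts = ['{"provider": "a", "weight": 0.4}', '{"provider": "b", "weight": 0.6}']
print(chain_root(receipts))
```

Because every link depends on all earlier links, an auditor needs only the committed root to detect any modification of the underlying receipts.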
🧠 Why This Matters
Modern AI models are trained on massive public datasets… but nobody can prove:
- what signals came from where
- how much each source contributed
- how payouts would flow
- what trust criteria were applied
- whether logs were tampered with
Crovia introduces the evidence layer the ecosystem was missing: a standard way to ship real, inspectable provenance with AI training pipelines.
These capsules:
- help researchers audit models
- help companies comply with the AI Act
- help dataset creators receive attribution visibility
- help the community trust what models are built on
👇 Want to Explore or Collaborate?
Crovia is entirely community-driven.
We're looking for:
- dataset maintainers
- compliance researchers
- cryptography engineers
- model evaluators
- people who care about transparent AI
If you want to contribute, explore, or join early pilots:
- Open an issue on this dataset
- Star the dataset to follow updates
- Mention @Crovia on LinkedIn or X — we respond to everyone
🧩 Roadmap (Public Layer)
- CEP.v2 — multi-run lineage
- DSSE Open Integration (semantic signal explorer)
- Verified Dataset Manifests
- Training Pipeline Attestations
🙌 Credits
Crovia is an independent initiative committed to transparent, evidence-based AI attribution.
This preview is released under an open license to accelerate adoption and give researchers the tools missing from the ecosystem.
⭐ If you find this useful…
Please star the dataset.
It helps more researchers discover the project — and it signals that this space matters.
Dataset home:
https://huggingface.co/datasets/Crovia/cep-capsules