
 ██████╗██████╗  ██████╗ ██╗   ██╗██╗ █████╗ 
██╔════╝██╔══██╗██╔═══██╗██║   ██║██║██╔══██╗
██║     ██████╔╝██║   ██║██║   ██║██║███████║
██║     ██╔══██╗██║   ██║╚██╗ ██╔╝██║██╔══██║
╚██████╗██║  ██║╚██████╔╝ ╚████╔╝ ██║██║  ██║
 ╚═════╝╚═╝  ╚═╝ ╚═════╝   ╚═══╝  ╚═╝╚═╝  ╚═╝

🔮 OMISSION ORACLE

We see what they don't disclose.


SCAN NOW · 277 MODELS · AUTO-UPDATED


🔮 Live Scanner · 📊 Leaderboard · 🏅 Get Badge


⚠️ WHAT WE TRACK

┌────────────────────────────────────────────────────────────────────────────┐
│                                                                            │
│   Every 6 hours, the Oracle scans the most popular AI models.              │
│   We detect what's MISSING — not what's claimed.                           │
│                                                                            │
│   📊 No data provenance?     We see it.                                    │
│   📜 No license declared?    We see it.                                    │
│   🎯 No usage scope?         We see it.                                    │
│   👤 No accountable entity?  We see it.                                    │
│                                                                            │
│   The results are PUBLIC. The badges are AUTOMATIC.                        │
│   There is nowhere to hide.                                                │
│                                                                            │
└────────────────────────────────────────────────────────────────────────────┘

🏅 Embed Your Trust Badge

Add a Crovia Trust Badge to your model's README:

```markdown
![Crovia Trust](https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/badges/YOUR_MODEL.svg)
```

Example Badges

| Model | Badge | Score |
|---|---|---|
| google/gemma-2-9b | 🟢 GOLD | 100/100 |
| mistralai/Mistral-7B | 🟢 GOLD | 94/100 |
| microsoft/Phi-3-mini | 🟢 GOLD | 90/100 |
| meta-llama/Llama-3-8B | 🟡 SILVER | 77/100 |
| bigscience/bloom | 🔴 UNVERIFIED | 59/100 |

📊 Shadow Score Leaderboard

The Shadow Score measures trust declaration completeness (0-100):

| Score | Badge | Meaning |
|---|---|---|
| ≥90 | 🟢 GOLD | Fully compliant - all declarations present |
| ≥75 | 🟡 SILVER | Minor gaps - mostly compliant |
| ≥60 | 🟠 BRONZE | Needs attention - significant gaps |
| <60 | 🔴 UNVERIFIED | Critical - major declarations missing |
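The tier thresholds above can be expressed as a small lookup. A minimal sketch — the function name `badge_for` is illustrative, not part of the dataset's tooling:

```python
def badge_for(score: int) -> str:
    """Map a Shadow Score (0-100) to its badge tier, per the table above."""
    if score >= 90:
        return "GOLD"
    if score >= 75:
        return "SILVER"
    if score >= 60:
        return "BRONZE"
    return "UNVERIFIED"
```

For instance, `badge_for(77)` yields `"SILVER"`, matching meta-llama/Llama-3-8B in the example badges above.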

🚨 Top Omissions Detected

| NEC# | Violation | Records | Severity |
|---|---|---|---|
| NEC#2 | Missing license attribution | 111 | 🔴 Critical |
| NEC#13 | Missing accountable entity | 108 | 🔴 Critical |
| NEC#10 | Missing temporal validity | 46 | 🟡 Medium |
| NEC#1 | Missing data provenance | 32 | 🔴 Critical |
| NEC#7 | Missing usage scope | 18 | 🟡 Medium |

🔬 How It Works

┌─────────────────────────────────────────────────────────────────┐
│                    OMISSION ORACLE PIPELINE                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  1. OBSERVE           2. ANALYZE           3. PUBLISH          │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐      │
│  │ HuggingFace  │───▶│ NEC# Canon   │───▶│ Oracle Cards │      │
│  │ Model Cards  │    │ Matching     │    │ SVG Badges   │      │
│  │ API Scan     │    │ Shadow Score │    │ Rankings     │      │
│  └──────────────┘    └──────────────┘    └──────────────┘      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

📁 Dataset Structure

```
├── EVIDENCE.json              # Main evidence with top omissions
├── badges/                    # 🆕 SVG badges for embedding
│   ├── meta-llama__Llama-3-8B.svg
│   └── index.json
├── cards/                     # 🆕 Oracle Cards (JSON)
│   ├── meta-llama__Llama-3-8B.json
│   └── ...
├── canon/
│   └── necessities.v1.yaml    # NEC# definitions (20 types)
├── open/
│   ├── forensic/              # Analysis engines
│   │   ├── hubble_continuum.py
│   │   └── badge_generator.py # 🆕 Badge generator
│   ├── signal/                # Presence/absence signals
│   └── temporal/              # Historical pressure data
└── v0.1/                      # Versioned snapshots
```

⚡ Live Scanner

Launch CEP Terminal →

Enter any HuggingFace model ID and get instant trust analysis:

  • Shadow Score calculation
  • NEC# violation detection
  • Badge recommendation
  • Evidence hash

🔗 Integration

For Model Maintainers

Add this to your model's README to show your trust score:

```markdown
[![Crovia Trust Score](https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/badges/YOUR_ORG__YOUR_MODEL.svg)](https://huggingface.co/spaces/Crovia/cep-terminal)
```

For CI/CD Pipelines

```shell
# Check model trust before deployment
curl -s https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/cards/meta-llama__Llama-3-8B.json | jq '.shadow_score'
```
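The same gate can be written in Python. A sketch assuming only the `shadow_score` field queried above; the helper names and the threshold of 75 are illustrative choices, not part of the dataset's API:

```python
import json
import urllib.request

def fetch_card(model_id: str) -> dict:
    """Fetch a model's Oracle Card JSON from the dataset repo."""
    url = (
        "https://huggingface.co/datasets/Crovia/global-ai-training-omissions"
        f"/resolve/main/cards/{model_id}.json"
    )
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def passes_gate(card: dict, threshold: int = 75) -> bool:
    """True if the card's shadow_score meets the deployment threshold."""
    return card.get("shadow_score", 0) >= threshold

# Example (requires network):
# card = fetch_card("meta-llama__Llama-3-8B")
# if not passes_gate(card):
#     raise SystemExit("shadow_score below deployment threshold")
```

A threshold of 75 corresponds to the SILVER tier; pick whatever floor your deployment policy requires.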

What this dataset is

Crovia — Global Ranking of AI Training Omissions (Hubble) v0.1

This dataset publishes verifiable, hash-anchored evidence of persistent omissions observed across widely used AI training datasets.

It answers one and only one question:

Are public AI training disclosures observable — yes or no?

This dataset IS

  • an observation layer, not an audit
  • a cryptographically verifiable record
  • a public, reproducible signal of absence or presence
  • aligned with EU AI Act transparency principles

This dataset IS NOT

  • an audit of models
  • an inference of intent
  • an assignment of blame
  • a source of legal claims

Automatic, observable updates

This dataset updates automatically:

  • public artifacts are observed on a fixed schedule
  • if nothing changes, the update itself proves persistence
  • if something changes, hashes and commits change accordingly
  • no manual curation
  • no retroactive edits

Every update is:

  • publicly committed
  • reproducible
  • independently verifiable

Start here (viewer-first)

If you open only one file, open:

➡️ START_HERE.md

It explains the evidence layout for non-technical readers.


Open Plane (public observation layer)

The Open Plane measures one condition only:

Absence is observable.

It contains:

  • presence signals:
    open/signal/presence_latest.jsonl

  • absence receipts (time-bucketed):
    open/forensic/absence_receipts_7d.jsonl

  • overview:
    open/README.md


Core artifacts

  • ranking (human-readable): global_ranking.jsonl
  • current snapshot: snapshot_latest.json
  • cryptographic proof: EVIDENCE.json
  • canonical vocabulary: canon/necessities.v1.yaml
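JSONL artifacts such as `global_ranking.jsonl` can be consumed line by line. A minimal sketch — the field names in the sample records are hypothetical; inspect the file itself for the actual schema:

```python
import json

def read_jsonl(text: str) -> list[dict]:
    """Parse newline-delimited JSON, skipping blank lines."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Hypothetical two-record sample; real records carry the dataset's own fields.
sample = (
    '{"model_id": "org/model-a", "score": 90}\n'
    '{"model_id": "org/model-b", "score": 77}\n'
)
records = read_jsonl(sample)
top = max(records, key=lambda r: r.get("score", 0))
```

The same reader works for the presence signals and absence receipts under `open/`, since they share the JSONL format.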

PRO Shadow (non-disclosing)

Crovia PRO can compute private semantic measurements.

The Open Plane publishes a hash-anchored shadow pointer proving that a measurement exists without disclosing private data:

  • open/signal/pro_shadow_pressure_latest.json
  • open/README_PRO_SHADOW.md
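Checking any hash-anchored artifact reduces to recomputing its digest and comparing. A sketch assuming SHA-256; the actual digest algorithm and pointer layout are defined by the files listed above:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_anchor(data: bytes, expected_hex: str) -> bool:
    """True if the artifact's digest equals the published anchor."""
    return sha256_hex(data) == expected_hex.lower()
```

In practice you would download the artifact, read its raw bytes, and compare against the hex digest published in the shadow pointer.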

Temporal pressure (silence over time)

Crovia tracks how long silence persists under sustained observation.

Temporal pressure increases when:

  • observation coverage is HIGH
  • no public training evidence is disclosed
  • silence persists across days

This does not imply wrongdoing.

➡️ open/temporal/temporal_pressure_30d.jsonl
