# Librarian — Sync Layer

This folder makes the repository cleanly ingestible by the central Librarian (the Mapping-and-Inventory hub) and by any RAG pipeline (Hugging Face Datasets, LlamaIndex, LangChain).
## What's here

| File | Generated? | Purpose |
|---|---|---|
| `build_corpus.py` | hand-written | Self-contained, stdlib-only generator. Walks the repo and writes `inventory.json` + `rag_corpus.jsonl`. |
| `inventory.json` | yes — by `build_corpus.py` | Every file with `path`, `size`, `sha256`, `language`, `purpose`, `title`. |
| `rag_corpus.jsonl` | yes — by `build_corpus.py` | One JSON record per documentable file: `{id, path, title, summary, content, tags}`. |
| `findings.md` | hand-written | Human-readable map of the repo + gap analysis. Augments `REPO_MAP.md` / `SYSTEM_MAP.md`; never replaces them. |
| `manifest.json` | hand-written | Machine-readable "what is this repo" descriptor (node, role, endpoints, datasets, sibling repos). |
| `.librarianignore` | hand-written | Per-repo extra directory names to skip. One name per line. |
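Each line of `rag_corpus.jsonl` is one JSON object in the `{id, path, title, summary, content, tags}` shape listed in the table. A minimal stdlib reader looks like the sketch below; the sample record's values are invented for illustration only.

```python
import json
from io import StringIO

# One illustrative record in the {id, path, title, summary, content, tags}
# shape from the table above. The values here are made up for demonstration.
sample = StringIO(json.dumps({
    "id": "readme-0001",
    "path": "README.md",
    "title": "Project readme",
    "summary": "Top-level overview.",
    "content": "# Project\n...",
    "tags": ["docs"],
}) + "\n")

def iter_corpus(fh):
    """Yield one dict per non-empty JSONL line."""
    for line in fh:
        if line.strip():
            yield json.loads(line)

records = list(iter_corpus(sample))
```

The same iterator works unchanged on an open file handle for the real `librarian/rag_corpus.jsonl`.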
## Local rebuild

```shell
python3 librarian/build_corpus.py
```

No third-party dependencies — works on any system with Python 3.9+.
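The actual `build_corpus.py` in this folder is the source of truth; as a rough sketch of what a stdlib-only walker of this kind does (in-place directory pruning, per-file hashing), assuming an illustrative skip set and only the `path`/`size`/`sha256` subset of the inventory fields:

```python
import hashlib
import os
from pathlib import Path
from typing import List

# Illustrative skip set only; the real script also honours .librarianignore.
SKIP_DIRS = {".git", "__pycache__", "node_modules", "build", "dist"}

def file_record(root: Path, path: Path) -> dict:
    """One inventory entry (a subset of the fields listed above)."""
    data = path.read_bytes()
    return {
        "path": str(path.relative_to(root)),
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def build_inventory(root: str) -> List[dict]:
    """Walk the tree, pruning skipped directories before descending."""
    root_path = Path(root)
    records = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Editing dirnames in place stops os.walk descending into skipped dirs.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            records.append(file_record(root_path, Path(dirpath) / name))
    return records
```

The in-place `dirnames[:] = ...` assignment is the idiomatic way to prune `os.walk`; reassigning `dirnames` to a new list would have no effect on traversal.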
## CI

Two workflows are added under `.github/workflows/`:

- `librarian-sync.yml` — on push to `main` and on a daily schedule:
  - Rebuilds `inventory.json` + `rag_corpus.jsonl`.
  - Commits any drift back to the branch.
  - If the `HF_TOKEN` secret is set, mirrors the `librarian/` folder to a Hugging Face Dataset (`<HF_USER>/<repo>-librarian` by default).
  - If the repo variable `DEPLOY_HF_SPACE=true`, also (re)deploys a minimal Gradio Space that exposes `/status` + a search UI over the corpus.
- `self-heal.yml` — when any monitored workflow fails, re-runs only the failed jobs once. If it still fails, opens a deduplicated tracking issue.
## Required configuration (per repo)

| Kind | Name | Required? | Notes |
|---|---|---|---|
| Secret | `HF_TOKEN` | for HF push | A Hugging Face user-access token with write scope. Without it, the HF push step is skipped cleanly. |
| Variable | `HF_USER` | optional | Hugging Face user/org. Default: `DJ-Goanna-Coding`. |
| Variable | `HF_DATASET_NAME` | optional | Dataset name. Default: `<repo>-librarian`. |
| Variable | `DEPLOY_HF_SPACE` | optional | Set to `true` to also deploy the Gradio search Space. |
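The defaults in the table compose into the target dataset id like this. How the workflow actually resolves them internally is an assumption; the sketch only encodes the documented defaults and the "skipped cleanly without `HF_TOKEN`" behaviour.

```python
import os
from typing import Optional

def resolve_dataset_repo(repo_name: str) -> Optional[str]:
    """Resolve the target HF dataset id from the variables in the table above.

    Returns None when HF_TOKEN is absent, mirroring the documented
    "skipped cleanly" behaviour. The resolution logic is illustrative,
    not a copy of the workflow's implementation.
    """
    if not os.environ.get("HF_TOKEN"):
        return None  # push step skipped cleanly
    user = os.environ.get("HF_USER", "DJ-Goanna-Coding")
    name = os.environ.get("HF_DATASET_NAME", f"{repo_name}-librarian")
    return f"{user}/{name}"
```

For a repo named `ARK_CORE` with only `HF_TOKEN` set, this yields `DJ-Goanna-Coding/ARK_CORE-librarian`, matching the `<HF_USER>/<repo>-librarian` default described above.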
The standardized `GET /v1/system/status` endpoint added to `app.py` returns `{node, version, status, uptime_seconds, git_sha, librarian_ready, timestamp}`. The Vercel HUD and the central Librarian poll this endpoint on every sovereign repo with the same contract.
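A minimal sketch of a payload builder matching that contract follows. The field names come from the README; how each value is obtained (uptime from a process-start monotonic clock, a fixed `"ok"` status) is an assumption, not a copy of `app.py`.

```python
import time
from datetime import datetime, timezone

# Captured once at import time so uptime is measured from process start.
START_TIME = time.monotonic()

def system_status(node: str, version: str, git_sha: str,
                  librarian_ready: bool) -> dict:
    """Build the /v1/system/status payload described in the contract above."""
    return {
        "node": node,
        "version": version,
        "status": "ok",  # illustrative; a real app would report degraded states
        "uptime_seconds": round(time.monotonic() - START_TIME, 1),
        "git_sha": git_sha,
        "librarian_ready": librarian_ready,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Wiring this into whatever web framework `app.py` uses is then a one-line route handler returning the dict as JSON.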
## Apply this layer to another repo (3 steps)

This folder + the two workflow files are intentionally repo-agnostic. To onboard AION / TIA / ORACLE / Mapping-and-Inventory:

1. Copy `librarian/` and `.github/workflows/librarian-sync.yml` + `.github/workflows/self-heal.yml` into the target repo, unchanged. (Optionally edit `librarian/.librarianignore` for that repo's heavy directories.)
2. In GitHub → Settings → Secrets and variables → Actions, add the `HF_TOKEN` secret (and optionally set the `HF_USER` / `HF_DATASET_NAME` / `DEPLOY_HF_SPACE` variables).
3. Add `GET /v1/system/status` to that repo's `app.py`, returning the same shape as ARK_CORE's. If the repo has no `app.py`, the "skeleton deployment" task in the directive applies first.

After the first push to `main`, the central Librarian can ingest the new `<HF_USER>/<repo>-librarian` dataset and `findings.md` to weave the new node into its map.
## Privacy / safety

`build_corpus.py` will not include the content of:

- `.git/`, `__pycache__/`, virtualenvs, build/dist caches.
- Anything listed in `.librarianignore`.
- `.github/agents/` (private agent instructions).
- Files matching the `SECRETY_NAMES` set in `build_corpus.py` (`.env`, `credentials.json`, `token.json`, `mexc_keys.json`).
- Files larger than 4 MiB (still inventoried; content omitted).
- Bytes beyond 64 KiB per file (truncated, with marker).

If you need additional exclusions, prefer adding directory names to `librarian/.librarianignore` over editing `build_corpus.py` so the script stays portable across all your repos.
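The `.librarianignore` parsing and the two size rules above can be sketched as follows. The comment-skipping behaviour and the exact `[truncated]` marker text are assumptions; the README only specifies "one name per line", the 4 MiB omit threshold, and the 64 KiB truncation.

```python
from typing import Optional

MAX_CONTENT = 4 * 1024 * 1024   # content omitted entirely above this size
TRUNCATE_AT = 64 * 1024         # content truncated beyond this many bytes

def load_ignored(path: str = ".librarianignore") -> set:
    """Read one directory name per line; missing file means no extras.

    Skipping blanks and '#' comments is an assumption beyond the README's
    "one name per line" rule.
    """
    try:
        with open(path) as fh:
            return {ln.strip() for ln in fh
                    if ln.strip() and not ln.startswith("#")}
    except FileNotFoundError:
        return set()

def content_for(data: bytes) -> Optional[str]:
    """Apply the size rules: omit content over 4 MiB, truncate past 64 KiB."""
    if len(data) > MAX_CONTENT:
        return None  # file is still inventoried; content omitted
    text = data[:TRUNCATE_AT].decode("utf-8", errors="replace")
    if len(data) > TRUNCATE_AT:
        text += "\n[truncated]"  # marker text is illustrative
    return text
```

Keeping these thresholds as module-level constants is what makes the "add names to `.librarianignore` instead of editing the script" advice workable.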