---
license: apache-2.0
language:
- en
- ko
tags:
- skill-retrieval
- information-retrieval
- agents
- benchmark
- retrieval
pretty_name: SkillRetBench
size_categories:
- n<1K
---

# SkillRetBench

## Dataset summary

**SkillRetBench** is a benchmark for **query-to-skill retrieval**: given a natural-language user request, systems must rank or select the correct procedural skill document(s) from a fixed library. The corpus is derived from a real-world, production-style agent skill library (**501** skills). The benchmark includes **1,250** queries with gold skill identifiers and optional distractors, spanning five evaluation settings (single skill, multi-skill composition, distractors, outdated/redundant, budget-constrained). Packaged **baseline experiment results** (BM25, dense, hybrid, LLM-style, and SADO-style runs) report standard IR metrics (Recall@k, nDCG@k, MRR, MAP) per setting.

SkillRetBench accompanies the **SEFO** framework (Self-Evolving Federated Orchestration) and targets research on skill-aware orchestration, large skill libraries, and retrieval under composition and noise.

## Dataset structure

The release consists of three JSON artifacts (also listed in `croissant.json`):

| File | Description |
|------|-------------|
| `skill_corpus.json` | Full skill index: metadata, corpus statistics, and the `skills` array (one object per skill). |
| `skillretbench_queries.json` | Benchmark definition: `meta` (counts, seed, generator) and the `queries` array. |
| `baseline_results.json` | Aggregated baseline metrics: per-method, per-setting IR scores and a markdown summary table. |

### `skill_corpus.json`

- **`meta`**: `generated_at` (ISO 8601), `skills_root`, `output_path`, `skill_file_count`.
- **`statistics`**: `total_skills`, `skills_per_category`, description-length stats, `token_count_distribution`, and composition-graph stats (`skills_with_composition_references`, `total_composition_edges`).
- **`skills`** (array, length 501): each element includes:
  - `skill_id` (string, canonical id)
  - `skill_name` (string)
  - `description` (string, short routing description)
  - `trigger_phrases` (string array)
  - `anti_triggers` (string array)
  - `korean_triggers` (string array; may be empty)
  - `category` (string)
  - `full_text` (string, full SKILL.md-derived body)
  - `token_count` (integer)
  - `composable_skills` (string array; referenced skill ids)
  - `parse_warnings` (string array)
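
Since `composable_skills` references other entries by id, the composition graph can be materialized and sanity-checked for dangling ids in a few lines. This is an illustrative sketch against the schema above, not part of the dataset tooling:

```python
def composition_edges(skills: list[dict]) -> list[tuple[str, str]]:
    """Collect (skill_id, referenced_id) edges from composable_skills."""
    return [(s["skill_id"], ref)
            for s in skills
            for ref in s.get("composable_skills", [])]

def dangling_references(skills: list[dict]) -> set[str]:
    """Referenced skill ids that have no corresponding corpus entry."""
    known = {s["skill_id"] for s in skills}
    return {ref for _, ref in composition_edges(skills)} - known
```

The length of `composition_edges(corpus["skills"])` should match the corpus's `total_composition_edges` statistic.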

### `skillretbench_queries.json`

- **`meta`**: `benchmark`, `generator`, `corpus_meta`, `total_queries` (1250), `counts_by_setting`, `rng_seed` (42).
- **`queries`** (array, length 1250): each element includes:
  - `query_id` (string, e.g. `SS-0001`)
  - `setting` (enum-like string): `single_skill` | `multi_skill_composition` | `distractor` | `outdated_redundant` | `budget_constrained`
  - `query` (string; English and/or Korean natural language)
  - `gold_skills` (string array; expected skill ids)
  - `distractor_skills` (string array)
  - `budget_tokens` (number or `null`)
  - `difficulty` (string, e.g. `easy`, `medium`)
  - `source` (string, e.g. `trigger_paraphrase`)

**Query counts by setting**

| Setting | Count |
|---------|------:|
| `single_skill` | 400 |
| `multi_skill_composition` | 200 |
| `distractor` | 300 |
| `outdated_redundant` | 150 |
| `budget_constrained` | 200 |
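
These counts can be recomputed directly from the file as a quick integrity check after download (an illustrative sketch; the reference counts live in `meta.counts_by_setting`):

```python
from collections import Counter

def counts_by_setting(queries: list[dict]) -> dict[str, int]:
    """Tally queries per evaluation setting."""
    return dict(Counter(q["setting"] for q in queries))
```

Applied to the loaded `queries` array, this should reproduce both the table above and `meta["counts_by_setting"]`.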

### `baseline_results.json`

- **`timestamp`**, **`meta`**: `corpus_skills`, `queries_total`, `queries_by_setting`, `dense_backend` (e.g. `jaccard_fallback`).
- **`baselines`**: nested object with top-level keys **`BM25`**, **`Dense`**, **`Hybrid`**, **`NaiveLLM`**, and **`SADO`**. Under each method, keys match query `setting` names; each leaf holds the metrics `recall@1`, `recall@3`, `recall@5`, `recall@10`, `ndcg@1`, `ndcg@3`, `ndcg@5`, `ndcg@10`, `mrr`, and `map` (floats).
- **`summary_table`**: pre-rendered markdown table (macro-style summary over settings).

> **Note:** As documented in the artifact, `NaiveLLM` and `SADO` entries may be **simulated** (no live API) for reproducible baselines; see the project script `run_baselines.py`.
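
Given the method → setting → metric nesting, individual scores and per-setting winners can be read out with simple helpers (illustrative names, not part of the release):

```python
def metric(baselines: dict, method: str, setting: str, name: str) -> float:
    """Look up one score, e.g. metric(b, "BM25", "single_skill", "ndcg@10")."""
    return baselines[method][setting][name]

def best_method(baselines: dict, setting: str, name: str) -> str:
    """Method with the highest value of `name` on `setting`."""
    return max(baselines, key=lambda m: baselines[m][setting][name])
```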

## Supported tasks

- **skill-retrieval** — Rank or classify skills given a query; evaluate against `gold_skills` with standard IR metrics.
- **information-retrieval** — The same retrieval formulation; comparable to passage or tool retrieval with long procedural documents.
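
For reference, the headline metrics reduce to a few lines given a ranked list of skill ids and a query's `gold_skills`. A minimal sketch (the packaged harness may differ in tie-breaking and edge-case handling):

```python
def recall_at_k(ranked: list[str], gold: set[str], k: int) -> float:
    """Fraction of gold skills retrieved in the top k."""
    return len(set(ranked[:k]) & gold) / len(gold) if gold else 0.0

def mrr(ranked: list[str], gold: set[str]) -> float:
    """Reciprocal rank of the first gold skill (0.0 if none retrieved)."""
    return next((1.0 / i for i, sid in enumerate(ranked, 1) if sid in gold), 0.0)
```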

## Languages

- **English** — Majority of skill text and many queries.
- **Korean** — Present in `korean_triggers`, many `query` strings, and mixed Korean/English skill descriptions.

## Dataset creation

### Sources

- Skills were built from **Cursor agent `SKILL.md` files** under a single repository’s `.cursor/skills` tree (501 files), parsed into structured records plus full markdown bodies.
- Queries were **programmatically generated** by `app.sefo.benchmark.query_generator` with a fixed **`rng_seed` of 42** for reproducibility.

### Methodology (high level)

1. **Corpus build** — Parse frontmatter and body; extract triggers, categories, composition edges, and token counts.
2. **Query generation** — Instantiate templates and paraphrases aligned to skills and settings (single vs. multi-skill, distractors, outdated/redundant, token-budget hints).
3. **Baselines** — Run the packaged retrievers/rankers (lexical, dense with the configured fallback, hybrid, simulated LLM/SADO-style) and aggregate metrics per setting.
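
As a rough illustration of step 3, the sketch below ranks skills by token-set Jaccard overlap between the query and each skill's description and trigger phrases (a toy stand-in echoing the `jaccard_fallback` dense backend, not the actual baseline implementation):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_skills(query: str, skills: list[dict], top_k: int = 10) -> list[str]:
    """Rank skill ids by Jaccard overlap with the query tokens."""
    q_tokens = set(query.lower().split())

    def score(skill: dict) -> float:
        text = " ".join([skill.get("description", "")] + skill.get("trigger_phrases", []))
        return jaccard(q_tokens, set(text.lower().split()))

    return [s["skill_id"] for s in sorted(skills, key=score, reverse=True)[:top_k]]
```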

## Considerations

### Biases

- **Corpus domain** — Skills reflect one organization’s automation stack (dev tooling, PM, trading, cloud, etc.); frequency and vocabulary are **not** uniformly representative of all industries or locales.
- **Authoring style** — Descriptions follow a consistent “when to use / do not use” pattern; models may exploit **format bias** rather than deep semantics.
- **Query generation** — Synthetic, template-driven queries may **over- or under-represent** realistic user phrasing compared to production logs.

### Limitations

- **Scale** — 501 skills is medium-scale compared with ecosystems of 10⁵+ tools; retrieval hardness may not transfer linearly.
- **Gold labels** — Defined by the benchmark’s construction rules; edge cases and valid alternative skill sets may exist.
- **Static snapshot** — Skill text and ids are frozen for the benchmark revision tied to `meta.generated_at` in the JSON files.
- **Baseline simulation** — Some methods are noted as simulated in-code; compare only under the same harness and seeds.

## License

This dataset is released under the **Apache License 2.0**. See [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Citation

If you use SkillRetBench or this corpus, please cite:

```bibtex
@misc{han2026sefo,
  title        = {{SEFO}: Self-Evolving Federated Orchestration with Trusted Skill Governance and Skill Retrieval Benchmark for Recursive Agentic Systems},
  author       = {Han, Hyojung and {ThakiCloud AI Research}},
  year         = {2026},
  howpublished = {\url{https://github.com/hyojunguy/ai-model-event-stock-analytics}},
  note         = {Includes SkillRetBench: 501-skill corpus and 1,250-query retrieval benchmark}
}
```

Adjust the `howpublished` / venue fields when a formal publication DOI is available.

## Dataset repository layout (recommended)

After upload to the Hugging Face Hub, a typical layout is:

```
skill_corpus.json
skillretbench_queries.json
baseline_results.json
croissant.json
README.md
```

## Additional documentation

- Croissant (ML Commons) metadata: **`croissant.json`** in this folder.
- Implementation and regeneration code: `backend/app/sefo/benchmark/` in the source repository.
|