# Genesis_FineTune_Core_25k (Master Scholar)
**Developer / Brand:** **Within Us AI**
A Genesis-style, end-to-end **advanced fine-tuning starter pack**: five datasets of 5,000 records each, packaged together to train an assistant's *behavioral core*.
## What this pack trains
1) **Instruction-following (multi-turn)** — constraints, formatting, spec compliance
2) **Tool / function calling** — schema-accurate tool calls + observations
3) **Preference learning (DPO-style)** — chosen vs rejected responses
4) **Long-context retrieval** — cite-the-source answering from multiple docs
5) **Code patch + debugging** — unified diffs and test intent (including defensive fixes)
## Files (25,000 records total)
- instruct_multiturn_5k.jsonl
- tool_use_functioncalling_5k.jsonl
- preference_dpo_5k.jsonl
- longcontext_retrieval_5k.jsonl
- codepatch_debug_5k.jsonl
- README.md
- dataset_card.md
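A minimal loading sketch, assuming the five `.jsonl` files sit alongside this README and that each line is one JSON record (the field layout is described under "Unified schema" below):

```python
import json
from pathlib import Path

# The five training splits listed above; README.md and dataset_card.md are docs only.
SPLITS = [
    "instruct_multiturn_5k.jsonl",
    "tool_use_functioncalling_5k.jsonl",
    "preference_dpo_5k.jsonl",
    "longcontext_retrieval_5k.jsonl",
    "codepatch_debug_5k.jsonl",
]

def load_split(path: Path) -> list[dict]:
    """Read one JSONL file into a list of record dicts."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

records = [r for name in SPLITS for r in load_split(Path(name))]
print(len(records))  # expected: 25000
```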
## Unified schema
All records share the same top-level schema: `id`, `type`, `prompt`, `response`, `meta`, and optional `artifacts`.
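For orientation, an illustrative record under that schema; only the top-level keys come from the pack, the concrete field values below are hypothetical:

```python
import json

# Hypothetical example record. Only the top-level keys (id, type, prompt,
# response, meta, artifacts) are part of the pack's stated schema; the
# values and the inner structure of `meta` are placeholders.
example = {
    "id": "example-000001",
    "type": "instruction_multiturn",
    "prompt": "User: Summarize the attached notes in three bullet points.",
    "response": "- point one\n- point two\n- point three",
    "meta": {"safety": "allowed"},
    "artifacts": None,
}
print(json.dumps(example, indent=2))
```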
## Composition
By type:

```json
{
  "instruction_multiturn": 5000,
  "tool_use": 5000,
  "preference_pair": 5000,
  "long_context_retrieval": 5000,
  "code_patch": 5000
}
```

By safety label:

```json
{
  "allowed": 24011,
  "refuse": 989
}
```
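A sanity-check sketch that tallies records by `type` against the published composition, reusing the `records` list from the loading sketch above; where the safety label lives inside each record is not specified here, so only the type tally is shown:

```python
from collections import Counter

# Count loaded records per `type` and compare against the composition
# figures published above (5,000 per type, 25,000 total).
by_type = Counter(r["type"] for r in records)
for name, count in sorted(by_type.items()):
    print(f"{name}: {count}")
assert sum(by_type.values()) == 25_000
```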
## Attribution
**Within Us AI — Genesis_FineTune_Core_25k**
## License
Apache-2.0