---
pretty_name: Royal Ghost Coder 10M
configs:
- config_name: default
  data_files:
  - split: train
    path: "royal_ghost_titan_data.jsonl"
tags:
- code
- instruction-tuning
- synthetic
- agentic
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---

![Royal Ghost Coder 10M Banner](royal_ghost_coder_10m_banner.png)

# Royal Ghost Coder 10M

A large-scale, **synthetic instruction-tuning** corpus designed to train code-capable, agentic models on **structured “instruction → input → output”** workflows at high volume. The dataset ships as a single JSONL file and is auto-converted to Parquet by Hugging Face for faster streaming.

## Dataset Summary

- **Repository:** `gss1147/Royal_Ghost_Coder_10M`
- **Rows:** 10,000,000 (train split)
- **Primary file:** `royal_ghost_titan_data.jsonl`
- **Format:** JSON Lines (one JSON object per line)
- **Schema:** `id`, `idx`, `role`, `instruction`, `input`, `output`, `score`

## Supported Tasks

- Instruction tuning for code generation, refactoring, and debugging patterns
- Lightweight agent-style planning and “tool-like” action phrasing
- Dataset-driven evaluation and filtering via the `score` field

## Data Structure

Each record is a single training example in a common instruction-tuning format.

### Fields

- `id` (string): UUID-style identifier
- `idx` (int): Row index
- `role` (string): Persona / role label (e.g., an agent identity)
- `instruction` (string): The task request (prompt)
- `input` (string): Optional context / constraints / scenario text
- `output` (string): The intended completion (often code or code-like text)
- `score` (float): A normalized quality indicator in `[0, 1]`, useful for filtering

### Example (conceptual)

```json
{
  "id": "6da52f71-a953-4675-862f-2cd8539b55f1",
  "idx": 0,
  "role": "titan_architect",
  "instruction": "Optimize the Quantum_Bridge for singular perfection.",
  "input": "Legacy sector 20 unstable.",
  "output": "def Optimize_Quantum_Bridge_0(self): return self.evolve(entropy=0.2674)",
  "score": 0.788814
}
```
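
Since the corpus ships as raw JSON Lines, each line can be sanity-checked against the schema above before training. A minimal sketch using only the standard library (field names come from the schema section; the sample record is the conceptual example above):

```python
import json

# Expected types for the documented schema.
SCHEMA = {
    "id": str, "idx": int, "role": str,
    "instruction": str, "input": str, "output": str, "score": float,
}

def validate_line(line: str) -> dict:
    """Parse one JSONL line and check field names, types, and score range."""
    rec = json.loads(line)
    for field, typ in SCHEMA.items():
        if field not in rec:
            raise ValueError(f"missing field: {field}")
        if not isinstance(rec[field], typ):
            raise TypeError(f"{field}: expected {typ.__name__}")
    if not 0.0 <= rec["score"] <= 1.0:
        raise ValueError("score outside [0, 1]")
    return rec

line = json.dumps({
    "id": "6da52f71-a953-4675-862f-2cd8539b55f1", "idx": 0,
    "role": "titan_architect",
    "instruction": "Optimize the Quantum_Bridge for singular perfection.",
    "input": "Legacy sector 20 unstable.",
    "output": "def Optimize_Quantum_Bridge_0(self): return self.evolve(entropy=0.2674)",
    "score": 0.788814,
})
print(validate_line(line)["role"])
```

A check like this is cheap relative to a 10M-row training run and catches malformed lines early.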

## How to Use

### Loading with 🤗 Datasets

```python
from datasets import load_dataset

ds = load_dataset("gss1147/Royal_Ghost_Coder_10M", split="train")
print(ds[0])
```

### Converting to chat format (optional)

```python
def to_messages(ex):
    user = ex["instruction"]
    if ex.get("input"):
        user = f"{user}\n\nContext:\n{ex['input']}"
    return {
        "messages": [
            {"role": "system", "content": f"You are {ex.get('role', 'an expert coding assistant')}."},
            {"role": "user", "content": user},
            {"role": "assistant", "content": ex["output"]},
        ],
        "score": ex.get("score", None),
        "id": ex.get("id", None),
    }

chat_ds = ds.map(to_messages, remove_columns=ds.column_names)
```
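
Some training frameworks expect flat text rather than a messages list. An illustrative flattening sketch is below; a production pipeline should instead use the target tokenizer's own chat template rather than a hand-rolled format like this:

```python
def render_plain(messages):
    """Flatten a messages list into a single newline-joined training string.
    Illustrative only; real chat formatting comes from the tokenizer."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

example = [
    {"role": "system", "content": "You are titan_architect."},
    {"role": "user", "content": "Optimize the Quantum_Bridge for singular perfection."},
    {"role": "assistant", "content": "def Optimize_Quantum_Bridge_0(self): ..."},
]
print(render_plain(example))
```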

### Quality filtering

```python
filtered = ds.filter(lambda x: x["score"] is None or x["score"] >= 0.85)
```
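
Before committing to a cutoff, it can help to check how much data each threshold retains. A small sketch over a list of scores (the sample values here are made up, not drawn from the dataset):

```python
def retention(scores, thresholds):
    """Fraction of examples kept at each score cutoff."""
    n = len(scores)
    return {t: sum(s >= t for s in scores) / n for t in thresholds}

# Hypothetical sample of `score` values for illustration.
sample = [0.12, 0.55, 0.79, 0.85, 0.91, 0.97, 0.62, 0.88]
print(retention(sample, [0.5, 0.85, 0.95]))
# e.g. a 0.85 cutoff keeps half of this sample
```

Running the same sweep over the real `score` column shows the size/quality trade-off before filtering 10M rows.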

## Intended Use

This dataset is primarily intended for:

- Training or adapting small-to-mid-size models for instruction-following code generation.
- Building “persona + instruction” pipelines where `role` steers responses.
- Large-scale experiments on filtering, curricula, or “quality-aware” fine-tuning via `score`.

## Limitations and Considerations

- **Synthetic content:** Examples are machine-generated (or machine-curated) and may include unrealistic domains, naming, or pseudo-code.
- **Verification:** The dataset is **not** a source of verified real-world facts. Treat outputs as training text, not ground truth.
- **Safety:** If you deploy a model fine-tuned on this dataset, apply standard safety, security, and evaluation practices.

## License

No explicit license is declared in this dataset card. Before broad redistribution or commercial use, add a license in the YAML front matter (for example: `apache-2.0`, `mit`, or `cc-by-4.0`) consistent with your intended permissions.

## Citation

If you use this dataset in academic work, cite the repository:

```bibtex
@dataset{gss1147_royal_ghost_coder_10m,
  title        = {Royal Ghost Coder 10M},
  author       = {gss1147},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/gss1147/Royal_Ghost_Coder_10M}}
}
```