# Glyphic Dataset v1
A structured dataset for training language models to understand and generate Glyphic Language — a symbolic protocol designed for drift‑resistant agent cognition.
This dataset contains:

- Text → Glyph mappings
- Glyph → Text mappings
- Structured Meaning representations
- CTX envelope examples (identity, intent, memory, behavior, safety, state, thought)
It is the reference dataset for training Glyphic‑aware LLMs.

## Dataset Contents
The dataset includes three primary JSONL files:
### `text_to_glyph.jsonl`

Each line contains:

```json
{ "text": "The agent remembers a promise.", "glyph": "<G:...>" }
```

### `glyph_to_text.jsonl`

Each line contains:

```json
{ "glyph": "<G:...>", "text": "The agent remembers a promise." }
```

### `structured_meaning.jsonl`

Each line contains:

```json
{ "text": "The agent remembers a promise.", "meaning": { "actor": "agent", "action": "remember", "object": "promise", "context": {...} }, "glyph": "<G:...>" }
```
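Since all three files share the JSONL format (one JSON object per line), they can be read with the standard library alone. A minimal sketch; the file name and sample record below mirror this card's examples:

```python
import json

def read_jsonl(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)

# Example, assuming the file sits in the working directory:
# first = next(read_jsonl("text_to_glyph.jsonl"))
```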
These files are generated using the [Glyphic Language Toolkit](https://github.com/GlyphicMind-Solutions/Glyphic-Language).
## How to Load the Dataset
Using Hugging Face `datasets`:

```python
from datasets import load_dataset

ds = load_dataset("GlyphicMind/glyphic-dataset-v1", split="train")
```

You can inspect entries:

```python
print(ds[0])
```
## Dataset Schema

### Text → Glyph

- `text`: natural language sentence
- `glyph`: encoded Glyphic sequence

### Glyph → Text

- `glyph`: symbolic sequence
- `text`: natural language reconstruction

### Structured Meaning

- `text`: natural language
- `meaning`: structured semantic representation
- `glyph`: encoded symbolic sequence
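The per-file schemas above can be checked mechanically before training. The sketch below is illustrative and not part of the toolkit; only the key names come from this card:

```python
# Expected top-level keys per file, as listed in the schema above.
SCHEMAS = {
    "text_to_glyph": {"text", "glyph"},
    "glyph_to_text": {"glyph", "text"},
    "structured_meaning": {"text", "meaning", "glyph"},
}

def validate_record(record: dict, kind: str) -> bool:
    """Return True if the record carries exactly the keys expected for its file."""
    return set(record) == SCHEMAS[kind]
```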
The meaning schema is defined in `glyphic-language/docs/semantic_model.md`.
## Intended Use
This dataset is designed for:

- training LLMs to understand Glyphic
- training LLMs to generate Glyphic
- symbolic reasoning research
- drift‑resistant agent architectures
- CTX‑based identity, intent, memory, and behavior modeling
- protocol‑driven agent communication
It is not a general‑purpose natural language dataset.

## How to Train a Glyphic‑Aware Model
A full training pipeline is provided in `glyphic-language/training/`.
Typical steps:

1. Generate or extend the dataset using `generator/run_generator.py`
2. Load the dataset with Hugging Face `datasets`
3. Fine‑tune a base model (LLaMA, Mistral, etc.)
4. Export as `.gguf` for inference
5. Use Glyphic envelopes at runtime to eliminate drift
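For step 3, each record must be rendered into whatever prompt format the chosen base model expects. A hedged sketch for a `text_to_glyph` record; the instruction wording here is a hypothetical template, not part of the toolkit:

```python
def to_sft_example(record: dict) -> dict:
    """Format a text_to_glyph record as a prompt/completion pair.

    The prompt wording is a hypothetical template; adapt it to the chat
    or instruction format of your chosen base model.
    """
    return {
        "prompt": f"Encode the following sentence as Glyphic:\n{record['text']}\n",
        "completion": record["glyph"],
    }
```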
A reference model is available at `GlyphicMind/glyphic-llm-v1`.
## Regenerating or Extending the Dataset
To regenerate or extend this dataset:

1. Clone the [Glyphic Language Toolkit](https://github.com/GlyphicMind-Solutions/Glyphic-Language)
2. Modify dictionary entries, templates, or CTX files
3. Run the generator:

   ```bash
   python -m generator.run_generator
   ```

4. Validate the output:

   ```bash
   python -m interpreter.interpreter --validate
   ```

See:

- `training/dataset_generation_guide.md`
- `generator/templates_*`
- `dictionary/`
- `syntax/`
## Why Glyphic Reduces LLM Drift
Glyphic provides:

1. **Deterministic structure.** Meaning is encoded symbolically, not as free‑form prose.
2. **Strict grammar.** A BNF‑defined syntax prevents ambiguity.
3. **CTX protocol.** Identity, intent, memory, behavior, safety, and state are explicit fields.
4. **Envelope validation.** Controllers enforce structure before and after LLM inference.
5. **Separation of concerns.** The LLM becomes a stateless pattern engine; Glyphic holds the meaning.
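As an illustration of point 4, a controller can cheaply gate model output on the envelope shape before any deeper validation. The regex below only checks the `<G:...>` wrapper shown in this card's examples; it is an assumption for illustration, not the toolkit's validator:

```python
import re

# Matches the <G:...> wrapper used in the dataset examples. The payload
# grammar itself is defined by the toolkit's BNF, not by this regex.
GLYPH_WRAPPER = re.compile(r"^<G:[^<>]+>$")

def looks_like_glyph(output: str) -> bool:
    """Cheap structural gate to run on LLM output before full validation."""
    return bool(GLYPH_WRAPPER.match(output.strip()))
```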
## License

This dataset is licensed under Creative Commons Attribution 4.0 International (CC‑BY 4.0). You may reuse, modify, and build upon this dataset with attribution.

## Citation

```
Glyphic Dataset v1 (2026). GlyphicMind Solutions. https://huggingface.co/GlyphicMind/glyphic-dataset-v1
```