# TYNOS-1.2B-Opus (Trained with Vexa)

TYNOS-1.2B-Opus is a 1.2-billion-parameter language model trained using the Vexa Crystalline Intelligence architecture, a novel approach that replaces traditional gradient-based training with knowledge crystallization.
## Model Details
- Model Type: Transformer-based LLM with Vexa Crystalline Intelligence
- Parameters: 1.2B
- Context Length: 4096 tokens
- Training Method: Vexa Crystallization (not traditional LoRA/PEFT)
## Vexa Architecture

Unlike traditional training, which modifies neural weights through gradient descent, Vexa uses:

- **Glyph-Based Knowledge Representation**: concepts are encoded as 512-dimensional Glyph vectors
- **GlyphLattice**: a knowledge graph (NetworkX) storing Glyphs with typed relationships (sketched below)
- **5-Phase Crystallization Pipeline**:
  - Ingest: parse JSONL datasets (Alpaca, ShareGPT, etc.)
  - Extract: run NLP analysis to extract relation triples
  - Encode: convert concepts to 512-dim vectors (SentenceTransformer)
  - Integrate: weave the results into the knowledge lattice
  - Calibrate: tune tension, resonance, and decay
- **Frozen Base Model**: the underlying Transformer remains unchanged
- **Activation Propagation**: queries are matched to the lattice via semantic vectors, and activation propagates through relationships
- **Response Synthesis**: responses are generated by the frozen model plus lattice context
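To make this concrete, here is a minimal sketch of what a GlyphLattice could look like, assuming NetworkX for the graph and a SentenceTransformer for Glyph encoding as described above. Everything beyond the GlyphLattice name is an assumption: the real `vexa_integration` classes and methods may differ, and the checkpoint shown emits 384-dim vectors rather than the 512 stated on this card.

```python
# Illustrative sketch only; not the actual vexa_integration implementation.
import networkx as nx
import numpy as np
from sentence_transformers import SentenceTransformer

class GlyphLattice:
    """Knowledge graph of Glyph vectors joined by typed relationships."""

    def __init__(self, encoder: str = "all-MiniLM-L6-v2"):
        # NOTE: all-MiniLM-L6-v2 emits 384-dim vectors; the card specifies
        # 512 dims, so the real encoder checkpoint must differ.
        self.graph = nx.DiGraph()
        self.encoder = SentenceTransformer(encoder)

    def integrate(self, subj: str, rel: str, obj: str) -> None:
        """Weave one extracted (subject, relation, object) triple in."""
        for concept in (subj, obj):
            if concept not in self.graph:
                self.graph.add_node(concept, glyph=self.encoder.encode(concept))
        self.graph.add_edge(subj, obj, relation=rel)

    def activate(self, query: str, hops: int = 2, top_k: int = 5) -> list[str]:
        """Seed activation at the nodes nearest the query vector, then
        propagate it outward through typed relationships."""
        q = self.encoder.encode(query)
        # Cosine similarity between the query vector and every Glyph
        scores = {
            node: float(np.dot(q, data["glyph"])
                        / (np.linalg.norm(q) * np.linalg.norm(data["glyph"])))
            for node, data in self.graph.nodes(data=True)
        }
        seeds = sorted(scores, key=scores.get, reverse=True)[:top_k]
        # Breadth-limited propagation: each hop spreads a decayed share of
        # a node's activation to its neighbours.
        activation = {s: scores[s] for s in seeds}
        frontier = set(seeds)
        for _ in range(hops):
            nxt = set()
            for node in frontier:
                for nb in self.graph.successors(node):
                    activation[nb] = max(activation.get(nb, 0.0),
                                         0.5 * activation[node])  # decay factor
                    nxt.add(nb)
            frontier = nxt
        return sorted(activation, key=activation.get, reverse=True)
```

The fixed 0.5 decay factor here stands in for the tension, resonance, and decay parameters that the Calibrate phase would actually tune.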
## Files

- `model.safetensors`: base model in safetensors format (2.34 GB)
- `tynos-1.2b-opus.gguf`: GGUF format for llama.cpp (2.34 GB)
## Usage

### With Transformers (PyTorch)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True lets Transformers load the repo's custom model code
model = AutoModelForCausalLM.from_pretrained("Zandy-Wandy/TYNOS-1.2B-Opus", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Zandy-Wandy/TYNOS-1.2B-Opus")

inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
### With Vexa Crystalline Intelligence

```python
from vexa_integration.synthesizer import VexaSynthesizer
from vexa_integration.lattice import GlyphLattice

# Load crystallized knowledge
lattice = GlyphLattice.load("output/vexa_lattice/tynos_lattice.json.gz")

# Create synthesizer with frozen model
synthesizer = VexaSynthesizer(
    model_dir="model",
    lattice=lattice,
)

# Generate with lattice-grounded context
response = synthesizer.generate("Explain quantum computing")
print(response)
```
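Internally, lattice-grounded generation plausibly works like retrieval-augmented generation: activate the lattice with the query, render the most activated Glyphs as text, and condition the frozen model on that context. The sketch below assumes that flow (and reuses the hypothetical `activate` method from the lattice sketch above); it is not the actual `VexaSynthesizer` implementation.

```python
# Hypothetical view of what a lattice-grounded generate() might do;
# the real VexaSynthesizer API may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

def lattice_grounded_generate(model_dir: str, lattice, query: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True)

    # Pull the most activated concepts for this query (see the GlyphLattice
    # sketch above) and render them as plain-text context. Model weights are
    # never updated; only the prompt changes.
    concepts = lattice.activate(query, top_k=5)
    context = "Relevant knowledge: " + "; ".join(concepts)

    prompt = f"{context}\n\nQuestion: {query}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    # Strip the prompt tokens and return only the newly generated text
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```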
### With llama.cpp (GGUF)

```bash
llama-cli -m tynos-1.2b-opus.gguf -p "Hello" -n 256
```
## Training Data
Crystallized from 456,828 high-quality examples:
- Alpaca (52k)
- ShareGPT (134k)
- OpenHermes (164k)
- Dolly (15k)
- MathInstruct (50k)
- Medical QA (41k)
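As a rough illustration of the Ingest phase over these sources, the sketch below parses the two most common JSONL schemas: Alpaca-style instruction/input/output records and ShareGPT-style conversations lists. The function name is hypothetical and not part of the Vexa API.

```python
import json

def ingest_jsonl(path: str) -> list[tuple[str, str]]:
    """Collect (prompt, response) pairs from Alpaca- or ShareGPT-style JSONL."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if "instruction" in record:                 # Alpaca schema
                prompt = record["instruction"]
                if record.get("input"):                 # optional input field
                    prompt += "\n" + record["input"]
                pairs.append((prompt, record["output"]))
            elif "conversations" in record:             # ShareGPT schema
                turns = record["conversations"]
                # Pair each human turn with the assistant turn that follows it
                for a, b in zip(turns, turns[1:]):
                    if a.get("from") == "human" and b.get("from") == "gpt":
                        pairs.append((a["value"], b["value"]))
    return pairs
```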
## Performance

Vexa crystallization completes in roughly 10 minutes for ~2B-parameter models, versus 21+ hours for traditional training methods (21 hours is about 1,260 minutes, so the speedup exceeds 100x).
## Limitations

- Base model weights are frozen; new knowledge is stored in the GlyphLattice
- Full crystalline intelligence features require the Vexa integration modules
- The GGUF file contains the base model only (the lattice knowledge ships separately)
## License
Apache 2.0