---
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- vibescript
- code-compression
- lora
- gguf
- qwen3
language:
- en
pipeline_tag: text-generation
---

# VibeScript - Code to DSL Converter

**vibecoder-discern** converts natural language and code into VibeScript, a compact symbolic DSL for expressing programming concepts.

## What is VibeScript?

VibeScript compresses verbose code into symbolic notation:

| Code | VibeScript |
|------|------------|
| `function add(a, b) { return a + b; }` | `Ω> add!(a, b)` |
| `const users = await db.query(...)` | `δ.m.p.query()` |
| `app.get('/api/users', ...)` | `θ.m.route(θ.e, ζ.x)` |
| `if (error) { throw new Error(...) }` | `~system~γ#error!` |

## Model Variants

| Path | Format | Size | Use Case |
|------|--------|------|----------|
| `/lora-adapter/` | LoRA | ~13 MB | Merge with your own Qwen3-1.7B |
| `/merged-model/` | HuggingFace | ~3.4 GB | Ready to use with transformers |
| `/gguf/` | GGUF Q4_K_M | ~1.1 GB | llama.cpp / Ollama |

## Quick Start

### llama.cpp (GGUF)

```bash
# Download
wget https://huggingface.co/calebboud/vibescript/resolve/main/gguf/vibecoder-discern-1.7B-Q4_K_M.gguf

# Run
llama-cli -m vibecoder-discern-1.7B-Q4_K_M.gguf \
  -p "Convert this to vibescript: function multiply(x, y) { return x * y; }" \
  -n 100 --temp 0.7
```

### Transformers (Merged Model)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("calebboud/vibescript", subfolder="merged-model")
tokenizer = AutoTokenizer.from_pretrained("calebboud/vibescript", subfolder="merged-model")

prompt = "Convert this to vibescript: console.log('Hello World')"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### LoRA Adapter (Merge Yourself)

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "calebboud/vibescript", subfolder="lora-adapter")
merged = model.merge_and_unload()
```

## Training Details

- **Base Model:** Qwen/Qwen3-1.7B
- **Method:** LoRA (r=8, alpha=16)
- **Target Modules:** q_proj, k_proj, v_proj, o_proj
- **Dataset:** 885 code → vibescript examples
- **Task:** CAUSAL_LM

## VibeScript Symbols

| Symbol | Meaning |
|--------|---------|
| `Ω>` | Function definition |
| `Σ` | Route/scaffold |
| `δ` | Database operations |
| `θ` | HTTP/API |
| `γ` | Error handling |
| `ζ` | Structure/scaffold |
| `α` | Analysis |
| `ε` | Dependencies |

## Coming Soon

- **vibecoder-expand**: VibeScript → Code (reverse direction)

## License

Apache 2.0
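The variants table lists Ollama as a target for the GGUF build, but only llama.cpp usage is shown above. As a minimal sketch, the downloaded GGUF can be wrapped in an Ollama Modelfile; the model name `vibecoder` and the temperature value here are arbitrary choices, not part of this release:

```
# Modelfile - assumes the GGUF was downloaded into the current directory
FROM ./vibecoder-discern-1.7B-Q4_K_M.gguf
PARAMETER temperature 0.7

# Build and run:
#   ollama create vibecoder -f Modelfile
#   ollama run vibecoder "Convert this to vibescript: console.log('hi')"
```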