---
title: README
emoji: ๐
colorFrom: green
colorTo: gray
sdk: static
pinned: false
---
# VERBAREX
We focus on fine-tuning large language models for improved factual consistency and instruction adherence.

Our primary goal is to bridge the gap between raw base models and practical, stable assistants. We test and release models tuned to specific behavioral constraints, with an emphasis on minimizing hallucinations in open-ended generation.
### Current Projects

* **LuminoLex Series:** A set of fine-tuned models based on the Qwen architecture, optimized for strict identity retention and factual accuracy.
  * *LuminoLex-14B:* Our flagship instruct-tuned model.
### Roadmap

* **Foundation Models:** We are currently building the data pipelines and infrastructure needed to pre-train custom architectures from scratch.