
---
library_name: transformers
tags:
- qwen3
- slm
- cognitive-ai
- logic
- verbarex
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
language:
- en
---

# LuminoLex-1.7B (Quantum 4D Edition)

LuminoLex-1.7B is a high-performance Small Language Model (SLM) engineered by VERBAREX. It leverages a unique 4D Cognitive Architecture to deliver reasoning capabilities typically found in much larger models.

## Model Details

### Model Description

LuminoLex-1.7B is built on the Qwen3 architecture and adapted through a deep fine-tuning process using high-rank LoRA and the NEFTune technique. It is designed as a "Pure Brain" model, operating in native Float16 to maintain maximum logical fidelity.

- **Developed by:** VERBAREX
- **Model type:** Causal Language Model (SLM)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-1.7B

### Model Sources

- **Repository:** VERBAREX/LuminoLex-1.7B
- **Architecture:** 4D Cognitive Architecture (ACL, QPB, HIM, TLF)

## Uses

### Direct Use

LuminoLex is optimized for complex reasoning, mathematical problem solving, and algorithmic code generation. It is intended for deployment in environments where low latency and high cognitive density are required.

### Out-of-Scope Use

This model is not intended for high-stakes medical or legal advice without human oversight. Despite its advanced reasoning, it remains a 1.7B parameter model and may exhibit limitations in broad world-knowledge retrieval compared to LLMs.

## Bias, Risks, and Limitations

LuminoLex is trained to be factually grounded; however, users should be aware of potential hallucinations in niche data areas. The "Quantum Probability Branching" attempts to mitigate logic errors, but verification is recommended for mission-critical code.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the tokenizer and the model in native Float16
tokenizer = AutoTokenizer.from_pretrained("VERBAREX/LuminoLex-1.7B")
model = AutoModelForCausalLM.from_pretrained(
    "VERBAREX/LuminoLex-1.7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
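
A minimal generation sketch, assuming the model inherits Qwen3's chat template; the prompt and generation settings below are illustrative:

```python
# Illustrative prompt; LuminoLex is assumed to use the Qwen3 chat template
messages = [{"role": "user", "content": "Solve step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```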

## Training Details

### Training Data

The model uses a Tri-Core Balanced Dataset strategy (one possible assembly is sketched after the list):

- **General Logic:** UltraChat (conversational flow)
- **Mathematics:** Orca-Math (step-by-step reasoning)
- **Coding:** Evol-Instruct-Code (programming logic)
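
A minimal sketch of how such a balanced tri-core mixture could be assembled with the `datasets` library. The dataset IDs, splits, column names, and the equal mixing ratio are assumptions; the card does not publish the exact recipe:

```python
from datasets import load_dataset, interleave_datasets

# Hypothetical source IDs and splits -- not confirmed by the model card
chat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
math = load_dataset("microsoft/orca-math-word-problems-200k", split="train")
code = load_dataset("nickrosh/Evol-Instruct-Code-80k-v1", split="train")

# Flatten each source to a shared {"text": ...} schema so they can be interleaved
chat = chat.map(
    lambda e: {"text": "\n".join(m["content"] for m in e["messages"])},
    remove_columns=chat.column_names,
)
math = math.map(
    lambda e: {"text": e["question"] + "\n" + e["answer"]},
    remove_columns=math.column_names,
)
code = code.map(
    lambda e: {"text": e["instruction"] + "\n" + e["output"]},
    remove_columns=code.column_names,
)

# "Balanced": equal sampling probability per core (assumed interpretation)
mixed = interleave_datasets([chat, math, code], probabilities=[1/3, 1/3, 1/3], seed=42)
```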

### Training Procedure

#### Training Hyperparameters

- **Precision:** Full Float16 (non-mixed)
- **Optimizer:** AdamW
- **Learning Rate:** 2e-4 (cosine schedule)
- **LoRA Rank:** 64 (Alpha: 128)
- **NEFTune Noise Alpha:** 5
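
A minimal sketch, assuming a PEFT + TRL setup, of how these hyperparameters could be expressed. The target modules, output path, and trainer choice are assumptions; this is not the released training script:

```python
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# High-rank LoRA (rank 64, alpha 128) as listed above;
# target_modules are an assumption based on common Qwen fine-tuning setups
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# AdamW is the Trainer default; learning rate and schedule match the card.
# fp16=True is the closest stock flag to "full Float16" (strictly, it enables mixed precision).
args = SFTConfig(
    output_dir="luminolex-sft",  # hypothetical path
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    fp16=True,
    neftune_noise_alpha=5,  # NEFTune embedding noise
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B",
    args=args,
    train_dataset=mixed,  # tri-core mixture from the earlier sketch
    peft_config=peft_config,
)
trainer.train()
```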

## Technical Specifications

### Model Architecture and Objective

LuminoLex integrates an experimental 4D Cognitive Architecture:

- **Autonoetic Consciousness Layer (ACL):** Self-verification of identity and constraints.
- **Quantum Probability Branching (QPB):** Parallel path evaluation for logic.
- **Holographic Intent Mesh (HIM):** Nuanced intent analysis.
- **Temporal Logic Folding (TLF):** Future-state projection for safer logic.
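
None of these layers has a published implementation. Purely as an illustration of "parallel path evaluation," the sketch below approximates QPB-style branching at inference time with multi-sample generation; it reuses `tokenizer`, `model`, and `inputs` from the quick-start example and is an assumption, not the model's internal mechanism:

```python
# Sample several candidate reasoning paths ("branches") in parallel.
# Branch count and temperature are arbitrary illustrative choices.
branches = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=4,
)
for i, seq in enumerate(branches):
    print(f"--- branch {i} ---")
    print(tokenizer.decode(seq[inputs.shape[-1]:], skip_special_tokens=True))
# In practice, a verifier or majority vote would select among the branches.
```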

### Compute Infrastructure

#### Software

Transformers, PEFT, PyTorch.

## More Information

Developed by VERBAREX to push the boundaries of Small Language Models through cognitive layer injection.

## Model Card Contact

For inquiries, contact the VERBAREX research team.
