# Genue Matrix 8B ## Model Overview Genue Matrix 8B is a specialized reasoning engine designed for algebraic word problems, symbolic logic, and mathematical proofs. It leverages a weighted multi-source training strategy to maximize its logical deduction capabilities in a compact 8B parameter package. ## The Breakthrough This model achieved a record-low training loss of 0.35 on the MathInstruct dataset, a highly competitive benchmark for reasoning models. By utilizing a Cosine Soft-Landing scheduler and TIGER-DNA data weighting, Genue Matrix maintains coherence in complex multi-step derivations where standard 8B models typically experience logic collapse. ## Training Details - **Base Model**: Llama 3 8B (via Genue Matrix Prime) - **Dataset**: MathInstruct (2,000 weighted samples of GSM8K, MATH, AQuA, and ProofWiki) - **Final Training Loss**: 0.35 - **Scheduler**: Cosine Annealing (for superior convergence) ## Performance & Tests - **Algebraic Reasoning**: High (Verified via MathInstruct) - **Word Problem Logic**: High (Sub-0.40 loss convergence) - **Symbolic Consistency**: High ## Usage The model follows the Alpaca instruction format: ``` ### Instruction: [Your algebraic or logic problem] ### Response: ```