LucentLogico

LucentLogico is a family of compact reasoning-specialized language models developed by Lucid Research. Each model is fine-tuned from an IBM Granite 4.0 base model and optimized for structured, multi-step analytical reasoning.

The LucentLogico series focuses on mathematical derivation, algorithmic code reasoning, and formal logic tasks, with explicit intermediate reasoning steps emphasized during training.


Model Variants

LucentLogico-3B

  • Base: ibm-granite/granite-4.0-micro-base
  • Parameter Class: ~3B
  • Target Use: High-capacity compact reasoning systems

LucentLogico-1B

  • Base: ibm-granite/granite-4.0-1b-base
  • Parameter Class: ~1B
  • Target Use: Efficient reasoning with reduced compute requirements

LucentLogico-350M

  • Base: ibm-granite/granite-4.0-350m-base
  • Parameter Class: ~350M
  • Target Use: Lightweight reasoning experimentation and edge deployment

All variants are trained using the same reasoning-focused dataset and training philosophy, scaled to their respective parameter sizes.
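The variant table above can be captured as a small lookup, useful when selecting a model or tracing it back to its Granite base. This is an illustrative sketch; the dictionary and function names are hypothetical, not part of any official API, and the repo ids are those listed above.

```python
# Hypothetical mapping from each LucentLogico variant to its IBM Granite
# base model and parameter class, mirroring the variant list above.
LUCENTLOGICO_VARIANTS = {
    "LucentLogico-3B": {
        "base": "ibm-granite/granite-4.0-micro-base",
        "params": "~3B",
    },
    "LucentLogico-1B": {
        "base": "ibm-granite/granite-4.0-1b-base",
        "params": "~1B",
    },
    "LucentLogico-350M": {
        "base": "ibm-granite/granite-4.0-350m-base",
        "params": "~350M",
    },
}


def base_model_for(variant: str) -> str:
    """Return the Granite base repo id for a LucentLogico variant."""
    return LUCENTLOGICO_VARIANTS[variant]["base"]
```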


Training Dataset

All LucentLogico models were fine-tuned on:

Lucid-Research/advanced-reasoning-v1-smol
36,000 curated instruction–response pairs dedicated exclusively to advanced reasoning.

Dataset Composition

The dataset is a balanced tri-domain blend.

Mathematical Reasoning (12,000 samples)

Source: MetaMathQA

  • Multi-step mathematical derivations
  • Symbolic manipulation
  • Competition-style reasoning problems
  • Explicit step-by-step solutions

Code & Algorithmic Reasoning (12,000 samples)

Sources: Magicoder-OSS-Instruct-75K, CodeAlpaca-20k

  • Natural language specification to code
  • Algorithm design tasks
  • Debugging and refinement examples
  • Structured execution planning

Formal Logic & STEM Reasoning (12,000 samples)

Source: SlimOrca

  • Logic puzzles
  • Proof-style reasoning
  • Scientific inference
  • Multi-hop structured deduction
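The balanced tri-domain blend described above (12,000 samples per domain) can be sketched as a simple sampling routine. The sample count comes from the composition listed; the function and field names are illustrative, not the actual dataset build script.

```python
import random

# Illustrative sketch of assembling a balanced tri-domain blend:
# draw up to 12,000 examples from each domain pool, tag each example
# with its domain, then shuffle the combined set.
SAMPLES_PER_DOMAIN = 12_000


def build_blend(math_pool, code_pool, logic_pool, seed=0):
    rng = random.Random(seed)
    blend = []
    for domain, pool in [
        ("math", math_pool),
        ("code", code_pool),
        ("logic", logic_pool),
    ]:
        picked = rng.sample(pool, min(SAMPLES_PER_DOMAIN, len(pool)))
        blend.extend({"domain": domain, **ex} for ex in picked)
    rng.shuffle(blend)
    return blend
```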

Design Principles

The LucentLogico series was trained with the following priorities:

  • Explicit intermediate reasoning in every example
  • Balanced cross-domain analytical capability
  • Reduced reasoning drift
  • Structured decomposition of complex problems
  • Standardized instruction–response formatting
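The last two priorities, structured decomposition and standardized instruction–response formatting, can be illustrated with a minimal template. The exact template used in training is not published; this is a hypothetical sketch of the idea that every example carries explicit intermediate steps before its final answer.

```python
# Hypothetical instruction–response template with an explicit,
# numbered reasoning section preceding the final answer.
def format_example(instruction: str, steps: list[str], answer: str) -> str:
    reasoning = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{reasoning}\nFinal answer: {answer}"
    )
```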

The training dataset deliberately excludes conversational and alignment-focused data in order to maintain strict specialization in reasoning performance.


Intended Use

LucentLogico models are designed for:

  • Step-by-step mathematical reasoning
  • Algorithmic code synthesis
  • Logical deduction and proof-style analysis
  • Technical reasoning systems
  • Educational analytical applications

Limitations

  • Not optimized for general conversation or roleplay
  • May produce verbose outputs due to step-emphasis training
  • Not fine-tuned for alignment or preference modeling
  • Outputs should be validated before production use
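The last point, validating outputs before production use, can be as simple as a structural gate in front of the model. The heuristics below are illustrative only; real validation should check the content of the reasoning, not just its shape.

```python
import re

# Minimal sketch of a pre-acceptance check: require at least one
# explicit "Step N" line and a stated final answer before an output
# is passed downstream. Heuristics are illustrative, not exhaustive.
def looks_like_valid_reasoning(output: str) -> bool:
    has_steps = bool(re.search(r"(?im)^step \d+", output))
    has_answer = bool(re.search(r"(?i)final answer", output))
    return has_steps and has_answer
```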

Attribution and Licensing

LucentLogico models were fine-tuned on Lucid-Research/advanced-reasoning-v1-smol, which incorporates or derives from:

  • MetaMathQA
  • Magicoder-OSS-Instruct-75K
  • CodeAlpaca-20k
  • SlimOrca

Users are responsible for complying with the original licenses of all upstream datasets.

Each LucentLogico variant follows the licensing terms of its respective IBM Granite base model:

  • ibm-granite/granite-4.0-micro-base
  • ibm-granite/granite-4.0-1b-base
  • ibm-granite/granite-4.0-350m-base