---
library_name: transformers
tags:
- CoT
- reasoning
license: apache-2.0
datasets:
- Lucid-Research/advanced-reasoning-v1-smol
base_model:
- ibm-granite/granite-4.0-micro-base
---
# LucentLogico
LucentLogico is a family of compact reasoning-specialized language models developed by Lucent Research. Each model is fine-tuned from an IBM Granite 4.0 base model and optimized for structured, multi-step analytical reasoning.
The LucentLogico series focuses on mathematical derivation, algorithmic code reasoning, and formal logic tasks, with explicit intermediate reasoning steps emphasized during training.
---
## Model Variants
### LucentLogico-3B
- Base: ibm-granite/granite-4.0-micro-base
- Parameter Class: ~3B
- Target Use: High-capacity compact reasoning systems
### LucentLogico-1B
- Base: ibm-granite/granite-4.0-1b-base
- Parameter Class: ~1B
- Target Use: Efficient reasoning with reduced compute requirements
### LucentLogico-350M
- Base: ibm-granite/granite-4.0-350m-base
- Parameter Class: ~350M
- Target Use: Lightweight reasoning experimentation and edge deployment
All variants are trained using the same reasoning-focused dataset and training philosophy, scaled to their respective parameter sizes.
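Since the card declares `library_name: transformers`, any variant can be loaded with the standard causal-LM API. A minimal sketch; the `Lucent-Research/LucentLogico-3B` repo id and the prompt wording are illustrative assumptions, not confirmed identifiers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual LucentLogico checkpoint you use.
MODEL_ID = "Lucent-Research/LucentLogico-3B"

def build_prompt(question: str) -> str:
    """Wrap a question in an explicit step-by-step instruction, matching the
    series' emphasis on explicit intermediate reasoning. The exact wording is
    an assumption, not a prescribed template."""
    return (
        "Solve the following problem step by step.\n\n"
        f"Problem: {question}\n\nSolution:"
    )

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(
        build_prompt("What is the sum of the first 50 positive integers?"),
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The same snippet works for the 1B and 350M variants by swapping the repo id; the smaller checkpoints are the practical choice for CPU-only or edge environments.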
---
## Training Dataset
All LucentLogico models were fine-tuned on **Lucid-Research/advanced-reasoning-v1-smol**, a curated set of 36,000 instruction–response pairs dedicated exclusively to advanced reasoning.
### Dataset Composition
The dataset is a balanced tri-domain blend.
#### Mathematical Reasoning (12,000 samples)
Source: MetaMathQA
- Multi-step mathematical derivations
- Symbolic manipulation
- Competition-style reasoning problems
- Explicit step-by-step solutions
#### Code & Algorithmic Reasoning (12,000 samples)
Sources: Magicoder-OSS-Instruct-75K, CodeAlpaca-20k
- Natural language specification to code
- Algorithm design tasks
- Debugging and refinement examples
- Structured execution planning
#### Formal Logic & STEM Reasoning (12,000 samples)
Source: SlimOrca
- Logic puzzles
- Proof-style reasoning
- Scientific inference
- Multi-hop structured deduction
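The tri-domain blend above can be checked programmatically with the `datasets` library. A sketch, assuming the dataset exposes a single `train` split; the split name and the `domain` column are assumptions about the schema, not documented fields:

```python
from collections import Counter

# Stated composition of advanced-reasoning-v1-smol: 12,000 samples per domain.
EXPECTED = {
    "math": 12_000,
    "code": 12_000,
    "logic": 12_000,
}
assert sum(EXPECTED.values()) == 36_000  # matches the dataset's stated size

if __name__ == "__main__":
    # Requires network access to the Hugging Face Hub.
    from datasets import load_dataset

    ds = load_dataset("Lucid-Research/advanced-reasoning-v1-smol", split="train")
    print(f"total samples: {len(ds)}")
    # Only count per-domain samples if the (assumed) label column exists.
    if "domain" in ds.column_names:
        print(Counter(ds["domain"]))
```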
---
## Design Principles
The LucentLogico series was trained with the following priorities:
- Explicit intermediate reasoning in every example
- Balanced cross-domain analytical capability
- Reduced reasoning drift
- Structured decomposition of complex problems
- Standardized instruction–response formatting
The training dataset deliberately excludes conversational and alignment-focused data to maintain strict specialization in reasoning.
---
## Intended Use
LucentLogico models are designed for:
- Step-by-step mathematical reasoning
- Algorithmic code synthesis
- Logical deduction and proof-style analysis
- Technical reasoning systems
- Educational analytical applications
---
## Limitations
- Not optimized for general conversation or roleplay
- May produce verbose outputs due to step-emphasis training
- Not fine-tuned for alignment or preference modeling
- Outputs should be validated before production use
---
## Attribution and Licensing
LucentLogico models were fine-tuned on **Lucid-Research/advanced-reasoning-v1-smol**, which incorporates or derives from:
- MetaMathQA
- Magicoder-OSS-Instruct-75K
- CodeAlpaca-20k
- SlimOrca
Users are responsible for complying with the original licenses of all upstream datasets.
Each LucentLogico variant follows the licensing terms of its respective IBM Granite base model:
- ibm-granite/granite-4.0-micro-base
- ibm-granite/granite-4.0-1b-base
- ibm-granite/granite-4.0-350m-base