---
base_model: NewstaR/Fizik-0.6B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---

# Model: Fizik-0.6B-Pro

Note: In rare cases, the model may use a different `<think>` tag format. This does not affect performance or output quality. We're aware of the issue and are working on a fix.

---

## Description

`Fizik-0.6B-Pro` is a refined reasoning model trained on the `Fizik-SFT-Reasoning` dataset — 11,000 examples of structured, step-by-step thinking. Every sample is tagged with `<think>...</think>`, and **all non-reasoning content was removed**.

This model was built to fix the core issue in the Fizik Preview version: inconsistent reasoning behavior. Now, reasoning is always active when prompted, with no ambiguity.

---

## Behavior

- **Always reasons when prompted**
  The model consistently follows the `<think>` structure without skipping steps.
- **No fallback to non-reasoning answers**
  Reasoning is treated as the default behavior.
- **Performs well on multi-step tasks**
  Especially in areas like math, logic, and multi-hop QA.

---

## Intended Use

- Tasks that require explicit reasoning
- Safe deployment where reliable logic is needed
- Research on controlled thought generation

---

## Limitations

- Will **not respond naturally** to prompts that expect short or intuitive answers.
- Use `Fizik-0.6B-Full` if you need toggleable reasoning behavior.

---
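## Parsing the output

Because reasoning is always wrapped in `<think>...</think>` tags, downstream code typically needs to separate the reasoning trace from the final answer. Below is a minimal sketch of such a parser; the helper name `split_reasoning` is illustrative and not part of any model API, and it assumes the standard tag format (outputs using a variant tag format, as noted above, would fall through to the no-tag branch):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw model output into (reasoning, answer).

    Assumes reasoning is wrapped in <think>...</think>. If no such
    tag pair is found, the whole output is treated as the answer
    and the reasoning string is empty.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    # Everything after the closing tag is the user-facing answer.
    answer = text[match.end():].strip()
    return reasoning, answer

# Example on a synthetic output string:
output = "<think>2 + 2 = 4, so the answer is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(output)
```

This keeps the reasoning trace available for inspection or logging while only the final answer is shown to end users.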