---
language:
- en
base_model: ibm-granite/granite-4.0-1b
tags:
- granite
- ibm
- full-finetune
- multi-gpu
- pytorch
- code
- text-generation
pipeline_tag: text-generation
datasets:
- Antired/tradehax-xai-grok-trading-visual-prompts
- lvogel123/arc-agi-1-grok-4
- Crownelius/Hyper-Creative-Grok-V1
- Crownelius/Hyper-Logic-Grok-V1
- Crownelius/Hyper-UltraData-Grok-V1
- TeichAI/brainstorm-v3.1-grok-4-fast-200x
- TeichAI/grok-code-fast-1-1000x
- nmayorga7/math-grok-4
- Liontix/grok-code-fast-1-200x
- Antired/tradehax-xai-grok-image-capabilities
---

# IBM-Grok4-UltraFast-Coder-1B
|
|
This model is a full-parameter fine-tune of ibm-granite/granite-4.0-1b: every weight of the base model was updated during training.
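A minimal inference sketch using the Hugging Face transformers library. The hub id below is assumed from this card's title and may differ from the actual repository; replace it with the real id before running.

```python
# Minimal inference sketch for this checkpoint.
# NOTE: model_id is an assumption based on the card title, not a confirmed hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IBM-Grok4-UltraFast-Coder-1B"  # hypothetical; substitute the actual repo id

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the fine-tuned checkpoint and return a text completion."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```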
|
|
Training notes:
- Full-parameter fine-tuning (no adapters, no LoRA, no QLoRA)
- Dual-GPU DDP (PyTorch DistributedDataParallel) training on Kaggle
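The full-parameter, no-adapter setup above can be sketched as follows. This is an illustrative pattern only, not the actual training script used for this checkpoint; `wrap_for_ddp` and `trainable_fraction` are hypothetical helper names.

```python
# Illustrative sketch of the dual-GPU DDP full fine-tune pattern described
# above -- not the actual training script used for this checkpoint.
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

def wrap_for_ddp(model: nn.Module, local_rank: int) -> nn.Module:
    """Move the whole model to one GPU and wrap it so gradients sync across ranks."""
    model = model.to(local_rank)
    return DistributedDataParallel(model, device_ids=[local_rank])

def trainable_fraction(model: nn.Module) -> float:
    """Full fine-tuning keeps every parameter trainable, unlike LoRA/QLoRA,
    which freeze the base weights and train only small adapter matrices."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total
```

On a two-GPU Kaggle instance such a script would typically be launched with `torchrun --nproc_per_node=2 train.py` (command illustrative), which starts one DDP process per GPU.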