Uploaded model

  • Developed by: finnianx
  • License: apache-2.0
  • Finetuned from model: unsloth/LFM2-1.2B

This lfm2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

A version of LFM2-1.2B finetuned on the ytz20/LMSYS-Chat-GPT-5-Chat-Response dataset. It mimics the behavior and response style of ChatGPT 5. The training loss was computed on responses only to improve accuracy.
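
"Trained on responses only" refers to completion-only loss masking: prompt tokens are excluded from the loss, so gradients come only from the assistant's replies. Below is a minimal sketch using TRL's DataCollatorForCompletionOnlyLM; the ChatML-style response marker is an assumption about LFM2's chat template, not something stated in this card.

```python
# A minimal sketch of response-only loss masking with TRL's
# completion-only collator. The response marker is an assumption about
# LFM2's chat template and may need adjusting.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("unsloth/LFM2-1.2B")

# Labels for every token before the response marker are set to -100,
# so only response tokens contribute to the training loss.
collator = DataCollatorForCompletionOnlyLM(
    response_template="<|im_start|>assistant",  # assumed marker
    tokenizer=tokenizer,
)
```

The collator would then be passed as data_collator to the trainer (see the sketch after the parameter list below); Unsloth also ships a train_on_responses_only helper that applies the same masking.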

Training Parameters

  • LoRA rank: r = 32
  • LoRA alpha: lora_alpha = 32
  • Learning rate: learning_rate = 2e-4
  • Training epochs: num_train_epochs = 1
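
For orientation, here is a hedged Unsloth + TRL sketch of where these hyperparameters plug in. Only r, lora_alpha, learning_rate, and num_train_epochs come from this card; sequence length, dataset formatting, and output paths are assumptions.

```python
# A hedged sketch of the finetuning setup, assuming the standard
# Unsloth + TRL SFT workflow. Values not listed in the card are
# assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/LFM2-1.2B",
    max_seq_length=2048,  # assumption: not stated in the card
)

# Attach LoRA adapters with the card's rank and alpha; Unsloth's
# default target modules are used.
model = FastLanguageModel.get_peft_model(model, r=32, lora_alpha=32)

# Assumes the dataset has been mapped to a single "text" field with
# the model's chat template applied.
dataset = load_dataset("ytz20/LMSYS-Chat-GPT-5-Chat-Response", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",  # assumption
    ),
)
trainer.train()
```
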
GGUF

  • Model size: 1B params
  • Architecture: lfm2
  • Quantizations: 1-bit, 2-bit, 4-bit, 6-bit, 8-bit, 16-bit
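
The GGUF files can be run with llama.cpp-compatible tooling. A minimal sketch using llama-cpp-python follows; the quant filename glob is an assumption about how the files in this repo are named.

```python
# A hedged sketch of local inference on one of the GGUF quantizations
# with llama-cpp-python; the filename glob is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="finnianx/GPT-5-LFM-2-1.2b-Distill",
    filename="*Q4_K_M.gguf",  # assumption: a 4-bit K-quant exists
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain LoRA in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```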

Model tree for finnianx/GPT-5-LFM-2-1.2b-Distill

  • Base model: LiquidAI/LFM2-1.2B
  • Finetuned: unsloth/LFM2-1.2B
  • Quantized: this model (one of 2 quantizations)
Dataset used to train finnianx/GPT-5-LFM-2-1.2b-Distill

  • ytz20/LMSYS-Chat-GPT-5-Chat-Response