---
license: mit
datasets:
  - wop/Extreme-Reasoning-CoT
  - wop/Unlimited-Creativity-Chain-of-Thought
language:
  - en
base_model:
  - LiquidAI/LFM2.5-350M
---

# Creativity-lfm2-5-350M

A fine-tune of LiquidAI/LFM2.5-350M on the following datasets:

- wop/Extreme-Reasoning-CoT
- wop/Unlimited-Creativity-Chain-of-Thought

The model was trained for 60 steps with Unsloth in Google Colab, following the LFM fine-tuning docs.
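A minimal sketch of this kind of Unsloth fine-tuning run is below, assuming a standard Unsloth + TRL setup. The model and dataset names come from this card; the LoRA rank, target modules, batch size, learning rate, and dataset column name are illustrative assumptions, not the exact values used.

```python
# Sketch of an Unsloth LoRA fine-tune; hyperparameters are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset, concatenate_datasets
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LiquidAI/LFM2.5-350M",
    max_seq_length=2048,
    load_in_4bit=False,
)

# Attach LoRA adapters (rank and target modules are assumed values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Combine the two CoT datasets listed in this card.
train_ds = concatenate_datasets([
    load_dataset("wop/Extreme-Reasoning-CoT", split="train"),
    load_dataset("wop/Unlimited-Creativity-Chain-of-Thought", split="train"),
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    args=SFTConfig(
        max_steps=60,  # the card states 60 training steps
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        dataset_text_field="text",  # assumed column name
        output_dir="outputs",
    ),
)
trainer.train()
```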

Train loss graph: *(image)*
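For inference, a minimal sketch using the standard transformers chat-template API is shown below. The repo id `wop/Creativity-lfm2-5-350M` is assumed from the card title; adjust it if the actual repository name differs.

```python
# Sketch of chat-style inference; the repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wop/Creativity-lfm2-5-350M"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a short poem about rain."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```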