Isotope v0.1

Isotope v0.1 is an experimental fine-tune of LiquidAI/LFM2.5-1.2B-Instruct, trained with the MLX framework on Apple Silicon.

Experimental Status

This model is a research artifact created for educational purposes and pipeline testing. It is not intended for production use. The training dataset is synthetic and may change entirely between repository commits without notice.

Model Details

  • Model Name: Isotope v0.1
  • Base Model: LiquidAI/LFM2.5-1.2B-Instruct
  • Library: MLX-LM
  • Hardware: Trained on a MacBook Air M4 with 24 GB of RAM
  • License: MIT
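
For local inference, the model can presumably be loaded with the mlx-lm Python package. A minimal sketch, assuming the repo id `itspaultal/isotope_v0.1` resolves on the Hub and using illustrative generation settings (none of this is documented by the card itself):

```python
# Hypothetical inference sketch with mlx-lm; the generation settings
# are assumptions, not taken from this model card.
try:
    from mlx_lm import load, generate  # requires Apple Silicon and mlx-lm
    HAVE_MLX = True
except ImportError:
    HAVE_MLX = False

def run(prompt: str, repo: str = "itspaultal/isotope_v0.1") -> str:
    """Generate a completion, or a placeholder when mlx-lm is unavailable."""
    if not HAVE_MLX:
        return "(mlx-lm not installed)"
    model, tokenizer = load(repo)  # downloads the weights on first use
    return generate(model, tokenizer, prompt=prompt, max_tokens=128)

if __name__ == "__main__":
    print(run("Name one use of a fine-tuning sandbox."))
```

The import guard keeps the sketch runnable on machines without MLX; on supported hardware, `load` fetches the weights and `generate` produces a completion.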

Purpose

The primary goal of the Isotope project is to experiment with fine-tuning pipelines on local hardware. It serves as a sandbox for testing:

  1. Synthetic data generation.
  2. Hyperparameter tuning within MLX.
  3. The capabilities of small-scale (<3B parameter) models.
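
Hyperparameter experiments of this kind are typically driven through mlx-lm's LoRA tooling (e.g. `mlx_lm.lora --config`). A minimal config sketch; the key names follow mlx-lm's LoRA examples, and every value here is illustrative rather than taken from this card:

```yaml
# Hypothetical mlx_lm.lora config; keys and values are assumptions
# modeled on mlx-lm's LoRA examples, not documented by this card.
model: "LiquidAI/LFM2.5-1.2B-Instruct"
train: true
data: "data"             # directory containing train.jsonl / valid.jsonl
batch_size: 1            # keeps memory use modest on a 24 GB machine
iters: 600
learning_rate: 1e-5
num_layers: 8            # how many transformer layers receive LoRA adapters
adapter_path: "adapters" # where trained adapter weights are saved
```

Small batch sizes and a limited number of adapted layers are the usual levers for fitting a fine-tune into laptop-class unified memory.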

Training Data

The model is currently trained on a small, evolving synthetic dataset. Because this is a playground for experiments, the specific focus of the data (e.g., coding, creative writing, reasoning) may shift significantly between versions.

Safetensors

  • Model size: 0.3B params
  • Tensor types: BF16, U32
  • Format: MLX

Model tree for itspaultal/isotope_v0.1

  • Quantized versions of this model: 34