# chai-mlx weights
MLX-format safetensors weights for Chai-1, converted from the released TorchScript distribution for use with chai-mlx.
## Files
| File | Contents | Size |
|---|---|---|
| `config.json` | Serialized `ChaiConfig` | 2 KB |
| `model.safetensors.index.json` | Sharded weight map | 210 KB |
| `model-trunk.safetensors` | Pairformer trunk | 680 MB |
| `model-diffusion_module.safetensors` | Diffusion module | 512 MB |
| `model-confidence_head.safetensors` | Confidence head | 59 MB |
| `model-token_embedder.safetensors` | Token input embedder | 6.6 MB |
| `model-feature_embedding.safetensors` | Feature embedding stack | 4.8 MB |
| `model-bond_loss_input_proj.safetensors` | Bond feature projection | 2 KB |
Total size: ~1.2 GB.
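The index file maps each tensor name to the shard that stores it, which lets a loader open only the shards it needs. The sketch below illustrates that layout with a small hypothetical excerpt (the tensor names and sizes are invented for illustration; the real `model.safetensors.index.json` contains the full map):

```python
import json
from collections import defaultdict

# Hypothetical excerpt of model.safetensors.index.json.
# The real file maps every tensor in the model to one of the shards above.
index_json = """
{
  "metadata": {"total_size": 1261000000},
  "weight_map": {
    "trunk.blocks.0.attn.q_proj.weight": "model-trunk.safetensors",
    "trunk.blocks.0.attn.k_proj.weight": "model-trunk.safetensors",
    "diffusion_module.time_embed.weight": "model-diffusion_module.safetensors",
    "confidence_head.out_proj.weight": "model-confidence_head.safetensors"
  }
}
"""

index = json.loads(index_json)

# Group tensor names by the shard file that stores them.
shards = defaultdict(list)
for name, shard_file in index["weight_map"].items():
    shards[shard_file].append(name)

for shard_file, names in sorted(shards.items()):
    print(f"{shard_file}: {len(names)} tensor(s)")
```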
## Loading
```python
from chai_mlx import ChaiMLX

# Default load: uses the "reference" precision policy from config.json.
model = ChaiMLX.from_pretrained("josephjojoe/chai-mlx")

# Alternatively, keep the whole MLX port in fp32.
model_fp32 = ChaiMLX.from_pretrained(
    "josephjojoe/chai-mlx",
    compute_dtype="float32",
)
```
`config.json` sets:

- `config_version = "1"`
- `compute_dtype = "reference"`

`reference` is the default runtime precision policy in chai-mlx. Use `compute_dtype="float32"` to keep the MLX port in fp32 throughout.
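For reference, the two documented keys look like this in `config.json` (an excerpt only; the real file holds the full serialized `ChaiConfig`, whose remaining fields are not listed here):

```json
{
  "config_version": "1",
  "compute_dtype": "reference"
}
```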
## Provenance
These weights are a 1:1 tensor rename of Chai Discovery's released Chai-1 TorchScript checkpoints into MLX-compatible safetensors. No retraining, finetuning, or numerical modification was performed.
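A 1:1 tensor rename means only the keys change while the tensor values are carried over bit-for-bit. The sketch below illustrates the idea with invented key names and a toy rename rule; the actual Chai-1 → chai-mlx key mapping lives in the chai-mlx repo and is not reproduced here:

```python
# Hypothetical illustration of a 1:1 tensor rename: keys change, values do not.
# Key names and the rename rule below are examples, not the real mapping.
torch_style = {
    "trunk.blocks.0.attention.query.weight": [[0.1, 0.2]],
    "trunk.blocks.0.attention.key.weight": [[0.3, 0.4]],
}

def rename_key(key: str) -> str:
    """Map a TorchScript-style key to an MLX-style key (illustrative only)."""
    return (
        key.replace("attention.query", "attn.q_proj")
           .replace("attention.key", "attn.k_proj")
    )

# Values pass through untouched; only the dictionary keys are rewritten.
mlx_style = {rename_key(k): v for k, v in torch_style.items()}
```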
## License
Apache-2.0, inherited from upstream Chai-1. See NOTICE in the
chai-mlx repo for attribution.