results: []
---
# **About:**
**A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.**
**This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset. It improves upon the [Bespoke-Stratos-32B](https://huggingface.co/bespokelabs/Bespoke-Stratos-32B) model, which was trained on 17k examples ([Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)).**
*Special thanks to the folks at Open Thoughts for fine-tuning this version of Qwen/Qwen2.5-32B-Instruct. More information about it can be found here:*
[https://huggingface.co/open-thoughts/OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) (Base Model)
[https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts) (Open Thoughts Git Repo)
I simply converted it to MLX format with 4-bit quantization (using mlx-lm version **0.20.5**) for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
## Other Quantizations:

| Link | Type | Size | Notes |
|------|------|------|-------|
| [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-32B-8bit-mlx) | 8-bit | 34.80 GB | **Best Quality** |
| [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-32B-4bit-mlx) | 4-bit | 18.40 GB | Good Quality |

# Alejandroolmedo/OpenThinker-32B-4bit-mlx
The Model [Alejandroolmedo/OpenThinker-32B-4bit-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-32B-4bit-mlx) was converted to MLX format from [open-thoughts/OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) using mlx-lm version **0.20.5**.
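A typical way to run the converted model on an Apple Silicon Mac is with the `mlx-lm` Python package (`pip install mlx-lm`). The sketch below assumes the `load`/`generate` API as of mlx-lm 0.20.x and requires macOS on Apple Silicon; the first call downloads the quantized weights from the Hub.

```python
from mlx_lm import load, generate

# Fetches the 4-bit weights and tokenizer from the Hugging Face Hub on first use.
model, tokenizer = load("Alejandroolmedo/OpenThinker-32B-4bit-mlx")

prompt = "What is 7 * 6? Think step by step."

# Streams tokens to stdout when verbose=True and returns the generated text.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```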