Qwen3-25B Exp — A Smaller Version of Qwen3 32B

Introduction

I created Qwen3-25B-Exp, a pruned version of Qwen3 32B.
I shrunk the model by removing some of its transformer layers, going from 64 layers down to 48.
The result is roughly 78% of the original 32B model's size.
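Dropping 16 of 64 layers removes 25% of the per-layer weights, but shared parameters such as the embeddings and LM head are untouched, so the overall model shrinks by a bit less than that. The arithmetic can be sketched as follows; the split between per-layer and shared parameters here is an illustrative assumption, not the real Qwen3-32B breakdown:

```python
# Illustrative estimate of how depth pruning shrinks the parameter count.
# SHARED_PARAMS is an assumed value for embeddings + LM head, which pruning
# does not touch; it is not the real Qwen3-32B figure.

TOTAL_PARAMS = 32e9   # original parameter count
TOTAL_LAYERS = 64
KEPT_LAYERS = 48
SHARED_PARAMS = 1.6e9  # assumption: parameters outside the decoder layers

per_layer = (TOTAL_PARAMS - SHARED_PARAMS) / TOTAL_LAYERS
pruned_total = SHARED_PARAMS + KEPT_LAYERS * per_layer
fraction = pruned_total / TOTAL_PARAMS

print(f"pruned size ~ {pruned_total / 1e9:.1f}B params "
      f"({fraction:.0%} of the original)")
```

With these assumed numbers the estimate lands in the mid-20B range, in the same ballpark as the reported ~78%; the exact fraction depends on how large the shared (non-layer) parameters actually are.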

Running it locally

Running the model locally is straightforward. Here's how to run it with transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AryavA/Qwen3-25B-Exp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
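The pipeline returns a list with one result per input; for chat-style input, the "generated_text" field holds the full message list, with the model's reply as the last entry. A small helper for pulling out that reply (this assumes the pipeline's usual chat output shape; adjust if your transformers version differs):

```python
def last_reply(pipe_output):
    """Extract the assistant's reply from a chat-style pipeline result.

    Assumes the text-generation pipeline's chat output shape:
    [{"generated_text": [ {...user msg...}, {...assistant msg...} ]}]
    """
    return pipe_output[0]["generated_text"][-1]["content"]

# Usage (after running the pipeline above):
# reply = last_reply(pipe(messages))
# print(reply)
```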

Risks & Considerations

Right now, the model is untested: it may error out, fail to reason correctly, or produce harmful or illegal content.

Format: Safetensors · Model size: 25B params · Tensor type: BF16

Model tree for AryavA/Qwen3-25B-Exp

Base model: Qwen/Qwen3-32B