# warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit
This model, `warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit`, was converted to MLX format from `cerebras/GLM-4.7-Flash-REAP-23B-A3B` using mlx-lm version 0.30.4.
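The conversion step itself is not shown on this card, but with mlx-lm it is typically a single call to `convert` with quantization enabled. The snippet below is a sketch, assuming the standard `convert` API from mlx-lm; the output path is illustrative, not the one actually used.

```python
from mlx_lm import convert

# Sketch of an 8-bit MLX conversion (output path is illustrative).
convert(
    "cerebras/GLM-4.7-Flash-REAP-23B-A3B",
    mlx_path="GLM-4.7-Flash-REAP-23B-A3B-8bit",
    quantize=True,
    q_bits=8,
)
```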
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
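For longer responses you can stream tokens as they are produced instead of waiting for the full completion. This is a minimal sketch, assuming a recent mlx-lm where `stream_generate` yields response objects with a `.text` field; the `max_tokens` value is an illustrative choice.

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as it is generated; max_tokens caps the response length.
for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```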
Model size: 23B params
Tensor types: BF16 · U32 · F32
Quantization: 8-bit
Model tree for warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit:
- Base model: zai-org/GLM-4.7-Flash
- Finetuned: cerebras/GLM-4.7-Flash-REAP-23B-A3B