Instructions for using marksverdhei/GLM-4.7-Flash-FP8 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use marksverdhei/GLM-4.7-Flash-FP8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="marksverdhei/GLM-4.7-Flash-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("marksverdhei/GLM-4.7-Flash-FP8")
model = AutoModelForCausalLM.from_pretrained("marksverdhei/GLM-4.7-Flash-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use marksverdhei/GLM-4.7-Flash-FP8 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "marksverdhei/GLM-4.7-Flash-FP8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "marksverdhei/GLM-4.7-Flash-FP8",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/marksverdhei/GLM-4.7-Flash-FP8
```
- SGLang
How to use marksverdhei/GLM-4.7-Flash-FP8 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "marksverdhei/GLM-4.7-Flash-FP8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "marksverdhei/GLM-4.7-Flash-FP8",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "marksverdhei/GLM-4.7-Flash-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "marksverdhei/GLM-4.7-Flash-FP8",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use marksverdhei/GLM-4.7-Flash-FP8 with Docker Model Runner:
```shell
docker model run hf.co/marksverdhei/GLM-4.7-Flash-FP8
```
Question on size of model weights
The original model is BF16 with weights of ~62 GB, while this FP8 version is ~55 GB.
I would have expected these weights to be ~32 GB (rough guess). Am I missing something, or can you explain why they are still so large?
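Back-of-the-envelope arithmetic behind the question above (a sketch; it assumes the ~62 GB checkpoint is almost entirely weight bytes, which the thread does not state explicitly):

```python
# Rough weight-size check: BF16 stores 2 bytes/param, FP8 stores 1 byte/param.
bf16_gb = 62
params_b = bf16_gb / 2          # implies roughly 31 billion parameters
expected_fp8_gb = params_b * 1  # ~31 GB if every tensor were quantized to FP8
print(expected_fp8_gb)          # close to the ~32 GB guess in the question

# The observed ~55 GB suggests many tensors stayed in BF16
# (commonly embeddings, norms, or layers excluded from quantization).
still_bf16_gb = 55 - expected_fp8_gb  # extra bytes vs. the fully-quantized estimate
print(still_bf16_gb)
```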
I am currently working on fixing this! Stay tuned.
Yeah, this one doesn't want to load, unfortunately. I'll be watching this closely, though! I had to jump through a lot of hoops to get the native model to load on 2x 5090s.
The model checkpoint is smaller now. I'm trying to do a test run; it's taking a little while because I needed to fix some things in my vLLM fork.
Good news - the model is now working. You can run it with our vLLM fork that adds MLA detection for glm4_moe_lite:
```shell
pip install git+https://github.com/marksverdhei/vllm.git@fix/glm4-moe-mla-detection
```
It also requires transformers 5.0+:
```shell
pip install git+https://github.com/huggingface/transformers.git
```
Once installed, it should work out of the box. We tested on 2x RTX 3090 with tensor_parallel_size=2 and got 19.4 tokens/sec at 14.7 GB VRAM per GPU.
Working on getting the MLA detection fix merged upstream, but the fork should work in the meantime!
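Putting the thread's steps together, a launch matching the tested setup might look like the sketch below. Assumptions: the fork keeps vLLM's standard CLI, and `--tensor-parallel-size 2` is the flag corresponding to the `tensor_parallel_size=2` mentioned above; neither is confirmed against the fork itself.

```shell
# Install the vLLM fork with MLA detection for glm4_moe_lite
pip install git+https://github.com/marksverdhei/vllm.git@fix/glm4-moe-mla-detection

# Install transformers 5.0+ from main
pip install git+https://github.com/huggingface/transformers.git

# Serve across two GPUs, matching the 2x RTX 3090 test
vllm serve "marksverdhei/GLM-4.7-Flash-FP8" --tensor-parallel-size 2
```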
This was an automated message on behalf of @marksverdhei.