---
language:
  - en
license: apache-2.0
library_name: llama-cpp
tags:
  - gguf
  - forkjoin-ai
pipeline_tag: text-generation
---

# GLM-4-9B (GGUF)

Forkjoin.ai conversion of GLM-4-9B to GGUF format for edge deployment.

## Usage

### With llama.cpp

```bash
./llama-cli -m glm-4-9b-gguf.gguf -p "Your prompt here" -n 256
```
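Before loading the file, it can be worth sanity-checking the download: the GGUF format requires every file to begin with the 4-byte magic `GGUF`, so a truncated or mislabeled download is easy to catch. A minimal Python sketch (the function name is illustrative):

```python
# Sanity-check a download: per the GGUF specification, every GGUF file
# starts with the 4-byte magic b"GGUF". Catches truncated or mislabeled
# files before handing them to llama.cpp.
GGUF_MAGIC = b"GGUF"

def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

Run it against `glm-4-9b-gguf.gguf` after download; a `False` result usually means an interrupted transfer or an LFS pointer file rather than the real weights.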

### With Ollama

Create a `Modelfile`:

```
FROM ./glm-4-9b-gguf.gguf
```

Then build and run the model:

```bash
ollama create glm-4-9b-gguf -f Modelfile
ollama run glm-4-9b-gguf
```
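The Modelfile can also pin default runtime parameters via Ollama's `PARAMETER` directive. A hedged sketch; the values below are illustrative, not tuned settings for this model:

```
FROM ./glm-4-9b-gguf.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

Parameters set here become the defaults for every `ollama run` of the created model, so they are a convenient place to cap context length on memory-constrained edge devices.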

## About Forkjoin.ai

Forkjoin.ai runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural-language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.

## License

Apache 2.0 (follows upstream model license)