Zeta 2 GGUF

This repository contains direct GGUF conversions of zed-industries/zeta-2.

The quantizations prefixed with I in this repository were produced without an "importance matrix", so their quality may be limited.

Zeta 2 is a code edit prediction (also known as next-edit suggestion) model fine-tuned from ByteDance-Seed/Seed-Coder-8B-Base.

Given the code context, the edit history, and an editable region around the cursor, it predicts the rewritten content of that region.

Zed Editor + Llama.cpp

This guide assumes you are using a GPU with enough VRAM to load the model in full.

I wasn’t able to get significantly better predictions from this model compared with the previous Zeta model, so quality may vary.

  1. Install llama.cpp (preferably with GPU acceleration).
  2. Download the model manually (alternatively, you can use the -hf option in the commands below to load the model from Hugging Face).
  3. Run the model to check that it works:
    llama-cli -m zeta2-Q4_K_M.gguf
    
  4. Start the llama.cpp server:
    llama-server -m zeta2-Q4_K_M.gguf --port 13377 --ctx-size 4096 --jinja -ngl 100 --host 0.0.0.0 --api-key "APIKEY"
    
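With the server running, a quick smoke test from another terminal can confirm it is reachable (a sketch, assuming the flags above; llama-server exposes a /health endpoint and an OpenAI-compatible /v1/completions endpoint):

```shell
# Check that the server is up and the model has finished loading.
curl -s http://localhost:13377/health

# Request a small completion through the OpenAI-compatible endpoint.
# The API key must match the one passed via --api-key.
curl -s http://localhost:13377/v1/completions \
  -H "Authorization: Bearer APIKEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def add(a, b):", "max_tokens": 32}'
```

If the health check fails, check the server log; loading a large quant can take a while.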
Flag Explanation
-m zeta2-Q4_K_M.gguf Loads the model from the given file.
--port 13377 Makes the server listen on port 13377 instead of the default 8080.
--ctx-size 4096 Sets the context window size to 4096 tokens.
--jinja Uses the Jinja chat template embedded in the model instead of the default.
-ngl 100 Offloads up to 100 layers to the GPU, if supported.
--host 0.0.0.0 Binds the server to all network interfaces, so it can accept connections from other machines on your network, not just localhost.
--api-key "APIKEY" Sets the API key; Zed requires one to be set.
  1. Open Zed Editor Settings (GUI) and choose AI. Under Edit Predictions, click Configure.
  2. Scroll down to the OpenAI-compatible API section.
  3. Set the API key to APIKEY and press Enter. This step is not optional, even if you only use localhost (at the time of writing of this guide).
  4. Set the API URL to http://localhost:13377/v1/completions and press Enter. (The port must match the one passed to llama-server.)
  5. Set the model to zeta2-Q4_K_M.gguf and press Enter.
  6. (Optional) Set max output tokens to 256.
  7. Scroll up.
  8. Set the provider to OpenAI-compatible API.
  9. Restart Zed.
  10. Completions should work now (quality may vary).

Zed Editor + Ollama mini-guide

Ollama support seems not to be the best at the moment; I recommend using llama.cpp.

  1. Pull the model (this example uses the Q4_K_M quant; you can use a different quant if you prefer):

    ollama pull hf.co/bluevoid-pl/zeta2-GUFF:Q4_K_M
    
  2. Configure Zed Editor

    1. Open Settings (GUI) and choose AI. Under Edit Predictions, click Configure.
    2. Scroll down.
    3. Confirm the host URL is http://localhost:11434 (if you changed the default, adjust this accordingly).
    4. Set the model to bluevoid-pl/zeta2-GUFF:Q4_K_M (the same quant as before).
    5. Scroll to the top.
    6. Set the provider to Ollama.
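Before pointing Zed at Ollama, you can confirm the model was pulled correctly (a sketch, assuming Ollama is running on its default port 11434):

```shell
# The pulled quant should appear in the list of local models.
ollama list

# Ollama's HTTP API reports the same information; Zed talks to this endpoint.
curl -s http://localhost:11434/api/tags
```

If the model does not appear, re-run the pull command and check for download errors.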

If you have any proposals or recommendations, leave them in the community discussions.

Info
  • Developed by: Zed Industries
  • License: Apache-2.0
  • Fine-tuned from: ByteDance-Seed/Seed-Coder-8B-Base
  • Model version: 0225-s3-seed
  • Format: GGUF
  • Model size: 8B params
  • Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.

