---
base_model: ByteDance-Seed/Seed-OSS-36B-Instruct
model_creator: ByteDance-Seed
model_name: Seed-OSS-36B-Instruct
quantized_by: Second State Inc.
pipeline_tag: text-generation
library_name: transformers
---

# Seed-OSS-36B-Instruct-GGUF

## Original Model

[ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct)

## Run with LlamaEdge

- LlamaEdge version: coming soon

- Prompt template

  - Prompt type:

    - `seed-oss-think` for think mode
    - `seed-oss-no-think` for no-think mode

  - Prompt string

    - `Thinking` mode

      ```text
      system
      You are Doubao, a helpful AI assistant.
      user
      {user_message_1}
      assistant
      {thinking_content}
      {assistant_message_1}
      user
      {user_message_2}
      assistant
      ```

    - `No-thinking` mode

      ```text
      system
      You are Doubao, a helpful AI assistant.
      system
      You are an intelligent assistant that can answer questions in one step without the need for reasoning and thinking, that is, your thinking budget is 0. Next, please skip the thinking process and directly start answering the user's questions.
      user
      {user_message_1}
      assistant
      {assistant_message_1}
      user
      {user_message_2}
      assistant
      ```

- Context size: `512000`

- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. \
    --nn-preload default:GGML:AUTO:Seed-OSS-36B-Instruct-Q5_K_M.gguf \
    llama-api-server.wasm \
    --prompt-template seed-oss-no-think \
    --ctx-size 512000 \
    --model-name seed-oss
  ```

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Seed-OSS-36B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q2_K.gguf) | Q2_K | 2 | 13.6 GB | smallest, significant quality loss - not recommended for most purposes |
| [Seed-OSS-36B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 19.1 GB | small, substantial quality loss |
| [Seed-OSS-36B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 17.6 GB | very small, high quality loss |
| [Seed-OSS-36B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 15.9 GB | very small, high quality loss |
| [Seed-OSS-36B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 20.6 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Seed-OSS-36B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 21.8 GB | medium, balanced quality - recommended |
| [Seed-OSS-36B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 20.7 GB | small, greater quality loss |
| [Seed-OSS-36B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 25.0 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Seed-OSS-36B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 25.6 GB | large, very low quality loss - recommended |
| [Seed-OSS-36B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 25.0 GB | large, low quality loss - recommended |
| [Seed-OSS-36B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q6_K.gguf) | Q6_K | 6 | 29.7 GB | very large, extremely low quality loss |
| [Seed-OSS-36B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 38.4 GB | very large, extremely low quality loss - not recommended |
| [Seed-OSS-36B-Instruct-f16-00001-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00001-of-00003.gguf) | f16 | 16 | 30.0 GB | |
| [Seed-OSS-36B-Instruct-f16-00002-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00002-of-00003.gguf) | f16 | 16 | 30.0 GB | |
| [Seed-OSS-36B-Instruct-f16-00003-of-00003.gguf](https://huggingface.co/second-state/Seed-OSS-36B-Instruct-GGUF/blob/main/Seed-OSS-36B-Instruct-f16-00003-of-00003.gguf) | f16 | 16 | 12.4 GB | |

*Quantized with llama.cpp b6301.*
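For readers who want to see how the no-think prompt layout above comes together, here is a minimal sketch in Python. The function name and message structure are illustrative, not part of LlamaEdge; the real `seed-oss-no-think` template applied by the server may wrap each turn in special tokens not shown in the plain-text template.

```python
def build_no_think_prompt(messages, system_prompt="You are Doubao, a helpful AI assistant."):
    """Assemble a Seed-OSS no-think prompt string (illustrative sketch).

    `messages` is a list of {"role": "user" | "assistant", "content": str}
    dicts, mirroring the turn order in the template above.
    """
    budget_note = (
        "You are an intelligent assistant that can answer questions in one "
        "step without the need for reasoning and thinking, that is, your "
        "thinking budget is 0. Next, please skip the thinking process and "
        "directly start answering the user's questions."
    )
    # Two system blocks: the persona, then the zero-thinking-budget notice.
    parts = ["system", system_prompt, "system", budget_note]
    for msg in messages:
        parts.append(msg["role"])
        parts.append(msg["content"])
    # End with the assistant header so the model generates the next reply.
    parts.append("assistant")
    return "\n".join(parts)
```

When the server's `--prompt-template seed-oss-no-think` flag is used, this assembly happens inside LlamaEdge; the sketch is only meant to make the turn structure concrete.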