{ "base_model": "ByteDance-Seed/Seed-Coder-8B-Instruct", "tree": [ { "model_id": "ByteDance-Seed/Seed-Coder-8B-Instruct", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Base\npipeline_tag: text-generation\nlibrary_name: transformers\n---\n\n# Seed-Coder-8B-Instruct\n\n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n| Seed-Coder-8B-Reasoning-bf16 | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. 
|\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code using the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True, \n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\nprint(response)\n\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct 
| **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@misc{seed2025seedcoderletcodemodel,\n title={{Seed-Coder}: Let the Code Model Curate Data for Itself}, \n author={{ByteDance Seed} and Yuyu Zhang and Jing Su and Yifan Sun and Chenguang Xi and Xia Xiao and Shen Zheng and Anxiang Zhang and Kaibo Liu and Daoguang Zan and Tao Sun and Jinhua Zhu and Shulin Xin and Dong Huang and Yetao Bai and Lixin Dong and Chao Li and Jianchong Chen and Hanzhi Zhou and Yifan Huang and Guanghan Ning and Xierui Song and Jiaze Chen and Siyao Liu and Kai Shen and Liang Xiang and Yonghui Wu},\n year={2025},\n eprint={2506.03524},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2506.03524}, \n}\n```", "metadata": "\"N/A\"", "depth": 0, "children": [ "huihui-ai/Seed-Coder-8B-Instruct-abliterated", "unsloth/Seed-Coder-8B-Instruct", "willyli/Seed-Coder-8B-Instruct-KTO", "ccckblaze/Seed-Coder-8B-Instruct-4bit-MLX-AWQ" ], "children_count": 4, "adapters": [], "adapters_count": 0, "quantized": [ "Orion-zhen/Seed-Coder-8B-Instruct-AWQ", "second-state/Seed-Coder-8B-Instruct-GGUF", "gaianet/Seed-Coder-8B-Instruct-GGUF", "cgus/Seed-Coder-8B-Instruct-exl2", "mradermacher/Seed-Coder-8B-Instruct-GGUF", "mradermacher/Seed-Coder-8B-Instruct-i1-GGUF", "DevQuasar/ByteDance-Seed.Seed-Coder-8B-Instruct-GGUF", "kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF", "noneUsername/Seed-Coder-8B-Instruct-W8A8", 
"unsloth/Seed-Coder-8B-Instruct-GGUF", "unsloth/Seed-Coder-8B-Instruct-unsloth-bnb-4bit", "unsloth/Seed-Coder-8B-Instruct-bnb-4bit", "mlx-community/Seed-Coder-8B-Instruct-6bit", "sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF" ], "quantized_count": 14, "merges": [], "merges_count": 0, "total_derivatives": 18, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "ByteDance-Seed/Seed-Coder-8B-Instruct", "base_model_relation": "base" }, { "model_id": "huihui-ai/Seed-Coder-8B-Instruct-abliterated", "gated": "unknown", "card": "---\nlicense: mit\npipeline_tag: text-generation\nlibrary_name: transformers\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\ntags:\n- chat\n- abliterated\n- uncensored\nextra_gated_prompt: >-\n **Usage Warnings**\n\n\n \u201c**Risk of Sensitive or Controversial Outputs**\u201c: This model\u2019s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.\n\n \u201c**Not Suitable for All Audiences**:\u201c Due to limited content filtering, the model\u2019s outputs may be inappropriate for public settings, underage users, or applications requiring high security.\n\n \u201c**Legal and Ethical Responsibilities**\u201c: Users must ensure their usage complies with local laws and ethical standards. 
Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.\n\n \u201c**Research and Experimental Use**\u201c: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.\n\n \u201c**Monitoring and Review Recommendations**\u201c: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.\n\n \u201c**No Default Safety Guarantees**\u201c: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.\n\n\n---\n\n# huihui-ai/Seed-Coder-8B-Instruct-abliterated\n\n\nThis is an uncensored version of [ByteDance-Seed/Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).\nThis is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens. 
\n\nAblation was performed using a new and faster method, which yields better results.\n\n## ollama\n\nYou can use [huihui_ai/seed-coder-abliterate](https://ollama.com/huihui_ai/seed-coder-abliterate) directly, \n```\nollama run huihui_ai/seed-coder-abliterate\n```\n\n## Usage\nYou can use this model in your applications by loading it with Hugging Face's `transformers` library:\n\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer\nimport torch\nimport os\nimport signal\n\ncpu_count = os.cpu_count()\nprint(f\"Number of CPU cores in the system: {cpu_count}\")\nhalf_cpu_count = cpu_count // 2\nos.environ[\"MKL_NUM_THREADS\"] = str(half_cpu_count)\nos.environ[\"OMP_NUM_THREADS\"] = str(half_cpu_count)\ntorch.set_num_threads(half_cpu_count)\n\nprint(f\"PyTorch threads: {torch.get_num_threads()}\")\nprint(f\"MKL threads: {os.getenv('MKL_NUM_THREADS')}\")\nprint(f\"OMP threads: {os.getenv('OMP_NUM_THREADS')}\")\n\n# Load the model and tokenizer\nNEW_MODEL_ID = \"huihui-ai/Seed-Coder-8B-Instruct-abliterated\"\nprint(f\"Load Model {NEW_MODEL_ID} ... 
\")\nquant_config_4 = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_compute_dtype=torch.bfloat16,\n bnb_4bit_use_double_quant=True,\n llm_int8_enable_fp32_cpu_offload=True,\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n NEW_MODEL_ID,\n device_map=\"auto\",\n trust_remote_code=True,\n #quantization_config=quant_config_4,\n torch_dtype=torch.bfloat16\n)\ntokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)\nif tokenizer.pad_token is None:\n tokenizer.pad_token = tokenizer.eos_token\ntokenizer.pad_token_id = tokenizer.eos_token_id\n\ninitial_messages = [{\"role\": \"system\", \"content\": \"You are a helpful assistant.\"}]\nmessages = initial_messages.copy()\nskip_prompt=True\nskip_special_tokens=True\nenable_thinking = False  # unused by this model; defined so the generate_stream call below does not raise NameError\n\nclass CustomTextStreamer(TextStreamer):\n def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):\n super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)\n self.generated_text = \"\"\n self.stop_flag = False\n\n def on_finalized_text(self, text: str, stream_end: bool = False):\n self.generated_text += text\n print(text, end=\"\", flush=True)\n if self.stop_flag:\n raise StopIteration\n\n def stop_generation(self):\n self.stop_flag = True\n\ndef generate_stream(model, tokenizer, messages, enable_thinking, skip_prompt, skip_special_tokens, max_new_tokens):\n input_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n add_generation_prompt=True,\n return_tensors=\"pt\"\n )\n attention_mask = torch.ones_like(input_ids, dtype=torch.long)\n tokens = input_ids.to(model.device) \n attention_mask = attention_mask.to(model.device)\n\n streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)\n\n def signal_handler(sig, frame):\n streamer.stop_generation()\n print(\"\\n[Generation stopped by user with Ctrl+C]\")\n\n signal.signal(signal.SIGINT, signal_handler)\n \n print(\"Response: \", end=\"\", flush=True)\n try:\n 
generated_ids = model.generate(\n tokens,\n attention_mask=attention_mask,\n use_cache=False,\n max_new_tokens=max_new_tokens,\n do_sample=True,\n pad_token_id=tokenizer.pad_token_id,\n streamer=streamer\n )\n del generated_ids\n except StopIteration:\n print(\"\\n[Stopped by user]\")\n\n del input_ids, attention_mask\n torch.cuda.empty_cache()\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n return streamer.generated_text, streamer.stop_flag\n\nwhile True:\n user_input = input(\"User: \").strip()\n if user_input.lower() == \"/exit\":\n print(\"Exiting chat.\")\n break\n if user_input.lower() == \"/clear\":\n messages = initial_messages.copy()\n print(\"Chat history cleared. Starting a new conversation.\")\n continue\n if user_input.lower() == \"/skip_prompt\":\n if skip_prompt:\n skip_prompt = False\n print(\"skip_prompt = False.\")\n else:\n skip_prompt = True\n print(\"skip_prompt = True.\") \n continue\n if user_input.lower() == \"/skip_special_tokens\":\n if skip_special_tokens:\n skip_special_tokens = False\n print(\"skip_special_tokens = False.\")\n else:\n skip_special_tokens = True\n print(\"skip_special_tokens = True.\") \n continue\n if not user_input:\n print(\"Input cannot be empty. Please enter something.\")\n continue\n messages.append({\"role\": \"user\", \"content\": user_input})\n response, stop_flag = generate_stream(model, tokenizer, messages, enable_thinking, skip_prompt, skip_special_tokens, 14192)\n print(\"\", flush=True)\n if stop_flag:\n continue\n messages.append({\"role\": \"assistant\", \"content\": response})\n```\n\n\n### Donation\n\nIf you like it, please click 'like' and follow us for more updates. 
\nYou can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.\n\n##### Your donation helps us continue development and improvement; even a cup of coffee helps.\n- bitcoin (BTC):\n```\n bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "DevQuasar/huihui-ai.Seed-Coder-8B-Instruct-abliterated-GGUF", "noneUsername/Seed-Coder-8B-Instruct-abliterated-W8A8" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "unsloth/Seed-Coder-8B-Instruct", "gated": "False", "card": "---\ntags:\n- unsloth\nlicense: mit\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\n---\n
\n

\n Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.\n

\n
\n
\n\n\n# Seed-Coder-8B-Instruct\n\n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n| Seed-Coder-8B-Reasoning-bf16 | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. 
|\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code using the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True, \n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\nprint(response)\n\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct 
| **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "unsloth/Seed-Coder-8B-Instruct", "base_model_relation": "base" }, { "model_id": "willyli/Seed-Coder-8B-Instruct-KTO", "gated": "unknown", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlibrary_name: transformers\nmodel_name: Seed-Coder-8B-Instruct-KTO\ntags:\n- generated_from_trainer\n- trl\n- kto\nlicence: license\n---\n\n# Model Card for Seed-Coder-8B-Instruct-KTO\n\nThis model is a fine-tuned version for price prediction in Thailand as requested by GDX.\nIt has been trained using [TRL](https://github.com/huggingface/trl). 
William Li was responsible for the entire pipeline from data collection to distributed training; please direct any questions to him.\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"willyli/Seed-Coder-8B-Instruct-KTO\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[Visualize the training run on Weights & Biases](https://wandb.ai/shafink-stanford-university/kto-training/runs/x1q8j0jn) \n\n\nThis model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).\n\n### Framework versions\n\n- TRL: 0.18.1\n- Transformers: 4.52.4\n- Pytorch: 2.7.0\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\nCite KTO as:\n\n```bibtex\n@article{ethayarajh2024kto,\n title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},\n author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},\n year = 2024,\n eprint = {arXiv:2402.01306},\n}\n```\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [ "lazichicken/tile-pilot-grpo-lora-v2" ], "adapters_count": 1, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, 
"spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "ccckblaze/Seed-Coder-8B-Instruct-4bit-MLX-AWQ", "gated": "unknown", "card": "---\nlicense: mit\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: mlx\ntags:\n- mlx\n---\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "Orion-zhen/Seed-Coder-8B-Instruct-AWQ", "gated": "False", "card": "---\nlicense: gpl-3.0\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\n---\n\n```yaml\nzero_point: true\nbits: 4\nversion: GEMM\ndataset: HuggingFaceH4/CodeAlpaca_20K\nnum_examples: 256\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "Orion-zhen/Seed-Coder-8B-Instruct-AWQ", "base_model_relation": "base" }, { "model_id": "second-state/Seed-Coder-8B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlicense: mit\nmodel_creator: ByteDance-Seed\nmodel_name: Seed-Coder-8B-Instruct\nquantized_by: Second State Inc.\n---\n\n\n\n
\n\n
\n
\n\n\n# Seed-Coder-8B-Instruct-GGUF\n\n## Original Model\n\n[ByteDance-Seed/Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct)\n\n## Run with LlamaEdge\n\n- LlamaEdge version: [v0.18.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.18.4) and above\n\n- Prompt template\n\n - Prompt type: `seed-instruct`\n\n - Prompt string\n\n ```text\n <[begin\u2581of\u2581sentence]>system\n {system_message}\n\n <[end\u2581of\u2581sentence]><[begin\u2581of\u2581sentence]>user\n {user_message_1}<[end\u2581of\u2581sentence]><[begin\u2581of\u2581sentence]>assistant\n {assistant_message_1}<[end\u2581of\u2581sentence]><[begin\u2581of\u2581sentence]>user\n {user_message_2}<[end\u2581of\u2581sentence]><[begin\u2581of\u2581sentence]>assistant\n ```\n\n- Context size: `32000`\n\n- Run as LlamaEdge service\n\n ```bash\n wasmedge --dir .:. --nn-preload default:GGML:AUTO:Seed-Coder-8B-Instruct-Q5_K_M.gguf \\\n llama-api-server.wasm \\\n --model-name Seed-Coder-8B-Instruct \\\n --prompt-template seed-instruct \\\n --ctx-size 32000\n ```\n\n## Quantized GGUF Models\n\n| Name | Quant method | Bits | Size | Use case |\n| ---- | ---- | ---- | ---- | ----- |\n| [Seed-Coder-8B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q2_K.gguf) | Q2_K | 2 | 3.30 GB| smallest, significant quality loss - not recommended for most purposes |\n| [Seed-Coder-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 4.46 GB| small, substantial quality loss |\n| [Seed-Coder-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 4.15 GB| very small, high quality loss |\n| [Seed-Coder-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q3_K_S.gguf) | Q3_K_S 
| 3 | 3.80 GB| very small, high quality loss |\n| [Seed-Coder-8B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 4.81 GB| legacy; small, very high quality loss - prefer using Q3_K_M |\n| [Seed-Coder-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 5.07 GB| medium, balanced quality - recommended |\n| [Seed-Coder-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 4.84 GB| small, greater quality loss |\n| [Seed-Coder-8B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 5.76 GB| legacy; medium, balanced quality - prefer using Q4_K_M |\n| [Seed-Coder-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 5.90 GB| large, very low quality loss - recommended |\n| [Seed-Coder-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.76 GB| large, low quality loss - recommended |\n| [Seed-Coder-8B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q6_K.gguf) | Q6_K | 6 | 6.78 GB| very large, extremely low quality loss |\n| [Seed-Coder-8B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 8.77 GB| very large, extremely low quality loss - not recommended |\n| [Seed-Coder-8B-Instruct-f16.gguf](https://huggingface.co/second-state/Seed-Coder-8B-Instruct-GGUF/blob/main/Seed-Coder-8B-Instruct-f16.gguf) | f16 | 16 | 16.6 GB| |\n\n*Quantized with llama.cpp b5341*", 
"metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "second-state/Seed-Coder-8B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "gaianet/Seed-Coder-8B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlicense: mit\nmodel_creator: ByteDance-Seed\nmodel_name: Seed-Coder-8B-Instruct\nquantized_by: Second State Inc.\n---\n\n# Seed-Coder-8B-Instruct-GGUF\n\n## Original Model\n\n[ByteDance-Seed/Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct)\n\n## Run with Gaianet\n\n**Prompt template**\n\nprompt template: `seed-instruct`\n\n**Context size**\n\nchat_ctx_size: `32000`\n\n**Run with GaiaNet**\n\n- Quick start: https://docs.gaianet.ai/node-guide/quick-start\n\n- Customize your node: https://docs.gaianet.ai/node-guide/customize\n\n*Quantized with llama.cpp b5341*", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "gaianet/Seed-Coder-8B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "cgus/Seed-Coder-8B-Instruct-exl2", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: exllamav2\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\n---\n# Seed-Coder-8B-Instruct-exl2\nOriginal model: [Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) by [ByteDance Seed](https://huggingface.co/ByteDance-Seed)\n## Quants\n[4bpw h6 (main)](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/main) \n[4.5bpw 
h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/4.5bpw-h6) \n[5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/5bpw-h6) \n[6bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/6bpw-h6) \n[8bpw h8](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/8bpw-h8) \n## Quantization notes\nMade with Exllamav2 0.2.9 dev with default dataset. \nQuants can be used with RTX GPU (Windows) or RTX/ROCm (Linux) with TabbyAPI or Text-Generation-WebUI.\n# Original model card\n# Seed-Coder-8B-Instruct\n\n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code using the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True, \n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], 
skip_special_tokens=True)\nprint(response)\n\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 21.9 | 3.4 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 35.5 | 10.1 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 39.6 | 18.9 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 38.1 | 11.5 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 36.6 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 40.3 | 16.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct | 88.4 | 82.0 | 26.7 | 41.0 | 18.2 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | 85.2 | 36.2 | 53.3 | 20.5 | 24.7 |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. 
See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "cgus/Seed-Coder-8B-Instruct-exl2", "base_model_relation": "base" }, { "model_id": "mradermacher/Seed-Coder-8B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q2_K.gguf) | Q2_K | 3.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 6.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q6_K.gguf) | Q6_K | 6.9 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.9 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF/resolve/main/Seed-Coder-8B-Instruct.f16.gguf) | f16 | 16.6 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality 
quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "mradermacher/Seed-Coder-8B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/Seed-Coder-8B-Instruct-i1-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct\n\n\nstatic quants are available at https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |\n| 
[GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |\n| 
[GGUF](https://huggingface.co/mradermacher/Seed-Coder-8B-Instruct-i1-GGUF/resolve/main/Seed-Coder-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.9 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "mradermacher/Seed-Coder-8B-Instruct-i1-GGUF", "base_model_relation": "base" }, { "model_id": "DevQuasar/ByteDance-Seed.Seed-Coder-8B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\n---\n\n[](https://devquasar.com)\n\nQuantized version of: [ByteDance-Seed/Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct)\n\n'Make knowledge free for everyone'\n\n

\n Made with
\n \n \n \n

\n\nBuy Me a Coffee at ko-fi.com\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "DevQuasar/ByteDance-Seed.Seed-Coder-8B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/Seed-Coder-8B-Instruct`](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) for more details on the model.\n\n\nPrecautions for using small-parameter models:\n\nKeep system instructions concise; system instructions take precedence over instructions given in real-time input.\nWhen the model fails to follow instructions, you can edit the conversation history: for example, preset a few dialogue turns ({user}, {assistant}) at the beginning, directly modifying or replacing the model's outputs in the history so that the model better learns your requirements.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "noneUsername/Seed-Coder-8B-Instruct-W8A8", "gated": "False", "card": "---\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\n---\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.576|\u00b1 |0.0313|\n| | |strict-match | 5|exact_match|\u2191 |0.576|\u00b1 |0.0313|\n\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.602|\u00b1 |0.0219|\n| | |strict-match | 5|exact_match|\u2191 |0.598|\u00b1 |0.0219|\n\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct,add_bos_token=true,max_model_len=3048,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto\n| Groups |Version|Filter|n-shot|Metric| |Value | 
|Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4386|\u00b1 |0.0167|\n| - humanities | 2|none | |acc |\u2191 |0.4000|\u00b1 |0.0343|\n| - other | 2|none | |acc |\u2191 |0.4872|\u00b1 |0.0356|\n| - social sciences| 2|none | |acc |\u2191 |0.4389|\u00b1 |0.0364|\n| - stem | 2|none | |acc |\u2191 |0.4316|\u00b1 |0.0288|\n\n\nvllm (pretrained=/root/autodl-tmp/80-128,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n| | |strict-match | 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n\nvllm (pretrained=/root/autodl-tmp/80-128,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.590|\u00b1 |0.0220|\n| | |strict-match | 5|exact_match|\u2191 |0.584|\u00b1 |0.0221|\n\nvllm (pretrained=/root/autodl-tmp/80-128,add_bos_token=true,max_model_len=3048,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4339|\u00b1 |0.0166|\n| - humanities | 2|none | |acc |\u2191 |0.3949|\u00b1 |0.0338|\n| - other | 2|none | |acc |\u2191 |0.4769|\u00b1 |0.0355|\n| - social sciences| 2|none | |acc |\u2191 |0.4333|\u00b1 |0.0361|\n| - stem | 2|none | |acc |\u2191 |0.4316|\u00b1 |0.0290|\n\n\nvllm (pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 
250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.584|\u00b1 |0.0312|\n| | |strict-match | 5|exact_match|\u2191 |0.584|\u00b1 |0.0312|\n\nvllm (pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.590|\u00b1 | 0.022|\n| | |strict-match | 5|exact_match|\u2191 |0.586|\u00b1 | 0.022|\n\nvllm (pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3048,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4246|\u00b1 |0.0165|\n| - humanities | 2|none | |acc |\u2191 |0.3795|\u00b1 |0.0336|\n| - other | 2|none | |acc |\u2191 |0.4872|\u00b1 |0.0356|\n| - social sciences| 2|none | |acc |\u2191 |0.4333|\u00b1 |0.0360|\n| - stem | 2|none | |acc |\u2191 |0.4070|\u00b1 |0.0282|\n\n\nvllm (pretrained=/root/autodl-tmp/80-512,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.604|\u00b1 | 0.031|\n| | |strict-match | 5|exact_match|\u2191 |0.600|\u00b1 | 0.031|\n\nvllm (pretrained=/root/autodl-tmp/80-512,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | 
|Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.594|\u00b1 | 0.022|\n| | |strict-match | 5|exact_match|\u2191 |0.586|\u00b1 | 0.022|\n\nvllm (pretrained=/root/autodl-tmp/80-512,add_bos_token=true,max_model_len=3048,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4316|\u00b1 |0.0166|\n| - humanities | 2|none | |acc |\u2191 |0.4000|\u00b1 |0.0341|\n| - other | 2|none | |acc |\u2191 |0.4821|\u00b1 |0.0355|\n| - social sciences| 2|none | |acc |\u2191 |0.4278|\u00b1 |0.0356|\n| - stem | 2|none | |acc |\u2191 |0.4211|\u00b1 |0.0289|", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "noneUsername/Seed-Coder-8B-Instruct-W8A8", "base_model_relation": "base" }, { "model_id": "unsloth/Seed-Coder-8B-Instruct-GGUF", "gated": "False", "card": "---\ntags:\n- unsloth\nlicense: mit\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\n---\n
\n

\n Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.\n

\n
\n \n \n \n \n \n \n \n \n \n
\n
\n\n\n# Seed-Coder-8B-Instruct\n\n
\n \n \"Homepage\"\n \n\n \n \"Technical\n \n \n \n \"Hugging\n \n \n \n \"License\"\n \n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n| Seed-Coder-8B-Reasoning-bf16 | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. 
|\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code with the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True,\n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\nprint(response)\n\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct 
| **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "unsloth/Seed-Coder-8B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "unsloth/Seed-Coder-8B-Instruct-unsloth-bnb-4bit", "gated": "False", "card": "---\ntags:\n- unsloth\nlicense: mit\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\n---\n
\n

\n Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.\n

\n
\n \n \n \n \n \n \n \n \n \n
\n
\n\n\n# Seed-Coder-8B-Instruct\n\n
\n \n \"Homepage\"\n \n\n \n \"Technical\n \n \n \n \"Hugging\n \n \n \n \"License\"\n \n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n| Seed-Coder-8B-Reasoning-bf16 | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. 
|\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code with the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True,\n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\nprint(response)\n\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct 
| **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "unsloth/Seed-Coder-8B-Instruct-unsloth-bnb", "base_model_relation": "finetune" }, { "model_id": "unsloth/Seed-Coder-8B-Instruct-bnb-4bit", "gated": "False", "card": "---\ntags:\n- unsloth\nlicense: mit\nbase_model:\n- ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\n---\n
\n

\n Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.\n

\n
\n \n \n \n \n \n \n \n \n \n
\n
\n\n
\n\n\n# Seed-Coder-8B-Instruct\n\n
\n \n \"Homepage\"\n \n\n \n \"Technical\n \n \n \n \"Hugging\n \n \n \n \"License\"\n \n
\n\n\n## Introduction\nWe are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights.\n\n- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.\n- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.\n- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.\n\n

\n \n

\n\nThis repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:\n- Type: Causal language models\n- Training Stage: Pretraining & Post-training\n- Data Source: Public datasets, synthetic data\n- Context Length: 32,768\n\n\n## Model Downloads\n| Model Name | Length | Download | Notes |\n|---------------------------------------------------------|--------|------------------------------------|-----------------------|\n| Seed-Coder-8B-Base | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |\n| \ud83d\udc49 **Seed-Coder-8B-Instruct** | 32K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |\n| Seed-Coder-8B-Reasoning | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |\n| Seed-Coder-8B-Reasoning-bf16 | 64K | \ud83e\udd17 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. 
|\n\n## Requirements\nYou will need to install the latest versions of `transformers` and `accelerate`:\n\n```bash\npip install -U transformers accelerate\n```\n\n## Quickstart\n\nHere is a simple example demonstrating how to load the model and generate code using the Hugging Face `transformers` library:\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\nmodel_id = \"ByteDance-Seed/Seed-Coder-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Write a quick sort algorithm.\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n messages,\n tokenize=True,\n return_tensors=\"pt\",\n add_generation_prompt=True,\n).to(model.device)\n\noutputs = model.generate(input_ids, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\nprint(response)\n```\n\n## Evaluation\n\nSeed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.\n\n| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 \u2013 2502) |\n|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|\n| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |\n| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |\n| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |\n| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |\n| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |\n| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |\n| Qwen2.5-Coder-7B-Instruct 
| **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |\n| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |\n| Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |\n\n\nFor detailed benchmark performance, please refer to our [\ud83d\udcd1 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "unsloth/Seed-Coder-8B-Instruct-bnb", "base_model_relation": "finetune" }, { "model_id": "mlx-community/Seed-Coder-8B-Instruct-6bit", "gated": "False", "card": "---\nlicense: mit\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\ntags:\n- mlx\n---\n\n# mlx-community/Seed-Coder-8B-Instruct-6bit\n\nThe Model [mlx-community/Seed-Coder-8B-Instruct-6bit](https://huggingface.co/mlx-community/Seed-Coder-8B-Instruct-6bit) was converted to MLX format from [ByteDance-Seed/Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) using mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/Seed-Coder-8B-Instruct-6bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, 
verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": "mlx-community/Seed-Coder-8B-Instruct", "base_model_relation": "finetune" }, { "model_id": "sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF", "gated": "unknown", "card": "---\nlicense: mit\nbase_model: ByteDance-Seed/Seed-Coder-8B-Instruct\npipeline_tag: text-generation\nlibrary_name: transformers\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/Seed-Coder-8B-Instruct`](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF --hf-file seed-coder-8b-instruct-q8_0.gguf -p \"The meaning of life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF --hf-file seed-coder-8b-instruct-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for 
ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF --hf-file seed-coder-8b-instruct-q8_0.gguf -p \"The meaning of life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo sizzlebop/Seed-Coder-8B-Instruct-Q8_0-GGUF --hf-file seed-coder-8b-instruct-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/Seed-Coder-8B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "DevQuasar/huihui-ai.Seed-Coder-8B-Instruct-abliterated-GGUF", "gated": "False", "card": "---\nbase_model:\n- huihui-ai/Seed-Coder-8B-Instruct-abliterated\npipeline_tag: text-generation\n---\n\n[](https://devquasar.com)\n\nQuantized version of: [huihui-ai/Seed-Coder-8B-Instruct-abliterated](https://huggingface.co/huihui-ai/Seed-Coder-8B-Instruct-abliterated)\n\n'Make knowledge free for everyone'\n\n
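This card lists no usage instructions; as a hedged sketch, GGUF quants like these can usually be run with llama.cpp's CLI via `--hf-repo`/`--hf-file`. The `--hf-file` value below is a placeholder, not a filename confirmed by this repo:

```bash
# Placeholder filename: substitute a real .gguf from this repo's "Files and versions" tab.
llama-cli --hf-repo DevQuasar/huihui-ai.Seed-Coder-8B-Instruct-abliterated-GGUF \
  --hf-file <chosen-quant>.gguf \
  -p "Write a quick sort algorithm."
```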

\n\nBuy Me a Coffee at ko-fi.com\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "huihui-ai/Seed-Coder-8B-Instruct-abliterated" ], "base_model": "DevQuasar/huihui-ai.Seed-Coder-8B-Instruct-abliterated-GGUF", "base_model_relation": "base" }, { "model_id": "noneUsername/Seed-Coder-8B-Instruct-abliterated-W8A8", "gated": "False", "card": "---\nbase_model:\n- huihui-ai/Seed-Coder-8B-Instruct-abliterated\n---\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct-abliterated,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.552|\u00b1 |0.0315|\n| | |strict-match | 5|exact_match|\u2191 |0.552|\u00b1 |0.0315|\n\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct-abliterated,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.566|\u00b1 |0.0222|\n| | |strict-match | 5|exact_match|\u2191 |0.564|\u00b1 |0.0222|\n\nvllm (pretrained=/root/autodl-tmp/Seed-Coder-8B-Instruct-abliterated,add_bos_token=true,max_model_len=3048,dtype=bfloat16,model_impl=transformers,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 
|0.4316|\u00b1 |0.0167|\n| - humanities | 2|none | |acc |\u2191 |0.4205|\u00b1 |0.0344|\n| - other | 2|none | |acc |\u2191 |0.4615|\u00b1 |0.0356|\n| - social sciences| 2|none | |acc |\u2191 |0.4278|\u00b1 |0.0359|\n| - stem | 2|none | |acc |\u2191 |0.4211|\u00b1 |0.0289|\n\n\nvllm (pretrained=/root/autodl-tmp/80-128-4096,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.544|\u00b1 |0.0316|\n| | |strict-match | 5|exact_match|\u2191 |0.540|\u00b1 |0.0316|\n\n\nvllm (pretrained=/root/autodl-tmp/80-256-4096,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n| | |strict-match | 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n\nvllm (pretrained=/root/autodl-tmp/80-256-4096,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.578|\u00b1 |0.0221|\n| | |strict-match | 5|exact_match|\u2191 |0.574|\u00b1 |0.0221|\n\n\nvllm (pretrained=/root/autodl-tmp/80-512-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| 
|Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.564|\u00b1 |0.0314|\n| | |strict-match | 5|exact_match|\u2191 |0.564|\u00b1 |0.0314|\n\nvllm (pretrained=/root/autodl-tmp/80-512-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.570|\u00b1 |0.0222|\n| | |strict-match | 5|exact_match|\u2191 |0.566|\u00b1 |0.0222|\n\nvllm (pretrained=/root/autodl-tmp/80-512-8192,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4246|\u00b1 |0.0167|\n| - humanities | 2|none | |acc |\u2191 |0.3897|\u00b1 |0.0344|\n| - other | 2|none | |acc |\u2191 |0.4667|\u00b1 |0.0356|\n| - social sciences| 2|none | |acc |\u2191 |0.4222|\u00b1 |0.0366|\n| - stem | 2|none | |acc |\u2191 |0.4211|\u00b1 |0.0290|\n\n\nvllm (pretrained=/root/autodl-tmp/81-512-8192,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n| | |strict-match | 5|exact_match|\u2191 | 0.56|\u00b1 |0.0315|\n\nvllm (pretrained=/root/autodl-tmp/81-512-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter 
|n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.564|\u00b1 |0.0222|\n| | |strict-match | 5|exact_match|\u2191 |0.562|\u00b1 |0.0222|\n\n\nvllm (pretrained=/root/autodl-tmp/82-256-8192,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.564|\u00b1 |0.0314|\n| | |strict-match | 5|exact_match|\u2191 |0.564|\u00b1 |0.0314|\n\nvllm (pretrained=/root/autodl-tmp/82-256-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.586|\u00b1 |0.0220|\n| | |strict-match | 5|exact_match|\u2191 |0.580|\u00b1 |0.0221|\n\nvllm (pretrained=/root/autodl-tmp/82-256-8192,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.4292|\u00b1 |0.0166|\n| - humanities | 2|none | |acc |\u2191 |0.4051|\u00b1 |0.0340|\n| - other | 2|none | |acc |\u2191 |0.4718|\u00b1 |0.0355|\n| - social sciences| 2|none | |acc |\u2191 |0.4278|\u00b1 |0.0362|\n| - stem | 2|none | |acc |\u2191 |0.4175|\u00b1 |0.0289|\n\n\nvllm (pretrained=/root/autodl-tmp/82-512-8192,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 
auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.564|\u00b1 |0.0314|\n| | |strict-match | 5|exact_match|\u2191 |0.560|\u00b1 |0.0315|\n\nvllm (pretrained=/root/autodl-tmp/82-512-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.572|\u00b1 |0.0221|\n| | |strict-match | 5|exact_match|\u2191 |0.564|\u00b1 |0.0222|\n\n\nvllm (pretrained=/root/autodl-tmp/82-1024-8192,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.552|\u00b1 |0.0315|\n| | |strict-match | 5|exact_match|\u2191 |0.552|\u00b1 |0.0315|\n\n\nvllm (pretrained=/root/autodl-tmp/83-512-8192,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.532|\u00b1 |0.0316|\n| | |strict-match | 5|exact_match|\u2191 |0.532|\u00b1 |0.0316|\n\n\nvllm (pretrained=/root/autodl-tmp/84-256-8192,add_bos_token=true,max_model_len=8096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| 
|Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.540|\u00b1 |0.0316|\n| | |strict-match | 5|exact_match|\u2191 |0.536|\u00b1 |0.0316|", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "huihui-ai/Seed-Coder-8B-Instruct-abliterated" ], "base_model": "noneUsername/Seed-Coder-8B-Instruct-abliterated-W8A8", "base_model_relation": "base" }, { "model_id": "lazichicken/tile-pilot-grpo-lora-v2", "gated": "unknown", "card": "---\nbase_model: willyli/Seed-Coder-8B-Instruct-KTO\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "willyli/Seed-Coder-8B-Instruct-KTO" ], "base_model": null, "base_model_relation": null } ] }