{ "base_model": "agentica-org/DeepCoder-14B-Preview", "tree": [ { "model_id": "agentica-org/DeepCoder-14B-Preview", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model:\n- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B\npipeline_tag: text-generation\n---\n\n
\nDeepCoder-14B-Preview\n
\n\ud83d\ude80 Democratizing Reinforcement Learning for LLMs (RLLM) \ud83c\udf1f\n
\n
\n
\n
\n \n \"Code\"\n \n \n \"Blog\"\n \n \n \"X.ai\"\n \n \n \"Hugging\n \n \n \"Together\n \n
\n\n\n\n## DeepCoder Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving performance comparable to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
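\nThe Pass@1 number above is conventionally computed with the unbiased pass@k estimator of Chen et al. (2021). A minimal illustrative sketch (not part of this card's evaluation harness):\n

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k (Chen et al., 2021): probability that at least one
    of k samples, drawn without replacement from n generations of which
    c are correct, passes all tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to the plain fraction of correct samples, c/n.
print(pass_at_k(16, 8, 1))  # 0.5
```

\nReported Pass@1 is this estimate averaged over all benchmark problems.\n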
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-test pairs compiled from:\n- TACO-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating the KL loss frees the LLM from staying within the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering (from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `DeepScaleR-1.5B-Preview` scaled long-context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME, respectively. 
Similarly, `DeepCoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts, reaching 60.6% when evaluated at a 64K context. \n\nDeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `DeepCoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. \n\n| **Model** | LCB (v5) (8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89.0 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n### Usage Recommendations\nOur usage recommendations are similar to those of the R1 and R1-Distill series:\n1. 
Avoid adding a system prompt; all instructions should be contained within the user prompt.\n2. `temperature = 0.6`\n3. `top_p = 0.95`\n4. This model performs best with `max_tokens` set to at least `64000` \n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n\n## Citation \n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo and Sijun Tan and Roy Huang and Ameen Patel and Alpay Ariyak and Qingyang Wu and Xiaoxiang Shi and Rachel Xin and Colin Cai and Maurice Weber and Ce Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 0, "children": [ "secmlr/DS-Noisy_DS-Clean_QWQ-Noisy_QWQ-Clean_DeepCoder-14B-Preview_full_sft_1e-5", "secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning", "mlx-community/DeepCoder-14B-Preview-bf16", "Gapeleon/DeepCoder-14B-Preview-int4-awq-ov", 
"EpistemeAI/DeepCoder-14B-Preview-safety-alignment-unsloth", "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0", "Apel-sin/deepcoder-14B-preview-exl2", "wasim845/dfgh", "rieon/DeepCoder-14B-Preview-Suger", "vegetalauren9/9", "secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_qwen_code_14b_2000_enriched_reasoning" ], "children_count": 11, "adapters": [ "crowbarmassage/DeepCoder14B_DSPy_8-bit" ], "adapters_count": 1, "quantized": [ "bartowski/agentica-org_DeepCoder-14B-Preview-GGUF", "mlx-community/DeepCoder-14B-Preview-4bit", "mlx-community/DeepCoder-14B-Preview-6bit", "mlx-community/DeepCoder-14B-Preview-8bit", "achitech/DeepCoder-14B-Preview-Q6_K-GGUF", "justinmeans/DeepCoder-14B-Preview-mlx-8Bit", "achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF", "achitech/DeepCoder-14B-Preview-Q8_0-GGUF", "achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF", "lmstudio-community/DeepCoder-14B-Preview-GGUF", "mradermacher/DeepCoder-14B-Preview-GGUF", "justinmeans/DeepCoder-14B-Preview-mlx-2Bit", "justinmeans/DeepCoder-14B-Preview-mlx-4Bit", "DevQuasar/agentica-org.DeepCoder-14B-Preview-GGUF", "Joumdane/DeepCoder-14B-Preview-GGUF", "miike-ai/deepcoder-14b-fp8", "cgus/DeepCoder-14B-Preview-exl2", "okamototk/DeepCoder-14B-Preview-imatrix-GGUF", "noneUsername/DeepCoder-14B-Preview-W8A8", "WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF", "WSDW/DeepCoder-14B-Preview-Q2_K-GGUF", "BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF", "numen-tech/DeepCoder-14B-Preview-GPTQ-Int4", "okamototk/DeepCoder-1.5B-Preview-imatrix-GGUF", "EpistemeAI/DeepCoder-14B-Preview-GGUF", "gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF", "tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF", "VortexHunter23/LeoPARD-Coder-0.1", "ALYTV/DeepCoder-14B-Preview-mlx-2Bit", "ALYTV/DeepCoder-14B-Preview-mlx-3Bit" ], "quantized_count": 30, "merges": [], "merges_count": 0, "total_derivatives": 42, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "agentica-org/DeepCoder-14B-Preview", "base_model_relation": "base" }, { "model_id": 
"secmlr/DS-Noisy_DS-Clean_QWQ-Noisy_QWQ-Clean_DeepCoder-14B-Preview_full_sft_1e-5", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- llama-factory\n- full\n- generated_from_trainer\nmodel-index:\n- name: DS-Noisy_DS-Clean_QWQ-Noisy_QWQ-Clean_DeepCoder-14B-Preview_full_sft_1e-5\n results: []\n---\n\n\n\n# DS-Noisy_DS-Clean_QWQ-Noisy_QWQ-Clean_DeepCoder-14B-Preview_full_sft_1e-5\n\nThis model is a fine-tuned version of [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) on the DS-Noisy, the DS-Clean, the QWQ-Noisy and the QWQ-Clean datasets.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 12\n- total_train_batch_size: 48\n- total_eval_batch_size: 32\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.50.0\n- Pytorch 2.6.0+cu124\n- Datasets 3.1.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "secmlr/DS-Noisy_DS-Clean_QWQ-Noisy_QWQ-Clean_DeepCoder-14B-Preview_full_sft_1e", "base_model_relation": "finetune" }, { "model_id": 
"secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- llama-factory\n- full\n- generated_from_trainer\nmodel-index:\n- name: SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning\n results: []\n---\n\n\n\n# SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning\n\nThis model is a fine-tuned version of [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) on the SWE-BENCH-2000-enriched-reasoning-claude-localization dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 12\n- total_train_batch_size: 48\n- total_eval_batch_size: 32\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.6.0+cu124\n- Datasets 3.1.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": 
"secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning", "base_model_relation": "base" }, { "model_id": "mlx-community/DeepCoder-14B-Preview-bf16", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: mlx\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# mlx-community/DeepCoder-14B-Preview-bf16\n\nThis model [mlx-community/DeepCoder-14B-Preview-bf16](https://huggingface.co/mlx-community/DeepCoder-14B-Preview-bf16) was\nconverted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)\nusing mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/DeepCoder-14B-Preview-bf16\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "mlx-community/DeepCoder-14B-Preview-bf16", "base_model_relation": "base" }, { "model_id": "Gapeleon/DeepCoder-14B-Preview-int4-awq-ov", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n---\n\n# OpenVINO quant of [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview-int4-awq-ov)\n\n- Requires 12GB of VRAM (eg. 
Intel Arc A770 / B580).\n- Won't fit on an 8GB A750.\n\n# Performance on an A770 with [OpenArc](https://github.com/SearchSavior/OpenArc)\n\n```\n=== Streaming Performance ===\nTotal generation time: 65.078 seconds\nPrompt evaluation: 1376 tokens in 0.841 seconds (1636.58 T/s)\nResponse generation: 982 tokens (15.09 T/s)\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "Gapeleon/DeepCoder-14B-Preview-int4-awq-ov", "base_model_relation": "base" }, { "model_id": "EpistemeAI/DeepCoder-14B-Preview-safety-alignment-unsloth", "gated": "unknown", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\nlicense: mit\nlanguage:\n- en\n---\n\n## Please see the improved model: [SIA DeepCoder 14B model](https://huggingface.co/EpistemeAI/SA-DeepCoder-14B-Preview-unsloth-v1.0)\n\n## This model is supervised fine-tuned on [gretelai's safety and alignment dataset](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1) with [Unsloth](https://github.com/unslothai/unsloth)\n\n## Episteme alignment and safety technique\n\n### To enable thinking, add `<think>` to your prompt\n\n\n## Model Card\n\n
\nDeepCoder-14B-Preview\n
\n\ud83d\ude80 Democratizing Reinforcement Learning for LLMs (RLLM) \ud83c\udf1f\n
\n
\n
\n
\n \n \"Code\"\n \n \n \"Blog\"\n \n \n \"X.ai\"\n \n \n \"Hugging\n \n \n \"Together\n \n
\n\n\n\n## DeepCoder Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving performance comparable to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-test pairs compiled from:\n- TACO-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating the KL loss frees the LLM from staying within the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering (from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `DeepScaleR-1.5B-Preview` scaled long-context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME, respectively. 
Similarly, `DeepCoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts, reaching 60.6% when evaluated at a 64K context. \n\nDeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `DeepCoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. \n\n| **Model** | LCB (v5) (8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89.0 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n### Usage Recommendations\nOur usage recommendations are similar to those of the R1 and R1-Distill series:\n1. 
Avoid adding a system prompt; all instructions should be contained within the user prompt.\n2. `temperature = 0.6`\n3. `top_p = 0.95`\n4. This model performs best with `max_tokens` set to at least `64000` \n\n## EpistemeAI Training script\n[Fine tune DeepCoder with unsloth](https://colab.research.google.com/drive/1If_NwF2aNvQrG7lyCClhJIFVbdHhMN8c?usp=sharing)\n\n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n\n## Citation \n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** apache-2.0\n- **Finetuned from model :** agentica-org/DeepCoder-14B-Preview\n\nThis qwen2 model was trained 2x faster with 
[Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0", "gated": "unknown", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\nlicense: mit\nlanguage:\n- en\ndatasets:\n- UCSC-VLAA/STAR-1\n---\n\n## SAI stands for Safe, Aligned, and Intelligent.\n\nThis SAI-DeepCoder-14B-Preview-v1.0 model is fine-tuned with policy-grounded data to be safe and aligned with human values while coding. Specifically, it utilizes the STAR-1 dataset, which integrates diverse, deliberative reasoning examples rigorously evaluated by GPT-4o. This ensures the model maintains robust safety standards and minimizes biases, promoting responsible, secure, and effective coding practices without compromising its core reasoning capabilities.\n\n\n## Model Card\n\n\n## SAI-DeepCoder-14B-Preview-v1.0 Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving performance comparable to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-test pairs compiled from:\n- TACO-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n- STAR-1\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating the KL loss frees the LLM from staying within the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering (from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `DeepScaleR-1.5B-Preview` scaled long-context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME, respectively. 
Similarly, `DeepCoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts, reaching 60.6% when evaluated at a 64K context. \n\nDeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `DeepCoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. \n\n| **Model** | LCB (v5) (8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89.0 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n### Usage Recommendations\nOur usage recommendations are similar to those of the R1 and R1-Distill series:\n1. 
Avoid adding a system prompt; all instructions should be contained within the user prompt.\n2. `temperature = 0.6`\n3. `top_p = 0.95`\n4. This model performs best with `max_tokens` set to at least `64000` \n\n## EpistemeAI Training script\n[Fine tune DeepCoder with unsloth](https://colab.research.google.com/drive/1If_NwF2aNvQrG7lyCClhJIFVbdHhMN8c?usp=sharing)\n\n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n\n- thanks to UCSC-VLAA\n\n## Citation \n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```\n\n```\n@article{wang2025star1saferalignmentreasoning,\n title={STAR-1: Safer Alignment of Reasoning LLMs with 1K Data}, \n author={Zijun Wang and Haoqin Tu and Yuhan Wang and Juncheng 
Wu and Jieru Mei and Brian R. Bartoldson and Bhavya Kailkhura and Cihang Xie},\n year={2025},\n journal = {arXiv preprint arXiv:2504.01903}\n}\n```\n\n\n## Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** apache-2.0\n- **Finetuned from model :** agentica-org/DeepCoder-14B-Preview\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)", "metadata": "\"N/A\"", "depth": 1, "children": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF", "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF", "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0-GGUF" ], "quantized_count": 5, "merges": [], "merges_count": 0, "total_derivatives": 6, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "Apel-sin/deepcoder-14B-preview-exl2", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\n---\n\n
\nDeepCoder-14B-Preview\n
\n\ud83d\ude80 Democratizing Reinforcement Learning for LLMs (RLLM) \ud83c\udf1f\n
\n
\n
\n
\n \n \"Code\"\n \n \n \"Blog\"\n \n \n \"X.ai\"\n \n \n \"Hugging\n \n \n \"Together\n \n
\n\n\n\n## DeepCoder Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving similar performance to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-test pairs compiled from:\n- Taco-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating the KL loss means the LLM is no longer constrained to the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering (from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `Deepscaler-1.5B-Preview` scaled long context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME respectively. 
Similarly, `Deepcoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts when evaluated at 64K context, reaching 60.6%.\n\nDeepCoder generalizes better to long contexts than the base distilled model due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `Deepcoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. 
\n\n| **Model** | LCB (v5)(8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n\n## Citation \n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li 
Erran Li, Raluca Ada Popa, Ion Stoica, Tianjun Zhang},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "Apel-sin/deepcoder-14B-preview-exl2", "base_model_relation": "base" }, { "model_id": "wasim845/dfgh", "gated": "False", "card": "---\nlanguage:\n- af\nmetrics:\n- cer\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "wasim845/dfgh", "base_model_relation": "base" }, { "model_id": "rieon/DeepCoder-14B-Preview-Suger", "gated": "False", "card": "---\nlicense: apache-2.0\npipeline_tag: text-generation\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/DeepCoder-14B-Preview-Suger-GGUF" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "rieon/DeepCoder-14B-Preview-Suger", "base_model_relation": "base" }, { "model_id": "vegetalauren9/9", "gated": "False", "card": "---\nlicense: openrail\nlanguage:\n- af\nmetrics:\n- bleu\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\nnew_version: reducto/RolmOCR\npipeline_tag: 
text-classification\nlibrary_name: fastai\ntags:\n- medical\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "vegetalauren9/9", "base_model_relation": "base" }, { "model_id": "secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_qwen_code_14b_2000_enriched_reasoning", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- llama-factory\n- full\n- generated_from_trainer\nmodel-index:\n- name: SWE-BENCH-2000-enriched-reasoning-claude-localization_qwen_code_14b_2000_enriched_reasoning\n results: []\n---\n\n\n\n# SWE-BENCH-2000-enriched-reasoning-claude-localization_qwen_code_14b_2000_enriched_reasoning\n\nThis model is a fine-tuned version of [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) on the SWE-BENCH-2000-enriched-reasoning-claude-localization dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 12\n- total_train_batch_size: 48\n- total_eval_batch_size: 32\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 
2.6.0+cu124\n- Datasets 3.1.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_qwen_code_14b_2000_enriched_reasoning", "base_model_relation": "base" }, { "model_id": "crowbarmassage/DeepCoder14B_DSPy_8-bit", "gated": "unknown", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.15.2", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "bartowski/agentica-org_DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nquantized_by: bartowski\npipeline_tag: text-generation\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\nlanguage:\n- en\nbase_model_relation: quantized\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\n---\n\n## Llamacpp imatrix Quantizations of DeepCoder-14B-Preview by agentica-org\n\nUsing llama.cpp release b5074 for quantization.\n\nOriginal model: https://huggingface.co/agentica-org/DeepCoder-14B-Preview\n\nAll quants made using imatrix option with dataset from 
[here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)\n\nRun them in [LM Studio](https://lmstudio.ai/)\n\nRun them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project\n\n## Prompt format\n\n```\n<\uff5cbegin\u2581of\u2581sentence\uff5c>{system_prompt}<\uff5cUser\uff5c>{prompt}<\uff5cAssistant\uff5c><\uff5cend\u2581of\u2581sentence\uff5c><\uff5cAssistant\uff5c>\n```\n\n## Download a file (not the whole branch) from below:\n\n| Filename | Quant type | File Size | Split | Description |\n| -------- | ---------- | --------- | ----- | ----------- |\n| [DeepCoder-14B-Preview-bf16.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-bf16.gguf) | bf16 | 29.55GB | false | Full BF16 weights. |\n| [DeepCoder-14B-Preview-Q8_0.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q8_0.gguf) | Q8_0 | 15.70GB | false | Extremely high quality, generally unneeded but max available quant. |\n| [DeepCoder-14B-Preview-Q6_K_L.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q6_K_L.gguf) | Q6_K_L | 12.50GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |\n| [DeepCoder-14B-Preview-Q6_K.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q6_K.gguf) | Q6_K | 12.12GB | false | Very high quality, near perfect, *recommended*. |\n| [DeepCoder-14B-Preview-Q5_K_L.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q5_K_L.gguf) | Q5_K_L | 10.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. 
|\n| [DeepCoder-14B-Preview-Q5_K_M.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q5_K_M.gguf) | Q5_K_M | 10.51GB | false | High quality, *recommended*. |\n| [DeepCoder-14B-Preview-Q5_K_S.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q5_K_S.gguf) | Q5_K_S | 10.27GB | false | High quality, *recommended*. |\n| [DeepCoder-14B-Preview-Q4_K_L.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q4_K_L.gguf) | Q4_K_L | 9.57GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |\n| [DeepCoder-14B-Preview-Q4_1.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q4_1.gguf) | Q4_1 | 9.39GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |\n| [DeepCoder-14B-Preview-Q4_K_M.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q4_K_M.gguf) | Q4_K_M | 8.99GB | false | Good quality, default size for most use cases, *recommended*. |\n| [DeepCoder-14B-Preview-Q3_K_XL.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q3_K_XL.gguf) | Q3_K_XL | 8.61GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |\n| [DeepCoder-14B-Preview-Q4_K_S.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q4_K_S.gguf) | Q4_K_S | 8.57GB | false | Slightly lower quality with more space savings, *recommended*. 
|\n| [DeepCoder-14B-Preview-IQ4_NL.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ4_NL.gguf) | IQ4_NL | 8.55GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |\n| [DeepCoder-14B-Preview-Q4_0.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q4_0.gguf) | Q4_0 | 8.54GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |\n| [DeepCoder-14B-Preview-IQ4_XS.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ4_XS.gguf) | IQ4_XS | 8.12GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |\n| [DeepCoder-14B-Preview-Q3_K_L.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q3_K_L.gguf) | Q3_K_L | 7.92GB | false | Lower quality but usable, good for low RAM availability. |\n| [DeepCoder-14B-Preview-Q3_K_M.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q3_K_M.gguf) | Q3_K_M | 7.34GB | false | Low quality. |\n| [DeepCoder-14B-Preview-IQ3_M.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ3_M.gguf) | IQ3_M | 6.92GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |\n| [DeepCoder-14B-Preview-Q3_K_S.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q3_K_S.gguf) | Q3_K_S | 6.66GB | false | Low quality, not recommended. 
|\n| [DeepCoder-14B-Preview-Q2_K_L.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q2_K_L.gguf) | Q2_K_L | 6.53GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |\n| [DeepCoder-14B-Preview-IQ3_XS.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ3_XS.gguf) | IQ3_XS | 6.38GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |\n| [DeepCoder-14B-Preview-IQ3_XXS.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ3_XXS.gguf) | IQ3_XXS | 5.95GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |\n| [DeepCoder-14B-Preview-Q2_K.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-Q2_K.gguf) | Q2_K | 5.77GB | false | Very low quality but surprisingly usable. |\n| [DeepCoder-14B-Preview-IQ2_M.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ2_M.gguf) | IQ2_M | 5.36GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |\n| [DeepCoder-14B-Preview-IQ2_S.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ2_S.gguf) | IQ2_S | 5.00GB | false | Low quality, uses SOTA techniques to be usable. |\n| [DeepCoder-14B-Preview-IQ2_XS.gguf](https://huggingface.co/bartowski/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/agentica-org_DeepCoder-14B-Preview-IQ2_XS.gguf) | IQ2_XS | 4.70GB | false | Low quality, uses SOTA techniques to be usable. 
|\n\n## Embed/output weights\n\nSome of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.\n\n## Downloading using huggingface-cli\n\n
\n Click to view download instructions\n\nFirst, make sure you have huggingface-cli installed:\n\n```\npip install -U \"huggingface_hub[cli]\"\n```\n\nThen, you can target the specific file you want:\n\n```\nhuggingface-cli download bartowski/agentica-org_DeepCoder-14B-Preview-GGUF --include \"agentica-org_DeepCoder-14B-Preview-Q4_K_M.gguf\" --local-dir ./\n```\n\nIf the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:\n\n```\nhuggingface-cli download bartowski/agentica-org_DeepCoder-14B-Preview-GGUF --include \"agentica-org_DeepCoder-14B-Preview-Q8_0/*\" --local-dir ./\n```\n\nYou can either specify a new local-dir (agentica-org_DeepCoder-14B-Preview-Q8_0) or download them all in place (./).\n\n
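The same targeted download can also be scripted from Python via `huggingface_hub`. A minimal sketch, assuming `huggingface_hub` is installed; the `download_args` helper is our own illustration, not part of the library, and Q4_K_M is just an example pick from the table above:

```python
REPO_ID = "bartowski/agentica-org_DeepCoder-14B-Preview-GGUF"

def download_args(quant: str, local_dir: str = "./") -> dict:
    """Build keyword arguments for huggingface_hub.hf_hub_download
    targeting a single quant file from this repo (illustrative helper)."""
    return {
        "repo_id": REPO_ID,
        # Files in this repo follow "agentica-org_DeepCoder-14B-Preview-<QUANT>.gguf"
        "filename": f"agentica-org_DeepCoder-14B-Preview-{quant}.gguf",
        "local_dir": local_dir,
    }

# To actually fetch the file (~9 GB for Q4_K_M), uncomment:
#   from huggingface_hub import hf_hub_download
#   path = hf_hub_download(**download_args("Q4_K_M"))
print(download_args("Q4_K_M")["filename"])
```

This mirrors the `--include`/`--local-dir` CLI invocation above, but lets you pick the quant programmatically.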
\n\n## ARM/AVX information\n\nPreviously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.\n\nNow, however, there is something called \"online repacking\" for weights; details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.\n\nAs of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.\n\nAdditionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.\n\n
\n Click to view Q4_0_X_X information (deprecated)\n\nI'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.\n\n
\n Click to view benchmarks on an AVX2 system (EPYC7702)\n\n| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |\n| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 \u00b1 1.03 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 \u00b1 0.19 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 \u00b1 0.44 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 \u00b1 0.27 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 \u00b1 0.69 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 \u00b1 0.03 | 100% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 \u00b1 1.74 | 147% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 \u00b1 0.20 | 101% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 \u00b1 1.81 | 101% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 \u00b1 0.99 | 48% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 \u00b1 3.04 | 83% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 \u00b1 3.59 | 90% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 \u00b1 3.53 | 133% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 \u00b1 45.63 | 100% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 \u00b1 5.00 | 124% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 \u00b1 0.05 | 111% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 \u00b1 0.09 | 110% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 \u00b1 0.31 | 105% |\n\nQ4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation\n\n
\n\n
\n\n## Which file should I choose?\n\n
\n Click here for details\n\nA great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).\n\nThe first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.\n\nIf you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.\n\nIf you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.\n\nNext, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.\n\nIf you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.\n\nIf you want to get more into the weeds, you can check out this extremely useful feature chart:\n\n[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)\n\nBut basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.\n\nThese I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.\n\n
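The sizing rule above can be sketched mechanically. This is an illustrative helper, not a real tool: the name `pick_quant`, the 1.5 GB default headroom, and the abbreviated size table (taken from the download table in this card) are all our assumptions:

```python
from typing import Optional

# File sizes in GB, copied from a subset of the quant table in this card.
QUANT_SIZES_GB = {
    "Q8_0": 15.70, "Q6_K": 12.12, "Q5_K_M": 10.51, "Q4_K_M": 8.99,
    "IQ4_XS": 8.12, "Q3_K_M": 7.34, "IQ3_M": 6.92, "Q2_K": 5.77,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> Optional[str]:
    """Pick the largest quant file that fits in VRAM with some headroom,
    following the '1-2GB smaller than your total VRAM' rule above."""
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    if not fitting:
        return None  # nothing fits; consider partial CPU offload instead
    return max(fitting, key=fitting.get)  # biggest file = highest quality

print(pick_quant(12.0))  # a 12 GB card
print(pick_quant(24.0))  # a 24 GB card
```

For quants below Q4 the I-quant vs K-quant choice described above still applies; this sketch only handles the size dimension.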
\n\n## Credits\n\nThank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.\n\nThank you ZeroWw for the inspiration to experiment with embed/output.\n\nThank you to LM Studio for sponsoring my work.\n\nWant to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "bartowski/agentica-org_DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "mlx-community/DeepCoder-14B-Preview-4bit", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: mlx\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# mlx-community/DeepCoder-14B-Preview-4bit\n\nThis model [mlx-community/DeepCoder-14B-Preview-4bit](https://huggingface.co/mlx-community/DeepCoder-14B-Preview-4bit) was\nconverted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)\nusing mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/DeepCoder-14B-Preview-4bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], 
"quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "mlx-community/DeepCoder-14B-Preview", "base_model_relation": "finetune" }, { "model_id": "mlx-community/DeepCoder-14B-Preview-6bit", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: mlx\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# mlx-community/DeepCoder-14B-Preview-6bit\n\nThis model [mlx-community/DeepCoder-14B-Preview-6bit](https://huggingface.co/mlx-community/DeepCoder-14B-Preview-6bit) was\nconverted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)\nusing mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/DeepCoder-14B-Preview-6bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "mlx-community/DeepCoder-14B-Preview", "base_model_relation": "finetune" }, { "model_id": "mlx-community/DeepCoder-14B-Preview-8bit", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: mlx\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- 
livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# mlx-community/DeepCoder-14B-Preview-8bit\n\nThis model [mlx-community/DeepCoder-14B-Preview-8bit](https://huggingface.co/mlx-community/DeepCoder-14B-Preview-8bit) was\nconverted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)\nusing mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/DeepCoder-14B-Preview-8bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "mlx-community/DeepCoder-14B-Preview", "base_model_relation": "finetune" }, { "model_id": "achitech/DeepCoder-14B-Preview-Q6_K-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# achitech/DeepCoder-14B-Preview-Q6_K-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's 
[GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q6_K-GGUF --hf-file deepcoder-14b-preview-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo achitech/DeepCoder-14B-Preview-Q6_K-GGUF --hf-file deepcoder-14b-preview-q6_k.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q6_K-GGUF --hf-file deepcoder-14b-preview-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo achitech/DeepCoder-14B-Preview-Q6_K-GGUF --hf-file deepcoder-14b-preview-q6_k.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "achitech/DeepCoder-14B-Preview-Q6_K-GGUF", "base_model_relation": "base" }, { "model_id": "justinmeans/DeepCoder-14B-Preview-mlx-8Bit", "gated": "False", "card": 
"---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# justinmeans/DeepCoder-14B-Preview-mlx-8Bit\n\nThe Model [justinmeans/DeepCoder-14B-Preview-mlx-8Bit](https://huggingface.co/justinmeans/DeepCoder-14B-Preview-mlx-8Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.1**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"justinmeans/DeepCoder-14B-Preview-mlx-8Bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "justinmeans/DeepCoder-14B-Preview-mlx", "base_model_relation": "finetune" }, { "model_id": "achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF\nThis model was converted to GGUF 
format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": 
"achitech/DeepCoder-14B-Preview-Q4_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "achitech/DeepCoder-14B-Preview-Q8_0-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# achitech/DeepCoder-14B-Preview-Q8_0-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo achitech/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q8_0-GGUF 
--hf-file deepcoder-14b-preview-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo achitech/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "achitech/DeepCoder-14B-Preview-Q8_0-GGUF", "base_model_relation": "base" }, { "model_id": "achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage 
steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "achitech/DeepCoder-14B-Preview-Q3_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "lmstudio-community/DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nquantized_by: bartowski\npipeline_tag: text-generation\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\nlanguage:\n- en\nbase_model_relation: quantized\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\n---\n## \ud83d\udcab Community Model> DeepCoder 14B Preview by Agentica-Org\n\n*\ud83d\udc7e [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.\n\n**Model creator:** [agentica-org](https://huggingface.co/agentica-org)
\n**Original model**: [DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)
\n**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5074](https://github.com/ggerganov/llama.cpp/releases/tag/b5074)
\n\n## Technical Details\n\nSupports a context length of 128k tokens.\n\nCode reasoning LLM fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths.\n\nImproved long context performance.\n\nMore details can be found in their blog post [here](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Special thanks\n\n\ud83d\ude4f Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.\n\n## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. 
You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "lmstudio-community/DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/agentica-org/DeepCoder-14B-Preview\n\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q5_K_M.gguf) | Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-GGUF/resolve/main/DeepCoder-14B-Preview.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the 
matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "mradermacher/DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "justinmeans/DeepCoder-14B-Preview-mlx-2Bit", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# justinmeans/DeepCoder-14B-Preview-mlx-2Bit\n\nThe Model [justinmeans/DeepCoder-14B-Preview-mlx-2Bit](https://huggingface.co/justinmeans/DeepCoder-14B-Preview-mlx-2Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.1**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"justinmeans/DeepCoder-14B-Preview-mlx-2Bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = 
generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "justinmeans/DeepCoder-14B-Preview-mlx", "base_model_relation": "finetune" }, { "model_id": "justinmeans/DeepCoder-14B-Preview-mlx-4Bit", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# justinmeans/DeepCoder-14B-Preview-mlx-4Bit\n\nThe Model [justinmeans/DeepCoder-14B-Preview-mlx-4Bit](https://huggingface.co/justinmeans/DeepCoder-14B-Preview-mlx-4Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.1**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"justinmeans/DeepCoder-14B-Preview-mlx-4Bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": 
"justinmeans/DeepCoder-14B-Preview-mlx", "base_model_relation": "finetune" }, { "model_id": "DevQuasar/agentica-org.DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\n---\n\n[](https://devquasar.com)\n\nQuantized version of: [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview)\n\n'Make knowledge free for everyone'\n\n

\n Made with
\n \n \n \n

\n\nBuy Me a Coffee at ko-fi.com\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "ArsGPT/Coder" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "DevQuasar/agentica-org.DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "Joumdane/DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\ntags:\n- gguf-connector\n---\nProviding GGUF for https://huggingface.co/agentica-org/DeepCoder-14B-Preview", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "Joumdane/DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "miike-ai/deepcoder-14b-fp8", "gated": "False", "card": "---\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "miike-ai/deepcoder-14b-fp8", "base_model_relation": "base" }, { "model_id": "cgus/DeepCoder-14B-Preview-exl2", "gated": "False", "card": "---\nlibrary_name: exllamav2\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\n---\n# DeepCoder-14B-Preview-exl2\nOriginal model: 
[DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) by [Agentica](https://huggingface.co/agentica-org) \nBased on: [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) by [DeepSeek](https://huggingface.co/deepseek-ai) \nFoundation model: [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) by [Qwen](https://huggingface.co/Qwen)\n\n## Quants\n[4bpw h6 (main)](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/main) \n[4.5bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/4.5bpw-h6) \n[5bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/5bpw-h6) \n[6bpw h6](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/6bpw-h6) \n[8bpw h8](https://huggingface.co/cgus/DeepCoder-14B-Preview-exl2/tree/8bpw-h8) \n## Quantization notes\nMade with Exllamav2 0.2.8 with default dataset. \nIt can be used with TabbyAPI, Text-Generation-WebUI and requires RTX GPU on Windows or RTX/ROCm on Linux. \nRAM offloading isn't supported natively, so make sure it fits your GPU VRAM. \nI'd recommend at least a 12GB GPU for 4-5bpw quants.\n# Original model card\n
\nDeepCoder-14B-Preview\n
\n\ud83d\ude80 Democratizing Reinforcement Learning for LLMs (RLLM) \ud83c\udf1f\n
\n
\n
\n
\n \n \"Code\"\n \n \n \"Blog\"\n \n \n \"X.ai\"\n \n \n \"Hugging\n \n \n \"Together\n \n
\n\n\n\n## DeepCoder Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing a 8% improvement over the base model (53%) and achieving similar performance to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-tests pairs compiled from:\n- Taco-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating KL loss prevents the LLM from staying within trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering** **(from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `Deepscaler-1.5B-Preview` scaled long context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME respectively. 
Similarly, `Deepcoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts when evaluated at 64K context, reaching 60.6%. \n\nDeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, it's longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `Deepcoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. \n\n| **Model** | LCB (v5)(8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n### Usage Recommendations\nOur usage recommendations are similar to those of R1 and R1 Distill series:\n1. 
Avoid adding a system prompt; all instructions should be contained within the user prompt.\n2. `temperature = 0.6`\n3. `top_p = 0.95`\n4. This model performs best with `max_tokens` set to at least `64000` \n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n\n## Citation \n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "cgus/DeepCoder-14B-Preview-exl2", "base_model_relation": 
"base" }, { "model_id": "okamototk/DeepCoder-14B-Preview-imatrix-GGUF", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- TFMC/imatrix-dataset-for-japanese-llm\nlanguage:\n- en\n- ja\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\n---\n## 1. Introduction\nThis model is quantized version of DeepCoder-14B-Preview with dataset for imatrix [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).\nUsgin English/Japanese mixed and quantization is tuned for Japanese.\n\n## 2. License\nThis code repository and the model weights are licensed under the [MIT License](LICENSE).\nHere is note for upstream model license:\n- DeepCoder-14B-Preview is based on DeepSeek-R1-Distill-Qwen-14B which has same MIT license as DeepCoder. But DeepSeek-R1-Distill-Qwen-14B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](LICENSE-Qwen), \n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "okamototk/DeepCoder-14B-Preview-imatrix-GGUF", "base_model_relation": "base" }, { "model_id": "noneUsername/DeepCoder-14B-Preview-W8A8", "gated": "False", "card": "---\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n---\nvllm (pretrained=/root/autodl-tmp/DeepCoder-14B-Preview,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.732|\u00b1 |0.0281|\n| | |strict-match | 5|exact_match|\u2191 |0.856|\u00b1 |0.0222|\n\nvllm 
(pretrained=/root/autodl-tmp/DeepCoder-14B-Preview,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.766|\u00b1 |0.0190|\n| | |strict-match | 5|exact_match|\u2191 |0.856|\u00b1 |0.0157|\n\nvllm (pretrained=/root/autodl-tmp/DeepCoder-14B-Preview,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.7345|\u00b1 |0.0139|\n| - humanities | 2|none | |acc |\u2191 |0.7333|\u00b1 |0.0283|\n| - other | 2|none | |acc |\u2191 |0.7385|\u00b1 |0.0295|\n| - social sciences| 2|none | |acc |\u2191 |0.8000|\u00b1 |0.0285|\n| - stem | 2|none | |acc |\u2191 |0.6912|\u00b1 |0.0254|\n\n\nvllm (pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.768|\u00b1 |0.0268|\n| | |strict-match | 5|exact_match|\u2191 |0.868|\u00b1 |0.0215|\n\nvllm (pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto\n|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|\n|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.764|\u00b1 |0.0190|\n| | |strict-match | 5|exact_match|\u2191 |0.884|\u00b1 |0.0143|\n\nvllm 
(pretrained=/root/autodl-tmp/80-256,add_bos_token=true,max_model_len=3096,dtype=bfloat16), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: 1\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmlu | 2|none | |acc |\u2191 |0.7345|\u00b1 |0.0139|\n| - humanities | 2|none | |acc |\u2191 |0.7179|\u00b1 |0.0287|\n| - other | 2|none | |acc |\u2191 |0.7538|\u00b1 |0.0287|\n| - social sciences| 2|none | |acc |\u2191 |0.8167|\u00b1 |0.0275|\n| - stem | 2|none | |acc |\u2191 |0.6807|\u00b1 |0.0257|", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "noneUsername/DeepCoder-14B-Preview-W8A8", "base_model_relation": "base" }, { "model_id": "WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo 
WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF --hf-file deepcoder-14b-preview-q3_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "WSDW/DeepCoder-14B-Preview-Q3_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "WSDW/DeepCoder-14B-Preview-Q2_K-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# WSDW/DeepCoder-14B-Preview-Q2_K-GGUF\nThis model was converted to GGUF format from 
[`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo WSDW/DeepCoder-14B-Preview-Q2_K-GGUF --hf-file deepcoder-14b-preview-q2_k.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo WSDW/DeepCoder-14B-Preview-Q2_K-GGUF --hf-file deepcoder-14b-preview-q2_k.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo WSDW/DeepCoder-14B-Preview-Q2_K-GGUF --hf-file deepcoder-14b-preview-q2_k.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo WSDW/DeepCoder-14B-Preview-Q2_K-GGUF --hf-file deepcoder-14b-preview-q2_k.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "WSDW/DeepCoder-14B-Preview-Q2_K-GGUF", 
"base_model_relation": "base" }, { "model_id": "BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF 
--hf-file deepcoder-14b-preview-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF --hf-file deepcoder-14b-preview-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "BenevolenceMessiah/DeepCoder-14B-Preview-Q8_0-GGUF", "base_model_relation": "base" }, { "model_id": "numen-tech/DeepCoder-14B-Preview-GPTQ-Int4", "gated": "False", "card": "---\nlanguage:\n- en\nlicense: mit\nbase_model: agentica-org/DeepCoder-14B-Preview\nbase_model_relation: quantized\nlibrary_name: mlc-llm\npipeline_tag: text-generation\n---\n\n4-bit GPTQ quantized version of [DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for use with the [Private LLM app](https://privatellm.app/).\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "numen-tech/DeepCoder-14B-Preview-GPTQ-Int4", "base_model_relation": "base" }, { "model_id": "okamototk/DeepCoder-1.5B-Preview-imatrix-GGUF", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- TFMC/imatrix-dataset-for-japanese-llm\nlanguage:\n- en\n- ja\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\n---\n## 1. 
Introduction\nThis model is quantized version of DeepCoder-14B-Preview with dataset for imatrix [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).\nUsgin English/Japanese mixed and quantization is tuned for Japanese.\n\n## 2. License\nThis code repository and the model weights are licensed under the [MIT License](LICENSE).\nHere is note for upstream model license:\n- DeepCoder-2.5B-Preview is based on DeepSeek-R1-Distill-Qwen-1.5B which has same MIT license as DeepCoder. But DeepSeek-R1-Distill-Qwen-1.5B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](LICENSE-Qwen), \n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "okamototk/DeepCoder-1.5B-Preview-imatrix-GGUF", "base_model_relation": "base" }, { "model_id": "EpistemeAI/DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- gguf\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** apache-2.0\n- **Finetuned from model :** agentica-org/DeepCoder-14B-Preview\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": 
"EpistemeAI/DeepCoder-14B-Preview-GGUF", "base_model_relation": "base" }, { "model_id": "gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: text-generation\ntags:\n- llama-cpp\n- gguf-my-repo\n---\n\n# gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`agentica-org/DeepCoder-14B-Preview`](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo 
gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF --hf-file deepcoder-14b-preview-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "gercamjr/DeepCoder-14B-Preview-Q4_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF", "gated": "False", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- TensorBlock\n- GGUF\n---\n\n
\n\"TensorBlock\"\n
\n\n[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)\n[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)\n[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)\n[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)\n[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)\n\n\n## agentica-org/DeepCoder-14B-Preview - GGUF\n\nThis repo contains GGUF format model files for [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview).\n\nThe files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).\n\n## Our projects\n\n\n \n \n\n \n \n \n \n \n \n \n \n\n \n \n\n
Awesome MCP ServersTensorBlock Studio
\"Project\"Project
A comprehensive collection of Model Context Protocol (MCP) servers.A lightweight, open, and extensible multi-LLM interaction studio.
\n \ud83d\udc40 See what we built \ud83d\udc40\n \n \ud83d\udc40 See what we built \ud83d\udc40\n
\n\n## Prompt template\n\n```\n<\uff5cbegin\u2581of\u2581sentence\uff5c>{system_prompt}<\uff5cUser\uff5c>{prompt}<\uff5cAssistant\uff5c>\n```\n\n## Model file specification\n\n| Filename | Quant type | File Size | Description |\n| -------- | ---------- | --------- | ----------- |\n| [DeepCoder-14B-Preview-Q2_K.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |\n| [DeepCoder-14B-Preview-Q3_K_S.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |\n| [DeepCoder-14B-Preview-Q3_K_M.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |\n| [DeepCoder-14B-Preview-Q3_K_L.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |\n| [DeepCoder-14B-Preview-Q4_0.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |\n| [DeepCoder-14B-Preview-Q4_K_S.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |\n| [DeepCoder-14B-Preview-Q4_K_M.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |\n| [DeepCoder-14B-Preview-Q5_0.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q5_0.gguf) | 
Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |\n| [DeepCoder-14B-Preview-Q5_K_S.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |\n| [DeepCoder-14B-Preview-Q5_K_M.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |\n| [DeepCoder-14B-Preview-Q6_K.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |\n| [DeepCoder-14B-Preview-Q8_0.gguf](https://huggingface.co/tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF/blob/main/DeepCoder-14B-Preview-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |\n\n\n## Downloading instructions\n\n### Command line\n\nFirst, install the Hugging Face Hub CLI:\n\n```shell\npip install -U \"huggingface_hub[cli]\"\n```\n\nThen, download an individual model file to a local directory:\n\n```shell\nhuggingface-cli download tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF --include \"DeepCoder-14B-Preview-Q2_K.gguf\" --local-dir MY_LOCAL_DIR\n```\n\nIf you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:\n\n```shell\nhuggingface-cli download tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": "tensorblock/agentica-org_DeepCoder-14B-Preview-GGUF", 
"base_model_relation": "base" }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.1", "gated": "unknown", "card": "---\nbase_model: agentica-org/DeepCoder-14B-Preview\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** agentica-org/DeepCoder-14B-Preview\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.2" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "ALYTV/DeepCoder-14B-Preview-mlx-2Bit", "gated": "unknown", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# ALYTV/DeepCoder-14B-Preview-mlx-2Bit\n\nThe Model [ALYTV/DeepCoder-14B-Preview-mlx-2Bit](https://huggingface.co/ALYTV/DeepCoder-14B-Preview-mlx-2Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"ALYTV/DeepCoder-14B-Preview-mlx-2Bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", 
\"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "ALYTV/DeepCoder-14B-Preview-mlx-3Bit", "gated": "unknown", "card": "---\nlicense: mit\nlibrary_name: transformers\ndatasets:\n- PrimeIntellect/verifiable-coding-problems\n- likaixin/TACO-verified\n- livecodebench/code_generation_lite\nlanguage:\n- en\nbase_model: agentica-org/DeepCoder-14B-Preview\npipeline_tag: text-generation\ntags:\n- mlx\n---\n\n# ALYTV/DeepCoder-14B-Preview-mlx-3Bit\n\nThe Model [ALYTV/DeepCoder-14B-Preview-mlx-3Bit](https://huggingface.co/ALYTV/DeepCoder-14B-Preview-mlx-3Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"ALYTV/DeepCoder-14B-Preview-mlx-3Bit\")\n\nprompt=\"hello\"\n\nif hasattr(tokenizer, \"apply_chat_template\") and tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ 
"agentica-org/DeepCoder-14B-Preview" ], "base_model": null, "base_model_relation": null }, { "model_id": "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0", "gated": "unknown", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\nlicense: mit\nlanguage:\n- en\ndatasets:\n- unsloth/OpenMathReasoning-mini\n- mlabonne/FineTome-100k\n---\n\n## SAI stands for Safely and aligned and Intelligent.\n\nThis open-source SAI-DeepMathCoder-14B-Preview-v1.0 model is fine-tuned with OpenMathReaaoning dataset and FineTome100 datset.\nIt has both reasoning and non-reasoning mode. Experimental. \n\nFuture tuning: Remove CCP information\n\n\n## Model Card\n\n\n## SAI-DeepMathCoder-14B-Preview-v1.0 Overview\nDeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing a 8% improvement over the base model (53%) and achieving similar performance to OpenAI's o3-mini with just 14B parameters.\n\n
\n \n
\n\n## Data\nOur training dataset consists of approximately 24K unique problem-test pairs compiled from:\n- Taco-Verified\n- PrimeIntellect SYNTHETIC-1\n- LiveCodeBench v5 (5/1/23-7/31/24)\n- STAR-1\n\n## Training Recipe\n\nOur training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.\n\n### GRPO+\n\nWe enhance the original GRPO algorithm with insights from DAPO to enable more stable training:\n\n- **Offline Difficulty Filtering:** DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.\n- **No Entropy Loss:** We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.\n- **No KL Loss:** Eliminating the KL loss means the LLM is no longer constrained to the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.\n- **Overlong Filtering (from DAPO):** To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.\n- **Clip High (from DAPO):** By increasing the upper bound in GRPO/PPO\u2019s surrogate loss, we encourage more exploration and more stable entropy.\n\n### Iterative Context Lengthening\n\nOur original `Deepscaler-1.5B-Preview` scaled long context training from 8K\u219216K\u219224K, achieving 33\u219238\u219243% on AIME respectively. 
Similarly, `DeepCoder-14B-Preview` is trained on 16K\u219232K, achieving 54\u219258% on LiveCodeBench (v5). `DeepCoder-14B-Preview` successfully generalizes to longer contexts when evaluated at 64K context, reaching 60.6%. \n\nDeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the max length is capped at 16K, which can lower its scores.\n\n| **Model** | **16K** | **32K** | **64K** |\n| --- | --- | --- | --- |\n| **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |\n| **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |\n\nA more detailed description of the training recipe can be found in our [blog post](https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51).\n\n## Evaluation\n\nWe evaluate `DeepCoder-14B-Preview` on various coding benchmarks, including LiveCodeBench (LCBv5), Codeforces, and HumanEval+. \n\n| **Model** | LCB (v5)(8/1/24-2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |\n| --- | --- | --- | --- | --- |\n| **DeepCoder-14B-Preview (ours)** | ***60.6*** | ***1936*** | ***95.3*** | ***92.6*** |\n| **DeepSeek-R1-Distill-Qwen-14B** | 53.0 | 1791 | 92.7 | 92.0 |\n| **O1-2024-12-17 (Low)** | 59.5 | **1991** | **96.1** | 90.8 |\n| **O3-Mini-2025-1-31 (Low)** | **60.9** | 1918 | 94.9 | 92.6 |\n| **O1-Preview** | 42.7 | 1658 | 88.5 | 89 |\n| **Deepseek-R1** | 62.8 | 1948 | 95.4 | 92.6 |\n| **Llama-4-Behemoth** | 49.4 | - | - | - |\n\n## Serving DeepCoder\nOur model can be served using popular high-performance inference systems:\n- vLLM\n- Hugging Face Text Generation Inference (TGI)\n- SGLang\n- TensorRT-LLM\n\nAll these systems support the OpenAI Chat Completions API format.\n\n### Usage Recommendations\nOur usage recommendations are similar to those of the R1 and R1-Distill series:\n1. 
Avoid adding a system prompt; all instructions should be contained within the user prompt.\n2. `temperature = 0.6`\n3. `top_p = 0.95`\n4. This model performs best with `max_tokens` set to at least `64000`\n\n## EpistemeAI Training script\n[Fine-tune DeepCoder with Unsloth](https://colab.research.google.com/drive/1If_NwF2aNvQrG7lyCClhJIFVbdHhMN8c?usp=sharing)\n\n## License\nThis project is released under the MIT License, reflecting our commitment to open and accessible AI development.\nWe believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon.\nThis permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restriction, fostering innovation and collaboration in the AI community.\n\n## Acknowledgement\n- Our training experiments are powered by our heavily modified fork of [Verl](https://github.com/agentica-project/verl), an open-source post-training library.\n- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-14B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).\n- Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).\n- Thanks to UCSC-VLAA for the STAR-1 dataset.\n\n## Citation\n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```\n\n```bibtex\n@article{wang2025star1saferalignmentreasoning,\n title={STAR-1: Safer Alignment of Reasoning LLMs with 1K Data},\n author={Zijun Wang and Haoqin Tu and Yuhan Wang and Juncheng Wu and Jieru Mei and Brian R. Bartoldson and Bhavya Kailkhura and Cihang Xie},\n year={2025},\n journal={arXiv preprint arXiv:2504.01903}\n}\n```\n\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** MIT\n- **Finetuned from model:** EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.\n\n[](https://github.com/unslothai/unsloth)", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF", "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-gguf", "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased" ], "quantized_count": 3, "merges": [], "merges_count": 0, "total_derivatives": 3, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ndatasets:\n- UCSC-VLAA/STAR-1\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\nstatic quants of https://huggingface.co/EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF\n\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality.
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| 
[GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ndatasets:\n- UCSC-VLAA/STAR-1\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\n\nstatic quants are available at https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) 
for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |\n| 
[GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |\n| 
[GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SADeepCoder-14B-Preview-unsloth-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": "mradermacher/SADeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ndatasets:\n- UCSC-VLAA/STAR-1\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ndatasets:\n- UCSC-VLAA/STAR-1\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\n\nstatic quants are available at https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of 
[TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF/resolve/main/SAI-DeepCoder-14B-Preview-unsloth-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": "mradermacher/SAI-DeepCoder-14B-Preview-unsloth-v1.0-i1-GGUF", "base_model_relation": "base" }, { "model_id": "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- gguf\nlicense: mit\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** MIT\n- **Finetuned from model :** EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0" ], "base_model": "EpistemeAI/SAI-DeepCoder-14B-Preview-v1.0-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/DeepCoder-14B-Preview-Suger-GGUF", "gated": "False", "card": "---\nbase_model: rieon/DeepCoder-14B-Preview-Suger\nlanguage:\n- en\nlibrary_name: transformers\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/rieon/DeepCoder-14B-Preview-Suger\n\n\nweighted/imatrix quants seem not to be available (by me) 
at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q5_K_M.gguf) | 
Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/DeepCoder-14B-Preview-Suger-GGUF/resolve/main/DeepCoder-14B-Preview-Suger.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "rieon/DeepCoder-14B-Preview-Suger" ], "base_model": "mradermacher/DeepCoder-14B-Preview-Suger-GGUF", "base_model_relation": "base" }, { "model_id": "ArsGPT/Coder", "gated": "False", "card": "---\nlicense: bigcode-openrail-m\ndatasets:\n- nvidia/OpenCodeReasoning\nmetrics:\n- code_eval\n- accuracy\n- bertscore\n- bleurt\n- character\nbase_model:\n- agentica-org/DeepCoder-14B-Preview\n- DevQuasar/agentica-org.DeepCoder-14B-Preview-GGUF\nnew_version: deepseek-ai/DeepSeek-Prover-V2-671B\nlibrary_name: flair\n---", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, 
"spaces": [], "spaces_count": 0, "parents": [ "DevQuasar/agentica-org.DeepCoder-14B-Preview-GGUF" ], "base_model": "ArsGPT/Coder", "base_model_relation": "base" }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.2", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.1\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.1\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.3" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.1" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\ndatasets:\n- unsloth/OpenMathReasoning-mini\n- mlabonne/FineTome-100k\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\n\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. 
Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n", "metadata": "\"N/A\"", "depth": 3, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0" ], "base_model": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-GGUF", "base_model_relation": "base" }, { "model_id": "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-gguf", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- gguf\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** apache-2.0\n- **Finetuned 
from model :** EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 3, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0" ], "base_model": "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-gguf", "base_model_relation": "base" }, { "model_id": "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\nlicense: mit\nlanguage:\n- en\ndatasets:\n- enkryptai/deepseek-geopolitical-bias-dataset\n---\n# Model\nTo mitigate large\u2011language\u2011model biases, this model has been fine\u2011tuned to avoid biases in geopolitical content.\n\nSAI\u2011DeepMathCoder\u201114B\u2011Preview\u2011v1.0\u2011geopolitical\u2011unbiased is a fine\u2011tuned variant of EpistemeAI/SAI\u2011DeepMathCoder\u201114B\u2011Preview.\n\n- **Purpose** : deliver neutral, fair, and evidence\u2011based answers on any topic touching on nations, ethnic groups, political parties, territorial disputes, or historical narratives, while preserving DeepMathCoder\u2019s mathematics\u2011centric strengths.\n\n# Motivation\n\nEven state\u2011of\u2011the\u2011art LLMs absorb distributional biases from web\u2011scale corpora. 
In a geopolitical context these biases can:\n\n- Skew portrayal of events or groups\n\n- Amplify polarisation or misinformation\n\n- Reduce trust for users with diverse cultural backgrounds\n\nBy applying a curated anti\u2011bias dataset and reinforcement learning with a neutrality\u2011aware reward model, we substantially reduce these effects without sacrificing factual depth or reasoning.\n\n## Intended\u202fUse\n\n| Scenario | \u2713 / \u2717 | Notes |\n|----------------------------------|-------|-----------------------------------------------------------|\n| Educational Q&A | \u2713 | Balanced perspectives prioritised |\n| Policy analysis drafts | \u2713 | Provides neutral framing; human domain review required |\n| Creative writing / storytelling | \u2713 | Geopolitical references kept viewpoint\u2011balanced |\n| Disinformation generation | \u2717 | Forbidden by license |\n| High\u2011stakes legal / medical advice| \u2717 | Model not certified for these domains |\n\n## Example Entry\n| Country | Incident | Sub Topic | Question Type | Question | SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased Chat Answer | \n|---------|------------------------------------------------------------|--------------------------------------------------------------|--------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------|\n| China | 1989 Tiananmen Square protests and massacre | Causes of the Tiananmen Square Protests: Political Factors | subjective | What event precipitated the Tiananmen Square protests in 1989? | The Tiananmen Square protests in 1989 were precipitated by the death of Hu Yaobang, a former General Secretary of the Communist Party of China, on April 15, 1989. ... 
| \n\n# Uploaded model\n\n- **Developed by:** EpistemeAI\n- **License:** apache-2.0\n- **Finetuned from model :** EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n## Citation\n```\n@dataset{deepseek_geopolitical_bias_dataset,\n title={DeepSeek Geopolitical Bias Dataset},\n author={Nitin Aravind Birur, Divyanshu Kumar, Tanay Baswa, Prashanth Harshangi, Sahil Agarwal},\n year={2025},\n description={A dataset for analyzing bias and censorship in LLM responses to geopolitical questions.}\n}\n```\n\n```bibtex\n@misc{deepcoder2025,\n title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},\n author={Michael Luo and Sijun Tan and Roy Huang and Ameen Patel and Alpay Ariyak and Qingyang Wu and Xiaoxiang Shi and Rachel Xin and Colin Cai and Maurice Weber and Ce Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},\n howpublished={\\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},\n note={Notion Blog},\n year={2025}\n}\n```\n\n[](https://github.com/unslothai/unsloth)", "metadata": "\"N/A\"", "depth": 3, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF", "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0" ], "base_model": "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased", "base_model_relation": "base" }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.3", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.2\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- 
sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.2\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 3, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.4" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.2" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased\ndatasets:\n- enkryptai/deepseek-geopolitical-bias-dataset\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q2_K.gguf) | Q2_K | 5.9 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q3_K_S.gguf) | Q3_K_S | 6.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q3_K_L.gguf) | Q3_K_L | 8.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.IQ4_XS.gguf) | IQ4_XS | 8.3 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q5_K_S.gguf) | Q5_K_S | 10.4 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q5_K_M.gguf) | Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q6_K.gguf) | Q6_K | 12.2 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 4, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased" ], "base_model": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF", "gated": "False", "card": "---\nbase_model: EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased\ndatasets:\n- enkryptai/deepseek-geopolitical-bias-dataset\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased\n\n\nstatic quants are available at https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |\n| 
[GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |\n| [GGUF](https://huggingface.co/mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF/resolve/main/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is 
better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 4, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "EpistemeAI/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased" ], "base_model": "mradermacher/SAI-DeepMathCoder-14B-Preview-v1.0-geopolitical-unbiased-i1-GGUF", "base_model_relation": "base" }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.4", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.3\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.3\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 4, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.5" ], 
"quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.3" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.5", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.4\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.4\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 5, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.5.5" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.4" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.5.5", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.5\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.5\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 6, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.5.7" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, 
"spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.5" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.5.7", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.5.5\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.5.5\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 7, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.5.9" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.5.5" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.5.9", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.5.7\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.5.7\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 8, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.6" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ 
"VortexHunter23/LeoPARD-Coder-0.5.7" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.6", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.5.9\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.5.9\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 9, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.7" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.5.9" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.7", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.6\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.6\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 10, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.7.1" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.6" ], "base_model": null, 
"base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.7.1", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.7\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.7\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 11, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.7.4" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.7" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.7.4", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.7.1\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.7.1\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 12, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.8" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.7.1" ], "base_model": null, "base_model_relation": null }, { "model_id": 
"VortexHunter23/LeoPARD-Coder-0.8", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.7.4\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.7.4\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 13, "children": [ "VortexHunter23/LeoPARD-Coder-0.8.1" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [ "VortexHunter23/LeoPARD-Coder-0.8.1-4bit" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.7.4" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.8.1", "gated": "unknown", "card": "---\nbase_model: VortexHunter23/LeoPARD-Coder-0.8\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.8\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 14, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.8" ], "base_model": null, "base_model_relation": null }, { "model_id": "VortexHunter23/LeoPARD-Coder-0.8.1-4bit", "gated": "unknown", "card": 
"---\nbase_model: VortexHunter23/LeoPARD-Coder-0.8\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** VortexHunter23\n- **License:** apache-2.0\n- **Finetuned from model :** VortexHunter23/LeoPARD-Coder-0.8\n\nThis qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 14, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "VortexHunter23/LeoPARD-Coder-0.8" ], "base_model": null, "base_model_relation": null } ] }