---
license: other
license_name: deepseek-license
license_link: https://github.com/deepseek-ai/DeepSeek-Coder-V2/raw/main/LICENSE-MODEL
tags:
- code
language:
- code
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
model_creator: DeepSeek AI
model_name: DeepSeek-Coder-V2-Lite-Instruct
model_type: deepseek2
datasets:
- m-a-p/CodeFeedback-Filtered-Instruction
quantized_by: CISC
---

# DeepSeek-Coder-V2-Lite-Instruct - SOTA GGUF
- Model creator: [DeepSeek AI](https://huggingface.co/deepseek-ai)
- Original model: [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct)

<!-- description start -->
## Description

This repo contains State Of The Art quantized GGUF format model files for [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).

Quantization was done with an importance matrix that was trained for ~250K tokens (64 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset.
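
For reference, here is a minimal sketch of how such a calibration text file could be assembled from that dataset before running llama.cpp's `llama-imatrix` tool; the column name `answer` and the sample count below are illustrative assumptions, not the exact recipe used for this repo:

```python
from datasets import load_dataset

# Assumed layout of m-a-p/CodeFeedback-Filtered-Instruction: a "query" and an "answer" column.
dataset = load_dataset("m-a-p/CodeFeedback-Filtered-Instruction", split="train")

# Concatenate a subset of answers into a plain-text file that
# llama-imatrix can consume via its -f option.
with open("calibration.txt", "w", encoding="utf-8") as f:
    for row in dataset.select(range(1000)):
        f.write(row["answer"] + "\n\n")
```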

Fill-in-Middle token metadata has been added; see the [example](#simple-llama-cpp-python-example-fill-in-middle-code).

NOTE: Due to some of the tensors in this model being oddly shaped, a considerable portion of the quantization fell back to IQ4_NL instead of the specified method, resulting in somewhat larger (and "smarter"; even IQ1_M is quite usable) model files than usual!
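
You can inspect this yourself with the `gguf` Python package that ships with llama.cpp; a minimal sketch (any of the files from the table below works) that lists each tensor's quantization type:

```python
from gguf import GGUFReader

# List every tensor and the quant type it actually ended up with,
# which shows where the IQ4_NL fallback was used.
reader = GGUFReader("DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf")
for tensor in reader.tensors:
    print(f"{tensor.name}: {tensor.tensor_type.name}")
```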

<!-- description end -->


<!-- prompt-template start -->
## Prompt template: DeepSeek v2

```
User: {prompt}

Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv3 files are compatible with llama.cpp from May 29th 2024 onwards, as of commit [fb76ec3](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c).

They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
* GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
* GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
* GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
* GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
* GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
* GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
* GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
* GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
* GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
* GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
* GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
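
As a rough sanity check of the bpw figures above: with about 16B total parameters (see the original model card below), 2.06 bpw nominally works out to roughly 4 GB, whereas the IQ2_XXS file in the table below is 5.1 GB; most of that gap is the IQ4_NL fallback mentioned in the description. A small back-of-the-envelope sketch:

```python
# Rough size estimate: total parameters x bits-per-weight / 8 bytes.
# The low-bit files come out noticeably larger than this nominal figure
# because a portion of the tensors fell back to IQ4_NL (see the NOTE above).
params = 16e9  # approximate total parameter count from the original model card

for name, bpw, actual_gb in [("IQ1_S", 1.56, 4.5), ("IQ2_XXS", 2.06, 5.1)]:
    nominal_gb = params * bpw / 8 / 1e9
    print(f"{name}: nominal ~{nominal_gb:.1f} GB, actual file ~{actual_gb} GB")
```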

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf) | IQ1_S | 1 | 4.5 GB | 5.5 GB | smallest, significant quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf) | IQ1_M | 1 | 4.7 GB | 5.7 GB | very small, significant quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 5.1 GB | 6.1 GB | very small, high quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 5.4 GB | 6.4 GB | very small, high quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf) | IQ2_S | 2 | 5.4 GB | 6.4 GB | small, substantial quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf) | IQ2_M | 2 | 5.7 GB | 6.7 GB | small, greater quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 6.3 GB | 7.3 GB | very small, high quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 6.5 GB | 7.5 GB | small, substantial quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf) | IQ3_S | 3 | 6.8 GB | 7.8 GB | small, greater quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf) | IQ3_M | 3 | 6.9 GB | 7.9 GB | medium, balanced quality - recommended |
| [DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf) | IQ4_NL | 4 | 8.1 GB | 9.1 GB | small, substantial quality loss |

Generated importance matrix file: [DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat](https://huggingface.co/CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat)

**Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
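
If you would rather fetch a file programmatically than via the links above, here is a minimal sketch using `huggingface_hub` (pick whichever quant fits your memory budget):

```python
from huggingface_hub import hf_hub_download

# Download one of the quantized files from this repo into the current directory.
model_path = hf_hub_download(
    repo_id="CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF",
    filename="DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf",
    local_dir=".",
)
print(model_path)
```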

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [fb76ec3](https://github.com/ggerganov/llama.cpp/commit/fb76ec31a9914b7761c1727303ab30380fd4f05c) or later.

```shell
./llama-cli -ngl 28 -m DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf --color -c 131072 --temp 0 --repeat-penalty 1.1 -p "User: {prompt}\n\nAssistant:"
```

Change `-ngl 28` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 131072` to the desired sequence length.

If you are low on V/RAM try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size).
There is a similar option for V-cache (`-ctv`), however that requires Flash Attention [which is not working yet with this model](https://github.com/ggerganov/llama.cpp/issues/7343).
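
If you use llama-cpp-python (see below) instead of the CLI, the same K-cache quantization can be requested when loading the model; a minimal sketch, assuming your installed version exposes the `type_k` parameter (the value 8 is ggml's `GGML_TYPE_Q8_0`):

```python
from llama_cpp import Llama

# Quantize the K-cache to q8_0 to cut memory use at long context lengths.
# type_v is left at its default because V-cache quantization needs
# Flash Attention, which does not work with this model yet.
llm = Llama(
    model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf",
    n_gpu_layers=28,
    n_ctx=131072,
    type_k=8,
)
```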

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Prebuilt wheel with basic CPU support
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
# Prebuilt wheel with NVidia CUDA acceleration (cu121 shown; use cu122 etc. to match your CUDA version)
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
# Prebuilt wheel with Metal GPU acceleration
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
# Build base version with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# Or with Vulkan acceleration
CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
# Or with Kompute acceleration
CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
# Or with SYCL acceleration
CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Chat Completion API

llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
print(llm.create_chat_completion(
    repeat_penalty = 1.1,
    messages = [
        {
            "role": "user",
            "content": "Pick a LeetCode challenge and solve it in Python."
        }
    ]
))
```

#### Simple llama-cpp-python example fill-in-middle code

```python
from llama_cpp import Llama

# Completion API

prompt = "def add("
suffix = "\n    return sum\n\n"

llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
output = llm.create_completion(
    temperature = 0.0,
    repeat_penalty = 1.0,
    prompt = prompt,
    suffix = suffix
)

# Models sometimes repeat suffix in response, attempt to filter that
response = output["choices"][0]["text"]
response_stripped = response.rstrip()
unwanted_response_suffix = suffix.rstrip()
unwanted_response_length = len(unwanted_response_suffix)

filtered = False
if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix:
    response = response_stripped[:-unwanted_response_length]
    filtered = True

print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{suffix}\033[0m")
```

<!-- README_GGUF.md-how-to-run end -->

<!-- original-model-card start -->
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<p align="center">
  <a href="#4-api-platform">API Platform</a> |
  <a href="#5-how-to-run-locally">How to Use</a> |
  <a href="#6-license">License</a> |
</p>

<p align="center">
  <a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a>
</p>

# DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

## 1. Introduction
We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from DeepSeek-Coder-V2-Base with 6 trillion tokens sourced from a high-quality and multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-Coder-V2-Base, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.

<p align="center">
  <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true">
</p>

In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper.

## 2. Model Downloads

We release DeepSeek-Coder-V2 with 16B and 236B total parameters, based on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework, to the public; these have only 2.4B and 21B activated parameters respectively, and both base and instruct models are included.

<div align="center">

| **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** |
| :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: |
| DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
| DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |
| DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) |
| DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) |

</div>

## 3. Chat Website

You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in)

## 4. API Platform
We also provide an OpenAI-compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/). Sign up to get millions of free tokens, or pay as you go at an unbeatable price.
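
For illustration, here is a minimal sketch of calling that API with the official `openai` Python client; the base URL and model name below are assumptions based on the platform documentation, so check [platform.deepseek.com](https://platform.deepseek.com/) for the current values:

```python
from openai import OpenAI

# The DeepSeek Platform exposes an OpenAI-compatible API, so the standard client works.
client = OpenAI(api_key="<your DeepSeek API key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-coder",  # assumed identifier for DeepSeek-Coder-V2 on the platform
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    temperature=0.3,
)
print(response.choices[0].message.content)
```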

<p align="center">
  <img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true">
</p>

## 5. How to run locally
**Here, we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to utilize DeepSeek-Coder-V2 in BF16 format for inference, 8 x 80GB GPUs are required.**

### Inference with Huggingface's Transformers
You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.

#### Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

#### Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```

#### Chat Completion

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
    { 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of the <|end▁of▁sentence|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```

The complete chat template can be found within `tokenizer_config.json` located in the Hugging Face model repository.

An example chat template is as follows:

```bash
<|begin▁of▁sentence|>User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
```

You can also add an optional system message:

```bash
<|begin▁of▁sentence|>{system_message}

User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
```
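
To check the rendered prompt (including an optional system message) against the tokenizer itself, here is a minimal sketch using `apply_chat_template` from Transformers; passing `tokenize=False` returns the prompt string instead of token ids:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},  # optional, illustrative
    {"role": "user", "content": "write a quick sort algorithm in python."},
]

# Render the chat template as a plain string so it can be compared with the examples above.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
```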

### Inference with vLLM (recommended)
To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "write a quick sort algorithm in python."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

## 6. License

This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). The DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use.

## 7. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).