---
datasets:
- Trelis/function_calling_v3
license: other
extra_gated_prompt: "Purchase access to this repo [HERE](https://buy.stripe.com/3cs3cY5tPdmbaMU6ps)!"
tags:
- function-calling
- function calling
---
# Function Calling Fine-tuned DeepSeek Chat 67B

Purchase access to this model [here](https://buy.stripe.com/3cs3cY5tPdmbaMU6ps).

This model is fine-tuned for function calling.
- The function metadata format is the same as that used by OpenAI.
- The model is suitable for commercial use.
- There is no GGUF version yet, as I'm awaiting a tokenizer.model file from the base repo.

Check out other fine-tuned function calling models [here](https://trelis.com/function-calling/).

## Quick Server Setup
Runpod one-click templates (you must add a Hugging Face Hub access token, `HUGGING_FACE_HUB_TOKEN`, to the environment variables, as this is a gated model):
- [TGI API, 8-bit EETQ](https://runpod.io/gsc?template=j29uypqrc1&ref=jmfkcdio)
- [TGI API, AWQ](https://runpod.io/gsc?template=cfzbdwjcpx&ref=jmfkcdio)

Runpod Affiliate [Link](https://runpod.io?ref=jmfkcdio) (helps support the Trelis channel).
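
Once a template is running, you can query TGI's `generate` endpoint directly. Here is a minimal sketch (the endpoint URL is a placeholder for your own pod's URL, and the prompt must follow the format described below):

```python
import requests

# Replace with your own TGI endpoint (e.g. the Runpod proxy URL for your pod).
ENDPOINT = "https://YOUR-POD-ID-8080.proxy.runpod.net"

# Illustrative prompt; see the Prompt Format section below for the full format.
prompt = "User: You have access to the following functions. Use them if required:\n\n[...]\n\nWhat is the weather in London?\n\nAssistant:"

response = requests.post(
    f"{ENDPOINT}/generate",
    json={
        "inputs": prompt,
        "parameters": {"max_new_tokens": 200, "temperature": 0.01},
    },
)
print(response.json()["generated_text"])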

## Inference Scripts
See below for the sample prompt format.

Complete inference scripts are available for purchase [here](https://trelis.com/enterprise-server-api-and-inference-guide/):
- Easily format prompts using tokenizer.apply_chat_template (starting from OpenAI-formatted functions and a list of messages).
- Automate catching, handling and chaining of function calls.

## Prompt Format
```python
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "User: ", "\n\nAssistant:" # Deepseek
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```
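
For concreteness, `functionList` is just the OpenAI-style function metadata serialised as a string. A minimal sketch continuing the snippet above (the metadata and user prompt are illustrative, and serialising with `json.dumps(..., indent=4)` is an assumption, not necessarily how the training data was formatted):

```python
import json

# Hypothetical OpenAI-style function metadata; see FUNCTION_METADATA below for a fuller example.
function_metadata = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city, e.g., San Francisco"}
                },
                "required": ["city"],
            },
        },
    }
]

# Serialise the metadata into the prompt string.
functionList = json.dumps(function_metadata, indent=4)
user_prompt = "What is the current weather in London?"

# Uses B_FUNC, E_FUNC, B_INST, E_INST from the snippet above.
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```

The `apply_chat_template` route below avoids having to assemble this string by hand.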

### Using tokenizer.apply_chat_template
For easier application of the prompt, you can set things up as follows:

Set up `messages`:
```
[
    {
        "role": "function_metadata",
        "content": "FUNCTION_METADATA"
    },
    {
        "role": "user",
        "content": "What is the current weather in London?"
    },
    {
        "role": "function_call",
        "content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}"
    },
    {
        "role": "function_response",
        "content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}"
    },
    {
        "role": "assistant",
        "content": "The current weather in London is Cloudy with a temperature of 15 Celsius"
    }
]
```

with `FUNCTION_METADATA` as:
```
[
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use."
                    }
                },
                "required": ["city"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_clothes",
            "description": "This function provides a suggestion of clothes to wear based on the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "temperature": {
                        "type": "string",
                        "description": "The temperature, e.g., 15 C or 59 F"
                    },
                    "condition": {
                        "type": "string",
                        "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'"
                    }
                },
                "required": ["temperature", "condition"]
            }
        }
    }
]
```
and then apply the chat template to get a formatted prompt:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Trelis/deepseek-llm-67b-chat-function-calling-v3', trust_remote_code=True)

prompt = tokenizer.apply_chat_template(messages, tokenize=False)
```
Since this is a gated model, you first need to run:
```
pip install huggingface_hub
huggingface-cli login
```
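
Alternatively, in non-interactive environments you can authenticate programmatically; the token value here is a placeholder for your own access token:

```python
from huggingface_hub import login

# Paste your own Hugging Face access token (placeholder value shown).
login(token="hf_xxxxxxxxxxxxxxxxx")
```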

### Manual Prompt
```
User: You have access to the following functions. Use them if required:

[
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the stock price of an array of stocks",
            "parameters": {
                "type": "object",
                "properties": {
                    "names": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "An array of stocks"
                    }
                },
                "required": [
                    "names"
                ]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_big_stocks",
            "description": "Get the names of the largest N stocks by market cap",
            "parameters": {
                "type": "object",
                "properties": {
                    "number": {
                        "type": "integer",
                        "description": "The number of largest stocks to get the names of, e.g. 25"
                    },
                    "region": {
                        "type": "string",
                        "description": "The region to consider, can be \"US\" or \"World\"."
                    }
                },
                "required": [
                    "number"
                ]
            }
        }
    }
]

Get the price of Apple's stock

Assistant:

{
    "name": "get_stock_price",
    "arguments": {
        "names": [
            "AAPL"
        ]
    }
}<|end▁of▁sentence|>
```
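
When the model responds with a function call like the one above, your client code is responsible for parsing it, executing the matching function, and feeding the result back as a `function_response` message before generating again. A minimal sketch of that loop (the `get_stock_price` implementation is a placeholder, and the response text is assumed to have special tokens already stripped):

```python
import json

# Placeholder implementation of the function described in the metadata above.
def get_stock_price(names):
    return {name: "demo price" for name in names}

AVAILABLE_FUNCTIONS = {"get_stock_price": get_stock_price}

def handle_response(response_text, messages):
    """Parse a function-call response, execute it, and extend the message list."""
    try:
        call = json.loads(response_text)
    except json.JSONDecodeError:
        # Not a function call: treat it as a normal assistant reply.
        messages.append({"role": "assistant", "content": response_text})
        return messages
    result = AVAILABLE_FUNCTIONS[call["name"]](**call["arguments"])
    messages.append({"role": "function_call", "content": response_text})
    messages.append({"role": "function_response", "content": json.dumps(result)})
    # Re-apply the chat template and generate again to get the final answer.
    return messages
```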

# Dataset
See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3).

# License
This model may be used commercially for inference according to the terms of the DeepSeek license, or for further fine-tuning and inference. Users may not re-publish or re-sell this model in the same or derivative form (including fine-tunes).

**The original model card follows below:**

<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<hr>

### 1. Introduction of Deepseek LLM

Introducing DeepSeek LLM, an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.

### 2. Model Summary
`deepseek-llm-67b-chat` is a 67B parameter model initialized from `deepseek-llm-67b-base` and fine-tuned on extra instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM)
- **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/)

### 3. How to Use
Here are some examples of how to use our model.
#### Chat Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-llm-67b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Who are you?"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```

If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.

```
User: {messages[0]['content']}

Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}

Assistant:
```
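
For illustration only, here is a minimal sketch of assembling that multi-turn template by hand in Python (the message contents are placeholders):

```python
# Hypothetical two-turn history plus a new user question.
messages = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am DeepSeek LLM, an AI assistant."},
    {"role": "user", "content": "What can you do?"},
]

EOS = "<|end▁of▁sentence|>"  # DeepSeek end-of-sentence token.

prompt = (
    f"User: {messages[0]['content']}\n\n"
    f"Assistant: {messages[1]['content']}{EOS}"
    f"User: {messages[2]['content']}\n\n"
    f"Assistant:"
)
```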

**Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input.
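
A quick way to see this behaviour, as a small sketch assuming the tokenizer loaded above:

```python
with_special = tokenizer("Hello", add_special_tokens=True).input_ids
without_special = tokenizer("Hello", add_special_tokens=False).input_ids

# With special tokens, the sequence should start with the BOS token id.
print(with_special[0] == tokenizer.bos_token_id)
print(len(with_special), len(without_special))
```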

### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details.

### 5. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).