Text Generation
Transformers
PyTorch
Safetensors
English
Chinese
llama
code
text-generation-inference
Instructions to use codefuse-ai/CodeFuse-CodeLlama-34B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use codefuse-ai/CodeFuse-CodeLlama-34B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="codefuse-ai/CodeFuse-CodeLlama-34B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codefuse-ai/CodeFuse-CodeLlama-34B")
model = AutoModelForCausalLM.from_pretrained("codefuse-ai/CodeFuse-CodeLlama-34B")
```
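As a quick check that the snippet above works, here is a minimal, illustrative generation call; the prompt and decoding settings (`max_new_tokens`, `temperature`) are assumptions for demonstration, not values from the model card:

```python
# Illustrative usage of the pipeline loaded above (prompt and settings are assumptions).
prompt = "def quicksort(arr):"
outputs = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.2)
print(outputs[0]["generated_text"])
```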
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use codefuse-ai/CodeFuse-CodeLlama-34B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "codefuse-ai/CodeFuse-CodeLlama-34B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/codefuse-ai/CodeFuse-CodeLlama-34B
```
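The vLLM server started above exposes an OpenAI-compatible API, so the same completion request can be sent from Python. This is a minimal sketch assuming the server is running locally on port 8000 and that the `openai` package is installed (both assumptions, not part of the original instructions); the prompt and sampling values mirror the curl example.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and a server started as shown above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused for a local server

completion = client.completions.create(
    model="codefuse-ai/CodeFuse-CodeLlama-34B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```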
- SGLang
How to use codefuse-ai/CodeFuse-CodeLlama-34B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "codefuse-ai/CodeFuse-CodeLlama-34B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
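Like vLLM, the SGLang server speaks the OpenAI-compatible completions API, so the curl call above can also be made from Python. A minimal sketch using the `requests` package (an assumption, not part of the original instructions), pointed at the local server on port 30000:

```python
# Minimal sketch: send the same completion request to the local SGLang server.
# Assumes `pip install requests` and a server started as shown above.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```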
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "codefuse-ai/CodeFuse-CodeLlama-34B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use codefuse-ai/CodeFuse-CodeLlama-34B with Docker Model Runner:
```shell
docker model run hf.co/codefuse-ai/CodeFuse-CodeLlama-34B
```
Commit 7b92b5b (parent: fbd6063) · Update README.md

README.md CHANGED
@@ -26,11 +26,19 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c

## Performance

| Model                       | HumanEval(pass@1) | Date    |
|:----------------------------|:-----------------:|:-------:|
| **CodeFuse-CodeLlama-34B**  |     **74.4%**     | 2023.9  |
| WizardCoder-Python-34B-V1.0 |       73.2%       | 2023.8  |
| GPT-4(zero-shot)            |       67.0%       | 2023.3  |
| PanGu-Coder2 15B            |       61.6%       | 2023.8  |
| CodeLlama-34b-Python        |       53.7%       | 2023.8  |
| CodeLlama-34b               |       48.8%       | 2023.8  |
| GPT-3.5(zero-shot)          |       48.1%       | 2022.11 |
| OctoCoder                   |       46.2%       | 2023.8  |
| StarCoder-15B               |       33.6%       | 2023.5  |
| LLaMA 2 70B(zero-shot)      |       29.9%       | 2023.7  |

<br>
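The tables in this commit report HumanEval pass@1. As background on the metric (not evaluation code from this repository), here is a small illustrative sketch of the commonly used unbiased pass@k estimator; the sample counts are made up for the example:

```python
# Illustrative sketch of the unbiased pass@k estimator commonly used for HumanEval.
# n = samples generated per problem, c = samples that pass the unit tests.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 4 of them correct -> estimated pass@1 of 0.4
print(pass_at_k(n=10, c=4, k=1))
```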
@@ -130,11 +138,19 @@ CodeFuse-CodeLlama34B-MFT is a model obtained by applying QLoRA to the base model CodeLlama-34b-Pytho

## Evaluation Results (Code)

| Model                       | HumanEval(pass@1) | Date    |
|:----------------------------|:-----------------:|:-------:|
| **CodeFuse-CodeLlama-34B**  |     **74.4%**     | 2023.9  |
| WizardCoder-Python-34B-V1.0 |       73.2%       | 2023.8  |
| GPT-4(zero-shot)            |       67.0%       | 2023.3  |
| PanGu-Coder2 15B            |       61.6%       | 2023.8  |
| CodeLlama-34b-Python        |       53.7%       | 2023.8  |
| CodeLlama-34b               |       48.8%       | 2023.8  |
| GPT-3.5(zero-shot)          |       48.1%       | 2022.11 |
| OctoCoder                   |       46.2%       | 2023.8  |
| StarCoder-15B               |       33.6%       | 2023.5  |
| LLaMA 2 70B(zero-shot)      |       29.9%       | 2023.7  |

<br>

## Requirements