Text Generation · Transformers · PyTorch · Safetensors · English · Chinese · llama · code · text-generation-inference
Instructions to use codefuse-ai/CodeFuse-CodeLlama-34B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use codefuse-ai/CodeFuse-CodeLlama-34B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="codefuse-ai/CodeFuse-CodeLlama-34B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codefuse-ai/CodeFuse-CodeLlama-34B")
model = AutoModelForCausalLM.from_pretrained("codefuse-ai/CodeFuse-CodeLlama-34B")
```

- Notebooks
- Google Colab
- Kaggle
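The `pipeline` call above accepts generation settings as keyword arguments at call time. A minimal sketch — the values are illustrative defaults, not recommendations from the model card, and the actual call is commented out because it would download the 34B checkpoint:

```python
# Illustrative sampling settings for a text-generation pipeline call.
gen_kwargs = {
    "max_new_tokens": 512,  # cap on newly generated tokens
    "do_sample": True,      # sample instead of greedy decoding
    "temperature": 0.5,     # same temperature as the server examples on this page
}

# With `pipe` loaded as shown above, generation would look like:
# outputs = pipe("def quicksort(arr):", **gen_kwargs)
# print(outputs[0]["generated_text"])
print(gen_kwargs["max_new_tokens"])  # → 512
```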
- Local Apps
- vLLM
How to use codefuse-ai/CodeFuse-CodeLlama-34B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "codefuse-ai/CodeFuse-CodeLlama-34B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/codefuse-ai/CodeFuse-CodeLlama-34B
```
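The curl request above can also be issued from Python. A small sketch, assuming the server is already running on localhost:8000 — the helper name `build_completion_payload` is ours, and the actual POST is commented out so the snippet stands alone:

```python
import json

def build_completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for an OpenAI-compatible /v1/completions request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_payload(
    "codefuse-ai/CodeFuse-CodeLlama-34B", "Once upon a time,"
)
body = json.dumps(payload)

# With the server running, send it e.g. via urllib:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(body)
```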
- SGLang
How to use codefuse-ai/CodeFuse-CodeLlama-34B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "codefuse-ai/CodeFuse-CodeLlama-34B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "codefuse-ai/CodeFuse-CodeLlama-34B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "codefuse-ai/CodeFuse-CodeLlama-34B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use codefuse-ai/CodeFuse-CodeLlama-34B with Docker Model Runner:
```shell
docker model run hf.co/codefuse-ai/CodeFuse-CodeLlama-34B
```
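Both the vLLM and SGLang servers above reply with an OpenAI-compatible JSON body. A minimal sketch of pulling the generated text out of such a response — the example body is abridged and illustrative, not a real server reply:

```python
import json

def extract_completion_text(response_body: str) -> str:
    """Return the first completion from an OpenAI-compatible
    /v1/completions response body."""
    data = json.loads(response_body)
    return data["choices"][0]["text"]

# Abridged example of the response shape:
example = '{"id": "cmpl-1", "choices": [{"index": 0, "text": " there was a model."}]}'
print(extract_completion_text(example))  # → " there was a model."
```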
Commit f2533c0 (parent: 859fef0): update readme.md to add links to github repos

README.md (changed):
```diff
@@ -24,6 +24,17 @@ The context length of finetuning is 4K while it is able to be finetuned by 16k c
 
 <br>
 
+## Code Community
+
+**Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**)
+
+If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
+If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
+If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
 ## Performance
 
```

The same section is added to the Chinese half of the README (translated here; the hunk-header context is truncated in the original diff):

```diff
@@ -143,6 +154,16 @@ CodeFuse-CodeLlama34B-MFT is obtained by QLoRA fine-tuning of the base model CodeLlama-34b-Pytho
 
 <br>
 
+## Code Community
+**Homepage**: 🏡 https://github.com/codefuse-ai (**Please support our project with a Star🌟 + Fork🚀 + Watch👀**)
+
+If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+
+If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+
+If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨
+
+
 ## Evaluation Results (Code)
 
 | Model | HumanEval(pass@1) | Date |
```