Instructions for using Salesforce/codegen2-1B_P with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Salesforce/codegen2-1B_P with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Salesforce/codegen2-1B_P", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-1B_P", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-1B_P", trust_remote_code=True)
```
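Once the model and tokenizer are loaded, generation follows the standard Transformers API. A minimal causal-sampling sketch; the prompt and generation settings here are illustrative, not taken from the model card:

```python
# Minimal generation sketch; prompt and settings are illustrative.
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```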
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Salesforce/codegen2-1B_P with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Salesforce/codegen2-1B_P"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen2-1B_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
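Since the server exposes an OpenAI-compatible API, it can also be called from the official `openai` Python client. A minimal sketch, assuming the server above is running on its default port; the `api_key` value is a placeholder, as vLLM ignores it unless an API key was configured at launch:

```python
# Minimal sketch: call the local vLLM server through the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is a placeholder
completion = client.completions.create(
    model="Salesforce/codegen2-1B_P",
    prompt="def quicksort(arr):",  # illustrative prompt
    max_tokens=128,
    temperature=0.5,
)
print(completion.choices[0].text)
```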
Use Docker

```sh
docker model run hf.co/Salesforce/codegen2-1B_P
```
- SGLang
How to use Salesforce/codegen2-1B_P with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Salesforce/codegen2-1B_P" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen2-1B_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
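The same endpoint can be called from Python. A minimal sketch using the `requests` library, assuming the server above is running locally; the prompt and settings are illustrative:

```python
# Minimal sketch: POST to the SGLang server's OpenAI-compatible completions endpoint.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Salesforce/codegen2-1B_P",
        "prompt": "def binary_search(arr, target):",  # illustrative prompt
        "max_tokens": 128,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```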
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Salesforce/codegen2-1B_P" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen2-1B_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Salesforce/codegen2-1B_P with Docker Model Runner:
```sh
docker model run hf.co/Salesforce/codegen2-1B_P
```
Update README.md

README.md (changed):
````diff
@@ -8,7 +8,7 @@ license: apache-2.0
 
 [CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
 
-[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages]() by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
+[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
 
 Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
 
@@ -76,7 +76,7 @@ You might want to truncate the model output with `<eom>`.
 
 ## Training data
 
-This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](). Supported languages (and frameworks) are as follows:
+This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
 `c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
 
 ## Training procedure
 
@@ -87,7 +87,7 @@ Please refer to the paper for more details.
 
 ## Evaluation results
 
-We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper]() for more details.
+We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
 
 ## Intended use and limitations
 
@@ -102,6 +102,6 @@ However, the model is intended for and best at **program synthesis**, that is, g
   title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
   author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
   journal={arXiv preprint},
-  year={
+  year={2023}
 }
 ```
````
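The README above notes that CodeGen2, unlike CodeGen1, supports infilling, and that output may need truncating at `<eom>`. A minimal infill sketch, assuming the sentinel-token format (`<mask_1>`, `<sep>`, `<eom>`) described in the CodeGen2 model card; the prefix and suffix are illustrative:

```python
# Minimal infill sketch, assuming the <mask_1>/<sep>/<eom> sentinel format
# from the CodeGen2 model card; prefix/suffix are illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-1B_P", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-1B_P", trust_remote_code=True)

prefix = "def hello_world():\n    "
suffix = "    return name"

# Mark the span to fill with <mask_1>; the model generates its content after <sep>.
text = prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_new_tokens=64)
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]

# Truncate at <eom>, as the README suggests.
print(completion.split("<eom>")[0])
```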