StarCoder 2 and The Stack v2: The Next Generation
Paper: 2402.19173
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "sovitrath/starcoder2-3b-instruct" \
--host 0.0.0.0 \
--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "sovitrath/starcoder2-3b-instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

This is an instruction-tuned version of the StarCoder2-3B model. It has been trained using the same repository and dataset used for StarCoder2-15B, and it uses the same prompt generation technique as the StarCoder2-15B model, so it can be used as a drop-in replacement by simply changing the model path.
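Because the server exposes an OpenAI-compatible API, the same request can also be issued from Python. A minimal sketch using only the standard library, assuming the server above is running on localhost:30000 (the `build_payload` and `chat` helper names are hypothetical, not part of SGLang):

```python
import json
import urllib.request

def build_payload(content: str) -> bytes:
    # Same JSON body as the curl call above.
    return json.dumps({
        "model": "sovitrath/starcoder2-3b-instruct",
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")

def chat(content: str, base_url: str = "http://localhost:30000") -> str:
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_payload(content),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape.
    return body["choices"][0]["message"]["content"]
```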
Running code language models locally

This model can easily be run locally.
Using FP16
import transformers
import torch
pipeline = transformers.pipeline(
    model="outputs_starcoder3b_4e",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def respond(instruction: str, response_prefix: str) -> str:
    messages = [{"role": "user", "content": instruction}]
    prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False)
    prompt += response_prefix

    terminators = [
        pipeline.tokenizer.eos_token_id,
        pipeline.tokenizer.convert_tokens_to_ids("###"),
    ]

    result = pipeline(
        prompt,
        max_length=1024,
        num_return_sequences=1,
        do_sample=False,
        eos_token_id=terminators,
        pad_token_id=pipeline.tokenizer.eos_token_id,
        truncation=True,
    )
    response = response_prefix + result[0]["generated_text"][len(prompt):].split("###")[0].rstrip()
    return response
instruction = "Write the Transformer encoder in PyTorch."
response_prefix = ""
print(respond(instruction, response_prefix))
Output:
```python
import torch
import torch.nn as nn
class TransformerEncoder(nn.Module):
    def __init__(self, d_model, nhead, num_layers, dim_feedforward=2048, dropout=0.1):
        super(TransformerEncoder, self).__init__()
        self.encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout)
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, num_layers)

    def forward(self, src):
        return self.transformer_encoder(src)
```
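The string handling at the end of `respond()` above (drop the echoed prompt, cut at the first `###` terminator, trim trailing whitespace) can be factored out and sanity-checked without loading the model. A small sketch; `extract_response` is a hypothetical helper, not part of the original script:

```python
def extract_response(generated_text: str, prompt: str, response_prefix: str = "") -> str:
    # Mirrors respond(): remove the echoed prompt, keep everything up to
    # the first "###" terminator, and strip trailing whitespace.
    return response_prefix + generated_text[len(prompt):].split("###")[0].rstrip()

# The pipeline returns the prompt followed by the completion:
generated = "PROMPT" + "def add(a, b):\n    return a + b\n### extra tokens"
print(extract_response(generated, "PROMPT"))
```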
See the repository for setup details.
MODEL_KEY=bigcode/starcoder2-3b
LR=1e-5
EPOCH=4
SEQ_LEN=1280
WARMUP_RATIO=0.05
OUTPUT_DIR=outputs_starcoder3b_4e
DATASET_FILE=train_data.jsonl
accelerate launch -m star_align.train \
--model_key $MODEL_KEY \
--model_name_or_path $MODEL_KEY \
--use_flash_attention True \
--datafile_paths $DATASET_FILE \
--output_dir $OUTPUT_DIR \
--bf16 True \
--num_train_epochs $EPOCH \
--max_training_seq_length $SEQ_LEN \
--pad_to_max_length False \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 256 \
--group_by_length False \
--ddp_find_unused_parameters False \
--logging_steps 1 \
--log_level info \
--optim adafactor \
--max_grad_norm -1 \
--warmup_ratio $WARMUP_RATIO \
--learning_rate $LR \
--lr_scheduler_type linear \
--attention_dropout 0.0 \
--residual_dropout 0.0 \
--embedding_dropout 0.0
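For reference, the effective batch size implied by these flags is `per_device_train_batch_size` × `gradient_accumulation_steps` per device, times however many processes `accelerate` launches. A quick check (single-GPU launch assumed):

```python
per_device_train_batch_size = 2
gradient_accumulation_steps = 256
num_processes = 1  # assumption: one GPU; accelerate sets this from its launch config

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_processes
)
print(effective_batch_size)  # 512 sequences per optimizer step on one GPU
```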
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "sovitrath/starcoder2-3b-instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "sovitrath/starcoder2-3b-instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'