How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "stillerman/santacoder-ruby"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "stillerman/santacoder-ruby",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
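The same request can be made from Python using only the standard library. A minimal sketch: the helper names are ours, and only the endpoint and payload fields come from the curl example above.

```python
import json
import urllib.request

def build_completion_request(prompt, model="stillerman/santacoder-ruby",
                             max_tokens=512, temperature=0.5):
    """Build the same JSON body the curl example sends."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, url="http://localhost:8000/v1/completions"):
    """POST to a running vLLM server (OpenAI-compatible completions API)."""
    body = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Any OpenAI-compatible client (e.g. the `openai` package pointed at the local base URL) works the same way.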
Use Docker
docker model run hf.co/stillerman/santacoder-ruby

Model

This model is a fine-tuned version of bigcode/santacoder, trained on the Ruby portion of The Stack.
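Because the base model supports fill-in-the-middle (FIM), prompts can supply both a prefix and a suffix and ask the model to generate the span between them. A minimal sketch of the prompt format, assuming the hyphenated sentinel tokens used by the bigcode/santacoder tokenizer (the helper name is ours):

```python
# SantaCoder's FIM sentinel tokens (note: hyphenated, unlike later
# StarCoder-family models, which use underscores).
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim-prefix>", "<fim-suffix>", "<fim-middle>"

def make_fim_prompt(prefix, suffix):
    """PSM ordering: prefix, then suffix; the model generates the middle."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Ask the model to fill in the body of a Ruby method.
prompt = make_fim_prompt("def add(a, b)\n  ", "\nend")
```

Generation should stop at the end-of-text token; everything produced after `<fim-middle>` is the infilled code.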

Training

This model was trained using character-level fill-in-the-middle (FIM) with this script, invoked as follows

train.py --model_path=bigcode/santacoder --dataset_name=bigcode/the-stack-dedup \
  --subset=data/ruby --data_column content --split=train \
  --seq_length 2048 --max_steps 4000 --batch_size 3 \
  --gradient_accumulation_steps 8 --learning_rate 5e-5 \
  --num_warmup_steps 500 --eval_freq 1000 --save_freq 1000 \
  --log_freq 1 --num_workers=12 --no_fp16 --streaming \
  --fim_rate=0.5 --fim_spm_rate=0.5

on a 40GB A100 for 48 hours.
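"Character-level FIM" means the prefix/middle/suffix split points are sampled at character offsets rather than token boundaries. A rough sketch of the data transform implied by `--fim_rate` and `--fim_spm_rate`, following the common BigCode convention for PSM/SPM ordering (sentinel placement not verified against this exact script):

```python
import random

def apply_char_fim(text, rng, fim_rate=0.5, fim_spm_rate=0.5):
    """Randomly rearrange a training sample for fill-in-the-middle.

    With probability fim_rate, split `text` at two random character
    offsets into (prefix, middle, suffix); otherwise leave it as
    ordinary left-to-right text. Among FIM samples, a fim_spm_rate
    fraction use SPM ordering, the rest PSM.
    """
    if rng.random() >= fim_rate:
        return text
    lo, hi = sorted(rng.sample(range(len(text) + 1), 2))
    prefix, middle, suffix = text[:lo], text[lo:hi], text[hi:]
    if rng.random() < fim_spm_rate:
        # SPM: suffix first, then prefix + middle after the middle sentinel.
        return f"<fim-prefix><fim-suffix>{suffix}<fim-middle>{prefix}{middle}"
    # PSM: prefix, suffix, then the middle the model must learn to produce.
    return f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>{middle}"
```

With `--fim_rate=0.5 --fim_spm_rate=0.5`, roughly half of the samples are FIM-transformed, and those split evenly between PSM and SPM ordering.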

Performance

MultiPL-E HumanEval Ruby

  • pass@1 = 0.10
  • pass@10 = 0.14
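The pass@k numbers above are conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021): generate n samples per problem, count the c that pass the tests, and estimate the probability that at least one of k draws passes. A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, 1 passing sample out of 10 gives pass@1 = 0.10, matching the granularity of the scores reported here.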