Instructions for using HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca with libraries, inference servers, and local apps.
- Libraries
- Transformers
How to use HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca")
model = AutoModelForCausalLM.from_pretrained("HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca")
```
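For a quick smoke test you can call the pipeline directly. This is a minimal sketch: the prompt follows the template from the Prompting section below, and the sampling parameters are illustrative, not recommendations from the model author:

```python
# Generate a completion with the pipeline, using the model's prompt template
from transformers import pipeline

pipe = pipeline("text-generation", model="HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca")

prompt = "<|startoftext|>[INST]\nWrite a short poem about racecars\n[/INST]"
result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```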
- Local Apps
- vLLM
How to use HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
# Run the vLLM OpenAI-compatible server image:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca"
```
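Once the server is running (via pip or Docker), it can also be called from Python with the OpenAI client instead of curl. This is a minimal sketch assuming the server above is listening on localhost:8000:

```python
# Query the vLLM OpenAI-compatible server from Python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```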
- SGLang
How to use HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
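The same endpoint can be called from Python as well. Here is a minimal sketch using the requests library, assuming the server started above is reachable at localhost:30000:

```python
# Query the SGLang OpenAI-compatible server over HTTP
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```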
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca with Docker Model Runner:
```shell
docker model run hf.co/HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
```
Instruct_Yi-6B_Dolly_CodeAlpaca
Fine-tuned from Yi-6B on the Dolly15K dataset (90% training, 10% validation), for 2.0 epochs using LoRA with a 2048-token context window. Compared with https://huggingface.co/HenryJJ/Instruct_Yi-6B_Dolly15K, this model adds the CodeAlpaca_20K dataset to improve coding ability.
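For illustration only, a setup like the one described above could be sketched roughly as follows. This is not the author's script (that is linked under "Training script" below); the dataset ID, LoRA settings, batch size, and learning rate are assumptions, while the 90/10 split, 2.0 epochs, and 2048-token context come from the description.

```python
# Hypothetical sketch, NOT the author's training script (see "Training script" below).
# Mirrors the described setup: LoRA on Yi-6B, 90/10 split, 2 epochs, 2048 context.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# LoRA adapter; rank, alpha, and target modules are assumptions, not from the card.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Dolly15K formatted into the card's prompt template; CodeAlpaca_20K would be
# concatenated in the same way before splitting.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_features(example):
    text = (f"<|startoftext|>[INST]{example['instruction']} {example['context']}[/INST]"
            f"{example['response']}<|endoftext|>")
    return tokenizer(text, truncation=True, max_length=2048)

dataset = dolly.map(to_features, remove_columns=dolly.column_names)
split = dataset.train_test_split(test_size=0.1)  # 90% train / 10% validation

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=2.0,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=split["train"],
    eval_dataset=split["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```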
Model Details
- Trained by: HenryJJ.
- Model type: Instruct_Yi-6B_Dolly_CodeAlpaca is an auto-regressive language model fine-tuned from Yi-6B, which uses a Llama-style transformer architecture.
- Language(s): English
- License for Instruct_Yi-6B_Dolly_CodeAlpaca: Apache-2.0
Prompting
Prompt Template With Context
```
<|startoftext|>[INST]{instruction} {context}[/INST]{response}<|endoftext|>
```

Example:

```
<|startoftext|>[INST]
Write a 10-line poem about a given topic
The topic is about racecars
[/INST]
```
Prompt Template Without Context
```
<|startoftext|>[INST]
Who was the second president of the United States?
[/INST]
```
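Prompts in this format can be assembled with a small helper like the one below; `build_prompt` is a hypothetical function for illustration, not part of the repository:

```python
# Hypothetical helper for assembling prompts in the template above
def build_prompt(instruction, context=""):
    body = f"{instruction} {context}".strip() if context else instruction
    return f"<|startoftext|>[INST]{body}[/INST]"

print(build_prompt("Write a 10-line poem about a given topic",
                   "The topic is about racecars"))
```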
Training script:
Fully open-sourced at https://github.com/hengjiUSTC/learn-llm/blob/main/trl_finetune.py. Trained on an AWS g4dn.12xlarge instance for 10 hours.
```shell
python3 trl_finetune.py --config configs/yi_6b-large.yml
```