Instructions for using internlm/AlchemistCoder-CL-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use internlm/AlchemistCoder-CL-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="internlm/AlchemistCoder-CL-7B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/AlchemistCoder-CL-7B")
model = AutoModelForCausalLM.from_pretrained("internlm/AlchemistCoder-CL-7B")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use internlm/AlchemistCoder-CL-7B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "internlm/AlchemistCoder-CL-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "internlm/AlchemistCoder-CL-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/internlm/AlchemistCoder-CL-7B
```
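The OpenAI-compatible endpoint started above can also be called from Python. The sketch below uses only the standard library and assumes the vLLM server is running on its default port 8000; the helper names (`build_completion_request`, `complete`) are our own, not part of vLLM.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/completions"  # vLLM default port

def build_completion_request(prompt, max_tokens=512, temperature=0.5):
    """Build (url, payload bytes) for the OpenAI-compatible completions API."""
    payload = {
        "model": "internlm/AlchemistCoder-CL-7B",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return API_URL, json.dumps(payload).encode("utf-8")

def complete(prompt):
    """POST the prompt to the local server and return the generated text."""
    url, body = build_completion_request(prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]

# With the server running: complete("Once upon a time,")
```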
- SGLang
How to use internlm/AlchemistCoder-CL-7B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "internlm/AlchemistCoder-CL-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "internlm/AlchemistCoder-CL-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "internlm/AlchemistCoder-CL-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "internlm/AlchemistCoder-CL-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use internlm/AlchemistCoder-CL-7B with Docker Model Runner:
```shell
docker model run hf.co/internlm/AlchemistCoder-CL-7B
```
AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data
[🤗 HuggingFace] [📄 Paper] [🌐 Project Page]
✨ Highlights
Abstract: Open-source Large Language Models (LLMs) and their specialized variants, particularly Code LLMs, have recently delivered impressive performance. However, previous Code LLMs are typically fine-tuned on single-source data with limited quality and diversity, which may insufficiently elicit the potential of pre-trained Code LLMs. In this paper, we present AlchemistCoder, a series of Code LLMs with enhanced code generation and generalization capabilities fine-tuned on multi-source data. To achieve this, we are the first to unveil inherent conflicts among the various styles and qualities in multi-source code corpora and introduce data-specific prompts with hindsight relabeling, termed AlchemistPrompts, to harmonize different data sources and instruction-response pairs. Additionally, we propose incorporating the data construction process into the fine-tuning data as code comprehension tasks, including instruction evolution, data filtering, and code review. Extensive experiments demonstrate that AlchemistCoder holds a clear lead among all models of the same size (6.7B/7B) and rivals or even surpasses larger models (15B/33B/70B), showcasing the efficacy of our method in refining instruction-following capabilities and advancing the boundaries of code intelligence.
- AlchemistPrompts: Designed as data-specific prompts for harmonizing inherent conflicts in multi-source data and mitigating instruction/response misalignment at a fine-grained level.
- Code Comprehension Tasks: Sourced from the data construction process, consisting of instruction evolution, data filtering, and code review.
- Harmonized Multi-source Data: Instruction-tuned on 200M tokens spanning 6 types of high-quality data.
- Superior Model Performance: Surpassing all open-source models of the same size (6.7B/7B), and rivaling or even beating larger models (15B/33B/70B/ChatGPT) on 6 code benchmarks.
- Advanced Generic Capabilities: Demonstrated by significant improvements on MMLU, BBH, and GSM8K.
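To make the AlchemistPrompts idea concrete, here is a toy sketch of prepending a data-specific (hindsight) prompt so an instruction's style matches its response. The prompt text, field names, and `harmonize` helper are all hypothetical illustrations, not the paper's actual implementation.

```python
def harmonize(sample, alchemist_prompt):
    """Prepend a data-specific hindsight prompt to an instruction so the
    pair's style matches its response (toy stand-in for AlchemistPrompts)."""
    return {
        "instruction": f"{alchemist_prompt}\n{sample['instruction']}",
        "response": sample["response"],
        "source": sample["source"],
    }

# Hypothetical sample from one of several stylistically conflicting sources:
raw = {
    "instruction": "Sort a list of integers.",
    "response": "def sort_ints(xs):\n    return sorted(xs)",
    "source": "evol-instruct-style",
}
merged = harmonize(raw, "Respond with a single concise Python function, no prose.")
```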
🚀 Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("internlm/AlchemistCoder-CL-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("internlm/AlchemistCoder-CL-7B", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
model = model.eval()

input_text = "Implement the Dijkstra algorithm in Python"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
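The decoded output often mixes prose with code. A small post-processing helper (assuming the model emits markdown-style code fences, which is common for instruction-tuned code models but not guaranteed; `extract_code` is our own name) can isolate just the code:

```python
import re

def extract_code(text, lang="python"):
    """Return the first fenced code block from generated markdown text,
    or the whole text (stripped) if no fence is found."""
    match = re.search(rf"```(?:{lang})?\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

sample = "Here is the code:\n```python\nprint('hi')\n```\nDone."
code = extract_code(sample)  # just the print statement
```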
🧪 Evaluation and Fine-tuning
Please refer to AlchemistCoder and InternLM.
🙏 Acknowledgments
AlchemistCoder is built with InternLM and OpenCompass. Thanks for their awesome work!
📧 Contact
If you have any questions, please create an issue on this repository or contact us at:
🖊️ Citation
If you find our work useful, please consider citing:
```bibtex
@misc{song2024alchemistcoder,
      title={AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data},
      author={Zifan Song and Yudong Wang and Wenwei Zhang and Kuikun Liu and Chengqi Lyu and Demin Song and Qipeng Guo and Hang Yan and Dahua Lin and Kai Chen and Cairong Zhao},
      year={2024},
      eprint={2405.19265},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```