Instructions to use GanjinZero/wombat-7b-delta with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GanjinZero/wombat-7b-delta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="GanjinZero/wombat-7b-delta")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/wombat-7b-delta")
model = AutoModelForCausalLM.from_pretrained("GanjinZero/wombat-7b-delta")
```
- Notebooks
- Google Colab
- Kaggle
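Note that this repository contains delta weights, so loading it directly with the Transformers snippet above may not yield a usable model; the "How to use" section below explains how to recover the full weights. Once recovered, an Alpaca-style prompt is a plausible starting point, since Wombat is fine-tuned from Alpaca; the template below is an assumption, not documented for this checkpoint:

```python
# Hypothetical Alpaca-style prompt template. Wombat is fine-tuned from
# Alpaca, but the exact template for this checkpoint is an assumption.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Give three uses of a paperclip.")

# With the recovered (delta-applied) weights, generation would look like:
# from transformers import pipeline
# pipe = pipeline("text-generation", model="./wombat-7b")
# print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```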
- Local Apps
- vLLM
How to use GanjinZero/wombat-7b-delta with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "GanjinZero/wombat-7b-delta"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GanjinZero/wombat-7b-delta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/GanjinZero/wombat-7b-delta
```
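The curl call above can also be made from Python. Here is a minimal stdlib-only sketch; the actual request is commented out so the snippet does not require a running server:

```python
import json
import urllib.request

# Payload mirroring the curl example above (OpenAI-compatible /v1/completions).
payload = {
    "model": "GanjinZero/wombat-7b-delta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Requires the vLLM server started above:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```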
- SGLang
How to use GanjinZero/wombat-7b-delta with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "GanjinZero/wombat-7b-delta" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GanjinZero/wombat-7b-delta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "GanjinZero/wombat-7b-delta" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GanjinZero/wombat-7b-delta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use GanjinZero/wombat-7b-delta with Docker Model Runner:
```shell
docker model run hf.co/GanjinZero/wombat-7b-delta
```
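The vLLM and SGLang servers above both expose the OpenAI-compatible completions API, so their responses share the same shape. A sketch of how to pull the generated text out of such a response (the JSON below is a hand-written illustration, not real server output):

```python
import json

# Illustrative OpenAI-compatible /v1/completions response body;
# the field values are made up for this example.
raw = json.dumps({
    "id": "cmpl-example",
    "object": "text_completion",
    "model": "GanjinZero/wombat-7b-delta",
    "choices": [
        {"index": 0, "text": " there was a wombat.", "finish_reason": "stop"}
    ],
})

response = json.loads(raw)
completion = response["choices"][0]["text"]
```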
Model details
Organization developing the model
Alibaba DAMO Academy, Tsinghua University
Model date
Wombat-7B was released on 2023-04-13.
Model version
Wombat-7B
Training dataset
The training data of Wombat-7B is released in the RRHF repository.
Model type
Wombat-7B is a general-purpose instruction-following language model aligned with ChatGPT responses (as a proxy for human preferences), fine-tuned from Alpaca models.
We use a novel method named RRHF (Rank Responses to Align Human Feedback) to fine-tune Alpaca.
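Per the RRHF paper, each sampled response is scored by its length-normalized log-probability under the model, and a ranking loss pushes responses with higher proxy rewards to also score higher; the full objective additionally adds a cross-entropy term on the best response. A minimal plain-Python sketch of the ranking term, with made-up scores and rewards:

```python
# Minimal sketch of the RRHF ranking term: scores are the model's
# length-normalized log-probabilities for each response, rewards come
# from the proxy preference model (e.g. ChatGPT). The hinge penalizes
# any pair where a lower-reward response out-scores a higher-reward one.
def rrhf_rank_loss(scores, rewards):
    loss = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if rewards[i] < rewards[j]:
                # only penalize when the worse response scores higher
                loss += max(0.0, scores[i] - scores[j])
    return loss

# Toy example: the second response has the higher reward (0.9) but the
# lower model score (-2.0), so the pair contributes -1.0 - (-2.0) = 1.0.
loss = rrhf_rank_loss(scores=[-1.0, -2.0], rewards=[0.2, 0.9])
```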
How to use
To recover Wombats from delta parameters:
```shell
python apply_delta.py \
  --base ./llama-7b \
  --target ./wombat-7b \
  --delta GanjinZero/wombat-7b-delta
```
where apply_delta.py is from the project's code repository.
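Conceptually, applying the delta just adds the released delta weights to the base LLaMA weights, parameter by parameter. A toy illustration of that idea (real checkpoints are torch tensors; plain floats and made-up parameter names are used here to keep the sketch self-contained):

```python
def apply_delta(base_weights, delta_weights):
    """Recover target weights by adding the delta to the base, key by key."""
    assert base_weights.keys() == delta_weights.keys()
    return {
        name: base_weights[name] + delta_weights[name]
        for name in base_weights
    }

# Toy example with two scalar "parameters" (illustrative names only):
base = {"layer0.weight": 2.0, "layer0.bias": -1.0}
delta = {"layer0.weight": 0.5, "layer0.bias": 0.25}
target = apply_delta(base, delta)
```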
To run inference with Wombats, please refer to the project's code repository.
Citations details
Please cite our paper on arXiv:
@misc{yuan2023rrhf,
title={RRHF: Rank Responses to Align Language Models with Human Feedback without tears},
author={Zheng Yuan and Hongyi Yuan and Chuanqi Tan and Wei Wang and Songfang Huang and Fei Huang},
year={2023},
eprint={2304.05302},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
License
Data are licensed under the CC BY NC 4.0 license.
Where to send questions or comments about the model
Questions, comments, and discussions about Wombats and RRHF can be raised by opening an issue in the project's GitHub repository, or sent by email to yuanzheng.yuanzhen@alibaba-inc.com, yuanhy20@mails.tsinghua.edu.cn, or chuanqi.tcq@alibaba-inc.com.
Primary intended uses
The primary use of Wombat-7B and Wombat-7B-GPT4 is research on learning from human feedback and is a prototype of RRHF methods.
Primary intended users
The primary intended users of Wombat-7B and Wombat-7B-GPT4 are researchers in natural language processing, machine learning and artificial intelligence.
Out-of-scope use cases
Wombat-7B and Wombat-7B-GPT4 are fine-tuned with proxy human feedback from OpenAI ChatGPT and GPT-4 rather than real human feedback, and are not intended for use in production systems.
Any usage must not compete with the OpenAI API.