How to use lightblue/qarasu-14B-chat-plus-unleashed with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True, dtype="auto")
How to use lightblue/qarasu-14B-chat-plus-unleashed with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "lightblue/qarasu-14B-chat-plus-unleashed"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "lightblue/qarasu-14B-chat-plus-unleashed",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
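The vLLM (and SGLang) servers above expose an OpenAI-compatible `/v1/completions` endpoint, so the curl call can equally be made from Python with only the standard library. A minimal sketch, assuming the server is already running and reachable at `http://localhost:8000` (the `build_completion_request` and `complete` helper names are illustrative, not part of any API):

```python
import json
import urllib.request

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON payload for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(base_url, payload):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_completion_request(
    "lightblue/qarasu-14B-chat-plus-unleashed", "Once upon a time,"
)
# With a running server:
# result = complete("http://localhost:8000", payload)
# print(result["choices"][0]["text"])
```

For the SGLang server shown later, only the base URL changes (port 30000 instead of 8000).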
How to use lightblue/qarasu-14B-chat-plus-unleashed with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "lightblue/qarasu-14B-chat-plus-unleashed" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "lightblue/qarasu-14B-chat-plus-unleashed",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "lightblue/qarasu-14B-chat-plus-unleashed" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "lightblue/qarasu-14B-chat-plus-unleashed",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
How to use lightblue/qarasu-14B-chat-plus-unleashed with Docker Model Runner:
docker model run hf.co/lightblue/qarasu-14B-chat-plus-unleashed
Qwen/Qwen-14B-Chat + Karasu's finetuning datasets
In our internal evaluations, we find the Qarasu model to have particularly high performance on the MT-Bench benchmark. We are currently awaiting external evaluations.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)  # greedy decoding
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
The same as the 'plus' checkpoint, but with roughly 6K refusals ("申し訳ありませんが、。。。", i.e. "I'm sorry, but...") filtered out of the category dataset.
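The card does not publish the filtering script itself; a minimal sketch of how such refusal filtering could look, assuming the dataset is a list of records with a `"response"` field (the field name, the prefix list, and both helper functions are assumptions for illustration):

```python
# Japanese apology openers that typically signal a refusal ("I'm sorry, but...").
REFUSAL_PREFIXES = ("申し訳ありませんが", "申し訳ございませんが")

def is_refusal(response: str) -> bool:
    """Heuristic: treat a response that opens with an apology as a refusal."""
    return response.strip().startswith(REFUSAL_PREFIXES)

def filter_refusals(records):
    """Drop records whose assistant response looks like a refusal."""
    return [r for r in records if not is_refusal(r["response"])]

data = [
    {"response": "イギリスの首相についてお答えします。"},
    {"response": "申し訳ありませんが、その質問にはお答えできません。"},
]
kept = filter_refusals(data)
print(len(kept))  # → 1
```

A prefix heuristic like this is crude but cheap; in practice one would likely also check refusal phrases appearing mid-response.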
Peter Devine
Sho Higuchi
Yuuki Yamanaka
Atom Sonoda
Shunichi Taniguchi
Renju Aoki