How to use rinna/nekomata-7b-instruction with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="rinna/nekomata-7b-instruction", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", trust_remote_code=True, dtype="auto")
```

How to use rinna/nekomata-7b-instruction with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rinna/nekomata-7b-instruction"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rinna/nekomata-7b-instruction",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
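The same completion request can be sent from Python. Below is a minimal sketch using only the standard library; the URL and payload mirror the curl call above, and the `complete` helper name is illustrative. The actual POST is wrapped in a function so nothing runs unless a server is up.

```python
import json
import urllib.request

# Mirrors the curl request above (OpenAI-compatible /v1/completions endpoint).
payload = {
    "model": "rinna/nekomata-7b-instruction",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(url: str = "http://localhost:8000/v1/completions") -> dict:
    """POST the payload to a running vLLM server and return the parsed JSON."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Requires a running server:
# print(complete()["choices"][0]["text"])
```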
How to use rinna/nekomata-7b-instruction with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "rinna/nekomata-7b-instruction" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rinna/nekomata-7b-instruction",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

```shell
# Alternatively, start the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "rinna/nekomata-7b-instruction" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rinna/nekomata-7b-instruction",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use rinna/nekomata-7b-instruction with Docker Model Runner:
```shell
docker model run hf.co/rinna/nekomata-7b-instruction
```
rinna/nekomata-7b-instruction
The model is the instruction-tuned version of rinna/nekomata-7b. It adopts the Alpaca input format.
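The Alpaca format wraps an instruction and an optional context input in fixed Japanese section markers ("### 指示:" for instruction, "### 入力:" for input, "### 応答:" for response). A minimal helper sketching the template shown later in this card; the function name is illustrative, not part of the model's API:

```python
def build_alpaca_prompt(instruction: str, input_text: str) -> str:
    """Compose an Alpaca-style prompt as used by nekomata-7b-instruction.
    Preamble: "Below is a combination of an instruction describing a task
    and an input providing context. Write a response that appropriately
    fulfills the request."
    """
    return (
        "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
        "要求を適切に満たす応答を書きなさい。\n"
        f"### 指示:\n{instruction}\n"
        f"### 入力:\n{input_text}\n"
        "### 応答:\n"
    )

# Instruction: "Translate the following Japanese into English."
print(build_alpaca_prompt("次の日本語を英語に翻訳してください。", "大規模言語モデル…"))
```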
Model architecture
A 32-layer, 4096-hidden-size transformer-based language model. Please refer to the Qwen paper for architecture details.
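As a rough sanity check on these dimensions, the attention projections alone already account for roughly 2B parameters. This back-of-envelope sketch assumes standard multi-head attention with four hidden_size x hidden_size projection matrices per layer and ignores MLP, embedding, and bias parameters, so it is a lower bound, not the full ~7B:

```python
# Assumed dimensions from the architecture description above.
hidden_size = 4096
num_layers = 32

# Q, K, V, and output projections: four square matrices per layer.
attn_params_per_layer = 4 * hidden_size ** 2
attn_params_total = num_layers * attn_params_per_layer

print(f"{attn_params_per_layer:,} per layer, {attn_params_total:,} total")
# → 67,108,864 per layer, 2,147,483,648 total
```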
Fine-tuning
The fine-tuning data is a subset of the following datasets.
Contributors
Release date
December 21, 2023
Benchmarking
Please refer to rinna's LM benchmark page (Sheet 20231221).
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/nekomata-7b-instruction", trust_remote_code=True)

# Use GPU with bf16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True, bf16=True)

# Use GPU with fp16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True, fp16=True)

# Use CPU
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="cpu", trust_remote_code=True)

# Automatically select device and precision
model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True)

# Instruction: "Translate the following Japanese into English."
instruction = "次の日本語を英語に翻訳してください。"
# Input: a Japanese paragraph defining "large language model (LLM)".
input_text = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Alpaca-style prompt; the preamble reads: "Below is a combination of an
# instruction describing a task and an input providing context. Write a
# response that appropriately fulfills the request."
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 入力:
{input_text}
### 応答:
"""

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
```
"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
次の日本語を英語に翻訳してください。
### 入力:
大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使 用して自己教師あり学習または半教師あり学習によって訓練が行われる。
### 応答:
A large language model (LLM) is a computer language model composed of artificial neural networks with many parameters (from tens of millions to billions) trained by self-supervised learning or semi-supervised learning using a large amount of unlabeled text.<|endoftext|>
"""
Please refer to rinna/nekomata-7b for tokenization details.
@misc{rinna-nekomata-7b-instruction,
title = {rinna/nekomata-7b-instruction},
author = {Zhao, Tianyu and Sawada, Kei},
url = {https://huggingface.co/rinna/nekomata-7b-instruction}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}