How to use frankminors123/Chinese-Shepherd with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="frankminors123/Chinese-Shepherd")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("frankminors123/Chinese-Shepherd")
model = AutoModelForCausalLM.from_pretrained("frankminors123/Chinese-Shepherd")

We trained a Chinese version of Shepherd based on Chinese-LLaMA-2-7B, using LoRA for supervised fine-tuning on two 32 GB V100 GPUs.
We designed a suitable prompt template, and the dataset we used is published in the HuggingFace repository frankminors123/chinese-shepherd-critic-dataset; please see that dataset page for details.
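The LoRA setup mentioned above can be sketched with the `peft` library; note that the rank, alpha, dropout, and target modules below are illustrative assumptions, since the actual hyperparameters are not stated in this card.

```python
from peft import LoraConfig, TaskType

# Hypothetical LoRA hyperparameters -- the values actually used for
# Chinese-Shepherd are not given here.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # decoder-only causal LM fine-tuning
    r=8,                           # low-rank dimension (assumption)
    lora_alpha=16,                 # scaling factor (assumption)
    lora_dropout=0.05,             # adapter dropout (assumption)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
)
```

Such a config would be applied to the base model with `get_peft_model(base_model, lora_config)` before supervised fine-tuning, so that only the small adapter matrices are trained.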
The prompt template used is as follows:
PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"  # "Please try to critique the answer to the question below."
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"  # sections: question / answer / critique
)
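The template can be filled in with a question/answer pair before generation. Below is a minimal sketch; the `build_prompt` and `generate_critique` helpers and the example question/answer strings are ours for illustration, not part of the released code, and the decoding settings are assumptions.

```python
PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"
)

def build_prompt(question: str, answer: str) -> str:
    """Fill the critic prompt template with a question/answer pair."""
    return PROMPT_TEMPLATE.format(question=question, answer=answer)

def generate_critique(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a critique of the answer (downloads the ~7B model)."""
    from transformers import AutoTokenizer, AutoModelForCausalLM  # heavy import kept local
    tokenizer = AutoTokenizer.from_pretrained("frankminors123/Chinese-Shepherd")
    model = AutoModelForCausalLM.from_pretrained("frankminors123/Chinese-Shepherd")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    critique_ids = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(critique_ids, skip_special_tokens=True)

# Illustrative placeholder pair (not drawn from the dataset).
prompt = build_prompt(question="1+1等于几?", answer="1+1等于3.")
```

Calling `generate_critique(prompt)` would then return the model's critique of the supplied answer.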