Tags: Text Generation, Transformers, PyTorch, Chinese, English, llama, text-generation-inference, unsloth, trl, sft, yi, conversational
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TouchNight/HumanlikeRP")
model = AutoModelForCausalLM.from_pretrained("TouchNight/HumanlikeRP")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
HumanlikeRP
This is an attempt to build a humanlike chatbot, designed to give short replies like a real human.
It is a failure: the dataset used to train it has weak contextual relevance, so the model often generates irrelevant answers, and it is also overfit.
Chat Format
This model has been trained to use the ChatML format:
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
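As a minimal sketch of what that template expands to, the snippet below builds the raw prompt string by hand. The helper build_chatml_prompt and the role names passed to it are illustrative, not part of this repo or of transformers:

# Illustrative helper (not part of this repo): render the ChatML template above
# into the raw prompt string the model expects.
def build_chatml_prompt(system, turns, reply_role):
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, message in turns:  # turns: list of (role, message) pairs
        parts.append(f"<|im_start|>{role}\n{message}<|im_end|>")
    # Leave the final block open so generation continues as `reply_role`.
    parts.append(f"<|im_start|>{reply_role}\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    system="You are a concise, humanlike roleplay partner.",
    turns=[("user", "Who are you?")],
    reply_role="assistant",
)

Note that tokenizer.apply_chat_template, as in the snippet above, renders the same structure automatically from the tokenizer's stored template.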
Uploaded model
- Developed by: TouchNight
- License: apache-2.0
- Finetuned from model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
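For reference, here is a rough sketch of what such an Unsloth + TRL SFT run typically looks like. The dataset file, LoRA settings, and hyperparameters below are placeholders rather than the values used for HumanlikeRP, and newer trl versions move dataset_text_field and max_seq_length into SFTConfig:

# Rough sketch of an Unsloth + TRL SFT run; every value below is a placeholder,
# not the configuration actually used for this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="cognitivecomputations/dolphin-2.9.1-yi-1.5-9b",
    max_seq_length=4096,
    load_in_4bit=True,  # 4-bit loading so the 9B base fits on a single GPU
)
# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Placeholder dataset: one ChatML-formatted conversation per row in a "text" column.
dataset = load_dataset("json", data_files="rp_chats.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()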

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TouchNight/HumanlikeRP")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)