Tags: text-generation, transformers, safetensors, llama, merge, mergekit, conversational, text-generation-inference
Base models: TencentARC/LLaMA-Pro-8B-Instruct, arcee-ai/Patent-Instruct-Extended
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Patent-Instruct-Pro")
model = AutoModelForCausalLM.from_pretrained("arcee-ai/Patent-Instruct-Pro")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
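As a usage note not in the original card: for an 8B-parameter model, loading the weights in half precision and letting transformers place them on available devices keeps memory manageable. A minimal variant of the load call above, using standard from_pretrained arguments:

import torch
from transformers import AutoModelForCausalLM

# Half-precision load with automatic device placement (requires accelerate).
model = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Patent-Instruct-Pro",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)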
Patent-Instruct-Pro
Patent-Instruct-Pro is a merge of the following models using mergekit:

- TencentARC/LLaMA-Pro-8B-Instruct
- arcee-ai/Patent-Instruct-Extended
🧩 Configuration
slices:
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [0, 40]
      - model: arcee-ai/Patent-Instruct-Extended
        layer_range: [0, 40]
merge_method: slerp
base_model: TencentARC/LLaMA-Pro-8B-Instruct
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
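This YAML is what one would hand to mergekit (for example through its mergekit-yaml command) to reproduce the merge. In the slerp method, t is the interpolation factor between the base model (t = 0, TencentARC/LLaMA-Pro-8B-Instruct) and the other model (t = 1, arcee-ai/Patent-Instruct-Extended); a list of values defines a gradient of anchors spread across the 40 merged layers, with separate curves for the self_attn and mlp tensors and a flat 0.5 for everything else. The sketch below is an illustration only, not mergekit code: it assumes evenly spaced anchors and linear interpolation between them to show roughly which blend factor each layer ends up with.

import numpy as np

# Illustration of the slerp gradient above (assumes evenly spaced anchors and
# linear interpolation between them; mergekit's internals may differ in detail).
num_layers = 40
layer_pos = np.linspace(0.0, 1.0, num_layers)

self_attn_anchors = [0, 0.5, 0.3, 0.7, 1]   # t values for attention tensors
mlp_anchors       = [1, 0.5, 0.7, 0.3, 0]   # t values for MLP tensors
anchor_pos = np.linspace(0.0, 1.0, len(self_attn_anchors))

t_self_attn = np.interp(layer_pos, anchor_pos, self_attn_anchors)
t_mlp       = np.interp(layer_pos, anchor_pos, mlp_anchors)

# t near 0 keeps the base model's weights, t near 1 takes the other model's;
# all remaining tensors use the flat default t = 0.5.
for layer in (0, 10, 20, 30, 39):
    print(f"layer {layer:2d}: self_attn t={t_self_attn[layer]:.2f}  mlp t={t_mlp[layer]:.2f}")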
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="arcee-ai/Patent-Instruct-Pro")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
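As a follow-up usage note (not part of the original card): with chat-style messages, the text-generation pipeline returns a list with one entry per input whose generated_text field holds the continued conversation, so the model's reply can be pulled out along these lines, assuming that output format:

# Sketch, assuming the standard chat output format of the text-generation
# pipeline: generated_text is the conversation with the model's reply last.
result = pipe(messages, max_new_tokens=40)
print(result[0]["generated_text"][-1]["content"])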