Shay
Shay is a merged large language model that integrates advanced reasoning, tool use, and multimodal capabilities.
It is designed for unrestricted, unfiltered, and unbiased operation, with support for long-context processing, multi-step reasoning, and instruction-following tasks.
Shay can perform text generation, summarization, code assistance, translation, and more.
Merge Details
- Merge method: task_arithmetic
- Density: 0.71
- Weight: 0.55
- Normalization: enabled
- INT8 masking: enabled
- Dtype: bfloat16
- Maximum supported context length: 40k tokens
- Recommended maximum generation length: 512 new tokens
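For intuition, a task-arithmetic merge with these settings can be sketched on a single weight tensor. This is an illustrative NumPy sketch, not mergekit's actual implementation; the function name, the magnitude-based density masking, and the normalization scheme are assumptions:

```python
import numpy as np

def task_arithmetic_merge(base, fine_tuned_list, weight=0.55, density=0.71, normalize=True):
    """Sketch of a task-arithmetic merge on one weight tensor.

    Each fine-tuned model contributes a "task vector" (its delta from the
    base weights). Deltas are sparsified to the top-`density` fraction of
    entries by magnitude, scaled by `weight`, summed, and added to the base.
    """
    merged_delta = np.zeros_like(base)
    for ft in fine_tuned_list:
        delta = ft - base                       # task vector
        k = max(1, int(delta.size * density))   # number of entries to keep
        if k < delta.size:
            threshold = np.sort(np.abs(delta), axis=None)[-k]
            delta = np.where(np.abs(delta) >= threshold, delta, 0.0)
        merged_delta += weight * delta
    if normalize and fine_tuned_list:
        merged_delta /= weight * len(fine_tuned_list)  # rescale by total weight
    return base + merged_delta
```

With a single fine-tuned model and normalization enabled, the rescaling cancels the merge weight, so the result is the base plus the density-masked task vector.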
Usage Example
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "your-username/Shay"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    trust_remote_code=True,
    rope_scaling={"type": "dynamic", "factor": 10.0}
)
# Safe example prompt
prompt = """<|system|>
You are an intelligent, helpful assistant.
<|user|>
Write a detailed plan for organizing a community event with volunteers, budget, and timeline.
<|assistant|>
"""
# Prepare inputs
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate output
output = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=1.05,
    top_p=0.97,
    top_k=60,
    repetition_penalty=1.12,
    do_sample=True
)
# Decode only the newly generated tokens; the prompt is echoed at the start
# of `output`, and skip_special_tokens=True would strip the <|assistant|>
# marker, so splitting the decoded text on it would not work.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply.strip())
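The prompt above follows a fixed <|system|>/<|user|>/<|assistant|> layout, so it can be assembled from a list of messages. `build_prompt` below is a hypothetical helper, not part of the model's API:

```python
def build_prompt(messages):
    """Assemble a prompt in the <|system|>/<|user|>/<|assistant|> format.

    `messages` is a list of (role, content) pairs; the result ends with an
    open <|assistant|> tag so the model continues as the assistant.
    """
    parts = [f"<|{role}|>\n{content}" for role, content in messages]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_prompt([
    ("system", "You are an intelligent, helpful assistant."),
    ("user", "Write a detailed plan for organizing a community event with volunteers, budget, and timeline."),
])
```

This produces exactly the multi-line string used in the usage example, and makes it easy to extend the conversation with additional turns.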
Model tree for Abigail45/Shay