This is the CultureSPA model; code and training details are available at https://github.com/shaoyangxu/CultureSPA.

## load

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "beiweixiaoxu/CultureSPA"  # or a local checkpoint directory

device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
).to(device)
```
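Loading the weights in bfloat16 halves their memory footprint relative to float32. A quick back-of-the-envelope estimate of the GPU memory the weights alone require for this 8B-parameter model:

```python
# Rough GPU memory for the weights only; activations and the KV cache
# add to this at inference time.
params = 8e9          # 8B parameters
bytes_per_param = 2   # bfloat16 is 2 bytes per parameter
weights_gb = params * bytes_per_param / 1024**3
print(f"{weights_gb:.1f} GiB")  # ≈ 14.9 GiB
```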

## inference

```python
# `system_prompt` and `instruction` are plain strings supplied by the caller.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": instruction},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
```
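For reference, `apply_chat_template` renders the message list into a single prompt string before tokenizing. A minimal sketch of that rendering, assuming this model uses the Llama 3 chat template (suggested by the `<|eot_id|>` stop token; the authoritative template is `tokenizer.chat_template`):

```python
# Hypothetical re-implementation of the Llama-3-style rendering that
# apply_chat_template performs; for illustration only.
def render_llama3_prompt(messages, add_generation_prompt=True):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open the assistant turn so the model generates the reply from here.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```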

```python
# Stop on either the standard EOS token or the <|eot_id|> end-of-turn
# token used by Llama-3-style chat templates.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
```
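Passing a list to `eos_token_id` makes decoding halt on whichever stop token appears first. A toy sketch of that stopping rule (the `next_token_fn` stand-in is hypothetical, not `generate`'s real decoding loop):

```python
from itertools import count

def generate_until(next_token_fn, terminators, max_new_tokens):
    # Collect tokens until a terminator appears or the budget runs out.
    out = []
    for _ in range(max_new_tokens):
        tok = next_token_fn()
        if tok in terminators:
            break
        out.append(tok)
    return out

stream = count(5)  # fake "model" that emits token ids 5, 6, 7, ...
ids = generate_until(lambda: next(stream), terminators={7, 99}, max_new_tokens=256)
# ids == [5, 6]: generation stopped as soon as 7 was drawn
```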

```python
outputs = model.generate(
    input_ids=input_ids["input_ids"],
    attention_mask=input_ids["attention_mask"],],
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# `generate` returns the prompt followed by the completion;
# slice off the prompt before decoding.
response = tokenizer.decode(
    outputs[0][input_ids["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(response)
```
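`do_sample=True` with `temperature=0.6` and `top_p=0.9` means each token is drawn from a temperature-scaled distribution truncated to its smallest set of tokens covering at least 0.9 probability mass (nucleus sampling). A dependency-free sketch of one such draw, not `generate`'s internals:

```python
import math
import random

def sample_top_p(logits, temperature=0.6, top_p=0.9, rng=None):
    rng = rng or random.Random(0)
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest prefix of tokens (by probability) whose mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept set and draw one token id.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

token = sample_top_p([2.0, 1.0, 0.1, -1.0])
```

Temperatures below 1 sharpen the distribution toward high-probability tokens, and a smaller `top_p` truncates the low-probability tail more aggressively.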
## details

- Model size: 8B params
- Tensor type: BF16
- Quantizations: 1 model