Philosopher-14B (merged)
This is the fully merged version of tunedai/philosopher-14b. It loads directly with transformers; no PEFT is required (a sketch of the adapter-based loading path it replaces follows the example below).
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the merged model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    'tunedai/philosopher-14b-merged',
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained('tunedai/philosopher-14b-merged')

# Build the prompt with the chat template (thinking mode disabled)
messages = [{'role': 'user', 'content': 'Do we have free will?'}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
inputs = tokenizer(text, return_tensors='pt').to(model.device)

# Generate, then decode only the newly generated tokens
out = model.generate(**inputs, max_new_tokens=2000, temperature=0.7, do_sample=True)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
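For contrast, the non-merged repository would normally be loaded through the peft library instead. The following is a minimal sketch under the assumption that tunedai/philosopher-14b is a LoRA adapter repo whose adapter config records its base model; that layout is an assumption, not something this card confirms:

```python
# Hedged sketch: assumes tunedai/philosopher-14b is a PEFT (LoRA) adapter repo
# whose adapter_config.json points at the base model. None of this is needed
# for the merged checkpoint above.
import torch
from peft import AutoPeftModelForCausalLM

adapter_model = AutoPeftModelForCausalLM.from_pretrained(
    'tunedai/philosopher-14b',   # adapter repo (assumption)
    torch_dtype=torch.bfloat16,
    device_map='auto',
)

# Optionally fold the adapter into the base weights in memory,
# which is what the *-merged checkpoint has already done for you.
merged = adapter_model.merge_and_unload()
```

With the merged checkpoint, the plain transformers snippet above is all you need.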
Built by TunedAI Labs
License: CC-BY-NC-4.0. Commercial use requires a separate license; contact hello@tunedailabs.com.