```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Austral-SFT-KTO")
model = AutoModelForCausalLM.from_pretrained("Delta-Vector/Austral-SFT-KTO")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
A KTO finetune on top of Austral-24B-Base. Still not recommended for use; use -Winton instead!
Datasets:

```yaml
datasets:
  - path: Delta-Vector/Tauri-IFeval-Dans-Tulu-KTO
    split: train
    type: chatml.argilla
  - path: NewEden/Helpsteer-3-edit-kto-v7
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-Helpsteer-3-Preference-KTO
    split: train
    type: chatml.argilla
  - path: NewEden/Helpsteer-3-edit-kto-v7
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-Opus-Accepted-GPT-Rejected-Opus-Writing-Prompts
    split: train
    type: chatml.argilla
  - path: NewEden/Opus-accepted-hermes-rejected-shuffled
    split: train
    type: chatml.argilla
  - path: NewEden/Purpura-Arkhaios-CC-KTO
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-KTO-Instruct-Mix
    split: train
    type: chatml.argilla
```
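Each entry above points at a KTO-style preference dataset. As a rough illustration only (the field names below are assumptions, not the actual schema of these datasets), KTO training data attaches a binary desirability label to each prompt/completion pair, rather than the paired chosen/rejected completions that DPO requires:

```python
# Minimal sketch of KTO-style preference records. Field names
# ("prompt", "completion", "label") are illustrative assumptions,
# not the actual columns of the datasets listed above.
records = [
    {"prompt": "Who are you?", "completion": "I'm a writing model.", "label": True},
    {"prompt": "Who are you?", "completion": "ERROR 404", "label": False},
]

# KTO scores desirable and undesirable completions independently,
# so records can simply be partitioned rather than kept in pairs:
desirable = [r for r in records if r["label"]]
undesirable = [r for r in records if not r["label"]]

print(len(desirable), len(undesirable))  # → 1 1
```

Because the labels are independent, the desirable and undesirable sets do not need to be balanced or drawn from the same prompts, which is part of what makes KTO data cheaper to collect than DPO pairs.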
Model tree for Delta-Vector/Austral-SFT-KTO:
- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503
- Finetuned: LatitudeGames/Harbinger-24B
- Finetuned: Delta-Vector/Austral-24B-Base
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Delta-Vector/Austral-SFT-KTO")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```