Model: FINGU-AI/Qwen2.5-Orpo
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "FINGU-AI/Qwen2.5-Orpo"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Define prompt
# Korean source sentence: "However, provisions clearly differ from contingent
# liabilities in that, at the reporting date, the entity has a clear obligation
# and the amount can be reasonably estimated."
prompt = "Translate ko to uz: \n 그러나 충당부채는 결산일에 기업이 부담해야 할 의무가 명백히 존재하고 금액을 합리적으로 추정할 수 있다는 점에서 '우발부채와는 확실하게 차이가 있다.'"
# Prepare messages and text
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
# Generate model inputs
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
# Decode the response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
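The slicing step above (output_ids[len(input_ids):]) is the subtle part: generate() returns each prompt's tokens followed by the newly generated tokens, so the prompt portion must be cut off before decoding. A minimal stand-alone sketch with made-up token ids:

```python
# Made-up token ids illustrating the prompt-stripping step:
# generate() returns prompt tokens followed by newly generated tokens,
# so we slice off the first len(prompt) ids from each sequence.
input_ids = [[101, 7, 8, 9]]             # tokenized prompt (one sequence)
generated = [[101, 7, 8, 9, 42, 43, 2]]  # prompt + new tokens from generate()
new_tokens = [out[len(inp):] for inp, out in zip(input_ids, generated)]
print(new_tokens)  # [[42, 43, 2]]
```

Without this step, batch_decode would echo the prompt back at the start of the response.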
Model Details
- Model Name: FINGU-AI/Qwen2.5-Orpo
- Type: Causal Language Model
- Task: Text generation and translation (Korean to Uzbek)
- Framework: PyTorch
- Auto Tokenizer: Yes, using AutoTokenizer
- Device: Auto configuration based on available hardware (e.g., GPU/CPU)
Example Use Case
This model supports translation from Korean to Uzbek. In the given example, the input is a sentence in Korean, and the model translates it into Uzbek. The system role is set up as "You are a helpful assistant."
Model Inputs
- Prompt: Text input asking for translation (Korean to Uzbek).
- Tokenization: apply_chat_template is used to structure the input for a conversational AI use case.
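Qwen2.5 models use a ChatML-style chat template. As a rough illustration (not the tokenizer's actual template, which ships with the model), apply_chat_template with tokenize=False and add_generation_prompt=True returns text of approximately this shape:

```python
# Rough sketch of a ChatML-style prompt, approximating what
# apply_chat_template(..., tokenize=False, add_generation_prompt=True)
# returns for Qwen2.5 (the authoritative template ships with the tokenizer).
def chatml_format(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # trailing header cues the model to answer
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate ko to uz: ..."},
]
print(chatml_format(messages))
```

The trailing, unclosed assistant header is what add_generation_prompt=True adds: it tells the model that the next tokens should be its reply.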
Output
The model generates a translated response in Uzbek using AutoModelForCausalLM for causal generation. The resulting token ids are decoded and printed as readable text.
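The skip_special_tokens=True flag in batch_decode drops control tokens (such as end-of-sequence markers) before mapping ids back to text. A toy sketch with an invented vocabulary and ids:

```python
# Toy decode illustrating skip_special_tokens=True: special ids are
# filtered out before the remaining ids are mapped back to text.
# Vocabulary and ids here are invented for the example.
vocab = {42: "Biroq", 43: " ...", 2: "<|endoftext|>"}
special_ids = {2}  # pretend id 2 is a special end-of-sequence token
ids = [42, 43, 2]
text = "".join(vocab[i] for i in ids if i not in special_ids)
print(text)  # Biroq ...
```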