---
license: apache-2.0
---

# Quick Start

```python
import torch
from transformers import pipeline

prompt = "FILL IN THE QUESTION"

generator = pipeline(
    task="text-generation",
    model="Verirl-CodeQwen2.5",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator(
    prompt,
    max_length=2048,
    num_return_sequences=1,
    do_sample=False,  # deterministic greedy decoding (the intent of temperature=0)
)
response = result[0]["generated_text"]
print("Response:", response)
```