Trendyol-LLM-Asure-12B
Trendyol-LLM-Asure-12B is a 12-billion-parameter multimodal instruct model built on top of Gemma 3-12B. It is optimized for structured instruction following over both text and image-text inputs, with a primary focus on operational task performance in Turkish and English.
The model’s general encyclopedic world knowledge is intentionally limited. Instead, it is heavily tuned for e-commerce business tasks such as summarization, question-answering, structured extraction, and controlled generation. Compared to its base model, it is optimized for lower token consumption and more efficient inference in production environments.
🔑 Highlights
- Multimodal (Vision + Text) – Native support for image-text conversations using Gemma 3 multimodal capabilities.
- Instruct-Optimized – Trained exclusively in instruct format for high prompt adherence and system-message compliance.
- Efficient Inference – Reduced token verbosity compared to base Gemma 3-12B.
- Task-Oriented Design
  - Summarization & paraphrasing
  - Textual and visual Question-Answering (QA)
  - Structured extraction
  - Controlled generation tasks
  - Text classification
  - E-commerce tasks such as relevancy scoring
- Bilingual – Strong Turkish and English performance.
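Because the model is trained exclusively in instruct format with system-message compliance, requests are typically expressed as a system turn followed by a user turn. A minimal sketch of that message structure (the system and user texts below are illustrative placeholders, not official prompts for this model):

```python
# Illustrative chat structure for an instruct-tuned, system-message-aware model.
# The system/user texts are placeholders, not prompts from the model card.
system_prompt = "You are an e-commerce assistant. Answer concisely in the user's language."

messages = [
    {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
    {
        "role": "user",
        "content": [{"type": "text", "text": "Summarize this product review in one sentence."}],
    },
]

# Sanity check: one system turn followed by one user turn.
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```

This list can be passed directly to the processor's chat template, as shown in the usage example below.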
Basic Usage with Transformers
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Trendyol/Trendyol-LLM-Asure-12B"

model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            # Turkish prompt: "What do you see in this image? Can you describe it briefly and clearly?"
            {"type": "text", "text": "Bu görselde ne görüyorsun? Kısa ve net şekilde açıklayabilir misin?"},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))
Serve the Model with vLLM
Below is a minimal production-style setup for serving Trendyol-LLM-Asure-12B with vLLM.
vllm serve Trendyol/Trendyol-LLM-Asure-12B \
  --served-model-name asure-12b \
  --dtype bfloat16
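Once the server is running, vLLM exposes an OpenAI-compatible API. A hedged sketch of a chat-completions request payload (the localhost URL assumes vLLM's default port 8000; only the payload is built here, since the actual POST requires a running server):

```python
import json

# vLLM serves an OpenAI-compatible endpoint; port 8000 is its default.
# The model name matches --served-model-name from the serve command above.
url = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "asure-12b",
    "messages": [
        {"role": "user", "content": "Summarize: fast shipping, but the item arrived damaged."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

# Send with any HTTP client, e.g.:
#   import requests; r = requests.post(url, json=payload); print(r.json())
print(json.dumps(payload, indent=2))
```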
Limitations, Risks, Bias, and Ethical Considerations
Limitations and Known Biases
- Primary Function and Application: Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. Outputs should be considered as suggestions rather than definitive answers.
- Language Comprehension and Generation: The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- Generation of False Information: Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Its world knowledge is intentionally limited, as it is built for business use cases.
Risks and Ethical Considerations
- Potential for Harmful Use: There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- Unintended Content and Bias: The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- Toxicity: Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
Recommendations for Safe and Ethical Usage
- Human Oversight: We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- Application-Specific Testing: Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- Responsible Development and Deployment: It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
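The human-oversight recommendation above can be sketched as a post-generation filter that checks model outputs before they reach users. This is a toy illustration only; the blocklist and function names are placeholders, and a production deployment should use a proper moderation model or service rather than keyword matching:

```python
# Toy post-generation filter. A real deployment would use a moderation
# model or service, not a keyword blocklist.
BLOCKLIST = {"badword", "slur_example"}  # illustrative placeholders

def is_safe(text: str) -> bool:
    """Return False if the model output contains any flagged term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def moderate(text: str, fallback: str = "[response withheld]") -> str:
    """Pass safe outputs through; replace flagged ones with a fallback."""
    return text if is_safe(text) else fallback

print(moderate("This product looks great."))  # prints the text unchanged
print(moderate("contains badword here"))      # prints "[response withheld]"
```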