DALLA Gemma

dalla-gemma-it is an Arabic-focused adaptation of google/gemma-2-9b, built with the DALLA suite. Its tokenizer was modified through our SentencePiece token-reuse method to improve Arabic coverage without increasing the vocabulary size, and the model was further trained on curated, culturally grounded Arabic data to support more fluent Arabic generation and better value alignment with Arab communities. It serves as a demonstration of the DALLA pipeline for adapting open-weight models to Arabic.
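
A quick way to see the effect of the token-reuse step is to tokenize the same Arabic sentence with the base Gemma tokenizer and with this model's tokenizer and compare token counts. This is an illustrative sketch, not part of the DALLA pipeline; both repositories are gated, so you need access to each.

from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("google/gemma-2-9b")       # base tokenizer (gated repo)
dalla = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")  # adapted tokenizer (gated repo)

text = "اللغة العربية من أكثر اللغات انتشاراً في العالم"  # "Arabic is among the most widespread languages in the world"
print("base tokens: ", len(base(text)["input_ids"]))
print("dalla tokens:", len(dalla(text)["input_ids"]))
# Fewer tokens for the same text indicates denser Arabic coverage at the same vocabulary size.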

Intended Use

This model is released for research purposes and general experimentation with Arabic language tasks. It is not designed for deployment in high-risk settings, and its outputs should not be relied on for factual, legal, medical, or sensitive decisions.

Getting Started

pip install -U transformers

Running with the pipeline API

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="dru-ac/dalla-gemma-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "من أنت؟"},  # "Who are you?"
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# أنا دلّة، نموذج لغوي ضخم تم تدريبي على مجموعة واسعة من البيانات في مختلف المجالات للإجابة على أسئلة المستخدمين. تم تطويري من قبل باحثي ومهندسي المركز العربي للأبحاث ودراسة السياسات الذي يقع مقره الرئيسي في الدوحة، قطر. يمكنك سؤالي عن مختلف المواضيع خاصة المتعلقة بالثقافة واللغة العربية
# Translation: "I am Dalla, a large language model trained on a wide range of data across
# various fields to answer users' questions. I was developed by the researchers and engineers
# of the Arab Center for Research and Policy Studies, headquartered in Doha, Qatar. You can
# ask me about various topics, especially those related to Arab culture and the Arabic language."
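
The pipeline call above uses the model's default generation settings. Standard transformers generation arguments can be passed through the call; the values below are illustrative, not settings recommended by this card.

outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)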

Running the model on a single or multiple GPUs

# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")
model = AutoModelForCausalLM.from_pretrained(
    "dru-ac/dalla-gemma-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {"role": "user", "content": "من أنت؟"},  # "Who are you?"
]
# model.device points at the device chosen by device_map="auto", so this also works off-GPU
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
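
If the model does not fit in GPU memory, a common option is 4-bit loading via bitsandbytes. This is a generic transformers recipe, not a configuration shipped with this model, so treat it as an assumption to validate on your hardware.

# pip install accelerate bitsandbytes
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)
tokenizer = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")
model = AutoModelForCausalLM.from_pretrained(
    "dru-ac/dalla-gemma-it",
    device_map="auto",
    quantization_config=quant_config,
)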