---
license: cc-by-nc-4.0
language:
- ar
- en
base_model:
- google/gemma-2-9b
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
    - Student
    - Research Graduate
    - AI researcher
    - AI developer/engineer
    - Reporter
    - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the Meta Privacy Policy: checkbox
---
# DALLA Gemma

`dalla-gemma-it` is an Arabic-focused adaptation of `google/gemma-2-9b`, built with the DALLA suite. The model uses a tokenizer modified through our SentencePiece token-reuse method, which improves Arabic coverage without increasing the vocabulary size. It was further trained on curated, culturally grounded Arabic data to support more fluent Arabic generation and better value alignment with Arab communities. This model serves as a demonstration of the DALLA pipeline for adapting open-weight models to Arabic.
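As a quick way to see the effect of the tokenizer adaptation, you can compare how many tokens the adapted tokenizer and the base Gemma tokenizer produce for the same Arabic sentence. This is a minimal sketch with an arbitrary example sentence; note that `google/gemma-2-9b` is gated, so loading its tokenizer may require accepting the Gemma license on the Hub:

```python
from transformers import AutoTokenizer

# Arbitrary example sentence: "The Arabic language is rich in its vocabulary and structures."
text = "اللغة العربية غنية بمفرداتها وتراكيبها."

dalla_tok = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")
base_tok = AutoTokenizer.from_pretrained("google/gemma-2-9b")

# Fewer tokens for the same text indicates better Arabic coverage.
print("dalla-gemma-it:", len(dalla_tok.tokenize(text)))
print("gemma-2-9b:", len(base_tok.tokenize(text)))
```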
## Intended Use

This model is released for research purposes and general experimentation with Arabic language tasks. It is not designed for deployment in high-risk settings, and its outputs should not be relied on for factual, legal, medical, or sensitive decisions.
## Getting Started

First, install the Transformers library:

```bash
pip install -U transformers
```
### Running with the pipeline API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="dru-ac/dalla-gemma-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "من أنت؟"},  # "Who are you?"
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# أنا دلّة، نموذج لغوي ضخم تم تدريبي على مجموعة واسعة من البيانات في مختلف المجالات للإجابة على أسئلة المستخدمين. تم تطويري من قبل باحثي ومهندسي المركز العربي للأبحاث ودراسة السياسات الذي يقع مقره الرئيسي في الدوحة، قطر. يمكنك سؤالي عن مختلف المواضيع خاصة المتعلقة بالثقافة واللغة العربية
# Translation: "I am Dalla, a large language model trained on a wide range of data across
# many domains to answer users' questions. I was developed by the researchers and engineers
# of the Arab Center for Research and Policy Studies, headquartered in Doha, Qatar. You can
# ask me about a variety of topics, especially those related to Arab culture and the Arabic language."
```
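The pipeline also accepts the usual generation arguments. For example, to enable sampling instead of greedy decoding (the values below are illustrative, not tuned recommendations for this model):

```python
outputs = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,   # sample from the output distribution
    temperature=0.7,  # illustrative value, not a tuned recommendation
    top_p=0.95,
)
```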
### Running the model on a single / multi GPU

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")
model = AutoModelForCausalLM.from_pretrained(
    "dru-ac/dalla-gemma-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "user", "content": "من أنت؟"},  # "Who are you?"
]

# Apply the model's chat template, append the assistant turn marker,
# and move the inputs to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
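The final `decode` call above prints the full sequence, including the prompt and special tokens. To show only the assistant's reply, slice off the prompt tokens first (a minimal variant of the snippet above):

```python
prompt_length = input_ids["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True))
```

### Running the model in 4-bit via bitsandbytes

If the full-precision model does not fit in GPU memory, quantized loading through the standard `bitsandbytes` integration in Transformers is a common fallback. This is a sketch under that assumption, not a configuration tested for this model:

```python
# pip install accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the weights in 4-bit, computing in bfloat16.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("dru-ac/dalla-gemma-it")
model = AutoModelForCausalLM.from_pretrained(
    "dru-ac/dalla-gemma-it",
    device_map="auto",
    quantization_config=quantization_config,
)
```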