
DALLA Llama

dalla-llama is an Arabic-focused adaptation of meta-llama/Llama-3.1-8B, built using the DALLA suite. The model uses a tokenizer modified through our R-BPE framework to improve Arabic coverage without increasing vocabulary size. It was further trained on curated, culturally grounded Arabic data to support more fluent Arabic generation and better value alignment with Arab communities. This model serves as a demonstration of the DALLA pipeline for adapting open-weight models to Arabic.
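The core idea behind keeping the vocabulary size fixed can be illustrated with a toy sketch. This is not the actual R-BPE implementation (which is described in the DALLA suite), just a hypothetical illustration of the principle: token IDs that are rarely used for the target domain are reassigned to new Arabic subwords, so Arabic coverage improves without growing the embedding table.

```python
# Toy illustration of the "reuse IDs, keep vocab size fixed" principle.
# This is NOT the R-BPE algorithm itself; the function and data are invented
# for demonstration only.
from collections import Counter

def reuse_token_ids(vocab, usage_counts, new_tokens):
    """Replace the least-used entries of a fixed-size vocab (token -> id)
    with new tokens, keeping the total vocabulary size constant."""
    # Rank existing tokens by usage, least used first.
    ranked = sorted(vocab, key=lambda t: usage_counts.get(t, 0))
    victims = ranked[:len(new_tokens)]
    remapped = dict(vocab)
    for old, new in zip(victims, new_tokens):
        remapped[new] = remapped.pop(old)  # new token inherits the old ID
    return remapped

vocab = {"<rare1>": 0, "<rare2>": 1, "the": 2, "and": 3}
counts = Counter({"the": 1000, "and": 800, "<rare1>": 1, "<rare2>": 0})
new_vocab = reuse_token_ids(vocab, counts, ["سلا", "مرحبا"])

assert len(new_vocab) == len(vocab)  # vocabulary size unchanged
assert "مرحبا" in new_vocab and "the" in new_vocab
```

The frequent tokens survive, the rare placeholders are replaced by Arabic subwords, and the embedding matrix never changes shape.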

Intended Use

This model is released for research purposes and general experimentation with Arabic language tasks. It is not designed for deployment in high-risk settings, and its outputs should not be relied on for factual, legal, medical, or sensitive decisions.

Getting Started

```shell
pip install -U transformers
pip install -U accelerate
pip install -U rbpe
```
```python
from rbpe import RBPETokenizer
from transformers import AutoModelForCausalLM
import torch

# Load the R-BPE tokenizer shipped with the rbpe package
# (assuming it mirrors the Hugging Face from_pretrained API).
tokenizer = RBPETokenizer.from_pretrained("dru-ac/dalla-llama-it")
model = AutoModelForCausalLM.from_pretrained(
    "dru-ac/dalla-llama-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "user", "content": "من انت؟"},  # "Who are you?"
]
# Build the prompt with the chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
# أنا دلّة، نموذج لغوي ضخم تم تدريبي على مجموعة واسعة من البيانات في مختلف المجالات للإجابة على أسئلة المستخدمين. تم تطويري من قبل باحثي ومهندسي المركز العربي للأبحاث ودراسة السياسات الذي يقع مقره الرئيسي في الدوحة، قطر. يمكنك سؤالي عن مختلف المواضيع خاصة المتعلقة بالثقافة واللغة العربية.
# Translation: "I am Dalla, a large language model trained on a wide range of
# data across various fields to answer user questions. I was developed by the
# researchers and engineers of the Arab Center for Research and Policy Studies,
# headquartered in Doha, Qatar. You can ask me about various topics, especially
# those related to Arab culture and the Arabic language."
```
Safetensors · Model size: 8B params · Tensor type: BF16