Model Card for alsubari/bert-base-multilingual-cased-dytr

This model performs Part-of-Speech (POS) tagging for Arabic text using a character-level BIO tagging scheme. It is specifically trained on the Holy Quran and built with the dytr (Dynamic Transformer) library, enabling continual learning and multi-task capabilities.
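To make the scheme concrete, here is a minimal illustration of character-level BIO tagging; the segment labels (DET, NOUN) are hypothetical examples for exposition, not necessarily the model's actual tag inventory.

# Hypothetical example of character-level BIO tagging.
# The first character of each morphological segment gets a B- tag;
# the remaining characters of that segment get I- tags.
word = "الكتاب"  # "the book": definite article "ال" + noun "كتاب"
labels = ["B-DET", "I-DET", "B-NOUN", "I-NOUN", "I-NOUN", "I-NOUN"]
for char, label in zip(word, labels):
    print(char, label)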

Key Features

  • ✅ Character-level BIO tagging for handling complex Arabic morphology
  • ✅ Trained on Quranic Arabic - high accuracy on classical texts
  • ✅ Continual Learning ready - add new tasks without forgetting
  • ✅ Multi-task capable - extend to NER, error detection, or generation

Model Details

Model Description

  • Developed by: Akram Alsubari
  • Model type: Token Classification (Character-level POS tagging)
  • Language(s): Arabic (Modern Standard & Quranic)
  • License: Apache 2.0
  • Base Model: google-bert/bert-base-multilingual-cased
  • Framework: dytr (Dynamic Transformer)
  • Training Data: Holy Quran with morphological annotations

Model Performance

Metric           Score
---------------  -------
Token Accuracy   95.68%
F1 Macro         89.59%
F1 Weighted      95.41%
Validation Loss  0.8816
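Token accuracy and macro/weighted F1 are presumably the standard definitions and can be reproduced with scikit-learn; a minimal sketch, where the character-level tag sequences below are placeholders rather than real model output:

from sklearn.metrics import accuracy_score, f1_score

# Placeholder gold and predicted character-level tags, for illustration only
y_true = ["B-NOUN", "I-NOUN", "B-DET", "O", "O"]
y_pred = ["B-NOUN", "I-NOUN", "O", "B-DET", "O"]

print("Token accuracy:", accuracy_score(y_true, y_pred))
print("F1 macro:", f1_score(y_true, y_pred, average="macro"))
print("F1 weighted:", f1_score(y_true, y_pred, average="weighted"))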

Uses

Direct Use

Requirements

pip install torch pyarabic huggingface_hub dytr -q

Inference example

import torch
import json
from huggingface_hub import hf_hub_download
from dytr import DynamicTransformer

# Download the model weights and label configuration from the Hub
model_path = hf_hub_download(
    repo_id="alsubari/bert-base-multilingual-cased-dytr",
    filename="dytr.pt"
)
config_path = hf_hub_download(
    repo_id="alsubari/bert-base-multilingual-cased-dytr",
    filename="config.json"
)

model = DynamicTransformer.load_model(model_path)
model.eval()

# id2label maps prediction indices (stored as strings) to BIO tag names
with open(config_path) as f:
    config = json.load(f)
    id2label = config['id2label']

tokenizer = model.tokenizer

def tag_arabic(text):
    # Split each word into characters; non-initial characters get the
    # WordPiece '##' prefix so they map to subword vocabulary entries
    chars = []
    for word in text.split():
        for i, c in enumerate(word):
            chars.append(c if i == 0 else f'##{c}')

    # Wrap with [CLS] ... [SEP] and convert to input ids
    input_ids = [tokenizer.cls_token_id] + tokenizer.convert_tokens_to_ids(chars) + [tokenizer.sep_token_id]
    input_tensor = torch.tensor([input_ids])

    with torch.no_grad():
        outputs = model.forward(input_ids=input_tensor, task_name='ar_pos_tagging')
        preds = outputs['logits'].argmax(-1).squeeze().tolist()

    # Drop the [CLS]/[SEP] positions and map prediction ids back to BIO tags
    tags = [id2label[str(p)] for p in preds[1:len(chars) + 1]]

    # Regroup character tags into one 'word/TAG1+TAG2' line per word,
    # keeping only the B- tags (one per morphological segment)
    result = []
    idx = 0
    for word in text.split():
        word_tags = tags[idx:idx + len(word)]
        pos = '+'.join(t[2:] for t in word_tags if t.startswith('B-'))
        result.append(f"{word}/{pos if pos else 'O'}")
        idx += len(word)

    return '\n'.join(result)

# Test on the Basmala
print(tag_arabic("بسم الله الرحمن الرحيم"))
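Each output line has the form word/TAG1+TAG2: the B- tags predicted for a word's characters are joined with '+', so a word carrying several morphological segments (for example a clitic plus a stem) shows several tags, and a word whose characters received no B- tag at all falls back to O. The exact tag inventory comes from the id2label mapping in config.json.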

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

The inference example under Direct Use above is the quickest way to get started with the model.

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
