---
language:
- en
- yo
- ha
- ig
- sw
- am
- pcm
license: apache-2.0
base_model: davlan/afro-xlmr-base
tags:
- text-classification
- human-ai-text-attribution
- hata
- african-languages
- multilingual
datasets:
- msmaje/phd-hata-african-dataset
metrics:
- accuracy
- f1
---

# AfroXLMR for Human-AI Text Attribution (HATA)

This model is a fine-tuned version of [davlan/afro-xlmr-base](https://huggingface.co/davlan/afro-xlmr-base) for **Human-AI Text Attribution (HATA)**, i.e., classifying a text as human-written or AI-generated, across six African languages and English.

## Model Description

- **Model Type:** Text Classification (Binary)
- **Base Model:** AfroXLMR-base
- **Languages:** Yoruba, Hausa, Igbo, Swahili, Amharic, Nigerian Pidgin, English
- **Task:** Distinguishing between human-written and AI-generated text

## Performance

| Metric    | Score  |
|-----------|--------|
| Accuracy  | 1.0000 |
| F1 Score  | 1.0000 |
| Precision | 1.0000 |
| Recall    | 1.0000 |

## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned model and its tokenizer from the Hub
model_name = "msmaje/phdhatamodel"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize the input, truncating to 128 tokens
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

# Run inference without gradient tracking and take the most probable class
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(predictions, dim=-1).item()

# Map class indices to human-readable labels
labels = {0: "Human-written", 1: "AI-generated"}
print(f"Prediction: {labels[predicted_class]}")
```
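
The same inference can also be run through the `pipeline` API, which bundles tokenization, the forward pass, and softmax into one call. This is a minimal sketch; note that the label strings it returns depend on the `id2label` mapping stored in the model config (they may appear as `LABEL_0`/`LABEL_1` rather than the names above):

```python
from transformers import pipeline

# Text-classification pipeline; truncation mirrors the 128-token limit above
classifier = pipeline("text-classification", model="msmaje/phdhatamodel")
result = classifier("Your text here", truncation=True, max_length=128)
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```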

## Training Details

- **Dataset:** msmaje/phd-hata-african-dataset
- **Training samples:** 128,000
- **Validation samples:** 32,000
- **Epochs:** 3
- **Learning Rate:** 2e-5
- **Batch Size:** 16
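
A fine-tuning run with these hyperparameters can be sketched with the `Trainer` API roughly as follows. This is a minimal sketch, not the exact training script: the dataset column names (`text`, `label`), split names, and the 128-token limit (taken from the usage example above) are assumptions to be checked against the dataset card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed schema: "text"/"label" columns with train and validation splits
dataset = load_dataset("msmaje/phd-hata-african-dataset")
tokenizer = AutoTokenizer.from_pretrained("davlan/afro-xlmr-base")

def tokenize(batch):
    # 128-token limit matches the usage example above (an assumption)
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "davlan/afro-xlmr-base", num_labels=2
)

# Hyperparameters mirror the values listed above
args = TrainingArguments(
    output_dir="afroxlmr-hata",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
trainer.evaluate()
```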

## Citation
```bibtex
@misc{msmaje2025hata,
  author = {Maje, M.S.},
  title = {AfroXLMR for Human-AI Text Attribution},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/msmaje/phdhatamodel}
}
```