---
license: apache-2.0
language:
- ar
tags:
- iraqi-dialect
- pragmatics
- nlp
- social-reasoning
- cultural-alignment
---

# Project Nabu: A Model-Agnostic Pragmatic Layer

**Author:** Abdullah Hawas (Independent Researcher, Iraq)  
**Paper Title:** Project Nabu: A Model-Agnostic Pragmatic Layer for Social Intent Understanding in Arabic Dialects

## 1. Abstract
Most natural language processing (NLP) systems rely on surface-level sentiment cues, which leads to systematic failures when processing language from high-context cultures. **Project Nabu** introduces a model-agnostic pragmatic layer designed to sit on top of any pretrained language model (such as BERT or MARBERT), enabling inference of social intent beyond traditional sentiment analysis.

We use **Iraqi Arabic** as a stress-test case due to its dense hierarchical signaling. Our evaluation on the **ICLE dataset (4,000 annotated sentences)** demonstrates that the Nabu Layer can suppress literal sentiment cues when they conflict with pragmatic intent.

## 2. The Problem: "The Pragmatic Gap"
Standard sentiment pipelines routinely misclassify utterances in hierarchical settings, where exaggerated praise or apparent sympathy often serves strategic goals such as:
* Deference signaling (Respect)
* Request softening
* Status negotiation

Current models label "Good job" as **Positive**; Nabu instead assesses whether the utterance actually functions as **sarcasm** or **flattery**.
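The gap above can be illustrated with a toy sketch. This is **not** the paper's method: the cue lists, the stand-in literal classifier, and the two-cue threshold are all invented for demonstration, showing only the general idea of overriding a literal sentiment label when hierarchy and flattery cues co-occur.

```python
# Illustrative sketch only: all cue lists and thresholds are hypothetical.
FLATTERY_CUES = {"professor", "minister", "by god", "frankly", "oppressed"}

def literal_sentiment(text: str) -> str:
    """Stand-in for a surface-level sentiment classifier."""
    positive = {"good", "great", "excellent"}
    return "positive" if set(text.lower().split()) & positive else "neutral"

def pragmatic_label(text: str) -> str:
    """Override the literal label when flattery cues co-occur with praise."""
    lowered = text.lower()
    cue_hits = sum(cue in lowered for cue in FLATTERY_CUES)
    literal = literal_sentiment(text)
    if literal == "positive" and cue_hits >= 2:
        return "strategic_flattery"
    return literal

print(pragmatic_label("By God, frankly, great work Professor"))
# -> strategic_flattery: literal praise is suppressed by the cue overlap
```

A real pragmatic layer would of course learn such cues from annotated data rather than hard-code them; the sketch only mirrors the suppress-the-literal-cue behavior described in the abstract.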

## 3. Methodology & Architecture
The Nabu Layer operates on the embeddings of a pretrained base model without retraining the base model itself.

### Architecture Design
The layer extracts pragmatic features based on:
1.  **Hierarchical role indicators**
2.  **Pragmatic trigger density**
3.  **Status comparison patterns**

*(Note: See Figure 1 in the attached Paper PDF for the full diagram)*
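The three feature families above can be sketched as simple extractors. This is a minimal illustration, not the paper's implementation: the honorific, trigger, and status-term lists are hypothetical placeholders for whatever lexical or embedding-based signals the actual layer uses.

```python
# Hypothetical cue lists for illustration only (not from the paper).
HONORIFICS = {"professor", "doctor", "sir", "minister"}    # hierarchical role indicators
TRIGGERS = {"frankly", "by god", "honestly", "oppressed"}  # pragmatic triggers
STATUS_TERMS = {"manager", "minister", "position"}         # status comparison terms

def pragmatic_features(text: str) -> dict:
    """Extract the three feature families named above from one utterance."""
    lowered = text.lower()
    n_tokens = max(len(lowered.split()), 1)
    return {
        # 1. Hierarchical role indicators: is a role/honorific mentioned?
        "role_indicator": any(h in lowered for h in HONORIFICS),
        # 2. Pragmatic trigger density: triggers per token.
        "trigger_density": sum(t in lowered for t in TRIGGERS) / n_tokens,
        # 3. Status comparison: at least two status terms contrasted.
        "status_comparison": sum(s in lowered for s in STATUS_TERMS) >= 2,
    }

print(pragmatic_features("you should be a minister not a manager"))
```

In the described architecture, features like these would be computed over the frozen base model's embeddings and fed to the pragmatic layer, leaving the base model's weights untouched.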

## 4. Evaluation & Results
We evaluated the model on **Test Case II: Sentiment Paradox (Status Inflation)**.
Example: *"By God, Professor, frankly you are oppressed in this position; you should be a minister, not a manager."*

| Metric | Standard Sentiment | **Nabu Layer** |
| :--- | :--- | :--- |
| **Interpretation** | Negative (Sadness) | **Strategic Flattery (Hypocrisy)** |
| **Confidence** | N/A | **73.04%** |

**Overall Performance:**
The Nabu framework achieved an average accuracy of **89%** on the Iraqi Arabic test set, compared to **54%** for standard sentiment classifiers.

## 5. Technical Usage
To use the Nabu Layer (weights coming soon), the intended workflow follows the standard `transformers` loading pattern. Note that the repository below is not yet published, so the snippet will only run once the weights are released:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the Nabu-trained layer.
# NOTE: weights are not yet published; this call will fail until release.
model_name = "ay933/Nabu-Iraqi"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Classify a single utterance.
inputs = tokenizer("شكراً جزيلاً أستاذ", return_tensors="pt")  # "Thank you very much, Professor"
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
```