Project Nabu: A Model-Agnostic Pragmatic Layer

Author: Abdullah Hawas (Independent Researcher, Iraq)
Paper Title: Project Nabu: A Model-Agnostic Pragmatic Layer for Social Intent Understanding in Arabic Dialects

1. Abstract

Most natural language processing (NLP) systems rely on surface-level sentiment cues, which leads to systematic failures when processing language in high-context cultures. Project Nabu introduces a model-agnostic pragmatic layer designed to sit on top of any pretrained language model (such as BERT or MARBERT), enabling inference of social intent beyond traditional sentiment analysis.

We use Iraqi Arabic as a stress-test case due to its dense hierarchical signaling. Our evaluation on the ICLE dataset (4,000 annotated sentences) demonstrates that the Nabu Layer can suppress literal sentiment cues when they conflict with pragmatic intent.

2. The Problem: "The Pragmatic Gap"

Standard sentiment pipelines often misclassify utterances in hierarchical settings. For example, exaggerated praise or apparent sympathy frequently serves strategic goals such as:

  • Deference signaling (Respect)
  • Request softening
  • Status negotiation

A standard model labels "Good job" as Positive; Nabu instead analyzes whether it is sarcasm or strategic flattery.
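The gap can be made concrete with a toy sketch (purely illustrative; the trigger list, keyword heuristics, and labels below are our own assumptions for exposition, not the Nabu paper's method):

```python
# Minimal sketch of why surface polarity diverges from pragmatic intent.
# Trigger words and labels here are illustrative assumptions, not the paper's.

FLATTERY_TRIGGERS = {"by god", "frankly", "professor", "minister"}

def surface_sentiment(text: str) -> str:
    """Keyword-level polarity, as a stand-in for a standard sentiment model."""
    positive = {"good", "great", "minister", "professor"}
    return "Positive" if any(w in text.lower() for w in positive) else "Neutral"

def pragmatic_reading(text: str) -> str:
    """Override surface polarity when status-inflation triggers are dense."""
    hits = sum(t in text.lower() for t in FLATTERY_TRIGGERS)
    if surface_sentiment(text) == "Positive" and hits >= 2:
        return "Strategic Flattery"
    return surface_sentiment(text)

text = "By God, Professor, frankly you should be a minister not a manager"
print(surface_sentiment(text))   # Positive
print(pragmatic_reading(text))   # Strategic Flattery
```

The point is not the heuristic itself but the interface: the pragmatic reading can contradict the surface label when contextual triggers accumulate.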

3. Methodology & Architecture

The Nabu Layer operates on the embeddings of a pretrained base model without retraining the base model itself.

Architecture Design

The layer extracts pragmatic features based on:

  1. Hierarchical role indicators
  2. Pragmatic trigger density
  3. Status comparison patterns

(Note: See Figure 1 in the attached Paper PDF for the full diagram)
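The model-agnostic design can be sketched as a small head that consumes frozen base-model embeddings concatenated with the three pragmatic feature families above. Everything here (dimensions, feature definitions, untrained random weights) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Sketch: a small classifier head over a FROZEN base-model embedding plus
# hand-crafted pragmatic features. All specifics are illustrative assumptions.

rng = np.random.default_rng(0)
HIDDEN = 768          # e.g., BERT/MARBERT hidden size
N_PRAG_FEATURES = 3   # role indicators, trigger density, status comparison
N_INTENTS = 4         # hypothetical intent classes

def pragmatic_features(text: str) -> np.ndarray:
    """Toy versions of the three feature families listed above."""
    toks = text.lower().split()
    role = float(any(t.strip(",.") in {"professor", "manager", "minister"} for t in toks))
    triggers = {"by", "god", "frankly"}
    density = sum(t.strip(",.") in triggers for t in toks) / max(len(toks), 1)
    status = float("minister" in text.lower() and "manager" in text.lower())
    return np.array([role, density, status])

def nabu_head(frozen_embedding: np.ndarray, text: str) -> np.ndarray:
    """Linear head over [embedding ; pragmatic features]; base model untouched."""
    x = np.concatenate([frozen_embedding, pragmatic_features(text)])
    W = rng.normal(size=(N_INTENTS, HIDDEN + N_PRAG_FEATURES))  # untrained demo weights
    logits = W @ x
    logits -= logits.max()  # numerical stability before softmax
    return np.exp(logits) / np.exp(logits).sum()

probs = nabu_head(rng.normal(size=HIDDEN),
                  "By God, Professor, frankly you should be a minister not a manager")
print(probs.shape)
```

Because only the head is trained, the same layer can in principle be attached to any base encoder that exposes sentence embeddings.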

4. Evaluation & Results

We evaluated the model on Test Case II: Sentiment Paradox (Status Inflation). Example: "By God, Professor, frankly you are oppressed in this position, you should be a minister not a manager."

Metric          | Standard Sentiment  | Nabu Layer
Interpretation  | Negative (Sadness)  | Strategic Flattery (Hypocrisy)
Confidence      | N/A                 | 73.04%

Overall Performance: The Nabu framework achieved an average accuracy of 89% on the Iraqi Arabic test set, compared to 54% for standard sentiment classifiers.

5. Technical Usage

To use the Nabu Layer (Weights coming soon):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the Nabu-trained classification layer and its matching tokenizer
model_name = "ay933/Nabu-Iraqi"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
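Once the weights are published, inference follows the standard sequence-classification pattern; the post-processing step can be sketched as below. The label set and the example logit values are hypothetical assumptions — the released model card will define the real ones:

```python
from math import exp

# Hypothetical post-processing: map raw classifier logits to pragmatic labels.
# Label names and logit values are illustrative, not the released model's.
LABELS = ["Sincere", "Strategic Flattery", "Sarcasm", "Deference"]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# In practice these would come from:
#   model(**tokenizer(text, return_tensors="pt")).logits
logits = [0.1, 1.0, -0.3, 0.2]
probs = softmax(logits)
pred = LABELS[probs.index(max(probs))]
print(pred)  # Strategic Flattery
```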