---
language:
- ar
license: apache-2.0
task_categories:
- text-classification
tags:
- arabic
- sentiment
- logistics
- customer-feedback
- delivery-service
pretty_name: Arabic Logistics Feedback Corpus
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: text
    dtype: string
    description: Customer feedback text in Arabic about logistics/delivery service
  - name: label
    dtype: string
    description: Sentiment label (positive/negative)
  - name: score
    dtype: float32
    description: Quality rating from 1.0 to 5.0
  - name: domain
    dtype: string
    description: Domain of the feedback (currently all logistics)
  - name: is_conflict
    dtype: bool
    description: Flags a mismatch between the label and the score
  splits:
  - name: train
    num_bytes: 178122
    num_examples: 1504
  download_size: 75168
  dataset_size: 178122
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Arabic Feedback Corpus

## Dataset Description
This dataset contains 1,504 Arabic customer feedback entries for sentiment analysis and quality assessment in the logistics domain. The data consists of real customer reviews about delivery services, courier performance, and order fulfillment experiences in Egyptian Arabic and Modern Standard Arabic.
### Languages

- Primary: Egyptian Arabic (العامية المصرية)
- Secondary: Modern Standard Arabic (MSA)
- Codes: `ar`, `ar_EG`
### Dataset Summary
Customer feedback is crucial for service improvement and quality assurance. This dataset provides:
- Authentic customer reviews from logistics services
- Binary sentiment labels (positive/negative)
- Quality scores (1-5 scale)
- Conflict detection flags for quality control
- Real-world colloquial Egyptian Arabic expressions
## Dataset Structure

### Data Format

Each entry contains:
```json
{
  "text": "المندوب محترم جدا وسريع في التوصيل",
  "label": "positive",
  "score": 5.0,
  "domain": "logistics",
  "is_conflict": false
}
```
### Data Fields

| Field | Type | Description |
|---|---|---|
| `text` | string | Customer feedback text in Arabic |
| `label` | string | Sentiment label ("positive" or "negative") |
| `score` | float | Quality rating (1.0 to 5.0) |
| `domain` | string | Content domain (always "logistics") |
| `is_conflict` | bool | Flag for label-score conflicts |
### Field Details

#### `text`

Customer feedback ranging from 3 to 75 characters, containing:
- Delivery experience descriptions
- Courier behavior comments
- Service quality assessments
- Product condition feedback
- Timing and professionalism complaints/praise
#### `label`

Binary sentiment classification:
- `positive`: Satisfied customers, good experiences
- `negative`: Complaints, dissatisfaction, problems
#### `score`

Numerical rating on a 1-5 scale:
- 5.0: Excellent service
- 4.0: Good service
- 3.0: Average service
- 2.0: Below average
- 1.0: Poor service
#### `is_conflict`

Quality control flag indicating a mismatch between label and score:
- `false`: Label and score are consistent
- `true`: Conflict detected (e.g., a positive label with a score of 1.0)
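Although the flag ships precomputed, the rule it encodes can be sketched in a few lines (a hypothetical reimplementation: the card only documents the extreme case of a positive label with a 1.0 score, and the 3.0 threshold here is an assumption):

```python
def detect_conflict(label: str, score: float, threshold: float = 3.0) -> bool:
    """Flag entries whose sentiment label disagrees with the numeric score."""
    if label == "positive":
        return score < threshold   # positive feedback should score high
    return score >= threshold      # negative feedback should score low

print(detect_conflict("positive", 1.0))  # the documented conflict case -> True
print(detect_conflict("negative", 1.0))  # consistent -> False
```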
## Dataset Statistics

### Overview
- Total Entries: 1,504
- Positive Reviews: ~35%
- Negative Reviews: ~65%
- Conflicted Labels: ~2%
- Average Text Length: 38.5 characters
- Domain: Logistics only
### Label Distribution
| Label | Count | Percentage |
|---|---|---|
| negative | ~978 | 65% |
| positive | ~526 | 35% |
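The percentages can be re-derived from the (approximate) counts:

```python
counts = {"negative": 978, "positive": 526}   # approximate counts from the table
total = sum(counts.values())                  # 1504
percentages = {label: round(100 * n / total) for label, n in counts.items()}
print(total, percentages)  # 1504 {'negative': 65, 'positive': 35}
```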
### Score Distribution
| Score | Count | Typical Label |
|---|---|---|
| 1.0 | ~1,450 | negative |
| 5.0 | ~50 | positive |
| 2.0-4.0 | ~4 | varies |
### Conflict Examples

Conflicted entries (where the label contradicts the score):

```
{
    "text": "ممتاز وسرعة في الاداء",  # "Excellent and fast performance"
    "label": "positive",
    "score": 1.0,  # ← conflict!
    "is_conflict": true
}

{
    "text": "المندوب بيبلغ بوقت وبيجي بعديها ب ٧ ساعات",  # "The courier gives a time, then shows up 7 hours later"
    "label": "positive",  # ← conflict!
    "score": 1.0,
    "is_conflict": true
}
```
## Common Feedback Themes

### Positive Feedback Topics
- ✅ Professional and respectful couriers
- ✅ Fast delivery
- ✅ Good communication
- ✅ Helpful service
- ✅ On-time arrival
### Negative Feedback Topics
- ❌ Rude or unprofessional behavior
- ❌ Delivery delays
- ❌ Courier refusing to come upstairs
- ❌ Extra charges/tips demanded
- ❌ Not answering calls
- ❌ Poor product condition
- ❌ Wrong items delivered
- ❌ Courier attitude problems
## Use Cases

### ✅ Recommended Use Cases
- Sentiment Analysis: Train Arabic sentiment classifiers
- Quality Assessment: Predict service quality scores
- Conflict Detection: Identify inconsistent reviews
- Egyptian Arabic NLP: Understand colloquial expressions
- Customer Service AI: Build chatbots understanding complaints
- Logistics Analytics: Analyze delivery service quality
- Multi-Task Learning: Joint sentiment + score prediction
- Data Quality Models: Detect annotation inconsistencies
### ⚠️ Limitations
- Domain Specificity: Limited to logistics/delivery domain
- Geographic Scope: Primarily Egyptian context
- Label Noise: Contains ~2% conflicted labels
- Imbalanced Data: 65% negative vs 35% positive
- Size: 1,504 entries (medium-sized dataset)
- Score Distribution: Heavily skewed toward 1.0 and 5.0
## Loading the Dataset

### Using Hugging Face Datasets
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("fr3on/arabic-feedback-corpus")

# Access the data
print(dataset['train'][0])

# Filter by sentiment
positive_reviews = dataset['train'].filter(lambda x: x['label'] == 'positive')
negative_reviews = dataset['train'].filter(lambda x: x['label'] == 'negative')

# Keep only clean data (no conflicts)
clean_data = dataset['train'].filter(lambda x: not x['is_conflict'])

# Filter by score
excellent_service = dataset['train'].filter(lambda x: x['score'] == 5.0)
poor_service = dataset['train'].filter(lambda x: x['score'] == 1.0)
```
### Using Pandas
```python
import pandas as pd

# Load the Parquet file directly
df = pd.read_parquet("hf://datasets/fr3on/arabic-feedback-corpus/data/train-00000-of-00001.parquet")

# Analyze sentiment distribution
print(df['label'].value_counts())

# Check for conflicts
conflicts = df[df['is_conflict']]
print(f"Conflicted entries: {len(conflicts)}")

# Score statistics
print(df['score'].describe())

# Export filtered data
positive_df = df[df['label'] == 'positive']
positive_df.to_csv('positive_feedback.csv', index=False)
```
## Training Examples

### Sentiment Classification
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

# Load dataset
dataset = load_dataset("fr3on/arabic-feedback-corpus")

# Remove conflicted samples for clean training
clean_dataset = dataset['train'].filter(lambda x: not x['is_conflict'])

# Load an Arabic BERT model
model_name = "asafaya/bert-base-arabic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,  # positive/negative
)

# Convert string labels to integers
label_map = {'negative': 0, 'positive': 1}
clean_dataset = clean_dataset.map(lambda x: {'label': label_map[x['label']]})

# Tokenize
def preprocess(examples):
    return tokenizer(
        examples['text'],
        truncation=True,
        max_length=128,
        padding='max_length',
    )

tokenized = clean_dataset.map(preprocess, batched=True)

# Train (default TrainingArguments; set output_dir, epochs, etc. as needed)
trainer = Trainer(
    model=model,
    train_dataset=tokenized,
)
trainer.train()
```
### Multi-Task Learning (Sentiment + Score)
```python
from datasets import load_dataset
import torch.nn as nn

dataset = load_dataset("fr3on/arabic-feedback-corpus")
clean_data = dataset['train'].filter(lambda x: not x['is_conflict'])

# Multi-task model architecture
class MultiTaskModel(nn.Module):
    def __init__(self, base_model):
        super().__init__()
        self.base = base_model
        self.sentiment_head = nn.Linear(768, 2)  # positive/negative
        self.score_head = nn.Linear(768, 1)      # score regression

    def forward(self, input_ids, attention_mask):
        outputs = self.base(input_ids, attention_mask=attention_mask)
        pooled = outputs.last_hidden_state[:, 0]  # [CLS] token
        sentiment = self.sentiment_head(pooled)
        score = self.score_head(pooled)
        return sentiment, score

# Train with both objectives:
# sentiment_loss = CrossEntropyLoss()(sentiment, sentiment_labels)
# score_loss = MSELoss()(score, score_targets)
# total_loss = sentiment_loss + score_loss
```
### Conflict Detection
```python
from datasets import load_dataset

dataset = load_dataset("fr3on/arabic-feedback-corpus")

# Train a model to detect annotation conflicts.
# Features: text + label + score; target: the is_conflict flag.
def extract_features(example):
    return {
        'text': example['text'],
        'label': example['label'],
        'score': example['score'],
        'target': example['is_conflict'],
    }

conflict_dataset = dataset['train'].map(extract_features)

# This can help identify:
# - annotation errors
# - sarcastic comments
# - ambiguous feedback
```
## Data Collection & Processing

### Source
- Origin: Real customer feedback from logistics services
- Language: Primarily Egyptian Arabic (colloquial)
- Quality: Authentic user-generated content
### Annotation Process
- Text Collection: Customer reviews and feedback
- Labeling: Binary sentiment annotation (positive/negative)
- Scoring: Quality ratings on 1-5 scale
- Conflict Detection: Automated flag for label-score mismatches
- Validation: Quality checks and consistency reviews
### Data Quality
- ✅ Real customer feedback (not synthetic)
- ⚠️ Contains ~2% label-score conflicts
- ✅ Text lengths validated (3-75 characters)
- ✅ Domain consistency (all logistics)
- ⚠️ Class imbalance (65% negative)
## Considerations for Using the Data

### Egyptian Arabic Characteristics
This dataset contains colloquial Egyptian expressions:
- Informal spelling: مش instead of ليس
- Egyptian vocabulary: مندوب، اوردر، شحنة
- Mixed language: Some English words (أوردر = order)
- Abbreviated words: ج for جنيه (Egyptian pound)
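These variations usually motivate a normalization pass before tokenization. A minimal sketch (the exact rules are a preprocessing choice, not part of the dataset):

```python
import re

# Map Arabic-Indic digits to ASCII digits
ARABIC_DIGITS = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")

def normalize_arabic(text: str) -> str:
    text = text.translate(ARABIC_DIGITS)   # ٧ -> 7
    text = re.sub("[إأآ]", "ا", text)      # unify hamza/madda alef variants
    text = text.replace("ـ", "")           # strip tatweel (elongation character)
    text = re.sub(r"\s+", " ", text).strip()
    return text

print(normalize_arabic("أوردر"))      # -> "اوردر"
print(normalize_arabic("٧ ساعات"))    # -> "7 ساعات"
```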
### Handling Conflicts

The `is_conflict` flag identifies potential issues:

```python
# Option 1: exclude conflicts
clean_data = dataset['train'].filter(lambda x: not x['is_conflict'])

# Option 2: use conflicts for quality-control training
conflicts = dataset['train'].filter(lambda x: x['is_conflict'])

# Option 3: manually review and correct
for item in conflicts:
    # review and fix annotations
    pass
```
### Recommended Training Approaches
- Balance the dataset using oversampling or class weights
- Remove conflicts for cleaner training
- Use Arabic-specific models (AraBERT, MARBERT)
- Consider dialectal variations in preprocessing
- Apply data augmentation to address class imbalance
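For the class-weight route, inverse-frequency weights can be computed from the card's approximate counts (a sketch; the result can be passed to e.g. `torch.nn.CrossEntropyLoss(weight=...)`):

```python
counts = {"negative": 978, "positive": 526}   # approximate counts from the card
total = sum(counts.values())
# weight = total / (num_classes * class_count); the minority class gets the larger weight
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights)  # positive (minority) weighted roughly twice as heavily as negative
```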
## Ethical Considerations
- Privacy: Customer names and personal info removed
- Bias: Dataset reflects real customer experiences
- Negativity bias: More complaints than praise (common in feedback data)
- Cultural context: Egyptian service expectations and norms
## Applications

### Customer Service Automation
```python
# Real-time sentiment analysis for support tickets
# (model here is a hypothetical classifier returning both a label and a score)
def analyze_feedback(text):
    sentiment, score = model.predict(text)
    priority = "normal"
    if sentiment == 'negative' and score < 3.0:
        # Escalate to a human agent
        priority = "high"
    return sentiment, score, priority
```
### Quality Monitoring
```python
# Track service quality trends over time
# (assumes a 'date' column joined from order metadata; the dataset itself has none)
import pandas as pd

df = pd.read_parquet("data.parquet")
daily_scores = df.groupby('date')['score'].mean()

# Alert on quality drops
if daily_scores.iloc[-1] < 3.0:
    send_alert("Service quality declining")
```
### Training Data Annotation

```python
# Use a trained model to pre-annotate new data
new_feedback = ["المندوب كان ممتاز"]  # "The courier was excellent"
predicted_label = model.predict(new_feedback)
# A human then reviews and corrects the predictions
```
## Common Arabic Tokens

Positive indicators:
- ممتاز (excellent)
- محترم (respectful)
- سريع (fast)
- كويس (good)
- شكرا (thanks)
Negative indicators:
- سيء (bad)
- اتأخر (delayed)
- قليل الذوق (rude, lit. "little taste")
- وحش (bad/ugly)
- مش (not)
- رفض (refused)
Neutral/Context-dependent:
- المندوب (the courier)
- الاوردر (the order)
- الشحنة (the shipment)
- وصل (arrived)
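The indicator lists above are enough for a naive lexicon baseline, handy as a sanity check against a trained classifier (illustrative only; real Egyptian Arabic text needs stemming to catch prefixed forms such as وسريع):

```python
POSITIVE = {"ممتاز", "محترم", "سريع", "كويس", "شكرا"}
NEGATIVE = {"سيء", "اتأخر", "وحش", "مش", "رفض"}

def lexicon_sentiment(text: str) -> str:
    tokens = text.split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    # Ties (including no matches) fall back to the majority class, negative.
    return "positive" if pos > neg else "negative"

print(lexicon_sentiment("المندوب محترم جدا"))  # -> "positive"
print(lexicon_sentiment("المندوب رفض يرد"))    # -> "negative"
```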
## License
This dataset is released under the Apache 2.0 License.
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{arabic_feedback_corpus,
  title={Arabic Feedback Corpus: Logistics Domain Sentiment Analysis},
  author={fr3on},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/fr3on/arabic-feedback-corpus}
}
```
## Acknowledgments
- Source: Customer feedback from logistics services
- Annotation: Sentiment labels and quality scores
- Format: Parquet for efficient storage and loading
## Version History

- v1.0.0 (2026-01-06): Initial release
  - 1,504 entries
  - Binary sentiment labels
  - 1-5 quality scores
  - Conflict detection flags
  - Parquet format
Keywords: Arabic NLP, sentiment analysis, customer feedback, logistics, Egyptian Arabic, colloquial Arabic, quality assessment, conflict detection, delivery services
Dataset Size: 1,504 examples | Format: Parquet | License: Apache 2.0