---
language:
  - en
license: mit
task_categories:
  - text-classification
task_ids:
  - fact-checking
pretty_name: Project Sentinel – Semantic Filtering Dataset
---

# Project Sentinel – Semantic Filtering Dataset

## Dataset Summary

This dataset is designed to train and evaluate lightweight semantic filtering models for Retrieval-Augmented Generation (RAG) systems deployed in resource-constrained / edge environments.

It supports binary classification of (query, candidate_sentence) pairs into:

- **1 – Supporting Fact**: factually useful evidence required to answer the query
- **0 – Distractor**: topically similar but factually insufficient or misleading text

The dataset was created to address the hallucination-by-distraction failure mode commonly observed in local RAG pipelines.

This dataset was used to fine-tune a 4-layer TinyBERT cross-encoder for the Project Sentinel semantic gatekeeping module.


## Supported Tasks

- Semantic filtering
- Evidence verification
- Hallucination reduction in RAG
- Binary text classification

## Dataset Structure

The dataset consists of sentence-level query–evidence pairs derived from the HotpotQA (Distractor) dataset.

Each sample follows the structure:

| Field | Type | Description |
|---|---|---|
| `query` | string | Natural-language question |
| `sentence` | string | Candidate sentence extracted from context |
| `label` | int | 1 = supporting fact, 0 = distractor |
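A single record in this schema might look like the following; the text values are invented for illustration and are not actual rows from the dataset:

```python
# Illustrative (query, sentence, label) records in the schema described above.
# The question and sentences are invented examples, not taken from the dataset.
supporting = {
    "query": "Which river flows through the capital of France?",
    "sentence": "The Seine flows through Paris, the capital of France.",
    "label": 1,  # supporting fact: directly answers the query
}

distractor = {
    "query": "Which river flows through the capital of France?",
    "sentence": "Paris hosted the Summer Olympics in 1900 and 1924.",
    "label": 0,  # distractor: topically similar, but does not answer the query
}
```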

### Dataset Splits

| Split | Samples |
|---|---|
| Train | 69,101 |
| Validation | 7,006 |

The class distribution is intentionally imbalanced, reflecting real-world retrieval conditions where distractors vastly outnumber true evidence.


## Data Construction

### Source Dataset

- HotpotQA – distractor setting
- Multi-hop QA dataset with explicit annotations for supporting facts and distractor paragraphs

### Positive Samples

- Extracted exclusively from annotated supporting-fact sentences
- Based on (title, sentence index) ground-truth supervision

### Negative Samples

- Non-annotated sentences from gold paragraphs (hard negatives)
- Sentences from distractor paragraphs, intentionally selected for topical similarity

This strategy produces adversarial negatives closely matching real retrieval errors.
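Under the public HotpotQA schema (a `context` list of `[title, sentences]` pairs plus `supporting_facts` given as `[title, sentence_index]` pairs), the labeling strategy above can be sketched as follows. The field names match HotpotQA's released format; the helper itself is illustrative, not the dataset's actual construction script:

```python
def build_pairs(record):
    """Turn one HotpotQA (distractor-setting) record into labeled
    (query, sentence, label) pairs: 1 for annotated supporting facts,
    0 for every other sentence (hard and distractor negatives)."""
    gold = {(title, idx) for title, idx in record["supporting_facts"]}
    pairs = []
    for title, sentences in record["context"]:
        for idx, sentence in enumerate(sentences):
            label = 1 if (title, idx) in gold else 0
            pairs.append({
                "query": record["question"],
                "sentence": sentence.strip(),
                "label": label,
            })
    return pairs
```

Sentences from gold paragraphs that are not in `supporting_facts` become the hard negatives; sentences from the eight distractor paragraphs become topically similar negatives.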


## Preprocessing Pipeline

- Removal of null or malformed samples
- Sentence-level deduplication
- Text normalization and whitespace cleanup
- Controlled negative sampling
- Label integrity verification
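The cleanup, deduplication, and label-integrity steps above can be sketched in a few lines; this is a simplified illustration, not the dataset's actual preprocessing code:

```python
import re

def clean_and_dedupe(pairs):
    """Normalize whitespace, drop null/malformed samples, and deduplicate
    on (query, sentence) — a simplified version of the pipeline steps above."""
    seen, out = set(), []
    for p in pairs:
        query, sentence = p.get("query"), p.get("sentence")
        if not query or not sentence or p.get("label") not in (0, 1):
            continue  # null or malformed sample
        query = re.sub(r"\s+", " ", query).strip()
        sentence = re.sub(r"\s+", " ", sentence).strip()
        key = (query, sentence)
        if key in seen:
            continue  # sentence-level duplicate
        seen.add(key)
        out.append({"query": query, "sentence": sentence, "label": p["label"]})
    return out
```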

## Input Representation

The dataset is intended for cross-encoder architectures.

Model input format: `[CLS] query [SEP] sentence [SEP]`

- Maximum sequence length: 512 tokens
- Designed for TinyBERT / MiniLM-style encoders
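As a plain-string sketch of the pair format (real pipelines use a subword tokenizer, so whitespace splitting here is only a stand-in to illustrate the 512-token budget):

```python
def to_cross_encoder_input(query, sentence, max_tokens=512):
    """Assemble the BERT-style pair input '[CLS] query [SEP] sentence [SEP]'
    and truncate to the model's maximum sequence length."""
    tokens = ["[CLS]", *query.split(), "[SEP]", *sentence.split(), "[SEP]"]
    return tokens[:max_tokens]
```

With a Hugging Face-style tokenizer, the same layout is obtained by passing the query and sentence as a text pair, which inserts the special tokens automatically.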

## Class Imbalance Handling

Because of the natural class imbalance, training is expected to use:

- Weighted cross-entropy loss
- Recall-preserving decision thresholds

This ensures that evidence recall is prioritized over aggressive filtering.
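One common way to set the cross-entropy weights is inverse class frequency; the specific scheme below is an assumption for illustration, not something the dataset prescribes:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class weights w_c = N / (num_classes * n_c), so the
    rarer supporting-fact class contributes more to the loss."""
    counts = Counter(labels)
    total, num_classes = len(labels), len(counts)
    return {c: total / (num_classes * n) for c, n in counts.items()}
```

The resulting weights can be passed to a weighted cross-entropy loss; the decision threshold on the positive-class score is then lowered until evidence recall on the validation split meets the desired floor.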


## Intended Use

- ✔ Training semantic gatekeepers for RAG
- ✔ Edge-deployed LLM pipelines
- ✔ Hallucination suppression
- ✔ Early-exit decision systems


## Limitations

- Derived from Wikipedia-based QA (HotpotQA)
- English-only
- Sentence-level relevance only (not passage-level)

Performance may vary when applied to highly domain-specific corpora.


## Citation

If you use this dataset, please cite:

    @article{salih2026sentinel,
      title={Project Sentinel: Lightweight Semantic Filtering for Edge RAG},
      author={Salih, El Mehdi and Ait El Mouden, Khaoula and Akchouch, Abdelhakim},
      year={2026}
    }

## Contact

El Mehdi Salih
Mohammed V University – Rabat
Email: elmehdi_salih@um5.ac.ma