---
license: afl-3.0
language:
- en
pretty_name: Hoax Blocker
size_categories:
- n<1K
---
**Author:** Jonathan Harrison
**Publisher:** Hugging Face
**DOI:** [10.57967/hf/6275](https://doi.org/10.57967/hf/6275)
**URL:** [https://huggingface.co/datasets/Raiff1982/hoax_training](https://huggingface.co/datasets/Raiff1982/hoax_training)
---
## 📖 Overview
`hoax_training` is a curated dataset designed to train and evaluate conversational AI models like **Codette** on **misinformation detection, source verification, and ethical guidance**.
The dataset includes:
- **Training set**: mixed single-turn and multi-turn chat examples (JSONL format).
- **Validation set**: focused one-shot Q&A examples for evaluation consistency.
All examples are formatted in OpenAI-style `messages` arrays with roles (`system`, `user`, `assistant`).
---
## 📂 Dataset Structure

- `codette_chat_training.jsonl` – 30 training examples
  - Mix of Q&A and short dialogues
  - Focused on misinformation red flags, source credibility, and ethical guidance
- `codette_validation.jsonl` – 30 validation examples
  - One-shot Q&A pairs for benchmark evaluation
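Because each line of the JSONL files is a standalone JSON object, they can be read with only the standard library. A minimal sketch (the file names match those listed above; the inline example is illustrative, not an actual record from the dataset):

```python
import json

def load_jsonl(path: str) -> list[dict]:
    """Read one chat example per line from an OpenAI-style JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# The same `messages` structure, built inline for illustration:
example = json.loads(
    '{"messages": [{"role": "system", "content": "You are Codette."},'
    ' {"role": "user", "content": "Should I believe this headline?"}]}'
)
print([m["role"] for m in example["messages"]])  # ['system', 'user']
```

With the 🤗 `datasets` library, `load_dataset("json", data_files="codette_chat_training.jsonl")` should yield the same records as a `Dataset` object.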
Each entry looks like:
```json
{
  "messages": [
    {"role": "system", "content": "You are Codette, an ethical AI guided by love and logic."},
    {"role": "user", "content": "This headline says 'recently declassified footage shows aliens'. Should I believe it?"},
    {"role": "assistant", "content": "Be cautious. Phrases like 'recently declassified' and 'footage' are common in hoaxes. Always verify with trusted sources."}
  ]
}
```
---
## 🧠 Intended Use

This dataset is intended for:

- Training ethical AI assistants to detect misinformation
- Teaching models to emphasize source credibility and evidence-based reasoning
- Evaluating language models on misinformation resilience

Not for use in:

- Generating misinformation
- Training models without transparency safeguards
βš–οΈ Ethical Considerations
Bias: Examples are focused on misinformation red flags (e.g., "recently declassified", "experts say"). These heuristics should supplement, not replace, rigorous fact-checking.
Scope: Dataset is illustrative; it does not cover all misinformation patterns.
Responsibility: Developers using this dataset should disclose dataset limitations and avoid overstating model reliability.
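To make the bias caveat concrete, here is a hypothetical sketch of the kind of naive phrase scan the red-flag examples describe. The phrase list and helper name are illustrative, not part of the dataset, and a scan like this is exactly what the point above warns about: it can only supplement real source verification, never replace it.

```python
# Hypothetical red-flag scan; a heuristic aid, NOT a fact-checker.
RED_FLAGS = ["recently declassified", "experts say", "footage shows"]

def flag_phrases(text: str) -> list[str]:
    """Return the red-flag phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

hits = flag_phrases("Recently declassified footage shows aliens, experts say.")
print(hits)  # ['recently declassified', 'experts say', 'footage shows']
```

Note how easily this misfires: a legitimate news report quoting experts trips the same flags, which is why the dataset's assistant responses pair such cues with advice to verify against trusted sources.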
---
## 📜 Citation

If you use this dataset, please cite:

```bibtex
@misc{jonathan_harrison_2025,
  author    = {Jonathan Harrison},
  title     = {hoax_training (Revision c778375)},
  year      = 2025,
  url       = {https://huggingface.co/datasets/Raiff1982/hoax_training},
  doi       = {10.57967/hf/6275},
  publisher = {Hugging Face}
}
```
---
## 🔗 Related Work

- Codette Project – Ethical AI framework
- Nexus Signal Engine – Signal integrity & misinformation guardrails
---
## ✅ License

Released under the Academic Free License 3.0 (`afl-3.0`), as declared in the metadata above: open and freely available for research and educational use.
---
## ✨ Acknowledgments

Created by Jonathan Harrison (Raiff1982) as part of ongoing research into ethical AI systems and misinformation resilience.