Text Classification
Transformers
Safetensors
English
roberta
ai-text-detection
voight-kampff
pan-2025
text-embeddings-inference
Instructions for using protagonist/roberta-eloquent with libraries and notebooks.
- Libraries
- Transformers
How to use protagonist/roberta-eloquent with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="protagonist/roberta-eloquent")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("protagonist/roberta-eloquent")
model = AutoModelForSequenceClassification.from_pretrained("protagonist/roberta-eloquent")
```

- Notebooks
- Google Colab
- Kaggle
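As a quick sanity check of the Transformers snippet above, the pipeline can be run on a sample passage. This is a minimal sketch: the label names and score semantics are assumptions, not documented on this card.

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="protagonist/roberta-eloquent")

# The returned label set (e.g. "human" vs. "AI") is an assumption here;
# inspect model.config.id2label to see the actual labels.
result = pipe("Large language models can produce fluent prose on demand.")
print(result)  # e.g. [{'label': '...', 'score': 0.98}]
```

For long inputs, pass `truncation=True` in the pipeline call so texts longer than RoBERTa's 512-token window are clipped rather than raising an error.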
# eloquent26 generations

5 generators × 5 strategies × 66 topics = 1,650+ generations from the ELOQUENT 2026 Voight-Kampff factor-isolation experiment.
## Files

- `generations.tar.gz` – full `out/` tree:
  - `{generator}/{strategy}/{topic_id}.txt` – the texts
  - `_references/{set}/*.txt` – human controls
  - `scores/{detector}/{generator}/{strategy}/{topic_id}.json` – per-text scores
  - `manifest.jsonl`, `analysis/*.csv`
- `scores.parquet` – 9.6k-row aggregated detector scores (for quick inspection; a loading sketch follows this list)
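For quick inspection without unpacking the tarball, `scores.parquet` can be loaded directly. A sketch, assuming pandas; the column names used for grouping (`detector`, `strategy`, `score`) are assumptions about the schema, so check the actual columns first.

```python
import pandas as pd

df = pd.read_parquet("scores.parquet")
print(df.shape)            # expected on the order of 9.6k rows
print(df.columns.tolist()) # verify the schema before grouping

# Mean detector score per generation strategy (assumed column names).
print(df.groupby(["detector", "strategy"])["score"].mean())
```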
## Strategies

`vanilla`, `imperfection`, `roundtrip`, `roundtrip_imperf`, `lost_in_translation`.

Round-trip languages: Hindi (closed-frontier models), Chinese (Qwen family).
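To reproduce per-text scores with the model above, one can walk the `out/{generator}/{strategy}/{topic_id}.txt` layout described under Files. A sketch only: the detector directory name and JSON fields below are assumptions, not the experiment's actual scoring code.

```python
import json
from pathlib import Path

from transformers import pipeline

pipe = pipeline("text-classification", model="protagonist/roberta-eloquent")

out_dir = Path("out")
for txt_path in out_dir.glob("*/*/*.txt"):
    generator, strategy = txt_path.parts[-3], txt_path.parts[-2]
    if generator.startswith("_"):
        continue  # skip the _references human controls

    text = txt_path.read_text(encoding="utf-8")
    pred = pipe(text, truncation=True)[0]

    # Mirror the card's scores/{detector}/{generator}/{strategy}/ layout;
    # the detector name "roberta-eloquent" here is an assumption.
    score_dir = Path("scores/roberta-eloquent") / generator / strategy
    score_dir.mkdir(parents=True, exist_ok=True)
    (score_dir / f"{txt_path.stem}.json").write_text(
        json.dumps({"label": pred["label"], "score": pred["score"]})
    )
```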
## License

For research use. Contact the author for any commercial use.

Repo: `protagonist/roberta-eloquent`.