pinned: false
short_description: AI vs Human text classifier
---

# 🤖 AI vs Human Text Classifier (RoBERTa)

This project fine-tunes **RoBERTa** to classify text as either:
- 🧑 Human-Written
- 🤖 AI-Generated

It was developed as a **Capstone Project** to explore the power of transformer-based models in detecting AI-generated content.

---

## 📌 Project Overview
With the rapid rise of LLMs like GPT and other AI text generators, distinguishing human-written from AI-generated text is becoming increasingly important in education, research, and online content authenticity.
This project leverages **RoBERTa**, a transformer-based model, to build a binary text classifier.

---

## 🛠️ Features
- Fine-tuned **RoBERTa-base** model
- Binary classification: `Human (0)` vs `AI (1)`
- Deployed with **Gradio** for easy interaction (see the app sketch after this list)
- Model hosted on the **Hugging Face Model Hub**
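
A minimal sketch of what the Gradio entry point (`app.py`) could look like is shown below. The Hub repo id `your-username/roberta-ai-vs-human` is a placeholder rather than the project's actual model path, and the label names assume the `0 = Human`, `1 = AI` mapping described in this README.

```python
# Minimal Gradio app sketch -- the repo id below is a placeholder, not the real model path.
import gradio as gr
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/roberta-ai-vs-human",  # placeholder Hub repo id
)

LABELS = {"LABEL_0": "🧑 Human-Written", "LABEL_1": "🤖 AI-Generated"}

def predict(text: str) -> dict:
    # Return {label: confidence} so gr.Label can render it as a probability bar.
    result = classifier(text, truncation=True)[0]
    return {LABELS.get(result["label"], result["label"]): result["score"]}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(lines=6, label="Paste text to analyze"),
    outputs=gr.Label(label="Prediction"),
    title="🤖 AI vs Human Text Classifier (RoBERTa)",
)

if __name__ == "__main__":
    demo.launch()
```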

---

## 📂 Dataset
The dataset used in training contains two columns:
- **Text** → the input text sample
- **Generated** → label (`0 = Human`, `1 = AI`)
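
As an illustration of this format, the sketch below shows one way such a two-column file could be loaded and split before tokenization; the file name `dataset.csv` and the 90/10 split are assumptions, not details taken from the project.

```python
# Sketch of loading the two-column dataset (file name and split ratio are assumptions).
import pandas as pd
from datasets import Dataset

df = pd.read_csv("dataset.csv")  # expects columns: Text, Generated
df = df.rename(columns={"Text": "text", "Generated": "label"})

# Wrap in a Hugging Face Dataset and hold out 10% for validation.
ds = Dataset.from_pandas(df).train_test_split(test_size=0.1, seed=42)

print(ds["train"][0])  # e.g. {'text': '...', 'label': 0}  (0 = Human, 1 = AI)
```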

---

## 🚀 Training
The model was fine-tuned on Google Colab using the Hugging Face `transformers` library.

**Steps:**
1. Load dataset (`Text`, `Generated`)
2. Preprocess using Hugging Face `AutoTokenizer`
3. Fine-tune RoBERTa with `Trainer` API
4. Evaluate using Accuracy, Precision, Recall, F1-score
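
The sketch below condenses steps 2–4 into one runnable outline, assuming a `DatasetDict` named `ds` with `text`/`label` columns (as in the loading sketch above); the hyperparameters are illustrative defaults, not the values actually used in the Colab run.

```python
# Fine-tuning sketch with the Trainer API (hyperparameters are illustrative, not the real ones).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def tokenize(batch):
    # Pad/truncate every sample to a fixed length so the default collator can batch them.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = ds.map(tokenize, batched=True)  # ds: DatasetDict with "train"/"test" splits

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}

args = TrainingArguments(
    output_dir="roberta-ai-vs-human",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())  # reports accuracy / precision / recall / F1 on the held-out split
```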

---

## 📊 Results
Validation accuracy achieved: **~99%**

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference