# AI Text Detector: DeBERTa v3 Large (Fine-tuned on Human vs. AI Text)
This model is fine-tuned on a labeled dataset to detect AI-generated vs. human-written text.
## Model Details

- Base Model: `microsoft/deberta-v3-large`
- Fine-tuned by: @kishan
- Epochs: 4
- Learning Rate: 2e-05
- Batch Size: 31
- GPU: 80 GB A100
- Optimizer: AdamW (Fused)
- Scheduler: Cosine
- Mixed Precision: FP16
- Gradient Checkpointing: Enabled
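
The training code itself is not published; a minimal sketch of how the hyperparameters above would map onto Hugging Face `TrainingArguments` (the output directory and any setting not listed in the card are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the card's listed hyperparameters.
# The output_dir value is an assumption, not from the card.
training_args = TrainingArguments(
    output_dir="./deberta-ai-detector",   # assumed
    num_train_epochs=4,
    learning_rate=2e-5,
    per_device_train_batch_size=31,
    optim="adamw_torch_fused",            # AdamW (Fused)
    lr_scheduler_type="cosine",
    fp16=True,                            # mixed precision
    gradient_checkpointing=True,
)
```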
## Evaluation Results (Test Set)
| Metric | Score |
|---|---|
| Accuracy | 0.998 |
| Human (0) Precision | 0.999 |
| Human (0) Recall | 0.997 |
| AI (1) Precision | 0.997 |
| AI (1) Recall | 0.999 |
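
The per-class F1 scores are not listed in the table, but they follow directly from the reported precision and recall as their harmonic mean; both classes round to 0.998:

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.999, 0.997), 3))  # Human (0) -> 0.998
print(round(f1(0.997, 0.999), 3))  # AI (1)    -> 0.998
```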
## Confusion Matrix
## Example Inference
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("kishankachhadiya/debarta-text-classifier")
model = AutoModelForSequenceClassification.from_pretrained("kishankachhadiya/debarta-text-classifier")
model.eval()

text = "This text was likely written by an AI model."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Convert the two logits to class probabilities: index 0 = Human, index 1 = AI
probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(probs)
```
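
The softmax step above turns the model's two raw logits into class probabilities that sum to 1. The same computation in plain Python (the logit values below are hypothetical, not taken from the model):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits: [Human, AI]
probs = softmax([-2.0, 3.5])
print(probs)  # the two probabilities sum to 1; index 1 (AI) dominates here
```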
## Self-Reported Metrics (custom text dataset, test set)

- Accuracy: 0.998
- Precision: 0.998
- Recall: 0.998
- F1 Score: 0.998
- Best Validation Loss: 0.004
- Best Checkpoint Step: 5000
