# ai-text-detector-deberta-v3-large-80-gb
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) for human vs. AI text detection.
## Model description

This model is fine-tuned on a human vs. AI text detection dataset using the base model microsoft/deberta-v3-large. The repository ID is `abhi099k/ai-text-detector-deberta-v3-large-80-gb`.
## Training Configuration
| Parameter | Value |
|---|---|
| Epochs | 4 |
| Batch Size | 8 |
| Learning Rate | 2e-5 |
| Weight Decay | 0.01 |
| GPU Used | 80GB A100 |
| Framework | Hugging Face Transformers + PyTorch |
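For reference, the hyperparameters in the table above map onto a Hugging Face `TrainingArguments` configuration roughly as follows. This is a sketch, not the actual training script: the output directory and any field not listed in the table are assumptions.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the table's hyperparameters.
# output_dir is an assumption; it is not stated in the card.
training_args = TrainingArguments(
    output_dir="ai-text-detector-deberta-v3-large",  # assumed path
    num_train_epochs=4,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    weight_decay=0.01,
)
```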
## Evaluation Results (Test Set)
| Metric | Score |
|---|---|
| Accuracy | 0.993 |
| F1 Score | 0.994 |
### Class-wise Performance
| Class | Precision | Recall | F1-score |
|---|---|---|---|
| Human (0) | 1.000 | 0.986 | 0.993 |
| AI (1) | 0.988 | 1.000 | 0.994 |
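The per-class numbers above are standard precision/recall/F1 values. As a toy illustration (the labels and predictions below are made up, not the model's test set), they can be computed per class like this:

```python
# Toy sketch: per-class precision, recall and F1, the metrics
# reported in the table above. Data here is hypothetical.
def per_class_metrics(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [0, 0, 1, 1, 1, 0]  # 0 = Human, 1 = AI (toy ground truth)
y_pred = [0, 0, 1, 1, 0, 0]  # toy predictions
print(per_class_metrics(y_true, y_pred, 1))  # metrics for the AI class
```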
## Usage Example
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_REPO = "abhi099k/ai-text-detector-deberta-v3-large-80-gb"

tokenizer = AutoTokenizer.from_pretrained(MODEL_REPO)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_REPO)
model.eval()

text = "This text might have been written by an AI."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

probs = torch.nn.functional.softmax(outputs.logits, dim=-1)

labels = ["Human", "AI"]  # 0 = Human, 1 = AI
predicted_label = labels[torch.argmax(probs).item()]
print(predicted_label, probs)
```