---
license: mit
language:
  - en
base_model:
  - distilbert/distilbert-base-uncased
---

# DistilBERT AI-vs-Human Text Classifier

This is a binary text classification model built on top of `distilbert-base-uncased`. It has been fine-tuned to distinguish between AI-generated and human-written text.

- **Base model:** DistilBERT (uncased)
- **Task:** Sequence classification
- **Labels:**
  - `0` → Human-written text
  - `1` → AI-generated text
- **Framework:** PyTorch, Hugging Face Transformers

## Training Details

- Dataset size: 1,367 samples
- Batch size: 16
- Epochs: 10
- Learning rate: 5e-6

## Performance

Best validation metrics:

- Accuracy: 0.5693
- Precision: 0.6162
- Recall: 0.9858
- F1-score: 0.6814

## Usage

Load the model and tokenizer with the Hugging Face Transformers library, provide a text input, and the model will output a label indicating whether the text is AI-generated or human-written.
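A minimal sketch of that flow is shown below. The card does not state the Hub repo id for this checkpoint, so `MODEL_ID` is a placeholder you must replace; the Transformers loading calls are commented out and the label-mapping step is demonstrated with stand-in logits of the shape the model would emit.

```python
import torch

# Assumed placeholder -- substitute the actual Hub repo id of this model.
# MODEL_ID = "your-username/your-model-id"
#
# from transformers import AutoTokenizer, AutoModelForSequenceClassification
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
# inputs = tokenizer("Some text to classify.", return_tensors="pt", truncation=True)
# with torch.no_grad():
#     logits = model(**inputs).logits  # shape (1, 2)

# Label mapping from the card: 0 = human-written, 1 = AI-generated.
LABELS = {0: "Human-written", 1: "AI-generated"}

def predict_label(logits: torch.Tensor) -> str:
    """Map a (1, 2) logits tensor to the card's label names via argmax."""
    return LABELS[int(logits.argmax(dim=-1).item())]

# Stand-in logits, in place of a real model forward pass:
print(predict_label(torch.tensor([[0.2, 1.5]])))   # -> AI-generated
print(predict_label(torch.tensor([[2.0, -1.0]])))  # -> Human-written
```

Once the real checkpoint id is filled in, replace the stand-in tensor with the `logits` from the commented forward pass.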

## License

MIT License