---
language: en
license: mit
library: pytorch
datasets:
- custom
tags:
- text-classification
- ai-text-detection
- roberta
widget:
- text: "The impact of artificial intelligence on modern society has been profound and far-reaching, transforming industries and reshaping how we live and work."
- text: "The quantum mechanics principle demonstrates that particles can exist in multiple states simultaneously until observed, a phenomenon known as superposition."
---

# AI vs Human Text Detector

This model classifies a passage of text as either human-written or AI-generated.

## Model description

This detector was built by fine-tuning RoBERTa-base on a dataset containing both human-written and AI-generated text samples.
Data-augmentation techniques were applied during training to improve robustness.
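The card does not specify which augmentation techniques were used. As an illustration only, one common text-augmentation approach is random word deletion; the sketch below is a hypothetical example (the drop probability `p` and the function name are assumptions, not part of this model's training code):

```python
import random

def random_word_deletion(text: str, p: float = 0.1, seed: int = 0) -> str:
    """Illustrative augmentation: drop each word with probability p,
    always keeping at least one word so the sample stays non-empty."""
    rng = random.Random(seed)
    words = text.split()
    kept = [w for w in words if rng.random() >= p]
    return " ".join(kept) if kept else rng.choice(words)

augmented = random_word_deletion(
    "The quick brown fox jumps over the lazy dog", p=0.2
)
```

Augmented copies like this are typically mixed with the original samples so the classifier sees noisier variants of the same label.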

## Performance

The model achieves the following performance on the validation set:
- Accuracy: 0.9999
- F1-Score (Human): 1.0000
- F1-Score (AI): 0.9999

## How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "Abuzaid01/Ai_Human_Text_Detector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Prepare text for classification
text = "Your text to classify goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512, padding=True)

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Get the predicted class and probabilities
probabilities = torch.nn.functional.softmax(logits, dim=1)
predicted_class_idx = torch.argmax(probabilities, dim=1).item()
confidence = probabilities[0][predicted_class_idx].item()

# Map class index to label
labels = ["Human-written", "AI-generated"]
predicted_label = labels[predicted_class_idx]

print(f"Prediction: {predicted_label}")
print(f"Confidence: {confidence:.4f}")
```
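The post-processing step above (softmax, argmax, label lookup) can be reproduced without torch. The sketch below shows how raw logits become a label and confidence score; the logit values are made up for illustration and the label order follows the mapping used in the snippet above:

```python
import math

def classify(logits, labels=("Human-written", "AI-generated")):
    """Map a pair of raw logits to (label, confidence) via softmax."""
    # Numerically stable softmax: shift by the max logit first
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # argmax picks the most likely class
    idx = max(range(len(probs)), key=probs.__getitem__)
    return labels[idx], probs[idx]

label, confidence = classify([-2.1, 3.4])  # hypothetical logits
```

Because softmax is monotonic, the predicted label depends only on which logit is larger; the softmax is needed only to report a calibrated-looking confidence value.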