Tags: Text Classification · Transformers · Safetensors · English · Chinese · bert
How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="certainstar/Trained-Mul-classification")
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("certainstar/Trained-Mul-classification")
model = AutoModelForSequenceClassification.from_pretrained("certainstar/Trained-Mul-classification")
Model Description
  • This model was obtained by fine-tuning the bert-base-multilingual-cased model for three rounds of training on the English portion of the HC3 dataset.
  • It classifies whether a text was generated by GPT: a label of 0 means the text is not GPT-generated, and a label of 1 means it is.
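The model outputs logits for the two classes described above. A minimal sketch of turning those logits into the label/score pair that the pipeline would return (the function name and label strings here are illustrative, not part of the model's own API):

```python
import math

def interpret_logits(logits):
    """Map a pair of class logits to a label and confidence.

    Index 0 = not GPT-generated, index 1 = GPT-generated,
    following the label convention stated in the model description.
    """
    # Softmax over the two classes to get probabilities.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The predicted label is the class with the highest probability.
    label = probs.index(max(probs))
    return {
        "label": "GPT-generated" if label == 1 else "human-written",
        "score": max(probs),
    }

# Example: a logit pair favoring class 0 maps to "human-written".
print(interpret_logits([2.0, -1.0]))
```

In practice the logits would come from `model(**tokenizer(text, return_tensors="pt")).logits`; the helper above only shows how the 0/1 label convention is applied.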
Downloads last month
5
Safetensors
Model size
0.2B params
Tensor type
F32

Datasets used to train certainstar/Trained-Mul-classification
  • HC3 (English portion)