Instructions for using autopilot-ai/EthicalEye with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use autopilot-ai/EthicalEye with Transformers:
How to use autopilot-ai/EthicalEye with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="autopilot-ai/EthicalEye")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("autopilot-ai/EthicalEye")
model = AutoModelForSequenceClassification.from_pretrained("autopilot-ai/EthicalEye")
```

- Notebooks
- Google Colab
- Kaggle
How to generate predictions from this model? (#3, opened by vivasvan100)
```python
inputs = tokenizer("god bless you", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
# tensor([[ 1.9721, -1.8041]], grad_fn=<AddmmBackward0>)
```
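To see how those two raw logits map to class probabilities, here is a minimal pure-Python softmax sketch (stdlib only, mirroring `torch.softmax` over the last dimension); the logit values are copied from the output above:

```python
import math

def softmax(logits):
    """Exponentiate each logit and normalize so the results sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Logits the model printed for "god bless you"
probs = softmax([1.9721, -1.8041])
print([round(p, 4) for p in probs])  # class 0 dominates (~0.98)
```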
Is this how to generate the outputs?
```python
import torch

inputs = tokenizer.encode("god bless you", return_tensors="pt")
outputs = model(inputs)[0]
probabilities = torch.softmax(outputs, dim=1)
predicted_class = torch.argmax(probabilities).item()
toxic_probability = round(probabilities[0][1].item() * 100, 2)

if predicted_class == 1:
    pc = "Toxic"
else:
    pc = "Not toxic"
```