unitary/toxic-bert - LiteRT Optimized

This is a LiteRT (formerly TensorFlow Lite) export of unitary/toxic-bert, optimized for on-device inference on mobile and edge platforms (Android, iOS, embedded).
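
If the .tflite file is not already on disk, it can be fetched from this repository with huggingface_hub. A minimal sketch, assuming this card is published as Bombek1/toxic-bert-litert and using the filename from the Usage section below:

from huggingface_hub import hf_hub_download

# Download the exported model file from the Hub (cached locally after the first call)
model_path = hf_hub_download(
    repo_id="Bombek1/toxic-bert-litert",
    filename="unitary_toxic-bert.tflite",
)
print(model_path)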

Model Details

Attribute      Value
---------      -----
Task           Toxicity Detection
Format         .tflite (Float32)
File Size      417.1 MB
Input Length   128 tokens
Output Dim     6
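
The input length and output dimension listed above can be checked directly against the exported model. A short sketch using the same ai_edge_litert interpreter as in the Usage section (filename as in this card):

from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="unitary_toxic-bert.tflite")
interpreter.allocate_tensors()

# The export should expose two (1, 128) integer inputs (input_ids, attention_mask)
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])

# ...and a single (1, 6) float32 output holding the per-label logits
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])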

Usage

import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import AutoTokenizer

model_path = "unitary_toxic-bert.tflite"
interpreter = Interpreter(model_path=model_path)
interpreter.allocate_tensors()

tokenizer = AutoTokenizer.from_pretrained("unitary/toxic-bert")
labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def predict(text):
    # Tokenize
    inputs = tokenizer(text, max_length=128, padding="max_length", truncation=True, return_tensors="np")

    # Set inputs (assumes input_details[0] is input_ids and input_details[1] is
    # attention_mask; check the tensor names in get_input_details() if the order differs)
    input_details = interpreter.get_input_details()
    interpreter.set_tensor(input_details[0]['index'], inputs['input_ids'].astype(np.int64))
    interpreter.set_tensor(input_details[1]['index'], inputs['attention_mask'].astype(np.int64))

    # Run inference
    interpreter.invoke()

    # Get output (Logits)
    output_details = interpreter.get_output_details()
    logits = interpreter.get_tensor(output_details[0]['index'])[0]

    # toxic-bert is a multi-label classifier, so score each label with an
    # independent sigmoid rather than a softmax over the six outputs
    probs = 1.0 / (1.0 + np.exp(-logits))

    # Get top label
    top_idx = np.argmax(probs)
    return labels[top_idx], probs[top_idx]

label, confidence = predict("This is amazing!")
print(f"Result: {label} ({confidence:.2f})")

Converted by Bombek1 using litert-torch.
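
The exact conversion script is not part of this card. For reference, a comparable export can be sketched with Google's ai-edge-torch converter (assuming that is the tooling behind litert-torch; the wrapper module and fixed 128-token sample shapes below are illustrative):

import torch
import ai_edge_torch
from transformers import AutoModelForSequenceClassification

class ToxicBertWrapper(torch.nn.Module):
    """Return plain logits so the converter sees tensors, not a HF ModelOutput."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        return self.model(input_ids=input_ids, attention_mask=attention_mask).logits

base = AutoModelForSequenceClassification.from_pretrained("unitary/toxic-bert")
sample_inputs = (
    torch.ones((1, 128), dtype=torch.long),  # input_ids
    torch.ones((1, 128), dtype=torch.long),  # attention_mask
)

edge_model = ai_edge_torch.convert(ToxicBertWrapper(base).eval(), sample_inputs)
edge_model.export("unitary_toxic-bert.tflite")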
