Instructions for using MinhND2301/toxic_classification_model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MinhND2301/toxic_classification_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="MinhND2301/toxic_classification_model")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MinhND2301/toxic_classification_model")
model = AutoModelForSequenceClassification.from_pretrained("MinhND2301/toxic_classification_model")
```

- Notebooks
- Google Colab
- Kaggle
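To sanity-check the pipeline snippet above, you can call it directly on strings. A minimal sketch; the sample texts are illustrative, and the label names returned depend on this model's config:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="MinhND2301/toxic_classification_model")

# The pipeline accepts a single string or a list of strings and returns
# one {"label": ..., "score": ...} dict per input.
texts = ["Have a great day!", "You are an idiot."]
for text, result in zip(texts, pipe(texts)):
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")
```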
Adding `safetensors` variant of this model
#1
opened by SFconvertbot
- model.safetensors +3 -0
model.safetensors ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:522a14531ef0b120862ec558ff880155ae2fafffde1c922bdf9c0c4fc7aa1c9d
+size 540023384
```
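The added file is a Git LFS pointer: the `oid` line is the sha256 of the actual 540 MB payload. Once this PR is merged, the safetensors weights can be verified against that digest and loaded explicitly. A minimal sketch, assuming a recent `transformers`/`huggingface_hub` install; the chunk size is arbitrary:

```python
import hashlib

from huggingface_hub import hf_hub_download
from transformers import AutoModelForSequenceClassification

# Download (or reuse the cached copy of) the safetensors file.
path = hf_hub_download(
    repo_id="MinhND2301/toxic_classification_model",
    filename="model.safetensors",
)

# Hash the file in 1 MiB chunks and compare against the pointer's oid.
expected = "522a14531ef0b120862ec558ff880155ae2fafffde1c922bdf9c0c4fc7aa1c9d"
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == expected, f"checksum mismatch: {h.hexdigest()}"

# use_safetensors=True tells Transformers to prefer the .safetensors
# weights over a pytorch_model.bin if both exist in the repo.
model = AutoModelForSequenceClassification.from_pretrained(
    "MinhND2301/toxic_classification_model", use_safetensors=True
)
```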