Tags: Text Classification, Transformers, Safetensors, English, modernbert, security, jailbreak-detection, prompt-injection, llm-safety, Eval Results (legacy), text-embeddings-inference
Instructions for using rootfs/function-call-sentinel with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use rootfs/function-call-sentinel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="rootfs/function-call-sentinel")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rootfs/function-call-sentinel")
model = AutoModelForSequenceClassification.from_pretrained("rootfs/function-call-sentinel")
```

- Notebooks
- Google Colab
- Kaggle
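As a minimal sketch of what happens after the model runs: the classification head returns raw logits, which are converted to probabilities with a softmax and mapped to a label via the model's `id2label` config. The label names below are hypothetical placeholders, not the model's documented outputs:

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class head; the real names come from model.config.id2label.
id2label = {0: "BENIGN", 1: "INJECTION"}
logits = [-1.2, 3.4]  # example raw scores from the classification head
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(id2label[best], round(probs[best], 4))
```

The `pipeline` helper above performs this same post-processing internally, returning a `label`/`score` dict per input.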
- Xet hash: 519d1c91210e7af223a522c7346f52c251f5fa126462493178c112b359b07c3d
- Size of remote file: 598 MB
- SHA256: 15c933701407efc0f1a3bd93053bd3745a2f67b9790bbd571b60b7b1cea0960f
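The published SHA-256 digest can be checked against a downloaded weights file before loading it. A minimal sketch using Python's standard `hashlib` (the filename is an assumption; substitute the actual path of the downloaded file):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file in 1 MiB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published above before loading the weights:
# expected = "15c933701407efc0f1a3bd93053bd3745a2f67b9790bbd571b60b7b1cea0960f"
# assert sha256_of_file("model.safetensors") == expected
```

Streaming in chunks keeps memory use flat even for a ~600 MB file.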
Xet stores large files efficiently inside Git by splitting them into unique content-defined chunks, which deduplicates data and accelerates uploads and downloads.
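The chunk-splitting idea can be illustrated with a toy content-defined chunker: cut wherever a rolling hash of recent bytes matches a bit mask, so boundaries depend on content rather than position. This is an illustration only; Xet's actual chunking algorithm and parameters differ:

```python
def chunk_boundaries(data, mask=0x3FF, window=16):
    """Naive content-defined chunking: record a cut point wherever a toy
    rolling hash of the bytes seen so far matches `mask` (average chunk
    size on random data is roughly mask + 1 bytes). Illustration only;
    not Xet's real chunker."""
    boundaries = []
    h = 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # toy rolling hash
        if i >= window and (h & mask) == mask:
            boundaries.append(i + 1)  # cut after this byte
            h = 0  # restart the hash for the next chunk
    return boundaries
```

Because cut points are determined by the data itself, inserting bytes near the start of a file shifts only nearby boundaries; later chunks realign and hash identically, so unchanged chunks need not be re-uploaded.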