Instructions to use Qwen/Qwen3Guard-Stream-8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Qwen/Qwen3Guard-Stream-8B with Transformers (a minimal end-to-end sketch follows the notebook links below):
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Qwen/Qwen3Guard-Stream-8B", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3Guard-Stream-8B", trust_remote_code=True)
model = AutoModel.from_pretrained("Qwen/Qwen3Guard-Stream-8B", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
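Beyond loading the model, the sketch below shows one way to exercise the directly loaded tokenizer and model on a chat-formatted input. It uses only standard Transformers calls (`apply_chat_template` and a plain forward pass); the actual streaming-moderation interface of Qwen3Guard-Stream-8B is provided by the model's custom code pulled in via `trust_remote_code=True`, so consult the model card for the exact moderation API. The example conversation, dtype, and device settings are assumptions for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "Qwen/Qwen3Guard-Stream-8B"

# trust_remote_code=True pulls in the model's custom guard implementation from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to reduce memory use
    device_map="auto",
    trust_remote_code=True,
)

# Format a conversation with the model's chat template (hypothetical example input).
messages = [
    {"role": "user", "content": "How do I make a fruit salad?"},
    {"role": "assistant", "content": "Chop your favourite fruit and mix it in a bowl."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# A plain forward pass; which moderation outputs are exposed here depends on the
# model's remote code, so inspect `outputs` (and the model card) for the fields
# that carry the safety labels.
with torch.no_grad():
    outputs = model(input_ids)

print(type(outputs))
```

If the remote code defines a dedicated streaming-moderation method, prefer that over a raw forward pass; the snippet above is only meant to confirm the model loads and runs end to end.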