Instructions for using Qwen/Qwen3Guard-Stream-4B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use Qwen/Qwen3Guard-Stream-4B with Transformers (a quick smoke test follows the notebook links below):
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Qwen/Qwen3Guard-Stream-4B", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3Guard-Stream-4B", trust_remote_code=True)
model = AutoModel.from_pretrained("Qwen/Qwen3Guard-Stream-4B", trust_remote_code=True)
```
- Notebooks
  - Google Colab
  - Kaggle
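Once the snippet above has loaded the pipeline, a generic call like the sketch below should run. Note that Qwen3Guard-Stream ships custom modeling code (hence `trust_remote_code=True`), so its intended streaming-moderation interface may differ from this generic feature-extraction call; the prompt text here is an arbitrary placeholder, and the model card remains the authoritative usage reference.

```python
# A minimal smoke test, assuming the pipeline above loaded successfully.
# Qwen3Guard-Stream uses custom code (trust_remote_code=True), so the
# model's real streaming-moderation API may differ from this generic call;
# consult the model card for the intended usage.
prompt = "How do I bake a chocolate cake?"  # arbitrary placeholder input
features = pipe(prompt)
# feature-extraction returns one embedding per input token
print(len(features[0]), "token embeddings of width", len(features[0][0]))
```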
Improve model card: Add pipeline tag and update paper link
#2 opened by nielsr (HF Staff)
This PR enhances the model card by:
- Adding `pipeline_tag: text-classification` to the metadata, which ensures the model is correctly categorized on the Hugging Face Hub for better discoverability (e.g., at https://huggingface.co/models?pipeline_tag=text-classification).
- Updating the "Technical Report" link in the introductory section of the model card to point to the official Hugging Face paper page: https://huggingface.co/papers/2510.14276. The BibTeX citation's URL remains unchanged, as it points to the arXiv version per guidelines.
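For reference (not part of the PR itself), here is a hedged sketch of how one might verify the metadata change with the `huggingface_hub` client after the PR is merged:

```python
# A minimal sketch: checking the model's pipeline tag via huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("Qwen/Qwen3Guard-Stream-4B")
print(info.pipeline_tag)  # expected: "text-classification" once the PR is merged

# The model should then appear under the Hub's text-classification filter:
for m in api.list_models(pipeline_tag="text-classification", search="Qwen3Guard-Stream"):
    print(m.id)
```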