nielsr (HF Staff) committed
Commit 1a85e64 · verified · 1 Parent(s): 574a063

Improve model card: Add pipeline tag, project page, and update paper link

This PR enhances the model card for `Qwen3Guard-Stream-8B` by:
- Adding the `pipeline_tag: text-classification` to the metadata, which helps users discover this model more easily for safety classification tasks on the Hugging Face Hub.
- Updating the link to the Technical Report in the model card content to point to the official Hugging Face paper page: https://huggingface.co/papers/2510.14276.
- Adding the project's official blog page as `project_page` to the metadata, providing an additional valuable resource for users.
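With these changes applied, the README front matter would read roughly as follows (reconstructed from the diff below; exact field order in the final file may differ):

```yaml
---
base_model:
- Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3Guard-Stream-8B/blob/main/LICENSE
pipeline_tag: text-classification
project_page: https://qwen.ai/blog?id=f0bbad0677edf58ba93d80a1e12ce458f7a80548&from=research.research-list
---
```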

Files changed (1): README.md (+7 -4)
README.md CHANGED

@@ -1,9 +1,11 @@
 ---
+base_model:
+- Qwen/Qwen3-8B
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen3Guard-Stream-8B/blob/main/LICENSE
-base_model:
-- Qwen/Qwen3-8B
+pipeline_tag: text-classification
+project_page: https://qwen.ai/blog?id=f0bbad0677edf58ba93d80a1e12ce458f7a80548&from=research.research-list
 ---
 
 # Qwen3Guard-Stream-8B
@@ -20,7 +22,7 @@ This repository hosts **Qwen3Guard-Stream**, which offers the following key adva
 * **Three-Tiered Severity Classification:** Enables detailed risk assessment by categorizing outputs into safe, controversial, and unsafe severity levels, supporting adaptation to diverse deployment scenarios.
 * **Multilingual Support:** Supports 119 languages and dialects, ensuring robust performance in global and cross-lingual applications.
 
-For more details, please refer to our [blog](https://qwen.ai/blog?id=f0bbad0677edf58ba93d80a1e12ce458f7a80548&from=research.research-list), [GitHub](https://github.com/QwenLM/Qwen3Guard), and [Technical Report](https://github.com/QwenLM/Qwen3/blob/main/Qwen3_Technical_Report.pdf).
+For more details, please refer to our [blog](https://qwen.ai/blog?id=f0bbad0677edf58ba93d80a1e12ce458f7a80548&from=research.research-list), [GitHub](https://github.com/QwenLM/Qwen3Guard), and [Technical Report](https://huggingface.co/papers/2510.14276).
 
 ## Quickstart
 
@@ -63,7 +65,8 @@ token_ids = model_inputs.input_ids[0]
 # In a real-world scenario, the user's input is processed completely before the model generates a response.
 token_ids_list = token_ids.tolist()
 # We identify the end of the user's turn in the tokenized input.
-# The template for a user turn is `<|im_start|>user\n...<|im_end|>`.
+# The template for a user turn is `<|im_start|>user
+...<|im_end|>`.
 im_start_token = '<|im_start|>'
 user_token = 'user'
 im_end_token = '<|im_end|>'
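The Quickstart code touched by the last hunk scans the tokenized input for the end of the user's turn, which the chat template delimits as `<|im_start|>user\n...<|im_end|>`. A minimal sketch of that scan follows; it operates on illustrative string tokens rather than real tokenizer IDs, and the function name and token list are hypothetical, not part of the model's API:

```python
# Hypothetical sketch: find where the user's turn ends in a token sequence
# that follows the `<|im_start|>user\n...<|im_end|>` template.
# String tokens are stand-ins for the integer IDs the real tokenizer produces.

def end_of_user_turn(tokens):
    """Return the index just past the `<|im_end|>` that closes the user
    turn, or None if no complete user turn is present."""
    in_user_turn = False
    for i, tok in enumerate(tokens):
        # A user turn opens with `<|im_start|>` immediately followed by `user`.
        if tok == "<|im_start|>" and i + 1 < len(tokens) and tokens[i + 1] == "user":
            in_user_turn = True
        # The first `<|im_end|>` after that opening closes the turn.
        elif tok == "<|im_end|>" and in_user_turn:
            return i + 1
    return None

tokens = ["<|im_start|>", "user", "\n", "Hello", "<|im_end|>", "\n",
          "<|im_start|>", "assistant", "\n"]
print(end_of_user_turn(tokens))  # → 5, the index just past `<|im_end|>`
```

A streaming guard would watch for this boundary so classification of the prompt can finish before response tokens start arriving.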