PyTorch · qwen2
nielsr (HF Staff) committed commit 27a235b · verified · 1 Parent(s): 2e1e6fe

Improve model card: Add pipeline tag, library name, paper link, relevant tags, and sample usage

Hi there!

This PR aims to enhance the `ASearcher-Web-QwQ-32B` model card by improving its discoverability and providing more immediate utility to users.

Specifically, it:
- Adds the `pipeline_tag: text-generation` to the metadata, which helps users find your model when browsing for text generation models on the Hugging Face Hub (e.g., via [https://huggingface.co/models?pipeline_tag=text-generation](https://huggingface.co/models?pipeline_tag=text-generation)).
- Adds the `library_name: transformers` to the metadata, ensuring the model's compatibility with the `transformers` library is explicitly stated and enabling the "Use in Transformers" widget.
- Adds relevant `tags` such as `agent`, `search`, and `qwen` for better categorization and searchability on the Hub.
- Prominently displays the official Hugging Face Paper link for easier access to the research.
- Includes a `transformers`-based Python code snippet for basic text generation in the "Quickstart" section, making it easier for users to get started with the model.
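Applied together, the metadata changes listed above yield README front matter along these lines (field order is illustrative; the existing `license`, `datasets`, and `base_model` entries are preserved):

```yaml
base_model:
- Qwen/QwQ-32B
datasets:
- inclusionAI/ASearcher-train-data
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- agent
- search
- qwen
```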

These additions will make it easier for researchers and developers to find, understand, and utilize your impressive work.

Please review and let me know if any adjustments are needed!

Files changed (1):
- README.md (+47 −4)
README.md CHANGED
@@ -1,11 +1,24 @@
 ---
-license: apache-2.0
-datasets:
-- inclusionAI/ASearcher-train-data
 base_model:
 - Qwen/QwQ-32B
+datasets:
+- inclusionAI/ASearcher-train-data
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- agent
+- search
+- qwen
 ---
 
+# ASearcher-Web-QwQ-32B
+
+This model is presented in the paper [Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL](https://huggingface.co/papers/2508.07976).
+
+**Paper**: [https://huggingface.co/papers/2508.07976](https://huggingface.co/papers/2508.07976)
+**Code**: [https://github.com/inclusionAI/ASearcher](https://github.com/inclusionAI/ASearcher)
+
 ### Instruction
 [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/inclusionAI/ASearcher)
 
@@ -37,4 +50,34 @@ We have released multiple models trained with different settings and based on fo
 We also release our full [training data](https://huggingface.co/datasets/inclusionAI/ASearcher-train-data) and [test data](https://huggingface.co/datasets/inclusionAI/ASearcher-test-data), you can easily get them and reproduce our result.
 
 ### Quickstart
-If you want to learn more details, please refer to our GitHub repository: [ASearcher](https://github.com/inclusionAI/ASearcher)
+
+To perform text generation with `ASearcher-Web-QwQ-32B` using the `transformers` library, you can use the following code:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+model_name = "inclusionAI/ASearcher-Web-QwQ-32B"
+model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+
+messages = [
+    {"role": "user", "content": "What is the capital of France?"},
+]
+
+text = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+
+model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+generated_ids = model.generate(
+    model_inputs.input_ids,
+    max_new_tokens=512
+)
+generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+print(generated_text)
+```
+For more details and advanced usage, please refer to our GitHub repository: [ASearcher](https://github.com/inclusionAI/ASearcher)