Improve model card: Add pipeline tag, library name, paper title, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +55 -5
README.md CHANGED
@@ -1,22 +1,26 @@
 ---
-license: mit
+base_model:
+- Qwen/Qwen2.5-7B-Instruct
 language:
 - en
 - zh
-base_model:
-- Qwen/Qwen2.5-7B-Instruct
+license: mit
+pipeline_tag: question-answering
+library_name: transformers
 tags:
 - biology
 - finance
 - text-generation-inference
 ---
 
+# HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches
+
 ## Model Information
 
-We release agent model used in **HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches**.
+We release the agent model used in **HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches**.
 
 <p align="left">
-Useful links: 📝 <a href="https://arxiv.org/abs/2508.08088" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/papers/2508.08088" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/plageon/HierSearch" target="_blank">Github</a>
+Useful links: 📝 <a href="https://arxiv.org/abs/2508.08088" target="_blank">Paper (arXiv)</a> • 🤗 <a href="https://huggingface.co/papers/2508.08088" target="_blank">Paper (Hugging Face)</a> • 🧩 <a href="https://github.com/plageon/HierSearch" target="_blank">Github</a>
 </p>
 
 1. We explore the deep search framework in multi-knowledge-source scenarios and propose a hierarchical agentic paradigm and train with HRL;
@@ -26,3 +30,49 @@
 
 🌹 If you use this model, please ✨star our **[GitHub repository](https://github.com/plageon/HierSearch)** or upvote our **[paper](https://huggingface.co/papers/2508.08088)** to support us. Your star means a lot!
 
+## Sample Usage
+
+You can load and use this model directly with the Hugging Face `transformers` library for basic text generation or question-answering inference. For the full HierSearch framework capabilities, please refer to the [official GitHub repository](https://github.com/plageon/HierSearch).
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+model_id = "zstanjj/HierSearch-Planner-Agent"  # This model represents the Planner Agent.
+# Other agent models include "zstanjj/HierSearch-Local-Agent" or "zstanjj/HierSearch-Web-Agent".
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    torch_dtype=torch.bfloat16,  # Or torch.float16 depending on your hardware
+    device_map="auto"  # Or specify your device, e.g., "cuda:0"
+)
+
+# Example for a question-answering interaction with the Planner Agent
+messages = [
+    {"role": "user", "content": "Explain the concept of Hierarchical Reinforcement Learning as applied in this paper."},
+]
+
+# Apply chat template and tokenize inputs
+text = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+
+model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+# Generate response
+generated_ids = model.generate(
+    model_inputs.input_ids,
+    max_new_tokens=1024,  # Adjust max_new_tokens as needed for detailed answers
+    temperature=0.7,  # Adjust generation parameters for diversity
+    do_sample=True,
+    eos_token_id=tokenizer.eos_token_id,  # Ensure generation stops at EOS token
+    pad_token_id=tokenizer.pad_token_id  # Set pad_token_id for proper generation
+)
+
+# Decode and print the output
+decoded_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+print(decoded_output)
+```