Add text-retrieval task category and improve documentation

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +64 -6
README.md CHANGED
@@ -1,17 +1,75 @@
  ---
  tags:
  - agent
  ---
  This dataset hosts the [AgentIR-4B](https://huggingface.co/Tevatron/AgentIR-4B) indexes.
- - Paper: https://arxiv.org/abs/2603.04384
- - Code: https://github.com/texttron/AgentIR/tree/main
- - Model: https://huggingface.co/Tevatron/AgentIR-4B
- - Project Page: https://texttron.github.io/AgentIR/

- For usage details of this index, please see https://github.com/wu-ming233/AgentIR-dev/tree/main/evaluation.

- ## Citation
  ```
  @article{chen2026AgentIR,
  title={AgentIR: Reasoning-Aware Retrieval for Deep Research Agents},
  author={Zijian Chen and Xueguang Ma and Shengyao Zhuang and Jimmy Lin and Akari Asai and Victor Zhong},
 
  ---
+ task_categories:
+ - text-retrieval
  tags:
  - agent
  ---
+
  This dataset hosts the [AgentIR-4B](https://huggingface.co/Tevatron/AgentIR-4B) indexes.

+ - **Paper:** [AgentIR: Reasoning-Aware Retrieval for Deep Research Agents](https://huggingface.co/papers/2603.04384)
+ - **Code:** [https://github.com/texttron/AgentIR](https://github.com/texttron/AgentIR)
+ - **Model:** [Tevatron/AgentIR-4B](https://huggingface.co/Tevatron/AgentIR-4B)
+ - **Project Page:** [https://texttron.github.io/AgentIR/](https://texttron.github.io/AgentIR/)

+ For usage details of this index, please see [https://github.com/wu-ming233/AgentIR-dev/tree/main/evaluation](https://github.com/wu-ming233/AgentIR-dev/tree/main/evaluation).
+
+ ## Quick Usage
+
+ Below is example code from the official repository for embedding queries (together with their reasoning) and documents with the AgentIR-4B model:
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ MODEL = "Tevatron/AgentIR-4B"
+ PREFIX = (
+     "Instruct: Given a user's reasoning followed by a web search query, "
+     "retrieve relevant passages that answer the query while incorporating "
+     "the user's reasoning\nQuery:"
+ )
+ QUERY = """Reasoning: Search results show some relevant info about music and Grammy. We need a composer who won a Grammy, could be from Sweden/Finland/Austria (joined 1995)? The person is known for a certain creation that is a subgenre known for euphoric finale. Which subgenre has a euphoric finale? "Progressive house"? There's a structure: Build-up, breakdown, climax, drop, euphoria. They started creating this piece in a small studio's backroom.
+
+ Query: "backroom" "studio" "early 2010s" "euphoric"
+ """
+ DOCS = [
+     "35+ Studios With Upcoming Games to Watch: Turtle Rock Studios\n\n"
+     "Making its name on the classic Left 4 Dead series of games, Turtle Rock Studios is working on an all-new co-op game called Back 4 Blood that sees you fighting through a zombie apocalypse. Sound familiar? Announced in early 2019 and being published",
+     "name: Otto Knows\n"
+     "image_upright: 1.25\n"
+     "birth_name: Otto Jettman\n"
+     "birth_date: 6 05 1989\n"
+     "birth_place: Stockholm, Sweden\n"
+     "genre: Electro house, house, progressive house\n"
+     "occupation: DJ, music producer, remixer\n\n"
+     "Otto Jettman (born 6 May 1989), better known by his stage name Otto Knows is a Swedish DJ, producer and remixer who has had a number of hits in Sweden, Belgium and the Netherlands",
+ ]
+
+ def embed(texts, model, tokenizer, device, is_query=False):
+     batch = tokenizer(
+         [PREFIX + t if is_query else t for t in texts],
+         padding=True,
+         truncation=True,
+         max_length=8192,
+         return_tensors="pt",
+     )
+     batch = {k: v.to(device) for k, v in batch.items()}
+     with torch.no_grad():
+         hidden = model(**batch, return_dict=True).last_hidden_state
+     reps = hidden[:, -1]  # last-token pooling; left padding keeps this a real token
+     return torch.nn.functional.normalize(reps, p=2, dim=-1).cpu()
+
+ model = AutoModel.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")
+ device = model.device
+ tokenizer = AutoTokenizer.from_pretrained(MODEL, padding_side="left")
+
+ q = embed([QUERY], model, tokenizer, device, is_query=True)[0]
+ docs = embed(DOCS, model, tokenizer, device)
+ for doc, vec in zip(DOCS, docs):
+     print(f"{torch.dot(q, vec).item():.6f} {doc}")
  ```
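The scoring step in the example above is model-independent: last-token pooling followed by L2 normalization reduces the dot product to cosine similarity. A minimal dependency-free sketch (the two 4-dimensional vectors stand in for `hidden[:, -1]` outputs and are invented purely for illustration):

```python
import math

# Invented last-token vectors standing in for model hidden states.
rep_q = [1.0, 2.0, 2.0, 0.0]   # e.g. hidden[0, -1]; L2 norm = 3
rep_d = [0.0, 3.0, 4.0, 0.0]   # e.g. hidden[1, -1]; L2 norm = 5

def l2_normalize(v):
    """Scale a vector to unit L2 norm, as normalize(..., p=2) does."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q = l2_normalize(rep_q)   # (1/3, 2/3, 2/3, 0)
d = l2_normalize(rep_d)   # (0, 0.6, 0.8, 0)

# After normalization, the dot product equals cosine similarity.
score = sum(a * b for a, b in zip(q, d))
print(round(score, 4))  # 0.9333
```

This is also why the tokenizer is created with `padding_side="left"`: with left padding, position `-1` of every sequence in the batch is a real token rather than padding, so pooling `hidden[:, -1]` is safe.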
+
+ ## Citation
+ ```bibtex
  @article{chen2026AgentIR,
  title={AgentIR: Reasoning-Aware Retrieval for Deep Research Agents},
  author={Zijian Chen and Xueguang Ma and Shengyao Zhuang and Jimmy Lin and Akari Asai and Victor Zhong},