# ToolMesh: Synthesizing Complex Tool-Use Trajectories via Graph Sampling and Multi-Agent Simulation

ToolMesh is a large open-source tool-use dataset with reasoning traces, designed to advance the reasoning and tool-calling capabilities of agentic LLMs. It comprises over 160k turns synthesized from over 20k tools. By organizing functions as nodes in a graph and sampling paths over that graph, we construct complex, high-quality user intents. Trajectories are then synthesized through multi-agent simulation, with the user and tools each simulated by an LM. Finally, we use a thinking model to infer answers and filter for correctness in each round of a trajectory, keeping only the correct and valuable turns. Models fine-tuned on ToolMesh achieve promising improvements over baselines on Tau-bench, Tau2-bench, and BFCL-v4 agentic.

* Technical Report - coming soon

* HF Dataset - this page

# Synthesis pipeline

* Data collection and augmentation
    * We collect a wide variety of functions from open-source datasets, including [xlam](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [glaive](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), and [toolace](https://huggingface.co/datasets/Team-ACE/ToolACE). Each function is expected to have defined inputs and outputs; however, the original definitions are often incomplete: for instance, some functions do not explicitly specify the types of their output parameters. To unify them within a common representation space, we use powerful LMs to complete the descriptions and types of all input and output parameters, and then vectorize them with the embedding model [Conan-embedding-v1](https://huggingface.co/TencentBAC/Conan-embedding-v1).

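As a rough sketch of this step: the schema-completion rule and the toy embedding below are hypothetical stand-ins (the real pipeline uses a powerful LM for completion and Conan-embedding-v1 for vectorization), and the field names (`inputs`, `outputs`, `type`) are illustrative assumptions.

```python
import hashlib
import math

def complete_schema(fn):
    """Fill in missing parameter types/descriptions.
    Hypothetical stub: the real pipeline asks an LM; here missing
    types default to "string" and descriptions are templated."""
    for p in fn["inputs"] + fn["outputs"]:
        p.setdefault("type", "string")
        p.setdefault("description", f"{p['name']} parameter of {fn['name']}")
    return fn

def embed(text, dim=8):
    """Deterministic toy embedding standing in for Conan-embedding-v1."""
    digest = hashlib.sha256(text.encode()).digest()
    v = [b / 255.0 for b in digest[:dim]]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]  # unit-normalized vector

# Example: a function whose input parameter lacks a type.
fn = {"name": "get_weather",
      "inputs": [{"name": "city"}],
      "outputs": [{"name": "temperature", "type": "number"}]}
fn = complete_schema(fn)
vec = embed(fn["inputs"][0]["description"])
```

Once every parameter carries a completed description, embedding those descriptions puts all functions in one vector space, which is what the graph-construction step below operates on.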
* Graph construction
    * Based on the unified vector representations of functions, we further construct a function graph to capture their potential relationships. We regard each function as a node and construct edges based on the vector similarity between output and input parameters. Specifically, an edge is established when an output parameter of one function is semantically similar to an input parameter of another function. In this way, we build a function graph where edges represent transitive relationships between functions. In addition, to increase the diversity of edges and the overall topology, we introduce a certain degree of random edge construction.

* Random walk sampling
    * After constructing the graph, function chains are sampled using a random walk of length 5–20. Meanwhile, to avoid oversampling specific functions, we restrict the number of visits to each node.

* Multi-agent trajectory synthesis
    * Following the sampling of function chains, we synthesize the user intention without enforcing task completion in chain order. The trajectory is then generated through a multi-agent simulation in which three models play a user, an assistant, and a function: the user poses questions according to the synthesized intention, the assistant responds, and the function provides simulated tool responses.
    * To construct correct procedural steps across varied scenarios (including correcting errors from earlier turns) and to retain only valid rounds from multi-turn interactions, we leverage a thinking model to generate responses and perform quality filtering for each turn within the synthesized traces.

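The three-role loop might look like the following sketch. The `llm` stub, the role names, and the `<call>` marker are all hypothetical stand-ins; in the real pipeline each role is played by an actual LM conditioned on the synthesized intent and the dialogue history.

```python
def llm(role, history):
    """Stub standing in for an LM call; returns canned replies per role."""
    canned = {
        "user": "Book a flight to Paris, then reserve a hotel there.",
        "assistant": "<call>search_flights(dest='Paris')</call>",
        "tool": "{'flights': ['AF123']}",
    }
    return canned[role]

def simulate(intent, max_turns=3):
    """One trajectory: user asks, assistant responds, and a simulated
    tool answers whenever the assistant emits a call."""
    history = [("system", f"User intent: {intent}")]
    for _ in range(max_turns):
        history.append(("user", llm("user", history)))
        reply = llm("assistant", history)
        history.append(("assistant", reply))
        if "<call>" in reply:  # assistant invoked a tool this turn
            history.append(("tool", llm("tool", history)))
    return history

trajectory = simulate("travel planning")
```

In the real pipeline each turn of such a trajectory would then be re-answered and filtered by the thinking model before being kept.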
* Hybrid training with augmented open-source data
    * In addition to the synthesized trajectories, we also incorporate a large amount of processed open-source data, including XLAM, When2Call, Glaive, ToolACE, Button, and API_Gen. Processing involves quality filtering and response reconstruction. Experimental results demonstrate that both our synthesized data and the post-processed open-source data contribute significantly to performance improvements.
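A toy sketch of the pooling-and-filtering idea: the `judge` here is a hypothetical scoring function standing in for the real quality filter, and the threshold is an assumed value.

```python
import random

def mix_and_filter(sources, judge, min_score=0.7, seed=0):
    """Pool examples from several open-source datasets, keep only those
    the judge scores highly, and shuffle them into one training corpus."""
    rng = random.Random(seed)
    kept = [ex for examples in sources.values()
            for ex in examples if judge(ex) >= min_score]
    rng.shuffle(kept)  # interleave sources so no dataset clusters
    return kept

# Toy judge: prefer examples that actually contain a tool call.
judge = lambda ex: 1.0 if "call" in ex else 0.0
sources = {"xlam": ["call get_time", "hello"],
           "glaive": ["call search_web"]}
corpus = mix_and_filter(sources, judge)
```

Response reconstruction (rewriting assistant replies with a stronger model) would happen on the kept examples before training; it is omitted here for brevity.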