flust committed · verified
Commit 8045344 · 1 parent: 173424c

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED

```diff
@@ -48,8 +48,9 @@ Beyond trajectory-level validation, we employ fine-grained turn-level filtering
 * To ensure that the synthesized interactions provide reliable learning signals, we apply a two-stage quality filtering process: trajectory-level filtering that maintains goal alignment and coherence, followed by turn-level filtering that removes erroneous or misaligned steps.
 
 * Hybrid Training with Augmented Open-Source Data
-* In addition to the synthesized trajectories, we also incorporated a large amount of processed open-source data, including [xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [When2Call](https://huggingface.co/datasets/nvidia/When2Call), [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE), [BUTTONInstruct](https://github.com/PKU-Baichuan-MLSystemLab/BUTTON), [APIGen-MT-5k](https://huggingface.co/datasets/Salesforce/APIGen-MT-5k), [Tau-bench training set](https://github.com/sierra-research/tau-bench/tree/main). The processing steps involved quality filtering and response reconstruction. Experimental results demonstrate that both our synthesized data and the post-processed open-source data significantly contribute to performance improvements.
-* It should be noted that our data is segmented based on the messages of the assistant, so the loss is only calculated for the last assistant message for each sample during training.
+* We also incorporate a large amount of processed open-source data, including [xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [When2Call](https://huggingface.co/datasets/nvidia/When2Call), [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE), [BUTTONInstruct](https://github.com/PKU-Baichuan-MLSystemLab/BUTTON), [APIGen-MT-5k](https://huggingface.co/datasets/Salesforce/APIGen-MT-5k), [Tau-bench training set](https://github.com/sierra-research/tau-bench/tree/main). The processing steps involved quality filtering and response reconstruction.
+* All open-source multi-turn datasets are subjected to the same split and quality-filtering procedures as the synthesized data.
+
 
 # Dataset Statistics
```
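The two-stage quality filtering mentioned in the diff (trajectory-level, then turn-level) could be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the predicates `is_goal_aligned` and `turn_is_valid` are hypothetical placeholders for whatever checks the real filters apply.

```python
def filter_trajectories(trajectories, is_goal_aligned, turn_is_valid):
    """Two-stage quality filtering: drop whole trajectories that fail
    goal alignment, then drop individual erroneous turns from survivors."""
    kept = []
    for traj in trajectories:
        # Stage 1: trajectory-level filtering (goal alignment / coherence).
        if not is_goal_aligned(traj):
            continue
        # Stage 2: turn-level filtering (remove erroneous or misaligned steps).
        filtered = {**traj, "turns": [t for t in traj["turns"] if turn_is_valid(t)]}
        kept.append(filtered)
    return kept
```

In a real pipeline, the two predicates would typically be model- or rule-based judges; the point of the two stages is that a trajectory can survive as a whole while still losing individual bad turns.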
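The removed note about loss computation (data segmented on assistant messages, with loss computed only on the last assistant message of each sample) corresponds to an expansion step along these lines. The function name and OpenAI-style message format are assumptions for illustration only.

```python
def segment_by_assistant(messages):
    """Expand one multi-turn conversation into one training sample per
    assistant turn; only the final assistant message of each sample
    is intended to contribute to the loss."""
    samples = []
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant":
            samples.append({
                "messages": messages[: i + 1],  # context plus the target turn
                "loss_on": i,                   # index of the loss-bearing message
            })
    return samples
```

Under this scheme a conversation with k assistant turns yields k samples, each masking everything except its final assistant message.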