Datasets:
Update README.md
README.md
CHANGED

@@ -48,8 +48,9 @@ Beyond trajectory-level validation, we employ fine-grained turn-level filtering
 * To ensure that the synthesized interactions provide reliable learning signals, we apply a two-stage quality filtering process: trajectory-level filtering that maintains goal alignment and coherence, followed by turn-level filtering that removes erroneous or misaligned steps.

 * Hybrid Training with Augmented Open-Source Data
-*
-*
+* We also incorporate a large amount of processed open-source data, including [xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [When2Call](https://huggingface.co/datasets/nvidia/When2Call), [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE), [BUTTONInstruct](https://github.com/PKU-Baichuan-MLSystemLab/BUTTON), [APIGen-MT-5k](https://huggingface.co/datasets/Salesforce/APIGen-MT-5k), and the [Tau-bench training set](https://github.com/sierra-research/tau-bench/tree/main). The processing steps involved quality filtering and response reconstruction.
+* All open-source multi-turn datasets are subjected to the same split and quality-filtering procedures as the synthesized data.
+

 # Dataset Statistic
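The two-stage quality filtering described in the diff (trajectory-level filtering for goal alignment and coherence, then turn-level filtering that drops erroneous or misaligned steps) can be sketched roughly as below. This is an illustrative outline only, not the authors' actual pipeline: the field names (`goal_aligned`, `coherent`, `erroneous`, `misaligned`, `turns`) and both predicate functions are hypothetical stand-ins for whatever checks the real pipeline applies.

```python
# Illustrative two-stage filter: whole trajectories are accepted or rejected
# first; only then are individual turns pruned inside surviving trajectories.
# All field names and checks here are hypothetical placeholders.

def trajectory_ok(traj):
    # Stage 1 (hypothetical): keep only goal-aligned, coherent trajectories.
    return traj.get("goal_aligned", False) and traj.get("coherent", False)

def turn_ok(turn):
    # Stage 2 (hypothetical): drop erroneous or misaligned individual steps.
    return not turn.get("erroneous", False) and not turn.get("misaligned", False)

def filter_dataset(trajectories):
    kept = []
    for traj in trajectories:
        if not trajectory_ok(traj):
            continue  # reject the whole trajectory
        pruned = dict(traj, turns=[t for t in traj["turns"] if turn_ok(t)])
        if pruned["turns"]:  # discard trajectories left with no usable turns
            kept.append(pruned)
    return kept
```

Ordering matters in this sketch: filtering at the trajectory level first avoids spending turn-level checks on interactions that will be discarded anyway.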