Improve dataset card: Add task categories, language, size categories, tags, and sample usage

#2 · opened by nielsr (HF Staff)
Files changed (1): README.md (+42 −3)

README.md:
@@ -1,4 +1,18 @@
 ---
+license: apache-2.0
+task_categories:
+- question-answering
+- text-generation
+- reinforcement-learning
+language:
+- en
+size_categories:
+- 10K<n<100K
+tags:
+- multi-agent
+- tool-use
+- llm-agents
+- reinforcement-learning-from-feedback
 dataset_info:
 - config_name: matpo_train_musique
   features:
@@ -490,6 +504,12 @@ dataset_info:
         struct:
         - name: ground_truth
           dtype: string
+  - name: search_and_browse
+    struct:
+    - name: create_kwargs
+      struct:
+      - name: ground_truth
+        dtype: string
   splits:
   - name: train
     num_bytes: 8858574
@@ -529,7 +549,6 @@ configs:
   data_files:
   - split: train
     path: single_agent_val_webwalkerqa_repeat_2/train-*
-license: apache-2.0
 ---
 
 <div align="center">
@@ -665,7 +684,27 @@ User Query → Planner Agent → Subtask 1 → Worker Agent → Result 1
 <em>Visualization of MATPO implementation.</em>
 </p>
 
+## Sample Usage
 
+To load the dataset, you can use the `load_dataset` function from the 🤗 Datasets library:
+
+```python
+from datasets import load_dataset
+
+# Load the 'matpo_train_musique' configuration
+dataset = load_dataset("veggiebird/MATPO-data", "matpo_train_musique")
+
+# Access the training split
+train_split = dataset["train"]
+
+# Print an example
+print(train_split[0])
+
+# To load other configurations, replace "matpo_train_musique" with
+# "matpo_val_frames_repeat_2", "matpo_val_gaia_repeat_8", etc.
+# For example:
+# dataset_gaia = load_dataset("veggiebird/MATPO-data", "matpo_val_gaia_repeat_8")
+```
 
 ## Quick Start
 
@@ -849,7 +888,7 @@ For Qwen3-14B-base, we recommend:
 MATPO extends GRPO with principled credit assignment:
 1. The planner's final answer determines the accuracy reward
 2. This reward is normalized across all rollouts in a group
-3. Gradients flow proportionally to both planner and worker actions
+3. Gradients flow proportionally to both planner actions and worker actions
 4. Worker agents receive the same advantage value as their parent planner rollout
 
 See our paper for more details.
@@ -888,4 +927,4 @@ For production systems with no data leakage concerns, this is optional.
 
 <p align="center">
   <strong>Star ⭐ this repository if you find it helpful!</strong>
-</p>
+</p>
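The credit-assignment list touched in the diff (group-normalized reward, workers inheriting their parent planner's advantage) can be sketched in a few lines. This is a minimal illustration of the described scheme, not code from the MATPO repository — the function names and data layout here are hypothetical:

```python
from statistics import mean, pstdev

def group_advantages(planner_rewards, eps=1e-8):
    # GRPO-style normalization: each planner rollout's accuracy reward
    # is standardized across the group of rollouts.
    mu = mean(planner_rewards)
    sigma = pstdev(planner_rewards)
    return [(r - mu) / (sigma + eps) for r in planner_rewards]

def assign_credit(planner_rewards, workers_per_rollout):
    # Each worker agent receives the same advantage value as its
    # parent planner rollout (step 4 of the list in the diff).
    advantages = group_advantages(planner_rewards)
    return [
        {"planner": adv, "workers": [adv] * n_workers}
        for adv, n_workers in zip(advantages, workers_per_rollout)
    ]

# A group of 4 planner rollouts; the final answer of each determines
# its accuracy reward (hypothetical values for illustration).
rewards = [1.0, 0.0, 0.0, 1.0]
credit = assign_credit(rewards, workers_per_rollout=[2, 1, 3, 2])
print(credit[0])
```

Under this sketch, gradients for a worker's actions are scaled by the same (normalized) advantage as its planner's, which is how the reward signal from the planner's final answer flows to both levels.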