Dataset metadata — Modalities: Tabular, Text · Formats: parquet · Languages: English
Commit dd5922d · 1 Parent(s): 5871aee
Rodrigo FERREIRA RODRIGUES committed: Adding data fields for datasets

Files changed (1): README.md (+88 −10)
@@ -584,28 +584,106 @@ As this dataset contains very heterogeneous tasks, almost every dataset has a diff
 
 ### Data Instances
 
-TO DO
+Please refer to the dataset viewer to see what an instance of each dataset looks like.
 
 ### Data Fields
 
-TO DO
+We list the data fields for each dataset below.
+
+- **GeoQuestions1089_coord**:
+  - `question` (`str`): the question to be answered.
+  - `answer` (`List[float]`): the coordinates of the answer; the first element of the list is the latitude and the second the longitude.
+- **GeoQuestions1089_YN**:
+  - `question` (`str`): the question to be answered.
+  - `answer` (`List[bool]`): a list containing the boolean corresponding to the answer.
+- **GeoQuestions1089_regression** and **GeoQuery_regression**:
+  - `question` (`str`): the question to be answered.
+  - `answer` (`List[float]`): a list containing the numbers to be predicted.
+- **GeoQuestions1089_place** and **GeoQuery_place**:
+  - `question` (`str`): the question to be answered.
+  - `answer` (`List[str]`): a list containing the names of the places to be predicted.
+- **Ms-Marco_place**:
+  - `question_id` (`int64`): the id of the question in the original dataset.
+  - `question` (`str`): the question to be answered.
+  - `answer` (`str`): the answer to the question, written by a human.
+  - `passages` (`List[dict]`): a list of dicts, each corresponding to a passage and providing the following information:
+    - `is_selected` (`int64`): 1 if the passage was selected to write the answer, 0 otherwise.
+    - `passage_text` (`str`): the text of the passage.
+    - `url` (`str`): the URL the passage was retrieved from.
+- **GeoSQA**:
+  - `question_id` (`int64`): the id of the question in the original dataset.
+  - `scenario_id` (`int64`): the id of the scenario in the original dataset.
+  - `annotation` (`str`): the description of the image normally used to answer the question.
+  - `scenario` (`str`): the scenario attached to the image, providing context for the question.
+  - `question` (`str`): the question to be answered.
+  - `answer` (`str`): the letter corresponding to the right choice.
+  - `A` (`str`): one of the possible answers to the question.
+  - `B` (`str`): one of the possible answers to the question.
+  - `C` (`str`): one of the possible answers to the question.
+  - `D` (`str`): one of the possible answers to the question.
+- **GKMC**:
+  - `question_id` (`int64`): the id of the question in the original dataset.
+  - `scenario` (`str`): the scenario providing context for the question.
+  - `question` (`str`): the question to be answered.
+  - `answer` (`str`): the letter corresponding to the right choice.
+  - `A` (`str`): one of the possible answers to the question.
+  - `B` (`str`): one of the possible answers to the question.
+  - `C` (`str`): one of the possible answers to the question.
+  - `D` (`str`): one of the possible answers to the question.
+- **SpatialEvalLLM**:
+  - `scenario` (`str`): the scenario providing context for the question.
+  - `question` (`str`): the question to be answered.
+  - `answer` (`List[str]`): a list containing the names of the right objects to predict.
+  - `struct_type` (`str`): the geometric structure of the map.
+  - `size` (`str`): the size of the structure, in number of tiles composing it.
+  - `k_hop` (`str`): the minimum number of reasoning steps required to answer the question.
+  - `seed` (`str`): the seed used to generate the question.
+  - `description_level` (`str`): if **global**, the entirety of the map is described; if **local**, only a portion of the map is described.
+- **SpartUN**:
+  - `question_id` (`str`): the id of the question in the original dataset.
+  - `scenario_id` (`str`): the id of the scenario in the original dataset.
+  - `scenario` (`str`): the scenario providing context for the question.
+  - `question` (`str`): the question to be answered.
+  - `candidates_answers` (`List[str]`): the candidate answers from which the model has to choose.
+  - `answer` (`List[str]`): a list containing the right answers from the candidate list.
+  - `type` (`str`): **YN** for boolean questions, **FR** for Find Relation questions.
+  - `k_hop` (`int64`): the minimum number of reasoning steps required to answer the question.
+- **StepGame**:
+  - `scenario` (`str`): the scenario providing context for the question.
+  - `question` (`str`): the question to be answered.
+  - `candidates_answers` (`List[str]`): the candidate answers from which the model has to choose.
+  - `answer` (`List[str]`): a list containing the right answers from the candidate list.
+  - `k_hop` (`int64`): the minimum number of reasoning steps required to answer the question.
+- **TourismQA**:
+  - `question` (`str`): the question to be answered.
+  - `answers_names` (`List[str]`): a list containing the names of the POIs to be recommended (the expected answer).
+  - `city` (`dict`): a dict containing the following information about the city where the question takes place:
+    - `coord` (`List[float]`): the coordinates of the city; the first element of the list is the latitude and the second the longitude.
+    - `name` (`str`): the name of the city.
+  - `tagged_locations` (`List[str]`): the location names retrieved from the question (not used in our description of the task).
+  - `tagged_locations_lat_long` (`List[float]`): the latitudes and longitudes of the locations retrieved from the question (not used in our description of the task).
+  - `answers_adresses` (`List[str]`): the postal addresses of each answer (not used in our description of the task).
+  - `answers_reviews` (`List[List[str]]`): for each POI, a list of reviews (not used in our description of the task).
+  - `answers_sum_reviews` (`List[str]`): a summarization of the reviews for each POI, retrieved from ??? work (not used in our description of the task).
+  - `answers_lat_longs` (`List[str]`): the latitudes and longitudes of the answers (not used in our description of the task).
+
 
 ### Data Splits
 
 | Category | Tasks | Datasets | Train | Dev | Test |
 | --------------- | ---------------------- | ---------------------------------------- | --------------------- | ------------------- | ------------------------- |
-| **Knowledge** | Coordinates Prediction | GeoQuestions1089 | – | – | 84 |
-| | Yes/No questions | GeoQuestions1089 | – | – | 181 |
-| | Regression | GeoQuestions1089<br>GeoQuery | –<br>180 | –<br>17 | 234<br>88 |
-| | Place prediction | GeoQuestions1089<br>GeoQuery<br>MS-Marco | –<br>348<br>23 513 | –<br>32<br>4 149 | 455<br>184<br>2 907 |
+| **Knowledge** | Coordinates Prediction | GeoQuestions1089_coord | – | – | 87 |
+| | Yes/No questions | GeoQuestions1089_YN | – | – | 181 |
+| | Regression | GeoQuestions1089_regression<br>GeoQuery_regression | –<br>182 | –<br>17 | 231<br>89 |
+| | Place prediction | GeoQuestions1089_place<br>GeoQuery_place<br>MS-Marco_place | –<br>346<br>23 513 | –<br>33<br>4 149 | 455<br>184<br>2 907 |
 | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** |
-| **Reasoning** | Scenario Complex QA | GeoSQA<br>GKMC | <br>– | <br>– | 4 110<br>1 600 |
+| **Reasoning** | Scenario Complex QA | GeoSQA<br>GKMC | 2 644<br>– | 628<br>– | 838<br>1 600 |
 | | Spatial Reasoning | SpatialEvalLLM<br>SpartUN<br>StepGame | –<br>37 095<br>50 000 | –<br>5 600<br>5 000 | 1 400<br>5 551<br>100 000 |
 | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** |
-| **Application** | POI Recommendation | TourismQA<br>NY-QA | 19 960<br>– | 2 119<br>– | 2 173<br>1 347 |
-| | Path Finding | bAbI (task 19)<br>GridRoute<br>PPNL | 9 000<br><br>69 472 | 1 000<br><br>8 684 | 1 000<br>300<br>74 484 |
+| **Application** | POI Recommendation | TourismQA<br>NY-QA | 19 762<br>– | 2 109<br>– | 2 153<br>1 347 |
+| | Path Finding | GridRoute<br>PPNL_single<br>PPNL_multi | <br>16 032<br>53 440 | <br>2 004<br>6 680 | 300<br>19 044<br>55 440 |
 | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** | **──────────** |
-| **Total** | – | – | **236 290** | **29 942** | **176 628** |
+| **Total** | – | – | **203 014** | **26 220** | **191 807** |
 
 
 
689