librarian-bot committed · Commit 8e7543e · verified · 1 parent: f20b415

Librarian Bot: Add language metadata for dataset


This pull request aims to enrich the metadata of your dataset by adding language metadata to the `YAML` block of your dataset card (`README.md`).

How did we find this information?

- The librarian-bot downloaded a sample of rows from your dataset using the [datasets-server](https://huggingface.co/docs/datasets-server/) API
- The librarian-bot used a language detection model to predict the likely language of your dataset. This was done on columns likely to contain text data.
- Per-row predictions are aggregated by language, and a filter removes languages that are only infrequently predicted
- A confidence threshold then removes languages that are not confidently predicted
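The aggregation and filtering steps above can be sketched as follows. This is an illustrative reconstruction, not librarian-bot's actual code; the threshold values `min_share` and `min_confidence` are assumptions for the sketch.

```python
from collections import defaultdict

def aggregate_language_predictions(predictions, min_share=0.2, min_confidence=0.8):
    """Aggregate per-row (language, probability) predictions.

    `predictions` is a list of (lang_code, probability) tuples, one per sampled
    row. Languages predicted on fewer than `min_share` of the rows are dropped
    (frequency filter), as are languages whose mean probability falls below
    `min_confidence` (confidence filter). Thresholds here are illustrative.
    """
    by_lang = defaultdict(list)
    for lang, prob in predictions:
        by_lang[lang].append(prob)
    total = len(predictions)
    return {
        lang: sum(probs) / len(probs)
        for lang, probs in by_lang.items()
        if len(probs) / total >= min_share
        and sum(probs) / len(probs) >= min_confidence
    }

# 98 rows confidently detected as English, 2 weakly as German:
preds = [("en", 0.999)] * 98 + [("de", 0.55)] * 2
print(aggregate_language_predictions(preds))  # only 'en' survives both filters
```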

The following languages were detected with the following mean probabilities:

- English (en): 99.99%


If this PR is merged, the language metadata will be added to your dataset card. This will allow users to filter datasets by language on the [Hub](https://huggingface.co/datasets).
If the language metadata is incorrect, please feel free to close this PR.
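For reference, after merging, the top of the card's `YAML` block would begin like this (matching the diff in this PR):

```yaml
---
language:
- en
license: cc-by-sa-4.0
---
```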

To merge this PR, you can use the merge button below the PR:
![Screenshot 2024-02-06 at 15.27.46.png](https://cdn-uploads.huggingface.co/production/uploads/63d3e0e8ff1384ce6c5dd17d/1PRE3CoDpg_wfThC6U1w0.png)

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bots). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien.

Files changed (1):

1. README.md (+22 −23)

```diff
@@ -1,6 +1,7 @@
 ---
+language:
+- en
 license: cc-by-sa-4.0
-
 dataset_info:
   features:
   - name: context
@@ -26,7 +27,7 @@ dataset_info:
   - name: symbolic_entity_map
     dtype: string
   - name: symbolic_question
-  sequence: string
+    sequence: string
   - name: num_context_entities
     dtype: int32
   - name: num_question_entities
@@ -43,28 +44,26 @@ dataset_info:
     dtype: string
   - name: comments
     sequence: string
-
 configs:
-- config_name: "SpaRP-PS1 (SpaRTUN)"
-  version: 0.1.0
-  default: true
-  data_files:
-  - split: train
-    path: "SpaRP-PS1 (SpaRTUN)/train.json"
-  - split: validation
-    path: "SpaRP-PS1 (SpaRTUN)/val.json"
-  - split: test
-    path: "SpaRP-PS1 (SpaRTUN)/test.json"
-
-- config_name: "SpaRP-PS2 (StepGame)"
-  version: 0.1.0
-  data_files:
-  - split: train
-    path: "SpaRP (StepGame)/PS2/train.json"
-  - split: validation
-    path: "SpaRP (StepGame)/PS2/val.json"
-  - split: test
-    path: "SpaRP (StepGame)/PS2/test.json"
+- config_name: SpaRP-PS1 (SpaRTUN)
+  version: 0.1.0
+  default: true
+  data_files:
+  - split: train
+    path: SpaRP-PS1 (SpaRTUN)/train.json
+  - split: validation
+    path: SpaRP-PS1 (SpaRTUN)/val.json
+  - split: test
+    path: SpaRP-PS1 (SpaRTUN)/test.json
+- config_name: SpaRP-PS2 (StepGame)
+  version: 0.1.0
+  data_files:
+  - split: train
+    path: SpaRP (StepGame)/PS2/train.json
+  - split: validation
+    path: SpaRP (StepGame)/PS2/val.json
+  - split: test
+    path: SpaRP (StepGame)/PS2/test.json
 ---
 
 # Dataset Card for Spatial Reasoning Path (SpaRP)
```
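Once merged, tools can read the `language:` field back out of the card's front matter. A minimal stdlib-only sketch (a real tool would use a YAML parser such as PyYAML, or `huggingface_hub`'s dataset card utilities; the function name here is invented for illustration):

```python
def read_front_matter_language(card_text):
    """Pull the `language:` list out of a dataset card's YAML front matter.

    Grabs the block between the two leading `---` markers and collects the
    `- xx` items listed under `language:`. Not a general YAML parser.
    """
    _, _, rest = card_text.partition("---\n")
    block, _, _ = rest.partition("\n---")
    langs, in_lang = [], False
    for line in block.splitlines():
        if line.startswith("language:"):
            in_lang = True
        elif in_lang and line.lstrip().startswith("- "):
            langs.append(line.lstrip()[2:].strip())
        else:
            in_lang = False
    return langs

card = "---\nlanguage:\n- en\nlicense: cc-by-sa-4.0\n---\n# Dataset Card"
print(read_front_matter_language(card))  # ['en']
```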