BeeGass committed edb5c72 (verified, parent: d941769)

Upload folder using huggingface_hub
data/s5_data/README.md ADDED
---
pretty_name: Permutation Composition Dataset (S5)
size_categories:
- 100K<n<1M
tags:
- mathematics
- group-theory
- permutations
- sequence-to-sequence
- sequence-modeling
- benchmark
- generated
task_categories:
- text-generation
annotations_creators:
- no-annotation
language_creators:
- other
language:
- en
license:
- mit
---
28
+
29
+ # Permutation Composition Dataset for S5
30
+
31
+ This dataset contains sequences of permutation IDs and their compositions, designed for benchmarking sequence-to-sequence models on group theory tasks.
32
+
33
+ ## Dataset Structure
34
+
35
+ The dataset is split into `train` and `test` sets. Each sample in the dataset has the following features:
36
+
37
+ - `input_sequence`: A space-separated string of integer IDs representing the sequence of permutations to be composed.
38
+ - `target`: An integer ID representing the composition of the `input_sequence` permutations.
39
+
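Concretely, a single record looks like the following (the IDs here are made up for illustration; real IDs range over 0-119 for S5):

```python
# A hypothetical record; both fields are stored as strings.
sample = {
    "input_sequence": "17 42 3",  # permutations p_1, p_2, p_3 by ID
    "target": "56",               # ID of the composition p_3 o p_2 o p_1
}

input_ids = [int(x) for x in sample["input_sequence"].split()]
print(input_ids)  # [17, 42, 3]
```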
## Group Details

- **Group Name**: S5
- **Group Type**: Symmetric Group
- **Degree**: 5 (permutations act on 5 elements)
- **Order**: 120 (total number of elements in the group)

## Data Generation

This dataset was generated using the `s5-data-gen` script. The generation process involves:

1. Generating all unique permutations of the specified group.
2. Mapping each unique permutation to a unique integer ID.
3. Randomly sampling sequences of these permutation IDs.
4. Composing the permutations in each sequence from right to left: `p_n o ... o p_2 o p_1`.
5. Mapping the resulting composed permutation to its integer ID, which becomes the target.

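The composition step above can be sketched in plain Python (a minimal illustration, not the actual `s5-data-gen` code; the helper names and example permutations are ours):

```python
def compose(p, q):
    """Compose array-form permutations: (p o q)[i] = p[q[i]] (apply q first, then p)."""
    return [p[q[i]] for i in range(len(q))]

def compose_sequence(perms):
    """Fold a sequence p_1, ..., p_n into p_n o ... o p_2 o p_1."""
    result = list(range(len(perms[0])))  # start from the identity permutation
    for p in perms:
        result = compose(p, result)  # left-compose each new permutation
    return result

# Example: swap(0,1) applied first, then swap(1,2), sends 0 -> 1 -> 2.
p1 = [1, 0, 2, 3, 4]
p2 = [0, 2, 1, 3, 4]
print(compose_sequence([p1, p2]))  # [2, 0, 1, 3, 4]
```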
### Generation Parameters

- **Total Samples**: 100000
- **Minimum Sequence Length**: 3
- **Maximum Sequence Length**: 512
- **Test Split Size**: 0.2

## Dataset Statistics

- **Train Samples**: 80000
- **Test Samples**: 20000

## Permutation Mapping

The mapping from integer IDs to their corresponding permutation array forms is provided in the `metadata.json` file alongside the dataset. This file is required for interpreting the `input_sequence` and `target` IDs.

The first five entries of `metadata.json` (the file contains all 120 permutations of S5):

```json
{
  "0": "[0, 1, 4, 2, 3]",
  "1": "[1, 2, 0, 3, 4]",
  "2": "[2, 3, 1, 4, 0]",
  "3": "[3, 4, 2, 0, 1]",
  "4": "[4, 1, 0, 2, 3]"
}
```

## Usage

You can load this dataset with the Hugging Face `datasets` library:

```python
import ast
import json

from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Load the dataset
dataset = load_dataset("BeeGass/permutation-groups")

# Load the permutation mapping (note repo_type="dataset" for dataset repos)
metadata_path = hf_hub_download(
    repo_id="BeeGass/permutation-groups",
    filename="metadata.json",
    repo_type="dataset",
)
with open(metadata_path, "r") as f:
    id_to_perm_map = json.load(f)

# Example: decode a sample
first_train_sample = dataset["train"][0]
input_ids = [int(x) for x in first_train_sample["input_sequence"].split(" ")]
target_id = int(first_train_sample["target"])

print(f"Input sequence IDs: {input_ids}")
print(f"Target ID: {target_id}")

# The mapping stores array forms as strings, e.g. "[0, 1, 4, 2, 3]".
# Parse them safely with ast.literal_eval (avoid eval on untrusted input).
first_perm = ast.literal_eval(id_to_perm_map[str(input_ids[0])])
target_perm = ast.literal_eval(id_to_perm_map[str(target_id)])

print(f"First input permutation (array form): {first_perm}")
print(f"Target permutation (array form): {target_perm}")
```
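As a quick sanity check (a sketch; the `compose` helper is ours, and the two ID strings are copied from this repo's `metadata.json`, where ID 20 is the identity permutation), you can verify that composing a permutation with the identity leaves it unchanged:

```python
import ast

def compose(p, q):
    # Array-form composition: (p o q)[i] = p[q[i]]
    return [p[q[i]] for i in range(len(q))]

# Entries copied from metadata.json; ID 20 maps to the identity.
id_to_perm_map = {"0": "[0, 1, 4, 2, 3]", "20": "[0, 1, 2, 3, 4]"}

perm = ast.literal_eval(id_to_perm_map["0"])
identity = ast.literal_eval(id_to_perm_map["20"])

assert compose(perm, identity) == perm  # identity on the right is a no-op
assert compose(identity, perm) == perm  # identity on the left is a no-op
```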

## License

This dataset is licensed under the MIT License.
data/s5_data/dataset_dict.json ADDED
{"splits": ["train", "test"]}
data/s5_data/metadata.json ADDED
{
  "0": "[0, 1, 4, 2, 3]",
  "1": "[1, 2, 0, 3, 4]",
  "2": "[2, 3, 1, 4, 0]",
  "3": "[3, 4, 2, 0, 1]",
  "4": "[4, 1, 0, 2, 3]",
  "5": "[0, 2, 3, 1, 4]",
  "6": "[1, 3, 4, 2, 0]",
  "7": "[2, 4, 0, 3, 1]",
  "8": "[3, 0, 1, 4, 2]",
  "9": "[4, 2, 3, 1, 0]",
  "10": "[0, 3, 2, 4, 1]",
  "11": "[1, 4, 3, 0, 2]",
  "12": "[2, 0, 4, 1, 3]",
  "13": "[3, 1, 0, 2, 4]",
  "14": "[4, 3, 2, 0, 1]",
  "15": "[0, 4, 3, 1, 2]",
  "16": "[1, 0, 4, 2, 3]",
  "17": "[2, 1, 0, 3, 4]",
  "18": "[3, 2, 1, 4, 0]",
  "19": "[4, 0, 3, 1, 2]",
  "20": "[0, 1, 2, 3, 4]",
  "21": "[1, 2, 3, 4, 0]",
  "22": "[2, 3, 4, 0, 1]",
  "23": "[3, 4, 0, 1, 2]",
  "24": "[4, 1, 2, 3, 0]",
  "25": "[0, 2, 1, 4, 3]",
  "26": "[1, 3, 2, 0, 4]",
  "27": "[2, 4, 3, 1, 0]",
  "28": "[3, 0, 4, 2, 1]",
  "29": "[4, 2, 1, 0, 3]",
  "30": "[0, 3, 4, 1, 2]",
  "31": "[1, 4, 0, 2, 3]",
  "32": "[2, 0, 1, 3, 4]",
  "33": "[3, 1, 2, 4, 0]",
  "34": "[4, 3, 0, 1, 2]",
  "35": "[0, 4, 1, 2, 3]",
  "36": "[1, 0, 2, 3, 4]",
  "37": "[2, 1, 3, 4, 0]",
  "38": "[3, 2, 4, 0, 1]",
  "39": "[4, 0, 1, 2, 3]",
  "40": "[0, 1, 2, 4, 3]",
  "41": "[1, 2, 3, 0, 4]",
  "42": "[2, 3, 4, 1, 0]",
  "43": "[3, 4, 0, 2, 1]",
  "44": "[4, 1, 2, 0, 3]",
  "45": "[0, 2, 1, 3, 4]",
  "46": "[1, 3, 2, 4, 0]",
  "47": "[2, 4, 3, 0, 1]",
  "48": "[3, 0, 4, 1, 2]",
  "49": "[4, 2, 1, 3, 0]",
  "50": "[0, 3, 4, 2, 1]",
  "51": "[1, 4, 0, 3, 2]",
  "52": "[2, 0, 1, 4, 3]",
  "53": "[3, 1, 2, 0, 4]",
  "54": "[4, 3, 0, 2, 1]",
  "55": "[0, 4, 1, 3, 2]",
  "56": "[1, 0, 2, 4, 3]",
  "57": "[2, 1, 3, 0, 4]",
  "58": "[3, 2, 4, 1, 0]",
  "59": "[4, 0, 1, 3, 2]",
  "60": "[0, 1, 3, 2, 4]",
  "61": "[1, 2, 4, 3, 0]",
  "62": "[2, 3, 0, 4, 1]",
  "63": "[3, 4, 1, 0, 2]",
  "64": "[4, 1, 3, 2, 0]",
  "65": "[0, 2, 4, 1, 3]",
  "66": "[1, 3, 0, 2, 4]",
  "67": "[2, 4, 1, 3, 0]",
  "68": "[3, 0, 2, 4, 1]",
  "69": "[4, 2, 0, 1, 3]",
  "70": "[0, 3, 1, 4, 2]",
  "71": "[1, 4, 2, 0, 3]",
  "72": "[2, 0, 3, 1, 4]",
  "73": "[3, 1, 4, 2, 0]",
  "74": "[4, 3, 1, 0, 2]",
  "75": "[0, 4, 2, 1, 3]",
  "76": "[1, 0, 3, 2, 4]",
  "77": "[2, 1, 4, 3, 0]",
  "78": "[3, 2, 0, 4, 1]",
  "79": "[4, 0, 2, 1, 3]",
  "80": "[0, 1, 4, 3, 2]",
  "81": "[1, 2, 0, 4, 3]",
  "82": "[2, 3, 1, 0, 4]",
  "83": "[3, 4, 2, 1, 0]",
  "84": "[4, 1, 0, 3, 2]",
  "85": "[0, 2, 3, 4, 1]",
  "86": "[1, 3, 4, 0, 2]",
  "87": "[2, 4, 0, 1, 3]",
  "88": "[3, 0, 1, 2, 4]",
  "89": "[4, 2, 3, 0, 1]",
  "90": "[0, 3, 2, 1, 4]",
  "91": "[1, 4, 3, 2, 0]",
  "92": "[2, 0, 4, 3, 1]",
  "93": "[3, 1, 0, 4, 2]",
  "94": "[4, 3, 2, 1, 0]",
  "95": "[0, 4, 3, 2, 1]",
  "96": "[1, 0, 4, 3, 2]",
  "97": "[2, 1, 0, 4, 3]",
  "98": "[3, 2, 1, 0, 4]",
  "99": "[4, 0, 3, 2, 1]",
  "100": "[0, 1, 3, 4, 2]",
  "101": "[1, 2, 4, 0, 3]",
  "102": "[2, 3, 0, 1, 4]",
  "103": "[3, 4, 1, 2, 0]",
  "104": "[4, 1, 3, 0, 2]",
  "105": "[0, 2, 4, 3, 1]",
  "106": "[1, 3, 0, 4, 2]",
  "107": "[2, 4, 1, 0, 3]",
  "108": "[3, 0, 2, 1, 4]",
  "109": "[4, 2, 0, 3, 1]",
  "110": "[0, 3, 1, 2, 4]",
  "111": "[1, 4, 2, 3, 0]",
  "112": "[2, 0, 3, 4, 1]",
  "113": "[3, 1, 4, 0, 2]",
  "114": "[4, 3, 1, 2, 0]",
  "115": "[0, 4, 2, 3, 1]",
  "116": "[1, 0, 3, 4, 2]",
  "117": "[2, 1, 4, 0, 3]",
  "118": "[3, 2, 0, 1, 4]",
  "119": "[4, 0, 2, 3, 1]"
}
data/s5_data/test/data-00000-of-00001.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:187284abecc7779215dbaa6bf906cbe4ab0f96479a233c5ea578828d235638cd
size 16034432
data/s5_data/test/dataset_info.json ADDED
{
  "citation": "",
  "description": "Permutation composition benchmark for the S5 group.",
  "features": {
    "input_sequence": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "https://github.com/your-repo",
  "license": "mit"
}
data/s5_data/test/state.json ADDED
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "ce4f0dd6a26473f4",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
data/s5_data/train/data-00000-of-00001.arrow ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6afc7bb41b2886d1c818379986ffaa2c2185f2934440a3ddde81dc0f2f76c088
size 64145960
data/s5_data/train/dataset_info.json ADDED
{
  "citation": "",
  "description": "Permutation composition benchmark for the S5 group.",
  "features": {
    "input_sequence": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "https://github.com/your-repo",
  "license": "mit"
}
data/s5_data/train/state.json ADDED
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "3fc36b0fe7ca19af",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}