BeeGass committed · Commit 2446235 · verified · 1 Parent(s): e72310c

Upload folder using huggingface_hub
data/a5_data/README.md ADDED
@@ -0,0 +1,122 @@
---
pretty_name: Permutation Composition Dataset (A5)
size_categories:
- small
- medium
- large
- xlarge
- xxlarge
tags:
- mathematics
- group-theory
- permutations
- sequence-to-sequence
- benchmark
- generated
task_categories:
- text-generation
- sequence-modeling
annotations_creators:
- no-annotation
language_creators:
- other
language:
- en
license: mit
---

# Permutation Composition Dataset for A5

This dataset contains sequences of permutation IDs and their compositions, designed for benchmarking sequence-to-sequence models on group-theory tasks.

## Dataset Structure

The dataset is split into `train` and `test` sets. Each sample has the following features:

- `input_sequence`: A space-separated string of integer IDs representing the sequence of permutations to be composed.
- `target`: An integer ID representing the composition of the `input_sequence` permutations.

## Group Details

- **Group Name**: A5
- **Group Type**: Alternating Group
- **Degree**: 5 (permutations act on 5 elements)
- **Order**: 60 (total number of elements in the group)

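These group facts can be double-checked with SymPy's combinatorics module (a quick sketch; assumes SymPy is installed):

```python
from sympy.combinatorics.named_groups import AlternatingGroup

# A5: the group of even permutations of 5 elements.
G = AlternatingGroup(5)
print(G.degree)   # degree: permutations act on 5 elements
print(G.order())  # order: 5!/2 = 60 elements in total
```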
## Data Generation

This dataset was generated using the `s5-data-gen` script. The generation process involves:

1. Generating all unique permutations of the specified group.
2. Mapping each unique permutation to a unique integer ID.
3. Randomly sampling sequences of these permutation IDs.
4. Composing the permutations in each sequence from right to left: `p_n o ... o p_2 o p_1`.
5. Mapping the resulting composed permutation to its integer ID as the target.

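The right-to-left convention in step 4 can be sketched in plain Python, using array-form permutations where `perm[i]` is the image of `i`. The two sample permutations below are the 3-cycles stored under IDs 6 and 7 in the shipped `metadata.json`:

```python
from functools import reduce

def compose(p, q):
    # Array-form composition: (p o q)(i) = p[q[i]], i.e. apply q first, then p.
    return [p[q[i]] for i in range(len(q))]

# IDs 6 and 7 from metadata.json: two 3-cycles in A5.
p1 = [1, 2, 0, 3, 4]
p2 = [2, 0, 1, 3, 4]

# Step 4: compose right to left, p_2 o p_1.
result = reduce(compose, reversed([p1, p2]))
print(result)  # the two cycles are mutual inverses, so this is the identity (ID 5)
```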
### Generation Parameters

- **Total Samples**: 20000
- **Minimum Sequence Length**: 3
- **Maximum Sequence Length**: 512
- **Test Split Size**: 0.2

## Dataset Statistics

- **Train Samples**: 16000
- **Test Samples**: 4000

## Permutation Mapping

The mapping from integer IDs to their corresponding array-form permutations is provided in the `metadata.json` file alongside the dataset. This file is required to interpret the `input_sequence` and `target` IDs.

Example of `metadata.json` content (the first entries of the file shipped with this dataset):

```json
{
  "0": "[0, 3, 1, 2, 4]",
  "1": "[1, 3, 2, 0, 4]",
  "2": "[2, 3, 0, 1, 4]",
  "3": "[3, 4, 2, 0, 1]",
  "4": "[4, 0, 3, 1, 2]"
  // ... and so on for all 60 permutations
}
```

## Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
import json

from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Load the dataset
dataset = load_dataset("BeeGass/permutation-groups")

# Load the permutation mapping (the repo is a dataset, hence repo_type="dataset")
metadata_path = hf_hub_download(
    repo_id="BeeGass/permutation-groups",
    filename="metadata.json",
    repo_type="dataset",
)
with open(metadata_path, "r") as f:
    id_to_perm_map = json.load(f)

# Example: decode a sample
first_train_sample = dataset["train"][0]
input_ids = [int(x) for x in first_train_sample["input_sequence"].split()]
target_id = int(first_train_sample["target"])

print(f"Input sequence IDs: {input_ids}")
print(f"Target ID: {target_id}")

# Convert IDs back to array-form permutations. Each mapping entry is a
# JSON-style string such as "[0, 1, 2, 3, 4]", so parse it with json.loads
# rather than eval. SymPy's Permutation, for example, expects the parsed
# list of integers, not the string.
first_perm = json.loads(id_to_perm_map[str(input_ids[0])])
target_perm = json.loads(id_to_perm_map[str(target_id)])

print(f"First input permutation (array form): {first_perm}")
print(f"Target permutation (array form): {target_perm}")
```

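As a sanity check, the decoded input permutations can be recomposed and compared against the target. The snippet below is self-contained, using a hypothetical sample and a small slice of the ID mapping (IDs 5-7 as they appear in `metadata.json`):

```python
import json
from functools import reduce

def compose(p, q):
    # Array-form composition: (p o q)(i) = p[q[i]]
    return [p[q[i]] for i in range(len(q))]

# Hypothetical sample; the mapping slice matches IDs 5-7 in metadata.json.
sample = {"input_sequence": "6 7 6", "target": "6"}
id_to_perm_map = {
    "5": "[0, 1, 2, 3, 4]",
    "6": "[1, 2, 0, 3, 4]",
    "7": "[2, 0, 1, 3, 4]",
}

perms = [json.loads(id_to_perm_map[i]) for i in sample["input_sequence"].split()]
# Right-to-left composition, matching the generator's convention.
composed = reduce(compose, reversed(perms))
assert composed == json.loads(id_to_perm_map[sample["target"]])
```

With a real sample, `composed` should always map back to the sample's `target` ID.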
## License

This dataset is licensed under the MIT License.
data/a5_data/dataset_dict.json ADDED
@@ -0,0 +1 @@
{"splits": ["train", "test"]}
data/a5_data/metadata.json ADDED
@@ -0,0 +1,62 @@
{
  "0": "[0, 3, 1, 2, 4]",
  "1": "[1, 3, 2, 0, 4]",
  "2": "[2, 3, 0, 1, 4]",
  "3": "[3, 4, 2, 0, 1]",
  "4": "[4, 0, 3, 1, 2]",
  "5": "[0, 1, 2, 3, 4]",
  "6": "[1, 2, 0, 3, 4]",
  "7": "[2, 0, 1, 3, 4]",
  "8": "[3, 2, 0, 4, 1]",
  "9": "[4, 3, 1, 0, 2]",
  "10": "[0, 1, 3, 4, 2]",
  "11": "[1, 2, 3, 4, 0]",
  "12": "[2, 0, 3, 4, 1]",
  "13": "[3, 2, 4, 1, 0]",
  "14": "[4, 3, 0, 2, 1]",
  "15": "[0, 1, 4, 2, 3]",
  "16": "[1, 2, 4, 0, 3]",
  "17": "[2, 0, 4, 1, 3]",
  "18": "[3, 2, 1, 0, 4]",
  "19": "[4, 3, 2, 1, 0]",
  "20": "[0, 2, 1, 4, 3]",
  "21": "[1, 0, 2, 4, 3]",
  "22": "[2, 1, 0, 4, 3]",
  "23": "[3, 0, 2, 1, 4]",
  "24": "[4, 1, 3, 2, 0]",
  "25": "[0, 3, 2, 4, 1]",
  "26": "[1, 3, 0, 4, 2]",
  "27": "[2, 3, 1, 4, 0]",
  "28": "[3, 4, 0, 1, 2]",
  "29": "[4, 0, 1, 2, 3]",
  "30": "[0, 4, 3, 2, 1]",
  "31": "[1, 4, 3, 0, 2]",
  "32": "[2, 4, 3, 1, 0]",
  "33": "[3, 1, 4, 0, 2]",
  "34": "[4, 2, 0, 1, 3]",
  "35": "[0, 2, 4, 3, 1]",
  "36": "[1, 0, 4, 3, 2]",
  "37": "[2, 1, 4, 3, 0]",
  "38": "[3, 0, 1, 4, 2]",
  "39": "[4, 1, 2, 0, 3]",
  "40": "[0, 4, 1, 3, 2]",
  "41": "[1, 4, 2, 3, 0]",
  "42": "[2, 4, 0, 3, 1]",
  "43": "[3, 1, 2, 4, 0]",
  "44": "[4, 2, 3, 0, 1]",
  "45": "[0, 4, 2, 1, 3]",
  "46": "[1, 4, 0, 2, 3]",
  "47": "[2, 4, 1, 0, 3]",
  "48": "[3, 1, 0, 2, 4]",
  "49": "[4, 2, 1, 3, 0]",
  "50": "[0, 2, 3, 1, 4]",
  "51": "[1, 0, 3, 2, 4]",
  "52": "[2, 1, 3, 0, 4]",
  "53": "[3, 0, 4, 2, 1]",
  "54": "[4, 1, 0, 3, 2]",
  "55": "[0, 3, 4, 1, 2]",
  "56": "[1, 3, 4, 2, 0]",
  "57": "[2, 3, 4, 0, 1]",
  "58": "[3, 4, 1, 2, 0]",
  "59": "[4, 0, 2, 3, 1]"
}
data/a5_data/test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:933c566f28c04d29bf2bffb993bd8e6b4896634e85cbf98c21cfc28d4d3adb31
size 2947992
data/a5_data/test/dataset_info.json ADDED
@@ -0,0 +1,16 @@
{
  "citation": "",
  "description": "Permutation composition benchmark for the A5 group.",
  "features": {
    "input_sequence": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "https://github.com/your-repo",
  "license": "mit"
}
data/a5_data/test/state.json ADDED
@@ -0,0 +1,13 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "cabecb3ec4981e96",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
data/a5_data/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a93bb63c0dc75ae2379b21f926cc22181d794282bdcf3e3ce432b26f1cdf175b
size 11782824
data/a5_data/train/dataset_info.json ADDED
@@ -0,0 +1,16 @@
{
  "citation": "",
  "description": "Permutation composition benchmark for the A5 group.",
  "features": {
    "input_sequence": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "https://github.com/your-repo",
  "license": "mit"
}
data/a5_data/train/state.json ADDED
@@ -0,0 +1,13 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "7f61a1752fac25e8",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}