BeeGass committed · Commit 5944459 · verified · 1 Parent(s): 541b3a5

Upload folder using huggingface_hub
data/a5_data/README.md CHANGED
@@ -4,93 +4,37 @@ size_categories:
 - small
 - medium
 - large
-- xlarge
-- xxlarge
 tags:
 - mathematics
 - group-theory
 - permutations
 - sequence-to-sequence
 - benchmark
-- generated
 task_categories:
 - text-generation
 - sequence-modeling
-annotations_creators:
-- no-annotations
-language_creators:
-- other
-language:
-- en
-licenses:
-- mit
 ---
 
 # Permutation Composition Dataset for A5
 
-This dataset contains sequences of permutation IDs and their compositions, designed for benchmarking sequence-to-sequence models on group theory tasks.
+This dataset contains 30000 samples of permutation composition problems for the group A5.
 
-## Dataset Structure
-
-The dataset is split into `train` and `test` sets. Each sample in the dataset has the following features:
-
-- `input_sequence`: A space-separated string of integer IDs representing the sequence of permutations to be composed.
-- `target`: An integer ID representing the composition of the `input_sequence` permutations.
-
-## Group Details
-
-- **Group Name**: A5
-- **Group Type**: Alternating Group
-- **Degree**: 5 (permutations act on 5 elements)
-- **Order**: 60 (total number of elements in the group)
+## Group Information
+- **Group**: A5
+- **Order**: 60
+- **Degree**: 5
 
-## Data Generation
-
-This dataset was generated using the `s5-data-gen` script. The generation process involves:
-
-1. Generating all unique permutations for the specified group.
-2. Mapping each unique permutation to a unique integer ID.
-3. Randomly sampling sequences of these permutation IDs.
-4. Composing the permutations in the sequence (from right to left: `p_n o ... o p_2 o p_1`).
-5. Mapping the resulting composed permutation to its integer ID as the target.
-
-### Generation Parameters:
-
-- **Total Samples**: 20000
-- **Minimum Sequence Length**: 3
-- **Maximum Sequence Length**: 512
-- **Test Split Size**: 0.2
 
 ## Dataset Statistics
-
-- **Train Samples**: 16000
-- **Test Samples**: 4000
+- **Total samples**: 30000
+- **Train samples**: 24000
+- **Test samples**: 6000
+- **Min sequence length**: 3
+- **Max sequence length**: 512
 
-## Permutation Mapping
-
-The mapping from integer IDs to their corresponding permutation array forms is provided in the `metadata.json` file alongside the dataset. This file is crucial for interpreting the `input_sequence` and `target` IDs.
-
-Example of `metadata.json` content:
-
-```json
-{
-  "0": "[0, 1, 2, 3, 4]",
-  "1": "[0, 1, 3, 2, 4]",
-  "2": "[0, 1, 4, 3, 2]",
-  "3": "[0, 2, 1, 3, 4]",
-  "4": "[0, 2, 3, 1, 4]"
-  // ... and so on for all 60 permutations
-}
-```
 
 ## Usage
 
-You can load this dataset using the Hugging Face `datasets` library:
-
 ```python
 from datasets import load_dataset
-import json
-from huggingface_hub import hf_hub_download
 
 # Load the dataset
 dataset = load_dataset("BeeGass/permutation-groups")
@@ -99,22 +43,6 @@ dataset = load_dataset("BeeGass/permutation-groups")
 metadata_path = hf_hub_download(repo_id="BeeGass/permutation-groups", filename="metadata.json")
 with open(metadata_path, "r") as f:
     id_to_perm_map = json.load(f)
-
-# Example: Decode a sample
-first_train_sample = dataset["train"][0]
-input_ids = [int(x) for x in first_train_sample["input_sequence"].split(" ")]
-target_id = int(first_train_sample["target"])
-
-print(f"Input sequence IDs: {input_ids}")
-print(f"Target ID: {target_id}")
-
-# Convert IDs back to permutations (example for the first input permutation)
-# Note: SymPy Permutation expects a list of integers, not a string representation
-# You would need to parse the string representation from id_to_perm_map
-# For example: eval(id_to_perm_map[str(input_ids[0])])
-
-print(f"First input permutation (array form): {id_to_perm_map[str(input_ids[0])]}")
-print(f"Target permutation (array form): {id_to_perm_map[str(target_id)]}")
 ```
 
 ## License
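The removed README text specifies that sequences compose right-to-left (`p_n o ... o p_2 o p_1`). A minimal sketch of that convention on array-form permutations — the `compose` helpers here are illustrative, not part of the dataset tooling:

```python
def compose(p, q):
    # (p o q)[i] = p[q[i]]: apply q first, then p.
    return [p[i] for i in q]

def compose_sequence(perms):
    # Fold the input order p_1, p_2, ..., p_n into p_n o ... o p_2 o p_1.
    result = list(range(len(perms[0])))  # identity permutation
    for p in perms:
        result = compose(p, result)
    return result

# IDs 6 and 7 in metadata.json are mutually inverse 3-cycles,
# so their composition is the identity permutation (ID 5).
print(compose_sequence([[1, 2, 0, 3, 4], [2, 0, 1, 3, 4]]))  # [0, 1, 2, 3, 4]
```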
data/a5_data/metadata.json CHANGED
@@ -1,62 +1,62 @@
 {
- "0": "[0, 3, 1, 2, 4]",
- "1": "[1, 3, 2, 0, 4]",
- "2": "[2, 3, 0, 1, 4]",
- "3": "[3, 4, 2, 0, 1]",
- "4": "[4, 0, 3, 1, 2]",
- "5": "[0, 1, 2, 3, 4]",
- "6": "[1, 2, 0, 3, 4]",
- "7": "[2, 0, 1, 3, 4]",
- "8": "[3, 2, 0, 4, 1]",
- "9": "[4, 3, 1, 0, 2]",
- "10": "[0, 1, 3, 4, 2]",
- "11": "[1, 2, 3, 4, 0]",
- "12": "[2, 0, 3, 4, 1]",
- "13": "[3, 2, 4, 1, 0]",
- "14": "[4, 3, 0, 2, 1]",
- "15": "[0, 1, 4, 2, 3]",
- "16": "[1, 2, 4, 0, 3]",
- "17": "[2, 0, 4, 1, 3]",
- "18": "[3, 2, 1, 0, 4]",
- "19": "[4, 3, 2, 1, 0]",
- "20": "[0, 2, 1, 4, 3]",
- "21": "[1, 0, 2, 4, 3]",
- "22": "[2, 1, 0, 4, 3]",
- "23": "[3, 0, 2, 1, 4]",
- "24": "[4, 1, 3, 2, 0]",
- "25": "[0, 3, 2, 4, 1]",
- "26": "[1, 3, 0, 4, 2]",
- "27": "[2, 3, 1, 4, 0]",
- "28": "[3, 4, 0, 1, 2]",
- "29": "[4, 0, 1, 2, 3]",
- "30": "[0, 4, 3, 2, 1]",
- "31": "[1, 4, 3, 0, 2]",
- "32": "[2, 4, 3, 1, 0]",
- "33": "[3, 1, 4, 0, 2]",
- "34": "[4, 2, 0, 1, 3]",
- "35": "[0, 2, 4, 3, 1]",
- "36": "[1, 0, 4, 3, 2]",
- "37": "[2, 1, 4, 3, 0]",
- "38": "[3, 0, 1, 4, 2]",
- "39": "[4, 1, 2, 0, 3]",
- "40": "[0, 4, 1, 3, 2]",
- "41": "[1, 4, 2, 3, 0]",
- "42": "[2, 4, 0, 3, 1]",
- "43": "[3, 1, 2, 4, 0]",
- "44": "[4, 2, 3, 0, 1]",
- "45": "[0, 4, 2, 1, 3]",
- "46": "[1, 4, 0, 2, 3]",
- "47": "[2, 4, 1, 0, 3]",
- "48": "[3, 1, 0, 2, 4]",
- "49": "[4, 2, 1, 3, 0]",
- "50": "[0, 2, 3, 1, 4]",
- "51": "[1, 0, 3, 2, 4]",
- "52": "[2, 1, 3, 0, 4]",
- "53": "[3, 0, 4, 2, 1]",
- "54": "[4, 1, 0, 3, 2]",
- "55": "[0, 3, 4, 1, 2]",
- "56": "[1, 3, 4, 2, 0]",
- "57": "[2, 3, 4, 0, 1]",
- "58": "[3, 4, 1, 2, 0]",
- "59": "[4, 0, 2, 3, 1]"
+ "0": "[0, 3, 1, 2, 4]",
+ "1": "[1, 3, 2, 0, 4]",
+ "2": "[2, 3, 0, 1, 4]",
+ "3": "[3, 4, 2, 0, 1]",
+ "4": "[4, 0, 3, 1, 2]",
+ "5": "[0, 1, 2, 3, 4]",
+ "6": "[1, 2, 0, 3, 4]",
+ "7": "[2, 0, 1, 3, 4]",
+ "8": "[3, 2, 0, 4, 1]",
+ "9": "[4, 3, 1, 0, 2]",
+ "10": "[0, 1, 3, 4, 2]",
+ "11": "[1, 2, 3, 4, 0]",
+ "12": "[2, 0, 3, 4, 1]",
+ "13": "[3, 2, 4, 1, 0]",
+ "14": "[4, 3, 0, 2, 1]",
+ "15": "[0, 1, 4, 2, 3]",
+ "16": "[1, 2, 4, 0, 3]",
+ "17": "[2, 0, 4, 1, 3]",
+ "18": "[3, 2, 1, 0, 4]",
+ "19": "[4, 3, 2, 1, 0]",
+ "20": "[0, 2, 1, 4, 3]",
+ "21": "[1, 0, 2, 4, 3]",
+ "22": "[2, 1, 0, 4, 3]",
+ "23": "[3, 0, 2, 1, 4]",
+ "24": "[4, 1, 3, 2, 0]",
+ "25": "[0, 3, 2, 4, 1]",
+ "26": "[1, 3, 0, 4, 2]",
+ "27": "[2, 3, 1, 4, 0]",
+ "28": "[3, 4, 0, 1, 2]",
+ "29": "[4, 0, 1, 2, 3]",
+ "30": "[0, 4, 3, 2, 1]",
+ "31": "[1, 4, 3, 0, 2]",
+ "32": "[2, 4, 3, 1, 0]",
+ "33": "[3, 1, 4, 0, 2]",
+ "34": "[4, 2, 0, 1, 3]",
+ "35": "[0, 2, 4, 3, 1]",
+ "36": "[1, 0, 4, 3, 2]",
+ "37": "[2, 1, 4, 3, 0]",
+ "38": "[3, 0, 1, 4, 2]",
+ "39": "[4, 1, 2, 0, 3]",
+ "40": "[0, 4, 1, 3, 2]",
+ "41": "[1, 4, 2, 3, 0]",
+ "42": "[2, 4, 0, 3, 1]",
+ "43": "[3, 1, 2, 4, 0]",
+ "44": "[4, 2, 3, 0, 1]",
+ "45": "[0, 4, 2, 1, 3]",
+ "46": "[1, 4, 0, 2, 3]",
+ "47": "[2, 4, 1, 0, 3]",
+ "48": "[3, 1, 0, 2, 4]",
+ "49": "[4, 2, 1, 3, 0]",
+ "50": "[0, 2, 3, 1, 4]",
+ "51": "[1, 0, 3, 2, 4]",
+ "52": "[2, 1, 3, 0, 4]",
+ "53": "[3, 0, 4, 2, 1]",
+ "54": "[4, 1, 0, 3, 2]",
+ "55": "[0, 3, 4, 1, 2]",
+ "56": "[1, 3, 4, 2, 0]",
+ "57": "[2, 3, 4, 0, 1]",
+ "58": "[3, 4, 1, 2, 0]",
+ "59": "[4, 0, 2, 3, 1]"
 }
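The old README suggested `eval` to turn these array-form strings back into lists; `ast.literal_eval` performs the same parse without executing arbitrary code. A sketch using a few entries copied from the mapping above:

```python
import ast

# Entries copied from metadata.json (integer ID -> array-form string).
id_to_perm_map = {
    "5": "[0, 1, 2, 3, 4]",
    "6": "[1, 2, 0, 3, 4]",
    "7": "[2, 0, 1, 3, 4]",
}

# ast.literal_eval safely parses the list literal, unlike eval().
perm = ast.literal_eval(id_to_perm_map["6"])
print(perm)                            # [1, 2, 0, 3, 4]
print(sorted(perm) == list(range(5)))  # True: a valid degree-5 permutation
```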
data/a5_data/test/data-00000-of-00001.arrow CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3ee1484fe59700b54e967fabbec5209299135a5b9c8c0c8258c4daefb0b16360
-size 4446976
+oid sha256:2e0aafe398005a42883d31e4b506ddc13e358c65a6b12d0b8dd1c1fb5f8121c6
+size 4438264
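The `.arrow` files are stored as Git LFS pointers, so the diff shows only the `oid` and `size` lines changing rather than binary content. A small sketch parsing a pointer file into its fields (the `parse_lfs_pointer` helper is hypothetical, not a real git-lfs API):

```python
def parse_lfs_pointer(text):
    # Each pointer line is "key value"; split on the first space.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2e0aafe398005a42883d31e4b506ddc13e358c65a6b12d0b8dd1c1fb5f8121c6
size 4438264"""

info = parse_lfs_pointer(pointer)
print(info["size"])                       # 4438264
print(info["oid"].startswith("sha256:"))  # True
```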
data/a5_data/test/dataset_info.json CHANGED
@@ -1,6 +1,6 @@
 {
   "citation": "",
-  "description": "Permutation composition benchmark for the A5 group.",
+  "description": "",
   "features": {
     "input_sequence": {
       "dtype": "string",
@@ -11,6 +11,6 @@
       "_type": "Value"
     }
   },
-  "homepage": "https://github.com/your-repo",
-  "license": "mit"
+  "homepage": "",
+  "license": ""
 }
data/a5_data/test/state.json CHANGED
@@ -4,7 +4,7 @@
       "filename": "data-00000-of-00001.arrow"
     }
   ],
-  "_fingerprint": "c2e6cbea4c6b14b5",
+  "_fingerprint": "ed51b7fb34870132",
   "_format_columns": null,
   "_format_kwargs": {},
   "_format_type": null,
data/a5_data/train/data-00000-of-00001.arrow CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7de6bf71a1e552bfc1a7c5201268619e77f5b1af854ec246e2b221968de58b3f
-size 17719640
+oid sha256:d0376ed66fe662ec93a1268b697d147577bfe7c843924c50b8159797164b5125
+size 17830448
data/a5_data/train/dataset_info.json CHANGED
@@ -1,6 +1,6 @@
 {
   "citation": "",
-  "description": "Permutation composition benchmark for the A5 group.",
+  "description": "",
   "features": {
     "input_sequence": {
       "dtype": "string",
@@ -11,6 +11,6 @@
       "_type": "Value"
     }
   },
-  "homepage": "https://github.com/your-repo",
-  "license": "mit"
+  "homepage": "",
+  "license": ""
 }
data/a5_data/train/state.json CHANGED
@@ -4,7 +4,7 @@
       "filename": "data-00000-of-00001.arrow"
     }
   ],
-  "_fingerprint": "60004397ad3e58dc",
+  "_fingerprint": "ff4a605364620460",
   "_format_columns": null,
   "_format_kwargs": {},
   "_format_type": null,