BeeGass committed · commit a27d7c8 · verified · 1 parent: 475137d

Upload folder using huggingface_hub
data/s3_data/README.md CHANGED
@@ -4,93 +4,37 @@ size_categories:
  - small
  - medium
  - large
- - xlarge
- - xxlarge
  tags:
  - mathematics
  - group-theory
  - permutations
  - sequence-to-sequence
  - benchmark
- - generated
  task_categories:
  - text-generation
  - sequence-modeling
- annotations_creators:
- - no-annotations
- language_creators:
- - other
- language:
- - en
- licenses:
- - mit
- ---
 
  # Permutation Composition Dataset for S3
 
- This dataset contains sequences of permutation IDs and their compositions, designed for benchmarking sequence-to-sequence models on group theory tasks.
-
- ## Dataset Structure
-
- The dataset is split into `train` and `test` sets. Each sample in the dataset has the following features:
-
- - `input_sequence`: A space-separated string of integer IDs representing the sequence of permutations to be composed.
- - `target`: An integer ID representing the composition of the `input_sequence` permutations.
-
- ## Group Details
-
- - **Group Name**: S3
- - **Group Type**: Symmetric Group
- - **Degree**: 3 (permutations act on 3 elements)
- - **Order**: 6 (total number of elements in the group)
 
- ## Data Generation
-
- This dataset was generated using the `s5-data-gen` script. The generation process involves:
-
- 1. Generating all unique permutations for the specified group.
- 2. Mapping each unique permutation to a unique integer ID.
- 3. Randomly sampling sequences of these permutation IDs.
- 4. Composing the permutations in the sequence (from right to left: `p_n o ... o p_2 o p_1`).
- 5. Mapping the resulting composed permutation to its integer ID as the target.
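The five generation steps described above can be sketched in miniature — an illustrative reconstruction, not the actual `s5-data-gen` script; the ID assignment below is arbitrary, whereas the dataset's published mapping is fixed by `metadata.json`:

```python
import itertools
import random

# Steps 1-2: enumerate the group's permutations and assign integer IDs
# (illustrative ordering; the published mapping lives in metadata.json)
perms = [list(p) for p in itertools.permutations(range(3))]
id_of = {tuple(p): i for i, p in enumerate(perms)}

def compose(p, q):
    # (p o q)[i] = p[q[i]]: apply q first, then p
    return [p[q[i]] for i in range(len(q))]

def make_sample(seq_len, rng=random):
    # Step 3: sample a random sequence of permutation IDs
    ids = [rng.randrange(len(perms)) for _ in range(seq_len)]
    # Step 4: compose right to left, so p_1 is applied first
    result = perms[ids[0]]
    for i in ids[1:]:
        result = compose(perms[i], result)
    # Step 5: the composed permutation's ID is the target
    return " ".join(map(str, ids)), id_of[tuple(result)]
```

Here the sample comes out as a space-separated string of IDs plus a single target ID, matching the `input_sequence`/`target` layout described above.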
-
- ### Generation Parameters:
-
- - **Total Samples**: 10000
- - **Minimum Sequence Length**: 3
- - **Maximum Sequence Length**: 512
- - **Test Split Size**: 0.2
 
  ## Dataset Statistics
-
- - **Train Samples**: 8000
- - **Test Samples**: 2000
-
- ## Permutation Mapping
-
- The mapping from integer IDs to their corresponding permutation array forms is provided in the `metadata.json` file alongside the dataset. This file is crucial for interpreting the `input_sequence` and `target` IDs.
-
- Example of `metadata.json` content:
-
- ```json
- {
- "0": "[0, 1, 2, 3, 4]",
- "1": "[0, 1, 3, 2, 4]",
- "2": "[0, 1, 4, 3, 2]",
- "3": "[0, 2, 1, 3, 4]",
- "4": "[0, 2, 3, 1, 4]"
- // ... and so on for all 6 permutations
- }
- ```
 
  ## Usage
 
- You can load this dataset using the Hugging Face `datasets` library:
-
  ```python
  from datasets import load_dataset
- import json
- from huggingface_hub import hf_hub_download
 
  # Load the dataset
  dataset = load_dataset("BeeGass/permutation-groups")
@@ -99,22 +43,6 @@ dataset = load_dataset("BeeGass/permutation-groups")
  metadata_path = hf_hub_download(repo_id="BeeGass/permutation-groups", filename="metadata.json")
  with open(metadata_path, "r") as f:
      id_to_perm_map = json.load(f)
-
- # Example: Decode a sample
- first_train_sample = dataset["train"][0]
- input_ids = [int(x) for x in first_train_sample["input_sequence"].split(" ")]
- target_id = int(first_train_sample["target"])
-
- print(f"Input sequence IDs: {input_ids}")
- print(f"Target ID: {target_id}")
-
- # Convert IDs back to permutations (example for the first input permutation)
- # Note: SymPy Permutation expects a list of integers, not a string representation
- # You would need to parse the string representation from id_to_perm_map
- # For example: eval(id_to_perm_map[str(input_ids[0])])
-
- print(f"First input permutation (array form): {id_to_perm_map[str(input_ids[0])]}")
- print(f"Target permutation (array form): {id_to_perm_map[str(target_id)]}")
  ```
 
  ## License
 
  - small
  - medium
  - large
  tags:
  - mathematics
  - group-theory
  - permutations
  - sequence-to-sequence
  - benchmark
  task_categories:
  - text-generation
  - sequence-modeling
+ ---
 
  # Permutation Composition Dataset for S3
 
+ This dataset contains 15000 samples of permutation composition problems for the group S3.
 
+ ## Group Information
+ - **Group**: S3
+ - **Order**: 6
+ - **Degree**: 3
 
  ## Dataset Statistics
+ - **Total samples**: 15000
+ - **Train samples**: 12000
+ - **Test samples**: 3000
+ - **Min sequence length**: 3
+ - **Max sequence length**: 10
 
  ## Usage
 
  ```python
  from datasets import load_dataset
 
  # Load the dataset
  dataset = load_dataset("BeeGass/permutation-groups")
 
  metadata_path = hf_hub_download(repo_id="BeeGass/permutation-groups", filename="metadata.json")
  with open(metadata_path, "r") as f:
      id_to_perm_map = json.load(f)
  ```
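The committed snippet above still calls `hf_hub_download` and `json.load`, even though this commit removed the `import json` and `from huggingface_hub import hf_hub_download` lines it relies on. A decoding step in the spirit of the removed example might look like this — shown offline with hypothetical sample values so nothing needs downloading:

```python
import json

# Hypothetical stand-in for dataset["train"][0]; real samples carry
# the same two string fields
sample = {"input_sequence": "3 1 4", "target": "0"}

# id -> array-form strings, as stored in metadata.json
id_to_perm_map = {
    "0": "[0, 1, 2]", "1": "[1, 2, 0]", "2": "[2, 0, 1]",
    "3": "[0, 2, 1]", "4": "[1, 0, 2]", "5": "[2, 1, 0]",
}

input_ids = [int(x) for x in sample["input_sequence"].split()]
target_id = int(sample["target"])

# Each mapped value is a JSON-encoded list, so json.loads parses it safely
input_perms = [json.loads(id_to_perm_map[str(i)]) for i in input_ids]
target_perm = json.loads(id_to_perm_map[str(target_id)])

print(input_perms)  # [[0, 2, 1], [1, 2, 0], [1, 0, 2]]
print(target_perm)  # [0, 1, 2]
```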
 
  ## License
data/s3_data/metadata.json CHANGED
@@ -1,8 +1,8 @@
  {
- "0": "[0, 1, 2]",
- "1": "[1, 2, 0]",
- "2": "[2, 0, 1]",
- "3": "[0, 2, 1]",
- "4": "[1, 0, 2]",
- "5": "[2, 1, 0]"
  }
 
  {
+ "0": "[0, 1, 2]",
+ "1": "[1, 2, 0]",
+ "2": "[2, 0, 1]",
+ "3": "[0, 2, 1]",
+ "4": "[1, 0, 2]",
+ "5": "[2, 1, 0]"
  }
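Since each value in `metadata.json` is itself a JSON-encoded list (as the file contents above show), the array forms can be recovered with `json.loads` rather than `eval`:

```python
import json

# The S3 mapping exactly as it appears in data/s3_data/metadata.json
metadata = {
    "0": "[0, 1, 2]",
    "1": "[1, 2, 0]",
    "2": "[2, 0, 1]",
    "3": "[0, 2, 1]",
    "4": "[1, 0, 2]",
    "5": "[2, 1, 0]",
}

# Parse each string into a list of ints; json.loads avoids executing
# arbitrary code the way eval would
id_to_perm = {int(k): json.loads(v) for k, v in metadata.items()}

print(id_to_perm[5])  # [2, 1, 0]
```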
data/s3_data/test/data-00000-of-00001.arrow CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d826664b71d425b7def3161248a92c295daf6b9ffd9c4fe44c239475730fd3df
- size 1057608
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:f7137671e834d49c69ae57369723b3216a6822105be52ad4e9971d51da1702aa
+ size 64112
data/s3_data/test/dataset_info.json CHANGED
@@ -1,6 +1,6 @@
  {
  "citation": "",
- "description": "Permutation composition benchmark for the S3 group.",
  "features": {
  "input_sequence": {
  "dtype": "string",
@@ -11,6 +11,6 @@
  "_type": "Value"
  }
  },
- "homepage": "https://github.com/your-repo",
- "license": "mit"
  }
 
  {
  "citation": "",
+ "description": "",
  "features": {
  "input_sequence": {
  "dtype": "string",
 
  "_type": "Value"
  }
  },
+ "homepage": "",
+ "license": ""
  }
data/s3_data/test/state.json CHANGED
@@ -4,7 +4,7 @@
  "filename": "data-00000-of-00001.arrow"
  }
  ],
- "_fingerprint": "0f355b33f06225cb",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
 
  "filename": "data-00000-of-00001.arrow"
  }
  ],
+ "_fingerprint": "b0dae84368b918b9",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
data/s3_data/train/data-00000-of-00001.arrow CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bb60e451f78fc76b532c209fcd62a122ecd66cc7b17671ab2022ebbef342a6a9
- size 4181200
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:7e696434e646a240c8c31c4b554034b06ed7a95206df1187f0cd39d763a88d3e
+ size 255944
data/s3_data/train/dataset_info.json CHANGED
@@ -1,6 +1,6 @@
  {
  "citation": "",
- "description": "Permutation composition benchmark for the S3 group.",
  "features": {
  "input_sequence": {
  "dtype": "string",
@@ -11,6 +11,6 @@
  "_type": "Value"
  }
  },
- "homepage": "https://github.com/your-repo",
- "license": "mit"
  }
 
  {
  "citation": "",
+ "description": "",
  "features": {
  "input_sequence": {
  "dtype": "string",
 
  "_type": "Value"
  }
  },
+ "homepage": "",
+ "license": ""
  }
data/s3_data/train/state.json CHANGED
@@ -4,7 +4,7 @@
  "filename": "data-00000-of-00001.arrow"
  }
  ],
- "_fingerprint": "05de78783756a546",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
 
  "filename": "data-00000-of-00001.arrow"
  }
  ],
+ "_fingerprint": "114446984893d5fb",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,