victor-villar committed on
Commit 3f6d9af · 1 Parent(s): 5403fe7

Address team comments
README.md CHANGED
@@ -1,51 +1,105 @@
  ---
  license: apache-2.0
  ---
- # Permutation Quantum Synthesis Circuit Models

  ## Introduction
- This repo hosts models for permutation synthesis in quantum circuits trained with RL techniques. The models are specialized for different topologies and qubit counts.

- Permutation circuits rearrange computational basis states according to an n-element permutation (one per qubit in binary encoding) using SWAP gates constrained by the device topology.

- For each model, there is the environment config (`.json`) and the trained policy weights (`.pt`).

  ## Scope
- - Permutation synthesis models only.
  - Each model is tied to a specific qubit count and topology; use the matching pair for your target device/layout. To discover the specific topology for each model see the `gateset` property in the model's config.

  ## Contents
- - `permutation_*.json`: model configs for a given qubit count/topology.
- - Matching `.pt` files: trained policies for each JSON (same filename stem).

  ## Usage example

- Below is a snippet to synthesize a random 10-qubit permutation. We use `qiskit-gym` ([repo](https://github.com/AI4quantum/qiskit-gym)), `twisteRL` ([repo](https://github.com/AI4quantum/twisteRL)), and `qiskit`. Install dependencies via `pip install qiskit-gym` in your virtual environment.

  ```python
  from qiskit_gym.rl import RLSynthesis
  from twisterl.utils import pull_hub_algorithm
- from qiskit.circuit.library import Permutation
- import numpy as np

  local_path = pull_hub_algorithm(
- repo_id="Qiskit/ai-transpiler_permutations",
  model_path="./models",
  revision="main",
- validate=True,
  )

  if not local_path:
  raise ValueError("Failed to download model from hub")

- num_qubits = 12
- seed = 42
- input_perm = np.random.default_rng(seed).permutation(num_qubits).tolist()
-
- rls = RLSynthesis.from_config_json(
- f"{local_path}/permutation_12qO.json",
- f"{local_path}/permutation_12qO.pt",
- )
- qc_perm_output = rls.synth(input_perm, num_searches=10, num_mcts_searches=0, deterministic=False)
- print(qc_perm_output)
  ```
  ---
  license: apache-2.0
  ---
+ # Linear Function Quantum Synthesis Circuit Models

  ## Introduction
+ This repository hosts models for linear function synthesis in quantum circuits, trained with RL techniques. The models are specialized for different topologies of up to 10 qubits.

+ A linear function over n qubits is an n×n binary transformation on X/Z operators (the reversible linear part of a Clifford). Implementing it means finding CX/SWAP-based circuits whose action matches that binary map.
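The "binary map" view can be sketched in plain NumPy (an illustrative sketch, not code from this repo): each CX with control c and target t acts as the identity matrix with an extra 1 at row t, column c, and a circuit's overall matrix is the GF(2) product of its gates.

```python
import numpy as np

def cx_matrix(n, control, target):
    """GF(2) matrix of one CX gate on n qubits: x[target] ^= x[control]."""
    m = np.eye(n, dtype=np.uint8)
    m[target, control] = 1
    return m

def circuit_matrix(n, cx_gates):
    """Compose CX gates, applied in order, into a single n x n binary matrix."""
    m = np.eye(n, dtype=np.uint8)
    for control, target in cx_gates:
        m = cx_matrix(n, control, target) @ m % 2
    return m

# CX(0,1) followed by CX(1,2) on 3 qubits
m = circuit_matrix(3, [(0, 1), (1, 2)])
print(m.tolist())  # → [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```

Synthesis is the inverse problem: given such a matrix, recover a short CX/SWAP circuit that produces it under the device's connectivity constraints.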
+
+ For each model, there is an environment config (`.json`) and the trained policy weights (`.safetensors`).

  ## Scope
+ - Linear-function synthesis models only.
  - Each model is tied to a specific qubit count and topology; use the matching pair for your target device/layout. To discover the specific topology for each model see the `gateset` property in the model's config.

  ## Contents
+ - `linear_function_*.json`: model configs for a given qubit count/topology.
+ - Matching `.safetensors` files: trained policies for each JSON (same filename stem).
+
+ ## Training
+
+ Training data is entirely synthetic and generated internally at IBM Quantum using custom reinforcement learning environments built on [Qiskit-Gym](https://github.com/AI4quantum/qiskit-gym).
+
+ **Data Collection:** Target operators are generated by sampling random linear-reversible circuits with CNOT gates consistent with the target coupling map. The number of gates scales with the difficulty, which increases once the model learns to solve circuits at the current difficulty. No external datasets or third-party circuit repositories are used.
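A rough sketch of this sampling scheme (illustrative only; the function names, topology, and gate counts are assumptions, not the repo's actual generator): targets are random CX circuits restricted to a coupling map, with the gate count growing as the curriculum advances.

```python
import random

def sample_cx_circuit(coupling_map, num_gates, rng):
    """Sample a random CX circuit whose gates respect a coupling map,
    given as a list of allowed (control, target) qubit pairs."""
    return [rng.choice(coupling_map) for _ in range(num_gates)]

# Hypothetical 4-qubit line topology, with edges in both directions.
line = [(i, i + 1) for i in range(3)] + [(i + 1, i) for i in range(3)]
rng = random.Random(0)
easy_target = sample_cx_circuit(line, num_gates=3, rng=rng)   # early curriculum
hard_target = sample_cx_circuit(line, num_gates=30, rng=rng)  # after difficulty increases
print(len(easy_target), len(hard_target))
```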
+
+ **PII:** No personal or sensitive data is present or used in any phase of training, as all data is synthetic and generated algorithmically.
+
+ **Infrastructure:** We train the models on IBM's Cognitive Computing Cluster (CCC) using NVIDIA A100 40GB GPUs. The cluster provides a scalable and efficient infrastructure for training.
+
  ## Usage example

+ Below is a code snippet to synthesize a random linear function using the `linear_function_10qL` model. In addition to the model, we use the `qiskit-gym` ([repo](https://github.com/AI4quantum/qiskit-gym)) and `twisteRL` ([repo](https://github.com/AI4quantum/twisteRL)) libraries, as well as `qiskit`. You can install all needed libraries by running `pip install qiskit-gym` in your Python virtual environment.

  ```python
  from qiskit_gym.rl import RLSynthesis
  from twisterl.utils import pull_hub_algorithm
+ from qiskit.circuit.library import LinearFunction
+ from qiskit.synthesis.linear.linear_matrix_utils import random_invertible_binary_matrix

  local_path = pull_hub_algorithm(
+     repo_id="Qiskit/ai-transpiler_linear-functions",
      model_path="./models",
      revision="main",
+     validate=True,
  )

  if not local_path:
      raise ValueError("Failed to download model from hub")

+ num_qubits = 10
+ matrix = random_invertible_binary_matrix(num_qubits, seed=42)
+ input_lf = LinearFunction(matrix)
+ rls = RLSynthesis.from_config_json(
+     f"{local_path}/linear_function_10qL.json",
+     f"{local_path}/linear_function_10qL.safetensors",
+ )
+ qc_lf_output = rls.synth(input_lf, num_searches=10, num_mcts_searches=0, deterministic=False)
+ print(qc_lf_output)
  ```
+
64
+ <!-- MODEL_INDEX_START -->
65
+ ## Models
66
+
67
+ Below is the list of available models with qubit counts and topologies:
68
+
69
+ | Model | Qubits | Topology |
70
+ | --- | --- | --- |
71
+ | [`linear_function_2qL`](linear_function_2qL.md) | 2 | L |
72
+ | [`linear_function_3qL`](linear_function_3qL.md) | 3 | L |
73
+ | [`linear_function_4qL`](linear_function_4qL.md) | 4 | L |
74
+ | [`linear_function_4qY`](linear_function_4qY.md) | 4 | Y |
75
+ | [`linear_function_5qL`](linear_function_5qL.md) | 5 | L |
76
+ | [`linear_function_5qT`](linear_function_5qT.md) | 5 | T |
77
+ | [`linear_function_6qL`](linear_function_6qL.md) | 6 | L |
78
+ | [`linear_function_6qT`](linear_function_6qT.md) | 6 | T |
79
+ | [`linear_function_6qY`](linear_function_6qY.md) | 6 | Y |
80
+ | [`linear_function_7qF`](linear_function_7qF.md) | 7 | F |
81
+ | [`linear_function_7qH`](linear_function_7qH.md) | 7 | H |
82
+ | [`linear_function_7qL`](linear_function_7qL.md) | 7 | L |
83
+ | [`linear_function_7qT`](linear_function_7qT.md) | 7 | T |
84
+ | [`linear_function_7qY`](linear_function_7qY.md) | 7 | Y |
85
+ | [`linear_function_8qF`](linear_function_8qF.md) | 8 | F |
86
+ | [`linear_function_8qJ`](linear_function_8qJ.md) | 8 | J |
87
+ | [`linear_function_8qL`](linear_function_8qL.md) | 8 | L |
88
+ | [`linear_function_8qT1`](linear_function_8qT1.md) | 8 | T1 |
89
+ | [`linear_function_8qT2`](linear_function_8qT2.md) | 8 | T2 |
90
+ | [`linear_function_8qY`](linear_function_8qY.md) | 8 | Y |
91
+ | [`linear_function_9qF1`](linear_function_9qF1.md) | 9 | F1 |
92
+ | [`linear_function_9qF2`](linear_function_9qF2.md) | 9 | F2 |
93
+ | [`linear_function_9qH1`](linear_function_9qH1.md) | 9 | H1 |
94
+ | [`linear_function_9qH2`](linear_function_9qH2.md) | 9 | H2 |
95
+ | [`linear_function_9qH3`](linear_function_9qH3.md) | 9 | H3 |
96
+ | [`linear_function_9qJ`](linear_function_9qJ.md) | 9 | J |
97
+ | [`linear_function_9qL`](linear_function_9qL.md) | 9 | L |
98
+ | [`linear_function_9qT1`](linear_function_9qT1.md) | 9 | T1 |
99
+ | [`linear_function_9qT2`](linear_function_9qT2.md) | 9 | T2 |
100
+ | [`linear_function_9qY`](linear_function_9qY.md) | 9 | Y |
101
+ | [`linear_function_10qL`](linear_function_10qL.md) | 10 | L |
102
+ <!-- MODEL_INDEX_END -->
103
+
104
+ ## Acknowledgements
105
+ The authors acknowledge the IBM Research CCC Service for providing resources that have contributed to the production or processing of the data contained within this data collection.
model_data/images/permutation_12qO_topology.svg ADDED
model_data/images/permutation_27q_topology.svg ADDED
model_data/images/permutation_33q_topology.svg ADDED
model_data/images/permutation_65q_topology.svg ADDED
model_data/images/permutation_8qL_topology.svg ADDED
model_data/permutation_12qO.md ADDED
@@ -0,0 +1,8 @@
+ # permutation_12qO
+
+ - Qubits: 12
+ - Topology: O
+ - Config: `permutation_12qO.json`
+ - Weights: `permutation_12qO.safetensors`
+
+ ![Topology](images/permutation_12qO_topology.svg)
model_data/permutation_27q.md ADDED
@@ -0,0 +1,8 @@
+ # permutation_27q
+
+ - Qubits: 27
+ - Topology: HEX
+ - Config: `permutation_27q.json`
+ - Weights: `permutation_27q.safetensors`
+
+ ![Topology](images/permutation_27q_topology.svg)
model_data/permutation_33q.md ADDED
@@ -0,0 +1,8 @@
+ # permutation_33q
+
+ - Qubits: 33
+ - Topology: HEX
+ - Config: `permutation_33q.json`
+ - Weights: `permutation_33q.safetensors`
+
+ ![Topology](images/permutation_33q_topology.svg)
model_data/permutation_65q.md ADDED
@@ -0,0 +1,8 @@
+ # permutation_65q
+
+ - Qubits: 65
+ - Topology: HEX
+ - Config: `permutation_65q.json`
+ - Weights: `permutation_65q.safetensors`
+
+ ![Topology](images/permutation_65q_topology.svg)
model_data/permutation_8qL.md ADDED
@@ -0,0 +1,8 @@
+ # permutation_8qL
+
+ - Qubits: 8
+ - Topology: L
+ - Config: `permutation_8qL.json`
+ - Weights: `permutation_8qL.safetensors`
+
+ ![Topology](images/permutation_8qL_topology.svg)