victor-villar committed on
Commit 36850fc · 1 Parent(s): 3f6d9af

Fix readme

Files changed (1)
  1. README.md +29 -55
README.md CHANGED
@@ -1,64 +1,63 @@
1
  ---
2
  license: apache-2.0
3
  ---
4
- # Linear Function Quantum Synthesis Circuit Models
5
 
6
  ## Introduction
7
- This repository hosts models for linear function synthesis in quantum circuits trained with RL techniques. The models are specialized for different topologies up to 10 qubits.
8
 
9
- A linear function over n qubits is an n×n binary transformation on X/Z operators (the reversible linear part of a Clifford). Implementing it means finding CX/SWAP-based circuits whose action matches that binary map.
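As a minimal illustration of this binary-matrix picture (conventions vary between libraries; this sketch assumes output bit `target` becomes `x_target XOR x_control` after a CX, which may be the transpose of the convention the models use internally):

```python
def cx_matrix(n, control, target):
    """Binary matrix of a single CX gate over GF(2): identity plus a 1
    at (target, control), so output bit `target` = x_target XOR x_control.
    (One common convention; other libraries may use the transpose.)"""
    m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    m[target][control] = 1
    return m

def apply_linear_function(m, bits):
    """Apply an n x n binary matrix to a bit vector over GF(2)."""
    n = len(bits)
    return [sum(m[i][j] & bits[j] for j in range(n)) % 2 for i in range(n)]

# CX(0, 1) flips bit 1 exactly when bit 0 is set:
print(apply_linear_function(cx_matrix(2, 0, 1), [1, 0]))  # -> [1, 1]
```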
10
-
11
- For each model, there is the environment configs (`.json`) and the trained policy weights (`.safetensors`).
12
 
 
13
 
14
  ## Scope
15
- - Linear-function synthesis models.
16
  - Each model is tied to a specific qubit count and topology; use the matching pair for your target device/layout. To discover the specific topology for each model see the `gateset` property in the model's config.
17
 
18
  ## Contents
19
- - `linear_function_*.json`: model configs for a given qubit count/topology.
20
  - Matching `.safetensors` files: trained policies for each JSON (same filename stem).
21
 
22
-
23
  ## Training
24
 
25
  Training data is entirely synthetic and generated internally at IBM Quantum using custom reinforcement learning environments built on [Qiskit-Gym](https://github.com/AI4quantum/qiskit-gym).
26
 
27
- **Data Collection:** Target operators are generated by sampling random linear-reversible circuits with CNOT gates consistent with the target coupling map. The number of gates scales with the difficulty, which increases as the model learns to solve circuits at that difficulty. No external datasets or third-party circuit repositories are used.
28
 
29
  **PII:** No personal or sensitive data is present or used in any phase of training, as all data is synthetic and generated algorithmically.
30
 
31
  **Infrastructure:** We train the models using IBM's Cognitive Computing Cluster (CCC) using NVIDIA A100 40GB GPUs. The cluster provides a scalable and efficient infrastructure for training.
32
 
33
-
34
  ## Usage example
35
 
36
- Here we provide a code snippet to synthesize a random 10-qubit (`10qL`) linear function. In addition to the model we use the `qiskit-gym` ([repo](https://github.com/AI4quantum/qiskit-gym)) and `twisteRL` ([repo](https://github.com/AI4quantum/twisteRL)) libraries, as well as `qiskit`. You can install all needed libraries by running `pip install qiskit-gym` in your Python virtual environment.
37
 
38
  ```python
39
  from qiskit_gym.rl import RLSynthesis
40
-
41
  from twisterl.utils import pull_hub_algorithm
42
-
43
- from qiskit.circuit.library import LinearFunction
44
- from qiskit.synthesis.linear.linear_matrix_utils import random_invertible_binary_matrix
45
 
46
  local_path = pull_hub_algorithm(
47
- repo_id="Qiskit/ai-transpiler_linear-functions",
48
  model_path="./models",
49
  revision="main",
50
- validate=True
51
  )
52
 
53
  if not local_path:
54
  raise ValueError("Failed to download model from hub")
55
 
56
- num_qubits = 10
57
- matrix = random_invertible_binary_matrix(num_qubits, seed=42)
58
- input_lf = LinearFunction(matrix)
59
- rls = RLSynthesis.from_config_json(f"{local_path}/linear_function_10qL.json", f"{local_path}/linear_function_10qL.safetensors")
60
- qc_lf_output = rls.synth(input_lf, num_searches=10, num_mcts_searches=0, deterministic=False)
61
- print(qc_lf_output)
62
  ```
63
 
64
  <!-- MODEL_INDEX_START -->
@@ -68,38 +67,13 @@ Below is the list of available models with qubit counts and topologies:
68
 
69
  | Model | Qubits | Topology |
70
  | --- | --- | --- |
71
- | [`linear_function_2qL`](linear_function_2qL.md) | 2 | L |
72
- | [`linear_function_3qL`](linear_function_3qL.md) | 3 | L |
73
- | [`linear_function_4qL`](linear_function_4qL.md) | 4 | L |
74
- | [`linear_function_4qY`](linear_function_4qY.md) | 4 | Y |
75
- | [`linear_function_5qL`](linear_function_5qL.md) | 5 | L |
76
- | [`linear_function_5qT`](linear_function_5qT.md) | 5 | T |
77
- | [`linear_function_6qL`](linear_function_6qL.md) | 6 | L |
78
- | [`linear_function_6qT`](linear_function_6qT.md) | 6 | T |
79
- | [`linear_function_6qY`](linear_function_6qY.md) | 6 | Y |
80
- | [`linear_function_7qF`](linear_function_7qF.md) | 7 | F |
81
- | [`linear_function_7qH`](linear_function_7qH.md) | 7 | H |
82
- | [`linear_function_7qL`](linear_function_7qL.md) | 7 | L |
83
- | [`linear_function_7qT`](linear_function_7qT.md) | 7 | T |
84
- | [`linear_function_7qY`](linear_function_7qY.md) | 7 | Y |
85
- | [`linear_function_8qF`](linear_function_8qF.md) | 8 | F |
86
- | [`linear_function_8qJ`](linear_function_8qJ.md) | 8 | J |
87
- | [`linear_function_8qL`](linear_function_8qL.md) | 8 | L |
88
- | [`linear_function_8qT1`](linear_function_8qT1.md) | 8 | T1 |
89
- | [`linear_function_8qT2`](linear_function_8qT2.md) | 8 | T2 |
90
- | [`linear_function_8qY`](linear_function_8qY.md) | 8 | Y |
91
- | [`linear_function_9qF1`](linear_function_9qF1.md) | 9 | F1 |
92
- | [`linear_function_9qF2`](linear_function_9qF2.md) | 9 | F2 |
93
- | [`linear_function_9qH1`](linear_function_9qH1.md) | 9 | H1 |
94
- | [`linear_function_9qH2`](linear_function_9qH2.md) | 9 | H2 |
95
- | [`linear_function_9qH3`](linear_function_9qH3.md) | 9 | H3 |
96
- | [`linear_function_9qJ`](linear_function_9qJ.md) | 9 | J |
97
- | [`linear_function_9qL`](linear_function_9qL.md) | 9 | L |
98
- | [`linear_function_9qT1`](linear_function_9qT1.md) | 9 | T1 |
99
- | [`linear_function_9qT2`](linear_function_9qT2.md) | 9 | T2 |
100
- | [`linear_function_9qY`](linear_function_9qY.md) | 9 | Y |
101
- | [`linear_function_10qL`](linear_function_10qL.md) | 10 | L |
102
  <!-- MODEL_INDEX_END -->
103
 
104
  ## Acknowledgements
105
- The authors acknowledge the IBM Research CCC Service for providing resources that have contributed to the production or processing of the data contained within this data collection.
 
 
1
  ---
2
  license: apache-2.0
3
  ---
4
+ # Permutation Quantum Synthesis Circuit Models
5
 
6
  ## Introduction
7
+ This repository hosts models for permutation synthesis in quantum circuits trained with RL techniques. The models are specialized for different topologies and qubit counts.
8
 
9
+ Permutation circuits reorder the n qubits according to an n-element permutation, which in turn rearranges the computational basis states; they are implemented with SWAP gates constrained by the device topology.
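As a small illustration of what such a permutation does to a basis state (a pure-Python sketch; the index convention here is illustrative and may differ from the models' internal one):

```python
def permute_bits(bits, perm):
    """Apply a qubit permutation to a computational basis state.
    Illustrative convention: perm[i] is the wire that qubit i is
    moved to, so input bit i ends up at output index perm[i]."""
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        out[perm[i]] = b
    return out

# A single SWAP of qubits 0 and 1 is the permutation [1, 0, 2]:
print(permute_bits([1, 0, 0], [1, 0, 2]))  # -> [0, 1, 0]
```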
 
 
10
 
11
+ For each model, there is the environment config (`.json`) and the trained policy weights (`.safetensors`).
12
 
13
  ## Scope
14
+ - Permutation synthesis models only.
15
  - Each model is tied to a specific qubit count and topology; use the matching pair for your target device/layout. To discover the specific topology for each model see the `gateset` property in the model's config.
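For example, the coupling map can be read off a config roughly like this. The exact schema is defined by `qiskit-gym`; the field layout below is a hypothetical illustration only, not the actual format:

```python
import json

# Hypothetical config fragment -- the real schema comes from
# qiskit-gym, and the field names here are illustrative only.
config_text = json.dumps({
    "gateset": [["swap", [0, 1]], ["swap", [1, 2]], ["swap", [2, 3]]]
})

config = json.loads(config_text)
# Each gateset entry pairs a gate name with the qubits it acts on,
# so the qubit pairs together describe the coupling map.
edges = [tuple(qubits) for _, qubits in config["gateset"]]
print(edges)
```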
16
 
17
  ## Contents
18
+ - `permutation_*.json`: model configs for a given qubit count/topology.
19
  - Matching `.safetensors` files: trained policies for each JSON (same filename stem).
20
 
 
21
  ## Training
22
 
23
  Training data is entirely synthetic and generated internally at IBM Quantum using custom reinforcement learning environments built on [Qiskit-Gym](https://github.com/AI4quantum/qiskit-gym).
24
 
25
+ **Data Collection:** Random permutation operators are generated by composing random SWAP sequences consistent with the target coupling map. The number of gates scales with the difficulty, which increases as the model learns to solve circuits at that difficulty. No external datasets or third-party circuit repositories are used.
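The generation idea can be sketched as follows. This is an illustrative stand-in, not the actual IBM training-data generator; here `num_swaps` plays the role of the difficulty:

```python
import random

def random_permutation_target(coupling_map, num_qubits, num_swaps, seed=None):
    """Compose `num_swaps` random SWAPs, each acting on an edge of the
    coupling map, into a target permutation. The returned list holds,
    at each wire position, the qubit currently sitting there."""
    rng = random.Random(seed)
    perm = list(range(num_qubits))
    for _ in range(num_swaps):
        a, b = rng.choice(coupling_map)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

# Line topology on 4 qubits; "difficulty" is the number of swaps.
line = [(0, 1), (1, 2), (2, 3)]
print(random_permutation_target(line, 4, num_swaps=5, seed=0))
```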
26
 
27
  **PII:** No personal or sensitive data is present or used in any phase of training, as all data is synthetic and generated algorithmically.
28
 
29
  **Infrastructure:** We train the models using IBM's Cognitive Computing Cluster (CCC) using NVIDIA A100 40GB GPUs. The cluster provides a scalable and efficient infrastructure for training.
30
 
 
31
  ## Usage example
32
 
33
+ Below is a snippet to synthesize a random 12-qubit permutation. We use `qiskit-gym` ([repo](https://github.com/AI4quantum/qiskit-gym)), `twisteRL` ([repo](https://github.com/AI4quantum/twisteRL)), and `qiskit`. Install dependencies via `pip install qiskit-gym` in your virtual environment.
34
 
35
  ```python
36
  from qiskit_gym.rl import RLSynthesis
 
37
  from twisterl.utils import pull_hub_algorithm
38
+ from qiskit.circuit.library import Permutation
39
+ import numpy as np
 
40
 
41
  local_path = pull_hub_algorithm(
42
+ repo_id="Qiskit/ai-transpiler_permutations",
43
  model_path="./models",
44
  revision="main",
45
+ validate=True,
46
  )
47
 
48
  if not local_path:
49
  raise ValueError("Failed to download model from hub")
50
 
51
+ num_qubits = 12
52
+ seed = 42
53
+ input_perm = np.random.default_rng(seed).permutation(num_qubits).tolist()
54
+
55
+ rls = RLSynthesis.from_config_json(
56
+ f"{local_path}/permutation_12qO.json",
57
+ f"{local_path}/permutation_12qO.safetensors",
58
+ )
59
+ qc_perm_output = rls.synth(input_perm, num_searches=10, num_mcts_searches=0, deterministic=False)
60
+ print(qc_perm_output)
61
  ```
62
 
63
  <!-- MODEL_INDEX_START -->
 
67
 
68
  | Model | Qubits | Topology |
69
  | --- | --- | --- |
70
+ | [`permutation_8qL`](model_data/permutation_8qL.md) | 8 | L |
71
+ | [`permutation_12qO`](model_data/permutation_12qO.md) | 12 | O |
72
+ | [`permutation_27q`](model_data/permutation_27q.md) | 27 | HEX |
73
+ | [`permutation_33q`](model_data/permutation_33q.md) | 33 | HEX |
74
+ | [`permutation_65q`](model_data/permutation_65q.md) | 65 | HEX |
75
  <!-- MODEL_INDEX_END -->
76
 
77
  ## Acknowledgements
78
+ The authors acknowledge the IBM Research CCC Service for providing resources that have contributed to the production or processing of the data contained within this data collection.
79
+