That's not how humans/textbooks write exponential form. Let's use mathform=True (which is the default).
a2l.to_ltx(A, frmt = '{:6.2e}', arraytype = 'array', mathform=True)
Examples.ipynb
josephcslater/array_to_latex
mit
It's easier to make these columns line up than when using f format styling, so I believe it is working. Of course, the typeset $\LaTeX$ will look better than the raw $\LaTeX$. One can also capture the string in the output. It will also do column and row vectors. If the array is 1-D, the default is a row.
A = np.array([1.23456, 23.45678, 456.23, 8.239521])
a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')
A = np.array([[1.23456, 23.45678, 456.23, 8.239521]])
a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')
A = np.array([[1.23456, 23.45678, 456.23, 8.239521]]).T
a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array...
We can use the lambda function method to create a function with personalized defaults. This makes for a much more compact call, and one that can be adjusted for an entire session.
to_tex = lambda A : a2l.to_ltx(A, frmt = '{:6.2e}', arraytype = 'array', mathform=True)
to_tex(A)
to_tex = lambda A : a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array', mathform=True)
to_tex(A)
Pandas DataFrames You can also produce tables or math arrays from Pandas DataFrames.
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(5, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])
df
np.array(df)
a2l.to_ltx(df, arraytype='bmatrix')
a2l.to_ltx(df, arraytype='tabular')
df2 = pd.DataFrame(['cat', 'dog', 'bird', 'snake', 'honey badger'], columns=['pets'])
df2
df_mixed = d...
If we use it with a string, it loops over its characters.
for ch in 'test': print(ch)
Iterators and Generators.ipynb
vravishankar/Jupyter-Books
mit
If we use it with a dictionary, it loops over its keys.
for k in {1:'test1',2:'test'}: print(k)
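Looping over a dict yields its keys by default; the standard `.values()` and `.items()` views iterate values and key/value pairs instead. A small sketch (plain standard Python, not from the original notebook):

```python
d = {1: 'test1', 2: 'test'}

# Iterating the dict itself yields its keys.
keys = [k for k in d]

# The .values() and .items() views iterate values and (key, value) pairs.
vals = [v for v in d.values()]
pairs = [(k, v) for k, v in d.items()]

print(keys)   # [1, 2]
print(pairs)  # [(1, 'test1'), (2, 'test')]
```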
So there are many types of objects which can be used with a for loop. These are called iterable objects. There are many functions which consume these iterables.
",".join(["a","b","c"])
",".join(('this','is','a','test'))
",".join({'key1':'value','key2':'value2'})
Iteration Protocol
x = iter([1,2,3])
print(x)
print(next(x))
print(next(x))
print(next(x))
print(next(x))  # <-- will create an error (StopIteration)
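The iter()/next() calls above are exactly what a for loop performs behind the scenes; a hand-rolled sketch of that mechanism (the helper name manual_for is made up for illustration):

```python
def manual_for(iterable, action):
    """Roughly what `for item in iterable: action(item)` does."""
    it = iter(iterable)          # ask the object for an iterator
    while True:
        try:
            item = next(it)      # fetch the next value
        except StopIteration:    # the loop ends when the iterator is exhausted
            break
        action(item)

collected = []
manual_for([1, 2, 3], collected.append)
print(collected)  # [1, 2, 3]
```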
Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an __iter__() method which returns an object with a __next__() method. If the class defines __next__(), then __iter__() can just return self:
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)
    def __iter__(self):
        return self
    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index...
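The class above is cut off mid-statement; a complete version consistent with its docstring might look like the following sketch:

```python
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1   # step backwards through the sequence
        return self.data[self.index]

print(list(Reverse('spam')))  # ['m', 'a', 'p', 's']
```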
Generators Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on it, the generator resumes where it left off (it remembers all the data values and which statement was last ex...
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]

for ch in reverse('shallow'):
    print(ch)
Anything that can be done with generators can also be done with class-based iterators as described in the previous section. What makes generators so compact is that the __iter__() and __next__() methods are created automatically. Another key feature is that the local variables and execution state are automatically saved betwee...
def samplegen():
    print("begin")
    for i in range(3):
        print("before yield", i)
        yield i
        print("after yield", i)
    print("end")

f = samplegen()
print(next(f))
print(next(f))
print(next(f))
print(next(f))  # raises StopIteration once the generator is exhausted
Generator Expressions Generator expressions are the generator version of list comprehensions. They look like list comprehensions, but return a generator instead of a list.
a = (x * x for x in range(10))
sum(a)
Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.
xvec = [5,16,7]
yvec = [4,12,18]
sum(x * y for x,y in zip(xvec,yvec))

data = 'golf'
list(data[i] for i in range(len(data)-1, -1, -1))

# unique_words = set(word for line in page for word in line.split())
# valedictorian = max((student.gpa, student.name) for student in graduates)
Note that generators provide another way to deal with infinity, for example:
from time import gmtime, strftime

def myGen():
    while True:
        yield strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

myGeneratorInstance = myGen()
next(myGeneratorInstance)
next(myGeneratorInstance)
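Since such a generator never terminates, you need a way to take only finitely many values from it; itertools.islice is the usual tool. A minimal sketch (the naturals generator is a made-up example):

```python
import itertools

def naturals():
    """Infinite stream of natural numbers."""
    n = 0
    while True:
        yield n
        n += 1

# Take the first five values without ever materialising the whole stream.
first_five = list(itertools.islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```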
Use of Generators 1. Easy to Implement Generators can be implemented in a clear and concise way as compared to their iterator class counterpart.
# Iterator Class
class PowTwo:
    def __init__(self, max = 0):
        self.max = max
    def __iter__(self):
        self.n = 0
        return self
    def __next__(self):
        if self.n > self.max:
            raise StopIteration
        result = 2 ** self.n
        self.n += 1
        return result
This was lengthy. Now let's do the same using a generator function.
def PowTwoGen(max = 0):
    n = 0
    while n < max:
        yield 2 ** n
        n += 1
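A quick usage sketch of the generator above (note the n < max test, so PowTwoGen(5) yields five terms starting at 2**0):

```python
def PowTwoGen(max=0):
    n = 0
    while n < max:
        yield 2 ** n
        n += 1

powers = list(PowTwoGen(5))
print(powers)  # [1, 2, 4, 8, 16]
```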
Since generators keep track of these details automatically, the implementation was concise and much cleaner. 2. Memory Efficient A normal function to return a sequence will create the entire sequence in memory before returning the result. This is overkill if the number of items in the sequence is very large. 3. Represen...
def all_even():
    n = 0
    while True:
        yield n
        n += 2
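The memory point can be illustrated with sys.getsizeof: a generator object stays small no matter how many items it will eventually yield, while a list holds every element up front (exact byte counts vary by interpreter):

```python
import sys

as_list = [x * x for x in range(100_000)]   # whole sequence in memory
as_gen = (x * x for x in range(100_000))    # just a small generator object

# The list's size grows with its length; the generator's does not.
print(sys.getsizeof(as_list) > sys.getsizeof(as_gen))  # True
```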
4. Pipelining Generators Generators can be used to pipeline a series of operations. Suppose we are analysing a log file whose fourth column keeps track of IP counts every hour, and we want to sum that column to find the total over the last 5 months.
with open('sells.log') as file:
    ip_col = (line.split()[3] for line in file)   # 4th whitespace-separated field
    per_hr = (int(x) for x in ip_col if x != 'N/A')
    print("IPs =", sum(per_hr))
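The pipeline above can be exercised without a real log file by substituting io.StringIO for the open file; the sells.log layout sketched here (a count in the fourth whitespace-separated field) is an assumption for illustration:

```python
import io

# Stand-in for open('sells.log'): each line carries a count in field 4.
fake_log = io.StringIO(
    "a b c 10\n"
    "a b c N/A\n"
    "a b c 5\n"
)

ip_col = (line.split()[3] for line in fake_log)   # extract the column lazily
per_hr = (int(x) for x in ip_col if x != 'N/A')   # drop missing entries
total = sum(per_hr)                               # consumes the whole pipeline
print("IPs =", total)  # IPs = 15
```

Nothing is read or converted until sum pulls values through the chained generators, so only one line is in flight at a time.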
Using Itertools
import itertools

horses = [1,2,3,4]
races = itertools.permutations(horses)
print(list(races))
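permutations is itself a lazy iterator: it yields tuples on demand, and for n items there are n! of them, so list(races) above has 24 entries. A quick check (standard library only):

```python
import itertools
import math

horses = [1, 2, 3, 4]
races = list(itertools.permutations(horses))

print(len(races))         # 24
print(math.factorial(4))  # 24
print(races[0])           # (1, 2, 3, 4)
```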
Now, we can create a micromagnetic system object.
system = oc.System(name="macrospin")
workshops/Durham/tutorial3_dynamics.ipynb
joommf/tutorial
bsd-3-clause
Let us assume we have a simple Hamiltonian which consists of only Zeeman energy term $$\mathcal{H} = -\mu_{0}M_\text{s}\mathbf{m}\cdot\mathbf{H},$$ where $M_\text{s}$ is the saturation magnetisation, $\mu_{0}$ is the magnetic constant, and $\mathbf{H}$ is the external magnetic field. For more information on defining mi...
H = (0, 0, 2e6)  # external magnetic field (A/m)
system.hamiltonian = oc.Zeeman(H=H)
In the next step we can define the system's dynamics. Let us assume we have $\gamma_{0} = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$ and $\alpha=0.1$.
gamma = 2.211e5  # gyromagnetic ratio (m/As)
alpha = 0.1  # Gilbert damping
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
To check our dynamics equation:
system.dynamics
Before we start running time evolution simulations, we need to initialise the magnetisation. In this case, our magnetisation is pointing in the positive $x$ direction with $M_\text{s} = 8 \times 10^{6} \,\text{A}\,\text{m}^{-1}$. The magnetisation is defined using Field class from the discretisedfield package we import...
initial_m = (1, 0, 0)  # vector in x direction
Ms = 8e6  # magnetisation saturation (A/m)
system.m = df.Field(mesh, value=initial_m, norm=Ms)
Now, we can run the time evolution using TimeDriver for $t=0.1 \,\text{ns}$ and save the magnetisation configuration in $n=200$ steps.
td = oc.TimeDriver()
td.drive(system, t=0.1e-9, n=200)
We can inspect how different system parameters vary with time by showing the system's data table.
system.dt
However, in our case it is much more informative if we plot the time evolution of magnetisation $z$ component $m_{z}(t)$.
system.dt.plot("t", "mz");
Similarly, we can plot all three magnetisation components
system.dt.plot("t", ["mx", "my", "mz"]);
We can see that after some time the macrospin aligns parallel to the external magnetic field in the $z$ direction. We can explore the effect of Gilbert damping $\alpha = 0.2$ on the magnetisation dynamics.
system.dynamics.damping.alpha = 0.2
system.m = df.Field(mesh, value=initial_m, norm=Ms)
td.drive(system, t=0.1e-9, n=200)
system.dt.plot("t", ["mx", "my", "mz"]);
Exercise 1 By looking at the previous example, explore the magnetisation dynamics for $\alpha=0.005$ in the following code cell.
# insert missing code here.
system.m = df.Field(mesh, value=initial_m, norm=Ms)
td.drive(system, t=0.1e-9, n=200)
system.dt.plot("t", ["mx", "my", "mz"]);
Exercise 2 Repeat the simulation with $\alpha=0.1$ and H = (0, 0, -2e6).
# insert missing code here.
system.m = df.Field(mesh, value=initial_m, norm=Ms)
td.drive(system, t=0.1e-9, n=200)
system.dt.plot("t", ["mx", "my", "mz"]);
Exercise 3 Keep using $\alpha=0.1$. Change the field from H = (0, 0, -2e6) to H = (0, -1.41e6, -1.41e6), and plot $m_x(t)$, $m_y(t)$ and $m_z(t)$ as above. Can you explain the (initially non-intuitive) output?
system.hamiltonian.zeeman.H = (0, -1.41e6, -1.41e6)
td.drive(system, t=0.1e-9, n=200)
system.dt.plot("t", ["mx", "my", "mz"]);
2D trajectory interpolation The file trajectory.npz contains 3 NumPy arrays that describe a 2D trajectory of a particle as a function of time: t, which has discrete values of time t[i]. x, which has values of the x position at those times: x[i] = x(t[i]). y, which has values of the y position at those times: y[i] = y(t[i...
dictionary = np.load('trajectory.npz')
t = dictionary['t']
x = dictionary['x']
y = dictionary['y']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
Interpolation/.ipynb_checkpoints/InterpolationEx01-checkpoint.ipynb
JAmarel/Phys202
mit
Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between ${t_{min},t_{max}}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times.
x_approx = interp1d(t, x, kind='cubic')
y_approx = interp1d(t, y, kind='cubic')
newt = np.linspace(t.min(), t.max(), 200)
newx = x_approx(newt)
newy = y_approx(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
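If scipy's interp1d is unavailable, numpy's np.interp gives a linear (rather than cubic) version of the same idea; the t, x, y arrays below are synthetic stand-ins, not the trajectory data:

```python
import numpy as np

# Synthetic stand-ins for the trajectory arrays.
t = np.linspace(0, 4, 40)
x = np.cos(t)
y = np.sin(t)

newt = np.linspace(t.min(), t.max(), 200)
newx = np.interp(newt, t, x)  # piecewise-linear interpolation of x(t)
newy = np.interp(newt, t, y)

print(len(newt), len(newx), len(newy))  # 200 200 200
```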
Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points: For the interpolated points, use a solid line. For the original points, use circles of a different color and no line. Customize your plot to make it effective and beautiful.
plt.figure(figsize=(12,8));
plt.plot(x, y, marker='o', linestyle='', label='Original Data')
plt.plot(newx, newy, label='Interpolated Curve');
plt.legend();
plt.xlabel('X(t)');
plt.ylabel('Y(t)');
plt.title('Position as a Function of Time');

assert True # leave this to grade the trajectory plot
Iris introduction course 4. Joining Cubes Together Learning outcome: by the end of this section, you will be able to apply Iris functionality to combine multiple Iris cubes into a new larger cube. Duration: 30 minutes Overview:<br> 4.1 Merge<br> 4.2 Concatenate<br> 4.3 Exercise<br> 4.4 Summary of the Section Setup
import iris
import numpy as np
course_content/iris_course/4.Joining_Cubes_Together.ipynb
SciTools/courses
gpl-3.0
4.1 Merge<a id='merge'></a> When Iris loads data it tries to reduce the number of cubes returned by collecting together multiple fields with shared metadata into a single multidimensional cube. In Iris, this is known as merging. In order to merge two cubes, they must be identical in everything but a scalar dimension, w...
fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')
cubes = iris.load(fname)
print(cubes)
As you can see, iris.load returns a CubeList containing a single 3D cube. Now let's try loading the file using iris.load_raw:
fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')
raw_cubes = iris.load_raw(fname)
print(raw_cubes)
This time, Iris has returned six 2D cubes. PP files usually contain multiple 2D fields. iris.load_raw has returned a 2D cube for each of these fields, whereas iris.load has merged the cubes together and returned the resulting 3D cube. When we look in detail at the raw 2D cubes, we find that they are identical in ever...
print(raw_cubes[0])
print('--' * 40)
print(raw_cubes[1])
To merge a CubeList, we can use the merge or merge_cube methods. The merge method will try to merge together the cubes in the CubeList in order to return a CubeList of as few cubes as possible. The merge_cube method will do the same as merge but will return a single Cube. If the initial CubeList cannot be merged into ...
merged_cubelist = raw_cubes.merge()
print(merged_cubelist)
merge has returned a CubeList containing a single 3D cube.
merged_cube = merged_cubelist[0]
print(merged_cube)
<div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise: </font></b> <p>Try merging <b><font face="courier" color="black">raw_cubes</font></b> using the <b><font face="courier" color="black">merge_cube</font></b> method.</p> </div>
# # edit space for user code ... #
When we look in more detail at our merged cube, we can see that the time coordinate has become a new dimension, as well as gaining another forecast_period auxiliary coordinate:
print(merged_cube.coord('time'))
print(merged_cube.coord('forecast_period'))
Identifying Merge Problems In order to avoid the Iris merge functionality making inappropriate assumptions about the data, merge is strict with regards to the uniformity of the incoming cubes. For example, if we load the fields from two ensemble members from the GloSea4 model sample data, we see we have 12 fields befor...
fname = iris.sample_data_path('GloSea4', 'ensemble_00[34].pp')
cubes = iris.load_raw(fname, 'surface_temperature')
print(len(cubes))
If we try to merge these 12 cubes we get 2 cubes rather than one:
incomplete_cubes = cubes.merge()
print(incomplete_cubes)
When we look in more detail at these two cubes, what is different between the two? (Hint: One value changes, another is completely missing)
print(incomplete_cubes[0])
print('--' * 40)
print(incomplete_cubes[1])
As mentioned earlier, if merge_cube cannot merge the given CubeList to return a single Cube, it will raise a helpful error message identifying the cause of the failure. <div class="alert alert-block alert-warning"> <b><font color="brown">Exercise: </font></b><p>Try merging the loaded <b><font face="courier" color=...
# # edit space for user code ... #
By inspecting the cubes themselves or using the error message raised when using merge_cube we can see that some cubes are missing the realization coordinate. By adding the missing coordinate, we can trigger a merge of the 12 cubes into a single cube, as expected:
for cube in cubes:
    if not cube.coords('realization'):
        cube.add_aux_coord(iris.coords.DimCoord(np.int32(3), 'realization'))

merged_cube = cubes.merge_cube()
print(merged_cube)
4.2 Concatenate<a id='concatenate'></a> We have seen that merge combines a list of cubes with a common scalar coordinate to produce a single cube with a new dimension created from these scalar values. But what happens if you try to combine cubes along a common dimension? Let's create a CubeList with two cubes that have...
fname = iris.sample_data_path('A1B_north_america.nc')
cube = iris.load_cube(fname)
cube_1 = cube[:10]
cube_2 = cube[10:20]
cubes = iris.cube.CubeList([cube_1, cube_2])
print(cubes)
These cubes should be able to be joined together; after all, they have both come from the same original cube! However, merge returns two cubes, suggesting that these two cubes cannot be merged:
print(cubes.merge())
Merge cannot be used to combine common non-scalar coordinates. Instead we must use concatenate. Concatenate joins together ("concatenates") common non-scalar coordinates to produce a single cube with the common dimension extended. In the below diagram, we see how three 3D cubes are concatenated together to produce a 3D...
print(cubes.concatenate())
<div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise: </font></b> <p>Try concatenating <b><font face="courier" color="black">cubes</font></b> using the <b><font face="courier" color="black">concatenate_cube</font></b> method. </div>
# # edit space for user code ... #
4.3 Section Review Exercise<a id='exercise'></a> The following exercise is designed to give you experience of solving issues that prevent a merge or concatenate from taking place. Part 1 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.1.*.nc files from bei...
# EDIT for user code ... # SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ... # %load solutions/iris_exercise_4.3.1a
b) Try merging the loaded cubes into a single cube. Why does this raise an error?
# user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.1b
c) Fix the cubes such that they can be merged into a single cube. Hint: You can use del to remove an item from a dictionary.
# user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.1c
Part 2 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.5.*.nc files from being joined together into a single cube. a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.5.*.nc'. Store the cubes in a va...
# user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.2a
b) Join the cubes together into a single cube. Should these cubes be merged or concatenated?
# user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.2b
Rabi oscillation experiment <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/experiments/benchmarks/rabi_oscillations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> ...
try:
    import cirq
    import recirq
except ImportError:
    !pip install -U pip
    !pip install --quiet cirq
    !pip install --quiet git+https://github.com/quantumlib/ReCirq
    import cirq
    import recirq

import numpy as np
import cirq_google
docs/benchmarks/rabi_oscillations.ipynb
quantumlib/ReCirq
apache-2.0
In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following things: Prepare the $|0\rangle$ state. Rotate by an angle $\theta$ around the $X$ axis. Measure to see if the result is a ...
working_device = cirq_google.Sycamore
print(working_device)
For this experiment you only need one qubit and you can just pick whichever one you like.
my_qubit = cirq.GridQubit(5, 6)
Once you've chosen your qubit you can build circuits that use it.
from cirq.contrib.svg import SVGCircuit

my_circuit = cirq.Circuit(
    # Rotate the qubit pi/2 radians around the X axis.
    cirq.rx(np.pi / 2).on(my_qubit),
    # Measure the qubit.
    cirq.measure(my_qubit, key="out"),
)
SVGCircuit(my_circuit)
Now you can simulate sampling from your circuit using cirq.Simulator.
sim = cirq.Simulator()
samples = sim.sample(my_circuit, repetitions=10)
You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.
state_vector_before_measurement = sim.simulate(my_circuit[:-1])
sampled_state_vector_after_measurement = sim.simulate(my_circuit)
print("State before measurement:")
print(state_vector_before_measurement)
print("State after measurement:")
print(sampled_state_vector_after_measurement)
You can also examine the outputs from a noisy environment. For example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit:
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.1))
noisy_post_measurement_state = noisy_sim.simulate(my_circuit)
noisy_pre_measurement_state = noisy_sim.simulate(my_circuit[:-1])
print("Noisy state after measurement:" + str(noisy_post_measurement_state))
print("Noisy state before measurement:" + str(n...
2. Parameterized Circuits and Sweeps Now that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle $\theta$:
import sympy

theta = sympy.Symbol("theta")
parameterized_circuit = cirq.Circuit(
    cirq.rx(theta).on(my_qubit),
    cirq.measure(my_qubit, key="out")
)
SVGCircuit(parameterized_circuit)
In the above block you saw that there is a sympy.Symbol that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct cirq.Circuit objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with value...
sim.sample(parameterized_circuit, params={theta: 2}, repetitions=10)
You can also specify multiple values of theta, and get samples back for each value.
sim.sample(parameterized_circuit, params=[{theta: 0.5}, {theta: np.pi}], repetitions=10)
Cirq has shorthand notation you can use to sweep theta over a range of values.
sim.sample(
    parameterized_circuit,
    params=cirq.Linspace(theta, start=0, stop=np.pi, length=5),
    repetitions=5,
)
The result value being returned by sim.sample is a pandas.DataFrame object. Pandas is a common library for working with table data in python. You can use standard pandas methods to analyze and summarize your results.
import pandas

big_results = sim.sample(
    parameterized_circuit,
    params=cirq.Linspace(theta, start=0, stop=np.pi, length=20),
    repetitions=10_000,
)
# big_results is too big to look at. Plot cross tabulated data instead.
pandas.crosstab(big_results.theta, big_results.out).plot()
3. The ReCirq experiment ReCirq comes with a pre-written Rabi oscillation experiment recirq.benchmarks.rabi_oscillations, which performs the steps outlined at the start of this tutorial to create a circuit that exhibits Rabi Oscillations or Rabi Cycles. This method takes a cirq.Sampler, which could be a simulator or a...
import datetime
from recirq.benchmarks import rabi_oscillations

result = rabi_oscillations(
    sampler=noisy_sim, qubit=my_qubit, num_points=50, repetitions=10000
)
result.plot()
Notice that you can tell from the plot that you used the noisy simulator you defined earlier. You can also tell that the amount of depolarization is roughly 10%. 4. Exercise: Find the best qubit As you have seen, you can use Cirq to perform a Rabi oscillation experiment. You can either make the experiment yourself out ...
import hashlib

class SecretNoiseModel(cirq.NoiseModel):
    def noisy_operation(self, op):
        # Hey! No peeking!
        q = op.qubits[0]
        v = hashlib.sha256(str(q).encode()).digest()[0] / 256
        yield cirq.depolarize(v).on(q)
        yield op

secret_noise_sampler = cirq.DensityMatrixSimulator(nois...
raw plot data
import matplotlib.pyplot as plt
%matplotlib inline

amitted = data[data['amitted']==1]
rejected = data[data['amitted']==0]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(amitted['exam1'], amitted['exam2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(rejected['exam1'], rejected['exam2'], s=50, c='r', marke...
ex2/ex2.ipynb
zhenxinlei/SpringSecurity1
epl-1.0
Sigmoid function hypothesis function $$ h_{\theta}(x)= g(\theta^Tx)$$ $$ g(z) =\frac{1}{1+e^{-z}}$$
def sigmoid(z):
    return 1/(1+np.exp(-z))

# sanity check
nums = np.arange(-10, 10, step=1)
fig, ax = plt.subplots()
ax.plot(nums, sigmoid(nums))
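Two quick sanity checks on the definition above: $g(0) = 1/2$, and $g(z) + g(-z) = 1$ for all $z$. A numeric verification:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(0))  # 0.5
z = np.linspace(-10, 10, 21)
print(np.allclose(sigmoid(z) + sigmoid(-z), 1.0))  # True
```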
Cost function $$J(\theta) = \frac{1}{m} \sum_{i=1}^{m}[-y^{(i)} log(h_{\theta}(x^{(i)})) -(1-y^{(i)})log(1-h_{\theta}(x^{(i)}))]$$ Proof: the probabilities when $y = 1$ or $y = 0$ are $$P(y=1 \mid x, \theta)= h_{\theta}(x)$$ $$P(y=0 \mid x, \theta)= 1- h_{\theta}(x)$$ Combining the above two: $$P(y \mid x, \theta)= h_{\theta}(x)^y...
def cost(theta, x, y):
    theta = np.matrix(theta)
    x = np.matrix(x)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(x*theta.T)))
    second = np.multiply((1-y), np.log(1-sigmoid(x*theta.T)))
    return np.sum(first-second)/len(x)

# add a ones column - this makes the matrix multiplication w...
The binary files are small in size and store every floating point number exactly, so you don't have to worry about efficiency or losing precision. You can make lots of checkpoints if you want! Let's delete the old REBOUND simulation (that frees up the memory from that simulation) and then read the binary file we just s...
del sim
sim = rebound.Simulation.from_file("checkpoint.bin")
sim.status()
ipython_examples/Checkpoints.ipynb
eford/rebound
gpl-3.0
SPA output
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
print(tus)
golden = Location(39.742476, -105.1786, 'America/Denver', 1830, 'Golden')
print(golden)
golden_mst = Location(39.742476, -105.1786, 'MST', 1830, 'Golden MST')
print(golden_mst)
berlin = Location(52.5167, 13.3833, 'Europe/Berlin', 34, 'Berlin')
print(ber...
docs/tutorials/solarposition.ipynb
ianctse/pvlib-python
bsd-3-clause
Speed tests
times_loc = times.tz_localize(loc.tz)

%%timeit
pyephemout = pvlib.solarposition.pyephem(times_loc, loc.latitude, loc.longitude)
#ephemout = pvlib.solarposition.ephemeris(times, loc)

%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times_loc, loc.latitude, loc.l...
This numba test will only work properly if you have installed numba.
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
                                                 method='nrel_numba')
The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 b...
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
                                                 method='nrel_numba', numthreads=16)

%%timeit
ephemout = pvlib.solarposition.spa_python(times_loc, loc.latitude, loc....
Symca Symca is used to perform symbolic metabolic control analysis [3,4] on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are ...
mod = pysces.model('lin4_fb')
sc = psctb.Symca(mod)
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Additionally Symca has the following arguments: internal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False) auto_load: If True Symca will try to load a previously saved session. Saved data is unaffected by the internal_fixed argument above (default: Fals...
sc.do_symca()
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
do_symca has the following arguments:

* internal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False)
* auto_save_load: If set to True Symca will attempt to load a previously saved session and only generate new expressions in case of a failure. After generatio...
sc.cc_results
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
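The auto_save_load behaviour amounts to a load-or-regenerate pattern. A minimal sketch of that idea follows; this is an illustration only, not Symca's internals, and the cache path and pickled payload are made up.

```python
import os
import pickle
import tempfile

# Hypothetical cache location for the demo (not Symca's actual save path).
cache = os.path.join(tempfile.gettempdir(), 'symca_demo_cache.pkl')

def generate_expressions():
    # stands in for the expensive symbolic-generation step
    return {'ccJR1_R4': 0.0}

try:
    # try to reuse a previously saved session
    with open(cache, 'rb') as f:
        results = pickle.load(f)
except (OSError, pickle.PickleError):
    # on failure, regenerate and save for next time
    results = generate_expressions()
    with open(cache, 'wb') as f:
        pickle.dump(results, f)

print('ccJR1_R4' in results)
```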
Inspecting an individual control coefficient yields a symbolic expression together with a value:
sc.cc_results.ccJR1_R4
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator, shared by all the control coefficient expressions, that is signified by $\Sigma$. Various properties of this control coefficient can be accessed, such as the:

* Expression (as a SymPy expression)
sc.cc_results.ccJR1_R4.expression
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
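The structure described above — numerator terms over a shared denominator — can be illustrated with a toy calculation. The numbers here are invented for illustration, not taken from the model:

```python
# Toy illustration of a control coefficient built from control-pattern
# numerator terms divided by the common denominator Sigma.
cp_numerators = [3.0, -1.0]  # hypothetical control-pattern numerators
sigma = 4.0                  # common denominator shared by all coefficients
cc_value = sum(cp_numerators) / sigma
print(cc_value)  # 0.5
```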
Numerator expression (as a SymPy expression)
sc.cc_results.ccJR1_R4.numerator
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Denominator expression (as a SymPy expression)
sc.cc_results.ccJR1_R4.denominator
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Value (as a float64)
sc.cc_results.ccJR1_R4.value
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object. The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:
sc.cc_results.ccJR1_R4.CP001
sc.cc_results.ccJR1_R4.CP002
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Each control pattern is numbered arbitrarily starting from 001 and has properties similar to those of the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).

Control pattern percentage contribution

Additionally, control patterns have a percentage field which indicates the degree to wh...
sc.cc_results.ccJR1_R4.CP001.percentage
sc.cc_results.ccJR1_R4.CP002.percentage
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control coefficients (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have differe...
mod.reLoad()
# mod.Vf_4 has a default value of 50
mod.Vf_4 = 0.1
# calculating new steady state
mod.doMca()

# now ccJR1_R4 and its two control patterns should have new values
sc.cc_results.ccJR1_R4  # original value was 0.000
sc.cc_results.ccJR1_R4.CP001  # original value was 0.964
sc.cc_results.ccJR1_R4.CP002  # rese...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
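The absolute-value convention can be sketched in a few lines. This is an illustration of the idea, not Symca's implementation; the values echo the CP001/CP002 example above, where two patterns of opposite sign cancel in the coefficient yet each still contributes half of the total absolute effect.

```python
def pattern_percentages(pattern_values):
    # contribution of each pattern towards the sum of absolute values,
    # so patterns of opposite sign both register
    total = sum(abs(v) for v in pattern_values)
    return [100.0 * abs(v) / total for v in pattern_values]

# two patterns of opposite sign: the coefficient sums to 0, yet each
# pattern contributes 50% under the absolute-value convention
print(pattern_percentages([0.964, -0.964]))
```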
Control pattern graphs

As described under Basic Usage, Symca has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the highlight_patterns method:
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/lin4_fb.dict'

# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
    path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_l...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
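When a script only needs to build a user-relative path (rather than reproduce PySCeS's own path handling), pathlib offers a platform-neutral alternative. This is an illustrative sketch, not a replacement for the unix_to_windows_path call used above:

```python
from pathlib import Path

# Build the layout-file path in a platform-neutral way; separators and
# home-directory resolution are handled by pathlib on every OS.
path_to_layout = Path.home() / 'Pysces' / 'psc' / 'lin4_fb.dict'
print(path_to_layout.name)  # lin4_fb.dict
```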
highlight_patterns has the following optional arguments:

* width: Sets the width of the graph (default: 900).
* height: Sets the height of the graph (default: 500).
* show_dummy_sinks: If True reactants with the "dummy" or "sink" will not be displayed (default: False).
* show_external_modifier_links: If True edges representing...
# clicking on CP002 shows that this control pattern representing
# the chain of effects passing through the feedback loop
# is totally responsible for the observed control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)

# clicking on CP001 shows that this control pat...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Parameter scans

Parameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed above). The procedures for...
percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
                                                          scan_range=numpy.logspace(-1,3,200),
                                                          scan_type='percentage')
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
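The scan range passed above is worth unpacking: numpy.logspace(-1, 3, 200) produces 200 logarithmically spaced points running from 10**-1 to 10**3, which is why the plots below use a log-scaled x-axis. A quick check, assuming numpy is installed:

```python
import numpy

# 200 logarithmically spaced points from 0.1 to 1000
scan_range = numpy.logspace(-1, 3, 200)
print(len(scan_range), scan_range[0], scan_range[-1])
```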
As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons:
percentage_scan_plot = percentage_scan_data.plot()

# set the x-axis to a log scale
percentage_scan_plot.ax.semilogx()

# enable all the lines
percentage_scan_plot.toggle_category('Control Patterns', True)
percentage_scan_plot.toggle_category('CP001', True)
percentage_scan_plot.toggle_category('CP002', True)

# display...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:
value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
                                                     scan_range=numpy.logspace(-1,3,200),
                                                     scan_type='value')
value_scan_plot = value_scan_data.plot()

# set the x-axis to a log scale
value_scan_p...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
Fixed internal metabolites

In the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the internal_fixed argument must be set to True in either the do_symca method, or when instantiating the Symca object. This will typically result in the creation of a cc_results_N object f...
# Create a variant of mod with 'S3' fixed at its steady-state value
mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')

# Instantiate Symca object with the 'internal_fixed' argument set to True
sc_fixed_S3 = psctb.Symca(mod_fixed_S3, internal_fixed=True)

# Run the 'do_symca' method (internal_fixed can also be set...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consist...
sc_fixed_S3.cc_results_1
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the S3 demand block consists of a single reaction, this object also conta...
sc_fixed_S3.cc_results_0
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block.

Saving results

In addition to being able to save parameter scan results (as previously described...
sc.save_results()
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
save_results has the following optional arguments:

* file_name: Specifies a path to save the results to. If None, the path defaults as described above.
* separator: The separator between fields (default: ",")

The contents of the saved data file are as follows:
# the following code requires `pandas` to run
import pandas as pd

# load csv file at default path
results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'

# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
    r...
example_notebooks/Symca.ipynb
PySCeS/PyscesToolbox
bsd-3-clause
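The summary file is plain CSV, so it can also be read without pandas. Below is a sketch using the stdlib csv module on a made-up two-column sample; the real file's columns follow Symca's own layout, which is not reproduced here.

```python
import csv
import io

# Illustrative sample only - the actual cc_summary_0.csv has Symca's layout.
sample = "name,value\nccJR1_R1,0.5\nccJR1_R4,0.0\n"
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]['name'], rows[0]['value'])
```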