##### Copyright 2020 Google
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Get started with qsimcirq
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.example.org/qsim/tutorials/qsimcirq"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on QuantumLib</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/qsim/blob/master/docs/tutorials/qsimcirq.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/qsim/blob/master/docs/tutorials/qsimcirq.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/qsim/docs/tutorials/qsimcirq.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The qsim library provides a high-performance circuit simulator for Cirq, available through the **qsimcirq** package on PyPI.
## Setup
Install the Cirq and qsimcirq packages:
```
try:
    import cirq
except ImportError:
    !pip install cirq --quiet
    import cirq

try:
    import qsimcirq
except ImportError:
    !pip install qsimcirq --quiet
    import qsimcirq
```
Simulating Cirq circuits with qsim is easy: just define the circuit as you normally would, then create a `QSimSimulator` to perform the simulation. This object implements Cirq's [simulator.py](https://github.com/quantumlib/Cirq/blob/master/cirq/sim/simulator.py) interfaces, so you can drop it in anywhere the basic Cirq simulator is used.
## Full state-vector simulation
qsim is optimized for computing the final state vector of a circuit. Try it by running the example below.
```
# Define qubits and a short circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return the full state vector.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.simulate(circuit)
print(cirq_results)
print()
# Simulate the circuit with qsim and return the full state vector.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.simulate(circuit)
print(qsim_results)
```
To sample from this state, you can use Cirq's `sample_state_vector` function:
```
samples = cirq.sample_state_vector(
    qsim_results.state_vector(), indices=[0, 1], repetitions=10)
print(samples)
```
## Measurement sampling
qsim also supports sampling from user-defined measurement gates.
> *Note*: Since qsim and Cirq use different random number generators, identical runs on both simulators may give different results, even if they use the same seed.
```
# Define a circuit with measurements.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0), cirq.X(q1), cirq.CX(q0, q1),
    cirq.measure(q0, key='qubit_0'),
    cirq.measure(q1, key='qubit_1'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return just the measurement values.
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.run(circuit, repetitions=5)
print(cirq_results)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
```
The warning above highlights an important distinction between the `simulate` and `run` methods:
* `simulate` only executes the circuit once.
  - Sampling from the resulting state is fast, but if there are intermediate measurements, the final state vector depends on the results of those measurements.
* `run` will execute the circuit once for each repetition requested.
  - As a result, sampling is much slower, but intermediate measurements are re-sampled for each repetition. If there are no intermediate measurements, `run` redirects to `simulate` for faster execution.
The warning goes away if intermediate measurements are present:
```
# Define a circuit with intermediate measurements.
q0 = cirq.LineQubit(0)
circuit = cirq.Circuit(
    cirq.X(q0)**0.5, cirq.measure(q0, key='m0'),
    cirq.X(q0)**0.5, cirq.measure(q0, key='m1'),
    cirq.X(q0)**0.5, cirq.measure(q0, key='m2'),
)
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with qsim and return just the measurement values.
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.run(circuit, repetitions=5)
print(qsim_results)
```
## Amplitude evaluation
qsim can also calculate amplitudes for specific output bitstrings.
```
# Define a simple circuit.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))
print("Circuit:")
print(circuit)
print()
# Simulate the circuit with Cirq and return the amplitudes for |00) and |01).
print('Cirq results:')
cirq_simulator = cirq.Simulator()
cirq_results = cirq_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01])
print(cirq_results)
print()
# Simulate the circuit with qsim and return the amplitudes for |00) and |01).
print('qsim results:')
qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01])
print(qsim_results)
```
## Performance benchmark
The code below generates a depth-16 circuit on a 4x5 qubit grid, then runs it on both the basic Cirq simulator and qsim. For a circuit of this size, the difference in runtime can be significant - try it out!
```
import time
# Get a rectangular grid of qubits.
qubits = cirq.GridQubit.rect(4, 5)
# Generates a random circuit on the provided qubits.
circuit = cirq.experiments.random_rotations_between_grid_interaction_layers_circuit(
    qubits=qubits, depth=16)
# Simulate the circuit with Cirq and print the runtime.
cirq_simulator = cirq.Simulator()
cirq_start = time.time()
cirq_results = cirq_simulator.simulate(circuit)
cirq_elapsed = time.time() - cirq_start
print(f'Cirq runtime: {cirq_elapsed} seconds.')
print()
# Simulate the circuit with qsim and print the runtime.
qsim_simulator = qsimcirq.QSimSimulator()
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
```
qsim performance can be tuned further by passing options to the simulator constructor. These options use the same format as the qsim_base binary - a full description can be found in the qsim [usage doc](https://github.com/quantumlib/qsim/blob/master/docs/usage.md).
```
# Use eight threads to parallelize simulation.
options = {'t': 8}
qsim_simulator = qsimcirq.QSimSimulator(options)
qsim_start = time.time()
qsim_results = qsim_simulator.simulate(circuit)
qsim_elapsed = time.time() - qsim_start
print(f'qsim runtime: {qsim_elapsed} seconds.')
```
## Advanced applications: Distributed execution
qsimh (qsim-hybrid) is a second library in the qsim repository that takes a slightly different approach to circuit simulation. When simulating a quantum circuit, it's possible to simplify the execution by decomposing a subset of two-qubit gates into pairs of one-qubit gates with shared indices. This operation is called "slicing" (or "cutting") the gates.
qsimh takes advantage of the "slicing" operation by selecting a set of gates to "slice" and assigning each possible value of the shared indices across a set of executors running in parallel. By adding up the results afterwards, the total state can be recovered.
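To make the idea concrete, here is a minimal, pure-Python sketch of gate slicing, independent of the qsimh API: a CX gate decomposes as CX = |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ X, so each value of the cut index (w = 0 or w = 1) yields a branch of purely one-qubit operations that can be simulated separately, with the branch amplitudes summing to the full state.

```python
# Minimal illustration of gate "slicing" (not the qsimh API):
# CX = |0><0| (x) I  +  |1><1| (x) X, so each cut value w in {0, 1}
# gives a branch containing only one-qubit operations.

def apply_1q(gate, state, qubit, n=2):
    """Apply a 2x2 matrix to `qubit` of an n-qubit state vector."""
    new = [0j] * (1 << n)
    shift = n - 1 - qubit  # qubit 0 is the most significant bit
    for i, amp in enumerate(state):
        bit = (i >> shift) & 1
        for out in (0, 1):
            new[i ^ ((bit ^ out) << shift)] += gate[out][bit] * amp
    return new

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
H = [[2**-0.5, 2**-0.5], [2**-0.5, -2**-0.5]]
P0 = [[1, 0], [0, 0]]  # |0><0|
P1 = [[0, 0], [0, 1]]  # |1><1|

# Circuit: H on qubit 0, then CX(0, 1), starting from |00>.
state = apply_1q(H, [1, 0, 0, 0], 0)

# Branch w=0: control projected to |0>, identity on the target.
branch0 = apply_1q(I, apply_1q(P0, state, 0), 1)
# Branch w=1: control projected to |1>, X on the target.
branch1 = apply_1q(X, apply_1q(P1, state, 0), 1)

# Summing the branches recovers the Bell state (|00> + |11>)/sqrt(2).
total = [b0 + b1 for b0, b1 in zip(branch0, branch1)]
print(total)
```

Each branch here could run on a separate machine with no communication between them; qsimh automates exactly this bookkeeping for cuts across many gates.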
```
# Pick a pair of qubits.
q0 = cirq.GridQubit(0, 0)
q1 = cirq.GridQubit(0, 1)
# Create a circuit that entangles the pair.
circuit = cirq.Circuit(
    cirq.H(q0), cirq.CX(q0, q1), cirq.X(q1)
)
print("Circuit:")
print(circuit)
```
In order to let qsimh know how we want to split up the circuit, we need to pass it some additional options. More detail on these can be found in the qsim [usage doc](https://github.com/quantumlib/qsim/blob/master/docs/usage.md), but the fundamentals are explained below.
```
options = {}
# 'k' indicates the qubits on one side of the cut.
# We'll use qubit 0 for this.
options['k'] = [0]
# 'p' and 'r' control when values are assigned to cut indices.
# There are some intricacies in choosing values for these options,
# but for now we'll set p=1 and r=0.
# This allows us to pre-assign the value of the CX indices
# and distribute its execution to multiple jobs.
options['p'] = 1
options['r'] = 0
# 'w' indicates the value pre-assigned to the cut.
# This should change for each execution.
options['w'] = 0
# Create the qsimh simulator with those options.
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_0 = qsimh_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_0)
```
Now to run the other side of the cut...
```
options['w'] = 1
qsimh_simulator = qsimcirq.QSimhSimulator(options)
results_1 = qsimh_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print(results_1)
```
...and add the two together. The results of a normal qsim simulation are shown for comparison.
```
results = [r0 + r1 for r0, r1 in zip(results_0, results_1)]
print("qsimh results:")
print(results)

qsim_simulator = qsimcirq.QSimSimulator()
qsim_results = qsim_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01, 0b10, 0b11])
print("qsim results:")
print(qsim_results)
```
The key point to note here is that `results_0` and `results_1` are completely independent - they can be run in parallel on two separate machines, with no communication between the two. Getting the full result requires `2^p` executions, but each individual result is much cheaper to calculate than trying to do the whole circuit at once.
Synergetics<br/>[Oregon Curriculum Network](http://4dsolutions.net/ocn/)
<h3 align="center">Computing Volumes in XYZ and IVM units</h3>
<h4 align="center">by Kirby Urner, July 2016</h4>

A cube is composed of 24 identical, non-regular tetrahedrons, each with a corner at the cube's center, an edge from the cube's center to a face center, and two more edges to adjacent cube corners on that face, defining six edges in all (Fig. 1).
If we define the cube's edges to be √2 then the whole cube would have volume √2 * √2 * √2 in XYZ units.
However, in IVM units, the very same cube has a volume of 3, owing to the differently-shaped volume unit, a tetrahedron of edges 2, inscribed in this same cube. [Fig. 986.210](http://www.rwgrayprojects.com/synergetics/findex/fx0900.html) from *Synergetics*:

Those lengths would be in R-units, where R is the radius of a unit sphere. In D-units, twice as long (D = 2R), the tetrahedron has edges 1 and the cube has edges √2/2.
By XYZ we mean the XYZ coordinate system of René Descartes (1596 – 1650).
By IVM we mean the "octet-truss", a space-frame consisting of tetrahedrons and octahedrons in a space-filling matrix, with twice as many tetrahedrons as octahedrons.

The tetrahedron and octahedron have relative volumes of 1:4. The question then becomes, how to superimpose the two.
The canonical solution is to start with unit-radius balls (spheres) of radius R. R = 1 in other words, whereas D, the diameter, is 2. Alternatively, we may set D = 1 and R = 0.5, keeping the same 2:1 ratio for D:R.
The XYZ cube has edges R, whereas the IVM tetrahedron has edges D. That relative sizing convention brings their respective volumes fairly close together, with the cube's volume exceeding the tetrahedron's by about six percent.
```
import math
xyz_volume = math.sqrt(2)**3
ivm_volume = 3
print("XYZ units:", xyz_volume)
print("IVM units:", ivm_volume)
print("Conversion constant:", ivm_volume/xyz_volume)
```
The Python code below encodes a Tetrahedron type based solely on its six edge lengths. The code makes no attempt to determine the consequent angles.
A complicated volume formula, mined from the history books and streamlined by mathematician Gerald de Jong, outputs the volume of said tetrahedron in both IVM and XYZ units.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The [unittests](http://pythontesting.net/framework/unittest/unittest-introduction/) that follow assure it's producing the expected results. The formula bears great resemblance to the one by [Piero della Francesca](https://mathpages.com/home/kmath424/kmath424.htm).
```
from math import sqrt as rt2
from qrays import Qvector, Vector

R = 0.5
D = 1.0

S3 = pow(9/8, 0.5)
root2 = rt2(2)
root3 = rt2(3)
root5 = rt2(5)
root6 = rt2(6)
PHI = (1 + root5)/2.0

class Tetrahedron:
    """
    Takes six edges of tetrahedron with faces
    (a,b,d)(b,c,e)(c,a,f)(d,e,f) -- returns volume
    in ivm and xyz units
    """

    def __init__(self, a, b, c, d, e, f):
        self.a, self.a2 = a, a**2
        self.b, self.b2 = b, b**2
        self.c, self.c2 = c, c**2
        self.d, self.d2 = d, d**2
        self.e, self.e2 = e, e**2
        self.f, self.f2 = f, f**2

    def ivm_volume(self):
        ivmvol = ((self._addopen() - self._addclosed() - self._addopposite())/2) ** 0.5
        return ivmvol

    def xyz_volume(self):
        xyzvol = rt2(8/9) * self.ivm_volume()
        return xyzvol

    def _addopen(self):
        a2, b2, c2, d2, e2, f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
        sumval = f2*a2*b2
        sumval += d2 * a2 * c2
        sumval += a2 * b2 * e2
        sumval += c2 * b2 * d2
        sumval += e2 * c2 * a2
        sumval += f2 * c2 * b2
        sumval += e2 * d2 * a2
        sumval += b2 * d2 * f2
        sumval += b2 * e2 * f2
        sumval += d2 * e2 * c2
        sumval += a2 * f2 * e2
        sumval += d2 * f2 * c2
        return sumval

    def _addclosed(self):
        a2, b2, c2, d2, e2, f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
        sumval = a2 * b2 * d2
        sumval += d2 * e2 * f2
        sumval += b2 * c2 * e2
        sumval += a2 * c2 * f2
        return sumval

    def _addopposite(self):
        a2, b2, c2, d2, e2, f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
        sumval = a2 * e2 * (a2 + e2)
        sumval += b2 * f2 * (b2 + f2)
        sumval += c2 * d2 * (c2 + d2)
        return sumval

def make_tet(v0, v1, v2):
    """
    three edges from any corner, remaining three edges computed
    """
    tet = Tetrahedron(v0.length(), v1.length(), v2.length(),
                      (v0-v1).length(), (v1-v2).length(), (v2-v0).length())
    return tet.ivm_volume(), tet.xyz_volume()

tet = Tetrahedron(D, D, D, D, D, D)
print(tet.ivm_volume())
```
The `make_tet` function takes three vectors from a common corner, expressed as coordinate vectors, and computes the three remaining edge lengths, thereby getting the information it needs to use the Tetrahedron class as before.
```
import unittest
from qrays import Vector, Qvector

class Test_Tetrahedron(unittest.TestCase):

    def test_unit_volume(self):
        tet = Tetrahedron(D, D, D, D, D, D)
        self.assertEqual(tet.ivm_volume(), 1, "Volume not 1")

    def test_e_module(self):
        e0 = D
        e1 = root3 * PHI**-1
        e2 = rt2((5 - root5)/2)
        e3 = (3 - root5)/2
        e4 = rt2(5 - 2*root5)
        e5 = 1/PHI
        tet = Tetrahedron(e0, e1, e2, e3, e4, e5)
        self.assertTrue(1/23 > tet.ivm_volume()/8 > 1/24, "Wrong E-mod")

    def test_unit_volume2(self):
        tet = Tetrahedron(R, R, R, R, R, R)
        self.assertAlmostEqual(float(tet.xyz_volume()), 0.117851130)

    def test_phi_edge_tetra(self):
        tet = Tetrahedron(D, D, D, D, D, PHI)
        self.assertAlmostEqual(float(tet.ivm_volume()), 0.70710678)

    def test_right_tetra(self):
        e = pow((root3/2)**2 + (root3/2)**2, 0.5)  # right tetrahedron
        tet = Tetrahedron(D, D, D, D, D, e)
        self.assertAlmostEqual(tet.xyz_volume(), 1)

    def test_quadrant(self):
        qA = Qvector((1,0,0,0))
        qB = Qvector((0,1,0,0))
        qC = Qvector((0,0,1,0))
        tet = make_tet(qA, qB, qC)
        self.assertAlmostEqual(tet[0], 0.25)

    def test_octant(self):
        x = Vector((0.5, 0, 0))
        y = Vector((0, 0.5, 0))
        z = Vector((0, 0, 0.5))
        tet = make_tet(x, y, z)
        self.assertAlmostEqual(tet[1], 1/6, 5)  # good to 5 places

    def test_quarter_octahedron(self):
        a = Vector((1,0,0))
        b = Vector((0,1,0))
        c = Vector((0.5, 0.5, root2/2))
        tet = make_tet(a, b, c)
        self.assertAlmostEqual(tet[0], 1, 5)  # good to 5 places

    def test_xyz_cube(self):
        a = Vector((0.5, 0.0, 0.0))
        b = Vector((0.0, 0.5, 0.0))
        c = Vector((0.0, 0.0, 0.5))
        R_octa = make_tet(a, b, c)
        self.assertAlmostEqual(6 * R_octa[1], 1, 4)  # good to 4 places

    def test_s3(self):
        D_tet = Tetrahedron(D, D, D, D, D, D)
        a = Vector((0.5, 0.0, 0.0))
        b = Vector((0.0, 0.5, 0.0))
        c = Vector((0.0, 0.0, 0.5))
        R_cube = 6 * make_tet(a, b, c)[1]
        self.assertAlmostEqual(D_tet.xyz_volume() * S3, R_cube, 4)

    def test_martian(self):
        p = Qvector((2,1,0,1))
        q = Qvector((2,1,1,0))
        r = Qvector((2,0,1,1))
        result = make_tet(5*q, 2*p, 2*r)
        self.assertAlmostEqual(result[0], 20, 7)

    def test_phi_tet(self):
        "edges from common vertex: phi, 1/phi, 1"
        p = Vector((1, 0, 0))
        q = Vector((1, 0, 0)).rotz(60) * PHI
        r = Vector((0.5, root3/6, root6/3)) * 1/PHI
        result = make_tet(p, q, r)
        self.assertAlmostEqual(result[0], 1, 7)

    def test_phi_tet_2(self):
        p = Qvector((2,1,0,1))
        q = Qvector((2,1,1,0))
        r = Qvector((2,0,1,1))
        result = make_tet(PHI*q, (1/PHI)*p, r)
        self.assertAlmostEqual(result[0], 1, 7)

    def test_phi_tet_3(self):
        T = Tetrahedron(PHI, 1/PHI, 1.0,
                        root2, root2/PHI, root2)
        result = T.ivm_volume()
        self.assertAlmostEqual(result, 1, 7)

    def test_koski(self):
        a = 1
        b = PHI ** -1
        c = PHI ** -2
        d = root2 * PHI ** -1
        e = root2 * PHI ** -2
        f = root2 * PHI ** -1
        T = Tetrahedron(a, b, c, d, e, f)
        result = T.ivm_volume()
        self.assertAlmostEqual(result, PHI ** -3, 7)

a = Test_Tetrahedron()
R = 0.5
D = 1.0
suite = unittest.TestLoader().loadTestsFromModule(a)
unittest.TextTestRunner().run(suite)
```
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/41211295565/in/album-72157624750749042/" title="Martian Multiplication"><img src="https://farm1.staticflickr.com/907/41211295565_59145e2f63.jpg" width="500" height="312" alt="Martian Multiplication"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The above tetrahedron has a=2, b=2, c=5, for a volume of 20. The remaining three lengths have not been computed as it's sufficient to know only a, b, c if the angles between them are those of the regular tetrahedron.
That's how IVM volume is computed: multiply a * b * c from a regular tetrahedron corner, then "close the lid" to see the volume.
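That rule can be spot-checked without the `qrays` module. The sketch below re-implements the de Jong formula in plain Python (mirroring the Tetrahedron class above): at a regular-tetrahedron corner the angle between any two edges is 60°, so each "lid" edge follows from the law of cosines (d² = a² + b² - ab), and the resulting IVM volume should equal a·b·c.

```python
import math

def ivm_volume(a, b, c, d, e, f):
    """Gerald de Jong's formula: IVM volume from six edges,
    with faces (a,b,d)(b,c,e)(c,a,f)(d,e,f)."""
    A, B, C, D_, E, F = (x**2 for x in (a, b, c, d, e, f))
    open_ = (F*A*B + D_*A*C + A*B*E + C*B*D_ + E*C*A + F*C*B
             + E*D_*A + B*D_*F + B*E*F + D_*E*C + A*F*E + D_*F*C)
    closed = A*B*D_ + D_*E*F + B*C*E + A*C*F
    opposite = A*E*(A + E) + B*F*(B + F) + C*D_*(C + D_)
    return math.sqrt((open_ - closed - opposite) / 2)

# Edges a, b, c fan out from a regular-tetrahedron corner, so the
# angle between any two of them is 60 degrees; law of cosines:
#   d^2 = a^2 + b^2 - 2ab*cos(60) = a^2 + b^2 - ab
a, b, c = 2, 4, 5
d = math.sqrt(a*a + b*b - a*b)
e = math.sqrt(b*b + c*c - b*c)
f = math.sqrt(c*c + a*a - c*a)

vol = ivm_volume(a, b, c, d, e, f)
print(vol)  # a * b * c = 40, to within floating point error
```

The computed d, e, f agree with the hard-coded lengths in the next cell, which uses the same a=2, b=4, c=5 example.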
```
a = 2
b = 4
c = 5
d = 3.4641016151377544
e = 4.58257569495584
f = 4.358898943540673
tetra = Tetrahedron(a,b,c,d,e,f)
print("IVM volume of tetra:", round(tetra.ivm_volume(),5))
```
Let's define a MITE, one of these 24 identical space-filling tetrahedrons, with reference to D=1, R=0.5, as this is how our Tetrahedron class is calibrated. The cube's 12 edges will all be √2/2.
Edges 'a' 'b' 'c' fan out from the cube center, with 'b' going up to a face center, with 'a' and 'c' to adjacent ends of the face's edge.
From the cube's center to mid-face is √2/4 (half an edge); that's our 'b'. 'a' and 'c' are each half the cube's body diagonal: √(3/2)/2, or √(3/8).
Edges 'd', 'e' and 'f' define the facet opposite the cube's center.
'd' and 'e' are both half face diagonals or 0.5, whereas 'f' is a cube edge, √2/2. This gives us our tetrahedron:
```
b = rt2(2)/4
a = c = rt2(3/8)
d = e = 0.5
f = rt2(2)/2
mite = Tetrahedron(a, b, c, d, e, f)
print("IVM volume of Mite:", round(mite.ivm_volume(),5))
print("XYZ volume of Mite:", round(mite.xyz_volume(),5))
```
Allowing for floating point error, this space-filling right tetrahedron has a volume of 0.125 or 1/8. Since 24 of them form a cube, said cube has a volume of 3. The XYZ volume, on the other hand, is what we'd expect from a regular tetrahedron of edges 0.5 in the current calibration system.
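As a quick arithmetic check of that claim:

```python
mite_ivm = 1/8           # IVM volume of one MITE, from the cell above
cube_ivm = 24 * mite_ivm
print(cube_ivm)  # 3.0 -- the cube's volume in IVM units
```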
```
regular = Tetrahedron(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)
print("MITE volume in XYZ units:", round(regular.xyz_volume(),5))
print("XYZ volume of 24-Mite Cube:", round(24 * regular.xyz_volume(),5))
```
The MITE (minimum tetrahedron) further dissects into component modules, a left and right A module, then either a left or right B module. Outwardly, the positive and negative MITEs look the same. Here are some drawings from the research of R. Buckminster Fuller, the chief popularizer of the A and B modules.

In a different Jupyter Notebook, we could run these tetrahedra through our volume computer to discover both As and Bs have a volume of 1/24 in IVM units.
Instead, let's take a look at the E-module and compute its volume.
<br />
The black hub is at the center of the RT, as shown here...
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24971714468/in/dateposted-public/" title="E module with origin"><img src="https://farm5.staticflickr.com/4516/24971714468_46e14ce4b5_z.jpg" width="640" height="399" alt="E module with origin"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<b>RT center is the black hub (Koski with vZome)</b>
</div>
```
from math import sqrt as rt2
from tetravolume import make_tet, Vector
ø = (rt2(5)+1)/2
e0 = Black_Yellow = rt2(3)*ø**-1
e1 = Black_Blue = 1
e3 = Yellow_Blue = (3 - rt2(5))/2
e6 = Black_Red = rt2((5 - rt2(5))/2)
e7 = Blue_Red = 1/ø
# E-mod is a right tetrahedron, so xyz is easy
v0 = Vector((Black_Blue, 0, 0))
v1 = Vector((Black_Blue, Yellow_Blue, 0))
v2 = Vector((Black_Blue, 0, Blue_Red))
# assumes R=0.5 so computed result is 8x needed
# volume, ergo divide by 8.
ivm, xyz = make_tet(v0,v1,v2)
print("IVM volume:", round(ivm/8, 5))
print("XYZ volume:", round(xyz/8, 5))
```
This information is being shared around Portland in various contexts. Below, an image from a hands-on workshop in 2010 organized by the Portland Free School.

Here you have a collection of guided exercises for the first class on Python. <br>
The exercises are divided by topic, following the topics reviewed during the theory session. For each topic there are some mandatory exercises, plus optional ones you are invited to do if you still have time after the mandatory exercises. <br>
Remember that you have 5 hours to solve these exercises, after which we will review the most interesting ones together. If you don't finish all the exercises, you can work on them tonight or tomorrow.
At the end of the class, we will upload the code with the solutions of the exercises so that you can review them again if needed. If you still have not finished some exercises, try to do them first by yourself, before taking a look at the solutions: you are doing these exercises for yourself, so it is always the best to do them your way first, as it is the fastest way to learn!
**Exercise 1.1:** The cover price of a book is 24.95 EUR, but bookstores get a 40 percent discount. Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. **Calculate the total wholesale costs for 60 copies**.
```
# Your Code Here
# First, define the variables:
bookPrice = 24.95
discount = 40/100
totalNumberOfBooks = 60
shippingFirstCopy = 3
shippingSubsequentCopies = 0.75

wholeSalePrice = bookPrice - (bookPrice * discount)
totalBookCost = wholeSalePrice * totalNumberOfBooks
totalShippingCost = shippingFirstCopy + shippingSubsequentCopies * (totalNumberOfBooks - 1)

# Now create a function for calculating the total wholesale cost
def wholeSaleCost():
    totalWholeSaleCost = totalBookCost + totalShippingCost
    return totalWholeSaleCost

wholeSaleCost()
```
**Exercise 1.2:** When something is wrong with your code, Python will raise errors. Often these will be "syntax errors" that signal that something is wrong with the form of your code (e.g., the code in the previous exercise raised a `SyntaxError`). There are also "runtime errors", which signal that your code was in itself formally correct, but that something went wrong during the code's execution. A good example is the `ZeroDivisionError`, which indicates that you tried to divide a number by zero (which, as you may know, is not allowed). Try to make Python **raise such a `ZeroDivisionError`.**
```
# Your Code Here
# Dividing by zero raises a ZeroDivisionError:
classAge = 28
numberInClass = 0
averageAge = classAge / numberInClass
print(averageAge)
```
**Exercise 5.1**: Create a countdown function that starts at a certain count, and counts down to zero. Instead of zero, print "Blast off!". Use a `for` loop.
```
# Countdown
def countdown(start=20):
    """
    Prints start, start-1, ..., 1, then "Blast off!".
    """
    for i in range(start, 0, -1):
        print(i)
    print("Blast off!")

countdown()
```
**Exercise 5.2:** Write and test three functions that return the largest, the smallest, and the number of dividables by 3 in a given collection of numbers. Use the algorithm described earlier in the Part 5 lecture :)
```
# Your functions
def largest(numbers):
    return max(numbers)

def smallest(numbers):
    return min(numbers)

def count_dividable_by_3(numbers):
    count = 0
    for num in numbers:
        if num % 3 == 0:
            count += 1
    return count

a = [2, 4, 6, 12, 15, 99, 100]
print(largest(a))               # 100
print(smallest(a))              # 2
print(count_dividable_by_3(a))  # 4
```
# Chapter 9
*Modeling and Simulation in Python*
Copyright 2021 Allen Downey
License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# check if the libraries we need are installed
try:
    import pint
except ImportError:
    !pip install pint
    import pint

try:
    from modsim import *
except ImportError:
    !pip install modsimpy
    from modsim import *
```
The following displays SymPy expressions and provides the option of showing results in LaTeX format.
```
from sympy.printing import latex

def show(expr, show_latex=False):
    """Display a SymPy expression.

    expr: SymPy expression
    show_latex: boolean
    """
    if show_latex:
        print(latex(expr))
    return expr
```
### Analysis with SymPy
Create a symbol for time.
```
import sympy as sp
t = sp.symbols('t')
t
```
If you combine symbols and numbers, you get symbolic expressions.
```
expr = t + 1
expr
```
The result is an `Add` object, which just represents the sum without trying to compute it.
```
type(expr)
```
`subs` can be used to replace a symbol with a number, which allows the addition to proceed.
```
expr.subs(t, 2)
```
`f` is a special class of symbol that represents a function.
```
f = sp.Function('f')
f
```
The type of `f` is `UndefinedFunction`
```
type(f)
```
SymPy understands that `f(t)` means `f` evaluated at `t`, but it doesn't try to evaluate it yet.
```
f(t)
```
`diff` returns a `Derivative` object that represents the time derivative of `f`
```
dfdt = sp.diff(f(t), t)
dfdt
type(dfdt)
```
We need a symbol for `alpha`
```
alpha = sp.symbols('alpha')
alpha
```
Now we can write the differential equation for proportional growth.
```
eq1 = sp.Eq(dfdt, alpha*f(t))
eq1
```
And use `dsolve` to solve it. The result is the general solution.
```
solution_eq = sp.dsolve(eq1)
solution_eq
```
We can tell it's a general solution because it contains an unspecified constant, `C1`.
In this example, finding the particular solution is easy: we just replace `C1` with `p_0`
```
C1, p_0 = sp.symbols('C1 p_0')
particular = solution_eq.subs(C1, p_0)
particular
```
In the next example, we have to work a little harder to find the particular solution.
### Solving the quadratic growth equation
We'll use the (r, K) parameterization, so we'll need two more symbols:
```
r, K = sp.symbols('r K')
```
Now we can write the differential equation.
```
eq2 = sp.Eq(sp.diff(f(t), t), r * f(t) * (1 - f(t)/K))
eq2
```
And solve it.
```
solution_eq = sp.dsolve(eq2)
solution_eq
```
The result, `solution_eq`, contains `rhs`, which is the right-hand side of the solution.
```
general = solution_eq.rhs
general
```
We can evaluate the right-hand side at $t=0$
```
at_0 = general.subs(t, 0)
at_0
```
Now we want to find the value of `C1` that makes `f(0) = p_0`.
So we'll create the equation `at_0 = p_0` and solve for `C1`. Because this is just an algebraic identity, not a differential equation, we use `solve`, not `dsolve`.
The result from `solve` is a list of solutions. In this case, [we have reason to expect only one solution](https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem), but we still get a list, so we have to use the bracket operator, `[0]`, to select the first one.
```
solutions = sp.solve(sp.Eq(at_0, p_0), C1)
type(solutions), len(solutions)
value_of_C1 = solutions[0]
value_of_C1
```
Now in the general solution, we want to replace `C1` with the value of `C1` we just figured out.
```
particular = general.subs(C1, value_of_C1)
particular
```
The result is complicated, but SymPy provides a method that tries to simplify it.
```
particular = sp.simplify(particular)
particular
```
Often simplicity is in the eye of the beholder, but that's about as simple as this expression gets.
Just to double-check, we can evaluate it at `t=0` and confirm that we get `p_0`
```
particular.subs(t, 0)
```
This solution is called the [logistic function](https://en.wikipedia.org/wiki/Population_growth#Logistic_equation).
In some places you'll see it written in a different form:
$f(t) = \frac{K}{1 + A e^{-rt}}$
where $A = (K - p_0) / p_0$.
We can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:
```
A = (K - p_0) / p_0
A
logistic = K / (1 + A * sp.exp(-r*t))
logistic
```
To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.
```
sp.simplify(particular - logistic)
```
This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).
But if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.
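A numeric spot-check, independent of SymPy, also builds confidence. The sketch below evaluates both forms of the logistic function at a few points with arbitrary parameter values; the first function uses one common arrangement of the particular solution (an assumption about how `dsolve`'s output simplifies, algebraically equivalent to the form above).

```python
import math

def logistic_particular(t, r, K, p0):
    # One common arrangement of the particular solution:
    # f(t) = K * p0 * e^(rt) / (K + p0 * (e^(rt) - 1))
    ert = math.exp(r * t)
    return K * p0 * ert / (K + p0 * (ert - 1))

def logistic_alt(t, r, K, p0):
    # Alternative textbook form: K / (1 + A e^(-rt)), A = (K - p0)/p0
    A = (K - p0) / p0
    return K / (1 + A * math.exp(-r * t))

r, K, p0 = 0.3, 100.0, 5.0
for t in [0, 1, 5, 20]:
    assert math.isclose(logistic_particular(t, r, K, p0),
                        logistic_alt(t, r, K, p0), rel_tol=1e-12)
print("both forms agree at the sampled points")
```

Agreement at a handful of points is, of course, weaker evidence than SymPy's symbolic `simplify` returning 0, but it catches transcription errors quickly.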
### Exercises
**Exercise:** Solve the quadratic growth equation using the alternative parameterization
$\frac{df(t)}{dt} = \alpha f(t) + \beta f^2(t) $
```
beta = sp.symbols('beta')
beta
eq3 = sp.Eq(sp.diff(f(t), t), alpha * f(t) + beta * f(t)**2)
eq3
solution_eq3 = sp.dsolve(eq3)
solution_eq3
general3 = solution_eq3.rhs
general3
at_03 = general3.subs(t, 0)
at_03
solutions3 = sp.solve(sp.Eq(at_03, p_0), C1)
solutions3[0]
particular3 = sp.simplify(general3.subs(C1, solutions3[0]))
particular3
```
**Exercise:** Use [WolframAlpha](https://www.wolframalpha.com/) to solve the quadratic growth model, using either or both forms of parameterization:
df(t) / dt = alpha f(t) + beta f(t)^2
or
df(t) / dt = r f(t) (1 - f(t)/K)
Find the general solution and also the particular solution where `f(0) = p_0`.
```
# Please see the solution above.
```
```
!pip install yacs
!pip install gdown
import os, sys, time
import argparse
import importlib
from tqdm.notebook import tqdm
from imageio import imread
import torch
import numpy as np
import matplotlib.pyplot as plt
```
### Download pretrained
- We use HoHoNet w/ hardnet encoder in this demo
- Download other versions [here](https://drive.google.com/drive/folders/1raT3vRXnQXRAQuYq36dE-93xFc_hgkTQ?usp=sharing)
```
PRETRAINED_PTH = 'ckpt/mp3d_layout_HOHO_layout_aug_efficienthc_Transen1_resnet34/ep300.pth'
if not os.path.exists(PRETRAINED_PTH):
    os.makedirs(os.path.split(PRETRAINED_PTH)[0], exist_ok=True)
    !gdown 'https://drive.google.com/uc?id=1OU9uyuNiswkPovJuvG3sevm3LqHJgazJ' -O $PRETRAINED_PTH
```
### Download image
- We use an out-of-distribution image from PanoContext
```
if not os.path.exists('assets/pano_asmasuxybohhcj.png'):
    !gdown 'https://drive.google.com/uc?id=1CXl6RPK6yPRFXxsa5OisHV9KwyRcejHu' -O 'assets/pano_asmasuxybohhcj.png'
rgb = imread('assets/pano_asmasuxybohhcj.png')
plt.imshow(rgb)
plt.show()
```
### Load model config
- We use HoHoNet w/ hardnet encoder in this demo
- Find other versions in `mp3d_depth/` and `s2d3d_depth`
```
from lib.config import config
config.defrost()
config.merge_from_file('config/mp3d_layout/HOHO_layout_aug_efficienthc_Transen1_resnet34.yaml')
config.freeze()
```
### Load model
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('device:', device)
model_file = importlib.import_module(config.model.file)
model_class = getattr(model_file, config.model.modelclass)
net = model_class(**config.model.kwargs)
net.load_state_dict(torch.load(PRETRAINED_PTH, map_location=device))
net = net.eval().to(device)
```
### Move the image into a tensor, normalize to [0, 1], resize to 512x1024
```
x = torch.from_numpy(rgb).permute(2,0,1)[None].float() / 255.
if x.shape[2:] != (512, 1024):
    # `self.hw` is undefined outside the model class; use the target size directly
    x = torch.nn.functional.interpolate(x, size=(512, 1024), mode='area')
x = x.to(device)
```
### Model feedforward
```
with torch.no_grad():
ts = time.time()
layout = net.infer(x)
if torch.cuda.is_available():
torch.cuda.synchronize()
    print(f'Elapsed time: {time.time() - ts:.2f} sec.')
cor_id = layout['cor_id']
y_bon_ = layout['y_bon_']
y_cor_ = layout['y_cor_']
```
### Visualize result in 2d
```
from eval_layout import layout_2_depth
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(np.concatenate([
(y_cor_ * 255).reshape(1,-1,1).repeat(30, 0).repeat(3, 2).astype(np.uint8),
rgb[30:]
], 0))
plt.plot(np.arange(y_bon_.shape[1]), y_bon_[0], 'r-')
plt.plot(np.arange(y_bon_.shape[1]), y_bon_[1], 'r-')
plt.scatter(cor_id[:, 0], cor_id[:, 1], marker='x', c='b')
plt.axis('off')
plt.title('y_bon_ (red) / y_cor_ (up-most bar) / cor_id (blue x)')
plt.subplot(122)
plt.imshow(layout_2_depth(cor_id, *rgb.shape[:2]), cmap='inferno_r')
plt.axis('off')
plt.title('rendered depth from the estimated layout (cor_id)')
plt.show()
```
### Visualize result as 3d mesh
```
!pip install open3d
!pip install plotly
import open3d as o3d
import plotly.graph_objects as go
from scipy.signal import correlate2d
from scipy.ndimage import shift
from skimage.transform import resize
from lib.misc.post_proc import np_coor2xy, np_coorx2u, np_coory2v
H, W = 256, 512
ignore_floor = False
ignore_ceiling = True
ignore_wall = False
# Convert corners to layout
depth, floor_mask, ceil_mask, wall_mask = [
resize(v, [H, W], order=0, preserve_range=True).astype(v.dtype)
for v in layout_2_depth(cor_id, *rgb.shape[:2], return_mask=True)]
coorx, coory = np.meshgrid(np.arange(W), np.arange(H))
us = np_coorx2u(coorx, W)
vs = np_coory2v(coory, H)
zs = depth * np.sin(vs)
cs = depth * np.cos(vs)
xs = cs * np.sin(us)
ys = -cs * np.cos(us)
# Aggregate mask
mask = np.ones_like(floor_mask)
if ignore_floor:
mask &= ~floor_mask
if ignore_ceiling:
mask &= ~ceil_mask
if ignore_wall:
mask &= ~wall_mask
# Prepare ply's points and faces
xyzrgb = np.concatenate([
xs[...,None], ys[...,None], zs[...,None],
resize(rgb, [H, W])], -1)
xyzrgb = np.concatenate([xyzrgb, xyzrgb[:,[0]]], 1)
mask = np.concatenate([mask, mask[:,[0]]], 1)
lo_tri_template = np.array([
[0, 0, 0],
[0, 1, 0],
[0, 1, 1]])
up_tri_template = np.array([
[0, 0, 0],
[0, 1, 1],
[0, 0, 1]])
ma_tri_template = np.array([
[0, 0, 0],
[0, 1, 1],
[0, 1, 0]])
lo_mask = (correlate2d(mask, lo_tri_template, mode='same') == 3)
up_mask = (correlate2d(mask, up_tri_template, mode='same') == 3)
ma_mask = (correlate2d(mask, ma_tri_template, mode='same') == 3) & (~lo_mask) & (~up_mask)
ref_mask = (
lo_mask | (correlate2d(lo_mask, np.flip(lo_tri_template, (0,1)), mode='same') > 0) |\
up_mask | (correlate2d(up_mask, np.flip(up_tri_template, (0,1)), mode='same') > 0) |\
ma_mask | (correlate2d(ma_mask, np.flip(ma_tri_template, (0,1)), mode='same') > 0)
)
points = xyzrgb[ref_mask]
ref_id = np.full(ref_mask.shape, -1, np.int32)
ref_id[ref_mask] = np.arange(ref_mask.sum())
faces_lo_tri = np.stack([
ref_id[lo_mask],
ref_id[shift(lo_mask, [1, 0], cval=False, order=0)],
ref_id[shift(lo_mask, [1, 1], cval=False, order=0)],
], 1)
faces_up_tri = np.stack([
ref_id[up_mask],
ref_id[shift(up_mask, [1, 1], cval=False, order=0)],
ref_id[shift(up_mask, [0, 1], cval=False, order=0)],
], 1)
faces_ma_tri = np.stack([
ref_id[ma_mask],
ref_id[shift(ma_mask, [1, 0], cval=False, order=0)],
ref_id[shift(ma_mask, [0, 1], cval=False, order=0)],
], 1)
faces = np.concatenate([faces_lo_tri, faces_up_tri, faces_ma_tri])
fig = go.Figure(
data=[
go.Mesh3d(
x=points[:,0],
y=points[:,1],
z=points[:,2],
i=faces[:,0],
j=faces[:,1],
k=faces[:,2],
facecolor=points[:,3:][faces[:,0]])
],
layout=dict(
scene=dict(
xaxis=dict(visible=False),
yaxis=dict(visible=False),
zaxis=dict(visible=False)
)
)
)
fig.show()
```
```
###################################################################################################
# #
# Primordial Black Hole Evaporation + DM Production #
# Interplay with Freeze-In #
# #
# Authors: Andrew Cheek, Lucien Heurtier, Yuber F. Perez-Gonzalez, Jessica Turner #
# Based on: arXiv:2107.xxxxx #
# #
###################################################################################################
import BHProp as bh
import ulysses
import math
from odeintw import odeintw
import pandas as pd
from scipy import interpolate
import matplotlib.pyplot as plt
import scipy.integrate as integrate
from scipy.integrate import quad, ode, solve_ivp, odeint
from scipy.optimize import root
from scipy.special import zeta, kn
from scipy.interpolate import interp1d, RectBivariateSpline
from numpy import ma
from matplotlib import ticker, cm
from matplotlib import cm
from collections import OrderedDict
import BHProp as bh #Schwarzschild and Kerr BHs library
from Omega_h2_onlyDM import FrInPBH as FrIn1 # Only DM contribution
#----- Package for LateX plotting -----
from matplotlib import rc
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{amsmath,amssymb,bm}')
#-----
# Import solving functions
from Omega_h2_FI import FrInPBH as FrInFull # Freeze-In + PBH
# ----------- Input Parameters --------------
Mi = 7.47 # Log10@ Initial BH mass in g
ai = 0. # Initial a* value, a* = 0. -> Schwarzschild, a* > 0. -> Kerr.
bi = -10. # Log10@beta^\prime
mDM = -3. # Log10 @ DM Mass in GeV
mX = 1. # Log10 @ Mediator Mass in GeV
mf = -10. # Log10 @ SM mass in GeV
sv = -43. # Log10 @ averaged cross section <sv>
BR = 0.5 # Branching ratio to DM
g_DM = 2 # DM degrees of freedom
model = 1 # Type of model --> fixed here to be one
Z=FrInFull(Mi, ai, bi, mDM, mX, mf, sv, BR, g_DM, model)
relic=Z.Omegah2()
relic_analytic=Z.Omegah2_analytics_FI()
print('relic numerics : ',relic)
print('relic analytics : ',relic_analytic[0])
if(relic_analytic[1]>=1):
print('Thermalization of X -> True')
else:
print('Thermalization of X -> False')
```
# GPyOpt: dealing with cost functions
### Written by Javier Gonzalez, University of Sheffield.
## Reference Manual index
*Last updated Friday, 11 March 2016.*
GPyOpt allows you to take function evaluation costs into account during the optimization.
```
%pylab inline
import GPyOpt
# --- Objective function
objective_true = GPyOpt.objective_examples.experiments2d.branin() # true function
objective_noisy = GPyOpt.objective_examples.experiments2d.branin(sd = 0.1) # noisy version
bounds = objective_noisy.bounds
objective_true.plot()
domain = [{'name': 'var_1', 'type': 'continuous', 'domain': bounds[0]}, ## use default bounds
{'name': 'var_2', 'type': 'continuous', 'domain': bounds[1]}]
def mycost(x):
cost_f = np.atleast_2d(.1*x[:,0]**2 +.1*x[:,1]**2).T
cost_df = np.array([0.2*x[:,0],0.2*x[:,1]]).T
return cost_f, cost_df
# plot the cost function
grid = 400
bounds = objective_true.bounds
X1 = np.linspace(bounds[0][0], bounds[0][1], grid)
X2 = np.linspace(bounds[1][0], bounds[1][1], grid)
x1, x2 = np.meshgrid(X1, X2)
X = np.hstack((x1.reshape(grid*grid,1),x2.reshape(grid*grid,1)))
cost_X, _ = mycost(X)
# Feasible region
plt.contourf(X1, X2, cost_X.reshape(grid,grid),100, alpha=1,origin ='lower')
plt.title('Cost function')
plt.colorbar()
GPyOpt.methods.BayesianOptimization?
from numpy.random import seed
seed(123)
BO = GPyOpt.methods.BayesianOptimization(f=objective_noisy.f,
domain = domain,
initial_design_numdata = 5,
acquisition_type = 'EI',
normalize_Y = True,
exact_feval = False,
acquisition_jitter = 0.05)
seed(123)
BO_cost = GPyOpt.methods.BayesianOptimization(f=objective_noisy.f,
cost_withGradients = mycost,
initial_design_numdata =5,
domain = domain,
acquisition_type = 'EI',
normalize_Y = True,
exact_feval = False,
acquisition_jitter = 0.05)
BO.plot_acquisition()
BO_cost.run_optimization(15)
BO_cost.plot_acquisition()
BO.run_optimization(15)
BO.plot_acquisition()
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("test2_result.csv")
df
df2 = pd.read_excel("Test_2.xlsx")
# Full dataset containing only the feature values
data = df2.drop("TRUE VALUE", axis=1)
# Full dataset containing only the true class labels
labels = df2["TRUE VALUE"]
# data2 is the dataset with the true labels removed (it contains the clustering results)
data2 = df.drop("TRUE VALUE", axis=1)
data2
# Inspect the class labels produced by k-means clustering (two classes)
data2['km_clustering_label'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
# Stratified sampling based on the k-means clustering results
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(data2, data2["km_clustering_label"]):
    strat_train_set = data2.loc[train_index]
    strat_test_set = data2.loc[test_index]
def clustering_result_propotions(data):
    """
    Proportion of each class label in a training or test set after sampling
    :param data: training or test set, from purely random or stratified sampling
    """
    return data["km_clustering_label"].value_counts() / len(data)
# Label proportions in the stratified test set
clustering_result_propotions(strat_test_set)
# Label proportions in the stratified training set
clustering_result_propotions(strat_train_set)
# Label proportions in the full dataset
clustering_result_propotions(data2)
from sklearn.model_selection import train_test_split
# Purely random sampling
random_train_set, random_test_set = train_test_split(data2, test_size=0.2, random_state=42)
# Label proportions in the full dataset, the stratified test set, and the random test set
compare_props = pd.DataFrame({
    "Overall": clustering_result_propotions(data2),
    "Stratified": clustering_result_propotions(strat_test_set),
    "Random": clustering_result_propotions(random_test_set),
}).sort_index()
# Error of each sampled test set's label proportions relative to the full dataset
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
def get_classification_marks(model, data, labels, train_index, test_index):
    """
    Score a classification model (binary or multiclass) with the F1 metric
    :param data: dataset containing only the feature values
    :param labels: dataset containing only the label values
    :param train_index: indices of the training rows from stratified sampling
    :param test_index: indices of the test rows from stratified sampling
    :return: F1 score
    """
    m = model(random_state=42)
    m.fit(data.loc[train_index], labels.loc[train_index])
    test_labels_predict = m.predict(data.loc[test_index])
    score = f1_score(labels.loc[test_index], test_labels_predict, average="weighted")
    return score
# Score of the classifier trained on the stratified training set
start_marks = get_classification_marks(LogisticRegression, data, labels, strat_train_set.index, strat_test_set.index)
start_marks
# Score of the classifier trained on the random training set
random_marks = get_classification_marks(LogisticRegression, data, labels, random_train_set.index, random_test_set.index)
random_marks
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone, BaseEstimator, TransformerMixin
class stratified_cross_val_score(BaseEstimator, TransformerMixin):
    """K-fold cross-validation based on stratified sampling"""
    def __init__(self, model, data, labels, random_state=0, cv=5):
        """
        :model: model to train (regression or classification)
        :data: full dataset containing only the feature values
        :labels: full dataset containing only the label values
        :random_state: random seed for the model
        :cv: number of cross-validation folds
        """
        self.model = model
        self.data = data
        self.labels = labels
        self.random_state = random_state
        self.cv = cv
        self.score = []  # stores the model score on each test fold
        self.i = 0
    def fit(self, X, y):
        """
        :param X: full dataset containing the feature values and clustering results
        :param y: full dataset containing the clustering results
        """
        # shuffle=True is required for random_state to take effect in recent scikit-learn
        skfolds = StratifiedKFold(n_splits=self.cv, shuffle=True, random_state=self.random_state)
        for train_index, test_index in skfolds.split(X, y):
            # clone the model to be trained (classification or regression)
            clone_model = clone(self.model)
            strat_X_train_folds = self.data.loc[train_index]
            strat_y_train_folds = self.labels.loc[train_index]
            strat_X_test_fold = self.data.loc[test_index]
            strat_y_test_fold = self.labels.loc[test_index]
            # train the model
            clone_model.fit(strat_X_train_folds, strat_y_train_folds)
            # predictions (here, the class labels from the classifier)
            test_labels_pred = clone_model.predict(strat_X_test_fold)
            # F1 score for a classifier; swap in a suitable metric for a regressor
            score_fold = f1_score(strat_y_test_fold, test_labels_pred, average="weighted")
            # avoid appending duplicate scores when fit is called repeatedly
            if self.i < self.cv:
                self.score.append(score_fold)
            self.i += 1
    def transform(self, X, y=None):
        return self
    def mean(self):
        """Return the mean of the cross-validation scores"""
        return np.array(self.score).mean()
    def std(self):
        """Return the standard deviation of the cross-validation scores"""
        return np.array(self.score).std()
from sklearn.linear_model import SGDClassifier
# Classification model
clf_model = SGDClassifier(max_iter=5, tol=-np.inf, random_state=42)
# Stratified cross-validation: data holds only features, labels holds only labels
clf_cross_val = stratified_cross_val_score(clf_model, data, labels, cv=5, random_state=42)
# data2 is the full dataset with the feature values and clustering results
clf_cross_val.fit(data2, data2["km_clustering_label"])
# Score on each cross-validation fold
clf_cross_val.score
# Mean of the cross-validation scores
clf_cross_val.mean()
# Standard deviation of the cross-validation scores
clf_cross_val.std()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Water/usgs_watersheds.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC02')
styleParams = {
'fillColor': '000070',
'color': '0000be',
'width': 3.0,
}
regions = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 4)
Map.addLayer(regions, {}, 'USGS/WBD/2017/HUC02')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC04')
styleParams = {
'fillColor': '5885E3',
'color': '0000be',
'width': 3.0,
}
subregions = dataset.style(**styleParams)
Map.setCenter(-110.904, 36.677, 7)
Map.addLayer(subregions, {}, 'USGS/WBD/2017/HUC04')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC06')
styleParams = {
'fillColor': '588593',
'color': '587193',
'width': 3.0,
}
basins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 7)
Map.addLayer(basins, {}, 'USGS/WBD/2017/HUC06')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC08')
styleParams = {
'fillColor': '2E8593',
'color': '587193',
'width': 2.0,
}
subbasins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 8)
Map.addLayer(subbasins, {}, 'USGS/WBD/2017/HUC08')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC10')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 1.0,
}
watersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 9)
Map.addLayer(watersheds, {}, 'USGS/WBD/2017/HUC10')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC12')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 0.1,
}
subwatersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 10)
Map.addLayer(subwatersheds, {}, 'USGS/WBD/2017/HUC12')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
<img align="right" src="images/ninologo.png" width="150"/>
<img align="right" src="images/tf-small.png" width="125"/>
<img align="right" src="images/dans.png" width="150"/>
# Start
This notebook gets you started with using
[Text-Fabric](https://github.com/Nino-cunei/uruk/blob/master/docs/textfabric.md) for coding in cuneiform tablet transcriptions.
Familiarity with the underlying
[data model](https://annotation.github.io/text-fabric/tf/about/datamodel.html)
is recommended.
For provenance, see the documentation:
[about](https://github.com/Nino-cunei/uruk/blob/master/docs/about.md).
## Overview
* we tell you how to get Text-Fabric on your system;
* we tell you how to get the Uruk IV-III corpus on your system.
## Installing Text-Fabric
See [here](https://annotation.github.io/text-fabric/tf/about/install.html)
### Get the data
Text-Fabric will get the data for you and store it on your system.
If you have cloned the github repo with the data,
[Nino-cunei/uruk](https://github.com/Nino-cunei/uruk),
your data is already in place, and nothing will be downloaded.
Otherwise, on first run, Text-Fabric will load the data and store it in the folder
`text-fabric-data` in your home directory.
This only happens if the data is not already there.
Not only will transcription data be downloaded, but also lineart and photos.
These images are contained in a zipfile of 550 MB,
so take care that you have a good internet connection when it comes to downloading the images.
## Start the engines
Navigate to this directory in a terminal and say
```
jupyter notebook
```
(just literally).
Your browser opens with a directory view, and you'll see `start.ipynb`.
Click on it. A new browser tab opens, and a Python engine has been allocated to this
notebook.
Now we are ready to compute.
The next cell is a code cell that can be executed if you have downloaded this
notebook and have issued the `jupyter notebook` command.
You execute a code cell by standing in it and press `Shift Enter`.
### The code
```
%load_ext autoreload
%autoreload 2
import sys, os
from tf.app import use
```
View the next cell as an *incantation*.
You just have to say it to get things underway.
For the very last version, use `hot`.
For the latest release, use `latest`.
If you have cloned the repos (TF app and data), use `clone`.
If you do not want/need to upgrade, leave out the checkout specifiers.
```
A = use("uruk:clone", checkout="clone", hoist=globals())
# A = use('uruk:hot', checkout="hot", hoist=globals())
# A = use('uruk:latest', checkout="latest", hoist=globals())
# A = use('uruk', hoist=globals())
```
### The output
The output shows some statistics about the images found in the Uruk data.
Then there are links to the documentation.
**Tip:** open them, and have a quick look.
Every notebook that you set up with `Cunei` will have such links.
**GitHub and NBViewer**
If you have made your own notebook, and used this incantation,
and pushed the notebook to GitHub, links to the online version
of *your* notebook on GitHub and NBViewer will be generated and displayed.
By the way, GitHub shows notebooks nicely.
Sometimes NBViewer does it better, although it fetches exactly the same notebook from GitHub.
NBViewer is handy to navigate all the notebooks of a particular organization.
Try the [Nino-cunei starting point](http://nbviewer.jupyter.org/github/Nino-cunei/).
These links you can share with colleagues.
## Test
We perform a quick test to see that everything works.
### Count the signs
We count how many signs there are in the corpus.
In a next notebook we'll explain code like this.
```
len(F.otype.s("sign"))
```
### Show photos and lineart
We show the photo and lineart of a tablet, to whet your appetite.
```
example = T.nodeFromSection(("P005381",))
A.photo(example)
```
Note that you can click on the photo to see a better version on CDLI.
Here comes the lineart:
```
A.lineart(example)
```
A pretty representation of the transcription with embedded lineart for quads and signs:
```
A.pretty(example, withNodes=True)
```
We can suppress the lineart:
```
A.pretty(example, showGraphics=False)
```
The transliteration:
```
A.getSource(example)
```
Now the lines and cases of this tablet in a table:
```
table = []
for sub in L.d(example):
if F.otype.v(sub) in {"line", "case"}:
table.append((sub,))
A.table(table, showGraphics=False)
```
We can include the lineart in plain displays:
```
A.table(table, showGraphics=True)
```
This is just the beginning.
In the next chapters we show you how to
* fine-tune tablet displays,
* step and jump around in the corpus,
* search for patterns,
* drill down to quads and signs,
* and study frequency distributions of signs in subcases.
# Next
[imagery](imagery.ipynb)
*Get the big picture ...*
All chapters:
**start**
[imagery](imagery.ipynb)
[steps](steps.ipynb)
[search](search.ipynb)
[calc](calc.ipynb)
[signs](signs.ipynb)
[quads](quads.ipynb)
[jumps](jumps.ipynb)
[cases](cases.ipynb)
---
CC-BY Dirk Roorda
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
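The stored `[mean, std]` pairs make the transform invertible, which is what "go backwards" means above. A tiny self-contained illustration of standardizing and then recovering the original units (toy numbers, not the bike data):

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0])
mean, std = values.mean(), values.std()

scaled = (values - mean) / std         # zero mean, unit standard deviation
recovered = scaled * std + mean        # back to the original units
print(np.allclose(recovered, values))  # prints True
```

For the bike data, the same `scaled * std + mean` step turns a network prediction for `cnt` back into a rider count.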
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking the threshold into account, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
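For task 1, a sigmoid and its derivative (written in terms of the sigmoid's own output, which is the form backpropagation needs) might look like this. This is a sketch, not the project's reference solution:

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into the interval (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(output):
    # derivative of the sigmoid, expressed in terms of its output value
    return output * (1 - output)

print(sigmoid(0.0))  # prints 0.5
```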
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
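As a rough illustration (with made-up loss values, not numbers from this project), the ideal stopping point described above can be found by locating the minimum of the validation losses:

```python
# Hypothetical validation losses, one entry per training iteration.
val_losses = [0.90, 0.55, 0.40, 0.35, 0.36, 0.41]

# The ideal number of iterations stops shortly after this minimum.
best_iteration = min(range(len(val_losses)), key=val_losses.__getitem__)
print(best_iteration)  # -> 3
```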
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your my_answers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
# .ix was removed from pandas; use .loc for label-based indexing
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
Since most of the wrong predictions occur from December 23rd to 31st, I think the model fails there because the training data may not include events similar to those occurring during that holiday period.
```
import zmq
import msgpack
import sys
from pprint import pprint
import json
import numpy as np
import ceo
import matplotlib.pyplot as plt
%matplotlib inline
port = "5556"
```
# SETUP
```
context = zmq.Context()
print("Connecting to server...")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:%s" % port)
print("Sending request ", "ubuntu_cuda70", "...")
socket.send_string("ubuntu_cuda70")
message = socket.recv_json()
pprint(message)
optical_path = {}
for kk, vv in message.items():
print(kk, 'is', vv)
socket.send_string(vv)
message = socket.recv_json()
pprint(message)
if kk=="Source":
optical_path[vv] = ceo.Source(message["band"],
zenith=message["zenith"],
azimuth=message["azimuth"],
height=float(message["height"]),
magnitude = message["magnitude"],
rays_box_size=message["pupil size"],
rays_box_sampling=message["pupil sampling"],
rays_origin=[0.0,0.0,25])
N_SRC = optical_path[vv].N_SRC
elif kk=="GMT_MX":
D_px = message["pupil sampling"]
optical_path[vv] = ceo.GMT_MX(message["pupil size"],
message["pupil sampling"],
M1_radial_order=message["M1"]["Zernike radial order"],
M2_radial_order=message["M2"]["Zernike radial order"])
elif kk=="Imaging":
optical_path[vv] = ceo.Imaging(1, D_px-1,
DFT_osf=2*message["nyquist oversampling"],
N_PX_IMAGE=message["resolution"],
N_SOURCE=N_SRC)
optical_path["star"].reset()
optical_path["GMT"].propagate(optical_path["star"])
optical_path["imager"].propagate(optical_path["star"])
plt.imshow(optical_path["star"].phase.host(),interpolation='None')
plt.imshow(optical_path["imager"].frame.host())
```
# DATA SERVER
```
port = "5557"
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:%s" % port)
port = "5558"
sub_context = zmq.Context()
sub_socket = sub_context.socket(zmq.SUB)
sub_socket.connect ("tcp://localhost:%s" % port)
message = socket.recv()
print("Received request:", message)
optical_path["star"].reset()
optical_path["GMT"].propagate(optical_path["star"])
optical_path["imager"].propagate(optical_path["star"])
data = optical_path["star"].phase.host()
msg = msgpack.packb(data.tolist())
socket.send(msg)
```
# **Tutorial 11: Working with Files (Part 02)** 👀
<a id='t11toc'></a>
#### Contents: ####
- **[Parsing](#t11parsing)**
- [`strip()`](#t11strip)
- [Exercise 1](#t11ex1)
- [`split()`](#t11split)
- [Exercise 2](#t11ex2)
- **[JSON](#t11json)**
- [Reading JSON from a String](#t11loads)
- [Reading a JSON File](#t11load)
- [Converting to a JSON String](#t11dumps)
- [Writing to a JSON File](#t11dump)
- **[CSV](#t11csv)**
- [Reading CSV Files in Python](#t11readcsv)
- [Writing into CSV Files (Row by Row)](#t11writecsvrbr)
- [Writing into CSV Files (Multiple Rows)](#t11writecsvmultiple)
- [Writing into CSV Files (Custom Delimiter)](#t11delimiter)
- [Exercise 3](#t11ex3)
- [Exercises Solutions](#t11sol)
💡 <b>TIP</b><br>
> <i>In the exercises, when time permits, try to write the code yourself rather than copying it from the other cells.</i>
<a id='t11parsing'></a>
## ▙▂ **🄿ARSING ▂▂**
Parsing means splitting up a text into meaningful components (meaningful for a *given purpose*).
Python has some built-in string methods that can be used for basic parsing tasks. We will practice with a few of them.
<a id='t11strip'></a>
#### **▇▂ `strip()` ▂▂**
The `strip()` method removes characters from both left and right sides of a string, based on the argument (a string specifying the set of characters to be removed).
The syntax of the `strip()` method is: `string.strip([chars])`
**`strip()` Parameters**
- `chars` (optional) - a string specifying the set of characters to be removed from the left and right sides of the string.
- If the chars argument is not provided, all **leading and trailing whitespaces** are removed from the string.
```
Str = ' Analysis 3: Object Oriented Programming '
```
The code below removes all whitespace from the left and right sides of the string:
```
CleanStr1 = Str.strip()
print(f'Original String is = "{Str}" --- (length={len(Str)})')
print(f'Removing Leading and Trailing White spaces = "{CleanStr1}" --- (length={len(CleanStr1)})')
```
The method can also be directly applied to a string:
```
CleanStr2 = 'OOOOOOOOAnalysis 3: Object Oriented ProgrammingOOOOO'.strip('O')
print(f'Removing O\'s = "{CleanStr2}"')
```
<br>⚠ <b>NOTE</b><br>
>It removes only leading and trailing `'O'`s, but not those in between.<br>
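A quick check with a made-up string confirms that interior characters survive:

```python
s = 'OOabcOOdefOO'
print(s.strip('O'))  # -> abcOOdef (the inner 'OO' is untouched)
```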
##### **Multiple Characters**
The `chars` parameter is not a prefix or suffix; rather, all combinations of its characters are stripped.
In the example below, `strip()` strips all the characters provided in the argument, i.e. `'+'` and `'*'`.
```
CleanStr3 = '+*++*++Analysis 3: Object Oriented Programming**++**'.strip('+*')
print(f'Stripping + and * on both sides = {CleanStr3}')
```
##### **Only One Side**
- `lstrip()` trims leading characters and returns the trimmed string.
- `rstrip()` trims trailing characters (by default any trailing whitespace, tabs, and newlines) and returns the trimmed string.
```
CleanStr4 = '***********Analysis 3: Object Oriented Programming***********'.lstrip('*')
print('Removing Left Side using lstrip() is = "%s"' %CleanStr4)
```
<i>Try to do this first without checking the solution in the next cell. After typing your code, compare it with the solution. </i>
```
CleanStr5 = '***********Analysis 3: Object Oriented Programming***********'.rstrip('*')
print('Removing Right Side using rstrip() is = "%s"' %CleanStr5)
```
<br>[back to top ↥](#t11toc)
<br><br><a id='t11ex1'></a>
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
**✎ Exercise 𝟙**<br> <br> ▙ ⏰ ~ 2+2 min. ▟ <br>
❶ We have a file `studentsgrades.txt` in the current folder which contains the students' first and last names and their grades for Analysis 2.
The information is not properly formatted in the file, and there are some extra characters on each row. (Open the file to see the records.)
**CMI-Inf St. Num 1002121 Andrew Bren 8.4
CMI-Inf St. Num 1002121 Peter Cole 7.0
CMI-Inf St. Num 1002121 Chris Charles 9.1
CMI-Inf St. Num 1002121 Andy Frankline 6.9
CMI-Inf St. Num 1002121 Robert Ford 5.6
CMI-Inf St. Num 1002121 Charley Walton 7.7**
Write a short piece of code to read the file, remove the extra leading characters and student numbers from each row, and create a new file `studentsgrades-m.txt` containing the new records. The new file should look like:
**Andrew Bren 8.4
Peter Cole 7.0
Chris Charles 9.1
Andy Frankline 6.9
Robert Ford 5.6
Charley Walton 7.7**
```
# Exercise 1.1
```
❷ Modify your code to work on the original file `studentsgrades.txt` and create another new file `studentsnames.txt` containing only the students names, without grades.
```
# Exercise 1.2
```
<br>[back to top ↥](#t11toc)
<a id='t11split'></a>
#### **▇▂ `split()` ▂▂**
The `split()` method breaks up a string at the specified separator and returns a *list of strings*.
The syntax of the `split()` method is: `str.split([separator [, maxsplit]])`
**`split()` Parameters**
`split()` method takes a maximum of 2 parameters:
- `separator` (optional)- It is a delimiter. The string splits at the specified separator.
- If the separator is not specified, any whitespace (space, newline etc.) string is a separator.
- `maxsplit` (optional) - The maxsplit defines the maximum number of splits.
- The default value of maxsplit is `-1`, meaning, no limit on the number of splits.
```
text = 'Never regret anything that made you smile'
print(text.split())
```
##### **with `separator`**
```
grocery = 'Milk, Chicken, Bread, Butter'
print(grocery.split(', '))
```
Try the following code:
```
print(grocery.split(':'))
```
🔴 How many elements are in the Python list above? Why? Discuss it.
##### **with `maxsplit`**
```
print(grocery.split(', ', 1))
print(grocery.split(', ', 2))
print(grocery.split(', ', 5))
```
<br>⚠ <b>NOTE</b><br>
>If `maxsplit` is specified, the list will have at most `maxsplit+1` items.<br>
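To verify this with the `grocery` string from above:

```python
grocery = 'Milk, Chicken, Bread, Butter'
parts = grocery.split(', ', 1)  # maxsplit=1
print(parts)       # -> ['Milk', 'Chicken, Bread, Butter']
print(len(parts))  # -> 2, i.e. maxsplit + 1
```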
<br>[back to top ↥](#t11toc)
<br><br><a id='t11ex2'></a>
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
**✎ Exercise 𝟚**<br> <br> ▙ ⏰ ~ 3+3 min. ▟ <br>
❶ We want to further process `studentsgrades.txt` file discussed in Exercise 1. Now, we would like to make a **list** of students. Each item in the list must be a student. A student should be created as an object with three attributes: `first name`, `last name`, and `grade`.
```
# Exercise 2.1
```
❷ Define a class to represent a group of students. Create an object to contain the students in our list, and add a method to the class for a simple linear search based on the last name of a student.
```
# Exercise 2.2
```
<br>[back to top ↥](#t11toc)
<a id='t11json'></a>
## ▙▂ **🄹SON ▂▂**
JSON is a syntax for storing and exchanging data.
- JSON is text, written with JavaScript object notation.
- JSON is language independent.
- JSON uses JavaScript syntax, but the JSON format is text only.
- Text can be read and used as a data format by any programming language.
<a id='t11loads'></a>
#### **▇▂ Reading JSON from a String ▂▂**
`json.loads()` reads JSON from a string and converts it to a Python dictionary.
```
import json
book = """
{
"author": "Chinua Achebe",
"editor": null,
"country": "Nigeria",
"imageLink": "images/things-fall-apart.jpg",
"language": "English",
"link": "https://en.wikipedia.org/wiki/Things_Fall_Apart",
"pages": 209,
"title": "Things Fall Apart",
"year": 1958,
"available": true
}
"""
book_dict = json.loads(book)
print(book_dict)
print("\n")
print("The title of the book is:", book_dict['title'])
print(f'book_dict is: {type(book_dict)}')
```
<br>[back to top ↥](#t11toc)
<a id='t11load'></a>
#### **▇▂ Reading a JSON File ▂▂**
Now let's load a JSON file into a JSON object in Python. For this, we use the file `book.json`, located in the current directory.
First, let's take a look into the file contents.
```
f = open("book.json")
text = f.read()
f.close()
print(f"The full text in the file is:\n\n{text}")
```
You could also open `book.json` in Jupyter or a text editor.
The `json.load()` method reads a file containing JSON object:
```
import json
with open('book.json') as f:
data = json.load(f)
print(data)
print('\n')
print(type(data))
```
<br>⚠ <b>NOTE</b><br>
>Note the difference between JSON object and Python object.<br>
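One concrete difference is how JSON literals map to Python values: `null` becomes `None`, `true`/`false` become `True`/`False`, and JSON numbers become `int` or `float`. A small sketch:

```python
import json

data = json.loads('{"editor": null, "available": true, "pages": 209}')
print(data['editor'])       # -> None
print(data['available'])    # -> True
print(type(data['pages']))  # -> <class 'int'>
```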
<br>[back to top ↥](#t11toc)
<a id='t11dumps'></a>
#### **▇▂ Converting to a JSON String ▂▂**
`json.dumps()` converts a python dictionary to a JSON string.
```
import json
book = {
"author": "Chinua Achebe",
"editor": None,
"country": "Nigeria",
"imageLink": "images/things-fall-apart.jpg",
"language": "English",
"link": "https://en.wikipedia.org/wiki/Things_Fall_Apart",
"pages": 209,
"title": "Things Fall Apart",
"year": 1958,
"available": True
}
book_json = json.dumps(book)
print(book_json)
print('\n')
print(f'book is: {type(book)}')
print(f'book_json is: {type(book_json)}')
```
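`json.dumps()` also takes optional formatting arguments; for example, a pretty-printed string can be produced with the `indent` and `sort_keys` parameters:

```python
import json

book = {"title": "Things Fall Apart", "year": 1958}
pretty = json.dumps(book, indent=2, sort_keys=True)
print(pretty)
```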
<br>[back to top ↥](#t11toc)
<a id='t11dump'></a>
#### **▇▂ Writing to a JSON File ▂▂**
`json.dump()` converts and writes a dictionary to a JSON file.
```
import json
book_dict = {
"author": "Chinua Achebe",
"editor": None,
"country": "Nigeria",
"imageLink": "images/things-fall-apart.jpg",
"language": "English",
"link": "https://en.wikipedia.org/wiki/Things_Fall_Apart",
"pages": 209,
"title": "Things Fall Apart",
"year": 1958,
"available": True
}
with open('book-new.json', 'w') as json_file:
json.dump(book_dict, json_file)
```
Now let's take a look into the contents of the file we have just written:
```
f = open("book-new.json")
text = f.read()
f.close()
print(f"The full text in the file is:\n\n{text}")
```
<br>[back to top ↥](#t11toc)
<a id='t11csv'></a>
## ▙▂ **🄲SV ▂▂**
While we could use the built-in `open()` function to work with `CSV` files in Python, there is a dedicated `csv` module that makes working with `CSV` files much easier.
Before we can use the methods of the `csv` module, we need to import the module first, using:
```
import csv
```
<a id='t11readcsv'></a>
#### **▇▂ Reading CSV Files in Python ▂▂**
To read a `CSV` file in Python, we can use the `csv.reader()` function.
The `csv.reader()` returns an iterable `reader` object.
The `reader` object is then iterated using a for loop to print the contents of each row.
##### **Using comma (`,`) as delimiter**
Comma is the default delimiter for `csv.reader()`.
```
import csv
with open('grades.csv', 'r') as file:
reader = csv.reader(file)
for row in reader:
print(row)
```
##### **Using tab (`\t`) as delimiter**
```
import csv
with open('gradesTab.csv', 'r') as file:
reader = csv.reader(file, delimiter = '\t')
for row in reader:
print(row)
```
<br>[back to top ↥](#t11toc)
<a id='t11writecsvrbr'></a>
#### **▇▂ Writing into CSV Files (Row by Row) ▂▂**
To write to a CSV file in Python, we can use the `csv.writer()` function.
The `csv.writer()` function returns a `writer` object that converts the user's data into a delimited string. This string can later be used to write into `CSV` files using the `writerow()` function.
`csv.writer` class provides two methods for writing to `CSV`. They are `writerow()` and `writerows()`:
- `writerow()`: This method writes a single row at a time. A header row of field names can be written using this method.
- `writerows()`: This method writes multiple rows at a time. It can be used to write a list of rows.
```
import csv
with open('gradesW1.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(["Lastname","Firstname","SSN","Test1","Test2","Test3","Test4","Final","Grade"])
writer.writerow(["George","Boy","345-67-3901",40.0,1.0,11.0,-1.0,4.0,"B"])
writer.writerow(["Heffalump","Harvey","632-79-9439",30.0,1.0,20.0,30.0,40.0,"C"])
```
Now let's take a look into the contents of the file we have just written:
```
f = open("gradesW1.csv")
text = f.read()
f.close()
print(f"The full text in the file is:\n\n{text}")
```
<br>[back to top ↥](#t11toc)
<a id='t11writecsvmultiple'></a>
#### **▇▂ Writing into CSV Files (Multiple Rows) ▂▂**
If we need to write the contents of a 2-dimensional list to a `CSV` file, here's how we can do it:
```
import csv
csv_rowlist = [ ["Lastname","Firstname","SSN","Test1","Test2","Test3","Test4","Final","Grade"],
["Dandy","Jim","087-75-4321",47.0,1.0,23.0,36.0,45.0,"C+"],
["Elephant","Ima","456-71-9012",45.0,1.0,78.0,88.0,77.0,"B-"],
["Franklin","Benny","234-56-2890",50.0,1.0,90.0,80.0,90.0,"B-"]]
with open('gradesW2.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerows(csv_rowlist)
```
Now let's take a look into the contents of the file we have just written:
```
f = open("gradesW2.csv")
text = f.read()
f.close()
print(f"The full text in the file is:\n\n{text}")
```
<br>[back to top ↥](#t11toc)
<a id='t11delimiter'></a>
#### **▇▂ Writing into CSV Files (Custom Delimiter) ▂▂**
As mentioned before, by default, a comma `,` is used as a delimiter in a `CSV` file.
However, we can pass a different delimiter via the `delimiter` argument to the `csv.writer()` function:
```
import csv
csv_rowlist = [ ["Lastname","Firstname","SSN","Test1","Test2","Test3","Test4","Final","Grade"],
["Dandy","Jim","087-75-4321",47.0,1.0,23.0,36.0,45.0,"C+"],
["Elephant","Ima","456-71-9012",45.0,1.0,78.0,88.0,77.0,"B-"],
["Franklin","Benny","234-56-2890",50.0,1.0,90.0,80.0,90.0,"B-"]]
with open('gradesW3.csv', 'w', newline='') as file:
writer = csv.writer(file, delimiter='|')
writer.writerows(csv_rowlist)
```
Now let's take a look into the contents of the file we have just written:
```
f = open("gradesW3.csv")
text = f.read()
f.close()
print(f"The full text in the file is:\n\n{text}")
```
<br>[back to top ↥](#t11toc)
<br><br><a id='t11ex3'></a>
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
**✎ Exercise 𝟛**<br> <br> ▙ ⏰ ~ 3+3 min. ▟ <br>
❶ Write a code to write the result of the Exercise 2.1 into a JSON file.
```
# Exercise 3.1
```
❷ Write a code to write the result of the Exercise 2.1 into a CSV file.
```
# Exercise 3.2
```
<br>[back to top ↥](#t11toc)
<br><br><a id='t11sol'></a>
◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼<br>
◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼
#### 🔑 **Exercises Solutions** ####
**Exercise 1.1:**
```
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [student.lstrip('CMI-InfSt.Num ') for student in t]
t = [student.lstrip(' 0123456789') for student in t]
with open("studentsgrades-m.txt", "w") as f:
f.writelines(t)
```
**Exercise 1.2:**
```
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [student.lstrip('CMI-InfSt.Num ') for student in t]
t = [student.strip(' .0123456789\n') + '\n' for student in t]
with open("studentsnames.txt", "w") as f:
f.writelines(t)
```
<br>[back to Exercise 1 ↥](#t11ex1)
<br>[back to top ↥](#t11toc)
**Exercise 2.1:**
```
class student:
def __init__(self, fn, ln, gr):
self.firstName = fn
self.lastName = ln
self.grade = gr
def __str__(self):
return f'{self.firstName} {self.lastName} ({self.grade})'
def __repr__(self):
return f'{self.__class__.__name__}({self.firstName}, {self.lastName}, {self.grade})'
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [st.lstrip('CMI-InfSt.Num ') for st in t]
t = [st.lstrip(' 0123456789') for st in t]
t = [st.rstrip('\n') for st in t]
t = [st.split(' ') for st in t]
studentsList= [student(st[0],st[1],st[2]) for st in t]
for st in studentsList:
print(st)
```
**Exercise 2.2:**
```
class student:
def __init__(self, fn, ln, gr):
self.firstName = fn
self.lastName = ln
self.grade = gr
def __str__(self):
return f'{self.firstName} {self.lastName} ({self.grade})'
def __repr__(self):
return f'{self.__class__.__name__}({self.firstName}, {self.lastName}, {self.grade})'
class group:
def __init__(self, sl):
self.studentsList = sl
def search(self, key_lastname):
for st in self.studentsList:
if st.lastName == key_lastname:
return st
return None
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [st.lstrip('CMI-InfSt.Num ') for st in t]
t = [st.lstrip(' 0123456789') for st in t]
t = [st.rstrip('\n') for st in t]
t = [st.split(' ') for st in t]
studentsList= [student(st[0],st[1],st[2]) for st in t]
analysis2_cmiinf1M = group(studentsList)
print(analysis2_cmiinf1M.search('Charles'))
print(analysis2_cmiinf1M.search('Andy'))
```
<br>[back to Exercise 2 ↥](#t11ex2)
<br>[back to top ↥](#t11toc)
**Exercise 3.1:**
```
import json
class student:
def __init__(self, fn, ln, gr):
self.firstName = fn
self.lastName = ln
self.grade = gr
def __str__(self):
return f'{self.firstName} {self.lastName} ({self.grade})'
def __repr__(self):
return f'{self.__class__.__name__}({self.firstName}, {self.lastName}, {self.grade})'
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [st.lstrip('CMI-InfSt.Num ') for st in t]
t = [st.lstrip(' 0123456789') for st in t]
t = [st.rstrip('\n') for st in t]
t = [st.split(' ') for st in t]
studentsList= [student(st[0], st[1], st[2]) for st in t]
studentsDict = []
for st in studentsList:
dictItem ={"First Name": st.firstName, "Last Name": st.lastName, "Grade": st.grade}
studentsDict.append(dictItem)
with open('students-grades.json', 'w') as f:
json.dump(studentsDict , f, indent = 1)
```
**Exercise 3.2:**
```
import csv
class student:
def __init__(self, fn, ln, gr):
self.firstName = fn
self.lastName = ln
self.grade = gr
def __str__(self):
return f'{self.firstName} {self.lastName} ({self.grade})'
def __repr__(self):
return f'{self.__class__.__name__}({self.firstName}, {self.lastName}, {self.grade})'
with open("studentsgrades.txt") as f:
t = f.readlines()
t = [st.lstrip('CMI-InfSt.Num ') for st in t]
t = [st.lstrip(' 0123456789') for st in t]
t = [st.rstrip('\n') for st in t]
t = [st.split(' ') for st in t]
studentsList= [student(st[0], st[1], st[2]) for st in t]
header = ["FName", "LName", "Grade"]
rows = [header]
for st in studentsList:
new_row =[st.firstName, st.lastName, st.grade]
rows.append(new_row)
with open('students-grades.csv', 'w', newline='') as f:
writer = csv.writer(f, delimiter='\t')
writer.writerows(rows)
```
<br>[back to Exercise 3 ↥](#t11ex3)
<br>[back to top ↥](#t11toc)
<div align="center"><img src='http://ufq.unq.edu.ar/sbg/images/top.jpg' alt="SGB logo"> </div>
<h1 align='center'>WORKSHOP “PROGRAMMING ORIENTED TO BIOLOGY”</h1>
<h3 align='center'>(As part of the II “BIOINFORMATICS IN THE CLASSROOM” CONTEST)</h3>
Bioinformatics is a scientific discipline devoted to applying computational methods to the analysis of biological data in order to answer numerous questions. Computational technologies allow, among other things, the analysis of large amounts of data (from experiments, literature, public databases, etc.) in short timeframes, as well as the prediction of the shape or function of different molecules, or the simulation of the behavior of complex biological systems such as cells and organisms.
Bioinformatics can be thought of as a tool for learning biology: its objects of study are biological entities (DNA, proteins, whole organisms, their populations, etc.) analyzed with methods that let us "visualize" different processes of nature. In this way, bioinformatics provides a clear and precise way to perceive biological processes, and brings students tools that integrate multiple kinds of knowledge (logical-mathematical, biological, physical, statistical), generating meaningful and engaging learning.
# Welcome! Are you ready?
We can start with some definitions.
### What does a computer consist of?
A computer is made up of `Hardware` (all the physical parts and elements that compose it) and `Software` (all the instructions that make the Hardware work). The operating system is the computer's main piece of Software, since it provides an interface to the user and lets the rest of the programs interact correctly with the Hardware.
### What do we actually do when we program?
A computer is basically made up of a large number of electrical circuits that can be switched on **(1)** or off **(0)**. By setting different on/off combinations of these circuits, computer users can make the machine perform some action (for example, display something on the screen). That is programming!
Programming languages act as translators between the user and the machine. Instead of learning the machine's difficult language, with its combinations of zeros and ones, you can use a `programming language` to give the computer instructions in a way that is easier to learn and understand. For the computer to understand our orders, an intermediate program called a `compiler` converts the instructions written by the user in a given programming language into the `machine language` of zeros and ones. This means that, as Python programmers (or programmers of any other language), we do not need to understand what the machine does or how it does it; it is enough to understand how to "speak and write" in the programming language.
### Why is it useful to learn to program?
Your smartphone, your PlayStation, or your Smart TV would not be very useful without programs (applications, games, etc.) to make them work. Every time we open a document to do a school assignment, or use WhatsApp to chat with our friends, we are using programs that interpret what we want, such as changing a font color, increasing the font size, or sending a message. These programs pass our orders on to the PC or phone so they get executed. By learning to program you could do a wide variety of things: from writing your own games and mobile apps, combining the use of several programs in sequence, or reading millions of texts without opening a single book… to analyzing the genome of an organism or thousands of protein structures to draw biologically relevant conclusions.
# So then: What is Python?
It is a programming language with a simple yet powerful syntax. It is what is known as a scripting language, which can be executed piece by piece and does not need to be compiled in a separate step. Python has many features, advantages, and uses that we will skip over in this workshop, but which you can read about on the official pages of [Python](https://www.python.org/) and [Python Argentina](https://www.python.org.ar/). For us, the most important reason to choose Python is that you can start writing your own programs really quickly.
### How can you use Python?
It depends on the device you use. On computers, it usually comes pre-installed. If you have a smartphone, several applications install everything Python needs to run. Just search for "Python" in your app store and download one of the available apps. We recommend the following free options:
- For Android phones: QPython 3 (or Pydroid 3).
- For Windows phones: Python 3.
- For iOS: Python 2.5 for iOS
# How do we write Python code inside the apps?
It's very easy! Just open the app and launch the Python `console` (or terminal). A screen will appear with a small prompt, usually `>>>`, where you can enter the different orders (or `commands`) you want to give the computer, always in Python syntax. Each time you press `Enter`, that order is executed and you can type a new command. Watch out! When you leave the console the commands are lost, unless we save them in a file or `script` to run them again later.
### Can scripts be run in the Python apps?
Yes, it is possible! But first they have to be created. Let's see how to do it with QPython3. We open the application's editor from the main screen, then write the script we want to run and save it in a folder (for example, "scripts3") using the corresponding button.
Attention! The name we give our script must always end with the **".py"** extension, as in the photo's example: **"ejemplo.py"**.
Our script is run from the main page; there we press the **"Programs"** button and browse the files to find our script. Selecting it brings up the options shown in the image, from which we must choose and press **"Run"**. Ta-da! The script runs!
### What other ways do we have to run Python?
There are online consoles that let you run Python over the internet as if it were installed on your PC or phone, and they are completely free (well, as long as you have internet!). We recommend two, but you can always look for others:
- [repl.it](https://repl.it/languages/python3)
- [Tutorials Point](https://www.tutorialspoint.com/execute_python_online.php)
### “Even the longest journey begins with the first step” - Lao Tzu
The first step toward writing your first program is to open the Python console, your phone app, or an online console; whatever you have at hand to get started!
### El principio de un comienzo
En todo proceso de aprendizaje los ‘errores’ tienen un rol muy importante: nos plantean nuevos interrogantes, nos acercan a nuevas hipótesis y sobre todo nos dan oportunidades
para seguir aprendiendo. En la programación los ‘errores’ también importan, ¡y mucho! Son una suerte de comunicación con la máquina, que nos advierte cuando no funciona algo
de lo que intentamos hacer.
Existen distintos tipos de errores en Python y con cada tipo de error la máquina nos marca qué es lo que puede estar fallando de nuestro código. Por eso te pedimos que anotes
todos los errores que puedan ir apareciendo durante tu trabajo en el taller o en casa y que lo compartas con nosotros, para charlar entre todos acerca de los conocimientos
que se ponen en juego durante la resolución de estos problemas. Así que, como diría la señorita Ricitos en su Autobús Mágico, **a cometer errores, tomar oportunidades y rockear con Python, que donde termina la carretera comienza la aventura!**
### Tu primer programa
Una forma no muy original de a aprender escribir tu programa es simplemente abrir la consola, escribir lo siguiente y darle `Enter`:
```
print('A rockear con Python!')
```
#### What happened?
**print** is a function that lets you print, or display in the console, everything inside the parentheses and between quotation marks, as in our example. Among other things, this function lets us interact with our program, or with its future user. Congratulations, that was your first Python program!
#### A super-awesome calculator
With Python we can do all kinds of mathematical calculations. It may sound a bit dull, but learning to do these calculations will later help us work with other types of data. Let's try a few. Type in your console:
```
3*5
```
#### What is the result?
Yes, as you can see, the asterisk is the symbol Python uses for multiplication. Now let's try:
```
8/4
```
#### What result do we get? What is the forward slash used for?
In Python you can use the following basic math symbols, which in programming are called operators:
Operator | Description
------------ | -------------
+ | Addition
- | Subtraction
* | Multiplication
/ | Division
If, for example, we take the following operation:
```
5+30*20
```
What does it give? Why? And if we now do:
```
(5+30)*20
```
#### Do we get the same result? Why do you think that happens?
From the two operations above we can draw two important conclusions: in this language, just as in mathematics, operators do not all have the same priority of evaluation, and parentheses let us change that order. Multiplication and division have higher priority than addition and subtraction. This means that in the example 5+30*20, Python first computes 30*20 and then adds 5 to the result of the multiplication. Parentheses let us reorder the priorities: by writing (5+30) we force that operation to run before the multiplication.
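We can check both conclusions directly in the console:

```python
# Multiplication runs first, then the addition
print(5 + 30 * 20)    # shows 605
# The parentheses force the addition to run first
print((5 + 30) * 20)  # shows 700
```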
#### How about trying something more complex? Let's write the following:
```
((4+5)*2)/5
```
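If it helps, the expression can be unfolded step by step to see the order in which Python evaluates it:

```python
inner = 4 + 5         # innermost parentheses first: 9
doubled = inner * 2   # then the multiplication: 18
result = doubled / 5  # and finally the division: 3.6
print(result)         # shows 3.6
```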
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
from flask import Flask, jsonify, render_template
import sqlalchemy
from sqlalchemy import create_engine, func, inspect
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
import pandas as pd
connection_string = "postgres:postgres@localhost:5432/ETL_Rental_DB"
def postgres_create_session(connection_string):
    ##### This function creates everything needed for a successful connection to the db
    ##### and returns a session plus the mapped classes
    # Create an engine to the ETL_Rental_DB database
    engine = create_engine(f'postgresql://{connection_string}', echo=True)
    # reflect an existing database into a new model; reflect the tables
    Base = automap_base()
    Base.prepare(engine, reflect=True)
    # Save references to each table
    Rental = Base.classes.Rental
    Income = Base.classes.Income
    Crime = Base.classes.Crime
    Community_Assets = Base.classes.Community_Assets
    Bridge_Rental_Crime = Base.classes.Bridge_Rental_Crime
    # Create our session (link) from Python to the DB
    session = Session(bind=engine)
    return session, Rental, Income, Crime, Community_Assets, Bridge_Rental_Crime
session, Rental, Income, Crime, Community_Assets, Bridge_Rental_Crime = postgres_create_session(connection_string)
def listings(Rental, count=100):
    #### This function retrieves "count" listings (up to 100) from the Rental class
    # Limit count to the range [1, 100]
    count = 100 if count > 100 else max(count, 1)
    ### Design a query to retrieve the "count" no of listings
    rental_listing = session.query(Rental.id, Rental.title, Rental.price, Rental.image, Rental.url, Rental.bedrooms, Rental.rental_type, Rental.source, Rental.sqft).filter().order_by(Rental.post_published_date).limit(count)
    rental_listing_DF = pd.DataFrame(rental_listing)
    # Convert the DF to a dictionary
    rental_listing_dict = rental_listing_DF.T.to_dict()
    return rental_listing_dict

def comm_services(Community_Assets, count=100):
    #### This function retrieves "count" community services (up to 100) from the Community_Assets class
    # Limit count to the range [1, 100]
    count = 100 if count > 100 else max(count, 1)
    ### Design a query to retrieve the "count" no of services
    service_listing = session.query(Community_Assets.id, Community_Assets.agency_name, Community_Assets.e_mail, Community_Assets.fees, Community_Assets.hours, Community_Assets.application, Community_Assets.category, Community_Assets.address, Community_Assets.crisis_phone).limit(count)
    service_listing_DF = pd.DataFrame(service_listing)
    # Convert the DF to a dictionary
    service_listing_dict = service_listing_DF.T.to_dict()
    return service_listing_dict

def crime_details(Crime, Type="Assault"):
    #### This function retrieves all the crime data from the last year for the given type
    # ["Assault", "Auto Theft", "Break and Enter", "Robbery", "Homicide", "Theft Over"]
    Type = "Assault" if Type not in ['Assault', 'Auto Theft', 'Break and Enter', 'Theft Over',
                                     'Robbery', 'Homicide'] else Type
    ### Design a query to retrieve all the crime data based on type
    crime_listing = session.query(Crime.MCI, Crime.occurrencedate, Crime.reporteddate, Crime.offence, Crime.neighbourhood).filter(Crime.MCI == Type).order_by(Crime.occurrencedate)
    crime_listing_DF = pd.DataFrame(crime_listing)
    # Convert the DF to a dictionary
    crime_listing_dict = crime_listing_DF.T.to_dict()
    return crime_listing_dict

def income_details(Income):
    #### This function retrieves the income details for all FSAs in Toronto
    ### Design a query to retrieve all the income data for all FSAs in Toronto
    fsa_income = session.query(Income.FSA, Income.avg_income)
    fsa_income_DF = pd.DataFrame(fsa_income)
    # Convert the DF to a dictionary
    fsa_income_dict = fsa_income_DF.T.to_dict()
    return fsa_income_dict
listings(Rental, count=100)
comm_services(Community_Assets, count=100)
crime_details(Crime, "Homicide")
income_details(Income)
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Format license keys.
See the [LeetCode](https://leetcode.com/problems/license-key-formatting/) problem page.
<pre>
Now you are given a string S, which represents a software license key which we would like to format. The string S is composed of alphanumerical characters and dashes. The dashes split the alphanumerical characters within the string into groups. (i.e. if there are M dashes, the string is split into M+1 groups). The dashes in the given string are possibly misplaced.
We want each group of characters to be of length K (except for possibly the first group, which could be shorter, but still must contain at least one character). To satisfy this requirement, we will reinsert dashes. Additionally, all the lower case letters in the string must be converted to upper case.
So, you are given a non-empty string S, representing a license key to format, and an integer K. And you need to return the license key formatted according to the description above.
Example 1:
Input: S = "2-4A0r7-4k", K = 4
Output: "24A0-R74K"
Explanation: The string S has been split into two parts, each part has 4 characters.
Example 2:
Input: S = "2-4A0r7-4k", K = 3
Output: "24-A0R-74K"
Explanation: The string S has been split into three parts, each part has 3 characters except the first part as it could be shorter as said above.
Note:
The length of string S will not exceed 12,000, and K is a positive integer.
String S consists only of alphanumerical characters (a-z and/or A-Z and/or 0-9) and dashes(-).
String S is non-empty.
</pre>
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Is the output a string?
* Yes
* Can we change the input string?
* No, you can't modify the input string
* Can we assume the inputs are valid?
* No
* Can we assume this fits memory?
* Yes
## Test Cases
* None -> TypeError
* '---', k=3 -> ''
* '2-4A0r7-4k', k=3 -> '24-A0R-74K'
* '2-4A0r7-4k', k=4 -> '24A0-R74K'
## Algorithm
Refer to the [Solution Notebook](). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
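If you want a hint before peeking at the solution notebook, one possible approach (a sketch only, not necessarily the reference solution) is to strip the dashes, uppercase the rest, and regroup from the right so that only the first group can be shorter than k:

```python
def format_license_key_sketch(license_key, k):
    if license_key is None or k is None:
        raise TypeError('license_key and k must not be None')
    chars = license_key.replace('-', '').upper()
    groups = []
    # Walk backwards in steps of k so only the leading group may be short
    for end in range(len(chars), 0, -k):
        groups.append(chars[max(end - k, 0):end])
    return '-'.join(reversed(groups))
```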
## Code
```
class Solution(object):

    def format_license_key(self, license_key, k):
        # TODO: Implement me
        pass
```
## Unit Test
**The following unit test is expected to fail until you solve the challenge.**
```
# %load test_format_license_key.py
from nose.tools import assert_equal, assert_raises
class TestSolution(object):

    def test_format_license_key(self):
        solution = Solution()
        assert_raises(TypeError, solution.format_license_key, None, None)
        license_key = '---'
        k = 3
        expected = ''
        assert_equal(solution.format_license_key(license_key, k), expected)
        license_key = '2-4A0r7-4k'
        k = 3
        expected = '24-A0R-74K'
        assert_equal(solution.format_license_key(license_key, k), expected)
        license_key = '2-4A0r7-4k'
        k = 4
        expected = '24A0-R74K'
        assert_equal(solution.format_license_key(license_key, k), expected)
        print('Success: test_format_license_key')


def main():
    test = TestSolution()
    test.test_format_license_key()


if __name__ == '__main__':
    main()
```
## Solution Notebook
Review the [Solution Notebook]() for a discussion on algorithms and code solutions.
##### Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Fitting Dirichlet Process Mixture Model Using Preconditioned Stochastic Gradient Langevin Dynamics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Fitting_DPMM_Using_pSGLD"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this notebook, we will demonstrate how to cluster a large number of samples and infer the number of clusters simultaneously by fitting a Dirichlet Process Mixture of Gaussian distribution. We use Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) for inference.
## Table of contents
1. Samples
1. Model
1. Optimization
1. Visualize the result
4.1. Clustered result
4.2. Visualize uncertainty
4.3. Mean and scale of selected mixture component
4.4. Mixture weight of each mixture component
4.5. Convergence of $\alpha$
4.6. Inferred number of clusters over iterations
4.7. Fitting the model using RMSProp
1. Conclusion
---
## 1. Samples
First, we set up a toy dataset. We generate 50,000 random samples from three bivariate Gaussian distributions.
```
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp
plt.style.use('ggplot')
tfd = tfp.distributions
def session_options(enable_gpu_ram_resizing=True):
    """Convenience function which sets common `tf.Session` options."""
    config = tf.ConfigProto()
    config.log_device_placement = True
    if enable_gpu_ram_resizing:
        # `allow_growth=True` makes it possible to connect multiple colabs to your
        # GPU. Otherwise the colab malloc's all GPU ram.
        config.gpu_options.allow_growth = True
    return config


def reset_sess(config=None):
    """Convenience function to create the TF graph and session, or reset them."""
    if config is None:
        config = session_options()
    tf.reset_default_graph()
    global sess
    try:
        sess.close()
    except:
        pass
    sess = tf.InteractiveSession(config=config)
# For reproducibility
rng = np.random.RandomState(seed=45)
tf.set_random_seed(76)
# Precision
dtype = np.float64
# Number of training samples
num_samples = 50000
# Ground truth loc values which we will infer later on. The scale is 1.
true_loc = np.array([[-4, -4],
[0, 0],
[4, 4]], dtype)
true_components_num, dims = true_loc.shape
# Generate training samples from ground truth loc
true_hidden_component = rng.randint(0, true_components_num, num_samples)
observations = (true_loc[true_hidden_component]
+ rng.randn(num_samples, dims).astype(dtype))
# Visualize samples
plt.scatter(observations[:, 0], observations[:, 1], 1)
plt.axis([-10, 10, -10, 10])
plt.show()
```
## 2. Model
Here, we define a Dirichlet Process Mixture of Gaussian distribution with a Symmetric Dirichlet Prior. Throughout the notebook, vector quantities are written in bold. Over $i\in\{1,\ldots,N\}$ samples, the model with a mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is formulated as follows:
$$\begin{align*}
p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\
&\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}})\\
\end{align*}$$
where:
$$\begin{align*}
x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\
z_i &= \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\
&\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\
\boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\frac{\alpha}{K},\cdots,\frac{\alpha}{K}\})\\
\alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\
\boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\
\boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1})\\
\end{align*}$$
Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$ which represents the inferred index of a cluster.
For an ideal Dirichlet Mixture Model, $K$ is set to $\infty$. However, it is known that one can approximate a Dirichlet Mixture Model with a sufficiently large $K$. Note that although we arbitrarily set an initial value of $K$, an optimal number of clusters is also inferred through optimization, unlike a simple Gaussian Mixture Model.
In this notebook, we use a bivariate Gaussian distribution as a mixture component and set $K$ to 30.
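As a quick sanity check on this finite approximation (a standalone NumPy sketch, independent of the TFP model below), a draw from a symmetric Dirichlet with per-component concentration $\alpha/K$ puts almost all of its mass on a handful of components:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, K = 1.0, 30
# A symmetric Dirichlet with concentration alpha/K per component: a draw puts
# almost all of its mass on a few components, which is why it serves as a
# finite approximation to the Dirichlet process prior.
weights = rng.dirichlet(np.full(K, alpha / K))
significant = int((weights > 0.01).sum())
print(significant, "of", K, "weights are non-negligible")
```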
```
reset_sess()
# Upperbound on K
max_cluster_num = 30
# Define trainable variables.
mix_probs = tf.nn.softmax(
tf.Variable(
name='mix_probs',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc = tf.Variable(
name='loc',
initial_value=np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision = tf.nn.softplus(tf.Variable(
name='precision',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha = tf.nn.softplus(tf.Variable(
name='alpha',
initial_value=
np.ones([1], dtype=dtype)))
training_vals = [mix_probs, alpha, loc, precision]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num,
name='rv_sdp')
rv_loc = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc')
rv_precision = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision')
rv_alpha = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha')
# Define mixture model
rv_observations = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc,
scale_diag=precision))
```
## 3. Optimization
We optimize the model with Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which enables us to optimize a model over a large number of samples in a mini-batch gradient descent manner.
To update the parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\, \boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ at the $t\,$th iteration with mini-batch size $M$, the update is sampled as:
$$\begin{align*}
\Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right)
+ \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\
&+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right)\\
\end{align*}$$
In the above equation, $\epsilon _ { t }$ is the learning rate at the $t\,$th iteration and $\log p(\theta_t)$ is the sum of the log prior densities of $\theta$. $G ( \boldsymbol { \theta } _ { t })$ is a preconditioner which adjusts the scale of the gradient of each parameter.
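To make the update rule concrete, here is a minimal NumPy sketch of one pSGLD step for a single parameter vector, with an RMSProp-style diagonal preconditioner (the function name is ours, and the $\sum_\boldsymbol{\theta}\nabla_\theta G$ correction term is omitted for brevity; this is not the TFP implementation):

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, lr, decay=0.99, eps=1e-5,
               rng=np.random.default_rng(0)):
    """One simplified preconditioned SGLD step (diagonal preconditioner)."""
    # RMSProp-style running average of squared gradients
    v = decay * v + (1.0 - decay) * grad_log_post ** 2
    G = 1.0 / (eps + np.sqrt(v))                 # diagonal of G(theta_t)
    # Preconditioned gradient ascent on the log posterior plus injected noise
    noise = np.sqrt(lr * G) * rng.normal(size=theta.shape)
    theta = theta + 0.5 * lr * G * grad_log_post + noise
    return theta, v
```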
```
# Learning rates and decay
starter_learning_rate = 1e-6
end_learning_rate = 1e-10
decay_steps = 1e4
# Number of training steps
training_steps = 10000
# Mini-batch size
batch_size = 20
# Sample size for parameter posteriors
sample_size = 100
```
We will use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior probabilities $p(\theta_t)$ as the loss function for pSGLD.
Note that as specified in the [API of pSGLD](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/StochasticGradientLangevinDynamics), we need to divide the sum of the prior probabilities by sample size $N$.
```
# Placeholder for mini-batch
observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims])
# Define joint log probabilities
# Notice that each prior probability should be divided by num_samples and
# likelihood is divided by batch_size for pSGLD optimization.
log_prob_parts = [
rv_loc.log_prob(loc) / num_samples,
rv_precision.log_prob(precision) / num_samples,
rv_alpha.log_prob(alpha) / num_samples,
rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis]
/ num_samples,
rv_observations.log_prob(observations_tensor) / batch_size
]
joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1)
# Make mini-batch generator
dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\
.shuffle(500).repeat().batch(batch_size)
iterator = tf.compat.v1.data.make_one_shot_iterator(dx)
next_batch = iterator.get_next()
# Define learning rate scheduling
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate,
global_step, decay_steps,
end_learning_rate, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics(
learning_rate=learning_rate,
preconditioner_decay_rate=0.99,
burnin=1500,
data_size=num_samples)
train_op = optimizer_kernel.minimize(-joint_log_prob)
# Arrays to store samples
mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num])
mean_alpha_mtx = np.zeros([training_steps, 1])
mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims])
mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims])
init = tf.global_variables_initializer()
sess.run(init)
start = time.time()
for it in range(training_steps):
    [
        mean_mix_probs_mtx[it, :],
        mean_alpha_mtx[it, 0],
        mean_loc_mtx[it, :, :],
        mean_precision_mtx[it, :, :],
        _
    ] = sess.run([
        *training_vals,
        train_op
    ], feed_dict={
        observations_tensor: sess.run(next_batch)})
elapsed_time_psgld = time.time() - start
print("Elapsed time: {} seconds".format(elapsed_time_psgld))
# Take mean over the last sample_size iterations
mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0)
mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0)
mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0)
mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0)
```
## 4. Visualize the result
### 4.1. Clustered result
First, we visualize the result of clustering.
For assigning each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:
$$\begin{align*}
j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta})
\end{align*}$$
```
loc_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='loc_for_posterior')
precision_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='precision_for_posterior')
mix_probs_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num], name='mix_probs_for_posterior')
# Posterior of z (unnormalized)
unnormalized_posterior = tfd.MultivariateNormalDiag(
    loc=loc_for_posterior, scale_diag=precision_for_posterior)\
    .log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\
    + tf.log(mix_probs_for_posterior[tf.newaxis, ...])
# Posterior of z (normalized over latent states)
posterior = unnormalized_posterior\
    - tf.reduce_logsumexp(unnormalized_posterior, axis=-1)[..., tf.newaxis]
cluster_asgmt = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]})
idxs, count = np.unique(cluster_asgmt, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
def convert_int_elements_to_consecutive_numbers_in(array):
    unique_int_elements = np.unique(array)
    for consecutive_number, unique_int_element in enumerate(unique_int_elements):
        array[array == unique_int_element] = consecutive_number
    return array
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt)))
plt.axis([-10, 10, -10, 10])
plt.show()
```
We can see that an almost equal number of samples is assigned to each of the appropriate clusters, and that the model has successfully inferred the correct number of clusters as well.
### 4.2. Visualize uncertainty
Here, we look at the uncertainty of the clustering result by visualizing it for each sample.
We calculate uncertainty by using entropy:
$$\begin{align*}
\text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)
\end{align*}$$
In pSGLD, we treat the value of a training parameter at each iteration as a sample from its posterior distribution. Thus, we calculate entropy over values from $O$ iterations for each parameter. The final entropy value is calculated by averaging entropies of all the cluster assignments.
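The entropy computation can be sketched in plain NumPy as follows (array names and shapes are our assumptions, mirroring the placeholders used below):

```python
import numpy as np

def mean_entropy(log_posterior):
    # log_posterior: array [N, O, K] of log p(z_i | x_i, theta_l),
    # already normalized over the cluster axis K.
    p = np.exp(log_posterior)
    entropy_per_draw = -np.sum(p * log_posterior, axis=-1)  # shape [N, O]
    return entropy_per_draw.mean(axis=-1)                   # average over O draws
```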
```
# Calculate entropy
posterior_in_exponential = tf.exp(posterior)
uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum(
posterior_in_exponential
* posterior,
axis=1), axis=1)
uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]
})
plt.title('Entropy')
sc = plt.scatter(observations[:, 0],
observations[:, 1],
1,
c=uncertainty_in_entropy_,
cmap=plt.cm.viridis_r)
cbar = plt.colorbar(sc,
fraction=0.046,
pad=0.04,
ticks=[uncertainty_in_entropy_.min(),
uncertainty_in_entropy_.max()])
cbar.ax.set_yticklabels(['low', 'high'])
cbar.set_label('Uncertainty', rotation=270)
plt.show()
```
In the above graph, lower luminance represents higher uncertainty.
We can see that samples near the cluster boundaries have especially high uncertainty, which matches intuition: those samples are genuinely hard to assign to a single cluster.
### 4.3. Mean and scale of selected mixture component
Next, we look at selected clusters' $\mu_j$ and $\sigma_j$.
```
for idx, number_of_samples in zip(idxs, count):
    print(
        'Component id = {}, Number of elements = {}'
        .format(idx, number_of_samples))
    print(
        'Mean loc = {}, Mean scale = {}\n'
        .format(mean_loc_[idx, :], mean_precision_[idx, :]))
```
Again, the inferred $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ are close to the ground truth.
### 4.4 Mixture weight of each mixture component
We also look at inferred mixture weights.
```
plt.ylabel('Mean posterior of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mean_mix_probs_)
plt.show()
```
We see that only a few (three) mixture components have significant weights, while the rest have values close to zero. This also shows the model successfully inferred the correct number of mixture components that constitute the distribution of the samples.
### 4.5. Convergence of $\alpha$
We look at convergence of Dirichlet distribution's concentration parameter $\alpha$.
```
print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0]))
plt.ylabel('Sample value of alpha')
plt.xlabel('Iteration')
plt.plot(mean_alpha_mtx)
plt.show()
```
Considering that a smaller $\alpha$ implies a smaller expected number of clusters in a Dirichlet mixture model, the model appears to be learning the optimal number of clusters over the iterations.
### 4.6. Inferred number of clusters over iterations
We visualize how the inferred number of clusters changes over iterations.
To do so, we infer the number of clusters over the iterations.
```
step = sample_size
num_of_iterations = 50
estimated_num_of_clusters = []
interval = (training_steps - step) // (num_of_iterations - 1)
iterations = np.asarray(range(step, training_steps+1, interval))
for iteration in iterations:
    start_position = iteration - step
    end_position = iteration
    result = sess.run(tf.argmax(
        tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
            loc_for_posterior:
                mean_loc_mtx[start_position:end_position, :],
            precision_for_posterior:
                mean_precision_mtx[start_position:end_position, :],
            mix_probs_for_posterior:
                mean_mix_probs_mtx[start_position:end_position, :]})
    idxs, count = np.unique(result, return_counts=True)
    estimated_num_of_clusters.append(len(count))
plt.ylabel('Number of inferred clusters')
plt.xlabel('Iteration')
plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1))
plt.plot(iterations - 1, estimated_num_of_clusters)
plt.show()
```
Over the iterations, the inferred number of clusters approaches three. Together with the convergence of $\alpha$ to a smaller value, this shows the model is successfully learning the parameters needed to infer an optimal number of clusters.
Interestingly, the inference converged to the correct number of clusters in the early iterations, whereas $\alpha$ converged much later.
### 4.7. Fitting the model using RMSProp
In this section, to see the effectiveness of Monte Carlo sampling scheme of pSGLD, we use RMSProp to fit the model. We choose RMSProp for comparison because it comes without the sampling scheme and pSGLD is based on RMSProp.
```
# Learning rates and decay
starter_learning_rate_rmsprop = 1e-2
end_learning_rate_rmsprop = 1e-4
decay_steps_rmsprop = 1e4
# Number of training steps
training_steps_rmsprop = 50000
# Mini-batch size
batch_size_rmsprop = 20
# Define trainable variables.
mix_probs_rmsprop = tf.nn.softmax(
tf.Variable(
name='mix_probs_rmsprop',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc_rmsprop = tf.Variable(
name='loc_rmsprop',
initial_value=np.zeros([max_cluster_num, dims], dtype)
+ np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision_rmsprop = tf.nn.softplus(tf.Variable(
name='precision_rmsprop',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha_rmsprop = tf.nn.softplus(tf.Variable(
name='alpha_rmsprop',
initial_value=
np.ones([1], dtype=dtype)))
training_vals_rmsprop =\
[mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype)
* alpha_rmsprop / max_cluster_num,
name='rv_sdp_rmsprop')
rv_loc_rmsprop = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc_rmsprop')
rv_precision_rmsprop = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision_rmsprop')
rv_alpha_rmsprop = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha_rmsprop')
# Define mixture model
rv_observations_rmsprop = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc_rmsprop,
scale_diag=precision_rmsprop))
log_prob_parts_rmsprop = [
rv_loc_rmsprop.log_prob(loc_rmsprop),
rv_precision_rmsprop.log_prob(precision_rmsprop),
rv_alpha_rmsprop.log_prob(alpha_rmsprop),
rv_symmetric_dirichlet_process_rmsprop
.log_prob(mix_probs_rmsprop)[..., tf.newaxis],
rv_observations_rmsprop.log_prob(observations_tensor)
* num_samples / batch_size
]
joint_log_prob_rmsprop = tf.reduce_sum(
tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1)
# Define learning rate scheduling
global_step_rmsprop = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate_rmsprop,
global_step_rmsprop, decay_steps_rmsprop,
end_learning_rate_rmsprop, power=1.)
# Set up the optimizer.
optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.99)
train_op_rmsprop = optimizer_kernel_rmsprop.minimize(
    -joint_log_prob_rmsprop, global_step=global_step_rmsprop)
init_rmsprop = tf.global_variables_initializer()
sess.run(init_rmsprop)
start = time.time()
for it in range(training_steps_rmsprop):
[
_
] = sess.run([
train_op_rmsprop
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_rmsprop = time.time() - start
print("RMSProp elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_rmsprop, training_steps_rmsprop))
print("pSGLD elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_psgld, training_steps))
mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\
sess.run(training_vals_rmsprop)
```
Compared to pSGLD, RMSProp requires more iterations, yet its optimization completes much faster.
Next, we look at the clustering result.
```
cluster_asgmt_rmsprop = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: loc_rmsprop_[tf.newaxis, :],
precision_for_posterior: precision_rmsprop_[tf.newaxis, :],
mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]})
idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(
cluster_asgmt_rmsprop)))
plt.axis([-10, 10, -10, 10])
plt.show()
```
RMSProp optimization did not infer the correct number of clusters in our experiment. We also look at the mixture weights.
```
plt.ylabel('MAP inference of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_)
plt.show()
```
We can see that an incorrect number of components carry significant mixture weights.
Although its optimization takes longer, pSGLD, with its Monte Carlo sampling scheme, performed better in our experiment.
## 5. Conclusion
In this notebook, we have described how to cluster a large number of samples and simultaneously infer the number of clusters by fitting a Dirichlet process mixture of Gaussians using pSGLD.
The experiment showed that the model successfully clustered the samples and inferred the correct number of clusters. The Monte Carlo sampling scheme of pSGLD also let us visualize the uncertainty in the result. Beyond clustering the samples, the model recovered the correct parameters of the mixture components. Regarding the relationship between the parameters and the number of inferred clusters, we investigated how the model learns the parameter controlling the number of effective clusters by visualizing the correlation between the convergence of 𝛼 and the number of inferred clusters. Lastly, we fitted the model with RMSProp: an optimizer without a Monte Carlo sampling scheme, it runs considerably faster than pSGLD but clusters less accurately.
Although the toy dataset had only 50,000 samples in two dimensions, the mini-batch optimization used here scales to much larger datasets.
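The mini-batch scheme this scalability relies on can be sketched independently of TensorFlow — a minimal shuffled-batch generator (the array shape mirrors the toy dataset; the batch size is an illustrative assumption):

```
import numpy as np

def minibatches(data, batch_size, rng):
    """Yield shuffled mini-batches; a trailing partial batch is dropped."""
    idx = rng.permutation(len(data))
    for start in range(0, len(data) - batch_size + 1, batch_size):
        yield data[idx[start:start + batch_size]]

rng = np.random.default_rng(0)
samples = rng.normal(size=(50000, 2))   # same shape as the toy dataset
batches = list(minibatches(samples, batch_size=500, rng=rng))
print(len(batches), batches[0].shape)   # 100 (500, 2)
```

Each epoch reshuffles the indices, so successive passes over the data see the batches in a different order — the same behavior `tf.data`-style input pipelines provide.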
| github_jupyter |
# Problem Statement
The Indian Premier League (IPL) is a professional Twenty20 cricket league in India, contested between March and May every year by eight teams representing eight Indian cities. The league was founded by the Board of Control for Cricket in India (BCCI) in 2008, and the IPL has an exclusive window in the ICC Future Tours Programme. In this notebook we perform data analysis using pandas, NumPy, and visualization libraries: which team has won the most matches, which venues have hosted the most matches, and several other relevant questions. The dataset was downloaded from Kaggle and contains around six CSVs, but for the current analysis we use only the match-level data, i.e. matches.csv.
# Import Relevant Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df=pd.read_csv('matches.csv')
df.head()
df.info()
df.describe()
df['city'].nunique()
```
# Cleaning unnecessary data
We won't be using the umpire columns (`umpire1`, `umpire2`, `umpire3`) in this analysis, so we remove them with the `.drop()` method.
```
df.drop(['umpire1','umpire2','umpire3'],axis=1,inplace=True)
df.head()
df['winner'].nunique()
df['team1'].unique()
df['city'].unique()
```
As team names have changed between seasons, we need to normalize them in every relevant column.
```
df.team1.replace({'Rising Pune Supergiants' : 'Rising Pune Supergiant', 'Delhi Daredevils':'Delhi Capitals','Pune Warriors' : 'Rising Pune Supergiant'},inplace=True)
df.team2.replace({'Rising Pune Supergiants' : 'Rising Pune Supergiant', 'Delhi Daredevils':'Delhi Capitals','Pune Warriors' : 'Rising Pune Supergiant'},inplace=True)
df.toss_winner.replace({'Rising Pune Supergiants' : 'Rising Pune Supergiant', 'Delhi Daredevils':'Delhi Capitals','Pune Warriors' : 'Rising Pune Supergiant'},inplace=True)
df.winner.replace({'Rising Pune Supergiants' : 'Rising Pune Supergiant', 'Delhi Daredevils':'Delhi Capitals','Pune Warriors' : 'Rising Pune Supergiant'},inplace=True)
df.city.replace({'Bangalore' : 'Bengaluru'},inplace=True)
df['team1'].unique()
df['team2'].unique()
df['city'].unique()
df.isnull().sum()
```
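The four per-column `replace` calls above can be collapsed into one by applying a shared mapping to a list of columns — a sketch on a toy frame (the `mapping` dict is a subset of the one used above):

```
import pandas as pd

mapping = {'Rising Pune Supergiants': 'Rising Pune Supergiant',
           'Delhi Daredevils': 'Delhi Capitals'}

toy = pd.DataFrame({'team1': ['Delhi Daredevils', 'Mumbai Indians'],
                    'team2': ['Rising Pune Supergiants', 'Delhi Daredevils']})
cols = ['team1', 'team2']
toy[cols] = toy[cols].replace(mapping)
print(toy['team1'].tolist())  # ['Delhi Capitals', 'Mumbai Indians']
```

Keeping the mapping in one place means a new alias only has to be added once, instead of in four separate statements.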
Checking the positions of the null values
```
null_df = df[df.isna().any(axis=1)]
null_df
df.loc[460:470]
df.loc[[461,462,466,468,469,474,476],'city'] = "Dubai"
df.loc[461:480]
df.isnull().sum()
```
Now we will analyse different types of data
```
df['id'].count()
```
# A few relevant tasks
# Checking how many matches ended with a normal result or a tie
So we can see that 756 matches have been played. Now we have to find how many of them ended with a normal result and how many were tied.
```
regular_matches= df[df['result']== 'normal'].count()
regular_matches
```
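The same breakdown can be read directly from `value_counts()` — a sketch on illustrative toy counts matching the totals quoted in this section:

```
import pandas as pd

# Toy stand-in for df['result']: the real column holds one label per match.
results = pd.Series(['normal'] * 743 + ['tie'] * 9 + ['no result'] * 4)
counts = results.value_counts()
print(counts['normal'])  # 743
```

`value_counts()` returns the tally for every label at once, so there is no need for a separate boolean filter per result type.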
So we can see that 13 matches were tied or not played; the remaining 743 matches ended with a normal result.
```
df['city'].unique()
```
Now let's see how many matches have been played in each city.
```
cities=df.groupby('city')[['id']].count()
cities
```
Arranging the data in an organised manner
```
cities.rename(columns={'id':'matches'},inplace=True)
cities = cities.sort_values('matches',ascending=True).reset_index()
cities
```
# Importing Visualization Library
Performing visualization of the number of matches played in each city
```
import seaborn as sns
plt.figure(figsize=(20,10))
plt.title('Number Of Matches Played In Each City')
sns.barplot(x='matches',y='city',data=cities)
```
# Now we will see total matches won by each team.
```
df.winner.unique()
winner_df = df.groupby('winner')[['id']].count()
winner_df = winner_df.sort_values('id', ascending=False).reset_index()
winner_df.rename(columns = {'id':'wins','winner':'Teams'},inplace=True)
winner_df
plt.figure(figsize=(30,20))
plt.xlabel('Teams')
plt.ylabel('Wins')
plt.title('Matches Won By Each Team')
sns.barplot(x='Teams',y='wins',data=winner_df)
```
# Now we will see the season with the most matches
```
season_df = df.groupby('Season')[['id']].count()
season_df = season_df.sort_values('Season', ascending=False).reset_index()
season_df.rename(columns = {'id':'Matches','Season':'Year'},inplace = True)
season_df
plt.figure(figsize=(20,10))
plt.title("Matches Played In Each Season",fontsize=30)
plt.xlabel('Season',fontsize=30)
plt.ylabel('Total Matches',fontsize=30)
plt.xticks(rotation='60')
plt.tick_params(labelsize=20)
sns.barplot(x='Year', y='Matches', data=season_df)
```
# Now we will find out the most preferred decision on winning the toss
```
df.toss_decision.unique()
decision_df = df.groupby('toss_decision')[['id']].count()
decision_df = decision_df.sort_values('id').reset_index()
decision_df.rename(columns={'id':'Total','toss_decision':'Decision'},inplace=True)
decision_df
plt.figure(figsize=(10,10))
plt.title("Preferred Decision",fontsize=30)
plt.xlabel('Decision',fontsize=30)
plt.ylabel('Total',fontsize=30)
plt.tick_params(labelsize=20)
sns.barplot(x='Decision', y= 'Total', data=decision_df)
```
So fielding is the most preferred decision after winning the toss.
# So now we will check which decision is more beneficial
```
field_df = df.loc[(df['toss_winner'] == df['winner']) & (df['toss_decision'] == 'field'), ['id', 'winner','toss_decision']]
field_df.count()
bat_df = df.loc[(df['toss_winner'] == df['winner']) & (df['toss_decision'] == 'bat'), ['id', 'winner','toss_decision']]
bat_df.count()
frames = [bat_df, field_df]
result_df = pd.concat(frames)
result_df = result_df.groupby('toss_decision')[['id']].count()
result_df
```
So we can conclude here that teams choosing to field have a better chance of winning.
```
result_df = result_df.sort_values('id').reset_index()
result_df.rename(columns={'id':'Total','toss_decision':'Decision'},inplace=True)
result_df
plt.figure(figsize=(10,10))
plt.title("Decision Success",fontsize=30)
plt.xlabel('Decision',fontsize=30)
plt.ylabel('Total',fontsize=30)
plt.tick_params(labelsize=20)
sns.barplot(x='Decision', y='Total',data=result_df)
plt.figure(figsize=(10,10))
plt.title("Decision Success",fontsize=30)
plt.xlabel('Decision',fontsize=30)
plt.ylabel('Total',fontsize=30)
plt.tick_params(labelsize=20)
sns.barplot(x='Decision', y='Total', data=decision_df,palette='rainbow')
sns.barplot(x='Decision', y='Total', data=result_df, palette='coolwarm')
plt.legend(['Decision Taken','Decision Proved Right'])
```
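The counts above can also be expressed as win *rates*: group matches by toss decision and take the mean of a boolean "toss winner also won" column. A sketch on illustrative toy data (six made-up matches):

```
import pandas as pd

toy = pd.DataFrame({
    'toss_winner':   ['A', 'B', 'A', 'C', 'B', 'A'],
    'winner':        ['A', 'C', 'A', 'C', 'B', 'B'],
    'toss_decision': ['field', 'field', 'field', 'bat', 'bat', 'field'],
})
toy['toss_won_match'] = toy['toss_winner'] == toy['winner']
rates = toy.groupby('toss_decision')['toss_won_match'].mean()
print(rates['field'])  # 0.5 — 2 of the 4 fielding toss winners also won
```

A rate is easier to compare than raw counts when the two decisions were not chosen equally often.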
# Venue which has hosted the most matches
```
df['venue'].unique()
len(df['venue'].unique())
venue_df = df.groupby('venue')[['id']].count()
venue_df = venue_df.sort_values('id',ascending=False).reset_index()
venue_df.rename(columns={'id':'Total','venue':'Stadium'},inplace=True)
plt.figure(figsize=(20,20))
plt.title("Venues",fontweight='bold',fontsize=30)
plt.tick_params(labelsize=40)
plt.pie(venue_df['Total'],labels=venue_df['Stadium'],textprops={'fontsize': 10});
```
So we can conclude here that most matches have been played at Eden Gardens.
# So now we will find the player with the most Man of the Match awards
```
len(df['player_of_match'].unique())
player_df= df.groupby('player_of_match')[['id']].count()
player_df
player_df=player_df.sort_values('id',ascending=False).reset_index()
player_df = player_df.head(15).copy()
player_df.rename(columns={'id':'Total_Awards','player_of_match':'Man_Of_the_Match'},inplace=True)
player_df
player_df.head(10)
import numpy as np
plt.figure(figsize=(15,10))
plt.title("Top 15 Players with Highest Man Of the Match Titles",fontweight='bold' )
plt.xticks(rotation=90)
plt.yticks(ticks=np.arange(0,25,5))
plt.ylabel('No. of Awards')
plt.xlabel('Players')
sns.barplot(x=player_df['Man_Of_the_Match'],y=player_df['Total_Awards'], alpha=0.6)
```
# Team that has won the IPL trophy the most times.
```
final_df=df.groupby('Season').tail(1).copy()
final_df
final_df = final_df.sort_values('Season')
final_df
final_df.winner.unique()
final_df['winner'].value_counts()
```
We can now conclude that the Mumbai Indians have won the most trophies
```
plt.figure(figsize=(20,10))
plt.title("Season Champions",fontweight='bold',fontsize=20)
plt.xlabel('Teams',fontweight='bold',fontsize=30)
plt.ylabel('Total Seasons',fontweight='bold',fontsize=20)
plt.xticks(rotation='60')
plt.tick_params(labelsize=10)
sns.countplot(x=final_df['winner'],palette='rainbow')
```
# Conclusion
This project made the IPL data more readable; using a few visualization libraries helped us answer questions that are difficult to tackle otherwise. In short, we cleaned and manipulated the required data, then analysed it to complete the tasks above.
## Blood Donor Management System
```
#Tkinter is the standard GUI library for Python
from tkinter import *
from tkinter import ttk
import pymysql
#class creation
class Donor:
def __init__(self,root):
self.root=root
#Title of the application
self.root.title("Blood Donor Management System")
#Setting the Resolution
self.root.geometry("1350x700+0+0")
#Title of the Upper Label. Can change the values of border,relief, fonts, background and foreground colour.
#relief = RAISED/ GROOVE/ RIDGE/ FLAT/ SUNKEN
title=Label(self.root,text="🩸 Blood Donor Management System",bd=10,relief=RAISED,font=("arial",35,"bold"),bg="crimson",fg="white")
#packed, choosing location, fill=X to fillup the X axis area
title.pack(side=TOP,fill=X)
#-------VARIABLES---------
#using String variable because we don't want to use any calculations with these
self.id_var=StringVar()
self.name_var=StringVar()
self.gender_var=StringVar()
self.bg_var=StringVar()
self.num_var=StringVar()
self.email_var=StringVar()
self.dob_var=StringVar()
self.ail_var=StringVar()
self.lastdn_var=StringVar()
self.address_var=StringVar()
self.search_by=StringVar()
self.search_txt=StringVar()
#-------MANAGE FRAME---------
#create frame
#border size, style
Manage_Frame=Frame(self.root,bd=4,relief=RIDGE,bg="crimson")
#placement and resolution of the frame
Manage_Frame.place(x=10,y=82,height=610,width=472)
#title for Manage_Frame
m_title=Label(Manage_Frame,text="Manage Donors",font=("arial",25,"bold"),bg="crimson",fg="white")
#grid method makes table-like structure, How many Rows and Column will be there. padx/pady gives space between the x/y axis
m_title.grid(row=0,columnspan=2,pady=20)
#ID
#label field
lbl_id=Label(Manage_Frame,text="ID No.",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_id.grid(row=1,column=0,pady=5,padx=10,sticky="w")
#text field, using entry method
#textvariable is used to access the variables
txt_id=Entry(Manage_Frame,textvariable=self.id_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_id.grid(row=1,column=1,pady=5,padx=10,sticky="w")
#Name
lbl_name=Label(Manage_Frame,text="Name",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_name.grid(row=2,column=0,pady=5,padx=10,sticky="w")
txt_name=Entry(Manage_Frame,textvariable=self.name_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_name.grid(row=2,column=1,pady=5,padx=10,sticky="w")
#Gender (combobox) - kinda like a option system
lbl_gender=Label(Manage_Frame,text="Gender",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_gender.grid(row=3,column=0,pady=5,padx=10,sticky="w")
#using combobox
combo_gender=ttk.Combobox(Manage_Frame,textvariable=self.gender_var,font=("arial",14,"bold"),state="readonly")
combo_gender['values']=("Male","Female","Other")
combo_gender.grid(row=3,column=1,pady=5,padx=10)
#Blood Group (combobox)
lbl_bg=Label(Manage_Frame,text="Blood Group",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_bg.grid(row=4,column=0,pady=5,padx=10,sticky="w")
combo_bg=ttk.Combobox(Manage_Frame,textvariable=self.bg_var,font=("arial",14,"bold"),state="readonly")
combo_bg['values']=("A+","A-","B+","B-","AB+","AB-","O+","O-")
combo_bg.grid(row=4,column=1,pady=5,padx=10)
#Phone Number
lbl_num=Label(Manage_Frame,text="Phone Number",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_num.grid(row=5,column=0,pady=5,padx=10,sticky="w")
txt_num=Entry(Manage_Frame,textvariable=self.num_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_num.grid(row=5,column=1,pady=5,padx=10,sticky="w")
#Email
lbl_email=Label(Manage_Frame,text="E-mail",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_email.grid(row=6,column=0,pady=5,padx=10,sticky="w")
txt_email=Entry(Manage_Frame,textvariable=self.email_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_email.grid(row=6,column=1,pady=5,padx=10,sticky="w")
#Date of Birth
lbl_dob=Label(Manage_Frame,text="Date of Birth",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_dob.grid(row=7,column=0,pady=5,padx=10,sticky="w")
txt_dob=Entry(Manage_Frame,textvariable=self.dob_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_dob.grid(row=7,column=1,pady=5,padx=10,sticky="w")
#Known Ailments
lbl_ail=Label(Manage_Frame,text="Known Ailments",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_ail.grid(row=8,column=0,pady=5,padx=10,sticky="w")
txt_ail=Entry(Manage_Frame,textvariable=self.ail_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_ail.grid(row=8,column=1,pady=5,padx=10,sticky="w")
#Last Donation
lbl_lastdn=Label(Manage_Frame,text="Last Donation Date",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_lastdn.grid(row=9,column=0,pady=5,padx=10,sticky="w")
txt_lastdn=Entry(Manage_Frame,textvariable=self.lastdn_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_lastdn.grid(row=9,column=1,pady=5,padx=10,sticky="w")
#Address
lbl_address=Label(Manage_Frame,text="Address",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_address.grid(row=10,column=0,pady=5,padx=10,sticky="w")
txt_address=Entry(Manage_Frame,textvariable=self.address_var,font=("arial",15,"bold"),bd=5,relief=GROOVE)
txt_address.grid(row=10,column=1,pady=5,padx=10,sticky="w")
#using text method (we are not using it)
#use the help of self to access Text data
#self.txt_address=Text(Manage_Frame,height=3, width=29)
#self.txt_address.grid(row=10,column=1,pady=5,padx=10,sticky="w")
#-------BUTTON FRAME---------
btn_Frame=Frame(Manage_Frame,bd=4,relief=RIDGE,bg="crimson")
btn_Frame.place(x=12,y=555,width=433)
#command is used to call function
Addbtn=Button(btn_Frame,text="Add",width=11,command=self.add_donors).grid(row=0,column=0,padx=10,pady=5)
upbtn=Button(btn_Frame,text="Update",width=11,command=self.update_data).grid(row=0,column=1,padx=10,pady=5)
delbtn=Button(btn_Frame,text="Delete",width=11,command=self.delete_data).grid(row=0,column=2,padx=10,pady=5)
clrbtn=Button(btn_Frame,text="Clear",width=11,command=self.clear).grid(row=0,column=3,padx=10,pady=5)
#-------DETAIL FRAME---------
Detail_Frame=Frame(self.root,bd=4,relief=RIDGE,bg="crimson")
Detail_Frame.place(x=487,y=82,height=610,width=857)
lbl_search=Label(Detail_Frame,text="Search By",font=("arial",15,"bold"),bg="crimson",fg="white")
lbl_search.grid(row=0,column=0,pady=5,padx=10,sticky="w")
combo_search=ttk.Combobox(Detail_Frame,textvariable=self.search_by,width=13,font=("arial",14,"bold"),state="readonly")
#name must be same as the database
combo_search['values']=("Blood_Group","Last_Donation","Address","Number")
combo_search.grid(row=0,column=1,pady=5,padx=10)
txt_search=Entry(Detail_Frame,textvariable=self.search_txt,width=25,font=("arial",13,"bold"),bd=5,relief=GROOVE)
txt_search.grid(row=0,column=2,pady=5,padx=10,sticky="w")
searchbtn=Button(Detail_Frame,text="Search",width=13,pady=5,command=self.search_data).grid(row=0,column=3,padx=10,pady=5)
showallbtn=Button(Detail_Frame,text="Show All",width=13,pady=5,command=self.fetch_data).grid(row=0,column=4,padx=10,pady=5)
#-------TABLE FRAME---------
Table_Frame=Frame(Detail_Frame,bd=4,relief=RIDGE,bg="crimson")
Table_Frame.place(x=10,y=50,height=545,width=830)
#scrolling method to add scrollbars for x and y axis
scroll_x=Scrollbar(Table_Frame,orient=HORIZONTAL)
scroll_y=Scrollbar(Table_Frame,orient=VERTICAL)
        # ttk.Treeview displays the records as a table-like grid of rows
self.Donor_table=ttk.Treeview(Table_Frame,columns=("id","name","gender","bg","num","email","dob","ail","lastdn","address"),xscrollcommand=scroll_x.set,yscrollcommand=scroll_y.set)
scroll_x.pack(side=BOTTOM,fill=X)
scroll_y.pack(side=RIGHT,fill=Y)
scroll_x.config(command=self.Donor_table.xview)
scroll_y.config(command=self.Donor_table.yview)
self.Donor_table.heading("id",text="ID No.")
self.Donor_table.heading("name",text="Name")
self.Donor_table.heading("gender",text="Gender")
self.Donor_table.heading("bg",text="Blood Group")
self.Donor_table.heading("num",text="Phone No.")
self.Donor_table.heading("email",text="E-mail")
self.Donor_table.heading("dob",text="Date of Birth")
self.Donor_table.heading("ail",text="Ailments")
self.Donor_table.heading("lastdn",text="Last Donation")
self.Donor_table.heading("address",text="Address")
#only show the ones with headings
self.Donor_table["show"]="headings"
#setting the column
self.Donor_table.column("id",width=45)
self.Donor_table.column("name",width=100)
self.Donor_table.column("gender",width=60)
self.Donor_table.column("bg",width=75)
self.Donor_table.column("num",width=75)
self.Donor_table.column("email",width=130)
self.Donor_table.column("dob",width=73)
self.Donor_table.column("ail",width=85)
self.Donor_table.column("lastdn",width=80)
self.Donor_table.column("address",width=130)
#filled the table and expanded it for it cover the whole table
self.Donor_table.pack(fill=BOTH,expand=1)
#button event
self.Donor_table.bind("<ButtonRelease-1>",self.get_cursor)
#to show the table from the database
self.fetch_data()
def add_donors(self):
#connection with database #database name=bdms
con=pymysql.connect(host="localhost",user="root",password="",database="bdms")
#cursor function is used to execute queries
cur=con.cursor()
#sql queries, Table name=donors, Used a tuple to store into variables, get() for accessing
cur.execute("insert into donors values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(self.id_var.get(),
self.name_var.get(),
self.gender_var.get(),
self.bg_var.get(),
self.num_var.get(),
self.email_var.get(),
self.dob_var.get(),
self.ail_var.get(),
self.lastdn_var.get(),
self.address_var.get(),
))
        # Text.get('1.0', END) would return a Text widget's full contents (not used here)
con.commit()
#to show the table after inserting into the database
self.fetch_data()
#clears the manage donors tab
self.clear()
con.close()
def fetch_data(self):
con=pymysql.connect(host="localhost",user="root",password="",database="bdms")
cur=con.cursor()
cur.execute("select * from donors")
        # save the fetched rows into a variable
rows=cur.fetchall()
        # clear the existing rows in the table widget before refilling it
if len(rows)!=0:
self.Donor_table.delete(*self.Donor_table.get_children())
for row in rows:
self.Donor_table.insert('',END,values=row) #passing the values
con.commit()
con.close()
def clear(self):
#will show empty values
self.id_var.set(""),
self.name_var.set(""),
self.gender_var.set(""),
self.bg_var.set(""),
self.num_var.set(""),
self.email_var.set(""),
self.dob_var.set(""),
self.ail_var.set(""),
self.lastdn_var.set(""),
self.address_var.set("")
def get_cursor(self,ev):
cursor_row=self.Donor_table.focus() #focus brings up the row selected by the cursor
contents=self.Donor_table.item(cursor_row) #brings selected the data into the function
row=contents['values'] #fetches the values
#saved into a list and will show in the management tab
self.id_var.set(row[0])
self.name_var.set(row[1]),
self.gender_var.set(row[2]),
self.bg_var.set(row[3]),
#concatenation
self.num_var.set("0"+str(row[4])),
self.email_var.set(row[5]),
self.dob_var.set(row[6]),
self.ail_var.set(row[7]),
self.lastdn_var.set(row[8]),
self.address_var.set(row[9])
def update_data(self):
con=pymysql.connect(host="localhost",user="root",password="",database="bdms")
cur=con.cursor()
#name must be same as the database
cur.execute("update donors set name=%s,gender=%s,blood_group=%s,number=%s,email=%s,dob=%s,ailment=%s,last_donation=%s,address=%s where id=%s",(
self.name_var.get(),
self.gender_var.get(),
self.bg_var.get(),
self.num_var.get(),
self.email_var.get(),
self.dob_var.get(),
self.ail_var.get(),
self.lastdn_var.get(),
self.address_var.get(),
self.id_var.get()
))
con.commit()
self.fetch_data()
self.clear()
con.close()
def delete_data(self):
con=pymysql.connect(host="localhost",user="root",password="",database="bdms")
cur=con.cursor()
cur.execute("delete from donors where id=%s",self.id_var.get())
con.commit()
con.close()
self.fetch_data()
self.clear()
def search_data(self):
con=pymysql.connect(host="localhost",user="root",password="",database="bdms")
cur=con.cursor()
if str(self.search_by.get())=="Blood_Group":
cur.execute("select * from donors where " +str(self.search_by.get())+" LIKE '"+str(self.search_txt.get())+"%'")
else:
cur.execute("select * from donors where " +str(self.search_by.get())+" LIKE '%"+str(self.search_txt.get())+"%'")
rows=cur.fetchall()
if len(rows)!=0:
self.Donor_table.delete(*self.Donor_table.get_children())
for row in rows:
self.Donor_table.insert('',END,values=row)
con.commit()
con.close()
root=Tk()
ob=Donor(root)
#just remove the comment and change the filepath of the image file on your pc
#root.iconbitmap('D:/Blood-Bank-Management-System/blood_drop_no_shadow_icon-icons.com_76229.ico')
root.mainloop()
```
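One caveat about `search_data` above: the query is assembled by string concatenation, so a crafted search string could inject SQL. A hedged sketch of a safer approach — whitelist the column name and bind the pattern as a parameter (the function and names here are illustrative, not part of the app):

```
ALLOWED_COLUMNS = {"Blood_Group", "Last_Donation", "Address", "Number"}

def build_search_query(column, text):
    """Return (sql, params) for a LIKE search over a whitelisted column."""
    if column not in ALLOWED_COLUMNS:
        raise ValueError("unknown search column: " + column)
    # Blood_Group uses a prefix match (as in the app); others match substrings.
    pattern = text + "%" if column == "Blood_Group" else "%" + text + "%"
    # The column name comes from a fixed whitelist; the pattern is bound as a parameter.
    return "select * from donors where " + column + " like %s", (pattern,)

sql, params = build_search_query("Blood_Group", "A")
print(sql, params)  # select * from donors where Blood_Group like %s ('A%',)
```

The result could then be executed as `cur.execute(sql, params)` with pymysql, which escapes the pattern for us.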
<a href="https://colab.research.google.com/github/ElizaLo/Practice-Python/blob/master/Data%20Compression%20Methods/Huffman%20Code/Huffman_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Huffman Coding
## **Solution**
```
import heapq
from collections import Counter, namedtuple
class Node(namedtuple("Node", ["left", "right"])):
    def walk(self, code, acc):  # acc is the code prefix accumulated while descending from the root to a node/leaf
        self.left.walk(code, acc + "0")
        self.right.walk(code, acc + "1")
class Leaf(namedtuple("Leaf", ["char"])):
    def walk(self, code, acc):
        code[self.char] = acc or "0"
```
**Encoding**
```
def huffman_encode(s):
    h = []
    for ch, freq in Counter(s).items():
        h.append((freq, len(h), Leaf(ch)))
    # h = [(freq, Leaf(ch)) for ch, freq in Counter(s).items()]
    heapq.heapify(h)
    count = len(h)
    while len(h) > 1:  # while more than one tree remains in the queue
        freq1, _count1, left = heapq.heappop(h)  # pop the element with the smallest frequency
        freq2, _count2, right = heapq.heappop(h)
        heapq.heappush(h, (freq1 + freq2, count, Node(left, right)))
        count += 1
    code = {}
    if h:
        [(_freq, _count, root)] = h  # the root of the tree
        root.walk(code, "")
    return code
```
**Decoding**
```
def huffman_decode(encoded, code):
    sx = []
    enc_ch = ""
    for ch in encoded:
        enc_ch += ch
        for dec_ch in code:
            if code.get(dec_ch) == enc_ch:
                sx.append(dec_ch)
                enc_ch = ""
                break
    return "".join(sx)
def main():
    s = input()
    code = huffman_encode(s)
    """
    encoded is the encoded version of the string s;
    code maps each character to its codeword
    """
    encoded = "".join(code[ch] for ch in s)
    """
    len(code) is the number of distinct characters in s (the dictionary size);
    len(encoded) is the length of the encoded string
    """
    print("\nDictionary =", len(code), "\nLength of string =", len(encoded))
    # show how each character is encoded
    print("\n")
    for ch in sorted(code):
        print("{}: {}".format(ch, code[ch]))
    print("\nEncoded string: ", encoded)  # the encoded string
    print("\nDecoded string:", huffman_decode(encoded, code))
if __name__ == "__main__":
    main()
```
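As a quick standalone sanity check: for a small string the total encoded length must equal the optimum determined by the character frequencies. This sketch re-declares the classes above so it runs on its own:

```
import heapq
from collections import Counter, namedtuple

class Node(namedtuple("Node", ["left", "right"])):
    def walk(self, code, acc):
        self.left.walk(code, acc + "0")
        self.right.walk(code, acc + "1")

class Leaf(namedtuple("Leaf", ["char"])):
    def walk(self, code, acc):
        code[self.char] = acc or "0"

def huffman_encode(s):
    h = [(freq, i, Leaf(ch)) for i, (ch, freq) in enumerate(Counter(s).items())]
    heapq.heapify(h)
    count = len(h)
    while len(h) > 1:
        freq1, _c1, left = heapq.heappop(h)
        freq2, _c2, right = heapq.heappop(h)
        heapq.heappush(h, (freq1 + freq2, count, Node(left, right)))
        count += 1
    code = {}
    if h:
        h[0][2].walk(code, "")
    return code

s = "abracadabra"                        # frequencies: a=5, b=2, r=2, c=1, d=1
code = huffman_encode(s)
encoded = "".join(code[ch] for ch in s)
# Any optimal prefix code for these frequencies costs 2+4+6+11 = 23 bits in total.
print(len(encoded))  # 23
```

The 23-bit total is the sum of the internal node weights created during the merges, so it is the same for every valid Huffman tree regardless of tie-breaking.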
## Testing
```
import random
import string
def test(n_iter=100):
for i in range(n_iter):
length = random.randint(0, 32)
s = "".join(random.choice(string.ascii_letters) for _ in range(length))
code = huffman_encode(s)
encoded = "".join(code[ch] for ch in s)
assert huffman_decode(encoded, code) == s
```
## Simple code
```
def huffman_encode(s):
    return {ch: ch for ch in s}  # encodes each character as itself (identity mapping)
def main():
    s = input()
    code = huffman_encode(s)
    # encoded is the encoded version of the string s;
    # code maps each character to its codeword
    encoded = "".join(code[ch] for ch in s)
    # len(code) is the number of distinct characters in s (the dictionary size)
    # len(encoded) is the length of the encoded string
    print("\nDictionary =", len(code), "\nLength of string =", len(encoded))
    # show how each character is encoded
    print("\n")
    for ch in sorted(code):
        print("{}: {}".format(ch, code[ch]))
    print("\n", encoded)  # the encoded string
if __name__ == "__main__":
    main()
```
```
import csv
import numpy as np
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
def draw_map_background(m, ax):
ax.set_facecolor('#729FCF')
m.fillcontinents(color='#FFEFDB', ax=ax, zorder=0)
m.drawcounties(ax=ax)
m.drawstates(ax=ax)
m.drawcountries(ax=ax)
m.drawcoastlines(ax=ax)
KM = 1000.
clat = 39.3
clon = -94.7333
wid = 5500 * KM
hgt = 3500 * KM
#m= Basemap(llcrnrlon=-129.098907,llcrnrlat=22.700324,urcrnrlon=-65.553985,urcrnrlat=52.177390,
# resolution='i', projection='lcc', lat_0 = 37.697948, lon_0 = -97.314835)
#m = Basemap(width=wid, height=hgt, rsphere=(6378137.00,6356752.3142),
# resolution='i', area_thresh=2500., projection='lcc',
# lon_0=-110.428794,lat_0=46.998846)
#m = Basemap(projection='lcc',lon_0=-110.428794,lat_0=46.998846,resolution='i',\
# llcrnrx=-800*600,llcrnry=-800*400,
# urcrnrx=+600*900,urcrnry=+450*600)
lats, lons = [], []
county='Park_notownsKDE_notowns'
with open('/Users/usmp/Google Drive/Saidur_Matt_Term_Project/Data_Without_Towns/'+county+'_alldata.csv') as f:
reader = csv.reader(f)
next(reader) # Ignore the header row.
for row in reader:
lat = float(row[15])
lon = float(row[16])
        # collect lat/lon pairs (note: no map-view filtering is actually applied here)
lats.append( lat )
lons.append( lon )
'''
#For Gallatin
min_lat = 44.06338
max_lat = 47.200085
min_lon = -111.891
max_lon = -109.5396
#For Montana
min_lat = 44.36338-1
max_lat = 49.00085+1
min_lon = -116.0491-1
max_lon = -104.0396+1
#For Park
min_lat = 44.96338
max_lat = 46.800085
min_lon = -111.291
max_lon = -109.8396
#For Madison
min_lat = 44.06338
max_lat = 46.200085
min_lon = -112.891
max_lon = -110.9396
'''
min_lat = 44.96338
max_lat = 46.800085
min_lon = -111.291
max_lon = -109.8396
m = Basemap(lon_0=-111.428794,lat_0=44.998846,llcrnrlat = min_lat, urcrnrlat = max_lat, llcrnrlon = min_lon, urcrnrlon=max_lon, resolution='l', fix_aspect = False)
fig = plt.figure()
ax = fig.add_subplot(111)
#print(lats)
#print (lons)
# define custom colormap, white -> nicered, #E6072A = RGB(0.9,0.03,0.16)
#plt.clim([0,100])
# translucent blue scatter plot of epicenters above histogram:
x,y = m(lons, lats)
m.plot(x, y, 'o', markersize=5,zorder=6, markerfacecolor='Red',markeredgecolor="none", alpha=0.33)
draw_map_background(m, ax)
plt.title(county+' Basic Plot')
plt.gcf().set_size_inches(15,15)
plt.show()
#plt.savefig('/Users/usmp/Google Drive/Saidur_Matt_Term_Project/'+county+'CrashData(Basic).jpg')
#plt.close()
print("Done")
```
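A comment in the block above mentions filtering points to the map view, but every row is appended unfiltered. A minimal bounding-box filter sketch (bounds copied from the Park County values above; the point list is illustrative):

```
min_lat, max_lat = 44.96338, 46.800085
min_lon, max_lon = -111.291, -109.8396

def in_view(lat, lon):
    """True when the point falls inside the map's bounding box."""
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

points = [(45.5, -110.5), (47.9, -110.5), (45.5, -108.0)]
visible = [p for p in points if in_view(*p)]
print(visible)  # [(45.5, -110.5)]
```

Applied inside the CSV loop, this would drop out-of-view crashes before plotting rather than relying on matplotlib to clip them.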
```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime
dataset = pd.read_csv(r'C:\Users\ANOVA AJAY PANDEY\Desktop\SEM4\CSE 3021 SIN\proj\stock analysis\Google_Stock_Price_Train.csv',index_col="Date",parse_dates=True)
dataset.tail()
dataset.isna().any()
dataset.info()
dataset['Open'].plot(figsize=(16,6))
# convert column "a" of a DataFrame
dataset["Close"] = dataset["Close"].str.replace(',', '').astype(float)
dataset["Volume"] = dataset["Volume"].str.replace(',', '').astype(float)
# 7 day rolling mean
dataset.rolling(7).mean().tail(20)
dataset['Open'].plot(figsize=(16,6))
dataset.rolling(window=30).mean()['Close'].plot()
dataset['Close: 30 Day Mean'] = dataset['Close'].rolling(window=30).mean()
dataset[['Close','Close: 30 Day Mean']].plot(figsize=(16,6))
# Optional specify a minimum number of periods
dataset['Close'].expanding(min_periods=1).mean().plot(figsize=(16,6))
training_set=dataset['Open']
training_set=pd.DataFrame(training_set)
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
# Creating a data structure with 60 timesteps and 1 output
X_train = []
y_train = []
for i in range(60, 1258):
X_train.append(training_set_scaled[i-60:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
# Part 2 - Building the RNN
# Importing the Keras libraries and packages (use tensorflow.keras consistently;
# mixing `keras` and `tensorflow.keras` imports can cause incompatible layer objects)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
# Part 3 - Making the predictions and visualising the results
# Getting the real stock price of 2017
dataset_test = pd.read_csv(r'C:\Users\ANOVA AJAY PANDEY\Desktop\SEM4\CSE 3021 SIN\proj\stock analysis\Google_Stock_Price_Test.csv',index_col="Date",parse_dates=True)
real_stock_price = dataset_test.iloc[:, 1:2].values
dataset_test.head()
dataset_test.info()
dataset_test["Volume"] = dataset_test["Volume"].str.replace(',', '').astype(float)
test_set=dataset_test['Open']
test_set=pd.DataFrame(test_set)
test_set.info()
# Getting the predicted stock price of 2017
dataset_total = pd.concat((dataset['Open'], dataset_test['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 80):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
predicted_stock_price=pd.DataFrame(predicted_stock_price)
predicted_stock_price.info()
# Visualising the results
plt.plot(real_stock_price, color = 'red', label = 'Real Google Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()
```
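The 60-step windowing loop above can be factored into a small standalone helper. This is a sketch for clarity; the function name and the toy series are illustrative and not part of the original notebook:

```python
import numpy as np

def make_windows(series, window=60):
    """Turn a 1-D array into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i])  # the previous `window` values
        y.append(series[i])             # the value to predict
    # LSTM layers expect inputs shaped (samples, timesteps, features)
    X = np.array(X).reshape(-1, window, 1)
    return X, np.array(y)

# Quick shape check on a toy series of 100 points
toy = np.arange(100, dtype=float)
X, y = make_windows(toy, window=60)
print(X.shape, y.shape)  # (40, 60, 1) (40,)
```

With 100 points and a 60-step window there are 40 training samples, matching the `range(60, 1258)` loop above, which yields 1198 samples from 1258 scaled prices.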
| github_jupyter |

[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/healthcare/de_identification/DeIdentification_model_overview.ipynb)
All the available models are:
| Language | nlu.load() reference | Spark NLP Model reference |
| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| English | med_ner.deid | nerdl_deid |
| English | [en.de_identify](https://nlp.johnsnowlabs.com/2019/06/04/deidentify_rb_en.html) | [deidentify_rb](https://nlp.johnsnowlabs.com/2019/06/04/deidentify_rb_en.html) |
| English | de_identify.rules | deid_rules |
| English | [de_identify.clinical](https://nlp.johnsnowlabs.com/2021/01/29/deidentify_enriched_clinical_en.html) | [deidentify_enriched_clinical](https://nlp.johnsnowlabs.com/2021/01/29/deidentify_enriched_clinical_en.html) |
| English | [de_identify.large](https://nlp.johnsnowlabs.com/2020/08/04/deidentify_large_en.html) | [deidentify_large](https://nlp.johnsnowlabs.com/2020/08/04/deidentify_large_en.html) |
| English | [de_identify.rb](https://nlp.johnsnowlabs.com/2019/06/04/deidentify_rb_en.html) | [deidentify_rb](https://nlp.johnsnowlabs.com/2019/06/04/deidentify_rb_en.html) |
| English | de_identify.rb_no_regex | deidentify_rb_no_regex |
| English | [resolve_chunk.athena_conditions](https://nlp.johnsnowlabs.com/2020/09/16/chunkresolve_athena_conditions_healthcare_en.html) | [chunkresolve_athena_conditions_healthcare](https://nlp.johnsnowlabs.com/2020/09/16/chunkresolve_athena_conditions_healthcare_en.html) |
| English | [resolve_chunk.cpt_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_cpt_clinical_en.html) | [chunkresolve_cpt_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_cpt_clinical_en.html) |
| English | [resolve_chunk.icd10cm.clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_clinical_en.html) | [chunkresolve_icd10cm_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_clinical_en.html) |
| English | [resolve_chunk.icd10cm.diseases_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_diseases_clinical_en.html) | [chunkresolve_icd10cm_diseases_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_diseases_clinical_en.html) |
| English | resolve_chunk.icd10cm.hcc_clinical | chunkresolve_icd10cm_hcc_clinical |
| English | resolve_chunk.icd10cm.hcc_healthcare | chunkresolve_icd10cm_hcc_healthcare |
| English | [resolve_chunk.icd10cm.injuries](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_injuries_clinical_en.html) | [chunkresolve_icd10cm_injuries_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_injuries_clinical_en.html) |
| English | [resolve_chunk.icd10cm.musculoskeletal](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_musculoskeletal_clinical_en.html) | [chunkresolve_icd10cm_musculoskeletal_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_musculoskeletal_clinical_en.html) |
| English | [resolve_chunk.icd10cm.neoplasms](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_neoplasms_clinical_en.html) | [chunkresolve_icd10cm_neoplasms_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10cm_neoplasms_clinical_en.html) |
| English | [resolve_chunk.icd10cm.poison](https://nlp.johnsnowlabs.com/2020/04/28/chunkresolve_icd10cm_poison_ext_clinical_en.html) | [chunkresolve_icd10cm_poison_ext_clinical](https://nlp.johnsnowlabs.com/2020/04/28/chunkresolve_icd10cm_poison_ext_clinical_en.html) |
| English | [resolve_chunk.icd10cm.puerile](https://nlp.johnsnowlabs.com/2020/04/28/chunkresolve_icd10cm_puerile_clinical_en.html) | [chunkresolve_icd10cm_puerile_clinical](https://nlp.johnsnowlabs.com/2020/04/28/chunkresolve_icd10cm_puerile_clinical_en.html) |
| English | resolve_chunk.icd10pcs.clinical | chunkresolve_icd10pcs_clinical |
| English | [resolve_chunk.icdo.clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10pcs_clinical_en.html) | [chunkresolve_icdo_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_icd10pcs_clinical_en.html) |
| English | [resolve_chunk.loinc](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_loinc_clinical_en.html) | [chunkresolve_loinc_clinical](https://nlp.johnsnowlabs.com/2021/04/02/chunkresolve_loinc_clinical_en.html) |
| English | [resolve_chunk.rxnorm.cd](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_cd_clinical_en.html) | [chunkresolve_rxnorm_cd_clinical](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_cd_clinical_en.html) |
| English | resolve_chunk.rxnorm.in | chunkresolve_rxnorm_in_clinical |
| English | resolve_chunk.rxnorm.in_healthcare | chunkresolve_rxnorm_in_healthcare |
| English | [resolve_chunk.rxnorm.sbd](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_sbd_clinical_en.html) | [chunkresolve_rxnorm_sbd_clinical](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_sbd_clinical_en.html) |
| English | [resolve_chunk.rxnorm.scd](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_scd_clinical_en.html) | [chunkresolve_rxnorm_scd_clinical](https://nlp.johnsnowlabs.com/2020/07/27/chunkresolve_rxnorm_scd_clinical_en.html) |
| English | resolve_chunk.rxnorm.scdc | chunkresolve_rxnorm_scdc_clinical |
| English | resolve_chunk.rxnorm.scdc_healthcare | chunkresolve_rxnorm_scdc_healthcare |
| English | [resolve_chunk.rxnorm.xsmall.clinical](https://nlp.johnsnowlabs.com/2020/06/24/chunkresolve_rxnorm_xsmall_clinical_en.html) | [chunkresolve_rxnorm_xsmall_clinical](https://nlp.johnsnowlabs.com/2020/06/24/chunkresolve_rxnorm_xsmall_clinical_en.html) |
| English | [resolve_chunk.snomed.findings](https://nlp.johnsnowlabs.com/2020/06/20/chunkresolve_snomed_findings_clinical_en.html) | [chunkresolve_snomed_findings_clinical](https://nlp.johnsnowlabs.com/2020/06/20/chunkresolve_snomed_findings_clinical_en.html) |
| English | classify.icd10.clinical | classifier_icd10cm_hcc_clinical |
| English | classify.icd10.healthcare | classifier_icd10cm_hcc_healthcare |
| English | [classify.ade.biobert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_biobert_en.html) | [classifierdl_ade_biobert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_biobert_en.html) |
| English | [classify.ade.clinical](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_clinicalbert_en.html) | [classifierdl_ade_clinicalbert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_clinicalbert_en.html) |
| English | [classify.ade.conversational](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_conversational_biobert_en.html) | [classifierdl_ade_conversational_biobert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_ade_conversational_biobert_en.html) |
| English | [classify.gender.biobert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_gender_biobert_en.html) | [classifierdl_gender_biobert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_gender_biobert_en.html) |
| English | [classify.gender.sbert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_gender_sbert_en.html) | [classifierdl_gender_sbert](https://nlp.johnsnowlabs.com/2021/01/21/classifierdl_gender_sbert_en.html) |
| English | classify.pico | classifierdl_pico_biobert |
```
# Install NLU
!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
# Upload add your spark_nlp_for_healthcare.json
```
#### [Deidentify RB](https://nlp.johnsnowlabs.com/2019/06/04/deidentify_rb_en.html)
```
import nlu
data = '''A . Record date : 2093-01-13 , David Hale , M.D . , Name : Hendrickson , Ora MR . # 7194334 Date : 01/13/93 PCP : Oliveira , 25 years-old , Record date : 2079-11-09 . Cocke County Baptist Hospital . 0295 Keats Street'''
nlu.load("med_ner.jsl.wip.clinical en.de_identify").predict(data,output_level = 'sentence')
```
#### [Deidentify (Enriched)](https://nlp.johnsnowlabs.com/2021/01/29/deidentify_enriched_clinical_en.html)
```
data = '''A . Record date : 2093-01-13 , David Hale , M.D . , Name : Hendrickson , Ora MR . # 7194334 Date : 01/13/93 PCP : Oliveira , 25 years-old , Record date : 2079-11-09 . Cocke County Baptist Hospital . 0295 Keats Street'''
nlu.load("med_ner.jsl.wip.clinical en.de_identify.clinical").predict(data,output_level = 'sentence')
```
#### [Deidentify PHI (Large)](https://nlp.johnsnowlabs.com/2020/08/04/deidentify_large_en.html)
```
data = '''Patient AIQING, 25 month years-old , born in Beijing, was transfered to the The Johns Hopkins Hospital.
Phone number: (541) 754-3010. MSW 100009632582 for his colonic polyps. He wants to know the results from them.
He is not taking hydrochlorothiazide and is curious about his blood pressure. He said he has cut his alcohol back to 6 pack once a week.
He has cut back his cigarettes to one time per week. P: Follow up with Dr. Hobbs in 3 months. Gilbert P. Perez, M.D.'''
nlu.load("med_ner.jsl.wip.clinical en.de_identify.large").predict(data,output_level = 'sentence')
```
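For intuition, the rule-based (`rb`) de-identification models match PHI patterns in text and replace them with entity tags. A toy sketch of that idea using plain regular expressions (these patterns are illustrative and far simpler than the actual Spark NLP rule sets):

```python
import re

# Illustrative patterns only; real de-identification rules are far more extensive.
PHI_PATTERNS = [
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "<DATE>"),    # e.g. 2093-01-13
    (re.compile(r"\b\d{2}/\d{2}/\d{2,4}\b"), "<DATE>"),  # e.g. 01/13/93
    (re.compile(r"\(\d{3}\)\s*\d{3}-\d{4}"), "<PHONE>"), # e.g. (541) 754-3010
    (re.compile(r"\b\d{7,}\b"), "<ID>"),                 # long numeric identifiers
]

def deidentify(text):
    # Apply each rule in order, masking matches with the entity tag
    for pattern, tag in PHI_PATTERNS:
        text = pattern.sub(tag, text)
    return text

print(deidentify("Record date : 2093-01-13 , MR # 7194334 , phone (541) 754-3010"))
# Record date : <DATE> , MR # <ID> , phone <PHONE>
```

The pretrained pipelines above do the same kind of substitution, but driven by trained NER models and much richer rule sets.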
| github_jupyter |
```
# change into the root directory of the project
import os
if os.getcwd().split("/")[-1] == "notebooks":
os.chdir('..')
import logging
logger = logging.getLogger()
#import warnings
#warnings.filterwarnings("ignore")
logger.setLevel(logging.INFO)
#logging.disable(logging.WARNING)
#logging.disable(logging.WARN)
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['image.cmap'] = 'plasma'
import scipy
import copy
import tqdm
from neurolib.models.aln import ALNModel
from neurolib.utils.parameterSpace import ParameterSpace
from neurolib.optimize.exploration import BoxSearch
import neurolib.utils.functions as func
import neurolib.utils.devutils as du
import neurolib.utils.brainplot as bp
import neurolib.optimize.exploration.explorationUtils as eu
from neurolib.utils.loadData import Dataset
from neurolib.utils import atlases
atlas = atlases.AutomatedAnatomicalParcellation2()
#plt.style.use("dark")
plt.style.use("paper")
# import matplotlib as mpl
# mpl.rcParams['axes.spines.left'] = True
# mpl.rcParams['axes.spines.right'] = True
# mpl.rcParams['axes.spines.top'] = True
# mpl.rcParams['axes.spines.bottom'] = True
ds = Dataset("gw", fcd=False)
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)
model.params['dt'] = 0.1
model.params['duration'] = 1.0 * 60 * 1000 #ms
models = []
model.params["mue_ext_mean"] = 3.3202829454334535
model.params["mui_ext_mean"] = 3.682451894176651
model.params["b"] = 3.2021806735984186
model.params["tauA"] = 4765.3385276559875
model.params["sigma_ou"] = 0.36802952978628106
model.params["Ke_gl"] = 265.48075753153
models.append(copy.deepcopy(model))
control_params = copy.deepcopy(model.params)
def add_to_models(models, change_par, change_by = 0.5):
model.params = copy.deepcopy(control_params)
model.params[change_par] -= model.params[change_par] * change_by
logging.info(f"Adding {change_par} = {model.params[change_par]}")
models.append(copy.deepcopy(model))
model.params = copy.deepcopy(control_params)
model.params[change_par] += model.params[change_par] * change_by
logging.info(f"Adding {change_par} = {model.params[change_par]}")
models.append(copy.deepcopy(model))
return models
#changepars = ["b", "Ke_gl", "sigma_ou", "signalV"]
changepars = ["b"]
for changepar in changepars:
models = add_to_models(models, changepar)
#labels = ["control", "$-b$", "$+b$", "$-K_{gl}$", "$+K_{gl}$", "$-\\sigma_{ou}$", "$+\\sigma_{ou}$" , "$-v_s$", "$+v_s$"]
labels = ["control", "$-b$", "$+b$"]
```
# Run
```
for model in tqdm.tqdm(models, total=len(models)):
model.run()
involvements = []
all_states = []
all_durations = []
for i in tqdm.tqdm(range(len(models)), total=len(models)):
model = models[i]
states = bp.detectSWs(model)
all_states.append(states)
durations = bp.get_state_lengths(states)
all_durations.append(durations)
involvement = bp.get_involvement(states)
involvements.append(involvement)
#bp.plot_involvement_timeseries(models[0], involvements[0])
# Make a multiple-histogram of data-sets with different length.
#import matplotlib as mpl
#mpl.rc('text', usetex=False)
indices = [1, 0, 2]
colors = ['C1', 'lightgray', 'C0']
plt.figure(figsize=(2.5, 2))
plt.hist([involvements[n]*100 for n in indices], 10, histtype='bar', density=True, rwidth=0.8, edgecolor='k', color=colors, label=[labels[n] for n in indices])
#plt.title('Adaptation')
plt.legend(fontsize=8, loc=1, frameon=False)
plt.xticks([0, 50, 100])
plt.yticks([])
plt.ylabel("Density")
plt.xlabel("Involvement [%]")
plt.xlim([0, 100])
plt.tight_layout()
#plt.savefig("/Users/caglar/Documents/PhD/papers/2020-1-evolutionary-fitting/figures/assets/adaptation/assets/involvement-adaptation.pdf", transparent=True)
plt.show()
for i, model in enumerate(models):
states = bp.detectSWs(model, filter_long=True)
bp.plot_states_timeseries(model, states, title=None, labels=False)
#plt.savefig(f"/Users/caglar/Documents/PhD/papers/2020-1-evolutionary-fitting/figures/assets/adaptation/assets/states-{labels[i]}.pdf", transparent=True)
plt.show()
#bp.plot_state_durations(model, states)
#plt.show()
import dill
for i, model in enumerate(models):
fname = f"data/models/effect-of-adaptation-{labels[i]}.dill"
print(fname)
dill.dump(model, open(fname, "wb+"))
```
| github_jupyter |
```
import os
import sys
import itertools
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.regression.linear_model as sm
from scipy import io
from mpl_toolkits.axes_grid1 import make_axes_locatable
path_root = os.environ.get('DECIDENET_PATH')
path_code = os.path.join(path_root, 'code')
if path_code not in sys.path:
sys.path.append(path_code)
from dn_utils.behavioral_models import load_behavioral_data
%matplotlib inline
# Directory for PPI analysis
path_out = os.path.join(path_root, 'data/main_fmri_study/derivatives/ppi')
path_timeries = os.path.join(path_out, 'timeseries')
# Load behavioral data
path_beh = os.path.join(path_root, 'data/main_fmri_study/sourcedata/behavioral')
beh, meta = load_behavioral_data(path=path_beh, verbose=False)
n_subjects, n_conditions, n_trials, _ = beh.shape
# Load neural & BOLD timeseries
data = io.loadmat(os.path.join(
path_timeries,
'timeseries_pipeline-24HMPCSFWM_atlas-metaROI_neural.mat'))
timeseries_neural_aggregated = data['timeseries_neural_aggregated']
timeseries_denoised_aggregated = np.load(os.path.join(
path_timeries,
'timeseries_pipeline-24HMPCSFWM_atlas-metaROI_bold.npy'))
downsamples = data['k'].flatten()
# Acquisition parameters
_, _, n_volumes, n_rois = timeseries_denoised_aggregated.shape
# Input data shape
print('timeseries_neural_aggregated.shape', timeseries_neural_aggregated.shape)
print('timeseries_denoised_aggregated.shape', timeseries_denoised_aggregated.shape)
mpl.rcParams.update({"font.size": 15})
fc_rest = np.zeros((n_subjects, n_conditions, n_rois, n_rois))
for i in range(n_subjects):
for j in range(n_conditions):
fc_rest[i, j] = np.corrcoef(timeseries_denoised_aggregated[i, j].T)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15, 15))
im = [[None, None], [None, None]]
im[0][0] = ax[0][0].imshow(fc_rest[:, 0, :, :].mean(axis=0), clim=[-1, 1], cmap='RdBu_r')
im[0][1] = ax[0][1].imshow(fc_rest[:, 1, :, :].mean(axis=0), clim=[-1, 1], cmap='RdBu_r')
im[1][0] = ax[1][0].imshow(fc_rest[:, 0, :, :].std(axis=0), clim=[0, .2], cmap='RdBu_r')
im[1][1] = ax[1][1].imshow(fc_rest[:, 1, :, :].std(axis=0), clim=[0, .2], cmap='RdBu_r')
for i, j in itertools.product([0, 1], repeat=2):
divider = make_axes_locatable(ax[i][j])
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(im[i][j], cax=cax)
ax[0][0].set_title("Reward-seeking")
ax[0][1].set_title("Punishment-avoiding")
ax[0][0].set_ylabel("Mean connectivity")
ax[1][0].set_ylabel("Variability of connectivity")
plt.tight_layout()
```
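The functional-connectivity step above is just a correlation matrix over ROI time series. A minimal self-contained sketch of that computation, with random data standing in for the denoised BOLD signals:

```python
import numpy as np

rng = np.random.default_rng(0)
n_volumes, n_rois = 300, 8  # illustrative stand-ins for the acquisition parameters
timeseries = rng.standard_normal((n_volumes, n_rois))

# np.corrcoef treats rows as variables, so transpose (time, roi) -> (roi, time)
fc = np.corrcoef(timeseries.T)

print(fc.shape)                       # (8, 8)
print(np.allclose(np.diag(fc), 1.0))  # each ROI correlates perfectly with itself
print(np.allclose(fc, fc.T))          # correlation matrices are symmetric
```

Averaging such matrices over subjects, as in the `fc_rest[:, 0, :, :].mean(axis=0)` call above, gives the group-mean connectivity maps shown in the figure.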
| github_jupyter |
```
from datascience import *
%matplotlib inline
path_data = '../../../../data/'
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import numpy as np
```
### Deflategate ###
On January 18, 2015, the Indianapolis Colts and the New England Patriots played the American Football Conference (AFC) championship game to determine which of those teams would play in the Super Bowl. After the game, there were allegations that the Patriots' footballs had not been inflated as much as the regulations required; they were softer. This could be an advantage, as softer balls might be easier to catch.
For several weeks, the world of American football was consumed by accusations, denials, theories, and suspicions: the press labeled the topic Deflategate, after the Watergate political scandal of the 1970's. The National Football League (NFL) commissioned an independent analysis. In this example, we will perform our own analysis of the data.
Pressure is often measured in pounds per square inch (psi). NFL rules stipulate that game balls must be inflated to have pressures in the range 12.5 psi and 13.5 psi. Each team plays with 12 balls. Teams have the responsibility of maintaining the pressure in their own footballs, but game officials inspect the balls. Before the start of the AFC game, all the Patriots' balls were at about 12.5 psi. Most of the Colts' balls were at about 13.0 psi. However, these pre-game data were not recorded.
During the second quarter, the Colts intercepted a Patriots ball. On the sidelines, they measured the pressure of the ball and determined that it was below the 12.5 psi threshold. Promptly, they informed officials.
At half-time, all the game balls were collected for inspection. Two officials, Clete Blakeman and Dyrol Prioleau, measured the pressure in each of the balls.
Here are the data. Each row corresponds to one football. Pressure is measured in psi. The Patriots ball that had been intercepted by the Colts was not inspected at half-time. Nor were most of the Colts' balls – the officials simply ran out of time and had to relinquish the balls for the start of second half play.
```
football = Table.read_table(path_data + 'deflategate.csv')
football.show()
```
For each of the 15 balls that were inspected, the two officials got different results. It is not uncommon that repeated measurements on the same object yield different results, especially when the measurements are performed by different people. So we will assign to each ball the average of the two measurements made on that ball.
```
football = football.with_column(
'Combined', (football.column(1)+football.column(2))/2
).drop(1, 2)
football.show()
```
At a glance, it seems apparent that the Patriots' footballs were at a lower pressure than the Colts' balls. Because some deflation is normal during the course of a game, the independent analysts decided to calculate the drop in pressure from the start of the game. Recall that the Patriots' balls had all started out at about 12.5 psi, and the Colts' balls at about 13.0 psi. Therefore the drop in pressure for the Patriots' balls was computed as 12.5 minus the pressure at half-time, and the drop in pressure for the Colts' balls was 13.0 minus the pressure at half-time.
We can calculate the drop in pressure for each football by first setting up an array of the starting values. For this we will need an array consisting of 11 values, each of which is 12.5, and another consisting of four values, each of which is 13. We will use the NumPy function `np.ones`, which takes a count as its argument and returns an array of that many elements, each of which is 1.
```
np.ones(11)
patriots_start = 12.5 * np.ones(11)
colts_start = 13 * np.ones(4)
start = np.append(patriots_start, colts_start)
start
```
The drop in pressure for each football is the difference between the starting pressure and the combined pressure measurement.
```
drop = start - football.column('Combined')
football = football.with_column('Pressure Drop', drop)
football.show()
```
It looks as though the Patriots' drops were larger than the Colts'. Let's look at the average drop in each of the two groups. We no longer need the combined scores.
```
football = football.drop('Combined')
football.group('Team', np.average)
```
The average drop for the Patriots was about 1.2 psi compared to about 0.47 psi for the Colts.
The question now is why the Patriots' footballs had a larger drop in pressure, on average, than the Colts footballs. Could it be due to chance?
### The Hypotheses ###
How does chance come in here? Nothing was being selected at random. But we can make a chance model by hypothesizing that the 11 Patriots' drops look like a random sample of 11 out of all the 15 drops, with the Colts' drops being the remaining four. That's a completely specified chance model under which we can simulate data. So it's the **null hypothesis**.
For the alternative, we can take the position that the Patriots' drops are too large, on average, to resemble a random sample drawn from all the drops.
### Test Statistic ###
A natural statistic is the difference between the two average drops, which we will compute as "average drop for Patriots - average drop for Colts". Large values of this statistic will favor the alternative hypothesis.
```
observed_means = football.group('Team', np.average).column(1)
observed_difference = observed_means.item(1) - observed_means.item(0)
observed_difference
```
This positive difference reflects the fact that the average drop in pressure of the Patriots' footballs was greater than that of the Colts.
The function `difference_of_means` takes three arguments:
- the name of the table of data
- the label of the column containing the numerical variable whose average is of interest
- the label of the column containing the two group labels
It returns the difference between the means of the two groups.
We have defined this function in an earlier section. The definition is repeated here for ease of reference.
```
def difference_of_means(table, label, group_label):
reduced = table.select(label, group_label)
means_table = reduced.group(group_label, np.average)
means = means_table.column(1)
return means.item(1) - means.item(0)
difference_of_means(football, 'Pressure Drop', 'Team')
```
Notice that the difference has been calculated as Patriots' drops minus Colts' drops as before.
### Predicting the Statistic Under the Null Hypothesis ###
If the null hypothesis were true, then it shouldn't matter which footballs are labeled Patriots and which are labeled Colts. The distributions of the two sets of drops would be the same. We can simulate this by randomly shuffling the team labels.
```
shuffled_labels = football.sample(with_replacement=False).column(0)
original_and_shuffled = football.with_column('Shuffled Label', shuffled_labels)
original_and_shuffled.show()
```
How do all the group averages compare?
```
difference_of_means(original_and_shuffled, 'Pressure Drop', 'Shuffled Label')
difference_of_means(original_and_shuffled, 'Pressure Drop', 'Team')
```
The two teams' average drop values are closer when the team labels are randomly assigned to the footballs than they were for the two groups actually used in the game.
### Permutation Test ###
It's time for a step that is now familiar. We will do repeated simulations of the test statistic under the null hypothesis, by repeatedly permuting the footballs and assigning random sets to the two teams.
Once again, we will use the function `one_simulated_difference` defined in an earlier section as follows.
```
def one_simulated_difference(table, label, group_label):
shuffled_labels = table.sample(with_replacement = False
).column(group_label)
shuffled_table = table.select(label).with_column(
'Shuffled Label', shuffled_labels)
return difference_of_means(shuffled_table, label, 'Shuffled Label')
```
We can now use this function to create an array `differences` that contains 10,000 values of the test statistic simulated under the null hypothesis.
```
differences = make_array()
repetitions = 10000
for i in np.arange(repetitions):
new_difference = one_simulated_difference(football, 'Pressure Drop', 'Team')
differences = np.append(differences, new_difference)
```
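The same permutation test can be sketched with plain NumPy, independent of the `datascience` library. Note the drop values below are synthetic stand-ins, not the actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: 11 "Patriots" drops and 4 "Colts" drops
drops = np.concatenate([rng.normal(1.2, 0.3, 11), rng.normal(0.5, 0.3, 4)])
n_patriots = 11

# Observed statistic: Patriots' average drop minus Colts' average drop
observed = drops[:n_patriots].mean() - drops[n_patriots:].mean()

repetitions = 10_000
simulated = np.empty(repetitions)
for i in range(repetitions):
    shuffled = rng.permutation(drops)  # relabel the footballs at random
    simulated[i] = shuffled[:n_patriots].mean() - shuffled[n_patriots:].mean()

# Empirical P-value: chance, under the null, of a difference at least this large
p_value = np.count_nonzero(simulated >= observed) / repetitions
print(p_value)
```

The logic is identical to the `one_simulated_difference` loop above; only the table bookkeeping is replaced by array slicing.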
### Conclusion of the Test ###
To calculate the empirical P-value, it's important to recall the alternative hypothesis, which is that the Patriots' drops are too large to be the result of chance variation alone.
Larger drops for the Patriots favor the alternative hypothesis. So the P-value is the chance (computed under the null hypothesis) of getting a test statistic equal to our observed value of 0.733522727272728 or larger.
```
empirical_P = np.count_nonzero(differences >= observed_difference) / repetitions
empirical_P
```
That's a pretty small P-value. To visualize this, here is the empirical distribution of the test statistic under the null hypothesis, with the observed statistic marked on the horizontal axis.
```
Table().with_column('Difference Between Group Averages', differences).hist()
plots.scatter(observed_difference, 0, color='red', s=30)
plots.title('Prediction Under the Null Hypothesis')
print('Observed Difference:', observed_difference)
print('Empirical P-value:', empirical_P)
```
As in previous examples of this test, the bulk of the distribution is centered around 0. Under the null hypothesis, the Patriots' drops are a random sample of all 15 drops, and therefore so are the Colts'. Therefore the two sets of drops should be about equal on average, and therefore their difference should be around 0.
But the observed value of the test statistic is quite far away from the heart of the distribution. By any reasonable cutoff for what is "small", the empirical P-value is small. So we end up rejecting the null hypothesis of randomness, and conclude that the Patriots drops were too large to reflect chance variation alone.
The independent investigative team analyzed the data in several different ways, taking into account the laws of physics. The final report said,
> "[T]he average pressure drop of the Patriots game balls exceeded the average pressure drop of the Colts balls by 0.45 to 1.02 psi, depending on various possible assumptions regarding the gauges used, and assuming an initial pressure of 12.5 psi for the Patriots balls and 13.0 for the Colts balls."
>
> -- *Investigative report commissioned by the NFL regarding the AFC Championship game on January 18, 2015*
Our analysis shows an average pressure drop of about 0.73 psi, which is close to the center of the interval "0.45 to 1.02 psi" and therefore consistent with the official analysis.
Remember that our test of hypotheses does not establish the reason *why* the difference is not due to chance. Establishing causality is usually more complex than running a test of hypotheses.
But the all-important question in the football world was about causation: the question was whether the excess drop of pressure in the Patriots' footballs was deliberate. If you are curious about the answer given by the investigators, here is the [full report](https://nfllabor.files.wordpress.com/2015/05/investigative-and-expert-reports-re-footballs-used-during-afc-championsh.pdf).
| github_jupyter |
```
# This is Main function.
# Extracting streaming data from Twitter, pre-processing, and loading into MySQL
import credentials # Import api/access_token keys from credentials.py
import setting # Import related setting constants from setting.py
import re
import tweepy
import mysql.connector
import pandas as pd
from textblob import TextBlob
# Streaming With Tweepy
# http://docs.tweepy.org/en/v3.4.0/streaming_how_to.html#streaming-with-tweepy
# Override tweepy.StreamListener to add logic to on_status
class MyStreamListener(tweepy.StreamListener):
'''
Tweets are known as “status updates”. So the Status class in tweepy has properties describing the tweet.
https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object.html
'''
def on_status(self, status):
'''
Extract info from tweets
'''
if status.retweeted:
# Avoid retweeted info, and only original tweets will be received
return True
# Extract attributes from each tweet
id_str = status.id_str
created_at = status.created_at
text = deEmojify(status.text) # Pre-processing the text
sentiment = TextBlob(text).sentiment
polarity = sentiment.polarity
subjectivity = sentiment.subjectivity
user_created_at = status.user.created_at
user_location = deEmojify(status.user.location)
user_description = deEmojify(status.user.description)
user_followers_count = status.user.followers_count
longitude = None
latitude = None
if status.coordinates:
longitude = status.coordinates['coordinates'][0]
latitude = status.coordinates['coordinates'][1]
retweet_count = status.retweet_count
favorite_count = status.favorite_count
print(status.text)
print("Long: {}, Lati: {}".format(longitude, latitude))
# Store all data in MySQL
if mydb.is_connected():
mycursor = mydb.cursor()
sql = "INSERT INTO {} (id_str, created_at, text, polarity, subjectivity, user_created_at, user_location, user_description, user_followers_count, longitude, latitude, retweet_count, favorite_count) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)".format(setting.TABLE_NAME)
val = (id_str, created_at, text, polarity, subjectivity, user_created_at, user_location, \
user_description, user_followers_count, longitude, latitude, retweet_count, favorite_count)
mycursor.execute(sql, val)
mydb.commit()
mycursor.close()
def on_error(self, status_code):
'''
Since the Twitter API has rate limits, stop scraping data once the threshold is exceeded.
'''
if status_code == 420:
# return False to disconnect the stream
return
def clean_tweet(self, tweet):
'''
Use simple regex statements to clean tweet text by removing links and special characters
'''
return ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", tweet).split())
def deEmojify(text):
'''
Strip all non-ASCII characters to remove emoji characters
'''
if text:
return text.encode('ascii', 'ignore').decode('ascii')
else:
return None
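# Illustrative check of what deEmojify actually does: encode('ascii', 'ignore')
# strips *every* non-ASCII character -- not just emoji, accented letters are
# removed as well. The assert below costs nothing and documents that behaviour.
assert 'caf\u00e9 \U0001F600!'.encode('ascii', 'ignore').decode('ascii') == 'caf !'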
mydb = mysql.connector.connect(
host="localhost",
user="root",
passwd="",
database="Twitterdb",
charset = 'utf8'
)
if mydb.is_connected():
'''
Check if this table exists. If not, create a new one.
'''
mycursor = mydb.cursor()
mycursor.execute("""
SELECT COUNT(*)
FROM information_schema.tables
WHERE table_name = '{0}'
""".format(setting.TABLE_NAME))
if mycursor.fetchone()[0] != 1:
mycursor.execute("CREATE TABLE {} ({})".format(setting.TABLE_NAME, setting.TABLE_ATTRIBUTES))
mydb.commit()
mycursor.close()
auth = tweepy.OAuthHandler(credentials.API_KEYS, credentials.API_SECRET_KEYS)
auth.set_access_token(credentials.ACCESS_TOKEN, credentials.ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth = api.auth, listener = myStreamListener)
myStream.filter(languages=["en"], track = setting.TRACK_WORDS)
# Close the MySQL connection when finished.
# Note: this line is normally unreachable, because the stream listener
# does not stop on its own; press the STOP button to end the process.
mydb.close()
```
<a href="https://colab.research.google.com/github/ajayjg/omipynb/blob/master/omSpeech.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
OM **IPYNB**
---
#Imports
```
!pip install Keras==2.2.0
!pip install pandas==0.22.0
!pip install pandas-ml==0.5.0
!pip install "tensorflow>=1.14.0"
!pip install "tensorflow-gpu>=1.14.0"
!pip install scikit-learn==0.21
!pip install wget==3.2
```
#Download Dataset
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import wget
import tarfile
from shutil import rmtree
DATASET_URL = 'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz'
ARCHIVE = os.path.basename(DATASET_URL)
wget.download(DATASET_URL)
if os.path.exists('data'):
rmtree('data')
os.makedirs('data/train')
with tarfile.open(ARCHIVE, 'r:gz') as tar:
tar.extractall(path='data/train')
os.remove(ARCHIVE)
```
# Training
```
%%file train.py
import numpy as np
# from sklearn.preprocessing import Imputer
# from sklearn.metrics import confusion_matrix
from pandas_ml import ConfusionMatrix
from sklearn.metrics import jaccard_similarity_score
#from keras.callbacks import Callback
import hashlib
import math
import os.path
import random
import re
import sys
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow.compat.v1 as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import Callback
import argparse
import os
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.callbacks import TensorBoard
tf.compat.v1.disable_eager_execution()
def log_loss(y_true, y_pred, eps=1e-12):
y_pred = np.clip(y_pred, eps, 1. - eps)
ce = -(np.sum(y_true * np.log(y_pred), axis=1))
mce = ce.mean()
return mce
class ConfusionMatrixCallback(Callback):
def __init__(self, validation_data, validation_steps, wanted_words, all_words,
label2int):
self.validation_data = validation_data
self.validation_steps = validation_steps
self.wanted_words = wanted_words
self.all_words = all_words
self.label2int = label2int
self.int2label = {v: k for k, v in label2int.items()}
with open('confusion_matrix.txt', 'w'):
pass
with open('wanted_confusion_matrix.txt', 'w'):
pass
def accuracies(self, confusion_val):
accuracies = []
for i in range(confusion_val.shape[0]):
num = confusion_val[i, :].sum()
if num:
accuracies.append(confusion_val[i, i] / num)
else:
accuracies.append(0.0)
accuracies = np.float32(accuracies)
return accuracies
def accuracy(self, confusion_val):
num_correct = 0
for i in range(confusion_val.shape[0]):
num_correct += confusion_val[i, i]
accuracy = float(num_correct) / confusion_val.sum()
return accuracy
def on_epoch_end(self, epoch, logs=None):
y_true, y_pred = [], []
for i in range(self.validation_steps):
X_batch, y_true_batch = next(self.validation_data)
y_pred_batch = self.model.predict(X_batch)
y_true.extend(y_true_batch)
y_pred.extend(y_pred_batch)
y_true = np.float32(y_true)
y_pred = np.float32(y_pred)
val_loss = log_loss(y_true, y_pred)
# map integer labels to strings
y_true = list(y_true.argmax(axis=-1))
y_pred = list(y_pred.argmax(axis=-1))
y_true = [self.int2label[y] for y in y_true]
y_pred = [self.int2label[y] for y in y_pred]
confusion = ConfusionMatrix(y_true, y_pred)
accs = self.accuracies(confusion._df_confusion.values)
acc = self.accuracy(confusion._df_confusion.values)
# same for wanted words
y_true = [y if y in self.wanted_words else '_unknown_' for y in y_true]
y_pred = [y if y in self.wanted_words else '_unknown_' for y in y_pred]
wanted_words_confusion = ConfusionMatrix(y_true, y_pred)
wanted_accs = self.accuracies(wanted_words_confusion._df_confusion.values)
acc_line = ('\n[%03d]: val_categorical_accuracy: %.2f, '
'val_mean_categorical_accuracy_wanted: %.2f') % (
epoch, acc, wanted_accs.mean()) # noqa
with open('confusion_matrix.txt', 'a') as f:
f.write('%s\n' % acc_line)
f.write(confusion.to_dataframe().to_string())
with open('wanted_confusion_matrix.txt', 'a') as f:
f.write('%s\n' % acc_line)
f.write(wanted_words_confusion.to_dataframe().to_string())
logs['val_loss'] = val_loss
logs['val_categorical_accuracy'] = acc
logs['val_mean_categorical_accuracy_all'] = accs.mean()
logs['val_mean_categorical_accuracy_wanted'] = wanted_accs.mean()
def data_gen(audio_processor,
sess,
batch_size=128,
background_frequency=0.3,
background_volume_range=0.15,
foreground_frequency=0.3,
foreground_volume_range=0.15,
time_shift_frequency=0.3,
time_shift_range=[-500, 0],
mode='validation',
flip_frequency=0.0,
silence_volume_range=0.3):
ep_count = 0
offset = 0
if mode != 'training':
background_frequency = 0.0
background_volume_range = 0.0
foreground_frequency = 0.0
foreground_volume_range = 0.0
time_shift_frequency = 0.0
time_shift_range = [0, 0]
flip_frequency = 0.0
# silence_volume_range: stays the same for validation
while True:
X, y = audio_processor.get_data(
how_many=batch_size,
offset=0 if mode == 'training' else offset,
background_frequency=background_frequency,
background_volume_range=background_volume_range,
foreground_frequency=foreground_frequency,
foreground_volume_range=foreground_volume_range,
time_shift_frequency=time_shift_frequency,
time_shift_range=time_shift_range,
mode=mode,
sess=sess,
flip_frequency=flip_frequency,
silence_volume_range=silence_volume_range)
offset += batch_size
if offset > audio_processor.set_size(mode) - batch_size:
offset = 0
print('\n[Ep:%03d: %s-mode]' % (ep_count, mode))
ep_count += 1
yield X, y
def tf_roll(a, shift, a_len=16000):
# https://stackoverflow.com/questions/42651714/vector-shift-roll-in-tensorflow
def roll_left(a, shift, a_len):
shift %= a_len
rolled = tf.concat([a[a_len - shift:, :], a[:a_len - shift, :]], axis=0)
return rolled
def roll_right(a, shift, a_len):
shift = -shift
shift %= a_len
rolled = tf.concat([a[shift:, :], a[:shift, :]], axis=0)
return rolled
# https://stackoverflow.com/questions/35833011/how-to-add-if-condition-in-a-tensorflow-graph
return tf.cond(
tf.greater_equal(shift, 0),
true_fn=lambda: roll_left(a, shift, a_len),
false_fn=lambda: roll_right(a, shift, a_len))
MAX_NUM_WAVS_PER_CLASS = 2**27 - 1 # ~134M
SILENCE_LABEL = '_silence_'
SILENCE_INDEX = 0
UNKNOWN_WORD_LABEL = '_unknown_'
UNKNOWN_WORD_INDEX = 1
BACKGROUND_NOISE_DIR_NAME = '_background_noise_'
RANDOM_SEED = 59185
def prepare_words_list(wanted_words):
"""Prepends common tokens to the custom word list."""
return [SILENCE_LABEL, UNKNOWN_WORD_LABEL] + wanted_words
def which_set(filename, validation_percentage, testing_percentage):
"""Determines which data partition the file should belong to."""
dir_name = os.path.basename(os.path.dirname(filename))
if dir_name == 'unknown_unknown':
return 'training'
base_name = os.path.basename(filename)
hash_name = re.sub(r'_nohash_.*$', '', base_name)
hash_name_hashed = hashlib.sha1(tf.compat.as_bytes(hash_name)).hexdigest()
percentage_hash = ((int(hash_name_hashed, 16) % (MAX_NUM_WAVS_PER_CLASS + 1))
* (100.0 / MAX_NUM_WAVS_PER_CLASS))
if percentage_hash < validation_percentage:
result = 'validation'
elif percentage_hash < (testing_percentage + validation_percentage):
result = 'testing'
else:
result = 'training'
return result
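# Illustrative property of the scheme above: the partition is a pure function of
# the file name. Stripping the _nohash_ suffix and hashing what remains maps the
# name to a stable percentage in [0, 100], so the same base name always lands in
# the same train/validation/test split. (The file name here is made up.)
_hash = hashlib.sha1(b'bed_0a7c2a8d_nohash_0'.replace(b'_nohash_0', b'')).hexdigest()
_pct = (int(_hash, 16) % (2**27)) * (100.0 / (2**27 - 1))
assert 0.0 <= _pct <= 100.0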
def load_wav_file(filename):
"""Loads an audio file and returns a float PCM-encoded array of samples."""
with tf.Session(graph=tf.Graph()) as sess:
wav_filename_placeholder = tf.placeholder(tf.string, [])
wav_loader = tf.io.read_file(wav_filename_placeholder)
wav_decoder = tf.audio.decode_wav(wav_loader, desired_channels=1)
return sess.run(
wav_decoder, feed_dict={
wav_filename_placeholder: filename
}).audio.flatten()
def save_wav_file(filename, wav_data, sample_rate):
"""Saves audio sample data to a .wav audio file."""
with tf.Session(graph=tf.Graph()) as sess:
wav_filename_placeholder = tf.placeholder(tf.string, [])
sample_rate_placeholder = tf.placeholder(tf.int32, [])
wav_data_placeholder = tf.placeholder(tf.float32, [None, 1])
wav_encoder = tf.audio.encode_wav(wav_data_placeholder,
sample_rate_placeholder)
wav_saver = tf.io.write_file(wav_filename_placeholder, wav_encoder)
sess.run(
wav_saver,
feed_dict={
wav_filename_placeholder: filename,
sample_rate_placeholder: sample_rate,
wav_data_placeholder: np.reshape(wav_data, (-1, 1))
})
class AudioProcessor(object):
"""Handles loading, partitioning, and preparing audio training data."""
def __init__(self,
data_dirs,
silence_percentage,
unknown_percentage,
wanted_words,
validation_percentage,
testing_percentage,
model_settings,
output_representation='raw'):
self.data_dirs = data_dirs
assert output_representation in {'raw', 'spec', 'mfcc', 'mfcc_and_raw'}
self.output_representation = output_representation
self.model_settings = model_settings
for data_dir in self.data_dirs:
self.maybe_download_and_extract_dataset(data_dir)
self.prepare_data_index(silence_percentage, unknown_percentage,
wanted_words, validation_percentage,
testing_percentage)
self.prepare_background_data()
self.prepare_processing_graph(model_settings)
def maybe_download_and_extract_dataset(self, data_dir):
if not os.path.exists(data_dir):
print('Please download the dataset!')
sys.exit(1)  # nonzero exit status to signal the error
def prepare_data_index(self, silence_percentage, unknown_percentage,
wanted_words, validation_percentage,
testing_percentage):
"""Prepares a list of the samples organized by set and label."""
random.seed(RANDOM_SEED)
wanted_words_index = {}
for index, wanted_word in enumerate(wanted_words):
wanted_words_index[wanted_word] = index + 2
self.data_index = {'validation': [], 'testing': [], 'training': []}
unknown_index = {'validation': [], 'testing': [], 'training': []}
all_words = {}
# Look through all the subfolders to find audio samples
for data_dir in self.data_dirs:
search_path = os.path.join(data_dir, '*', '*.wav')
for wav_path in tf.io.gfile.glob(search_path):
word = re.search('.*/([^/]+)/.*.wav', wav_path).group(1).lower()
# Treat the '_background_noise_' folder as a special case,
# since we expect it to contain long audio samples we mix in
# to improve training.
if word == BACKGROUND_NOISE_DIR_NAME:
continue
all_words[word] = True
set_index = which_set(wav_path, validation_percentage,
testing_percentage)
# If it's a known class, store its detail, otherwise add it to the list
# we'll use to train the unknown label.
if word in wanted_words_index:
self.data_index[set_index].append({'label': word, 'file': wav_path})
else:
unknown_index[set_index].append({'label': word, 'file': wav_path})
if not all_words:
raise Exception('No .wavs found at ' + search_path)
for index, wanted_word in enumerate(wanted_words):
if wanted_word not in all_words:
raise Exception('Expected to find ' + wanted_word +
' in labels but only found ' +
', '.join(all_words.keys()))
# We need an arbitrary file to load as the input for the silence samples.
# It's multiplied by zero later, so the content doesn't matter.
silence_wav_path = self.data_index['training'][0]['file']
for set_index in ['validation', 'testing', 'training']:
set_size = len(self.data_index[set_index])
silence_size = int(math.ceil(set_size * silence_percentage / 100))
for _ in range(silence_size):
self.data_index[set_index].append({
'label': SILENCE_LABEL,
'file': silence_wav_path
})
# Pick some unknowns to add to each partition of the data set.
random.shuffle(unknown_index[set_index])
unknown_size = int(math.ceil(set_size * unknown_percentage / 100))
self.data_index[set_index].extend(unknown_index[set_index][:unknown_size])
# Make sure the ordering is random.
for set_index in ['validation', 'testing', 'training']:
# not really needed since the indices are chosen by random
random.shuffle(self.data_index[set_index])
# Prepare the rest of the result data structure.
self.words_list = prepare_words_list(wanted_words)
self.word_to_index = {}
for word in all_words:
if word in wanted_words_index:
self.word_to_index[word] = wanted_words_index[word]
else:
self.word_to_index[word] = UNKNOWN_WORD_INDEX
self.word_to_index[SILENCE_LABEL] = SILENCE_INDEX
def prepare_background_data(self):
"""Searches a folder for background noise audio and loads it into memory."""
self.background_data = []
background_dir = os.path.join(self.data_dirs[0], BACKGROUND_NOISE_DIR_NAME)
if not os.path.exists(background_dir):
return self.background_data
with tf.Session(graph=tf.Graph()) as sess:
wav_filename_placeholder = tf.placeholder(tf.string, [])
wav_loader = tf.io.read_file(wav_filename_placeholder)
wav_decoder = tf.audio.decode_wav(wav_loader, desired_channels=1)
search_path = os.path.join(self.data_dirs[0], BACKGROUND_NOISE_DIR_NAME,
'*.wav')
for wav_path in tf.io.gfile.glob(search_path):
wav_data = sess.run(
wav_decoder, feed_dict={
wav_filename_placeholder: wav_path
}).audio.flatten()
self.background_data.append(wav_data)
if not self.background_data:
raise Exception('No background wav files were found in ' + search_path)
def prepare_processing_graph(self, model_settings):
"""Builds a TensorFlow graph to apply the input distortions."""
desired_samples = model_settings['desired_samples']
self.wav_filename_placeholder_ = tf.placeholder(
tf.string, [], name='filename')
wav_loader = tf.io.read_file(self.wav_filename_placeholder_)
wav_decoder = tf.audio.decode_wav(
wav_loader, desired_channels=1, desired_samples=desired_samples)
# Allow the audio sample's volume to be adjusted.
self.foreground_volume_placeholder_ = tf.placeholder(
tf.float32, [], name='foreground_volume')
scaled_foreground = tf.multiply(wav_decoder.audio,
self.foreground_volume_placeholder_)
# Shift the sample's start position, and pad any gaps with zeros.
self.time_shift_placeholder_ = tf.placeholder(tf.int32, name='timeshift')
shifted_foreground = tf_roll(scaled_foreground,
self.time_shift_placeholder_)
# Mix in background noise.
self.background_data_placeholder_ = tf.placeholder(
tf.float32, [desired_samples, 1], name='background_data')
self.background_volume_placeholder_ = tf.placeholder(
tf.float32, [], name='background_volume')
background_mul = tf.multiply(self.background_data_placeholder_,
self.background_volume_placeholder_)
background_add = tf.add(background_mul, shifted_foreground)
# removed clipping: tf.clip_by_value(background_add, -1.0, 1.0)
self.background_clamp_ = background_add
self.background_clamp_ = tf.reshape(self.background_clamp_,
(1, model_settings['desired_samples']))
# Run the spectrogram and MFCC ops to get a 2D 'fingerprint' of the audio.
stfts = tf.signal.stft(
self.background_clamp_,
frame_length=model_settings['window_size_samples'],
frame_step=model_settings['window_stride_samples'],
fft_length=None)
self.spectrogram_ = tf.abs(stfts)
num_spectrogram_bins = self.spectrogram_.shape[-1]
lower_edge_hertz, upper_edge_hertz = 80.0, 7600.0
linear_to_mel_weight_matrix = \
tf.signal.linear_to_mel_weight_matrix(
model_settings['dct_coefficient_count'],
num_spectrogram_bins, model_settings['sample_rate'],
lower_edge_hertz, upper_edge_hertz)
mel_spectrograms = tf.tensordot(self.spectrogram_,
linear_to_mel_weight_matrix, 1)
mel_spectrograms.set_shape(self.spectrogram_.shape[:-1].concatenate(
linear_to_mel_weight_matrix.shape[-1:]))
log_mel_spectrograms = tf.log(mel_spectrograms + 1e-6)
self.mfcc_ = tf.signal.mfccs_from_log_mel_spectrograms(
log_mel_spectrograms)[:, :, :
model_settings['num_log_mel_features']] # :13
def set_size(self, mode):
"""Calculates the number of samples in the dataset partition."""
return len(self.data_index[mode])
def get_data(self,
how_many,
offset,
background_frequency,
background_volume_range,
foreground_frequency,
foreground_volume_range,
time_shift_frequency,
time_shift_range,
mode,
sess,
flip_frequency=0.0,
silence_volume_range=0.0):
"""Gather samples from the data set, applying transformations as needed."""
# Pick one of the partitions to choose samples from.
model_settings = self.model_settings
candidates = self.data_index[mode]
if how_many == -1:
sample_count = len(candidates)
else:
sample_count = max(0, min(how_many, len(candidates) - offset))
# Data and labels will be populated and returned.
if self.output_representation == 'raw':
data_dim = model_settings['desired_samples']
elif self.output_representation == 'spec':
data_dim = model_settings['spectrogram_length'] * model_settings[
'spectrogram_frequencies']
elif self.output_representation == 'mfcc':
data_dim = model_settings['spectrogram_length'] * \
model_settings['num_log_mel_features']
elif self.output_representation == 'mfcc_and_raw':
data_dim = model_settings['spectrogram_length'] * \
model_settings['num_log_mel_features']
raw_data = np.zeros((sample_count, model_settings['desired_samples']))
data = np.zeros((sample_count, data_dim))
labels = np.zeros((sample_count, model_settings['label_count']))
desired_samples = model_settings['desired_samples']
use_background = self.background_data and (mode == 'training')
pick_deterministically = (mode != 'training')
# Use the processing graph we created earlier to repeatedly to generate the
# final output sample data we'll use in training.
for i in xrange(offset, offset + sample_count):
# Pick which audio sample to use.
if how_many == -1 or pick_deterministically:
sample_index = i
sample = candidates[sample_index]
else:
sample_index = np.random.randint(len(candidates))
sample = candidates[sample_index]
# If we're time shifting, set up the offset for this sample.
if np.random.uniform(0.0, 1.0) < time_shift_frequency:
time_shift = np.random.randint(time_shift_range[0],
time_shift_range[1] + 1)
else:
time_shift = 0
input_dict = {
self.wav_filename_placeholder_: sample['file'],
self.time_shift_placeholder_: time_shift,
}
# Choose a section of background noise to mix in.
if use_background:
background_index = np.random.randint(len(self.background_data))
background_samples = self.background_data[background_index]
background_offset = np.random.randint(
0,
len(background_samples) - model_settings['desired_samples'])
background_clipped = background_samples[background_offset:(
background_offset + desired_samples)]
background_reshaped = background_clipped.reshape([desired_samples, 1])
if np.random.uniform(0, 1) < background_frequency:
background_volume = np.random.uniform(0, background_volume_range)
else:
background_volume = 0.0
# silence class with all zeros is boring!
if sample['label'] == SILENCE_LABEL and \
np.random.uniform(0, 1) < 0.9:
background_volume = np.random.uniform(0, silence_volume_range)
else:
background_reshaped = np.zeros([desired_samples, 1])
background_volume = 0.0
input_dict[self.background_data_placeholder_] = background_reshaped
input_dict[self.background_volume_placeholder_] = background_volume
# If we want silence, mute out the main sample but leave the background.
if sample['label'] == SILENCE_LABEL:
input_dict[self.foreground_volume_placeholder_] = 0.0
else:
# Turn it up or down
foreground_volume = 1.0
if np.random.uniform(0, 1) < foreground_frequency:
foreground_volume = 1.0 + np.random.uniform(-foreground_volume_range,
foreground_volume_range)
# flip sign
if np.random.uniform(0, 1) < flip_frequency:
foreground_volume *= -1.0
input_dict[self.foreground_volume_placeholder_] = foreground_volume
# Run the graph to produce the output audio.
if self.output_representation == 'raw':
data[i - offset, :] = sess.run(
self.background_clamp_, feed_dict=input_dict).flatten()
elif self.output_representation == 'spec':
data[i - offset, :] = sess.run(
self.spectrogram_, feed_dict=input_dict).flatten()
elif self.output_representation == 'mfcc':
data[i - offset, :] = sess.run(
self.mfcc_, feed_dict=input_dict).flatten()
elif self.output_representation == 'mfcc_and_raw':
raw_val, mfcc_val = sess.run([self.background_clamp_, self.mfcc_],
feed_dict=input_dict)
data[i - offset, :] = mfcc_val.flatten()
raw_data[i - offset, :] = raw_val.flatten()
label_index = self.word_to_index[sample['label']]
labels[i - offset, label_index] = 1
if self.output_representation != 'mfcc_and_raw':
return data, labels
else:
return [data, raw_data], labels
def get_unprocessed_data(self, how_many, model_settings, mode):
"""Gets sample data without transformations."""
candidates = self.data_index[mode]
if how_many == -1:
sample_count = len(candidates)
else:
sample_count = how_many
desired_samples = model_settings['desired_samples']
words_list = self.words_list
data = np.zeros((sample_count, desired_samples))
labels = []
with tf.Session(graph=tf.Graph()) as sess:
wav_filename_placeholder = tf.placeholder(tf.string, [], name='filename')
wav_loader = tf.io.read_file(wav_filename_placeholder)
wav_decoder = tf.audio.decode_wav(
wav_loader, desired_channels=1, desired_samples=desired_samples)
foreground_volume_placeholder = tf.placeholder(
tf.float32, [], name='foreground_volume')
scaled_foreground = tf.multiply(wav_decoder.audio,
foreground_volume_placeholder)
for i in range(sample_count):
if how_many == -1:
sample_index = i
else:
sample_index = np.random.randint(len(candidates))
sample = candidates[sample_index]
input_dict = {wav_filename_placeholder: sample['file']}
if sample['label'] == SILENCE_LABEL:
input_dict[foreground_volume_placeholder] = 0
else:
input_dict[foreground_volume_placeholder] = 1
data[i, :] = sess.run(scaled_foreground, feed_dict=input_dict).flatten()
label_index = self.word_to_index[sample['label']]
labels.append(words_list[label_index])
return data, labels
def summary(self):
"""Prints a summary of classes and label distributions."""
set_counts = {}
print('There are %d classes.' % (len(self.word_to_index)))
print("1%% <-> %d samples in 'training'" % int(
self.set_size('training') / 100))
for set_index in ['training', 'validation', 'testing']:
counts = {k: 0 for k in sorted(self.word_to_index.keys())}
num_total = self.set_size(set_index)
for data_point in self.data_index[set_index]:
counts[data_point['label']] += (1.0 / num_total) * 100.0
set_counts[set_index] = counts
print('%-13s%-6s%-6s%-6s' % ('', 'Train', 'Val', 'Test'))
for label_name in sorted(
self.word_to_index.keys(), key=self.word_to_index.get):
line = '%02d %-12s: ' % (self.word_to_index[label_name], label_name)
for set_index in ['training', 'validation', 'testing']:
line += '%.1f%% ' % (set_counts[set_index][label_name])
print(line)
def preprocess(x):
x = (x + 0.8) / 7.0
x = K.clip(x, -5, 5)
return x
def preprocess_raw(x):
return x
Preprocess = Lambda(preprocess)
PreprocessRaw = Lambda(preprocess_raw)
def relu6(x):
return K.relu(x, max_value=6)
def conv_1d_time_stacked_model(input_size=16000, num_classes=11):
""" Creates a 1D model for temporal data.
Note: Use only
with compute_mfcc = False (e.g. raw waveform data).
Args:
input_size: How big the input vector is.
num_classes: How many classes are to be recognized.
Returns:
Compiled keras model
"""
input_layer = Input(shape=[input_size])
x = input_layer
x = Reshape([800, 20])(x)
x = PreprocessRaw(x)
def _reduce_conv(x, num_filters, k, strides=2, padding='valid'):
x = Conv1D(
num_filters,
k,
padding=padding,
use_bias=False,
kernel_regularizer=l2(0.00001))(
x)
x = BatchNormalization()(x)
x = Activation(relu6)(x)
x = MaxPool1D(pool_size=3, strides=strides, padding=padding)(x)
return x
def _context_conv(x, num_filters, k, dilation_rate=1, padding='valid'):
x = Conv1D(
num_filters,
k,
padding=padding,
dilation_rate=dilation_rate,
kernel_regularizer=l2(0.00001),
use_bias=False)(
x)
x = BatchNormalization()(x)
x = Activation(relu6)(x)
return x
x = _context_conv(x, 32, 1)
x = _reduce_conv(x, 48, 3)
x = _context_conv(x, 48, 3)
x = _reduce_conv(x, 96, 3)
x = _context_conv(x, 96, 3)
x = _reduce_conv(x, 128, 3)
x = _context_conv(x, 128, 3)
x = _reduce_conv(x, 160, 3)
x = _context_conv(x, 160, 3)
x = _reduce_conv(x, 192, 3)
x = _context_conv(x, 192, 3)
x = _reduce_conv(x, 256, 3)
x = _context_conv(x, 256, 3)
x = Dropout(0.3)(x)
x = Conv1D(num_classes, 5, activation='softmax')(x)
x = Reshape([-1])(x)
model = Model(input_layer, x, name='conv_1d_time_stacked')
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=3e-4),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
return model
def speech_model(model_type, input_size, num_classes=11, *args, **kwargs):
if model_type == 'conv_1d_time_stacked':
return conv_1d_time_stacked_model(input_size, num_classes)
else:
raise ValueError('Invalid model: %s' % model_type)
def prepare_model_settings(label_count,
sample_rate,
clip_duration_ms,
window_size_ms,
window_stride_ms,
dct_coefficient_count,
num_log_mel_features,
output_representation='raw'):
"""Calculates common settings needed for all models."""
desired_samples = int(sample_rate * clip_duration_ms / 1000)
window_size_samples = int(sample_rate * window_size_ms / 1000)
window_stride_samples = int(sample_rate * window_stride_ms / 1000)
length_minus_window = (desired_samples - window_size_samples)
spectrogram_frequencies = 257
if length_minus_window < 0:
spectrogram_length = 0
else:
spectrogram_length = 1 + int(length_minus_window / window_stride_samples)
if output_representation == 'mfcc':
fingerprint_size = num_log_mel_features * spectrogram_length
elif output_representation == 'raw':
fingerprint_size = desired_samples
elif output_representation == 'spec':
fingerprint_size = spectrogram_frequencies * spectrogram_length
elif output_representation == 'mfcc_and_raw':
fingerprint_size = num_log_mel_features * spectrogram_length
return {
'desired_samples': desired_samples,
'window_size_samples': window_size_samples,
'window_stride_samples': window_stride_samples,
'spectrogram_length': spectrogram_length,
'spectrogram_frequencies': spectrogram_frequencies,
'dct_coefficient_count': dct_coefficient_count,
'fingerprint_size': fingerprint_size,
'label_count': label_count,
'sample_rate': sample_rate,
'num_log_mel_features': num_log_mel_features
}
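# Illustrative check of the arithmetic above, using the defaults passed in
# below (16 kHz audio, 1000 ms clips, 30 ms windows, 10 ms stride). These
# asserts cost nothing at import time and document the expected magnitudes.
assert int(16000 * 1000 / 1000) == 16000   # desired_samples per 1 s clip
assert int(16000 * 30 / 1000) == 480       # window_size_samples
assert int(16000 * 10 / 1000) == 160       # window_stride_samples
assert 1 + int((16000 - 480) / 160) == 98  # spectrogram_length (frames)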
from collections import OrderedDict
def get_classes(wanted_only=False):
if wanted_only:
classes = 'stop down off right up go on yes left no'
classes = classes.split(' ')
assert len(classes) == 10
else:
classes = ('sheila nine stop bed four six down bird marvin cat off right '
'seven eight up three happy go zero on wow dog yes five one tree'
' house two left no') # noqa
classes = classes.split(' ')
assert len(classes) == 30
return classes
def get_int2label(wanted_only=False, extend_reversed=False):
# get_classes() takes no extend_reversed argument; forwarding it would raise
# a TypeError, so only wanted_only is passed through.
classes = get_classes(wanted_only=wanted_only)
classes = prepare_words_list(classes)
int2label = {i: l for i, l in enumerate(classes)}
int2label = OrderedDict(sorted(int2label.items(), key=lambda x: x[0]))
return int2label
def get_label2int(wanted_only=False, extend_reversed=False):
# get_classes() takes no extend_reversed argument, so it is not forwarded.
classes = get_classes(wanted_only=wanted_only)
classes = prepare_words_list(classes)
label2int = {l: i for i, l in enumerate(classes)}
label2int = OrderedDict(sorted(label2int.items(), key=lambda x: x[1]))
return label2int
#train
parser = argparse.ArgumentParser(description='set input arguments')
parser.add_argument(
'-sample_rate',
action='store',
dest='sample_rate',
type=int,
default=16000,
help='Sample rate of audio')
parser.add_argument(
'-batch_size',
action='store',
dest='batch_size',
type=int,
default=32,
help='Size of the training batch')
parser.add_argument(
'-output_representation',
action='store',
dest='output_representation',
type=str,
default='raw',
help='raw, spec, mfcc or mfcc_and_raw')
parser.add_argument(
'-data_dirs',
'--list',
dest='data_dirs',
nargs='+',
required=True,
help='<Required> The list of data directories. e.g., data/train')
args = parser.parse_args()
parser.print_help()
print('input args: ', args)
if __name__ == '__main__':
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=1.0)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
tf.keras.backend.set_session(sess)
data_dirs = args.data_dirs
output_representation = args.output_representation
sample_rate = args.sample_rate
batch_size = args.batch_size
classes = get_classes(wanted_only=True)
model_settings = prepare_model_settings(
label_count=len(prepare_words_list(classes)),
sample_rate=sample_rate,
clip_duration_ms=1000,
window_size_ms=30.0,
window_stride_ms=10.0,
dct_coefficient_count=80,
num_log_mel_features=60,
output_representation=output_representation)
print(model_settings)
ap = AudioProcessor(
data_dirs=data_dirs,
wanted_words=classes,
silence_percentage=13.0,
unknown_percentage=60.0,
validation_percentage=10.0,
testing_percentage=0.0,
model_settings=model_settings,
output_representation=output_representation)
train_gen = data_gen(ap, sess, batch_size=batch_size, mode='training')
val_gen = data_gen(ap, sess, batch_size=batch_size, mode='validation')
model = speech_model(
'conv_1d_time_stacked',
model_settings['fingerprint_size']
if output_representation != 'raw' else model_settings['desired_samples'],
# noqa
num_classes=model_settings['label_count'],
**model_settings)
# embed()
checkpoints_path = os.path.join('checkpoints', 'conv_1d_time_stacked_model')
if not os.path.exists(checkpoints_path):
os.makedirs(checkpoints_path)
callbacks = [
ConfusionMatrixCallback(
val_gen,
ap.set_size('validation') // batch_size,
wanted_words=prepare_words_list(get_classes(wanted_only=True)),
all_words=prepare_words_list(classes),
label2int=ap.word_to_index),
ReduceLROnPlateau(
monitor='val_categorical_accuracy',
mode='max',
factor=0.5,
patience=4,
verbose=1,
min_lr=1e-5),
TensorBoard(log_dir='logs'),
ModelCheckpoint(
os.path.join(checkpoints_path,
'ep-{epoch:03d}-vl-{val_loss:.4f}.hdf5'),
save_best_only=True,
monitor='val_categorical_accuracy',
mode='max')
]
model.fit_generator(
train_gen,
steps_per_epoch=ap.set_size('training') // batch_size,
epochs=100,
verbose=1,
callbacks=callbacks)
eval_res = model.evaluate_generator(val_gen,
ap.set_size('validation') // batch_size)
print(eval_res)
```
# It's show time
```
%run train.py -sample_rate 16000 -batch_size 64 -output_representation raw -data_dirs data/train
```
<img src="images/kiksmeisedwengougent.png" alt="Banner" width="1100"/>
<div>
<font color=#690027 markdown="1">
<h1>CLASSIFICATION OF STOMATA ON SUNLIT AND SHADED LEAVES</h1>
</font>
</div>
<div class="alert alert-box alert-success">
In this notebook you will separate sun leaves from shade leaves. The two classes are approximately linearly separable.
</div>
Krappa or crabwood is a fast-growing tree species that is abundant in the Amazon region. Mature specimens can have a diameter of more than one metre and can be over 40 metres tall. Its high-quality wood is used for making furniture, floors, masts, and more. A fever-reducing agent is extracted from the bark. The seeds yield an oil for medicinal applications, including the treatment of skin diseases and tetanus, and for use as an insect repellent.
<table><tr>
<td> <img src="images/andirobaamazonica.jpg" alt="Drawing" width="200"/></td>
<td> <img src="images/crabwoodtree.jpg" alt="Drawing" width="236"/> </td>
</tr></table>
<center>
Photos: Mauroguanandi [Public domain] [2] and P. S. Sena [CC BY-SA 4.0] [3].
</center>
Because some climate models predict a rise in temperature and a decrease in rainfall in the coming decades, it is important to know how these trees adapt to changing conditions. <br>
The scientists Camargo and Marenco conducted research in the Amazon rainforest [1].<br>
In addition to the influence of seasonal rainfall, they also examined stomatal characteristics of leaves under sunlit and shaded conditions.<br> For this, a number of plants grown in the shade were moved into full sunlight for 60 days. Another group of plants was kept in the shade. <br>The stomatal characteristics were measured on imprints of the leaves made with transparent nail polish.
### Importing the required modules
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from matplotlib import animation
from IPython.display import HTML
```
<div>
<font color=#690027 markdown="1">
<h2>1. Reading in the data</h2>
</font>
</div>
Read in the dataset with the `pandas` module.
```
stomata = pd.read_csv(".data/schaduwzon.dat", header="infer")  # the table to be read has a header row
```
<div>
<font color=#690027 markdown="1">
<h2>2. Displaying the data</h2>
</font>
</div>
<div>
<font color=#690027 markdown="1">
<h3>2.1 Table of the data</h3>
</font>
</div>
Inspect the data.
```
stomata
```
Which values are features? <br> Which value is the label? <br>
These data can be visualized with a scatter plot. Which matrices do you need for that?
Answer:
The plant species is the same throughout: Carapa. <br>
The features are the stomatal density and the stomatal size. <br>
The number of samples is 50.<br>
The label is the environment in which the sample was picked: sun or shade.<br>
To display the scatter plot, you need two matrices of dimension 50x1.
The researchers plotted the stomatal density against the stomatal length.<br> Proceed in the same way.
<div>
<font color=#690027 markdown="1">
<h3>2.2 Displaying the data in a scatter plot</h3>
</font>
</div>
```
x1 = stomata["stomatale lengte"]       # feature: length
x2 = stomata["stomatale dichtheid"]    # feature: density
x1 = np.array(x1)                      # feature: length
x2 = np.array(x2)                      # feature: density
# density vs. length
plt.figure()
plt.scatter(x1[:25], x2[:25], color="lightgreen", marker="o", label="sun")    # sun samples are the first 25
plt.scatter(x1[25:], x2[25:], color="darkgreen", marker="o", label="shade")   # shade samples are the next 25
plt.title("Carapa")
plt.xlabel("stomatal length (micron)")
plt.ylabel("stomatal density (per mm²)")
plt.legend(loc="lower left")
plt.show()
```
<div>
<font color=#690027 markdown="1">
<h2>3. Standardization</h2>
</font>
</div>
<div>
<font color=#690027 markdown="1">
<h3>3.1 Linearly separable?</h3>
</font>
</div>
Two groups can be distinguished. They are linearly separable except for a few points.
The orders of magnitude of these values differ considerably, so the data must be standardized.
<div>
<font color=#690027 markdown="1">
<h3>3.2 Standardizing</h3>
</font>
</div>
<div class="alert alert-block alert-warning">
More explanation about the importance of standardization can be found in the notebook 'Standaardiseren'.
</div>
```
x1_gem = np.mean(x1)
x1_std = np.std(x1)
x2_gem = np.mean(x2)
x2_std = np.std(x2)
x1 = (x1 - x1_gem) / x1_std
x2 = (x2 - x2_gem) / x2_std
# density vs. length
plt.figure()
plt.scatter(x1[:25], x2[:25], color="lightgreen", marker="o", label="sun")    # sun samples are the first 25
plt.scatter(x1[25:], x2[25:], color="darkgreen", marker="o", label="shade")   # shade samples are the next 25
plt.title("Carapa")
plt.xlabel("standardized stomatal length")
plt.ylabel("standardized stomatal density")
plt.legend(loc="lower left")
plt.show()
```
<div>
<font color=#690027 markdown="1">
<h2>4. Classification with a Perceptron</h2>
</font>
</div>
<div>
<font color=#690027 markdown="1">
<h3>4.1 Annotated data</h3>
</font>
</div>
The ML system will learn from the 50 labeled examples.<br>
Read in the labels.
```
y = stomata["milieu"]             # labels: second column of the original table
y = np.array(y)
print(y)
y = np.where(y == "zon", 1, 0)    # make the labels numeric: sun ("zon") -> 1, shade ("schaduw") -> 0
print(y)
X = np.stack((x1, x2), axis=1)    # convert to the desired format
```
<div>
<font color=#690027 markdown="1">
<h3>4.2 Perceptron</h3>
</font>
</div>
<div class="alert alert-box alert-info">
If two classes are linearly separable, a straight line can be found that separates the two classes. The equation of the separating line can be written in the form $ax+by+c=0$. For every point $(x_{1}, y_{1})$ in one class, $ax_{1}+by_{1}+c \geq 0$, and for every point $(x_{2}, y_{2})$ in the other class, $ax_{2}+by_{2}+c < 0$. <br>
As long as this is not satisfied, the coefficients must be adjusted.<br>
The training set with its labels is traversed several times. For each point, the coefficients are adjusted if necessary.
</div>
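The decision rule described above can be sketched in a few lines of Python (the coefficients `a`, `b`, `c` below are hypothetical, chosen only for illustration):

```python
# Hypothetical separating line: a*x + b*y + c = 0
a, b, c = 1.0, -2.0, 0.5

def classify(point, a=a, b=b, c=c):
    """Return 1 if the point lies on or above the line, else 0."""
    x, y = point
    return 1 if a * x + b * y + c >= 0 else 0

print(classify((1.0, 0.0)))   # 1.0*1 - 2.0*0 + 0.5 = 1.5 >= 0, so class 1
print(classify((0.0, 1.0)))   # 1.0*0 - 2.0*1 + 0.5 = -1.5 < 0, so class 0
```

The perceptron training loop below repeatedly applies exactly this sign test and nudges the coefficients whenever a point falls on the wrong side.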
A random line is chosen that should separate the two kinds of leaves. This is done by choosing the coefficients in the equation of the line at random. Each side of the separating line corresponds to a different class. <br>The system is trained with the training set and the given labels. For each point of the training set, it is checked whether the point lies on the correct side of the separating line. For a point that does not lie on the correct side, the coefficients in the equation of the line are adjusted. <br>
The complete training set is traversed a number of times. The system learns during these passes, or *epochs*.
```
def grafiek(coeff_x1, coeff_x2, cte):
    """Plot the decision boundary and print its equation."""
    # stomatal density vs. stomatal length
    plt.figure()
    plt.scatter(x1[:25], x2[:25], color="lightgreen", marker="o", label="sun")    # sun samples are the first 25 (label 1)
    plt.scatter(x1[25:], x2[25:], color="darkgreen", marker="o", label="shade")   # shade samples are the next 25 (label 0)
    x = np.linspace(-1.5, 1.5, 10)
    y_r = -coeff_x1/coeff_x2 * x - cte/coeff_x2
    print("The boundary is a line with equation", coeff_x1, "* x1 +", coeff_x2, "* x2 +", cte, "= 0")
    plt.plot(x, y_r, color="black")
    plt.title("Classification Carapa")
    plt.xlabel("standardized stomatal length")
    plt.ylabel("standardized stomatal density")
    plt.legend(loc="lower left")
    plt.show()

class Perceptron(object):
    """Perceptron classifier."""

    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        """self has three parameters: learning rate, number of passes, randomness."""
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data."""
        rgen = np.random.RandomState(self.random_state)
        # column matrix of the weights
        # randomly generated from a normal distribution with mean 0 and standard deviation 0.01
        # the number of weights is the number of features in X plus 1 (+1 for the bias)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=X.shape[1]+1)    # weight matrix containing 3 weights
        print("Initial random weights:", self.w_)
        self.errors_ = []    # list of errors
        # plot the graph with the initial decision boundary
        print("Initial random line:")
        grafiek(self.w_[1], self.w_[2], self.w_[0])
        gewichtenlijst = np.array([self.w_])
        # adjust the weights point by point, based on feedback from the successive passes
        for _ in range(self.n_iter):
            print("epoch =", _)
            errors = 0
            teller = 0
            for x, label in zip(X, y):    # x is a data point, y the corresponding label
                print("counter =", teller)    # count the points
                print("point:", x, "\tlabel:", label)
                gegiste_klasse = self.predict(x)
                print("predicted class =", gegiste_klasse)
                # check the adjustment for this point
                update = self.eta * (label - gegiste_klasse)    # if update = 0, correct class, no adjustment needed
                print("update =", update)
                # adjust the graph and the weights after this point, if needed
                if update != 0:
                    self.w_[1:] += update * x
                    self.w_[0] += update
                    errors += update
                    print("weights =", self.w_)    # determine the provisional decision boundary
                    gewichtenlijst = np.append(gewichtenlijst, [self.w_], axis=0)
                teller += 1
            self.errors_.append(errors)    # after all points, add the total error to the error list
        print("error list =", self.errors_)
        return self, gewichtenlijst    # returns the list of weight matrices

    def net_input(self, x):    # fill the point into the provisional decision boundary
        """Compute z = the linear combination of the inputs, including the bias, and the weights for each given point."""
        return np.dot(x, self.w_[1:]) + self.w_[0]

    def predict(self, x):
        """Predict the class."""
        print("point filled into the equation of the line:", self.net_input(x))
        klasse = np.where(self.net_input(x) >= 0, 1, 0)
        return klasse
# perceptron, learning rate 0.0001 and 20 passes
ppn = Perceptron(eta=0.0001, n_iter=20)
gewichtenlijst = ppn.fit(X, y)[1]
print("Weight list =", gewichtenlijst)
# animation
xcoord = np.linspace(-1.5, 1.5, 10)
ycoord = []
for w in gewichtenlijst:
    y_r = -w[1]/w[2] * xcoord - w[0]/w[2]
    ycoord.append(y_r)
ycoord = np.array(ycoord)    # type casting
fig, ax = plt.subplots()
line, = ax.plot(xcoord, ycoord[0])
plt.scatter(x1[:25], x2[:25], color="lightgreen", marker="o", label="sun")    # sun samples are the first 25 (label 1)
plt.scatter(x1[25:], x2[25:], color="darkgreen", marker="o", label="shade")   # shade samples are the next 25 (label 0)
ax.axis([-2, 2, -2, 2])

def animate(i):
    line.set_ydata(ycoord[i])    # update the equation of the line
    return line,

plt.close()    # close the provisional plot window; only the animation window is needed
anim = animation.FuncAnimation(fig, animate, interval=1000, repeat=False, frames=len(ycoord))
HTML(anim.to_jshtml())
```
A nice result! But not yet optimal.
### Exercise 4.2
More iterations may well give an even better result. Try it out.
<div class="alert alert-block alert-info">
Because the classes are not perfectly linearly separable, the Perceptron will of course not manage to reduce the error to zero. By choosing the learning rate and the number of epochs as well as possible, you can try to obtain the best possible separation.<br>
For classes that are not linearly separable, machine learning therefore does not use a Perceptron, but tries to separate the classes optimally in a different way: with gradient descent for the adjustments and binary cross entropy to determine the error.
</div>
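`LogisticRegression` is imported at the top of this notebook but not used above; it is exactly such a gradient-based method that minimizes binary cross entropy. A minimal, self-contained sketch of that alternative (the toy data below is illustrative, standing in for the standardized features `X` and labels `y`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the standardized features (n x 2) and labels (1 = sun, 0 = shade)
rng = np.random.RandomState(0)
X_sun = rng.normal(loc=[1.0, -1.0], scale=0.5, size=(25, 2))
X_shade = rng.normal(loc=[-1.0, 1.0], scale=0.5, size=(25, 2))
X = np.vstack([X_sun, X_shade])
y = np.array([1] * 25 + [0] * 25)

# Logistic regression minimizes binary cross entropy with a gradient-based solver
clf = LogisticRegression()
clf.fit(X, y)
print("accuracy:", clf.score(X, y))

# The decision boundary is again a line: w1*x1 + w2*x2 + b = 0
w1, w2 = clf.coef_[0]
b = clf.intercept_[0]
print("boundary: %.3f*x1 + %.3f*x2 + %.3f = 0" % (w1, w2, b))
```

In the notebook itself you could call `clf.fit(X, y)` on the standardized stomata data instead of the toy arrays.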
<img src="images/cclic.png" alt="Banner" align="left" width="100"/><br><br>
Notebook KIKS, see <a href="http://www.aiopschool.be">AI op School</a>, by F. wyffels & N. Gesquière, is licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
<div>
<h2>With the support of</h2>
</div>
<img src="images/kikssteun.png" alt="Banner" width="1100"/>
| github_jupyter |
# Loading and Checking Data
## Importing Libraries
```
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
use_cuda = torch.cuda.is_available()
```
## Loading Data
```
batch_size = 4
# These are the mean and standard deviation values for all pictures in the training set.
mean = (0.4914 , 0.48216, 0.44653)
std = (0.24703, 0.24349, 0.26159)
# Class to denormalize images to display later.
class DeNormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor):
for t, m, s in zip(tensor, self.mean, self.std):
t.mul_(s).add_(m)
return tensor
# Creating instance of Functor
denorm = DeNormalize(mean, std)
# Load data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean, std)])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=4)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# Do NOT shuffle the test set or else the order will be messed up
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=4)
# Classes in order
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
## Sample Images and Labels
```
# functions to show an image
def imshow(img):
img = denorm(img) # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
# Defining Model
## Fully-Connected DNN
```
class Net_DNN(nn.Module):
def __init__(self, architecture):
super().__init__()
self.layers = nn.ModuleList([
nn.Linear(architecture[layer], architecture[layer + 1])
for layer in range(len(architecture) - 1)])
def forward(self, data):
# Flatten the Tensor (i.e., dimensions 3 x 32 x 32) to a single column
data = data.view(data.size(0), -1)
for layer in self.layers:
layer_data = layer(data)
data = F.relu(layer_data)
return F.log_softmax(layer_data, dim=-1)
```
## Fully-CNN
```
class Net_CNN(nn.Module):
# Padding is set to 2 and stride to 2
# Padding ensures all edge pixels are exposed to the filter
# Stride = 2 is common practice
def __init__(self, layers, c, stride=2):
super().__init__()
self.layers = nn.ModuleList([
nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, padding=2, stride=stride)
for i in range(len(layers) - 1)])
self.pool = nn.AdaptiveMaxPool2d(1) # Simply takes the maximum value from the Tensor
self.out = nn.Linear(layers[-1], c)
def forward(self, data):
for layer in self.layers:
data = F.relu(layer(data))
data = self.pool(data)
data = data.view(data.size(0), -1)
return F.log_softmax(self.out(data), dim=-1)
```
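With `kernel_size=3`, `padding=2` and `stride=2` as above, the spatial size after each convolution follows the standard formula `out = floor((n + 2p - k) / s) + 1`. A quick sketch to check how the 32×32 CIFAR-10 images shrink through four conv layers (the layer count matches the `[3, 20, 40, 80, 160]` architecture used later, but is otherwise illustrative):

```python
def conv_out_size(n, kernel=3, padding=2, stride=2):
    """Spatial output size of a conv layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

size = 32  # CIFAR-10 images are 32 x 32
for layer in range(4):
    size = conv_out_size(size)
    print("after conv layer %d: %dx%d" % (layer + 1, size, size))
# 32 -> 17 -> 10 -> 6 -> 4
```

Because `AdaptiveMaxPool2d(1)` collapses whatever spatial size remains down to 1×1, the exact final size does not have to be tuned by hand.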
## Chained CNN and NN
```
class Net_CNN_NN(nn.Module):
# Padding is set to 2 and stride to 2
# Padding ensures all edge pixels are exposed to the filter
# Stride = 2 is common practice
def __init__(self, layers, architecture, stride=2):
super().__init__()
# Fully Convolutional Layers
self.layers = nn.ModuleList([
nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, padding=2,stride=stride)
for i in range(len(layers) - 1)])
# Fully Connected Neural Network to map to output
self.layers_NN = nn.ModuleList([
nn.Linear(architecture[layer], architecture[layer + 1])
for layer in range(len(architecture) - 1)])
self.pool = nn.AdaptiveMaxPool2d(1) # Simply takes the maximum value from the Tensor
def forward(self, data):
for layer in self.layers:
data = F.relu(layer(data))
data = self.pool(data)
data = data.view(data.size(0), -1)
for layer in self.layers_NN:
layer_data = layer(data)
data = F.relu(layer_data)
return F.log_softmax(layer_data, dim=-1)
```
## Defining the NN, Loss Function and Optimizer
```
# ---------------------------------------------
# Uncomment the architecture you want to use
# ---------------------------------------------
# # DNN
# architecture = [32*32*3, 100, 100, 100, 100, 10]
# net = Net_DNN(architecture)
# # CNN
# architecture = [3, 20, 40, 80, 160]
# num_outputs = 10
# net = Net_CNN(architecture, num_outputs)
# # CNN with NN
# architecture = [3, 20, 40, 80]
# architecture_NN = [80, 40, 20, 10]
# num_outputs = 10
# net = Net_CNN_NN(architecture, architecture_NN)
if use_cuda:
net = net.cuda() # Training on the GPU
criterion = nn.CrossEntropyLoss()
```
## Loading Model
```
# ---------------------------------------------
# Uncomment the architecture you want to use
# ---------------------------------------------
# # DNN
# architecture = [32*32*3, 100, 100, 10]
# net = Net_DNN(architecture)
# # CNN
# architecture = [3, 20, 40, 80, 160]
# num_outputs = 10
# net = Net_CNN(architecture, num_outputs)
# criterion = nn.CrossEntropyLoss()
if use_cuda:
net = net.cuda() # Training on the GPU
# ---------------------------------------------
# Determine the path for the saved weights
# ---------------------------------------------
PATH = './checkpoints_CNN_v2/5'
# Load weights
net.load_state_dict(torch.load(PATH))
```
## Recording Loss
```
# Initialize a list of loss_results
loss_results = []
```
# Training Manual
```
# Set the Learning rate and epoch start and end points
start_epoch = 11
end_epoch = 15
lr = 0.0001
# Define the optimizer
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)
for epoch in range(start_epoch, end_epoch+1): # loop over the dataset multiple times
print("Epoch:", epoch)
running_loss = 0.0
for i, (inputs, labels) in enumerate(trainloader, 0):
# get the inputs
if use_cuda:
inputs, labels = inputs.cuda(), labels.cuda()
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels) # Inputs and Target values to GPU
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()  # .item() replaces the deprecated loss.data[0] indexing
if i % 2000 == 1999: # print every 2000 mini-batches
print(running_loss / 2000)
loss_results.append(running_loss / 2000)
running_loss = 0.0
PATH = './checkpoints_hybrid/' + str(epoch)
torch.save(net.state_dict(), PATH)
```
## Sample of the Results
```
# load a mini-batch of the images
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
## Sample of Predictions
```
# For the images shown above, show the predictions
# first activate GPU processing
images, labels = images.cuda(), labels.cuda()
# Feed forward
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
## Total Test Set Accuracy
```
# Small code snippet to determine test accuracy
correct = 0
total = 0
for data in testloader:
# load images
images, labels = data
if use_cuda:
images, labels = images.cuda(), labels.cuda()
# feed forward
outputs = net(Variable(images))
# perform softmax regression
_, predicted = torch.max(outputs.data, 1)
# update stats
total += labels.size(0)
correct += (predicted == labels).sum()
# print the results
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
## Accuracy per Class for Test Set
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
if use_cuda:
images, labels = images.cuda(), labels.cuda()
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i]
class_total[label] += 1
# Print the accuracy per class
for i in range(10):
print(classes[i], 100 * class_correct[i] / class_total[i])
```
# Plot Loss
```
batch_size = 4
loss_samples_per_epoch = 6
num_epochs = 15
epochs_list = [(i/loss_samples_per_epoch) for i in range(1, num_epochs*loss_samples_per_epoch + 1)]
plt.semilogy(epochs_list, loss_results[:-6])
plt.ylabel('Loss')
plt.xlabel('Epoch Number')
plt.savefig('./DNN_v2.png', format='png', pad_inches=1, dpi=1200)
```
| github_jupyter |
## <center>Making it easier for researchers and doctors to study the spread of Covid-19 in the US
Group 1:
1. Gunawan Adhiwirya
2. Reyhan Septri Asta
3. Muhammad Figo Mahendra
#### First, we import the packages we need

```
# import pandas
import pandas as pd
# import numpy
import numpy as np
# import seaborn
import seaborn as sns
# import matplotlib
import matplotlib.pyplot as plt
# the LinearRegression module is used to call the Linear Regression algorithm
from sklearn.linear_model import LinearRegression
# the train_test_split module is used to split our data into training and testing sets
from sklearn.model_selection import train_test_split
# import the mean_absolute_error module from the sklearn library
from sklearn.metrics import mean_absolute_error
# import math so the program can use all functions in the math module (e.g. sqrt)
import math
# disable warnings in Python
import warnings
warnings.filterwarnings('ignore')
from sklearn.cluster import KMeans
```
#### Then load the dataset to be used

```
df_train = pd.read_csv('datacovid.csv')
df_train
```
#### Then we check the shape of the dataset
```
df_train.shape
```
#### Then we check a summary of the dataset
- count = the number of data points
- mean = the average
- std = the standard deviation
- min = the minimum value
- 25% = the first quartile of the data
- 50% = the median (half of the data)
- 75% = the third quartile of the data
- max = the maximum value
```
df_train.describe()
```
#### Then we check whether the dataset contains empty or null values
```
df_train.isnull().sum()
```
#### Cleaning rows that contain null values
```
# since the data still contains null values, rows with NaN values are dropped
df_train.dropna(inplace = True)
df_train
```
#### Then we check the data types in the dataset
```
# check the data types
df_train.info()
```
#### Then we check whether the data contains duplicates
```
# check for duplicate rows
df_train.duplicated().sum()
```
#### Then the preprocessed data is visualized with histograms
```
# create histograms
df_train.hist(figsize = (20,12))
```
#### Next we drop the data we do not need
```
# since we do not need the pending data for our visualization, the pnew_death and pnew_case columns are dropped
# and we decided to drop prob_cases, prob_death and created_at because they are not relevant for studying the spread of Covid
dft=df_train.drop(['pnew_death', 'pnew_case','prob_cases', 'prob_death','created_at'], axis = 1)
dft
```
#### Then we check for duplicates again after dropping the unused columns
```
# check for duplicates after dropping some columns
dft.duplicated().sum()
```
#### Next we look for correlations in the data using a heatmap visualization
```
# find correlations using a heatmap
fig , axes = plt.subplots(figsize = (14,12))
sns.heatmap(dft.corr(), annot=True)
dft.columns
```
In the figure above, the strongest correlations are between tot_death and conf_death, and between tot_cases and conf_cases.
Because we have two equally strong correlations, we decided to pick one of them as the data to examine: conf_cases versus tot_cases.
#### After selecting the data to use, we check whether it contains outliers
```
# check whether the data contains outliers
q1 = df_train.iloc[:,[2,3,]].quantile(0.25)
q3 = df_train.iloc[:,[2,3]].quantile(0.75)
IQR = q3 - q1
IQR
```
#### Then check for outliers using a boxplot
```
# check for outliers using a boxplot
df = df_train.iloc[:,[2,3]]
df.columns
fig, axes = plt.subplots(ncols = 2, nrows = 1, figsize = (18,8))
for i, ax in zip(df.columns, axes.flat):
sns.boxplot(x = df[i], ax = ax)
plt.show()
```
#### Once the outliers are found, we remove them
```
# remove the outliers
Q1 = (df_train[['tot_cases', 'conf_cases']]).quantile(0.25)
Q3 = (df_train[['tot_cases', 'conf_cases']]).quantile(0.75)
IQR = Q3 - Q1
max = Q3 + (1.5*IQR)
min = Q1 - (1.5*IQR)
Jlebih = (df_train > max)
Jkurang = (df_train < min)
df_train = df_train.mask(Jlebih, max, axis=1)
df_train = df_train.mask(Jkurang, min, axis=1)
```
#### After removing the outliers, we check once more whether any remain in the data
```
# check whether any outliers remain after removal
df = df_train.iloc[:,[2,3]]
df.columns
fig, axes = plt.subplots(ncols = 2, nrows = 1, figsize = (18,8))
for i, ax in zip(df.columns, axes.flat):
sns.boxplot(x = df[i], ax = ax)
plt.show()
```
#### Next we take the two main columns to be used for the analysis
```
dfu=df.head(50)
dfu
```
#### Next we visualize the selected data with a scatter plot
```
plt.scatter(dfu['conf_cases'],dfu['tot_cases'])
plt.xlabel('conf_cases')
plt.ylabel('tot_cases')
plt.title('Scatter Plot conf_cases vs tot_cases')
plt.show()
x = dfu['conf_cases'].values.reshape(-1,1)
y = dfu['tot_cases'].values.reshape(-1,1)
```
#### Looking at the mean of the X and Y variables
```
x_mean = np.mean(x)
y_mean = np.mean(y)
print('mean of var x: ', x_mean, '\n'
      'mean of var y: ', y_mean)
```
#### Then we look at the correlation coefficient of the data
```
atas = sum((x - x_mean)*(y - y_mean))
bawah = math.sqrt((sum((x - x_mean)**2)) * (sum((y - y_mean)**2)))
correlation = atas/bawah
print('Nilai Correlation Coefficient: ', correlation)
```
#### Looking at the slope of the data
The slope is the steepness of the line
```
# slope
# The slope is the steepness of the line; the intercept
# is the distance from 0 of the point where the line crosses the y-axis
variance = sum((x - x_mean)**2)
covariance = sum((x - x_mean) * (y - y_mean))
theta_1 = covariance/variance
print('Nilai theta_1: ',theta_1)
```
The intercept is the distance from 0 of the point where the line crosses the y-axis
```
# intercept
theta_0 = y_mean - (theta_1 * x_mean)
print('Nilai theta_0: ',theta_0)
```
#### Making a manual prediction
```
# manual prediction
y_pred = theta_0 + (theta_1 * 130)
print(y_pred)
```
#### Visualizing the prediction with a scatter plot
```
# visualize the prediction with a scatter plot
y_pred = theta_0 + (theta_1 * x)
plt.scatter(x,y)
plt.plot(x, y_pred, c='r')
plt.xlabel('conf_cases')
plt.ylabel('tot_cases')
plt.title('Plot conf_cases vs tot_cases')
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size = 0.8, test_size = 0.2, random_state = 0)
```
#### Next we check the regression coefficient and intercept
```
regressor = LinearRegression()
regressor.fit(x_train, y_train)
print(regressor.coef_)
print(regressor.intercept_)
```
#### Printing the regression score to see the accuracy
```
regressor.score(x_test, y_test)
```
#### Printing the correlation derived from the regression score
```
print('Correlation: ', math.sqrt(regressor.score(x_test,y_test)))
```
#### Visualizing the regression on the test data
```
y_prediksi = regressor.predict(x_test)
plt.scatter(x_test, y_test)
plt.plot(x_test, y_prediksi, c='r')
plt.xlabel('conf_cases')
plt.ylabel('tot_cases')
plt.title('Plot conf_cases vs tot_cases')
```
#### Putting the dataframe into an array, then visualizing with the Elbow method
```
# put the dataframe into an array
data = np.array(df_train[["conf_cases", "tot_cases"]])
data
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i)
kmeans.fit(data)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
```
From the Elbow method above, we can conclude that the optimal number of clusters for K-Means is 2 (two).
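Since `KMeans` is already imported from sklearn, the same two-cluster grouping can also be obtained directly with `fit_predict` instead of the manual loop below. A self-contained sketch on toy data (in this notebook you would pass the `data` array built above):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for the (conf_cases, tot_cases) array used in this notebook
rng = np.random.RandomState(42)
low = rng.normal(loc=[100, 120], scale=10, size=(30, 2))
high = rng.normal(loc=[1000, 1100], scale=50, size=(30, 2))
data = np.vstack([low, high])

# Two clusters, as suggested by the elbow plot above
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(data)
print("cluster sizes:", np.bincount(labels))
print("centroids:\n", kmeans.cluster_centers_)
```

The manual implementation that follows is useful for understanding the algorithm; `fit_predict` gives the same kind of cluster assignments and centroids in one call.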
#### Then determine the number of clusters and the centroids
```
df_cluster = df_train[["conf_cases", "tot_cases"]]
# centroids
Centroids = (df_cluster.sample(n = 2))
plt.figure(figsize = (10, 5))
plt.scatter(df_cluster["conf_cases"], df_cluster["tot_cases"], color = 'blue')
plt.scatter(Centroids["conf_cases"], Centroids["tot_cases"], color = 'coral')
plt.xlabel('confirmed cases')
plt.ylabel('total cases')
plt.show()
```
In the centroid visualization above, we can see that the centroid points are still random. Therefore we perform the grouping process using the K-Means method.
#### Then perform the computation using K-Means
```
# K-Means
diff = 1
i = 0
while(diff!=0):
data_new = df_cluster
j = 1
for index1 ,row_c in Centroids.iterrows():
Y=[]
for index2,row_d in data_new.iterrows():
nd1=(row_c['conf_cases']-row_d['conf_cases'])**2
nd2=(row_c["tot_cases"]-row_d["tot_cases"])**2
nd=np.sqrt(nd1+nd2)
Y.append(nd)
df_cluster[j]=Y
j=j+1
hasil=[]
for index,row in df_cluster.iterrows():
min_dist=row[1]
pos=1
for j in range(2):
if row[j+1] < min_dist:
min_dist = row[j+1]
pos=j+1
hasil.append(pos)
df_cluster["Cluster"]=hasil
Centroids_new = df_cluster.groupby(["Cluster"]).mean()[["conf_cases", "tot_cases"]]
if i == 0:
diff=1
i=i+1
else:
diff = (Centroids_new['conf_cases'] - Centroids['conf_cases']).sum() + (Centroids_new["tot_cases"] - Centroids["tot_cases"]).sum()
print(diff.sum())
Centroids = df_cluster.groupby(["Cluster"]).mean()[['conf_cases',"tot_cases"]]
```
#### Determining the centroid groups and visualizing them
```
# determine the centroid groups
warna=['red','green','blue']
plt.figure(figsize=(10,5))
for i in range(3):
df_Model=df_cluster[df_cluster["Cluster"] == i + 1]
plt.scatter(df_Model['conf_cases'], df_Model ["tot_cases"], color = warna[i])
plt.scatter(Centroids['conf_cases'],Centroids["tot_cases"],color='black')
plt.xlabel('confirmed cases')
plt.ylabel("total cases")
plt.show()
```
As can be seen in the visualization above, both centroid points are now at the center of their respective groups, where each group is shown in a different color.
## Conclusion
From the analysis performed on the dataset, we can conclude that preprocessing revealed two strongly correlated pairs of columns, of which we use conf_cases and tot_cases. Here conf_cases is the number of confirmed Covid cases in the United States and tot_cases is the total number of cases in the United States. The correlation between conf_death and tot_death was not used, because it is not relevant for predicting the development and spread of Covid in the USA.
Clustering was then applied to conf_cases and tot_cases, which is very useful for predicting and analyzing the spread of Covid-19 in the United States. For the clustering, the optimal number of clusters for K-Means is 2 (two). After the analysis with the K-Means method, the grouping into 2 (two) clusters can be visualized.
### References
https://www.analyticsvidhya.com/blog/2019/08/comprehensive-guide-k-means-clustering/
https://towardsdatascience.com/machine-learning-algorithms-part-9-k-means-example-in-python-f2ad05ed5203
Individual and Group Assignment, SPADA DIKTI
| github_jupyter |
# Analyzing HTSeq Data Using Two Differential Expression Modules
<p>The main goals of this project are:</p>
<ul>
<li>Analyze HTSeq count data with tools that assume an underlying <a href="https://en.wikipedia.org/wiki/Negative_binomial_distribution" target="_blank">negative binomial distribution</a> on the data</li>
<li>Analyze <a href="http://software.broadinstitute.org/cancer/software/genepattern/modules/docs/PreprocessReadCounts/1" target="_blank">normalized HTSeq count</a> data with tools that assume an underlying <a href="https://en.wikipedia.org/wiki/Normal_distribution" target="_blank">normal distribution</a> on the data.</li>
<li>Compare the results of differential gene expression analysis under the two scenarios above.</li>
</ul>
<p><img alt="2019-04-16_07_BioITWorld_Class-Project.jpg" src="https://datasets.genepattern.org/data/BioITWorld/2019-04-16_07_BioITWorld_Class-Project.jpg" /></p>
```
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://cloud.genepattern.org/gp", "", ""))
```
## Section 1: Load and Filter the Dataset
In brief, the dataset we will use in this notebook is RNA-Seq counts downloaded from TCGA. We have selected 40 samples of Breast Invasive Carcinoma (BRCA), 20 of those samples come from tumor tissue and 20 come from their corresponding normal tissue.
### 1.1 Load the CLS file for future use by using the Python function below.
In order to make the phenotype labels file (the CLS file) easily accessible to the GenePattern modules in this notebook, we will use a Python function wrapped in a GenePattern UIBuilder cell titled **`Load URL Into Notebook {}`**. Using this function is as simple as typing the URL of the data we want to load.
<div class="alert alert-info">
<ul>
<li><b>url</b>: Drag and drop the link to <a href="https://datasets.genepattern.org/data/TCGA_BRCA/WP_0_BRCA_cp_40_samples.cls">this CLS file</a><br>
<em>Note: It should display the file's url after you have done so.</em>
</ul>
</div>
```
@genepattern.build_ui(name="Load URL Into Notebook",
                      parameters={
                          "url": {"default": "https://datasets.genepattern.org/data/TCGA_BRCA/WP_0_BRCA_cp_40_samples.cls"},
                          "output_var": {"default": "", "hide": True}
                      })
def load_data_from_url(url):
    """Load the file at the given URL into the notebook as a GenePattern output."""
    return genepattern.GPUIOutput(files=[url])
```
<div class="well">
<em>Note:</em> you can use this function to load data from a URL in any of your notebooks.
</div>
### 1.2 Filter out uninformative genes
<div class="alert alert-info">
<p>In order to remove the uninformative genes from the HTSeq dataset (i.e., the rows in the GCT file with the smallest variance), create a new cell below this one and use the <strong>PreprocessDataset*</strong> GenePattern module with these parameters:</p>
<ul>
<li><strong>input filename</strong>: Drag and drop the link to <a href="https://datasets.genepattern.org/data/TCGA_BRCA/WP_0_BRCA_cp_40_samples.gct" target="_blank">this GCT file</a><br />
<em>Note: It should display the file's url after you have done so.</em></li>
<li><strong>output filename: workshop_BRCA_filtered.gct</strong></li>
<li><strong>ceiling: </strong> 20000000.
<br />
<em>Note: The default value is 20,000; we are changing it to 20,000,000.</em></li>
<li>The rest of the parameters can be left as default.</li>
</ul>
</div>
```
preprocessdataset_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00020')
preprocessdataset_job_spec = preprocessdataset_task.make_job_spec()
preprocessdataset_job_spec.set_parameter("input.filename", "")
preprocessdataset_job_spec.set_parameter("threshold.and.filter", "1")
preprocessdataset_job_spec.set_parameter("floor", "20")
preprocessdataset_job_spec.set_parameter("ceiling", "20000000")
preprocessdataset_job_spec.set_parameter("min.fold.change", "3")
preprocessdataset_job_spec.set_parameter("min.delta", "100")
preprocessdataset_job_spec.set_parameter("num.outliers.to.exclude", "0")
preprocessdataset_job_spec.set_parameter("row.normalization", "0")
preprocessdataset_job_spec.set_parameter("row.sampling.rate", "1")
preprocessdataset_job_spec.set_parameter("threshold.for.removing.rows", "")
preprocessdataset_job_spec.set_parameter("number.of.columns.above.threshold", "")
preprocessdataset_job_spec.set_parameter("log2.transform", "0")
preprocessdataset_job_spec.set_parameter("output.file.format", "3")
preprocessdataset_job_spec.set_parameter("output.file", "workshop_BRCA_filtered.gct")
preprocessdataset_job_spec.set_parameter("job.memory", "2 Gb")
preprocessdataset_job_spec.set_parameter("job.walltime", "02:00:00")
preprocessdataset_job_spec.set_parameter("job.cpuCount", "1")
genepattern.display(preprocessdataset_task)
```
---
## Section 2: Analyzing HTseq Counts Using DESeq2
The results you generate in this section will be used as the reference for comparison later in this notebook and will be referred to as **`DESeq2_results`**.
### 2.1 Perform differential gene expression using DESeq2
<div class="alert alert-info">
Create a new cell below this one and use the <b>DESeq2</b> GenePattern module with the following parameters:
<ul>
<li><b>input file</b>: From the dropdown menu, choose the output from the PreprocessDataset module (i.e., <b>workshop_BRCA_filtered.gct</b> if you used the suggested parameters in section 1).</li>
<li><b>cls file</b>: From the dropdown menu, choose the output from the <b>`Load URL Into Notebook {}`</b> UIBuilder cell (i.e., <b>WP_0_BRCA_cp_40_samples.cls</b> if you used the suggested parameters in section 1).</li>
<li>Click on <b>Run</b> and move on to step 2.2 of this section once the job is complete. </li></ul>
</div>
```
deseq2_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00362')
deseq2_job_spec = deseq2_task.make_job_spec()
deseq2_job_spec.set_parameter("input.file", "")
deseq2_job_spec.set_parameter("cls.file", "")
deseq2_job_spec.set_parameter("confounding.variable.cls.file", "")
deseq2_job_spec.set_parameter("output.file.base", "<input.file_basename>")
deseq2_job_spec.set_parameter("qc.plot.format", "skip")
deseq2_job_spec.set_parameter("fdr.threshold", "0.1")
deseq2_job_spec.set_parameter("top.N.count", "20")
deseq2_job_spec.set_parameter("random.seed", "779948241")
deseq2_job_spec.set_parameter("job.memory", "2 Gb")
deseq2_job_spec.set_parameter("job.walltime", "02:00:00")
deseq2_job_spec.set_parameter("job.cpuCount", "1")
genepattern.display(deseq2_task)
```
### 2.2 Extract top 25 differentially expressed genes and save them to a DataFrame for later use
<div class="alert alert-info">We will parse one of the TXT files from the previous cell (<strong>DESeq2</strong>), extract only the information that we want (i.e., the name and rank of the 25 most differentially expressed genes), and save that list in a Python dictionary named <strong><code>DESeq2_results</code></strong>. To do so, we are using the GenePattern UI Builder in the next cell. Feel free to check out the underlying code if you want. Set the input parameters as follows:
<ul>
<li>Send the <strong>first output</strong> of <strong>DESeq2</strong> to Extract Ranked Gene List From TXT GenePattern Variable { }
<ul>
<li>Hint: the name of the file should be <strong>workshop_BRCA_filtered.normal.vs.tumor.DESeq2_results_report.txt</strong></li>
<li>From the dropdown menu, choose the output from the DESeq2 module (i.e., <b>...results_report.txt</b> if you used the suggested parameters in section 1)</li>
</ul>
</li>
<li><strong>file var</strong>: the action just before this one should have populated this parameter with a long URL similar to this one: <em>https://gp-beta-ami.genepattern.org/gp/jobResults/1234567/workshop_BRCA_filtered.normal.vs.tumor.DESeq2_results_report.txt</em>.</li>
<li><strong>number of genes</strong>: 25 (default)</li>
<li><strong>verbose</strong>: true (default)</li>
<li>Confirm that the <strong>output variable</strong> is set to <strong>DESeq2_results</strong></li>
<li>Run the cell.</li>
</ul>
</div>
```
import genepattern

@genepattern.build_ui(name="Extract Ranked Gene List From TXT GenePattern Variable",
                      parameters={
                          "file_var": {
                              "type": "file",
                              "kinds": ["txt"],
                          },
                          "number_of_genes": {"default": 25},
                          "output_var": {"default": "DESeq2_results"},
                      })
def extract_genes_from_txt(file_var: 'URL of the results_report_txt file from DESeq2',
                           number_of_genes: 'How many genes to extract' = 100,
                           verbose: 'Whether or not to print the gene list' = True):
    genes_dict = {}  # Initializing the dictionary of genes and rankings
    # Get the job number and name of the file
    temp = file_var.split('/')
    # programmatically access that job to open the file
    gp_file = eval('job' + temp[5] + '.get_file("' + temp[6] + '")')
    py_file = gp_file.open()
    py_file.readline()  # skip the header line
    rank = 1
    for line in py_file.readlines():
        formatted_line = str(line, 'utf-8').strip('\n').split('\t')
        genes_dict[formatted_line[0]] = rank
        if rank >= number_of_genes:
            break
        rank += 1
    if verbose:
        # For display only
        for gene in genes_dict:
            print("{}: {}".format(genes_dict[gene], gene))
    return genes_dict
```
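The parsing above relies on GenePattern job objects (`job<id>.get_file(...)`), so it only runs inside a live GenePattern session. The core idea — skip the header of a tab-separated report and map the first column to a 1-based rank — can be sketched in plain Python; the column layout below is a made-up example, not the exact DESeq2 report format:

```python
import io

def top_genes(report_text, n=3):
    """Map the first column of the first ``n`` data rows (header skipped)
    of a tab-separated report to its 1-based rank."""
    lines = io.StringIO(report_text)
    next(lines)                       # skip the header row
    ranks = {}
    for rank, line in enumerate(lines, start=1):
        if rank > n:
            break
        ranks[line.rstrip("\n").split("\t")[0]] = rank
    return ranks

report = "gene\tlog2FC\tpadj\nTP53\t2.4\t1e-9\nBRCA1\t1.9\t3e-7\nEGFR\t1.5\t2e-5\nMYC\t1.1\t4e-4\n"
print(top_genes(report))  # → {'TP53': 1, 'BRCA1': 2, 'EGFR': 3}
```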
---
## Section 3: Analyzing HTSeq Counts Using ComparativeMarkerSelection
These results will be used for comparison later in this notebook and will be referred to as **`CMS_results`**.
### 3.1 Transform HTSeq counts by using VoomNormalize
<div class="alert alert-info">
<h3 style="margin-top: 0;"> Instructions <i class="fa fa-info-circle"></i></h3>
Create a new cell below this one and use the <strong>VoomNormalize</strong> GenePattern module with the following parameters:
<ul>
<li><strong>input file</strong>: The output from the <strong>PreprocessDataset</strong> module (i.e., <strong>workshop_BRCA_filtered.gct</strong> if you used the suggested parameters in section 1).</li>
<li><strong>cls file</strong>: The output from the <strong>`Load URL Into Notebook {}`</strong> UIBuilder cell (i.e., <strong>WP_0_BRCA_cp_40_samples.cls</strong> if you used the suggested parameters in section 1).</li>
<li><strong>output file</strong>: leave as default.</li>
</ul>
</div>
```
voomnormalize_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00355')
voomnormalize_job_spec = voomnormalize_task.make_job_spec()
voomnormalize_job_spec.set_parameter("input.file", "")
voomnormalize_job_spec.set_parameter("cls.file", "")
voomnormalize_job_spec.set_parameter("output.file", "<input.file_basename>.preprocessed.gct")
voomnormalize_job_spec.set_parameter("expression.value.filter.threshold", "1")
voomnormalize_job_spec.set_parameter("job.memory", "2 Gb")
voomnormalize_job_spec.set_parameter("job.walltime", "02:00:00")
voomnormalize_job_spec.set_parameter("job.cpuCount", "1")
genepattern.display(voomnormalize_task)
```
### 3.2 Perform differential gene expression analysis on transformed counts using ComparativeMarkerSelection
<div class="alert alert-info">Create a new cell below this one and use the <strong>ComparativeMarkerSelection</strong> GenePattern module with the following parameters:
<ul>
<li><strong>input file</strong>: The output from the <strong>VoomNormalize</strong> module (i.e., <strong>workshop_BRCA_filtered.preprocessed.gct</strong> if you used the suggested parameters in step 3.1 of this section).</li>
<li><strong>cls file</strong>: The output from the <strong>`Load URL Into Notebook {}`</strong> UIBuilder cell (i.e., <strong>WP_0_BRCA_cp_40_samples.cls</strong> if you used the suggested parameters in section 1).</li>
<li>The rest of the parameters can be left as default.</li>
</ul>
</div>
```
comparativemarkerselection_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00044')
comparativemarkerselection_job_spec = comparativemarkerselection_task.make_job_spec()
comparativemarkerselection_job_spec.set_parameter("input.file", "")
comparativemarkerselection_job_spec.set_parameter("cls.file", "")
comparativemarkerselection_job_spec.set_parameter("confounding.variable.cls.file", "")
comparativemarkerselection_job_spec.set_parameter("test.direction", "2")
comparativemarkerselection_job_spec.set_parameter("test.statistic", "0")
comparativemarkerselection_job_spec.set_parameter("min.std", "")
comparativemarkerselection_job_spec.set_parameter("number.of.permutations", "10000")
comparativemarkerselection_job_spec.set_parameter("log.transformed.data", "false")
comparativemarkerselection_job_spec.set_parameter("complete", "false")
comparativemarkerselection_job_spec.set_parameter("balanced", "false")
comparativemarkerselection_job_spec.set_parameter("random.seed", "779948241")
comparativemarkerselection_job_spec.set_parameter("smooth.p.values", "true")
comparativemarkerselection_job_spec.set_parameter("phenotype.test", "one versus all")
comparativemarkerselection_job_spec.set_parameter("output.filename", "<input.file_basename>.comp.marker.odf")
comparativemarkerselection_job_spec.set_parameter("job.memory", "2 Gb")
comparativemarkerselection_job_spec.set_parameter("job.walltime", "02:00:00")
comparativemarkerselection_job_spec.set_parameter("job.cpuCount", "1")
genepattern.display(comparativemarkerselection_task)
```
### 3.3 Extract the top 25 genes and save them to a dictionary for later use.
<div class="alert alert-info">
<p>We will parse the ODF file from the <strong>ComparativeMarkerSelection</strong> run you just performed (using the <strong>preprocessed</strong> data), extract only the information that we want (i.e., the name and rank of the 25 most differentially expressed genes), and save that list in a Python dictionary named <strong><code>CMS_results</code></strong>. To do so, we are using the GenePattern UI Builder in the next cell. Feel free to check out the underlying code if you want. Set the input parameters as follows:</p>
<ul>
<li>Choose <em>workshop_BRCA_filtered.preprocessed.comp.marker.odf</em> from the dropdown menu of the cell below.<br />
<em>The action just before this one should have populated this parameter with a long URL similar to this one: https://gp-beta-ami.genepattern.org/gp/jobResults/1234567/workshop_BRCA_filtered.preprocessed.comp.marker.odf</em></li>
<li><strong>number of genes</strong>: 25 (default)</li>
<li><strong>verbose</strong>: true (default)</li>
<li>Confirm that the <strong>output variable</strong> is set to <strong>CMS_results</strong></li>
<li>Run the cell.</li>
</ul>
<em>The Pandas warning can be ignored</em>
</div>
```
import warnings
warnings.filterwarnings('ignore')
from gp.data import ODF

def custom_CMSreader(GP_ODF: 'URL of the ODF output from ComparativeMarkerSelection',
                     number_of_genes: 'How many genes to extract' = 100,
                     verbose: 'Whether or not to print the gene list' = True):
    # Get the job number and name of the file
    temp = GP_ODF.split('/')
    # programmatically access that job to open the file
    GP_ODF = eval('ODF(job' + temp[5] + '.get_file("' + temp[6] + '"))')
    GP_ODF = GP_ODF.loc[GP_ODF['Rank'] <= number_of_genes, ['Rank', 'Feature']]
    GP_ODF.set_index('Feature', inplace=True)
    to_return = GP_ODF.to_dict()['Rank']
    if verbose:
        # For display only
        genes_list = sorted([[v, k] for k, v in to_return.items()])
        for gene in genes_list:
            print("{}: {}".format(gene[0], gene[1]))
    return to_return

genepattern.GPUIBuilder(custom_CMSreader,
                        name="Extract Ranked Gene List From ODF GenePattern Variable",
                        parameters={
                            "GP_ODF": {"name": "Comparative Marker Selection ODF filename",
                                       "type": "file",
                                       "kinds": ["Comparative Marker Selection", "odf", "ODF"],
                                       "description": "The output from ComparativeMarkerSelection",
                                       },
                            "number_of_genes": {"default": 25},
                            "output_var": {"default": "CMS_results"},
                        })
```
---
## Section 4: Comparing Results of the Negative Binomial and Transformed Normal Models
In this short section we compare the dictionaries which contain the lists of top differentially expressed genes and their ranks. Use the following parameters:
- **reference list**: DESeq2_results
- **new list**: CMS_results
```
from scipy.stats import kendalltau as kTau

def compare_dictionaries(reference_list, new_list):
    # compute how many of the genes in the reference list are also in the new list
    common = list(set(reference_list) & set(new_list))
    ref_common = [reference_list[temp] for temp in common]
    new_common = [new_list[temp] for temp in common]
    kendall_tau = kTau(ref_common, new_common)[0]  # Kendall's tau measures the similarity between two ranked lists.
    metric = 0.5 * (1 + kendall_tau) * len(common) / len(reference_list)  # Penalizing low overlap between lists.
    print("There is a {:.3g}% overlap.".format(100 * len(common) / len(reference_list)),
          "Custom metric is {:.3g} (similarity metric, range [0,1]).".format(metric),
          "Kendall's tau is {:.3g}".format(kendall_tau))
    print("---")
    print(f'Here are the ranks, in each list, of the {len(ref_common)} genes that overlap:')
    print(ref_common)
    print(new_common)
    return metric

genepattern.GPUIBuilder(compare_dictionaries, name="Compare Two Ranked Lists",
                        parameters={
                            "output_var": {"default": "temp_result_1",
                                           "hide": True}
                        })
```
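To see what the custom metric rewards, here is a small self-contained sketch (the gene names and ranks are made up) that recomputes it with a naive pure-Python Kendall's tau in place of `scipy.stats.kendalltau`:

```python
def kendall_tau(a, b):
    """Naive O(n^2) Kendall rank correlation between two equal-length lists (no tie handling)."""
    concordant = discordant = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

reference = {"TP53": 1, "BRCA1": 2, "EGFR": 3, "MYC": 4}  # made-up ranked lists
new = {"BRCA1": 1, "TP53": 2, "MYC": 3, "KRAS": 4}

common = sorted(set(reference) & set(new))
tau = kendall_tau([reference[g] for g in common], [new[g] for g in common])
metric = 0.5 * (1 + tau) * len(common) / len(reference)  # penalize low overlap
print(round(tau, 3), round(metric, 3))  # → 0.333 0.5
```

Full overlap with identical ordering gives a metric of 1; here the 75% overlap and the partial rank agreement combine to 0.5.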
---
## Supervised learning: Random Forests
Let us now look at one of the most popular state-of-the-art algorithms. This algorithm is non-parametric and goes by the name of **random forests**.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
```
## At the root of random forests: the decision tree
Random forests belong to the family of **ensemble learning** methods and are built from **decision trees**. For this reason, we will first present decision trees.
A decision tree is a very intuitive way to solve a classification problem. We simply define a number of questions that allow us to identify the appropriate class.
```
import fig_code.figures as fig
fig.plot_example_decision_tree()
```
Binary splitting of the data is fast to carry out. The difficulty lies in deciding which is the "right" question to ask.
That is the whole point of the training phase of a decision tree: given a dataset, the algorithm determines which question (or split) yields the largest information gain.
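As a minimal sketch of the quantity being maximized, the information gain of a candidate split can be computed from the entropy of the labels (the label encoding below is arbitrary, purely for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy of the parent node minus the weighted entropy of its children."""
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

labels = [0, 0, 1, 1]
print(information_gain(labels, [0, 0], [1, 1]))  # pure split → 1.0
print(information_gain(labels, [0, 1], [0, 1]))  # uninformative split → 0.0
```

At each depth, the tree keeps the candidate question whose split scores highest by this measure.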
### Building a decision tree
Here is an example of a decision-tree classifier using the scikit-learn library.
We start by defining a 2-dimensional dataset with associated labels:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
```
We previously defined a function that will make it easier to visualize the process:
```
from fig_code.figures import visualize_tree, plot_tree_interactive
```
We now use the ``interact`` module in IPython to visualize the splits made by the decision tree as a function of the tree depth (i.e. the number of questions the tree is allowed to ask):
```
plot_tree_interactive(X, y);
```
**Note**: each time the depth of the tree increases, every branch is split in two, **except** for branches that contain points of a single class.
A decision tree is a non-parametric classification method that is easy to implement.
**Question: do you see any problems with this model?**
## Decision trees and overfitting
One problem with decision trees is that they tend to **overfit** the training data quickly. They have a strong tendency to capture the noise present in the data rather than the true underlying distribution. For example, if we build two trees from subsets of the data defined above, we obtain the following two classifiers:
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
```
The two classifiers show noticeable differences if you look at the figures in detail. When we predict the class of a new point, the result is likely to be affected by the noise in the data more than by the signal we are trying to model.
## Ensemble predictions: random forests
One way to limit this overfitting problem is to use an **ensemble model**: a meta-estimator that aggregates the predictions of multiple estimators (each of which may overfit individually). Thanks to some rather magical mathematical properties (!), the aggregated prediction of these estimators turns out to be more accurate and robust than the performance of each estimator taken individually.
One of the most famous ensemble methods is the **random forest**, which aggregates the predictions of multiple decision trees.
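The aggregation step itself is simple: for classification, each tree votes and the majority wins. A toy sketch with three hand-made decision "stumps" standing in for full trees (the thresholds are made up):

```python
from collections import Counter

def ensemble_predict(estimators, x):
    """Majority vote over the individual estimators' predictions."""
    votes = [est(x) for est in estimators]
    return Counter(votes).most_common(1)[0][0]

# three toy stumps that disagree near the decision boundary
stumps = [lambda x: int(x > 2), lambda x: int(x > 4), lambda x: int(x > 3)]
print(ensemble_predict(stumps, 3.5))  # votes are 1, 0, 1 → the majority says 1
```

A real random forest does the same thing with hundreds of decorrelated trees, which is why individual overfitting tends to average out.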
There is a lot of scientific literature on how best to randomize these trees, but to give a concrete example, here is an ensemble of models each of which uses only a subsample of the data:
```
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
def fit_randomized_tree(random_state=0):
    rng = np.random.RandomState(random_state)
    i = np.arange(len(y))
    rng.shuffle(i)
    clf = DecisionTreeClassifier(max_depth=5)
    # use only 250 examples chosen at random out of the 300 available
    visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
                   xlim=(X[:, 0].min(), X[:, 0].max()),
                   ylim=(X[:, 1].min(), X[:, 1].max()))
from ipywidgets import interact
interact(fit_randomized_tree, random_state=(0, 100));
```
We can observe in detail how the model changes depending on the random draw of data it uses for training, even though the underlying data distribution is fixed!
The random forest performs similar computations, but aggregates all of the generated random trees to build a single prediction:
```
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(X, y)
visualize_tree(clf,X, y, boundaries=False)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
```
By averaging 100 randomly "perturbed" decision trees, we obtain an aggregated prediction that models our data more accurately.
*(Note: above, our random perturbation is performed by randomly subsampling the data... Random forests use more sophisticated techniques; for more details, see the [scikit-learn documentation](http://scikit-learn.org/stable/modules/ensemble.html#forest).)*
## Example 1: use for regression
For this example we consider a different kind of problem from the previous classification examples. Random forests can also be used for regression problems (that is, predicting a continuous variable rather than a discrete one).
The estimator we will use is ``sklearn.ensemble.RandomForestRegressor``.
We briefly show how it can be used:
```
from sklearn.ensemble import RandomForestRegressor
# We start by creating a training dataset
x = 10 * np.random.rand(100)
def model(x, sigma=0.):
    # sigma controls the amount of noise
    # sigma=0 gives a "perfect", noise-free distribution
    oscillation_rapide = np.sin(5 * x)
    oscillation_lente = np.sin(0.5 * x)
    bruit = sigma * np.random.randn(len(x))
    return oscillation_rapide + oscillation_lente + bruit
y = model(x)
plt.figure(figsize=(10,5))
plt.scatter(x, y);
xfit = np.linspace(0, 10, num=1000)
# yfit contains the random forest's predictions learned from the noisy data
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
# ytrue contains the values of the model that generated our data, with zero noise
ytrue = model(xfit, sigma=0)
plt.figure(figsize=(10,5))
#plt.scatter(x, y)
plt.plot(xfit, yfit, '-r', label='random forest')
plt.plot(xfit, ytrue, '-g', alpha=0.5, label='noise-free distribution')
plt.legend();
```
We can see that random forests, in a non-parametric way, manage to estimate a distribution with multiple periodicities without any intervention on our part to specify those periodicities!
---
**Hyperparameters**
Let's use IPython's built-in help tool to explore the ``RandomForestRegressor`` class. To do so, append a ? to the object:
```
RandomForestRegressor?
```
What options are available for ``RandomForestRegressor``?
How would the previous plot change if we modified these values?
These class parameters are called the **hyperparameters** of a model.
---
```
# Exercise: propose a support vector regression model to fit the phenomenon
from sklearn.svm import SVR
SVMreg = SVR().fit(x[:, None], y)
yfit_SVM = SVMreg.predict(xfit[:, None])
plt.figure(figsize=(10,5))
plt.scatter(x, y)
plt.plot(xfit, yfit_SVM, '-r', label = 'SVM')
plt.plot(xfit, ytrue, '-g', alpha=0.5, label='noise-free distribution')
plt.legend();
SVR?
```
### Clipping
In a game we often need to draw on only part of the screen. For example, in a strategy
game in the style of [Command and Conquer](https://es.wikipedia.org/wiki/Command_%26_Conquer), we may
want to split the screen in two: a large upper area where a map is displayed,
and a smaller lower panel showing information about our troops,
ammunition status, and so on. Obviously, we do not want drawing the map
to paint anything over the status panel, or the other way around.
To solve this we can use a feature of pygame surfaces
called the _clipping area_. Every surface has a clipping area,
a rectangle (an object of the `pygame.Rect` class) that may or may not be active. If
we define and activate the area, all drawing operations
are limited to the area's rectangle, leaving the rest of the surface
untouched.
To define a clipping area, use the surface's `set_clip` method, passing it
a `Rect`-style object as a parameter. The currently defined area can be retrieved at any
time by calling `get_clip`.
The following code shows a simulation of a strategy game with the layout
described above. The first call to `set_clip` restricts drawing to
the upper part of the screen, corresponding to the map (defined by the rectangle `map_area`), and the
second limits the usable area to the information panel:
```
import pygame
import random
SIZE = WIDTH, HEIGHT = 800, 600
FPS = 60
BLACK = (0, 0, 0)
GRAY = (128, 128, 128)
CYAN = (0, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
class Soldier:

    def __init__(self, x=None, y=None):
        self.x = x or random.randrange(WIDTH)
        self.y = y or random.randrange(HEIGHT)

    def update(self):
        self.x += random.choice([-2, -1, -1, 0, 0, 0, 1, 1, 2])
        self.y += random.choice([-2, -1, -1, 0, 0, 0, 1, 1, 2])


class Troop(Soldier):

    def draw(self, canvas):
        pygame.draw.circle(canvas, CYAN, (self.x, self.y), 5)

    def is_enemy(self):
        return False


class Enemy(Soldier):

    def draw(self, canvas):
        r = pygame.Rect(self.x - 5, self.y - 5, 11, 11)
        pygame.draw.rect(canvas, RED, r)

    def is_enemy(self):
        return True


map_area = pygame.Rect((0, 0), (WIDTH, HEIGHT - 40))
info_area = pygame.Rect((0, HEIGHT - 32), (WIDTH, 40))

pygame.init()
try:
    pygame.display.set_caption("Clipping Demo")
    screen = pygame.display.set_mode(SIZE, 0, 24)
    # Game initialization
    crowd = [Troop() for _ in range(10)] + [Enemy() for _ in range(12)]
    clock = pygame.time.Clock()
    in_game = True
    while in_game:
        # Read input events
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                in_game = False
        # Update the game state from the current state and the inputs
        for soldier in crowd:
            soldier.update()
        # Render the new state
        screen.set_clip(map_area)
        screen.fill(BLACK)
        for soldier in crowd:
            soldier.draw(screen)
        screen.set_clip(info_area)
        screen.fill(GRAY)
        pos = 5
        for soldier in crowd:
            r = pygame.Rect((pos, 0), (4, HEIGHT))
            if soldier.is_enemy():
                pygame.draw.rect(screen, RED, r)
            else:
                pygame.draw.rect(screen, GREEN, r)
            pos += 5
        pygame.display.update()
        clock.tick(FPS)
finally:
    pygame.quit()
```
**Exercise**: Modify the program so that it detects mouse-button
press events. If there is a soldier (ours or the
enemy's) at the clicked coordinates, delete it. For our purposes, deleting it
simply means removing it from the `crowd` list; you can use
the list method `remove(elem)` for that.
```
l = [1, 2, 3, 4, 5, 6]
print(l)
l.remove(4)
print(l)
```
# Matrix Factorization for Recommender Systems - Part 2
As seen in [Part 1](https://online-ml.github.io/examples/matrix-factorization-for-recommender-systems-part-1), the strength of [Matrix Factorization (MF)](https://en.wikipedia.org/wiki/Matrix_factorization_(recommender_systems)) lies in its ability to deal with sparse and high-cardinality categorical variables. In this second tutorial we will take a look at the Factorization Machines (FM) algorithm and study how it generalizes the power of MF.
**Table of contents of this tutorial series on matrix factorization for recommender systems:**
- [Part 1 - Traditional Matrix Factorization methods for Recommender Systems](https://online-ml.github.io/examples/matrix-factorization-for-recommender-systems-part-1)
- [Part 2 - Factorization Machines and Field-aware Factorization Machines](https://online-ml.github.io/examples/matrix-factorization-for-recommender-systems-part-2)
- [Part 3 - Large scale learning and better predictive power with multiple pass learning](https://online-ml.github.io/examples/matrix-factorization-for-recommender-systems-part-3)
## Factorization Machines
Steffen Rendel came up in 2010 with [Factorization Machines](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf), an algorithm able to handle any real valued feature vector, combining the advantages of general predictors with factorization models. It became quite popular in the field of online advertising, notably after winning several Kaggle competitions. The modeling technique starts with a linear regression to capture the effects of each variable individually:
$$
\normalsize
\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j}
$$
Interaction terms are then added to learn relations between features. Instead of learning a single, specific weight per interaction (as in [polynomial regression](https://en.wikipedia.org/wiki/Polynomial_regression)), a set of latent factors is learnt per feature (as in MF). An interaction is computed by multiplying the product of the involved features by the dot product of their latent vectors. The degree of factorization, or model order, represents the maximum number of features per interaction considered. The model equation for a factorization machine of degree $d$ = 2 is defined as:
$$
\normalsize
\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle x_{j} x_{j'}
$$
Where $\normalsize \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle$ is the dot product of the latent vectors of features $j$ and $j'$:
$$
\normalsize
\langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle = \sum_{f=1}^{k} \mathbf{v}_{j, f} \cdot \mathbf{v}_{j', f}
$$
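As an illustration (not river's actual implementation), the degree-2 equation above can be evaluated directly with NumPy and checked against Rendle's $O(pk)$ reformulation of the pairwise term; all values below are made up:

```python
import numpy as np

# Illustrative evaluation of the degree-2 FM equation (not river's
# actual implementation): p features, k latent factors per feature.
rng = np.random.default_rng(73)
p, k = 4, 3
w0 = 0.1                      # intercept
w = rng.normal(size=p)        # linear weights w_j
V = rng.normal(size=(p, k))   # latent vectors v_j (one row per feature)
x = np.array([1.0, 0.0, 0.5, 2.0])

# Direct double loop over the pairwise interactions.
y_hat = w0 + w @ x
for j in range(p):
    for jp in range(j + 1, p):
        y_hat += (V[j] @ V[jp]) * x[j] * x[jp]

# Rendle's O(pk) identity for the same pairwise sum:
# sum_{j<j'} <v_j, v_j'> x_j x_j' = 0.5 * sum_f [(sum_j v_jf x_j)^2 - sum_j v_jf^2 x_j^2]
pair_term = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
assert np.isclose(y_hat, w0 + w @ x + pair_term)
print(y_hat)
```

The identity is what makes FM training linear in the number of features rather than quadratic.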
Higher-order FM will be covered in a later section; just note that factorization models express their power in sparse settings, which is also where higher-order interactions are hard to estimate.
Strong emphasis must be placed on feature engineering, as it allows FM to mimic most factorization models and significantly impacts performance. One hot encoding high cardinality categorical variables is the most frequent step before feeding the model with data. For efficiency, the `river` FM implementation treats string values as categorical variables and automatically one hot encodes them. FM models have their own module, [river.facto](https://online-ml.github.io/api/overview/#facto).
## Mimic Biased Matrix Factorization (BiasedMF)
Let's start with a simple example where we want to reproduce the Biased Matrix Factorization model we trained in the previous tutorial. For a fair comparison with [Part 1 example](https://online-ml.github.io/examples/matrix-factorization-for-recommender-systems-part-1/#biased-matrix-factorization-biasedmf), let's set the same evaluation framework:
```
from river import datasets
from river import metrics
from river.evaluate import progressive_val_score
def evaluate(model):
X_y = datasets.MovieLens100K()
metric = metrics.MAE() + metrics.RMSE()
_ = progressive_val_score(X_y, model, metric, print_every=25_000, show_time=True, show_memory=True)
```
In order to build an equivalent model we need to use the same hyper-parameters. As we can't replace the FM intercept with the global running mean, we won't be able to build exactly the same model:
```
from river import compose
from river import facto
from river import meta
from river import optim
from river import stats
fm_params = {
'n_factors': 10,
'weight_optimizer': optim.SGD(0.025),
'latent_optimizer': optim.SGD(0.05),
'sample_normalization': False,
'l1_weight': 0.,
'l2_weight': 0.,
'l1_latent': 0.,
'l2_latent': 0.,
'intercept': 3,
'intercept_lr': .01,
'weight_initializer': optim.initializers.Zeros(),
'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.1, seed=73),
}
regressor = compose.Select('user', 'item')
regressor |= facto.FMRegressor(**fm_params)
model = meta.PredClipper(
regressor=regressor,
y_min=1,
y_max=5
)
evaluate(model)
```
Both MAE values are very close to each other (0.7486 vs 0.7485), showing that we almost reproduced the [reco.BiasedMF](https://online-ml.github.io/api/reco/BiasedMF/) algorithm. The cost is a naturally slower running time, as the FM implementation offers more flexibility.
## Feature engineering for FM models
Let's study the basics of how to properly encode data for FM models. We are going to keep using MovieLens 100K as it provides various feature types:
```
import json
for x, y in datasets.MovieLens100K():
print(f'x = {json.dumps(x, indent=4)}\ny = {y}')
break
```
The features we are going to add to our model don't improve its predictive power. Nevertheless, they are useful to illustrate different methods of data encoding:
1. Set-categorical variables
We have seen that categorical variables are one hot encoded automatically if set to strings; on the other hand, set-categorical variables must be encoded explicitly by the user. A good way of doing so is to assign them a value of $1/m$, where $m$ is the number of elements of the sample set. This gives the feature a constant "weight" across all samples, preserving the model's stability. Let's create a routine to encode movie genres this way:
```
def split_genres(x):
genres = x['genres'].split(', ')
return {f'genre_{genre}': 1 / len(genres) for genre in genres}
```
2. Numerical variables
In practice, transforming numerical features into categorical ones works better in most cases. Feature binning is the natural way to do so, but finding good bins is sometimes more an art than a science. Let's encode users' ages with something simple:
```
def bin_age(x):
if x['age'] <= 18:
return {'age_0-18': 1}
elif x['age'] <= 32:
return {'age_19-32': 1}
elif x['age'] < 55:
return {'age_33-54': 1}
else:
return {'age_55-100': 1}
```
Let's put everything together:
```
fm_params = {
'n_factors': 14,
'weight_optimizer': optim.SGD(0.01),
'latent_optimizer': optim.SGD(0.025),
'intercept': 3,
'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
}
regressor = compose.Select('user', 'item')
regressor += (
compose.Select('genres') |
compose.FuncTransformer(split_genres)
)
regressor += (
compose.Select('age') |
compose.FuncTransformer(bin_age)
)
regressor |= facto.FMRegressor(**fm_params)
model = meta.PredClipper(
regressor=regressor,
y_min=1,
y_max=5
)
evaluate(model)
```
Note that using more variables involves factorizing a larger latent space; increasing the number of latent factors $k$ then often helps capture more information.
Some other feature engineering tips from [3 idiots' winning solution](https://www.kaggle.com/c/criteo-display-ad-challenge/discussion/10555) for Kaggle [Criteo display ads](https://www.kaggle.com/c/criteo-display-ad-challenge) competition in 2014:
- Infrequent modalities often bring noise and little information, transforming them into a special tag can help
- In some cases, sample-wise normalization seems to make the optimization problem easier to be solved
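The first tip can be sketched with a small helper; the function name, threshold, and tag below are illustrative, not part of river or of the winning solution:

```python
from collections import Counter

# Sketch of the "rare modality" tip: modalities seen fewer than `min_count`
# times are collapsed into a single '<rare>' tag (names are illustrative).
def collapse_rare(values, min_count=2, rare_tag='<rare>'):
    counts = Counter(values)
    return [v if counts[v] >= min_count else rare_tag for v in values]

genres = ['Action', 'Action', 'Comedy', 'Film-Noir', 'Action', 'Comedy']
print(collapse_rare(genres))
# → ['Action', 'Action', 'Comedy', '<rare>', 'Action', 'Comedy']
```

In a streaming setting the counts would be maintained online rather than computed up front, but the idea is the same.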
## Higher-Order Factorization Machines (HOFM)
The model equation generalized to any order $d \geq 2$ is defined as:
$$
\normalsize
\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{l=2}^{d} \sum_{j_1=1}^{p} \cdots \sum_{j_l=j_{l-1}+1}^{p} \left(\prod_{j'=1}^{l} x_{j_{j'}} \right) \left(\sum_{f=1}^{k_l} \prod_{j'=1}^{l} v_{j_{j'}, f}^{(l)} \right)
$$
```
hofm_params = {
'degree': 3,
'n_factors': 12,
'weight_optimizer': optim.SGD(0.01),
'latent_optimizer': optim.SGD(0.025),
'intercept': 3,
'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
}
regressor = compose.Select('user', 'item')
regressor += (
compose.Select('genres') |
compose.FuncTransformer(split_genres)
)
regressor += (
compose.Select('age') |
compose.FuncTransformer(bin_age)
)
regressor |= facto.HOFMRegressor(**hofm_params)
model = meta.PredClipper(
regressor=regressor,
y_min=1,
y_max=5
)
evaluate(model)
```
As said previously, high-order interactions are often hard to estimate because of sparsity, so we won't spend too much time here.
## Field-aware Factorization Machines (FFM)
[Field-aware variant of FM (FFM)](https://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf) improved the original method by adding the notion of "*fields*". A "*field*" is a group of features that belong to a specific domain (e.g. the "*users*" field, the "*items*" field, or the "*movie genres*" field).
FFM restricts itself to pairwise interactions and factorizes a separate latent space per combination of fields (e.g. users/items, users/movie genres, or items/movie genres) instead of a common one shared by all fields. Therefore, each feature has one latent vector per field it can interact with, so that it can learn the specific effect with each different field.
The model equation is defined by:
$$
\normalsize
\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} \langle \mathbf{v}_{j, f_{j'}}, \mathbf{v}_{j', f_{j}} \rangle x_{j} x_{j'}
$$
Where $f_j$ and $f_{j'}$ are the fields of features $j$ and $j'$, respectively.
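To make the indexing concrete, here is an illustrative NumPy evaluation of the FFM pairwise term (not river's implementation); the field names and values are made up. Feature $j$ meets feature $j'$ using the latent vector it keeps for $j'$'s field, and vice versa:

```python
import numpy as np

# Illustrative FFM pairwise term (not river's implementation). Each feature j
# keeps one latent vector per *field*; when features j and j' interact, j uses
# the vector it holds for j's field-partner and vice versa.
rng = np.random.default_rng(73)
fields = ['user', 'item', 'genre']   # field of features 0, 1, 2
p, k = len(fields), 4
# V[j][f]: latent vector of feature j used against field f.
V = {j: {f: rng.normal(size=k) for f in set(fields)} for j in range(p)}
x = np.array([1.0, 1.0, 0.5])

pair_term = 0.0
for j in range(p):
    for jp in range(j + 1, p):
        # <v_{j, f_{j'}}, v_{j', f_j}> * x_j * x_{j'}
        pair_term += (V[j][fields[jp]] @ V[jp][fields[j]]) * x[j] * x[jp]
print(pair_term)
```

Counting the parameters in `V` (one $k$-vector per feature per field) shows why FFM's memory cost grows with the number of fields.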
```
ffm_params = {
'n_factors': 8,
'weight_optimizer': optim.SGD(0.01),
'latent_optimizer': optim.SGD(0.025),
'intercept': 3,
'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
}
regressor = compose.Select('user', 'item')
regressor += (
compose.Select('genres') |
compose.FuncTransformer(split_genres)
)
regressor += (
compose.Select('age') |
compose.FuncTransformer(bin_age)
)
regressor |= facto.FFMRegressor(**ffm_params)
model = meta.PredClipper(
regressor=regressor,
y_min=1,
y_max=5
)
evaluate(model)
```
Note that FFM usually needs a smaller number of latent factors $k$ than FM, as each latent vector only deals with one field.
## Field-weighted Factorization Machines (FwFM)
[Field-weighted Factorization Machines (FwFM)](https://arxiv.org/abs/1806.03514) address FFM memory issues caused by its large number of parameters, which is on the order of *number of features* times *number of fields*. Like FFM, FwFM is an extension of FM restricted to pairwise interactions, but instead of factorizing separate latent spaces, it learns a specific weight $r_{f_j, f_{j'}}$ for each field combination, modelling the interaction strength.
The model equation is defined as:
$$
\normalsize
\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} r_{f_j, f_{j'}} \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle x_{j} x_{j'}
$$
```
fwfm_params = {
'n_factors': 10,
'weight_optimizer': optim.SGD(0.01),
'latent_optimizer': optim.SGD(0.025),
'intercept': 3,
'seed': 73,
}
regressor = compose.Select('user', 'item')
regressor += (
compose.Select('genres') |
compose.FuncTransformer(split_genres)
)
regressor += (
compose.Select('age') |
compose.FuncTransformer(bin_age)
)
regressor |= facto.FwFMRegressor(**fwfm_params)
model = meta.PredClipper(
regressor=regressor,
y_min=1,
y_max=5
)
evaluate(model)
```
<a href="https://colab.research.google.com/github/AlbertoRosado1/desihigh/blob/main/nbody.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
from IPython.display import clear_output
from time import sleep
import sys
sys.path.append('/content/drive/MyDrive/desihigh')
import time
import astropy
import itertools
import matplotlib
import numpy as np
import pylab as pl
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM
from IPython.display import YouTubeVideo
from tools.flops import flops
#%matplotlib notebook
%matplotlib inline
plt.style.use('dark_background')
```
# DESI and the fastest supercomputer in the West
Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[<sup>1</sup>](#Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy:
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/summit.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later):
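The comparison relies on the notebook's `tools.flops` helper imported above. As a rough stand-in, here is a crude way to estimate the MFLOPS of the machine running this notebook by timing a matrix multiply. This is a sketch, not a rigorous benchmark:

```python
import time
import numpy as np

# Crude MFLOPS estimate: time an n x n matrix multiply, which performs
# roughly 2 * n**3 floating-point operations (n**3 multiplies, n**3 adds).
def estimate_mflops(n=512, repeats=3):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float('inf')
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    return 2 * n**3 / best / 1e6

print(f'~{estimate_mflops():,.0f} MFLOPS')
```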
So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace:
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/abacus.png?raw=1" alt="Drawing" style="width: 600px;"/>
Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI:
```
YouTubeVideo('08LBltePDZw', width=800, height=400)
```
[Dark matter](https://en.wikipedia.org/wiki/Dark_matter#:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density, with Dark Energy and ordinary atoms ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe.
Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage).
We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate):
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/MilkyWay.jpg?raw=1" alt="Drawing" style="width: 1000px;"/>
To start, we'll initially position a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration.
```
def init_dof(npt=1):
# Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter
# as we see above.
xs = np.random.uniform(0., 1., npt)
ys = np.random.uniform(0., 1., npt)
zs = np.random.uniform(0., 1., npt)
pos = np.vstack((xs,ys,zs)).T
vel = np.zeros_like(pos)
return pos, vel
# Quick check: create a few particles and look at the first one's position and velocity.
pos, vel = init_dof(npt=10)
print(pos[0], vel[0])
```
The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the bookkeeping required to calculate the total force experienced by one particle due to all others:
```
def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True):
# eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf.
# Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance.
delta_r = particles - pos
result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0)
    # If 'pos' is one of the particles, then technically we're including the "self-force".
    # But such a pos will have delta_r = 0, and thus contributes nothing to the total force, as it should!
if doimages:
# Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a
# corresponding acceleration due to the image of the particle produced by applying periodic shifts to its
# position.
shift = np.array([-1, 0, 1])
images = []
for triple in itertools.product(shift, repeat=3):
images.append(triple)
images.remove((0, 0, 0))
images = np.array(images)
for image in images:
delta_r_displaced = delta_r + image
result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0)
return result
```
In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.
Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time.
```
fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150)
xmin, xmax, ymin, ymax = (0., 1., 0., 1.)
Ngrid = 100
xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid))
epsilon = 0.1
weights = np.zeros_like(xx)
npt = 10
pos, vel = init_dof(npt=npt)
for par in pos:
weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.)
ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower')
ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w')
ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False)
ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs")
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
fig.tight_layout()
```
This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea!
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/Holmberg.png?raw=1" alt="Drawing" style="width: 800px;"/>
Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).
To do this, we just need to calculate the acceleration on each particle at a series of time steps, and update the velocity and position according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up.
```
# We'll sample the equations of motion in discrete time steps.
dt = 5e-4
nsteps = 500
timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False)
# Number and mass of particles
npt = 2
mass = 0.25
# Whether to draw arrows for the acceleration and velocity
draw_acc = True
draw_vel = False
# A small drag term to simulate the real drag dark matter particles experience due to the expanding universe
drag = 1e-2
```
Now we simply have to run the simulation!
```
fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150)
ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False)
# Reinitialise particles.
pos, vel = init_dof(npt=npt)
# A helper function to make a nice-looking legend for our arrows
# from https://stackoverflow.com/a/22349717
def make_legend_arrow(legend, orig_handle,
xdescent, ydescent,
width, height, fontsize):
p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height)
return p
for index_in_timestep, time in enumerate(timesteps):
ax.clear()
ax.set_title(f'N-body simulation with $N={npt}$ particles')
step_label = ax.text(0.03, .97, f'Step {index_in_timestep}',
transform=ax.transAxes, verticalalignment='top', c='k',
bbox=dict(color='w', alpha=0.8))
dvel = np.zeros_like(vel)
dpos = np.zeros_like(pos)
acc = np.zeros_like(pos)
for index_in_particle in range(npt):
acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1)
# Update velocities.
dvel[index_in_particle] = dt * acc[index_in_particle]
# Update positions.
dpos[index_in_particle] = dt * vel[index_in_particle]
vel += dvel - drag*vel
pos += dpos
# Our simulation has periodic boundaries, if you go off one side you come back on the other!
pos = pos % 1.
ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w')
# Draw arrows representing the velocity and acceleration vectors, if requested
# The code here is a little verbose to get nice-looking arrows in the legend
arrows = []
if draw_vel:
ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0)
arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')]
if draw_acc:
ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0)
arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')]
if draw_vel or draw_acc:
ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)},
facecolor='k', edgecolor='white', framealpha=0.8,
loc='lower right')
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
fig.canvas.draw()
# Reinitialise particles.
pos, vel = init_dof(npt=npt)
# A helper function to make a nice-looking legend for our arrows
# from https://stackoverflow.com/a/22349717
def make_legend_arrow(legend, orig_handle,
xdescent, ydescent,
width, height, fontsize):
p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height)
return p
for index_in_timestep, time in enumerate(timesteps):
clear_output(wait=True)
fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150)
ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False)
ax.clear()
ax.set_title(f'N-body simulation with $N={npt}$ particles')
step_label = ax.text(0.03, .97, f'Step {index_in_timestep}',
transform=ax.transAxes, verticalalignment='top', c='k',
bbox=dict(color='w', alpha=0.8))
dvel = np.zeros_like(vel)
dpos = np.zeros_like(pos)
acc = np.zeros_like(pos)
for index_in_particle in range(npt):
acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1,doimages=False)
# Update velocities.
dvel[index_in_particle] = dt * acc[index_in_particle]
# Update positions.
dpos[index_in_particle] = dt * vel[index_in_particle]
vel += dvel - drag*vel
pos += dpos
# Our simulation has periodic boundaries, if you go off one side you come back on the other!
pos = pos % 1.
ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w')
# Draw arrows representing the velocity and acceleration vectors, if requested
# The code here is a little verbose to get nice-looking arrows in the legend
arrows = []
if draw_vel:
ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0)
arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')]
if draw_acc:
ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0)
arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')]
if draw_vel or draw_acc:
ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)},
facecolor='k', edgecolor='white', framealpha=0.8,
loc='lower right')
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
plt.show(fig)
sleep(0.001)
```
Try playing around with the settings! More than 100 particles won't run very smoothly, however.
With the default settings, you'll find that the particles tend to fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies.
Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!):
```
YouTubeVideo('LQMLFryA_7k', width=800, height=400)
```
With this great success comes added responsibility. Global computing infrastructure (the data centers that power the internet, the cloud, and supercomputers like Summit), while fantastic for DESI and science, now has a [carbon footprint](https://en.wikipedia.org/wiki/Carbon_footprint) comparable to the [world airline industry](https://www.hpcwire.com/solution_content/ibm/cross-industry/five-tips-to-reduce-your-hpc-carbon-footprint/) and consumes the same amount of electricity as the country of Iran (82 million people!).
More worrying still, this will soon grow from 2% of the world's energy consumption to ~30%. An extraordinary rate!
Fortunately, Summit is also among the greenest of supercomputers. Its 14.7 GFlops/watt earned it the #1 ranking on the [global Green 500 list in 2019](https://www.top500.org/lists/green500/2019/06/).
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/Sequoia.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
### Footnotes
1. Well, at least Summit *was* the world's fastest supercomputer while DESI scientists were using it in early 2020. Japan's Fugaku supercomputer overtook Summit in June 2020. The world's 500 fastest supercomputers are tracked on the "Top500" website here: https://www.top500.org/lists/top500/2020/06/. Better luck next year, USA!
# Import necessary dependencies and settings
```
import pandas as pd
import numpy as np
import re
import nltk
import matplotlib.pyplot as plt
pd.options.display.max_colwidth = 200
%matplotlib inline
# Sample corpus of text documents
corpus = ['The sky is blue and beautiful.',
'Love this blue and beautiful sky!',
'The quick brown fox jumps over the lazy dog.',
"A king's breakfast has sausages, ham, bacon, eggs, toast and beans",
'I love green eggs, ham, sausages and bacon!',
'The brown fox is quick and the blue dog is lazy!',
'The sky is very blue and the sky is very beautiful today',
'The dog is lazy but the brown fox is quick!'
]
labels = ['weather', 'weather', 'animals', 'food', 'food', 'animals', 'weather', 'animals']
corpus = np.array(corpus)
corpus_df = pd.DataFrame({'Document': corpus,
'Category': labels})
corpus_df = corpus_df[['Document', 'Category']]
corpus_df
```
# Simple text pre-processing
```
wpt = nltk.WordPunctTokenizer()
# The stopword list ships separately from nltk; run nltk.download('stopwords') once if needed.
stop_words = nltk.corpus.stopwords.words('english')
def normalize_document(doc):
# lower case and remove special characters\whitespaces
    doc = re.sub(r'[^a-zA-Z\s]', '', doc, flags=re.I|re.A)  # flags must be passed by keyword; positionally this slot is `count`
doc = doc.lower()
doc = doc.strip()
# tokenize document
tokens = wpt.tokenize(doc)
# filter stopwords out of document
filtered_tokens = [token for token in tokens if token not in stop_words]
# re-create document from filtered tokens
doc = ' '.join(filtered_tokens)
return doc
normalize_corpus = np.vectorize(normalize_document)
norm_corpus = normalize_corpus(corpus)
norm_corpus
```
# Word Embeddings
## Load up sample corpus - Bible
```
from nltk.corpus import gutenberg
from string import punctuation
# The Gutenberg corpus ships separately from nltk; run nltk.download('gutenberg') once if needed.
bible = gutenberg.sents('bible-kjv.txt')
remove_terms = punctuation + '0123456789'
norm_bible = [[word.lower() for word in sent if word not in remove_terms] for sent in bible]
norm_bible = [' '.join(tok_sent) for tok_sent in norm_bible]
norm_bible = filter(None, normalize_corpus(norm_bible))
norm_bible = [tok_sent for tok_sent in norm_bible if len(tok_sent.split()) > 2]
print('Total lines:', len(bible))
print('\nSample line:', bible[10])
print('\nProcessed line:', norm_bible[10])
```
## Implementing a word2vec model using a CBOW (Continuous Bag of Words) neural network architecture
### Build Vocabulary
```
from keras.preprocessing import text
from keras.utils import np_utils
from keras.preprocessing import sequence
tokenizer = text.Tokenizer()
tokenizer.fit_on_texts(norm_bible)
word2id = tokenizer.word_index
word2id['PAD'] = 0
id2word = {v:k for k, v in word2id.items()}
wids = [[word2id[w] for w in text.text_to_word_sequence(doc)] for doc in norm_bible]
vocab_size = len(word2id)
embed_size = 100
window_size = 2
print('Vocabulary Size:', vocab_size)
print('Vocabulary Sample:', list(word2id.items())[:10])
```
### Build (context_words, target_word) pair generator
```
def generate_context_word_pairs(corpus, window_size, vocab_size):
context_length = window_size*2
for words in corpus:
sentence_length = len(words)
for index, word in enumerate(words):
context_words = []
label_word = []
start = index - window_size
end = index + window_size + 1
context_words.append([words[i]
for i in range(start, end)
if 0 <= i < sentence_length
and i != index])
label_word.append(word)
x = sequence.pad_sequences(context_words, maxlen=context_length)
y = np_utils.to_categorical(label_word, vocab_size)
yield (x, y)
i = 0
for x, y in generate_context_word_pairs(corpus=wids, window_size=window_size, vocab_size=vocab_size):
if 0 not in x[0]:
print('Context (X):', [id2word[w] for w in x[0]], '-> Target (Y):', id2word[np.argwhere(y[0])[0][0]])
if i == 10:
break
i += 1
```
### Build CBOW Deep Network Model
```
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Embedding, Lambda
cbow = Sequential()
cbow.add(Embedding(input_dim=vocab_size, output_dim=embed_size, input_length=window_size*2))
cbow.add(Lambda(lambda x: K.mean(x, axis=1), output_shape=(embed_size,)))
cbow.add(Dense(vocab_size, activation='softmax'))
cbow.compile(loss='categorical_crossentropy', optimizer='rmsprop')
print(cbow.summary())
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(cbow, show_shapes=True, show_layer_names=False,
rankdir='TB').create(prog='dot', format='svg'))
```
### Train model for 5 epochs
```
for epoch in range(1, 6):
loss = 0.
i = 0
for x, y in generate_context_word_pairs(corpus=wids, window_size=window_size, vocab_size=vocab_size):
i += 1
loss += cbow.train_on_batch(x, y)
if i % 100000 == 0:
print('Processed {} (context, word) pairs'.format(i))
print('Epoch:', epoch, '\tLoss:', loss)
print()
```
### Get word embeddings
```
weights = cbow.get_weights()[0]
weights = weights[1:]
print(weights.shape)
pd.DataFrame(weights, index=list(id2word.values())[1:]).head()
```
### Build a distance matrix to view the most similar words (contextually)
```
from sklearn.metrics.pairwise import euclidean_distances
# compute pairwise distance matrix
distance_matrix = euclidean_distances(weights)
print(distance_matrix.shape)
# view contextually similar words
similar_words = {search_term: [id2word[idx] for idx in distance_matrix[word2id[search_term]-1].argsort()[1:6]+1]
for search_term in ['god', 'jesus', 'noah', 'egypt', 'john', 'gospel', 'moses','famine']}
similar_words
```
## Implementing a word2vec model using a skip-gram neural network architecture
### Build Vocabulary
```
from keras.preprocessing import text
tokenizer = text.Tokenizer()
tokenizer.fit_on_texts(norm_bible)
word2id = tokenizer.word_index
id2word = {v:k for k, v in word2id.items()}
vocab_size = len(word2id) + 1
embed_size = 100
wids = [[word2id[w] for w in text.text_to_word_sequence(doc)] for doc in norm_bible]
print('Vocabulary Size:', vocab_size)
print('Vocabulary Sample:', list(word2id.items())[:10])
```
### Build and View sample skip grams ((word1, word2) -> relevancy)
```
from keras.preprocessing.sequence import skipgrams
# generate skip-grams
skip_grams = [skipgrams(wid, vocabulary_size=vocab_size, window_size=10) for wid in wids]
# view sample skip-grams
pairs, labels = skip_grams[0][0], skip_grams[0][1]
for i in range(10):
print("({:s} ({:d}), {:s} ({:d})) -> {:d}".format(
id2word[pairs[i][0]], pairs[i][0],
id2word[pairs[i][1]], pairs[i][1],
labels[i]))
```
### Build Skip-gram Deep Network Model
```
# Note: the Merge layer comes from the older Keras 1.x API used by this tutorial;
# newer Keras versions removed it in favour of functional merge layers such as `dot`.
from keras.layers import Merge
from keras.layers.core import Dense, Reshape
from keras.layers.embeddings import Embedding
from keras.models import Sequential
word_model = Sequential()
word_model.add(Embedding(vocab_size, embed_size,
embeddings_initializer="glorot_uniform",
input_length=1))
word_model.add(Reshape((embed_size, )))
context_model = Sequential()
context_model.add(Embedding(vocab_size, embed_size,
embeddings_initializer="glorot_uniform",
input_length=1))
context_model.add(Reshape((embed_size,)))
model = Sequential()
model.add(Merge([word_model, context_model], mode="dot"))
model.add(Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
print(model.summary())
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True, show_layer_names=False,
rankdir='TB').create(prog='dot', format='svg'))
```
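The network above scores a (target, context) pair as the sigmoid of the dot product of their embedding vectors. The core computation in plain Python, with toy 2-d vectors for illustration only:

```python
import math

# Sigmoid of the dot product of a word vector and a context vector.
def score(word_vec, context_vec):
    dot = sum(w * c for w, c in zip(word_vec, context_vec))
    return 1.0 / (1.0 + math.exp(-dot))

print(round(score([1.0, 2.0], [0.5, -0.5]), 4))  # 0.3775, i.e. sigmoid(-0.5)
```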
### Train the model for 5 epochs
```
for epoch in range(1, 6):
loss = 0
for i, elem in enumerate(skip_grams):
pair_first_elem = np.array(list(zip(*elem[0]))[0], dtype='int32')
pair_second_elem = np.array(list(zip(*elem[0]))[1], dtype='int32')
labels = np.array(elem[1], dtype='int32')
X = [pair_first_elem, pair_second_elem]
Y = labels
if i % 10000 == 0:
print('Processed {} (skip_first, skip_second, relevance) pairs'.format(i))
loss += model.train_on_batch(X,Y)
print('Epoch:', epoch, 'Loss:', loss)
```
### Get word embeddings
```
merge_layer = model.layers[0]
word_model = merge_layer.layers[0]
word_embed_layer = word_model.layers[0]
weights = word_embed_layer.get_weights()[0][1:]
print(weights.shape)
pd.DataFrame(weights, index=id2word.values()).head()
```
### Build a distance matrix to view the most similar words (contextually)
```
from sklearn.metrics.pairwise import euclidean_distances
distance_matrix = euclidean_distances(weights)
print(distance_matrix.shape)
similar_words = {search_term: [id2word[idx] for idx in distance_matrix[word2id[search_term]-1].argsort()[1:6]+1]
for search_term in ['god', 'jesus', 'noah', 'egypt', 'john', 'gospel', 'moses','famine']}
similar_words
```
### Visualize word embeddings
```
from sklearn.manifold import TSNE
words = sum([[k] + v for k, v in similar_words.items()], [])
words_ids = [word2id[w] for w in words]
word_vectors = np.array([weights[idx] for idx in words_ids])
print('Total words:', len(words), '\tWord Embedding shapes:', word_vectors.shape)
tsne = TSNE(n_components=2, random_state=0, n_iter=10000, perplexity=3)
np.set_printoptions(suppress=True)
T = tsne.fit_transform(word_vectors)
labels = words
plt.figure(figsize=(14, 8))
plt.scatter(T[:, 0], T[:, 1], c='steelblue', edgecolors='k')
for label, x, y in zip(labels, T[:, 0], T[:, 1]):
plt.annotate(label, xy=(x+1, y+1), xytext=(0, 0), textcoords='offset points')
```
## Leveraging gensim for building a word2vec model
```
from gensim.models import word2vec
# tokenize sentences in corpus
wpt = nltk.WordPunctTokenizer()
tokenized_corpus = [wpt.tokenize(document) for document in norm_bible]
# Set values for various parameters
feature_size = 100 # Word vector dimensionality
window_context = 30 # Context window size
min_word_count = 1 # Minimum word count
sample = 1e-3 # Downsample setting for frequent words
# NOTE: in gensim >= 4.0 these parameters are named vector_size= and epochs=
w2v_model = word2vec.Word2Vec(tokenized_corpus, size=feature_size,
                              window=window_context, min_count=min_word_count,
                              sample=sample, iter=50)
# view similar words based on gensim's model
similar_words = {search_term: [item[0] for item in w2v_model.wv.most_similar([search_term], topn=5)]
for search_term in ['god', 'jesus', 'noah', 'egypt', 'john', 'gospel', 'moses','famine']}
similar_words
```
## Visualizing word embeddings
```
from sklearn.manifold import TSNE
words = sum([[k] + v for k, v in similar_words.items()], [])
wvs = w2v_model.wv[words]
tsne = TSNE(n_components=2, random_state=0, n_iter=10000, perplexity=2)
np.set_printoptions(suppress=True)
T = tsne.fit_transform(wvs)
labels = words
plt.figure(figsize=(14, 8))
plt.scatter(T[:, 0], T[:, 1], c='orange', edgecolors='r')
for label, x, y in zip(labels, T[:, 0], T[:, 1]):
plt.annotate(label, xy=(x+1, y+1), xytext=(0, 0), textcoords='offset points')
```
## Applying the word2vec model on our sample corpus
```
wpt = nltk.WordPunctTokenizer()
tokenized_corpus = [wpt.tokenize(document) for document in norm_corpus]
# Set values for various parameters
feature_size = 10 # Word vector dimensionality
window_context = 10 # Context window size
min_word_count = 1 # Minimum word count
sample = 1e-3 # Downsample setting for frequent words
w2v_model = word2vec.Word2Vec(tokenized_corpus, size=feature_size,
window=window_context, min_count = min_word_count,
sample=sample, iter=100)
```
## Visualize word embeddings
```
from sklearn.manifold import TSNE
words = w2v_model.wv.index2word
wvs = w2v_model.wv[words]
tsne = TSNE(n_components=2, random_state=0, n_iter=5000, perplexity=2)
np.set_printoptions(suppress=True)
T = tsne.fit_transform(wvs)
labels = words
plt.figure(figsize=(12, 6))
plt.scatter(T[:, 0], T[:, 1], c='orange', edgecolors='r')
for label, x, y in zip(labels, T[:, 0], T[:, 1]):
plt.annotate(label, xy=(x+1, y+1), xytext=(0, 0), textcoords='offset points')
```
## Sample word embedding
```
w2v_model.wv['sky']
```
## Build framework for getting document level embeddings
```
def average_word_vectors(words, model, vocabulary, num_features):
feature_vector = np.zeros((num_features,),dtype="float64")
nwords = 0.
for word in words:
if word in vocabulary:
nwords = nwords + 1.
            feature_vector = np.add(feature_vector, model.wv[word])  # index via .wv (plain model[word] was removed in gensim 4)
if nwords:
feature_vector = np.divide(feature_vector, nwords)
return feature_vector
def averaged_word_vectorizer(corpus, model, num_features):
vocabulary = set(model.wv.index2word)
features = [average_word_vectors(tokenized_sentence, model, vocabulary, num_features)
for tokenized_sentence in corpus]
return np.array(features)
w2v_feature_array = averaged_word_vectorizer(corpus=tokenized_corpus, model=w2v_model,
num_features=feature_size)
pd.DataFrame(w2v_feature_array)
```
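The averaging scheme above, reduced to its essentials in plain Python with hypothetical 2-d word vectors: a document vector is the element-wise mean of the vectors of its in-vocabulary words, and all zeros when none are known.

```python
# Hypothetical 2-d word vectors for illustration.
vocab = {'sky': [1.0, 0.0], 'blue': [0.0, 1.0]}

def avg_vector(tokens, vocab, num_features=2):
    known = [vocab[t] for t in tokens if t in vocab]
    if not known:
        return [0.0] * num_features          # no in-vocabulary words: zero vector
    return [sum(col) / len(known) for col in zip(*known)]  # element-wise mean

print(avg_vector(['sky', 'blue', 'unknown'], vocab))  # [0.5, 0.5]
```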
## Clustering with word embeddings
```
from sklearn.cluster import AffinityPropagation
ap = AffinityPropagation()
ap.fit(w2v_feature_array)
cluster_labels = ap.labels_
cluster_labels = pd.DataFrame(cluster_labels, columns=['ClusterLabel'])
pd.concat([corpus_df, cluster_labels], axis=1)
from sklearn.decomposition import PCA
pca = PCA(n_components=2, random_state=0)
pcs = pca.fit_transform(w2v_feature_array)
labels = ap.labels_
categories = list(corpus_df['Category'])
plt.figure(figsize=(8, 6))
for i in range(len(labels)):
label = labels[i]
color = 'orange' if label == 0 else 'blue' if label == 1 else 'green'
annotation_label = categories[i]
x, y = pcs[i]
plt.scatter(x, y, c=color, edgecolors='k')
plt.annotate(annotation_label, xy=(x+1e-4, y+1e-3), xytext=(0, 0), textcoords='offset points')
```
## GloVe Embeddings with spaCy
```
import spacy
nlp = spacy.load('en_vecs')  # assumes en_vectors_web_lg was linked as 'en_vecs' (python -m spacy link en_vectors_web_lg en_vecs)
total_vectors = len(nlp.vocab.vectors)
print('Total word vectors:', total_vectors)
```
## Visualize GloVe word embeddings
```
unique_words = list(set([word for sublist in [doc.split() for doc in norm_corpus] for word in sublist]))
word_glove_vectors = np.array([nlp(word).vector for word in unique_words])
pd.DataFrame(word_glove_vectors, index=unique_words)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0, n_iter=5000, perplexity=3)
np.set_printoptions(suppress=True)
T = tsne.fit_transform(word_glove_vectors)
labels = unique_words
plt.figure(figsize=(12, 6))
plt.scatter(T[:, 0], T[:, 1], c='orange', edgecolors='r')
for label, x, y in zip(labels, T[:, 0], T[:, 1]):
plt.annotate(label, xy=(x+1, y+1), xytext=(0, 0), textcoords='offset points')
```
## Cluster documents with GloVe Embeddings
```
from sklearn.cluster import KMeans
doc_glove_vectors = np.array([nlp(str(doc)).vector for doc in norm_corpus])
km = KMeans(n_clusters=3, random_state=0)
km.fit_transform(doc_glove_vectors)
cluster_labels = km.labels_
cluster_labels = pd.DataFrame(cluster_labels, columns=['ClusterLabel'])
pd.concat([corpus_df, cluster_labels], axis=1)
```
## Leveraging gensim for building a FastText model
```
from gensim.models.fasttext import FastText
wpt = nltk.WordPunctTokenizer()
tokenized_corpus = [wpt.tokenize(document) for document in norm_bible]
# Set values for various parameters
feature_size = 100 # Word vector dimensionality
window_context = 50 # Context window size
min_word_count = 5 # Minimum word count
sample = 1e-3 # Downsample setting for frequent words
ft_model = FastText(tokenized_corpus, size=feature_size, window=window_context,
                    min_count=min_word_count, sample=sample, sg=1, iter=50)
# view similar words based on gensim's model
similar_words = {search_term: [item[0] for item in ft_model.wv.most_similar([search_term], topn=5)]
for search_term in ['god', 'jesus', 'noah', 'egypt', 'john', 'gospel', 'moses','famine']}
similar_words
from sklearn.decomposition import PCA
words = sum([[k] + v for k, v in similar_words.items()], [])
wvs = ft_model.wv[words]
pca = PCA(n_components=2)
np.set_printoptions(suppress=True)
P = pca.fit_transform(wvs)
labels = words
plt.figure(figsize=(18, 10))
plt.scatter(P[:, 0], P[:, 1], c='lightgreen', edgecolors='g')
for label, x, y in zip(labels, P[:, 0], P[:, 1]):
plt.annotate(label, xy=(x+0.06, y+0.03), xytext=(0, 0), textcoords='offset points')
ft_model.wv['jesus']
print(ft_model.wv.similarity(w1='god', w2='satan'))
print(ft_model.wv.similarity(w1='god', w2='jesus'))
st1 = "god jesus satan john"
print('Odd one out for [',st1, ']:', ft_model.wv.doesnt_match(st1.split()))
st2 = "john peter james judas"
print('Odd one out for [',st2, ']:', ft_model.wv.doesnt_match(st2.split()))
```
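FastText's robustness to out-of-vocabulary words comes from representing each word by its character n-grams (plus the whole word). A sketch of the n-gram extraction with the conventional `<`/`>` boundary markers; this is illustrative only, not gensim's exact hashing and bucketing scheme:

```python
# Character n-grams with '<' and '>' boundary markers.
def char_ngrams(word, n=3):
    padded = '<' + word + '>'
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams('god'))  # ['<go', 'god', 'od>']
```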
[Sascha Spors](https://orcid.org/0000-0001-7225-9992),
Professorship Signal Theory and Digital Signal Processing,
[Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
Faculty of Computer Science and Electrical Engineering (IEF),
[University of Rostock, Germany](https://www.uni-rostock.de/en/)
# Tutorial Signals and Systems (Signal- und Systemtheorie)
Summer Semester 2021 (Bachelor Course #24015)
- lecture: https://github.com/spatialaudio/signals-and-systems-lecture
- tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
WIP...
The project is currently under heavy development while new material is added for the summer semester 2021.
Feel free to contact the lecturer: [frank.schultz@uni-rostock.de](https://orcid.org/0000-0002-3010-0294)
## Fourier Series Right Time Shift <-> Phase Mod
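The code below illustrates the Fourier transform's shift theorem: a right shift in time by $\tau$ leaves the magnitude spectrum unchanged and multiplies the spectrum by a linear-phase factor,

```latex
x(t - \tau) \,\circ\!\!-\!\!\bullet\, X(\mathrm{j}\omega)\, \mathrm{e}^{-\mathrm{j}\omega\tau}
```

For the rectangular pulse of width $T_h$, amplitude $A = 1/T_h$ and shift $\tau = T_h/2$ used below, this gives $X(\mathrm{j}\omega) = A\,T_h\,\operatorname{sinc}(\omega T_h/2)\,\mathrm{e}^{-\mathrm{j}\omega T_h/2}$ with $\operatorname{sinc}(x) = \sin(x)/x$.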
```
import numpy as np
import matplotlib.pyplot as plt
def my_sinc(x):  # we use the definition sinc(x) = sin(x)/x rather than numpy's normalized sinc, thus:
    return np.sinc(x/np.pi)
Th_des = [1, 0.2]
om = np.linspace(-100, 100, 1000)
plt.figure(figsize=(10, 8))
plt.subplot(2,1,1)
for idx, Th in enumerate(Th_des):
A = 1/Th # such that sinc amplitude is always 1
# Fourier transform for single rect pulse
Xsinc = A*Th * my_sinc(om*Th/2)
Xsinc_phase = Xsinc*np.exp(-1j*om*Th/2)
plt.plot(om, Xsinc, 'C7', lw=1)
plt.plot(om, np.abs(Xsinc_phase), label=r'$T_h$=%1.0e s' % Th, lw=5-idx)
plt.legend()
plt.title(r'Fourier transform of single rectangular impulse with $A=1/T_h$ right-shifted by $\tau=T_h/2$')
plt.ylabel(r'magnitude $|X(\mathrm{j}\omega)|$')
plt.xlim(om[0], om[-1])
plt.grid(True)
plt.subplot(2,1,2)
for idx, Th in enumerate(Th_des):
    A = 1/Th  # recompute the amplitude for this pulse width (it was silently reused from the loop above)
    Xsinc = A*Th * my_sinc(om*Th/2)
    Xsinc_phase = Xsinc*np.exp(-1j*om*Th/2)
plt.plot(om, np.angle(Xsinc_phase), label=r'$T_h$=%1.0e s' % Th, lw=5-idx)
plt.legend()
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'phase $\angle X(\mathrm{j}\omega)$')
plt.xlim(om[0], om[-1])
plt.ylim(-4, +4)
plt.grid(True)
plt.savefig('A8A2DEE53A.pdf')
```
## Copyright
This tutorial is provided as Open Educational Resource (OER), to be found at
https://github.com/spatialaudio/signals-and-systems-exercises
accompanying the OER lecture
https://github.com/spatialaudio/signals-and-systems-lecture.
Both are licensed under a) the Creative Commons Attribution 4.0 International
License for text and graphics and b) the MIT License for source code.
Please attribute material from the tutorial as *Frank Schultz,
Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
Computational Examples, University of Rostock* with
``main file, github URL, commit number and/or version tag, year``.
```
# !pip install scikeras
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.compose import make_column_selector, make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.metrics import ConfusionMatrixDisplay, balanced_accuracy_score
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
# from scikeras.wrappers import KerasClassifier
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping
```
## Import Data
```
data = pd.read_csv('./demographics-data/classification_data_demographics.csv')
data.head(2)
```
## Transform and Scale Data
### Column Transformer
```
X = data.drop(columns=['labels'])
ct = make_column_transformer(
    (OneHotEncoder(sparse=False, handle_unknown='ignore'), make_column_selector(dtype_include=object)),  # note: sparse= became sparse_output= in scikit-learn >= 1.2
remainder='passthrough',
verbose_feature_names_out=False
)
X_encoded = ct.fit_transform(X)
X_encoded
ct.get_feature_names_out()
X_encoded = pd.DataFrame(X_encoded, columns=ct.get_feature_names_out())
X_encoded.head(2)
```
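What the `OneHotEncoder` step does to a single categorical column, sketched in plain Python with hypothetical category values: each value becomes an indicator vector over the learned category set.

```python
# Hypothetical values for one categorical column.
values = ['red', 'blue', 'red', 'green']
categories = sorted(set(values))          # the encoder learns the category set
one_hot = [[1.0 if v == c else 0.0 for c in categories] for v in values]
print(categories)   # ['blue', 'green', 'red']
print(one_hot[0])   # [0.0, 0.0, 1.0]
```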
### Scaling
```
X_encoded_scaled = StandardScaler().fit_transform(X_encoded)
```
## Target
```
y = data['labels']
y_categorical = to_categorical(y, 3)
```
## Baseline
```
y.value_counts(normalize=True)
```
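`value_counts(normalize=True)` gives the baseline because a constant majority-class prediction scores exactly the majority class's relative frequency. With a hypothetical label list:

```python
from collections import Counter

# The best constant prediction is the most common class, so baseline
# accuracy equals that class's share of the labels.
labels = [0, 0, 1, 2, 0, 1]
majority, count = Counter(labels).most_common(1)[0]
baseline = count / len(labels)
print(majority, baseline)  # 0 0.5
```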
## Test/Train Split
```
X_train, X_test, y_train, y_test = train_test_split(X_encoded_scaled, y_categorical, stratify=y, random_state=13)
X_train.shape
y_train.shape
```
## Random Forest
```
rf = RandomForestClassifier(n_jobs=-1)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
```
### Confusion Matrix Display
```
y_preds = rf.predict(X_test)
ConfusionMatrixDisplay.from_predictions(y_test.argmax(axis=1), np.rint(y_preds).argmax(axis=1), cmap='Blues')
;
plt.title("Random Forest Confusion Matrix")
plt.savefig('./figures/confusion_matrix_random_forest.png')
```
### Balanced Accuracy Score
```
balanced_accuracy_score(y_test.argmax(axis=1), y_preds.argmax(axis=1))
```
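Balanced accuracy is the unweighted mean of per-class recalls, which is why it is less flattering than plain accuracy on imbalanced classes. A hand computation on hypothetical labels:

```python
# Hypothetical true and predicted labels for three classes.
y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2]

classes = sorted(set(y_true))
recalls = []
for c in classes:
    idx = [i for i, t in enumerate(y_true) if t == c]        # samples of class c
    recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))  # recall for class c

balanced = sum(recalls) / len(recalls)   # mean of per-class recalls
print(recalls, balanced)
```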
### Feature Importances
```
pd.DataFrame([ct.get_feature_names_out(), rf.feature_importances_]).T.sort_values(by=1, ascending=False).head(10)
```
## Extra Trees
```
et = ExtraTreesClassifier(n_jobs=-1)
et.fit(X_train, y_train)
et.score(X_test, y_test)
```
### Confusion Matrix Display
```
y_preds = et.predict(X_test)
ConfusionMatrixDisplay.from_predictions(y_test.argmax(axis=1), np.rint(y_preds).argmax(axis=1), cmap='Blues')
;
plt.title("Extra Trees Confusion Matrix")
plt.savefig('./figures/confusion_matrix_extra_trees.png')
```
### Balanced Accuracy Score
```
balanced_accuracy_score(y_test.argmax(axis=1), y_preds.argmax(axis=1))
```
## RandomizedSearchCV on Extra Trees
### Extra Trees
```
et = ExtraTreesClassifier()
et.get_params().keys()
params = {
'n_estimators': range(100, 1000)
}
```
### RandomizedSearchCV
```
# rs = RandomizedSearchCV(
# et,
# params,
# n_jobs=-1
# )
# rs_result = rs.fit(X_train, y_train)
# # Result summary
# print(f"Best score: {rs_result.best_score_}. Used these parameters: {rs_result.best_params_}")
# # This part copied from machine learning mastery prints out all results to check where improvements can be made
# means = rs_result.cv_results_['mean_test_score']
# stds = rs_result.cv_results_['std_test_score']
# params = rs_result.cv_results_['params']
# for mean, stdev, param in zip(means, stds, params):
# print("%f (%f) with: %r" % (mean, stdev, param))
# y_preds = rs_result.best_estimator_.predict(X_test)
# balanced_accuracy_score(y_test.argmax(axis=1), y_preds.argmax(axis=1))
```
# Mandatory Assignment 1
This is the first of two mandatory assignments which must be completed during the course. First some practical information:
* When is the assignment due?: **23:59, Sunday, August 19, 2018.**
* How do you grade the assignment?: You will **peergrade** each other as primary grading.
* Can I work with my group?: **yes**
The assignment consists of one to three problems from each of the exercise sets you have solved so far (excluding Set 1). We've tried to select problems which are self-contained, but it might be necessary to solve some of the previous exercises in each set to fully answer the problems in this assignment.
## Problems from Exercise Set 2:
> **Ex. 2.2**: Make two lists. The first should be numbered. The second should be unnumbered and contain at least one sublevel.
### Converted (Using shortkey M)
1. Ordered 1
1. Ordered 2
1. Ordered 1.1
1. Ordered 1.2
- Unnumbered 1
- Unnumbered 2
- Unnumbered sublevel 1
## Problems from Exercise set 3:
> **Ex. 3.1.3:** Let `l1 = ['r ', 'Is', '>', ' < ', 'g ', '?']`. Create from `l1` the sentence `"Is r > g?"` using your knowledge about string formatting. Store this new string in a variable called `answer_31`. Make sure there is only one space in between words.
>
>> _Hint:_ You should be able to combine the above information to solve this exercise.
```
# [Answer to Ex. 3.1.3 here]
l1 = ['r ', 'Is', '>', ' < ', 'g ', '?']
# answer_31 =
answer_31 = l1[1] + " " + l1[0].strip() + " " + l1[2] + " " + l1[4].strip() + l1[-1]
print(answer_31)
assert answer_31 == "Is r > g?"
```
> **Ex. 3.1.4**: Create an empty dictionary `words` using the `dict()`function. Then add each of the words in `['animal', 'coffee', 'python', 'unit', 'knowledge', 'tread', 'arise']` as a key, with the value being a boolean indicator for whether the word begins with a vowel. The results should look like `{'bacon': False, 'asynchronous': True ...}`. Store the result in a new variable called `answer_32`.
>
>> _Hint:_ You might want to first construct a function that assesses whether a given word begins with a vowel or not.
```
# [Answer to Ex. 3.1.4 here]
W = ['animal', 'coffee', 'python', 'unit', 'knowledge', 'tread', 'arise']
# answer_32 =
# YOUR CODE HERE
words = dict()
# add each word as a key, with a boolean for whether it begins with a vowel
for word in W:
    words[word] = word[0] in "aeiou"
answer_32 = words
print(answer_32)
assert answer_32 == {i: i[0] in 'aeiou' for i in W}
assert sorted(answer_32) == sorted(W)
```
> **Ex. 3.3.2:** use the `requests` module (get it with `pip install requests`) and `construct_link()` which you defined in the previous question (ex 3.3.1) to request birth data from the "FOD" table. Get all available years (variable "Tid"), but only female births (BARNKON=P) . Unpack the json payload and store the result. Wrap the whole thing in a function which takes an url as input and returns the corresponding output.
>
> Store the birth data in a new variable called `answer_33`.
>
>> _Hint:_ The `requests.response` object has a `.json()` method.
>
>> _Note:_ you wrote `construct_link()` in 3.3.1, if you didn't heres the link you need to get: `https://api.statbank.dk/v1/data/FOLK1A/JSONSTAT?lang=en&Tid=*`
```
# 'https://api.statbank.dk/v1/data/FOLK1A/JSONSTAT?lang=en&Tid=*'  # example link from the hint
def construct_link(table_id, variables):
base = 'https://api.statbank.dk/v1/data/{id}/JSONSTAT?lang=en'.format(id = table_id)
for var in variables:
base += '&{v}'.format(v = var)
return base
construct_link('FOLK1A', ['Tid=*'])
URL = construct_link('FOD',['Tid=*','BARNKON=P'])
import requests
import pprint
import json
def unpackjson(x , file_name):
response = requests.get(x)
response_json = response.json()
with open(file_name, 'w') as f:
response_json_str = json.dumps(response_json)
f.write(response_json_str)
return(response_json)
response_json = unpackjson(URL, 'my_filelec')
answer_33 = response_json
assert sorted(answer_33['dataset'].keys()) == ['dimension', 'label', 'source', 'updated', 'value']
assert 'BARNKON' in answer_33['dataset']['dimension'].keys()
```
## Problems from exercise set 4
```
import numpy as np
import pandas as pd
```
> **Ex. 4.1.1:** Use Pandas' CSV reader to fetch daily data weather from 1864 for various stations - available [here](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/). Store the dataframe in a variable called `answer_41`.
>
>> *Hint 1*: for compressed files you may need to specify the keyword `compression`.
>
>> *Hint 2*: keyword `header` can be specified as the CSV has no column names.
>
>> *Hint 3*: Specify the path, as the URL linking directly to the 1864 file.
```
# [Answer to Ex. 4.1.1 here]
# answer_41 =
url = 'https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/1864.csv.gz'
answer_41 = pd.read_csv(url,
compression='gzip',
header=None)#.iloc[:,:4] #for exercise we want all 8 columns..
assert answer_41.shape == (27349, 8)
assert list(answer_41.columns) == list(range(8))
```
> **Ex. 4.1.2:** Structure your weather DataFrame by using only the relevant columns (station identifier, date, observation type, observation value) and rename them. Make sure observations are correctly formatted (how many decimals should we add? One?).
>
> Store the resulting dataframe in a new variable called `answer_42`.
>
>> *Hint:* rename can be done with `df.columns=COLS` where `COLS` is a list of column names.
```
# [Answer to Ex. 4.1.2 here]
# answer_42 =
# YOUR CODE HERE
answer_42 = answer_41.iloc[:,:4].copy()  # copy so the scaling below does not trigger SettingWithCopyWarning
answer_42.columns = ['station', 'datetime', 'obs_type', 'obs_value']
answer_42['obs_value'] = answer_42['obs_value'] / 10
assert answer_42.shape == (27349, 4)
assert 144.8 in [answer_42[i].max() for i in answer_42]
assert -666.0 in [answer_42[i].min() for i in answer_42]
assert 18640101 in [answer_42[i].min() for i in answer_42]
```
> **Ex. 4.1.3:** Select data for the station `ITE00100550` and only observations for maximal temperature. Make a copy of the DataFrame. Explain in one or two sentences how copying works.
>
> Store the subsetted dataframe in a new variable called `answer_43`.
>
>> *Hint 1*: the `&` operator works elementwise on boolean series (like `and` in core python).
>
>> *Hint 2*: copying of the dataframe is done with the `copy` method for DataFrames.
```
# [Answer to Ex. 4.1.3 here]
# answer_43 =
# YOUR CODE HERE
answer_43 = answer_42[(answer_42.station == 'ITE00100550') & (answer_42.obs_type == 'TMAX')].copy()
assert 'ITE00100550' in [answer_43[i].min() for i in answer_43]
assert 'ITE00100550' in [answer_43[i].max() for i in answer_43]
assert 'TMAX' in [answer_43[i].min() for i in answer_43]
assert 'TMAX' in [answer_43[i].max() for i in answer_43]
```
> **Ex. 4.1.4:** Make a new column in `answer_44` called `TMAX_F` where you have converted the temperature variables to Fahrenheit. Make sure not to overwrite `answer_43`.
>
> Store the resulting dataframe in a variable called `answer_44`.
>
>> *Hint*: Conversion is $F = 32 + 1.8*C$ where $F$ is Fahrenheit and $C$ is Celsius.
```
# [Answer to Ex. 4.1.4 here]
answer_44 = answer_43.copy()
# answer_44 =
# YOUR CODE HERE
answer_44['TMAX_F'] = 32 + 1.8 * answer_44['obs_value']
assert set(answer_44.columns) - set(answer_43.columns) == {'TMAX_F'}
```
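A quick sanity check of the hint's conversion formula, using the freezing and boiling points of water (0 °C is 32 °F, 100 °C is 212 °F):

```python
# Celsius to Fahrenheit, as given in the hint: F = 32 + 1.8 * C
def c_to_f(c):
    return 32 + 1.8 * c

print(c_to_f(0), c_to_f(100))  # 32.0 212.0
```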
## Problems from exercise set 5
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
iris = sns.load_dataset('iris')
titanic = sns.load_dataset('titanic')
```
> **Ex. 5.1.1:**: Show the first five rows of the titanic dataset. What information is in the dataset? Use a barplot to show the probability of survival for men and women within each passenger class. Can you make a boxplot showing the same information (why/why not?). _Bonus:_ show a boxplot for the fare-prices within each passenger class.
>
> Spend five minutes discussing what you can learn about the survival-selection aboard titanic from the figure(s).
>
> > _Hint:_ https://seaborn.pydata.org/generated/seaborn.barplot.html, specifically the `hue` option.
```
# [Answer to Ex. 5.1.1 here]
# YOUR CODE HERE
print(titanic.head(5))
fig1 = sns.barplot(x = 'sex', y = 'survived', hue = 'class', data = titanic)
#boxplot for fare prices
sns.boxplot(x='class', y='fare', data=titanic)
```
> **Ex. 5.1.2:** Using the iris flower dataset, draw a scatterplot of sepal length and petal length. Include a second order polynomial fitted to the data. Add a title to the plot and rename the axis labels.
> _Discuss:_ Is this a meaningful way to display the data? What could we do differently?
>
> For a better understanding of the dataset this image might be useful:
> <img src="iris_pic.png" alt="Drawing" style="width: 200px;"/>
>
>> _Hint:_ use the `.regplot` method from seaborn.
```
# [Answer to Ex. 5.1.2 here]
# YOUR CODE HERE
iris.head(5)
plt.scatter(x=iris['sepal_length'], y=iris['petal_length'])
flower = sns.regplot(x='sepal_length', y='petal_length', order = 2, data = iris)
flower.set_title('Flowers')
flower.set_ylabel('petal length (cm)')
flower.set_xlabel('sepal length (cm)')
```
> **Ex. 5.1.3:** Combine the two of the figures you created above into a two-panel figure similar to the one shown here:
> <img src="Example.png" alt="Drawing" style="width: 600px;"/>
>
> Save the figure as a png file on your computer.
>> _Hint:_ See [this question](https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot) on stackoverflow for inspiration.
```
# [Answer to Ex. 5.1.3 here]
# YOUR CODE HERE
f, axes = plt.subplots(1, 2)
flower = sns.regplot(x='sepal_length', y='petal_length', order = 2, data = iris, ax=axes[0])
flower.set_title('Flowers')
flower.set_ylabel('petal length (cm)')
flower.set_xlabel('sepal length (cm)')
fig1 = sns.barplot(x = 'sex', y = 'survived', hue = 'class', data = titanic, ax=axes[1])
```
> **Ex. 5.1.4:** Use [pairplot with hue](https://seaborn.pydata.org/generated/seaborn.pairplot.html) to create a figure that clearly shows how the different species vary across measurements. Change the color palette and remove the shading from the density plots. _Bonus:_ Try to explain how the `diag_kws` argument works (_hint:_ [read here](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python))
```
# [Answer to Ex. 5.1.4 here]
# YOUR CODE HERE
sns.pairplot(iris, hue = 'species', diag_kws=dict(shade=False), palette="dark") #
```
## Problems from exercise set 6
> _Note:_ In the exercises we asked you to download weather data from the NOAA website. For this assignment the data are loaded in the following code cell into two pandas dataframes.
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
weather_1864 = pd.read_csv('weather_data_1864.csv')
```
> **Ex. 6.1.4:** Extract the country code from the station name into a separate column.
>
> Create a new column in `weather_1864` called `answer_61` and store the country codes here.
>
>> _Hint:_ The station column contains a GHCND ID, given to each weather station by NOAA. The format of these ID's is a 2-3 letter country code, followed by a integer identifying the specific station. A simple approach is to assume a fixed length of the country ID. A more complex way would be to use the [`re`](https://docs.python.org/2/library/re.html) module.
```
weather_1864.head()
# [Answer to Ex. 6.1.4]
# weather_1864['answer_61'] =
# YOUR CODE HERE
#answer_42['station'].unique()
#weather_1864['answer_61']=answer_42['station'].str.replace('\d','')
weather_1864['answer_61'] = weather_1864['station'].str[:2]
#sorted(weather_1864['answer_61'].unique())
#sorted(['SZ', 'CA', 'EZ', 'GM', 'AU', 'IT', 'BE', 'UK', 'EI', 'AG', 'AS'])  # are there too few countries???
assert sorted(weather_1864['answer_61'].str[:2].unique()) == sorted(['SZ', 'CA', 'EZ', 'GM', 'AU', 'IT', 'BE', 'UK', 'EI', 'AG', 'AS'])
```
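The hint's `re` route, sketched under the assumption (from the GHCND station-ID layout) that an ID starts with two country letters, then one network character, then digits:

```python
import re

# Extract the two-letter country code from a GHCND-style station ID;
# returns None when the ID does not match the assumed layout.
def country_code(station_id):
    m = re.match(r'^([A-Z]{2})[A-Z0-9]\d+$', station_id)
    return m.group(1) if m else None

print(country_code('ITE00100550'))  # IT
```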
> **Ex. 6.1.5:** Make a function that downloads and formats the weather data according to previous exercises in Exercise Section 4.1, 6.1. You should use data for ALL stations but still only select maximal temperature. _Bonus:_ To validate that your function works plot the temperature curve for each country in the same window. Use `plt.legend()` to add a legend.
>
> Name your function `prepareWeatherData`.
```
# [Answer to Ex. 6.1.5]
def prepareWeatherData(year):
# Your code here
return
# YOUR CODE HERE
def prepareWeatherData(year):
start_url ='https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/'
endpoint_url = year + '.csv.gz'
url = start_url + endpoint_url
df_weather = pd.read_csv(url,
compression='gzip',
header=None).iloc[:,:4]
df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value']
df_weather['obs_value'] = df_weather['obs_value'] / 10
df_select = df_weather[(df_weather.obs_type == 'TMAX')].copy()
df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']
df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])
#Converting strings to the correct datetime format.
df_sorted['datetime']=pd.to_datetime(df_sorted['datetime'].astype(str))
#New column month.
month = df_sorted['datetime'].dt.month # taking month
df_sorted['month'] = month
    #Making datetime the index.
    #df_sorted = df_sorted.set_index('datetime')  # taken out for this assignment
return(df_sorted)
data1 = prepareWeatherData('1864')
print(data1.shape)
data1.head()
assert prepareWeatherData('1864').shape == (5686, 6)
```
## Problems from exercise set 7
> _Note:_ Once again if you haven't managed to download the data from NOAA, you can refer to the github repo to get csv-files containing the required data.
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
# Increases the plot size a little
mpl.rcParams['figure.figsize'] = 11, 6
```
> **Ex. 7.1.1:** Plot the monthly max, min, mean, and first and third quartiles of the maximum temperature for our station with the ID _'ITE00100550'_ in 1864.
> *Hint*: the method `describe` computes all these measures.
```
# [Answer to Ex. 7.1.1]
# YOUR CODE HERE
def dlweather(year):
start_url ='https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/'
endpoint_url = year + '.csv.gz'
url = start_url + endpoint_url
df_weather = pd.read_csv(url,
compression='gzip',
header=None).iloc[:,:5]
df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value','tst']
df_weather['obs_value'] = df_weather['obs_value'] / 10
df_select = df_weather[(df_weather.obs_type == 'TMAX')].copy()
df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']
df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])
#Converting strings to the correct datetime format.
df_sorted['datetime']=pd.to_datetime(df_sorted['datetime'].astype(str))
#New column month.
month = df_sorted['datetime'].dt.month # taking month
df_sorted['month'] = month
    #Making datetime the index.
    #df_sorted = df_sorted.set_index('datetime')  # taken out for this assignment
return(df_sorted)
weathersp = dlweather('1864')
weathersp = weathersp[(weathersp.station == 'ITE00100550')].copy()
weathersp.head()
#grouping on month
split_vars = ['month']
apply_vars = ['TMAX_F']
#apply_fcts = ['max','min','mean','']
weathersp_final = weathersp.groupby(split_vars)[apply_vars].describe()
print(weathersp_final.head())
#dropping count and std; remember the describe() result is indexed like a dictionary
weathersp_final = weathersp_final['TMAX_F'][['mean','min','25%','50%','75%','max']]
weathersp_final.plot(figsize=(10,7))
```
> **Ex. 7.1.2:** Get the processed data from years 1864-1867 as a list of DataFrames. Convert the list into a single DataFrame by concatenating vertically.
>
> Name the concatenated data `answer_72`
```
# [Answer to Ex. 7.1.2]
# YOUR CODE HERE
dc={}
for year in ['1864','1865','1866','1867']:
dc[year] = dlweather(year)
answer_72 = pd.concat([dc['1864'],dc['1865'],dc['1866'],dc['1867']], join ='outer',axis=0, sort=False)
assert answer_72.shape == (30003, 7)
```
> **Ex. 7.1.3:** Parse the station location data which you can find at https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt. Merge station locations onto the weather data spanning 1864-1867.
>
> Store the merged data in a new variable called `answer_73`.
>
> _Hint:_ The location data has the following format:
```
------------------------------
Variable Columns Type
------------------------------
ID 1-11 Character
LATITUDE 13-20 Real
LONGITUDE 22-30 Real
ELEVATION 32-37 Real
STATE 39-40 Character
NAME 42-71 Character
GSN FLAG 73-75 Character
HCN/CRN FLAG 77-79 Character
WMO ID 81-85 Character
------------------------------
```
> *Hint*: The station information has fixed width format - does there exist a pandas reader for that?
```
# [Answer to Ex. 7.1.3]
# YOUR CODE HERE
# fwf = fixed width format; column positions come from the hint (1-indexed there, 0-indexed here)
colspecs = [(0, 11), (12, 20), (21, 30), (31, 37), (38, 40), (41, 71), (72, 75), (76, 79), (80, 85)]
names = ['station', 'latitude', 'longitude', 'elevation', 'state', 'name', 'gsn_flag', 'hcn_crn_flag', 'wmo_id']
df_country = pd.read_fwf('https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt',
                         colspecs=colspecs, names=names, header=None)
answer_73 = answer_72.merge(df_country, on='station', how='left')
assert answer_73.shape == (5686, 15) or answer_73.shape == (30003, 15)
```
## Problems from exercise set 8
> **Ex. 8.1.2.:** Use the `request` module to collect the first page of job postings.
>
> Store the response.json() object in a new variable called `answer_81`.
>
```
# [Answer to Ex. 8.1.2]
# YOUR CODE HERE
import pandas as pd
import requests
url = 'https://job.jobnet.dk/CV/FindWork/Search?Offset=20' # remove '/Search?Offset=20' if you want ALL jobs.
r = requests.get(url)
r.status_code
answer_81=r.json()
answer_81.keys()
assert sorted(answer_81.keys()) == sorted(['Expression', 'Facets', 'JobPositionPostings', 'TotalResultCount'])
```
> **Ex. 8.1.3.:** Store the 'TotalResultCount' value for later use. Also create a dataframe from the 'JobPositionPostings' field in the json. Name this dataframe `answer_82`.
```
# [Answer to Ex. 8.1.3]
# answer_82 =
# YOUR CODE HERE
trCount= r.json()['TotalResultCount']
answer_82=pd.DataFrame(r.json()['JobPositionPostings'])
answer_82.shape
assert answer_82.shape == (20,44)
```
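With `TotalResultCount` stored, one way to fetch every posting is to page through the API in steps of 20. Below is a minimal sketch of the offset arithmetic only; no request is made, and the URL pattern is an assumption based on the `Offset` parameter used above:
```
def page_offsets(total_count, page_size=20):
    """Offsets needed to fetch total_count postings page_size at a time."""
    return list(range(0, total_count, page_size))

# Hypothetical count; the real value comes from r.json()['TotalResultCount'].
offsets = page_offsets(45, page_size=20)
urls = ['https://job.jobnet.dk/CV/FindWork/Search?Offset=%d' % o for o in offsets]
print(offsets)  # [0, 20, 40]
```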
```
import os
import argparse
from keras.preprocessing.image import ImageDataGenerator
from keras import callbacks
import numpy as np
from keras import layers, models, optimizers
from keras import backend as K
from keras.utils import to_categorical
import matplotlib.pyplot as plt
from utils import combine_images
from PIL import Image
from capsulelayers import CapsuleLayer, PrimaryCap, Length, Mask
from keras.utils import multi_gpu_model
K.set_image_data_format('channels_last')
class dotdict(dict):
"""dot.notation access to dictionary attributes"""
__getattr__ = dict.get
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
args={
'epochs':200,
'batch_size':32,
'lr':0.001, #Initial learning rate
'lr_decay':0.9, #The value multiplied by lr at each epoch. Set a larger value for larger epochs
'lam_recon':0.392, #The coefficient for the loss of decoder
'routings':3, #Number of iterations used in routing algorithm. should > 0
'shift_fraction':0.2, #Fraction of pixels to shift at most in each direction.
'debug':False, #Save weights by TensorBoard
'save_dir':'./result',
'digit':1,
'gpus':2,
'train_dir':'./data/train/',
'test_dir':'./data/test/'
}
args=dotdict(args)
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
# Load Data
train_datagen = ImageDataGenerator(rescale = 1./255,
horizontal_flip=True,
rotation_range = args.shift_fraction,
zoom_range = args.shift_fraction,
width_shift_range = args.shift_fraction,
height_shift_range = args.shift_fraction)
#generator = train_datagen.flow(x, y, batch_size=batch_size)
train_set = train_datagen.flow_from_directory(args.train_dir,
target_size = (64, 64),
batch_size = args.batch_size,
class_mode = 'categorical')
test_datagen = ImageDataGenerator(rescale = 1./255)
test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
def margin_loss(y_true, y_pred):
"""
    Margin loss for Eq.(4). When y_true[i, :] contains more than one `1`, this loss should still work, but this has not been tested.
:param y_true: [None, n_classes]
:param y_pred: [None, num_capsule]
:return: a scalar loss value.
"""
L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \
0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))
return K.mean(K.sum(L, 1))
def train(model, args):
"""
Training a CapsuleNet
:param model: the CapsuleNet model
:param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
:param args: arguments
:return: The trained model
"""
# callbacks
log = callbacks.CSVLogger(args.save_dir + '/log.csv')
tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs',
batch_size=args.batch_size, histogram_freq=int(args.debug))
checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/Model{epoch:02d}_{val_acc:.2f}.h5', monitor='val_capsnet_acc',
save_best_only=True, save_weights_only=False, verbose=1)
lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))
# compile the model
model.compile(optimizer=optimizers.Adam(lr=args.lr),
loss=[margin_loss, 'mse'],
loss_weights=[1., args.lam_recon],
metrics={'capsnet': 'accuracy'})
# Begin: Training with data augmentation ---------------------------------------------------------------------#
def train_generator(batch_size, shift_fraction=0.2):
while 1:
x_batch, y_batch = train_set.next()
yield ([x_batch, y_batch], [y_batch, x_batch])
# Training with data augmentation. If shift_fraction=0., also no augmentation.
x_test, y_test = test_set.next()
model.fit_generator(generator=train_generator(args.batch_size,args.shift_fraction),
steps_per_epoch=int(len(train_set.classes) / args.batch_size),
epochs=args.epochs,
validation_data = [[x_test, y_test], [y_test, x_test]],
callbacks=[log, tb, checkpoint, lr_decay])
# End: Training with data augmentation -----------------------------------------------------------------------#
model.save(args.save_dir + '/trained_model.h5')
print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)
from utils import plot_log
plot_log(args.save_dir + '/log.csv', show=True)
return model
#define model
def CapsNet(input_shape, n_class, routings):
"""
A Capsule Network.
:param input_shape: data shape, 3d, [width, height, channels]
:param n_class: number of classes
:param routings: number of routing iterations
:return: Two Keras Models, the first one used for training, and the second one for evaluation.
`eval_model` can also be used for training.
"""
x = layers.Input(shape=input_shape)
# Layer 1: Just a conventional Conv2D layer
conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)
# Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding='valid')
# Layer 3: Capsule layer. Routing algorithm works here.
digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,
name='digitcaps')(primarycaps)
# Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
# If using tensorflow, this will not be necessary. :)
out_caps = Length(name='capsnet')(digitcaps)
# Decoder network.
y = layers.Input(shape=(n_class,))
masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. For training
masked = Mask()(digitcaps) # Mask using the capsule with maximal length. For prediction
# Shared Decoder model in training and prediction
decoder = models.Sequential(name='decoder')
decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))
decoder.add(layers.Dense(1024, activation='relu'))
decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))
decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))
# Models for training and evaluation (prediction)
train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])
eval_model = models.Model(x, [out_caps, decoder(masked)])
# manipulate model
noise = layers.Input(shape=(n_class, 16))
noised_digitcaps = layers.Add()([digitcaps, noise])
masked_noised_y = Mask()([noised_digitcaps, y])
manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
return train_model, eval_model, manipulate_model
model, eval_model, manipulate_model = CapsNet(input_shape=train_set.image_shape,
n_class=train_set.num_classes,
routings=args.routings)
model.summary()
# train the model
train(model=model, args=args)
final_test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
#Reconstruct the image
def manipulate_latent(model):
print('-'*30 + 'Begin: manipulate' + '-'*30)
x_test, y_test = final_test_set.next()
index = np.argmax(y_test, 1) == args.digit
number = np.random.randint(low=0, high=sum(index) - 1)
x, y = x_test[index][number], y_test[index][number]
x, y = np.expand_dims(x, 0), np.expand_dims(y, 0)
noise = np.zeros([1, 5, 16])
x_recons = []
for dim in range(16):
for r in [-0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15]:
tmp = np.copy(noise)
tmp[:,:,dim] = r
x_recon = model.predict([x, y, tmp])
x_recons.append(x_recon)
x_recons = np.concatenate(x_recons)
img = combine_images(x_recons, height=16)
image = img*255
Image.fromarray(image.astype(np.uint8)).save(args.save_dir + '/manipulate-%d.png' % args.digit)
print('manipulated result saved to %s/manipulate-%d.png' % (args.save_dir, args.digit))
print('-' * 30 + 'End: manipulate' + '-' * 30)
#function to test
final_test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
shuffle=False,
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
def test(model):
x_test, y_test = final_test_set.next()
y_pred, x_recon = model.predict(x_test, batch_size=100)
print('-'*30 + 'Begin: test' + '-'*30)
print('Test acc:', np.sum(np.argmax(y_pred, 1) == np.argmax(y_test, 1))/y_test.shape[0])
img = combine_images(np.concatenate([x_test[:50],x_recon[:50]]))
image = img * 255
Image.fromarray(image.astype(np.uint8)).save(args.save_dir + "/real_and_recon.png")
print()
print('Reconstructed images are saved to %s/real_and_recon.png' % args.save_dir)
print('-' * 30 + 'End: test' + '-' * 30)
plt.imshow(plt.imread(args.save_dir + "/real_and_recon.png"))
plt.show()
print('-' * 30 + 'Test Metrics' + '-' * 30)
np.savetxt("./result/capsnet_657.csv", y_pred, delimiter=",")
y_pred = np.argmax(y_pred,axis = 1)
y_actual = np.argmax(y_test, axis = 1)
classnames=[]
for classname in final_test_set.class_indices:
classnames.append(classname)
confusion_mtx = confusion_matrix(y_actual, y_pred)
print(confusion_mtx)
target_names = classnames
print(classification_report(y_actual, y_pred, target_names=target_names))
print("accuracy= ",(confusion_mtx.diagonal().sum()/confusion_mtx.sum())*100)
##Evaluation
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, classification_report
from keras.models import load_model
model.load_weights('./result/trained_model.h5')
#manipulate_latent(manipulate_model)
test(model=eval_model)
```
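As a quick sanity check of the `margin_loss` above, here is a NumPy re-implementation: a sketch using the same constants (m+ = 0.9, m- = 0.1, and a 0.5 down-weighting of absent classes):
```
import numpy as np

def margin_loss_np(y_true, y_pred, m_pos=0.9, m_neg=0.1, lam=0.5):
    # Present classes are pushed above m_pos, absent ones below m_neg.
    L = (y_true * np.maximum(0., m_pos - y_pred) ** 2
         + lam * (1 - y_true) * np.maximum(0., y_pred - m_neg) ** 2)
    # Sum over classes, then average over the batch.
    return np.mean(np.sum(L, axis=1))

y_true = np.array([[1., 0.], [0., 1.]])
y_pred = np.array([[0.9, 0.1], [0.1, 0.9]])  # perfect margins -> zero loss
print(margin_loss_np(y_true, y_pred))
```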
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom Federated Algorithms, Part 2: Implementing Federated Averaging
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This tutorial is the second part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TFF using the
[Federated Core (FC)](../federated_core.md), which serves as a foundation for
the [Federated Learning (FL)](../federated_learning.md) layer (`tff.learning`).
We encourage you to first read the
[first part of this series](custom_federated_algorithms_1.ipynb), which
introduces some of the key concepts and programming abstractions used here.
This second part of the series uses the mechanisms introduced in the first part
to implement a simple version of federated training and evaluation algorithms.
We encourage you to review the
[image classification](federated_learning_for_image_classification.ipynb) and
[text generation](federated_learning_for_text_generation.ipynb) tutorials for a
higher-level and more gentle introduction to TFF's Federated Learning APIs, as
they will help you put the concepts we describe here in context.
## Before we start
Before we start, try to run the following "Hello World" example to make sure
your environment is correctly set up. If it doesn't work, please refer to the
[Installation](../install.md) guide for instructions.
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# TODO(b/148678573,b/148685415): must use the ReferenceExecutor because it
# supports unbounded references and tff.sequence_* intrinsics.
tff.framework.set_default_context(tff.test.ReferenceExecutor())
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
```
## Implementing Federated Averaging
As in
[Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb),
we are going to use the MNIST example, but since this is intended as a low-level
tutorial, we are going to bypass the Keras API and `tff.simulation`, write raw
model code, and construct a federated data set from scratch.
### Preparing federated data sets
For the sake of a demonstration, we're going to simulate a scenario in which we
have data from 10 users, and each of the users contributes knowledge of how to
recognize a different digit. This is about as
non-[i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables)
as it gets.
First, let's load the standard MNIST data:
```
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
[(x.dtype, x.shape) for x in mnist_train]
```
The data comes as NumPy arrays, one with images and another with digit labels, both
with the first dimension going over the individual examples. Let's write a
helper function that formats it in a way compatible with how we feed federated
sequences into TFF computations, i.e., as a list of lists - the outer list
ranging over the users (digits), the inner ones ranging over batches of data in
each client's sequence. As is customary, we will structure each batch as a pair
of tensors named `x` and `y`, each with the leading batch dimension. While at
it, we'll also flatten each image into a 784-element vector and rescale the
pixels in it into the `0..1` range, so that we don't have to clutter the model
logic with data conversions.
```
NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
def get_data_for_digit(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x':
np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y':
np.array([source[1][i] for i in batch_samples], dtype=np.int32)
})
return output_sequence
federated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]
federated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]
```
As a quick sanity check, let's look at the `y` tensor in the last batch of data
contributed by the fifth client (the one corresponding to the digit `5`).
```
federated_train_data[5][-1]['y']
```
Just to be sure, let's also look at the image corresponding to the last element of that batch.
```
from matplotlib import pyplot as plt
plt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')
plt.grid(False)
plt.show()
```
### On combining TensorFlow and TFF
In this tutorial, for compactness we immediately decorate functions that
introduce TensorFlow logic with `tff.tf_computation`. However, for more complex
logic, this is not the pattern we recommend. Debugging TensorFlow can already be
a challenge, and debugging TensorFlow after it has been fully serialized and
then re-imported necessarily loses some metadata and limits interactivity,
making debugging even more of a challenge.
Therefore, **we strongly recommend writing complex TF logic as stand-alone
Python functions** (that is, without `tff.tf_computation` decoration). This way
the TensorFlow logic can be developed and tested using TF best practices and
tools (like eager mode), before serializing the computation for TFF (e.g., by invoking `tff.tf_computation` with a Python function as the argument).
### Defining a loss function
Now that we have the data, let's define a loss function that we can use for
training. First, let's define the type of input as a TFF named tuple. Since the
size of data batches may vary, we set the batch dimension to `None` to indicate
that the size of this dimension is unknown.
```
BATCH_SPEC = collections.OrderedDict(
x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
y=tf.TensorSpec(shape=[None], dtype=tf.int32))
BATCH_TYPE = tff.to_type(BATCH_SPEC)
str(BATCH_TYPE)
```
You may be wondering why we can't just define an ordinary Python type. Recall
the discussion in [part 1](custom_federated_algorithms_1.ipynb), where we
explained that while we can express the logic of TFF computations using Python,
under the hood TFF computations *are not* Python. The symbol `BATCH_TYPE`
defined above represents an abstract TFF type specification. It is important to
distinguish this *abstract* TFF type from concrete Python *representation*
types, e.g., containers such as `dict` or `collections.namedtuple` that may be
used to represent the TFF type in the body of a Python function. Unlike Python,
TFF has a single abstract type constructor `tff.StructType` for tuple-like
containers, with elements that can be individually named or left unnamed. This
type is also used to model formal parameters of computations, as TFF
computations can formally only declare one parameter and one result - you will
see examples of this shortly.
Let's now define the TFF type of model parameters, again as a TFF named tuple of
*weights* and *bias*.
```
MODEL_SPEC = collections.OrderedDict(
weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),
bias=tf.TensorSpec(shape=[10], dtype=tf.float32))
MODEL_TYPE = tff.to_type(MODEL_SPEC)
print(MODEL_TYPE)
```
With those definitions in place, now we can define the loss for a given model, over a single batch. Note the usage of the `@tf.function` decorator inside the `@tff.tf_computation` decorator. This allows us to write TF code with Python-like semantics, even though we're inside a `tf.Graph` context created by the `tff.tf_computation` decorator.
```
# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can
# be later called from within another tf.function. Necessary because a
# @tf.function decorated method cannot invoke a @tff.tf_computation.
@tf.function
def forward_pass(model, batch):
predicted_y = tf.nn.softmax(
tf.matmul(batch['x'], model['weights']) + model['bias'])
return -tf.reduce_mean(
tf.reduce_sum(
tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
return forward_pass(model, batch)
```
As expected, computation `batch_loss` returns `float32` loss given the model and
a single data batch. Note how the `MODEL_TYPE` and `BATCH_TYPE` have been lumped
together into a 2-tuple of formal parameters; you can recognize the type of
`batch_loss` as `(<MODEL_TYPE,BATCH_TYPE> -> float32)`.
```
str(batch_loss.type_signature)
```
As a sanity check, let's construct an initial model filled with zeros and
compute the loss over the batch of data we visualized above.
```
initial_model = collections.OrderedDict(
weights=np.zeros([784, 10], dtype=np.float32),
bias=np.zeros([10], dtype=np.float32))
sample_batch = federated_train_data[5][-1]
batch_loss(initial_model, sample_batch)
```
Note that we feed the TFF computation with the initial model defined as a
`dict`, even though the body of the Python function that defines it consumes
model parameters as `model['weights']` and `model['bias']`. The arguments of the call
to `batch_loss` aren't simply passed to the body of that function.
What happens when we invoke `batch_loss`?
The Python body of `batch_loss` has already been traced and serialized in the above cell where it was defined. TFF acts as the caller to `batch_loss`
at the computation definition time, and as the target of invocation at the time
`batch_loss` is invoked. In both roles, TFF serves as the bridge between TFF's
abstract type system and Python representation types. At the invocation time,
TFF will accept most standard Python container types (`dict`, `list`, `tuple`,
`collections.namedtuple`, etc.) as concrete representations of abstract TFF
tuples. Also, although as noted above, TFF computations formally only accept a
single parameter, you can use the familiar Python call syntax with positional
and/or keyword arguments in case where the type of the parameter is a tuple - it
works as expected.
### Gradient descent on a single batch
Now, let's define a computation that uses this loss function to perform a single
step of gradient descent. Note how in defining this function, we use
`batch_loss` as a subcomponent. You can invoke a computation constructed with
`tff.tf_computation` inside the body of another computation, though typically
this is not necessary - as noted above, because serialization loses some
debugging information, it is often preferable for more complex computations to
write and test all the TensorFlow code without the `tff.tf_computation` decorator.
```
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
# Define a group of model variables and set them to `initial_model`. Must
# be defined outside the @tf.function.
model_vars = collections.OrderedDict([
(name, tf.Variable(name=name, initial_value=value))
for name, value in initial_model.items()
])
optimizer = tf.keras.optimizers.SGD(learning_rate)
@tf.function
def _train_on_batch(model_vars, batch):
# Perform one step of gradient descent using loss from `batch_loss`.
with tf.GradientTape() as tape:
loss = forward_pass(model_vars, batch)
grads = tape.gradient(loss, model_vars)
optimizer.apply_gradients(
zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))
return model_vars
return _train_on_batch(model_vars, batch)
str(batch_train.type_signature)
```
When you invoke a Python function decorated with `tff.tf_computation` within the
body of another such function, the logic of the inner TFF computation is
embedded (essentially, inlined) in the logic of the outer one. As noted above,
if you are writing both computations, it is likely preferable to make the inner
function (`batch_loss` in this case) a regular Python or `tf.function` rather
than a `tff.tf_computation`. However, here we illustrate that calling one
`tff.tf_computation` inside another basically works as expected. This may be
necessary if, for example, you do not have the Python code defining
`batch_loss`, but only its serialized TFF representation.
Now, let's apply this function a few times to the initial model to see whether
the loss decreases.
```
model = initial_model
losses = []
for _ in range(5):
model = batch_train(model, sample_batch, 0.1)
losses.append(batch_loss(model, sample_batch))
losses
```
### Gradient descent on a sequence of local data
Now, since `batch_train` appears to work, let's write a similar training
function `local_train` that consumes the entire sequence of all batches from one
user instead of just a single batch. The new computation will need to now
consume `tff.SequenceType(BATCH_TYPE)` instead of `BATCH_TYPE`.
```
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)
@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):
# Mapping function to apply to each batch.
@tff.federated_computation(MODEL_TYPE, BATCH_TYPE)
def batch_fn(model, batch):
return batch_train(model, batch, learning_rate)
return tff.sequence_reduce(all_batches, initial_model, batch_fn)
str(local_train.type_signature)
```
There are quite a few details buried in this short section of code, let's go
over them one by one.
First, while we could have implemented this logic entirely in TensorFlow,
relying on `tf.data.Dataset.reduce` to process the sequence similarly to how
we've done it earlier, we've opted this time to express the logic in the glue
language, as a `tff.federated_computation`. We've used the federated operator
`tff.sequence_reduce` to perform the reduction.
The operator `tff.sequence_reduce` is used similarly to
`tf.data.Dataset.reduce`. You can think of it as essentially the same as
`tf.data.Dataset.reduce`, but for use inside federated computations, which as
you may remember, cannot contain TensorFlow code. It is a template operator with
a formal parameter 3-tuple that consists of a *sequence* of `T`-typed elements,
the initial state of the reduction (we'll refer to it abstractly as *zero*) of
some type `U`, and the *reduction operator* of type `(<U,T> -> U)` that alters the
state of the reduction by processing a single element. The result is the final
state of the reduction, after processing all elements in a sequential order. In
our example, the state of the reduction is the model trained on a prefix of the
data, and the elements are data batches.
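The plain-Python analog of this reduction is `functools.reduce`. The sketch below uses numbers for batches and a hypothetical one-line "training step" as the reduction operator; none of it is TFF, it only mirrors the shape of `tff.sequence_reduce`:
```
from functools import reduce

# A reduction operator of type (<U, T> -> U): hypothetical "training step"
# that nudges the model toward the batch mean.
def batch_fn(model, batch):
    return model + 0.1 * (sum(batch) / len(batch) - model)

batches = [[1.0, 3.0], [5.0, 7.0]]  # the sequence of T-typed elements
zero = 0.0                          # initial state of the reduction
final_model = reduce(batch_fn, batches, zero)
print(final_model)
```
The result is the state after folding the operator over all batches in order, just as `tff.sequence_reduce` yields the model trained on the whole sequence.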
Second, note that we have again used one computation (`batch_train`) as a
component within another (`local_train`), but not directly. We can't use it as a
reduction operator because it takes an additional parameter - the learning rate.
To resolve this, we define an embedded federated computation `batch_fn` that
binds to the `local_train`'s parameter `learning_rate` in its body. It is
allowed for a child computation defined this way to capture a formal parameter
of its parent as long as the child computation is not invoked outside the body
of its parent. You can think of this pattern as an equivalent of
`functools.partial` in Python.
The practical implication of capturing `learning_rate` this way is, of course,
that the same learning rate value is used across all batches.
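In plain Python, the same binding looks like this, with a hypothetical one-step `batch_train` standing in for the real computation:
```
from functools import partial

# Hypothetical one-step update: move the model toward the batch value.
def batch_train(model, batch, learning_rate):
    return model + learning_rate * (batch - model)

# Bind the learning rate once, like `batch_fn` captures it from `local_train`.
batch_fn = partial(batch_train, learning_rate=0.1)

model = 0.0
for batch in [1.0, 2.0]:
    model = batch_fn(model, batch)
print(model)
```
Every call to `batch_fn` sees the same bound `learning_rate=0.1`, mirroring the captured formal parameter.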
Now, let's try the newly defined local training function on the entire sequence
of data from the same user who contributed the sample batch (digit `5`).
```
locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])
```
Did it work? To answer this question, we need to implement evaluation.
### Local evaluation
Here's one way to implement local evaluation by adding up the losses across all data
batches (we could have just as well computed the average; we'll leave it as an
exercise for the reader).
```
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
# TODO(b/120157713): Replace with `tff.sequence_average()` once implemented.
return tff.sequence_sum(
tff.sequence_map(
tff.federated_computation(lambda b: batch_loss(model, b), BATCH_TYPE),
all_batches))
str(local_eval.type_signature)
```
Again, there are a few new elements illustrated by this code, let's go over them
one by one.
First, we have used two new federated operators for processing sequences:
`tff.sequence_map` that takes a *mapping function* `T->U` and a *sequence* of
`T`, and emits a sequence of `U` obtained by applying the mapping function
pointwise, and `tff.sequence_sum` that just adds all the elements. Here, we map
each data batch to a loss value, and then add the resulting loss values to
compute the total loss.
Note that we could have again used `tff.sequence_reduce`, but this wouldn't be
the best choice - the reduction process is, by definition, sequential, whereas
the mapping and sum can be computed in parallel. When given a choice, it's best
to stick with operators that don't constrain implementation choices, so that
when our TFF computation is compiled in the future to be deployed to a specific
environment, one can take full advantage of all potential opportunities for a
faster, more scalable, more resource-efficient execution.
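In plain Python terms, the map-then-sum structure looks like this (with a hypothetical per-batch loss; the point is only that each map application is independent, so they could run in parallel):
```
# Hypothetical loss: mean squared value of the batch.
def batch_loss(batch):
    return sum(x * x for x in batch) / len(batch)

batches = [[1.0, 3.0], [2.0, 2.0]]

# Map each batch to a loss value, then add them up.
total_loss = sum(map(batch_loss, batches))
print(total_loss)
```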
Second, note that just as in `local_train`, the component function we need
(`batch_loss`) takes more parameters than what the federated operator
(`tff.sequence_map`) expects, so we again define a partial, this time inline by
directly wrapping a `lambda` as a `tff.federated_computation`. Using wrappers
inline with a function as an argument is the recommended way to use
`tff.tf_computation` to embed TensorFlow logic in TFF.
Now, let's see whether our training worked.
```
print('initial_model loss =', local_eval(initial_model,
federated_train_data[5]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[5]))
```
Indeed, the loss decreased. But what happens if we evaluated it on another
user's data?
```
print('initial_model loss =', local_eval(initial_model,
federated_train_data[0]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[0]))
```
As expected, things got worse. The model was trained to recognize `5`, and has
never seen a `0`. This brings the question - how did the local training impact
the quality of the model from the global perspective?
### Federated evaluation
This is the point in our journey where we finally circle back to federated types
and federated computations - the topic that we started with. Here's a pair of
TFF types definitions for the model that originates at the server, and the data
that remains on the clients.
```
SERVER_MODEL_TYPE = tff.FederatedType(MODEL_TYPE, tff.SERVER)
CLIENT_DATA_TYPE = tff.FederatedType(LOCAL_DATA_TYPE, tff.CLIENTS)
```
With all the definitions introduced so far, expressing federated evaluation in
TFF is a one-liner - we distribute the model to clients, let each client invoke
local evaluation on its local portion of data, and then average out the loss.
Here's one way to write this.
```
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
return tff.federated_mean(
tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
```
We've already seen examples of `tff.federated_mean` and `tff.federated_map`
in simpler scenarios, and at the intuitive level, they work as expected, but
there's more in this section of code than meets the eye, so let's go over it
carefully.
First, let's break down the *let each client invoke local evaluation on its
local portion of data* part. As you may recall from the preceding sections,
`local_eval` has a type signature of the form `(<MODEL_TYPE, LOCAL_DATA_TYPE> ->
float32)`.
The federated operator `tff.federated_map` is a template that accepts as a
parameter a 2-tuple that consists of the *mapping function* of some type `T->U`
and a federated value of type `{T}@CLIENTS` (i.e., with member constituents of
the same type as the parameter of the mapping function), and returns a result of
type `{U}@CLIENTS`.
Since we're feeding `local_eval` as a mapping function to apply on a per-client
basis, the second argument should be of a federated type `{<MODEL_TYPE,
LOCAL_DATA_TYPE>}@CLIENTS`, i.e., in the nomenclature of the preceding sections,
it should be a federated tuple. Each client should hold a full set of arguments
for `local_eval` as a member constituent. Instead, we're feeding it a 2-element
Python `list`. What's happening here?
Indeed, this is an example of an *implicit type cast* in TFF, similar to
implicit type casts you may have encountered elsewhere, e.g., when you feed an
`int` to a function that accepts a `float`. Implicit casting is used sparingly at
this point, but we plan to make it more pervasive in TFF as a way to minimize
boilerplate.
The implicit cast that's applied in this case is the equivalence between
federated tuples of the form `{<X,Y>}@Z`, and tuples of federated values
`<{X}@Z,{Y}@Z>`. While formally these two are different type signatures, from
the programmer's perspective they describe the same situation: each device in
`Z` holds two units of data `X` and `Y`. What happens here is not unlike `zip`
in Python, and indeed, we offer an operator `tff.federated_zip` that allows you
to perform such conversions explicitly. When `tff.federated_map` encounters a
tuple as its second argument, it simply invokes `tff.federated_zip` for you.
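The analogy with Python's `zip` can be made concrete without any TFF at all. The sketch below (our addition, plain Python) shows a tuple of per-client lists being turned into a list of per-client tuples, which mirrors the cast from `<{X}@Z,{Y}@Z>` to `{<X,Y>}@Z`:

```python
# Plain-Python analogy of tff.federated_zip (no TFF involved):
# two "federated" values, each a list of member constituents, one per client.
models_at_clients = ['model', 'model', 'model']    # like {MODEL_TYPE}@CLIENTS
data_at_clients = ['data_0', 'data_1', 'data_2']   # like {LOCAL_DATA_TYPE}@CLIENTS

# Zipping a tuple of federated values yields a federated value of tuples.
zipped = list(zip(models_at_clients, data_at_clients))
assert zipped == [('model', 'data_0'), ('model', 'data_1'), ('model', 'data_2')]
```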
Given the above, you should now be able to recognize the expression
`tff.federated_broadcast(model)` as representing a value of TFF type
`{MODEL_TYPE}@CLIENTS`, and `data` as a value of TFF type
`{LOCAL_DATA_TYPE}@CLIENTS` (or simply `CLIENT_DATA_TYPE`), the two getting
zipped together through an implicit `tff.federated_zip` to form the second
argument to `tff.federated_map`.
The operator `tff.federated_broadcast`, as you'd expect, simply transfers data
from the server to the clients.
Now, let's see how our local training affected the average loss in the system.
```
print('initial_model loss =', federated_eval(initial_model,
federated_train_data))
print('locally_trained_model loss =',
federated_eval(locally_trained_model, federated_train_data))
```
Indeed, as expected, the loss has increased. In order to improve the model for
all users, we'll need to train it on everyone's data.
### Federated training
The simplest way to implement federated training is to locally train, and then
average the models. This uses the same building blocks and patterns we've already
discussed, as you can see below.
```
SERVER_FLOAT_TYPE = tff.FederatedType(tf.float32, tff.SERVER)

@tff.federated_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,
                           CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
  return tff.federated_mean(
      tff.federated_map(local_train, [
          tff.federated_broadcast(model),
          tff.federated_broadcast(learning_rate), data
      ]))
```
Note that in the full-featured implementation of Federated Averaging provided by
`tff.learning`, rather than averaging the models, we prefer to average model
deltas, for a number of reasons, e.g., the ability to clip the update norms,
to apply compression, etc.
Let's see whether the training works by running a few rounds of training and
comparing the average loss before and after.
```
model = initial_model
learning_rate = 0.1
for round_num in range(5):
  model = federated_train(model, learning_rate, federated_train_data)
  learning_rate = learning_rate * 0.9
  loss = federated_eval(model, federated_train_data)
  print('round {}, loss={}'.format(round_num, loss))
```
For completeness, let's now also run on the test data to confirm that our model
generalizes well.
```
print('initial_model test loss =',
federated_eval(initial_model, federated_test_data))
print('trained_model test loss =', federated_eval(model, federated_test_data))
```
This concludes our tutorial.
Of course, our simplified example doesn't reflect a number of things you'd need
to do in a more realistic scenario - for example, we haven't computed metrics
other than loss. We encourage you to study
[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)
of federated averaging in `tff.learning` as a more complete example, and as a
way to demonstrate some of the coding practices we'd like to encourage.
# Basic Circuit Identities
```
from qiskit import QuantumCircuit, Aer, execute
from qiskit.circuit import Gate
from math import pi
qc = QuantumCircuit(2)
c = 0
t = 1
```
When we program quantum computers, our aim is always to build useful quantum circuits from the basic building blocks. But sometimes, we might not have all the basic building blocks we want. In this section, we'll look at how we can transform basic gates into each other, and how to use them to build some gates that are slightly more complex (but still pretty basic).
Many of the techniques discussed in this chapter were first proposed in a paper by Barenco and coauthors in 1995 [1].
## Contents
1. [Making a Controlled-Z from a CNOT](#c-from-cnot)
2. [Swapping Qubits](#swapping)
3. [Controlled Rotations](#controlled-rotations)
4. [The Toffoli](#ccx)
5. [Arbitrary rotations from H and T](#arbitrary-rotations)
6. [References](#references)
## 1. Making a Controlled-Z from a CNOT <a id="c-from-cnot"></a>
The controlled-Z or `cz` gate is another well-used two-qubit gate. Just as the CNOT applies an $X$ to its target qubit whenever its control is in state $|1\rangle$, the controlled-$Z$ applies a $Z$ in the same case. In Qiskit it can be invoked directly with
```
# a controlled-Z
qc.cz(c,t)
qc.draw()
```
where c and t are the control and target qubits. In IBM Q devices, however, the only kind of two-qubit gate that can be directly applied is the CNOT. We therefore need a way to transform one to the other.
The process for this is quite simple. We know that the Hadamard transforms the states $|0\rangle$ and $|1\rangle$ to the states $|+\rangle$ and $|-\rangle$ respectively. We also know that the effect of the $Z$ gate on the states $|+\rangle$ and $|-\rangle$ is the same as that for $X$ on the states $|0\rangle$ and $|1\rangle$ respectively. From this reasoning, or from simply multiplying matrices, we find that
$$
H X H = Z,\\
H Z H = X.
$$
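These identities are straightforward to verify numerically. Here's a small NumPy check (our addition, not part of the original text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Conjugating by a Hadamard exchanges X and Z
assert np.allclose(H @ X @ H, Z)
assert np.allclose(H @ Z @ H, X)
```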
The same trick can be used to transform a CNOT into a controlled-$Z$. All we need to do is precede and follow the CNOT with a Hadamard on the target qubit. This will transform any $X$ applied to that qubit into a $Z$.
```
qc = QuantumCircuit(2)
# also a controlled-Z
qc.h(t)
qc.cx(c,t)
qc.h(t)
qc.draw()
```
More generally, we can transform a single CNOT into a controlled version of any rotation around the Bloch sphere by an angle $\pi$, by simply preceding and following it with the correct rotations. For example, a controlled-$Y$:
```
qc = QuantumCircuit(2)
# a controlled-Y
qc.sdg(t)
qc.cx(c,t)
qc.s(t)
qc.draw()
```
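Why does this work? The gates surrounding the CNOT conjugate the $X$ on the target into a $Y$, since $S X S^\dagger = Y$. A NumPy sketch of this identity (our addition):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
S = np.array([[1, 0], [0, 1j]])   # qc.s
Sdg = S.conj().T                  # qc.sdg

# Circuit order sdg -> X -> s corresponds to the matrix product S @ X @ Sdg
assert np.allclose(S @ X @ Sdg, Y)
```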
and a controlled-$H$:
```
qc = QuantumCircuit(2)
# a controlled-H
qc.ry(pi/4,t)
qc.cx(c,t)
qc.ry(-pi/4,t)
qc.draw()
```
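Here the identity at work is $R_y(-\pi/4)\, X\, R_y(\pi/4) = H$, so on the controlled branch the $X$ becomes an $H$. A quick NumPy verification (our addition):

```python
import numpy as np

def ry(theta):
    # Standard y-rotation matrix
    return np.array([[np.cos(theta/2), -np.sin(theta/2)],
                     [np.sin(theta/2),  np.cos(theta/2)]])

X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Circuit order ry(pi/4) -> X -> ry(-pi/4) is the product ry(-pi/4) @ X @ ry(pi/4)
assert np.allclose(ry(-np.pi/4) @ X @ ry(np.pi/4), H)
```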
## 2. Swapping Qubits <a id="swapping"></a>
```
a = 0
b = 1
```
Sometimes we need to move information around in a quantum computer. For some qubit implementations, this could be done by physically moving them. Another option is simply to move the state between two qubits. This is done by the SWAP gate.
```
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.swap(a,b)
qc.draw()
```
The command above directly invokes this gate, but let's see how we might make it using our standard gate set. For this, we'll need to consider a few examples.
First, we'll look at the case that qubit a is in state $|1\rangle$ and qubit b is in state $|0\rangle$. For this we'll apply the following gates:
```
qc = QuantumCircuit(2)
# swap a 1 from a to b
qc.cx(a,b) # copies 1 from a to b
qc.cx(b,a) # uses the 1 on b to rotate the state of a to 0
qc.draw()
```
This has the effect of putting qubit b in state $|1\rangle$ and qubit a in state $|0\rangle$. In this case at least, we have done a SWAP.
Now let's take this state and SWAP back to the original one. As you may have guessed, we can do this with the reverse of the above process:
```
# swap a 1 from b to a
qc.cx(b,a) # copies 1 from b to a
qc.cx(a,b) # uses the 1 on a to rotate the state of b to 0
qc.draw()
```
Note that in these two processes, the first gate of one would have no effect on the initial state of the other. For example, when we swap the $|1\rangle$ b to a, the first gate is `cx(b,a)`. If this were instead applied to a state where no $|1\rangle$ was initially on b, it would have no effect.
Note also that for these two processes, the final gate of one would have no effect on the final state of the other. For example, the final `cx(b,a)` that is required when we swap the $|1\rangle$ from a to b has no effect on the state where the $|1\rangle$ is not on b.
With these observations, we can combine the two processes by adding an ineffective gate from one onto the other. For example,
```
qc = QuantumCircuit(2)
qc.cx(b,a)
qc.cx(a,b)
qc.cx(b,a)
qc.draw()
```
We can think of this as a process that swaps a $|1\rangle$ from a to b, but with a useless `qc.cx(b,a)` at the beginning. We can also think of it as a process that swaps a $|1\rangle$ from b to a, but with a useless `qc.cx(b,a)` at the end. Either way, the result is a process that can do the swap both ways around.
It also has the correct effect on the $|00\rangle$ state. This is symmetric, and so swapping the states should have no effect. Since the CNOT gates have no effect when their control qubits are $|0\rangle$, the process correctly does nothing.
The $|11\rangle$ state is also symmetric, and so needs a trivial effect from the swap. In this case, the first CNOT gate in the process above will cause the second to have no effect, and the third undoes the first. Therefore, the whole effect is indeed trivial.
We have thus found a way to decompose SWAP gates into our standard gate set of single-qubit rotations and CNOT gates.
```
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.cx(b,a)
qc.cx(a,b)
qc.cx(b,a)
qc.draw()
```
It works for the states $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$, and if it works for all the states in the computational basis, it must work for all states generally. This circuit therefore swaps all possible two-qubit states.
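We can confirm this for all basis states at once by comparing the product of the three CNOT matrices against the SWAP matrix. A NumPy sketch (our addition, taking qubit a as the more significant bit):

```python
import numpy as np

# CNOT with a (most significant bit) as control and b as target, and vice versa
CX_ab = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])
CX_ba = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]])
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])

# Both orderings of the three CNOTs reproduce the SWAP
assert np.allclose(CX_ba @ CX_ab @ CX_ba, SWAP)
assert np.allclose(CX_ab @ CX_ba @ CX_ab, SWAP)
```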
The same effect would also result if we changed the order of the CNOT gates:
```
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.cx(a,b)
qc.cx(b,a)
qc.cx(a,b)
qc.draw()
```
This is an equally valid way to get the SWAP gate.
The derivation used here was very much based on the z basis states, but it could also be done by thinking about what is required to swap qubits in states $|+\rangle$ and $|-\rangle$. The resulting ways of implementing the SWAP gate will be completely equivalent to the ones here.
#### Quick Exercise:
- Find a different circuit that swaps qubits in the states $|+\rangle$ and $|-\rangle$, and show that this is equivalent to the circuit shown above.
```
from qiskit.visualization import array_to_latex
backend = Aer.get_backend('aer_simulator')
qc01 = QuantumCircuit(2)
qc01.cx(0,1)
qc01.cx(1,0)
qc01.cx(0,1)
qc01.save_unitary()
gate01 = execute(qc01,backend).result().get_unitary()
array_to_latex(gate01, prefix="\\text{gate01} = ")
qcpm = QuantumCircuit(2)
qcpm.h(0)
qcpm.h(1)
qcpm.cx(0,1)
qcpm.cx(1,0)
qcpm.cx(0,1)
qcpm.h(0)
qcpm.h(1)
qcpm.save_unitary()
gatepm = execute(qcpm,backend).result().get_unitary()
array_to_latex(gatepm, prefix="\\text{gatepm} = ")
```
## 3. Controlled Rotations <a id="controlled-rotations"></a>
We have already seen how to build controlled $\pi$ rotations from a single CNOT gate. Now we'll look at how to build any controlled rotation.
First, let's consider arbitrary rotations around the y axis. Specifically, consider the following sequence of gates.
```
qc = QuantumCircuit(2)
theta = pi # theta can be anything (pi chosen arbitrarily)
qc.ry(theta/2,t)
qc.cx(c,t)
qc.ry(-theta/2,t)
qc.cx(c,t)
qc.draw()
```
If the control qubit is in state $|0\rangle$, all we have here is a $R_y(\theta/2)$ immediately followed by its inverse, $R_y(-\theta/2)$. The end effect is trivial. If the control qubit is in state $|1\rangle$, however, the `ry(-theta/2)` is effectively preceded and followed by an X gate. This has the effect of flipping the direction of the y rotation and making a second $R_y(\theta/2)$. The net effect in this case is therefore to make a controlled version of the rotation $R_y(\theta)$.
This method works because the x and y axes are orthogonal, which causes the $X$ gates to flip the direction of the rotation. It therefore similarly works to make a controlled $R_z(\theta)$. A controlled $R_x(\theta)$ could likewise be made using CNOT gates.
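The key identity is $X\, R_y(-\theta/2)\, X = R_y(\theta/2)$, so on the $|1\rangle$ branch the two half-rotations add up to $R_y(\theta)$. A NumPy sketch of this reasoning (our addition):

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta/2), -np.sin(theta/2)],
                     [np.sin(theta/2),  np.cos(theta/2)]])

X = np.array([[0, 1], [1, 0]])
theta = 0.7  # arbitrary test angle

# Conjugation by X reverses the direction of a y rotation...
assert np.allclose(X @ ry(-theta/2) @ X, ry(theta/2))
# ...so the controlled branch ry(theta/2) -> X -> ry(-theta/2) -> X is ry(theta)
assert np.allclose(X @ ry(-theta/2) @ X @ ry(theta/2), ry(theta))
```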
We can also make a controlled version of any single-qubit rotation, $V$. For this we simply need to find three rotations A, B and C, and a phase $\alpha$ such that
$$
ABC = I, ~~~e^{i\alpha}AZBZC = V
$$
We then use controlled-Z gates to cause the first of these relations to happen whenever the control is in state $|0\rangle$, and the second to happen when the control is in state $|1\rangle$. A phase of $\alpha$ is also applied to the control (the `p(alpha)` gate below) to get the right phase, which will be important whenever there are superposition states.
```
A = Gate('A', 1, [])
B = Gate('B', 1, [])
C = Gate('C', 1, [])
alpha = 1 # arbitrarily define alpha to allow drawing of circuit
qc = QuantumCircuit(2)
qc.append(C, [t])
qc.cz(c,t)
qc.append(B, [t])
qc.cz(c,t)
qc.append(A, [t])
qc.p(alpha,c)
qc.draw()
```

Here `A`, `B` and `C` are gates that implement $A$ , $B$ and $C$, respectively.
## 4. The Toffoli <a id="ccx"></a>
The Toffoli gate is a three-qubit gate with two controls and one target. It performs an X on the target only if both controls are in the state $|1\rangle$. The final state of the target is then equal to either the AND or the NAND of the two controls, depending on whether the initial state of the target was $|0\rangle$ or $|1\rangle$. A Toffoli can also be thought of as a controlled-controlled-NOT, and is also called the CCX gate.
```
qc = QuantumCircuit(3)
a = 0
b = 1
t = 2
# Toffoli with control qubits a and b and target t
qc.ccx(a,b,t)
qc.draw()
```
To see how to build it from single- and two-qubit gates, it is helpful to first show how to build something even more general: an arbitrary controlled-controlled-U for any single-qubit rotation U. For this we need to define controlled versions of $V = \sqrt{U}$ and $V^\dagger$. In the code below, we use `cp(theta,c,t)` and `cp(-theta,c,t)` in place of the undefined subroutines `cv` and `cvdg` respectively. The controls are qubits $a$ and $b$, and the target is qubit $t$.
```
qc = QuantumCircuit(3)
theta = pi/2  # example angle; cp(theta) stands in for the controlled-V
qc.cp(theta,b,t)
qc.cx(a,b)
qc.cp(-theta,b,t)
qc.cx(a,b)
qc.cp(theta,a,t)
qc.draw()
```

By tracing through each value of the two control qubits, you can convince yourself that a U gate is applied to the target qubit if and only if both controls are 1. Using ideas we have already described, you could now implement each controlled-V gate to arrive at some circuit for the doubly-controlled-U gate. It turns out that the minimum number of CNOT gates required to implement the Toffoli gate is six [2].

*This is a Toffoli on three qubits (q0, q1, q2). In this circuit example, q0 is connected with q2, but q0 is not connected with q1.*
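As a concrete check of the general construction, we can take $V = \sqrt{X}$, so that $U = V^2 = X$ and the five-gate sequence should reproduce a Toffoli exactly. In the NumPy sketch below, the `controlled` helper is our own illustration, not a Qiskit function:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
V = np.array([[1+1j, 1-1j], [1-1j, 1+1j]]) / 2   # V = sqrt(X), so V @ V == X

def controlled(u, control, target, n=3):
    """n-qubit unitary applying u to `target` when `control` is |1> (q0 = MSB)."""
    dim = 2 ** n
    M = np.zeros((dim, dim), dtype=complex)
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 0:
            M[col, col] = 1
        else:
            for tbit in (0, 1):
                out = bits.copy()
                out[target] = tbit
                row = sum(b << (n - 1 - k) for k, b in enumerate(out))
                M[row, col] = u[tbit, bits[target]]
    return M

a, b, t = 0, 1, 2
# Gate order CV(b,t), CX(a,b), CV†(b,t), CX(a,b), CV(a,t); last gate is leftmost
circuit = (controlled(V, a, t) @ controlled(X, a, b) @
           controlled(V.conj().T, b, t) @ controlled(X, a, b) @
           controlled(V, b, t))

toffoli = np.eye(8)
toffoli[6:8, 6:8] = X   # CCX flips the target when both controls are 1
assert np.allclose(circuit, toffoli)
```

Tracing through the four control settings shows why: the target sees $I$, $V^\dagger V$, $V V^\dagger$, or $V V = X$.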
The Toffoli is not the only way to implement an AND gate in quantum computing. We could also define other gates that have the same effect, but which also introduce relative phases. In these cases, we can implement the gate with fewer CNOTs.
For example, suppose we use both the controlled-Hadamard and controlled-$Z$ gates, which can both be implemented with a single CNOT. With these we can make the following circuit:
```
qc = QuantumCircuit(3)
qc.ch(a,t)
qc.cz(b,t)
qc.ch(a,t)
qc.draw()
```
For the state $|00\rangle$ on the two controls, this does nothing to the target. For $|11\rangle$, the target experiences a $Z$ gate that is both preceded and followed by an H. The net effect is an $X$ on the target. For the states $|01\rangle$ and $|10\rangle$, the target experiences either just the two Hadamards (which cancel each other out) or just the $Z$ (which only induces a relative phase). This therefore also reproduces the effect of an AND, because the value of the target is only changed for the $|11\rangle$ state on the controls -- but it does it with the equivalent of just three CNOT gates.
## 5. Arbitrary rotations from H and T <a id="arbitrary-rotations"></a>
The qubits in current devices are subject to noise, which basically consists of gates that are done by mistake. Simple things like temperature, stray magnetic fields or activity on neighboring qubits can make things happen that we didn't intend.
For large applications of quantum computers, it will be necessary to encode our qubits in a way that protects them from this noise. This is done by making it much harder for gates to be applied by mistake, or to be implemented in a slightly wrong way.
This is unfortunate for the single-qubit rotations $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$. It is impossible to implement an angle $\theta$ with perfect accuracy, such that you are sure that you are not accidentally implementing something like $\theta + 0.0000001$. There will always be a limit to the accuracy we can achieve, and it will always be larger than is tolerable when we account for the build-up of imperfections over large circuits. We will therefore not be able to implement these rotations directly in fault-tolerant quantum computers, but will instead need to build them in a much more deliberate manner.
Fault-tolerant schemes typically perform these rotations using multiple applications of just two gates: $H$ and $T$.
The T gate is expressed in Qiskit as `.t()`:
```
qc = QuantumCircuit(1)
qc.t(0) # T gate on qubit 0
qc.draw()
```
It is a rotation around the z axis by $\theta = \pi/4$, and so is expressed mathematically as $R_z(\pi/4) = e^{i\pi/8~Z}$.
In the following we assume that the $H$ and $T$ gates are effectively perfect. This can be engineered by suitable methods for error correction and fault-tolerance.
Using the Hadamard and the methods discussed in the last chapter, we can use the T gate to create a similar rotation around the x axis.
```
qc = QuantumCircuit(1)
qc.h(0)
qc.t(0)
qc.h(0)
qc.draw()
```
Now let's put the two together. Let's make the gate $R_z(\pi/4)~R_x(\pi/4)$.
```
qc = QuantumCircuit(1)
qc.h(0)
qc.t(0)
qc.h(0)
qc.t(0)
qc.draw()
```
Since this is a single-qubit gate, we can think of it as a rotation around the Bloch sphere. That means that it is a rotation around some axis by some angle. We don't need to think about the axis too much here, but it clearly won't be simply x, y or z. More important is the angle.
The crucial property of this rotation is that its angle is an irrational multiple of $\pi$. You can prove this yourself with a bunch of math, but you can also see the irrationality in action by applying the gate repeatedly. Keep in mind that every time we apply a rotation larger than $2\pi$, we are implicitly taking the rotation angle modulo $2\pi$. Thus, repeating the combined rotation mentioned above $n$ times results in a rotation around the same axis by a different angle each time. As a hint towards a rigorous proof, recall that an irrational number cannot be written as what?
We can use this to our advantage. Each angle will be somewhere between $0$ and $2\pi$. Let's split this interval up into $n$ slices of width $2\pi/n$. For each repetition, the resulting angle will fall in one of these slices. If we look at the angles for the first $n+1$ repetitions, it must be true that at least one slice contains two of these angles due to the pigeonhole principle. Let's use $n_1$ to denote the number of repetitions required for the first, and $n_2$ for the second.
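We can watch the pigeonhole argument happen numerically. The combined gate $R_z(\pi/4) R_x(\pi/4)$ rotates by an angle $\theta$ with $\cos(\theta/2) = \cos^2(\pi/8)$ (from its trace); the sketch below (our addition) checks that among the first $n+1$ multiples of $\theta$ modulo $2\pi$, some pair falls within $2\pi/n$ of each other:

```python
import numpy as np

# Rotation angle of Rz(pi/4) @ Rx(pi/4): cos(theta/2) = cos(pi/8)^2
theta = 2 * np.arccos(np.cos(np.pi / 8) ** 2)

n = 100
angles = np.sort((theta * np.arange(1, n + 2)) % (2 * np.pi))

# Pigeonhole: n+1 angles in [0, 2*pi) must include a pair
# closer together than the slice width 2*pi/n
gaps = np.diff(angles)
assert gaps.min() <= 2 * np.pi / n
print("smallest angle difference:", gaps.min())
```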
With this, we can prove something about the angle for $n_2-n_1$ repetitions. This is effectively the same as doing $n_2$ repetitions followed by the inverse of $n_1$ repetitions. Since the angles for these are not equal (because of the irrationality) but also differ by no more than $2\pi/n$ (because they correspond to the same slice), the angle for $n_2-n_1$ repetitions satisfies
$$
\theta_{n_2-n_1} \neq 0, ~~~~-\frac{2\pi}{n} \leq \theta_{n_2-n_1} \leq \frac{2\pi}{n} .
$$
We therefore have the ability to do rotations around small angles. We can use this to rotate around angles that are as small as we like, just by increasing the number of times we repeat this gate.
By using many small-angle rotations, we can also rotate by any angle we like. This won't always be exact, but it is guaranteed to be accurate up to $2\pi/n$, which can be made as small as we like. We now have power over the inaccuracies in our rotations.
So far, we only have the power to do these arbitrary rotations around one axis. For a second axis, we simply do the $R_z(\pi/4)$ and $R_x(\pi/4)$ rotations in the opposite order.
```
qc = QuantumCircuit(1)
qc.t(0)
qc.h(0)
qc.t(0)
qc.h(0)
qc.draw()
```
The axis that corresponds to this rotation is not the same as that for the gate considered previously. We therefore now have arbitrary rotation around two axes, which can be used to generate any arbitrary rotation around the Bloch sphere. We are back to being able to do everything, though it costs quite a lot of $T$ gates.
It is because of this kind of application that $T$ gates are so prominent in quantum computation. In fact, the complexity of algorithms for fault-tolerant quantum computers is often quoted in terms of how many $T$ gates they'll need. This motivates the quest to achieve things with as few $T$ gates as possible. Note that the discussion above was simply intended to prove that $T$ gates can be used in this way, and does not represent the most efficient method we know.
## 6. References <a id="references"></a>
[1] [Barenco, *et al.* 1995](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.3457?cm_mc_uid=43781767191014577577895&cm_mc_sid_50200000=1460741020)
[2] [Shende and Markov, 2009](http://dl.acm.org/citation.cfm?id=2011799)
```
import qiskit.tools.jupyter
%qiskit_version_table
```
```
%pylab inline
import numpy as np
import math
```
Initial conditions of the problem.
```
q=-1.602176565E-19
v_0=3.0E5
theta_0=0
B=-10**(-4)
m=9.1093829E-31
N=1000
```
Theoretical calculation of the time the particle takes to cross the $x$ axis again, and definition of the time interval to observe.
```
def tiempo(q,B,theta_0,m,N):
    t_max=(m/(q*B))*(2*theta_0+math.pi)
    dt=t_max/N
    t=np.arange(0,t_max+dt,dt)
    return t, t_max
time,t_max=tiempo(q,B,theta_0, m,N)
print("The total theoretical travel time until reaching the detector is {}.".format(t_max))
```
Theoretical position equations as functions of time $t$, for an arbitrary $\theta_0$ and an entry speed $v_0$.
```
def posicion(q,B,v_0,theta_0,m,t):
    omega=q*B/m
    x=-v_0*np.cos(theta_0-omega*t)/omega+v_0*np.cos(theta_0)/omega
    y=-v_0*np.sin(theta_0-omega*t)/omega+v_0*np.sin(theta_0)/omega
    return x,y
```
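As a quick sanity check (a sketch we add here, not in the original notebook), the equations satisfy $x(0)=y(0)=0$, and their time derivatives at $t=0$ reproduce the entry velocity $(-v_0\sin\theta_0,\ v_0\cos\theta_0)$ used by the step-by-step methods below:

```python
import numpy as np

# Same equations as posicion(), restated here so the check is self-contained
def posicion(q, B, v_0, theta_0, m, t):
    omega = q * B / m
    x = -v_0 * np.cos(theta_0 - omega * t) / omega + v_0 * np.cos(theta_0) / omega
    y = -v_0 * np.sin(theta_0 - omega * t) / omega + v_0 * np.sin(theta_0) / omega
    return x, y

q, B, m = -1.602176565e-19, -1e-4, 9.1093829e-31
v_0, theta_0 = 3.0e5, 0.3   # test with a nonzero entry angle
eps = 1e-15                 # small time step for a finite-difference derivative

x0, y0 = posicion(q, B, v_0, theta_0, m, 0.0)
x1, y1 = posicion(q, B, v_0, theta_0, m, eps)

assert abs(x0) < 1e-12 and abs(y0) < 1e-12                    # starts at the origin
assert abs((x1 - x0) / eps - (-v_0 * np.sin(theta_0))) < 1.0  # vx(0) ~ -v0 sin(theta0)
assert abs((y1 - y0) / eps - (v_0 * np.cos(theta_0))) < 1.0   # vy(0) ~  v0 cos(theta0)
```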
Plot of the circular path of a particle of charge $q$ that enters at the $x$ axis with speed $v_0$ and incidence angle $\theta_0=0$, due to a perpendicular field $\mathbf{B}$.
```
plt.figure(figsize=(50/6,5))
xTeo,yTeo=posicion(q,B,v_0,theta_0,m,time)
plt.plot(xTeo,yTeo, label="Circular trajectory")
plt.plot(xTeo,np.zeros(len(xTeo)), c="black")
plt.legend()
plt.xlabel("x position (m)")
plt.ylabel("y position (m)")
```
Calculation of the final position of the particle when it reaches the detector, that is, the point where the trajectory crosses the $x$ axis again.
```
x_max, y_max=posicion(q,B,v_0,theta_0,m,t_max)
print("Theoretically, the particle reaches the detector when it is located at x={}m and y={}m.".format(x_max,y_max))
```
Calculation of the initial momentum $p_0=mv_0$, the final momentum $p_f=\frac{1}{2}qBx$, and the momentum difference, which verifies conservation of linear momentum.
```
p_0=m*v_0
p_f=0.5*q*B*x_max
dp=np.abs(p_f-p_0)
print("The particle's initial momentum is {} kg m/s, its final momentum is {} kg m/s, and the momentum difference is {} kg m/s.".format(p_0,p_f,dp))
```
Definition of the trajectory function for the particle that enters with speed $v_0$ and angle $\theta_0$ into a region of perpendicular magnetic field $\mathbf{B}$, following Feynman's step-by-step method.
```
def pasoApaso(q,B,v_0,theta_0,m):
    N=10000
    t=0.0
    omega=q*B/m
    dt=1/(omega*N)
    x=[0]
    y=[0]
    v_x=-v_0*np.sin(theta_0)
    v_y=v_0*np.cos(theta_0)
    while y[-1]>=0:
        a_x=omega*v_y
        a_y=-omega*v_x
        x_new=x[-1]+v_x*dt
        y_new=y[-1]+v_y*dt
        x.append(x_new)
        y.append(y_new)
        v_x=v_x+a_x*dt
        v_y=v_y+a_y*dt
        t=t+dt
    x=np.array(x)
    y=np.array(y)
    return x,y,t
```
Plot of the circular trajectory, computed with Feynman's method, of a particle of charge $q$ that enters at the $x$ axis with speed $v_0$ and incidence angle $\theta_0=0$, due to a perpendicular field $\mathbf{B}$.
```
plt.figure(figsize=(50/6,5))
xF,yF,t_maxF=pasoApaso(q,B,v_0,theta_0,m)
plt.plot(xF,yF, label="Circular trajectory")
plt.plot(xF,np.zeros(len(xF)), c="black")
plt.legend()
plt.xlabel("x position (m)")
plt.ylabel("y position (m)")
```
Numerical calculation of the final position of the particle when it reaches the detector.
```
xF_max=xF[-1]
yF_max=yF[-1]
print("Using Feynman's method, the particle reaches the detector when it is located at x={}m and y={}m.".format(xF_max,yF_max))
```
Calculation of the initial momentum $p_0=mv_0$, the final momentum $p_f=\frac{1}{2}qBx$, and the momentum difference, which verifies conservation of linear momentum.
```
pF_0=m*v_0
pF_f=0.5*q*B*xF_max
dpF=np.abs(pF_f-pF_0)
print("The particle's initial momentum is {} kg m/s, its final momentum is {} kg m/s, and the momentum difference is {} kg m/s.".format(pF_0,pF_f,dpF))
```
Definition of the velocity increments and the trajectory function for the particle that enters with speed $v_0$ and angle $\theta_0$ into a region of perpendicular magnetic field $\mathbf{B}$, following the fourth-order Runge-Kutta method step by step.
```
def delta(omega,v_x,v_y,dt):
    delta11=dt*omega*v_y
    delta12=dt*omega*(v_y+delta11/2)
    delta13=dt*omega*(v_y+delta12/2)
    delta14=dt*omega*(v_y+delta13)
    delta1=(delta11+2*delta12+2*delta13+delta14)/6
    delta21=-dt*omega*v_x
    delta22=-dt*omega*(v_x+delta21/2)
    delta23=-dt*omega*(v_x+delta22/2)
    delta24=-dt*omega*(v_x+delta23)
    delta2=(delta21+2*delta22+2*delta23+delta24)/6
    return delta1, delta2

def rungePaso(q,B,v_0,theta_0,m,N):
    t=0.0
    omega=q*B/m
    dt=1/(omega*N)
    x=[0]
    y=[0]
    v_x=-v_0*np.sin(theta_0)
    v_y=v_0*np.cos(theta_0)
    while y[-1]>=0:
        x_new=x[-1]+v_x*dt
        y_new=y[-1]+v_y*dt
        x.append(x_new)
        y.append(y_new)
        # evaluate the increments once, so v_y is updated using the old v_x
        d_vx, d_vy = delta(omega,v_x,v_y,dt)
        v_x=v_x+d_vx
        v_y=v_y+d_vy
        t=t+dt
    x=np.array(x)
    y=np.array(y)
    return x,y,t
```
Plot of the circular trajectory, computed with the fourth-order Runge-Kutta method, of a particle of charge $q$ that enters at the $x$ axis with speed $v_0$ and incidence angle $\theta_0=0$, due to a perpendicular field $\mathbf{B}$.
```
plt.figure(figsize=(50/6,5))
xR,yR,t_maxR=rungePaso(q,B,v_0,theta_0,m,N)
plt.plot(xR,yR, label="Circular trajectory")
plt.plot(xR,np.zeros(len(xR)), c="black")
plt.legend()
plt.xlabel("x position (m)")
plt.ylabel("y position (m)")
```
Numerical calculation of the final position of the particle when it reaches the detector.
```
xR_max=xR[-1]
yR_max=yR[-1]
print("Using the fourth-order Runge-Kutta method, the particle reaches the detector when it is located at x={}m and y={}m.".format(xR_max,yR_max))
```
Calculation of the initial momentum $p_0=mv_0$, the final momentum $p_f=\frac{1}{2}qBx$, and the momentum difference, which verifies conservation of linear momentum.
```
pR_0=m*v_0
pR_f=0.5*q*B*xR_max
dpR=np.abs(pR_f-pR_0)
print("The particle's initial momentum is {} kg m/s, its final momentum is {} kg m/s, and the momentum difference is {} kg m/s.".format(pR_0,pR_f,dpR))
```
Graphical comparison of the numerical methods for $\theta_0=0$.
```
plt.figure(figsize=(12,6))
plt.title("Trajectories of particles with charge {}C, mass {}kg, speed {}m/s, entry angle {}rad, due to a perpendicular field B={}T.".format(q,m,v_0,theta_0,B))
plt.subplot(1,2,1)
plt.plot(xF,yF, label="Feynman method", c="green")
plt.legend()
plt.xlabel("x position (m)")
plt.ylabel("y position (m)")
plt.subplot(1,2,2)
plt.plot(xR,yR, label="Runge-Kutta method")
plt.legend()
plt.xlabel("x position (m)")
plt.ylabel("y position (m)")
plt.savefig("recorridos.jpg")
```
```
import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u
from astropy import constants as const
import matplotlib as mpl
from jupyterthemes import jtplot #These two lines can be skipped if you are not using jupyter themes
jtplot.reset()
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=67.4, Om0=0.314)
import scipy as sp
import multiprocessing as mp
import time
start_total = time.time()
import os
my_path = '/home/tomi/Documentos/Fisica/Tesis/escrito-tesis/images/'
from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LensModel.lens_model_extensions import LensModelExtensions
from lenstronomy.LensModel.Solver.lens_equation_solver import LensEquationSolver
zl = 0.2; zs = 1.2
Dl = cosmo.angular_diameter_distance(zl)
Ds = cosmo.angular_diameter_distance(zs)
Dls = cosmo.angular_diameter_distance_z1z2(zl, zs)
G = const.G
rho_crit = (cosmo.critical_density(zl)).to(u.kg/u.m**3)
c_light = (const.c).to(u.cm/u.second)
#r0 = 10*u.kpc
r0 = 10.0*u.kpc
#r0 = 0.1*u.kpc
pi = np.pi
def scale_radius(v,Dl,Ds,Dls): # this is e0 in eq 3.42 Meneghetti, eq 1 Barnacka 2014
    return (4.*pi*v**2/c_light**2*Dl*Dls/Ds).decompose()

def theta_E_SIS():
    'in arcsec'
    pre_theta_E = (scale_radius(v,Dl,Ds,Dls)/Dl).decompose()
    return pre_theta_E*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
v = 180 *u.km/u.s
ss_r = scale_radius(v,Dl,Ds,Dls)
print('scale radius (m): ',ss_r)
print('scale radius (kpc): ',ss_r.to(u.kpc))
print('theta_E: ',theta_E_SIS() ,'arcsec')
theta_E_num = theta_E_SIS()
elipt = 0.3
re = (const.e.esu**2/const.m_e/(c_light**2)).decompose()
print('Classic electron radius: ',re)
```
The lensing potential of a point mass is given by
$$
\psi = \frac{4GM}{c^2} \frac{D_{ls}}{D_l D_s} \ln |\vec{\theta}|
$$
In terms of the Einstein radius
$$
\theta_{e_1} = \sqrt{\frac{4GM}{c^2} \frac{D_{ls}}{D_l D_s}} \\
\psi = \theta_{e_1}^2 \ln |\vec{\theta}|
$$
Let's start with
$$
M_1 = 10^3 M_\odot \\
M_2 = 10^4 M_\odot \\
M_3 = 10^5 M_\odot \\
M_4 = 10^6 M_\odot \\
M_5 = 10^8 M_\odot
$$
The mass of the main lens is
$$
M(\theta_e) = \theta_e^2 \frac{c^2}{4G} \frac{D_l D_s}{D_{ls}}
$$
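These two relations are inverses of each other, which we can check with plain numbers (a self-consistency sketch we add here; the distance values are arbitrary stand-ins, not the notebook's cosmology):

```python
import math

# Stand-in values in SI units (hypothetical, for illustration only)
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
Mpc = 3.0857e22         # m
Dl, Ds, Dls = 700 * Mpc, 1800 * Mpc, 1400 * Mpc

def theta_E(M):
    """Einstein radius (radians) of a point mass M."""
    return math.sqrt(4 * G * M / c**2 * Dls / (Dl * Ds))

def mass(theta):
    """Invert: mass enclosed by Einstein radius theta (radians)."""
    return theta**2 * c**2 / (4 * G) * Dl * Ds / Dls

M = 1e6 * 1.98847e30    # 10^6 solar masses
assert abs(mass(theta_E(M)) / M - 1) < 1e-12
```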
```
M_e = (theta_E_num*u.arcsec)**2 * c_light**2 /4 / G * Dl * Ds / Dls
M_e = (M_e/(u.rad)**2).decompose()
ms = 1.98847e30*u.kg
M_e/ms
```
$$
M(0.73 \mathrm{arcsec}) = 5.91\cdot 10^{10} M_\odot
$$
```
ms = 1.98847e30*u.kg #solar mass
m1 = ms*1e3
m2 = ms*1e4
m3 = ms*1e5
m4 = ms*1e6
m5 = ms*1e8
theta_E_1 = np.sqrt(4*G*m1/c_light**2*Dls/Dl/Ds)
theta_E_1 = theta_E_1.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
theta_E_1.value
theta_E_2 = np.sqrt(4*G*m2/c_light**2*Dls/Dl/Ds)
theta_E_2 = theta_E_2.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
theta_E_2.value
theta_E_3 = np.sqrt(4*G*m3/c_light**2*Dls/Dl/Ds)
theta_E_3 = theta_E_3.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
theta_E_3.value
theta_E_4 = np.sqrt(4*G*m4/c_light**2*Dls/Dl/Ds)
theta_E_4 = theta_E_4.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
theta_E_4.value
theta_E_5 = np.sqrt(4*G*m5/c_light**2*Dls/Dl/Ds)
theta_E_5 = theta_E_5.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles())
theta_E_5.value
```
### Two black holes of $ M_4 = 10^6 M_\odot$
```
x1 = -0.26631755 + 2e-3
y1 = -0.26631755
x0 = y0 = - 0.26631755
b1x = -3.5e-3 + x0 ; b1y = -2e-3 + y0
b2x = 0 + x0 ; b2y = -1.5e-3 + y0
b3x = 3.5e-3 + x0 ; b3y = 2.5e-3 + y0
x2 = -0.26631755 - 2e-3
y2 = -0.26631755 - 4e-3
lens_model_list = ['SIEBH2']
lensModel = LensModel(lens_model_list)
lensEquationSolver = LensEquationSolver(lensModel)
kwargs = {'theta_E':theta_E_num.value,'eta':0*elipt, 'theta_E_1':theta_E_4.value, 'x1':x1, 'y1':y1, 'theta_E_2':theta_E_4.value, 'x2':x2, 'y2':y2}
kwargs_lens_list = [kwargs]
from lenstronomy.LensModel.Profiles.sie_black_hole_2 import SIEBH2
perfil = SIEBH2()
blobs = [(b1x, b1y), (b2x, b2y), (b3x, b3y)]

def fermat_delay(bx, by, phi, geom=1.0, pot=1.0):
    # Time delay in seconds; geom/pot switch the geometric and potential terms on or off
    return ((1+zl)/c_light*Ds*Dl/Dls*(geom*0.5*((.25 - bx)**2 + (.25 - by)**2) - pot*phi)
            *(u.arcsec**2).to('rad**2')).to('s').value

def delay_differences(bh_theta_E, geom=1.0, pot=1.0):
    t = [fermat_delay(bx, by,
                      SIEBH2.function(perfil, bx, by, theta_E_num.value, 0*elipt,
                                      bh_theta_E, x1, y1, bh_theta_E, x2, y2),
                      geom, pot)
         for bx, by in blobs]
    print(t[2] - t[0])
    print(t[2] - t[1])
    print(t[1] - t[0])

for geom, pot in [(1, 1), (1, 0), (0, 1)]:  # total, geometric-only, potential-only delays
    delay_differences(theta_E_4.value, geom, pot)
```
# massless black holes
```
# Same three comparisons with the black-hole Einstein radii set to zero
for geom, pot in [(1, 1), (1, 0), (0, 1)]:
    delay_differences(0*theta_E_4.value, geom, pot)
```
### $ M_1 = 10^3 M_\odot$
```
x1 = -0.06630 - .05e-3
y1 = -0.0662868
x2 = -0.06630 + .05e-3
y2 = -0.0662868
lens_model_list = ['SIEBH2']
lensModel = LensModel(lens_model_list)
lensEquationSolver = LensEquationSolver(lensModel)
kwargs = {'theta_E':theta_E_num.value,'eta':0*elipt, 'theta_E_1':theta_E_1.value, 'x1':x1, 'y1':y1, 'theta_E_2':theta_E_1.value, 'x2':x2, 'y2':y2}
kwargs_lens_list = [kwargs]
x0 = -.0662968
y0 = -.0663026
#blobs position
b1x = x0 - -.1e-3 ; b1y = y0 - -.1e-3
b2x = x0 - .1e-3 ; b2y = y0 - .1e-3
from lenstronomy.LensModel.Profiles.sie_black_hole_2 import SIEBH2
perfil = SIEBH2()
t_ = [0,0]
phi = [0,0]
phi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0])*(u.arcsec**2).to('rad**2')).to('s').value
phi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value
print(t_[0] - t_[1])
t_ = [0,0]
phi = [0,0]
phi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0]*0)*(u.arcsec**2).to('rad**2')).to('s').value
phi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1]*0)*(u.arcsec**2).to('rad**2')).to('s').value
print(t_[0] - t_[1])
t_ = [0,0]
phi = [0,0]
phi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0])*(u.arcsec**2).to('rad**2')).to('s').value
phi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_1.value, x1, y1, theta_E_1.value, x2, y2)
t_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value
print(t_[0] - t_[1])
end_total = time.time()
print('total time: ',(end_total-start_total)/60.,' minutes')
```
# Issue: the GO term tags and names are mismatched in the output of the maaslin2 term files
## Objective: Identify and repair the mismatched GO tags and names
```
library(phyloseq)
setwd("/media/jochum00/Aagaard_Raid3/microbial/GO_term_analysis/R_Maaslin2/") # Change the current working directory
```
import the base phyloseq object
```
bac_pseq<-readRDS("bac_pseq.rds")
write.table(as.data.frame(tax_table(bac_pseq)),file = "bac_pseq_tax_table.tsv",sep = "\t")
```
These names are OK, so what is the difference?
```
bac_pseq_no_neg<-subset_samples(bac_pseq, sample_type!="neg_control")
write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = "bac_pseq_no_neg_tax_table.tsv",sep = "\t")
```
names are still ok
```
bac_pseq_no_neg<-subset_samples(bac_pseq_no_neg, sample_type!="Unknown")
write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = "bac_pseq_no_neg_tax_table.tsv",sep = "\t")
```
still ok
```
names<-paste(taxa_names(bac_pseq_no_neg),get_taxa_unique(bac_pseq_no_neg,taxonomic.rank = "name" ),sep = "-")
taxa_names(bac_pseq_no_neg)<-names
write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = "bac_pseq_no_neg_tax_table.tsv",sep = "\t")
```
This messed it up somehow
```
GO:0030640 polyketide catabolic process TRUE GO:0030640-polyketide catabolic process biological_process 4 polyketide catabolic process
GO:0021731 trigeminal motor nucleus development TRUE GO:0021731-trigeminal motor nucleus development biological_process 4 trigeminal motor nucleus development
GO:0032970 regulation of actin filament FALSE GO:0032970-regulation of actin filament-based process biological_process 4 regulation of actin filament-based process
GO:0002009 morphogenesis of an epithelium TRUE GO:0002009-morphogenesis of an epithelium biological_process 4 morphogenesis of an epithelium
GO:0060359 adhesion of symbiont to host cell FALSE GO:0060359-adhesion of symbiont to host cell biological_process 4 response to ammonium ion
GO:0044650 viral tail assembly FALSE GO:0044650-viral tail assembly biological_process 4 adhesion of symbiont to host cell
GO:0098003 inflammatory response FALSE GO:0098003-inflammatory response biological_process 4 viral tail assembly
GO:0006954 hydrogen peroxide metabolic process FALSE GO:0006954-hydrogen peroxide metabolic process biological_process 4 inflammatory response
```

## I THINK IT MESSED UP BECAUSE OF THE "-" DELIMITER
Let's try to fix that
```
bac_pseq_no_neg<-subset_samples(bac_pseq, sample_type!="neg_control")
bac_pseq_no_neg<-subset_samples(bac_pseq_no_neg, sample_type!="Unknown")
length(taxa_names(bac_pseq_no_neg))
length(get_taxa_unique(bac_pseq_no_neg,taxonomic.rank = "name"))
```
## Well this is an issue lol
```
tax<-data.frame(tax_table(bac_pseq_no_neg))
names<-paste(rownames(tax),tax$name,sep="-")
length(names)
taxa_names(bac_pseq_no_neg)<-names
```
Could it really have been that easy?
```
write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = "bac_pseq_no_neg_tax_table.tsv",sep = "\t")
```
WOW it really was that easy! Problem Solved!
```
#El Mehdi CHOUHAM version 1
# mehdichouham@gmail.com for Tictactrip
import pandas as pd
import numpy as np
import datetime as dt
import plotly.graph_objects as go
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ARDRegression
ticket_data=pd.read_csv('./data/ticket_data.csv', sep=',', index_col='id')
stations=pd.read_csv('./data/stations.csv', sep=',', index_col='id')
providers=pd.read_csv('./data/providers.csv', sep=',', index_col='id')
cities=pd.read_csv('./data/cities.csv', sep=',', index_col='id')
```
## Summary :
1 - Dataset review
<br>2 - Data extraction exercise
<br> 2 - 1 - Prices
<br> 2 - 2 - Durations
<br> 2 - 3 - Price and duration difference per transport mean and per travel range
<br>3 - Bonus
<br> 3 - 1 - Kilometer price per company
<br>&emsp; 3 - 2 - New trainline!
## 1 - Datasets review
```
providers.head()
ticket_data.head(2)
stations.head(2)
providers.head(2)
cities.head(2)
```
## 2 - Data extraction exercise
### 2 - 1 - Prices
```
df = ticket_data.merge(providers['fullname'], left_on='company', right_on='id', how='left')
df = df.merge(stations['unique_name'], left_on='o_city', right_on='id', how='left').merge(stations['unique_name'], left_on='d_city', right_on='id', how='left').rename(index=str,
columns={"unique_name_x": "o_station_name", "unique_name_y" : "d_station_name", "price_in_cents" : "price_in_euros"})
df = df.drop(columns=['arrival_ts','company', 'o_station', 'd_station', 'o_city', 'd_city', 'departure_ts', 'search_ts', 'middle_stations', 'other_companies'])
df.price_in_euros = round(df.price_in_euros/100, 5)
df.describe().loc[['mean','min','max']]
```
### 2 - 2 - Durations
```
ticket_data['duration']=pd.to_datetime(ticket_data['arrival_ts'])-pd.to_datetime(ticket_data['departure_ts'])
ticket_data.loc[:,['price_in_cents','duration']].describe().loc[['mean','min','max']]
```
### 2 - 3 - Price and duration difference per transport mean and per travel range
```
import math as m  # math module used by absc_curvi below

def absc_curvi(o_lat, d_lat, delta_long) :
    return m.acos( m.sin(o_lat)*m.sin(d_lat) + m.cos(o_lat)*m.cos(d_lat)*m.cos(delta_long))
r_terre = 6378137
to_rad = np.pi/180
# Compute and append the distance from the great-circle arc length
coords = ticket_data.merge(stations, how='left', right_on='id', left_on='o_station').rename(index=str, columns={"longitude": "o_longitude", "latitude": "o_latitude", "unique_name": "o_station"})
coords = coords.merge(stations, how='left', right_on='id', left_on='d_station').rename(columns={"longitude": "d_longitude", "latitude": "d_latitude", "unique_name": "d_station"})
coords['delta_long'] = coords.d_longitude - coords.o_longitude
coords['absc_curvi'] = np.arccos( np.sin(coords.o_latitude*to_rad)*np.sin(coords.d_latitude*to_rad) + np.cos(coords.o_latitude*to_rad)*np.cos(coords.d_latitude*to_rad)*np.cos(coords.delta_long*to_rad))
coords['distance'] = r_terre * coords['absc_curvi'] /1000
coords = coords.drop(columns=[ 'middle_stations', 'o_station', 'd_station', 'departure_ts', 'arrival_ts', 'search_ts', 'delta_long', 'o_latitude', 'o_longitude','d_latitude', 'd_longitude', 'other_companies'])
# coords[coords.o_name == "Massy-Palaiseau"][coords.d_name == "Gare Lille-Europe"] matches google maps distance !!
# Take the transport type into account
trajets = coords.merge(providers['transport_type'], left_on='company', right_on='id', how='left').drop(columns=[ 'company', 'o_city', 'd_city'])
trajets
trajets_0_200 = trajets[trajets['distance'] <= 200]
trajets_201_800 = trajets[(trajets['distance'] > 200) & (trajets['distance'] < 800)]
trajets_800_2000 = trajets[(800 < trajets['distance']) & (trajets['distance'] < 2000)]
trajets_over_2000 = trajets[trajets['distance'] > 2000]
print("The longest trip is: ", trajets.distance.max(), "km!\n")
```
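As a standalone sanity check of the spherical law of cosines used in `absc_curvi` above; the Earth radius matches `r_terre` (expressed in km), and the Paris and Lyon coordinates are illustrative values, not taken from the datasets:

```
import math

R_EARTH_KM = 6378.137  # same equatorial radius as r_terre, in km

def great_circle_km(lat1, lon1, lat2, lon2):
    # Spherical law of cosines, mirroring the absc_curvi computation
    to_rad = math.pi / 180
    lat1, lon1, lat2, lon2 = (x * to_rad for x in (lat1, lon1, lat2, lon2))
    return R_EARTH_KM * math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))

# Paris -> Lyon should come out a bit under 400 km
print(great_circle_km(48.8566, 2.3522, 45.7640, 4.8357))
```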
### Paths per travel ranges :
```
transport_types = list(pd.unique(trajets['transport_type']))
range_dfs = [trajets_0_200, trajets_201_800, trajets_800_2000, trajets_over_2000]
# For each travel range and each transport type: [mean price in cents, mean duration in seconds]
averages = np.array([[[df.price_in_cents[df['transport_type'] == t].mean(),
                       df.duration[df['transport_type'] == t].mean().total_seconds()]
                      for t in transport_types]
                     for df in range_dfs])
travel_ranges = ['from 0 Km to 200 Km', '201 Km to 800 Km', '801 Km to 2000 Km'] # 'over 2000 Km' range dismissed
fig_1 = go.Figure()
fig_1.data = []; fig_1.layout={} #reset
for idx, each_transport in enumerate(transport_types) :
fig_1.add_trace(go.Scatter(x = travel_ranges, y = averages[:, idx, 0]/100, name=each_transport))
fig_1.update_layout(title='Average Price per travel range per transport mean',
xaxis_title='Travel ranges',
yaxis_title='Price (in euros)')
fig_1.show()
fig_2 = go.Figure()
fig_2.data = []; fig_2.layout = {} #reset
for idx, each_transport in enumerate(transport_types) :
fig_2.add_trace(go.Scatter(x = travel_ranges, y = averages[:, idx, 1]/60, name=each_transport))
fig_2.update_layout(title='Average Duration per travel range per transport mean',
xaxis_title='Travel ranges',
yaxis_title='Duration (in minutes)')
fig_2.show()
```
## 3 - Bonus
### 3 - 1 - Kilometer price per company
```
company_info = 'fullname' # or 'name'
trajets_companies = coords.merge(providers[['name','fullname']], how='left', right_on='id', left_on='company')
trajets_companies['kilometer_price_in_euros'] = trajets_companies['price_in_cents']/(100*trajets_companies['distance'])
trajets_companies = trajets_companies.drop(columns=['price_in_cents', 'company', 'duration', 'absc_curvi'])
fig_3 = go.Figure(
data=[go.Bar(x=trajets_companies.groupby([company_info]).kilometer_price_in_euros.mean().index, y=trajets_companies.groupby([company_info]).kilometer_price_in_euros.mean())],
)
fig_3.update_layout(title="Kilometer price per company",
xaxis_title='Providers',
yaxis_title='Kilometer price (euros/Km)')
fig_3.show()
```
### 3 - 2 - New trainline !
```
#dataset
cities.loc[(cities.unique_name == 'paris'),'population'] = 2187526
nl = ticket_data.merge(cities[['unique_name', 'latitude', 'longitude', 'population']], how='left', right_on='id', left_on='o_city').rename(index=str, columns={"longitude": "o_longitude", "latitude": "o_latitude", "unique_name": "o_city_name", 'population': 'o_city_pop'})
nl = nl.merge(cities[['unique_name', 'latitude', 'longitude', 'population']], how='left', right_on='id', left_on='d_city').rename(columns={"longitude": "d_longitude", "latitude": "d_latitude", "unique_name": "d_city_name", 'population': 'd_city_pop'})
nl = nl.merge(providers['fullname'], how='left', right_on='id', left_on='company')
delta_long = nl.d_longitude - nl.o_longitude
nl['distance'] = r_terre * (np.arccos( np.sin(nl.o_latitude*to_rad)*np.sin(nl.d_latitude*to_rad) + np.cos(nl.o_latitude*to_rad)*np.cos(nl.d_latitude*to_rad)*np.cos(delta_long*to_rad)))/1000
nl = nl.rename(columns={'price_in_cents' : 'price_in_euros'})
nl['price_in_euros'] = nl['price_in_euros']/100
nl['month'] = pd.DatetimeIndex(nl['departure_ts']).month
nl['hour'] = pd.DatetimeIndex(nl['departure_ts']).hour
nl['delta_purchase_hours'] = (pd.to_datetime(nl['departure_ts'])-pd.to_datetime(nl['search_ts'])).dt.total_seconds()/3600
nl['duration'] = nl['duration'].dt.total_seconds()/60
nl = nl.drop(columns=[ 'middle_stations', 'o_station', 'd_station', 'departure_ts', 'arrival_ts', 'search_ts', 'o_latitude', 'o_longitude','d_latitude', 'd_longitude', 'other_companies', 'o_city', 'd_city', 'company'])
# we only keep data we think is relevant for our prediction; here:
# origin and destination city names and populations, the month and hour of the travel, the company and the search ts
nl = nl.dropna()
nl=nl.reindex(columns=list(nl.columns)[1:]+[list(nl.columns)[0]])
nl.head(2)
```
### Encoding and training
```
encod = OneHotEncoder(handle_unknown='ignore')
nl_str = nl[['o_city_name','d_city_name', 'fullname']]
encoded_nl_str = encod.fit_transform(nl_str)
encoded_nl = pd.concat((pd.DataFrame(encoded_nl_str.toarray(), index=nl.index), nl[[ 'duration', 'o_city_pop', 'd_city_pop', 'distance', 'month', 'hour','delta_purchase_hours','price_in_euros']]), axis=1)
X_train, X_test, y_train, y_test = train_test_split(encoded_nl.iloc[:,:-1],encoded_nl.iloc[:,-1], test_size=0.2, random_state=42, shuffle=True)
%%time
clf = ARDRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
```
### Prediction
An interesting trip to predict the price of is a high-speed trainline between Lyon and Bordeaux, two big cities between which people usually fly for lack of a 'TGV'.
We provide:
- the duration
- both cities' names and populations
- the month of departure
- the hour of departure
- the time difference between the ticket purchase and the departure, in hours.
```
def f(V):
return np.concatenate((encod.transform(np.array(V[:3]).reshape((1,-1))).toarray()[0], V[3:])).reshape((1,-1))
def predict(trip):
return round(float(clf.predict(f(trip))), 2)
%%time
trip = ['lyon',      # o_city_name
        'bordeaux',  # d_city_name
        'TGV',       # provider fullname
        513275,      # duration
        249712,      # o_city_pop
        1122005,     # d_city_pop
        800,         # distance
        2,           # month
        12,          # hour
        80]          # delta_purchase_hours
print('The price of a trip from ', trip[0], ' to ', trip[1], ' is: \n', predict(trip), 'euros! \n')
```
### Other interesting ideas :
- Group by path; for interesting paths, plot price curves (+ / time lapses)
- Group by path, mean price per transport mean/path (mean price per class of path)
- Join company & provider for the same class (unknown data?)
- Draw the coordinates on a map
- Correlation between the number of travels per region
- For a new line, the price could be estimated from the kilometer price, the elevation difference between the coordinates, the populations of the origin/destination cities, and off-peak periods
- Use label-correcting or A* algorithms to find the shortest path from one station to another from the station list (graph theory)
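As a small sketch of the first idea above (grouping tickets by path and comparing mean prices); the frame here is synthetic, with column names following `ticket_data` but made-up values:

```
import pandas as pd

# Synthetic stand-in for ticket_data (illustrative values only)
tickets = pd.DataFrame({
    'o_city': [1, 1, 2, 2, 1],
    'd_city': [2, 2, 1, 1, 2],
    'price_in_cents': [1000, 1400, 900, 1100, 1200],
})

# Mean price per (origin, destination) path, converted to euros
path_prices = (tickets.groupby(['o_city', 'd_city'])['price_in_cents']
                      .mean().div(100).rename('mean_price_in_euros'))
print(path_prices)
```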
### Thank you !
```
#default_exp data.load
#export
from fastai2.torch_basics import *
from torch.utils.data.dataloader import _MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter,_DatasetKind
_loaders = (_MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter)
from nbdev.showdoc import *
bs = 4
letters = list(string.ascii_lowercase)
```
## DataLoader
```
#export
def _wif(worker_id):
set_num_threads(1)
info = get_worker_info()
ds = info.dataset.d
ds.nw,ds.offs = info.num_workers,info.id
set_seed(info.seed)
ds.wif()
class _FakeLoader(GetAttr):
_auto_collation,collate_fn,drop_last,dataset_kind,_dataset_kind,_index_sampler = (
False,noops,False,_DatasetKind.Iterable,_DatasetKind.Iterable,Inf.count)
def __init__(self, d, pin_memory, num_workers, timeout):
self.dataset,self.default,self.worker_init_fn = self,d,_wif
store_attr(self, 'd,pin_memory,num_workers,timeout')
def __iter__(self): return iter(self.d.create_batches(self.d.sample()))
@property
def multiprocessing_context(self): return (None,multiprocessing)[self.num_workers>0]
@contextmanager
def no_multiproc(self):
old_nw = self.num_workers
try:
self.num_workers = 0
yield self.d
finally: self.num_workers = old_nw
_collate_types = (ndarray, Tensor, typing.Mapping, str)
#export
def fa_collate(t):
b = t[0]
return (default_collate(t) if isinstance(b, _collate_types)
else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
else default_collate(t))
#e.g. x is int, y is tuple
t = [(1,(2,3)),(1,(2,3))]
test_eq(fa_collate(t), default_collate(t))
test_eq(L(fa_collate(t)).map(type), [Tensor,tuple])
t = [(1,(2,(3,4))),(1,(2,(3,4)))]
test_eq(fa_collate(t), default_collate(t))
test_eq(L(fa_collate(t)).map(type), [Tensor,tuple])
test_eq(L(fa_collate(t)[1]).map(type), [Tensor,tuple])
#export
def fa_convert(t):
return (default_convert(t) if isinstance(t, _collate_types)
else type(t)([fa_convert(s) for s in t]) if isinstance(t, Sequence)
else default_convert(t))
t0 = array([1,2])
t = [t0,(t0,t0)]
test_eq(fa_convert(t), default_convert(t))
test_eq(L(fa_convert(t)).map(type), [Tensor,tuple])
#export
class SkipItemException(Exception): pass
#export
@funcs_kwargs
class DataLoader(GetAttr):
_noop_methods = 'wif before_iter after_item before_batch after_batch after_iter'.split()
for o in _noop_methods:
exec(f"def {o}(self, x=None, *args, **kwargs): return x")
_methods = _noop_methods + 'create_batches create_item create_batch retain \
get_idxs sample shuffle_fn do_batch'.split()
_default = 'dataset'
def __init__(self, dataset=None, bs=None, num_workers=0, pin_memory=False, timeout=0, batch_size=None,
shuffle=False, drop_last=False, indexed=None, n=None, device=None, **kwargs):
if batch_size is not None: bs = batch_size # PyTorch compatibility
assert not (bs is None and drop_last)
if indexed is None: indexed = dataset is not None and hasattr(dataset,'__getitem__')
if n is None:
try: n = len(dataset)
except TypeError: pass
store_attr(self, 'dataset,bs,shuffle,drop_last,indexed,n,pin_memory,timeout,device')
self.rng,self.nw,self.offs = random.Random(),1,0
self.fake_l = _FakeLoader(self, pin_memory, num_workers, timeout)
def __len__(self):
if self.n is None: raise TypeError
if self.bs is None: return self.n
return self.n//self.bs + (0 if self.drop_last or self.n%self.bs==0 else 1)
def get_idxs(self):
idxs = Inf.count if self.indexed else Inf.nones
if self.n is not None: idxs = list(itertools.islice(idxs, self.n))
if self.shuffle: idxs = self.shuffle_fn(idxs)
return idxs
def sample(self):
idxs = self.get_idxs()
return (b for i,b in enumerate(idxs) if i//(self.bs or 1)%self.nw==self.offs)
def __iter__(self):
self.randomize()
self.before_iter()
for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
if self.device is not None: b = to_device(b, self.device)
yield self.after_batch(b)
self.after_iter()
if hasattr(self, 'it'): delattr(self, 'it')
def create_batches(self, samps):
self.it = iter(self.dataset) if self.dataset is not None else None
res = filter(lambda o:o is not None, map(self.do_item, samps))
yield from map(self.do_batch, self.chunkify(res))
def new(self, dataset=None, cls=None, **kwargs):
if dataset is None: dataset = self.dataset
if cls is None: cls = type(self)
cur_kwargs = dict(dataset=dataset, num_workers=self.fake_l.num_workers, pin_memory=self.pin_memory, timeout=self.timeout,
bs=self.bs, shuffle=self.shuffle, drop_last=self.drop_last, indexed=self.indexed, device=self.device)
for n in self._methods: cur_kwargs[n] = getattr(self, n)
return cls(**merge(cur_kwargs, kwargs))
@property
def prebatched(self): return self.bs is None
def do_item(self, s):
try: return self.after_item(self.create_item(s))
except SkipItemException: return None
def chunkify(self, b): return b if self.prebatched else chunked(b, self.bs, self.drop_last)
def shuffle_fn(self, idxs): return self.rng.sample(idxs, len(idxs))
def randomize(self): self.rng = random.Random(self.rng.randint(0,2**32-1))
def retain(self, res, b): return retain_types(res, b[0] if is_listy(b) else b)
def create_item(self, s): return next(self.it) if s is None else self.dataset[s]
def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
def one_batch(self):
if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
with self.fake_l.no_multiproc(): res = first(self)
if hasattr(self, 'it'): delattr(self, 'it')
return res
```
Override `create_item` and use the default infinite sampler to get a stream of unknown length (call `stop()` when you want to end the stream).
```
class RandDL(DataLoader):
def create_item(self, s):
r = random.random()
return r if r<0.95 else stop()
L(RandDL())
L(RandDL(bs=4, drop_last=True)).map(len)
dl = RandDL(bs=4, num_workers=4, drop_last=True)
L(dl).map(len)
test_eq(dl.fake_l.num_workers, 4)
with dl.fake_l.no_multiproc():
test_eq(dl.fake_l.num_workers, 0)
L(dl).map(len)
test_eq(dl.fake_l.num_workers, 4)
def _rand_item(s):
r = random.random()
return r if r<0.95 else stop()
L(DataLoader(create_item=_rand_item))
```
If you don't set `bs`, then `dataset` is assumed to provide an iterator or a `__getitem__` that returns a batch.
```
ds1 = DataLoader(letters)
test_eq(L(ds1), letters)
test_eq(len(ds1), 26)
test_shuffled(L(DataLoader(letters, shuffle=True)), letters)
ds1 = DataLoader(letters, indexed=False)
test_eq(L(ds1), letters)
test_eq(len(ds1), 26)
t2 = L(tensor([0,1,2]),tensor([3,4,5]))
ds2 = DataLoader(t2)
test_eq_type(L(ds2), t2)
t3 = L(array([0,1,2]),array([3,4,5]))
ds3 = DataLoader(t3)
test_eq_type(L(ds3), t3.map(tensor))
ds4 = DataLoader(t3, create_batch=noop, after_iter=lambda: setattr(t3, 'f', 1))
test_eq_type(L(ds4), t3)
test_eq(t3.f, 1)
```
If you do set `bs`, then `dataset` is assumed to provide an iterator or a `__getitem__` that returns a single item of a batch.
```
def twoepochs(d): return ' '.join(''.join(o) for _ in range(2) for o in d)
ds1 = DataLoader(letters, bs=4, drop_last=True, num_workers=0)
test_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx abcd efgh ijkl mnop qrst uvwx')
ds1 = DataLoader(letters,4,num_workers=2)
test_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx yz abcd efgh ijkl mnop qrst uvwx yz')
ds1 = DataLoader(range(12), bs=4, num_workers=3)
test_eq_type(L(ds1), L(tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])))
ds1 = DataLoader([str(i) for i in range(11)], bs=4, after_iter=lambda: setattr(t3, 'f', 2))
test_eq_type(L(ds1), L(['0','1','2','3'],['4','5','6','7'],['8','9','10']))
test_eq(t3.f, 2)
it = iter(DataLoader(map(noop,range(20)), bs=4, num_workers=1))
test_eq_type([next(it) for _ in range(3)], [tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])])
class SleepyDL(list):
def __getitem__(self,i):
time.sleep(random.random()/50)
return super().__getitem__(i)
t = SleepyDL(letters)
%time test_eq(DataLoader(t, num_workers=0), letters)
%time test_eq(DataLoader(t, num_workers=2), letters)
%time test_eq(DataLoader(t, num_workers=4), letters)
dl = DataLoader(t, shuffle=True, num_workers=1)
test_shuffled(L(dl), letters)
test_shuffled(L(dl), L(dl))
class SleepyQueue():
"Simulate a queue with varying latency"
def __init__(self, q): self.q=q
def __iter__(self):
while True:
time.sleep(random.random()/100)
try: yield self.q.get_nowait()
except queues.Empty: return
q = Queue()
for o in range(30): q.put(o)
it = SleepyQueue(q)
%time test_shuffled(L(DataLoader(it, num_workers=4)), range(30))
class A(TensorBase): pass
for nw in (0,2):
t = A(tensor([1,2]))
dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)
b = first(dl)
test_eq(type(b), A)
t = (A(tensor([1,2])),)
dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)
b = first(dl)
test_eq(type(b[0]), A)
class A(TensorBase): pass
t = A(tensor(1,2))
tdl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=2, after_batch=to_device)
b = first(tdl)
test_eq(type(b), A)
# Unknown attributes are delegated to `dataset`
test_eq(tdl.pop(), tensor(1,2))
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
**Chapter 2 – End-to-end Machine Learning project**
*Welcome to Machine Learning Housing Corp.! Your task is to predict median house values in Californian districts, given a number of features from these districts.*
*This notebook contains all the sample code and solutions to the exercises in chapter 2.*
<table align="left">
<td>
<a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/02_end_to_end_machine_learning_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
</td>
<td>
<a target="_blank" href="https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/master/02_end_to_end_machine_learning_project.ipynb"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" /></a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Get the data
```
import pandas as pd
import seaborn as sns  # needed for the boxplot below
housing = pd.read_csv("insurance.csv")
housing.head(7)
sns.boxplot(x="region", y="charges", data=housing)
housing.drop("region", axis=1, inplace=True)
housing.info()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=30, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
# to make this notebook's output identical at every run
np.random.seed(42)
# For illustration only. Sklearn has train_test_split()
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
len(train_set)
len(test_set)
from zlib import crc32
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32
def split_train_test_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
```
The implementation of `test_set_check()` above works fine in both Python 2 and Python 3. In earlier releases, the following implementation was proposed, which supported any hash function, but was much slower and did not support Python 2:
```
test_set.head()
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["smoker"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
strat_test_set["smoker"].value_counts() / len(strat_test_set)
housing["smoker"].value_counts() / len(housing)
```
# Discover and visualize the data to gain insights
```
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="age", y="charges")
save_fig("bad_visualization_plot")
housing.plot(kind="scatter", x="age", y="charges", alpha=0.4,
s=housing["bmi"], figsize=(10,7), sharex=False)
plt.legend()
save_fig("housing_prices_scatterplot")
corr_matrix = housing.corr()
corr_matrix["charges"].sort_values(ascending=False)
# from pandas.tools.plotting import scatter_matrix # For older versions of Pandas
from pandas.plotting import scatter_matrix
attributes = ["charges", "age", "bmi",
"children"]
scatter_matrix(housing[attributes], figsize=(12, 8))
save_fig("scatter_matrix_plot")
housing.plot(kind="scatter", x="bmi", y="charges",
alpha=0.1)
plt.axis([15, 60, 0, 55000])
save_fig("bmi_vs_charges_scatterplot")
corr_matrix = housing.corr()
corr_matrix["charges"].sort_values(ascending=False)
housing.describe()
```
# Prepare the data for Machine Learning algorithms
```
housing = strat_train_set.drop("charges", axis=1) # drop labels for training set
housing_labels = strat_train_set["charges"].copy()
strat_train_set.head()
sample_incomplete_rows = housing[housing.isnull().any(axis=1)].head()
sample_incomplete_rows
median = housing["children"].median()
sample_incomplete_rows["children"].fillna(median, inplace=True) # option 3
sample_incomplete_rows
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
```
Remove the text attribute because median can only be calculated on numerical attributes:
```
housing_num = housing.select_dtypes(include=[np.number])
imputer.fit(housing_num)
imputer.statistics_
```
Check that this is the same as manually computing the median of each attribute:
```
housing_num.median().values
```
Transform the training set:
```
X = imputer.transform(housing_num)
type(X)
housing_tr = pd.DataFrame(X, columns=housing_num.columns,
index=housing.index)
housing_tr.loc[sample_incomplete_rows.index.values]
imputer.strategy
housing_tr = pd.DataFrame(X, columns=housing_num.columns,
index=housing_num.index)
housing_tr.head()
```
Now let's preprocess the categorical input features, `sex` and `smoker`:
```
housing_cat = housing[["sex", "smoker"]]
housing_cat.head(10)
housing_ord = housing[["children"]]
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
housing_ord_encoded = ordinal_encoder.fit_transform(housing_ord)
housing_ord_encoded
from sklearn.preprocessing import OneHotEncoder
cat_encoder = OneHotEncoder()
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
```
By default, the `OneHotEncoder` class returns a SciPy sparse matrix, but we can convert it to a dense array if needed by calling the `toarray()` method:
```
housing_cat_1hot.toarray()
```
Alternatively, you can set `sparse=False` when creating the `OneHotEncoder`:
```
cat_encoder = OneHotEncoder(sparse=False)
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
cat_encoder.categories_
```
Let's create a custom transformer to add extra attributes:
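The custom-transformer cell from the original notebook is not included in this adaptation, so here is a minimal sketch of the pattern. The `bmi_per_age` ratio is a purely hypothetical extra feature, used only to show the `fit`/`transform` duck typing that pipelines rely on:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    """Append a hypothetical bmi/age ratio to a numeric array whose
    first two columns are assumed to be age and bmi."""
    def fit(self, X, y=None):
        return self  # stateless: nothing to learn
    def transform(self, X):
        bmi_per_age = X[:, 1] / X[:, 0]
        return np.c_[X, bmi_per_age]

attr_adder = CombinedAttributesAdder()
housing_extra = attr_adder.transform(np.array([[40.0, 20.0], [25.0, 30.0]]))
```

Because it exposes `fit` and `transform`, such a class can be dropped straight into a `Pipeline` like the one built below.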
Now let's build a pipeline for preprocessing the numerical attributes:
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr
from sklearn.compose import ColumnTransformer
num_attribs = ["age", "bmi"]
cat_attribs = ["sex", "smoker"]
ord_attribs = ["children"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
("ord", OrdinalEncoder(), ord_attribs)
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
housing_prepared.shape
housing_prepared[0:10]
```
# Select and train a model
```
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# let's try the full preprocessing pipeline on a few training instances
some_data = housing.iloc[:4]
some_labels = housing_labels.iloc[:4]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:", lin_reg.predict(some_data_prepared))
```
Compare against the actual values:
```
print("Labels:", list(some_labels))
some_data_prepared
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
```
**Note**: since Scikit-Learn 0.22, you can get the RMSE directly by calling the `mean_squared_error()` function with `squared=False`.
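For instance, with made-up numbers (on the newest scikit-learn releases the `squared` parameter has been replaced by a separate `root_mean_squared_error` function, so the portable form below just takes the square root explicitly):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
# On scikit-learn 0.22-1.5 the same value comes from
# mean_squared_error(y_true, y_pred, squared=False).
```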
```
from sklearn.metrics import mean_absolute_error
lin_mae = mean_absolute_error(housing_labels, housing_predictions)
lin_mae
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
```
# Fine-tune your model
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(tree_rmse_scores)
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
```
**Note**: we specify `n_estimators=100` to be future-proof since the default value is going to change to 100 in Scikit-Learn 0.22 (for simplicity, this is not shown in the book).
```
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(housing_prepared, housing_labels)
housing_predictions = forest_reg.predict(housing_prepared)
forest_mse = mean_squared_error(housing_labels, housing_predictions)
forest_rmse = np.sqrt(forest_mse)
forest_rmse
from sklearn.model_selection import cross_val_score
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
pd.Series(np.sqrt(-scores)).describe()
from sklearn.svm import SVR
svm_reg = SVR(kernel="linear")
svm_reg.fit(housing_prepared, housing_labels)
housing_predictions = svm_reg.predict(housing_prepared)
svm_mse = mean_squared_error(housing_labels, housing_predictions)
svm_rmse = np.sqrt(svm_mse)
svm_rmse
from sklearn.model_selection import GridSearchCV
param_grid = [
# try 12 (3×4) combinations of hyperparameters
{'n_estimators': [10, 30, 50], 'max_features': [4, 6, 8, 10]},
# then try 6 (2×3) combinations with bootstrap set as False
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor(random_state=42)
# train across 5 folds, that's a total of (12+6)*5=90 rounds of training
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
scoring='neg_mean_squared_error',
return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)
```
The best hyperparameter combination found:
```
grid_search.best_params_
grid_search.best_estimator_
```
Let's look at the score of each hyperparameter combination tested during the grid search:
```
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
pd.DataFrame(grid_search.cv_results_)
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distribs = {
'n_estimators': randint(low=1, high=200),
'max_features': randint(low=1, high=8),
}
forest_reg = RandomForestRegressor(random_state=42)
rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,
n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
cvres = rnd_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
cat_encoder_list = list(full_pipeline.named_transformers_["cat"].categories_)
cat_one_hot_attribs = [item for sublist in cat_encoder_list for item in sublist]
attributes = num_attribs + cat_one_hot_attribs + ord_attribs
sorted(zip(feature_importances, attributes), reverse=True)
#list(zip(feature_importances, attributes))
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("charges", axis=1)
y_test = strat_test_set["charges"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
```
We can compute a 95% confidence interval for the test RMSE:
```
from scipy import stats
confidence = 0.95
squared_errors = (final_predictions - y_test) ** 2
np.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,
loc=squared_errors.mean(),
scale=stats.sem(squared_errors)))
```
We could compute the interval manually like this:
```
m = len(squared_errors)
mean = squared_errors.mean()
tscore = stats.t.ppf((1 + confidence) / 2, df=m - 1)
tmargin = tscore * squared_errors.std(ddof=1) / np.sqrt(m)
np.sqrt(mean - tmargin), np.sqrt(mean + tmargin)
```
Alternatively, we could use z-scores rather than t-scores:
```
zscore = stats.norm.ppf((1 + confidence) / 2)
zmargin = zscore * squared_errors.std(ddof=1) / np.sqrt(m)
np.sqrt(mean - zmargin), np.sqrt(mean + zmargin)
```
# Extra material
## A full pipeline with both preparation and prediction
```
full_pipeline_with_predictor = Pipeline([
("preparation", full_pipeline),
("linear", LinearRegression())
])
full_pipeline_with_predictor.fit(housing, housing_labels)
full_pipeline_with_predictor.predict(some_data)
```
## Model persistence using joblib
```
my_model = full_pipeline_with_predictor
import joblib
joblib.dump(my_model, "my_model.pkl") # DIFF
#...
my_model_loaded = joblib.load("my_model.pkl") # DIFF
```
Congratulations! You already know quite a lot about Machine Learning. :)
| github_jupyter |
```
import pandas as pd
import numpy as np
import requests
import psycopg2
import json
import simplejson
import urllib
import config
import ast
from operator import itemgetter
from sklearn.cluster import KMeans
from sqlalchemy import create_engine
!pip install --upgrade pip
!pip install sqlalchemy
!pip install psycopg2
!pip install simplejson
!pip install config
conn_str = "dbname='travel_with_friends' user='zoesh' host='localhost'"
# conn_str = "dbname='travel_with_friends' user='Zoesh' host='localhost'"
import math
import random
def distL2(p1, p2):
    """Compute the L2-norm (Euclidean) distance between two points.
    The distance is rounded to the closest integer, for compatibility
    with the TSPLIB convention.
    The two points are located on coordinates (x1,y1) and (x2,y2),
    sent as parameters"""
    (x1, y1), (x2, y2) = p1, p2
    xdiff = x2 - x1
    ydiff = y2 - y1
    return int(math.sqrt(xdiff*xdiff + ydiff*ydiff) + .5)
def distL1(p1, p2):
    """Compute the L1-norm (Manhattan) distance between two points.
    The distance is rounded to the closest integer, for compatibility
    with the TSPLIB convention.
    The two points are located on coordinates (x1,y1) and (x2,y2),
    sent as parameters"""
    (x1, y1), (x2, y2) = p1, p2
    return int(abs(x2-x1) + abs(y2-y1) + .5)
def mk_matrix(coord, dist):
"""Compute a distance matrix for a set of points.
Uses function 'dist' to calculate distance between
any two points. Parameters:
-coord -- list of tuples with coordinates of all points, [(x1,y1),...,(xn,yn)]
-dist -- distance function
"""
n = len(coord)
D = {} # dictionary to hold n times n matrix
for i in range(n-1):
for j in range(i+1,n):
[x1,y1] = coord[i]
[x2,y2] = coord[j]
D[i,j] = dist((x1,y1), (x2,y2))
D[j,i] = D[i,j]
return n,D
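# The construction above can be sanity-checked with a tiny inline demo
# (re-implemented here so it is self-contained; the three points are
# invented): the matrix must be symmetric, and (0,0)-(3,4) forms a
# 3-4-5 triangle, so its rounded distance is 5.
def _demo_mk_matrix():
    pts = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
    n = len(pts)
    D = {}
    for i in range(n-1):
        for j in range(i+1, n):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            D[i, j] = int(((x2-x1)**2 + (y2-y1)**2) ** 0.5 + .5)
            D[j, i] = D[i, j]
    return n, D
_demo_n, _demo_D = _demo_mk_matrix()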
def read_tsplib(filename):
"basic function for reading a TSP problem on the TSPLIB format"
"NOTE: only works for 2D euclidean or manhattan distances"
f = open(filename, 'r');
line = f.readline()
while line.find("EDGE_WEIGHT_TYPE") == -1:
line = f.readline()
if line.find("EUC_2D") != -1:
dist = distL2
elif line.find("MAN_2D") != -1:
dist = distL1
    else:
        print("cannot deal with non-euclidean or non-manhattan distances")
        raise Exception
while line.find("NODE_COORD_SECTION") == -1:
line = f.readline()
xy_positions = []
while 1:
line = f.readline()
if line.find("EOF") != -1: break
(i,x,y) = line.split()
x = float(x)
y = float(y)
xy_positions.append((x,y))
n,D = mk_matrix(xy_positions, dist)
return n, xy_positions, D
def mk_closest(D, n):
"""Compute a sorted list of the distances for each of the nodes.
For each node, the entry is in the form [(d1,i1), (d2,i2), ...]
where each tuple is a pair (distance,node).
"""
C = []
for i in range(n):
dlist = [(D[i,j], j) for j in range(n) if j != i]
dlist.sort()
C.append(dlist)
return C
def length(tour, D):
"""Calculate the length of a tour according to distance matrix 'D'."""
z = D[tour[-1], tour[0]] # edge from last to first city of the tour
for i in range(1,len(tour)):
z += D[tour[i], tour[i-1]] # add length of edge from city i-1 to i
return z
def randtour(n):
    """Construct a random tour of size 'n'."""
    sol = list(range(n))  # set solution equal to [0,1,...,n-1]
    random.shuffle(sol)   # place it in a random order
    return sol
def nearest(last, unvisited, D):
"""Return the index of the node which is closest to 'last'."""
near = unvisited[0]
min_dist = D[last, near]
for i in unvisited[1:]:
if D[last,i] < min_dist:
near = i
min_dist = D[last, near]
return near
def nearest_neighbor(n, i, D):
"""Return tour starting from city 'i', using the Nearest Neighbor.
Uses the Nearest Neighbor heuristic to construct a solution:
- start visiting city i
- while there are unvisited cities, follow to the closest one
- return to city i
"""
    unvisited = list(range(n))
    unvisited.remove(i)
    last = i
    tour = [i]
    while unvisited != []:
        nxt = nearest(last, unvisited, D)
        tour.append(nxt)
        unvisited.remove(nxt)
        last = nxt
    return tour
def exchange_cost(tour, i, j, D):
"""Calculate the cost of exchanging two arcs in a tour.
Determine the variation in the tour length if
arcs (i,i+1) and (j,j+1) are removed,
and replaced by (i,j) and (i+1,j+1)
(note the exception for the last arc).
Parameters:
-t -- a tour
-i -- position of the first arc
-j>i -- position of the second arc
"""
n = len(tour)
a,b = tour[i],tour[(i+1)%n]
c,d = tour[j],tour[(j+1)%n]
return (D[a,c] + D[b,d]) - (D[a,b] + D[c,d])
def exchange(tour, tinv, i, j):
"""Exchange arcs (i,i+1) and (j,j+1) with (i,j) and (i+1,j+1).
For the given tour 't', remove the arcs (i,i+1) and (j,j+1) and
insert (i,j) and (i+1,j+1).
This is done by inverting the sublist of cities between i and j.
"""
n = len(tour)
if i>j:
i,j = j,i
assert i>=0 and i<j-1 and j<n
path = tour[i+1:j+1]
path.reverse()
tour[i+1:j+1] = path
for k in range(i+1,j+1):
tinv[tour[k]] = k
def improve(tour, z, D, C):
"""Try to improve tour 't' by exchanging arcs; return improved tour length.
If possible, make a series of local improvements on the solution 'tour',
using a breadth first strategy, until reaching a local optimum.
"""
n = len(tour)
tinv = [0 for i in tour]
for k in range(n):
tinv[tour[k]] = k # position of each city in 't'
for i in range(n):
a,b = tour[i],tour[(i+1)%n]
dist_ab = D[a,b]
improved = False
for dist_ac,c in C[a]:
if dist_ac >= dist_ab:
break
j = tinv[c]
d = tour[(j+1)%n]
dist_cd = D[c,d]
dist_bd = D[b,d]
delta = (dist_ac + dist_bd) - (dist_ab + dist_cd)
if delta < 0: # exchange decreases length
exchange(tour, tinv, i, j);
z += delta
improved = True
break
if improved:
continue
for dist_bd,d in C[b]:
if dist_bd >= dist_ab:
break
j = tinv[d]-1
if j==-1:
j=n-1
c = tour[j]
dist_cd = D[c,d]
dist_ac = D[a,c]
delta = (dist_ac + dist_bd) - (dist_ab + dist_cd)
if delta < 0: # exchange decreases length
exchange(tour, tinv, i, j);
z += delta
break
return z
def localsearch(tour, z, D, C=None):
"""Obtain a local optimum starting from solution t; return solution length.
Parameters:
tour -- initial tour
z -- length of the initial tour
D -- distance matrix
"""
n = len(tour)
    if C is None:
C = mk_closest(D, n) # create a sorted list of distances to each node
while 1:
newz = improve(tour, z, D, C)
if newz < z:
z = newz
else:
break
return z
def multistart_localsearch(k, n, D, report=None):
"""Do k iterations of local search, starting from random solutions.
Parameters:
-k -- number of iterations
-D -- distance matrix
-report -- if not None, call it to print verbose output
Returns best solution and its cost.
"""
C = mk_closest(D, n) # create a sorted list of distances to each node
bestt=None
bestz=None
for i in range(0,k):
tour = randtour(n)
z = length(tour, D)
z = localsearch(tour, z, D, C)
        if bestz is None or z < bestz:
bestz = z
bestt = list(tour)
if report:
report(z, tour)
return bestt, bestz
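# End-to-end sketch of the heuristics above on a tiny made-up instance.
# It is deliberately self-contained (own distance matrix, greedy
# nearest-neighbour construction and a naive 2-opt loop) rather than
# calling the helpers in this cell, so it runs as-is under Python 3.
def _demo_tsp():
    import math as _m
    pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
    n = len(pts)
    D = {(i, j): _m.hypot(pts[i][0]-pts[j][0], pts[i][1]-pts[j][1])
         for i in range(n) for j in range(n)}
    def tour_len(t):
        return sum(D[t[k-1], t[k]] for k in range(n))
    # greedy nearest-neighbour construction from city 0
    unvisited = list(range(1, n))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: D[tour[-1], c])
        tour.append(nxt)
        unvisited.remove(nxt)
    # naive 2-opt: reverse segments while that shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(n-1):
            for j in range(i+2, n):
                cand = tour[:i+1] + tour[i+1:j+1][::-1] + tour[j+1:]
                if tour_len(cand) < tour_len(tour) - 1e-12:
                    tour, improved = cand, True
    return tour, tour_len(tour)
_demo_tour, _demo_len = _demo_tsp()
# for this 2x3 grid the optimal tour is the perimeter, length 6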
# db_name = "travel_with_friends"
# TABLES ={}
# TABLES['full_trip_table'] = (
# "CREATE TABLE `full_trip_table` ("
# " `user_id` int(11) NOT NULL AUTO_INCREMENT,"
# " `full_trip_id` date NOT NULL,"
# " `trip_location_ids` varchar(14) NOT NULL,"
# " `default` varchar(16) NOT NULL,"
# " `county` enum('M','F') NOT NULL,"
# " `state` date NOT NULL,"
# " `details` ,"
# " `n_days`,"
# " PRIMARY KEY (`full_trip_id`)"
# ") ENGINE=InnoDB")
# def create_tables():
# """ create tables in the PostgreSQL database"""
# commands = (
# """
# CREATE TABLE full_trip_table (
# index INTEGER PRIMARY KEY,
# user_id VARCHAR(225) NOT NULL,
# full_trip_id VARCHAR(225) NOT NULL,
# trip_location_ids VARCHAR(225),
# default BOOLEAN NOT NULL,
# county VARCHAR(225) NOT NULL,
# state VARCHAR(225) NOT NULL,
# details VARCHAR(MAX),
# n_days VARCHAR(225) NOT NULL
# )
# """,
# """ CREATE TABLE day_trip_table (
# trip_locations_id
# full_day
# default
# county
# state
# details
# )
# """,
# """
# CREATE TABLE poi_detail_table (
# part_id INTEGER PRIMARY KEY,
# file_extension VARCHAR(5) NOT NULL,
# drawing_data BYTEA NOT NULL,
# FOREIGN KEY (part_id)
# REFERENCES parts (part_id)
# ON UPDATE CASCADE ON DELETE CASCADE
# )
# """,
# """
# CREATE TABLE google_travel_time_table (
# index INTEGER PRIMARY KEY,
# id_ VARCHAR NOT NULL,
# orig_name VARCHAR,
# orig_idx VARCHAR,
# dest_name VARCHAR,
# dest_idx VARCHAR,
# orig_coord0 INTEGER,
# orig_coord1 INTEGER,
# dest_coord0 INTEGER,
# dest_coord1 INTEGER,
# orig_coords VARCHAR,
# dest_coords VARCHAR,
# google_driving_url VARCHAR,
# google_walking_url VARCHAR,
# driving_result VARCHAR,
# walking_result VARCHAR,
# google_driving_time INTEGER,
# google_walking_time INTEGER
# )
# """)
# conn = None
# try:
# # read the connection parameters
# params = config()
# # connect to the PostgreSQL server
# conn = psycopg2.connect(**params)
# cur = conn.cursor()
# # create table one by one
# for command in commands:
# cur.execute(command)
# # close communication with the PostgreSQL database server
# cur.close()
# # commit the changes
# conn.commit()
# except (Exception, psycopg2.DatabaseError) as error:
# print(error)
# finally:
# if conn is not None:
# conn.close()
# full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])
# day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details'])
# google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\
# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\
# 'google_walking_url','driving_result','walking_result','google_driving_time',\
# 'google_walking_time'])
# read poi details csv file
poi_detail = pd.read_csv("./step9_poi.csv", index_col=0)
poi_detail['address'] = None
poi_detail['rating']=poi_detail['rating'].fillna(0)
#read US city state and county csv file
df_counties = pd.read_csv('./us_cities_states_counties.csv',sep='|')
#find counties without duplicate
df_counties_u = df_counties.drop('City alias',axis = 1).drop_duplicates()
def init_db_tables():
full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])
day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details','event_type','event_ids'])
google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\
'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\
'google_walking_url','driving_result','walking_result','google_driving_time',\
'google_walking_time'])
day_trip_locations_table.loc[0] = ['CALIFORNIA-SAN-DIEGO-1-3-0', True, True, 'SAN DIEGO', 'California',
["{'address': '15500 San Pasqual Valley Rd, Escondido, CA 92027, USA', 'id': 2259, 'day': 0, 'name': u'San Diego Zoo Safari Park'}", "{'address': 'Safari Walk, Escondido, CA 92027, USA', 'id': 2260, 'day': 0, 'name': u'Meerkat'}", "{'address': '1999 Citracado Parkway, Escondido, CA 92029, USA', 'id': 3486, 'day': 0, 'name': u'Stone'}", "{'address': '1999 Citracado Parkway, Escondido, CA 92029, USA', 'id': 3487, 'day': 0, 'name': u'Stone Brewery'}", "{'address': 'Mount Woodson Trail, Poway, CA 92064, USA', 'id': 4951, 'day': 0, 'name': u'Lake Poway'}", "{'address': '17130 Mt Woodson Rd, Ramona, CA 92065, USA', 'id': 4953, 'day': 0, 'name': u'Potato Chip Rock'}", "{'address': '17130 Mt Woodson Rd, Ramona, CA 92065, USA', 'id': 4952, 'day': 0, 'name': u'Mt. Woodson'}"],
'big','[2259, 2260,3486,3487,4951,4953,4952]']
google_travel_time_table.loc[0] = ['439300002871', u'Moonlight Beach', 4393.0,
u'Carlsbad Flower Fields', 2871.0, -117.29692141333341,
33.047769600024424, -117.3177652511278, 33.124079753475236,
'33.0477696,-117.296921413', '33.1240797535,-117.317765251',
'https://maps.googleapis.com/maps/api/distancematrix/json?origins=33.0477696,-117.296921413&destinations=33.1240797535,-117.317765251&mode=driving&language=en-EN&sensor=false&key=AIzaSyDJh9EWCA_v0_B3SvjzjUA3OSVYufPJeGE',
'https://maps.googleapis.com/maps/api/distancematrix/json?origins=33.0477696,-117.296921413&destinations=33.1240797535,-117.317765251&mode=walking&language=en-EN&sensor=false&key=AIzaSyDJh9EWCA_v0_B3SvjzjUA3OSVYufPJeGE',
"{'status': 'OK', 'rows': [{'elements': [{'duration': {'text': '14 mins', 'value': 822}, 'distance': {'text': '10.6 km', 'value': 10637}, 'status': 'OK'}]}], 'origin_addresses': ['233 C St, Encinitas, CA 92024, USA'], 'destination_addresses': ['5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA']}",
"{'status': 'OK', 'rows': [{'elements': [{'duration': {'text': '2 hours 4 mins', 'value': 7457}, 'distance': {'text': '10.0 km', 'value': 10028}, 'status': 'OK'}]}], 'origin_addresses': ['498 B St, Encinitas, CA 92024, USA'], 'destination_addresses': ['5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA']}",
13.0, 124.0]
full_trip_table.loc[0] = ['gordon_lee01', 'CALIFORNIA-SAN-DIEGO-1-3',
"['CALIFORNIA-SAN-DIEGO-1-3-0', 'CALIFORNIA-SAN-DIEGO-1-3-1', 'CALIFORNIA-SAN-DIEGO-1-3-2']",
True, 'SAN DIEGO', 'California',
'["{\'address\': \'15500 San Pasqual Valley Rd, Escondido, CA 92027, USA\', \'id\': 2259, \'day\': 0, \'name\': u\'San Diego Zoo Safari Park\'}", "{\'address\': \'Safari Walk, Escondido, CA 92027, USA\', \'id\': 2260, \'day\': 0, \'name\': u\'Meerkat\'}", "{\'address\': \'1999 Citracado Parkway, Escondido, CA 92029, USA\', \'id\': 3486, \'day\': 0, \'name\': u\'Stone\'}", "{\'address\': \'1999 Citracado Parkway, Escondido, CA 92029, USA\', \'id\': 3487, \'day\': 0, \'name\': u\'Stone Brewery\'}", "{\'address\': \'Mount Woodson Trail, Poway, CA 92064, USA\', \'id\': 4951, \'day\': 0, \'name\': u\'Lake Poway\'}", "{\'address\': \'17130 Mt Woodson Rd, Ramona, CA 92065, USA\', \'id\': 4953, \'day\': 0, \'name\': u\'Potato Chip Rock\'}", "{\'address\': \'17130 Mt Woodson Rd, Ramona, CA 92065, USA\', \'id\': 4952, \'day\': 0, \'name\': u\'Mt. Woodson\'}", "{\'address\': \'1 Legoland Dr, Carlsbad, CA 92008, USA\', \'id\': 2870, \'day\': 1, \'name\': u\'Legoland\'}", "{\'address\': \'5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA\', \'id\': 2871, \'day\': 1, \'name\': u\'Carlsbad Flower Fields\'}", "{\'address\': \'211-359 The Strand N, Oceanside, CA 92054, USA\', \'id\': 2089, \'day\': 1, \'name\': u\'Oceanside Pier\'}", "{\'address\': \'211-359 The Strand N, Oceanside, CA 92054, USA\', \'id\': 2090, \'day\': 1, \'name\': u\'Pier\'}", "{\'address\': \'1016-1024 Neptune Ave, Encinitas, CA 92024, USA\', \'id\': 2872, \'day\': 1, \'name\': u\'Encinitas\'}", "{\'address\': \'625 Pan American Rd E, San Diego, CA 92101, USA\', \'id\': 147, \'day\': 2, \'name\': u\'Balboa Park\'}", "{\'address\': \'1849-1863 Zoo Pl, San Diego, CA 92101, USA\', \'id\': 152, \'day\': 2, \'name\': u\'San Diego Zoo\'}", "{\'address\': \'701-817 Coast Blvd, La Jolla, CA 92037, USA\', \'id\': 148, \'day\': 2, \'name\': u\'La Jolla\'}", "{\'address\': \'10051-10057 Pebble Beach Dr, Santee, CA 92071, USA\', \'id\': 4630, \'day\': 2, \'name\': u\'Santee Lakes\'}", "{\'address\': \'Lake Murray Bike 
Path, La Mesa, CA 91942, USA\', \'id\': 4545, \'day\': 2, \'name\': u\'Lake Murray\'}", "{\'address\': \'4905 Mt Helix Dr, La Mesa, CA 91941, USA\', \'id\': 4544, \'day\': 2, \'name\': u\'Mt. Helix\'}", "{\'address\': \'1720 Melrose Ave, Chula Vista, CA 91911, USA\', \'id\': 1325, \'day\': 2, \'name\': u\'Thick-billed Kingbird\'}", "{\'address\': \'711 Basswood Ave, Imperial Beach, CA 91932, USA\', \'id\': 1326, \'day\': 2, \'name\': u\'Lesser Sand-Plover\'}"]',
3.0]
engine = create_engine('postgresql://zoesh@localhost:5432/travel_with_friends')
# full_trip_table = pd.read_csv('./full_trip_table.csv', index_col= 0)
# full_trip_table.to_sql('full_trip_table', engine,if_exists='append')
full_trip_table.to_sql('full_trip_table',engine, if_exists = "replace")
day_trip_locations_table.to_sql('day_trip_table',engine, if_exists = "replace")
google_travel_time_table.to_sql('google_travel_time_table',engine, if_exists = "replace")
poi_detail.to_sql('poi_detail_table',engine, if_exists = "replace")
df_counties = pd.read_csv('/Users/zoesh/Desktop/travel_with_friends/travel_with_friends/us_cities_states_counties.csv',sep='|')
df_counties_u = df_counties.drop('City alias',axis = 1).drop_duplicates()
df_counties_u.columns = ["city","state_abb","state","county"]
df_counties_u.to_sql('county_table',engine, if_exists = "replace")
init_db_tables()
def cold_start_places(df, county, state, city, number_days, first_day_full = True, last_day_full = True):
if len(county.values) != 0:
county = county.values[0]
temp_df = df[(df['county'] == county) & (df['state'] == state)]
else:
temp_df = df[(df['city'] == city) & (df['state'] == state)]
return county, temp_df
def find_county(state, city):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
    cur.execute("select county from county_table where city = %s and state = %s;", (city.title(), state.title()))
county = cur.fetchone()
conn.close()
if county:
return county[0]
else:
return None
def db_start_location(county, state, city):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
    if county:
        cur.execute("select index, coord0, coord1, adjusted_normal_time_spent, poi_rank, rating from poi_detail_table where county = %s and state = %s;", (county.upper(), state))
    else:
        print("else")
        cur.execute("select index, coord0, coord1, adjusted_normal_time_spent, poi_rank, rating from poi_detail_table where city = %s and state = %s;", (city, state))
a = cur.fetchall()
conn.close()
return np.array(a)
a1= db_start_location('San Francisco',"California","San Francisco")
poi_coords = a1[:,1:3]
n_days = 1
current_events =[]
big_ix, med_ix, small_ix =[],[],[]
kmeans = KMeans(n_clusters=n_days).fit(poi_coords)
i = 0
for ix, label in enumerate(kmeans.labels_):
# print ix, label
if label == i:
time = a1[ix,3]
event_ix = a1[ix,0]
current_events.append(event_ix)
if time > 180 :
big_ix.append(ix)
elif time >= 120 :
med_ix.append(ix)
else:
small_ix.append(ix)
print(big_ix, med_ix, small_ix)
# big_ = a1[[big_ix],4:]
# print med_ix
# print a1
big_ = a1[big_ix][:,[0,4,5]]
med_ = a1[med_ix][:,[0,4,5]]
small_ = a1[small_ix][:,[0,4,5]]
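# Self-contained sketch of the bucket-by-duration idea above: cluster a
# toy set of POI coordinates into day groups with KMeans, then split one
# cluster's events into big/medium/small by time spent. The 180/120-minute
# thresholds mirror the cell above; coordinates and durations are invented.
def _demo_day_buckets():
    import numpy as _np
    from sklearn.cluster import KMeans as _KMeans
    coords = _np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.0]])
    minutes = _np.array([200, 150, 90, 30])
    labels = _KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
    big, med, small = [], [], []
    for ix, lab in enumerate(labels):
        if lab != labels[0]:
            continue  # keep only the cluster containing the first POI
        t = minutes[ix]
        (big if t > 180 else med if t >= 120 else small).append(ix)
    return big, med, small
_demo_big, _demo_med, _demo_small = _demo_day_buckets()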
list_a1 = sorted_events(a1,small_ix)
from operator import itemgetter
med_s= sorted(med_, key=itemgetter(1,2))
a = [[1,3,3],[2,2,5],[3,4,4],[4,2,2]]
# [sorted(a, key=itemgetter(1,2)) for i in range(100)]
b = sorted(a, key=itemgetter(1,2))
a.sort(key=lambda k: (k[1], -k[2]), reverse=False)
# print med_s
print(a)
def default_cold_start_places(df,df_counties_u, day_trip_locations,full_trip_table,df_poi_travel_info,number_days = [1,2,3,4,5]):
df_c = df_counties_u.groupby(['State full','County']).count().reset_index()
for state, county,_,_ in df_c.values[105:150]:
temp_df = df[(df['county'] == county) & (df['state'] == state)]
if temp_df.shape[0]!=0:
if sum(temp_df.adjusted_normal_time_spent) < 360:
number_days = [1]
elif sum(temp_df.adjusted_normal_time_spent) < 720:
number_days = [1,2]
big_events = temp_df[temp_df.adjusted_normal_time_spent > 180]
med_events = temp_df[(temp_df.adjusted_normal_time_spent>= 120)&(temp_df.adjusted_normal_time_spent<=180)]
small_events = temp_df[temp_df.adjusted_normal_time_spent < 120]
for i in number_days:
n_days = i
full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info = \
default_search_cluster_events(df, df_counties_u, county, state, big_events,med_events, \
small_events, temp_df, n_days,day_trip_locations, full_trip_table,\
df_poi_travel_info)
print county, state
print full_trip_table.shape, len(day_trip_locations), new_trip_df1.shape, df_poi_travel_info.shape
return None
# full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])
# day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details'])
# google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\
# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\
# 'google_walking_url','driving_result','walking_result','google_driving_time',\
# 'google_walking_time'])
# google_travel_time_table.loc[0] = ['000000000001', "home", '0000', 'space', '0001', 999 ,999, 999.1,999.1, "999,999","999.1,999.1","http://google.com","http://google.com", "", "", 0, 0 ]
# google_travel_time_table.index=google_travel_time_table.index.astype(int)
# engine = create_engine('postgresql://Gon@localhost:5432/travel_with_friends')
# # full_trip_table = pd.read_csv('./full_trip_table.csv', index_col= 0)
# # full_trip_table.to_sql('full_trip_table', engine,if_exists='append')
# full_trip_table.to_sql('full_trip_table',engine, if_exists = "append")
# day_trip_locations_table.to_sql('day_trip_table',engine, if_exists = "append")
# google_travel_time_table.to_sql('google_travel_time_table',engine, if_exists = "append")
# # df.to_sql('poi_detail_table',engine, if_exists = "append")
def trip_df_cloest_distance(trip_df, event_type):
points = trip_df[['coord0','coord1']].values.tolist()
n, D = mk_matrix(points, distL2) # create the distance matrix
if len(points) >= 3:
if event_type == 'big':
tour = nearest_neighbor(n, trip_df.shape[0]-1, D) # create a greedy tour, visiting city 'i' first
z = length(tour, D)
z = localsearch(tour, z, D)
elif event_type == 'med':
tour = nearest_neighbor(n, trip_df.shape[0]-2, D) # create a greedy tour, visiting city 'i' first
z = length(tour, D)
z = localsearch(tour, z, D)
else:
tour = nearest_neighbor(n, 0, D) # create a greedy tour, visiting city 'i' first
z = length(tour, D)
z = localsearch(tour, z, D)
return tour
else:
return range(len(points))
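`mk_matrix`, `distL2`, `nearest_neighbor`, `length` and `localsearch` come from an external TSP helper module that is not shown in this notebook. As a rough sketch of what the greedy step does, a minimal nearest-neighbor tour (without the local-search improvement pass; names here are illustrative) could look like this:

```python
import math

def dist_l2(p, q):
    # Euclidean distance between two (x, y) points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(points, start=0):
    """Greedy tour: from `start`, repeatedly visit the closest unvisited
    point. A stand-in for the external nearest_neighbor() helper; the real
    module also improves the tour with a local-search pass."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist_l2(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

pts = [(0, 0), (5, 5), (0, 1), (0, 2)]
print(nearest_neighbor_tour(pts, start=0))  # [0, 2, 3, 1]
```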
def get_event_ids_list(trip_locations_id):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select event_ids,event_type from day_trip_table where trip_locations_id = '%s' " %(trip_locations_id))
event_ids,event_type = cur.fetchone()
event_ids = ast.literal_eval(event_ids)
conn.close()
return event_ids,event_type
def db_event_cloest_distance(trip_locations_id=None,event_ids=None, event_type = 'add',new_event_id = None):
if new_event_id or not event_ids:
event_ids, event_type = get_event_ids_list(trip_locations_id)
if new_event_id:
event_ids.append(new_event_id)
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
print event_ids
points = np.zeros((len(event_ids), 3))
for i,v in enumerate(event_ids):
cur.execute("select index, coord0, coord1 from poi_detail_table where index = %i;"%(float(v)))
points[i] = cur.fetchone()
conn.close()
n,D = mk_matrix(points[:,1:], distL2)
if len(points) >= 3:
if event_type == 'add':
tour = nearest_neighbor(n, 0, D)
# create a greedy tour, visiting city 'i' first
z = length(tour, D)
z = localsearch(tour, z, D)
return np.array(event_ids)[tour], event_type
#need to figure out other cases
else:
tour = nearest_neighbor(n, 0, D)
# create a greedy tour, visiting city 'i' first
z = length(tour, D)
z = localsearch(tour, z, D)
return np.array(event_ids)[tour], event_type
else:
return np.array(event_ids), event_type
event_ids = np.array([1,2,3])
np.array(event_ids)
def check_full_trip_id(full_trip_id, debug):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select details from full_trip_table where full_trip_id = '%s'" %(full_trip_id))
a = cur.fetchone()
conn.close()
if bool(a):
if not debug:
return a[0]
else:
return True
else:
return False
def check_day_trip_id(day_trip_id, debug):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select details from day_trip_table where trip_locations_id = '%s'" %(day_trip_id))
a = cur.fetchone()
conn.close()
if bool(a):
if not debug:
return a[0]
else:
return True
else:
return False
def check_travel_time_id(new_id):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select google_driving_time from google_travel_time_table where id_ = '%s'" %(new_id))
a = cur.fetchone()
conn.close()
if bool(a):
return True
else:
return False
def sorted_events(info,ix):
    '''
    Find the event_id, ranking and rating columns,
    sort by ranking (ascending) then rating (descending as tie-breaker),
    and return the sorted list.
    '''
event_ = info[ix][:,[0,4,5]]
return np.array(sorted(event_, key=lambda x: (x[1], -x[2])))
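A quick sanity check of that sort key - `poi_rank` ascending, with `rating` descending as the tie-breaker:

```python
import numpy as np

# columns: [event_id, poi_rank, rating]
events = np.array([[1, 3, 3],
                   [2, 2, 5],
                   [3, 2, 9],
                   [4, 1, 4]])

# same key as sorted_events: rank ascending, rating descending on ties
ordered = np.array(sorted(events, key=lambda x: (x[1], -x[2])))
print(ordered[:, 0])  # [4 3 2 1]
```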
# test_ai_ids, type_= create_trip_df(big_,medium_,small_)
big_.shape, med_.shape, small_.shape
med_[0,1] ,small_[0,1] , med_[0,0]
# small_[0:8,0].concatenate(medium_[0:2,0])
c = small_[0:10,]
d = med_[0:2,]
print np.vstack((c,d))
# print list(np.array(sorted(np.vstack((c,d)), key=lambda x: (x[1],-x[2])))[:,0])
print list(np.array(sorted(np.vstack((small_[0:10,:],med_)), key=lambda x: (x[1],-x[2])))[:,0])
# np.vstack((small_[0:10,],med_))
a_ids, a_type=create_event_id_list(big_,med_,small_)
# conn = psycopg2.connect(conn_str)
# cur = conn.cursor()
# for i,v in enumerate(a_ids):
# # print i, v, type(v)
# cur.execute("select index, coord0, coord1 from poi_detail_table where index = %i;"%(float(v)))
# aaa = cur.fetchone()
# # print aaa
# conn.close()
# new_a_ids, new_a_type=db_event_cloest_distance(event_ids = a_ids, event_type = a_type)
# print new_a_ids, new_a_type
def create_event_id_list(big_,medium_,small_):
event_type = ''
if big_.shape[0] >= 1:
if (medium_.shape[0] < 2) or (big_[0,1] <= medium_[0,1]):
if small_.shape[0] >= 6:
                event_ids = np.concatenate((big_[0:1,0],small_[0:6,0]),axis=0)
else:
                event_ids = np.concatenate((big_[0:1,0],small_[:,0]),axis=0)
event_type = 'big'
else:
if small_.shape[0] >= 8:
event_ids = np.concatenate((medium_[0:2,0],small_[0:8,0]),axis=0)
else:
event_ids = np.concatenate((medium_[0:2,0],small_[:,0]),axis=0)
event_type = 'med'
elif medium_.shape[0] >= 2:
if small_.shape[0] >= 8:
event_ids = np.concatenate((medium_[0:2,0],small_[0:8,0]),axis=0)
else:
event_ids = np.concatenate((medium_[0:2,0],small_[:,0]),axis=0)
event_type = 'med'
elif medium_.shape[0]> 0:
if small_.shape[0] >= 10:
event_ids = np.array(sorted(np.vstack((small_[0:10,:],medium_)), key=lambda x: (x[1],-x[2])))[:,0]
else:
event_ids = np.array(sorted(np.vstack((small_,medium_)), key=lambda x: (x[1],-x[2])))[:,0]
event_type = 'small'
    else:
        if small_.shape[0] > 0:
            event_ids = small_[:,0]
        else:
            event_ids = np.array([])  # no qualifying events in this cluster
        event_type = 'small'
    return event_ids, event_type
def create_trip_df(big_,medium_,small_):
event_type = ''
if big_.shape[0] >= 1:
if (medium_.shape[0] < 2) or (big_.iloc[0].poi_rank <= medium_.iloc[0].poi_rank):
if small_.shape[0] >= 6:
trip_df = small_.iloc[0:6].append(big_.iloc[0])
else:
trip_df = small_.append(big_.iloc[0])
event_type = 'big'
else:
if small_.shape[0] >= 8:
trip_df = small_.iloc[0:8].append(medium_.iloc[0:2])
else:
trip_df = small_.append(medium_.iloc[0:2])
event_type = 'med'
elif medium_.shape[0] >= 2:
if small_.shape[0] >= 8:
trip_df = small_.iloc[0:8].append(medium_.iloc[0:2])
else:
trip_df = small_.append(medium_.iloc[0:2])
event_type = 'med'
else:
if small_.shape[0] >= 10:
trip_df = small_.iloc[0:10].append(medium_).sort_values(['poi_rank', 'rating'], ascending=[True, False])
else:
trip_df = small_.append(medium_).sort_values(['poi_rank', 'rating'], ascending=[True, False])
event_type = 'small'
return trip_df, event_type
my_key = 'YOUR_GOOGLE_API_KEY'  # placeholder: supply your own Distance Matrix key; never commit real keys
def google_driving_walking_time(tour,trip_df,event_type):
poi_travel_time_df = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\
'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\
'google_walking_url','driving_result','walking_result','google_driving_time',\
'google_walking_time'])
ids_, orig_names,orid_idxs,dest_names,dest_idxs,orig_coord0s,orig_coord1s,dest_coord0s,dest_coord1s = [],[],[],[],[],[],[],[],[]
orig_coordss,dest_coordss,driving_urls,walking_urls,driving_results,walking_results,driving_times,walking_times = [],[],[],[],[],[],[],[]
trip_id_list=[]
for i in range(len(tour)-1):
id_ = str(trip_df.loc[trip_df.index[tour[i]]].name) + '0000'+str(trip_df.loc[trip_df.index[tour[i+1]]].name)
result_check_travel_time_id = check_travel_time_id(id_)
if not result_check_travel_time_id:
orig_name = trip_df.loc[trip_df.index[tour[i]]]['name']
orig_idx = trip_df.loc[trip_df.index[tour[i]]].name
dest_name = trip_df.loc[trip_df.index[tour[i+1]]]['name']
dest_idx = trip_df.loc[trip_df.index[tour[i+1]]].name
orig_coord0 = trip_df.loc[trip_df.index[tour[i]]]['coord0']
orig_coord1 = trip_df.loc[trip_df.index[tour[i]]]['coord1']
dest_coord0 = trip_df.loc[trip_df.index[tour[i+1]]]['coord0']
dest_coord1 = trip_df.loc[trip_df.index[tour[i+1]]]['coord1']
orig_coords = str(orig_coord1)+','+str(orig_coord0)
dest_coords = str(dest_coord1)+','+str(dest_coord0)
google_driving_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}".\
format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)
google_walking_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}".\
format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)
driving_result= simplejson.load(urllib.urlopen(google_driving_url))
walking_result= simplejson.load(urllib.urlopen(google_walking_url))
if driving_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':
google_driving_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}".\
format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)
driving_result= simplejson.load(urllib.urlopen(google_driving_url))
if walking_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':
google_walking_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}".\
format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)
walking_result= simplejson.load(urllib.urlopen(google_walking_url))
if (driving_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND') and (walking_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND'):
new_df = trip_df.drop(trip_df.iloc[tour[i+1]].name)
new_tour = trip_df_cloest_distance(new_df,event_type)
return google_driving_walking_time(new_tour,new_df, event_type)
try:
google_driving_time = driving_result['rows'][0]['elements'][0]['duration']['value']/60
            except:
                print driving_result
                google_driving_time = 9999  # fall back when the response has no duration
try:
google_walking_time = walking_result['rows'][0]['elements'][0]['duration']['value']/60
except:
google_walking_time = 9999
            poi_travel_time_df.loc[len(poi_travel_time_df)]=[id_,orig_name,orig_idx,dest_name,dest_idx,orig_coord0,orig_coord1,dest_coord0,\
dest_coord1,orig_coords,dest_coords,google_driving_url,google_walking_url,\
str(driving_result),str(walking_result),google_driving_time,google_walking_time]
driving_result = str(driving_result).replace("'", '"')
walking_result = str(walking_result).replace("'", '"')
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select max(index) from google_travel_time_table")
index = cur.fetchone()[0]+1
# print "startindex:", index , type(index)
# index += 1
# print "end index: " ,index
cur.execute("INSERT INTO google_travel_time_table VALUES (%i, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s','%s', '%s', '%s', '%s', '%s', '%s', %s, %s);"%(index, id_, orig_name, orig_idx, dest_name, dest_idx, orig_coord0, orig_coord1, dest_coord0,\
dest_coord1, orig_coords, dest_coords, google_driving_url, google_walking_url,\
str(driving_result), str(walking_result), google_driving_time, google_walking_time))
conn.commit()
conn.close()
else:
trip_id_list.append(id_)
return tour, trip_df, poi_travel_time_df
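The parsing above follows the shape of a Distance Matrix JSON payload. A small hand-made sample (illustrative only, not real API output) shows the duration extraction and the 9999 fallback in isolation; `driving_minutes` is a hypothetical helper name:

```python
def driving_minutes(result):
    """Extract the trip duration in whole minutes from a Distance
    Matrix-style response dict, returning the 9999 sentinel used above
    when no route was found."""
    element = result['rows'][0]['elements'][0]
    if element.get('status') != 'OK':
        return 9999
    return element['duration']['value'] // 60  # the API reports seconds

# Illustrative sample shaped like a Distance Matrix response (not real data)
sample = {'rows': [{'elements': [{'status': 'OK',
                                  'duration': {'value': 1500}}]}]}
print(driving_minutes(sample))  # 25
print(driving_minutes({'rows': [{'elements': [{'status': 'ZERO_RESULTS'}]}]}))  # 9999
```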
def db_google_driving_walking_time(event_ids, event_type):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
google_ids = []
driving_time_list = []
walking_time_list = []
name_list = []
for i,v in enumerate(event_ids[:-1]):
id_ = str(v) + '0000'+str(event_ids[i+1])
result_check_travel_time_id = check_travel_time_id(id_)
if not result_check_travel_time_id:
cur.execute("select name, coord0, coord1 from poi_detail_table where index = '%s'"%(v))
orig_name, orig_coord0, orig_coord1 = cur.fetchone()
orig_idx = v
cur.execute("select name, coord0, coord1 from poi_detail_table where index = '%s'"%(event_ids[i+1]))
dest_name, dest_coord0, dest_coord1 = cur.fetchone()
dest_idx = event_ids[i+1]
orig_coords = str(orig_coord1)+','+str(orig_coord0)
dest_coords = str(dest_coord1)+','+str(dest_coord0)
google_driving_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}".\
format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)
google_walking_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}".\
format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)
driving_result= simplejson.load(urllib.urlopen(google_driving_url))
walking_result= simplejson.load(urllib.urlopen(google_walking_url))
if driving_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':
google_driving_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}".\
format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)
driving_result= simplejson.load(urllib.urlopen(google_driving_url))
if walking_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':
google_walking_url = "https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}".\
format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)
walking_result= simplejson.load(urllib.urlopen(google_walking_url))
if (driving_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND') and (walking_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND'):
new_event_ids = list(event_ids)
new_event_ids.pop(i+1)
                new_event_ids, event_type = db_event_cloest_distance(event_ids=new_event_ids, event_type = event_type)
return db_google_driving_walking_time(new_event_ids, event_type)
try:
google_driving_time = driving_result['rows'][0]['elements'][0]['duration']['value']/60
            except:
                print v, id_, driving_result  # need to debug this case
                google_driving_time = 9999  # fall back when the response has no duration
try:
google_walking_time = walking_result['rows'][0]['elements'][0]['duration']['value']/60
except:
google_walking_time = 9999
# return event_ids, google_driving_time, google_walking_time
cur.execute("select max(index) from google_travel_time_table")
index = cur.fetchone()[0]+1
# print "startindex:", index , type(index)
# index += 1
# print "end index: " ,index
driving_result = str(driving_result).replace("'",'"')
walking_result = str(walking_result).replace("'",'"')
orig_name = orig_name.replace("'","''")
dest_name = dest_name.replace("'","''")
cur.execute("INSERT INTO google_travel_time_table VALUES (%i, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s','%s', '%s', '%s', '%s', '%s', '%s', %s, %s);"%(index, id_, orig_name, orig_idx, dest_name, dest_idx, orig_coord0, orig_coord1, dest_coord0,\
dest_coord1, orig_coords, dest_coords, google_driving_url, google_walking_url,\
str(driving_result), str(walking_result), google_driving_time, google_walking_time))
# cur.execute("select google_driving_time, google_walking_time from google_travel_time_table \
# where id_ = '%s'" %(id_))
conn.commit()
name_list.append(orig_name+" to "+ dest_name)
google_ids.append(id_)
driving_time_list.append(google_driving_time)
walking_time_list.append(google_walking_time)
else:
cur.execute("select orig_name, dest_name, google_driving_time, google_walking_time from google_travel_time_table \
where id_ = '%s'" %(id_))
orig_name, dest_name, google_driving_time, google_walking_time = cur.fetchone()
name_list.append(orig_name+" to "+ dest_name)
google_ids.append(id_)
driving_time_list.append(google_driving_time)
walking_time_list.append(google_walking_time)
conn.close()
return event_ids, google_ids, name_list, driving_time_list, walking_time_list
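The queries in this notebook interpolate values with `%` string formatting, which is why names need manual `replace("'","''")` escaping and which leaves the SQL open to injection. psycopg2 can instead bind parameters via `cur.execute(sql, params)`. The same idea sketched with the stdlib `sqlite3` module (placeholder syntax `?` rather than psycopg2's `%s`), so it runs without a Postgres server:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("create table poi_detail_table (idx integer, name text)")

# The driver binds and quotes the values itself, so a name containing an
# apostrophe needs no manual escaping.
cur.execute("insert into poi_detail_table values (?, ?)",
            (1, "Fisherman's Wharf"))
cur.execute("select name from poi_detail_table where idx = ?", (1,))
name = cur.fetchone()[0]
conn.close()
print(name)  # Fisherman's Wharf
```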
trip_locations_id = 'CALIFORNIA-SAN-DIEGO-1-3-0'
default = 1
n_days =3
full_day = 1
poi_coords = df_events[['coord0','coord1']]
trip_location_ids, full_trip_details =[],[]
kmeans = KMeans(n_clusters=n_days).fit(poi_coords)
# print kmeans.labels_
i=0
current_events = []
big_ix = []
small_ix = []
med_ix = []
for ix, label in enumerate(kmeans.labels_):
if label == i:
time = df_events.iloc[ix].adjusted_normal_time_spent
event_ix = df_events.iloc[ix].name
current_events.append(event_ix)
if time > 180 :
big_ix.append(event_ix)
elif time >= 120 :
med_ix.append(event_ix)
else:
small_ix.append(event_ix)
# all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])
big_ = df_events.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
small_ = df_events.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
medium_ = df_events.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
# trip_df, event_type = create_trip_df(big_,medium_,small_)
# tour = trip_df_cloest_distance(trip_df, event_type)
# new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)
# new_trip_df = new_trip_df.iloc[new_tour]
# new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)
# new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)
# event_ids, event_type=db_event_cloest_distance(trip_locations_id)
# event_ids, google_ids, name_list, driving_time_list, walking_time_list =db_google_driving_walking_time(event_ids, event_type)
# event_ids, driving_time_list, walking_time_list, total_time_spent = db_remove_extra_events(event_ids, driving_time_list, walking_time_list)
# db_address(event_ids)
# values = day_trip(event_ids, county, state, default, full_day,n_days,i)
# day_trip_locations.loc[len(day_trip_locations)] = values
# trip_location_ids.append(values[0])
# full_trip_details.extend(values[-1])
def get_fulltrip_data_default(state, city, n_days, day_trip_locations = True, full_trip_table = True, default = True, debug = True):
county = find_county(state, city)
trip_location_ids, full_trip_details =[],[]
full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
if not check_full_trip_id(full_trip_id, debug):
        county_list_info = db_start_location(county, state, city)
        if county_list_info.size == 0:
            return "error: county_list_info is empty"
        poi_coords = county_list_info[:,1:3]
        kmeans = KMeans(n_clusters=n_days).fit(poi_coords)
for i in range(n_days):
            trip_locations_id = '-'.join([str(state), str(county.replace(' ','-')),str(int(default)), str(n_days),str(i)])
            if not check_day_trip_id(trip_locations_id, debug):
current_events, big_ix, small_ix, med_ix = [],[],[],[]
for ix, label in enumerate(kmeans.labels_):
if label == i:
time = county_list_info[ix,3]
event_ix = county_list_info[ix,0]
current_events.append(event_ix)
if time > 180 :
big_ix.append(ix)
elif time >= 120 :
med_ix.append(ix)
else:
small_ix.append(ix)
big_ = sorted_events(county_list_info, big_ix)
med_ = sorted_events(county_list_info, med_ix)
small_ = sorted_events(county_list_info, small_ix)
event_ids, event_type = create_event_id_list(big_, med_, small_)
event_ids, event_type = db_event_cloest_distance(event_ids = event_ids, event_type = event_type)
event_ids, google_ids, name_list, driving_time_list, walking_time_list =db_google_driving_walking_time(event_ids, event_type)
event_ids, driving_time_list, walking_time_list, total_time_spent = db_remove_extra_events(event_ids, driving_time_list, walking_time_list)
db_address(event_ids)
values = db_day_trip(event_ids, county, state, default, full_day,n_days,i)
# insert to day_trip ....
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
                cur.execute("insert into day_trip_table (trip_locations_id, full_day, \"default\", county, state, details, event_type, event_ids) VALUES ('%s', %s, %s, '%s', '%s', '%s', '%s', '%s')" %(trip_locations_id, full_day, default, county, state, values[-1], event_type, list(event_ids)))
conn.commit()
conn.close()
trip_location_ids.append(values[0])
full_trip_details.extend(values[-1])
else:
print "error: already have this day, please check the next day"
trip_location_ids.append(trip_locations_id)
# call db find day trip detail
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select details from day_trip_table where trip_locations_id = '%s';"%(trip_locations_id) )
                day_trip_detail = cur.fetchall()
conn.close()
full_trip_details.extend(day_trip_detail)
full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
details = full_trip_details
user_id = "Admin"
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
        cur.execute("insert into full_trip_table(user_id, full_trip_id, trip_location_ids, \"default\", county, state, details, n_days) VALUES ('%s', '%s', '%s', %s, '%s', '%s', '%s', %s)" %(user_id, full_trip_id, str(trip_location_ids), default, county, state, details, n_days))
conn.commit()
conn.close()
return "finish update %s, %s into database" %(state, county)
else:
return "%s, %s already in database" %(state, county)
def db_day_trip(event_ids, county, state, default, full_day,n_days,i):
conn=psycopg2.connect(conn_str)
cur = conn.cursor()
# cur.execute("select state, county, count(*) AS count from poi_detail_table where index in %s GROUP BY state, county order by count desc;" %(tuple(test_event_ids_list),))
# a = cur.fetchall()
# state = a[0][0].upper()
# county = a[0][1].upper()
trip_locations_id = '-'.join([str(state), str(county.replace(' ','-')),str(int(default)), str(n_days),str(i)])
#details dict includes: id, name,address, day
cur.execute("select index, name, address from poi_detail_table where index in %s;" %(tuple(event_ids),))
a = cur.fetchall()
details = [str({'id': a[x][0],'name': a[x][1],'address': a[x][2], 'day': i}) for x in range(len(a))]
conn.close()
return [trip_locations_id, full_day, default, county, state, details]
conn=psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select state, county, count(*) AS count from poi_detail_table where index in %s GROUP BY state, county order by count desc;" %(tuple(test_event_ids_list),))
a = cur.fetchall()
print a[0][0].upper()
# details = [str({'id': a[x][0],'name': a[x][1],'address': a[x][2], 'day': i}) for x in range(a)]
conn.close()
def extend_full_trip_details(full_trip_details):
details = {}
addresses = []
ids = []
days = []
names = []
    for item in full_trip_details:
        item_d = ast.literal_eval(item)  # parses the stringified dict without executing code
        addresses.append(item_d['address'])
        ids.append(item_d['id'])
        days.append(item_d['day'])
        names.append(item_d['name'])
    details['addresses'] = addresses
    details['ids'] = ids
    details['days'] = days
    details['names'] = names
    return str(details)  # return the aggregated dict rather than the raw list
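`eval` on a stringified dict executes arbitrary code; `ast.literal_eval` (already used in `get_event_ids_list` above) only parses literals. For example:

```python
import ast

# Illustrative stringified detail record, shaped like the entries above
item = "{'id': 42, 'name': 'Balboa Park', 'address': 'San Diego, CA', 'day': 0}"
parsed = ast.literal_eval(item)  # parses Python literals only; never executes code
print(parsed['id'], parsed['day'])  # 42 0
```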
event_ids
print event_ids, google_ids, name_list, driving_time_list, walking_time_list
init_db_tables()
def remove_extra_events(trip_df, df_poi_travel_time):
if sum(trip_df.adjusted_normal_time_spent)+sum(df_poi_travel_time.google_driving_time) > 480:
new_trip_df = trip_df[:-1]
new_df_poi_travel_time = df_poi_travel_time[:-1]
return remove_extra_events(new_trip_df,new_df_poi_travel_time)
else:
return trip_df, df_poi_travel_time, sum(trip_df.adjusted_normal_time_spent)+sum(df_poi_travel_time.google_driving_time)
def db_remove_extra_events(event_ids, driving_time_list,walking_time_list):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select sum (adjusted_normal_time_spent) from poi_detail_table where index in %s;" %(tuple(event_ids),))
time_spent = cur.fetchone()[0]
conn.close()
time_spent += sum(np.minimum(np.array(driving_time_list),np.array(walking_time_list)))
if time_spent > 480:
update_event_ids = event_ids[:-1]
update_driving_time_list = driving_time_list[:-1]
update_walking_time_list = walking_time_list[:-1]
return db_remove_extra_events(update_event_ids, update_driving_time_list, update_walking_time_list)
else:
return event_ids, driving_time_list, walking_time_list, time_spent
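`db_remove_extra_events` drops the final stop until the total visit time plus the faster of the driving/walking legs fits into an 8-hour (480-minute) day. The core arithmetic, as a pure function with hypothetical names:

```python
def trim_to_day(visit_minutes, driving, walking, day_budget=480):
    """Drop trailing stops until total visit time plus the faster travel
    leg between consecutive stops fits within day_budget minutes."""
    n = len(visit_minutes)

    def total(k):
        # travel time between the first k stops, taking the faster mode per leg
        travel = sum(min(d, w) for d, w in zip(driving[:k-1], walking[:k-1]))
        return sum(visit_minutes[:k]) + travel

    while n > 1 and total(n) > day_budget:
        n -= 1
    return n, total(n)

stops, minutes = trim_to_day([120, 120, 120, 120], [30, 30, 200], [45, 45, 45])
print(stops, minutes)  # 3 420
```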
def df_addresses(new_trip_df1, new_df_poi_travel_time):
my_lst = []
print new_trip_df1.index.values
for i in new_trip_df1.index.values:
temp_df = new_df_poi_travel_time[i == new_df_poi_travel_time.orig_idx.values]
if temp_df.shape[0]>0:
address = eval(temp_df.driving_result.values[0])['origin_addresses'][0]
my_lst.append(address)
else:
try:
temp_df = new_df_poi_travel_time[i == new_df_poi_travel_time.dest_idx.values]
address = eval(temp_df.driving_result.values[0])['destination_addresses'][0]
my_lst.append(address)
except:
print new_trip_df1, new_df_poi_travel_time
return my_lst
def check_address(index):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select address from poi_detail_table where index = %s;"%(index))
a = cur.fetchone()[0]
conn.close()
if a:
return True
else:
return False
def db_address(event_ids):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
for i in event_ids[:-1]:
if not check_address(i):
cur.execute("select driving_result from google_travel_time_table where orig_idx = %s;" %(i))
a= cur.fetchone()[0]
add = ast.literal_eval(a)['origin_addresses'][0]
cur.execute("update poi_detail_table set address = '%s' where index = %s;" %(add, i))
conn.commit()
last = event_ids[-1]
if not check_address(last):
cur.execute("select driving_result from google_travel_time_table where dest_idx = %s;" %(last))
a= cur.fetchone()[0]
add = ast.literal_eval(a)['destination_addresses'][0]
cur.execute("update poi_detail_table set address = '%s' where index = %s;" %(add, last))
conn.commit()
conn.close()
test_event_ids_list = np.append(273, event_ids)
# event_ids
print test_event_ids_list
day_trip_locations = 'San Diego, California'
full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info = search_cluster_events(df, county, state, city, 3, day_trip_locations, full_trip_table, default = True)
import time
t1=time.time()
# [index, ranking,score]
a = [[1,2,3],[2,2,6],[3,3,3],[4,3,10]]
from operator import itemgetter
print sorted(a, key=lambda x: (x[1], -x[2]))
time.time()-t1
'''
Main entry point: calls the helper functions above and returns the day-by-day details for the trip.
'''
def search_cluster_events(df, county, state, city, n_days, day_trip_locations = True, full_trip_table = True, default = True, debug = True):
county, df_events =cold_start_places(df, county, state, city, n_days)
poi_coords = df_events[['coord0','coord1']]
kmeans = KMeans(n_clusters=n_days).fit(poi_coords)
new_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
if not check_full_trip_id(new_trip_id, debug):
        trip_location_ids = []
        full_trip_details = []
        df_poi_travel_info = pd.DataFrame()  # accumulates the per-day travel-time rows
for i in range(n_days):
current_events = []
big_ix = []
small_ix = []
med_ix = []
for ix, label in enumerate(kmeans.labels_):
if label == i:
event_ix = poi_coords.index[ix]
current_events.append(event_ix)
if event_ix in big.index:
big_ix.append(event_ix)
elif event_ix in med.index:
med_ix.append(event_ix)
else:
small_ix.append(event_ix)
all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])
big_ = big.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
small_ = small.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
medium_ = med.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
# print 'big:', big_, 'small:', small_, 'msize:', medium_
trip_df, event_type = create_trip_df(big_,medium_,small_)
# print event_type
tour = trip_df_cloest_distance(trip_df, event_type)
# print tour
new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)
# print new_tour, new_trip_df
# return new_trip_df, df_poi_travel_time
new_trip_df = new_trip_df.iloc[new_tour]
new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)
# print new_trip_df1
new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)
            # print 'total time:', total_time
values = day_trip(new_trip_df1, county, state, default, full_day,n_days,i)
day_trip_locations.loc[len(day_trip_locations)] = values
trip_location_ids.append(values[0])
full_trip_details.extend(values[-1])
df_poi_travel_info = df_poi_travel_info.append(new_df_poi_travel_time)
full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
details = extend_full_trip_details(full_trip_details)
full_trip_table.loc[len(full_trip_table)] = ["adam", full_trip_id, str(trip_location_ids), default, county, state, details, n_days]
return full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info
def default_search_cluster_events(df, df_counties_u, county, state, big,med, small, \
temp, n_days,day_trip_locations, full_trip_table,df_poi_travel_info):
# df_poi_travel_info = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\
# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\
# 'google_walking_url','driving_result','walking_result','google_driving_time',\
# 'google_walking_time'])
poi_coords = temp[['coord0','coord1']]
kmeans = KMeans(n_clusters=n_days, random_state=0).fit(poi_coords)
# print kmeans.labels_
full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
trip_location_ids = []
full_trip_details = []
for i in range(n_days):
current_events = []
big_ix = []
small_ix = []
med_ix = []
for ix, label in enumerate(kmeans.labels_):
if label == i:
event_ix = poi_coords.index[ix]
current_events.append(event_ix)
if event_ix in big.index:
big_ix.append(event_ix)
elif event_ix in med.index:
med_ix.append(event_ix)
else:
small_ix.append(event_ix)
all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])
big_ = big.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
small_ = small.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
medium_ = med.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])
trip_df, event_type = create_trip_df(big_,medium_,small_)
tour = trip_df_cloest_distance(trip_df, event_type)
new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)
new_trip_df = new_trip_df.iloc[new_tour]
new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)
new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)
values = day_trip(new_trip_df1, county, state, default, full_day,n_days,i)
day_trip_locations.loc[len(day_trip_locations)] = values
trip_location_ids.append(values[0])
full_trip_details.extend(values[-1])
# print 'trave time df \n',new_df_poi_travel_time
df_poi_travel_info = df_poi_travel_info.append(new_df_poi_travel_time)
full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])
details = extend_full_trip_details(full_trip_details)
full_trip_table.loc[len(full_trip_table)] = [user_id, full_trip_id, \
str(trip_location_ids), default, county, state, details, n_days]
return full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info
###Next steps: add user controls. func1: allow adding events (by specific name or auto-add)
### auto-route to the most appropriate order
###func2: allow reordering the events. func3: allow deleting the events.
###func4: allow switching in a new event - next to the switch and x-mark icons, a check mark confirms the new place and auto-orders
###New table for the trip info... features including trip id, event place, days, specific date, trip details. (trip tour, trip)
def ajax_available_events(county, state):
county=county.upper()
state = state.title()
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select index, name from poi_detail_table where county='%s' and state='%s'" %(county,state))
poi_lst = [item for item in cur.fetchall()]
conn.close()
return poi_lst
def add_event(trip_locations_id, event_day, event_id=None, event_name=None, full_day = True, unseen_event = False):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select * from day_trip_table where trip_locations_id='%s'" %(trip_locations_id))
(index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()
if unseen_event:
index += 1
trip_locations_id = '-'.join([str(eval(i)['id']) for i in eval(detail)])+'-'+event_name.replace(' ','-')+'-'+event_day
cur.execute("select details from day_trip_locations where trip_locations_id='%s'" %(trip_locations_id))
a = cur.fetchone()
if bool(a):
conn.close()
return trip_locations_id, a[0]
else:
cur.execute("select max(index) from day_trip_locations")
index = cur.fetchone()[0]+1
detail = list(eval(detail))
#need to make sure the type is correct for detail!
new_event = "{'address': 'None', 'id': 'None', 'day': %s, 'name': u'%s'}"%(event_day, event_name)
detail.append(new_event)
#get the right format of detail: change from list to string and remove brackets and convert quote type
new_detail = str(detail).replace('"','').replace('[','').replace(']','').replace("'",'"')
cur.execute("INSERT INTO day_trip_locations VALUES (%i, '%s',%s,%s,'%s','%s','%s');" %(index, trip_locations_id, full_day, False, county, state, new_detail))
conn.commit()
conn.close()
return trip_locations_id, detail
else:
event_ids = add_event_cloest_distance(trip_locations_id, event_id)
event_ids, google_ids, name_list, driving_time_list, walking_time_list = db_google_driving_walking_time(event_ids,event_type = 'add')
trip_locations_id = '-'.join(event_ids)+'-'+event_day
cur.execute("select details from day_trip_locations where trip_locations_id='%s'" %(trip_locations_id))
a = cur.fetchone()
if not a:
details = []
db_address(event_ids)
for item in event_ids:
cur.execute("select index, name, address from poi_detail_table where index = '%s';" %(item))
a = cur.fetchone()
detail = {'id': a[0],'name': a[1],'address': a[2], 'day': event_day}
details.append(detail)
#need to make sure event detail can append to table!
cur.execute("insert into day_trip_table (trip_locations_id,full_day, default, county, state, details, event_type, event_ids) VALUES ( '%s', %s, %s, '%s', '%s', '%s', '%s', '%s')" %( trip_locations_id, full_day, False, county, state, details, event_type, event_ids))
conn.commit()
conn.close()
return trip_locations_id, details
else:
conn.close()
#need to make sure type is correct.
return trip_locations_id, a[0]
def remove_event(trip_locations_id, remove_event_id, remove_event_name=None, event_day=None, full_day = True):
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select * from day_trip_table where trip_locations_id='%s'" %(trip_locations_id))
(index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()
new_event_ids = ast.literal_eval(event_ids)
new_event_ids.remove(remove_event_id)
new_trip_locations_id = '-'.join(str(event_id) for event_id in new_event_ids)
cur.execute("select * from day_trip_table where trip_locations_id='%s'" %(new_trip_locations_id))
check_id = cur.fetchone()
if check_id:
return new_trip_locations_id, check_id[-3]
detail = ast.literal_eval(detail[1:-1])
for index, trip_detail in enumerate(detail):
if ast.literal_eval(trip_detail)['id'] == remove_event_id:
remove_index = index
break
new_detail = list(detail)
new_detail.pop(remove_index)
new_detail = str(new_detail).replace("'","''")
default = False
cur.execute("select max(index) from day_trip_table where trip_locations_id='%s'" %(trip_locations_id))
new_index = cur.fetchone()[0]
new_index+=1
cur.execute("INSERT INTO day_trip_table VALUES (%i, '%s', %s, %s, '%s', '%s', '%s', '%s','%s');" \
%(new_index, new_trip_locations_id, full_day, default, county, state, new_detail, event_type, new_event_ids))
conn.commit()
conn.close()
return new_trip_locations_id, new_detail
def event_type_time_spent(adjusted_normal_time_spent):
if adjusted_normal_time_spent > 180:
return 'big'
elif adjusted_normal_time_spent >= 120:
return 'med'
else:
return 'small'
def switch_event_list(full_trip_id, trip_locations_id, switch_event_id, switch_event_name=None, event_day=None, full_day = True):
# new_trip_locations_id, new_detail = remove_event(trip_locations_id, switch_event_id)
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select name, city, county, state, coord0, coord1,poi_rank, adjusted_normal_time_spent from poi_detail_table where index=%s" %(switch_event_id))
name, city, county, state,coord0, coord1,poi_rank, adjusted_normal_time_spent = cur.fetchone()
event_type = event_type_time_spent(adjusted_normal_time_spent)
available_lst = ajax_available_events(county, state)
cur.execute("select trip_location_ids,details from full_trip_table where full_trip_id=%s" %(full_trip_id))
full_trip_detail = cur.fetchone()
full_trip_detail = ast.literal_eval(full_trip_detail)
full_trip_ids = [ast.literal_eval(item)['id'] for item in full_trip_detail]
switch_lst = []
for item in available_lst:
index = item[0]
if index not in full_trip_ids:
event_ids = [switch_event_id, index]
event_ids, google_ids, name_list, driving_time_list, walking_time_list = db_google_driving_walking_time(event_ids, event_type='switch')
if min(driving_time_list[0], walking_time_list[0]) <= 60:
cur.execute("select poi_rank, rating, adjusted_normal_time_spent from poi_detail_table where index=%s" %(index))
target_poi_rank, target_rating, target_adjusted_normal_time_spent = cur.fetchone()
target_event_type = event_type_time_spent(target_adjusted_normal_time_spent)
switch_lst.append([target_poi_rank, target_rating, target_event_type==event_type])
#need to sort target_event_type, target_poi_rank and target_rating
return {switch_event_id: switch_lst}
def switch_event(trip_locations_id, switch_event_id, final_event_id, event_day):
new_trip_locations_id, new_detail = remove_event(trip_locations_id, switch_event_id)
new_trip_locations_id, new_detail = add_event(new_trip_locations_id, event_day, final_event_id, full_day = True, unseen_event = False)
return new_trip_locations_id, new_detail
ajax_available_events(county='San Francisco', state = "California")
county='San Francisco'.upper()
state = "California"
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select index, name from poi_detail_table where county='%s' and state='%s'" %(county,state))
full_trip_id = 'CALIFORNIA-SAN-DIEGO-1-3'
cur.execute("select details from full_trip_table where full_trip_id='%s'" %(full_trip_id))
full_trip_detail = cur.fetchone()[0]
full_trip_detail = ast.literal_eval(full_trip_detail)
[ast.literal_eval(item)['id'] for item in full_trip_detail]
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select * from poi_detail_table where name='%s'" %(trip_locations_id))
'Blue Springs State Park' in df.name
"select * from poi_detail_table where index=%s" %(remove_event_id)
trip_locations_id = 'CALIFORNIA-SAN-DIEGO-1-3-0'
remove_event_id = 3486
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
cur.execute("select * from day_trip_table where trip_locations_id='%s'" %(trip_locations_id))
(index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()
# event_ids = ast.literal_eval(event_ids)
# print detail, '\n'
new_event_ids = ast.literal_eval(event_ids)
new_event_ids.remove(remove_event_id)
new_trip_locations_id = '-'.join(str(id_) for id_ in new_event_ids)
# event_ids.remove(remove_event_id)
detail = ast.literal_eval(detail[1:-1])
print(type(detail[0]))
for index, trip_detail in enumerate(detail):
if ast.literal_eval(trip_detail)['id'] == remove_event_id:
remove_index = index
break
new_detail = list(detail)
new_detail.pop(remove_index)
new_detail = str(new_detail).replace("'","''")
'-'.join(str(id_) for id_ in new_event_ids)
#Tasks:
#0. Run the initial to debug with all the cities and counties for the poi_detail_table in hand.
#1. Continue working on add/suggest/remove features
#2. Start the new feature that allows user to generate the google map route for the day
#3. new feature that allows user to explore outside the city from a direction away from the started location
#4. get all the state and national park data into database and rework the ranking system and the poi_detail_table!
```
| github_jupyter |
```
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from functools import lru_cache
from numba import jit
import community
import warnings; warnings.simplefilter('ignore')
@jit(nopython = True)
def generator(A):
B = np.zeros((len(A)+2, len(A)+2), np.int_)
B[1:-1,1:-1] = A
for i in range(len(B)):
for j in range(len(B)):
count = 0
count += B[i][j]
if i-1 > 0:
count += B[i-1][j]
if i+1 < len(B):
count += B[i+1][j]
if j-1 > 0:
count += B[i][j-1]
if j+1 < len(B):
count += B[i][j+1]
if count == 0:
B[i][j] = 1
if count > 4:
B[i][j] = 1
if count <= 4 and count > 0:
B[i][j] = 0
Bnext = np.zeros_like(B, np.int_)
Bnext = np.triu(B,1) + B.T - np.diag(np.diag(B))
for i in range(len(Bnext)):
for j in range(len(Bnext)):
if Bnext[i][j] > 1:
Bnext[i][j] = 1
return(Bnext)
try:
from functools import lru_cache
except ImportError:
from backports.functools_lru_cache import lru_cache
def generator2(A_, number):
time = 0
while time < number:
A_ = generator(A_)
time += 1
return A_
g1 = nx.erdos_renyi_graph(3, 0.8)
A1 = nx.to_numpy_matrix(g1)
print(A1)
nx.draw(g1, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')
#plt.savefig('g1_0.png')
plt.show()
gen_A1 = generator2(A1, 100)
gen_g1 = nx.from_numpy_matrix(gen_A1)
nx.draw(gen_g1, node_size=10, alpha=0.5)
#plt.savefig('g1_100.png')
plt.show()
partition = community.best_partition(gen_g1)
pos = nx.spring_layout(gen_g1)
plt.figure(figsize=(8, 8))
plt.axis('off')
nx.draw_networkx_nodes(gen_g1, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))
nx.draw_networkx_edges(gen_g1, pos, alpha=0.3)
#plt.savefig('g1_100_community.png')
plt.show()
g2 = nx.erdos_renyi_graph(4, 0.8)
A2 = nx.to_numpy_matrix(g2)
print(A2)
nx.draw(g2, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')
#plt.savefig('g2_0.png')
plt.show()
gen_A2 = generator2(A2, 100)
gen_g2 = nx.from_numpy_matrix(gen_A2)
nx.draw(gen_g2, node_size=10, alpha=0.5)
#plt.savefig('g2_100.png')
plt.show()
partition = community.best_partition(gen_g2)
pos = nx.spring_layout(gen_g2)
plt.figure(figsize=(8, 8))
plt.axis('off')
nx.draw_networkx_nodes(gen_g2, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))
nx.draw_networkx_edges(gen_g2, pos, alpha=0.3)
#plt.savefig('g2_100_community.png')
plt.show()
```
| github_jupyter |
# SkillFactory
## Intro to ML, intro to sklearn
In this assignment we will look at data from the competition [Predicting OTP Bank customer response](http://www.machinelearning.ru/wiki/index.php?title=%D0%97%D0%B0%D0%B4%D0%B0%D1%87%D0%B0_%D0%BF%D1%80%D0%B5%D0%B4%D1%81%D0%BA%D0%B0%D0%B7%D0%B0%D0%BD%D0%B8%D1%8F_%D0%BE%D1%82%D0%BA%D0%BB%D0%B8%D0%BA%D0%B0_%D0%BA%D0%BB%D0%B8%D0%B5%D0%BD%D1%82%D0%BE%D0%B2_%D0%9E%D0%A2%D0%9F_%D0%91%D0%B0%D0%BD%D0%BA%D0%B0_%28%D0%BA%D0%BE%D0%BD%D0%BA%D1%83%D1%80%D1%81%29)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (12,5)
```
### Loading the data
Read in the data description
```
df_descr = pd.read_csv('data/otp_description.csv', sep='\t', encoding='utf8')
df_descr
```
Read in the training sample and the test sample (which we pretend not to see)
```
df_train = pd.read_csv('data/otp_train.csv', sep='\t', encoding='utf8')
df_train.shape
df_test = pd.read_csv('data/otp_test.csv', sep='\t', encoding='utf8')
df_test.shape
df_train.head()
```
## Combining the two samples
Since we do not yet know how to work with sklearn Pipelines, we combine the samples so that after preprocessing the columns in both samples stay aligned.
To separate them again later, we introduce a new column "sample"
```
df_train.loc[:, 'sample'] = 'train'
df_test.loc[:, 'sample'] = 'test'
df = df_test.append(df_train).reset_index(drop=True)
df.shape
```
### A quick look at the data
Check the data types and how complete the columns are
```
df.info()
```
We see that some columns are of type object, most likely strings.
Let's print those values for each column
```
for i in df_train.columns: # iterate over all columns
if str(df_train[i].dtype) == 'object': # if the column type is object
print('='*10)
print(i) # print the column name
print(set(df_train[i])) # print all its values (using set so values do not repeat)
print('\n') # print an empty line
```
Note that some variables marked as strings (for example PERSONAL_INCOME) are actually numbers, but for some reason were parsed as strings.
The reason is that a comma was used as the decimal separator.
They can be recoded, for example, like this:
```
df['PERSONAL_INCOME'].map(lambda x: x.replace(',', '.')).astype('float')
```
The same effect occurs in the columns `PERSONAL_INCOME`, `CREDIT`, `FST_PAYMENT`, `LOAN_AVG_DLQ_AMT`, `LOAN_MAX_DLQ_AMT`
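A minimal sketch of applying that same comma-to-dot conversion across several affected columns at once (the demo values are made up; only object-typed columns are touched, so re-running the cell is safe):

```python
import pandas as pd

# Hypothetical values mimicking the comma-decimal columns in the OTP data.
df_demo = pd.DataFrame({
    'PERSONAL_INCOME': ['10000,5', '2500,0'],
    'CREDIT': ['8000,25', '12000,0'],
})

for col in ['PERSONAL_INCOME', 'CREDIT']:
    # Only string (object) columns need the comma-to-dot fix.
    if df_demo[col].dtype == object:
        df_demo[col] = df_demo[col].str.replace(',', '.', regex=False).astype('float')

print(df_demo['PERSONAL_INCOME'].tolist())  # → [10000.5, 2500.0]
```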
### Now a small investigation of your own
#### Task 1. Are there missing values in the data? What should we do with them?
(there is no single right answer - justify your choice)
```
# Missing values can sometimes be reconstructed from indirect evidence. For example, for WORK_TIME we could assume that
# if GEN_TITLE and/or ORG_TP_STATE, JOB_DIR etc. are present, then WORK_TIME equals FACT_LIVING_TERM.
# If indirect reconstruction is hard, we could group the features, creating groups that encode a tuple of features,
# and substitute the group code for the columns. Or estimate the mean, assuming the spread is unlikely to exceed one sigma.
# Some fields - for example regions - can be dropped; judging by the task they are not very significant features.
```
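As a small sketch of the fill strategies discussed above (the toy frame and its values are made up; the `Не указано` filler mirrors what `preproc_data` does later with real columns):

```python
import numpy as np
import pandas as pd

# Toy frame with gaps; values are hypothetical stand-ins for the OTP columns.
df_demo = pd.DataFrame({
    'WORK_TIME': [12.0, np.nan, 36.0, np.nan],
    'GEN_TITLE': ['Специалист', None, 'Руководитель', None],
})

# Numeric gap: fill with the median of the observed values.
df_demo['WORK_TIME'] = df_demo['WORK_TIME'].fillna(df_demo['WORK_TIME'].median())

# Categorical gap: make "missing" an explicit category of its own.
df_demo['GEN_TITLE'] = df_demo['GEN_TITLE'].fillna('Не указано')

print(df_demo.isnull().sum().sum())  # → 0
```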
#### Task 2. Are there categorical features? What should we do with them?
```
# Introduce an encoding - either by group means,
# or by the number of objects per group, or use LabelEncoder / OneHotEncoder
```
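A minimal sketch of the two encoding routes mentioned above (the toy `EDUCATION` values are hypothetical; the notebook applies `get_dummies` to the real columns further down):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df_demo = pd.DataFrame({'EDUCATION': ['Высшее', 'Среднее', 'Высшее']})

# Variant 1: one-hot columns, one per category.
dummies = pd.get_dummies(df_demo, columns=['EDUCATION'])

# Variant 2: a single integer-coded column.
codes = LabelEncoder().fit_transform(df_demo['EDUCATION'])

print(dummies.shape, set(codes))
```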
#### Task 3. A preprocessing function
Write a function that:
* Drops the `AGREEMENT_RK` identifier
* Fixes the '.' vs ',' problem in the columns PERSONAL_INCOME, CREDIT, FST_PAYMENT, LOAN_AVG_DLQ_AMT, LOAN_MAX_DLQ_AMT
* Does something with the missing values
* Encodes the categorical features
As a result, your dataframe should contain only numbers and no missing values!
```
def collect_emptyvalue_features(_df):
return _df.columns[_df.isnull().any()].values
def preproc_data(df_input):
df_output = df_input.copy()
# Drop the AGREEMENT_RK identifier
df_output.drop(['AGREEMENT_RK'], axis=1, inplace=True)
# Fix the '.' vs ',' problem in the columns PERSONAL_INCOME, CREDIT, FST_PAYMENT,
# LOAN_AVG_DLQ_AMT, LOAN_MAX_DLQ_AMT
convertable_features = ['PERSONAL_INCOME', 'CREDIT', 'FST_PAYMENT', 'LOAN_AVG_DLQ_AMT', 'LOAN_MAX_DLQ_AMT']
for cf in convertable_features:
df_output[cf] = df_output[cf].map(lambda v: v.replace(',', '.')).astype('float')
# Do something with the missing values
# 1st variant
# drop the features that have gaps, since it is hard to choose a filler for them
# this probably reduces model accuracy right away
df_output.drop(['TP_PROVINCE',
'REGION_NM',
'REG_ADDRESS_PROVINCE',
'POSTAL_ADDRESS_PROVINCE',
'FACT_ADDRESS_PROVINCE',
'JOB_DIR'], axis=1, inplace=True)
#
# 2nd variant
# Try giving these fields a "not specified" status
# for c in ['TP_PROVINCE', 'REGION_NM', 'ORG_TP_STATE', 'JOB_DIR']:
# df_output[c].fillna('Не указано', inplace=True)
df_output['ORG_TP_STATE'].fillna('Не указано', inplace=True)
df_output['GEN_TITLE'].fillna('Другое', inplace=True)
df_output['GEN_INDUSTRY'].fillna('Другие сферы', inplace=True)
df_output['PREVIOUS_CARD_NUM_UTILIZED'].fillna(0, inplace=True)
df_output['ORG_TP_FCAPITAL'].fillna('Без участия', inplace=True)
df_output.loc[(df_output['SOCSTATUS_PENS_FL'] == 1)
& (df_output['SOCSTATUS_WORK_FL'] == 0)
& df_output['WORK_TIME'].isnull(), 'WORK_TIME'] = 0
df_output['WORK_TIME'].fillna(df_output[~df_output['WORK_TIME'].isnull()]['WORK_TIME'].median(), inplace=True)
# If I missed something, raise an exception so we can look at the data and choose a filler
evf = collect_emptyvalue_features(df_output)
if evf.size > 0:
print(evf)
# raise TypeError('There are some features with empty values')
# Encode the categorical features
#
# 1st variant
# df_output = pd.get_dummies(df_output, columns=['ORG_TP_FCAPITAL', 'EDUCATION', 'MARITAL_STATUS',
# 'GEN_INDUSTRY', 'GEN_TITLE', 'ORG_TP_STATE',
# 'JOB_DIR', 'FAMILY_INCOME', 'REG_ADDRESS_PROVINCE',
# 'FACT_ADDRESS_PROVINCE', 'POSTAL_ADDRESS_PROVINCE',
# 'TP_PROVINCE', 'REGION_NM'])
df_output = pd.get_dummies(df_output, columns=['EDUCATION',
'MARITAL_STATUS',
'GEN_INDUSTRY',
'GEN_TITLE',
'FAMILY_INCOME',
'ORG_TP_FCAPITAL',
'ORG_TP_STATE'])
#
# 2nd variant
# df_output['ORG_TP_FCAPITAL'] = np.where(df_output['ORG_TP_FCAPITAL'].str.contains('Без участия'), 1, 0)
# from sklearn.preprocessing import LabelEncoder
# le_edu = LabelEncoder()
# le_ms = LabelEncoder()
# le_gi = LabelEncoder()
# le_gt = LabelEncoder()
# le_ots = LabelEncoder()
# # le_jd = LabelEncoder()
# le_fi = LabelEncoder()
# # le_rap = LabelEncoder()
# # le_fap = LabelEncoder()
# # le_pap = LabelEncoder()
# # le_tp = LabelEncoder()
# # le_rn = LabelEncoder()
# df_output['EDUCATION'] = le_edu.fit_transform(df_output['EDUCATION'])
# df_output['MARITAL_STATUS'] = le_ms.fit_transform(df_output['MARITAL_STATUS'])
# df_output['GEN_INDUSTRY'] = le_gi.fit_transform(df_output['GEN_INDUSTRY'])
# df_output['GEN_TITLE'] = le_gt.fit_transform(df_output['GEN_TITLE'])
# df_output['ORG_TP_STATE'] = le_ots.fit_transform(df_output['ORG_TP_STATE'])
# # df_output['JOB_DIR'] = le_jd.fit_transform(df_output['JOB_DIR'])
# df_output['FAMILY_INCOME'] = le_fi.fit_transform(df_output['FAMILY_INCOME'])
# # df_output['REG_ADDRESS_PROVINCE'] = le_rap.fit_transform(df_output['REG_ADDRESS_PROVINCE'])
# # df_output['FACT_ADDRESS_PROVINCE'] = le_fap.fit_transform(df_output['FACT_ADDRESS_PROVINCE'])
# # df_output['POSTAL_ADDRESS_PROVINCE'] = le_pap.fit_transform(df_output['POSTAL_ADDRESS_PROVINCE'])
# # df_output['TP_PROVINCE'] = le_tp.fit_transform(df_output['TP_PROVINCE'])
# # df_output['REGION_NM'] = le_rn.fit_transform(df_output['REGION_NM'])
return df_output
df_output = preproc_data(df)
df_output.select_dtypes(include='object') # here must be only `sample`
df_preproc = df.pipe(preproc_data)
df_train_preproc = df_preproc.query('sample == "train"').drop(['sample'], axis=1)
df_test_preproc = df_preproc.query('sample == "test"').drop(['sample'], axis=1)
```
#### Task 4. Separate the target variable from the remaining features
You should end up with:
* 2 matrices: X and X_test
* 2 vectors: y and y_test
```
y = df_train_preproc['TARGET']
X = df_train_preproc.drop(['TARGET'], axis=1)
y_test = df_test_preproc['TARGET']
X_test = df_test_preproc.drop(['TARGET'], axis=1)
X.shape, y.shape
```
#### Task 5. Training and evaluating the quality of different models
```
from sklearn.model_selection import train_test_split
# train_test_split?
# test_size=0.3, random_state=42
## Your Code Here
X_train, X_train_test, y_train, y_train_test = train_test_split(X, y, test_size=0.3, random_state=42)
X_train.shape, y_train.shape, X_train_test.shape, y_train_test.shape
# Try the following "black boxes": the interface is the same
# fit,
# predict,
# predict_proba
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
## Your Code Here
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)
predict_dtc = dtc.predict(X_train_test)
predict_proba_dtc = dtc.predict_proba(X_train_test)
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
predict_rfc = rfc.predict(X_train_test)
predict_proba_rfc = rfc.predict_proba(X_train_test)
lr = LogisticRegression()
lr.fit(X_train, y_train)
predict_lr = lr.predict(X_train_test)
predict_proba_lr = lr.predict_proba(X_train_test)
# Compute the standard metrics
# accuracy, precision, recall
from sklearn.metrics import accuracy_score, precision_score, recall_score
models_accuracy = {
'dtc' : accuracy_score(y_train_test, predict_dtc),
'rfc' : accuracy_score(y_train_test, predict_rfc),
'lr' : accuracy_score(y_train_test, predict_lr)
}
best_accuracy_score = max(models_accuracy.values())
models_accuracy, best_accuracy_score, list(filter(lambda k: models_accuracy.get(k) == best_accuracy_score, models_accuracy.keys()))
models_precision_score = {
'dtc' : precision_score(y_train_test, predict_dtc),
'rfc' : precision_score(y_train_test, predict_rfc),
'lr' : precision_score(y_train_test, predict_lr)
}
best_precision_score = max(models_precision_score.values())
models_precision_score, best_precision_score, list(filter(lambda k: models_precision_score.get(k) == best_precision_score, models_precision_score.keys()))
models_recall_score = {
'dtc' : recall_score(y_train_test, predict_dtc),
'rfc' : recall_score(y_train_test, predict_rfc),
'lr' : recall_score(y_train_test, predict_lr)
}
best_recall_score = max(models_recall_score.values())
models_recall_score, best_recall_score, list(filter(lambda k: models_recall_score.get(k) == best_recall_score, models_recall_score.keys()))
models_accuracy, models_precision_score, models_recall_score
# Visualize these metrics for all models on one plot (for a quick visual comparison)
## Your Code Here
from sklearn.metrics import precision_recall_curve
from matplotlib import pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve
precision_prc_dtc, recall_prc_dtc, treshold_prc_dtc = precision_recall_curve(y_train_test, predict_proba_dtc[:,1])
precision_prc_rfc, recall_prc_rfc, treshold_prc_rfc = precision_recall_curve(y_train_test, predict_proba_rfc[:,1])
precision_prc_lr, recall_prc_lr, treshold_prc_lr = precision_recall_curve(y_train_test, predict_proba_lr[:,1])
%matplotlib inline
plt.figure(figsize=(10, 10))
plt.plot(precision_prc_dtc, recall_prc_dtc, label='dtc')
plt.plot(precision_prc_rfc, recall_prc_rfc, label='rfc')
plt.plot(precision_prc_lr, recall_prc_lr, label='lr')
plt.legend(loc='upper right')
plt.ylabel('recall')
plt.xlabel('precision')
plt.grid(True)
plt.title('Precision Recall Curve')
plt.xlim((-0.01, 1.01))
plt.ylim((-0.01, 1.01))
# Plot the ROC curves of all models on one plot
# Print the roc_auc of each model
## Your Code Here
fpr_dtc, tpr_dtc, thresholds_dtc = roc_curve(y_train_test, predict_proba_dtc[:,1])
fpr_rfc, tpr_rfc, thresholds_rfc = roc_curve(y_train_test, predict_proba_rfc[:,1])
fpr_lr, tpr_lr, thresholds_lr = roc_curve(y_train_test, predict_proba_lr[:,1])
plt.figure(figsize=(10, 10))
plt.plot(fpr_dtc, tpr_dtc, label='dtc')
plt.plot(fpr_rfc, tpr_rfc, label='rfc')
plt.plot(fpr_lr, tpr_lr, label='lr')
plt.legend()
plt.plot([1.0, 0], [1.0, 0])
plt.ylabel('tpr')
plt.xlabel('fpr')
plt.grid(True)
plt.title('ROC curve')
plt.xlim((-0.01, 1.01))
plt.ylim((-0.01, 1.01))
roc_auc_score(y_train_test, predict_proba_dtc[:,1]), roc_auc_score(y_train_test, predict_proba_rfc[:,1]), roc_auc_score(y_train_test, predict_proba_lr[:,1])
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
# Run k-fold (10 folds) cross-validation for each model
# And compute the mean roc_auc
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)
## Your Code Here
s1 = cross_val_score( dtc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits())
for train_ind, test_ind in cv.split(X, y):
x_train_xval_ml = np.array(X)[train_ind,:]
x_test_xval_ml = np.array(X)[test_ind,:]
y_train_xval_ml = np.array(y)[train_ind]
s2 = cross_val_score( dtc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits())
s3 = s1 = cross_val_score( dtc, X, y, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, X, y, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, X, y, scoring='roc_auc', cv=cv.get_n_splits())
s1,s2,s3
# Take the best model and predict (with probabilities (!!!)) on the test sample
## Your Code Here
predict_lr_XTEST_proba = lr.predict_proba(X_test)
# Measure roc_auc on the test set
fpr_lr, tpr_lr, thresholds_lr = roc_curve(y_test, predict_lr_XTEST_proba[:, 1])
plt.figure(figsize=(10, 10))
plt.plot(fpr_lr, tpr_lr, label='lr')
plt.legend()
plt.plot([1.0, 0], [1.0, 0])
plt.ylabel('tpr')
plt.xlabel('fpr')
plt.grid(True)
plt.title('ROC curve')
plt.xlim((-0.01, 1.01))
plt.ylim((-0.01, 1.01))
roc_auc_score(y_test, predict_lr_XTEST_proba[:, 1])
```
| github_jupyter |
# Classification
**Data - Social network Ads**
## Importing the Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
import warnings
warnings.filterwarnings('ignore')
```
### 1) Logistic Regression
```
#Training the Logistic Regression model on the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
#Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
### 2) K-Nearest Neighbors (K-NN)
```
#Training the K-NN model on the Training set
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
### 3) Support Vector Machine (SVM)
```
#Training the SVM model on the Training set
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
### 4) Kernel SVM
```
#Training the Kernel SVM model on the Training set
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
### 5) Naive Bayes
```
#Training the Naive Bayes model on the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
### 6) Decision Tree Classification
```
#Training the Decision Tree Classification model on the Training set
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
### 7) Random Forest Classification
```
#Training the Random Forest Classification model on the Training set
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
#Predicting a new result
print(classifier.predict(sc.transform([[30,87000]])))
#Predicting the Test set results
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
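Across all four classifiers above, the final `accuracy_score` is just the fraction of correct predictions, which can be read straight off the confusion matrix. A minimal hand check (toy 2×2 matrix, not the dataset used above):

```python
# Accuracy from a 2x2 confusion matrix: (TN + TP) / total.
# Toy numbers for illustration only.
cm = [[63, 5],   # row 0: true negatives, false positives
      [3, 29]]   # row 1: false negatives, true positives
correct = cm[0][0] + cm[1][1]
total = sum(sum(row) for row in cm)
accuracy = correct / total
print(accuracy)  # 0.92
```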
| github_jupyter |
# [Best viewed in NBviewer](https://nbviewer.jupyter.org/github/ETCBC/heads/blob/master/tutorial.ipynb)
# Heads Tutorial
## Introduction
This notebook provides a basic introduction to the `heads` edge and node features for BHSA, produced in `etcbc/heads`. Syntactic phrase heads are important because they provide the semantic core for a phrase. To give a simple example:
> a good man
In this phrase, the word "man" serves as the phrase head. The word "good" is an adjective, and "a" is an indefinite article. While important, "good" and "a" are optional modifiers, and their influence is mostly local to the phrase and to their relation to "man." The phrase head, by comparison, has a stronger influence on the surrounding syntactic-semantic environment. For instance:
> A good man walks.
The use of the subject phrase head "man" coincides with the use of the verb phrase "walks." This is possible because "man" has the semantic property of "things that walk," or perhaps animacy. Having heads for Hebrew phrases in BHSA enables more sophisticated analyses such as [participant tracking](https://en.wikipedia.org/wiki/Named-entity_recognition) and [word embeddings](https://en.wikipedia.org/wiki/Word_embedding) (see use case [here](https://nbviewer.jupyter.org/github/codykingham/noun_semantics_SBL18/blob/master/analysis/noun_semantics.ipynb)).
## Phrase Types and Their Heads
Different phrases in BHSA have different type values (BHSA: `typ`, [see bottom here](https://etcbc.github.io/bhsa/features/hebrew/c/typ)). A type value is derived from the part of speech of the phrase's head. Below are some example type values and expected parts of speech in BHSA:
| type | BHSA typ | head part of speech |
|--------------------|------|---------------------|
| verb phrase | VP | verb |
| noun phrase | NP | noun (BHSA: subs) |
| preposition phrase | PP | preposition |
| conjunction phrase | CP | conjunction |
| adverb phrase | AdvP | adverb |
| adjective phrase | AdjP | adjective |
Additional phrase types and parts of speech can be browsed [here](https://etcbc.github.io/bhsa/features/hebrew/c/typ) and [here](https://etcbc.github.io/bhsa/features/hebrew/c/pdp.html). To give an example: in the phrase בראשית, the head is simply ב, since this phrase has the type prepositional phrase (`PP`). The `heads` data is calculated for phrases using these correspondences, so it's important to know them when searching for certain kinds of data. In case one wants to isolate ראשית instead (ignoring any potential modifying elements), another feature, `nheads` (nominal heads), is provided for such cases.
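The type-to-head correspondence can be pictured as a simple lookup. This is a toy illustration only: the real heads features are computed from the BHSA data, and the part-of-speech codes below are assumed abbreviations, not guaranteed feature values.

```python
# Hypothetical mapping from BHSA phrase type to the expected
# part of speech of its head (codes assumed for illustration).
TYP_TO_HEAD_POS = {
    'VP': 'verb',    # verb phrase -> verb
    'NP': 'subs',    # noun phrase -> noun
    'PP': 'prep',    # preposition phrase -> preposition
    'CP': 'conj',    # conjunction phrase -> conjunction
    'AdvP': 'advb',  # adverb phrase -> adverb
    'AdjP': 'adjv',  # adjective phrase -> adjective
}
print(TYP_TO_HEAD_POS['PP'])  # prep
```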
## Dataset
The `heads` dataset can be downloaded and used with Text-Fabric's `BHSA` app. An additional argument is passed to the `use` method: `mod=etcbc/heads/tf`. The location is not a local directory, but rather provides TF with coordinates to find the latest data in Github. The coordinates are organized as `organization/repository/tf`.
```
from tf.app import use
A = use('bhsa', mod='etcbc/heads/tf', hoist=globals())
```
Four primary datasets are available in `heads`:
* `head` - an edge feature from a syntactic phrase head to its containing phrase node
* `obj_prep` - an edge feature from an object of a preposition to its governing preposition
* `nhead` - an edge feature from a nominal syntactic phrase head to its containing phrase node; handy for prepositional phrases with nominal elements contained within
* `sem_set` - a semantic set feature which contains the following feature values:
* `quant` - enhanced quantifier sets
* `prep` - enhanced preposition sets
## Getting a Phrase Head
The feature `head` contains an edge feature from a word node to the phrase for which it contributes headship. Let's have a look at heads in Genesis 1 with a simple query and display:
```
# configure TF display
A.displaySetup(condenseType='phrase', condensed=True)
# search for heads in Genesis 1:1 (verse < 2)
genesis_heads = '''
book book@en=Genesis
chapter chapter=1
verse verse<2
phrase
<head- word
'''
genesis_heads = A.search(genesis_heads)
A.show(genesis_heads)
```
Note how the selected heads agree with their phrase types: prepositional phrases have preposition heads, noun phrases have noun heads (`subs`), etc. Note also how the `head` feature encodes multiple heads in cases where there are coordinated heads.
Below we find cases where there are at least 3 coordinate heads in a phrase.
```
three_heads = '''
p:phrase
w1:word
< w2:word
< w3:word
p <head- w1
p <head- w2
p <head- w3
'''
three_heads = A.search(three_heads)
A.show(three_heads, end=10, withNodes=True)
```
### Hand Coding Heads
Heads features can also be accessed by hand-coding with the `E` and `F` classes. The `E` (edge) class must be called on a word node with `.f` (edge from word node).
```
T.text(1)
E.head.f(1)
T.text(E.head.f(1)[0])
```
The heads can also be located by calling `E.head` on the phrase node, this time with `.t` at the end (edge to):
```
E.head.t(651542)
T.text(E.head.t(651542)[0])
```
NB that the edge class always returns a tuple. This is because multiple edges are possible. This is valuable in cases where you want to find a phrase with multiple heads, as we did in the template above:
```
example = three_heads[0][0]
E.head.t(example)
A.prettyTuple((example,) + E.head.t(example), seqNumber=0, withNodes=True)
```
## Getting an Object of a Preposition
Often one wants to find objects of prepositions without accidentally selecting secondary, modifying elements, and without omitting coordinated objects. This is now possible with the new `obj_prep` edge feature. A few examples are provided below. We highlight prepositions in blue, and their objects in pink.
```
E.obj_prep.f(23)
find_objs = '''
phrase
word
<obj_prep- word
'''
find_objs = A.search(find_objs)
highlights = {}
for result in find_objs:
    highlights[result[1]] = 'lightblue'
    highlights[result[2]] = 'pink'
A.show(find_objs, highlights=highlights, end=10)
```
Note that prepositional terms such as פני and תוך are properly treated as prepositions. This is due to a new custom set of prepositional words which were needed when processing the `heads` and `obj_prep` features. This feature is made available in `sem_set`, which has two values: `prep` and `quant`.
Below are cases of `prep` that are marked in BHSA as `subs` (noun):
```
sem_preps = A.search('''
word pdp=subs sem_set=prep
''')
A.show(sem_preps, end=10, extraFeatures={'sem_set'})
```
Returning to `obj_prep`, let's find cases where a preposition has more than one object. We do this with a hand-coded method this time. NB that only the cases of multiple objects of prepositions are highlighted.
```
results = []
highlights = {}
for prep in F.sem_set.s('prep'):
    objects = E.obj_prep.t(prep)
    if len(objects) > 1:
        phrase = L.u(prep, 'phrase')[0]
        results.append((phrase, prep) + objects)
        # update highlights
        highlights[prep] = 'lightblue'
        highlights.update({obj: 'pink' for obj in objects})
A.show(results, highlights=highlights, end=10, withNodes=True)
```
## Getting Nominal Heads
There are cases where it is beneficial to simply select any nominal head elements from underneath prepositional phrases ("nominal" here meaning any non-prepositional head). This is especially relevant when prepositional phrases are chained together, and the nominal element is difficult to recover. For these cases there is the feature `nhead`. Below we find such cases with a simple search:
```
find_nheads = A.search('''
p:phrase typ=PP
/with/
=: word sem_set=prep
<: word sem_set=prep
/-/
< w1:word
< w2:word
p <nhead- w1
p <nhead- w2
''')
A.show(find_nheads, end=10, withNodes=True)
```
## Using Enhanced Prepositions and Quantifiers
The `sem_set` feature brings enhanced preposition and quantifier semantic data, which was used in the calculations of `heads` features. We have already seen `prep` in action. Let's have a look at quantifiers.
In the BHSA base dataset, quantifiers can only be known through the `ls` (lexical set) feature, where a value of `card` marks cardinal numbers. But other kinds of quantifiers are not identifiable. Let's have a look at what kinds of quantifiers `sem_set=quant` makes available...
```
sem_quants = A.search('''
phrase
word ls#card sem_set=quant
''')
A.show(sem_quants, end=5, extraFeatures={'sem_set'})
```
These are all cases of כל. Let's find other kinds. We add a shuffle to randomize them as well.
```
import random
nonKL = [result for result in sem_quants if F.lex.v(result[1]) != 'KL/']
random.shuffle(nonKL)
for i, result in enumerate(nonKL[:10]):
    A.prettyTuple(result, extraFeatures={'sem_set'}, seqNumber=i)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import sys
os.chdir(sys.path[0]+"/../data")
import urllib.request
from bs4 import BeautifulSoup
import pandas as pd
import re
from tqdm import tqdm
categories = [
"100 metres, Men",
"200 metres, Men",
"400 metres, Men",
"800 metres, Men",
"1,500 metres, Men",
"5,000 metres, Men",
"10,000 metres, Men",
"Marathon, Men",
"110 metres Hurdles, Men",
"400 metres Hurdles, Men",
"3,000 metres Steeplechase, Men",
"4 x 100 metres Relay, Men",
"4 x 400 metres Relay, Men",
"20 kilometres Walk, Men",
"50 kilometres Walk, Men",
"100 metres, Women",
"200 metres, Women",
"400 metres, Women",
"800 metres, Women",
"1,500 metres, Women",
"5,000 metres, Women",
"10,000 metres, Women",
"Marathon, Women",
"110 metres Hurdles, Women",
"400 metres Hurdles, Women",
"3,000 metres Steeplechase, Women",
"4 x 100 metres Relay, Women",
"4 x 400 metres Relay, Women",
"20 kilometres Walk, Women",
]
data = []
for edition in tqdm(range(1, 62)):  # Data from 1896 to 2020
    edition_url = f"http://www.olympedia.org/editions/{edition}/sports/ATH"
    response = urllib.request.urlopen(edition_url)
    edition_soup = BeautifulSoup(response, 'html.parser')
    title = edition_soup.find_all("h1")[0]
    if "Winter" in title.text:
        continue  # Skip Winter Olympics
    try:
        edition_year = int(re.findall(r"\d+", title.text)[0])
    except IndexError:
        continue  # Sometimes the page seems to not exist?
    for category in categories:
        try:
            elem = edition_soup.find_all("a", string=category)[0]
        except IndexError:
            continue
        href = elem.get('href')
        event_url = "http://www.olympedia.org" + href
        response = urllib.request.urlopen(event_url)
        soup = BeautifulSoup(response, 'html.parser')
        table = soup.find_all("table", {"class": "table table-striped"})[0]
        df = pd.read_html(str(table))[0]
        try:
            # Parse winning times like "2-11:35.5" (h-m:s), "11:35.5" (m:s) or "10.8" (s)
            final_time_raw = df['Final'][0].split()[0]
            h, m, s = re.findall(r"^(?:(\d{0,2})-)?(?:(\d{0,2}):)?(\d{0,2}\.?\d*)",
                                 final_time_raw)[0]
            h, m, s = int(h) if len(h) > 0 else 0, int(m) if len(m) > 0 else 0, float(s)
            gold_medal_time = h*60*60 + m*60 + s
        except KeyError:
            continue
        data.append({
            "category": category,
            "year": edition_year,
            "time": gold_medal_time,
            "reference": event_url,
        })
df = pd.DataFrame(data)
df.to_csv('olympics_athletic_gold_medal_times.csv')
df
```
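The time-parsing step above can be exercised in isolation. The regex splits a raw result string into optional hours (before a dash), optional minutes (before a colon), and seconds; the helper name below is introduced just for this sketch:

```python
import re

# Same pattern as used in the scraper above.
TIME_RE = r"^(?:(\d{0,2})-)?(?:(\d{0,2}):)?(\d{0,2}\.?\d*)"

def to_seconds(raw):
    """Convert strings like '2-11:35.5', '2:03.5' or '10.8' to seconds."""
    h, m, s = re.findall(TIME_RE, raw)[0]
    h = int(h) if h else 0
    m = int(m) if m else 0
    return h*3600 + m*60 + float(s)

print(to_seconds("10.8"))       # 10.8   (sprint, seconds only)
print(to_seconds("2:03.5"))     # 123.5  (minutes:seconds)
print(to_seconds("2-11:35.5"))  # 7895.5 (marathon, h-m:s)
```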
```
#pip install xgboost
import xgboost as xgb
from sklearn.metrics import mean_squared_error
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
df = pd.read_csv('2019_data.csv')
df.head(-5)
df.dropna(subset=['LSOA code', 'Month', 'Location', 'Crime type', 'Longitude', 'Latitude', 'Employment Domain Score',
'Income Domain Score', 'IDACI Score', 'IDAOPI Score', 'Police Strength',
'Police Funding', 'Population'], inplace=True)
df.head(-5)
df.sort_values(by="Month")
df.columns
# Convert decimal commas to dots so the score columns parse as floats,
# stripping any stray letters first (regex flags made explicit for newer pandas)
df['Employment Domain Score'] = df['Employment Domain Score'].str.replace('[A-Za-z]', '', regex=True).str.replace(',', '.', regex=False).astype(float)
df['Income Domain Score'] = df['Income Domain Score'].str.replace('[A-Za-z]', '', regex=True).str.replace(',', '.', regex=False).astype(float)
df['IDACI Score'] = df['IDACI Score'].str.replace('[A-Za-z]', '', regex=True).str.replace(',', '.', regex=False).astype(float)
df['IDAOPI Score'] = df['IDAOPI Score'].str.replace('[A-Za-z]', '', regex=True).str.replace(',', '.', regex=False).astype(float)
df['Police Strength'] = df['Police Strength'].str.replace('[A-Za-z]', '', regex=True).str.replace(',', '.', regex=False).astype(float)
df.dtypes
df["Population"] = df["Population"].astype(str)
# '.' is a thousands separator in Population, so drop it literally (regex=False)
df['Population'] = df['Population'].str.replace('[A-Za-z]', '', regex=True).str.replace('.', '', regex=False).astype(float)
data = df.groupby('Reported by').agg({'Crime type': 'count', 'Employment Domain Score': 'mean', 'Income Domain Score': 'mean', 'IDACI Score': 'mean', 'IDAOPI Score': 'mean', 'Police Strength': 'mean', 'Police Funding': 'mean', 'Population': 'mean'})
data.reset_index()
X, y = data.iloc[:,0:],data.iloc[:,0]
X
y
X, y = data.iloc[:,:-1],data.iloc[:,-1]
data_dmatrix = xgb.DMatrix(data=X,label=y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=111)
xg_reg = xgb.XGBRegressor(objective='reg:squarederror',  # 'reg:linear' is deprecated
                          colsample_bytree=0.3, learning_rate=0.1,
                          max_depth=5, alpha=10, n_estimators=10)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
```
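The RMSE printed above is just the square root of the mean squared error. A quick hand computation (toy numbers, not the crime data) shows what the metric measures:

```python
import math

y_true = [10.0, 12.0, 8.0]
y_pred = [11.0, 12.0, 6.0]

# mean of squared residuals, then square root
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = math.sqrt(mse)
print(rmse)  # sqrt((1 + 0 + 4) / 3) ≈ 1.291
```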
## K-Fold Cross Validation
```
params = {"objective": "reg:squarederror",  # 'reg:linear' is deprecated
          'colsample_bytree': 0.3, 'learning_rate': 0.1,
          'max_depth': 10, 'alpha': 10}
cv_results = xgb.cv(dtrain=data_dmatrix, params=params, nfold=3,
num_boost_round=50,early_stopping_rounds=10,metrics="rmse", as_pandas=True, seed=123)
cv_results.head()
print((cv_results["test-rmse-mean"]).tail(1))
xg_reg = xgb.train(params=params, dtrain=data_dmatrix, num_boost_round=10)
import matplotlib.pyplot as plt
xgb.plot_tree(xg_reg,num_trees=0)
plt.rcParams['figure.figsize'] = [50, 10]
plt.show()
xgb.plot_importance(xg_reg)
plt.rcParams['figure.figsize'] = [5, 5]
plt.savefig("feature_importance.png")
plt.show()
```
```
# Jovian Commit Essentials
# Please retain and execute this cell without modifying the contents for `jovian.commit` to work
!pip install jovian --upgrade -q
import jovian
jovian.set_project('pandas-practice-assignment')
jovian.set_colab_id('1EMzM1GAuekn6b3mjbgjC83UH-2XgQHAe')
```
# Assignment 3 - Pandas Data Analysis Practice
*This assignment is a part of the course ["Data Analysis with Python: Zero to Pandas"](https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas)*
In this assignment, you'll get to practice some of the concepts and skills covered this tutorial: https://jovian.ml/aakashns/python-pandas-data-analysis
As you go through this notebook, you will find a **???** in certain places. To complete this assignment, you must replace all the **???** with appropriate values, expressions or statements to ensure that the notebook runs properly end-to-end.
Some things to keep in mind:
* Make sure to run all the code cells, otherwise you may get errors like `NameError` for undefined variables.
* Do not change variable names, delete cells or disturb other existing code. It may cause problems during evaluation.
* In some cases, you may need to add some code cells or new statements before or after the line of code containing the **???**.
* Since you'll be using a temporary online service for code execution, save your work by running `jovian.commit` at regular intervals.
* Questions marked **(Optional)** will not be considered for evaluation, and can be skipped. They are for your learning.
You can make submissions on this page: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice
If you are stuck, you can ask for help on the community forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/3 . You can get help with errors or ask for hints, describe your approach in simple words, link to documentation, but **please don't ask for or share the full working answer code** on the forum.
## How to run the code and save your work
The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks.
Before starting the assignment, let's save a snapshot of it to your Jovian.ml profile, so that you can access it later and continue your work.
```
import jovian
jovian.commit(project='pandas-practice-assignment', environment=None)
# Run the next line to install Pandas
!pip install pandas --upgrade
import pandas as pd
```
In this assignment, we're going to analyze and operate on data from a CSV file. Let's begin by downloading the CSV file.
```
from urllib.request import urlretrieve
urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/countries.csv',
'countries.csv')
```
Let's load the data from the CSV file into a Pandas data frame.
```
countries_df = pd.read_csv('countries.csv')
countries_df
```
**Q: How many countries does the dataframe contain?**
Hint: Use the `.shape` method.
```
num_countries = countries_df.shape[0]
print('There are {} countries in the dataset'.format(num_countries))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Retrieve a list of continents from the dataframe.**
*Hint: Use the `.unique` method of a series.*
```
continents = countries_df["continent"].unique()
continents
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: What is the total population of all the countries listed in this dataset?**
```
total_population = countries_df["population"].sum()
print('The total population is {}.'.format(int(total_population)))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: (Optional) What is the overall life expectancy across the world?**
*Hint: You'll need to take a weighted average of life expectancy using populations as weights.*
```
jovian.commit(project='pandas-practice-assignment', environment=None)
```
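For the optional question above, the weighted average amounts to multiplying each country's life expectancy by its population, summing, and dividing by the total population. A plain-Python sketch with made-up numbers (the real answer requires the `life_expectancy` column of `countries_df`):

```python
# Hypothetical (population, life_expectancy) pairs -- illustration only
countries = [
    (1_400_000_000, 77.0),
    (330_000_000, 79.0),
    (80_000_000, 81.0),
]
total_pop = sum(pop for pop, _ in countries)
weighted_life_exp = sum(pop * le for pop, le in countries) / total_pop
print(round(weighted_life_exp, 2))
```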
**Q: Create a dataframe containing 10 countries with the highest population.**
*Hint: Chain the `sort_values` and `head` methods.*
```
most_populous_df = countries_df.sort_values("population", ascending=False).head(10)
most_populous_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Add a new column in `countries_df` to record the overall GDP per country (product of population & per capita GDP).**
```
countries_df['gdp'] = countries_df["population"]*countries_df["gdp_per_capita"]
countries_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: (Optional) Create a dataframe containing the 10 countries with the lowest GDP per capita, among the countries with population greater than 100 million.**
```
lowest_gdp_df = countries_df[countries_df["population"] > 100000000].sort_values("gdp_per_capita").head(10).reset_index()
lowest_gdp_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a data frame that counts the number of countries in each continent.**
*Hint: Use `groupby`, select the `location` column and aggregate using `count`.*
```
country_counts_df = countries_df.groupby("continent")["location"].count()
country_counts_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a data frame showing the total population of each continent.**
*Hint: Use `groupby`, select the population column and aggregate using `sum`.*
```
continent_populations_df = countries_df.groupby("continent")["population"].sum()
continent_populations_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
Let's download another CSV file containing overall Covid-19 stats for various countries, and read the data into another Pandas data frame.
```
urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/covid-countries-data.csv',
'covid-countries-data.csv')
covid_data_df = pd.read_csv('covid-countries-data.csv')
covid_data_df
```
**Q: Count the number of countries for which the `total_tests` data is missing.**
*Hint: Use the `.isna` method.*
```
total_tests_missing = covid_data_df[covid_data_df["total_tests"].isna()]["location"].count()
print("The data for total tests is missing for {} countries.".format(int(total_tests_missing)))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
Let's merge the two data frames, and compute some more metrics.
**Q: Merge `countries_df` with `covid_data_df` on the `location` column.**
*Hint: Use the `.merge` method on `countries_df`.*
```
combined_df = countries_df.merge(covid_data_df, on="location")
combined_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Add columns `tests_per_million`, `cases_per_million` and `deaths_per_million` into `combined_df`.**
```
combined_df['tests_per_million'] = combined_df['total_tests'] * 1e6 / combined_df['population']
combined_df['cases_per_million'] = combined_df['total_cases'] * 1e6 / combined_df['population']
combined_df['deaths_per_million'] = combined_df['total_deaths'] * 1e6 / combined_df['population']
combined_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
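Each per-million column is a simple rescaling: the raw count divided by population, times one million. Checking the arithmetic with toy numbers:

```python
# Toy values for illustration only
population = 50_000_000
total_tests = 1_000_000

tests_per_million = total_tests * 1e6 / population
print(tests_per_million)  # 20000.0
```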
**Q: Create a dataframe with the 10 countries that have the highest number of tests per million people.**
```
highest_tests_df = combined_df.sort_values("tests_per_million",ascending=False).head(10).reset_index()
highest_tests_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a dataframe with the 10 countries that have the highest number of positive cases per million people.**
```
highest_cases_df = combined_df.sort_values("cases_per_million",ascending=False).head(10).reset_index()
highest_cases_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a dataframe with the 10 countries that have the highest number of deaths per million people.**
```
highest_deaths_df = combined_df.sort_values("deaths_per_million",ascending=False).head(10).reset_index()
highest_deaths_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**(Optional) Q: Count the number of countries that feature in both the lists of "highest number of tests per million" and "highest number of cases per million".**
```
highest_test_and_cases = highest_cases_df["location"].isin(highest_tests_df["location"]).sum()
print(f"Number of countries appearing in both the highest-cases and highest-tests lists: {highest_test_and_cases}")
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**(Optional) Q: Count the number of countries that feature in both the lists "20 countries with lowest GDP per capita" and "20 countries with the lowest number of hospital beds per thousand population". Only consider countries with a population higher than 10 million while creating the lists.**
```
lowest_gdp_df = countries_df[countries_df["population"] > 10000000].sort_values("gdp_per_capita").head(20).reset_index()
lowest_gdp_df
lowest_hospital_df = countries_df[countries_df["population"] > 10000000].sort_values("hospital_beds_per_thousand").head(20).reset_index()
lowest_hospital_df
# set(lowest_gdp_df['location']).intersection(set(lowest_hospital_df['location']))
total_countries_having_low_gdp_and_bed = lowest_gdp_df['location'].isin(lowest_hospital_df['location']).sum()
print(f"Number of countries in both the lowest-GDP-per-capita and lowest-hospital-beds lists: {total_countries_having_low_gdp_and_bed}")
import jovian
jovian.commit(project='pandas-practice-assignment', environment=None)
```
## Submission
Congratulations on making it this far! You've reached the end of this assignment, and you just completed your first real-world data analysis problem. It's time to record one final version of your notebook for submission.
Make a submission here by filling the submission form: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice
Also make sure to help others on the forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/2
```
jovian.submit(assignment="zero-to-pandas-a3")
```
```
import numpy as np
import S_Dbw as sdbw
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs  # sklearn.datasets.samples_generator was removed in scikit-learn 0.24
from sklearn.metrics.pairwise import pairwise_distances_argmin
np.random.seed(0)
S_Dbw_result = []
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
cluster_std=[0.7,0.3,1.2]
n_clusters = len(centers)
X1, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[0])
X2, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[1])
X3, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[2])
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.08, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06']
for item, X in enumerate([X1, X2, X3]):
    k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
    k_means.fit(X)
    k_means_cluster_centers = k_means.cluster_centers_
    k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
    KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)
    S_Dbw_result.append(KS.S_Dbw_result())
    ax = fig.add_subplot(1, 3, item+1)
    for k, col in zip(range(n_clusters), colors):
        my_members = k_means_labels == k
        cluster_center = k_means_cluster_centers[k]
        ax.plot(X[my_members, 0], X[my_members, 1], 'w',
                markerfacecolor=col, marker='.')
        ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
                markeredgecolor='k', markersize=6)
    ax.set_title('S_Dbw: %.3f' % (S_Dbw_result[item]))
    ax.set_ylim((-4, 4))
    ax.set_xlim((-4, 4))
    plt.text(-3.5, 1.8, 'cluster_std: %f' % (cluster_std[item]))
plt.savefig('./pic1.png', dpi=150)
np.random.seed(0)
S_Dbw_result = []
batch_size = 45
centers = [[[1, 1], [-1, -1], [1, -1]],
[[0.8, 0.8], [-0.8, -0.8], [0.8, -0.8]],
[[1.2, 1.2], [-1.2, -1.2], [1.2, -1.2]]]
n_clusters = len(centers)
X1, _ = make_blobs(n_samples=3000, centers=centers[0], cluster_std=0.7)
X2, _ = make_blobs(n_samples=3000, centers=centers[1], cluster_std=0.7)
X3, _ = make_blobs(n_samples=3000, centers=centers[2], cluster_std=0.7)
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.2, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06']
for item, X in enumerate([X1, X2, X3]):
    k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
    k_means.fit(X)
    k_means_cluster_centers = k_means.cluster_centers_
    k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
    KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)
    S_Dbw_result.append(KS.S_Dbw_result())
    ax = fig.add_subplot(1, 3, item+1)
    for k, col in zip(range(n_clusters), colors):
        my_members = k_means_labels == k
        cluster_center = k_means_cluster_centers[k]
        ax.plot(X[my_members, 0], X[my_members, 1], 'w',
                markerfacecolor=col, marker='.')
        ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
                markeredgecolor='k', markersize=6)
    ax.set_title('S_Dbw: %.3f' % (S_Dbw_result[item]))
    ax.set_ylim((-4, 4))
    ax.set_xlim((-4, 4))
    ax.set_xlabel('centers: \n%s' % (centers[item]))
plt.savefig('./pic2.png', dpi=150)
```
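As the plots suggest, S_Dbw rewards compact, well-separated clusters: it grows with `cluster_std` in the first experiment and shrinks as the centers move apart in the second. The "scatter" half of that idea can be sketched by hand with a simplified average within-cluster variance (an illustration only, not the full S_Dbw formula):

```python
# Average within-cluster variance for two toy 1-D clusters,
# a rough proxy for the "scatter" term of S_Dbw
clusters = [[0.0, 1.0], [10.0, 12.0]]

def variance(points):
    mean = sum(points) / len(points)
    return sum((p - mean) ** 2 for p in points) / len(points)

scatter = sum(variance(c) for c in clusters) / len(clusters)
print(scatter)  # (0.25 + 1.0) / 2 = 0.625
```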
# Analysis Report VII
## Creating Arguments
```
import pandas as pd
dados = pd.read_csv('dados/aluguel_residencial.csv', sep = ';')
dados.head(10)
```
##### https://pandas.pydata.org/pandas-docs/stable/reference/frame.html
```
dados['Valor'].mean()  # overall mean
dados.Bairro.unique()
bairros = ['Copacabana', 'Jardim Botânico', 'Centro', 'Higienópolis',
'Cachambi', 'Barra da Tijuca', 'Ramos', 'Grajaú',
'Lins de Vasconcelos', 'Taquara', 'Freguesia (Jacarepaguá)',
'Tijuca', 'Olaria', 'Ipanema', 'Campo Grande', 'Botafogo',
'Recreio dos Bandeirantes', 'Leblon', 'Jardim Oceânico', 'Humaitá',
'Península', 'Méier', 'Vargem Pequena', 'Maracanã', 'Jacarepaguá',
'São Conrado', 'Vila Valqueire', 'Gávea', 'Cosme Velho',
'Bonsucesso', 'Todos os Santos', 'Laranjeiras', 'Itanhangá',
'Flamengo', 'Piedade', 'Lagoa', 'Catete', 'Jardim Carioca',
'Benfica', 'Glória', 'Praça Seca', 'Vila Isabel', 'Engenho Novo',
'Engenho de Dentro', 'Pilares', 'Água Santa', 'São Cristóvão',
'Ilha do Governador', 'Jardim Sulacap', 'Oswaldo Cruz',
'Vila da Penha', 'Anil', 'Vargem Grande', 'Tanque', 'Vaz Lobo',
'Madureira', 'São Francisco Xavier', 'Pechincha', 'Leme', 'Irajá',
'Quintino Bocaiúva', 'Urca', 'Penha', 'Gardênia Azul',
'Rio Comprido', 'Andaraí', 'Santa Teresa', 'Inhaúma',
'Marechal Hermes', 'Curicica', 'Santíssimo', 'Moneró', 'Camorim',
'Cascadura', 'Praia da Bandeira', 'Saúde', 'Joá', 'Realengo',
'Fátima', 'Inhoaíba', 'Rocha', 'Jardim Guanabara', 'Jabour',
'Braz de Pina', 'Praça da Bandeira', 'Vila Kosmos', 'Vista Alegre',
'Encantado', 'Campinho', 'Guaratiba', 'Riachuelo', 'Bangu', 'Lapa',
'Catumbi', 'Penha Circular', 'Abolição', 'Tomás Coelho', 'Colégio',
'Pavuna', 'Santa Cruz', 'Alto da Boa Vista', 'Cidade Nova',
'Bento Ribeiro', 'Estácio', 'Jardim América', 'Cordovil', 'Caju',
'Pedra de Guaratiba', 'Padre Miguel', 'Paciência', 'Del Castilho',
'Arpoador', 'Sampaio', 'Anchieta', 'Icaraí', 'Senador Vasconcelos',
'Rocha Miranda', 'Gamboa', 'Maria da Graça', 'Barra de Guaratiba',
'Vicente de Carvalho', 'Paquetá', 'Largo do Machado',
'Parada de Lucas', 'Freguesia (Ilha do Governador)', 'Portuguesa',
'Guadalupe', 'Parque Anchieta', 'Turiaçu', 'Pitangueiras',
'Vila Militar', 'Vidigal', 'Senador Camará', 'Usina',
'Vigário Geral', 'Cosmos', 'Jacaré', 'Cocotá', 'Honório Gurgel',
'Engenho da Rainha', 'Cachamorra', 'Zumbi', 'Tauá', 'Santo Cristo',
'Ribeira', 'Magalhães Bastos', 'Cacuia', 'Bancários', 'Cavalcanti',
'Rio da Prata', 'Cidade Jardim', 'Coelho Neto']
selecao = dados['Bairro'].isin(bairros)
dados = dados[selecao]
dados.Bairro.drop_duplicates()
grupo_bairro = dados.groupby('Bairro')  # group the rows by neighborhood (Bairro)
type(grupo_bairro)
grupo_bairro.groups  # shows each Bairro and the row indices where it appears
for bairro, data in grupo_bairro:  # loop to show the group keys (the neighborhoods)
    print(bairro)
for bairro, data in grupo_bairro:  # a DataFrame is stored for each neighborhood
    print(type(data))
for bairro, data in grupo_bairro:
    print(f'{bairro} --> {data.Valor.mean()}')  # mean rent (Valor) for each neighborhood
grupo_bairro[['Valor', 'Condominio']].mean().round(2)  # a more concise way to build the same summary
```
## Descriptive Statistics
```
grupo_bairro['Valor'].describe().round(2).head(10)  # general descriptive statistics
grupo_bairro['Valor'].aggregate(['min', 'max']).rename(columns={'min': 'Minimum', 'max': 'Maximum'})
# aggregate only the statistics of your choice
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(20, 10))
fig = grupo_bairro['Valor'].mean().plot.bar(color='blue')
fig.set_ylabel('Rent Value')
fig.set_title('Average Rent by Neighborhood', {'fontsize': 22})
```
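The groupby-then-aggregate pattern used throughout this section boils down to bucketing rows by a key and reducing each bucket. A plain-Python sketch of what `groupby('Bairro').mean()` does (toy data, not the rental dataset):

```python
# Rows of (neighborhood, rent) -- toy data for illustration only
rows = [('Centro', 1000.0), ('Centro', 1200.0), ('Ipanema', 3000.0)]

# Bucket values by key...
groups = {}
for bairro, valor in rows:
    groups.setdefault(bairro, []).append(valor)

# ...then reduce each bucket with the aggregation function
means = {bairro: sum(vs) / len(vs) for bairro, vs in groups.items()}
print(means)  # {'Centro': 1100.0, 'Ipanema': 3000.0}
```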
```
from xml.etree import ElementTree
from xml.dom import minidom
from xml.etree.ElementTree import Element, SubElement, Comment, indent
def prettify(elem):
    """Return a pretty-printed XML string for the Element."""
    rough_string = ElementTree.tostring(elem, encoding="ISO-8859-1")
    reparsed = minidom.parseString(rough_string)
    return reparsed.toprettyxml(indent="\t")
import numpy as np
import os
valve_ids = np.arange(2,4+1)
hyb_ids = np.arange(34,36+1)
reg_names = [f'GM{_i}' for _i in np.arange(1,3+1)]
source_folder = r'D:\Shiwei\20210706-P_Forebrain_CTP09_only'
target_drive = r'\\KOLMOGOROV\Chromatin_NAS_5'
imaging_protocol = r'Zscan_750_647_488_s60_n200'
bleach_protocol = r'Bleach_740_647_s5'
cmd_seq = Element('command_sequence')
for _vid, _hid, _rname in zip(valve_ids, hyb_ids, reg_names):
    # comment marking this hybridization round
    comment = Comment(f"Hyb {_hid} for {_rname}")
    cmd_seq.append(comment)
    # TCEP
    tcep = SubElement(cmd_seq, 'valve_protocol')
    tcep.text = "Flow TCEP"
    # flow adaptor
    adt = SubElement(cmd_seq, 'valve_protocol')
    adt.text = f"Hybridize {_vid}"
    # delay time
    adt_incubation = SubElement(cmd_seq, 'delay')
    adt_incubation.text = "60000"
    # change to bleach directory
    change_dir = SubElement(cmd_seq, 'change_directory')
    change_dir.text = os.path.join(source_folder, "Bleach")
    # wakeup
    wakeup = SubElement(cmd_seq, 'wakeup')
    wakeup.text = "5000"
    # bleach loop
    loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment="name")
    loop_item = SubElement(loop, 'item', name=bleach_protocol)
    loop_item.text = " "
    # wash
    wash = SubElement(cmd_seq, 'valve_protocol')
    wash.text = "Short Wash"
    # readouts
    readouts = SubElement(cmd_seq, 'valve_protocol')
    readouts.text = "Flow Readouts"
    # delay time
    adt_incubation = SubElement(cmd_seq, 'delay')
    adt_incubation.text = "60000"
    # change to imaging directory
    change_dir = SubElement(cmd_seq, 'change_directory')
    change_dir.text = os.path.join(source_folder, f"H{_hid}{_rname.upper()}")
    # wakeup
    wakeup = SubElement(cmd_seq, 'wakeup')
    wakeup.text = "5000"
    # hybridization imaging loop
    loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment="name")
    loop_item = SubElement(loop, 'item', name=imaging_protocol)
    loop_item.text = " "
    # delay time
    delay = SubElement(cmd_seq, 'delay')
    delay.text = "2000"
    # copy folder
    copy_dir = SubElement(cmd_seq, 'copy_directory')
    source_dir = SubElement(copy_dir, 'source_path')
    source_dir.text = change_dir.text  # i.e. the most recent change_directory entry
    target_dir = SubElement(copy_dir, 'target_path')
    target_dir.text = os.path.join(target_drive,
                                   os.path.basename(os.path.dirname(source_dir.text)),
                                   os.path.basename(source_dir.text))
    del_source = SubElement(copy_dir, 'delete_source')
    del_source.text = "True"
    # add whitespace for readability
    indent(target_dir)
print(prettify(cmd_seq))
```
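For the acquisition software to consume the sequence, the prettified XML still has to be written to disk. A minimal sketch of that last step, using the same `prettify` approach on a tiny element tree (the output filename is a hypothetical placeholder):

```python
from xml.dom import minidom
from xml.etree import ElementTree
from xml.etree.ElementTree import Element, SubElement

# Build a tiny command sequence, mirroring the structure above.
cmd_seq = Element('command_sequence')
delay = SubElement(cmd_seq, 'delay')
delay.text = "60000"

# Serialize with the same prettify approach used above.
rough = ElementTree.tostring(cmd_seq, encoding="ISO-8859-1")
pretty = minidom.parseString(rough).toprettyxml(indent="\t")

# Hypothetical output path; point this at the experiment folder.
with open("generated_protocol.xml", "w") as f:
    f.write(pretty)
```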
| github_jupyter |
```
!pip install praw
import praw
from time import sleep
import datetime
from datetime import timedelta
```
I am going to start out using the Reddit app, pulling from the subreddit WallStreetBets. There has been a lot of news about this subreddit and its coordination on GameStop, AMC, and BB shares. I want to analyze the lingo used in this forum and the different shares being discussed. My guesses: GME will be the most talked-about stock, 'diamond hands' will be the most common lingo, and DFV will be associated with GME over 60% of the time.
```
uname = 'Apetty914'
upassword = 'NOpe noep nopppeee!'
app_id = 'BpXCT5yB3PXJ3A'
app_secret = 'gkCFVhnara4PjBRGnTsNz4JR3N-GPw'
#This is just identifying the app made in reddit
reddit = praw.Reddit(user_agent="Tendies (by /u/Apetty914)",
client_id=app_id, client_secret=app_secret,
username=uname, password=upassword)
```
I have provided my identification and the app being used. Now I want to start pulling the searches I want. I am not going to collect comments, because I would be here all day and night.
```
subreddit = reddit.subreddit('wallstreetbets')
# Note: stream.submissions() runs indefinitely; interrupt the cell to stop it.
for submission in subreddit.stream.submissions():
    print(submission.title)
#I am making the first list of the stocks.
topiclist1 = []
keywords = 'GME OR AMC OR BB'  # search() takes a query string, not a tuple
subreddit = reddit.subreddit('wallstreetbets')
resp = subreddit.search(keywords, limit=100)
for submission in resp:
    topiclist1.append(submission.title)
    sleep(2)
print(submission.id, submission.title, submission.author.name)  # sanity check on the last post
# Now I am going to search for words that are the 'lingo' in the subreddit.
topiclist2 = []
keywords = 'tendies OR "diamond hands" OR hold OR "paper hands"'
subreddit = reddit.subreddit('wallstreetbets')
resp = subreddit.search(keywords, limit=100)
for submission in resp:
    topiclist2.append(submission.title)
    sleep(2)
print(submission.id, submission.title, submission.author.name)
# Lastly I am going to look for mentions of DFV, the usual abbreviation of his username.
topiclist3 = []
keywords = 'DFV'
subreddit = reddit.subreddit('wallstreetbets')
resp = subreddit.search(keywords, limit=100)
for submission in resp:
    topiclist3.append(submission.title)
    sleep(2)
print(submission.id, submission.title, submission.author.name)
```
Just checking to make sure a topic list has the posts saved.
```
len(topiclist2)
print(topiclist1)
```
I want to run each of my topic lists through a loop that will tell me which keyword was in the post. Then I can count them and graph them.
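The per-keyword matching can also be sketched compactly with `collections.Counter`, which tags each title with every keyword it contains and counts the hits; this toy version uses made-up titles rather than the PRAW results:

```python
from collections import Counter

# Made-up post titles standing in for the PRAW search results.
titles = [
    "GME to the moon",
    "Holding my AMC shares",
    "GME and BB earnings today",
    "No tickers here",
]
keywords = ['GME', 'AMC', 'BB']

# Tag each title with every keyword it contains, then count.
hits = Counter(kw for title in titles for kw in keywords if kw in title)
print(hits)  # Counter({'GME': 2, 'AMC': 1, 'BB': 1})
```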
```
stock = []
for topic in topiclist1:
    matched = False
    if 'GME' in topic:
        stock.append('GME')
        matched = True
    if 'BB' in topic:
        stock.append('BB')
        matched = True
    if 'AMC' in topic:
        stock.append('AMC')
        matched = True
    if not matched:  # title contained none of the tickers
        print('error')
#print out the new stock list. Be able to count after.
stock
stock.count('BB')
```
Now I have to make the lists for the other topic lists.
```
lingo = []
for topic in topiclist2:
    matched = False
    if 'tendies' in topic:
        lingo.append('tendies')
        matched = True
    if 'diamond hands' in topic:
        lingo.append('diamond hands')
        matched = True
    if 'paper hands' in topic:
        lingo.append('paper hands')
        matched = True
    if 'hold' in topic:
        lingo.append('hold')
        matched = True
    if not matched:  # title contained none of the lingo terms
        print('error')
len(lingo)
# I want to search these titles for words that could signal following his lead or advice:
# things like 'still in', 'like the stock', and GME, the stock he is most associated with.
DFVlist = []
for topic in topiclist3:
    matched = False
    if 'GME' in topic:
        DFVlist.append('GME')
        matched = True
    if 'like the stock' in topic:
        DFVlist.append('like the stock')
        matched = True
    if 'still in' in topic:
        DFVlist.append('still in')
        matched = True
    if not matched:  # title contained none of the phrases
        print('error')
DFVlist
```
So now that I have lists with the phrases isolated, I want to turn them into DataFrames so I can graph them. I have to import pandas and numpy for that.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#make stock list into dataframe
stocks = pd.DataFrame(stock, columns=['ticker'])
stocks
```
Now that I know it works, I will make the other two lists into DataFrames as well.
```
dfvmentions = pd.DataFrame(DFVlist, columns=['mention'])
terms = pd.DataFrame(lingo, columns=['term'])
dfvmentions
terms
```
Now that I have these DataFrames, I can take some quick looks at the data. The stocks have more than 100 entries, meaning titles often mentioned more than one of the stocks. DFV mentions only contained a term I looked for 25% of the time, which could mean people just like to talk about him for other reasons. Also, the lingo terms only had 50 hits out of the 100 titles I pulled; I think it has to do with the conjugation of the terms I looked for.
I want to get counts of each of my variables in each DataFrame, then plot them. Since I want ratios, I will use pie charts.
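`value_counts` is what produces those per-category counts; a quick sketch on a made-up ticker list shows both the counts and the fractions a pie chart would display:

```python
import pandas as pd

# Made-up ticker mentions standing in for the scraped data.
tickers = pd.Series(['GME', 'BB', 'GME', 'AMC', 'BB', 'BB'])
counts = tickers.value_counts()
print(counts)                 # BB: 3, GME: 2, AMC: 1
print(counts / counts.sum())  # the fractions a pie chart would show
```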
```
stocks.ticker.value_counts()
dfvmentions.mention.value_counts()
terms.term.value_counts()
```
I can graph these now for a nice display of the data.
```
stocks.ticker.value_counts().plot.pie()
terms.term.value_counts().plot.pie()
dfvmentions.mention.value_counts().plot.pie()
```
To go over my hunches, or very basic hypotheses: I am on the right track with GME mentions alongside DFV if you only take the last 100 posts (I'm almost positive that going back a month would give different answers). However, 75% of his mentions do not contain a term I looked for. Interesting, and more posts should be collected.
The lingo is very telling about the vibe of the subreddit. 'Diamond hands' refers to holding and not selling. 'Paper hands' was by far the most common term and may be used to mock or call out those who have sold their shares. Emojis replacing the actual words are something to search for in the future.
Regarding the mention of stocks, I was shocked that the last 100 posts were not dominated by GME. I search only for the ticker because it is easier on a trading page and avoids worrying about spelling/capitalization. BB being the overarching dominant mention for the last 100 posts is intriguing, even to me, having watched the community for months.
| github_jupyter |
# Multivariate Regression Metamodel with DOE based on random sampling
* Input variable space should be constructed using random sampling, not classical factorial DOE
* A linear fit is often inadequate, but higher-order polynomial fits often lead to overfitting, i.e. they learn spurious relationships between input and output
* The R-squared of the fit can be a misleading measure in high-dimensional regression
* A metamodel can be constructed by selectively discovering the features (or combinations of features) that matter and shrinking the other high-order terms towards zero
**[LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)) is an effective regularization technique for this purpose**
#### LASSO: Least Absolute Shrinkage and Selection Operator
$$ {\displaystyle \min _{\beta _{0},\beta }\left\{{\frac {1}{N}}\sum _{i=1}^{N}(y_{i}-\beta _{0}-x_{i}^{T}\beta )^{2}\right\}{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}|\leq t.} $$
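Before applying it to the metamodel below, the shrinkage behaviour can be seen on a tiny synthetic problem: with ten candidate features but only two truly active ones, LASSO drives the irrelevant coefficients to (near) zero. A minimal scikit-learn sketch (the feature count and `alpha` are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
# Only features 0 and 3 actually drive the response.
y = 5.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.randn(200)

lasso = Lasso(alpha=0.1)
lasso.fit(X, y)
print(np.round(lasso.coef_, 2))
# The eight inactive features should have coefficients at or near zero.
```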
### Import libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### Global variables
```
N_points = 20 # Number of sample points
# Start small (< 40 points) to see how the regularized model makes a difference,
# then increase the number and compare.
noise_mult = 50 # Multiplier for the noise term
noise_mean = 10 # Mean for the Gaussian noise adder
noise_sd = 10 # Std. Dev. for the Gaussian noise adder
```
### Generate feature vectors based on random sampling
```
X=np.array(10*np.random.randn(N_points,5))
df=pd.DataFrame(X,columns=['Feature'+str(l) for l in range(1,6)])
df.head()
```
### Plot the random distributions of input features
```
for i in df.columns:
    df.hist(i,bins=5,xlabelsize=15,ylabelsize=15,figsize=(8,6))
```
### Generate the output variable by analytic function + Gaussian noise (our goal will be to *'learn'* this function)
#### Let's construct the ground truth (originating) function as follows:
$ y=f(x_1,x_2,x_3,x_4,x_5)= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+\psi(x)\ :\ \psi(x) = {\displaystyle f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}}$
```
# Note: size=N_points draws independent noise for each sample (a single scalar would shift all rows identically).
df['y']=5*df['Feature1']**2+13*df['Feature2']+0.1*df['Feature3']**2*df['Feature1'] \
+2*df['Feature4']*df['Feature5']+0.1*df['Feature5']**3+0.8*df['Feature1']*df['Feature4']*df['Feature5'] \
+noise_mult*np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points)
df.head()
```
### Plot single-variable scatterplots
**It is clear that no pattern can be discerned from these single-variable plots**
```
for i in df.columns:
    df.plot.scatter(i,'y', edgecolors=(0,0,0),s=50,c='g',grid=True)
```
### Standard linear regression
```
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression(normalize=True)
X_linear=df.drop('y',axis=1)
y_linear=df['y']
linear_model.fit(X_linear,y_linear)
y_pred_linear = linear_model.predict(X_linear)
```
### The R-squared of the simple linear fit is very poor and its coefficients have no meaning, i.e. we did not 'learn' the function
```
RMSE_linear = np.sqrt(np.mean(np.square(y_pred_linear-y_linear)))  # root of the *mean* squared error
print("Root-mean-square error of linear model:",RMSE_linear)
coeff_linear = pd.DataFrame(linear_model.coef_,index=df.drop('y',axis=1).columns, columns=['Linear model coefficients'])
coeff_linear
print ("R2 value of linear model:",linear_model.score(X_linear,y_linear))
plt.figure(figsize=(12,8))
plt.xlabel("Predicted value with linear fit",fontsize=20)
plt.ylabel("Actual y-values",fontsize=20)
plt.grid(1)
plt.scatter(y_pred_linear,y_linear,edgecolors=(0,0,0),lw=2,s=80)
plt.plot(y_pred_linear,y_pred_linear, 'k--', lw=2)
```
### Create polynomial features
```
from sklearn.preprocessing import PolynomialFeatures
poly1 = PolynomialFeatures(3,include_bias=False)
X_poly = poly1.fit_transform(X)
X_poly_feature_name = poly1.get_feature_names(['Feature'+str(l) for l in range(1,6)])
print("The feature vector list:\n",X_poly_feature_name)
print("\nLength of the feature vector:",len(X_poly_feature_name))
df_poly = pd.DataFrame(X_poly, columns=X_poly_feature_name)
df_poly.head()
df_poly['y']=df['y']
df_poly.head()
X_train=df_poly.drop('y',axis=1)
y_train=df_poly['y']
```
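As a sanity check on the expansion size: the number of monomials of degree 1 through 3 in 5 variables is 5 + 15 + 35 = 55, matching the feature-vector length printed above. A small sketch verifying the count on random data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_demo = np.random.randn(10, 5)            # 10 samples, 5 features
poly = PolynomialFeatures(3, include_bias=False)
X_expanded = poly.fit_transform(X_demo)
print(X_expanded.shape)                    # (10, 55): all monomials up to degree 3
```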
### Polynomial model without regularization and cross-validation
```
poly2 = LinearRegression(normalize=True)
model_poly=poly2.fit(X_train,y_train)
y_poly = poly2.predict(X_train)
RMSE_poly=np.sqrt(np.mean(np.square(y_poly-y_train)))
print("Root-mean-square error of simple polynomial model:",RMSE_poly)
```
### The non-regularized polynomial model (notice the coefficients are not learned properly)
**Recall that the originating function is:**
$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $
```
coeff_poly = pd.DataFrame(model_poly.coef_,index=df_poly.drop('y',axis=1).columns,
columns=['Coefficients polynomial model'])
coeff_poly
```
#### The R-squared of the simple polynomial model is perfect, but the model is flawed, as shown above: it learned wrong coefficients and overfitted to the data
```
print ("R2 value of simple polynomial model:",model_poly.score(X_train,y_train))
```
### Polynomial model with cross-validation and LASSO regularization
**This is an advanced machine learning method which prevents over-fitting by penalizing high-valued coefficients, i.e. keeping them bounded**
```
from sklearn.linear_model import LassoCV
model1 = LassoCV(cv=10,verbose=0,normalize=True,eps=0.001,n_alphas=100, tol=0.0001,max_iter=5000)
model1.fit(X_train,y_train)
y_pred1 = np.array(model1.predict(X_train))
RMSE_1=np.sqrt(np.mean(np.square(y_pred1-y_train)))
print("Root-mean-square error of Metamodel:",RMSE_1)
coeff1 = pd.DataFrame(model1.coef_,index=df_poly.drop('y',axis=1).columns, columns=['Coefficients Metamodel'])
coeff1
model1.score(X_train,y_train)
model1.alpha_
```
### Printing only the non-zero coefficients of the regularized model (notice the coefficients are learned well enough)
**Recall that the originating function is:**
$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $
```
coeff1[coeff1['Coefficients Metamodel']!=0]
plt.figure(figsize=(12,8))
plt.xlabel("Predicted value with Regularized Metamodel",fontsize=20)
plt.ylabel("Actual y-values",fontsize=20)
plt.grid(1)
plt.scatter(y_pred1,y_train,edgecolors=(0,0,0),lw=2,s=80)
plt.plot(y_pred1,y_pred1, 'k--', lw=2)
```
| github_jupyter |
```
# Dependencies and Setup
import pandas as pd
# import dataframe_image as dfi
# File to Load (Remember to change the path if needed.)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read the School Data and Student Data and store into a Pandas DataFrame
school_data_df = pd.read_csv(school_data_to_load)
student_data_df = pd.read_csv(student_data_to_load)
# Cleaning Student Names and Replacing Substrings in a Python String
# Add each prefix and suffix to remove to a list.
prefixes_suffixes = ["Dr. ", "Mr. ","Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"]
# Iterate through the words in the "prefixes_suffixes" list and replace them with an empty string, "".
for word in prefixes_suffixes:
    # regex=False makes the match literal, so the "." in "Dr. " doesn't act as a wildcard.
    student_data_df["student_name"] = student_data_df["student_name"].str.replace(word, "", regex=False)
# Check names.
student_data_df.head(10)
```
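One subtlety in the cleaning step: in some pandas versions `str.replace` treats the pattern as a regular expression, so the `.` in a prefix like `"Dr. "` can match any character. Passing `regex=False` makes the match literal; a small sketch on made-up names:

```python
import pandas as pd

names = pd.Series(["Dr. Alice Smith", "Mr. Bob Jones PhD"])

# Literal (non-regex) replacement keeps '.' from matching arbitrary characters.
for token in ["Dr. ", "Mr. ", " PhD"]:
    names = names.str.replace(token, "", regex=False)
print(names.tolist())  # ['Alice Smith', 'Bob Jones']
```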
## Deliverable 1: Replace the reading and math scores.
### Replace the 9th grade reading and math scores at Thomas High School with NaN.
```
# Install numpy using conda install numpy or pip install numpy.
# Step 1. Import numpy as np.
import numpy as np
student_data_df
# Step 2. Use the loc method on the student_data_df to select all the reading scores from the 9th grade at Thomas High School and replace them with NaN.
student_data_df.loc[((student_data_df['school_name'] == 'Thomas High School') & (student_data_df['grade'] == '9th')), 'reading_score'] = np.nan
# Step 3. Refactor the code in Step 2 to replace the math scores with NaN.
student_data_df.loc[((student_data_df['school_name'] == 'Thomas High School') & (student_data_df['grade'] == '9th')), 'math_score'] = np.nan
# Step 4. Check the student data for NaN's.
student_data_df[student_data_df['reading_score'].isnull() & student_data_df['math_score'].isnull()]
```
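The `.loc` pattern used above, a boolean row mask combined with a column label, can be checked on a toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'school_name': ['Thomas High School', 'Thomas High School', 'Other High'],
    'grade': ['9th', '10th', '9th'],
    'reading_score': [80.0, 85.0, 90.0],
})

# Blank out reading scores for 9th graders at Thomas High School only.
mask = (df['school_name'] == 'Thomas High School') & (df['grade'] == '9th')
df.loc[mask, 'reading_score'] = np.nan
print(df)
```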
## Deliverable 2 : Repeat the school district analysis
### District Summary
```
# Combine the data into a single dataset
school_data_complete_df = pd.merge(student_data_df, school_data_df, how="left", on="school_name")
school_data_complete_df.head()
# Calculate the Totals (Schools and Students)
school_count = len(school_data_complete_df["school_name"].unique())
student_count = school_data_complete_df["Student ID"].count()
# Calculate the Total Budget
total_budget = school_data_df["budget"].sum()
# Calculate the Average Scores using the "clean_student_data".
average_reading_score = school_data_complete_df["reading_score"].mean()
average_math_score = school_data_complete_df["math_score"].mean()
# Step 1. Get the number of students that are in ninth grade at Thomas High School.
# These students have no grades.
student_count_ninth_thomas = student_data_df[student_data_df['reading_score'].isnull() & student_data_df['math_score'].isnull()].count()[0]
# Get the total student count
student_count = school_data_complete_df["Student ID"].count()
# Step 2. Subtract the number of students that are in ninth grade at
# Thomas High School from the total student count to get the new total student count.
new_student_count = student_count - student_count_ninth_thomas
new_student_count
# Calculate the passing rates using the "clean_student_data".
passing_math_count = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)].count()["student_name"]
passing_reading_count = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)].count()["student_name"]
# Step 3. Calculate the passing percentages with the new total student count.
passing_math_percentage = (passing_math_count / new_student_count) * 100
passing_reading_percentage = (passing_reading_count / new_student_count) * 100
passing_math_percentage, passing_reading_percentage
# Calculate the students who passed both reading and math.
passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)
& (school_data_complete_df["reading_score"] >= 70)]
# Calculate the number of students that passed both reading and math.
overall_passing_math_reading_count = passing_math_reading["student_name"].count()
# Step 4.Calculate the overall passing percentage with new total student count.
overall_passing_math_reading_percentage = (overall_passing_math_reading_count / new_student_count) * 100
overall_passing_math_reading_percentage
# Create a DataFrame
district_summary_df = pd.DataFrame(
[{"Total Schools": school_count,
"Total Students": student_count,
"Total Budget": total_budget,
"Average Math Score": average_math_score,
"Average Reading Score": average_reading_score,
"% Passing Math": passing_math_percentage,
"% Passing Reading": passing_reading_percentage,
"% Overall Passing":
overall_passing_math_reading_percentage}])
# Format the "Total Students" to have the comma for a thousands separator.
district_summary_df["Total Students"] = district_summary_df["Total Students"].map("{:,}".format)
# Format the "Total Budget" to have the comma for a thousands separator, a decimal separator and a "$".
district_summary_df["Total Budget"] = district_summary_df["Total Budget"].map("${:,.2f}".format)
# Format the columns.
district_summary_df["Average Math Score"] = district_summary_df["Average Math Score"].map("{:.1f}".format)
district_summary_df["Average Reading Score"] = district_summary_df["Average Reading Score"].map("{:.1f}".format)
district_summary_df["% Passing Math"] = district_summary_df["% Passing Math"].map("{:.1f}".format)
district_summary_df["% Passing Reading"] = district_summary_df["% Passing Reading"].map("{:.1f}".format)
district_summary_df["% Overall Passing"] = district_summary_df["% Overall Passing"].map("{:.1f}".format)
# Display the data frame
district_summary_df
```
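The `map`-with-format-string pattern used for the summary converts numeric columns into display strings (the columns become `object` dtype afterwards, so formatting should be the last step). A small sketch:

```python
import pandas as pd

df = pd.DataFrame({'Total Budget': [24649428.0], 'Total Students': [39170]})
df['Total Budget'] = df['Total Budget'].map("${:,.2f}".format)
df['Total Students'] = df['Total Students'].map("{:,}".format)
print(df)  # $24,649,428.00 and 39,170
```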
## School Summary
```
# Determine the School Type
per_school_types = school_data_df.set_index(["school_name"])["type"]
# Calculate the total student count.
per_school_counts = school_data_complete_df["school_name"].value_counts()
# Calculate the total school budget and per capita spending
per_school_budget = school_data_complete_df.groupby(["school_name"]).mean()["budget"]
# Calculate the per capita spending.
per_school_capita = per_school_budget / per_school_counts
# Calculate the average test scores.
per_school_math = school_data_complete_df.groupby(["school_name"]).mean()["math_score"]
per_school_reading = school_data_complete_df.groupby(["school_name"]).mean()["reading_score"]
# Calculate the passing scores by creating a filtered DataFrame.
per_school_passing_math = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)]
per_school_passing_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)]
# Calculate the number of students passing math and passing reading by school.
per_school_passing_math = per_school_passing_math.groupby(["school_name"]).count()["student_name"]
per_school_passing_reading = per_school_passing_reading.groupby(["school_name"]).count()["student_name"]
# Calculate the percentage of passing math and reading scores per school.
per_school_passing_math = per_school_passing_math / per_school_counts * 100
per_school_passing_reading = per_school_passing_reading / per_school_counts * 100
# Calculate the students who passed both reading and math.
per_passing_math_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)
& (school_data_complete_df["math_score"] >= 70)]
# Calculate the number of students passing math and passing reading by school.
per_passing_math_reading = per_passing_math_reading.groupby(["school_name"]).count()["student_name"]
# Calculate the percentage of passing math and reading scores per school.
per_overall_passing_percentage = per_passing_math_reading / per_school_counts * 100
# Create the DataFrame
per_school_summary_df = pd.DataFrame({
"School Type": per_school_types,
"Total Students": per_school_counts,
"Total School Budget": per_school_budget,
"Per Student Budget": per_school_capita,
"Average Math Score": per_school_math,
"Average Reading Score": per_school_reading,
"% Passing Math": per_school_passing_math,
"% Passing Reading": per_school_passing_reading,
"% Overall Passing": per_overall_passing_percentage})
per_school_summary_df.head()
# Format the Total School Budget and the Per Student Budget
per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format)
per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format)
# Display the data frame
per_school_summary_df
# Step 5. Get the number of 10th-12th graders from Thomas High School (THS).
student_count_non_ninth_thomas = school_data_complete_df.loc[(school_data_complete_df['school_name'] == 'Thomas High School')
& (school_data_complete_df['grade'] != '9th')].count()[0]
# Step 6. Get all the students passing math from THS
student_non_ninth_thomas = school_data_complete_df.loc[(school_data_complete_df['school_name'] == 'Thomas High School')
& (school_data_complete_df['grade'] != '9th')]
student_non_ninth_thomas_passing_math = student_non_ninth_thomas.loc[student_non_ninth_thomas['math_score'] >= 70]
student_non_ninth_thomas_passing_math
# Step 7. Get all the students passing reading from THS
student_non_ninth_thomas_passing_reading = student_non_ninth_thomas.loc[student_non_ninth_thomas['reading_score'] >= 70]
student_non_ninth_thomas_passing_reading
# Step 8. Get all the students passing math and reading from THS
student_non_ninth_thomas_passing_math_reading = student_non_ninth_thomas.loc[(student_non_ninth_thomas['math_score'] >= 70) &(student_non_ninth_thomas['reading_score'] >= 70)]
# Step 9. Calculate the percentage of 10th-12th grade students passing math from Thomas High School.
(student_non_ninth_thomas_passing_math.count()[0] / student_non_ninth_thomas.count()[0]) * 100
# Step 10. Calculate the percentage of 10th-12th grade students passing reading from Thomas High School.
(student_non_ninth_thomas_passing_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100
# Step 11. Calculate the overall passing percentage of 10th-12th grade from Thomas High School.
(student_non_ninth_thomas_passing_math_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100
# Step 12. Replace the passing math percent for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Passing Math'] = (student_non_ninth_thomas_passing_math.count()[0] / student_non_ninth_thomas.count()[0]) * 100
# Step 13. Replace the passing reading percentage for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Passing Reading'] = (student_non_ninth_thomas_passing_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100
# Step 14. Replace the overall passing percentage for Thomas High School in the per_school_summary_df.
per_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Overall Passing'] = (student_non_ninth_thomas_passing_math_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100
per_school_summary_df
```
## High and Low Performing Schools
```
# Sort and show top five schools.
per_school_summary_df.sort_values('% Overall Passing', ascending=False).head(5)
# Sort and show bottom five schools.
per_school_summary_df.sort_values('% Overall Passing', ascending=True).head(5)
```
## Math and Reading Scores by Grade
```
# Create a Series of scores by grade levels using conditionals.
ninth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "9th")]
tenth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "10th")]
eleventh_graders = school_data_complete_df[(school_data_complete_df["grade"] == "11th")]
twelfth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "12th")]
# Group each school Series by the school name for the average math score.
math_score_ninth_graders = ninth_graders.groupby('school_name').mean()['math_score']
math_score_tenth_graders = tenth_graders.groupby('school_name').mean()['math_score']
math_score_eleventh_graders = eleventh_graders.groupby('school_name').mean()['math_score']
math_score_twelfth_graders = twelfth_graders.groupby('school_name').mean()['math_score']
# Group each school Series by the school name for the average reading score.
reading_score_ninth_graders = ninth_graders.groupby('school_name').mean()['reading_score']
reading_score_tenth_graders = tenth_graders.groupby('school_name').mean()['reading_score']
reading_score_eleventh_graders = eleventh_graders.groupby('school_name').mean()['reading_score']
reading_score_twelfth_graders = twelfth_graders.groupby('school_name').mean()['reading_score']
# Combine each Series for average math scores by school into single data frame.
mean_math_school = pd.DataFrame({
'9th': math_score_ninth_graders,
'10th': math_score_tenth_graders,
'11th': math_score_eleventh_graders,
'12th': math_score_twelfth_graders
})
mean_math_school.head()
# Combine each Series for average reading scores by school into single data frame.
mean_reading_school = pd.DataFrame({
'9th': reading_score_ninth_graders,
'10th': reading_score_tenth_graders,
'11th': reading_score_eleventh_graders,
'12th': reading_score_twelfth_graders
})
mean_reading_school.head()
mean_math_school.dtypes
# Format each grade column.
mean_math_school["9th"] = mean_math_school["9th"].map("{:,.1f}".format)
mean_math_school["10th"] = mean_math_school["10th"].map("{:,.1f}".format)
mean_math_school["11th"] = mean_math_school["11th"].map("{:,.1f}".format)
mean_math_school["12th"] = mean_math_school["12th"].map("{:,.1f}".format)
mean_reading_school["9th"] = mean_reading_school["9th"].map("{:,.1f}".format)
mean_reading_school["10th"] = mean_reading_school["10th"].map("{:,.1f}".format)
mean_reading_school["11th"] = mean_reading_school["11th"].map("{:,.1f}".format)
mean_reading_school["12th"] = mean_reading_school["12th"].map("{:,.1f}".format)
# Remove the index.
mean_math_school.index.name = None
# Display the data frame
mean_math_school
# Remove the index.
mean_reading_school.index.name = None
# Display the data frame
mean_reading_school
```
## Scores by School Spending
```
# Establish the spending bins and group names.
spending_bins = [0, 585, 630, 645, 675]
group_names = ["<$584", "$585-629", "$630-644", "$645-675"]
# Categorize spending based on the bins.
per_school_summary_df["Spending Ranges (Per Student)"] = pd.cut(per_school_capita, spending_bins, labels=group_names)
per_school_summary_df
# Calculate averages for the desired columns.
spending_math_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Math Score"]
spending_reading_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Reading Score"]
spending_passing_math = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Math"]
spending_passing_reading = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Reading"]
overall_passing_spending = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Overall Passing"]
# Create the DataFrame
spending_summary_df = pd.DataFrame({
"Average Math Score" : spending_math_scores,
"Average Reading Score": spending_reading_scores,
"% Passing Math": spending_passing_math,
"% Passing Reading": spending_passing_reading,
"% Overall Passing": overall_passing_spending})
spending_summary_df
# Format the DataFrame
spending_summary_df["Average Math Score"] = spending_summary_df["Average Math Score"].map("{:.1f}".format)
spending_summary_df["Average Reading Score"] = spending_summary_df["Average Reading Score"].map("{:.1f}".format)
spending_summary_df["% Passing Math"] = spending_summary_df["% Passing Math"].map("{:.0f}".format)
spending_summary_df["% Passing Reading"] = spending_summary_df["% Passing Reading"].map("{:.0f}".format)
spending_summary_df["% Overall Passing"] = spending_summary_df["% Overall Passing"].map("{:.0f}".format)
spending_summary_df
```
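`pd.cut` assigns each per-student budget to a bin; note that intervals are right-inclusive by default, so a value equal to a bin edge falls into the lower label. A small sketch with made-up spending values:

```python
import pandas as pd

spending = pd.Series([578.0, 600.0, 638.0, 650.0])
bins = [0, 585, 630, 645, 675]
labels = ["<$584", "$585-629", "$630-644", "$645-675"]

# Intervals are right-inclusive by default: (0, 585], (585, 630], ...
groups = pd.cut(spending, bins, labels=labels)
print(groups.tolist())  # ['<$584', '$585-629', '$630-644', '$645-675']
```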
## Scores by School Size
```
# Establish the bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
# Categorize spending based on the bins.
per_school_summary_df["School Size"] = pd.cut(per_school_summary_df['Total Students'], size_bins, labels=group_names)
per_school_summary_df
per_school_summary_df[per_school_summary_df.index == 'Thomas High School']
# Calculate averages for the desired columns.
size_math_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average Math Score"]
size_reading_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average Reading Score"]
size_passing_math = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Math"]
size_passing_reading = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Reading"]
size_overall_passing = per_school_summary_df.groupby(["School Size"]).mean()["% Overall Passing"]
# Assemble into DataFrame.
size_summary_df = pd.DataFrame({
"Average Math Score" : size_math_scores,
"Average Reading Score": size_reading_scores,
"% Passing Math": size_passing_math,
"% Passing Reading": size_passing_reading,
"% Overall Passing": size_overall_passing})
size_summary_df
# Format the DataFrame.
size_summary_df["Average Math Score"] = size_summary_df["Average Math Score"].map("{:.2f}".format)
size_summary_df["Average Reading Score"] = size_summary_df["Average Reading Score"].map("{:.2f}".format)
size_summary_df["% Passing Math"] = size_summary_df["% Passing Math"].map("{:.2f}".format)
size_summary_df["% Passing Reading"] = size_summary_df["% Passing Reading"].map("{:.2f}".format)
size_summary_df["% Overall Passing"] = size_summary_df["% Overall Passing"].map("{:.2f}".format)
size_summary_df
```
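The `pd.cut` call above does the bucketing: each school lands in the bin whose interval contains its enrollment (intervals are right-closed by default). A self-contained sketch of the same pattern:

```python
import pandas as pd

sizes = pd.Series([450, 1800, 3500])
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]

# Each value is assigned the label of the interval it falls into
buckets = pd.cut(sizes, size_bins, labels=group_names)
print(buckets.tolist())
# ['Small (<1000)', 'Medium (1000-2000)', 'Large (2000-5000)']
```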
## Scores by School Type
```
# Calculate averages for the desired columns.
type_math_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average Math Score"]
type_reading_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average Reading Score"]
type_passing_math = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Math"]
type_passing_reading = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Reading"]
type_overall_passing = per_school_summary_df.groupby(["School Type"]).mean()["% Overall Passing"]
# Assemble into DataFrame.
type_summary_df = pd.DataFrame({
"Average Math Score" : type_math_scores,
"Average Reading Score": type_reading_scores,
"% Passing Math": type_passing_math,
"% Passing Reading": type_passing_reading,
"% Overall Passing": type_overall_passing})
type_summary_df
# Format the DataFrame
type_summary_df["Average Math Score"] = type_summary_df["Average Math Score"].map("{:.1f}".format)
type_summary_df["Average Reading Score"] = type_summary_df["Average Reading Score"].map("{:.1f}".format)
type_summary_df["% Passing Math"] = type_summary_df["% Passing Math"].map("{:.0f}".format)
type_summary_df["% Passing Reading"] = type_summary_df["% Passing Reading"].map("{:.0f}".format)
type_summary_df["% Overall Passing"] = type_summary_df["% Overall Passing"].map("{:.0f}".format)
type_summary_df
```
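A note on the `groupby([...]).mean()["..."]` pattern used above: it averages every numeric column and then discards all but one, and in recent pandas versions calling `.mean()` on a group that contains non-numeric columns raises an error unless `numeric_only=True` is passed. Selecting the column before aggregating is equivalent and sidesteps both issues; a small sketch on a toy stand-in for `per_school_summary_df`:

```python
import pandas as pd

# Toy stand-in for per_school_summary_df
df = pd.DataFrame({"School Type": ["Charter", "District", "Charter"],
                   "Average Math Score": [83.0, 77.0, 87.0]})

# Select the column first, then aggregate
by_type = df.groupby("School Type")["Average Math Score"].mean()
print(by_type["Charter"])  # 85.0
```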
| github_jupyter |
# Batch predicting using Cloud Machine Learning Engine
A Kubeflow Pipeline component to submit a batch prediction job against a trained model to the Cloud ML Engine service.
## Intended use
Use the component to run a batch prediction job against a deployed model in Cloud Machine Learning Engine. The prediction output will be stored in a Cloud Storage bucket.
## Runtime arguments
Name | Description | Type | Optional | Default
:--- | :---------- | :--- | :------- | :------
project_id | The ID of the parent project of the job. | GCPProjectID | No |
model_path | Required. The path to the model. It can be one of the following paths:<ul><li>`projects/[PROJECT_ID]/models/[MODEL_ID]`</li><li>`projects/[PROJECT_ID]/models/[MODEL_ID]/versions/[VERSION_ID]`</li><li>Cloud Storage path of a model file.</li></ul> | String | No |
input_paths | The Cloud Storage location of the input data files. May contain wildcards. For example: `gs://foo/*.csv` | List | No |
input_data_format | The format of the input data files. See [DataFormat](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). | String | No |
output_path | The Cloud Storage location for the output data. | GCSPath | No |
region | The region in Compute Engine where the prediction job is run. | GCPRegion | No |
output_data_format | The format of the output data files. See [DataFormat](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). | String | Yes | `JSON`
prediction_input | The JSON input parameters to create a prediction job. See [PredictionInput](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#PredictionInput) to learn more. | Dict | Yes | ` `
job_id_prefix | The prefix of the generated job id. | String | Yes | ` `
wait_interval | A time-interval to wait for in case the operation has a long run time. | Integer | Yes | `30`
## Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created batch job. | String
output_path | The output path of the batch prediction job | GCSPath
## Cautions & requirements
To use the component, you must:
* Set up a cloud environment by following the [guide](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction#setup).
* Run the component under a secret of the [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts) in a Kubeflow cluster. For example:
```python
mlengine_predict_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
* Grant the Kubeflow user service account read access to the Cloud Storage buckets that contain the input data.
* Grant the Kubeflow user service account write access to the Cloud Storage bucket of the output directory.
## Detailed Description
The component accepts the following input data:
* A trained model: it can be a model file in Cloud Storage, or a deployed model or version in Cloud Machine Learning Engine. The path to the model is specified by the `model_path` parameter.
* Input data: the data used to make predictions with the trained model. The data can be in [multiple formats](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). The path of the data is specified by the `input_paths` parameter and the format by the `input_data_format` parameter.
Here are the steps to use the component in a pipeline:
1. Install KFP SDK
```
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
mlengine_batch_predict_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/batch_predict/component.yaml')
help(mlengine_batch_predict_op)
```
For more information about the component, please check out:
* [Component python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/ml_engine/_batch_predict.py)
* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/ml_engine/batch_predict/sample.ipynb)
* [Cloud Machine Learning Engine job REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs)
### Sample Code
Note: the sample code below works both in an IPython notebook and as standalone Python code.
In this sample, we batch predict against a pre-built trained model from `gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/` and use the test data from `gs://ml-pipeline-playground/samples/ml_engine/census/test.json`.
#### Inspect the test data
```
!gsutil cat gs://ml-pipeline-playground/samples/ml_engine/census/test.json
```
#### Set sample parameters
```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'CLOUDML - Batch Predict'
OUTPUT_GCS_PATH = GCS_WORKING_DIR + '/batch_predict/output/'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='CloudML batch predict pipeline',
description='CloudML batch predict pipeline'
)
def pipeline(
project_id = PROJECT_ID,
model_path = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/',
input_paths = '["gs://ml-pipeline-playground/samples/ml_engine/census/test.json"]',
input_data_format = 'JSON',
output_path = OUTPUT_GCS_PATH,
region = 'us-central1',
output_data_format='',
prediction_input = json.dumps({
'runtimeVersion': '1.10'
}),
job_id_prefix='',
wait_interval='30'):
mlengine_batch_predict_op(
project_id=project_id,
model_path=model_path,
input_paths=input_paths,
input_data_format=input_data_format,
output_path=output_path,
region=region,
output_data_format=output_data_format,
prediction_input=prediction_input,
job_id_prefix=job_id_prefix,
wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
#### Compile the pipeline
```
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
#### Inspect prediction results
```
OUTPUT_FILES_PATTERN = OUTPUT_GCS_PATH + '*'
!gsutil cat $OUTPUT_FILES_PATTERN
```
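Batch prediction writes its output as newline-delimited JSON files (typically named `prediction.results-*`) under the output path. A hedged sketch of loading such output into pandas; the `predictions` field name below is a placeholder, since the actual keys depend on your model's serving signature:

```python
import json
import pandas as pd

# Hypothetical sample of two newline-delimited JSON prediction records
raw = '{"predictions": [0.91]}\n{"predictions": [0.08]}\n'

# One JSON object per line, one row per record
records = [json.loads(line) for line in raw.splitlines()]
df = pd.DataFrame(records)
print(df.shape)  # (2, 1)
```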
| github_jupyter |
## <div align="center"> 10 Steps to Become a Data Scientist +20Q</div>
<div align="center">**quite practical and far from any theoretical concepts**</div>
<div style="text-align:center">last update: <b>15/01/2019</b></div>
<img src="http://s9.picofile.com/file/8338833934/DS.png"/>
---------------------------------------------------------------------
Fork and Run this course on GitHub:
> #### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
-------------------------------------------------------------------------------------------------------------
<b>I hope you find this kernel helpful and some <font color="red"> UPVOTES</font> would be very much appreciated.</b>
-----------
<a id="top"></a> <br>
**Notebook Content**
[Introduction](#Introduction)
1. [Python](#Python)
1. [Python Packages](#PythonPackages)
1. [Mathematics and Linear Algebra](#Algebra)
1. [Programming & Analysis Tools](#Programming)
1. [Big Data](#BigData)
1. [Data visualization](#Datavisualization)
1. [Data Cleaning](#DataCleaning)
1. [How to solve Problem?](#Howto)
1. [Machine Learning](#MachineLearning)
1. [Deep Learning](#DeepLearning)
<a id="Introduction"></a> <br>
# Introduction
If you read and follow **job ads** for machine learning experts or data scientists, you will find certain skills you need to get the job. In this kernel, I want to review **10 skills** that are essential to getting the job. In fact, this kernel is a reference for **10 other kernels**, with which you can learn all of the skills you need.
We use two well-known datasets, **Titanic** and **House Prices**, to get started, but once you have learned Python and its packages, you can start using other datasets too.
**Ready to learn?** You will learn 10 skills as a data scientist:
1. [Learn Python](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)
1. [Learn python packages](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2)
1. [Linear Algebra for Data Scientists](https://www.kaggle.com/mjbahmani/linear-algebra-for-data-scientists)
1. [Programming & Analysis Tools](https://www.kaggle.com/mjbahmani/machine-learning-workflow-for-house-prices)
1. [Big Data](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)
1. [Top 5 Data Visualization Libraries Tutorial](https://www.kaggle.com/mjbahmani/top-5-data-visualization-libraries-tutorial)
1. [How to solve Problem?](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)
1. [Data Cleaning](https://www.kaggle.com/mjbahmani/some-eda-for-elo)
1. [Machine Learning](https://www.kaggle.com/mjbahmani/a-comprehensive-ml-workflow-with-python)
1. [Deep Learning](https://www.kaggle.com/mjbahmani/top-5-deep-learning-frameworks-tutorial)
Thanks to the **Kaggle team** for providing a great professional community for data scientists.
###### [go to top](#top)
<a id="1"></a> <br>
# 1-Python
The first step in this course for beginners is learning Python. It takes just **10 hours** to learn.
To read this section, **please** fork and run the following kernel:
[Learn Python](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)
###### [go to top](#top)
<a id="PythonPackages"></a> <br>
# 2-Python Packages
In the second step, we will learn the necessary libraries that are essential for any data scientist.
1. Numpy
1. Pandas
1. Matplotlib
1. Seaborn
1. TensorFlow
1. NLTK
1. Sklearn
and so on
<img src="http://s8.picofile.com/file/8338227868/packages.png">
To read this section, **please** fork and run these kernels:
1. [The data scientist's toolbox tutorial 1](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)
1. [The data scientist's toolbox tutorial 2](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2)
###### [go to top](#top)
<a id="Algebra"></a> <br>
## 3- Mathematics and Linear Algebra
Linear algebra is the branch of mathematics that deals with vector spaces. A good understanding of linear algebra is intrinsic to analyzing machine learning algorithms, especially in deep learning, where so much happens behind the curtain. You have my word that I will try to keep mathematical formulas and derivations out of this completely mathematical topic, and I will try to cover every subject you need as a data scientist.
<img src=" https://s3.amazonaws.com/www.mathnasium.com/upload/824/images/algebra.jpg " height="300" width="300">
To read this section, **please** fork and run this kernel:
[Linear Algebra for Data Scientists](https://www.kaggle.com/mjbahmani/linear-algebra-for-data-scientists)
###### [go to top](#top)
<a id="Programming"></a> <br>
## 4- Programming & Analysis Tools
This section is not complete yet, but for an alternative reading, **please** fork and run this kernel:
[Programming & Analysis Tools](https://www.kaggle.com/mjbahmani/machine-learning-workflow-for-house-prices)
###### [go to top](#top)
<a id="BigData"></a> <br>
## 5- Big Data
This section is not complete yet, but for an alternative reading, **please** fork and run this kernel:
[Big Data](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)
<a id="Datavisualization"></a> <br>
## 6- Data Visualization
To read this section, **please** fork and upvote this kernel:
[Top 5 Data Visualization Libraries Tutorial](https://www.kaggle.com/mjbahmani/top-5-data-visualization-libraries-tutorial)
<a id="DataCleaning"></a> <br>
## 7- Data Cleaning
Another important step on the way to specialization is learning how to clean data.
In this section, we will do this on the Elo dataset.
To read this section, **please** fork and upvote this kernel:
[Data Cleaning](https://www.kaggle.com/mjbahmani/some-eda-for-elo)
<a id="Howto"></a> <br>
## 8- How to Solve Problem?
If you have already read some [machine learning books](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/tree/master/Ebooks), you have noticed that there are different ways to stream data into machine learning.
Most of these books share the following steps (a checklist):
* Define the Problem(Look at the big picture)
* Specify Inputs & Outputs
* Data Collection
* Exploratory data analysis
* Data Preprocessing
* Model Design, Training, and Offline Evaluation
* Model Deployment, Online Evaluation, and Monitoring
* Model Maintenance, Diagnosis, and Retraining
**You can see my workflow in the below image** :
<img src="http://s9.picofile.com/file/8338227634/workflow.png" />
## 8-1 Real world Application Vs Competitions
Just a simple comparison between real-world apps and competitions:
<img src="http://s9.picofile.com/file/8339956300/reallife.png" height="600" width="500" />
**you should feel free to adapt this checklist to your needs**
## 8-2 Problem Definition
I think one of the most important things when you start a new machine learning project is defining your problem: that means you should understand the business problem (**Problem Formalization**).
Problem definition has four steps, which are illustrated in the picture below:
<img src="http://s8.picofile.com/file/8338227734/ProblemDefination.png">
### 8-2-1 Problem Feature
The sinking of the Titanic is one of the most infamous shipwrecks in history. **On April 15, 1912**, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing **1502 out of 2224** passengers and crew. That's why the name DieTanic. This is an unforgettable disaster that no one in the world can forget.
It took about $7.5 million to build the Titanic, and it sank in the ocean due to the collision. The Titanic dataset is a very good dataset for beginners to start a journey in data science and to participate in Kaggle competitions.
we will use the classic titanic data set. This dataset contains information about **11 different variables**:
<img src="http://s9.picofile.com/file/8340453092/Titanic_feature.png" height="500" width="500">
* Survival
* Pclass
* Name
* Sex
* Age
* SibSp
* Parch
* Ticket
* Fare
* Cabin
* Embarked
<font color='red'><b>Question</b></font>
1. It's your turn: what are the features of the House Prices dataset?
### 8-2-2 Aim
It is your job to predict if a passenger survived the sinking of the Titanic or not. For each PassengerId in the test set, you must predict a 0 or 1 value for the Survived variable.
### 8-2-3 Variables
1. **Age** ==>> Age is fractional if less than 1. If the age is estimated, it is in the form xx.5
2. **Sibsp** ==>> The dataset defines family relations in this way...
a. Sibling = brother, sister, stepbrother, stepsister
b. Spouse = husband, wife (mistresses and fiancés were ignored)
3. **Parch** ==>> The dataset defines family relations in this way...
a. Parent = mother, father
b. Child = daughter, son, stepdaughter, stepson
c. Some children travelled only with a nanny, therefore parch=0 for them.
4. **Pclass** ==>> A proxy for socio-economic status (SES)
* 1st = Upper
* 2nd = Middle
* 3rd = Lower
5. **Embarked** ==>> nominal datatype
6. **Name** ==>> nominal datatype. It could be used in feature engineering to derive the gender from the title
7. **Sex** ==>> nominal datatype
8. **Ticket** ==>> has no impact on the outcome variable; thus, it will be excluded from the analysis
9. **Cabin** ==>> is a nominal datatype that can be used in feature engineering
10. **Fare** ==>> Indicating the fare
11. **PassengerID** ==>> has no impact on the outcome variable; thus, it will be excluded from the analysis
12. **Survival** ==>> the **[dependent variable](http://www.dailysmarty.com/posts/difference-between-independent-and-dependent-variables-in-machine-learning)**, 0 or 1
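As an illustration of the feature-engineering idea mentioned for **Name**, the honorific title can be pulled out with a small regex. A sketch on two made-up rows (not the real dataset):

```python
import pandas as pd

names = pd.Series(["Braund, Mr. Owen Harris", "Heikkinen, Miss. Laina"])
# Capture the text between the comma and the first period
titles = names.str.extract(r",\s*([^.]+)\.", expand=False).str.strip()
print(titles.tolist())  # ['Mr', 'Miss']
```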
**<< Note >>**
> You must answer the following question:
How does your company expect to use and benefit from your model?
To read this section, **please** fork and upvote this kernel:
[How to solve Problem?](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)
###### [Go to top](#top)
<a id="MachineLearning"></a> <br>
## 9- Machine learning
To read this section, **please** fork and upvote this kernel:
[A Comprehensive ML Workflow with Python](https://www.kaggle.com/mjbahmani/a-comprehensive-ml-workflow-with-python)
<a id="DeepLearning"></a> <br>
## 10- Deep Learning
To read this section, **please** fork and upvote this kernel:
[A-Comprehensive-Deep-Learning-Workflow-with-Python](https://www.kaggle.com/mjbahmani/a-comprehensive-deep-learning-workflow-with-python)
---------------------------
# <div align="center"> 50 machine learning questions & answers for Beginners </div>
If you are studying this kernel, you're probably at the beginning of this journey. Here are some useful Python snippets you need to get started.
```
import matplotlib.animation as animation
from matplotlib.figure import Figure
import plotly.figure_factory as ff
import matplotlib.pylab as pylab
from ipywidgets import interact
import plotly.graph_objs as go
import plotly.offline as py
from random import randint
from plotly import tools
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import string
import csv
import os
```
## 1- How to import your data?
What do you have in your data folder?
```
print(os.listdir("../input/"))
```
Import all of your data:
```
titanic_train=pd.read_csv('../input/train.csv')
titanic_test=pd.read_csv('../input/test.csv')
```
Or import just the first 1,000 rows of your data:
```
titanic_train2=pd.read_csv('../input/train.csv',nrows=1000)
```
How to see the size of your data:
```
print("Train: rows:{} columns:{}".format(titanic_train.shape[0], titanic_train.shape[1]))
```
**For reading more about how to import your data, you can visit** [this Kernel](https://www.kaggle.com/dansbecker/finding-your-files-in-kaggle-kernels)
## 2- How to check for missing data?
```
titanic_train.isna().sum()
```
Or you can use the code below:
```
total = titanic_train.isnull().sum().sort_values(ascending=False)
percent = (titanic_train.isnull().sum()/titanic_train.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
```
## 3- How to view the statistical characteristics of the data?
```
titanic_train.describe()
```
Or just for one column:
```
titanic_train['Age'].describe()
```
Or with another syntax:
```
titanic_train.Age.describe()
```
## 4- How to check the column names?
```
titanic_train.columns
```
Or you can check the column names in other ways too:
```
titanic_train.head()
```
## 5- How to view random samples of your dataset?
```
titanic_train.sample(5)
```
## 6- How to select random rows of a Pandas dataframe?
```
titanic_train.sample(frac=0.007)
```
## 7- How to copy a column and drop it?
```
PassengerId=titanic_train['PassengerId'].copy()
PassengerId.head()
type(PassengerId)
titanic_train=titanic_train.drop('PassengerId', axis=1)
titanic_train.head()
titanic_train=pd.read_csv('../input/train.csv')
```
## 8- How to check the last 5 rows of the dataset?
We use the tail() function:
```
titanic_train.tail()
```
## 9- How to perform concatenation along an axis?
```
all_data = pd.concat((titanic_train.loc[:,'Pclass':'Embarked'],
titanic_test.loc[:,'Pclass':'Embarked']))
all_data.head()
titanic_train.shape
titanic_test.shape
all_data.shape
```
## 10- How to see the unique values of a column?
```
titanic_train['Sex'].unique()
titanic_train['Cabin'].unique()
titanic_train['Pclass'].unique()
```
## 11- How to perform queries on your dataset?
```
titanic_train[titanic_train['Age']>70]
titanic_train[titanic_train['Pclass']==1]
```
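Conditions can also be combined; use `&` and `|` with parentheses around each clause, because Python's `and`/`or` do not work element-wise on pandas Series. A small self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame({"Age": [72, 35, 80], "Pclass": [1, 1, 3]})
# Parenthesize each condition before combining with &
first_class_seniors = df[(df["Age"] > 70) & (df["Pclass"] == 1)]
print(len(first_class_seniors))  # 1
```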
---------------------------------------------------------------------
Fork and Run this kernel on GitHub:
> ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
-------------------------------------------------------------------------------------------------------------
<b>I hope you find this kernel helpful and some <font color="red">UPVOTES</font> would be very much appreciated</b>
-----------
| github_jupyter |
```
#!/usr/bin/env python
# coding: utf-8
# ------
# **Dementia Patients -- Analysis and Prediction**
### ***Author : Akhilesh Vyas***
### ****Date : January, 2020****
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import pickle
import re
# define path
data_path = '../../../datalcdem/data/optima/dementia_18July/class_fast_normal_slow_api_inputs/'
result_path = '../../../datalcdem/data/optima/dementia_18July/class_fast_normal_slow_api_inputs/results/'
# define
best_params_rf_dict = {'n_estimators': 25, 'min_samples_split': 3, 'min_samples_leaf': 1, 'max_features': 'log2', 'max_depth': 8, 'bootstrap': False}
class_names_dict = {'Slow':0, 'Slow_MiS':1, 'Normal':2, 'Normal_MiS':3, 'Fast':4, 'Fast_MiS':5}
# In[3]:
def replaceNamesComandMed(col):
#col1 = re.sub(r'[aA-zZ]*_cui_', 'Episode', col)
if 'Medication_cui_' in col:
return 'Medication_cui_'+str('0_')+col.split('_')[-1]
if 'Comorbidity_cui_' in col:
return 'Comorbidity_cui_'+str('0_')+col.split('_')[-1]
if 'Age_At_Episode' in col:
# print ('Age_At_Episode_'+str('0'))
return 'Age_At_Episode_'+str('0')
return col
def replacewithEpisode(col):
if 'Medication_cui_' in col:
return 'Episode'+str(col.split('_')[-2])+'_Med_'+col.split('_')[-1]
if 'Comorbidity_cui_' in col:
return 'Episode'+str(col.split('_')[-2])+'_Com_'+col.split('_')[-1]
if 'Age_At_Episode' in col:
return 'Episode'+str(col.split('_')[-1])+'_Age'
return col
# In[4]:
# read dataframes
df_fea_all = pd.read_csv(data_path+'final_features_file_without_feature_selection_smote.csv')
# print (df_fea_all.shape)
df_fea_rfecv = pd.read_csv(data_path+'final_features_file_with_feature_selection_rfecv.csv')
df_fea_rfecv.rename(columns={col:col.replace('_TFV_', '_') for col in df_fea_all.columns}, inplace=True)
df_fea_rfecv.rename(columns={col:replaceNamesComandMed(col) for col in df_fea_rfecv.columns.tolist() if not bool(re.search(r'_[1-9]_?',col))}, inplace=True)
df_fea_rfecv.rename(columns={col:replacewithEpisode(col) for col in df_fea_rfecv.columns.tolist()}, inplace=True)
df_fea_rfecv.rename(columns={'CAMDEX SCORES: MINI MENTAL SCORE_CATEGORY_Mild':'Initial_MMSE_Score_Mild',
'CAMDEX SCORES: MINI MENTAL SCORE_CATEGORY_Moderate':'Initial_MMSE_Score_Moderate'}, inplace=True)
# print(df_fea_rfecv.shape)
# read object
data_p_i = pickle.load(open(data_path + 'data_p_i.pickle', 'rb'))
target_p_i = pickle.load(open(data_path + 'target_p_i.pickle', 'rb'))
rfecv_support_ = pickle.load(open(data_path + 'rfecv.support_.pickle', 'rb'))
# print(data_p_i.shape, target_p_i.shape, rfecv_support_.shape)
#read dictionary
# Treatment data
treatmnt_df = pd.read_csv(data_path+'Treatments.csv')
# print(treatmnt_df.head(5))
treatmnt_dict = dict(zip(treatmnt_df['name'], treatmnt_df['CUI_ID']))
# print ('\n Unique Treatment data size: {}\n'.format(len(treatmnt_dict)))
# Comorbidities data
comorb_df = pd.read_csv(data_path+'comorbidities.csv')
# print(comorb_df.head(5))
comorb_dict = dict(zip(comorb_df['name'], comorb_df['CUI_ID']))
# print ('\n Unique Comorbidities data size: {}\n'.format(len(comorb_dict)))
# In[5]:
# for i, j in zip(df_fea_rfecv.columns.to_list(), pd.read_csv(data_path+'final_features_file_with_feature_selection_rfecv.csv').columns.tolist()):
# print (j,' ', i)
# In[6]:
# Classification Model
data_p_grid = data_p_i[:,rfecv_support_]
rf_bp = best_params_rf_dict
rf_classifier=RandomForestClassifier(n_estimators=rf_bp["n_estimators"],
min_samples_split=rf_bp['min_samples_split'],
min_samples_leaf=rf_bp['min_samples_leaf'],
max_features=rf_bp['max_features'],
max_depth=rf_bp['max_depth'],
bootstrap=rf_bp['bootstrap'])
rf_classifier.fit(data_p_grid, target_p_i)
# Example for testing classification model
'''st_ix_p = 21
end_ix_p = 22
p = data_p_grid[st_ix_p:end_ix_p]
t = target_p_i[st_ix_p:end_ix_p]
print ('Mean Accuracy: ', rf_classifier.score(data_p_grid, target_p_i)*100)
print ('Predict Probability: ', rf_classifier.predict_proba(p)*100)
print ('Prediction: ', rf_classifier.predict(p))
print ('Target: ', t)'''
# In[48]:
patient_data_in = {'gender':['Female'], 'dementia':['True'], 'smoker':['no_smoker'], 'alcohol':'mild_drinking', 'education':['medium'], 'bmi':23, 'weight':60,
'apoe':['E3E3'], 'Initial_MMSE_Score':['Mild'], 'Episode1_Com':['C0002965', 'C0042847'], 'Episode1_Med':['C0014695', 'C0039943'], 'Episode1_Age':67, 'Episode2_Com':['C0032533'],
'Episode2_Med':['C1166521'], 'Episode2_Age':69}
def create_patient_feature_vector(patient_data_in):
print ('create_patient_feature_vector')
patient_data = pd.DataFrame(data=np.zeros(shape=df_fea_rfecv.iloc[0:1, 0:-1].shape), columns=df_fea_rfecv.columns[0:-1])
print (patient_data.shape)
for key,value in patient_data_in.items():
try:
if type(value)==list:
if len(value)>0:
print (key, value)
for i in value:
if key+'_'+str(i) in patient_data.columns.tolist():
print ('##############', key+'_'+str(i))
patient_data.at[0, key+'_'+str(i)] = 1.0
elif key+'_'+str(value) in patient_data.columns.tolist():
print ('#########', key+'_'+str(value))
patient_data.at[0, key+'_'+str(value)] = 1.0
elif key in patient_data.columns.tolist():
print ('########',key)
patient_data.at[0, key] = value
except Exception as e:
template = "An exception of type {0} occurred. Arguments:\n{1!r}"
message = template.format(type(e).__name__, e.args)
print (message)
return None
return patient_data
def result(patient_vect):
print (patient_vect.shape)
print (patient_vect)
print ('result')
try:
print ('Predict Probability: ', rf_classifier.predict_proba(patient_vect)*100)
print ('Prediction: ', rf_classifier.predict(patient_vect))
predicted_class = list(class_names_dict.keys())[rf_classifier.predict(patient_vect)[0]]
class_probability = {i:j for i, j in zip(list(class_names_dict.keys()), rf_classifier.predict_proba(patient_vect)[0])}
response= { 'modelName': 'Classification of Dementia Patient Progression',
'class_probability': class_probability,
'predicted_class': predicted_class}
return response
except Exception as e:
template = "An exception of type {0} occurred. Arguments:\n{1!r}"
message = template.format(type(e).__name__, e.args)
print (message)
return None
def main():
# patient_data_in to be get by Web API
try:
print ('Shape of patient vector with all selected Feature', df_fea_all.shape)
print('Shape of patient vector with selected Feature', df_fea_rfecv.shape)
print(data_p_i.shape, target_p_i.shape, rfecv_support_.shape)
print('Medication:\n', treatmnt_df.head(5))
print ('\n Unique Treatment data size: {}\n'.format(len(treatmnt_dict)))
print('Comorbidities\n', comorb_df.head(5))
print ('\n Unique Comorbidities data size: {}\n'.format(len(comorb_dict)))
pat_df = create_patient_feature_vector(patient_data_in)
print ('pat_df', pat_df)
patient_data_fea = pat_df.values.reshape(1,-1)
response = result(patient_data_fea)
print (response)
return response
except Exception as e:
template = "An exception of type {0} occurred. Arguments:\n{1!r}"
message = template.format(type(e).__name__, e.args)
print (message)
return None
print("\n\n######## Predictive Model for Dementia Patients#############\n\n")
main()
```
| github_jupyter |
## Extracting Important Keywords from Text with TF-IDF and Python's Scikit-Learn
Back in 2006, when I had to use TF-IDF for keyword extraction in Java, I ended up writing all of the code from scratch, as neither Data Science nor GitHub was a thing back then and libraries were limited. The world is much different today. You have several [libraries](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html#sklearn.feature_extraction.text.TfidfTransformer) and [open-source code on Github](https://github.com/topics/tf-idf?o=desc&s=forks) that provide a decent implementation of TF-IDF. If you don't need a lot of control over how the TF-IDF math is computed, then I would highly recommend re-using libraries from known packages such as [Spark's MLLib](https://spark.apache.org/docs/2.2.0/mllib-feature-extraction.html) or [Python's scikit-learn](http://scikit-learn.org/stable/).
The one problem that I noticed with these libraries is that they are meant as a pre-step for other tasks like clustering, topic modeling, and text classification. [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) can actually be used to extract important keywords from a document to get a sense of what characterizes it. For example, if you are dealing with Wikipedia articles, you can use tf-idf to extract words that are unique to a given article. These keywords can serve as a very simple summary of a document; they can be used for text analytics (when we look at these keywords in aggregate), as candidate labels for a document, and more.
In this article, I will show you how you can use scikit-learn to extract top keywords for a given document using its tf-idf modules. We will specifically do this on a stackoverflow dataset.
## Dataset
Since we used some pretty clean user reviews in some of my previous tutorials, in this example we will be using a Stackoverflow dataset, which is slightly noisier and simulates what you could be dealing with in real life. You can find this dataset in [my tutorial repo](https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data). Notice that there are two files: the larger file, with [20,000 posts](https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data), is used to compute the Inverse Document Frequency (IDF), and the smaller file, with [500 posts](https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data), is used as a test set for us to extract keywords from. This dataset is based on the publicly available [Stackoverflow dump on Google's Big Query](https://cloud.google.com/bigquery/public-data/stackoverflow).
Let's take a peek at our dataset. The code below reads one JSON string per line from `data/stackoverflow-data-idf.json` into a pandas data frame and prints out its schema and total number of posts. Here, `lines=True` simply means we are treating each line in the text file as a separate JSON string, so the JSON in line 1 is not related to the JSON in line 2.
```
import pandas as pd
# read json into a dataframe
df_idf=pd.read_json("data/stackoverflow-data-idf.json",lines=True)
# print schema
print("Schema:\n\n",df_idf.dtypes)
print("Number of questions,columns=",df_idf.shape)
```
Take note that this Stackoverflow dataset contains 19 fields including post title, body, tags, dates, and other metadata which we don't quite need for this tutorial. What we are mostly interested in are the `body` and `title`, which are our sources of text. We will now create a field that combines both body and title so we have it in one field. We will also print a `text` entry from our new field just to see what the text looks like.
```
import re
def pre_process(text):
    # lowercase
    text=text.lower()
    # remove tags
    text=re.sub("</?.*?>"," <> ",text)
    # remove special characters and digits
    text=re.sub("(\\d|\\W)+"," ",text)
    return text
df_idf['text'] = df_idf['title'] + df_idf['body']
df_idf['text'] = df_idf['text'].apply(lambda x:pre_process(x))
# show the 'text' entry at index 2
df_idf['text'][2]
```
Hmm, this doesn't look very pretty with all the HTML in there, but that's the point. Even in such a mess we can extract some great stuff. While you could eliminate all code from the text, we will keep the code sections in this tutorial for the sake of simplicity.
## Creating the IDF
### CountVectorizer to create a vocabulary and generate word counts
The next step is to start the counting process. We can use CountVectorizer to create a vocabulary from all the text in `df_idf['text']` and generate counts for each row in `df_idf['text']`. The result of the last two lines is a sparse matrix representation of the counts: each column represents a word in the vocabulary and each row represents a document in our dataset, where the values are the word counts. Note that with this representation, the count of a word is 0 if it did not appear in the corresponding document.
```
from sklearn.feature_extraction.text import CountVectorizer
import re
def get_stop_words(stop_file_path):
    """load stop words """
    with open(stop_file_path, 'r', encoding="utf-8") as f:
        stopwords = f.readlines()
        stop_set = set(m.strip() for m in stopwords)
        return frozenset(stop_set)
#load a set of stop words
stopwords=get_stop_words("resources/stopwords.txt")
#get the text column
docs=df_idf['text'].tolist()
#create a vocabulary of words,
#ignore words that appear in 85% of documents,
#eliminate stop words
cv=CountVectorizer(max_df=0.85,stop_words=stopwords)
word_count_vector=cv.fit_transform(docs)
```
Now let's check the shape of the resulting vector. Notice that the shape below is `(20000,149391)` because we have 20,000 documents in our dataset (the rows) and the vocabulary size is `149391`, meaning we have `149391` unique words (the columns) in our dataset after removing stopwords. In some text mining applications, such as clustering and text classification, we limit the size of the vocabulary. It's really easy to do this by setting `max_features=vocab_size` when instantiating CountVectorizer.
```
word_count_vector.shape
```
Let's limit our vocabulary size to 10,000
```
cv=CountVectorizer(max_df=0.85,stop_words=stopwords,max_features=10000)
word_count_vector=cv.fit_transform(docs)
word_count_vector.shape
```
Now, let's look at 10 words from our vocabulary. Sweet, these are mostly programming related.
```
list(cv.vocabulary_.keys())[:10]
```
We can also get the vocabulary by using `get_feature_names()`
```
list(cv.get_feature_names())[2000:2015]
```
### TfidfTransformer to Compute Inverse Document Frequency (IDF)
In the code below, we take the sparse matrix from CountVectorizer and generate the IDF by invoking `fit`. An extremely important point to note here is that the IDF should be based on a large corpus that is representative of the texts you will be extracting keywords from. I've seen several articles on the Web that compute the IDF using a handful of documents. To understand why IDF should be based on a fairly large collection, please read this [page from Stanford's IR book](https://nlp.stanford.edu/IR-book/html/htmledition/inverse-document-frequency-1.html).
```
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer=TfidfTransformer(smooth_idf=True,use_idf=True)
tfidf_transformer.fit(word_count_vector)
```
Let's look at some of the IDF values:
```
tfidf_transformer.idf_
```
## Computing TF-IDF and Extracting Keywords
Once we have our IDF computed, we are ready to compute TF-IDF and extract the top keywords. In this example, we will extract the top keywords for the questions in `data/stackoverflow-test.json`. This data file has 500 questions with fields identical to those of `data/stackoverflow-data-idf.json`, as we saw above. We will start by reading our test file, extracting the necessary fields (title and body), and getting the texts into a list.
```
# read test docs into a dataframe and concatenate title and body
df_test=pd.read_json("data/stackoverflow-test.json",lines=True)
df_test['text'] = df_test['title'] + df_test['body']
df_test['text'] =df_test['text'].apply(lambda x:pre_process(x))
# get test docs into a list
docs_test=df_test['text'].tolist()
docs_title=df_test['title'].tolist()
docs_body=df_test['body'].tolist()
def sort_coo(coo_matrix):
    tuples = zip(coo_matrix.col, coo_matrix.data)
    return sorted(tuples, key=lambda x: (x[1], x[0]), reverse=True)

def extract_topn_from_vector(feature_names, sorted_items, topn=10):
    """get the feature names and tf-idf score of top n items"""
    # use only topn items from vector
    sorted_items = sorted_items[:topn]
    score_vals = []
    feature_vals = []
    for idx, score in sorted_items:
        # keep track of feature name and its corresponding score
        score_vals.append(round(score, 3))
        feature_vals.append(feature_names[idx])
    # create a dict of feature:score
    results = {}
    for idx in range(len(feature_vals)):
        results[feature_vals[idx]] = score_vals[idx]
    return results
```
The next step is to compute the tf-idf value for a given document in our test set by invoking `tfidf_transformer.transform(...)`. This generates a vector of tf-idf scores. Next, we sort the words in the vector in descending order of tf-idf values and iterate over them to extract the top-n items along with their feature names. In the example below, we are extracting keywords for the first document in our test set.
The `sort_coo(...)` method essentially sorts the values in the vector while preserving the column index. Once you have the column index, it's really easy to look up the corresponding word, as you can see in `extract_topn_from_vector(...)` where we do `feature_vals.append(feature_names[idx])`.
```
# you only need to do this once
feature_names=cv.get_feature_names()
# get the document that we want to extract keywords from
doc=docs_test[0]
#generate tf-idf for the given document
tf_idf_vector=tfidf_transformer.transform(cv.transform([doc]))
#sort the tf-idf vectors by descending order of scores
sorted_items=sort_coo(tf_idf_vector.tocoo())
#extract only the top n; n here is 10
keywords=extract_topn_from_vector(feature_names,sorted_items,10)
# now print the results
print("\n=====Title=====")
print(docs_title[0])
print("\n=====Body=====")
print(docs_body[0])
print("\n===Keywords===")
for k in keywords:
    print(k,keywords[k])
```
From the output above, the top keywords actually make sense: the question talks about `eclipse`, `maven`, `integrate`, `war`, and `tomcat`, which are all unique to this specific question. There are a couple of keywords that could have been eliminated, such as `possibility` and perhaps even `project`. You can do this by adding more common words to your stop list, and you can even create your own stop list, very specific to your domain, as [described here](http://kavita-ganesan.com/tips-for-constructing-custom-stop-word-lists/).
```
# put the common code into several methods
def get_keywords(idx):
    # generate tf-idf for the given document
    tf_idf_vector=tfidf_transformer.transform(cv.transform([docs_test[idx]]))
    # sort the tf-idf vectors by descending order of scores
    sorted_items=sort_coo(tf_idf_vector.tocoo())
    # extract only the top n; n here is 10
    keywords=extract_topn_from_vector(feature_names,sorted_items,10)
    return keywords

def print_results(idx,keywords):
    # now print the results
    print("\n=====Title=====")
    print(docs_title[idx])
    print("\n=====Body=====")
    print(docs_body[idx])
    print("\n===Keywords===")
    for k in keywords:
        print(k,keywords[k])
```
Now let's look at keywords generated for a much longer question:
```
idx=120
keywords=get_keywords(idx)
print_results(idx,keywords)
```
Voilà! Now you can extract important keywords from any type of text!
# Population Segmentation with SageMaker
In this notebook, you'll employ two unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions in the US.
Using **principal component analysis** (PCA) you will reduce the dimensionality of the original census data. Then, you'll use **k-means clustering** to assign each US county to a particular cluster based on where a county lies in component space. How each cluster is arranged in component space can tell you which US counties are most similar and what demographic traits define that similarity; this information is most often used to inform targeted marketing campaigns that want to appeal to a specific group of people. This cluster information is also useful for learning more about a population by revealing patterns between regions that you otherwise may not have noticed.
### US Census Data
You'll be using data collected by the [US Census](https://en.wikipedia.org/wiki/United_States_Census), which aims to count the US population, recording demographic traits about labor, age, population, and so on, for each county in the US. The bulk of this notebook was taken from an existing SageMaker example notebook and [blog post](https://aws.amazon.com/blogs/machine-learning/analyze-us-census-data-for-population-segmentation-using-amazon-sagemaker/), and I've broken it down further into demonstrations and exercises for you to complete.
### Machine Learning Workflow
To implement population segmentation, you'll go through a number of steps:
* Data loading and exploration
* Data cleaning and pre-processing
* Dimensionality reduction with PCA
* Feature engineering and data transformation
* Clustering transformed data with k-means
* Extracting trained model attributes and visualizing k clusters
These tasks make up a complete machine learning workflow, from data loading and cleaning to model deployment. Each exercise is designed to give you practice with part of the machine learning workflow, and to demonstrate how to use SageMaker tools, such as built-in data management with S3 and built-in algorithms.
---
First, import the relevant libraries into this SageMaker notebook.
```
# data managing and display libs
import pandas as pd
import numpy as np
import os
import io
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# sagemaker libraries
import boto3
import sagemaker
```
## Loading the Data from Amazon S3
This particular dataset is already in an Amazon S3 bucket; you can load the data by pointing to this bucket and getting a data file by name.
> You can interact with S3 using a `boto3` client.
```
# boto3 client to get S3 data
s3_client = boto3.client('s3')
# S3 bucket name
bucket_name='aws-ml-blog-sagemaker-census-segmentation'
```
Take a look at the contents of this bucket; get a list of objects that are contained within the bucket and print out the names of the objects. You should see that there is one file, 'Census_Data_for_SageMaker.csv'.
```
# get a list of objects in the bucket
obj_list=s3_client.list_objects(Bucket=bucket_name)
# print object(s)in S3 bucket
files=[]
for contents in obj_list['Contents']:
    files.append(contents['Key'])
print(files)
# there is one file --> one key
file_name=files[0]
print(file_name)
```
Retrieve the data file from the bucket with a call to `client.get_object()`.
```
# get an S3 object by passing in the bucket and file name
data_object = s3_client.get_object(Bucket=bucket_name, Key=file_name)
# what info does the object contain?
display(data_object)
# information is in the "Body" of the object
data_body = data_object["Body"].read()
print('Data type: ', type(data_body))
```
This is a `bytes` datatype, which you can read in using [io.BytesIO(file)](https://docs.python.org/3/library/io.html#binary-i-o).
```
# read in bytes data
data_stream = io.BytesIO(data_body)
# create a dataframe
counties_df = pd.read_csv(data_stream, header=0, delimiter=",")
counties_df.head()
```
## Exploratory Data Analysis (EDA)
Now that you've loaded in the data, it is time to clean it up, explore it, and pre-process it. Data exploration is one of the most important parts of the machine learning workflow because it allows you to notice any initial patterns in data distribution and features that may inform how you proceed with modeling and clustering the data.
### EXERCISE: Explore data & drop any incomplete rows of data
When you first explore the data, it is good to know what you are working with. How many data points and features are you starting with, and what kind of information can you get at a first glance? In this notebook, you're required to use complete data points to train a model. So, your first exercise will be to investigate the shape of this data and implement a simple, data cleaning step: dropping any incomplete rows of data.
You should be able to answer the **question**: How many data points and features are in the original, provided dataset? (And how many points are left after dropping any incomplete rows?)
```
# print out stats about data
# rows = data, cols = features
print('(orig) rows, cols: ', counties_df.shape)
# drop any incomplete data
clean_counties_df = counties_df.dropna(axis=0)
print('(clean) rows, cols: ', clean_counties_df.shape)
```
### EXERCISE: Create a new DataFrame, indexed by 'State-County'
Eventually, you'll want to feed these features into a machine learning model. Machine learning models need numerical data to learn from and not categorical data like strings (State, County). So, you'll reformat this data such that it is indexed by region and you'll also drop any features that are not useful for clustering.
To complete this task, perform the following steps, using your *clean* DataFrame, generated above:
1. Combine the descriptive columns, 'State' and 'County', into one, new categorical column, 'State-County'.
2. Index the data by this unique State-County name.
3. After doing this, drop the old State and County columns and the CensusId column, which does not give us any meaningful demographic information.
After completing this task, you should have a DataFrame with 'State-County' as the index, and 34 columns of numerical data for each county. You should get a resultant DataFrame that looks like the following (truncated for display purposes):
```
TotalPop Men Women Hispanic ...
Alabama-Autauga 55221 26745 28476 2.6 ...
Alabama-Baldwin 195121 95314 99807 4.5 ...
Alabama-Barbour 26932 14497 12435 4.6 ...
...
```
```
# index data by 'State-County'
clean_counties_df.index=clean_counties_df['State'] + "-" + clean_counties_df['County']
clean_counties_df.head()
# drop the old State and County columns, and the CensusId column
# clean df should be modified or created anew
drop=["CensusId" , "State" , "County"]
clean_counties_df = clean_counties_df.drop(columns=drop)
clean_counties_df.head()
```
Now, what features do you have to work with?
```
# features
features_list = clean_counties_df.columns.values
print('Features: \n', features_list)
```
## Visualizing the Data
In general, you can see that features come in a variety of ranges, mostly percentages from 0-100, and counts that are integer values in a large range. Let's visualize the data in some of our feature columns and see what the distribution, over all counties, looks like.
The below cell displays **histograms**, which show the distribution of data points over discrete feature ranges. The x-axis represents the different bins; each bin is defined by a specific range of values that a feature can take, say between the values 0-5 and 5-10, and so on. The y-axis is the frequency of occurrence or the number of county data points that fall into each bin. I find it helpful to use the y-axis values for relative comparisons between different features.
Below, I'm plotting a histogram comparing methods of commuting to work over all of the counties. I just copied these feature names from the list of column names, printed above. I also know that all of these features are represented as percentages (%) in the original data, so the x-axes of these plots will be comparable.
```
# transportation (to work)
transport_list = ['Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp']
n_bins = 50 # can decrease to get a wider bin (or vice versa)
for column_name in transport_list:
    ax=plt.subplots(figsize=(6,3))
    # get data by column_name and display a histogram
    ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
    title="Histogram of " + column_name
    plt.title(title, fontsize=12)
    plt.show()
```
### EXERCISE: Create histograms of your own
Commute transportation method is just one category of features. If you take a look at the 34 features, you can see data on profession, race, income, and more. Display a set of histograms that interest you!
```
# create a list of features that you want to compare or examine
# employment types
my_list = ['PrivateWork', 'PublicWork', 'SelfEmployed', 'FamilyWork', 'Unemployment']
n_bins = 30 # define n_bins
# histogram creation code is similar to above
for column_name in my_list:
    ax=plt.subplots(figsize=(6,3))
    # get data by column_name and display a histogram
    ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
    title="Histogram of " + column_name
    plt.title(title, fontsize=12)
    plt.show()
```
### EXERCISE: Normalize the data
You need to standardize the scale of the numerical columns in order to consistently compare the values of different features. You can use a [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to transform the numerical values so that they all fall between 0 and 1.
```
# scale numerical features into a normalized range, 0-1
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler()
# store them in this dataframe
counties_scaled=pd.DataFrame(scaler.fit_transform(clean_counties_df.astype(float)))
# get same features and State-County indices
counties_scaled.columns=clean_counties_df.columns
counties_scaled.index=clean_counties_df.index
counties_scaled.head()
counties_scaled.describe()
```
---
# Data Modeling
Now, the data is ready to be fed into a machine learning model!
Each data point has 34 features, so the data is 34-dimensional. Clustering algorithms rely on finding clusters in n-dimensional feature space. In higher dimensions, an algorithm like k-means has a difficult time figuring out which features are most important, and the result is often noisier clusters.
Some dimensions are not as important as others. For example, if every county in our dataset has the same rate of unemployment, then that particular feature doesn't give us any distinguishing information; it will not help to separate counties into different groups because its value doesn't *vary* between counties.
> Instead, we really want to find the features that help to separate and group data. We want to find features that cause the **most variance** in the dataset!
So, before I cluster this data, I'll want to take a dimensionality reduction step. My aim will be to form a smaller set of features that will better help to separate our data. The technique I'll use is called PCA, or **principal component analysis**.
## Dimensionality Reduction
PCA attempts to reduce the number of features within a dataset while retaining the “principal components”, which are defined as *weighted*, linear combinations of existing features that are designed to be linearly independent and account for the largest possible variability in the data! You can think of this method as taking many features and combining similar or redundant features together to form a new, smaller feature set.
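Before turning to the SageMaker estimator, a quick local sketch (scikit-learn on synthetic data; illustrative only, not the SageMaker PCA model used in this notebook) shows the idea in action: three correlated features collapse onto essentially one component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
# three observed features that are noisy linear combinations of one latent factor
data = np.hstack([
    latent,
    2 * latent + 0.05 * rng.normal(size=(200, 1)),
    -latent + 0.05 * rng.normal(size=(200, 1)),
])

pca = PCA(n_components=2).fit(data)
print(pca.explained_variance_ratio_)  # the first component dominates
```

Because the three features are redundant, a single weighted combination of them captures nearly all of the variance, which is exactly the behavior PCA exploits.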
We can reduce dimensionality with the built-in SageMaker model for PCA.
### Roles and Buckets
> To create a model, you'll first need to specify an IAM role, and to save the model attributes, you'll need to store them in an S3 bucket.
The `get_execution_role` function retrieves the IAM role you created at the time you created your notebook instance. Roles are essentially used to manage permissions and you can read more about that [in this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). For now, know that we have a FullAccess notebook, which allowed us to access and download the census data stored in S3.
You must specify a bucket name for an S3 bucket in your account where you want SageMaker model parameters to be stored. Note that the bucket must be in the same region as this notebook. You can get a default S3 bucket, which automatically creates a bucket for you and in your region, by storing the current SageMaker session and calling `session.default_bucket()`.
```
from sagemaker import get_execution_role
session = sagemaker.Session() # store the current SageMaker session
# get IAM role
role = get_execution_role()
print(role)
# get default bucket
bucket_name = session.default_bucket()
print(bucket_name)
print()
```
## Define a PCA Model
To create a PCA model, I'll use the built-in SageMaker resource. A SageMaker estimator requires a number of parameters to be specified; these define the type of training instance to use and the model hyperparameters. A PCA model requires the following constructor arguments:
* role: The IAM role, which was specified, above.
* train_instance_count: The number of training instances (typically, 1).
* train_instance_type: The type of SageMaker instance for training.
* num_components: An integer that defines the number of PCA components to produce.
* sagemaker_session: The session used to train on SageMaker.
Documentation on the PCA model can be found [here](http://sagemaker.readthedocs.io/en/latest/pca.html).
Below, I first specify where to save the model training data, the `output_path`.
```
# define location to store model artifacts
prefix = 'counties'
output_path='s3://{}/{}/'.format(bucket_name, prefix)
print('Training artifacts will be uploaded to: {}'.format(output_path))
# define a PCA model
from sagemaker import PCA
# this is current features - 1
# you'll select only a portion of these to use, later
N_COMPONENTS=33
pca_SM = PCA(role=role,
             train_instance_count=1,
             train_instance_type='ml.c4.xlarge',
             output_path=output_path, # specified, above
             num_components=N_COMPONENTS,
             sagemaker_session=session)
```
### Convert data into a RecordSet format
Next, prepare the data for a built-in model by converting the DataFrame to a numpy array of float values.
The *record_set* function in the SageMaker PCA model converts a numpy array into a **RecordSet** format, which is the required format for the training input data. This is a requirement for _all_ of SageMaker's built-in models, and the use of this data type is one of the reasons training within Amazon SageMaker can be faster, especially for large datasets.
```
# convert df to np array
train_data_np = counties_scaled.values.astype('float32')
# convert to RecordSet format
formatted_train_data = pca_SM.record_set(train_data_np)
```
## Train the model
Call the fit function on the PCA model, passing in our formatted, training data. This spins up a training instance to perform the training job.
Note that it takes the longest to launch the specified training instance; the fitting itself doesn't take much time.
```
%%time
# train the PCA model on the formatted data
pca_SM.fit(formatted_train_data)
```
## Accessing the PCA Model Attributes
After the model is trained, we can access the underlying model parameters.
### Unzip the Model Details
Now that the training job is complete, you can find the job under **Jobs** in the **Training** subsection in the Amazon SageMaker console. You can find the job name listed in the training jobs. Use that job name in the following code to specify which model to examine.
Model artifacts are stored in S3 as a compressed TAR file, at the output path we specified plus `output/model.tar.gz`. The artifacts stored here can be used to deploy a trained model.
```
# Get the name of the training job, it's suggested that you copy-paste
# from the notebook or from a specific job in the AWS console
training_job_name='pca-2020-05-10-09-32-14-469'
# where the model is saved, by default
model_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz')
print(model_key)
# download and unzip model
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
# unzipping as model_algo-1
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')
```
### MXNet Array
Many of the Amazon SageMaker algorithms use MXNet for computational speed, including PCA, and so the model artifacts are stored as an array. After the model is unzipped and decompressed, we can load the array using MXNet.
You can take a look at the MXNet [documentation, here](https://aws.amazon.com/mxnet/).
```
import mxnet as mx
# loading the unzipped artifacts
pca_model_params = mx.ndarray.load('model_algo-1')
# what are the params
print(pca_model_params)
```
## PCA Model Attributes
Three types of model attributes are contained within the PCA model.
* **mean**: The mean that was subtracted from a component in order to center it.
* **v**: The makeup of the principal components; (same as ‘components_’ in an sklearn PCA model).
* **s**: The singular values of the components for the PCA transformation. This does not exactly give the % variance from the original feature space, but can give the % variance from the projected feature space.
We are only interested in v and s.
From s, we can get an approximation of the data variance that is covered in the first `n` principal components. The approximate explained variance is given by the formula: the sum of squared s values for all top n components over the sum over squared s values for _all_ components:
\begin{equation*}
\frac{\sum_{i=1}^{n} s_i^2}{\sum_{i=1}^{N} s_i^2}
\end{equation*}
From v, we can learn more about the combinations of original features that make up each principal component.
```
# get selected params
s=pd.DataFrame(pca_model_params['s'].asnumpy())
v=pd.DataFrame(pca_model_params['v'].asnumpy())
```
## Data Variance
Our current PCA model creates 33 principal components, but when we create new dimensionality-reduced training data, we'll select only the top n components to use. To decide how many top components to include, it's helpful to look at how much **data variance** the components capture. For our original, high-dimensional data, 34 features captured 100% of our data variance. If we discard some of these higher dimensions, we will lower the amount of variance we can capture.
### Tradeoff: dimensionality vs. data variance
As an illustrative example, say we have original data in three dimensions. So, three dimensions capture 100% of our data variance; these dimensions cover the entire spread of our data. The below images are taken from the PhD thesis, [“Approaches to analyse and interpret biological profile data”](https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/index/index/docId/696) by Matthias Scholz, (2006, University of Potsdam, Germany).
<img src='notebook_ims/3d_original_data.png' width=35% />
Now, you may also note that most of this data seems related; it falls close to a 2D plane, and just by looking at the spread of the data, we can visualize that the original, three dimensions have some correlation. So, we can instead choose to create two new dimensions, made up of linear combinations of the original, three dimensions. These dimensions are represented by the two axes/lines, centered in the data.
<img src='notebook_ims/pca_2d_dim_reduction.png' width=70% />
If we project this in a new, 2D space, we can see that we still capture most of the original data variance using *just* two dimensions. There is a tradeoff between the amount of variance we can capture and the number of component-dimensions we use to represent our data.
When we select the top n components to use in a new data model, we'll typically want to include enough components to capture about 80-90% of the original data variance. In this project, we are looking at generalizing over a lot of data and we'll aim for about 80% coverage.
**Note**: The _top_ principal components, with the largest s values, are actually at the end of the s DataFrame. Let's print out the s values for the top n, principal components.
```
# looking at top 5 components
n_principal_components = 5
start_idx = N_COMPONENTS - n_principal_components # 33-n
# print a selection of s
print(s.iloc[start_idx:, :])
```
### EXERCISE: Calculate the explained variance
In creating new training data, you'll want to choose the top n principal components that account for at least 80% data variance.
Complete a function, `explained_variance` that takes in the entire array `s` and a number of top principal components to consider. Then return the approximate, explained variance for those top n components.
For example, to calculate the explained variance for the top 5 components, calculate s squared for *each* of the top 5 components, add those up and normalize by the sum of *all* squared s values, according to this formula:
\begin{equation*}
\frac{\sum_{i=1}^{5} s_i^2}{\sum_{i=1}^{N} s_i^2}
\end{equation*}
> Using this function, you should be able to answer the **question**: What is the smallest number of principal components that captures at least 80% of the total variance in the dataset?
```
# Calculate the explained variance for the top n principal components
# you may assume you have access to the global var N_COMPONENTS
def explained_variance(s, n_top_components):
    '''Calculates the approx. data variance that n_top_components captures.
    :param s: A dataframe of singular values for top components;
        the top value is in the last row.
    :param n_top_components: An integer, the number of top components to use.
    :return: The expected data variance covered by the n_top_components.'''
    start_idx = N_COMPONENTS - n_top_components  # 33-3 = 30, for example
    # calculate approx variance
    exp_variance = np.square(s.iloc[start_idx:,:]).sum()/np.square(s).sum()
    return exp_variance[0]
```
### Test Cell
Test out your own code by seeing how it responds to different inputs; does it return a reasonable value for the single, top component? What about for the top 5 components?
```
# test cell
n_top_components = 7 # select a value for the number of top components
# calculate the explained variance
exp_variance = explained_variance(s, n_top_components)
print('Explained variance: ', exp_variance)
```
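A small scan answers the question directly: increase n until the cumulative share of squared singular values passes 80%. The sketch below is self-contained, using synthetic `s` values (largest in the last row, mirroring the SageMaker layout) and a local copy of the function keyed off `len(s)` instead of the global `N_COMPONENTS`; with the real `s` you would call `explained_variance` as defined above in the same loop.

```python
import numpy as np
import pandas as pd

# synthetic singular values, smallest first / largest last (SageMaker layout)
s = pd.DataFrame([0.5, 0.8, 1.0, 2.0, 3.0, 6.0])

def explained_variance(s, n_top_components):
    start_idx = len(s) - n_top_components  # top values sit in the last rows
    return (np.square(s.iloc[start_idx:, :]).sum() / np.square(s).sum())[0]

for n in range(1, len(s) + 1):
    if explained_variance(s, n) >= 0.80:
        print('smallest n for >= 80% variance:', n)
        break
```

The same loop over the real singular values gives the answer to the exercise question.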
As an example, you should see that the top principal component accounts for about 32% of our data variance! Next, you may be wondering what makes up this and other components: what linear combination of features makes these components so influential in describing the spread of our data?
Below, let's take a look at our original features and use that as a reference.
```
# features
features_list = counties_scaled.columns.values
print('Features: \n', features_list)
```
## Component Makeup
We can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. The following code shows the feature-level makeup of the first component.
Note that the components are again ordered from smallest to largest, so the code retrieves the top component (component 1) by indexing at `N_COMPONENTS - 1`.
```
import seaborn as sns
def display_component(v, features_list, component_num, n_weights=10):
# get index of component (last row - component_num)
row_idx = N_COMPONENTS-component_num
# get the list of weights from a row in v, dataframe
v_1_row = v.iloc[:, row_idx]
v_1 = np.squeeze(v_1_row.values)
# match weights to features in counties_scaled dataframe, using list comprehension
comps = pd.DataFrame(list(zip(v_1, features_list)),
columns=['weights', 'features'])
# we'll want to sort by the largest n_weights
# weights can be neg/pos and we'll sort by magnitude
comps['abs_weights']=comps['weights'].apply(lambda x: np.abs(x))
sorted_weight_data = comps.sort_values('abs_weights', ascending=False).head(n_weights)
# display using seaborn
fig, ax = plt.subplots(figsize=(10, 6))
ax=sns.barplot(data=sorted_weight_data,
x="weights",
y="features",
palette="Blues_d")
ax.set_title("PCA Component Makeup, Component #" + str(component_num))
plt.show()
# display makeup of first component
num=1
display_component(v, counties_scaled.columns.values, component_num=num, n_weights=10)
```
# Deploying the PCA Model
We can now deploy this model and use it to make "predictions". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point.
Run the cell below to deploy/host this model on an instance_type that we specify.
```
%%time
# this takes a little while, around 7mins
pca_predictor = pca_SM.deploy(initial_instance_count=1,
instance_type='ml.t2.medium')
```
We can pass the original, numpy dataset to the model and transform the data using the model we created. Then we can take the largest n components to reduce the dimensionality of our data.
```
# pass np train data to the PCA model
train_pca = pca_predictor.predict(train_data_np)
# check out the first item in the produced training features
data_idx = 0
print(train_pca[data_idx])
```
### EXERCISE: Create a transformed DataFrame
For each of our data points, get the top n component values from the list of component data points, returned by our predictor above, and put those into a new DataFrame.
You should end up with a DataFrame that looks something like the following:
```
c_1 c_2 c_3 c_4 c_5 ...
Alabama-Autauga -0.060274 0.160527 -0.088356 0.120480 -0.010824 ...
Alabama-Baldwin -0.149684 0.185969 -0.145743 -0.023092 -0.068677 ...
Alabama-Barbour 0.506202 0.296662 0.146258 0.297829 0.093111 ...
...
```
```
# create dimensionality-reduced data
def create_transformed_df(train_pca, counties_scaled, n_top_components):
''' Return a dataframe of data points with component features.
The dataframe should be indexed by State-County and contain component values.
:param train_pca: A list of pca training data, returned by a PCA model.
:param counties_scaled: A dataframe of normalized, original features.
:param n_top_components: An integer, the number of top components to use.
:return: A dataframe, indexed by State-County, with n_top_component values as columns.
'''
# create new dataframe to add data to
counties_transformed=pd.DataFrame()
# for each of our new, transformed data points
# append the component values to the dataframe
for data in train_pca:
# get component values for each data point
components=data.label['projection'].float32_tensor.values
counties_transformed = pd.concat([counties_transformed, pd.DataFrame([list(components)])], ignore_index=True)
# index by county, just like counties_scaled
counties_transformed.index=counties_scaled.index
# keep only the top n components
start_idx = N_COMPONENTS - n_top_components
counties_transformed = counties_transformed.iloc[:,start_idx:]
# reverse columns, component order
return counties_transformed.iloc[:, ::-1]
```
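The slice-and-reverse indexing at the end of this function is easy to get wrong, so here is a toy check on a made-up DataFrame (column names and values are invented for illustration):

```
import pandas as pd

N_COMPONENTS = 5
# Toy frame with components stored smallest-to-largest; largest is in the last column
df = pd.DataFrame([[0.1, 0.2, 0.3, 0.4, 0.5]],
                  columns=[f"v_{i}" for i in range(N_COMPONENTS)])

n_top = 3
start_idx = N_COMPONENTS - n_top
top = df.iloc[:, start_idx:].iloc[:, ::-1]  # keep the top 3, largest first
print(list(top.columns))  # ['v_4', 'v_3', 'v_2']
```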
Now we can create a dataset where each county is described by the top n principal components that we analyzed earlier. Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously.
```
# specify top n
top_n = 7
# call your function and create a new dataframe
counties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)
# add descriptive columns
PCA_list=['c_1', 'c_2', 'c_3', 'c_4', 'c_5', 'c_6', 'c_7']
counties_transformed.columns=PCA_list
# print result
counties_transformed.head()
```
### Delete the Endpoint!
Now that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint.
As a cleanup step, you should always delete your endpoints when you are done using them (unless, for example, you plan to keep them deployed behind a website).
```
# delete predictor endpoint
session.delete_endpoint(pca_predictor.endpoint)
```
---
# Population Segmentation
Now, you’ll use the unsupervised clustering algorithm, k-means, to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we have ~3000 counties and 34 attributes in the original dataset, the large feature space may have made it difficult to cluster the counties effectively. Instead, we have reduced the feature space to 7 PCA components, and we’ll cluster on this transformed dataset.
### EXERCISE: Define a k-means model
Your task will be to instantiate a k-means model. A `KMeans` estimator requires a number of parameters to be instantiated, which allow us to specify the type of training instance to use, and the model hyperparameters.
You can read about the required parameters, in the [`KMeans` documentation](https://sagemaker.readthedocs.io/en/stable/kmeans.html); note that not all of the possible parameters are required.
### Choosing a "Good" K
One method for choosing a "good" k is empirical. A k that is too *high* produces clusters containing only one or two very close data points, while a k that is too *low* leaves data points far away from their cluster centers.
You want to select a k such that data points in a single cluster are close together but that there are enough clusters to effectively separate the data. You can approximate this separation by measuring how close your data points are to each cluster center; the average centroid distance between cluster points and a centroid. After trying several values for k, the centroid distance typically reaches some "elbow"; it stops decreasing at a sharp rate and this indicates a good value of k. The graph below indicates the average centroid distance for value of k between 5 and 12.
<img src='notebook_ims/elbow_graph.png' width=50% />
A distance elbow can be seen around k=8, where the distance stops decreasing at a sharp rate and begins to level off. This indicates that there is enough separation to distinguish the data points in each cluster, but also that you included enough clusters so that the data points aren't *extremely* far away from each cluster center.
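The elbow computation itself does not require SageMaker; a minimal, library-free sketch of the idea is below (the synthetic blobs and the tiny k-means loop are illustrative only, not the `KMeans` estimator used in this notebook):

```
import numpy as np

def kmeans_avg_dist(X, k, n_iter=20, seed=0):
    """Run a tiny Lloyd's k-means and return the average distance
    from each point to its nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Four well-separated synthetic blobs in 2-D
rng = np.random.default_rng(42)
X = np.concatenate([rng.normal(c, 0.3, size=(100, 2))
                    for c in [(0, 0), (5, 5), (0, 5), (5, 0)]])

# Average centroid distance shrinks as k grows; look for where it levels off
for k in range(2, 8):
    print(k, round(kmeans_avg_dist(X, k), 3))
```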
```
# define a KMeans estimator
from sagemaker import KMeans
NUM_CLUSTERS = 8
kmeans = KMeans(role=role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
output_path=output_path, # using the same output path as was defined, earlier
k=NUM_CLUSTERS)
```
### EXERCISE: Create formatted, k-means training data
Just as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model.
```
# convert the transformed dataframe into record_set data
kmeans_train_data_np = counties_transformed.values.astype('float32')
kmeans_formatted_data = kmeans.record_set(kmeans_train_data_np)
```
### EXERCISE: Train the k-means model
Pass in the formatted training data and train the k-means model.
```
%%time
# train kmeans
kmeans.fit(kmeans_formatted_data)
```
### EXERCISE: Deploy the k-means model
Deploy the trained model to create a `kmeans_predictor`.
```
%%time
# deploy the model to create a predictor
kmeans_predictor = kmeans.deploy(initial_instance_count=1,
instance_type='ml.t2.medium')
```
### EXERCISE: Pass in the training data and assign predicted cluster labels
After deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point.
```
# get the predicted clusters for all the kmeans training data
cluster_info=kmeans_predictor.predict(kmeans_train_data_np)
```
## Exploring the resultant clusters
The resulting predictions should give you information about the cluster that each data point belongs to.
You should be able to answer the **question**: which cluster does a given data point belong to?
```
# print cluster info for first data point
data_idx = 0
print('County is: ', counties_transformed.index[data_idx])
print()
print(cluster_info[data_idx])
```
### Visualize the distribution of data over clusters
Get the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster.
```
# get all cluster labels
cluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]
# count up the points in each cluster
cluster_df = pd.DataFrame(cluster_labels)[0].value_counts()
print(cluster_df)
# another method of visualizing the distribution
# display a histogram of cluster counts
fig, ax = plt.subplots(figsize=(6, 3))
ax = plt.hist(cluster_labels, bins=8, range=(-0.5, 7.5), color='blue', rwidth=0.5)
title="Histogram of Cluster Counts"
plt.title(title, fontsize=12)
plt.show()
```
Now, you may be wondering, what do each of these clusters tell us about these data points? To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster.
### Delete the Endpoint!
Now that you've deployed the k-means model and extracted the cluster labels for each data point, you no longer need the k-means endpoint.
```
# delete kmeans endpoint
session.delete_endpoint(kmeans_predictor.endpoint)
```
---
# Model Attributes & Explainability
Explaining the result of the modeling is an important step in making use of our analysis. By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, based on the data.
### EXERCISE: Access the k-means model attributes
Extract the k-means model attributes from where they are saved as a TAR file in an S3 bucket.
You'll need to access the model by the k-means training job name, and then unzip the file into `model_algo-1`. Then you can load that file using MXNet, as before.
```
# download and unzip the kmeans model file
kmeans_job_name = 'kmeans-2020-05-10-09-48-20-293'
model_key = os.path.join(prefix, kmeans_job_name, 'output/model.tar.gz')
# download the model file
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')
# get the trained kmeans params using mxnet
kmeans_model_params = mx.ndarray.load('model_algo-1')
print(kmeans_model_params)
```
There is only 1 set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space.
* **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm.
```
# get all the centroids
cluster_centroids=pd.DataFrame(kmeans_model_params[0].asnumpy())
cluster_centroids.columns=counties_transformed.columns
display(cluster_centroids)
```
### Visualizing Centroids in Component Space
You can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space.
This gives you insight into what characteristics define each cluster. Often with unsupervised learning, results are hard to interpret. This is one way to make use of the results of PCA + clustering techniques, together. Since you were able to examine the makeup of each PCA component, you can understand what each centroid represents in terms of the PCA components.
```
# generate a heatmap in component space, using the seaborn library
plt.figure(figsize = (12,9))
ax = sns.heatmap(cluster_centroids.T, cmap = 'YlGnBu')
ax.set_xlabel("Cluster")
plt.yticks(fontsize = 16)
plt.xticks(fontsize = 16)
ax.set_title("Attribute Value by Centroid")
plt.show()
```
If you've forgotten what each component corresponds to at an original-feature-level, that's okay! You can use the previously defined `display_component` function to see the feature-level makeup.
```
# what do each of these components mean again?
# let's use the display function, from above
component_num=4
display_component(v, counties_scaled.columns.values, component_num=component_num)
```
### Natural Groupings
You can also map the cluster labels back to each individual county and examine which counties are naturally grouped together.
```
# add a 'labels' column to the dataframe
counties_transformed['labels']=list(map(int, cluster_labels))
# sort by cluster label 0-6
sorted_counties = counties_transformed.sort_values('labels', ascending=True)
# view some pts in cluster 0
sorted_counties.head(20)
```
You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us that it has the highest value for the `c_6` attribute. You can now see which counties fit that description.
```
# get all counties with label == 1
cluster=counties_transformed[counties_transformed['labels']==1]
cluster.head()
```
## Final Cleanup!
* Double check that you have deleted all your endpoints.
* I'd also suggest manually deleting your S3 bucket, models, and endpoint configurations directly from your AWS console.
You can find thorough cleanup instructions, [in the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html).
---
# Conclusion
You have just walked through a machine learning workflow for unsupervised learning, specifically, for clustering a dataset using k-means after reducing the dimensionality using PCA. By accessing the underlying models created within SageMaker, you were able to improve the explainability of your model and draw insights from the resultant clusters.
Using these techniques, you have been able to better understand the essential characteristics of different counties in the US and segment them into similar groups, accordingly.
# Histograms of time-mean surface temperature
## Import the libraries
```
# Data analysis and viz libraries
import aeolus.plot as aplt
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
# Local modules
from calc import sfc_temp
import mypaths
from names import names
from commons import MODELS
import const_ben1_hab1 as const
from plot_func import (
KW_MAIN_TTL,
KW_SBPLT_LABEL,
figsave,
)
plt.style.use("paper.mplstyle")
```
## Load the data
Load the time-averaged data previously preprocessed.
```
THAI_cases = ["Hab1", "Hab2"]
# Load data
datasets = {} # Create an empty dictionary to store all data
# for each of the THAI cases, create a nested directory for models
for THAI_case in THAI_cases:
datasets[THAI_case] = {}
for model_key in MODELS.keys():
datasets[THAI_case][model_key] = xr.open_dataset(
mypaths.datadir / model_key / f"{THAI_case}_time_mean_{model_key}.nc"
)
bin_step = 10
bins = np.arange(170, 321, bin_step)
bin_mid = (bins[:-1] + bins[1:]) * 0.5
t_sfc_step = abs(bins - const.t_melt).max()
ncols = 1
nrows = 2
width = 0.75 * bin_step / len(MODELS)
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols * 8, nrows * 4.5))
iletters = aplt.subplot_label_generator()
for THAI_case, ax in zip(THAI_cases, axs.flat):
ax.set_title(f"{next(iletters)}", **KW_SBPLT_LABEL)
ax.set_xlim(bins[0], bins[-1])
ax.set_xticks(bins)
ax.grid(axis="x")
if ax.get_subplotspec().is_last_row():
ax.set_xlabel("Surface temperature [$K$]")
ax.set_title(THAI_case, **KW_MAIN_TTL)
# ax2 = ax.twiny()
# ax2.set_xlim(bins[0], bins[-1])
# ax2.axvline(const.t_melt, color="k", linestyle="--")
# ax2.set_xticks([const.t_melt])
# ax2.set_xticklabels([const.t_melt])
# ax.vlines(const.t_melt, ymin=0, ymax=38.75, color="k", linestyle="--")
# ax.vlines(const.t_melt, ymin=41.5, ymax=45, color="k", linestyle="--")
# ax.text(const.t_melt, 40, f"{const.t_melt:.2f}", ha="center", va="center", fontsize="small")
ax.imshow(
np.linspace(0, 1, 100).reshape(1, -1),
extent=[const.t_melt - t_sfc_step, const.t_melt + t_sfc_step, 0, 45],
aspect="auto",
cmap="seismic",
alpha=0.25,
)
ax.set_ylim([0, 45])
if ax.get_subplotspec().is_first_col():
ax.set_ylabel("Area fraction [%]")
for i, (model_key, model_dict) in zip([-3, -1, 1, 3], MODELS.items()):
model_names = names[model_key]
ds = datasets[THAI_case][model_key]
arr = sfc_temp(ds, model_key, const)
weights = xr.broadcast(np.cos(np.deg2rad(arr.latitude)), arr)[0].values.ravel()
# tot_pnts = arr.size
hist, _ = np.histogram(
arr.values.ravel(), bins=bins, weights=weights, density=True
)
hist *= 100 * bin_step
# hist = hist / tot_pnts * 100
# hist[hist==0] = np.nan
ax.bar(
bin_mid + (i * width / 2),
hist,
width=width,
facecolor=model_dict["color"],
edgecolor="none",
alpha=0.8,
label=model_dict["title"],
)
ax.legend(loc="upper left")
fig.tight_layout()
fig.align_labels()
figsave(
fig,
mypaths.plotdir / f"{'_'.join(THAI_cases)}__hist__t_sfc_weighted",
)
```
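As a standalone check of the cos(latitude) area weighting used above, the same weighted-histogram normalization can be reproduced on a made-up 1-D latitude grid (all values below are invented for illustration):

```
import numpy as np

# Hypothetical latitude grid and toy temperature field, warmer at the equator
lats = np.linspace(-89, 89, 90)
temps = 250 + 40 * np.cos(np.deg2rad(lats))
weights = np.cos(np.deg2rad(lats))  # grid cells near the poles cover less area

bin_step = 10
bins = np.arange(240, 301, bin_step)
hist, _ = np.histogram(temps, bins=bins, weights=weights, density=True)
hist *= 100 * bin_step  # density -> area fraction per bin, in %
print(round(hist.sum(), 6))  # all samples fall inside the bins, so this is 100
```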
# Near to far field transformation
See on [github](https://github.com/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), run on [colab](https://colab.research.google.com/github/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), or just follow along with the output below.
This tutorial will show you how to solve for electromagnetic fields far away from your structure using field information stored on a nearby surface.
This technique is called a 'near field to far field transformation' and is very useful for reducing the simulation size needed for structures involving lots of empty space.
As an example, we will simulate a simple zone plate lens with a very thin domain size to get the transmitted fields measured just above the structure. Then, we'll show how to use the `Near2Far` feature from `tidy3d` to extrapolate the fields to the focal plane above the lens.
```
# get the most recent version of tidy3d
!pip install -q --upgrade tidy3d
# make sure notebook plots inline
%matplotlib inline
# standard python imports
import numpy as np
import matplotlib.pyplot as plt
import sys
# import client side tidy3d
import tidy3d as td
from tidy3d import web
```
## Problem Setup
Below is a rough sketch of the setup of a near field to far field transformation.
The transmitted near fields are measured just above the metalens on the blue line, and the near field to far field transformation is then used to project the fields to the focal plane above at the red line.
<img src="img/n2f_diagram.png" width=800>
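Tidy3D performs this projection using surface-equivalence currents and analytical Green's functions; as a conceptual toy only (not the `Near2Far` implementation), the idea that the fields on one plane determine the fields on any plane beyond it can be sketched with an FFT-based angular-spectrum propagator for a scalar field:

```
import numpy as np

def propagate_plane(E0, dx, wavelength, z):
    """Propagate a monochromatic scalar field E0, sampled on a uniform grid
    with spacing dx, a distance z forward via the angular-spectrum method."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(E0.shape[0], d=dx)
    fy = np.fft.fftfreq(E0.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    # complex sqrt: evanescent components (negative argument) decay with z
    kz = np.sqrt((k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(E0) * np.exp(1j * kz * z))
```

For a uniform plane wave, this reduces to multiplying the field by the propagation phase exp(i k z), which is a convenient correctness check.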
## Define Simulation Parameters
As always, we first need to define our simulation parameters. As a reminder, all length units in `tidy3d` are specified in microns.
```
# 1 nanometer in units of microns (for conversion)
nm = 1e-3
# free space central wavelength
wavelength = 1.0
# numerical aperture
NA = 0.8
# thickness of lens features
H = 200 * nm
# space between bottom PML and substrate (-z)
# and the space between lens structure and top pml (+z)
space_below_sub = 1.5 * wavelength
# thickness of substrate (um)
thickness_sub = wavelength / 2
# side length (xy plane) of entire metalens (um)
length_xy = 40 * wavelength
# Lens and substrate refractive index
n_TiO2 = 2.40
n_SiO2 = 1.46
# define material properties
air = td.Medium(epsilon=1.0)
SiO2 = td.Medium(epsilon=n_SiO2**2)
TiO2 = td.Medium(epsilon=n_TiO2**2)
# resolution of simulation (15 or more grids per wavelength is adequate)
grids_per_wavelength = 20
# Number of PML layers to use around edges of simulation, choose thickness of one wavelength to be safe
npml = grids_per_wavelength
```
## Process Geometry
Next we perform some conversions based on these parameters to define the simulation.
```
# grid size (um)
dl = wavelength / grids_per_wavelength
# because the wavelength is in microns, use builtin td.C_0 (um/s) to get frequency in Hz
f0 = td.C_0 / wavelength
# Define PML layers, for this application we surround the whole structure in PML to isolate the fields
pml_layers = [npml, npml, npml]
# domain size in z, note, we're just simulating a thin slice: (space -> substrate -> lens thickness -> space)
length_z = space_below_sub + thickness_sub + H + space_below_sub
# construct simulation size array
sim_size = np.array([length_xy, length_xy, length_z])
```
## Create Geometry
Now we create the ring metalens programmatically.
```
# define substrate
substrate = td.Box(
center=[0, 0, -length_z/2 + space_below_sub + thickness_sub / 2.0],
size=[td.inf, td.inf, thickness_sub],
material=SiO2)
# create a running list of structures
geometry = [substrate]
# focal length
focal_length = length_xy / 2 / NA * np.sqrt(1 - NA**2)
# location from center for edge of the n-th inner ring, see https://en.wikipedia.org/wiki/Zone_plate
def edge(n):
return np.sqrt(n * wavelength * focal_length + n**2 * wavelength**2 / 4)
# loop through the ring indices until the radius is too big, adding each ring to the geometry list
n = 1
r = edge(n)
while r < 2 * length_xy:
# progressively wider cylinders, material alternating between air and TiO2
cyl = td.Cylinder(
center = [0,0,-length_z/2 + space_below_sub + thickness_sub + H / 2],
axis='z',
radius=r,
height=H,
material=TiO2 if n % 2 == 0 else air,
name=f'cylinder_n={n}'
)
geometry.append(cyl)
n += 1
r = edge(n)
# reverse geometry list so that inner, smaller rings are added last and therefore override larger rings.
geometry.reverse()
```
## Create Source
Create a plane wave incident from below the metalens
```
# Bandwidth in Hz
fwidth = f0 / 10.0
# Gaussian source offset; the source peak is at time t = offset/fwidth
offset = 4.
# time dependence of source
gaussian = td.GaussianPulse(f0, fwidth, offset=offset, phase=0)
source = td.PlaneWave(
source_time=gaussian,
injection_axis='+z',
position=-length_z/2 + space_below_sub / 2, # halfway between PML and substrate
polarization='x')
# Simulation run time
run_time = 40 / fwidth
```
## Create Monitor
Create a near field monitor to measure the fields just above the metalens
```
# place it halfway between top of lens and PML
monitor_near = td.FreqMonitor(
center=[0., 0., -length_z/2 + space_below_sub + thickness_sub + H + space_below_sub / 2],
size=[length_xy, length_xy, 0],
freqs=[f0],
name='near_field')
```
## Create Simulation
Put everything together and define a simulation object
```
sim = td.Simulation(size=sim_size,
mesh_step=[dl, dl, dl],
structures=geometry,
sources=[source],
monitors=[monitor_near],
run_time=run_time,
pml_layers=pml_layers)
```
## Visualize Geometry
Lets take a look and make sure everything is defined properly
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 8))
# Visualize the relative permittivity on two cross-sections
sim.viz_eps_2D(normal='x', position=0.1, ax=ax1);
sim.viz_eps_2D(normal='z', position=-length_z/2 + space_below_sub + thickness_sub + H / 2, ax=ax2);
```
## Run Simulation
Now we can run the simulation and download the results
```
# Run simulation
project = web.new_project(sim.export(), task_name='near2far_docs')
web.monitor_project(project['taskId'])
# download and load the results
print('Downloading results')
web.download_results(project['taskId'], target_folder='output')
sim.load_results('output/monitor_data.hdf5')
# print stats from the logs
with open("output/tidy3d.log") as f:
print(f.read())
```
## Visualization
Let's inspect the near field using the Tidy3D builtin field visualization methods.
For more details see the documentation of [viz_field_2D](https://simulation.cloud/docs/html/generated/tidy3d.Simulation.viz_field_2D.html#tidy3d.Simulation.viz_field_2D).
```
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, val in zip(axes, ('re', 'abs', 'int')):
im = sim.viz_field_2D(monitor_near, eps_alpha=0, comp='x', val=val, cbar=True, ax=ax)
plt.show()
```
## Setting Up Near 2 Far
To set up near to far, we first need to grab the data from the nearfield monitor.
```
# near field monitor data dictionary
monitor_data = sim.data(monitor_near)
# grab the raw data for plotting later
xs = monitor_data['xmesh']
ys = monitor_data['ymesh']
E_near = np.squeeze(monitor_data['E'])
```
Then, we create a `td.Near2Far` object using the monitor data dictionary as follows.
This object just stores near field data and provides [various methods](https://simulation.cloud/docs/html/generated/tidy3d.Near2Far.html#tidy3d.Near2Far) for looking at various far field quantities.
```
# from near2far_tidy3d import Near2Far
n2f = td.Near2Far(monitor_data)
```
## Getting Far Field Data
With the `Near2Far` object initialized, we just need to call one of its methods to get a far field quantity.
For this example, we use `Near2Far.get_fields_cartesian(x,y,z)` to get the fields at an `x,y,z` point relative to the monitor center.
Below, we scan through x and y points in a plane located at `z=z0` and record the far fields.
```
# points to project to
num_far = 40
xs_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)
ys_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)
# get a mesh in cartesian, convert to spherical
Nx, Ny = len(xs), len(ys)
# initialize the far field values
E_far = np.zeros((3, num_far, num_far), dtype=complex)
H_far = np.zeros((3, num_far, num_far), dtype=complex)
# loop through points in the output plane
for i in range(num_far):
sys.stdout.write(" \rGetting far fields, %2d%% done"%(100*i/(num_far + 1)))
sys.stdout.flush()
x = xs_far[i]
for j in range(num_far):
y = ys_far[j]
# compute and store the outputs from projection function at the focal plane
E, H = n2f.get_fields_cartesian(x, y, focal_length)
E_far[:, i, j] = E
H_far[:, i, j] = H
sys.stdout.write("\nDone!")
```
## Plot Results
Now we can plot the near and far fields together
```
# plot everything
f, ((ax1, ax2, ax3),
(ax4, ax5, ax6)) = plt.subplots(2, 3, tight_layout=True, figsize=(10, 5))
def pmesh(xs, ys, array, ax, cmap):
im = ax.pcolormesh(xs, ys, array.T, cmap=cmap, shading='auto')
return im
im1 = pmesh(xs, ys, np.real(E_near[0]), ax=ax1, cmap='RdBu')
im2 = pmesh(xs, ys, np.real(E_near[1]), ax=ax2, cmap='RdBu')
im3 = pmesh(xs, ys, np.real(E_near[2]), ax=ax3, cmap='RdBu')
im4 = pmesh(xs_far, ys_far, np.real(E_far[0]), ax=ax4, cmap='RdBu')
im5 = pmesh(xs_far, ys_far, np.real(E_far[1]), ax=ax5, cmap='RdBu')
im6 = pmesh(xs_far, ys_far, np.real(E_far[2]), ax=ax6, cmap='RdBu')
ax1.set_title('near field $E_x(x,y)$')
ax2.set_title('near field $E_y(x,y)$')
ax3.set_title('near field $E_z(x,y)$')
ax4.set_title('far field $E_x(x,y)$')
ax5.set_title('far field $E_y(x,y)$')
ax6.set_title('far field $E_z(x,y)$')
plt.colorbar(im1, ax=ax1)
plt.colorbar(im2, ax=ax2)
plt.colorbar(im3, ax=ax3)
plt.colorbar(im4, ax=ax4)
plt.colorbar(im5, ax=ax5)
plt.colorbar(im6, ax=ax6)
plt.show()
# we can also use the far field data and plot the field intensity to see the focusing effect
intensity_far = np.sum(np.square(np.abs(E_far)), axis=0)
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
im1 = pmesh(xs_far, ys_far, intensity_far, ax=ax1, cmap='magma')
im2 = pmesh(xs_far, ys_far, np.sqrt(intensity_far), ax=ax2, cmap='magma')
ax1.set_title('$|E(x,y)|^2$')
ax2.set_title('$|E(x,y)|$')
plt.colorbar(im1, ax=ax1)
plt.colorbar(im2, ax=ax2)
plt.show()
```

# Qiskit Runtime on IBM Cloud
Qiskit Runtime is now part of the IBM Quantum Services on IBM Cloud. To use this service, you'll need to create an IBM Cloud account and a quantum service instance. [This guide](https://cloud.ibm.com/docs/account?topic=account-account-getting-started) contains step-by-step instructions on setting up the account and [this page](https://cloud.ibm.com/docs/quantum-computing?topic=quantum-computing-quickstart) explains how to create a service instance, including directions to find your IBM Cloud API key and Cloud Resource Name (CRN), which you will need later in this tutorial.
This tutorial assumes that you know how to use Qiskit, including using it to create circuits. If you are not familiar with Qiskit, the [Qiskit Textbook](https://qiskit.org/textbook/preface.html) is a great resource to learn about both Qiskit and quantum computation in general.
# qiskit-ibm-runtime
Once you have an IBM Cloud account and service instance set up, you can use `qiskit-ibm-runtime` to access Qiskit Runtime on IBM Cloud. `qiskit-ibm-runtime` provides the interface to interact with Qiskit Runtime. You can, for example, use it to query and execute runtime programs.
## Installation
You can install the `qiskit-ibm-runtime` package using pip:
```
pip install qiskit-ibm-runtime
```
## Account initialization
Before you can start using Qiskit Runtime, you need to initialize your account by calling `QiskitRuntimeService` with your IBM Cloud API key and the CRN or service name of your service instance.
You can also choose to save your credentials on disk (in the `$HOME/.qiskit/qiskit-ibm.json` file). By doing so, you only need to use `QiskitRuntimeService()` in the future to initialize your account.
For more information about account management, such as how to delete or view an account, see [04_account_management.ipynb](04_account_management.ipynb).
<div class="alert alert-block alert-info">
<b>Note:</b> Account credentials are saved in plain text, so only do so if you are using a trusted device.
</div>
```
from qiskit_ibm_runtime import QiskitRuntimeService
# Save account on disk.
# QiskitRuntimeService.save_account(channel="ibm_cloud", token=<IBM Cloud API key>, instance=<IBM Cloud CRN> or <IBM Cloud service name>)
service = QiskitRuntimeService()
```
The `<IBM Cloud API key>` in the example above is your IBM Cloud API key and looks something like
```
kYgdggnD-qx5k2u0AAFUKv3ZPW_avg0eQ9sK75CCW7hw
```
The `<IBM Cloud CRN>` is the Cloud Resource Name and looks something like
```
crn:v1:bluemix:public:quantum-computing:us-east:a/b947c1c5a9378d64aed96696e4d76e8e:a3a7f181-35aa-42c8-94d6-7c8ed6e1a94b::
```
The `<IBM Cloud service name>` is user-provided and defaults to something like
```
Quantum Services-9p
```
If you choose to set `instance` to the service name, the initialization time of the `QiskitRuntimeService` is slightly higher because the required `CRN` value is internally resolved via IBM Cloud APIs.
## Listing programs <a name='listing_program'>
There are three methods that can be used to find metadata of available programs:
- `pprint_programs()`: pretty prints summary metadata of available programs
- `programs()`: returns a list of `RuntimeProgram` instances
- `program()`: returns a single `RuntimeProgram` instance
The metadata of a runtime program includes its ID, name, description, maximum execution time, backend requirements, input parameters, and return values. Maximum execution time is the maximum amount of time, in seconds, a program can run before being forcibly terminated.
To print the summary metadata of the programs (by default first 20 programs are displayed):
```
service.pprint_programs(limit=None)
```
You can use the `limit` and `skip` parameters in `pprint_programs()` and `programs()` to page through all programs.
You can pass the `detailed=True` parameter to `pprint_programs()` to view all the metadata for the programs.
Once fetched, the program metadata is cached for performance. Pass the `refresh=True` parameter to get the latest data from the server.
To print the metadata of the program `sampler`:
```
program = service.program("sampler")
print(program)
```
As you can see from above, the primitive `sampler` calculates the distributions generated by given circuits executed on the target backend. It takes a number of parameters, but `circuits` and `circuit_indices` are the only required parameters. When the program finishes, it returns the quasi-probabilities for each circuit.
## Invoking a runtime program <a name='invoking_program'></a>
You can use the [QiskitRuntimeService.run()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.run) method to invoke a runtime program. This method takes the following parameters:
- `program_id`: ID of the program to run.
- `inputs`: Program input parameters. These input values are passed to the runtime program.
- `options`: Runtime options. These options control the execution environment. Currently the only available option is `backend_name`, which is optional for cloud runtime.
- `result_decoder`: Optional class used to decode the job result.
Below is an example of invoking the `sampler` program.
First we need to construct a circuit as the input to `sampler` using Qiskit.
```
from qiskit import QuantumCircuit
N = 6
qc = QuantumCircuit(N)
qc.x(range(0, N))
qc.h(range(0, N))
for kk in range(N // 2, 0, -1):
qc.ch(kk, kk - 1)
for kk in range(N // 2, N - 1):
qc.ch(kk, kk + 1)
qc.measure_all()
qc.draw("mpl", fold=-1)
```
We now use this circuit as the input to `sampler`:
```
# Specify the program inputs here.
program_inputs = {
"circuits": qc,
"circuit_indices": [0],
}
job = service.run(
program_id="sampler",
inputs=program_inputs,
)
# Printing the job ID in case we need to retrieve it later.
print(f"Job ID: {job.job_id}")
# Get the job result - this is blocking and control may not return immediately.
result = job.result()
print(result)
# see which backend the job was executed on
print(job.backend)
```
### Runtime job
The `run()` method returns a [RuntimeJob](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) instance, which represents the asynchronous execution instance of the program.
Some of the `RuntimeJob` methods:
- `status()`: Return job status.
- `result()`: Wait for the job to finish and return the final result.
- `cancel()`: Cancel the job.
- `wait_for_final_state()`: Wait for the job to finish.
- `logs()`: Return job logs.
- `error_message()`: Return the reason the job failed, or `None` otherwise.
Some of the `RuntimeJob` attributes:
- `job_id`: Unique identifier of the job.
- `backend`: The backend where the job is run.
- `program_id`: ID of the program the execution is for.
Refer to the [RuntimeJob API documentation](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) for a full list of methods and usage.
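Conceptually, `wait_for_final_state()` is a poll-until-terminal loop. A standalone sketch of that pattern (the `FakeJob` stub and the status strings here are illustrative, not the qiskit-ibm-runtime implementation):

```python
import time

FINAL_STATES = {"DONE", "CANCELLED", "ERROR"}  # illustrative terminal states

def wait_for_final_state(job, timeout=60.0, poll_interval=0.1):
    """Poll job.status() until it reaches a terminal state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    status = None
    while time.monotonic() < deadline:
        status = job.status()
        if status in FINAL_STATES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Job still in state {status!r} after {timeout}s")

class FakeJob:
    """Stub job that finishes after three status polls."""
    def __init__(self):
        self._polls = 0
    def status(self):
        self._polls += 1
        return "DONE" if self._polls >= 3 else "RUNNING"

print(wait_for_final_state(FakeJob(), timeout=5.0, poll_interval=0.01))  # DONE
```

The real method also streams interim results and raises on failure, but the polling skeleton is the same idea.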
<div class="alert alert-block alert-info">
<b>Note:</b> To ensure fairness, there is a maximum execution time for each Qiskit Runtime job. Refer to <a href="https://qiskit.org/documentation/partners/qiskit_ibm_runtime/max_time.html#qiskit-runtime-on-ibm-cloud">this documentation</a> for the current time limit.
</div>
## Selecting a backend
A **backend** is a quantum device or simulator capable of running quantum circuits or pulse schedules.
In the example above, we invoked a runtime program without specifying which backend it should run on. In this case the server automatically picks the one that is the least busy. Alternatively, you can choose a specific backend to run your program.
To list all the backends you have access to:
```
service.backends()
```
The [QiskitRuntimeService.backends()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backends) method also takes filters. For example, to find all real devices that have at least five qubits:
```
service.backends(simulator=False, min_num_qubits=5)
```
[QiskitRuntimeService.backends()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backends) returns a list of [IBMBackend](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.IBMBackend.html#qiskit_ibm_runtime.IBMBackend) instances. Each instance represents a particular backend. Attributes and methods of an `IBMBackend` provide more information about the backend, such as its qubit count, error rate, and status.
For more information about backends, such as commonly used attributes, see [03_backends.ipynb](03_backends.ipynb).
Once you select a backend to use, you can specify the name of the backend in the `options` parameter:
```
# Specify the program inputs here.
program_inputs = {
"circuits": qc,
"circuit_indices": [0],
}
# Specify the backend name.
options = {"backend_name": "ibmq_qasm_simulator"}
job = service.run(
program_id="sampler",
options=options,
inputs=program_inputs,
)
# Printing the job ID in case we need to retrieve it later.
print(f"Job ID: {job.job_id}")
# Get the job result - this is blocking and control may not return immediately.
result = job.result()
print(result)
```
## Retrieving previously run jobs
You can use the [QiskitRuntimeService.job()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.job) method to retrieve a previously executed runtime job. Attributes of this [RuntimeJob](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) instance can tell you about the execution:
```
retrieved_job = service.job(job.job_id)
print(
f"Job {retrieved_job.job_id} is an execution instance of runtime program {retrieved_job.program_id}."
)
print(
f"This job ran on backend {retrieved_job.backend} and had input parameters {retrieved_job.inputs}"
)
```
Similarly, you can use [QiskitRuntimeService.jobs()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.jobs) to get a list of jobs. You can specify a limit on how many jobs to return. The default limit is 10:
```
retrieved_jobs = service.jobs(limit=1)
for rjob in retrieved_jobs:
print(rjob.job_id)
```
## Deleting a job
You can use the [QiskitRuntimeService.delete_job()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.delete_job) method to delete a job. You can only delete your own jobs, and this action cannot be reversed.
```
service.delete_job(job.job_id)
```
# Next steps
There are additional tutorials in this directory:
- [02_introduction_ibm_quantum_runtime.ipynb](02_introduction_ibm_quantum_runtime.ipynb) is the corresponding tutorial on using Qiskit Runtime on IBM Quantum. You can skip this tutorial if you don't plan on using Qiskit Runtime on IBM Quantum.
- [03_backends.ipynb](03_backends.ipynb) describes how to find a target backend for the Qiskit Runtime program you want to invoke.
- [04_account_management.ipynb](04_account_management.ipynb) describes how to save, load, and delete your account credentials on disk.
- [qiskit_runtime_vqe_program.ipynb](sample_vqe_program/qiskit_runtime_vqe_program.ipynb) goes into more details on uploading a real-world program (VQE).
- [qka.ipynb](qka.ipynb), [vqe.ipynb](vqe.ipynb), and [qiskit_runtime_expval_program.ipynb](sample_expval_program/qiskit_runtime_expval_program.ipynb) describe how to use the public programs `qka`, `vqe`, and `sample-expval`, respectively. These programs are currently only available in Qiskit Runtime on IBM Quantum.
```
from qiskit.tools.jupyter import *
%qiskit_copyright
```
```
import numpy as np
import pandas as pd
import pathlib as pl
from datetime import datetime
import matplotlib.pyplot as plot
from pandas import DataFrame, Series
# read from csv files
datapath=pl.Path("../csvdata")
file_list=[]
dfs=[]
for x in datapath.glob("*H.csv"):
print("Reading "+x.name)
f=pd.read_csv(x,index_col=0,na_values=" ")
dfs.append(f)
file_list.append(x.name)
f.columns
tr_data=DataFrame()
i=0
for df in dfs:
for c in df.columns:
if c.find("Avg[Va H3]")!=-1:
print (c)
tr_data[c]=df[c]
i+=1
print(i)
train_data=tr_data.dropna()
len(train_data)
train_data=train_data.dropna(axis=1)
corMat=DataFrame(train_data.corr())
plot.pcolor(corMat)
plot.show()
train_data.describe()
from sklearn.model_selection import train_test_split
train,test= train_test_split(train_data,test_size=0.2,random_state=0)
MAX= train.max()
MIN = train.min()
train_s=(train-MIN)/(MAX-MIN)
test_s=(test-MIN)/(MAX-MIN)
train_s=train_s.fillna(0)
test_s=test_s.fillna(0)
# generate X_train,Y_train,X_test,Y_test for all target features
X_train=train_s.copy();Y_train=DataFrame()
for c in train_s.columns:
if c.find("WanKe1") !=-1 :
Y_train[c]=train_s[c]
X_train=X_train.drop(c,axis=1)
X_test=test_s.copy();Y_test=DataFrame()
for c in test_s.columns:
if c.find("WanKe1") !=-1 :
Y_test[c]=test_s[c]
X_test=X_test.drop(c,axis=1)
###### network from keras for SHDKY data simulation ###########
from keras.models import Sequential
from keras.layers import Dense, Activation,Input
import keras
model = Sequential()
model.add(Dense(28, input_dim=14,kernel_initializer="normal"))
model.add(Activation('relu'))
model.add(Dense(7, activation='relu',kernel_initializer="normal"))
model.add(Dense(1, activation='linear',kernel_initializer="normal"))
model.compile(loss='mean_squared_error',
optimizer=keras.optimizers.SGD(lr=0.02))
model.fit(X_train, Y_train, epochs=30,batch_size=30,
shuffle=True,verbose=2,validation_split=0.2)
#((model.predict(X_test)-Y_test)/Y_test)
((model.predict(X_test)-Y_test)).describe()
c=Y_test.columns
R= pd.DataFrame(model.predict(X_test),columns=['V_pred'])
R.index = Y_test.index
R= R.join(Y_test)
R= R*(MAX[c]-MIN[c]).values[0]+MIN[c].values[0]
R_show=R.iloc[200:300,:]
import pylab
pylab.rcParams['figure.figsize'] = (12.0, 4.0)
R_show.plot(alpha =0.5,figsize = (12,4))
R["diff"]=(R["V_pred"]-R[c[0]])
R["diff_ptg"]=(R["V_pred"]-R[c[0]])/R[c[0]]
R.describe()
(R.diff_ptg.quantile(0.05),R.diff_ptg.quantile(0.95))
R[:6]
R.corr()
train_data.corr()[c[0]].sort_values()
model.save("model_h5/M_VH3.h5")
test[c][:6]
R[c][:6]
R[c].hist(bins=100)
R['V_pred'].hist(bins=100)
R.diff_ptg.hist(bins=100)
```
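The min-max scaling above (fit on the training split, then applied to both splits) can be factored into a small helper. A sketch with pandas on hypothetical data; `fillna(0)` guards constant columns, mirroring the original cell:

```python
import pandas as pd

def minmax_fit_transform(train: pd.DataFrame, test: pd.DataFrame):
    """Scale both splits with the *training* min/max to avoid test-set leakage."""
    lo, hi = train.min(), train.max()
    # Constant columns give 0/0 = NaN; fill with 0 as the notebook does.
    scale = lambda df: ((df - lo) / (hi - lo)).fillna(0)
    return scale(train), scale(test)

train = pd.DataFrame({"a": [0.0, 5.0, 10.0], "b": [2.0, 2.0, 2.0]})
test = pd.DataFrame({"a": [2.5], "b": [2.0]})
train_s, test_s = minmax_fit_transform(train, test)
print(train_s["a"].tolist())  # [0.0, 0.5, 1.0]
```

Fitting the min/max on the training split only is what keeps information about the test set out of the model.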
# Data Visualization
The RAPIDS AI ecosystem and `cudf.DataFrame` are built on a series of standards that simplify interoperability with established and emerging data science tools.
With a growing number of libraries adding GPU support, and a `cudf.DataFrame`’s ability to convert `.to_pandas()`, a large portion of the Python Visualization ([PyViz](https://pyviz.org/tools.html)) stack is immediately available to display your data.
In this Notebook, we’ll walk through some of the data visualization possibilities with BlazingSQL.
Blog post: [Data Visualization with BlazingSQL](https://blog.blazingdb.com/data-visualization-with-blazingsql-12095862eb73?source=friends_link&sk=94fc5ee25f2a3356b4a9b9a49fd0f3a1)
#### Overview
- [Matplotlib](#Matplotlib)
- [Datashader](#Datashader)
- [HoloViews](#HoloViews)
- [cuxfilter](#cuxfilter)
```
from blazingsql import BlazingContext
bc = BlazingContext()
```
### Dataset
The data we’ll be using for this demo comes from the [NYC Taxi dataset](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) and is stored in a public AWS S3 bucket.
```
bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')
bc.create_table('taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')
```
Let's give the data a quick look to get a clue what we're looking at.
```
bc.sql('select * from taxi').tail()
```
## Matplotlib
[GitHub](https://github.com/matplotlib/matplotlib)
> _Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python._
By calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and instantly access Matplotlib with `.plot()`.
For example, **does the `passenger_count` influence the `tip_amount`?**
```
bc.sql('SELECT * FROM taxi').to_pandas().plot(kind='scatter', x='passenger_count', y='tip_amount')
```
Other than the jump from 0 to 1 or outliers at 5 and 6, having more passengers might not be a good deal for the driver's `tip_amount`.
Let's see what demand is like. Based on dropoff time, **how many riders were transported by hour?** That is, column `7` will be the total number of passengers dropped off from 7:00 AM through 7:59 AM for all days in this time period.
```
riders_by_hour = '''
select
sum(passenger_count) as sum_riders,
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP)) as hour_of_the_day
from
taxi
group by
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))
order by
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))
'''
bc.sql(riders_by_hour).to_pandas().plot(kind='bar', x='hour_of_the_day', y='sum_riders', title='Sum Riders by Hour', figsize=(12, 6))
```
Looks like the morning gets started around 6:00 AM, and builds up to a sustained lunchtime double peak from 12:00 PM - 3:00 PM. After a quick 3:00 PM - 5:00 PM siesta, we're right back for prime time from 6:00 PM to 8:00 PM. It's downhill from there, but tomorrow is a new day!
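For comparison, the same sum-riders-by-hour aggregation can be sketched on a plain `pandas.DataFrame` (tiny synthetic data here; the real query above runs on the GPU via BlazingSQL):

```python
import pandas as pd

df = pd.DataFrame({
    "tpep_dropoff_datetime": ["2017-01-01 07:15:00", "2017-01-01 07:59:59",
                              "2017-01-01 19:30:00"],
    "passenger_count": [2, 1, 4],
})

# Extract the dropoff hour and sum riders per hour, mirroring the SQL GROUP BY.
hours = pd.to_datetime(df["tpep_dropoff_datetime"]).dt.hour
sum_riders = df.groupby(hours)["passenger_count"].sum()
print(sum_riders.to_dict())  # {7: 3, 19: 4}
```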
```
solo_rate = len(bc.sql('select * from taxi where passenger_count = 1')) / len(bc.sql('select * from taxi')) * 100
print(f'{solo_rate:.2f}% of rides have only 1 passenger.')
```
The overwhelming majority of rides have just 1 passenger. How consistent is this solo rider rate? **What's the average `passenger_count` per trip by hour?**
And maybe time of day plays a role in `tip_amount` as well, **what's the average `tip_amount` per trip by hour?**
We can run both queries in the same cell and the results will display inline.
```
xticks = [n for n in range(24)]
avg_riders_by_hour = '''
select
avg(passenger_count) as avg_passenger_count,
hour(dropoff_ts) as hour_of_the_day
from (
select
passenger_count,
cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts
from
taxi
)
group by
hour(dropoff_ts)
order by
hour(dropoff_ts)
'''
bc.sql(avg_riders_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_passenger_count', title='Avg. # Riders per Trip by Hour', xticks=xticks, figsize=(12, 6))
avg_tip_by_hour = '''
select
avg(tip_amount) as avg_tip_amount,
hour(dropoff_ts) as hour_of_the_day
from (
select
tip_amount,
cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts
from
taxi
)
group by
hour(dropoff_ts)
order by
hour(dropoff_ts)
'''
bc.sql(avg_tip_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_tip_amount', title='Avg. Tip ($) per Trip by Hour', xticks=xticks, figsize=(12, 6))
```
Interestingly, they almost resemble each other from 8:00 PM to 9:00 AM, but where average `passenger_count` continues to rise until 3:00 PM, average `tip_amount` takes a dip until 3:00 PM.
From 3:00 PM - 8:00 PM average `tip_amount` starts rising and average `passenger_count` waits patiently for it to catch up.
Average `tip_amount` peaks at midnight, and bottoms out at 5:00 AM. Average `passenger_count` is highest around 3:00 AM, and lowest at 6:00 AM.
## Datashader
[GitHub](https://github.com/holoviz/datashader)
> Datashader is a data rasterization pipeline for automating the process of creating meaningful representations of large amounts of data.
As of [holoviz/datashader#793](https://github.com/holoviz/datashader/pull/793), the following Datashader features accept `cudf.DataFrame` and `dask_cudf.DataFrame` input:
- `Canvas.points`, `Canvas.line` and `Canvas.area` rasterization
- All reduction operations except `var` and `std`.
- `transfer_functions.shade` (both 2D and 3D) inputs
#### Colorcet
[GitHub](https://github.com/holoviz/colorcet)
> Colorcet is a collection of perceptually uniform colormaps for use with Python plotting programs like bokeh, matplotlib, holoviews, and datashader based on the set of perceptually uniform colormaps created by Peter Kovesi at the Center for Exploration Targeting.
```
from datashader import Canvas, transfer_functions as tf
from colorcet import fire
```
**Do dropoff locations change based on the time of day?** Let's say 6AM-4PM vs 6PM-4AM.
Dropoffs from 6:00 AM to 4:00 PM
```
query = '''
select
dropoff_x, dropoff_y
from
taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 6 AND 15
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
Dropoffs from 6:00 PM to 4:00 AM
```
query = '''
select
dropoff_x, dropoff_y
from
taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23
OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
While Manhattan makes up the majority of the dropoff geography from 6:00 AM to 4:00 PM, Midtown's spark grows and spreads deeper into Brooklyn and Queens in the 6:00 PM to 4:00 AM window.
Consistent with the more decentralized look across the map, dropoffs near LaGuardia Airport (upper-middle right side) also die down relative to surrounding areas as the night rolls in.
## HoloViews
[GitHub](https://github.com/holoviz/holoviews)
> HoloViews is an open-source Python library designed to make data analysis and visualization seamless and simple. With HoloViews, you can usually express what you want to do in very few lines of code, letting you focus on what you are trying to explore and convey, not on the process of plotting.
By calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and hand off to HoloViews or other CPU visualization packages.
```
from holoviews import extension, opts
from holoviews import Scatter, Dimension
import holoviews.operation.datashader as hd
extension('bokeh')
opts.defaults(opts.Scatter(height=425, width=425), opts.RGB(height=425, width=425))
cmap = [(49,130,189), (107,174,214), (123,142,216), (226,103,152), (255,0,104), (50,50,50)]
```
With HoloViews, we can easily explore the relationship of multiple scatter plots by saving them as variables and displaying them side-by-side with the same code cell.
For example, let's reexamine `passenger_count` vs `tip_amount` next to a new `holoviews.Scatter` of `fare_amount` vs `tip_amount`.
**Does `passenger_count` affect `tip_amount`?**
```
s = Scatter(bc.sql('select passenger_count, tip_amount from taxi').to_pandas(), 'passenger_count', 'tip_amount')
# 0-6 passengers, $0-$100 tip
ranged = s.redim.range(passenger_count=(-0.5, 6.5), tip_amount=(0, 100))
shaded = hd.spread(hd.datashade(ranged, x_sampling=0.25, cmap=cmap))
riders_v_tip = shaded.redim.label(passenger_count="Passenger Count", tip_amount="Tip ($)")
```
**How do `fare_amount` and `tip_amount` relate?**
```
s = Scatter(bc.sql('select fare_amount, tip_amount from taxi').to_pandas(), 'fare_amount', 'tip_amount')
# $0-$100 fare, $0-$100 tip
ranged = s.redim.range(fare_amount=(0, 100), tip_amount=(0, 100))
shaded = hd.spread(hd.datashade(ranged, cmap=cmap))
fare_v_tip = shaded.redim.label(fare_amount="Fare Amount ($)", tip_amount="Tip ($)")
```
Display the answers to both side by side.
```
riders_v_tip + fare_v_tip
```
## cuxfilter
[GitHub](https://github.com/rapidsai/cuxfilter)
> cuxfilter (ku-cross-filter) is a RAPIDS framework to connect web visualizations to GPU accelerated crossfiltering. Inspired by the javascript version of the original, it enables interactive and super fast multi-dimensional filtering of 100 million+ row tabular datasets via cuDF.
cuxfilter allows us to culminate these charts into a dashboard.
```
import cuxfilter
```
Create `cuxfilter.DataFrame` from a `cudf.DataFrame`.
```
cux_df = cuxfilter.DataFrame.from_dataframe(bc.sql('SELECT passenger_count, tip_amount, dropoff_x, dropoff_y FROM taxi'))
```
Create some charts & define a dashboard object.
```
chart_0 = cuxfilter.charts.datashader.scatter_geo(x='dropoff_x', y='dropoff_y')
chart_1 = cuxfilter.charts.bokeh.bar('passenger_count', add_interaction=False)
chart_2 = cuxfilter.charts.datashader.heatmap(x='passenger_count', y='tip_amount', x_range=[-0.5, 6.5], y_range=[0, 100],
color_palette=cmap, title='Passenger Count vs Tip Amount ($)')
dashboard = cux_df.dashboard([chart_0, chart_1, chart_2], title='NYC Yellow Cab')
```
Display charts in Notebook with `.view()`.
```
chart_0.view()
chart_2.view()
```
## Multi-GPU Data Visualization
Packages like Datashader and cuxfilter support dask_cudf distributed objects (Series, DataFrame).
```
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
cluster = LocalCUDACluster()
client = Client(cluster)
bc = BlazingContext(dask_client=client, network_interface='lo')
bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')
bc.create_table('distributed_taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')
```
Dropoffs from 6:00 PM to 4:00 AM
```
query = '''
select
dropoff_x, dropoff_y
from
distributed_taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23
OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
## That's the Data Visualization Tour!
You've seen the basics of data visualization in BlazingSQL Notebooks and how to put it to use. Now is a good time to experiment with your own data and see how to parse, clean, and extract meaningful insights from it.
We'll now get into how to run Machine Learning with popular Python and GPU-accelerated Python packages.
Continue to the [Machine Learning introductory Notebook](machine_learning.ipynb)
# Bgt2Vec
Original code adapted from © Yuriy Guts, 2016
## Imports
```
from __future__ import absolute_import, division, print_function
import codecs
import glob
import logging
import multiprocessing
import os
import pprint
import re
import nltk
import gensim.models.word2vec as w2v
import sklearn.manifold
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%pylab inline
```
**Set up logging**
```
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
```
**Download NLTK tokenizer models (only the first time)**
```
nltk.download("punkt")
```
## Prepare Corpus
```
# change the current directory to read the data
os.chdir(r"C:\Users\Sultan\Desktop\data\PreprocessedData")
df = pd.read_csv('CombinedData.csv', engine='python')
df.head()
# Rename col 0
df.columns = ['word','organization','year']
df.head()
corpus = df.word
# Join the elements, separated by a single space
corpus = ' '.join(word for word in corpus)
corpus[:196]
# change the current directory to read the data
os.chdir(r"C:\Users\Sultan\Desktop\data\PreprocessedData\TextFiles")
# Create the text file (write mode, so reruns don't append duplicate text)
text_data = open("CombinedData.txt", "w")
# Writing the string to the file
text_data.write(corpus)
# Closing the file
text_data.close()
```
**Load files**
```
bgt_filename = "CombinedData.txt"
corpus_raw = u""
print("Reading '{0}'...".format(bgt_filename))
with codecs.open(bgt_filename, "r", "utf-8") as book_file:
corpus_raw += book_file.read()
print("Corpus is now {0} characters long".format(len(corpus_raw)))
print()
```
**Split the corpus into sentences**
```
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
raw_sentences = tokenizer.tokenize(corpus_raw)
def sentence_to_wordlist(raw):
clean = re.sub("[^a-zA-Z]"," ", raw)
words = clean.split()
return words
sentences = []
for raw_sentence in raw_sentences:
if len(raw_sentence) > 0:
sentences.append(sentence_to_wordlist(raw_sentence))
token_count = sum([len(sentence) for sentence in sentences])
print("corpus contains {0:,} tokens".format(token_count))
```
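To make the cleaning step concrete, here is `sentence_to_wordlist` on its own with a quick check (redefined so this cell stands alone; the budget-style input string is made up):

```python
import re

def sentence_to_wordlist(raw):
    # Replace every non-letter character with a space, then split on whitespace.
    clean = re.sub("[^a-zA-Z]", " ", raw)
    return clean.split()

print(sentence_to_wordlist("Budget FY-2016: $4,500 for Guilford County!"))
# ['Budget', 'FY', 'for', 'Guilford', 'County']
```

Note that digits and punctuation are dropped entirely, so tokens like "FY-2016" lose their numeric part before training.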
## Train Word2Vec
```
# Dimensionality of the resulting word vectors.
num_features = 300
# Minimum word count threshold.
min_word_count = 3
# Number of threads to run in parallel.
num_workers = multiprocessing.cpu_count()
# Context window length.
context_size = 7
# Downsample setting for frequent words.
downsampling = 1e-3
# Seed for the RNG, to make the results reproducible.
seed = 1
bgt2vec = w2v.Word2Vec(
sg=1,
seed=seed,
workers=num_workers,
size=num_features,
min_count=min_word_count,
window=context_size,
sample=downsampling
)
bgt2vec.build_vocab(sentences)
print("Word2Vec vocabulary length:", len(bgt2vec.wv.vocab))
```
**Start training**
```
bgt2vec.train(sentences, total_examples=bgt2vec.corpus_count, epochs=50)
```
**Save to file, can be useful later**
```
if not os.path.exists("trained"):
os.makedirs("trained")
bgt2vec.save(os.path.join("trained", "bgt2vec.w2v"))
```
## Explore the trained model.
```
bgt2vec = w2v.Word2Vec.load(os.path.join("trained", "bgt2vec.w2v"))
```
### Compress the word vectors into 2D space and plot them
```
tsne = sklearn.manifold.TSNE(n_components=2, random_state=0)
all_word_vectors_matrix = bgt2vec.wv.syn0
```
**Train t-SNE**
```
all_word_vectors_matrix_2d = tsne.fit_transform(all_word_vectors_matrix)
```
**Plot the big picture**
```
points = pd.DataFrame(
[
(word, coords[0], coords[1])
for word, coords in [
(word, all_word_vectors_matrix_2d[bgt2vec.wv.vocab[word].index])
for word in bgt2vec.wv.vocab
]
],
columns=["word", "x", "y"]
)
points.head(10)
sns.set_context("poster")
points.plot.scatter("x", "y", s=10, figsize=(20, 12))
```
**Zoom in to some interesting places**
```
def plot_region(x_bounds, y_bounds):
    # Filter the points inside the bounding box (avoid shadowing the built-in `slice`).
    region = points[
        (x_bounds[0] <= points.x) &
        (points.x <= x_bounds[1]) &
        (y_bounds[0] <= points.y) &
        (points.y <= y_bounds[1])
    ]
    ax = region.plot.scatter("x", "y", s=35, figsize=(10, 8))
    for _, point in region.iterrows():
        ax.text(point.x + 0.005, point.y + 0.005, point.word, fontsize=11)
```
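The region filter above is just a boolean mask over the scatter points; a quick standalone check of that mask on a toy frame (the words and coordinates are made up):

```python
import pandas as pd

points = pd.DataFrame({"word": ["a", "b", "c"],
                       "x": [0.1, 5.0, 9.0],
                       "y": [0.0, -0.3, 2.0]})

x_bounds, y_bounds = (1, 10), (-0.5, 0.5)
region = points[
    (x_bounds[0] <= points.x) & (points.x <= x_bounds[1]) &
    (y_bounds[0] <= points.y) & (points.y <= y_bounds[1])
]
print(region.word.tolist())  # ['b']
```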
**Related words end up together**
```
plot_region(x_bounds=(5, 10), y_bounds=(-0.5, -0.1))
```
### Explore semantic similarities between words
**Words closest to the given word**
```
bgt2vec.most_similar("guilford")
bgt2vec.most_similar("year")
```
**Linear relationships between word pairs**
```
def nearest_similarity_cosmul(start1, end1, end2):
similarities = bgt2vec.most_similar_cosmul(
positive=[end2, start1],
negative=[end1]
)
start2 = similarities[0][0]
print("{start1} is related to {end1}, as {start2} is related to {end2}".format(**locals()))
return start2
nearest_similarity_cosmul("guilford","county","year")
```
# Demo
Minimal working examples with Catalyst.
- ML - Projector, aka "Linear regression is my profession"
- CV - mnist classification, autoencoder, variational autoencoder
- GAN - mnist again :)
- NLP - sentiment analysis
- RecSys - movie recommendations
```
! pip install -U torch==1.4.0 torchvision==0.5.0 torchtext==0.5.0 catalyst==20.05 pandas==1.0.1 tqdm==4.43
# for tensorboard integration
# !pip install tensorflow
# %load_ext tensorboard
# %tensorboard --logdir ./logs
import torch
import torchvision
import torchtext
import catalyst
print(
"torch", torch.__version__, "\n",
"torchvision", torchvision.__version__, "\n",
"torchtext", torchtext.__version__, "\n",
"catalyst", catalyst.__version__,
)
```
---
# ML - Projector
```
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst.dl import SupervisedRunner
# experiment setup
logdir = "./logdir"
num_epochs = 8
# data
num_samples, num_features = int(1e4), int(1e1)
X, y = torch.rand(num_samples, num_features), torch.rand(num_samples)
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])
# model training
runner = SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir=logdir,
num_epochs=num_epochs,
verbose=True,
)
```
---
# MNIST - classification
```
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
model = torch.nn.Linear(28 * 28, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
# Note: both loaders use the MNIST test split (train=False) here,
# presumably to keep the demo small and fast.
loaders = {
"train": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
"valid": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
}
from catalyst import dl
from catalyst.utils import metrics
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
y_hat = self.model(x.view(x.size(0), -1))
loss = F.cross_entropy(y_hat, y)
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.state.batch_metrics = {
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.state.is_train_loader:
loss.backward()
self.state.optimizer.step()
self.state.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
num_epochs=1,
verbose=True,
timeit=False,
logdir="./logs_custom"
)
```
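The `metrics.accuracy(..., topk=(1, 3, 5))` call above returns one accuracy per `k`. A hedged re-implementation in plain NumPy (not Catalyst's actual code) shows the idea — a prediction counts as correct at level `k` if the true label is among the `k` highest logits:

```python
import numpy as np

def topk_accuracy(logits, targets, topk=(1, 3, 5)):
    """Fraction of rows whose true label is among the k highest logits."""
    # Indices of classes sorted by descending logit, per row.
    ranked = np.argsort(-logits, axis=1)
    return [float(np.mean([t in row[:k] for row, t in zip(ranked, targets)]))
            for k in topk]

logits = np.array([[0.1, 0.9, 0.0, 0.0],   # top-1 prediction = class 1
                   [0.3, 0.2, 0.4, 0.1]])  # top-1 prediction = class 2
targets = np.array([1, 0])
print(topk_accuracy(logits, targets, topk=(1, 2)))  # [0.5, 1.0]
```

Top-1 is ordinary accuracy; larger `k` is more forgiving, which is why `accuracy05` is always at least `accuracy01` in the logs.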
---
# MNIST - classification with AutoEncoder
```
import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
class ClassifyAE(nn.Module):
def __init__(self, in_features, hid_features, out_features):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(in_features, hid_features), nn.Tanh())
self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())
self.clf = nn.Linear(hid_features, out_features)
def forward(self, x):
z = self.encoder(x)
y_hat = self.clf(z)
x_ = self.decoder(z)
return y_hat, x_
model = ClassifyAE(28 * 28, 128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
"valid": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
}
from catalyst import dl
from catalyst.utils import metrics
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x = x.view(x.size(0), -1)
y_hat, x_ = self.model(x)
loss_clf = F.cross_entropy(y_hat, y)
loss_ae = F.mse_loss(x_, x)
loss = loss_clf + loss_ae
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.state.batch_metrics = {
"loss_clf": loss_clf,
"loss_ae": loss_ae,
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.state.is_train_loader:
loss.backward()
self.state.optimizer.step()
self.state.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
num_epochs=1,
verbose=True,
timeit=False,
logdir="./logs_custom_ae"
)
```
---
# MNIST - classification with Variational AutoEncoder
```
import os
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
LOG_SCALE_MAX = 2
LOG_SCALE_MIN = -10
def normal_sample(mu, sigma):
"""
Sample from multivariate Gaussian distribution z ~ N(z|mu,sigma)
while supporting backpropagation through its mean and variance.
"""
return mu + sigma * torch.randn_like(sigma)
def normal_logprob(mu, sigma, z):
"""
Probability density function of multivariate Gaussian distribution
N(z|mu,sigma).
"""
normalization_constant = (-sigma.log() - 0.5 * np.log(2 * np.pi))
square_term = -0.5 * ((z - mu) / sigma)**2
logprob_vec = normalization_constant + square_term
logprob = logprob_vec.sum(1)
return logprob
class ClassifyVAE(torch.nn.Module):
def __init__(self, in_features, hid_features, out_features):
super().__init__()
self.encoder = torch.nn.Linear(in_features, hid_features * 2)
self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())
self.clf = torch.nn.Linear(hid_features, out_features)
def forward(self, x, deterministic=False):
z = self.encoder(x)
bs, z_dim = z.shape
loc, log_scale = z[:, :z_dim // 2], z[:, z_dim // 2:]
log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)
scale = torch.exp(log_scale)
z_ = loc if deterministic else normal_sample(loc, scale)
z_logprob = normal_logprob(loc, scale, z_)
z_ = z_.view(bs, -1)
x_ = self.decoder(z_)
y_hat = self.clf(z_)
return y_hat, x_, z_logprob, loc, log_scale
model = ClassifyVAE(28 * 28, 64, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
"valid": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
}
from catalyst import dl
from catalyst.utils import metrics
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
kld_regularization = 0.1
logprob_regularization = 0.01
x, y = batch
x = x.view(x.size(0), -1)
y_hat, x_, z_logprob, loc, log_scale = self.model(x)
loss_clf = F.cross_entropy(y_hat, y)
loss_ae = F.mse_loss(x_, x)
loss_kld = -0.5 * torch.mean(
1 + log_scale - loc.pow(2) - log_scale.exp()
) * kld_regularization
loss_logprob = torch.mean(z_logprob) * logprob_regularization
loss = loss_clf + loss_ae + loss_kld + loss_logprob
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.state.batch_metrics = {
"loss_clf": loss_clf,
"loss_ae": loss_ae,
"loss_kld": loss_kld,
"loss_logprob": loss_logprob,
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.state.is_train_loader:
loss.backward()
self.state.optimizer.step()
self.state.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
num_epochs=1,
verbose=True,
timeit=False,
logdir="./logs_custom_vae"
)
```
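The `normal_sample` helper in the cell above implements the reparameterization trick: sampling is rewritten as `mu + sigma * eps` with `eps ~ N(0, I)`, so gradients can flow back into `mu` and `sigma`. Here is a minimal standalone sketch of the idea (a toy example, not part of the model above):

```python
import torch

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
# The randomness lives entirely in eps, so backprop reaches mu and log_scale.
mu = torch.zeros(3, requires_grad=True)
log_scale = torch.zeros(3, requires_grad=True)
sigma = log_scale.exp()
z = mu + sigma * torch.randn_like(sigma)
z.sum().backward()
print(mu.grad)  # tensor([1., 1., 1.]) -- dz_i/dmu_i = 1
```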
---
# MNIST - segmentation with classification auxiliary task
```
import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
class ClassifyUnet(nn.Module):
def __init__(self, in_channels, in_hw, out_features):
super().__init__()
self.encoder = nn.Sequential(nn.Conv2d(in_channels, in_channels, 3, 1, 1), nn.Tanh())
self.decoder = nn.Conv2d(in_channels, in_channels, 3, 1, 1)
self.clf = nn.Linear(in_channels * in_hw * in_hw, out_features)
def forward(self, x):
z = self.encoder(x)
z_ = z.view(z.size(0), -1)
y_hat = self.clf(z_)
x_ = self.decoder(z)
return y_hat, x_
model = ClassifyUnet(1, 28, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
"valid": DataLoader(
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()),
batch_size=32),
}
from catalyst import dl
from catalyst.utils import metrics
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x_noise = (x + torch.rand_like(x)).clamp_(0, 1)
y_hat, x_ = self.model(x_noise)
loss_clf = F.cross_entropy(y_hat, y)
iou = metrics.iou(x_, x)
loss_iou = 1 - iou
loss = loss_clf + loss_iou
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.state.batch_metrics = {
"loss_clf": loss_clf,
"loss_iou": loss_iou,
"loss": loss,
"iou": iou,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.state.is_train_loader:
loss.backward()
self.state.optimizer.step()
self.state.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
num_epochs=1,
verbose=True,
timeit=False,
logdir="./logs_custom_unet"
)
```
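The `metrics.iou` call in the runner above is Catalyst-specific; for intuition, here is a hand-rolled sketch of plain binary intersection-over-union on toy masks (the names `pred` and `target` are illustrative):

```python
import torch

# Intersection-over-Union on binary masks: |A & B| / |A | B|.
pred = torch.tensor([1, 1, 0, 0], dtype=torch.bool)
target = torch.tensor([1, 0, 1, 0], dtype=torch.bool)
intersection = (pred & target).sum().item()  # 1 pixel where both masks agree
union = (pred | target).sum().item()         # 3 pixels covered by either mask
print(intersection / union)                  # 1/3 ≈ 0.333
```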
---
# GAN
```
import torch
from torch import nn
from torch.nn import functional as F
from catalyst.contrib.nn.modules import GlobalMaxPool2d, Flatten, Lambda
# Create the discriminator
discriminator = nn.Sequential(
nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
GlobalMaxPool2d(),
Flatten(),
nn.Linear(128, 1)
)
# Create the generator
latent_dim = 128
generator = nn.Sequential(
# We want to generate 128 coefficients to reshape into a 7x7x128 map
nn.Linear(128, 128 * 7 * 7),
nn.LeakyReLU(0.2, inplace=True),
Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),
nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(128, 1, (7, 7), padding=3),
nn.Sigmoid(),
)
# Final model
model = {
"generator": generator,
"discriminator": discriminator,
}
optimizer = {
"generator": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
"discriminator": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
}
from catalyst import dl
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
real_images, _ = batch
batch_metrics = {}
# Sample random points in the latent space
batch_size = real_images.shape[0]
random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)
# Decode them to fake images
generated_images = self.model["generator"](random_latent_vectors).detach()
# Combine them with real images
combined_images = torch.cat([generated_images, real_images])
# Assemble labels discriminating real from fake images
labels = torch.cat([
torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))
]).to(self.device)
# Add random noise to the labels - important trick!
labels += 0.05 * torch.rand(labels.shape).to(self.device)
# Train the discriminator
predictions = self.model["discriminator"](combined_images)
batch_metrics["loss_discriminator"] = \
F.binary_cross_entropy_with_logits(predictions, labels)
# Sample random points in the latent space
random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)
# Assemble labels that say "all real images"
misleading_labels = torch.zeros((batch_size, 1)).to(self.device)
# Train the generator
generated_images = self.model["generator"](random_latent_vectors)
predictions = self.model["discriminator"](generated_images)
batch_metrics["loss_generator"] = \
F.binary_cross_entropy_with_logits(predictions, misleading_labels)
self.state.batch_metrics.update(**batch_metrics)
import os
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
loaders = {
"train": DataLoader(
MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()),
batch_size=64),
}
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
callbacks=[
dl.OptimizerCallback(
optimizer_key="generator",
metric_key="loss_generator"
),
dl.OptimizerCallback(
optimizer_key="discriminator",
metric_key="loss_discriminator"
),
],
main_metric="loss_generator",
num_epochs=20,
verbose=True,
logdir="./logs_gan",
)
```
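The generator update above uses "misleading" labels: fake images are scored by the discriminator but labeled as real (in this notebook's convention, real images are labeled 0 and fakes 1), so minimizing BCE pushes the generator toward images the discriminator scores as real. A toy sketch of that loss computation, with made-up discriminator logits:

```python
import torch
import torch.nn.functional as F

# Generator loss with "misleading" labels: fake-image logits are compared
# against the real-image label (0 here), so BCE = softplus(logit) per sample.
logits = torch.tensor([[2.0], [-2.0]])  # hypothetical scores for two fakes
misleading = torch.zeros(2, 1)          # pretend both are real
loss = F.binary_cross_entropy_with_logits(logits, misleading)
print(loss)  # mean of softplus(2) and softplus(-2) ≈ 1.1269
```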
# NLP
```
import torch
from torch import nn, optim
import torch.nn.functional as F
import torchtext
from torchtext.datasets import text_classification
NGRAMS = 2
import os
if not os.path.isdir('./data'):
os.mkdir('./data')
if not os.path.isdir('./data/nlp'):
os.mkdir('./data/nlp')
train_dataset, valid_dataset = text_classification.DATASETS['AG_NEWS'](
root='./data/nlp', ngrams=NGRAMS, vocab=None)
VOCAB_SIZE = len(train_dataset.get_vocab())
EMBED_DIM = 32
NUM_CLASS = len(train_dataset.get_labels())
BATCH_SIZE = 32
def generate_batch(batch):
label = torch.tensor([entry[0] for entry in batch])
text = [entry[1] for entry in batch]
offsets = [0] + [len(entry) for entry in text]
# torch.Tensor.cumsum returns the cumulative sum
# of elements in the dimension dim.
# torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0)
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text = torch.cat(text)
output = {
"text": text,
"offsets": offsets,
"label": label
}
return output
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=BATCH_SIZE,
shuffle=True,
collate_fn=generate_batch,
)
valid_loader = torch.utils.data.DataLoader(
valid_dataset,
batch_size=BATCH_SIZE,
shuffle=False,
collate_fn=generate_batch,
)
class TextSentiment(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super().__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=4.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)
from catalyst.dl import SupervisedRunner, \
CriterionCallback, AccuracyCallback
# input_keys - which key from dataloader we need to pass to the model
runner = SupervisedRunner(input_key=["text", "offsets"])
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders={'train': train_loader, 'valid': valid_loader},
logdir="./logs_nlp",
num_epochs=3,
verbose=True,
# input_key - which key from dataloader we need to pass to criterion as target label
callbacks=[
CriterionCallback(input_key="label"),
AccuracyCallback(input_key="label")
]
)
```
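The `offsets` tensor built in `generate_batch` exists because `nn.EmbeddingBag` consumes one flat token tensor per batch, with offsets marking where each sequence starts. A small standalone sketch (toy vocabulary and values):

```python
import torch
import torch.nn as nn

# Two variable-length sequences packed into one flat tensor:
# [1, 2, 3] starts at index 0, [4, 5] starts at index 3.
bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4, mode="mean")
text = torch.tensor([1, 2, 3, 4, 5])
offsets = torch.tensor([0, 3])
out = bag(text, offsets)
print(out.shape)  # torch.Size([2, 4]) -- one pooled embedding per sequence
```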
# RecSys
```
import time
import os
import requests
import tqdm
import numpy as np
import pandas as pd
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as td
import torch.optim as to
import matplotlib.pyplot as pl
import seaborn as sns
# Configuration
# The directory to store the data
data_dir = "data/recsys"
train_rating = "ml-1m.train.rating"
test_negative = "ml-1m.test.negative"
# NCF config
train_negative_samples = 4
test_negative_samples = 99
embedding_dim = 64
hidden_dim = 32
# Training config
batch_size = 256
epochs = 10 # Original implementation uses 20
top_k = 10
if not os.path.isdir('./data'):
os.mkdir('./data')
if not os.path.isdir('./data/recsys'):
os.mkdir('./data/recsys')
for file_name in [train_rating, test_negative]:
file_path = os.path.join(data_dir, file_name)
if os.path.exists(file_path):
print("Skip loading " + file_name)
continue
with open(file_path, "wb") as tf:
print("Load " + file_name)
r = requests.get("https://raw.githubusercontent.com/hexiangnan/neural_collaborative_filtering/master/Data/" + file_name, allow_redirects=True)
tf.write(r.content)
def preprocess_train():
train_data = pd.read_csv(os.path.join(data_dir, train_rating), sep='\t', header=None, names=['user', 'item'], usecols=[0, 1], dtype={0: np.int32, 1: np.int32})
user_num = train_data['user'].max() + 1
item_num = train_data['item'].max() + 1
train_data = train_data.values.tolist()
# Convert ratings as a dok matrix
train_mat = sp.dok_matrix((user_num, item_num), dtype=np.float32)
for user, item in train_data:
train_mat[user, item] = 1.0
return train_data, train_mat, user_num, item_num
train_data, train_mat, user_num, item_num = preprocess_train()
def preprocess_test():
test_data = []
with open(os.path.join(data_dir, test_negative)) as tnf:
for line in tnf:
parts = line.split('\t')
assert len(parts) == test_negative_samples + 1
user, positive = eval(parts[0])
test_data.append([user, positive])
for negative in parts[1:]:
test_data.append([user, int(negative)])
return test_data
valid_data = preprocess_test()
class NCFDataset(td.Dataset):
def __init__(self, positive_data, item_num, positive_mat, negative_samples=0):
super(NCFDataset, self).__init__()
self.positive_data = positive_data
self.item_num = item_num
self.positive_mat = positive_mat
self.negative_samples = negative_samples
self.reset()
def reset(self):
print("Resetting dataset")
if self.negative_samples > 0:
negative_data = self.sample_negatives()
data = self.positive_data + negative_data
labels = [1] * len(self.positive_data) + [0] * len(negative_data)
else:
data = self.positive_data
labels = [0] * len(self.positive_data)
self.data = np.concatenate([
np.array(data),
np.array(labels)[:, np.newaxis]],
axis=1
)
def sample_negatives(self):
negative_data = []
for user, positive in self.positive_data:
for _ in range(self.negative_samples):
negative = np.random.randint(self.item_num)
while (user, negative) in self.positive_mat:
negative = np.random.randint(self.item_num)
negative_data.append([user, negative])
return negative_data
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
user, item, label = self.data[idx]
output = {
"user": user,
"item": item,
"label": np.float32(label),
}
return output
class SamplerWithReset(td.RandomSampler):
def __iter__(self):
self.data_source.reset()
return super().__iter__()
train_dataset = NCFDataset(
train_data,
item_num,
train_mat,
train_negative_samples
)
train_loader = td.DataLoader(
train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=4,
sampler=SamplerWithReset(train_dataset)
)
valid_dataset = NCFDataset(valid_data, item_num, train_mat)
valid_loader = td.DataLoader(
valid_dataset,
batch_size=test_negative_samples+1,
shuffle=False,
num_workers=0
)
class Ncf(nn.Module):
def __init__(self, user_num, item_num, embedding_dim, hidden_dim):
super(Ncf, self).__init__()
self.user_embeddings = nn.Embedding(user_num, embedding_dim)
self.item_embeddings = nn.Embedding(item_num, embedding_dim)
self.layers = nn.Sequential(
nn.Linear(2 * embedding_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, 1)
)
self.initialize()
def initialize(self):
nn.init.normal_(self.user_embeddings.weight, std=0.01)
nn.init.normal_(self.item_embeddings.weight, std=0.01)
for layer in self.layers:
if isinstance(layer, nn.Linear):
nn.init.xavier_uniform_(layer.weight)
layer.bias.data.zero_()
def forward(self, user, item):
user_embedding = self.user_embeddings(user)
item_embedding = self.item_embeddings(item)
concat = torch.cat((user_embedding, item_embedding), -1)
return self.layers(concat).view(-1)
def name(self):
return "Ncf"
def hit_metric(recommended, actual):
return int(actual in recommended)
def dcg_metric(recommended, actual):
if actual in recommended:
index = recommended.index(actual)
return np.reciprocal(np.log2(index + 2))
return 0
model = Ncf(user_num, item_num, embedding_dim, hidden_dim)
criterion = nn.BCEWithLogitsLoss()
optimizer = to.Adam(model.parameters())
from catalyst.dl import Callback, CallbackOrder, State
class NdcgLoaderMetricCallback(Callback):
def __init__(self):
super().__init__(CallbackOrder.Metric)
def on_batch_end(self, state: State):
item = state.input["item"]
predictions = state.output["logits"]
_, indices = torch.topk(predictions, top_k)
recommended = torch.take(item, indices).cpu().numpy().tolist()
item = item[0].item()
state.batch_metrics["hits"] = hit_metric(recommended, item)
state.batch_metrics["dcgs"] = dcg_metric(recommended, item)
from catalyst.dl import SupervisedRunner, CriterionCallback
# input_keys - which key from dataloader we need to pass to the model
runner = SupervisedRunner(input_key=["user", "item"])
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
loaders={'train': train_loader, 'valid': valid_loader},
logdir="./logs_recsys",
num_epochs=3,
verbose=True,
# input_key - which key from dataloader we need to pass to criterion as target label
callbacks=[
CriterionCallback(input_key="label"),
NdcgLoaderMetricCallback()
]
)
```
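Because each user in this setup has exactly one held-out positive item, `dcg_metric` reduces to a single reciprocal-log term, `1 / log2(rank + 2)` for a 0-based rank. A quick worked example with toy values:

```python
import numpy as np

# The positive item sits at 0-based rank 2 in a hypothetical top-k list.
recommended = [5, 9, 3]
actual = 3
rank = recommended.index(actual)
print(np.reciprocal(np.log2(rank + 2)))  # 1 / log2(4) = 0.5
```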
# Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Weight initialization happens once, when a model is created and before it trains. Having good initial weights can place the neural network close to the optimal solution, which allows it to converge to the best solution more quickly.
<img src="notebook_ims/neuron_weights.png" width=40%/>
## Initial Weights and Observing Training Loss
To see how different weights perform, we'll test on the same dataset and neural network. That way, we know that any changes in model behavior are due to the weights and not any changing data or model structure.
> We'll instantiate at least two of the same models, with _different_ initial weights and see how the training loss decreases over time, such as in the example below.
<img src="notebook_ims/loss_comparison_ex.png" width=60%/>
Sometimes the differences in training loss over time will be large; other times, certain weights offer only small improvements.
### Dataset and Model
We'll train an MLP to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist) to demonstrate the effect of different initial weights. As a reminder, the FashionMNIST dataset contains images of clothing types; `classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']`. The images are normalized so that their pixel values are in the range [0.0, 1.0). Run the cell below to download and load the dataset.
---
#### EXERCISE
[Link to normalized distribution, exercise code](#normalex)
---
### Import Libraries and Load [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
```
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 100
# percentage of training set to use as validation
valid_size = 0.2
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.FashionMNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.FashionMNIST(root='data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
### Visualize Some Training Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # `dataiter.next()` was removed in newer PyTorch
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20 // 2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
```
## Define the Model Architecture
We've defined the MLP that we'll use for classifying the dataset.
### Neural Network
<img style="float: left" src="notebook_ims/neural_net.png" width=50%/>
* A 3-layer MLP with hidden dimensions of 256 and 128.
* This MLP accepts a flattened image (784-value long vector) as input and produces 10 class scores as output.
---
We'll test the effect of different initial weights on this 3-layer neural network with ReLU activations and an Adam optimizer.
The lessons you learn apply to other neural networks, including different activations and optimizers.
---
## Initialize Weights
Let's start looking at some initial weights.
### All Zeros or Ones
If you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons in each layer produce the same output. This makes it hard to decide which weights to adjust.
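To see this concretely, here is a minimal sketch (a toy `nn.Linear`, not part of the lesson's model) showing that a constant weight makes every neuron in a layer compute the same output:

```python
import torch
import torch.nn as nn

# With constant weights and zero biases, each of the 3 output neurons
# computes the identical function of the input, so all outputs are equal.
torch.manual_seed(0)
layer = nn.Linear(4, 3)
nn.init.constant_(layer.weight, 1.0)
nn.init.constant_(layer.bias, 0.0)
out = layer(torch.randn(1, 4))
print(out)  # three identical values
```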
Let's compare the loss with all ones and all zero weights by defining two models with those constant weights.
Below, we are using PyTorch's [nn.init](https://pytorch.org/docs/stable/nn.html#torch-nn-init) to initialize each Linear layer with a constant weight. The init library provides a number of weight initialization functions that give you the ability to initialize the weights of each layer according to layer type.
In the case below, we look at every layer/module in our model. If it is a Linear layer (as all three layers are for this MLP), then we initialize those layer weights to be a `constant_weight` with bias=0 using the following code:
>```
if isinstance(m, nn.Linear):
nn.init.constant_(m.weight, constant_weight)
nn.init.constant_(m.bias, 0)
```
The `constant_weight` is a value that you can pass in when you instantiate the model.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Net(nn.Module):
def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None):
super(Net, self).__init__()
# linear layer (784 -> hidden_1)
self.fc1 = nn.Linear(28 * 28, hidden_1)
# linear layer (hidden_1 -> hidden_2)
self.fc2 = nn.Linear(hidden_1, hidden_2)
# linear layer (hidden_2 -> 10)
self.fc3 = nn.Linear(hidden_2, 10)
# dropout layer (p=0.2)
self.dropout = nn.Dropout(0.2)
# initialize the weights to a specified, constant value
if(constant_weight is not None):
for m in self.modules():
if isinstance(m, nn.Linear):
nn.init.constant_(m.weight, constant_weight)
nn.init.constant_(m.bias, 0)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add hidden layer, with relu activation function
x = F.relu(self.fc2(x))
# add dropout layer
x = self.dropout(x)
# add output layer
x = self.fc3(x)
return x
```
### Compare Model Behavior
Below, we are using `helpers.compare_init_weights` to compare the training and validation loss for the two models we defined above, `model_0` and `model_1`. This function takes in a list of models (each with different initial weights), the name of the plot to produce, and the training and validation dataset loaders. For each given model, it will plot the training loss for the first 100 batches and print out the validation accuracy after 2 training epochs. *Note: if you've used a small batch_size, you may want to increase the number of epochs here to better compare how models behave after seeing a few hundred images.*
We plot the loss over the first 100 batches to better judge which model weights performed better at the start of training. **I recommend that you take a look at the code in `helpers.py` to see the details behind how the models are trained, validated, and compared.**
Run the cell below to see the difference between weights of all zeros against all ones.
```
# initialize two NN's with 0 and 1 constant weights
model_0 = Net(constant_weight=0)
model_1 = Net(constant_weight=1)
import helpers
# put them in list form to compare
model_list = [(model_0, 'All Zeros'),
(model_1, 'All Ones')]
# plot the loss over the first 100 batches
helpers.compare_init_weights(model_list,
'All Zeros vs All Ones',
train_loader,
valid_loader)
```
As you can see, the accuracy is close to random guessing (around 10%) for both zeros and ones.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
A good solution for getting these random weights is to sample from a uniform distribution.
### Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) gives an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number twice is low. We'll use NumPy's `np.random.uniform` function to pick random numbers from a uniform distribution.
>#### [`np.random.uniform(low=0.0, high=1.0, size=None)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html)
>Outputs random values from a uniform distribution.
>The generated values follow a uniform distribution in the range [low, high). The lower bound `low` is included in the range, while the upper bound `high` is excluded.
>- **low:** The lower bound on the range of random values to generate. Defaults to 0.
- **high:** The upper bound on the range of random values to generate. Defaults to 1.
- **size:** An int or tuple of ints that specify the shape of the output array.
We can visualize the uniform distribution by using a histogram. Let's map the values from `np.random.uniform(-3, 3, [1000])` to a histogram using the `helpers.hist_dist` function. This will be `1000` random float values from `-3` to `3`, excluding the value `3`.
```
helpers.hist_dist('Random Uniform (low=-3, high=3)', np.random.uniform(-3, 3, [1000]))
```
The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
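That bucket math can be checked directly, without the plotting helper:

```python
import numpy as np

# 1000 uniform samples over 500 equal-width buckets average 2 per bucket.
samples = np.random.uniform(-3, 3, 1000)
counts, _ = np.histogram(samples, bins=500, range=(-3, 3))
print(counts.sum(), counts.mean())  # 1000 2.0
```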
Now that you understand the uniform function, let's use PyTorch's `nn.init` to apply it to a model's initial weights.
### Uniform Initialization, Baseline
Let's see how well the neural network trains using a uniform weight initialization, where `low=0.0` and `high=1.0`. Below, I'll show you another way (besides in the Net class code) to initialize the weights of a network. To define weights outside of the model definition, you can:
>1. Define a function that assigns weights by the type of network layer, *then*
2. Apply those weights to an initialized model using `model.apply(fn)`, which applies a function to each model layer.
This time, we'll use `weight.data.uniform_` to initialize the weights of our model, directly.
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# apply a uniform distribution to the weights and a bias=0
m.weight.data.uniform_(0.0, 1.0)
m.bias.data.fill_(0)
# create a new model with these weights
model_uniform = Net()
model_uniform.apply(weights_init_uniform)
# evaluate behavior
helpers.compare_init_weights([(model_uniform, 'Uniform Weights')],
'Uniform Baseline',
train_loader,
valid_loader)
```
---
The loss graph shows that the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction!
## General rule for setting weights
The general rule for setting the weights in a neural network is to set them to be close to zero without being too small.
>Good practice is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$
($n$ is the number of inputs to a given neuron).
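For this notebook's MLP, the rule gives concrete ranges (using the layer input sizes 784, 256, and 128 from the `Net` definition above):

```python
import numpy as np

# y = 1/sqrt(n), where n is the number of inputs to each Linear layer.
for n in (784, 256, 128):
    y = 1.0 / np.sqrt(n)
    print(f"n={n:4d}: weights drawn from [{-y:.4f}, {y:.4f})")
```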
Let's see if this holds true; let's create a baseline to compare with and center our uniform range over zero by shifting it over by 0.5. This will give us the range [-0.5, 0.5).
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform_center(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# apply a centered, uniform distribution to the weights
m.weight.data.uniform_(-0.5, 0.5)
m.bias.data.fill_(0)
# create a new model with these weights
model_centered = Net()
model_centered.apply(weights_init_uniform_center)
```
Then let's create a distribution and model that uses the **general rule** for weight initialization; using the range $[-y, y]$, where $y=1/\sqrt{n}$ .
And finally, we'll compare the two models.
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform_rule(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# get the number of the inputs
n = m.in_features
y = 1.0/np.sqrt(n)
m.weight.data.uniform_(-y, y)
m.bias.data.fill_(0)
# create a new model with these weights
model_rule = Net()
model_rule.apply(weights_init_uniform_rule)
# compare these two models
model_list = [(model_centered, 'Centered Weights [-0.5, 0.5)'),
(model_rule, 'General Rule [-y, y)')]
# evaluate behavior
helpers.compare_init_weights(model_list,
'[-0.5, 0.5) vs [-y, y)',
train_loader,
valid_loader)
```
This behavior is really promising! Not only is the loss decreasing, but it seems to do so very quickly for the uniform weights that follow the general rule; after only two epochs we get a fairly high validation accuracy. This should give you some intuition for why starting out with the right initial weights can really help your training process!
---
Since the uniform distribution has the same chance to pick *any value* in a range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.
### Normal Distribution
Unlike the uniform distribution, the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from NumPy's `np.random.normal` function on a histogram.
>[np.random.normal(loc=0.0, scale=1.0, size=None)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html)
>Outputs random values from a normal distribution.
>- **loc:** The mean of the normal distribution.
- **scale:** The standard deviation of the normal distribution.
- **size:** The shape of the output array.
```
helpers.hist_dist('Random Normal (mean=0.0, stddev=1.0)', np.random.normal(size=[1000]))
```
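As a quick sanity check on the parameters (independent of the `helpers` plot), the sample statistics should land near `loc` and `scale`:

```python
import numpy as np

# With 100k draws from N(0, 1), the sample mean and std sit close to 0 and 1.
z = np.random.normal(loc=0.0, scale=1.0, size=100_000)
print(z.mean(), z.std())
```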
Let's compare the normal distribution against the previous, rule-based, uniform distribution.
<a id='normalex'></a>
#### TODO: Define a weight initialization function that gets weights from a normal distribution
> The normal distribution should have a mean of 0 and a standard deviation of $y=1/\sqrt{n}$
```
## complete this function
def weights_init_normal(m):
'''Takes in a module and initializes all linear layers with weight
values taken from a normal distribution.'''
classname = m.__class__.__name__
# for every Linear layer in a model
    # m.weight.data should be taken from a normal distribution
# m.bias.data should be 0
if classname.find('Linear') != -1:
y = 1/np.sqrt(m.in_features)
m.weight.data.normal_(0.0, y)
m.bias.data.fill_(0)
## -- no need to change code below this line -- ##
# create a new model with the rule-based, uniform weights
model_uniform_rule = Net()
model_uniform_rule.apply(weights_init_uniform_rule)
# create a new model with the rule-based, NORMAL weights
model_normal_rule = Net()
model_normal_rule.apply(weights_init_normal)
# compare the two models
model_list = [(model_uniform_rule, 'Uniform Rule [-y, y)'),
(model_normal_rule, 'Normal Distribution')]
# evaluate behavior
helpers.compare_init_weights(model_list,
'Uniform vs Normal',
train_loader,
valid_loader)
```
The normal distribution gives us pretty similar behavior compared to the uniform distribution, in this case. This is likely because our network is so small; a larger neural network will pick more weight values from each of these distributions, magnifying the effect of both initialization styles. In general, a normal distribution tends to give slightly better performance for a model.
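Note that the two rules don't actually produce the same spread: a uniform draw on $[-y, y)$ has standard deviation $y/\sqrt{3}$, while the normal rule uses $y$ directly. A quick NumPy check (the layer size `n = 512` here is an arbitrary choice for illustration, not a layer from `Net`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512                    # hypothetical number of inputs to a layer
y = 1 / np.sqrt(n)

uniform_w = rng.uniform(-y, y, size=100_000)   # the "uniform rule" draw
normal_w = rng.normal(0.0, y, size=100_000)    # the "normal rule" draw

# the uniform draw has std y/sqrt(3) ~ 0.577*y; the normal draw has std y
print(round(uniform_w.std(), 4), "vs theoretical", round(y / np.sqrt(3), 4))
print(round(normal_w.std(), 4), "vs theoretical", round(y, 4))
```

So the normal rule actually spreads the weights more widely than the uniform rule with the same `y`, which is one reason the comparison above is not a pure "shape of distribution" experiment.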
---
### Automatic Initialization
Let's quickly take a look at what happens *without any explicit weight initialization*.
```
## Instantiate a model with _no_ explicit weight initialization
## Create new models again to avoid accidentally training the earlier ones for more epochs.
## Note: each model needs its own Net() instance; chained assignment would make
## all four names point at the same object.
model_no = Net()
model_centered = Net()
model_uniform_rule = Net()
model_normal_rule = Net()

model_centered.apply(weights_init_uniform_center)
model_uniform_rule.apply(weights_init_uniform_rule)
model_normal_rule.apply(weights_init_normal)

## evaluate the behavior using helpers.compare_init_weights
model_list = [(model_no, 'No explicit init.'),
              (model_centered, 'Centered Weights [-0.5, 0.5)'),
              (model_uniform_rule, 'Uniform Rule [-y, y)'),
              (model_normal_rule, 'Normal Distribution (m=0, std=y)')]

# evaluate behavior
helpers.compare_init_weights(model_list,
                             'no vs centered vs uniform vs normal',
                             train_loader,
                             valid_loader)
```
As you complete this exercise, keep in mind these questions:
* Which initialization strategy has the lowest training loss after two epochs? Which has the highest validation accuracy?
* After testing all these initial weight options, which would you decide to use in a final classification model?
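Since `Net`, `helpers`, and the loaders are defined elsewhere in the notebook, here is a self-contained, PyTorch-free sketch of what `model.apply(fn)` is doing under the hood: it recursively visits every submodule and calls `fn` on each (children first, then the module itself). The `Module`/`Linear` classes below are simplified stand-ins, not the real `torch.nn` classes.

```python
class Module:
    """Minimal stand-in for nn.Module's recursive apply()."""
    def __init__(self, *children):
        self.children = list(children)

    def apply(self, fn):
        for child in self.children:
            child.apply(fn)   # recurse into submodules first
        fn(self)              # then call fn on self, like nn.Module.apply
        return self

class Linear(Module):
    def __init__(self, in_features):
        super().__init__()
        self.in_features = in_features
        self.weight_std = None

def init_linear(m):
    # mimic weights_init_normal: only touch Linear layers
    if isinstance(m, Linear):
        m.weight_std = 1 / m.in_features ** 0.5

net = Module(Linear(784), Linear(256), Linear(128))
net.apply(init_linear)
print([round(l.weight_std, 4) for l in net.children])  # [0.0357, 0.0625, 0.0884]
```

This is why a single `weights_init_*` function can configure a whole model: `apply` handles the traversal, and the function itself only has to decide what to do with each layer type it encounters.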
<a href="https://colab.research.google.com/github/AWH-GlobalPotential-X/AWH-Geo/blob/master/notebooks/AWH-Geo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Welcome to AWH-Geo
This tool requires a [Google Drive](https://drive.google.com/drive/my-drive) and [Earth Engine](https://developers.google.com/earth-engine/) Account.
[Start here](https://drive.google.com/drive/u/1/folders/1EzuqsbADrtdXChcpHqygTh7SuUw0U_QB) to create a new Output Table from the template:
1. Right-click on "OutputTable_TEMPLATE" file > Make a Copy to your own Drive folder
2. Rename the new file "OutputTable_CODENAME" with CODENAME (max 83 characters!) as a unique output table code. If including a date in the code, use the YYYYMMDD date format.
3. Enter in the output values in L/hr to each cell in each of the 10%-interval rH bins... interpolate in Sheets as necessary.
Then, click "Connect" at the top right of this notebook.
Then run each of the code blocks below, following instructions. For "OutputTableCode" inputs, use the CODENAME you created in Sheets.
```
#@title Basic setup and earthengine access.
print('Welcome to AWH-Geo')
# import, authenticate, then initialize EarthEngine module ee
# https://developers.google.com/earth-engine/python_install#package-import
import ee
print('Make sure the EE version is v0.1.215 or greater...')
print('Current EE version = v' + ee.__version__)
print('')
ee.Authenticate()
ee.Initialize()
worldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs
  coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],
          [180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],
  geodesic=False,
  proj='EPSG:4326'
)
#@title Test Earth Engine connection (see Mt Everest elev and a green map)
# Print the elevation of Mount Everest.
dem = ee.Image('USGS/SRTMGL1_003')
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = dem.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
# Access study assets
from IPython.display import Image
jmpGeofabric_image = ee.Image('users/awhgeoglobal/jmpGeofabric_image') # access to study folder in EE
Image(url=jmpGeofabric_image.getThumbUrl({'min': 0, 'max': 1, 'dimensions': 512,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
#@title Set up access to Google Sheets (follow instructions)
from google.colab import auth
auth.authenticate_user()
# gspread is module to access Google Sheets through python
# https://gspread.readthedocs.io/en/latest/index.html
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default()) # get credentials
#@title STEP 1: Export timeseries for given OutputTable: enter CODENAME (without "OutputTable_" prefix) below
OutputTableCode = "" #@param {type:"string"}
StartYear = 2010 #@param {type:"integer"}
EndYear = 2020 #@param {type:"integer"}
ExportWeekly_1_or_0 = 0#@param {type:"integer"}
ee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))
ee_username = ee_username.getInfo()
years = list(range(StartYear,EndYear))
print('Time Period: ', years)
def timeseriesExport(outputTable_code):
  """
  This function runs the output table value over the climate variables using the
  nearest lookup values, worldwide, for every hourly timestep during a
  user-determined period. It optionally resamples the temporal interval by
  averaging the hourly output over semi-week periods. It then converts the
  resulting image collection into a single image with several bands, each of
  which represents one (hourly or semi-week) interval. Finally, it exports this
  image over 3-month tranches and saves each as an EE Image Asset with a name
  corresponding to the tranche's time period.
  """
  # print the output table code from user input for confirmation
  print('outputTable code:', outputTable_code)

  # CLIMATE DATA PRE-PROCESSING
  # ERA5-Land climate dataset used for worldwide (derived) climate metrics
  # https://www.ecmwf.int/en/era5-land
  # era5-land HOURLY images in EE catalog
  era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY')
  # print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)
  era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export
  era5Land_scale = era5Land_proj.nominalScale()
  print('era5Land_scale (should be ~11132):', era5Land_scale.getInfo())
  era5Land_filtered = era5Land.filterDate( # ERA5-Land climate data
    str(StartYear-1) + '-12-31', str(EndYear) + '-01-01').select( # filter by date
      # filter by ERA5-Land image collection bands
      [
        'dewpoint_temperature_2m', # K (https://apps.ecmwf.int/codes/grib/param-db?id=168)
        'surface_solar_radiation_downwards', # J/m^2 (accumulated; divide by 3600 to get W/m^2 over the hourly interval, https://apps.ecmwf.int/codes/grib/param-db?id=176)
        'temperature_2m' # K
      ])
  # print('era5Land_filtered',era5Land_filtered.limit(50))
  print('Wait... retrieving data from sheets takes a couple minutes')

  # COLLECT OUTPUT TABLE DATA FROM SHEETS INTO PYTHON ARRAYS
  # gspread will look in the list of gSheets accessible to the user.
  # In Earth Engine, an array is a list of lists.
  # Loop through worksheet tabs and build a list of lists of lists (3-dimensional)
  # to organize output values [L/hr] by the 3 physical variables in the following
  # order: rH (first nesting level), temperature (second), then ghi (third).
  spreadsheet = gc.open('OutputTable_' + outputTable_code)
  outputArray = list() # create empty array
  rH_labels = ['rH0','rH10','rH20','rH30','rH40','rH50', # worksheet tab names
               'rH60','rH70','rH80','rH90','rH100']
  for rH in rH_labels: # loop to create 3-D array (list of lists of lists)
    rH_interval_array = list()
    worksheet = spreadsheet.worksheet(rH)
    for x in list(range(7,26)): # relevant ranges in output table sheet
      rH_interval_array.append([float(y) for y in worksheet.row_values(x)])
    outputArray.append(rH_interval_array)
  # print('Output Table values:', outputArray) # for debugging

  # create an array image in EE (each pixel is a multi-dimensional matrix)
  outputImage_arrays = ee.Image(ee.Array(outputArray)) # values are in [L/hr]

  def processTimeseries(i): # core processing algorithm with lookups to outputTable
    """
    This is the core AWH-Geo algorithm to convert image-based input climate data
    into an image of AWG device output [L/time] based on a given output lookup table.
    It runs across the ERA5-Land image collection timeseries and applies the lookup
    table to each pixel of each image representing each hourly climate timestep.
    """
    i = ee.Image(i) # cast as image
    i = i.updateMask(i.select('temperature_2m').mask()) # ensure mask is applied to all bands
    timestamp_millis = ee.Date(i.get('system:time_start'))
    i_previous = ee.Image(era5Land_filtered.filterDate(
      timestamp_millis.advance(-1,'hour')).first())
    rh = ee.Image().expression( # relative humidity calculation [%]
      # from http://bmcnoldy.rsmas.miami.edu/Humidity.html
      '100 * (e**((17.625 * Td) / (To + Td)) / e**((17.625 * T) / (To + T)))', {
        'e': 2.718281828459045, # Euler's number
        'T': i.select('temperature_2m').subtract(273.15), # temperature converted from K to Celsius [degC]
        'Td': i.select('dewpoint_temperature_2m').subtract(273.15), # dewpoint temperature converted from K to Celsius [degC]
        'To': 243.04 # Magnus-formula constant [degC]
      }).rename('rh')
    ghi = ee.Image(ee.Algorithms.If( # because this parameter in ERA5 is cumulative in J/m^2...
      condition=ee.Number(timestamp_millis.get('hour')).eq(1), # ...from the last observation...
      trueCase=i.select('surface_solar_radiation_downwards'), # ...the current value must be...
      falseCase=i.select('surface_solar_radiation_downwards').subtract( # ...subtracted from the last...
        i_previous.select('surface_solar_radiation_downwards')) # ...then divided by seconds
    )).divide(3600).rename('ghi') # solar global horizontal irradiance [W/m^2]
    temp = i.select('temperature_2m'
      ).subtract(273.15).rename('temp') # temperature converted from K to Celsius [degC]
    rhClamp = rh.clamp(0.1,100) # relative humidity clamped to output table range [%]
    ghiClamp = ghi.clamp(0.1,1300) # global horizontal irradiance clamped to range [W/m^2]
    tempClamp = temp.clamp(0.1,45) # temperature clamped to output table range [degC]
    # convert climate variables to lookup integers
    rhLookup = rhClamp.divide(10
      ).round().int().rename('rhLookup') # rH lookup interval
    tempLookup = tempClamp.divide(2.5
      ).round().int().rename('tempLookup') # temp lookup interval
    ghiLookup = ghiClamp.divide(100
      ).add(1).round().int().rename('ghiLookup') # ghi lookup interval
    # combine lookup values in a 3-band image
    xyzLookup = ee.Image(rhLookup).addBands(tempLookup).addBands(ghiLookup)
    # look up values in the 3D array for each pixel to return AWG output from the table [L/hr]
    # set output to 0 if temperature is less than 0 deg C
    output = outputImage_arrays.arrayGet(xyzLookup).multiply(temp.gt(0))
    nightMask = ghi.gt(0.5) # mask pixels which have no incident sunlight
    return ee.Image(output.rename('O').addBands( # return image of output labeled "O" [L/hr]
      rh.updateMask(nightMask)).addBands(
      ghi.updateMask(nightMask)).addBands(
      temp.updateMask(nightMask)).setMulti({ # add physical variables as bands
        'system:time_start': timestamp_millis # set time as property
      })).updateMask(1) # close partial masks at continental edges

  def outputHourly_export(timeStart, timeEnd, year):
    """
    Run the lookup processing function (above) across the entire climate
    timeseries at the finest temporal interval (1 hr for ERA5-Land). Convert the
    resulting image collection to a single image with a band for each timestep
    to allow export as an Earth Engine asset (you cannot export/save image
    collections as assets).
    """
    # filter ERA5-Land climate data by time
    era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd)
    # print('era5Land_filtered_section',era5Land_filtered_section.limit(1).getInfo())
    outputHourly = era5Land_filtered_section.map(processTimeseries)
    # outputHourly_toBands_pre = outputHourly.select(['ghi']).toBands()
    outputHourly_toBands_pre = outputHourly.select(['O']).toBands()
    outputHourly_toBands = outputHourly_toBands_pre.select(
      # multiband image with each band representing one timestep
      outputHourly_toBands_pre.bandNames(),
      # rename bands by timestamp
      outputHourly_toBands_pre.bandNames().map(
        lambda name: ee.String('H').cat( # "H" for hourly
          ee.String(name).replace('T','')
        )
      )
    )
    # notify user of export
    print('Exporting outputHourly year:', year)
    task = ee.batch.Export.image.toAsset(
      image=ee.Image(outputHourly_toBands),
      region=worldGeo,
      description='O_hourly_' + outputTable_code + '_' + year,
      assetId=ee_username + '/O_hourly_' + outputTable_code + '_' + year,
      scale=era5Land_scale.getInfo(),
      crs='EPSG:4326',
      crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
      maxPixels=1e10,
      maxWorkers=2000
    )
    task.start()

  # run timeseries export on entire hourly ERA5-Land for each yearly tranche
  for y in years:
    y = str(y)
    outputHourly_export(y + '-01-01', y + '-04-01', y + 'a')
    outputHourly_export(y + '-04-01', y + '-07-01', y + 'b')
    outputHourly_export(y + '-07-01', y + '-10-01', y + 'c')
    outputHourly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')

  def outputWeekly_export(timeStart, timeEnd, year):
    era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd) # filter ERA5-Land climate data by time
    outputHourly = era5Land_filtered_section.map(processTimeseries)

    # resample values over time by 2-week aggregations
    # Define a time interval
    start = ee.Date(timeStart)
    end = ee.Date(timeEnd)
    # Number of days per range increment.
    DAYS_PER_RANGE = 14
    # DateRangeCollection, which contains the ranges we're interested in.
    drc = ee.call("BetterDateRangeCollection",
                  start,
                  end,
                  DAYS_PER_RANGE,
                  "day",
                  True)
    # This filter will join images with the date range that contains their start time.
    filter = ee.Filter.dateRangeContains("date_range", None, "system:time_start")
    # Save all of the matching values under "matches".
    join = ee.Join.saveAll("matches")
    # Do the join.
    joinedResult = join.apply(drc, outputHourly, filter)
    # print('joinedResult',joinedResult)
    # Map over the joined results, and add the mean of the matches as "meanForRange".
    joinedResult = joinedResult.map(
      lambda e: e.set("meanForRange", ee.ImageCollection.fromImages(e.get("matches")).mean())
    )
    # print('joinedResult',joinedResult)
    # roll resampled images into a new image collection
    outputWeekly = ee.ImageCollection(joinedResult.map(
      lambda f: ee.Image(f.get('meanForRange'))
    ))
    # print('outputWeekly',outputWeekly.getInfo())
    # convert the image collection into an image with many bands which can be saved as an EE asset
    outputWeekly_toBands_pre = outputWeekly.toBands()
    outputWeekly_toBands = outputWeekly_toBands_pre.select(
      outputWeekly_toBands_pre.bandNames(), # multiband image with each band representing one timestep
      outputWeekly_toBands_pre.bandNames().map(
        lambda name: ee.String('W').cat(name)
      )
    )
    task = ee.batch.Export.image.toAsset(
      image=ee.Image(outputWeekly_toBands),
      region=worldGeo,
      description='O_weekly_' + outputTable_code + '_' + year,
      assetId=ee_username + '/O_weekly_' + outputTable_code + '_' + year,
      scale=era5Land_scale.getInfo(),
      crs='EPSG:4326',
      crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
      maxPixels=1e10,
      maxWorkers=2000
    )
    if ExportWeekly_1_or_0 == 1:
      task.start()
      print('Exporting outputWeekly year:', year)

  # run semi-weekly timeseries export on ERA5-Land by year
  for y in years:
    y = str(y)
    outputWeekly_export(y + '-01-01', y + '-04-01', y + 'a')
    outputWeekly_export(y + '-04-01', y + '-07-01', y + 'b')
    outputWeekly_export(y + '-07-01', y + '-10-01', y + 'c')
    outputWeekly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')

timeseriesExport(OutputTableCode)
print('Complete! Read instructions below')
```
# *Before moving on to the next step... Wait until above tasks are complete in the task manager: https://code.earthengine.google.com/*
(right pane, "Tasks" tab, click "refresh"; the tasks should show up once the script prints "Exporting...")
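The export above computes relative humidity inside an Earth Engine expression using a Magnus-type approximation. As a sketch, the same formula can be sanity-checked offline in plain Python; the temperatures below are made-up test values, not ERA5-Land data:

```python
import math

def rel_humidity(T, Td, To=243.04, a=17.625):
    """Magnus-type relative humidity [%] from air temperature T and
    dewpoint Td, both in deg C (same formula as the EE expression above)."""
    return 100 * (math.exp((a * Td) / (To + Td)) / math.exp((a * T) / (To + T)))

print(round(rel_humidity(25.0, 25.0), 1))  # dewpoint == temperature -> 100.0 (saturated)
print(round(rel_humidity(30.0, 20.0), 1))  # warm, drier air -> about 55
```

The key property to verify is that RH is exactly 100% when the dewpoint equals the air temperature, and drops as the dewpoint falls below it.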
```
#@title Re-instate earthengine access (follow instructions)
print('Welcome Back to AWH-Geo')
print('')
# import, authenticate, then initialize EarthEngine module ee
# https://developers.google.com/earth-engine/python_install#package-import
import ee
print('Make sure the EE version is v0.1.215 or greater...')
print('Current EE version = v' + ee.__version__)
print('')
ee.Authenticate()
ee.Initialize()
worldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs
  coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],
          [180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],
  geodesic=False,
  proj='EPSG:4326'
)
#@title STEP 2: Export statistical results for given OutputTable: enter CODENAME (without "OutputTable_" prefix) below
ee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))
ee_username = ee_username.getInfo()
OutputTableCode = "" #@param {type:"string"}
StartYear = 2010 #@param {type:"integer"}
EndYear = 2020 #@param {type:"integer"}
SuffixName_optional = "" #@param {type:"string"}
ExportMADP90s_1_or_0 = 0#@param {type:"integer"}
years = list(range(StartYear,EndYear))
print('Time Period: ', years)
def generateStats(outputTable_code):
  """
  This function generates single images which contain time-aggregated output
  statistics, including the overall mean and shortfall metrics such as MADP90s.
  """
  # CLIMATE DATA PRE-PROCESSING
  # ERA5-Land climate dataset used for worldwide (derived) climate metrics
  # https://www.ecmwf.int/en/era5-land
  # era5-land HOURLY images in EE catalog
  era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY')
  # print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)
  era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export
  era5Land_scale = era5Land_proj.nominalScale()

  # set up the image collection timeseries to chart:
  # unravel and concatenate all the image stages into a single image collection
  def unravel(i): # function to "unravel" image bands into an image collection
    def setDate(bandName): # loop over band names in the image and return a LIST of ...
      dateCode = ee.Date.parse( # ... images, one for each band
        format='yyyyMMddHH',
        date=ee.String(ee.String(bandName).split('_').get(0)).slice(1) # get date periods from band name
      )
      return i.select([bandName]).rename('O').set('system:time_start', dateCode)
    i = ee.Image(i)
    return i.bandNames().map(setDate) # returns a LIST of images

  yearCode_list = ee.List(sum([[ # each image's units are in [L/hr]
    unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'a')),
    unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'b')),
    unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'c')),
    unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'d'))
  ] for y in years], [])).flatten()
  outputTimeseries = ee.ImageCollection(yearCode_list)
  Od_overallMean = outputTimeseries.mean().multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]

  # export overall daily mean
  task = ee.batch.Export.image.toAsset(
    image=Od_overallMean,
    region=worldGeo,
    description='Od_overallMean_' + outputTable_code + SuffixName_optional,
    assetId=ee_username + '/Od_overallMean_' + outputTable_code + SuffixName_optional,
    scale=era5Land_scale.getInfo(),
    crs='EPSG:4326',
    crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
    maxPixels=1e10,
    maxWorkers=2000
  )
  task.start()
  print('Exporting Od_overallMean_' + outputTable_code + SuffixName_optional)

  ## run the moving average function over the timeseries using DAILY averages
  # start and end dates over which to calculate aggregate statistics
  startDate = ee.Date(str(StartYear) + '-01-01')
  endDate = ee.Date(str(EndYear) + '-01-01')

  # resample values over time by daily aggregations
  # Number of days per range increment.
  DAYS_PER_RANGE = 1
  # DateRangeCollection, which contains the ranges we're interested in.
  drc = ee.call('BetterDateRangeCollection',
                startDate,
                endDate,
                DAYS_PER_RANGE,
                'day',
                True)
  # This filter will join images with the date range that contains their start time.
  filter = ee.Filter.dateRangeContains('date_range', None, 'system:time_start')
  # Save all of the matching values under "matches".
  join = ee.Join.saveAll('matches')
  # Do the join.
  joinedResult = join.apply(drc, outputTimeseries, filter)
  # print('joinedResult',joinedResult)
  # Map over the joined results, and add the mean of the matches as "meanForRange".
  joinedResult = joinedResult.map(
    lambda e: e.set('meanForRange', ee.ImageCollection.fromImages(e.get('matches')).mean())
  )
  # print('joinedResult',joinedResult)
  # roll resampled images into a new image collection
  outputDaily = ee.ImageCollection(joinedResult.map(
    lambda f: ee.Image(f.get('meanForRange')).set(
      'system:time_start',
      ee.Date.parse('YYYYMMdd', f.get('system:index')).millis()
    )
  ))
  # print('outputDaily',outputDaily.getInfo())

  outputDaily_p90 = ee.ImageCollection( # reduce the daily collection to its 10th-percentile image
    outputDaily.toList(outputDaily.size())).reduce(
      ee.Reducer.percentile( # reduce image collection by percentile
        [10] # 100% - 90% = 10%
      )).multiply(24).rename('Od') # hourly output x 24 = daily output [L/day]
  task = ee.batch.Export.image.toAsset(
    image=outputDaily_p90,
    region=worldGeo,
    description='Od_DailyP90_' + outputTable_code + SuffixName_optional,
    assetId=ee_username + '/Od_DailyP90_' + outputTable_code + SuffixName_optional,
    scale=era5Land_scale.getInfo(),
    crs='EPSG:4326',
    crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
    maxPixels=1e10,
    maxWorkers=2000
  )
  if ExportMADP90s_1_or_0 == 1:
    task.start()
    print('Exporting Od_DailyP90_' + outputTable_code + SuffixName_optional)

  def rollingStats(period): # run rolling stat function for each rolling period scenario
    # collect neighboring time periods into a join
    timeFilter = ee.Filter.maxDifference(
      difference=float(period)/2 * 24 * 60 * 60 * 1000, # mid-centered window
      leftField='system:time_start',
      rightField='system:time_start'
    )
    rollingPeriod_join = ee.ImageCollection(ee.Join.saveAll('images').apply(
      primary=outputDaily, # apply the join on itself to collect images
      secondary=outputDaily,
      condition=timeFilter
    ))
    def rollingPeriod_mean(i): # get the mean across each collected period
      i = ee.Image(i) # collected images are stored in the "images" property of each timestep image
      return ee.ImageCollection.fromImages(i.get('images')).mean()
    outputDaily_rollingMean = rollingPeriod_join.filterDate(
      startDate.advance(float(period)/2, 'days'),
      endDate.advance(float(period)/-2, 'days')
    ).map(rollingPeriod_mean, True)
    Od_p90_rolling = ee.ImageCollection( # reduce the rolling means to their 10th-percentile image
      outputDaily_rollingMean.toList(outputDaily_rollingMean.size())).reduce(
        ee.Reducer.percentile( # reduce image collection by percentile
          [10] # 100% - 90% = 10%
        )).multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]
    task = ee.batch.Export.image.toAsset(
      image=Od_p90_rolling,
      region=worldGeo,
      description='Od_MADP90_' + period + 'day_' + outputTable_code + SuffixName_optional,
      assetId=ee_username + '/Od_MADP90_' + period + 'day_' + outputTable_code + SuffixName_optional,
      scale=era5Land_scale.getInfo(),
      crs='EPSG:4326',
      crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
      maxPixels=1e10,
      maxWorkers=2000
    )
    if ExportMADP90s_1_or_0 == 1:
      task.start()
      print('Exporting Od_MADP90_' + period + 'day_' + outputTable_code + SuffixName_optional)

  rollingPeriods = [
    '007',
    '030',
    # '060',
    '090',
    # '180',
  ] # define custom rolling periods over which to calc MADP90 [days]
  for period in rollingPeriods: # execute the calculations & export
    # print(period)
    rollingStats(period)

generateStats(OutputTableCode) # run stats function
print('Complete! Go to next step.')
```
Wait until these statistics are completed processing. Track them in the task manager: https://code.earthengine.google.com/
When they are finished.... [Go here to see maps](https://code.earthengine.google.com/fac0cc72b2ac2e431424cbf45b2852cf)
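The MADP90 metrics computed above reduce, per pixel, an N-day moving average of daily output to its 10th percentile. That reduction is easy to sketch offline for a single pixel's timeseries; the gamma-distributed daily outputs below are synthetic stand-ins, not real AWH-Geo data:

```python
import numpy as np

rng = np.random.default_rng(42)
daily_output = rng.gamma(2.0, 1.5, size=365)   # synthetic daily output [L/day]

def madp90(series, window):
    # moving average over `window` days, then the 10th percentile of those means
    kernel = np.ones(window) / window
    rolling_mean = np.convolve(series, kernel, mode='valid')
    return np.percentile(rolling_mean, 10)

for w in [7, 30, 90]:
    print(f"MADP90 over {w:2d} days: {madp90(daily_output, w):.2f}")
```

Longer windows smooth out short dry spells, so the MADP90 over 90 days generally sits closer to the overall mean than the 7-day version, which is why the notebook exports several window lengths.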
# Scraping Geofenced Anti Vax Tweets across Australia
Firstly, a new environment is created and activated.
In my initial attempt, the scraping didn't work. After some research on Stack Overflow, I determined that a particular fork of TWINT was needed at the time to get around Twitter's suppression of the library:
pip3 install --upgrade git+https://github.com/himanshudabas/twint.git@origin/twint-fixes#egg=twint
Additionally, h3, shapely, and nest_asyncio were pip-installed (asyncio is part of the Python standard library).
```
# import the necessary libraries and modules
import folium
import h3
from shapely.geometry import Polygon, shape
import pandas as pd
import shapely
import numpy as np
import asyncio
import aiohttp
import nest_asyncio
nest_asyncio.apply()
import twint
import os
import time
# coordinates of the outline of Australia, obtained from google maps, in format [latitude, longitude]
# as a list for GeoJSON
aus_coordinates = [
[-10.001449, 142.461773],
[-13.955529, 144.074703],
[-14.211275, 145.327144],
[-18.802451, 146.733394],
[-20.169543, 149.150386],
[-21.912601, 149.831538],
[-22.289226, 150.864253],
[-24.986185, 152.819819],
[-24.467278, 153.347163],
[-28.661794, 153.874507],
[-31.043641, 153.259273],
[-32.454273, 152.611080],
[-37.770824, 150.018307],
[-39.317407, 146.458737],
[-40.780646, 146.766354],
[-40.614057, 147.777096],
[-39.707293, 147.579342],
[-39.622721, 148.304440],
[-41.755025, 148.546139],
[-43.628223, 147.996823],
[-43.771193, 145.711667],
[-39.567694, 143.442990],
[-39.465991, 144.283444],
[-40.986166, 146.494657],
[-39.102407, 145.989286],
[-38.589017, 144.451200],
[-39.136501, 143.704130],
[-38.279207, 140.056669],
[-36.038046, 138.738310],
[-36.463314, 136.387236],
[-32.701853, 133.311064],
[-31.884608, 130.938017],
[-32.590845, 126.147978],
[-33.153702, 124.379179],
[-34.114131, 123.730986],
[-34.205040, 120.061552],
[-35.216505, 118.007109],
[-35.000805, 116.106474],
[-34.359361, 114.908964],
[-33.429202, 114.914457],
[-33.520842, 115.331937],
[-31.996484, 115.419827],
[-25.530107, 112.695218],
[-21.460796, 113.705960],
[-20.105003, 116.606351],
[-19.111493, 120.638333],
[-17.106727, 121.824856],
[-14.229836, 124.923000],
[-13.504545, 126.900539],
[-14.527816, 129.053859],
[-10.863039, 130.097560],
[-10.690358, 132.646388],
[-11.402017, 133.503322],
[-11.875475, 135.250148],
[-10.895406, 136.755275],
[-15.831953, 137.436427],
[-16.190997, 140.424708],
[-12.371950, 140.997474],
[-10.001449, 142.461773]
]
# for GeoJSON object, the coordinates must be in format [longitude, latitude]
for coordinate in aus_coordinates:
coordinate.reverse()
# Create a GeoJSON object representing a bounding box for the region of interest, e.g. using the reversed aus coordinates
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates":[
[
[142.461773, -10.001449],
[144.074703, -13.955529],
[145.327144, -14.211275],
[146.733394, -18.802451],
[149.150386, -20.169543],
[149.831538, -21.912601],
[150.864253, -22.289226],
[152.819819, -24.986185],
[153.347163, -24.467278],
[153.874507, -28.661794],
[153.259273, -31.043641],
[152.61108, -32.454273],
[150.018307, -37.770824],
[146.458737, -39.317407],
[146.766354, -40.780646],
[147.777096, -40.614057],
[147.579342, -39.707293],
[148.30444, -39.622721],
[148.546139, -41.755025],
[147.996823, -43.628223],
[145.711667, -43.771193],
[143.44299, -39.567694],
[144.283444, -39.465991],
[146.494657, -40.986166],
[145.989286, -39.102407],
[144.4512, -38.589017],
[143.70413, -39.136501],
[140.056669, -38.279207],
[138.73831, -36.038046],
[136.387236, -36.463314],
[133.311064, -32.701853],
[130.938017, -31.884608],
[126.147978, -32.590845],
[124.379179, -33.153702],
[123.730986, -34.114131],
[120.061552, -34.20504],
[118.007109, -35.216505],
[116.106474, -35.000805],
[114.908964, -34.359361],
[114.914457, -33.429202],
[115.331937, -33.520842],
[115.419827, -31.996484],
[112.695218, -25.530107],
[113.70596, -21.460796],
[116.606351, -20.105003],
[120.638333, -19.111493],
[121.824856, -17.106727],
[124.923, -14.229836],
[126.900539, -13.504545],
[129.053859, -14.527816],
[130.09756, -10.863039],
[132.646388, -10.690358],
[133.503322, -11.402017],
[135.250148, -11.875475],
[136.755275, -10.895406],
[137.436427, -15.831953],
[140.424708, -16.190997],
[140.997474, -12.37195],
[142.461773, -10.001449]
]
]
}
}
]
}
# Add a 0.1 degree buffer around this area
# 0.1 degree is chosen as there is already a buffer around the country
# but this feature is useful to have for future projects
s = shape(geojson['features'][0]['geometry'])
s = s.buffer(0.1)
feature = {'type': 'Feature', 'properties': {}, 'geometry': shapely.geometry.mapping(s)}
feature['geometry']['coordinates'] = [[[v[0], v[1]] for v in feature['geometry']['coordinates'][0]]]
feature = feature['geometry']
feature['coordinates'][0] = [[v[1], v[0]] for v in feature['coordinates'][0]]
# map H3 hexagons (code taken from H3 example: https://github.com/uber/h3-py-notebooks/blob/master/notebooks/usage.ipynb)
def visualize_hexagons(hexagons, color="red", folium_map=None):
    """
    hexagons is a list of hexclusters. Each hexcluster is a list of hexagons,
    e.g. [[hex1, hex2], [hex3, hex4]]
    """
    polylines = []
    lat = []
    lng = []
    for hex in hexagons:
        polygons = h3.h3_set_to_multi_polygon([hex], geo_json=False)
        # flatten polygons into loops
        outlines = [loop for polygon in polygons for loop in polygon]
        polyline = [outline + [outline[0]] for outline in outlines][0]
        lat.extend(map(lambda v: v[0], polyline))
        lng.extend(map(lambda v: v[1], polyline))
        polylines.append(polyline)
    if folium_map is None:
        m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=13, tiles='cartodbpositron')
    else:
        m = folium_map
    for polyline in polylines:
        my_PolyLine = folium.PolyLine(locations=polyline, weight=8, color=color)
        m.add_child(my_PolyLine)
    return m

def visualize_polygon(polyline, color):
    polyline.append(polyline[0])
    lat = [p[0] for p in polyline]
    lng = [p[1] for p in polyline]
    m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=13, tiles='cartodbpositron')
    my_PolyLine = folium.PolyLine(locations=polyline, weight=8, color=color)
    m.add_child(my_PolyLine)
    return m
# find all hexagons with a center that falls within our buffered area of interest from above
polyline = feature['coordinates'][0]
polyline.append(polyline[0])
lat = [p[0] for p in polyline]
lng = [p[1] for p in polyline]
m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=6, tiles='cartodbpositron')
my_PolyLine = folium.PolyLine(locations=polyline, weight=8, color="green")
m.add_child(my_PolyLine)

# make the list of hexagon IDs in our AOI
hexagons = list(h3.polyfill(feature, 3))

# map the hexagons
polylines = []
lat = []
lng = []
for hex in hexagons:
    polygons = h3.h3_set_to_multi_polygon([hex], geo_json=False)
    # flatten polygons into loops
    outlines = [loop for polygon in polygons for loop in polygon]
    polyline = [outline + [outline[0]] for outline in outlines][0]
    lat.extend(map(lambda v: v[0], polyline))
    lng.extend(map(lambda v: v[1], polyline))
    polylines.append(polyline)
for polyline in polylines:
    my_PolyLine = folium.PolyLine(locations=polyline, weight=8, color='red')
    m.add_child(my_PolyLine)
display(m)

# determine how many hexagons there are
len(hexagons)
```
725 hexagons tile this area at this resolution. However, notice that some of them lie fully over water, and Australia is very sparsely populated, so not every hexagon will have data.
```
np.sqrt((h3.cell_area(hexagons[0], unit='km^2')/0.827)/np.pi)
```
The ratio of the area of a regular hexagon to a circumscribed circle is 0.827. Scaling the area of each H3 cell by this factor can be used to find the radius of a circle that approximately circumscribes that cell. (This is approximate because H3 cells are not regular, as they must tile a sphere; in fact, some of them are pentagons. However, this approximation works for these purposes.)
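That radius calculation can be sketched as a small standalone function. The 12,000 km² figure below is a rough average area for resolution-3 H3 cells, used here only for illustration rather than taken from `h3.cell_area`:

```python
import math

HEX_TO_CIRCLE_RATIO = 0.827   # area ratio: regular hexagon / circumscribed circle

def circumscribed_radius_km(cell_area_km2):
    """Radius [km] of a circle that approximately circumscribes an H3 cell."""
    circle_area = cell_area_km2 / HEX_TO_CIRCLE_RATIO
    return math.sqrt(circle_area / math.pi)

# resolution-3 H3 cells average roughly 12,000 km^2
print(round(circumscribed_radius_km(12000), 1))  # -> about 68 km
```

Using the circumscribed circle slightly over-covers each cell, which is the safe direction for a geocoded search: adjacent queries overlap rather than leaving gaps.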
With this method, search queries can be automatically generated for each cell:
```
for hex in hexagons:
    center = h3.h3_to_geo(hex)
    r = np.sqrt((h3.cell_area(hex, unit='km^2')/0.827)/np.pi)
    query = "(antivax) AND geocode:" + str(center[0]) + "," + str(center[1]) + "," + str(r) + "km"
    print("Example query:", query)
```
Next, these locations and radii can be used to download tweets using TWINT.
```
# create a folder to save each hexagons csv tweet file in
os.mkdir('data_by_hex')
# Download tweets using TWINT for each hexagon in the list
hexagons_left = hexagons
# while loop to iterate through each hexagon
while len(hexagons_left) > 0:
    # remove one hexagon at a time
    hex = hexagons_left.pop(0)
    # check if a csv file already exists for this hexagon
    if not os.path.exists('data_by_hex/' + hex + '.csv'):
        # find center of the hexagon
        center = h3.h3_to_geo(hex)
        # approximate radius of hexagon/circle
        r = np.sqrt(h3.cell_area(hex, unit='km^2')*1.20919957616/np.pi)
        # query terms and geocode
        query = "(antivax OR anti vax) AND geocode:" + str(center[0]) + "," + str(center[1]) + "," + str(r) + "km"
        print(query)
        # configure twint
        c = twint.Config()
        c.Search = query
        # save as csv files
        c.Store_csv = True
        # save in directory created previously, named after the hexagon
        c.Output = "data_by_hex/" + hex + ".csv"
        # do not show 'live' results
        c.Hide_output = True
        # start date of search; this will search until the present
        c.Since = "2021-01-01"
        try:
            twint.run.Search(c)
        except Exception:
            os.remove('data_by_hex/' + hex + '.csv')
            time.sleep(10)
            print('error, removing this one')
            hexagons_left.append(hex)
    print(len(hexagons_left), "remaining")
```
Quite surprisingly (or perhaps not, given Australia's population density), only 33 csv files were created.
These 33 files show that from 1/1/21 to 20/6/21, only 33 out of a potential 725 hexagons had tweets sent from them containing antivax mentions.
Analysis will be done on these files, and a choropleth map with a time slider will be created to highlight anti-vax hotspots.
# Programming Assignment
## CNN classifier for the MNIST dataset
### Instructions
In this notebook, you will write code to build, compile and fit a convolutional neural network (CNN) model to the MNIST dataset of images of handwritten digits.
Some code cells are provided for you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line:
`#### GRADED CELL ####`
Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly. Inside these graded cells, you can use any functions or classes that are imported below, but make sure you don't use any variables that are outside the scope of the function.
### How to submit
Complete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook.
### Let's get started!
We'll start by running some imports and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further TensorFlow imports, you should add them here.
```
#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# If you would like to make further imports from Tensorflow, add them here
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Softmax, Conv2D, MaxPooling2D
```
#### The MNIST dataset
In this assignment, you will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). It consists of a training set of 60,000 handwritten digits with corresponding labels, and a test set of 10,000 images. The images have been normalised and centred. The dataset is frequently used in machine learning research, and has become a standard benchmark for image classification models.
- Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE, 86(11):2278-2324, November 1998.
Your goal is to construct a neural network that classifies images of handwritten digits into one of 10 classes.
#### Load and preprocess the data
```
# Run this cell to load the MNIST data
mnist_data = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist_data.load_data()
```
First, preprocess the data by scaling the training and test images so their values lie in the range from 0 to 1.
```
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def scale_mnist_data(train_images, test_images):
    """
    This function takes in the training and test images as loaded in the cell above, and scales them
    so that they have minimum and maximum values equal to 0 and 1 respectively.
    Your function should return a tuple (train_images, test_images) of scaled training and test images.
    """
    train_images = train_images / 255.
    test_images = test_images / 255.
    return (train_images, test_images)
# Run your function on the input data
scaled_train_images, scaled_test_images = scale_mnist_data(train_images, test_images)
# Add a dummy channel dimension
scaled_train_images = scaled_train_images[..., np.newaxis]
scaled_test_images = scaled_test_images[..., np.newaxis]
```
#### Build the convolutional neural network model
We are now ready to construct a model to fit to the data. Using the Sequential API, build your CNN model according to the following spec:
* The model should use the `input_shape` in the function argument to set the input size in the first layer.
* A 2D convolutional layer with a 3x3 kernel and 8 filters. Use 'SAME' zero padding and ReLU activation functions. Make sure to provide the `input_shape` keyword argument in this first layer.
* A max pooling layer, with a 2x2 window, and default strides.
* A flatten layer, which unrolls the input into a one-dimensional tensor.
* Two dense hidden layers, each with 64 units and ReLU activation functions.
* A dense output layer with 10 units and the softmax activation function.
In particular, your neural network should have six layers.
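As a quick sanity check on the spec above, the shapes flowing through the network can be traced by hand for the 28x28x1 MNIST inputs; the flatten layer should produce 14 * 14 * 8 = 1568 units:

```python
# Shape walk-through for the layer spec above, assuming 28x28x1 inputs.
h, w, c = 28, 28, 1
conv_out = (h, w, 8)            # 'SAME' padding keeps 28x28; 8 filters
pool_out = (h // 2, w // 2, 8)  # 2x2 max pool with default stride 2
flat_units = pool_out[0] * pool_out[1] * pool_out[2]
print(flat_units)  # 1568
```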
```
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_model(input_shape):
    """
    This function should build a Sequential model according to the above specification. Ensure the
    weights are initialised by providing the input_shape argument in the first layer, given by the
    function argument.
    Your function should return the model.
    """
    model = Sequential([
        Conv2D(8, (3, 3), padding='same', activation='relu', input_shape=input_shape),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(64, activation='relu'),
        Dense(64, activation='relu'),
        Dense(10, activation='softmax')
    ])
    return model
# Run your function to get the model
model = get_model(scaled_train_images[0].shape)
```
#### Compile the model
You should now compile the model using the `compile` method. To do so, you need to specify an optimizer, a loss function and a metric to judge the performance of your model.
```
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def compile_model(model):
    """
    This function takes in the model returned from your get_model function, and compiles it with an optimiser,
    loss function and metric.
    Compile the model using the Adam optimiser (with default settings), the cross-entropy loss function and
    accuracy as the only metric.
    Your function doesn't need to return anything; the model will be compiled in-place.
    """
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# Run your function to compile the model
compile_model(model)
```
#### Fit the model to the training data
Now you should train the model on the MNIST dataset, using the model's `fit` method. Set the training to run for 5 epochs, and return the training history to be used for plotting the learning curves.
```
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def train_model(model, scaled_train_images, train_labels):
    """
    This function should train the model for 5 epochs on the scaled_train_images and train_labels.
    Your function should return the training history, as returned by model.fit.
    """
    history = model.fit(scaled_train_images, train_labels, epochs=5)
    return history
# Run your function to train the model
history = train_model(model, scaled_train_images, train_labels)
```
#### Plot the learning curves
We will now plot two graphs:
* Epoch vs accuracy
* Epoch vs loss
We will load the model history into a pandas `DataFrame` and use the `plot` method to output the required graphs.
```
# Run this cell to load the model history into a pandas DataFrame
frame = pd.DataFrame(history.history)
# Run this cell to make the Accuracy vs Epochs plot
acc_plot = frame.plot(y="accuracy", title="Accuracy vs Epochs", legend=False)
acc_plot.set(xlabel="Epochs", ylabel="Accuracy")
# Run this cell to make the Loss vs Epochs plot
acc_plot = frame.plot(y="loss", title = "Loss vs Epochs",legend=False)
acc_plot.set(xlabel="Epochs", ylabel="Loss")
```
#### Evaluate the model
Finally, you should evaluate the performance of your model on the test set, by calling the model's `evaluate` method.
```
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def evaluate_model(model, scaled_test_images, test_labels):
    """
    This function should evaluate the model on the scaled_test_images and test_labels.
    Your function should return a tuple (test_loss, test_accuracy).
    """
    test_loss, test_accuracy = model.evaluate(scaled_test_images, test_labels, verbose=2)
    return (test_loss, test_accuracy)
# Run your function to evaluate the model
test_loss, test_accuracy = evaluate_model(model, scaled_test_images, test_labels)
print(f"Test loss: {test_loss}")
print(f"Test accuracy: {test_accuracy}")
```
#### Model predictions
Let's see some model predictions! We will randomly select four images from the test data, and display the image and label for each.
For each test image, the model's prediction (the label with maximum probability) is shown, together with a plot showing the model's categorical distribution.
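The "label with maximum probability" is simply the argmax over the ten softmax outputs; a minimal sketch with a made-up probability vector:

```python
import numpy as np

# Toy softmax output over the 10 digit classes (made-up values summing to 1)
prediction = np.array([0.01, 0.02, 0.05, 0.60, 0.05, 0.05, 0.02, 0.10, 0.05, 0.05])
predicted_digit = int(np.argmax(prediction))
print(predicted_digit)  # 3
```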
```
# Run this cell to get model predictions on randomly selected test images
num_test_images = scaled_test_images.shape[0]
random_inx = np.random.choice(num_test_images, 4)
random_test_images = scaled_test_images[random_inx, ...]
random_test_labels = test_labels[random_inx, ...]
predictions = model.predict(random_test_images)
fig, axes = plt.subplots(4, 2, figsize=(16, 12))
fig.subplots_adjust(hspace=0.4, wspace=-0.2)
for i, (prediction, image, label) in enumerate(zip(predictions, random_test_images, random_test_labels)):
    axes[i, 0].imshow(np.squeeze(image))
    axes[i, 0].get_xaxis().set_visible(False)
    axes[i, 0].get_yaxis().set_visible(False)
    axes[i, 0].text(10., -1.5, f'Digit {label}')
    axes[i, 1].bar(np.arange(len(prediction)), prediction)
    axes[i, 1].set_xticks(np.arange(len(prediction)))
    axes[i, 1].set_title(f"Categorical distribution. Model prediction: {np.argmax(prediction)}")
plt.show()
```
Congratulations for completing this programming assignment! In the next week of the course we will take a look at including validation and regularisation in our model training, and introduce Keras callbacks.
# Chapter 7
```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import statsmodels.api as sm
import statsmodels.formula.api as smf
from patsy import dmatrix
from scipy import stats
from scipy.special import logsumexp
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.hdi_prob"] = 0.89 # set credible interval for entire notebook
np.random.seed(0)
```
#### Code 7.1
```
brains = pd.DataFrame.from_dict(
    {
        "species": [
            "afarensis",
            "africanus",
            "habilis",
            "boisei",
            "rudolfensis",
            "ergaster",
            "sapiens",
        ],
        "brain": [438, 452, 612, 521, 752, 871, 1350],  # volume in cc
        "mass": [37.0, 35.5, 34.5, 41.5, 55.5, 61.0, 53.5],  # mass in kg
    }
)
brains
brains
# Figure 7.2
plt.scatter(brains.mass, brains.brain)
# point labels
for i, r in brains.iterrows():
    if r.species == "afarensis":
        plt.text(r.mass + 0.5, r.brain, r.species, ha="left", va="center")
    elif r.species == "sapiens":
        plt.text(r.mass, r.brain - 25, r.species, ha="center", va="top")
    else:
        plt.text(r.mass, r.brain + 25, r.species, ha="center")
plt.xlabel("body mass (kg)")
plt.ylabel("brain volume (cc)");
```
#### Code 7.2
```
brains.loc[:, "mass_std"] = (
    brains.loc[:, "mass"] - brains.loc[:, "mass"].mean()
) / brains.loc[:, "mass"].std()
brains.loc[:, "brain_std"] = brains.loc[:, "brain"] / brains.loc[:, "brain"].max()
```
#### Code 7.3
This is modified from [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) (6.2 - 6.6).
```
m_7_1 = smf.ols("brain_std ~ mass_std", data=brains).fit()
m_7_1.summary()
```
#### Code 7.4
```
p, cov = np.polyfit(brains.loc[:, "mass_std"], brains.loc[:, "brain_std"], 1, cov=True)
post = stats.multivariate_normal(p, cov).rvs(1000)
az.summary({k: v for k, v in zip("ba", post.T)}, kind="stats")
```
#### Code 7.5
```
1 - m_7_1.resid.var() / brains.brain_std.var()
```
#### Code 7.6
```
def R2_is_bad(model):
    return 1 - model.resid.var() / brains.brain_std.var()

R2_is_bad(m_7_1)
```
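The same R² formula can be checked on toy numbers (made up here), independent of the statsmodels fit:

```python
import numpy as np

# R² = 1 - Var(residuals) / Var(y), on made-up observations and predictions
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
r2 = 1 - (y - y_hat).var() / y.var()
print(round(r2, 3))  # 0.98
```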
#### Code 7.7
```
m_7_2 = smf.ols("brain_std ~ mass_std + I(mass_std**2)", data=brains).fit()
m_7_2.summary()
```
#### Code 7.8
```
m_7_3 = smf.ols("brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3)", data=brains).fit()
m_7_4 = smf.ols(
    "brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4)",
    data=brains,
).fit()
m_7_5 = smf.ols(
    "brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5)",
    data=brains,
).fit()
```
#### Code 7.9
```
m_7_6 = smf.ols(
    "brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5) + I(mass_std**6)",
    data=brains,
).fit()
```
#### Code 7.10
The chapter gives code to produce only the first panel of Figure 7.3. Here, we produce the entire figure by looping over models 7.1-7.6.
To predict on new values of the independent variable we use the `get_prediction` method of the fitted OLS models. (For PyMC3 models, the equivalent would use theano SharedVariable objects, as outlined [here](https://docs.pymc.io/notebooks/data_container.html).)
```
models = [m_7_1, m_7_2, m_7_3, m_7_4, m_7_5, m_7_6]
names = ["m_7_1", "m_7_2", "m_7_3", "m_7_4", "m_7_5", "m_7_6"]
mass_plot = np.linspace(33, 62, 100)
mass_new = (mass_plot - brains.mass.mean()) / brains.mass.std()
fig, axs = plt.subplots(3, 2, figsize=[6, 8.5], sharex=True, sharey="row")
for model, name, ax in zip(models, names, axs.flat):
    prediction = model.get_prediction({"mass_std": mass_new})
    pred = prediction.summary_frame(alpha=0.11) * brains.brain.max()
    ax.plot(mass_plot, pred["mean"])
    ax.fill_between(mass_plot, pred["mean_ci_lower"], pred["mean_ci_upper"], alpha=0.3)
    ax.scatter(brains.mass, brains.brain, color="C0", s=15)
    ax.set_title(f"{name}: R^2: {model.rsquared:.2f}", loc="left", fontsize=11)
    if ax.is_first_col():
        ax.set_ylabel("brain volume (cc)")
    if ax.is_last_row():
        ax.set_xlabel("body mass (kg)")
        ax.set_ylim(-500, 2100)
        ax.axhline(0, ls="dashed", c="k", lw=1)
        ax.set_yticks([0, 450, 1300])
    else:
        ax.set_ylim(300, 1600)
        ax.set_yticks([450, 900, 1300])
fig.tight_layout()
```
#### Code 7.11 - this is R-specific notation for dropping rows
```
brains_new = brains.drop(brains.index[-1])
# Figure 7.4
# this code taken from PyMC3 port of Rethinking/Chp_06.ipynb
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
ax1.scatter(brains.mass, brains.brain, alpha=0.8)
ax2.scatter(brains.mass, brains.brain, alpha=0.8)
for i in range(len(brains)):
    d_new = brains.drop(brains.index[-i])  # drop each data point in turn
    # first order model
    m0 = smf.ols("brain ~ mass", d_new).fit()
    # need to calculate regression line
    # need to add intercept term explicitly
    x = sm.add_constant(d_new.mass)  # add constant to new data frame with mass
    x_pred = pd.DataFrame(
        {"mass": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 50)}
    )  # create linspace dataframe
    x_pred2 = sm.add_constant(x_pred)  # add constant to newly created linspace dataframe
    y_pred = m0.predict(x_pred2)  # calculate predicted values
    ax1.plot(x_pred, y_pred, "gray", alpha=0.5)
    # fifth order model
    m1 = smf.ols(
        "brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)", data=d_new
    ).fit()
    x = sm.add_constant(d_new.mass)  # add constant to new data frame with mass
    x_pred = pd.DataFrame(
        {"mass": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 200)}
    )  # create linspace dataframe
    x_pred2 = sm.add_constant(x_pred)  # add constant to newly created linspace dataframe
    y_pred = m1.predict(x_pred2)  # calculate predicted values from fitted model
    ax2.plot(x_pred, y_pred, "gray", alpha=0.5)
ax1.set_xlabel("body mass (kg)", fontsize=12)
ax1.set_ylabel("brain volume (cc)", fontsize=12)
ax1.set_title("Underfit model")
ax2.set_xlim(32, 62)
ax2.set_ylim(-250, 2200)
ax2.set_xlabel("body mass (kg)", fontsize=12)
ax2.set_ylabel("brain volume (cc)", fontsize=12)
ax2.set_title("Overfit model")
```
#### Code 7.12
```
p = np.array([0.3, 0.7])
-np.sum(p * np.log(p))
# Figure 7.5
p = np.array([0.3, 0.7])
q = np.arange(0.01, 1, 0.01)
DKL = np.sum(p * np.log(p / np.array([q, 1 - q]).T), 1)
plt.plot(q, DKL)
plt.xlabel("q[1]")
plt.ylabel("Divergence of q from p")
plt.axvline(0.3, ls="dashed", color="k")
plt.text(0.315, 1.22, "q = p");
```
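A quick sanity check on the divergence plotted above: the KL divergence of q from p is zero exactly when q = p, and positive otherwise:

```python
import numpy as np

p = np.array([0.3, 0.7])

def dkl(p, q):
    # Kullback-Leibler divergence of q from p
    return np.sum(p * np.log(p / q))

print(dkl(p, p))                         # 0.0
print(dkl(p, np.array([0.5, 0.5])) > 0)  # True
```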
#### Code 7.13 & 7.14
```
n_samples = 3000
intercept, slope = stats.multivariate_normal(m_7_1.params, m_7_1.cov_params()).rvs(n_samples).T
pred = intercept + slope * brains.mass_std.values.reshape(-1, 1)
n, ns = pred.shape
# PyMC3 does not have a way to calculate LPPD directly, so we use the approach from 7.14
sigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5
ll = np.zeros((n, ns))
for s in range(ns):
    logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])
    ll[:, s] = logprob
lppd = np.zeros(n)
for i in range(n):
    lppd[i] = logsumexp(ll[i]) - np.log(ns)
lppd
```
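For a single observation, the lppd computed above is just the log of the mean likelihood across posterior samples, evaluated stably with `logsumexp`; a toy check with made-up likelihoods:

```python
import numpy as np
from scipy.special import logsumexp

# Made-up likelihoods p(y_i | theta_s) for one observation, S = 3 samples
lik = np.array([0.2, 0.25, 0.3])
lppd_i = logsumexp(np.log(lik)) - np.log(len(lik))
print(np.isclose(lppd_i, np.log(lik.mean())))  # True
```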
#### Code 7.15
```
# make an lppd function that can be applied to all models (from code above)
def lppd(model, n_samples=1e4):
    n_samples = int(n_samples)
    pars = stats.multivariate_normal(model.params, model.cov_params()).rvs(n_samples).T
    dmat = dmatrix(
        model.model.data.design_info, brains, return_type="dataframe"
    ).values  # get model design matrix
    pred = dmat.dot(pars)
    n, ns = pred.shape
    # this approach for calculating lppd is from 7.14
    sigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5
    ll = np.zeros((n, ns))
    for s in range(ns):
        logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])
        ll[:, s] = logprob
    lppd = np.zeros(n)
    for i in range(n):
        lppd[i] = logsumexp(ll[i]) - np.log(ns)
    return lppd
# model 7_6 does not work with OLS because its covariance matrix is not finite.
lppds = np.array(list(map(lppd, models[:-1], [1000] * len(models[:-1]))))
lppds.sum(1)
```
#### Code 7.16
This relies on the `sim.train.test` function in the `rethinking` package. [This](https://github.com/rmcelreath/rethinking/blob/master/R/sim_train_test.R) is the original function.
The python port of this function below is from [Rethinking/Chp_06](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) Code 6.12.
```
def sim_train_test(N=20, k=3, rho=[0.15, -0.4], b_sigma=100):
    n_dim = 1 + len(rho)
    if n_dim < k:
        n_dim = k
    Rho = np.diag(np.ones(n_dim))
    Rho[0, 1:3:1] = rho
    i_lower = np.tril_indices(n_dim, -1)
    Rho[i_lower] = Rho.T[i_lower]
    x_train = stats.multivariate_normal.rvs(cov=Rho, size=N)
    x_test = stats.multivariate_normal.rvs(cov=Rho, size=N)
    mm_train = np.ones((N, 1))
    mm_train = np.concatenate([mm_train, x_train[:, 1:k]], axis=1)
    # Using pymc3
    with pm.Model() as m_sim:
        vec_V = pm.MvNormal(
            "vec_V",
            mu=0,
            cov=b_sigma * np.eye(n_dim),
            shape=(1, n_dim),
            testval=np.random.randn(1, n_dim) * 0.01,
        )
        mu = pm.Deterministic("mu", 0 + pm.math.dot(x_train, vec_V.T))
        y = pm.Normal("y", mu=mu, sd=1, observed=x_train[:, 0])
    with m_sim:
        trace_m_sim = pm.sample(return_inferencedata=True)
    vec = az.summary(trace_m_sim)["mean"][:n_dim]
    vec = np.array([i for i in vec]).reshape(n_dim, -1)
    dev_train = -2 * sum(stats.norm.logpdf(x_train, loc=np.matmul(x_train, vec), scale=1))
    mm_test = np.ones((N, 1))
    mm_test = np.concatenate([mm_test, x_test[:, 1 : k + 1]], axis=1)
    dev_test = -2 * sum(stats.norm.logpdf(x_test[:, 0], loc=np.matmul(mm_test, vec), scale=1))
    return np.mean(dev_train), np.mean(dev_test)
n = 20
tries = 10
param = 6
r = np.zeros(shape=(param - 1, 4))
for j in range(2, param + 1):
    print(j)
    # reset the accumulators for each parameter count, and pass k=j so the
    # number of parameters actually varies across the loop
    train = []
    test = []
    for i in range(1, tries + 1):
        tr, te = sim_train_test(N=n, k=j)
        train.append(tr), test.append(te)
    r[j - 2, :] = (
        np.mean(train),
        np.std(train, ddof=1),
        np.mean(test),
        np.std(test, ddof=1),
    )
```
#### Code 7.17
Does not apply because multi-threading is automatic in PyMC3.
#### Code 7.18
```
num_param = np.arange(2, param + 1)
plt.figure(figsize=(10, 6))
plt.scatter(num_param, r[:, 0], color="C0")
plt.xticks(num_param)
for j in range(param - 1):
    plt.vlines(
        num_param[j],
        r[j, 0] - r[j, 1],
        r[j, 0] + r[j, 1],
        color="mediumblue",
        zorder=-1,
        alpha=0.80,
    )
plt.scatter(num_param + 0.1, r[:, 2], facecolors="none", edgecolors="k")
for j in range(param - 1):
    plt.vlines(
        num_param[j] + 0.1,
        r[j, 2] - r[j, 3],
        r[j, 2] + r[j, 3],
        color="k",
        zorder=-2,
        alpha=0.70,
    )
dist = 0.20
plt.text(num_param[1] - dist, r[1, 0] - dist, "in", color="C0", fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] - dist, "out", color="k", fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] + r[1, 3] - dist, "+1 SD", color="k", fontsize=10)
plt.text(num_param[1] + dist, r[1, 2] - r[1, 3] - dist, "-1 SD", color="k", fontsize=10)
plt.xlabel("Number of parameters", fontsize=14)
plt.ylabel("Deviance", fontsize=14)
plt.title(f"N = {n}", fontsize=14)
plt.show()
```
These uncertainties are a *lot* larger than in the book... MCMC vs OLS again?
#### Code 7.19
7.19 to 7.25 transcribed directly from 6.15-6.20 in [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb).
```
data = pd.read_csv("Data/cars.csv", sep=",", index_col=0)
with pm.Model() as m:
    a = pm.Normal("a", mu=0, sd=100)
    b = pm.Normal("b", mu=0, sd=10)
    sigma = pm.Uniform("sigma", 0, 30)
    mu = pm.Deterministic("mu", a + b * data["speed"])
    dist = pm.Normal("dist", mu=mu, sd=sigma, observed=data["dist"])
    m = pm.sample(5000, tune=10000)
```
#### Code 7.20
```
n_samples = 1000
n_cases = data.shape[0]
logprob = np.zeros((n_cases, n_samples))
for s in range(0, n_samples):
    mu = m["a"][s] + m["b"][s] * data["speed"]
    p_ = stats.norm.logpdf(data["dist"], loc=mu, scale=m["sigma"][s])
    logprob[:, s] = p_
```
#### Code 7.21
```
n_cases = data.shape[0]
lppd = np.zeros(n_cases)
for a in range(n_cases):
    lppd[a] = logsumexp(logprob[a]) - np.log(n_samples)
```
#### Code 7.22
```
pWAIC = np.zeros(n_cases)
for i in range(n_cases):
    pWAIC[i] = np.var(logprob[i])
```
#### Code 7.23
```
-2 * (sum(lppd) - sum(pWAIC))
```
#### Code 7.24
```
waic_vec = -2 * (lppd - pWAIC)
(n_cases * np.var(waic_vec)) ** 0.5
```
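The pieces in Codes 7.20-7.24 can be condensed into one small helper. The sketch below runs on made-up log-likelihoods rather than the cars posterior, so the numbers are illustrative only:

```python
import numpy as np
from scipy.special import logsumexp

def waic_from_logprob(logprob):
    """WAIC and its standard error from an (n_cases, n_samples) log-likelihood matrix."""
    n_cases, n_samples = logprob.shape
    lppd = logsumexp(logprob, axis=1) - np.log(n_samples)   # log pointwise predictive density
    p_waic = np.var(logprob, axis=1)                        # effective number of parameters
    waic_vec = -2 * (lppd - p_waic)                         # pointwise WAIC contributions
    return waic_vec.sum(), (n_cases * np.var(waic_vec)) ** 0.5

rng = np.random.default_rng(0)
logprob = rng.normal(-1.0, 0.1, size=(50, 1000))  # toy values
waic, waic_se = waic_from_logprob(logprob)
print(waic > 0, np.isfinite(waic_se))
```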
#### Setup for Code 7.25+
Have to reproduce m6.6-m6.8 from Code 6.13-6.17 in Chapter 6
```
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N / 2)
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})

with pm.Model() as m_6_6:
    p = pm.Lognormal("p", 0, 0.25)
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_6_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_6_7:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_7_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_6_8:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    p = a + bt * d.treatment
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_8_trace = pm.sample(return_inferencedata=True)
```
#### Code 7.25
```
az.waic(m_6_7_trace, scale="deviance")
```
#### Code 7.26
```
compare_df = az.compare(
    {
        "m_6_6": m_6_6_trace,
        "m_6_7": m_6_7_trace,
        "m_6_8": m_6_8_trace,
    },
    method="pseudo-BMA",
    ic="waic",
    scale="deviance",
)
compare_df
```
#### Code 7.27
```
waic_m_6_7 = az.waic(m_6_7_trace, pointwise=True, scale="deviance")
waic_m_6_8 = az.waic(m_6_8_trace, pointwise=True, scale="deviance")
# pointwise values are stored in the waic_i attribute.
diff_m_6_7_m_6_8 = waic_m_6_7.waic_i - waic_m_6_8.waic_i
n = len(diff_m_6_7_m_6_8)
np.sqrt(n * np.var(diff_m_6_7_m_6_8)).values
```
#### Code 7.28
```
40.0 + np.array([-1, 1]) * 10.4 * 2.6
```
#### Code 7.29
```
az.plot_compare(compare_df);
```
#### Code 7.30
```
waic_m_6_6 = az.waic(m_6_6_trace, pointwise=True, scale="deviance")
diff_m6_6_m6_8 = waic_m_6_6.waic_i - waic_m_6_8.waic_i
n = len(diff_m6_6_m6_8)
np.sqrt(n * np.var(diff_m6_6_m6_8)).values
```
#### Code 7.31
dSE is calculated by compare above, but `rethinking` produces a pairwise comparison. This is not implemented in `arviz`, but we can hack it together:
```
dataset_dict = {"m_6_6": m_6_6_trace, "m_6_7": m_6_7_trace, "m_6_8": m_6_8_trace}
# compare all models
s0 = az.compare(dataset_dict, ic="waic", scale="deviance")["dse"]
# the output compares each model to the 'best' model - i.e. two models are compared to one.
# to complete a pair-wise comparison we need to compare the remaining two models.
# to do this, remove the 'best' model from the input data
del dataset_dict[s0.index[0]]
# re-run compare with the remaining two models
s1 = az.compare(dataset_dict, ic="waic", scale="deviance")["dse"]
# s0 compares two models to one model, and s1 compares the remaining two models to each other
# now we just need to wrangle them together!
# convert them both to dataframes, setting the name to the 'best' model in each `compare` output.
# (i.e. the name is the model that others are compared to)
df_0 = s0.to_frame(name=s0.index[0])
df_1 = s1.to_frame(name=s1.index[0])
# merge these dataframes to create a pairwise comparison
pd.merge(df_0, df_1, left_index=True, right_index=True)
```
**Note:** this works for three models, but will get increasingly hack-y with additional models. The function below can be applied to *n* models:
```
def pairwise_compare(dataset_dict, metric="dse", **kwargs):
    """
    Calculate pairwise comparison of models in dataset_dict.

    Parameters
    ----------
    dataset_dict : dict
        A dict containing two or more {'name': pymc3.backends.base.MultiTrace}
        items.
    metric : str
        The name of the metric to be calculated. Can be any valid column output
        by `arviz.compare`. Note that this may change depending on the **kwargs
        that are specified.
    kwargs
        Arguments passed to `arviz.compare`
    """
    data_dict = dataset_dict.copy()
    dicts = []
    while len(data_dict) > 1:
        c = az.compare(data_dict, **kwargs)[metric]
        dicts.append(c.to_frame(name=c.index[0]))
        del data_dict[c.index[0]]
    return pd.concat(dicts, axis=1)
dataset_dict = {"m_6_6": m_6_6_trace, "m_6_7": m_6_7_trace, "m_6_8": m_6_8_trace}
pairwise_compare(dataset_dict, metric="dse", ic="waic", scale="deviance")
```
#### Code 7.32
```
d = pd.read_csv("Data/WaffleDivorce.csv", delimiter=";")
d["A"] = stats.zscore(d["MedianAgeMarriage"])
d["D"] = stats.zscore(d["Divorce"])
d["M"] = stats.zscore(d["Marriage"])
with pm.Model() as m_5_1:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    mu = a + bA * d["A"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_1_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_5_2:
    a = pm.Normal("a", 0, 0.2)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_2_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_5_3:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bA * d["A"] + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_3_trace = pm.sample(return_inferencedata=True)
```
#### Code 7.33
```
az.compare(
    {"m_5_1": m_5_1_trace, "m_5_2": m_5_2_trace, "m_5_3": m_5_3_trace},
    scale="deviance",
)
```
#### Code 7.34
```
psis_m_5_3 = az.loo(m_5_3_trace, pointwise=True, scale="deviance")
waic_m_5_3 = az.waic(m_5_3_trace, pointwise=True, scale="deviance")
# Figure 7.10
plt.scatter(psis_m_5_3.pareto_k, waic_m_5_3.waic_i)
plt.xlabel("PSIS Pareto k")
plt.ylabel("WAIC");
# Figure 7.11
v = np.linspace(-4, 4, 100)
g = stats.norm(loc=0, scale=1)
t = stats.t(df=2, loc=0, scale=1)
fig, (ax, lax) = plt.subplots(1, 2, figsize=[8, 3.5])
ax.plot(v, g.pdf(v), color="b")
ax.plot(v, t.pdf(v), color="k")
lax.plot(v, -g.logpdf(v), color="b")
lax.plot(v, -t.logpdf(v), color="k");
```
#### Code 7.35
```
with pm.Model() as m_5_3t:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bA * d["A"] + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.StudentT("D", 2, mu, sigma, observed=d["D"])
    m_5_3t_trace = pm.sample(return_inferencedata=True)
az.loo(m_5_3t_trace, pointwise=True, scale="deviance")
az.plot_forest([m_5_3_trace, m_5_3t_trace], model_names=["m_5_3", "m_5_3t"], figsize=[6, 3.5]);
%load_ext watermark
%watermark -n -u -v -iv -w
```
```
import numpy as np
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def build_vocabulary(sentence_iterator, word_count_threshold=0, save_variables=False):  # borrowed this function from NeuralTalk
    print('preprocessing word counts and creating vocab based on word count threshold %d' % word_count_threshold)
    length_of_longest_sentence = np.max([len(x.split(' ')) for x in sentence_iterator])
    print('Length of the longest sentence is %s' % length_of_longest_sentence)
    word_counts = {}
    number_of_sentences = 0
    length_of_sentences = list()
    for sentence in sentence_iterator:
        number_of_sentences += 1
        length_of_sentences.append(len(sentence.lower().split(' ')))
        for current_word in sentence.lower().split(' '):
            word_counts[current_word] = word_counts.get(current_word, 0) + 1
    vocab = [current_word for current_word in word_counts if word_counts[current_word] >= word_count_threshold]
    print('filtered words from %d to %d' % (len(word_counts), len(vocab)))
    index_to_word_list = {}
    index_to_word_list[0] = '#END#'  # end token at the end of the sentence. make first dimension be end token
    word_to_index_list = {}
    word_to_index_list['#START#'] = 0  # make first vector be the start token
    current_index = 1
    for current_word in vocab:
        word_to_index_list[current_word] = current_index
        index_to_word_list[current_index] = current_word
        current_index += 1
    word_counts['#END#'] = number_of_sentences
    plt.subplot(2, 1, 1)
    plt.plot(length_of_sentences)
    plt.title('Sentence length distribution')
    plt.xlabel('Sample #')
    plt.ylabel('Length (words)')
    print(np.mean(length_of_sentences))
    print(np.max(length_of_sentences))
    print(np.min(length_of_sentences))
    print(np.median(length_of_sentences))
    if save_variables:
        print('Completed processing captions. Saving work now ...')
        word_to_index_path = 'word_to_index.p'
        index_to_word_path = 'index_to_word.p'
        word_count_path = 'word_count.p'
        pickle.dump(word_to_index_list, open(word_to_index_path, "wb"))
        pickle.dump(index_to_word_list, open(index_to_word_path, "wb"))
        pickle.dump(word_counts, open(word_count_path, "wb"))
    return word_to_index_list, index_to_word_list, word_counts
annotation_path = 'training_set_recipes.p'
annotation_data = pickle.load(open(annotation_path, "rb"))
captions = list(annotation_data.values())  # materialize so the sentences can be iterated more than once
build_vocabulary(captions)
```
| github_jupyter |
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
import random
import numpy as np
from src.codeGameSimulation.GameUr import GameSettings
from theory.helpers import labelLine,draw_squares,draw_circles,draw_stars,draw_path,draw_fives,draw_4fives,draw_4eyes,draw_steps
import gameBoardDisplay as gbd
from scipy import stats
# %config InlineBackend.figure_formats = ['svg']
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.style as mplstyle
from matplotlib.colors import LinearSegmentedColormap
import matplotlib.collections as collections
import matplotlib.patches as mpatches
import matplotlib.axes as maxes
from matplotlib import patheffects
mplstyle.use('fast')
mplstyle.use('default')
mpl.rcParams['figure.dpi'] = 30
mpl.rcParams['figure.figsize'] = [10, 20]
font = {'family' : 'sans-serif',  # 'normal' is not a valid font family and triggers findfont warnings
        # 'weight' : 'bold',
        'size'   : 30}
mpl.rc('font', **font)
colors = ["lightgreen", "yellow", "red"]
cmap = LinearSegmentedColormap.from_list("mycmap", colors)
def draw_gameboard(ax,noEnd=False,noStart=False, rotate=False,dimitry=False,clear=False):
if rotate:
draw_squares(ax,[1,2,3,4],[0,2],"prepare",clear)
draw_squares(ax,list(range(1,9)),[1],"fight",clear)
if dimitry:
draw_squares(ax,[7,8],[0,2],"fight",clear)
else:
draw_squares(ax,[7,8],[0,2],"retreat",clear)
draw_stars(ax,[1,7],[0,2])
draw_stars(ax,[4],[1])
if not clear:
if not noStart:
draw_circles(ax,[5],[0,2],"start")
if not noEnd:
draw_circles(ax,[6],[0,2],"ende")
def formatAxis(ax, xlim, ylim, xticksRange=(0, 9)):
    ax.set_yticks([0, 1, 2], ["A", "B", "C"])
    ax.set_xticks(range(*xticksRange), range(xticksRange[0] - 1, xticksRange[1] - 1))
    ax.set_aspect("equal", "box")
    ax.set_ylim(*ylim)
    ax.set_xlim(*xlim)
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)
    ax.spines["left"].set_color("gray")
    ax.spines["bottom"].set_color("gray")
figGB, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_gameboard(ax["p0"],rotate=True,clear=True)
formatAxis(ax["p0"],(.4,8.6),(-0.6,2.6))
figSimple, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_gameboard(ax["p0"],rotate=True)
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6]
x0 = y[:5] + [x_ - 0.2 for x_ in y[5:13]] + y[13:]
x1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:13]] + [x_ + 2 for x_ in y[13:]]
draw_path(ax["p0"], x, x0, "red")
draw_path(ax["p0"], x, x1, "green")
formatAxis(ax["p0"],(.4,8.6),(-0.6,2.6))
figFinkel, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_gameboard(ax["p0"],rotate=True,dimitry=True, noEnd=True)
draw_circles(ax["p0"],6,0,"ende")
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0,0,0]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,6]
x0 = y[:5] + [x_ - 0.2 for x_ in y[5:14]] + y[14:15]+ [x_ + .2 for x_ in y[15:]]
x1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:14]] + y[14:15]+ [x_ - .2 for x_ in y[15:]]
y0=x[:10]+[y_ + .2 for y_ in x[10:13]]+[y_ - .2 for y_ in x[13:16]] +x[16:]
y1=x[:10]+[y_ - .2 for y_ in x[10:13]]+[y_ + .2 for y_ in x[13:16]] +x[16:]
draw_path(ax["p0"], y0, x0, "red")
draw_path(ax["p0"], y1, x1, "green")
formatAxis(ax["p0"],(.4,8.6),(-0.6,2.6))
figDimitry, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_gameboard(ax["p0"],noEnd=True,noStart=True,rotate=True,dimitry=True,clear=True)
# draw_circles(ax["p0"],0,1,"ende")
y = [0,0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 2,2,1,1,1,1,1,1,1,1]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,7,6,5,4,3,2,1,0]
x0 = y[:5] + [x_ - 0.2 for x_ in y[5:12]]+y[12:17] + [x_ + 0.2 for x_ in y[17:]]
# draw_path(ax["p0"],x, x0, "red")
# draw_circles(ax["p0"],8,1,"turn")
draw_fives(ax["p0"],8,[0,2],"small")
draw_fives(ax["p0"],3,[0,2],"normal")
draw_fives(ax["p0"],[2,5,8],1,"normal")
draw_4fives(ax["p0"],[3,6],1)
draw_4eyes(ax["p0"],[2,4],[0,2])
draw_4eyes(ax["p0"],7,1)
draw_steps(ax["p0"],1,1)
formatAxis(ax["p0"],(.4,8.6),(-0.6,2.6))
figDimitryPath, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_gameboard(ax["p0"],noEnd=True,rotate=True,dimitry=True)
draw_circles(ax["p0"],0,1,"ende")
y = [0,0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 2,2,1,1,1,1,1,1,1,1]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,7,6,5,4,3,2,1,0]
x0 = y[:5] + [x_ - 0.2 for x_ in y[5:12]]+y[12:17] + [x_ + 0.2 for x_ in y[17:]]
draw_path(ax["p0"],x, x0, "red")
draw_circles(ax["p0"],8,0,"turn")
# draw_fives(ax["p0"],8,[0,2],"small")
# draw_fives(ax["p0"],3,[0,2],"normal")
# draw_fives(ax["p0"],[2,5,8],1,"normal")
# draw_4fives(ax["p0"],[3,6],1)
# draw_4eyes(ax["p0"],[2,4],[0,2])
# draw_4eyes(ax["p0"],7,1)
# draw_steps(ax["p0"],1,1)
formatAxis(ax["p0"],(-.6,8.6),(-0.6,2.6))
figFinkelGameBoard, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_circles(ax["p0"],5,[0,2],"start")
draw_squares(ax["p0"], [1, 2, 3, 4], [0, 2], "prepare", False)
draw_squares(ax["p0"],list(range(1,13)),[1],"fight",False)
draw_circles(ax["p0"],13,1,"ende")
draw_stars(ax["p0"],[4,8,12],1)
draw_stars(ax["p0"],1,[0,2])
formatAxis(ax["p0"],(.4,13.6),(-.6,2.6),(1,14))
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
x0 = y[:5] + [x_ - 0.2 for x_ in y[5:]]
x1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:]]
y0 = x[:10]+[y_ + .2 for y_ in x[10:13]]+[y_ - .2 for y_ in x[13:16]] + x[16:]
y1 = x[:10]+[y_ - .2 for y_ in x[10:13]]+[y_ + .2 for y_ in x[13:16]] + x[16:]
draw_path(ax["p0"], y0, x0, "red")
draw_path(ax["p0"], y1, x1, "green")
figSimple_singleside, ax = plt.subplot_mosaic([["p0"]], figsize=[15, 5], constrained_layout=True)
draw_circles(ax["p0"], 5, 0, "start")
draw_squares(ax["p0"], [1, 2, 3, 4], 0, "prepare", False)
draw_squares(ax["p0"], list(range(1, 9)), [1], "fight", False)
draw_squares(ax["p0"], list(range(7, 9)), 0, "retreat", False)
draw_circles(ax["p0"], 6, 0, "ende")
draw_stars(ax["p0"], 7, 0)
draw_stars(ax["p0"], 4, 1)
draw_stars(ax["p0"], 1, 0)
formatAxis(ax["p0"],(.4,8.6),(-0.6,1.6))
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
x = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6]
y0 = [y_ + 0.2 for y_ in y[:5]] + [y_ - 0.05 for y_ in y[5:13]] +[y_ + 0.2 for y_ in y[13:]]
y1 = [y_ -.2 for y_ in y[:5]] + [y_ + 0.05 for y_ in y[5:13]] + [y_ -.2 for y_ in y[13:]]
x0 = x[:4]+[x_+.2 for x_ in x[4:6]]+x[6:12]+[x_-.2 for x_ in x[12:14]]+x[14:]
x1 = x[:4]+[x_-.2 for x_ in x[4:6]]+x[6:12]+[x_+.2 for x_ in x[12:14]]+x[14:]
draw_path(ax["p0"], x0, y0, "red")
draw_path(ax["p0"], x1, y1, "green")
# ax["p0"].plot(.5,.5,marker="o",color="black",fillstyle="full",markersize=30,markeredgecolor="black")
# ax["p0"].plot(8.5, .5, marker="o", color="black",
# fillstyle="full", markersize=30, markeredgecolor="black")
figSimple_straight, ax = plt.subplot_mosaic(
[["p0"]], figsize=[35, 5], constrained_layout=True)
draw_circles(ax["p0"], 0, 0, "start")
draw_squares(ax["p0"], list(range(1, 5)), 0, "prepare", False)
draw_squares(ax["p0"], list(range(5, 13)), 0, "fight", False)
draw_squares(ax["p0"], list(range(13, 15)), 0, "retreat", False)
draw_circles(ax["p0"], 15, 0, "ende")
draw_stars(ax["p0"], [4,8,14], 0)
formatAxis(ax["p0"], (-.6, 15.6), (-0.6, 0.6),(0,16))
y = [0]*16
x = list(range(0,16))
y0 = [y_ + 0.2 for y_ in y[:5]] + [y_ + .05 for y_ in y[5:13]] + [y_ + 0.2 for y_ in y[13:]]
y1 = [y_ - .2 for y_ in y[:5]] + [y_ - .05 for y_ in y[5:13]] + [y_ - .2 for y_ in y[13:]]
draw_path(ax["p0"], x, y0, "red")
draw_path(ax["p0"], x, y1, "green")
ax["p0"].spines["left"].set_visible(False)
ax["p0"].set_yticks([0], [""])
ax["p0"].set_xticks(range(0, 16), range(0, 16))
# ax["p0"].plot(4.5,-.5,marker="o",color="black",fillstyle="full",markersize=30,markeredgecolor="black")
# ax["p0"].plot(12.5,-.5,marker="o",color="black",fillstyle="full",markersize=30,markeredgecolor="black")
figGB.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard.png",dpi=300,)
figFinkelGameBoard.savefig(
"../../tex/game_ur_ba_thesis/img/Grafiken/GameboardFinkel.png", dpi=300,)
figSimple.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/simplePathGameboard.png",dpi=300,)
figFinkel.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/FinkelPathGameboard.png",dpi=300,)
figDimitry.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/DimitryGameboard.png",dpi=300,)
figDimitryPath.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/DimitryPathGameboard.png",dpi=300,)
figSimple_singleside.savefig( "../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard_singleside.png", dpi=300,)
figSimple_straight.savefig("../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard_straight.png",dpi=300,)
y=[-1,1]*2
x=[-1]*2+[1]*2
list(zip(y,x))
```
| github_jupyter |
# Introduction to Decision Theory using Probabilistic Graphical Models
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/b/bb/Risk_aversion_curve.jpg" width="400px" height="300px" />
> So far, we have seen that probabilistic graphical models are useful for modeling situations that involve uncertainty. Furthermore, we will see in the next module how, using inference algorithms, we can also reach conclusions about the current situation from partial evidence: predictions.
>
> On the other hand, we not only want to obtain these conclusions (predictions), but also to make decisions on top of them.
>
> It turns out that we can actually use probabilistic graphical models to encode not only the uncertain situations, but also the decision making agents with all the policies they are allowed to implement and the possible utilities that one may obtain.
> **Objectives:**
> - To learn how to represent decision situations using PGMs.
> - To understand the maximum expected utility principle.
> - To learn how to measure the value of information when making a decision.
> **References:**
> - Probabilistic Graphical Models: Principles and Techniques, By Daphne Koller and Nir Friedman. Ch. 22 - 23.
> - Probabilistic Graphical Models Specialization, offered through Coursera. Prof. Daphne Koller.
<p style="text-align:right;"> Image retrieved from: https://upload.wikimedia.org/wikipedia/commons/b/bb/Risk_aversion_curve.jpg.</p>
___
# 1. Maximizing Expected Utility
The theoretical foundations of decision theory were established long before probabilistic graphical models came to life. The framework of *maximum expected utility* allows us to formulate and solve decision problems that involve uncertainty.
Before continuing, we should be clear that the **utility** is a numerical function that assigns numbers to the various possible outcomes, encoding the preferences of the agent. These numbers:
- Do not have meanings in themselves.
- We only know that the larger, the better, according to the preferences of the agent.
- Usually, we compare the utility of two outcomes by means of the $\Delta U$, which represents the strength of the "happiness" change from an outcome w.r.t. the other.
The outcomes we were talking about above can vary along *multiple dimensions*. One of those dimensions is often the monetary gain, but most of the settings consider other dimensions as well.
## 1.1. Problem formulation and maximum expected utility principle
A simple **decision making situation** $\mathcal{D}$ is defined by:
- A set of possible actions $A$, with $\mathrm{Val}(A)=\{a_1, \dots, a_k\}$.
- A set of possible states (RVs) $\bar{X}$, with $\mathrm{Val}(\bar{X})=\{\bar{x}_1, \dots, \bar{x}_n\}$.
- A conditional distribution $P(\bar{X}|A)$.
- A utility function $U(\bar{X}, A)$, which expresses the agent's preferences.
The **expected utility** on the above decision making situation, given that $A=a$, is
$$EU[\mathcal{D}[a]] = \sum_{\bar{X}}P(\bar{X}|a)U(\bar{X},a).$$
Furthermore, the **maximum expected utility (MEU)** principle states that we should choose the action that maximizes the expected utility
$$a^\ast = \arg\max_{a\in\mathrm{Val}(A)} EU[\mathcal{D}[a]].$$
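The two formulas above can be sketched in a few lines of plain numpy. The probability and utility tables below are made-up numbers, purely for illustration of the mechanics:

```python
import numpy as np

# Hypothetical two-action, three-state decision situation (illustrative numbers only).
P_X_given_A = np.array([[0.2, 0.5, 0.3],    # P(x | a_0) over the three states
                        [0.6, 0.3, 0.1]])   # P(x | a_1)
U = np.array([[ 1.0, 2.0, 4.0],             # U(x, a_0) for each state x
              [-3.0, 1.0, 9.0]])            # U(x, a_1)

EU = (P_X_given_A * U).sum(axis=1)  # EU[D[a]] = sum_x P(x|a) U(x, a), one entry per action
a_star = int(EU.argmax())           # MEU principle: choose the action with largest EU
print(EU, a_star)                   # a_0 wins here (EU ≈ 2.4 vs ≈ -0.6)
```

Each row pairs $P(\bar{x}|a)$ with $U(\bar{x},a)$; the element-wise product summed over states is exactly $EU[\mathcal{D}[a]]$.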
**How can we represent the above using PGMs?**
We can use the ideas we developed for PGMs to represent the decision making situations in a very interpretable way.
In this sense, we have:
- Random variables are represented by *ovals* and stand for the state.
- Actions are represented by *rectangles*.
- Utilities are represented by *diamonds*. These have no children.
**Example.** Consider the decision situation $\mathcal{D}$ where a graduate of the Master's Degree in Data Science is deciding whether to found a Data Science consultancy company or not.
While this person does not know exactly what the demand for consultancy services will be, he/she knows that the demand will be either:
- $m^0$: nonexistent, with probability $0.5$;
- $m^1$: moderate, with probability $0.3$;
- $m^2$: high, with probability $0.2$.
Moreover, he/she will obtain a utility $U(M, f^0)=0$ for $M=m^0, m^1, m^2$ in the case that he/she doesn't found the company, or the following utilities in the case that he/she does:
- $U(m^0, f^1)=-7$;
- $U(m^1, f^1)=5$;
- $U(m^2, f^1)=20$.
Let's represent the graphical model corresponding to this situation:
```
from IPython.display import Image
# First draw in the white board, then show (first_representation)
Image("figures/first_representation.png")
```
Then, according to this:
- What are the expected utilities for each action?
  - $EU[\mathcal{D}[f^0]]=0$
  - $EU[\mathcal{D}[f^1]]=0.5 \times (-7) + 0.3 \times 5 + 0.2 \times 20=2.$
- Which is the optimal action?
  - $f^1 = \arg \max_{f\in\{f^0, f^1\}} EU[\mathcal{D}[f]]$.
With `pgmpy`:
```
# Import pgmpy.factors.discrete.DiscreteFactor
from pgmpy.factors.discrete import DiscreteFactor
# Define factors P(M), U(M,F)
P_M = DiscreteFactor(variables=["M"],
cardinality=[3],
values=[0.5, 0.3, 0.2])
U_MF = DiscreteFactor(variables=["M", "F"],
cardinality=[3, 2],
values=[0, -7, 0, 5, 0, 20])
print(P_M)
print(U_MF)
# Find Expected Utility
EU = (P_M * U_MF).marginalize(variables=["M"],
inplace=False)
print(EU)
```
**Multiple utility nodes**
In the above example, we only had one utility node. However, we may include as many utility nodes as we want in order to reduce the number of parameters:
```
Image("figures/student_utility.png")
```
Where:
- $V_G$: Happiness with the grade itself.
- $V_Q$: Quality of life during studies.
- $V_S$: Value of getting a good job.
The total utility can be formulated as:
$$U=V_G+V_Q+V_S.$$
*Question.* If $|\mathrm{Val}(D)| = 2$, $|\mathrm{Val}(S)| = 2$, $|\mathrm{Val}(G)| = 3$, and $|\mathrm{Val}(J)| = 2$, how many parameters do you need to completely specify the utility?
- $|\mathrm{Val}(V_G)| = 3$; $|\mathrm{Val}(V_Q)| = 4$; $|\mathrm{Val}(V_S)| = 2$. We need $3+4+2=9$ parameters.
*Question.* How many if the utility weren't decomposed?
- $3\times 2 \times 2 \times 2 = 24$.
## 1.2. Information edges and decision rules
The influence diagrams we have depicted above also allow us to capture the notion of the information available to the agent when making a decision.
**Example.** In the example of the Master's graduate deciding whether or not to found his/her company, let's assume that he/she has the opportunity to carry out a survey measuring the overall market demand for Data Science consultancy before making the decision.
In this sense, the graph now looks like the following:
```
# First draw in the white board, then show (second_representation)
Image("figures/second_representation.png")
```
Hence, the agent can make its decision depending on the value of the survey, which is denoted by the presence of the edge.
Formally,
> *Definition.* A **decision rule** $\delta_A$ at an action node $A$ is a conditional probability $P(A|\mathrm{Pa}A)$ (a function that maps each instantiation of $\mathrm{Pa}A$ to a probability distribution $\delta_A$ over $\mathrm{Val}(A)$).
Given the above, **what is the expected utility with information?**
Following the same sort of ideas we get that
$$EU[\mathcal{D}[\delta_A]] = \sum_{\bar{X}, A}P_{\delta_A}(\bar{X},A)U(\bar{X},A),$$
where $P_{\delta_A}(\bar{X},A)$ is the joint probability distribution over $\bar{X}$ and $A$. The subindex $\delta_A$ indicates that this joint distribution depends on the choice of the decision rule $\delta_A$.
Now, following the MEU, the optimal decision rule is:
$$\delta_A^\ast = \arg \max_{\delta_A} EU[\mathcal{D}[\delta_A]],$$
and the MEU is
$$MEU(\mathcal{D}) = \max_{\delta_A} EU[\mathcal{D}[\delta_A]].$$
**How can we find optimal decision rules?**
In our entrepreneur example, we have that
\begin{align}
EU[\mathcal{D}[\delta_A]] &= \sum_{\bar{X}, A}P_{\delta_A}(\bar{X},A)U(\bar{X},A) \\
& = \sum_{M,S,F} P(M)P(S|M) \delta_F(F|S) U(M,F)\\
& = \sum_{S,F} \delta_F(F|S) \sum_M P(M)P(S|M)U(M,F)\\
& = \sum_{S,F} \delta_F(F|S) \mu(S,F)
\end{align}
(see in the whiteboard, then show equations)
Thus, let's calculate $\mu(S,F)$ using `pgmpy`:
```
print(P_M), print(U_MF)
# We already have P(M), and U(F,M). Define P(S|M)
P_S_given_M = DiscreteFactor(variables=["S", "M"],
cardinality=[3, 3],
values=[0.6, 0.3, 0.1, 0.3, 0.4, 0.4, 0.1, 0.3, 0.5])
# Compute mu(F,S)
mu_FS = (P_M * P_S_given_M * U_MF).marginalize(variables=["M"], inplace=False)
# Print mu(F,S)
print(mu_FS)
```
Following the MEU principle, we should select for each state (the Survey, in this case) the action that maximizes $\mu$.
In this case:
```
Image("figures/table.png")
```
Finally,
$$MEU[\mathcal{D}] = \sum_{S,F} \delta_F^\ast(F|S) \mu(S,F) = 0 + 1.15 + 2.1 = 3.25$$
```
print(mu_FS)
print(mu_FS.maximize(variables=["F"], inplace=False))
# Define optimal decision rule
delta_F_given_S = DiscreteFactor(variables=["F", "S"],
cardinality=[2, 3],
values=[1, 0, 0, 0, 1, 1])
print(delta_F_given_S)
# Obtain MEU
MEU = (delta_F_given_S * mu_FS).marginalize(variables=["F", "S"], inplace=False)
print(MEU)
```
Without this observation the MEU was 2; with it, the MEU has increased to 3.25, i.e., by more than 50%.
Nice, huh?
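As a sanity check on the numbers above, one can brute-force the optimal decision rule in plain numpy, without pgmpy: with 3 survey outcomes and 2 actions there are only $2^3 = 8$ deterministic rules to try. The tables are the ones used throughout this example:

```python
import itertools

import numpy as np

# Tables from this example: M (demand), S (survey), F (found / don't found).
P_M = np.array([0.5, 0.3, 0.2])                    # P(M)
P_S_given_M = np.array([[0.6, 0.3, 0.1],           # P(S = s | M = m), rows indexed by s
                        [0.3, 0.4, 0.4],
                        [0.1, 0.3, 0.5]])
U = np.array([[0.0, -7.0],                         # U(M = m, F = f), rows indexed by m
              [0.0,  5.0],
              [0.0, 20.0]])

# mu(s, f) = sum_m P(m) P(s|m) U(m, f)
mu = np.einsum("m,sm,mf->sf", P_M, P_S_given_M, U)

# A deterministic rule maps each survey outcome s to an action f; try all of them.
best_rule = max(itertools.product([0, 1], repeat=3),
                key=lambda rule: sum(mu[s, rule[s]] for s in range(3)))
meu = sum(mu[s, best_rule[s]] for s in range(3))
print(best_rule, round(meu, 2))  # -> (0, 1, 1) 3.25
```

The best rule found this way is exactly $\delta_F^\ast$ from the table: don't found for $s^0$, found for $s^1$ and $s^2$, with MEU $= 3.25$.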
___
# 2. Utility functions
## 2.1. Utility of money
We have used utility functions in all the first section assuming that they were known. In this section we study these functions in more detail.
**Utility functions** are a necessary tool that enables us to compare complex scenarios that involve uncertainty or risk.
The first thing that we should understand is that utility is not the same as expected payoff.
**Example.** An investor must decide between a participation in company A, where he would earn $\$3$ million without risk, and a participation in company B, where he would earn $\$4$ million with probability 0.8 and $\$0$ with probability 0.2.
```
Image("figures/utility_first.png")
```
Which one do you prefer?
- The risk-free one.
What is the expected payoff of company A?
- $\$3$ M.
What is the expected payoff of company B?
- $0.8 \times \$4$ M + $0.2 \times \$0$ M = $\$3.2$ M.
**Example.**
Another common example that reflects this fact is the well-known St. Petersburg Paradox:
- A fair coin is tossed repeatedly until it comes up Heads.
- Each time it doesn't come up Heads, the potential payoff is doubled.
- In this sense, if the coin comes up Heads in the $n$-th toss, then the payoff will be $\$2^n$.
How much are you willing to pay to enter this game?
- 20, 10, 1, 5.
However, if we compute the expected payoff, it is:
- $P(\text{comes up Heads in } n-\text{th toss}) = \frac{1}{2^n}$
$$E[\text{Payoff}] = \sum_{n=1}^{\infty}P(\text{comes up Heads in } n-\text{th toss}) \text{Payoff}(n) = \sum_{n=1}^{\infty} \frac{1}{2^n} 2^n = \sum_{n=1}^{\infty} 1 = \infty.$$
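A short simulation sketch makes the paradox tangible: the empirical mean payoff keeps drifting upward as the number of plays grows (roughly like $\log_2$ of the number of plays, though with huge variance), and never settles on a finite value:

```python
import random

def play(rng):
    """One round of the game: toss a fair coin until Heads; payoff 2**n if Heads on toss n."""
    n = 1
    while rng.random() < 0.5:   # this toss came up Tails, so keep tossing
        n += 1
    return 2 ** n

rng = random.Random(0)
for trials in (10**2, 10**4, 10**6):
    mean = sum(play(rng) for _ in range(trials)) / trials
    print(trials, mean)
```

This is why no finite entry fee feels obviously fair, even though the expectation is infinite.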
These two examples show that people do not always choose to maximize their expected monetary gain. This implies that the utility of money is not the money itself.
In fact, after several psychological studies, we now know that the utility of money for most people looks like
$$U(W) = \alpha + \beta \log(W + \gamma),$$
which is a nice *concave* function.
**What does this function look like?** (see the whiteboard).
**How does the actual shape of the curve relate to the attitude towards risk?**
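To see how concavity encodes risk aversion, we can plug the investor example into this utility with illustrative parameters $\alpha=0$, $\beta=1$, $\gamma=1$ (chosen only so that $U(0)$ is finite; they are assumptions, not values from the lecture):

```python
import math

def u(w, alpha=0.0, beta=1.0, gamma=1.0):
    """Log-utility of wealth w (in $M); gamma = 1 keeps U(0) finite."""
    return alpha + beta * math.log(w + gamma)

eu_a = u(3.0)                        # company A: a sure $3M
eu_b = 0.8 * u(4.0) + 0.2 * u(0.0)   # company B: $4M w.p. 0.8, $0 w.p. 0.2
ce_b = math.exp(eu_b) - 1.0          # certainty equivalent: sure amount with the same utility as B
print(eu_a > eu_b, round(ce_b, 2))   # -> True 2.62
```

Even though company B has the higher expected payoff ($\$3.2$ M vs. $\$3$ M), its expected utility is lower, and its certainty equivalent is only about $\$2.62$ M: a log-utility agent would trade the gamble for any sure amount above that.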
## 2.2. Utility of multiple attributes
All the attributes affecting the utility must be integrated into one utility function.
This may be a difficult task, since it can take us into complex territory far beyond math, probability, and graphs.
- For instance, how do we compare human life with money?
- A low-cost airline is considering not running maintenance checks on its aircraft at every arrival.
- If you have a car, you don't change the tires that often (e.g., every 3 months).
There have been several attempts to address this problem:
- Micromorts: $\frac{1}{10^6}$ chance of death.
- [QALY](https://en.wikipedia.org/wiki/Quality-adjusted_life_year).
# 3. Value of perfect information
We used influence diagrams to make decisions given a set of observations.
Another type of question that may arise is: **which observations should I make before making a decision?**
- Initially one may think that the more information, the better (because information is power).
- But the answer to this question is far from being that simple.
For instance:
- In our entrepreneur example, we saw that including the information from the survey increased the MEU significantly. However, we did not take into account the cost of performing that survey. What if that cost makes the monetary gains of the company negative or too small?
- Medical diagnosis relies on tests. Some of these tests are painful, risky and/or very expensive.
A notion that allows us to address this question is the **Value of Perfect Information**.
- The value of perfect information $\mathrm{VPI}(A|\bar{X})$ is the value (in utility units) of observing $\bar{X}$ before choosing an action at the node $A$.
- If $\mathcal{D}$ is the original influence diagram, and
- $\mathcal{D}_{\bar{X}\to A}$ is the influence diagram with the edge(s) $\bar{X}\to A$ added,
- then
$$\mathrm{VPI}(A|\bar{X}) = MEU(\mathcal{D}_{\bar{X}\to A}) - MEU(\mathcal{D}).$$
In the entrepreneur example,
$$\mathrm{VPI}(F|S) = MEU(\mathcal{D}_{S\to F}) - MEU(\mathcal{D})=3.25 - 2 = 1.25.$$
> *Theorem.* The value of perfect information satisfies:
>
> (i) $\mathrm{VPI}(A|\bar{X})\geq 0$.
>
> (ii) $\mathrm{VPI}(A|\bar{X})= 0$ if and only if the optimal decision rule for $\mathcal{D}$ is also optimal for $\mathcal{D}_{\bar{X}\to A}$.
This theorem essentially says that information is valuable if and only if it changes the agent's decision in at least one case.
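For intuition, the VPI of the entrepreneur example can be checked in a few lines of plain numpy (a sketch reusing the tables from the first section, rather than pgmpy):

```python
import numpy as np

# Same tables as the entrepreneur example.
P_M = np.array([0.5, 0.3, 0.2])
P_S_given_M = np.array([[0.6, 0.3, 0.1],
                        [0.3, 0.4, 0.4],
                        [0.1, 0.3, 0.5]])              # rows: survey outcome s, cols: demand m
U = np.array([[0.0, -7.0], [0.0, 5.0], [0.0, 20.0]])   # rows: m, cols: action f

meu_without = (P_M @ U).max()                          # no survey: best single action
mu = np.einsum("m,sm,mf->sf", P_M, P_S_given_M, U)     # mu(s, f)
meu_with = mu.max(axis=1).sum()                        # best action per survey outcome
vpi = meu_with - meu_without
print(round(meu_without, 2), round(meu_with, 2), round(vpi, 2))  # -> 2.0 3.25 1.25
```

The difference of the two MEUs is exactly $\mathrm{VPI}(F|S) = 3.25 - 2 = 1.25$, as computed above.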
**Example.** Consider the case that you are interested in two job offers in two different companies. Furthermore, these two companies are startups and both are looking for funding, which highly depends on the organizational quality of the company.
This situation can be modeled as:
```
Image("figures/vpi.png")
```
Let's find the MEU using `pgmpy`:
```
# Define factors
P_C1 = DiscreteFactor(variables=["C1"],
cardinality=[3],
values=[0.1, 0.2, 0.7])
P_C2 = DiscreteFactor(variables=["C2"],
cardinality=[3],
values=[0.4, 0.5, 0.1])
P_F1_given_C1 = DiscreteFactor(variables=["F1", "C1"],
cardinality=[2, 3],
values=[0.9, 0.6, 0.1, 0.1, 0.4, 0.9])
P_F2_given_C2 = DiscreteFactor(variables=["F2", "C2"],
cardinality=[2, 3],
values=[0.9, 0.6, 0.1, 0.1, 0.4, 0.9])
U_F1F2D = DiscreteFactor(variables=["F1", "F2", "D"],
cardinality=[2, 2, 2],
values=[0, 0, 0, 1, 1, 0, 1, 1])
# Obtain Expected utility
EU = (P_C1 * P_C2 * P_F1_given_C1 * P_F2_given_C2 * U_F1F2D).marginalize(variables=[
"C1", "C2", "F1", "F2"], inplace=False)
# Print EU
print(EU)
# Obtain MEU(D)
MEU = EU.maximize(variables=["D"], inplace=False)
print(MEU)
```
Now, let's say that a friend of yours already works in the Company 2, so he informs you about the organizational status of that company. What is the value of that information?
```
Image("figures/vpi2.png")
# Obtain the factor mu(D, C2)
mu_DC2 = (P_C1 * P_C2 * P_F1_given_C1 * P_F2_given_C2 * U_F1F2D).marginalize(
variables=["C1", "F1", "F2"], inplace=False
)
# Print
print(mu_DC2)
# Select optimal decision
delta_D_given_C2 = DiscreteFactor(variables=["C2", "D"],
cardinality=[3, 2],
values=[1, 0, 1, 0, 0, 1])
print(delta_D_given_C2)
# Obtain MEU(D_C2->D)
MEU_ = (delta_D_given_C2 * mu_DC2).marginalize(variables=["D", "C2"], inplace=False)
print(MEU_)
# Obtain VPI
VPI = MEU_.values - MEU.values
VPI
```
# Announcements
## Exam of module 1.
<script>
$(document).ready(function(){
$('div.prompt').hide();
$('div.back-to-top').hide();
$('nav#menubar').hide();
$('.breadcrumb').hide();
$('.hidden-print').hide();
});
</script>
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| github_jupyter |
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ================================
```
<img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
# Exporting Ranking Models
In this example notebook we demonstrate how to export (save) NVTabular `workflow` and a `ranking model` for model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library.
Learning Objectives:
- Export NVTabular workflow for model deployment
- Export TensorFlow DLRM model for model deployment
We will follow the steps below:
- Prepare the data with NVTabular and export NVTabular workflow
- Train a DLRM model with Merlin Models and export the trained model
## Importing Libraries
Let's start with importing the libraries that we'll use in this notebook.
```
import os
import nvtabular as nvt
from nvtabular.ops import *
from merlin.models.utils.example_utils import workflow_fit_transform
from merlin.schema.tags import Tags
import merlin.models.tf as mm
from merlin.io.dataset import Dataset
import tensorflow as tf
```
## Feature Engineering with NVTabular
We use the synthetic train and test datasets generated by mimicking the real [Ali-CCP: Alibaba Click and Conversion Prediction](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1) dataset to build our recommender system ranking models.
If you would like to use the real Ali-CCP dataset instead, you can download the training and test datasets from [tianchi.aliyun.com](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1). You can then use the [get_aliccp()](https://github.com/NVIDIA-Merlin/models/blob/main/merlin/datasets/ecommerce/aliccp/dataset.py#L43) function to curate the raw csv files and save them as parquet files.
```
from merlin.datasets.synthetic import generate_data
DATA_FOLDER = os.environ.get("DATA_FOLDER", "/workspace/data/")
NUM_ROWS = os.environ.get("NUM_ROWS", 1000000)
train, valid = generate_data("aliccp-raw", int(NUM_ROWS), set_sizes=(0.7, 0.3))
# save the datasets as parquet files
train.to_ddf().to_parquet(os.path.join(DATA_FOLDER, "train"))
valid.to_ddf().to_parquet(os.path.join(DATA_FOLDER, "valid"))
```
Let's define our input and output paths.
```
train_path = os.path.join(DATA_FOLDER, "train", "part.0.parquet")
valid_path = os.path.join(DATA_FOLDER, "valid", "part.0.parquet")
output_path = os.path.join(DATA_FOLDER, "processed")
```
After we execute `fit()` and `transform()` functions on the raw dataset applying the operators defined in the NVTabular workflow pipeline below, the processed parquet files are saved to `output_path`.
```
%%time
user_id = ["user_id"] >> Categorify() >> TagAsUserID()
item_id = ["item_id"] >> Categorify() >> TagAsItemID()
targets = ["click"] >> AddMetadata(tags=[Tags.BINARY_CLASSIFICATION, "target"])
item_features = ["item_category", "item_shop", "item_brand"] >> Categorify() >> TagAsItemFeatures()
user_features = (
[
"user_shops",
"user_profile",
"user_group",
"user_gender",
"user_age",
"user_consumption_2",
"user_is_occupied",
"user_geography",
"user_intentions",
"user_brands",
"user_categories",
]
>> Categorify()
>> TagAsUserFeatures()
)
outputs = user_id + item_id + item_features + user_features + targets
workflow = nvt.Workflow(outputs)
train_dataset = nvt.Dataset(train_path)
valid_dataset = nvt.Dataset(valid_path)
workflow.fit(train_dataset)
workflow.transform(train_dataset).to_parquet(output_path=output_path + "/train/")
workflow.transform(valid_dataset).to_parquet(output_path=output_path + "/valid/")
```
We save NVTabular `workflow` model in the current working directory.
```
workflow.save("workflow")
```
Let's check out our saved workflow model folder.
```
!apt-get update
!apt-get install tree
!tree ./workflow
```
## Build and Train a DLRM model
In this example, we build, train, and export a Deep Learning Recommendation Model [(DLRM)](https://arxiv.org/abs/1906.00091) architecture. To learn more about how to train different deep learning models, how to easily transition from one model to another, and the seamless integration between data preparation and model training, visit the [03-Exploring-different-models.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/03-Exploring-different-models.ipynb) notebook.
The NVTabular workflow above exports a schema file, schema.pbtxt, for our processed dataset. To learn more about the schema object, the schema file, and `tags`, you can explore [02-Merlin-Models-and-NVTabular-integration.ipynb](02-Merlin-Models-and-NVTabular-integration.ipynb).
```
# define train and valid dataset objects
train = Dataset(os.path.join(output_path, "train", "*.parquet"))
valid = Dataset(os.path.join(output_path, "valid", "*.parquet"))
# define schema object
schema = train.schema
target_column = schema.select_by_tag(Tags.TARGET).column_names[0]
target_column
model = mm.DLRMModel(
schema,
embedding_dim=64,
bottom_block=mm.MLPBlock([128, 64]),
top_block=mm.MLPBlock([128, 64, 32]),
prediction_tasks=mm.BinaryClassificationTask(target_column, metrics=[tf.keras.metrics.AUC()]),
)
%%time
model.compile("adam", run_eagerly=False)
model.fit(train, validation_data=valid, batch_size=512)
```
### Save model
The last step of a machine learning (ML)/deep learning (DL) pipeline is to deploy the ETL workflow and the saved model to production. In the production setting, we want to transform the input data as was done during training (ETL). We need to apply the same mean/std normalization to continuous features and use the same categorical mapping to convert the categories to continuous integers before we use the DL model for a prediction. Therefore, we can easily deploy the NVTabular workflow together with the TensorFlow model as an ensemble model to Triton Inference Server using the [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. The ensemble model guarantees that the same transformations are applied to the raw inputs.
Let's save our DLRM model.
```
model.save("dlrm")
```
We have the NVTabular workflow and the DLRM model exported; now it is time to move on to the next step: model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems).
### Deploying the model with Merlin Systems
We trained and exported our ranking model and NVTabular workflow. In the next step, we will learn how to deploy our trained DLRM model into [Triton Inference Server](https://github.com/triton-inference-server/server) with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. NVIDIA Triton Inference Server (TIS) simplifies the deployment of AI models at scale in production. TIS provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. It supports a number of different machine learning frameworks such as TensorFlow and PyTorch.
For the next step, visit the [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library and execute the [Serving-Ranking-Models-With-Merlin-Systems](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/Getting_Started/Serving-Ranking-Models-With-Merlin-Systems.ipynb) notebook to deploy our saved DLRM and NVTabular workflow models as an ensemble to TIS and obtain prediction results for a given request. In doing so, you need to mount the saved DLRM and NVTabular workflow to the inference container following the instructions in the [README.md](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/Getting_Started/README.md).
# MaxEnt Gridworld
> Implementation of a MaxEnt-IRL model for the FBER recommendation system, based on the approach of Ziebart et al. (2008): Maximum Entropy Inverse Reinforcement Learning.
```
import copy
import warnings
warnings.filterwarnings('ignore')
"""
Find the value function associated with a policy. Based on Sutton & Barto, 1998.
Matthew Alger, 2015
matthew.alger@anu.edu.au
"""
import numpy as np
def value(policy, n_states, transition_probabilities, reward, discount,
threshold=1e-2):
"""
Find the value function associated with a policy.
policy: List of action ints for each state.
n_states: Number of states. int.
transition_probabilities: Function taking (state, action, state) to
transition probabilities.
reward: Vector of rewards for each state.
discount: MDP discount factor. float.
threshold: Convergence threshold, default 1e-2. float.
-> Array of values for each state
"""
v = np.zeros(n_states)
diff = float("inf")
while diff > threshold:
diff = 0
for s in range(n_states):
vs = v[s]
a = policy[s]
v[s] = sum(transition_probabilities[s, a, k] *
(reward[k] + discount * v[k])
for k in range(n_states))
diff = max(diff, abs(vs - v[s]))
return v
def optimal_value(n_states, n_actions, transition_probabilities, reward,
discount, threshold=1e-2):
"""
Find the optimal value function.
n_states: Number of states. int.
n_actions: Number of actions. int.
transition_probabilities: Function taking (state, action, state) to
transition probabilities.
reward: Vector of rewards for each state.
discount: MDP discount factor. float.
threshold: Convergence threshold, default 1e-2. float.
-> Array of values for each state
"""
v = np.zeros(n_states)
diff = float("inf")
while diff > threshold:
diff = 0
for s in range(n_states):
max_v = float("-inf")
for a in range(n_actions):
tp = transition_probabilities[s, a, :]
max_v = max(max_v, np.dot(tp, reward + discount*v))
new_diff = abs(v[s] - max_v)
if new_diff > diff:
diff = new_diff
v[s] = max_v
return v
def find_policy(n_states, n_actions, transition_probabilities, reward, discount,
threshold=1e-2, v=None, stochastic=True):
"""
Find the optimal policy.
n_states: Number of states. int.
n_actions: Number of actions. int.
transition_probabilities: Function taking (state, action, state) to
transition probabilities.
reward: Vector of rewards for each state.
discount: MDP discount factor. float.
threshold: Convergence threshold, default 1e-2. float.
v: Value function (if known). Default None.
stochastic: Whether the policy should be stochastic. Default True.
-> Action probabilities for each state or action int for each state
(depending on stochasticity).
"""
if v is None:
v = optimal_value(n_states, n_actions, transition_probabilities, reward,
discount, threshold)
if stochastic:
# Get Q using equation 9.2 from Ziebart's thesis.
Q = np.zeros((n_states, n_actions))
for i in range(n_states):
for j in range(n_actions):
p = transition_probabilities[i, j, :]
Q[i, j] = p.dot(reward + discount*v)
Q -= Q.max(axis=1).reshape((n_states, 1)) # For numerical stability.
Q = np.exp(Q)/np.exp(Q).sum(axis=1).reshape((n_states, 1))
return Q
def _policy(s):
return max(list(range(n_actions)),
key=lambda a: sum(transition_probabilities[s, a, k] *
(reward[k] + discount * v[k])
for k in range(n_states)))
policy = np.array([_policy(s) for s in range(n_states)])
return policy
"""
Implements the gridworld MDP.
Matthew Alger, 2015
matthew.alger@anu.edu.au
"""
import numpy as np
import numpy.random as rn
class Gridworld(object):
"""
Gridworld MDP.
"""
def __init__(self, grid_size, wind, discount):
"""
grid_size: Grid size. int.
wind: Chance of moving randomly. float.
discount: MDP discount. float.
-> Gridworld
"""
self.actions = ((1, 0), (0, 1), (-1, 0), (0, -1))
self.n_actions = len(self.actions)
self.n_states = grid_size**2
self.grid_size = grid_size
self.wind = wind
self.discount = discount
# Preconstruct the transition probability array.
self.transition_probability = np.array(
[[[self._transition_probability(i, j, k)
for k in range(self.n_states)]
for j in range(self.n_actions)]
for i in range(self.n_states)])
def __str__(self):
return "Gridworld({}, {}, {})".format(self.grid_size, self.wind,
self.discount)
def feature_vector(self, i, feature_map="ident"):
"""
Get the feature vector associated with a state integer.
i: State int.
feature_map: Which feature map to use (default ident). String in {ident,
coord, proxi}.
-> Feature vector.
"""
if feature_map == "coord":
f = np.zeros(self.grid_size)
x, y = i % self.grid_size, i // self.grid_size
f[x] += 1
f[y] += 1
return f
if feature_map == "proxi":
f = np.zeros(self.n_states)
x, y = i % self.grid_size, i // self.grid_size
for b in range(self.grid_size):
for a in range(self.grid_size):
dist = abs(x - a) + abs(y - b)
f[self.point_to_int((a, b))] = dist
return f
# Assume identity map.
f = np.zeros(self.n_states)
f[i] = 1
return f
def feature_matrix(self, feature_map="ident"):
"""
Get the feature matrix for this gridworld.
feature_map: Which feature map to use (default ident). String in {ident,
coord, proxi}.
-> NumPy array with shape (n_states, d_states).
"""
features = []
for n in range(self.n_states):
f = self.feature_vector(n, feature_map)
features.append(f)
return np.array(features)
def int_to_point(self, i):
"""
Convert a state int into the corresponding coordinate.
i: State int.
-> (x, y) int tuple.
"""
return (i % self.grid_size, i // self.grid_size)
def point_to_int(self, p):
"""
Convert a coordinate into the corresponding state int.
p: (x, y) tuple.
-> State int.
"""
return p[0] + p[1]*self.grid_size
def neighbouring(self, i, k):
"""
Get whether two points neighbour each other. Also returns true if they
are the same point.
i: (x, y) int tuple.
k: (x, y) int tuple.
-> bool.
"""
return abs(i[0] - k[0]) + abs(i[1] - k[1]) <= 1
def _transition_probability(self, i, j, k):
"""
Get the probability of transitioning from state i to state k given
action j.
i: State int.
j: Action int.
k: State int.
-> p(s_k | s_i, a_j)
"""
xi, yi = self.int_to_point(i)
xj, yj = self.actions[j]
xk, yk = self.int_to_point(k)
if not self.neighbouring((xi, yi), (xk, yk)):
return 0.0
# Is k the intended state to move to?
if (xi + xj, yi + yj) == (xk, yk):
return 1 - self.wind + self.wind/self.n_actions
# If these are not the same point, then we can move there by wind.
if (xi, yi) != (xk, yk):
return self.wind/self.n_actions
# If these are the same point, we can only move here by either moving
# off the grid or being blown off the grid. Are we on a corner or not?
if (xi, yi) in {(0, 0), (self.grid_size-1, self.grid_size-1),
(0, self.grid_size-1), (self.grid_size-1, 0)}:
# Corner.
# Can move off the edge in two directions.
# Did we intend to move off the grid?
if not (0 <= xi + xj < self.grid_size and
0 <= yi + yj < self.grid_size):
# We intended to move off the grid, so we have the regular
# success chance of staying here plus an extra chance of blowing
# onto the *other* off-grid square.
return 1 - self.wind + 2*self.wind/self.n_actions
else:
# We can blow off the grid in either direction only by wind.
return 2*self.wind/self.n_actions
else:
# Not a corner. Is it an edge?
if (xi not in {0, self.grid_size-1} and
yi not in {0, self.grid_size-1}):
# Not an edge.
return 0.0
# Edge.
# Can only move off the edge in one direction.
# Did we intend to move off the grid?
if not (0 <= xi + xj < self.grid_size and
0 <= yi + yj < self.grid_size):
# We intended to move off the grid, so we have the regular
# success chance of staying here.
return 1 - self.wind + self.wind/self.n_actions
else:
# We can blow off the grid only by wind.
return self.wind/self.n_actions
def reward(self, state_int):
"""
Reward for being in state state_int.
state_int: State integer. int.
-> Reward.
"""
if state_int == self.n_states - 1:
return 1
return 0
def average_reward(self, n_trajectories, trajectory_length, policy):
"""
Calculate the average total reward obtained by following a given policy
over n_paths paths.
policy: Map from state integers to action integers.
n_trajectories: Number of trajectories. int.
trajectory_length: Length of an episode. int.
-> Average reward, standard deviation.
"""
trajectories = self.generate_trajectories(n_trajectories,
trajectory_length, policy)
rewards = [[r for _, _, r in trajectory] for trajectory in trajectories]
rewards = np.array(rewards)
# Add up all the rewards to find the total reward.
total_reward = rewards.sum(axis=1)
# Return the average reward and standard deviation.
return total_reward.mean(), total_reward.std()
def optimal_policy(self, state_int):
"""
The optimal policy for this gridworld.
state_int: What state we are in. int.
-> Action int.
"""
sx, sy = self.int_to_point(state_int)
        # Interior point: both right (action 0) and down (action 1) are optimal.
        if sx < self.grid_size - 1 and sy < self.grid_size - 1:
            return rn.randint(0, 2)
if sx < self.grid_size-1:
return 0
if sy < self.grid_size-1:
return 1
raise ValueError("Unexpected state.")
def optimal_policy_deterministic(self, state_int):
"""
Deterministic version of the optimal policy for this gridworld.
state_int: What state we are in. int.
-> Action int.
"""
sx, sy = self.int_to_point(state_int)
if sx < sy:
return 0
return 1
def generate_trajectories(self, n_trajectories, trajectory_length, policy,
random_start=False):
"""
Generate n_trajectories trajectories with length trajectory_length,
following the given policy.
n_trajectories: Number of trajectories. int.
trajectory_length: Length of an episode. int.
policy: Map from state integers to action integers.
random_start: Whether to start randomly (default False). bool.
-> [[(state int, action int, reward float)]]
"""
trajectories = []
for _ in range(n_trajectories):
if random_start:
sx, sy = rn.randint(self.grid_size), rn.randint(self.grid_size)
else:
sx, sy = 0, 0
trajectory = []
for _ in range(trajectory_length):
if rn.random() < self.wind:
action = self.actions[rn.randint(0, 4)]
else:
# Follow the given policy.
action = self.actions[policy(self.point_to_int((sx, sy)))]
if (0 <= sx + action[0] < self.grid_size and
0 <= sy + action[1] < self.grid_size):
next_sx = sx + action[0]
next_sy = sy + action[1]
else:
next_sx = sx
next_sy = sy
state_int = self.point_to_int((sx, sy))
action_int = self.actions.index(action)
next_state_int = self.point_to_int((next_sx, next_sy))
reward = self.reward(next_state_int)
trajectory.append((state_int, action_int, reward))
sx = next_sx
sy = next_sy
trajectories.append(trajectory)
return np.array(trajectories)
# Quick unit test using gridworld.
gw = Gridworld(3, 0.3, 0.9)
v = value([gw.optimal_policy_deterministic(s) for s in range(gw.n_states)],
gw.n_states,
gw.transition_probability,
[gw.reward(s) for s in range(gw.n_states)],
gw.discount)
assert np.isclose(v,
[5.7194282, 6.46706692, 6.42589811,
6.46706692, 7.47058224, 7.96505174,
6.42589811, 7.96505174, 8.19268666], 1).all()
opt_v = optimal_value(gw.n_states,
gw.n_actions,
gw.transition_probability,
[gw.reward(s) for s in range(gw.n_states)],
gw.discount)
assert np.isclose(v, opt_v).all()
"""
Implements maximum entropy inverse reinforcement learning (Ziebart et al., 2008)
Matthew Alger, 2015
matthew.alger@anu.edu.au
"""
from itertools import product
import numpy as np
import numpy.random as rn
def irl(feature_matrix, n_actions, discount, transition_probability,
trajectories, epochs, learning_rate):
"""
Find the reward function for the given trajectories.
feature_matrix: Matrix with the nth row representing the nth state. NumPy
array with shape (N, D) where N is the number of states and D is the
dimensionality of the state.
n_actions: Number of actions A. int.
discount: Discount factor of the MDP. float.
transition_probability: NumPy array mapping (state_i, action, state_k) to
the probability of transitioning from state_i to state_k under action.
Shape (N, A, N).
trajectories: 3D array of state/action pairs. States are ints, actions
are ints. NumPy array with shape (T, L, 2) where T is the number of
trajectories and L is the trajectory length.
epochs: Number of gradient descent steps. int.
learning_rate: Gradient descent learning rate. float.
-> Reward vector with shape (N,).
"""
n_states, d_states = feature_matrix.shape
# Initialise weights.
alpha = rn.uniform(size=(d_states,))
# Calculate the feature expectations \tilde{phi}.
feature_expectations = find_feature_expectations(feature_matrix,
trajectories)
# Gradient descent on alpha.
for i in range(epochs):
# print("i: {}".format(i))
r = feature_matrix.dot(alpha)
expected_svf = find_expected_svf(n_states, r, n_actions, discount,
transition_probability, trajectories)
grad = feature_expectations - feature_matrix.T.dot(expected_svf)
alpha += learning_rate * grad
return feature_matrix.dot(alpha).reshape((n_states,))
def find_svf(n_states, trajectories):
"""
Find the state visitation frequency from trajectories.
n_states: Number of states. int.
trajectories: 3D array of state/action pairs. States are ints, actions
are ints. NumPy array with shape (T, L, 2) where T is the number of
trajectories and L is the trajectory length.
-> State visitation frequencies vector with shape (N,).
"""
svf = np.zeros(n_states)
for trajectory in trajectories:
for state, _, _ in trajectory:
svf[state] += 1
svf /= trajectories.shape[0]
return svf
def find_feature_expectations(feature_matrix, trajectories):
"""
Find the feature expectations for the given trajectories. This is the
average path feature vector.
feature_matrix: Matrix with the nth row representing the nth state. NumPy
array with shape (N, D) where N is the number of states and D is the
dimensionality of the state.
trajectories: 3D array of state/action pairs. States are ints, actions
are ints. NumPy array with shape (T, L, 2) where T is the number of
trajectories and L is the trajectory length.
-> Feature expectations vector with shape (D,).
"""
feature_expectations = np.zeros(feature_matrix.shape[1])
for trajectory in trajectories:
for state, _, _ in trajectory:
feature_expectations += feature_matrix[state]
feature_expectations /= trajectories.shape[0]
return feature_expectations
def find_expected_svf(n_states, r, n_actions, discount,
transition_probability, trajectories):
"""
Find the expected state visitation frequencies using algorithm 1 from
Ziebart et al. 2008.
n_states: Number of states N. int.
alpha: Reward. NumPy array with shape (N,).
n_actions: Number of actions A. int.
discount: Discount factor of the MDP. float.
transition_probability: NumPy array mapping (state_i, action, state_k) to
the probability of transitioning from state_i to state_k under action.
Shape (N, A, N).
trajectories: 3D array of state/action pairs. States are ints, actions
are ints. NumPy array with shape (T, L, 2) where T is the number of
trajectories and L is the trajectory length.
-> Expected state visitation frequencies vector with shape (N,).
"""
n_trajectories = trajectories.shape[0]
trajectory_length = trajectories.shape[1]
policy = find_policy(n_states, r, n_actions, discount,
transition_probability)
# policy = find_policy(n_states, n_actions,
# transition_probability, r, discount)
start_state_count = np.zeros(n_states)
for trajectory in trajectories:
start_state_count[trajectory[0, 0]] += 1
p_start_state = start_state_count/n_trajectories
expected_svf = np.tile(p_start_state, (trajectory_length, 1)).T
for t in range(1, trajectory_length):
expected_svf[:, t] = 0
for i, j, k in product(list(range(n_states)), list(range(n_actions)), list(range(n_states))):
expected_svf[k, t] += (expected_svf[i, t-1] *
policy[i, j] * # Stochastic policy
transition_probability[i, j, k])
return expected_svf.sum(axis=1)
def softmax(x1, x2):
"""
Soft-maximum calculation, from algorithm 9.2 in Ziebart's PhD thesis.
x1: float.
x2: float.
-> softmax(x1, x2)
"""
max_x = max(x1, x2)
min_x = min(x1, x2)
return max_x + np.log(1 + np.exp(min_x - max_x))
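# Why the shift above matters (the log-sum-exp trick): for large inputs the
# naive form overflows to inf, while the shifted form stays finite
# (e.g. softmax(1000, 1000) should equal 1000 + log 2).
x1 = x2 = 1000.0
naive = np.log(np.exp(x1) + np.exp(x2))   # exp(1000) overflows -> inf
stable = max(x1, x2) + np.log(1 + np.exp(min(x1, x2) - max(x1, x2)))
print(naive, stable)                      # inf vs. 1000 + log(2)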
def find_policy(n_states, r, n_actions, discount,
transition_probability):
"""
Find a policy with linear value iteration. Based on the code accompanying
the Levine et al. GPIRL paper and on Ziebart's PhD thesis (algorithm 9.1).
n_states: Number of states N. int.
r: Reward. NumPy array with shape (N,).
n_actions: Number of actions A. int.
discount: Discount factor of the MDP. float.
transition_probability: NumPy array mapping (state_i, action, state_k) to
the probability of transitioning from state_i to state_k under action.
Shape (N, A, N).
-> NumPy array of states and the probability of taking each action in that
state, with shape (N, A).
"""
# V = value_iteration.value(n_states, transition_probability, r, discount)
# NumPy's dot really dislikes using inf, so I'm making everything finite
# using nan_to_num.
    V = np.nan_to_num(np.ones(n_states) * float("-inf"))
    diff = np.ones((n_states,))
    while (diff > 1e-4).any():  # Iterate until every state has converged.
new_V = r.copy()
for j in range(n_actions):
for i in range(n_states):
                new_V[i] = softmax(new_V[i], r[i] + discount *
                                   np.dot(transition_probability[i, j, :], V))
        # This seems to diverge, so we z-score it (engineering hack).
        new_V = (new_V - new_V.mean())/new_V.std()
diff = abs(V - new_V)
V = new_V
# We really want Q, not V, so grab that using equation 9.2 from the thesis.
Q = np.zeros((n_states, n_actions))
for i in range(n_states):
for j in range(n_actions):
p = np.array([transition_probability[i, j, k]
for k in range(n_states)])
Q[i, j] = p.dot(r + discount*V)
# Softmax by row to interpret these values as probabilities.
Q -= Q.max(axis=1).reshape((n_states, 1)) # For numerical stability.
Q = np.exp(Q)/np.exp(Q).sum(axis=1).reshape((n_states, 1))
return Q
def expected_value_difference(n_states, n_actions, transition_probability,
reward, discount, p_start_state, optimal_value, true_reward):
"""
Calculate the expected value difference, which is a proxy to how good a
recovered reward function is.
n_states: Number of states. int.
n_actions: Number of actions. int.
transition_probability: NumPy array mapping (state_i, action, state_k) to
the probability of transitioning from state_i to state_k under action.
Shape (N, A, N).
reward: Reward vector mapping state int to reward. Shape (N,).
discount: Discount factor. float.
p_start_state: Probability vector with the ith component as the probability
that the ith state is the start state. Shape (N,).
optimal_value: Value vector for the ground reward with optimal policy.
The ith component is the value of the ith state. Shape (N,).
true_reward: True reward vector. Shape (N,).
-> Expected value difference. float.
"""
    # Use the MaxEnt `find_policy` and the `value` helper defined above;
    # no external value_iteration module is needed in this notebook.
    policy = find_policy(n_states, reward, n_actions, discount,
                         transition_probability)
    value_fn = value(policy.argmax(axis=1), n_states,
                     transition_probability, true_reward, discount)
    evd = optimal_value.dot(p_start_state) - value_fn.dot(p_start_state)
    return evd
"""
Run maximum entropy inverse reinforcement learning on the gridworld MDP.
Matthew Alger, 2015
matthew.alger@anu.edu.au
"""
import numpy as np
import matplotlib.pyplot as plt
def main(grid_size, discount, n_trajectories, epochs, learning_rate):
"""
Run maximum entropy inverse reinforcement learning on the gridworld MDP.
Plots the reward function.
grid_size: Grid size. int.
discount: MDP discount factor. float.
n_trajectories: Number of sampled trajectories. int.
epochs: Gradient descent iterations. int.
learning_rate: Gradient descent learning rate. float.
"""
wind = 0.3
trajectory_length = 3*grid_size
gw = Gridworld(grid_size, wind, discount)
trajectories = gw.generate_trajectories(n_trajectories,
trajectory_length,
gw.optimal_policy)
feature_matrix = gw.feature_matrix()
ground_r = np.array([gw.reward(s) for s in range(gw.n_states)])
r = irl(feature_matrix, gw.n_actions, discount,
gw.transition_probability, trajectories, epochs, learning_rate)
plt.subplot(1, 2, 1)
plt.pcolor(ground_r.reshape((grid_size, grid_size)))
plt.colorbar()
plt.title("Groundtruth reward")
plt.subplot(1, 2, 2)
plt.pcolor(r.reshape((grid_size, grid_size)))
plt.colorbar()
plt.title("Recovered reward")
plt.show()
if __name__ == '__main__':
main(5, 0.01, 20, 200, 0.01)
```
MIT License
Copyright (c) 2017 Erik Linder-Norén
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# Please make sure you have at least TensorFlow version 1.12 installed; if not,
# use the pip command below to upgrade. When in a Jupyter environment (especially
# IBM Watson Studio), don't forget to restart the kernel afterwards.
import tensorflow as tf
tf.__version__
!pip install --upgrade tensorflow
from __future__ import print_function, division
from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
import matplotlib.pyplot as plt
import sys
import numpy as np
img_rows = 28
img_cols = 28
channels = 1
latent_dim = 100
img_shape = (img_rows, img_cols, channels)
def build_generator():
model = Sequential()
model.add(Dense(128 * 7 * 7, activation="relu", input_dim=latent_dim))
model.add(Reshape((7, 7, 128)))
model.add(UpSampling2D())
model.add(Conv2D(128, kernel_size=3, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(UpSampling2D())
model.add(Conv2D(64, kernel_size=3, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(Conv2D(channels, kernel_size=3, padding="same"))
model.add(Activation("tanh"))
model.summary()
noise = Input(shape=(latent_dim,))
img = model(noise)
return Model(noise, img)
def build_discriminator():
model = Sequential()
model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=img_shape, padding="same"))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
model.add(ZeroPadding2D(padding=((0,1),(0,1))))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
img = Input(shape=img_shape)
validity = model(img)
return Model(img, validity)
optimizer = Adam(0.0002, 0.5)
# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
# Build the generator
generator = build_generator()
# The generator takes noise as input and generates imgs
z = Input(shape=(latent_dim,))
img = generator(z)
# For the combined model we will only train the generator
discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
valid = discriminator(img)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
combined = Model(z, valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
def save_imgs(epoch):
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, latent_dim))
gen_imgs = generator.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c)
cnt = 0
for i in range(r):
for j in range(c):
axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')
axs[i,j].axis('off')
cnt += 1
fig.savefig("images/mnist_%d.png" % epoch)
plt.close()
def train(epochs, batch_size=128, save_interval=50):
# Load the dataset
(X_train, _), (_, _) = mnist.load_data()
# Rescale -1 to 1
X_train = X_train / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)
# Adversarial ground truths
valid = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
for epoch in range(epochs):
# ---------------------
# Train Discriminator
# ---------------------
# Select a random half of images
idx = np.random.randint(0, X_train.shape[0], batch_size)
imgs = X_train[idx]
# Sample noise and generate a batch of new images
noise = np.random.normal(0, 1, (batch_size, latent_dim))
gen_imgs = generator.predict(noise)
# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(imgs, valid)
d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# ---------------------
# Train Generator
# ---------------------
# Train the generator (wants discriminator to mistake images as real)
g_loss = combined.train_on_batch(noise, valid)
# Plot the progress
print ("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss))
# If at save interval => save generated image samples
if epoch % save_interval == 0:
save_imgs(epoch)
!mkdir -p images
train(epochs=4000, batch_size=32, save_interval=50)
!ls images
from IPython.display import display
from PIL import Image
path="images/mnist_0.png"
display(Image.open(path))
from IPython.display import display
from PIL import Image
path="images/mnist_3950.png"
display(Image.open(path))
```
# Training and hosting SageMaker Models using the Apache MXNet Gluon API
When there is a person in front of you, your eyes immediately recognize which direction the person is looking in (e.g. facing straight at you or looking somewhere else). That direction is the head-pose. We are going to develop a deep learning model that estimates the head-pose from an input image of a human head. The **SageMaker Python SDK** makes it easy to train and deploy MXNet models. In this example, we train a ResNet-50 model using the Apache MXNet [Gluon API](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html) and the head-pose dataset.
The task at hand is to train a model on the head-pose dataset so that the trained model can classify a head-pose into 9 categories (the combinations of 3 tilt and 3 pan angles).
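To make the 9-way labeling concrete, here is a hedged sketch of how a (tilt, pan) angle pair could be binned into a class index. The bin edges and the mapping below are illustrative assumptions, not the dataset's actual boundaries:

```python
import numpy as np

def headpose_class(tilt_deg, pan_deg, edges=(-15.0, 15.0)):
    """Map a tilt/pan pair (degrees) into one of 9 classes (3 x 3 bins)."""
    tilt_bin = int(np.digitize(tilt_deg, edges))  # 0, 1, or 2
    pan_bin = int(np.digitize(pan_deg, edges))    # 0, 1, or 2
    return 3 * tilt_bin + pan_bin                 # class id in 0..8

print(headpose_class(0.0, 0.0))  # centre/centre bin -> class 4
```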
```
import sys
print(sys.version)
```
### Setup
First we need to define a few variables that will be needed later in the example.
```
from sagemaker import get_execution_role
s3_bucket = '<your S3 bucket >'
headpose_folder = 'headpose'
#Bucket location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://{}/{}/customMXNetcodes'.format(s3_bucket, headpose_folder)
#Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/{}/artifacts'.format(s3_bucket, headpose_folder)
#IAM execution role that gives SageMaker access to resources in your AWS account.
#We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
```
### The training script
The ``EntryPt-headpose.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted and modified from the Apache MXNet [MNIST tutorial](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/mxnet_mnist) and the [HeadPose tutorial](https://)
```
!cat EntryPt-headpose-Gluon.py
```
You may notice the similarity between this ``EntryPt-headpose-Gluon.py`` and the [Head Pose Gluon Tutorial](https://)
### SageMaker's MXNet estimator class
The SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.
When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``EntryPt-headpose.py`` script above.
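As a hedged sketch of how those hyperparameters reach the training code: in the legacy SageMaker MXNet container, the entry point's ``train`` function receives them as a plain ``dict``. The exact signature below is an assumption for illustration — check ``EntryPt-headpose.py`` for the real one:

```python
# Minimal stand-in for a SageMaker entry-point `train` function; the
# parameter names follow the old MXNet container interface (assumed).
def train(hyperparameters, channel_input_dirs=None, num_gpus=0, **kwargs):
    # Read the value passed from the estimator, with a safe default.
    learning_rate = hyperparameters.get('learning_rate', 0.001)
    return learning_rate

print(train({'learning_rate': 0.0005}))  # -> 0.0005
```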
For this example, we will choose one ``ml.p2.xlarge`` instance.
```
from sagemaker.mxnet import MXNet
headpose_estimator = MXNet(entry_point='EntryPt-headpose-Gluon.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={'learning_rate': 0.0005},
train_max_run = 432000,
train_volume_size=100)
```
The default volume size of a training instance is 30 GB, but the actual free space is much less. Make sure that you have enough free space in the training instance to download your training data (e.g. 100 GB).
```
print(headpose_estimator.train_volume_size)
```
We name this job **deeplens-sagemaker-headpose**. The ``base_job_name`` will be the prefix of the output folders we are going to create.
```
headpose_estimator.base_job_name = 'deeplens-sagemaker-headpose'
```
### Running the Training Job
After we've constructed our MXNet object, we can fit it using data stored in S3.
During training, SageMaker makes the data stored in S3 available in the local filesystem where the head-pose script is running. The ```EntryPt-headpose-Gluon.py``` script simply loads the train and test data from disk.
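That loading step can be sketched as follows: SageMaker downloads the channel's S3 contents into a local directory and the script reads the pickle from there. The filename comes from the comment in the next cell; the exact code inside ``EntryPt-headpose-Gluon.py`` may differ:

```python
import os
import pickle

def load_dataset(channel_dir, filename="HeadPoseData_trn_test_x15_py2.pkl"):
    """Load the preprocessed head-pose pickle from a SageMaker channel dir."""
    with open(os.path.join(channel_dir, filename), "rb") as f:
        return pickle.load(f)
```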
```
%%time
'''
# Load preprocessed data and run the training#
'''
# Head-pose dataset "HeadPoseData_trn_test_x15_py2.pkl" is in the following S3 folder.
dataset_location = 's3://{}/{}/datasets'.format(s3_bucket, headpose_folder)
# You can specify multiple input file directories (i.e. channel_input_dirs) in the dictionary.
# e.g. {'dataset1': dataset1_location, 'dataset2': dataset2_location, 'dataset3': dataset3_location}
# Start training !
headpose_estimator.fit({'dataset': dataset_location})
```
The name of the latest training job is:
```
print(headpose_estimator.latest_training_job.name)
```
The training is now done.
### Creating an inference Endpoint
After training, we use the ``MXNet`` estimator object to build and deploy an ``MXNetPredictor``. This creates a SageMaker **Endpoint** -- a hosted prediction service that we can use to perform inference.
The arguments to the ``deploy`` function allow us to set the number and type of instances that will be used for the Endpoint. These do not need to be the same as the values we used for the training job. For example, you can train a model on a set of GPU-based instances, and then deploy the Endpoint to a fleet of CPU-based instances. Here we will deploy the model to a single ``ml.c4.xlarge`` instance.
```
from sagemaker.mxnet.model import MXNetModel
'''
You will find the name of training job on the top of the training log.
e.g.
INFO:sagemaker:Creating training-job with name: HeadPose-Gluon-YYYY-MM-DD-HH-MM-SS-XXX
'''
training_job_name = headpose_estimator.latest_training_job.name
sagemaker_model = MXNetModel(model_data= model_artifacts_location + '/{}/output/model.tar.gz'.format(training_job_name),
role=role,
entry_point='EntryPt-headpose-Gluon-wo-cv2.py')
predictor = sagemaker_model.deploy(initial_instance_count=1,
instance_type='ml.c4.xlarge')
```
The request handling behavior of the Endpoint is determined by the ``EntryPt-headpose-Gluon-wo-cv2.py`` script. The only difference between ``EntryPt-headpose-Gluon-wo-cv2.py`` and ``EntryPt-headpose-Gluon.py`` is the OpenCV module (``cv2``): we found that the inference instance does not support ``cv2``. If you use ``EntryPt-headpose-Gluon.py``, the inference instance will return the error ``AllTraffic did not pass the ping health check``.
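One way to drop the ``cv2`` dependency at inference time is to reimplement the needed image operations with NumPy alone. A minimal nearest-neighbor resize sketch; this is an illustration, not the code actually used in ``EntryPt-headpose-Gluon-wo-cv2.py``:

```python
import numpy as np

def resize_nearest(im, out_h, out_w):
    """Nearest-neighbor resize for an HxW(xC) image, using only NumPy."""
    h, w = im.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return im[rows[:, None], cols]

im = np.arange(360 * 360 * 3, dtype=np.float32).reshape(360, 360, 3)
small = resize_nearest(im, 84, 84)
print(small.shape)  # (84, 84, 3)
```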
### Making an inference request
Now that our Endpoint is deployed and we have a ``predictor`` object, we can use it to classify the head pose in our own head-torso image.
```
import cv2
import numpy as np
import boto3
import os
import matplotlib.pyplot as plt
%matplotlib inline
from sagemaker import get_execution_role
role = get_execution_role()
import urllib.request
sample_ims_location = 'https://s3.amazonaws.com/{}/{}/testIMs/IMG_1242.JPG'.format(s3_bucket,headpose_folder)
print(sample_ims_location)
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
return cv2.imread(filename)
im_true = download(sample_ims_location)
im = im_true.astype(np.float32)/255 # Normalized
crop_uly = 62
crop_height = 360
crop_ulx = 100
crop_width = 360
im = im[crop_uly:crop_uly + crop_height, crop_ulx:crop_ulx + crop_width]
im_crop = im
plt.imshow(im_crop[:,:,::-1])
plt.show()
im = cv2.resize(im, (84, 84))
plt.imshow(im[:,:,::-1])
plt.show()
im = np.swapaxes(im, 0, 2)
im = np.swapaxes(im, 1, 2)
im = im[np.newaxis, :]
im = (im -0.5) * 2
print(im.shape)
```
Now we can use the ``predictor`` object to classify the head pose:
```
data = im
prob = predictor.predict(data)
print('Raw prediction result:')
print(prob)
labeled_predictions = list(zip(range(10), prob[0]))
print('Labeled predictions: ')
print(labeled_predictions)
labeled_predictions.sort(key=lambda label_and_prob: 1.0 - label_and_prob[1])
print('Most likely answer: {}'.format(labeled_predictions[0]))
n_grid_cls = 9
n_tilt_cls = 3
pred = labeled_predictions[0][0]
### Tilt Prediction
pred_tilt_pic = pred % n_tilt_cls
### Pan Prediction
pred_pan_pic = pred // n_tilt_cls
extent = 0, im_true.shape[1]-1, im_true.shape[0]-1, 0
Panel_Pred = np.zeros((n_tilt_cls, n_tilt_cls))
Panel_Pred[pred_tilt_pic, pred_pan_pic] = 1
Panel_Pred = np.fliplr(Panel_Pred)
Panel_Pred = np.flipud(Panel_Pred)
plt.imshow(im_true[:,:,[2,1,0]], extent=extent)
plt.imshow(Panel_Pred, cmap=plt.cm.Blues, alpha=.2, interpolation='nearest', extent=extent)
plt.axis('off')
arrw_mg = 100
arrw_x_rad = 1 * (prob[0][0] + prob[0][1] + prob[0][2] - prob[0][6] -prob[0][7] - prob[0][8]) * 90 * np.pi / 180.
arrw_y_rad = 1 * (prob[0][0] + prob[0][3] + prob[0][6] - prob[0][2] -prob[0][5] - prob[0][8]) * 90 * np.pi / 180.
plt.arrow(im_true.shape[1]//2, im_true.shape[0]//2,
np.sin(arrw_x_rad) * arrw_mg, np.sin(arrw_y_rad) * arrw_mg,
head_width=10, head_length=10, fc='b', ec='b')
plt.show()
```
# (Optional) Delete the Endpoint
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
print("Endpoint name: " + predictor.endpoint)
import sagemaker
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
# End
# Numpy
```
import numpy as np
```
## 1) Array Creation
### 1.1 From function
```
lst = list(range(1,100,5))
print(lst)
arr = np.arange(1,100,5)
print(arr)
print(type(arr))
```
### 1.2 From list
```
l1 = [1,2,3,4]
arr1 = np.array(l1)
print(l1)
print(arr1)
list2d = [[1,2],[3,4],[5,6]]
arr2d = np.array(list2d)
print(list2d)
print(arr2d)
print(type(arr2d[0]))
#NumPy arrays should be homogeneous; a ragged nested list only forms an
#object array (recent NumPy versions require dtype=object explicitly)
list2d = [[1],[3,4],[5,6,7]]
arr2d = np.array(list2d, dtype=object)
print(list2d)
print(arr2d)
type(arr2d[0])
#Numpy array should be homogeneous
list2d = [[1,2],[3,4],['5','6']]
arr2d = np.array(list2d)
print(list2d)
print(arr2d)
type(arr2d[0])
```
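The cells above hint at the underlying rule: `np.array` promotes all elements to a single common dtype. A compact illustration of that promotion:

```python
import numpy as np

a = np.array([1, 2, 3.5])   # int + float -> float64
b = np.array([1, 2, '3'])   # int + str   -> all elements become strings
print(a.dtype, b.dtype)
```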
### 1.3 Character array
```
charar = np.chararray((3, 3))
charar[:] = 'h'
print(charar)
charar2 = np.chararray((3, 3),itemsize=5)
charar2[:] = 'hello'
charar2[:] = 'hello world'
print(charar2)
```
### 1.4 Some other arrays
```
#Zero matrix
np.zeros(4)
#Zero matrix
np.zeros((4,4))
#Identity matrix
np.eye(5)
np.linspace(0,100,5)
#[0,1] in uniform distribution
np.random.rand(4)
#in normal distribution
np.random.randn(4)
# random integers
np.random.randint(1,50,8)
```
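The random constructors above return different values on every call; seeding the generator makes results reproducible, which matters in tutorials and tests:

```python
import numpy as np

np.random.seed(42)
first = np.random.randint(1, 50, 8)
np.random.seed(42)                    # same seed -> same sequence
second = np.random.randint(1, 50, 8)
print(np.array_equal(first, second))  # True
```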
## 2) Operations
### 2.1 Addition, Subtraction, Multiplication, Division
```
arr = np.arange(1,10,1)
print(arr)
#print(arr + 5)
arr1 = np.arange(10,20,1)
#shape must be same
arr2 = np.arange(10,20,1)
print(arr1)
print(arr2)
print(arr1+arr2)
# With Python lists, + concatenates instead of adding elementwise
l1 = [1,2,3]
l2 = [5,6,7]
l3 = [4]
print(l1+l2+l3)
# for inf check
arr = np.arange(1,20,2)
#arr = np.arange(0,19,2)
print(arr)
arr2 = np.arange(0,100,10)
print(arr2)
print(arr*arr2)
print(arr2/arr)
#division by zero is handled (inf/nan) but should be avoided
print(arr/arr2)
s = arr/arr2
#s[0]==float('inf')
s[0]==np.inf
#s[0]==np.nan
#np.isnan(s[0])
```
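As the cell above shows, `nan` is tricky: `np.nan == np.nan` is `False`, so it must be detected with `np.isnan` (and `np.isinf` for infinities). NumPy's `errstate` context can also silence the divide-by-zero warnings:

```python
import numpy as np

with np.errstate(divide='ignore', invalid='ignore'):
    s = np.array([1.0, 0.0]) / np.array([0.0, 0.0])
print(np.isinf(s[0]), np.isnan(s[1]))  # True True
print(s[1] == np.nan)                  # False: nan never equals anything
```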
### 2.2 dtype shape reshape
```
arr1 = np.array([1,2,3,4,5])
arr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])
print(arr1.dtype)
print(np.result_type(arr1))
print(arr2.dtype)
print(np.result_type(arr2))
#explicitly give type
arrf = np.array([1.5,2,3,4,5], dtype=np.float64)
print(arrf)
print(arrf.dtype)
lst = ['1','2','3']
lst2 = [[1,2,3],[5,6,7]]
lst+lst2+[1,2]
print(arr1)
arrnew = np.append(arr1, 'hi')
print(arrnew)
print(arrnew.dtype)
arr1 = np.array([1,2,3,4,5])
arr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])
print(arr1.shape)
#print(arr2.shape)
print(arr2)
print(arr2.shape)
arr3 = arr2.reshape(4,3)
print(arr3)
#print(arr3.shape)
```
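`reshape` only rearranges the same 12 elements. One dimension may be left as `-1` and NumPy infers it, and an impossible shape raises an error rather than padding:

```python
import numpy as np

arr2 = np.arange(1, 13).reshape(2, 6)
print(arr2.reshape(4, -1).shape)   # (4, 3): the -1 is inferred
try:
    arr2.reshape(5, 3)             # 15 slots != 12 elements
except ValueError as e:
    print('ValueError:', e)
```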
### 2.3 other numpy operation
```
arr1 = np.array([1,2,3,4,5])
arr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])
print(arr1.max())
print(arr2)
print(arr2.max())
print(arr2[0].max())
print(arr2[1].max())
arr2.min()
print(arr2.argmin())
#argmin/argmax return the position in the flattened array, not the element
print(arr2.argmax())
#arr1[arr1.argmax()]
#arr1[arr1.argmin()]
#arr2[arr2.argmax()]  # IndexError: 11 is a flat index, not a row index
#arr2[1][5]
# Way1
#i,j = np.unravel_index(arr2.argmax(), arr2.shape)
#print(i,j)
#arr2[i][j]
# Way2
#arr2[np.where(arr2==arr2.max())]
#print(np.where(arr2==arr2.max()))
print(arr1)
print(np.sqrt(arr1))
print(np.log(arr1))
print(np.exp(arr1))
print(np.sin(arr1))
print(np.cos(arr1))
print(np.tan(arr1))
```
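The flat index returned by `argmax` on a 2-D array can be converted back to a `(row, col)` pair with `np.unravel_index`, as the commented "Way1" above hints:

```python
import numpy as np

arr2 = np.array([[1, 2, 3, 4, 5, 6],
                 [7, 8, 9, 10, 11, 12]])
flat = arr2.argmax()                   # position in the flattened array
i, j = np.unravel_index(flat, arr2.shape)
print(i, j, arr2[i, j])                # 1 5 12
```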
### 2.4 indexing
```
arr1 = np.array([1,2,3,4,5])
arr2 = np.array([ [1,2,3,4,5,6], [7,8,9,10,11,12], [0,2,4,3,1,5] , [9,8,7,6,5,4] ])
print(arr1[0:2])
arr1[0:2] = 0
print(arr1)
arr1 = np.array([1,2,3,4,5])
arr = arr1.copy()
arr[:] = 0
print(arr)
print(arr1)
print(arr2)
# row,col
print(arr2[2][2])
print(arr2[2])
print(arr2[2:5])
print(arr2[2][1:3])
print(arr1)
print(arr1>3)
print(arr1[arr1>3])
print(np.where(arr1>3))
arr1[np.where(arr1>3)]
arr[arr1>3]
```
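The `copy()` call above matters because basic slicing returns a *view*: writing through the slice mutates the original array, while `copy()` detaches it:

```python
import numpy as np

arr1 = np.array([1, 2, 3, 4, 5])
view = arr1[0:2]          # view: shares memory with arr1
view[:] = 0
print(arr1)               # [0 0 3 4 5] -- the original changed

arr1 = np.array([1, 2, 3, 4, 5])
detached = arr1[0:2].copy()
detached[:] = 0
print(arr1)               # [1 2 3 4 5] -- unchanged
```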
```
# Main code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import re
def parse_order_info(order_info, player_name):
order_list = re.findall(r"\[(.*?)\]", order_info)[0].split(", ")
position = 1
for i in range(len(order_list)):
if player_name in order_list[i]:
score = re.findall(r"\((.*?)\)", order_list[i])[0]
position = i + 1
break
return score, position
def parse_gamelog(log_str):
games = re.findall(r"Game started.*?Final results.*?\n", log_str, re.DOTALL)
games_data = []
hands_data = []
for game in games:
try:
log_info = re.findall(r"\?log=(.*)", game)
if log_info:
game_index = log_info[0]
else:
game_index = np.random.randint(0, 100000)
player_name = re.findall(r"\[(.*?)\ ", re.findall(r"Players.*?\n", game)[0])[0]
hands = re.findall(r"Round:.*?Cowboy: Score.*?\n", game, re.DOTALL)
final_info = re.findall(r"Final results:.*?\n", game)[0]
south_info = re.findall(r"Round: 4.*?]", game, re.DOTALL)
if south_info:
south_info = south_info[0]
else:
                # when the game has no South rounds
south_info = final_info
south_score, south_position = parse_order_info(south_info, player_name)
final_score, final_position = parse_order_info(final_info, player_name)
games_data.append([game_index, south_score, final_score, south_position, final_position])
for hand in hands:
round_num = re.findall(r"Round: (\d)", hand)[0]
result = re.findall(r"Cowboy: Score.*?\n", hand)[0]
score, score_diff, play_state, state_index = re.findall(r"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\n", result)[0]
meld_times = len(re.findall(r"With hand:", hand))
reached = bool(re.findall(r"Go for it!", hand))
is_dealer = player_name == re.findall(r"Dealer: (.*) ", hand)[0]
hands_data.append([
score, score_diff, play_state,
state_index, meld_times, reached,
is_dealer, game_index, round_num,
south_position, final_position,
])
except Exception as e:
print(e)
games_col = ['game_index',
'south_score',
'final_score',
'south_position',
'final_position']
hands_col = ['score',
'score_diff',
'play_state',
'state_index',
'meld_times',
'reached',
'is_dealer',
'game_index',
'round_num',
'south_position',
'final_position',
]
games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)
hands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)
return games_df, hands_df
# exec
filepath = "../logs225/" # CHANGE THIS
log_files = ["gamelog.txt", "gamelog1.txt", "gamelog2.txt", "gamelog3.txt", "gamelog4.txt", "gamelog7.txt", "gamelog8.txt"] # CHANGE THIS
games_df_list = []
hands_df_list = []
for filename in log_files:
with open(filepath+filename) as f:
log_str = f.read()
games_df, hands_df = parse_gamelog(log_str)
games_df_list.append(games_df)
hands_df_list.append(hands_df)
# Change file name here
games_df_filename = "games7" # CHANGE THIS
hands_df_filename = "hands7" # CHANGE THIS
pd.concat(games_df_list).to_csv(games_df_filename + ".csv")
pd.concat(hands_df_list).to_csv(hands_df_filename + ".csv")
# Check the df
games = pd.read_csv(games_df_filename+".csv")
hands = pd.read_csv(hands_df_filename+".csv")
games.shape, hands.shape
hands.head(20)
```
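To see what `parse_order_info` extracts, here is a small self-contained run on a synthetic results line. The exact log format is an assumption modeled on the regexes above, and a defensive `score = None` is added, since the original would raise if the player were missing:

```python
import re

def parse_order_info(order_info, player_name):
    order_list = re.findall(r"\[(.*?)\]", order_info)[0].split(", ")
    position = 1
    score = None  # defensive default, in case player_name never matches
    for i in range(len(order_list)):
        if player_name in order_list[i]:
            score = re.findall(r"\((.*?)\)", order_list[i])[0]
            position = i + 1
            break
    return score, position

line = "Final results: [Alice(41000), Cowboy(28000), Bob(17000), Carol(14000)]"
print(parse_order_info(line, "Cowboy"))  # ('28000', 2)
```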
## Develop
```
origin_str = ""
with open("../gamelog.txt") as f:
origin_str = f.read()
```
## Get 1 game
```
test_str = origin_str[:50000]
re.findall(r"Final result.*\n", test_str)
games = re.findall(r"Game started.*?Final results.*?\n", test_str, re.DOTALL)
len(games)
test_game = games[1]
game_index = re.findall(r"\?log=(.*)", test_game)[0]
game_index
test_hands = re.findall(r"Round:.*?Cowboy: Score.*?\n", test_game, re.DOTALL)
len(test_hands)
test_hand = test_hands[0]
print(test_hand)
test_result = re.findall(r"Cowboy: Score.*?\n", test_hand)[0]
test_result
meld_times = len(re.findall(r"With hand:", test_hand))
meld_times
reached = bool(re.findall(r"Go for it!", test_hand))
reached
south_info = re.findall(r"Round: 4.*?]", test_game, re.DOTALL)[0]
final_info = re.findall(r"Final results:.*?\n", test_game)[0]
order_info = re.findall(r"\[.*?\]", south_info)[0]
order_info[1:-1].split(", ")
player_name = re.findall(r"\[.*?\(", re.findall(r"Players.*?\n", test_game)[0])[0][1:-2]
position = 1
order_list = order_info.split(", ")
for i in range(len(order_list)):
if player_name in order_list[i]:
position = i + 1
break
position
def parse_order_info(order_info, player_name):
order_list = re.findall(r"\[(.*?)\]", order_info)[0].split(", ")
position = 1
for i in range(len(order_list)):
if player_name in order_list[i]:
score = re.findall(r"\((.*?)\)", order_list[i])[0]
position = i + 1
break
return score, position
parse_order_info(final_info, player_name)
re.findall(r"Dealer: (.*) ", test_hand)
is_dealer = player_name == re.findall(r"Dealer: (.*) ", test_hand)[0]
test_result
re.findall(r"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\n", test_result)
score, score_diff, play_state, state_index = re.findall(r"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\n", test_result)[0]
player_name = re.findall(r"\[(.*?)\ ", re.findall(r"Players.*?\n", test_game)[0])[0]
games = re.findall(r"Game started.*?Final results.*?\n", test_str, re.DOTALL)
games_data = []
hands_data = []
for game_index, game in enumerate(games):
player_name = re.findall(r"\[(.*?)\ ", re.findall(r"Players.*?\n", game)[0])[0]
    hands = re.findall(r"Round:.*?Cowboy: Score.*?\n", game, re.DOTALL)
south_info = re.findall(r"Round: 4.*?]", game, re.DOTALL)[0]
final_info = re.findall(r"Final results:.*?\n", game)[0]
south_score, south_position = parse_order_info(south_info, player_name)
final_score, final_position = parse_order_info(final_info, player_name)
games_data.append([game_index, south_score, final_score, south_position, final_position])
for hand in hands:
result = re.findall(r"Cowboy: Score.*?\n", hand)[0]
score, score_diff, play_state, state_index = re.findall(r"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\n", result)[0]
meld_times = len(re.findall(r"With hand:", hand))
reached = bool(re.findall(r"Go for it!", hand))
is_dealer = player_name == re.findall(r"Dealer: (.*) ", hand)[0]
hands_data.append([score, score_diff, play_state, state_index, meld_times, reached, is_dealer, game_index])
games_col = ['game_index',
'south_score',
'final_score',
'south_position',
'final_position']
hands_col = ['score',
'score_diff',
'play_state',
'state_index',
'meld_times',
'reached',
'is_dealer',
'game_index']
games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)
games_df
def parse_order_info(order_info, player_name):
order_list = re.findall(r"\[(.*?)\]", order_info)[0].split(", ")
position = 1
for i in range(len(order_list)):
if player_name in order_list[i]:
score = re.findall(r"\((.*?)\)", order_list[i])[0]
position = i + 1
break
    return score, position

hands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)
hands_df
```
## Functions
```
def parse_order_info(order_info, player_name):
order_list = re.findall(r"\[(.*?)\]", order_info)[0].split(", ")
position = 1
for i in range(len(order_list)):
if player_name in order_list[i]:
score = re.findall(r"\((.*?)\)", order_list[i])[0]
position = i + 1
break
return score, position
def parse_gamelog(log_str):
games = re.findall(r"Game started.*?Final results.*?\n", log_str, re.DOTALL)
games_data = []
hands_data = []
for game_index, game in enumerate(games):
try:
player_name = re.findall(r"\[(.*?)\ ", re.findall(r"Players.*?\n", game)[0])[0]
hands = re.findall(r"Round:.*?Cowboy: Score.*?\n", game, re.DOTALL)
final_info = re.findall(r"Final results:.*?\n", game)[0]
south_info = re.findall(r"Round: 4.*?]", game, re.DOTALL)
if south_info:
south_info = south_info[0]
else:
                # when the game has no South rounds
south_info = final_info
south_score, south_position = parse_order_info(south_info, player_name)
final_score, final_position = parse_order_info(final_info, player_name)
games_data.append([game_index, south_score, final_score, south_position, final_position])
for hand in hands:
result = re.findall(r"Cowboy: Score.*?\n", hand)[0]
score, score_diff, play_state, state_index = re.findall(r"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\n", result)[0]
meld_times = len(re.findall(r"With hand:", hand))
reached = bool(re.findall(r"Go for it!", hand))
is_dealer = player_name == re.findall(r"Dealer: (.*) ", hand)[0]
hands_data.append([score, score_diff, play_state, state_index, meld_times, reached, is_dealer, game_index, final_position])
except Exception as e:
print(e)
games_col = ['game_index',
'south_score',
'final_score',
'south_position',
'final_position']
hands_col = ['score',
'score_diff',
'play_state',
'state_index',
'meld_times',
'reached',
'is_dealer',
'game_index',
'final_position',
]
games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)
hands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)
return games_df, hands_df
games_df, hands_df = parse_gamelog(origin_str)
gamelog2 = ""
with open("../gamelog2.txt") as f:
gamelog2 = f.read()
games_df2, hands_df2 = parse_gamelog(gamelog2)
pd.concat([games_df, games_df2]).to_csv("games1.csv")
pd.concat([hands_df, hands_df2]).to_csv("hands1.csv")
hands_df
for i in re.findall(r"(http.*)\n", origin_str):
print(i)
```
# Kaggle San Francisco Crime Classification
## Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
### Environment and Data
```
# Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
```
### Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
```
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the commented-out cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.to_numpy()  # .as_matrix() was removed from pandas
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.to_numpy()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
####
X = np.around(X, decimals=2)
####
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]
mini_train_data, mini_train_labels = X[:200000], y[:200000]
mini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]
crime_labels = list(set(y))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))
print(len(train_data),len(train_labels))
print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
print(len(test_data),len(test_labels))
```
### Logistic Regression
###### Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
###### Model calibration:
See above
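A sketch of that tuning loop with `GridSearchCV` on a small synthetic problem; the parameter values below are illustrative, not the ones actually searched in this project:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
param_grid = {'C': [0.1, 1, 10, 100],
              'solver': ['lbfgs', 'liblinear']}
# Score with negative log loss, the Kaggle metric for this competition
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                      scoring='neg_log_loss', cv=3)
search.fit(X, y)
print(search.best_params_)
```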
## Fit the Best LR Parameters
```
bestLR = LogisticRegression(penalty='l2', solver='newton-cg', tol=0.01, C=500)
bestLR.fit(mini_train_data, mini_train_labels)
bestLRPredictions = bestLR.predict(mini_dev_data)
bestLRPredictionProbabilities = bestLR.predict_proba(mini_dev_data)
print("L1 Multi-class Log Loss:", log_loss(y_true = mini_dev_labels, y_pred = bestLRPredictionProbabilities, \
labels = crime_labels_mini_dev), "\n\n")
pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()
```
## Error Analysis: Calibration
```
#clf_probabilities, clf_predictions, labels
def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):
"""inputs:
clf_probabilities = clf.predict_proba(dev_data)
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
#buckets = [0.05, 0.15, 0.3, 0.5, 0.8]
#buckets = [0.15, 0.25, 0.3, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
lLimit = 0
uLimit = 0
for i in range(len(buckets)):
uLimit = buckets[i]
for j in range(clf_probabilities.shape[0]):
if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):
if clf_predictions[j] == labels[j]:
correct[i] += 1
total[i] += 1
lLimit = uLimit
print(sum(correct))
print(sum(total))
print(correct)
print(total)
#here we report the classifier accuracy for each posterior probability bucket
accuracies = []
for k in range(len(buckets)):
print(1.0*correct[k]/total[k])
accuracies.append(1.0*correct[k]/total[k])
print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \
%(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))
plt.plot(buckets,accuracies)
plt.title("Calibration Analysis")
plt.xlabel("Posterior Probability")
plt.ylabel("Classifier Accuracy")
return buckets, accuracies
#i think you'll need to look at how the posteriors are distributed in order to set the best bins in 'buckets'
pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()
buckets = [0.15, 0.25, 0.3, 1.0]
calibration_buckets, calibration_accuracies = error_analysis_calibration(buckets, clf_probabilities=bestLRPredictionProbabilities, \
clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
```
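Beyond bucketing posteriors by hand, scikit-learn's `CalibratedClassifierCV` (already imported in the setup cell) wraps a classifier and rescales its probabilities. A minimal sketch on synthetic data:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_dev, y_tr, y_dev = train_test_split(X, y, random_state=0)

# Platt scaling ('sigmoid') fitted via internal cross-validation
calibrated = CalibratedClassifierCV(LogisticRegression(max_iter=1000),
                                    method='sigmoid', cv=3)
calibrated.fit(X_tr, y_tr)
probs = calibrated.predict_proba(X_dev)
print('log loss:', log_loss(y_dev, probs))
```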
## Error Analysis: Classification Report
```
def error_analysis_classification_report(clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
print('Classification Report:')
report = classification_report(labels, clf_predictions)
print(report)
return report
classification_report = error_analysis_classification_report(clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
```
## Error Analysis: Confusion Matrix
```
crime_labels_mini_dev
def error_analysis_confusion_matrix(label_names, clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))
cm.columns=label_names
cm.index=label_names
cm.to_csv(path_or_buf="./confusion_matrix.csv")
#print(cm)
return cm
error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
```
```
from Toolv1 import MotionGenerator, GenerateTraj,random_rot,traj_to_dist
from Toolv1 import diffusive,subdiffusive,directed,accelerated,slowed,still
from Toolv1 import diffusive_confined,subdiffusive_confined, continuous_time_random_walk
from Toolv1 import continuous_time_random_walk_confined
import numpy as np
ndim = 2
np.random.seed(6)
def add_miss_tracking(traj,N,f=10):
step = traj[1:]-traj[:-1]
std = np.average(np.sum(step**2,axis=1)**0.5)
for i in range(N):
w = np.random.randint(0,len(traj))
traj[w] = np.random.normal(traj[w],f*std)
return traj
def generate_N_nstep(N,nstep):
add = 0
index_zero = 8
ndim = 2
if ndim == 3:
add = 1
size = nstep
X_train = np.zeros((N,size,5))
Y_trains_b = np.zeros((N,size,9))
Y_train_traj = []
#12
for i in range(N):
#for i in range(1000):
#if i % 1000 == 0:
# print i
sigma = max(np.random.normal(0.5,1),0.05)
step = max(np.random.normal(1,1),0.2)
tryagain = True
while tryagain:
try:
clean=False
time=size
ndim=2
list_generator = [ MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=still),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=subdiffusive_confined),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=subdiffusive),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=diffusive_confined),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=diffusive),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=continuous_time_random_walk),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=continuous_time_random_walk_confined),
MotionGenerator(time,ndim,
parameters=np.random.rand(3),
generate_motion=directed) ]
A = GenerateTraj(time,n_max=4,list_max_possible=[3,3,3,3,3,3,3,3],list_generator=list_generator)
#Clean small seq
all_states = set(A.sequence)
n_states = [A.sequence.count(istate) for istate in all_states]
for s,ns in zip(all_states,n_states):
A.sequence = np.array(A.sequence)
if size > 25 and ns < 10:
A.sequence[A.sequence == s] = "%i_0"%(index_zero)
def map_sequence(sequence):
ns = []
for iseque in sequence:
i0,j0 = map(int,iseque.split("_"))
ns.append(i0)
return ns
real_traj = A.traj
sc = map_sequence(A.sequence)
alpharot = 2*3.14*np.random.random()
real_traj = random_rot(real_traj,alpharot,ndim=ndim)
#Noise
dt = real_traj[1:]-real_traj[:-1]
std = np.mean(np.sum(dt**2,axis=1)/3)**0.5
if std == 0 :
std = 1
noise_level = 0.25*np.random.rand()
real_traj += np.random.normal(0,noise_level*std,real_traj.shape)
alligned_traj,normed,alpha,_ = traj_to_dist(real_traj,ndim=ndim)
nzeros = np.random.randint(0,10)
Z = []
for _ in range(nzeros):
Z.append(np.random.randint(len(sc)-1))
sc[Z[-1]] = index_zero
for i_isc,isc in enumerate(sc):
if isc == index_zero:
normed[i_isc,::] = 0
#print alligned_traj.shape ,len(sc)
tryagain=False
except IndexError:
tryagain=True
Y_train_traj.append(real_traj)
#print X_train.shape
X_train[i] = normed
Y_trains_b[i][range(time),np.array(sc,dtype=np.int)] = 1
#print sc
return X_train,Y_trains_b,Y_train_traj
print generate_N_nstep(100,600)[1].shape
# this returns a tensor
from keras.layers import Input, Embedding, LSTM, Dense, merge,TimeDistributed
from Bilayer import BiLSTMv1 as BiLSTM
from keras.models import Model
ncats = 6
inputs = Input(shape=(None,5),name="Input")
l1 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(inputs)
l2 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(merge([inputs,l1],mode="concat"))
l3 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(merge([inputs,l2],mode="concat"))
brownian = BiLSTM(output_dim=50,activation='tanh',return_sequences=True,name="brownian_i")(merge([inputs,l1,l2,l3],mode="concat"))
brownian = TimeDistributed(Dense(9,activation="softmax"),name="brownian")(brownian)
model = Model(input=[inputs],output=[brownian])#,sigma])
model.compile(optimizer='adadelta',
loss={'brownian': 'categorical_crossentropy'})
#loss_weights={'sigma': .1, 'brownian': .9})
"""
model.compile(optimizer='adadelta',
loss={ 'brownian': 'binary_crossentropy'})"""
#w = "ftest_2_135"
"""
w = 'ftest_rea_2_40'
try:
model.load_weights("/home/jarbona/cluster_theano/"+w)
except:
model.load_weights(w)"""
import keras
import cPickle
class LossHistory(keras.callbacks.Callback):
#losses = []
#val_losses = []
def __init__(self,name):
super(LossHistory, self).__init__()
self.name=name
self.losses = []
self.val_losses = []
def on_train_begin(self, logs={}):
pass
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
#self.val_losses.append(logs.get('val_loss'))
def on_epoch_end(self, epoch, logs={}):
self.losses.append(logs.get('loss'))
self.val_losses.append(logs.get('val_loss'))
cPickle.dump((self.losses,self.val_losses), open(self.name, 'wb'))
#from Specialist_layer import return_three_layer,return_three_bis
#history = LossHistory("losses15.pick")
#TRaining of graph 1
#print lr
#lr = 0.1
lr = 1.0
wgraph = model
wgraph.optimizer.lr.set_value(lr)
for i in range(15):
for j in range(0,6*6*4,1):
modulo = 6
size = (1 + j % modulo)*50
if j % modulo == 4:
size=200
if j % modulo == 5:
size=400
if j % modulo == 3:
size=26
"""
if j % modulo == 6:
size=600
if j % modulo == 7:
size=800
if j % modulo == 8:
size=1000
if j % modulo == 9:
size=1500 """
#size=26
print size
try:
X_train,Y_trains_b,scs = generate_N_nstep(4000,size)
inp = {"Input":X_train}
ret={"brownian":Y_trains_b}
"""#
ret = {"input1":X_train,
"output":Y_trains,
"outputtype":convert_output(Y_trains),
"category":Y_train_cat[::,::,:12]}"""
except:
print "pb"
pass
#next(generator())
#print ret["category"].shape
if size == 400:
wgraph.optimizer.lr.set_value(lr)
print wgraph.optimizer.lr.get_value()
if size == 50:
wgraph.optimizer.lr.set_value(lr)
print wgraph.optimizer.lr.get_value()
"""
if size != 600:
batch = 50
else:
batch = 20"""
batch = 50
#print ret.keys()
#print ret["category"].shape, ret["output"].shape,
wgraph.fit(inp,ret,batch, nb_epoch=1,validation_split=0.05)#, callbacks=[history])
if i == 3:
lr = 0.1
if j % modulo == 0 :
name = "ftest_rea_continuous"
wgraph.save_weights(name + "_%i_%i"%(i+2,j),overwrite=True)
#sub_with_noise (30p)
#if np.isnan(graph.evaluate(ret)):
# graph = return_three_layer()
# graph.load_weights("transition_l8_%i_diff_size_50"%(i+2))
#if i % 3 == 0 and i != 0:
# lr /= 2.
# graph.optimizer.lr.set_value(lr)
# print graph.optimizer.lr.get_value()
#score = model.evaluate(X_test, Y_test, batch_size=16)
w = 'ftest_rea_6_24'
try:
model.load_weights("/home/jarbona/cluster_theano/"+w)
except:
model.load_weights(w)
size=600
batch = 10
X_train,Y_trains_b,scs = generate_N_nstep(1000,size)
inp = {"Input":X_train}
ret={"brownian":Y_trains_b}
model.optimizer.lr.set_value(0.1)
model.fit(inp,ret,batch, nb_epoch=1,validation_split=0.05)#, callbacks=[history])
model.save_weights("test")
size=100
X_train,Y_trains_b,scs = generate_N_nstep(200,size)
inp = {"Input":X_train}
ret={"brownian":Y_trains_b}
resp = model.predict(inp["Input"][:20])
from Tools import plot_by_class,plot_label,plot_by_class
import copy
for w in range(10):
res = copy.deepcopy(resp[w])
res = np.argmax(res,axis=1)
gt = np.argmax(Y_trains_b[w],axis=1)
#print res
#print gt
print set(gt)
#print resp[1][w][::,0]
fig = figure()
ax = fig.add_subplot(131)
plot_label(scs[w],res,remove6=9)
ax = fig.add_subplot(132)
plot_label(scs[w],gt,remove6=9)
ax = fig.add_subplot(133)
plot_by_class(scs[w],np.array(gt))
"""
figure()
plot(resp[1][w][::,0])
plot(ret["sigma"][w][::,0])
"""
print ret["brownian"][w][:10]
```
# Data Cleaning
For each IMU file, clean the IMU data, adjust the labels, and output these as CSV files.
```
%load_ext autoreload
%autoreload 2
%matplotlib notebook
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.ensemble import GradientBoostingClassifier
from matplotlib.lines import Line2D
import joblib
from src.data.labels_util import load_labels, LabelCol, get_labels_file, load_clean_labels, get_workouts
from src.data.imu_util import (
get_sensor_file, ImuCol, load_imu_data, Sensor, fix_epoch, resample_uniformly, time_to_row_range, get_data_chunk,
normalize_with_bounds, data_to_features, list_imu_abspaths, clean_imu_data
)
from src.data.util import find_nearest, find_nearest_index, shift, low_pass_filter, add_col
from src.data.workout import Activity, Workout
from src.data.data import DataState
from src.data.clean_dataset import main as clean_dataset
from src.data.clean_labels import main as clean_labels
from src.visualization.visualize import multiplot
# import data types
from pandas import DataFrame
from numpy import ndarray
from typing import List, Tuple, Optional
```
## Clean IMU data
```
# Clean data (UNCOMMENT when needed)
# clean_dataset()
# Test
cleaned_files = list_imu_abspaths(sensor_type=Sensor.Accelerometer, data_state=DataState.Clean)
def plot_helper(idx, plot):
imu_data = np.load(cleaned_files[idx])
plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
multiplot(len(cleaned_files), plot_helper)
```
## Adjust Labels
A few raw IMU files seem to have corrupted timestamps, causing some labels not to map properly to their data points. We flag these labels in the cleaned/adjusted label files; they'll be handled during model fitting.
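Mapping a label's epoch timestamp onto the nearest row of the cleaned IMU data can be sketched as below. This is a hypothetical helper for illustration (the project's own `find_nearest_index` in `src.data.util` is what is actually used), and the 50 ms tolerance is an assumed threshold for flagging unmappable labels:

```python
import numpy as np

def nearest_row(times: np.ndarray, epoch: float, tol_ms: float = 50.0):
    """Return the index of the sample closest to `epoch`, or None if the
    closest sample is further away than `tol_ms` (a corrupted timestamp)."""
    idx = int(np.searchsorted(times, epoch))
    # candidates: the insertion point and its left neighbour
    candidates = [i for i in (idx - 1, idx) if 0 <= i < len(times)]
    best = min(candidates, key=lambda i: abs(times[i] - epoch))
    return best if abs(times[best] - epoch) <= tol_ms else None
```

Labels that come back as `None` would then be stored as NaN in the adjusted label files.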
```
# Adjust labels (UNCOMMENT when needed)
# clean_labels()
# Test
raw_boot_labels: ndarray = load_labels(get_labels_file(Activity.Boot, DataState.Raw), Activity.Boot)
raw_pole_labels: ndarray = load_labels(get_labels_file(Activity.Pole, DataState.Raw), Activity.Pole)
clean_boot_labels: ndarray = load_clean_labels(Activity.Boot)
clean_pole_labels: ndarray = load_clean_labels(Activity.Pole)
# Check cleaned data content
# print('Raw Boot')
# print(raw_boot_labels[:50,])
# print('Clean Boot')
# print(clean_boot_labels[:50,])
# print('Raw Pole')
# print(raw_pole_labels[:50,])
# print('Clean Pole')
# print(clean_pole_labels[:50,])
```
## Examine Data Integrity
Make sure that labels for steps are still reasonable after data cleaning.
**Something to consider**: one area of concern is the end-step labels for the pole data. Pole lift-off (the end of a step) occurs at a minimum peak, and re-sampling, interpolation, and label adjustment may cause the end labels to deviate slightly from that peak. (The graph looks okay: some points sit slightly off the peak, but not often.) We make the reasonable assumption that data points are sampled approximately uniformly; deviations from this assumption may affect the accuracy of the low-pass filter and (for workout detection) the FFT.
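The uniform-sampling assumption above can be enforced by linear interpolation onto an even time grid. A minimal sketch (the project's `resample_uniformly` may differ in signature and interpolation scheme; timestamps are assumed to be in milliseconds):

```python
import numpy as np

def resample_uniform(t: np.ndarray, x: np.ndarray, hz: float = 100.0):
    """Linearly interpolate the signal x(t) onto an evenly spaced time grid."""
    t_new = np.arange(t[0], t[-1], 1000.0 / hz)  # step in ms for a rate in Hz
    return t_new, np.interp(t_new, t, x)
```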
```
# CHOOSE a workout and test type (pole or boot) to examine
workout_idx = 5
selected_labels = clean_boot_labels
workouts: List[Workout] = get_workouts(selected_labels)
print('Number of workouts: %d' % len(workouts))
workout = workouts[workout_idx]
print('Sensor %s' % workout.sensor)
def plot_helper(idx, plot):
# Plot IMU data
imu_data: ndarray = np.load(
get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Clean))
plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
plot.set_xlabel('Epoch Time')
# Plot step labels
for i in range(workout.labels.shape[0]):
start_row, end_row = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]
plot.axvline(x=imu_data[start_row, ImuCol.TIME], color='green', linestyle='dashed')
plot.axvline(x=imu_data[end_row, ImuCol.TIME], color='red', linestyle='dotted')
legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'),
Line2D([], [], color='red', linestyle='dotted', label='Step end')]
plot.legend(handles=legend_items)
# Zoom (REMOVE to see the entire graph)
# plot.set_xlim([1597340600000, 1597340615000])
multiplot(1, plot_helper)
```
Let's compare the cleaned labels to the original labels.
```
# CHOOSE a workout and test type (pole or boot) to examine
workout_idx = 5
selected_labels = raw_pole_labels
workouts: List[Workout] = get_workouts(selected_labels)
print('Number of workouts: %d' % len(workouts))
workout = workouts[workout_idx]
print('Sensor %s' % workout.sensor)
def plot_helper(idx, plot):
# Plot IMU data
imu_data: ndarray = load_imu_data(
get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Raw))
plot.plot(imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
plot.set_xlabel('Row Index')
# Plot step labels
for i in range(workout.labels.shape[0]):
# find labels rows
start_epoch, end_epoch = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]
start_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(start_epoch))[0]
end_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(end_epoch))[0]
if len(start_row) != 1 or len(end_row) != 1:
print('Bad workout')
return
start_row, end_row = start_row[0], end_row[0]
plot.axvline(x=start_row, color='green', linestyle='dashed')
plot.axvline(x=end_row, color='red', linestyle='dotted')
legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'),
Line2D([], [], color='red', linestyle='dotted', label='Step end')]
plot.legend(handles=legend_items)
# Zoom (REMOVE to see the entire graph)
plot.set_xlim([124500, 125000])
multiplot(1, plot_helper)
```
Make sure NaN labels were persisted during the label data's save/load process.
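The check below depends on NaN markers surviving a save/load round trip. With pandas-style CSV I/O this works because missing values are written as empty fields and parsed back as NaN; a self-contained demonstration (not project code):

```python
import io

import numpy as np
import pandas as pd

df = pd.DataFrame({"start": [1.0, np.nan], "end": [2.0, 3.0]})
buf = io.StringIO()
df.to_csv(buf, index=False)       # NaN is written as an empty field
buf.seek(0)
round_tripped = pd.read_csv(buf)  # the empty field is parsed back as NaN
```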
```
def count_errors(labels: ndarray):
for workout in get_workouts(labels):
boot: ndarray = workout.labels
num_errors = np.count_nonzero(
np.isnan(boot[:, LabelCol.START].astype(np.float64)) | np.isnan(boot[:, LabelCol.END].astype(np.float64)))
if num_errors != 0:
print('Number of labels that could not be mapped for sensor %s: %d' % (workout.sensor, num_errors))
clean_boot_labels: ndarray = load_clean_labels(Activity.Boot)
clean_pole_labels: ndarray = load_clean_labels(Activity.Pole)
print('Boot labels')
count_errors(clean_boot_labels)
print('Pole labels')
count_errors(clean_pole_labels)
```
```
import csv
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from google.colab import files
```
The data for this exercise is available at: https://www.kaggle.com/datamunge/sign-language-mnist/home
Sign up and download to find 2 CSV files: sign_mnist_test.csv and sign_mnist_train.csv -- You will upload both of them using this button before you can continue.
```
uploaded=files.upload()
def get_data(filename):
# You will need to write code that will read the file passed
# into this function. The first line contains the column headers
# so you should ignore it
# Each successive line contains 785 comma separated values between 0 and 255
# The first value is the label
# The rest are the pixel values for that picture
# The function will return 2 np.array types. One with all the labels
# One with all the images
#
# Tips:
# If you read a full line (as 'row') then row[0] has the label
# and row[1:785] has the 784 pixel values
# Take a look at np.array_split to turn the 784 pixels into 28x28
# You are reading in strings, but need the values to be floats
# Check out np.array().astype for a conversion
with open(filename) as training_file:
# Your code starts here
# Your code ends here
return images, labels
training_images, training_labels = get_data('sign_mnist_train.csv')
testing_images, testing_labels = get_data('sign_mnist_test.csv')
# Keep these
print(training_images.shape)
print(training_labels.shape)
print(testing_images.shape)
print(testing_labels.shape)
# Their output should be:
# (27455, 28, 28)
# (27455,)
# (7172, 28, 28)
# (7172,)
# In this section you will have to add another dimension to the data
# So, for example, if your array is (10000, 28, 28)
# You will need to make it (10000, 28, 28, 1)
# Hint: np.expand_dims
training_images = # Your Code Here
testing_images = # Your Code Here
# Create an ImageDataGenerator and do Image Augmentation
train_datagen = ImageDataGenerator(
# Your Code Here
)
validation_datagen = ImageDataGenerator(
# Your Code Here
)
# Keep These
print(training_images.shape)
print(testing_images.shape)
# Their output should be:
# (27455, 28, 28, 1)
# (7172, 28, 28, 1)
# Define the model
# Use no more than 2 Conv2D and 2 MaxPooling2D
model = tf.keras.models.Sequential([
# Your Code Here
])
# Compile Model.
model.compile(# Your Code Here)
# Train the Model
history = model.fit_generator(# Your Code Here)
model.evaluate(testing_images, testing_labels)
# The output from model.evaluate should be close to:
# [6.92426086682151, 0.56609035]
# Plot the chart for accuracy and loss on both training and validation
import matplotlib.pyplot as plt
acc = # Your Code Here
val_acc = # Your Code Here
loss = # Your Code Here
val_loss = # Your Code Here
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
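For reference, one possible way to fill in the `get_data` skeleton above, following the tips in its comments (this is a sketch of one solution, not the official answer key):

```python
import csv

import numpy as np

def get_data(filename):
    labels, images = [], []
    with open(filename) as training_file:
        reader = csv.reader(training_file)
        next(reader)  # the first line contains the column headers
        for row in reader:
            labels.append(int(row[0]))                # first value is the label
            pixels = np.array(row[1:785]).astype(np.float64)
            images.append(pixels.reshape(28, 28))     # 784 values -> 28x28 image
    return np.array(images), np.array(labels)
```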
# Getting Started with *pyFTracks* v 1.0
**Romain Beucher, Roderick Brown, Louis Moresi and Fabian Kohlmann**
The Australian National University
The University of Glasgow
Lithodat
*pyFTracks* is a Python package that can be used to predict Fission Track ages and Track lengths distributions for some given thermal-histories and kinetic parameters.
*pyFTracks* is an open-source project licensed under the MIT license. See LICENSE.md for details.
The functionality provided is similar to Richard Ketcham's HeFTy software.
The main advantage comes from its Python interface, which allows users to easily integrate *pyFTracks* with other Python libraries and existing scientific applications.
*pyFTracks* is available on all major operating systems.
For now, *pyFTracks* only provides forward-modelling functionality. Integration with inverse-problem schemes is planned for version 2.0.
# Installation
*pyFTracks* is available on PyPI. The code should work on all major operating systems (Linux, macOS, and Windows):
`pip install pyFTracks`
# Importing *pyFTracks*
The recommended way to import pyFTracks is to run:
```
import pyFTracks as FT
```
# Input
## Specifying a Thermal history
```
thermal_history = FT.ThermalHistory(name="My Thermal History",
time=[0., 43., 44., 100.],
temperature=[283., 283., 403., 403.])
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 5))
plt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker="o")
plt.xlim(100., 0.)
plt.ylim(150. + 273.15, 0.+273.15)
plt.ylabel("Temperature in degC")
plt.xlabel("Time in (Ma)")
plt.legend()
```
## Predefined thermal histories
We provide predefined thermal histories for convenience.
```
from pyFTracks.thermal_history import WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ
thermal_histories = [WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ]
plt.figure(figsize=(15, 5))
for thermal_history in thermal_histories:
plt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker="o")
plt.xlim(100., 0.)
plt.ylim(150. + 273.15, 0.+273.15)
plt.ylabel("Temperature in degC")
plt.xlabel("Time in (Ma)")
plt.legend()
```
## Annealing Models
```
annealing_model = FT.Ketcham1999(kinetic_parameters={"ETCH_PIT_LENGTH": 1.65})
annealing_model.history = WOLF1
annealing_model.calculate_age()
annealing_model = FT.Ketcham2007(kinetic_parameters={"ETCH_PIT_LENGTH": 1.65})
annealing_model.history = WOLF1
annealing_model.calculate_age()
FT.Viewer(history=WOLF1, annealing_model=annealing_model)
```
# Simple Fission-Track data Predictions
```
Ns = [31, 19, 56, 67, 88, 6, 18, 40, 36, 54, 35, 52, 51, 47, 27, 36, 64, 68, 61, 30]
Ni = [41, 22, 63, 71, 90, 7, 14, 41, 49, 79, 52, 76, 74, 66, 39, 44, 86, 90, 91, 41]
zeta = 350.
zeta_err = 10. / 350.
rhod = 1.304
rhod_err = 0.
Nd = 2936
FT.central_age(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.pooled_age(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.single_grain_ages(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.chi2_test(Ns, Ni)
```
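For orientation, the helpers above are based on the standard zeta-calibration fission-track age equation, t = (1/λd) · ln(1 + g · λd · ζ · ρd · Ns/Ni). A minimal sketch of the pooled version (the constants, units, and geometry factor g are illustrative and not taken from the pyFTracks internals; ρd is assumed to be in tracks/cm²):

```python
import math

LAMBDA_D = 1.55125e-10  # total decay constant of 238U, per year

def pooled_age_ma(Ns, Ni, zeta, rhod, g=0.5):
    """Pooled zeta-calibration fission-track age in Ma from grain counts."""
    ratio = sum(Ns) / sum(Ni)  # pooled spontaneous/induced track ratio
    t_years = math.log(1.0 + g * LAMBDA_D * zeta * rhod * ratio) / LAMBDA_D
    return t_years / 1e6
```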
# Included datasets
*pyFTracks* comes with some sample datasets that can be used for testing and designing general code.
```
from pyFTracks.ressources import Gleadow
from pyFTracks.ressources import Miller
Gleadow
FT.central_age(Gleadow.Ns,
Gleadow.Ni,
Gleadow.zeta,
Gleadow.zeta_error,
Gleadow.rhod,
Gleadow.nd)
Miller
FT.central_age(Miller.Ns,
Miller.Ni,
Miller.zeta,
Miller.zeta_error,
Miller.rhod,
Miller.nd)
Miller.calculate_central_age()
Miller.calculate_pooled_age()
Miller.calculate_ages()
```
```
%matplotlib inline
import pandas as pd
import cycluster as cy
import os.path as op
import numpy as np
import palettable
from custom_legends import colorLegend
import seaborn as sns
from hclusterplot import *
import matplotlib
import matplotlib.pyplot as plt
import pprint
import openpyxl
sns.set_context('paper')
path = "./"
inf = "NIC FPID Cytokines JC.csv"
dataFilename = op.join(path,inf)
"""A long df has one analyte measurement per row"""
longDf = pd.read_csv(dataFilename)
print(longDf)
# longDf['Groups']=longDf['ID'].astype(str)+'_'+longDf['Strain']# longDf = longDf.drop(columns= ['ID', 'Influenza.Status', 'Strain', 'Age', 'Sex', 'CMV.Status', 'EBV.Status', 'HSV1_2.Status', 'HHV6.Status', 'VZV.Status'])
longDf = longDf.drop(columns= ['ID', 'Influenza.Status', 'Strain', 'Age', 'Sex', 'CMV.Status', 'EBV.Status', 'HSV1_2.Status', 'HHV6.Status', 'VZV.Status', 'Strains'])
longDf
Df = longDf.pivot_table(index='Groups')
Df.to_excel('Example_1.xlsx')
"""A wide df has one sample per row (analyte measurements across the columns)"""
def _prepCyDf(tmp, K=3, normed=False, cluster="Cluster", percent= 0, rtol= None, atol= None):
# dayDf = longDf
# tmp = tmp.pivot_table(index='ptid', columns='cytokine', values='log10_conc')
if rtol is None or atol is None:
noVar = tmp.columns[np.isclose(tmp.std(), 0)].tolist()
else:
noVar = tmp.columns[np.isclose(tmp.std(), 0, rtol=rtol, atol=atol)].tolist()
naCols = tmp.columns[(tmp.isnull().sum()) / (((tmp.isnull()).sum()) + (tmp.notnull().sum())) > (percent / 100)].tolist() + ["il21"]
keepCols = [c for c in tmp.columns if not c in (noVar + naCols)]
# dayDf = dayDf.pivot_table(index='ptid', columns='cytokine', values='log10_conc')[keepCols]
"""By setting normed=True the data our normalized based on correlation with mean analyte concentration"""
rcyc = cy.cyclusterClass(studyStr='ADAMTS', sampleStr=cluster, normed=normed, rCyDf=tmp)
rcyc.clusterCytokines(K=K, metric='spearman-signed', minN=0)
rcyc.printModules()
return rcyc
test = _prepCyDf(Df, K=3, normed=False, cluster="All", percent= 10)
"""Now you can use attributes in nserum for plots and testing: cyDf, modDf, dmatDf, etc."""
plt.figure(41, figsize=(15.5, 9.5))
colInds = plotHColCluster(rcyc.cyDf,
method='complete',
metric='pearson-signed',
col_labels=rcyc.labels,
col_dmat=rcyc.dmatDf,
tickSz='large',
vRange=(0,1))
plt.figure(43, figsize = (15.5, 9.5))
colInds = cy.plotting.plotHierClust(1 - rcyc.pwrel,
rcyc.Z,
labels=rcyc.labels,
titleStr='Pairwise reliability (%s)' % rcyc.name,
vRange=(0, 1),
tickSz='large')
plt.figure(901, figsize=(13, 9.7))
cy.plotting.plotModuleEmbedding(rcyc.dmatDf, rcyc.labels, method='kpca', txtSize='large')
colors = palettable.colorbrewer.get_map('Set1', 'qualitative', len(np.unique(rcyc.labels))).mpl_colors
colorLegend(colors, ['%s%1.0f' % (rcyc.sampleStr, i) for i in np.unique(rcyc.labels)], loc='lower left')
import scipy.stats
"""df here should have one column per module and the genotype column"""
ptidDf = longDf[['ptid', 'sample', 'genotype', 'dpi']].drop_duplicates().set_index('ptid')
df = rcyc.modDf.join(ptidDf)
ind = df.genotype == 'WT'
col = 'LUNG1'
# stats.ranksums(df[col].loc[ind], df[col].loc[~ind])
```
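The commented-out rank-sum comparison at the end can be run like this (synthetic module scores stand in for the real `LUNG1` values, which are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt = rng.normal(0.0, 1.0, 50)   # stand-in module scores for WT samples
ko = rng.normal(1.0, 1.0, 50)   # stand-in scores for the other genotype
stat, pvalue = stats.ranksums(wt, ko)
```

A small p-value indicates the module-score distributions differ between genotypes.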
```
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as Data
import torchvision
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
path = 'data/mnist/'
raw_train = pd.read_csv(path + 'train.csv')
raw_test = pd.read_csv(path + 'test.csv')
raw_train_array = raw_train.values
raw_test_array = raw_test.values
raw_train_array = np.random.permutation(raw_train_array)
len(raw_train_array)
raw_train = raw_train_array[:40000, :]
raw_valid = raw_train_array[40000:, :]
# train_label = np.eye(10)[raw_train[:,0]]
train_label = raw_train[:,0]
train_data = raw_train[:,1:]
# valid_label = np.eye(10)[raw_valid[:,0]]
valid_label = raw_valid[:,0]
valid_data = raw_valid[:,1:]
train_data.shape
def reshape(data, target_size): return np.reshape(data, target_size)
train_data = reshape(train_data, [40000, 1, 28, 28])
valid_data = reshape(valid_data, [2000, 1, 28, 28])
train_data.shape, train_label.shape, valid_label.shape, valid_data.shape
BATCH_SIZE = 64
LEARNING_RATE = 0.1
EPOCH = 2
#convert to pytorch tensor
train_data = torch.from_numpy(train_data).type(torch.FloatTensor)
train_label = torch.from_numpy(train_label).type(torch.LongTensor)
val_data = torch.from_numpy(valid_data).type(torch.FloatTensor)
val_label = torch.from_numpy(valid_label).type(torch.LongTensor)
train_data.size(),train_label.size(),val_data.size(),val_label.size()
train_dataset = Data.TensorDataset(data_tensor=train_data, target_tensor=train_label)
val_dataset = Data.TensorDataset(data_tensor=val_data, target_tensor=val_label)
train_loader = Data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
val_loader = Data.DataLoader(dataset=val_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
#pyton opp
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
#in_chanel out_chanel kernel stride padding
self.conv1 = nn.Conv2d(1, 32, 3)
self.conv2 = nn.Conv2d(32, 32, 3)
self.conv3 = nn.Conv2d(32, 64, 3)
self.conv4 = nn.Conv2d(64, 64, 3)
self.fc1 = nn.Linear(64*4*4, 512)
self.fc2 = nn.Linear(512, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = F.relu(self.conv3(x))
x = F.max_pool2d(F.relu(self.conv4(x)), 2)
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
cnn = CNN()
print(cnn)
list(cnn.parameters())[2].size() #conv2 weights
#Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=LEARNING_RATE)
#train the model
for epoch in range(2):
for i, (images, labels) in enumerate(train_loader):
# print(type(images))
# print(type(labels))
images = Variable(images)
labels = Variable(labels)
#print(type(images))
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = cnn(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print(loss.data)
print ('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
%(epoch+1, 2, i+1, len(train_dataset)//BATCH_SIZE, loss.data[0]))
# save and load (save and restore the model)
# save
def save():
pass
#torch.save(net_name, 'net.pkl')
#torch.save(net_name.state_dict(), 'net_params.pkl')
# load
def restore_net():
pass
#net_new = torch.load('net.pkl')
def restore_params():
pass
#net_new_old_params = NET()
#net_new_old_params = net_new_old_params.load_state_dict(torch.load()'net_params.pkl'))
# batching
# optimizer choices
# optimizer = torch.optim.SGD()
# torch.optim.Adam
# momentum (m)
# alpha (RMSprop)
# Adam (betas)
```
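The stubbed `save`/`restore` helpers above can be fleshed out along these lines. Saving the `state_dict` (parameters only) is the recommended PyTorch pattern, since pickling a whole model object ties you to the original class definition; the file names here are illustrative:

```python
import torch
import torch.nn as nn

def save(model: nn.Module, path: str = "net_params.pkl"):
    # persist only the learned parameters, not the class definition
    torch.save(model.state_dict(), path)

def restore_params(model: nn.Module, path: str = "net_params.pkl"):
    # construct the model first, then load the parameters in place
    model.load_state_dict(torch.load(path))
    return model
```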
# Building and Visualizing word frequencies
In this lab, we will focus on the `build_freqs()` helper function and visualizing a dataset fed into it. In our goal of tweet sentiment analysis, this function will build a dictionary where we can look up how many times a word appears in the lists of positive or negative tweets. This will be very helpful when extracting the features of the dataset in the week's programming assignment. Let's see how this function is implemented under the hood in this notebook.
## Setup
Let's import the required libraries for this lab:
```
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # visualization library
import numpy as np # library for scientific computing and matrix operations
```
#### Import some helper functions that we provided in the utils.py file:
* `process_tweet()`: Cleans the text, tokenizes it into separate words, removes stopwords, and converts words to stems.
* `build_freqs()`: This counts how often a word in the 'corpus' (the entire set of tweets) was associated with a positive label `1` or a negative label `0`. It then builds the `freqs` dictionary, where each key is a `(word,label)` tuple, and the value is the count of its frequency within the corpus of tweets.
```
# download the stopwords for the process_tweet function
nltk.download('stopwords')
# import our convenience functions
from utils import process_tweet, build_freqs
```
## Load the NLTK sample dataset
As in the previous lab, we will be using the [Twitter dataset from NLTK](http://www.nltk.org/howto/twitter.html#Using-a-Tweet-Corpus).
```
# select the lists of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
# concatenate the lists, 1st part is the positive tweets followed by the negative
tweets = all_positive_tweets + all_negative_tweets
# let's see how many tweets we have
print("Number of tweets: ", len(tweets))
```
Next, we will build a labels array that matches the sentiments of our tweets. We'll use a numpy array, which works pretty much like a regular list but is optimized for computation and manipulation. The `labels` array will be composed of 10000 elements: the first 5000 will be filled with `1` labels denoting positive sentiments, and the next 5000 will be `0` labels denoting the opposite. We can do this easily with a few operations provided by the `numpy` library:
* `np.ones()` - create an array of 1's
* `np.zeros()` - create an array of 0's
* `np.append()` - concatenate arrays
```
# make a numpy array representing labels of the tweets
labels = np.append(np.ones((len(all_positive_tweets))), np.zeros((len(all_negative_tweets))))
```
## Dictionaries
In Python, a dictionary is a mutable and indexed collection. It stores items as key-value pairs and uses [hash tables](https://en.wikipedia.org/wiki/Hash_table) underneath to allow practically constant-time lookups. In NLP, dictionaries are essential because they enable fast retrieval of items and containment checks even with thousands of entries in the collection.
### Definition
A dictionary in Python is declared using curly brackets. Look at the next example:
```
dictionary = {'key1': 1, 'key2': 2}
```
The former line defines a dictionary with two entries. Keys and values can be almost any type ([with a few restrictions on keys](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)), and in this case, we used strings. We can also use floats, integers, tuples, etc.
### Adding or editing entries
New entries can be inserted into dictionaries using square brackets. If the dictionary already contains the specified key, its value is overwritten.
```
# Add a new entry
dictionary['key3'] = -5
# Overwrite the value of key1
dictionary['key1'] = 0
print(dictionary)
```
### Accessing values and lookup keys
Performing dictionary lookups and retrieval are common tasks in NLP. There are two ways to do this:
* Using square bracket notation: This form is allowed if the lookup key is in the dictionary. It produces an error otherwise.
* Using the [get()](https://docs.python.org/3/library/stdtypes.html#dict.get) method: This allows us to set a default value if the dictionary key does not exist.
Let us see these in action:
```
# Square bracket lookup when the key exist
print(dictionary['key2'])
```
However, if the key is missing, the operation produces an error:
```
# The output of this line is intended to produce a KeyError
print(dictionary['key8'])
```
When using a square bracket lookup, it is common to use an if-else block to check for containment first (with the keyword `in`) before getting the item. On the other hand, you can use the `.get()` method if you want to set a default value when the key is not found. Let's compare these in the cells below:
```
# This prints a value
if 'key1' in dictionary:
print("item found: ", dictionary['key1'])
else:
print('key1 is not defined')
# Same as what you get with get
print("item found: ", dictionary.get('key1', -1))
# This prints a message because the key is not found
if 'key7' in dictionary:
print(dictionary['key7'])
else:
print('key does not exist!')
# This prints -1 because the key is not found and we set the default to -1
print(dictionary.get('key7', -1))
```
## Word frequency dictionary
Now that we know the building blocks, let's finally take a look at the **build_freqs()** function in **utils.py**. This is the function that creates the dictionary containing the word counts from each corpus.
```python
def build_freqs(tweets, ys):
"""Build frequencies.
Input:
tweets: a list of tweets
ys: an m x 1 array with the sentiment label of each tweet
(either 0 or 1)
Output:
freqs: a dictionary mapping each (word, sentiment) pair to its
frequency
"""
# Convert np array to list since zip needs an iterable.
# The squeeze is necessary or the list ends up with one element.
# Also note that this is just a NOP if ys is already a list.
yslist = np.squeeze(ys).tolist()
# Start with an empty dictionary and populate it by looping over all tweets
# and over all processed words in each tweet.
freqs = {}
for y, tweet in zip(yslist, tweets):
for word in process_tweet(tweet):
pair = (word, y)
if pair in freqs:
freqs[pair] += 1
else:
freqs[pair] = 1
return freqs
```
You can also do the for loop like this to make it a bit more compact:
```python
for y, tweet in zip(yslist, tweets):
for word in process_tweet(tweet):
pair = (word, y)
freqs[pair] = freqs.get(pair, 0) + 1
```
As shown above, each key is a 2-element tuple containing a `(word, y)` pair. The `word` is an element in a processed tweet while `y` is an integer representing the corpus: `1` for the positive tweets and `0` for the negative tweets. The value associated with this key is the number of times that word appears in the specified corpus. For example:
```
# "folowfriday" appears 25 times in the positive tweets
('followfriday', 1.0): 25
# "shame" appears 19 times in the negative tweets
('shame', 0.0): 19
```
Now, it is time to use the dictionary returned by the `build_freqs()` function. First, let us feed our `tweets` and `labels` lists then print a basic report:
```
# create frequency dictionary
freqs = build_freqs(tweets, labels)
# check data type
print(f'type(freqs) = {type(freqs)}')
# check length of the dictionary
print(f'len(freqs) = {len(freqs)}')
```
Now print the frequency of each word depending on its class.
```
print(freqs)
```
Unfortunately, this does not help much to understand the data. It would be better to visualize this output to gain better insights.
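Before plotting, a quick way to make the dump readable is to sort the dictionary by count. A small helper for that (not part of `utils.py`):

```python
def top_words(freqs, sentiment=1, n=10):
    """Return the n most frequent (word, count) pairs for one sentiment class."""
    filtered = [(word, count) for (word, label), count in freqs.items()
                if label == sentiment]
    return sorted(filtered, key=lambda pair: pair[1], reverse=True)[:n]
```

For example, `top_words(freqs, sentiment=1, n=10)` lists the ten words most frequent in the positive tweets.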
## Table of word counts
We will select a set of words that we would like to visualize. It is better to store this temporary information in a table that is very easy to use later.
```
# select some words to appear in the report. we will assume that each word is unique (i.e. no duplicates)
keys = ['happi', 'merri', 'nice', 'good', 'bad', 'sad', 'mad', 'best', 'pretti',
'❤', ':)', ':(', '😒', '😬', '😄', '😍', '♛',
'song', 'idea', 'power', 'play', 'magnific']
# list representing our table of word counts.
# each element consist of a sublist with this pattern: [<word>, <positive_count>, <negative_count>]
data = []
# loop through our selected words
for word in keys:
# initialize positive and negative counts
pos = 0
neg = 0
# retrieve number of positive counts
if (word, 1) in freqs:
pos = freqs[(word, 1)]
# retrieve number of negative counts
if (word, 0) in freqs:
neg = freqs[(word, 0)]
# append the word counts to the table
data.append([word, pos, neg])
data
```
We can then use a scatter plot to inspect this table visually. Instead of plotting the raw counts, we will plot it in the logarithmic scale to take into account the wide discrepancies between the raw counts (e.g. `:)` has 3568 counts in the positive while only 2 in the negative). The red line marks the boundary between positive and negative areas. Words close to the red line can be classified as neutral.
```
fig, ax = plt.subplots(figsize = (8, 8))
# convert positive raw counts to logarithmic scale. we add 1 to avoid log(0)
x = np.log([x[1] + 1 for x in data])
# do the same for the negative counts
y = np.log([x[2] + 1 for x in data])
# Plot a dot for each pair of words
ax.scatter(x, y)
# assign axis labels
plt.xlabel("Log Positive count")
plt.ylabel("Log Negative count")
# Add the word as the label at the same position as you added the points just before
for i in range(0, len(data)):
ax.annotate(data[i][0], (x[i], y[i]), fontsize=12)
ax.plot([0, 9], [0, 9], color = 'red') # Plot the red line that divides the 2 areas.
plt.show()
```
This chart is straightforward to interpret. It shows that emoticons `:)` and `:(` are very important for sentiment analysis. Thus, we should not let preprocessing steps get rid of these symbols!
Furthermore, what is the meaning of the crown symbol? It seems to be very negative!
### That's all for this lab! We've seen how to build a word frequency dictionary and this will come in handy when extracting the features of a list of tweets. Next up, we will be reviewing Logistic Regression. Keep it up!
```
# all_slow
#default_exp medical.imaging
```
# Medical Imaging
> Helpers for working with DICOM files
```
#export
from fastai.basics import *
from fastai.vision.all import *
from fastai.data.transforms import *
import pydicom,kornia,skimage
from pydicom.dataset import Dataset as DcmDataset
from pydicom.tag import BaseTag as DcmTag
from pydicom.multival import MultiValue as DcmMultiValue
from PIL import Image
try:
import cv2
cv2.setNumThreads(0)
except: pass
#hide
from nbdev.showdoc import *
matplotlib.rcParams['image.cmap'] = 'bone'
#export
_all_ = ['DcmDataset', 'DcmTag', 'DcmMultiValue', 'dcmread', 'get_dicom_files']
```
## Patching
```
#export
def get_dicom_files(path, recurse=True, folders=None):
"Get dicom files in `path` recursively, only in `folders`, if specified."
return get_files(path, extensions=[".dcm"], recurse=recurse, folders=folders)
#export
@patch
def dcmread(fn:Path, force = False):
"Open a `DICOM` file"
return pydicom.dcmread(str(fn), force)
#export
class TensorDicom(TensorImage): _show_args = {'cmap':'gray'}
#export
class PILDicom(PILBase):
_open_args,_tensor_cls,_show_args = {},TensorDicom,TensorDicom._show_args
@classmethod
def create(cls, fn:(Path,str,bytes), mode=None)->None:
"Open a `DICOM file` from path `fn` or bytes `fn` and load it as a `PIL Image`"
if isinstance(fn,bytes): im = Image.fromarray(pydicom.dcmread(pydicom.filebase.DicomBytesIO(fn)).pixel_array)
if isinstance(fn,(Path,str)): im = Image.fromarray(dcmread(fn).pixel_array)
im.load()
im = im._new(im.im)
return cls(im.convert(mode) if mode else im)
PILDicom._tensor_cls = TensorDicom
# #export
# @patch
# def png16read(self:Path): return array(Image.open(self), dtype=np.uint16)
TEST_DCM = Path('images/sample.dcm')
dcm = TEST_DCM.dcmread()
#export
@patch_property
def pixels(self:DcmDataset):
"`pixel_array` as a tensor"
return tensor(self.pixel_array.astype(np.float32))
#export
@patch_property
def scaled_px(self:DcmDataset):
"`pixels` scaled by `RescaleSlope` and `RescaleIntercept`"
img = self.pixels
return img if self.Modality == "CR" else img * self.RescaleSlope + self.RescaleIntercept
#export
def array_freqhist_bins(self, n_bins=100):
"A numpy based function to split the range of pixel values into groups, such that each group has around the same number of pixels"
imsd = np.sort(self.flatten())
t = np.array([0.001])
t = np.append(t, np.arange(n_bins)/n_bins+(1/2/n_bins))
t = np.append(t, 0.999)
t = (len(imsd)*t+0.5).astype(int)  # builtin int: np.int is deprecated
return np.unique(imsd[t])
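# --- hedged aside (illustration only, not part of the library; the demo_*
# names are invented): a standalone numpy sketch of the equal-count binning
# idea above, runnable without the patched helpers ---
import numpy as np
_rng = np.random.RandomState(0)
demo_px = _rng.randn(10_000)                     # stand-in for a pixel array
imsd_demo = np.sort(demo_px.flatten())
n_bins_demo = 10
t_demo = np.append(np.array([0.001]), np.arange(n_bins_demo)/n_bins_demo + 1/(2*n_bins_demo))
t_demo = np.append(t_demo, 0.999)
idx_demo = (len(imsd_demo)*t_demo + 0.5).astype(int)
demo_edges = np.unique(imsd_demo[idx_demo])      # strictly increasing bin edges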
#export
@patch
def freqhist_bins(self:Tensor, n_bins=100):
"A function to split the range of pixel values into groups, such that each group has around the same number of pixels"
imsd = self.view(-1).sort()[0]
t = torch.cat([tensor([0.001]),
torch.arange(n_bins).float()/n_bins+(1/2/n_bins),
tensor([0.999])])
t = (len(imsd)*t).long()
return imsd[t].unique()
#export
@patch
def hist_scaled_pt(self:Tensor, brks=None):
# Pytorch-only version - switch to this if/when interp_1d can be optimized
if brks is None: brks = self.freqhist_bins()
brks = brks.to(self.device)
ys = torch.linspace(0., 1., len(brks)).to(self.device)
return self.flatten().interp_1d(brks, ys).reshape(self.shape).clamp(0.,1.)
#export
@patch
def hist_scaled(self:Tensor, brks=None):
if self.device.type=='cuda': return self.hist_scaled_pt(brks)
if brks is None: brks = self.freqhist_bins()
ys = np.linspace(0., 1., len(brks))
x = self.numpy().flatten()
x = np.interp(x, brks.numpy(), ys)
return tensor(x).reshape(self.shape).clamp(0.,1.)
#export
@patch
def hist_scaled(self:DcmDataset, brks=None, min_px=None, max_px=None):
px = self.scaled_px
if min_px is not None: px[px<min_px] = min_px
if max_px is not None: px[px>max_px] = max_px
return px.hist_scaled(brks=brks)
#export
@patch
def windowed(self:Tensor, w, l):
px = self.clone()
px_min = l - w//2
px_max = l + w//2
px[px<px_min] = px_min
px[px>px_max] = px_max
return (px-px_min) / (px_max-px_min)
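# --- hedged aside (illustration only; the demo names are invented): windowing
# maps the intensity range [l - w/2, l + w/2] onto [0, 1] and clips everything
# outside it. The same arithmetic with plain numpy, using w=80, l=40
# (the values of a typical brain window): ---
import numpy as np
demo_hu = np.array([-1000., 0., 40., 80., 3000.])    # invented intensity values
w_demo, l_demo = 80, 40
lo_w, hi_w = l_demo - w_demo//2, l_demo + w_demo//2  # -> 0, 80
windowed_demo = (np.clip(demo_hu, lo_w, hi_w) - lo_w) / (hi_w - lo_w)
# values below the window land at 0, above at 1, and the level l at 0.5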
#export
@patch
def windowed(self:DcmDataset, w, l):
return self.scaled_px.windowed(w,l)
#export
# From https://radiopaedia.org/articles/windowing-ct
dicom_windows = types.SimpleNamespace(
brain=(80,40),
subdural=(254,100),
stroke=(8,32),
brain_bone=(2800,600),
brain_soft=(375,40),
lungs=(1500,-600),
mediastinum=(350,50),
abdomen_soft=(400,50),
liver=(150,30),
spine_soft=(250,50),
spine_bone=(1800,400)
)
#export
class TensorCTScan(TensorImageBW): _show_args = {'cmap':'bone'}
#export
class PILCTScan(PILBase): _open_args,_tensor_cls,_show_args = {},TensorCTScan,TensorCTScan._show_args
#export
@patch
@delegates(show_image)
def show(self:DcmDataset, scale=True, cmap=plt.cm.bone, min_px=-1100, max_px=None, **kwargs):
px = (self.windowed(*scale) if isinstance(scale,tuple)
else self.hist_scaled(min_px=min_px,max_px=max_px,brks=scale) if isinstance(scale,(ndarray,Tensor))
else self.hist_scaled(min_px=min_px,max_px=max_px) if scale
else self.scaled_px)
show_image(px, cmap=cmap, **kwargs)
scales = False, True, dicom_windows.brain, dicom_windows.subdural
titles = 'raw','normalized','brain windowed','subdural windowed'
for s,a,t in zip(scales, subplots(2,2,imsize=4)[1].flat, titles):
dcm.show(scale=s, ax=a, title=t)
dcm.show(cmap=plt.cm.gist_ncar, figsize=(6,6))
#export
@patch
def pct_in_window(dcm:DcmDataset, w, l):
"% of pixels in the window `(w,l)`"
px = dcm.scaled_px
return ((px > l-w//2) & (px < l+w//2)).float().mean().item()
dcm.pct_in_window(*dicom_windows.brain)
#export
def uniform_blur2d(x,s):
w = x.new_ones(1,1,1,s)/s
# Factor 2d conv into 2 1d convs
x = unsqueeze(x, dim=0, n=4-x.dim())
r = (F.conv2d(x, w, padding=s//2))
r = (F.conv2d(r, w.transpose(-1,-2), padding=s//2)).cpu()[:,0]
return r.squeeze()
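# --- hedged aside (illustration only; the demo names are invented): the
# factoring above works because a uniform s-by-s blur is separable -- a
# horizontal 1D pass followed by a vertical 1D pass, O(s) instead of O(s^2)
# work per pixel. A plain-numpy check away from the borders: ---
import numpy as np
s_demo = 3
img_demo = np.arange(25, dtype=float).reshape(5, 5)
k1d = np.ones(s_demo)/s_demo
rows_demo = np.apply_along_axis(lambda r: np.convolve(r, k1d, mode='same'), 1, img_demo)
sep_demo = np.apply_along_axis(lambda c: np.convolve(c, k1d, mode='same'), 0, rows_demo)
# the centre pixel now equals the mean of its 3x3 neighbourhood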
ims = dcm.hist_scaled(), uniform_blur2d(dcm.hist_scaled(),50)
show_images(ims, titles=('orig', 'blurred'))
#export
def gauss_blur2d(x,s):
s2 = int(s/4)*2+1
x2 = unsqueeze(x, dim=0, n=4-x.dim())
res = kornia.filters.gaussian_blur2d(x2, (s2,s2), (s,s), 'replicate')
return res.squeeze()
#export
@patch
def mask_from_blur(x:Tensor, window, sigma=0.3, thresh=0.05, remove_max=True):
p = x.windowed(*window)
if remove_max: p[p==1] = 0
return gauss_blur2d(p, s=sigma*x.shape[-1])>thresh
#export
@patch
def mask_from_blur(x:DcmDataset, window, sigma=0.3, thresh=0.05, remove_max=True):
return to_device(x.scaled_px).mask_from_blur(window, sigma, thresh, remove_max=remove_max)
mask = dcm.mask_from_blur(dicom_windows.brain)
wind = dcm.windowed(*dicom_windows.brain)
_,ax = subplots(1,1)
show_image(wind, ax=ax[0])
show_image(mask, alpha=0.5, cmap=plt.cm.Reds, ax=ax[0]);
#export
def _px_bounds(x, dim):
c = x.sum(dim).nonzero().cpu()
idxs,vals = torch.unique(c[:,0],return_counts=True)
vs = torch.split_with_sizes(c[:,1],tuple(vals))
d = {k.item():v for k,v in zip(idxs,vs)}
default_u = tensor([0,x.shape[-1]-1])
b = [d.get(o,default_u) for o in range(x.shape[0])]
b = [tensor([o.min(),o.max()]) for o in b]
return torch.stack(b)
#export
def mask2bbox(mask):
no_batch = mask.dim()==2
if no_batch: mask = mask[None]
bb1 = _px_bounds(mask,-1).t()
bb2 = _px_bounds(mask,-2).t()
res = torch.stack([bb1,bb2],dim=1).to(mask.device)
return res[...,0] if no_batch else res
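# --- hedged aside (illustration only; the demo names are invented): mask2bbox
# boils a boolean mask down to the (row, col) bounds of its True region.
# The same idea with plain numpy: ---
import numpy as np
demo_mask = np.zeros((8, 8), dtype=bool)
demo_mask[2:5, 3:7] = True
ys_demo, xs_demo = np.nonzero(demo_mask)
lo_demo = (int(ys_demo.min()), int(xs_demo.min()))   # top-left corner
hi_demo = (int(ys_demo.max()), int(xs_demo.max()))   # bottom-right corner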
bbs = mask2bbox(mask)
lo,hi = bbs
show_image(wind[lo[0]:hi[0],lo[1]:hi[1]]);
#export
def _bbs2sizes(crops, init_sz, use_square=True):
bb = crops.flip(1)
szs = (bb[1]-bb[0])
if use_square: szs = szs.max(0)[0][None].repeat((2,1))
overs = (szs+bb[0])>init_sz
bb[0][overs] = init_sz-szs[overs]
lows = (bb[0]/float(init_sz))
return lows,szs/float(init_sz)
#export
def crop_resize(x, crops, new_sz):
# NB assumes square inputs. Not tested for non-square anythings!
bs = x.shape[0]
lows,szs = _bbs2sizes(crops, x.shape[-1])
if not isinstance(new_sz,(list,tuple)): new_sz = (new_sz,new_sz)
id_mat = tensor([[1.,0,0],[0,1,0]])[None].repeat((bs,1,1)).to(x.device)
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=UserWarning)
sp = F.affine_grid(id_mat, (bs,1,*new_sz))+1.
grid = sp*unsqueeze(szs.t(),1,n=2)+unsqueeze(lows.t()*2.,1,n=2)
return F.grid_sample(x.unsqueeze(1), grid-1)
px256 = crop_resize(to_device(wind[None]), bbs[...,None], 128)[0]
show_image(px256)
px256.shape
#export
@patch
def to_nchan(x:Tensor, wins, bins=None):
res = [x.windowed(*win) for win in wins]
if not isinstance(bins,int) or bins!=0: res.append(x.hist_scaled(bins).clamp(0,1))
dim = [0,1][x.dim()==3]
return TensorCTScan(torch.stack(res, dim=dim))
#export
@patch
def to_nchan(x:DcmDataset, wins, bins=None):
return x.scaled_px.to_nchan(wins, bins)
#export
@patch
def to_3chan(x:Tensor, win1, win2, bins=None):
return x.to_nchan([win1,win2],bins=bins)
#export
@patch
def to_3chan(x:DcmDataset, win1, win2, bins=None):
return x.scaled_px.to_3chan(win1, win2, bins)
show_images(dcm.to_nchan([dicom_windows.brain,dicom_windows.subdural,dicom_windows.abdomen_soft]))
#export
@patch
def save_jpg(x:(Tensor,DcmDataset), path, wins, bins=None, quality=90):
fn = Path(path).with_suffix('.jpg')
x = (x.to_nchan(wins, bins)*255).byte()
im = Image.fromarray(x.permute(1,2,0).numpy(), mode=['RGB','CMYK'][x.shape[0]==4])
im.save(fn, quality=quality)
#export
@patch
def to_uint16(x:(Tensor,DcmDataset), bins=None):
d = x.hist_scaled(bins).clamp(0,1) * (2**16-1)  # 2**16 would overflow uint16 at 1.0
return d.numpy().astype(np.uint16)
#export
@patch
def save_tif16(x:(Tensor,DcmDataset), path, bins=None, compress=True):
fn = Path(path).with_suffix('.tif')
Image.fromarray(x.to_uint16(bins)).save(str(fn), compression='tiff_deflate' if compress else None)
_,axs=subplots(1,2)
with tempfile.TemporaryDirectory() as f:
f = Path(f)
dcm.save_jpg(f/'test.jpg', [dicom_windows.brain,dicom_windows.subdural])
show_image(Image.open(f/'test.jpg'), ax=axs[0])
dcm.save_tif16(f/'test.tif')
show_image(Image.open(str(f/'test.tif')), ax=axs[1]);
#export
@patch
def set_pixels(self:DcmDataset, px):
self.PixelData = px.tobytes()
self.Rows,self.Columns = px.shape
DcmDataset.pixel_array = property(DcmDataset.pixel_array.fget, set_pixels)
#export
@patch
def zoom(self:DcmDataset, ratio):
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
self.pixel_array = ndimage.zoom(self.pixel_array, ratio)
#export
@patch
def zoom_to(self:DcmDataset, sz):
if not isinstance(sz,(list,tuple)): sz=(sz,sz)
rows,cols = sz
self.zoom((rows/self.Rows,cols/self.Columns))
#export
@patch_property
def shape(self:DcmDataset): return self.Rows,self.Columns
dcm2 = TEST_DCM.dcmread()
dcm2.zoom_to(90)
test_eq(dcm2.shape, (90,90))
dcm2 = TEST_DCM.dcmread()
dcm2.zoom(0.25)
dcm2.show()
#export
def _cast_dicom_special(x):
cls = type(x)
if not cls.__module__.startswith('pydicom'): return x
if cls.__base__ == object: return x
return cls.__base__(x)
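# --- hedged aside (illustration only; the stand-in class below is invented):
# pydicom wraps many values in thin subclasses of int/str/float, and the helper
# above walks back to the builtin base so downstream DataFrames hold plain
# Python types. A self-contained demonstration of the same move: ---
class _FakeDicomIS(int):                         # stands in for a pydicom wrapper
    __module__ = 'pydicom.fake'
demo_val = _FakeDicomIS(7)
demo_cast = type(demo_val).__base__(demo_val)    # back to builtin int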
def _split_elem(res,k,v):
if not isinstance(v,DcmMultiValue): return
res[f'Multi{k}'] = 1
for i,o in enumerate(v): res[f'{k}{"" if i==0 else i}']=o
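# --- hedged aside (illustration only; the demo names are invented): _split_elem
# flattens a DICOM multi-value into numbered columns, e.g. a two-element
# WindowCenter becomes WindowCenter / WindowCenter1 plus a Multi flag ---
demo_res = {'MultiWindowCenter': 1}
for i, o in enumerate([40, 80]):                 # stand-in multi-value
    demo_res[f'WindowCenter{"" if i==0 else i}'] = o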
#export
@patch
def as_dict(self:DcmDataset, px_summ=True, window=dicom_windows.brain):
pxdata = (0x7fe0,0x0010)
vals = [self[o] for o in self.keys() if o != pxdata]
its = [(v.keyword,v.value) for v in vals]
res = dict(its)
res['fname'] = self.filename
for k,v in its: _split_elem(res,k,v)
if not px_summ: return res
stats = 'min','max','mean','std'
try:
pxs = self.pixel_array
for f in stats: res['img_'+f] = getattr(pxs,f)()
res['img_pct_window'] = self.pct_in_window(*window)
except Exception as e:
for f in stats: res['img_'+f] = 0
print(res,e)
for k in res: res[k] = _cast_dicom_special(res[k])
return res
#export
def _dcm2dict(fn, **kwargs): return fn.dcmread().as_dict(**kwargs)
#export
@delegates(parallel)
def _from_dicoms(cls, fns, n_workers=0, **kwargs):
return pd.DataFrame(parallel(_dcm2dict, fns, n_workers=n_workers, **kwargs))
pd.DataFrame.from_dicoms = classmethod(_from_dicoms)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Data Visualization Lab**
Estimated time needed: **45 to 60** minutes
In this assignment you will be focusing on the visualization of data.
The data set will be presented to you in the form of an RDBMS.
You will have to use SQL queries to extract the data.
## Objectives
In this lab you will perform the following:
* Visualize the distribution of data.
* Visualize the relationship between two features.
* Visualize composition of data.
* Visualize comparison of data.
<hr>
## Demo: How to work with database
Download database file.
```
!wget https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m4_survey_data.sqlite
```
Connect to the database.
```
import sqlite3
conn = sqlite3.connect("m4_survey_data.sqlite") # open a database connection
```
Import pandas module.
```
import pandas as pd
```
## Demo: How to run an SQL query
```
# print how many rows are there in the table named 'master'
QUERY = """
SELECT COUNT(*)
FROM master
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
df = pd.read_sql_query(QUERY,conn)
df.head()
```
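As an aside, queries can also take driver-level parameters, which is safer than pasting values into the SQL string. Here is a minimal self-contained sketch: it builds an invented in-memory table rather than touching the survey database, and uses the raw `sqlite3` cursor so it runs without pandas.

```python
import sqlite3

# invented in-memory table, for illustration only
demo_conn = sqlite3.connect(":memory:")
demo_conn.execute("CREATE TABLE demo (Age INTEGER)")
demo_conn.executemany("INSERT INTO demo VALUES (?)", [(25,), (30,), (30,)])

# the ? placeholder is filled by the driver, avoiding string formatting
rows = demo_conn.execute(
    "SELECT Age, COUNT(*) FROM demo WHERE Age >= ? GROUP BY Age", (26,)
).fetchall()
demo_conn.close()
```

With pandas, the equivalent is `pd.read_sql_query(sql, conn, params=(26,))`.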
## Demo: How to list all tables
```
# print all the tables names in the database
QUERY = """
SELECT name as Table_Name FROM
sqlite_master WHERE
type = 'table'
"""
# the read_sql_query runs the sql query and returns the data as a dataframe
pd.read_sql_query(QUERY,conn)
```
## Demo: How to run a group by query
```
QUERY = """
SELECT Age,COUNT(*) as count
FROM master
group by age
order by age
"""
pd.read_sql_query(QUERY,conn)
```
## Demo: How to describe a table
```
table_name = 'master' # the table you wish to describe
QUERY = """
SELECT sql FROM sqlite_master
WHERE name= '{}'
""".format(table_name)
df = pd.read_sql_query(QUERY,conn)
print(df.iat[0,0])
```
# Hands-on Lab
## Visualizing distribution of data
### Histograms
Plot a histogram of `ConvertedComp`.
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# your code goes here
QUERY= """
"""
df=pd.read_sql_query("SELECT*FROM master",conn)
df
plt.hist(df["ConvertedComp"])
plt.title("Histogram of ConvertedComp")
plt.xlabel("ConvertedComp")
plt.ylabel("Number of Respondents")
plt.show()
```
### Box Plots
Plot a box plot of `Age`.
```
# your code goes here
QUERY="""
"""
df=pd.read_sql_query("SELECT*FROM master",conn)
df['Age'].plot(kind='box',figsize=(20,8),vert=False)
plt.title('Box Plot of Age')
plt.show()
```
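A box plot encodes five summary numbers: the quartiles, the whiskers at 1.5×IQR beyond them, and any points outside the whiskers drawn as outliers. A small numpy sketch of those quantities on invented values:

```python
import numpy as np

ages_demo = np.array([22, 25, 28, 30, 31, 33, 35, 40, 55], dtype=float)  # invented
q1, med, q3 = np.percentile(ages_demo, [25, 50, 75])
iqr = q3 - q1
whisker_lo, whisker_hi = q1 - 1.5*iqr, q3 + 1.5*iqr
outliers = ages_demo[(ages_demo < whisker_lo) | (ages_demo > whisker_hi)]
```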
## Visualizing relationships in data
### Scatter Plots
Create a scatter plot of `Age` and `WorkWeekHrs`.
```
# your code goes here
column = "Age"
column2 = "WorkWeekHrs"
table_name = "master"
QUERY= """
SELECT Age, AVG(WorkWeekHrs) FROM master GROUP BY Age
""".format(column,column2,table_name)
df_age_work=pd.read_sql_query(QUERY,conn)
df_age_work.dropna(inplace=True)
df_age_work.rename(columns={"AVG(WorkWeekHrs)":"WorkWeekHrs"},inplace=True)
df_age_work.plot(kind="scatter",x="Age",y="WorkWeekHrs",figsize=(9,6),color="red")
plt.title("Scatter Plot of Work Hours by Age")
plt.xlabel("Age")
plt.ylabel("Week Work Hours")
plt.show()
plt.close()
```
### Bubble Plots
Create a bubble plot of `WorkWeekHrs` and `CodeRevHrs`, use `Age` column as bubble size.
```
import plotly.express as px
# your code goes here
QUERY= """
SELECT Age, ConvertedComp, WorkWeekHrs, CodeRevHrs
FROM master
"""
df=pd.read_sql_query(QUERY,conn)
norm_age=(df['Age']-df['Age'].min())/(df['Age'].max()-df['Age'].min())
df.plot(kind='scatter',x='WorkWeekHrs',y='CodeRevHrs',s=norm_age*1000,figsize=(10,6),color='yellow')
plt.title('Bubble Plot of Work Week Hours and Code Rev Hours')
plt.xlabel('WorkWeekHrs')
plt.ylabel('CodeRevHrs')
plt.show()
```
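The bubble sizes above come from min-max normalization, which rescales a column linearly so its smallest value maps to 0 and its largest to 1. The same transform in isolation (invented values):

```python
import numpy as np

vals = np.array([20., 30., 40., 60.])                   # invented ages
norm = (vals - vals.min()) / (vals.max() - vals.min())  # -> 0 .. 1
sizes = norm * 1000                                     # scaled up for plotting
```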
## Visualizing composition of data
### Pie Charts
Create a pie chart of the top 5 databases that respondents wish to learn next year. Label the pie chart with database names. Display percentages of each database on the pie chart.
```
# your code goes here
QUERY= """
SELECT DatabaseDesireNextYear, COUNT(*) AS count
FROM DatabaseDesireNextYear
GROUP BY DatabaseDesireNextYear
ORDER BY count DESC LIMIT 5
"""
df=pd.read_sql_query(QUERY,conn)
df.head()
#code to create a pie chart
df.set_index('DatabaseDesireNextYear',inplace=True)
sizes=df.iloc[:,0]
color_list=['red','yellowgreen','gold','blue','lightcoral','pink','lightgreen']
labels=df.index.tolist()  # take the names from the query result instead of hardcoding
df.plot(kind='pie',
figsize=(20,6),
autopct='%1.1f%%',
startangle=90,
shadow=True,
labels=None,
pctdistance=1.12,
subplots=True,
colors=color_list)
plt.title('Pie Chart of Top 5 Database Desire Next Year')
plt.axis('equal')
plt.legend(labels, loc='upper left')
plt.show()
```
### Stacked Charts
Create a stacked chart of median `WorkWeekHrs` and `CodeRevHrs` for the age group 30 to 35.
```
# your code goes here
QUERY= """
SELECT Age, WorkWeekHrs, CodeRevHrs
FROM master
WHERE Age BETWEEN 30 AND 35
"""
df=pd.read_sql_query(QUERY,conn)
df=df.groupby('Age').median()
print(df)
# your code goes here
QUERY= """
SELECT Age, WorkWeekHrs, CodeRevHrs
FROM master
WHERE Age Between 30 AND 35
"""
df=pd.read_sql_query(QUERY,conn)
df.set_index('Age',inplace=True)
order=['WorkWeekHrs','CodeRevHrs']
df.groupby('Age')[order].median().plot.bar(stacked=True)
```
## Visualizing comparison of data
### Line Chart
Plot the median `ConvertedComp` for all ages from 45 to 60.
```
# your code goes here
QUERY= """
SELECT ConvertedComp, Age
FROM master
WHERE Age BETWEEN 45 AND 60
"""
df=pd.read_sql_query(QUERY,conn)
df.set_index('Age',inplace=True)
df.dropna(subset=['ConvertedComp'],inplace=True)
order=['ConvertedComp']
df.groupby('Age')[order].median().plot(kind='line',figsize=(14,8))
plt.title('Line Chart of ConvertedComp by Age')
plt.show()
```
### Bar Chart
Create a horizontal bar chart using column `MainBranch`.
```
# your code goes here
QUERY= """
SELECT Count(MainBranch) as count, MainBranch
FROM master
group by MainBranch
"""
df=pd.read_sql_query(QUERY,conn)
# the query already aggregates, so plot the count column directly
# (value_counts on the grouped result would report 1 for every branch)
df.set_index('MainBranch')['count'].plot(kind='barh',figsize=(10,10),color='skyblue')
plt.title('Bar Chart of Main Branch')
plt.show()
# What is the rank of Python in most popular Language?
# write code here
QUERY= """
SELECT LanguageWorkedWith
FROM LanguageWorkedWith
"""
df=pd.read_sql_query(QUERY,conn)
df['LanguageWorkedWith'].value_counts()
# How many respondents indicated they currently work with SQL?
# write your code here
QUERY= """
SELECT LanguageWorkedWith, count(Respondent) as count
FROM LanguageWorkedWith
WHERE LanguageWorkedWith ='SQL'
"""
df=pd.read_sql_query(QUERY,conn)
df
# How many respondents indicated they work on MySQL only?
# write your code here
QUERY= """
SELECT DatabaseWorkedWith, Respondent
FROM DatabaseWorkedWith
"""
df=pd.read_sql_query(QUERY,conn)
df1=df.groupby('Respondent').sum()
df1[df1['DatabaseWorkedWith']=='MySQL'].count()
# Majority of the Survey Responders are?
# write your code here
QUERY= """
SELECT DevType
FROM DevType
"""
df=pd.read_sql_query(QUERY,conn)
df['DevType'].value_counts()
QUERY= """
SELECT DevType, count(Respondent) as count
FROM DevType
"""
df=pd.read_sql_query(QUERY,conn)
df
```
Close the database connection.
```
conn.close()
```
## Authors
Ramesh Sannareddy
### Other Contributors
Rav Ahuja
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01&cm_mmc=Email_Newsletter-\_-Developer_Ed%2BTech-\_-WW_WW-\_-SkillsNetwork-Courses-IBM-DA0321EN-SkillsNetwork-21426264&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ).
4th order Runge-Kutta with adaptive step size
- small time steps improve accuracy
- adapting the step size makes the integration more efficient (fine steps only where the solution demands them)
## a simple coupled ODE
d^2y/dx^2 = -y
for all x the second derivative of y equals -y (a sin or cos curve)
- specify boundary conditions to determine which
- y(0) = 0 and dy/dx(x = 0) = 1 --> sin(x)
rewrite as coupled ODEs to solve numerically (slide 8)
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
#define coupled derivatives to integrate
def dydx(x,y):
#y is a 2D array
#equation is d^2y/dx^2 = -y
#so: dydx = z, dz/dx = -y
#set y = y[0], z = y[1]
#declare array
y_derivs = np.zeros(2)
y_derivs[0] = y[1]
y_derivs[1] = -1*y[0]
return y_derivs
#can't evolve one without evolving the other, dependent variables
#define 4th order RK method
def rk4_mv_core(dydx,xi,yi,nv,h):
#nv = number of variables
# h = width
#declare k arrays
k1 = np.zeros(nv)
k2 = np.zeros(nv)
k3 = np.zeros(nv)
k4 = np.zeros(nv)
#define x at half step
x_ipoh = xi + 0.5*h
#define x at 1 step
x_ipo = xi + h
#declare a temp y array
y_temp = np.zeros(nv)
#find k1 values
y_derivs = dydx(xi,yi) #array of y derivatives
k1[:] = h*y_derivs[:] #taking diff euler steps for derivs
#get k2 values
y_temp[:] = yi[:] + 0.5*k1[:]
y_derivs = dydx(x_ipoh,y_temp)
k2[:] = h*y_derivs[:]
#get k3 values
y_temp[:] = yi[:] + 0.5*k2[:]
y_derivs = dydx(x_ipoh,y_temp)
k3[:] = h*y_derivs[:]
#get k4 values
y_temp[:] = yi[:] + k3[:]
y_derivs = dydx(x_ipo,y_temp)
k4[:] = h*y_derivs[:]
#advance y by step h
yipo = yi + (k1 + 2*k2 + 2*k3 + k4)/6. #this is an array
return yipo
```
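As a quick sanity check of the core step above, here is a self-contained sketch (with a local copy of the derivatives and a compact RK4 step, so it runs on its own) that advances y(0)=0, y'(0)=1 by one step h and compares against the exact answer sin(h); the local error should scale like h^5:

```python
import numpy as np

def dydx_demo(x, y):
    # d^2y/dx^2 = -y written as the coupled system y' = z, z' = -y
    return np.array([y[1], -y[0]])

def rk4_step(f, xi, yi, h):
    k1 = h * f(xi, yi)
    k2 = h * f(xi + 0.5*h, yi + 0.5*k1)
    k3 = h * f(xi + 0.5*h, yi + 0.5*k2)   # k3 builds on k2, k4 on k3
    k4 = h * f(xi + h, yi + k3)
    return yi + (k1 + 2*k2 + 2*k3 + k4) / 6.

h = 0.1
y1 = rk4_step(dydx_demo, 0.0, np.array([0.0, 1.0]), h)
err = abs(y1[0] - np.sin(h))   # ~ h^5 / 120 for this system
```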
before, we took a single step of fixed size
now we compute two different estimates of the same step (one full step vs two half steps)
the comparison acts as a check on the previous technique
the difference between the two estimates should be within tolerance to be valid (if a step is too big and the error exceeds tolerance, retry with smaller steps)
```
#define adaptive step size for RK4
def rk4_mv_ad(dydx,x_i,y_i,nv,h,tol):
#define safety scale
SAFETY = 0.9
H_NEW_FAC = 2.0
#set max number of iterations
imax = 10000
#set iteration variable, num of iterations taken
i = 0
#create an error (array)
Delta = np.full(nv,2*tol) #start at twice the tolerance so the loop runs at least once
#if the error exceeds tol, the steps need to be smaller
#remember step
h_step = h
#adjust step
while(Delta.max()/tol > 1.0): #while loop
#estimate error by taking one step of size h vs two steps of size h/2
y_2 = rk4_mv_core(dydx,x_i,y_i,nv,h_step)
y_1 = rk4_mv_core(dydx,x_i,y_i,nv,0.5*h_step)
y_11 = rk4_mv_core(dydx,x_i+0.5*h_step,y_1,nv,0.5*h_step)
#compute error
Delta = np.fabs(y_2 - y_11)
#if the error is too large
if(Delta.max()/tol > 1.0):
h_step *= SAFETY * (Delta.max()/tol)**(-0.25) #decreases h step size
#check iteration
if(i>=imax):
print("Too many iterations in rk4_mv_ad()")
raise StopIteration("Ending after i = ",i)
#iterate
i+=1
#leave while loop, to try bigger steps
h_new = np.fmin(h_step * (Delta.max()/tol)**(-0.9), h_step*H_NEW_FAC)
#return the answer, the new step, and the step actually taken
return y_2, h_new, h_step
#wrapper function
def rk4_mv(dydx,a,b,y_a,tol):
#dydx = deriv wrt x
#a = lower bound
#b = upper bound
#y_a = boundary conditions (0,1)
#tol = tolerance for integrating y
#define starting step
xi = a
yi = y_a.copy()
#initial step size (smallllll)
h = 1.0e-4 * (b-a)
#max number of iterations
imax = 10000
#set iteration variable
i = 0
#set the number of coupled ODEs to the size of y_a
nv = len(y_a)
#set initial conditions
x = np.full(1,a)
y = np.full((1,nv),y_a) #2 dimensional array
#set flag
flag = 1
#loop until we reach the right side
while(flag):
```
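The notebook breaks off inside the wrapper's `while` loop. For completeness, here is a hedged, self-contained sketch of how such a driver is typically finished (a reconstruction in the spirit of the code above, with local copies of the step functions; it is not the original cell): each pass takes one adaptive step, advances x, adopts the suggested new step size, and clamps the final step so the integration lands exactly on b.

```python
import numpy as np

def dydx_demo(x, y):
    return np.array([y[1], -y[0]])           # y'' = -y as a coupled system

def rk4_step(f, xi, yi, h):
    k1 = h * f(xi, yi)
    k2 = h * f(xi + 0.5*h, yi + 0.5*k1)
    k3 = h * f(xi + 0.5*h, yi + 0.5*k2)
    k4 = h * f(xi + h, yi + k3)
    return yi + (k1 + 2*k2 + 2*k3 + k4) / 6.

def adaptive_step(f, xi, yi, h, tol, SAFETY=0.9, H_NEW_FAC=2.0):
    # shrink h until one full step agrees with two half steps to within tol
    while True:
        y_2  = rk4_step(f, xi, yi, h)
        y_1  = rk4_step(f, xi, yi, 0.5*h)
        y_11 = rk4_step(f, xi + 0.5*h, y_1, 0.5*h)
        Delta = np.fabs(y_2 - y_11)
        if Delta.max()/tol <= 1.0:
            break
        h *= SAFETY * (Delta.max()/tol)**(-0.25)
    # suggest a larger step next time, capped at a factor H_NEW_FAC
    if Delta.max() == 0.0:
        h_new = h * H_NEW_FAC
    else:
        h_new = np.fmin(h * (Delta.max()/tol)**(-0.9), h * H_NEW_FAC)
    return y_2, h_new, h

def rk4_mv_demo(f, a, b, y_a, tol=1.0e-6):
    x, y = a, np.array(y_a, dtype=float)
    h = 1.0e-4 * (b - a)                     # small initial step, as above
    while x < b:
        if x + h > b:                        # clamp the last step at b
            h = b - x
            y = rk4_step(f, x, y, h)
            x = b
            break
        y, h_new, h_step = adaptive_step(f, x, y, h, tol)
        x += h_step
        h = h_new
    return x, y

x_end, y_end = rk4_mv_demo(dydx_demo, 0.0, 1.0, [0.0, 1.0])
# y_end[0] should match sin(1) to roughly the requested tolerance
```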
| github_jupyter |